caplam

Everything posted by caplam

  1. With 2.00.15 it seems good to me. It started as soon as I restarted the node Docker container.
  2. Up! The problem is still present, and I wonder if that's why SMB transfer speeds to macOS are pretty damn slow (40 min to transfer 5 GB). The MAC addresses detected by the router are the physical address of the adapter and a MAC that seems tied to Docker:
     289: shim-br0@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
         link/ether 46:8a:aa:36:26:bf brd ff:ff:ff:ff:ff:ff
     Notice: the address has changed since my last post.
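For scale, the transfer quoted above works out to roughly 2 MB/s, a small fraction of what SMB over gigabit can sustain. A quick sanity check using only the figures from the post:

```shell
# 5 GB moved in 40 minutes, expressed in whole MB/s:
echo "$(( 5 * 1000 / (40 * 60) ))"   # prints 2
```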
  3. I changed the name to "godzilla" and I also tried with "godzilla docker" ("godzilla" is the name of my Unraid server). Of course I restarted both dockers.
  4. I found something strange. I have only one node. Server and node are Docker containers on my Unraid server with default parameters, except for the IP of course. If I change the node name, which defaults to "node name", the queue is stuck. As soon as I set it back to "node name", the workers start.
  5. It all depends on the load. Nobody can answer your question. It's not the size of your library that matters but the number of clients across your various Docker containers and/or VMs, the I/O needs, disk throughput, ...
  6. You have no A record. From my understanding you should at least have:
     A wipzcream.com "your ip"
     CNAME nextcloud.wipzcream.com wipzcream.com
     and others depending on what other servers you may have.
     Edit: I don't know how it works with your registrar, but with mine, A and CNAME records have to be entered with the complete subdomain name.
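Written out in standard zone-file syntax, the records above would look something like this (the IP is a made-up placeholder from the documentation range, and some registrar panels want just the host part rather than the fully qualified name):

```
; hypothetical zone fragment for wipzcream.com -- 203.0.113.10 is a made-up IP
wipzcream.com.             IN  A      203.0.113.10
nextcloud.wipzcream.com.   IN  CNAME  wipzcream.com.
```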
  7. I guess you haven't shown all your DNS zone declarations. Investigate with a DNS checker, for example.
  8. Try emptying your browser cache.
  9. I'm far from being an expert, but I think your DNS declaration is wrong. You should use: CNAME Nextcloud dynamix.wipzcream.com. You have assigned the IP to dynamic.wipzcream.com, so CNAME declarations should match this name for the other subdomains.
  10. Hello, my router tells me I have 2 devices using the same IP. The IP is used by the Unraid server; I can't find the other one. arp -a on a LAN host returns the Unraid IP with an unknown MAC address. I can see that MAC address in Unraid with ip link, and it's attached to shim-br0@br0. I use one physical interface. I have only one VM running, with a different MAC address. I have Docker containers running, and one has a different IP (heimdall). What can this mysterious device be? Edit: from my understanding, considering the interface shim-br0@br0, it should be a Docker container on a custom network, but I can't find it.
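As far as I know, shim-br0 is the interface Unraid creates when host access to custom Docker networks is enabled, so its MAC answering behind the server's IP is expected rather than an intruder. One way to confirm a duplicate from any LAN host is to look for an IP that shows up with two different MACs. The sketch below runs on invented sample lines (the second MAC is the shim-br0 one from the thread); in practice you would pipe in real `arp -a` output instead:

```shell
# Print any IP that appears more than once in arp -a style output.
printf '%s\n' \
  'godzilla (192.168.1.10) at aa:bb:cc:dd:ee:01 [ether] on eth0' \
  'godzilla (192.168.1.10) at 46:8a:aa:36:26:bf [ether] on eth0' |
  awk -F'[()]' '{print $2}' | sort | uniq -d
# prints: 192.168.1.10
```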
  11. I don't know, I didn't do the math, but 1 preclear pass on 6 TB is 24 h. 2 disks to rebuild (4 and 6 TB). If it can rebuild 2 disks simultaneously at 100 MB/s, I guess it could be done in 17 hours. For now I'm waiting for the preclear to end on the 6 TB replacement drive; the 4 TB just finished 1 hour ago. Edit: the rebuild has started for both disks.
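The 17-hour guess holds up: as far as I know, Unraid rebuilds all replaced disks in one parallel pass, so the larger 6 TB disk sets the total time. Rough arithmetic with the figures from the post:

```shell
# 6 TB rebuilt at a sustained 100 MB/s (decimal units throughout):
echo "$(( 6 * 1000 * 1000 / 100 ))"          # seconds: 60000
echo "$(( 6 * 1000 * 1000 / 100 / 3600 ))"   # whole hours: 16, i.e. ~17 h
```

In practice the sustained rate drops on the inner tracks of the platters, so the real figure is usually somewhat longer.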
  12. Ok, thank you. The preclear should end in 4 or 5 hours. The rebuild should take at least 3 days.
  13. Finally the array is started and md4 is mounted. Impossible to tell if all the files are there; I have not seen any lost+found directory. Can I rebuild disk3 and disk4 simultaneously? Is it preferable to copy the content of md4 to an external disk first?
  14. Start is not finished, but the log is showing this, so I guess it should be good.
     Jan 23 13:01:10 godzilla emhttpd: shcmd (849): mkdir -p /mnt/disk4
     Jan 23 13:01:10 godzilla emhttpd: shcmd (850): mount -t xfs -o noatime /dev/md4 /mnt/disk4
     Jan 23 13:01:10 godzilla kernel: XFS (md4): Mounting V5 Filesystem
     Jan 23 13:01:10 godzilla kernel: XFS (md4): Ending clean mount
     Jan 23 13:01:11 godzilla kernel: xfs filesystem being mounted at /mnt/disk4 supports timestamps until 2038 (0x7fffffff)
     Jan 23 13:01:11 godzilla emhttpd: shcmd (851): xfs_growfs /mnt/disk4
     Jan 23 13:01:11 godzilla root: meta-data=/dev/md4      isize=512    agcount=6, agsize=268435455 blks
     Jan 23 13:01:11 godzilla root:          =              sectsz=512   attr=2, projid32bit=1
     Jan 23 13:01:11 godzilla root:          =              crc=1        finobt=1, sparse=1, rmapbt=0
     Jan 23 13:01:11 godzilla root:          =              reflink=0
     Jan 23 13:01:11 godzilla root: data     =              bsize=4096   blocks=1465130633, imaxpct=5
     Jan 23 13:01:11 godzilla root:          =              sunit=0      swidth=0 blks
     Jan 23 13:01:11 godzilla root: naming   =version 2     bsize=4096   ascii-ci=0, ftype=1
     Jan 23 13:01:11 godzilla root: log      =internal log  bsize=4096   blocks=521728, version=2
     Jan 23 13:01:11 godzilla root:          =              sectsz=512   sunit=0 blks, lazy-count=1
     Jan 23 13:01:11 godzilla root: realtime =none          extsz=4096   blocks=0, rtextents=0
  15. I'll use another disk and keep the actual disk3 apart. Start and stop of the array are very long; I'm waiting for the start to finish to see if md4 is mounted.
  16. I'm stopping the array, but it's pretty long. I will restart it in normal mode to see if md4 can be mounted. If yes, I suppose the next step is rebuilding disk3 and disk4 (for that I have to wait until the preclear ends).
  17. xfs_repair -L /dev/md4
     Phase 1 - find and verify superblock...
     sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 128
     resetting superblock root inode pointer to 128
     sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
     resetting superblock realtime bitmap inode pointer to 129
     sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
     resetting superblock realtime summary inode pointer to 130
     Phase 2 - using internal log
         - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
         - scan filesystem freespace and inode maps...
     sb_icount 0, counted 40128
     sb_ifree 0, counted 349
     sb_fdblocks 1464608875, counted 440547342
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - agno = 4
         - agno = 5
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 3
         - agno = 5
         - agno = 2
         - agno = 4
         - agno = 1
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (4:205143) is ahead of log (1:2).
     Format log to cycle 7.
     done
  18. xfs_repair -v /dev/md4
     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
         - block cache size set to 6137384 entries
     sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 128
     resetting superblock root inode pointer to 128
     sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
     resetting superblock realtime bitmap inode pointer to 129
     sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
     resetting superblock realtime summary inode pointer to 130
     Phase 2 - using internal log
         - zero log...
     zero_log: head block 205153 tail block 205149
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
     With the GUI I can't run it; nothing happens.
  19. Perhaps I don't understand correctly. When I write disk3, I mean sdg, the physical disk; md3 is the "logical disk". As disk3 is disabled, md3 is emulated. Am I correct?
  20. Thank you for your answer. I will do that when the preclear ends. If I understood correctly: md3 is fine, as it is mounted, so disk3 should be able to be rebuilt; md4 must have file system errors, and if xfs_repair corrects them I should be able to rebuild disk4. So once I have md3 and md4 without errors, I can rebuild disk3 and disk4 simultaneously. The actual disk4 (WD60EFRX) is not seen by the controller; it makes some clicking noises at startup.
  21. Hello all, back in September I had to stop my server for a move. When I restarted it I had problems with some disks. I finally decided to give it a chance and bought 2 WD MyBook 6 TB drives to shuck; right now one is in the preclearing process. So the situation is: normally my array has 2 6 TB parity disks and 4 data drives (disks 1, 2 and 3 are 4 TB disks, disk 4 is 6 TB). Now disks 1 and 2 are OK. Disk 3 is disabled; I can read the emulated content, but I tried xfs_repair -L without success and the disk can't be mounted. Disk 4 is not detected and I can't read the emulated disk. What can I do to recover the disk 3 & 4 content and get the server back online? For now the array is started, VMs and Docker are disabled, and a preclear is running. I attached diags. godzilla-diagnostics-20220122-1520.zip
  22. I guess moving data out of the array was not the right choice. I have to find a way. I want to drop Unraid. In 2.5 years this is not the first time I've been in such trouble, and it takes ages to recover. I never had such problems with my Syno or my Proxmox server.
  23. I'm in trouble. I was transferring data out of the emulated disk4 when disk3 had errors; it is now disabled. Disk2 also started to have errors. Now in /mnt/user/ I can't see any of the files which were on disk3 or disk4, but I can see /mnt/disk4 and not /mnt/disk3. godzilla-diagnostics-20210929-2226.zip