Homerr

Members

  • Posts: 66
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


Homerr's Achievements

Rookie (2/14)

Reputation: 3

  1. I did end up swapping in the second 12tb drive as Parity 2. Then a friend with more Linux experience helped me get all 3 dockers back up and running. The settings were all still in place and fine. We also moved the errant files off the cache drive, so everything is back as it was. The 10-12tb of extra free space after the first parity drive swap is still a bit of a mystery to me. Does parity somehow have this much overhead on a 136tb pool? (See the free-space sketch after this list.)
  2. Welp, I was reading a thread on reddit about transferring the parity data vs. just yanking the parity drive and rebuilding. I chose the latter, but I now see that missing from that discussion was turning off dockers. 1. Are the dockers completely borked? It's not the end of the world, but if they can be recovered that would be nice. 2. Any insights on why the free space on the server went up by roughly 12tb, which happens to be the replacement parity drive size? Besides the docker issue, I plan to upgrade the Parity 2 drive to a 12tb one as well. It seems like I should do that before setting the dockers back up - assuming they are not recoverable. TIA for any help on direction.
  3. I run two parity drives (10tb) and swapped out 'Parity' (not 'Parity 2') with a 12tb drive and let it rebuild. I also have 15 data drives and an SSD cache. It has just finished, and several things have happened. Free drive space went from ~8tb to 20tb. Alarmingly, three dockers have disappeared - Crashplan, Krusader, and Plex. Fix Common Problems has sent a notice: I have only tried one reboot since this happened. Have I borked my Docker setup, or can anything be recovered? unraid-syslog-20211018-0128.zip
  4. I'm having similar issues to jasonmav and scud133b above with CRASHPLAN_SRV_MAX_MEM; see the image below of the pop-ups. I tried editing the docker's memory setting to 2048M and 4096M to no avail (I have 16gb ram fwiw). I only get the red X on the webui header now (the second image is butted up under the first image with the 3 memory warning boxes). In doing the above I also broke the memory setting, and I hope one of you can post a screenshot/fix. The last time I edited the docker memory settings I selected 'Remove', thinking it would remove the value, not the variable itself. I see how to add the variable back in; I just need to see what the memory settings should be. (There's a sketch of the variable after this list.)
  5. Thanks so much johnnie for noting this. I think this was the real culprit all along. In 25 years of computer building I've never had a standard SATA cable go bad. Everything is back in the array, a parity check showed no errors, and it's been running for a week now.
  6. Thanks! I just finished consolidating some backup files to free up a 10tb drive and swapped that in as a replacement for Disk 8. It's rebuilding now. I'll do as you suggested and check it against the files. And then that drive will replace Disk 13.
  7. Both disks have lost+found folders. Disk 8 just has 2 folders with files named like 132, 133, 134, etc., with no file extensions. The files are large, multi-GB - probably movies...? Disk 13 has more recognizable folders and files. Dumb question: how do I recover each of them from the lost+found trees? (There's an identification sketch after this list.)
  8. I started the array and the disks now show up with Used and Free space (instead of Unmountable), but they are still emulated.
  9. The outputs are long, so I put them in text files, attached: Disk 8.txt, Disk 13.txt
  10. I let it finish a check of the array while the array was started; it ran all day yesterday and finished without errors. I'm attaching the diagnostics file from after that completed. I restarted in Maintenance mode, and here are the xfs_repair outputs. I reran with no modifier but have not yet run the -L option. (The mount/replay sequence it's asking for is sketched after this list.)

      Disk 8:

        Phase 1 - find and verify superblock...
        sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 128
        resetting superblock root inode pointer to 128
        sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
        resetting superblock realtime bitmap ino pointer to 129
        sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
        resetting superblock realtime summary ino pointer to 130
        Phase 2 - using internal log
                - zero log...
        ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed.
        Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.
        If you are unable to mount the filesystem, then use the -L option to destroy the log
        and attempt a repair. Note that destroying the log may cause corruption -- please
        attempt a mount of the filesystem before doing this.

      Disk 13:

        Phase 1 - find and verify superblock...
        sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 97
        resetting superblock realtime bitmap ino pointer to 97
        sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 98
        resetting superblock realtime summary ino pointer to 98
        Phase 2 - using internal log
                - zero log...
        ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed.
        Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.
        If you are unable to mount the filesystem, then use the -L option to destroy the log
        and attempt a repair. Note that destroying the log may cause corruption -- please
        attempt a mount of the filesystem before doing this.

      unraid-diagnostics-20191216-1428.zip
  11. Been sick this week, just getting back to this. I bought new SATA cables for all 4 drives running off the mobo, including the 2 parity drives. I started the array in Maintenance mode and ran xfs_repair with no modifiers. It said to start the array and then rerun, and to use -L if that didn't work. Here is the diagnostics file with the array back up. I have not gone back into Maintenance mode yet or rerun anything. unraid-diagnostics-20191214-2113.zip
  12. oops! unraid-diagnostics-20191208-1555.zip
  13. Diagnostics attached. unraid-syslog-20191208-0001.zip
  14. Disk 8:

        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
                - scan filesystem freespace and inode maps...
                - found root inode chunk
        Phase 3 - for each AG...
                - scan and clear agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0
                - agno = 4
                - agno = 6
                - agno = 3
                - agno = 5
                - agno = 1
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 2
        Phase 5 - rebuild AG headers and trees...
                - reset superblock...
        Phase 6 - check inode connectivity...
                - resetting contents of realtime bitmap and summary inodes
                - traversing filesystem ...
                - traversal finished ...
                - moving disconnected inodes to lost+found ...
        Phase 7 - verify and correct link counts...
        Maximum metadata LSN (4:2) is ahead of log (1:2).
        Format log to cycle 7.
        done

      And Disk 13:

        Phase 1 - find and verify superblock...
        superblock read failed, offset 0, size 524288, ag 0, rval -1
        fatal error -- Input/output error

      (See the read-check sketch after this list for Disk 13's I/O error.)
  15. I got the new LSI card from Art of Server and installed it. I started in Maintenance mode, clicked on each of disks 8 and 13, and ran xfs_repair without -n. It then said it needed to mount the drives to get information. I stopped Maintenance mode and started the array in normal mode - I don't know if this is really what it was asking for. Stopped the array again, went into Maintenance mode, and re-ran xfs_repair. It didn't seem to be definitive about what happened, just 'done'. Stopped Maintenance mode and started the array as normal. Disks 8 and 13 are still having issues. Stopped, back to Maintenance mode. Ran xfs_repair with -L to rebuild the log on each drive. Stopped Maintenance mode, started the array as normal. No joy. What do I do next? In the jumble of all this, my hot spare is now smaller than the drives it would replace. Can I force an array rebuild on these two drives without reassigning a different drive?
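
On the free-space mystery in posts 1-3, a rough sketch of the arithmetic as I understand it, assuming Unraid counts only data disks toward usable and free space, with parity being pure overhead equal to the parity drives' own sizes. All sizes below are illustrative, not the actual layout:

    # Hypothetical layout: 136tb of data disks plus dual parity.
    data_tb=136      # sum of the 15 data drives
    parity1_tb=12    # Parity, after the first swap
    parity2_tb=10    # Parity 2, not yet upgraded

    # Usable space ignores parity entirely, so parity "overhead" is just
    # the two parity drives themselves:
    echo "usable: ${data_tb} tb"
    echo "parity overhead: $((parity1_tb + parity2_tb)) tb"

    # Swapping a parity drive changes only parity capacity, so free space
    # should not move at all. A ~12tb jump matching the new parity drive's
    # size suggests something else (a drive being counted as data, or files
    # moved off the array), not parity overhead on the pool.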
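On post 4's CRASHPLAN_SRV_MAX_MEM question, a minimal sketch of setting the engine's memory cap, assuming the jlesage CrashPlan PRO image where that variable is documented. The container name and host paths here are hypothetical, and on Unraid this corresponds to adding or editing a Variable on the docker template rather than running the command by hand:

    # Cap the CrashPlan Java engine at 4 GB via the container's
    # CRASHPLAN_SRV_MAX_MEM environment variable.
    docker run -d \
      --name crashplan-pro \
      -e CRASHPLAN_SRV_MAX_MEM=4096M \
      -p 5800:5800 \
      -v /mnt/user/appdata/crashplan:/config:rw \
      jlesage/crashplan-pro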
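On post 7's lost+found question, a small sketch for identifying the extensionless files before moving them back, assuming the disks are mounted at /mnt/disk8 and /mnt/disk13 (paths and the example destination are hypothetical). file(1) reads magic bytes, so a recovered movie shows up as e.g. Matroska or MPEG data even under a bare numeric name like 132:

    # Walk both lost+found trees and print each file with its detected type.
    find /mnt/disk8/lost+found /mnt/disk13/lost+found -type f -exec file {} +

    # Then move items back by hand once identified, e.g.:
    # mv "/mnt/disk8/lost+found/132" "/mnt/user/Movies/recovered-132.mkv"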
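On the xfs_repair errors in posts 10-11, the sequence the ERROR text is asking for, as I read it: mount the filesystem so the journal replays, unmount, then repair again, reaching for -L only if the mount itself fails. Device and mount point below are placeholders; on Unraid of that era, Disk 8's array device would be something like /dev/md8, worked on from Maintenance mode:

    mount /dev/md8 /mnt/disk8    # mounting replays the metadata log
    umount /mnt/disk8
    xfs_repair /dev/md8          # rerun without -n so it can actually write fixes

    # Last resort, only if the mount fails -- destroys the log and may
    # lose recent metadata changes:
    # xfs_repair -L /dev/md8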
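On post 14's Disk 13 result, that "superblock read failed ... Input/output error" is the device failing the read itself rather than filesystem damage, so before more repair passes it's worth checking the drive and cabling directly. /dev/sdX is a placeholder for Disk 13's actual device:

    smartctl -a /dev/sdX                          # SMART health: reallocated/pending sectors, error log
    dd if=/dev/sdX of=/dev/null bs=1M count=100   # can the first blocks be read raw at all?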