JorgeB Posted March 8, 2021
The filesystem check is done with the disks assigned. If there's no option in the GUI, you need to first set the correct filesystem by clicking on that disk, or run the check from the console.
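For anyone following along, a minimal sketch of what the console check looks like from maintenance mode. The /dev/md1 device name is an assumption for disk1 -- substitute your own disk number, and use the md device so any writes stay in sync with parity:

    xfs_repair -n /dev/md1    # -n = no-modify mode: report problems, change nothing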
curtis-r (Author) Posted March 8, 2021
I started in maintenance mode with all disks assigned, but when I click the unmountable disk, there is no Check Filesystem Status section in settings. I tried:
xfs_admin -U generate /dev/sdi1
but it returns:
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_admin. If you are unable to mount the filesystem, then use the xfs_repair -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
I also tried:
XFS_repair /dev/sdi1
which returns:
XFS_repair: command not found
itimpi Posted March 8, 2021
Linux is case sensitive - the command is xfs_repair.
curtis-r (Author) Posted March 8, 2021
Ha, guess the wiki needs updating. Anyhow, the command executed this time, but it returned:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
itimpi Posted March 8, 2021
6 minutes ago, curtis-r said:
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. [...]
That is quite normal - rerun with the -L option.
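In other words (a sketch, assuming the same /dev/sdi1 device from the posts above; per the later replies, the md device would be the safer target):

    xfs_repair -L /dev/sdi1    # -L zeroes the metadata log, then attempts the repair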
trurl Posted March 8, 2021
4 minutes ago, curtis-r said:
Ha, guess the wiki needs updating
Can you post a link to what you think needs updating?
5 minutes ago, curtis-r said:
use the -L option
Since Unraid has already determined the disk is unmountable, there is no way to mount it to replay the log. You have to use -L to continue without that. Also, there was no point in doing this:
53 minutes ago, curtis-r said:
I tried xfs_admin -U generate /dev/sdi1
That was for a completely different situation and purpose.
trurl Posted March 8, 2021
16 hours ago, trurl said:
New Config instead of rebuild is not the recommended approach to fix disabled disks.
4 minutes ago, trurl said:
no point in doing this:
58 minutes ago, curtis-r said:
I tried xfs_admin -U generate /dev/sdi1
That was for a completely different situation and purpose.
You have learned some things that can be done during this thread, but not why or when you should do them. Please ask for advice.
curtis-r (Author) Posted March 8, 2021 (edited)
xfs_repair /dev/sdi1 -L ran correctly. Restarted the array in normal mode. I could then see disk1's data & all appears intact! I'm surprised how much is in lost+found, but I'll dig into that after the parity is done rebuilding. Anyhow, you all are AWESOME!
Moving on, a new HD should arrive today. Any opinion on whether the issue was the drive? It was manufactured in 2013. My inclination is to decommission disk1 and keep it as an air-gapped backup in my closet.
Wiki that needs updating: the "XFS" code under Checking File System.
EDIT: I forgot that drive already had a lost+found from the earlier issue. I don't think there is anything new in that folder.
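When you do dig into lost+found, a couple of quick commands can help (paths assume disk1 is mounted at /mnt/disk1, the usual Unraid layout):

    ls -la /mnt/disk1/lost+found    # list what the repair recovered
    du -sh /mnt/disk1/lost+found    # total size of the recovered items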
JorgeB Posted March 8, 2021
31 minutes ago, curtis-r said:
xfs_repair /dev/sdi1 -L
Not using the md device means parity is no longer valid; you should run a correcting parity check.
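For context, a sketch of the difference (device names assumed, matching the earlier posts):

    xfs_repair -L /dev/sdi1    # raw sd device: repairs bypass the array driver, so parity is invalidated
    xfs_repair -L /dev/md1     # md device: writes go through the array driver, keeping parity in sync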
itimpi Posted March 8, 2021
1 hour ago, curtis-r said:
Ha, guess the wiki needs updating
Just checked and you are right! I suspect an auto-correct kicked in and got missed. I used to have edit capabilities to fix this sort of issue, but that seems to have been lost with the recent upgrade of the wiki. I'll have to contact Limetech to see if they will trust me enough to allow me to do so in the future.
curtis-r (Author) Posted March 8, 2021
47 minutes ago, JorgeB said:
Not using the md device means parity is no longer valid; you should run a correcting parity check.
Yes, rebuilding the parity currently. Thanks.
trurl Posted March 8, 2021
1 hour ago, JorgeB said:
Not using the md device means parity is no longer valid; you should run a correcting parity check.
Yet another example of
2 hours ago, trurl said:
You have learned some things that can be done during this thread, but not why or when you should do them. Please ask for advice.
curtis-r (Author) Posted March 9, 2021
Before we close the book on this, I ran an extended SMART test on the disk last night (after the parity rebuild completed without errors on that disk). SMART found no errors. I had switched HD bays the other day after the 2nd unmountable fiasco. Does it make sense that it was the HD-bay-to-SATA interface and the drive is fine? I've had issues with this box's quality control. I changed the SATA cable between the two fiascos, so it's not that.
BTW, there were some parity errors on disk5, which trurl noticed connection issues with earlier. I'm going to switch that bay and cable later today & run a SMART test.
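For reference, a sketch of running an extended SMART test from the console with smartctl (the sdX name is a placeholder for the actual device):

    smartctl -t long /dev/sdX    # start the extended (long) self-test
    smartctl -a /dev/sdX         # once it finishes, review the results and attributes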
curtis-r (Author) Posted March 9, 2021
Spoke too soon. I shut down and then moved disk5 to a different bay. On booting, it reports errors on disk1 again. Going to swap that drive right now.
tower-diagnostics-20210309-1244.zip
trurl Posted March 10, 2021
Disk 1 is not connected. Fix that and post new diagnostics.
curtis-r (Author) Posted March 10, 2021
Rebuilt parity with the new disk1, but I ran a SMART test on disk5 & it still shows errors despite changing the bay (though I forgot to change the cable).
tower-smart-20210310-0622.zip
tower-diagnostics-20210310-0713.zip
trurl Posted March 10, 2021
You should go ahead and run extended SMART on all the others.
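A quick way to start them all from the console (a sketch; the sd[b-j] device range is an assumption, so substitute the drives actually in your array):

    for d in /dev/sd[b-j]; do smartctl -t long "$d"; done    # kick off the extended self-test on each drive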
curtis-r (Author) Posted March 10, 2021
Will do. Thanks.