campfred Posted July 3, 2020

Hello, and thank you to anyone reading this! I recently shut down my server gracefully via the Shutdown button on the Main page so that I could move it to another location in the house. The move of the tower went well, with no bumps or shocks whatsoever. However, since I powered the server back up, unRAID shows that the newest data disk in my array (which is only a few months old) has an unmountable file system.

The server is currently powered on with the array online. I'm considering taking the array offline to prevent any more data from being exchanged while it's in this state. Of course, you'll find my diagnostics zip attached to this post. I checked the syslog for any failed commands (it has happened in the past that there were read errors on my parity drive), but there don't seem to be any.

So, I am considering attempting to repair the XFS partition as documented in the wiki, after reading another post from 2018 about a similar case. However, I wanted to check on the forums first in case I missed something, and because I haven't come across this situation often, I don't have much experience dealing with it.

alfred-diagnostics-20200702-2318.zip
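P.S. For anyone wanting to do the same syslog check: mine was nothing fancy, just a rough filter over the log from the console, something along the lines of

    grep -iE 'error|fail' /var/log/syslog

The pattern is just my guess at what a failed command would look like, not an exhaustive check, but it came back with nothing suspicious around the shutdown and reboot.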
JorgeB Posted July 3, 2020

4 hours ago, campfred said:
I am considering attempting to repair the XFS partition

That's what you should do: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
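For reference, the webGui check is, as far as I know, just running xfs_repair against the disk's md device, so the console equivalent would be something like this (array started in Maintenance mode; X stands for the slot number of the affected disk, so adjust for your setup):

    xfs_repair -nv /dev/mdX    # -n = check only, nothing is modified
    xfs_repair -v /dev/mdX     # same check, but repairs are actually written

Always run it against the mdX device rather than the raw sdX device so parity stays in sync.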
campfred (Author) Posted July 3, 2020

8 hours ago, johnnie.black said:
That's what you should do: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui

Thanks for the reply! So far, I ran the check with the « -nv » options from the administration panel, and it seems like the only alerts are that the command is running in no-modify mode and is therefore ignoring a few things. Did I miss something?

Phase 1 - find and verify superblock...
        - block cache size set to 1042872 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 475434 tail block 475403
ALERT: The filesystem has valuable metadata changes in a log which is
being ignored because the -n option was used. Expect spurious
inconsistencies which may be resolved by first mounting the filesystem
to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (4:479122) is ahead of log (4:475434).
Would format log to cycle 7.
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Fri Jul 3 12:21:19 2020

Phase           Start           End             Duration
Phase 1:        07/03 12:21:14  07/03 12:21:14
Phase 2:        07/03 12:21:14  07/03 12:21:14
Phase 3:        07/03 12:21:14  07/03 12:21:18  4 seconds
Phase 4:        07/03 12:21:18  07/03 12:21:18
Phase 5:        Skipped
Phase 6:        07/03 12:21:18  07/03 12:21:19  1 second
Phase 7:        07/03 12:21:19  07/03 12:21:19

Total run time: 5 seconds
JorgeB Posted July 3, 2020

Run it without -n or nothing will be done, and if it asks for it, use -L.
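In other words, run something like this from the console (in the webGui you would only enter the options, not the device; X is again the disk number):

    xfs_repair -v /dev/mdX     # real repair; may abort and ask for -L
    xfs_repair -vL /dev/mdX    # only if asked: zeroes the log first, then repairs

Keep in mind that -L discards whatever metadata updates were still sitting in the journal, so only reach for it when xfs_repair refuses to proceed without it.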
campfred (Author) Posted July 3, 2020

Now the output is a bit different. It sounds like there is a discrepancy between the filesystem's log and the actual data on disk?

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is
being ignored because the -n option was used. Expect spurious
inconsistencies which may be resolved by first mounting the filesystem
to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (4:479122) is ahead of log (4:475434).
Would format log to cycle 7.
No modify flag set, skipping filesystem flush and exiting.
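Reading the ALERT again, it says the inconsistencies « may be resolved by first mounting the filesystem to replay the log » (and I notice the « No modify flag set » lines are still there, so this run didn't actually change anything either). If I understand it right, on unRAID, mounting just means starting the array in Normal mode; doing it by hand would look something like this (X being the disk number, and this is only my reading of the message, so take it with a grain of salt):

    mount /dev/mdX /mnt/diskX    # a successful mount replays the XFS journal
    umount /mnt/diskX

If the mount fails, as it does for my disk since it shows as unmountable, that's presumably when the -L option becomes necessary.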
campfred (Author) Posted July 3, 2020

Update: I ran the repair with no options first. It stopped and asked me to use the -L option, as mentioned by @johnnie.black, since there were issues with the drive's log. So, I ran the repair again with the -L switch and it completed. Great.

Then, I restarted the array, but still in Maintenance mode (I wanted to be safe), and the disk was still showing as unmountable. Turns out I had forgotten that in Maintenance mode the drives aren't actually mounted, and I needed to go into Normal mode. So I restarted the array again, not in Maintenance mode this time, and the data disk was back to normal. I then rebooted to see if it would come up unmountable again, and so far it's holding up!

Thank you @johnnie.black! Marking the thread as solved now.
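One last note for future readers: since the repair output mentioned « moving disconnected inodes to lost+found », it may be worth checking whether anything actually ended up there after an -L repair, with something like

    ls -la /mnt/diskX/lost+found    # X = the slot number of the repaired disk

If that folder exists, the files inside are orphans that xfs_repair recovered, and they may need to be identified and moved back into place by hand.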