bwv1058 Posted July 20, 2022

Dear community,

I recently replaced my parity drive after the old one failed (red cross). However, after installing the new drive and starting the parity sync, disk2 shows up as "unmountable". I'm now really concerned about data loss, since parity had not finished rebuilding, and I'm not sure how to fix the mount problem (I've already tried different cables and controllers with no change). Any help would be appreciated!

tower-diagnostics-20220720-1007.zip
itimpi Posted July 20, 2022

Handling of unmountable disks is covered in the online documentation, accessible via the 'Manual' link at the bottom of the GUI.
bwv1058 Posted July 20, 2022 (Author)

Thank you for your reply! I've already run the filesystem check, but I don't know what my next steps should be:

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - 10:24:41: zeroing log - 29809 of 29809 blocks done
        - scan filesystem freespace and inode maps...
sb_ifree 6816, counted 6810
sb_fdblocks 61151334, counted 61638135
        - 10:24:44: scanning filesystem freespace - 32 of 32 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - 10:24:44: scanning agi unlinked lists - 32 of 32 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 30
        - agno = 15
        - agno = 16
        - agno = 31
        - agno = 1
        - agno = 2
        - agno = 17
        - agno = 3
        - agno = 4
        - agno = 18
        - agno = 5
        - agno = 6
        - agno = 19
        - agno = 7
        - agno = 20
        - agno = 21
        - agno = 8
        - agno = 22
data fork in ino 2952917491 claims free block 369115387
        - agno = 23
        - agno = 24
        - agno = 9
        - agno = 25
        - agno = 10
        - agno = 26
        - agno = 27
        - agno = 11
        - agno = 28
        - agno = 29
        - agno = 12
        - agno = 13
data fork in ino 3898087901 claims free block 487260754
        - agno = 14
        - 10:25:25: process known inodes and inode discovery - 230144 of 230144 inodes done
        - process newly discovered inodes...
        - 10:25:25: process newly discovered inodes - 32 of 32 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (29,721487-721489) only seen by one free space btree
        - 10:25:25: setting up duplicate extent list - 32 of 32 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 4
        - agno = 3
        - agno = 7
        - agno = 5
        - agno = 6
        - agno = 2
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
entry "Bounced Files" in shortform directory 2818603873 references free inode 2975656813
would have junked entry "Bounced Files" in directory inode 2818603873
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - 10:25:25: check for inodes claiming duplicate blocks - 230144 of 230144 inodes done
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "Bounced Files" in shortform directory inode 2818603873 points to free inode 2975656813
would junk entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
would have reset inode 2818603873 nlinks from 3 to 2
        - 10:25:46: verify and correct link counts - 32 of 32 allocation groups done
Maximum metadata LSN (111:54898) is ahead of log (111:51422).
Would format log to cycle 114.
No modify flag set, skipping filesystem flush and exiting.
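For anyone finding this thread later: a read-only check like the one above is normally run from the Unraid console with the array started in maintenance mode. A minimal sketch, assuming disk2 maps to /dev/md2 on this system (verify the device name for your disk slot and Unraid version before running):

```shell
#!/bin/sh
# Read-only XFS check (sketch). /dev/md2 is an assumed device name for
# disk2; running against the md device keeps parity in sync.
DEV=/dev/md2
CMD="xfs_repair -n $DEV"
if [ -e "$DEV" ] && command -v xfs_repair >/dev/null 2>&1; then
    # -n = no modify: report problems but write nothing to the disk
    $CMD
else
    echo "Would run: $CMD (device or tool not present on this machine)"
fi
```

Because of -n, this pass only reports what it *would* fix, which is why the log above is full of "would have" messages.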
itimpi Posted July 20, 2022

That looks good - just restart the array in normal mode and the disk should now mount OK with all your data intact.
bwv1058 Posted July 20, 2022 (Author)

Unfortunately, it still won't mount...
trurl Posted July 20, 2022

1 hour ago, bwv1058 said:
> No modify flag set

So it didn't actually repair anything. Do it again without -n.
bwv1058 Posted July 20, 2022 (Author)

Now it's giving me the following warning:

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
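The error is asking for a journal replay, which XFS performs automatically on a successful mount. A manual mount attempt, purely as a sketch (the device and mount point names here are assumptions, not taken from the thread), would look like:

```shell
#!/bin/sh
# Try mounting so XFS can replay its journal (illustrative sketch;
# /dev/md2 and /mnt/test are assumed names for this system).
DEV=/dev/md2
MNT=/mnt/test
if [ -e "$DEV" ]; then
    mkdir -p "$MNT"
    # If this succeeds, the log is replayed and xfs_repair can be
    # re-run cleanly after unmounting.
    mount -t xfs "$DEV" "$MNT" && umount "$MNT" \
        || echo "mount attempt failed (expected when the log is damaged)"
else
    echo "Would run: mount -t xfs $DEV $MNT"
fi
```

When the mount fails, as it did here, the error message's fallback (-L) is the remaining option.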
JonathanM Posted July 20, 2022

Remove the -n and add -L, like it asks.
trurl Posted July 20, 2022

1 hour ago, bwv1058 said:
> please attempt a mount of the filesystem before doing this.

The XFS utility doesn't know that Unraid has already tried to mount it and failed. So you have to do what was suggested:

16 minutes ago, JonathanM said:
> remove the -n and add -L like it asks.
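The destructive-log repair the advice above refers to is the same command with -L in place of -n. A hedged sketch (again assuming /dev/md2 for disk2; -L zeroes the journal and can lose the most recent metadata updates, so it is a last resort after a mount attempt has failed):

```shell
#!/bin/sh
# Repair with log zeroing (sketch). Only run this after a mount attempt
# has failed, since -L discards unreplayed journal entries.
DEV=/dev/md2   # assumed device for disk2; verify before running
CMD="xfs_repair -L $DEV"
if [ -e "$DEV" ] && command -v xfs_repair >/dev/null 2>&1; then
    $CMD
else
    echo "Would run: $CMD (device or tool not present on this machine)"
fi
```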
bwv1058 Posted July 20, 2022 (Author)

Wow, it seems the filesystem repair (using -L) did the trick, and my drive is now mountable again. Thanks to each and every one of you for your help. You're awesome!

Two questions:
1. How likely is it that I lost some data in the process? (I'm not seeing anything obvious...)
2. What caused this mess in the first place?
itimpi Posted July 20, 2022

37 minutes ago, bwv1058 said:
> 1. How likely is it that I might have lost some data in the process (I'm not seeing anything obvious...)

If there is no lost+found folder then it is very unlikely that anything was lost. You can only be certain, however, if you have checksums for the data that should be on the drive.

38 minutes ago, bwv1058 said:
> What caused this mess in the first place?

Almost impossible to say. Any sort of glitch that can cause a write to the drive to be lost could cause this.
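The checksum approach mentioned above can be put in place for the future with standard tools. A runnable sketch using a temporary directory; in practice you would point it at a real share path on the array (any such path here would be an assumption, so a demo directory is used instead):

```shell
#!/bin/sh
# Build a checksum manifest for a directory tree, then verify it later.
# Uses a throwaway demo directory; substitute your actual share path.
DATA=/tmp/chk-demo
mkdir -p "$DATA"
echo "some file contents" > "$DATA/track.txt"

# Create the manifest (kept outside $DATA so it doesn't checksum itself).
( cd "$DATA" && find . -type f -exec sha256sum {} + ) > /tmp/manifest.sha256

# Verify later: prints "OK" per file, and flags anything changed or missing.
( cd "$DATA" && sha256sum -c /tmp/manifest.sha256 )
```

After an incident like this one, re-running the verify step tells you exactly which files, if any, no longer match.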
trurl Posted July 21, 2022

Do you have a lost+found share now?
bwv1058 Posted July 21, 2022 (Author)

Thanks again for your replies! No, fortunately there is no "lost+found" share, so I'm assuming everything is alright!

I've noticed that lately my drives have been running very hot (40-50 °C). Could that explain the problems I was having?
itimpi Posted July 21, 2022

4 hours ago, bwv1058 said:
> I've noticed that lately my drives have been running very hot (40-50 Celsius). Could that explain the problems I was having?

It should not, as that is within the temperature range the drives are rated for. However, it could mean that thermal expansion is adversely affecting the SATA connections, since SATA is a notoriously fragile type of connector as far as a good connection goes.
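For keeping an eye on temperatures outside the GUI, smartmontools can read them directly from SMART. A sketch (the device name /dev/sdb is illustrative, not from the diagnostics; use your actual disk):

```shell
#!/bin/sh
# Read a drive's current temperature from its SMART attributes (sketch).
DEV=/dev/sdb   # illustrative device name -- substitute your disk
if command -v smartctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    # Common attribute names for drive temperature
    smartctl -A "$DEV" | grep -i -E 'Temperature_Celsius|Airflow_Temperature' \
        || echo "no temperature attribute reported"
else
    echo "Would run: smartctl -A $DEV"
fi
```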