GoChris Posted September 28, 2018 (edited)

Swapped a mobo out, booted up, and it complained about a couple of failed disks. Checked the disks, they seemed fine, so I decided to rebuild. Then I started getting read errors on a third disk. Cancelled the rebuild, swapped the original mobo back in, booted up with no disk problems, and it's doing the rebuild again. However, disk6 is showing a file system error. What gives? The rebuild is already past 3TB on that disk and soon to be finished with disk2. Trying to post diagnostics, but it's been stuck on collecting for many minutes.

Edit: added diagnostics. tower-diagnostics-20180927-2336.zip
JorgeB Posted September 28, 2018

When the rebuild finishes, check the filesystem on disk6. Ideally I would like to see the diags from the other board when the errors happened, to check what, if any, serious filesystem damage can be expected.
GoChris (Author) Posted September 28, 2018

I'm running an xfs_repair on it after stopping the array; it's still running... and has been for 1.5+ hours now. Could I rebuild that disk again if the repair doesn't work? Same as last time: stop the array, unselect that disk, start, stop, reselect the disk, start and rebuild?
JorgeB Posted September 28, 2018

Just now, GoChris said:
and has been for 1.5+ hours now.

That seems like a lot of time. How are you running xfs_repair? If from the command line, post the command used.

1 minute ago, GoChris said:
Could I rebuild that disk again if the repair doesn't work?

You can, but a rebuild won't fix filesystem issues.
GoChris (Author) Posted September 28, 2018

I ran "xfs_repair sdf" from the command line. It's still going. Worst case, I should have the disk's file names in a text file; I have it set up to log that each night, so I can re-fetch most of the contents. =\
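A nightly file-name log like the one mentioned above can be as simple as a cron-driven `find`. This is only a hypothetical sketch, not the poster's actual setup; on a real Unraid box the source would be something like `/mnt/disk6` and the log would live on the flash drive, but here it is demoed on a throwaway directory so it is safe to run anywhere:

```shell
#!/bin/sh
# Hypothetical nightly job: record every file name under SRC so the
# contents can be re-fetched after a filesystem loss.
# SRC and LOG are stand-ins; a real setup would point SRC at a disk
# share (e.g. /mnt/disk6) and LOG at the flash drive.
SRC=$(mktemp -d)
LOG="$SRC/disk-files.txt"
touch "$SRC/movie1.mkv" "$SRC/movie2.mkv"   # stand-in content

find "$SRC" -type f ! -name disk-files.txt | sort > "$LOG"
echo "Logged $(wc -l < "$LOG") file names"
```

Run from cron (e.g. once a night), this leaves a dated list of file names that makes re-downloading after a loss much less painful.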
JorgeB Posted September 28, 2018

2 minutes ago, GoChris said:
I ran "xfs_repair sdf" from the command line. It's still going.

That won't work; abort it and see: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui or https://wiki.unraid.net/Check_Disk_Filesystems#Drives_formatted_with_XFS

I usually post the link, but sometimes I'm on the phone and don't. When in doubt you should always ask for help.
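The key point in those links: on Unraid, filesystem checks must run against the parity-protected md device with the array started in maintenance mode, not against the raw sdX device, because writes to sdX bypass parity. A minimal sketch of building the correct target from the disk number (the `-n` flag makes it a read-only dry run; nothing is executed here, only the command line is constructed):

```shell
#!/bin/sh
# Sketch: build the correct xfs_repair target for an Unraid array disk.
# Repairing /dev/sdX bypasses parity and invalidates it; /dev/mdN keeps
# parity in sync. The array must be started in maintenance mode first.
DISK_NUM=6
MD_DEV="/dev/md${DISK_NUM}"

# -n = no-modify dry run; drop it (via the webGui check or the console)
# to actually repair.
CMD="xfs_repair -n ${MD_DEV}"
echo "Run from the console: ${CMD}"
```

The same check is available from the webGui on the disk's settings page, which fills in the right device automatically.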
GoChris (Author) Posted September 28, 2018

Thank you for the proper direction! Here is what I've done so far:

Quote
~# xfs_repair -v /dev/md6
Phase 1 - find and verify superblock...
        - block cache size set to 1504104 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 135026 tail block 135022
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair. Note that destroying
the log may cause corruption -- please attempt a mount of the filesystem
before doing this.
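The error above means the XFS journal still holds unreplayed metadata changes. The usual recovery order is: mount to replay the log, unmount, re-run the repair, and only reach for `-L` if the mount fails, since zeroing the log can lose recent metadata. A sketch of that order, with the scratch mount point an assumption; the commands are echoed rather than executed here because they modify the filesystem:

```shell
#!/bin/sh
# Sketch of the recovery order for a dirty XFS log (array started in
# maintenance mode, so the disk is not mounted). Echoed, not executed,
# because these commands write to the filesystem.
MD_DEV="/dev/md6"
MNT="/mnt/repairtest"   # hypothetical scratch mount point

cat <<EOF
1. Replay the log:      mount ${MD_DEV} ${MNT} && umount ${MNT}
2. Re-run the repair:   xfs_repair -v ${MD_DEV}
3. Only if the mount in step 1 fails, zero the log (may lose the most
   recent metadata):    xfs_repair -L -v ${MD_DEV}
EOF
```

Step 1 is the safe path: a successful mount replays the journal, after which xfs_repair can run normally without `-L`.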
GoChris Posted September 28, 2018 Author Share Posted September 28, 2018 (edited) Here is the last phase results, would appear to me that the repair hasn't really worked. Quote Phase 7 - verify and correct link counts... resetting inode 99 nlinks from 30 to 23 resetting inode 137 nlinks from 3 to 2 resetting inode 151 nlinks from 2 to 25 resetting inode 4299586694 nlinks from 3 to 2 resetting inode 2147483791 nlinks from 6 to 3 resetting inode 6442451041 nlinks from 105 to 84 resetting inode 2160336756 nlinks from 3 to 2 resetting inode 6442451062 nlinks from 3 to 2 resetting inode 2160336770 nlinks from 6 to 3 resetting inode 4299586719 nlinks from 3 to 2 resetting inode 6442451065 nlinks from 3 to 2 resetting inode 2170858033 nlinks from 4 to 3 resetting inode 4299586722 nlinks from 3 to 2 resetting inode 6442451068 nlinks from 4 to 3 resetting inode 2170878777 nlinks from 3 to 2 resetting inode 6442451085 nlinks from 3 to 2 resetting inode 6489731708 nlinks from 3 to 2 resetting inode 6498183374 nlinks from 6 to 4 resetting inode 6515155458 nlinks from 3 to 2 resetting inode 6522172647 nlinks from 3 to 2 resetting inode 6522177766 nlinks from 3 to 2 resetting inode 6550792845 nlinks from 3 to 2 resetting inode 6974460448 nlinks from 3 to 2 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x208/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x208/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x200/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x200/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x210/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x210/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x108cdf460/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x108cdf460/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x11a09a0/0x1000 libxfs_writebufr: write verifer failed on 
xfs_dir3_block bno 0x11a09a0/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0xb1a48a38/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0xb1a48a38/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x5816e8a0/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x5816e8a0/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x10cd4c708/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x10cd4c708/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0xaf2110a8/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0xaf2110a8/0x1000 Metadata corruption detected at 0x460c82, xfs_dir3_block block 0x10a51dcc8/0x1000 libxfs_writebufr: write verifer failed on xfs_dir3_block bno 0x10a51dcc8/0x1000 Maximum metadata LSN (1:135163) is ahead of log (1:2). Format log to cycle 4. releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list! XFS_REPAIR Summary Fri Sep 28 12:19:19 2018 Phase Start End Duration Phase 1: 09/28 12:14:50 09/28 12:14:50 Phase 2: 09/28 12:14:50 09/28 12:16:17 1 minute, 27 seconds Phase 3: 09/28 12:16:17 09/28 12:16:18 1 second Phase 4: 09/28 12:16:18 09/28 12:16:18 Phase 5: 09/28 12:16:18 09/28 12:16:18 Phase 6: 09/28 12:16:18 09/28 12:16:18 Phase 7: 09/28 12:16:18 09/28 12:16:18 Total run time: 1 minute, 28 seconds done Edited September 28, 2018 by GoChris Quote Link to comment
JorgeB Posted September 28, 2018

Looks like it did; does it mount now?
GoChris (Author) Posted September 28, 2018

Clearly I'm an idiot with this issue. Anyway, I have no array start option now, and I have noticed the "stale configuration" text.
GoChris (Author) Posted September 28, 2018

2 hours ago, johnnie.black said:
Looks like it did; does it mount now?

I updated Unraid (not needed and unrelated, I know) and rebooted, and the array is showing online and all good. I'll confirm the files on the drive, but I'm back in business. Thank you very much for all the help!
JorgeB Posted September 28, 2018

3 hours ago, GoChris said:
Clearly I'm an idiot with this issue. Anyway I have no array start option now. I have noticed the "stale configuration" text.

Yeah, that was the result of running xfs_repair on the sdX device instead of the mdX device; a reboot will fix it, as you found. Good that the disk is mounting. If there is a lost+found folder, check it for files; there could be some lost/incomplete files in there.
GoChris (Author) Posted September 28, 2018

1 minute ago, johnnie.black said:
Yeah, that was the result of running xfs_repair on the sdX device, instead of the mdX device, a reboot will fix it as you found, good the disk is mounting, if there is a lost+found folder check it for files, there could be some lost/incomplete files there.

Yup, and what I did wrong makes sense to me now. There are some files in the lost+found, which is great, so I'll move those around and not have to re-download so much. Once again, many thanks!
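Triaging a lost+found folder is easier with a quick inventory: xfs_repair names recovered entries after their inode numbers, so separating intact files from empty stubs tells you what to move back and what to re-download. A hypothetical sketch, demoed on a throwaway directory standing in for the real lost+found on the disk:

```shell
#!/bin/sh
# Sketch: inventory a lost+found folder after xfs_repair.
# Recovered entries are named by inode number; split them by size so
# intact files can be moved back and zero-byte stubs re-fetched.
# LF is a throwaway stand-in for e.g. /mnt/disk6/lost+found.
LF=$(mktemp -d)
printf 'fake video data' > "$LF/133429"     # stand-in recovered file
: > "$LF/133430"                            # zero-byte stub, likely lost

echo "Worth keeping:"
find "$LF" -type f -size +0c | sort
echo "Empty, re-fetch these:"
find "$LF" -type f -size 0c | sort
```

On a real disk, running `file` on each non-empty entry helps identify what it originally was before renaming and moving it back into place.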