jmoulder Posted February 8, 2020

I have a disk that shows an error and won't mount. I ran the check and got the following results:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 5
        - agno = 1
        - agno = 2
        - agno = 0
        - agno = 3
        - agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

Where should I go from here? The array appears to be in good health; the only issues appear to be with this drive. I have some "new in box" drives that are the same size as this one. Should I just replace the drive, or should I take some action to salvage this one?
itimpi Posted February 8, 2020

Rerun without the -n flag, and if prompted add the -L flag. After that the drive should mount OK.
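For anyone following along, the commands involved look roughly like this. This is a sketch, not Unraid-specific gospel: run it from the console with the array started in maintenance mode, and note that /dev/md1 is only an example device name — substitute the md device for the affected disk slot so parity stays in sync.

```shell
# Read-only check: -n reports problems but modifies nothing
# (this is the run that printed "No modify flag set" above)
xfs_repair -n /dev/md1

# Actual repair: same command without -n
xfs_repair /dev/md1

# If xfs_repair refuses to run because of a dirty log and mounting
# the filesystem to replay it is not possible, -L zeroes the log.
# This can discard the last few in-flight transactions, but it
# allows the repair to proceed:
xfs_repair -L /dev/md1
```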
jmoulder Posted February 8, 2020

Here are the results:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 5
        - agno = 1
        - agno = 2
        - agno = 4
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

The disk is showing as not installed: "Device is missing (disabled), contents emulated."
JorgeB Posted February 9, 2020

Unmountable and disabled are two different things, though they are often related. Does the emulated disk mount correctly now? It's also a good idea to post the diagnostics, especially if you didn't reboot after the disk got disabled.
jmoulder Posted February 9, 2020

The drive shows as missing/emulated, but I can get to the contents; from other devices it appears as if the drive is still there. Unfortunately, there have been reboots since the issue started. There were issues with notifications and Gmail, and I quit getting email notifications. It wasn't until I went to check on Plex updates that I realized I had an issue. So it looks like I've had a problem for a while that the array functions have successfully covered up.
itimpi Posted February 9, 2020

If the disk is being emulated correctly (using the combination of parity plus ALL the other good drives), then you want to follow the procedure documented here to get the physical drive back into a working state and the array protected again.
jmoulder Posted February 9, 2020

So basically replace the drive with a new one? Pull the old one, put in a replacement, and let it rebuild?

Normal replacement

This is the normal case of replacing a failed drive where the replacement drive is not larger than your current parity drive(s). It is worth emphasising that Unraid must be able to reliably read every bit of parity PLUS every bit of ALL other disks in order to reliably rebuild a missing or disabled disk. This is one reason why you want to fix any disk-related issues with your Unraid server as soon as possible.

To replace a failed disk or disks:
1. Stop the array.
2. Power down the unit.
3. Replace the failed disk(s) with new one(s).
4. Power up the unit.
5. Assign the replacement disk(s) using the Unraid webGui.
6. Click the checkbox that says "Yes I want to do this" and then click Start.

I actually had 2 NIB drives handy, as I was planning on consolidating to fewer, larger drives and adding a second parity drive. Since it was one of the larger drives that failed, it looks like consolidating will be put off for a while. As soon as the drive is rebuilt, I want to get the second parity drive added to the array.
JorgeB Posted February 10, 2020

15 hours ago, jmoulder said: "So basically replace the drive with a new one?"

Depends on how the old one looks; post current diagnostics.
jmoulder Posted February 10, 2020

I already pulled the drive, and the array successfully rebuilt onto a replacement. As far as diagnosing the drive that was giving me trouble: should I put it in a spare slot and run some diagnostics, or am I better off adding it to a different computer and analyzing it from outside the array?
JorgeB Posted February 10, 2020

You could start by posting a SMART report, before or after running an extended SMART test.
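For reference, a typical sequence with smartctl (from the smartmontools package) looks like the following. /dev/sdX is a placeholder for the actual device node, which depends on where the drive ends up connected:

```shell
# Full SMART report: health status, attribute table, and error log
smartctl -a /dev/sdX

# Start an extended (long) self-test; it runs in the drive's firmware
# in the background, so the command returns immediately
smartctl -t long /dev/sdX

# After the estimated completion time has passed, check the
# self-test log for the result
smartctl -l selftest /dev/sdX
```

If the extended test completes without read errors and the reallocated/pending sector counts look clean, the drive may still be usable as a spare; otherwise it is a candidate for warranty replacement or disposal.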