JoshHolme (August 1, 2019):
Hello everyone, I recently had one of my drives fail and was swapping it out for a new one. While figuring out which drive had failed, I realized I could rearrange my disks so their disk numbers in Unraid matched the numbers printed on the drive bays. I swapped the disks around and plugged them in with no apparent errors, but now three of my drives say "Unmountable" while the failed drive is being rebuilt. The rebuild seems to be going fine, but those drives still say unmountable. Was this caused by swapping the bays they were in? And are there any tips for fixing it? Thanks, Josh
trurl (August 1, 2019):
Go to Tools > Diagnostics and attach the complete diagnostics zip file to your next post.
JoshHolme (August 1, 2019):
Here are the diagnostics, @trurl: apollo-diagnostics-20190801-1942.zip
trurl (August 1, 2019):
Moving the disks shouldn't matter, since Unraid tracks them by serial number. They shouldn't be unmountable, though, so something isn't right. Are they plugged into motherboard ports or a separate controller? Maybe check the connections. The rebuild probably isn't going to work anyway, since rebuilding requires reading all the other disks. Your syslog also has a lot of this, and I don't know why:

Aug 1 19:08:14 Apollo kernel: blk_update_request: I/O error, dev fd0, sector 0
Aug 1 19:08:14 Apollo kernel: floppy: error -5 while reading block 0
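If the server really has no floppy hardware, those fd0 errors usually come from the kernel floppy driver probing a device that doesn't exist. A common way to silence them (an assumption on my part, not something taken from the diagnostics) is to unload and blacklist the module:

```shell
# Unload the floppy driver so it stops probing phantom hardware.
rmmod floppy
# Keep it from loading on the next boot. This is the standard
# modprobe.d location; on Unraid, which rebuilds its root filesystem
# at boot, this line would need to be re-applied from the go file.
echo "blacklist floppy" >> /etc/modprobe.d/blacklist-floppy.conf
```

This only quiets the log noise; it is unrelated to the unmountable disks.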
JoshHolme (August 2, 2019):
@trurl The rebuild is at 10% right now. I use both the motherboard ports and a PCIe SATA card. It's not a RAID card, and the only thing plugged into it right now is my cache disk, which is operating normally, so all of the unmountable disks are attached directly to the motherboard. I have no idea what that is in my syslog; I don't have a floppy drive or anything like that in the server, only drive bays. (Also, I'm new to the forum and don't know if it notifies you of replies, which is why I keep @-mentioning you.) Forgot to add: all SATA connections to the motherboard are secure.
trurl (August 2, 2019):
Post a screenshot of Main > Array Devices.
JoshHolme (August 2, 2019):
@trurl Here's the screenshot.
trurl (August 2, 2019):
Emulated disk1 appears to be mounted, so let the rebuild complete. Are all your array disks XFS?
JoshHolme (August 2, 2019):
I'm not positive; it should be whatever the Unraid default is, if I had to guess.
trurl (August 2, 2019):
I may not be available when the rebuild completes; maybe someone else will pick this up. I will check back tomorrow. Don't do anything else without further advice.
JoshHolme (August 2, 2019):
Sounds good. Appreciate the help!
trurl (August 2, 2019):
Keep the original disk1. We may end up needing to get files from it. It possibly isn't bad anyway, but it sounds like you had already removed it before we had a chance to look at it.
JoshHolme (August 2, 2019):
The original disk 1 wasn't recognized in any bay I put it in, but other disks were recognized in the bay disk 1 had been in, so I think the drive itself definitely went bad. I still have it, but I do have to return it to Western Digital, since the new drive is a warranty replacement.
trurl (August 2, 2019):
> I still have it but I do have to return it
Wait on that a while.
JoshHolme (August 2, 2019):
I believe I have to send it back relatively soon, but I will hold onto it until the rebuild completes and we go through the further steps.
JoshHolme (August 2, 2019):
@trurl The rebuild completed successfully with 0 errors.
trurl (August 2, 2019):
That looks good except for those unmountable disks. I don't understand how you got those. Did you perhaps do all the disk swapping with the power still on, or something like that? Is there anything you left out of your description? Post new diagnostics.
JorgeB (August 2, 2019):
Kind of a strange problem: the filesystems look corrupt, and that's very unlikely to happen to all three in normal use. One thing the OP should do is set the SATA controller to AHCI mode, though that shouldn't change anything for this problem. Also, the extra SATA controller is not recommended and has known issues with Unraid, like timeouts and dropped disks, though that's still unrelated to the current issue since no array disks are using it.
JoshHolme (August 2, 2019):
I don't believe it was powered on when I switched them, but thinking back now, I'm not positive. The rebuild completed successfully on the new drive I put in, which is strange given that the other disks are unmountable. Should I try rebuilding the others one by one? It seems like it should work, since disk 1 rebuilt successfully.
itimpi (August 2, 2019):
> Should I try to rebuild the others one by one?
Rebuilding a disk will not fix an 'unmountable' problem, as that is normally caused by file system corruption; rebuilding just recreates the disk, including any file system corruption. The correct way to fix such a problem is to stop the array, restart it in Maintenance mode, and then click on a problem drive on the Main tab to get to the options for a file system check (and repair).
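For reference, the check that the Main-tab button runs for an XFS disk is essentially xfs_repair against the disk's md device. A rough console equivalent (the /dev/md2 device name follows the usual Unraid convention for array disk 2, but verify it on your own system first) looks like:

```shell
# The array must be started in Maintenance mode so nothing is mounted.
# Dry-run check: report problems, modify nothing.
xfs_repair -n /dev/md2
# If repairable corruption is reported, run the actual repair (no -n):
# xfs_repair /dev/md2
# xfs_repair may ask for -L (zero the metadata log); that can discard
# the most recent metadata updates, so treat it as a last resort.
```

Running against the md device (rather than the raw sd device) keeps parity in sync with any repairs made.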
JoshHolme (August 2, 2019):
I will try Maintenance mode when I get home from work today.
> Rebuilding a disk will not fix an 'unmountable' problem as that is normally caused by file system corruption. Rebuilding just recreates the disk including any file system corruption.
But doesn't Unraid parity work by bit math? If the filesystem were corrupt, wouldn't the bits be different, resulting in a corrupt rebuild of disk 1?
itimpi (August 2, 2019):
When a file system is corrupt, that corruption is typically already reflected in the bits stored on the parity drive (which is why a rebuild does not help).
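A toy sketch of why that is: Unraid's single parity is a per-bit XOR across the data disks, so parity always describes whatever bits are currently on the disks, corrupt or not. The byte values below are made up purely for illustration:

```shell
#!/bin/bash
# Three "data disk" bytes and their XOR parity.
d1=$((0xA5)); d2=$((0x3C)); d3=$((0xF0))
parity=$(( d1 ^ d2 ^ d3 ))

# Filesystem corruption changes the bits on disk 2; parity is kept in
# sync with what is actually on the disks, so it now protects the
# corrupted data.
d2=$((0xFF))
parity=$(( d1 ^ d2 ^ d3 ))

# "Rebuilding" disk 2 from parity plus the other disks faithfully
# reproduces the corrupted byte, not the original one.
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "$rebuilt"    # 255, i.e. the corrupted 0xFF, not the original 0x3C
```

Parity has no notion of "valid filesystem"; it only guarantees the rebuilt disk matches what was last written.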
JoshHolme (August 2, 2019):
@itimpi I just started the filesystem check with -n on disk 2 and immediately got this error:

Phase 1 - find and verify superblock...
error reading superblock 4 -- seek to offset 1000204861440 failed
couldn't verify primary superblock - attempted to perform I/O beyond EOF !!!
attempting to find secondary superblock...
..................................................................................................

and the dots continue as it tries to find a secondary superblock.
JoshHolme (August 2, 2019):
Disk 3 gives the following output with -n:

Phase 1 - find and verify superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.
JoshHolme (August 2, 2019):
Disk 4 gives the following output with -n:

Phase 1 - find and verify superblock...
couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
attempting to find secondary superblock...
....................................