Troubleshooting steps for missing drive?


ElJimador


Hi.  I was trying to move disks between my 2 unRAID servers, and when I saw that the server I was moving to didn't recognize one of the data disks (it didn't register in the BIOS either), I moved everything back to the first server and found that the same drive now wasn't recognized there either (this time the BIOS shows it connected to my SATA controller, but unRAID doesn't see it).  On top of that, I was also removing some data drives that were previously part of the array but that I'd emptied to move to a Windows server.  So now I don't even have the option of starting the array with just the one disk missing, because I'm getting the "invalid configuration -- too many wrong and/or missing disks!" message.  So what should I do?  I've never had a drive drop out like this, or any kind of failure actually, so I'm not really sure of the recommended steps to identify and fix the problem.  Thanks.


It wasn't a good idea to do multiple steps at once, but what's done is done, so just be sure you proceed cautiously...

 

Since you removed some drives from the array, you're going to have to do a New Config on whichever server you install the current drives on ... so decide that first.

 

Then install the drives you actually want in the array in that server; and check very carefully to be sure both the data and power cables are securely fastened; and that all ports are enabled in the BIOS.    Some modern chipsets take a while to recognize newly attached drives ... so wait a minute or so before you try to configure the array.
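If it helps, once the drives are cabled up you can confirm what the OS actually sees from the unRAID console before touching the array config. A rough sketch (the exact device names on your system will differ):

```shell
# List every block device the kernel currently detects, with model and serial.
# unRAID assigns array slots by serial number, so if the missing drive's serial
# doesn't appear here, the controller/cable isn't presenting it at all.
lsblk -o NAME,SIZE,MODEL,SERIAL

# The by-id symlinks show the same thing, one entry per drive (plus partitions):
ls -l /dev/disk/by-id/
```

If the drive shows in the BIOS but not here, that points at the kernel/driver side rather than the cabling.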

 

It's not apparent from the configurations you've listed just what disks you might be trying to move to which server ... so it would be helpful if you detail the exact configuration you're trying to achieve.

 


It's not apparent from the configurations you've listed just what disks you might be trying to move to which server ... so it would be helpful if you detail the exact configuration you're trying to achieve.

 

Thanks for your help Gary.  My previous config was 6x3TB (1 parity, 5 data) in my original unRAID server + 4x4TB (1 parity, 3 data) in a Windows (FlexRAID) server/HTPC (both mITX).  As I was beginning to run low on space, I invested in a new, larger server and 6x6TB drives and first moved all data from both servers onto that.  All the new 6TB drives were run through 3 preclear cycles, I ran a parity check after the initial move, and no disks reported any errors.  At that point I added the 4x4TB drives to the new array for future growth also, and w/the 3TB drives on my original unRAID server now empty, I upgraded it from v5 to v6 and the file system of the drives from ReiserFS to XFS, planning to use that for backup.  But then I changed my mind (more on that later) and decided to move all the data back to the 2 little servers, but this time w/the original unRAID server running all 6TB drives instead.

 

First thing I noticed when I moved all the 6TB drives to unRAID1 was that it wouldn't even power on at first.  I had to disconnect all drives to get it to boot, and thinking it might be an issue with the power supply, I reattached the drives one at a time and monitored the wattage spike on boot through a Kill A Watt.  That method eventually identified 1 molex-to-SATA power adapter that seems to have been the culprit (attached to the drive that's now missing).  Once I replaced it the system booted fine with all disks.  BTW when it eventually did boot w/all disks the peak wattage at startup still never spiked over 90w, so if there's an issue w/the PSU itself I can't imagine what would be causing that.  I'm just a little leery of that PSU (300w SeaSonic Bronze) because it's still relatively new, and I did have an issue when I first put it in use on this server before, where a couple of times deep into parity checks the system would become unresponsive and drop off the network.  (Though after I posted here about that, I ran a syslog tail on the next parity check, which completed without errors.)

Anyway, back to now: when I couldn't get the missing drive to show even in BIOS, I moved just the 6TBs back to the new server, being sure to set up the array exactly as it was before (minus the 4TBs, which have already been moved back to the FlexRAID server and formatted for Windows).  And now the missing drive no longer shows there either.  (I said I saw it in BIOS there but I must have been confusing it w/another drive.  Looking again this morning it's not there, and I suspect it never was after I moved it back.)

 

Assuming the drive has failed, my preference at this point would still be to go ahead with the move back to the original mITX unRAID server; however, I'd need to temporarily add one of my spare 3TB drives until I get the 6TB replacement from WD, because there wouldn't be enough capacity in the rest of the array without it.  More importantly, I'm still concerned there might be an issue w/the PSU on that server.  I don't know how to diagnose whether there is, or whether this problem (and possibly the issue during the parity checks before?) really could have been caused by just 1 bad molex-to-SATA adapter.  Until I know, I'm thinking the safer bet is probably to keep the drives where they are now in the new server and recover there.

 

As for the why of all this, if it helps to know, the biggest reason behind all this moving of data is dealing with the limitations of a powerline adapter sitting between my detached office/computer lab and the house.  The 2 mITX servers were in the living room, where the modem and router sit atop the TV cabinet also, and I thought I had enough throughput w/the powerline adapter to move all data to the new server in my office and run a guest Windows VM on it as my desktop (I've been using a laptop as a desktop substitute for too long now).  But what I found was that once I moved all my media to the other side of the powerline, my parents sharing my Plex server could no longer watch anything without constant buffering, and my wife and I couldn't direct play some full rip blurays w/HD audio even in our own living room anymore.  Hence the desire to move everything back to the mITX servers as it was before (only w/a lot more capacity now, thanks to upgrading the original unRAID server to the 6TB drives).

 

Attached is a syslog after booting this morning.  The missing drive was Disk 4 in the array and should be showing up on port 3 of the SATA controller. 

syslog.txt
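For what it's worth, when a drive drops off a SATA port the kernel usually says so explicitly. A quick way to pull the relevant lines out of the attached syslog (the `ata` pattern here is just a guess at what to look for; the kernel's ata numbering won't necessarily match the controller's port labels):

```shell
# A healthy port logs "SATA link up" at boot; a dead or flaky one logs
# "SATA link down" or repeated hard resets.
grep -iE 'ata[0-9]+.*(link|reset|error)' syslog.txt
```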


Another question I'd have with regards to your performance issue is what specific powerline adapters you're using.    The better adapters can easily sustain a few hundred Mbps ... plenty for streaming anything you might want to watch.    I've used these for several folks with no problem:  http://www.newegg.com/Product/Product.aspx?Item=N82E16833122464

 

Of course the best connectivity is to run some Cat-6 (or have an electrician do it)  :)


Thanks Gary.  What about the immediate steps to recover from the failed drive?  If I do a new config through Tools minus the drive now missing, will I still be able to recover that data from parity?  I know I probably just need to study the wiki and search around here a bit, since I haven't had to deal with a failed drive before, but if there's any particular info you could point me to as a start, that would be great.

 

BTW my SeaSonic PSU is this one -- http://www.newegg.com/Product/Product.aspx?Item=N82E16817151086 and here is the manufacturer's data sheet for it -- http://www.seasonic.com/pdf/datasheet/NEW/Bulk/PC/ATX/SS-XXXET%20Active%20PFC.pdf  I can't tell if it's single rail or otherwise since they don't say so outright and I don't know where in the specs to see that.  Either way, if the PSU really is the problem then I've got to think it's a defective unit.  As I mentioned the wattage spikes at boot only to 90w and then idles with all 6 drives running around 25w.  This is a server w/a 9w TDP processor (AMD C-60) and all it's running besides that are the hard drives and case fans.  So how could any (non-defective) ATX power supply not be able to handle that?  I would think I should be able to run that off a Pico. 

 

As for your powerline recommendation, that's actually the unit I'm using (or at least a very similar model since it's also Netgear AV500).  With speedtest.net I get around 68 Mbps internet download speed vs. 110 Mbps connected to the router.  Which I know isn't great but I figured that should be plenty for streaming full rip blurays w/HD audio within the house and no issue for sharing my Plex server remotely either since the bottleneck there should be my 10 Mbps upload speed.  But maybe upstream is much slower than downstream?  Whatever the reason it hasn't worked to my expectations.  So yes, now I'm looking for a contractor to run ethernet or if that costs too much I might try upgrading to one of the new "gigabit" powerline models instead. 

 

 


... If I do a new config through tools minus the drive now missing, will I still be able to recover that data from parity?

 

No => the instant you do a New Config you'll no longer be able to rebuild the failing drive.    If that's what you want to do, you need to do that BEFORE you do anything else.    HOWEVER ... if your parity wasn't good, then the reconstructed drive won't be either.

 


Thanks Gary.  I decided to put the 4TB drives back in, so now that the config is exactly as it was before minus the missing/failed drive I do have the option to start the array.  So I guess all I need now is to RMA the failed drive and to find out how to restore its data from parity to the rest of the array while I wait for the replacement drive.  Let me know if you have any pointers on that; otherwise I'm sure I can search around the wiki and find the steps to follow.

Link to comment

Just to confirm you've done everything correctly ... when you boot the array, does the Web GUI show a "missing" drive for the one you didn't include?    It SHOULD ... if you've somehow redone the configuration so it's not showing that, you won't be able to rebuild the failed drive.    IF that happens to be the case; and IF you have NOT done any writes to the array since you started changing things around; then it MAY still be fixable.  But if that's the case, do NOT do anything with your array until you have that resolved.

 

As long as it's showing a "missing" drive you should be fine => in fact, if you browse the array with Windows Explorer you should be able to "see" the missing drive and actually access all of its data [the data is being reconstructed by reading all of the other drives in the array].

 


Well crap.  The failed drive was showing as "missing" -- until I started the array.  I was all set to just wait for the RMA replacement drive, but when I saw your post I figured if I can still access the data from that failed drive, why not take the extra precaution of backing it up to my FlexRAID array now, in case I messed something up with the restore to the new drive?  As soon as I started the array, the drive went from "missing" w/serial# to "not installed" and "unassigned", with no Disk 4 viewable through Windows either.  So I'm not sure how I misunderstood your advice (how was I supposed to see the contents of Disk 4 or anything else besides the flash drive through Windows without starting the array?).  Or maybe that should have worked, and this is an indication that there's something else wrong?  I had already pulled the failed drive for the RMA before I started the array, and the only thing I've tried since then is powering down to reinstall the failed drive, hoping that would bring it back to "missing".  It didn't.  I have not written anything new to the array since all of this started, and aside from the 2 minutes today that took the failed disk from "missing" to "not installed", the array has been stopped the entire time.

 

So what do I do now?

Link to comment

Somehow you changed the config in the process of "messing" with the systems.  Otherwise you would indeed have been able to access the data from disk #4 even if it was missing.

 

As long as you're CERTAIN you haven't actually written anything to the array, you can do the following ...

 

(1)  With the failed drive #4 installed, do a New Config and assign ALL of the drives exactly as you originally had them, and check the "Parity is Valid" box  [This is the "Trust Parity" option].

 

(2)  Start the array ... and if it starts a parity check (I believe it will), immediately cancel it.

 

(3)  Stop the array; shut down; and disconnect drive #4 (the failed drive).

 

(4)  Reboot and Start the array -- it should (a) show drive #4 as "missing" and (b) you should be able to access it via Windows Explorer  :)    As long as your parity was indeed good, and you indeed haven't changed anything since you started this process, you'll be able to copy all of the files from disk #4 to your backup FlexRAID array.

 


Thanks Gary.  I'll give this a try tomorrow and let you know how it goes.  The only other thing I noticed that was different when I tried starting the array today was that it showed Disk 7 as "unmountable".  That was one of the 4TB drives I had moved to the FlexRAID array and then back to unRAID to restore the config to exactly as it was before (minus the failed Disk 4).  Since I had already formatted them for Windows, I first put them in my other unRAID server to format them back to XFS before putting them back in this array, and looking at Array Devices prior to starting the array, that "unmountable" drive appeared just like the other 4TB drives (w/the serial# and FS showing as XFS, etc).  So I don't know why only that one would show as unmountable, but if there's something I need to do to correct that before following the steps you've outlined, please let me know.


... The only other thing I noticed that was different when I tried starting the array today was that it showed Disk 7 as "unmountable".  That was one of the 4TB drives I had moved to the FlexRAID array and then back to unRAID to restore the config to exactly as it was before (minus the failed Disk 4).  Since I had already formatted them for Windows, I first put them in my other unRAID server to format them back to XFS before putting them back in this array, and looking at Array Devices prior to starting the array, that "unmountable" drive appeared just like the other 4TB drives (w/the serial# and FS showing as XFS, etc).  So I don't know why only that one would show as unmountable, but if there's something I need to do to correct that before following the steps you've outlined, please let me know.

 

It's not clear you really have a good parity drive ==> you formatted a drive for Windows; then reformatted it as XFS; and then moved it back into your server ... that certainly sounds like a lot of write activity.    I really don't know exactly why it's not showing as mountable ... but it's not sounding too good in terms of being able to recover your disk #4.

 


Just noticed from your sig that your two servers are on different versions of UnRAID => one's v5, one's v6.    So it's not clear how your "other UnRAID server" could have formatted the disk with XFS (since v5 doesn't support that).

 

Are the version numbers incorrect?

 

... and if so, are you by chance using an RC now?    If so, and if you have a Plus license, you may have too many attached devices -- thus the "unmountable" message.    If that's the case, just upgrade to RC3, as the device limits were increased with that version, so you shouldn't have that issue.

 


Yes, I just haven't updated my sig in a while.  Both were updated to the latest RC (v6 RC3) before I started this move back to the 2 smaller servers, with all (non-cache) unRAID drives formatted as XFS.  And to clarify, the 4TB drives were empty when I moved them to my FlexRAID server and formatted them for Windows, and still empty when I moved them back to my original unRAID server to reformat them as XFS and then into this array (Server2).  They had data on them previously in Server2, but prior to the move I had moved everything onto the 6TB drives so that the 4TB drives would be empty before I moved them to FlexRAID and reformatted them for Windows.  I intended to move that data back afterwards but I never got that far, because the failure of Disk 4 put that on hold.

 

Maybe I shouldn't have done anything w/those 4TB drives until I knew that the full 6TB drives were successfully moved from Server2 to Server1 (basically transferring the entire working array over, since at that point Server1 was nothing but 1 of my PRO keys on a clean install flash drive).  But I couldn't see what the harm would be, since those drives were empty before I moved them out.  At that point I don't know what else I could have done except what I did.  I reformatted the empty drives back to XFS before restoring them to the Server2 array, where all 4 were still showing as "missing".  Once there, unRAID recognized all 4 and assigned them back to their original disk locations, and gave me no indication that there was anything wrong or off with any of them until I started the array and suddenly one of the disks was "unmountable".  Why?  There are no disk errors.  It's XFS like it was before.  It's empty like it was before.  Why would unRAID see it any differently than when it was in the array previously?

 

Tomorrow before I try anything else I'll pull Disk 7, put it back in the other (empty) unRAID Server1, reconfirm that it's formatted as XFS and that there are no disk errors, run all the SMART tests I can, etc.  If you have any other suggestions beyond that, let me know. 


Hi Gary.  Per my last post, before attempting the steps above I first installed the "unmountable" Disk7 into my other unRAID server (again, now just a clean install unRAID flash w/nothing but my PRO key) and formatted the drive there as XFS.  Maybe when I did this last time before putting it into the now primary (Server2) array I didn't let it finish formatting?  I don't know but it showed as unmountable and prompted me to format it here too, which I did.  Then I powered down and I have it installed now back in Server2.

 

I'm hesitant to either do a new config now or start the array again, though, because once again the failed Disk 4 (now physically reinstalled also) is still showing as Not Installed / Unassigned.  When you said to do a new config and assign "ALL" of the drives exactly as I originally had them, did you mean except for the failed Disk 4, or did you expect I should be able to see it again as missing?  Because if I do a new config when it can't see that there's a Disk 4 there now, then how is it going to show again as missing after I physically remove the drive and start the array again?

 

Please tell me there are some other steps to take here to recover the Disk4 data from parity.  I'm supposed to be able to recover from a single drive failure and I don't want to believe I've lost that chance just by starting the array with Disk7 unmountable when there was no warning at all to that effect.  If there was an "invalid configuration -- too many disks missing" or a red or yellow indicator by Disk7 then I never would have started the array.  Instead Disk7 showed exactly as it did when it was in the array before and exactly the same as all other working disks, w/FS=xfs and a green dot for normal operation. 

 

I've purchased 2 PRO licenses and this is my first drive failure.  Please confirm the steps you've outlined should still work or let me know what else I can do.  Thanks again.


IF your configuration still shows good parity with a missing disk #4, then you're probably okay in terms of recovering the data from it.    IF that's the case, you should be able to access disk #4 by simply going to Windows Explorer ... the drive will be emulated from the other disks and parity.

 

If you can NOT see the drive, then the array configuration has been changed somewhere along the way (I suspect this is the case).

 

If that's the case -- AND if you're CERTAIN that no data has been changed on your drives [Clearly that's NOT the case, since you've reformatted at least one drive ... but if that was the exact state it was in before, you may be lucky and it may be identical to what it was] ... THEN you can do a New Config INCLUDING the bad disk ... and check the "parity is valid" box.    This "tells" UnRAID that parity is valid for the drive configuration you've just assigned.    I believe it will nevertheless start a parity check when you first Start the array -- but you can immediately cancel that (as I noted above).

 

IF you are correct and parity is indeed valid [I have doubts about that because of the modifications you've made to some of the drives], then you can then Stop the array; unassign disk #4; then Start the array so #4 shows as "missing".    THEN you can tell if this works by simply accessing disk #4 through Windows Explorer and seeing if the data now shows okay.    If so, you can copy the data to another array and/or replace the drive and let the new drive rebuild.

 

However, you earlier noted that you had the configuration "exactly as it was before" ...

... I decided to put the 4TB drives back in, so now that the config is exactly as it was before minus the missing/failed drive I do have the option to start the array.

 

... and yet that clearly wasn't the case, since you couldn't "see" the emulated drive.

 

So somewhere along the way you modified the configuration -- and I suspect you may very well have modified some of the disks as well (which means your parity is NOT actually valid).    Certainly won't hurt to try =>  if the parity drive is in fact not valid, then when you access the emulated drive #4 you simply won't see the correct data.

 

Whatever you do, do NOT do any writes to the array.    You may still be able to do some data recovery from the failed drive if necessary.

 


If you can NOT see the drive, then the array configuration has been changed somewhere along the way (I suspect this is the case).

 

If that's the case -- AND if you're CERTAIN that no data has been changed on your drives ... THEN you can do a New Config INCLUDING the bad disk ... and check the "parity is valid" box. 

 

Okay Gary, I appreciate your patience here but this is exactly what I don't understand and what I wanted to make sure of before I tried doing a new config (because of the big scary warning on the new config page about how it will make it impossible to rebuild a failed drive, etc.):  How do I include the failed Disk 4 in the new config if unRAID doesn't see that there is any Disk 4 installed?  Are you telling me in the new config to just leave that slot empty (or as Disk 4 appears on the main tab now as "Not Installed / Unassigned")?  Or that it should reappear there as "missing"?

 

Sorry, not trying to be dense here.  I was thinking there must be some way to restore Disk 4 to "missing" before doing the new config.  Otherwise how does new config not just confirm that there is no Disk 4 and permanently scotch any chance of recovering its data?

 

However, you earlier noted that you had the configuration "exactly as it was before" ...

... and yet that clearly wasn't the case, since you couldn't "see" the emulated drive.

 

So somewhere along the way you modified the configuration ...

 

No, just to be clear on this: Disk 4 was still showing as "missing" right up until the second I started the array with all disks restored to their previous configuration but with Disk 7 unmountable (unbeknownst to me).  Ironically, as I understand it now, I could have just done a new config then with only the 6TB drives and I would have been fine.  In any case, I didn't modify the config; unRAID modified it for me when it registered the array starting with not just 1 but 2 missing or unmountable drives.  Again, I just wish I could have had some warning of that before starting the array.  Otherwise, though, I haven't written to the array or made any other changes since Disk 4 failed.  In fact I've been too paranoid to even turn the array on since then (rightly so, given how it turned out the only time I did).

 

Anyway, please confirm on my question on the new config.  If the answer is that I'm already hosed if I don't see Disk 4 reappear there as "missing", and if there's really nothing else I can do either, then I'll still need to know next steps from here (a parity check to rebuild parity for the remaining drives? etc).  Right after I go for a very long bike ride to work out some of my frustration.

 

Thanks again for your help btw. 


Starting the array did not do anything to prevent your recovery. The thing that may have compromised your parity is what you did with the disk in another computer. The idea that one empty drive is just like another is where the fallacy lies. You formatted the drive twice using a different file system each time. An empty file system is not bit-for-bit identical to a completely zeroed drive, and whether the effects of the NTFS file system you created were completely undone when you subsequently formatted the drive as XFS is unknown.

 

When you do a new config, you will be assigning all drives including parity and the problem drives, and you will also at the same time tell unRAID that your parity is valid. It may not be, but if you don't tell it that it will completely rebuild parity.

 

Then you will try to rebuild the drive and hope for the best. Possibly there will still be some more work to do after that to try to recover files from the disk because of the formatting you did.

 

I am only posting here to help clarify the situation. Please wait for gary to respond before proceeding since he has been following this more closely than I.

Link to comment

... How do I include the failed Disk 4 in the new config if unRAID doesn't see that there is any Disk 4 installed?

 

You have to actually install the disk -- THEN, after you've started the array and UnRAID has "seen" it, you can Stop the array and unassign it ... so it shows as missing.

 

As trurl noted above, however, and as I've mentioned before, it's not at all clear that you haven't already done things that make this unrecoverable.    It won't hurt to try --  but anywhere the disks differ from what they held when parity was computed, you'll get corrupted data  on the rebuilt/emulated disk #4.

 



Disk 4 is installed, but after starting the array again unRAID still doesn't see it.  At least not according to the web GUI.  Under Array Devices it identifies Disk 4 as "Not installed" and "Unmountable" (though it correctly identifies it as a 6TB drive with XFS as the filesystem).  Also, under Array Operation it shows "Unmountable disk present" and gives me the option to format Disk 4.  Should I do that, and then I'll be able to do the new config including Disk 4 and follow the other steps you outlined?

 

BTW I don't know if it's relevant but on the console after starting the array I had the following appear after my login prompt:

 

XFS (md4): Metadata CRC error detected at xfs_sb_read_verify+0xfa/0x106, block 0xffffffffffffff

XFS (md4): Unmount and run xfs_repair

XFS (md4): First 64 bytes of corrupted metadata buffer:

... and then 4 lines starting ffff88020570b000: (-b010, -b020, -b030) with long strings of numbers and letters in pairs after that...

 

Just thought I'd mention that in case it tells you something you didn't already know.

 

Thanks again to you and to trurl as well.  I appreciate the clarifications and all the help I can get on this.



Whatever you do DON'T format the drive!

 

The stuff on the console is telling you why it is unmountable.
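For reference — and not something to run until the rebuild plan is settled — the check that the console message asks for can be done in no-modify mode first, so nothing on the disk changes. In unRAID this is typically done with the array started in maintenance mode so parity stays in sync; /dev/md4 is the parity-protected device the "XFS (md4)" messages refer to.

```shell
# -n: no-modify mode -- report filesystem problems, change NOTHING.
xfs_repair -n /dev/md4

# Only once a rebuild target is decided would an actual repair run:
#   xfs_repair /dev/md4
# xfs_repair may refuse and ask for -L (zero the metadata log), which
# discards in-flight metadata updates -- that's a last resort.
```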

 

Do you have another (new or old) drive that you could use to rebuild disk 4 onto? That would allow you to set the original disk 4 aside and not alter it in any way and use another disk to try to recover what you can from a rebuild, and then if anything is still missing you could try to recover that from the original disk 4.


Just to be sure, did you do a New Config and assign disk #4 as part of that config?

 

... if so, and if it still shows as unformatted, then there's nothing more you can do about rebuilding the disk at this point.  You CAN attempt recovery via the Linux file repair tools and/or via a Linux Reader under Windows ... but if it can't be recovered, then you're out of luck short of possibly sending it off for professional data recovery ($500 or more).
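As a hedged sketch of the "Linux file repair tools" route: the safest first step on any Linux machine is a read-only mount, so nothing on the damaged disk is altered while you copy files off. Here /dev/sdX1 and the destination path are placeholders for wherever the original disk 4 and a spare disk show up.

```shell
# Sketch only -- /dev/sdX1 is a placeholder for the original disk 4's
# data partition on whatever Linux box it's attached to.
mkdir -p /mnt/rescue

# ro + norecovery: read-only, and don't replay the XFS metadata log,
# so the damaged filesystem is not modified at all by the mount.
mount -t xfs -o ro,norecovery /dev/sdX1 /mnt/rescue

# Copy out whatever is readable (destination is a placeholder too).
cp -a /mnt/rescue/. /path/to/spare/disk/
```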

 


Archived

This topic is now archived and is closed to further replies.
