Failed Drive 1


DougG


Hello all, hope you can help.

 

I had a disk (disk 1) fail. I removed it and purchased a new one (WD Red WD30EFRX). Everything is running fine, with disk 1 emulated. When I put the new drive in and assign it, the rebuild starts and gets errors within seconds. Thinking it was a bad drive, I purchased another of the same model and put that one in. Same thing.

The drive goes to Disabled/Emulated. I then moved the drive to a different SATA port and assigned it to disk 1. The same thing happens. I'm at a loss as to how to get the drive back to "good".

Attached is my diagnostics report.

 

Thanks in advance.

dransik-diagnostics-20190130-2131.zip

1 hour ago, Stan464 said:

There seem to be a good few threads with disk 1 dropping offline. Granted, they could all be the same cable issue.

If you are referring to the specific disk number, I can easily think of some reasons why disk 1 might have a slight "advantage" in the statistics. More people are going to have a disk 1 than a disk 10, for one example. For another, when people are first starting out and haven't worked out the problems in their builds, disk 1 is the disk most often written to under the default settings.

 

Having said that, I have read a large number of threads and haven't noticed an obvious "advantage" for disk 1. But for the reasons I give, and probably others, I would expect one to exist.

1 hour ago, trurl said:

If you are referring to the specific disk number, I can easily think of some reasons why disk 1 might have a slight "advantage" in the statistics. More people are going to have a disk 1 than a disk 10, for one example. For another, when people are first starting out and haven't worked out the problems in their builds, disk 1 is the disk most often written to under the default settings.

 

Having said that, I have read a large number of threads and haven't noticed an obvious "advantage" for disk 1. But for the reasons I give, and probably others, I would expect one to exist.

 

 

That actually makes a fair bit of sense!

I hadn't thought of it that way! Disk 1 just usually seems to be the topmost failed disk, regardless of how many are installed.


Appreciate the input.

I tried a 2nd SATA port on the motherboard, then one on the expansion card. The system has been running fine for a year. I just pre-cleared the drive again, swapped SATA cables, and checked that the connections are solid. Everything seems fine before I add the drive to the array, but as soon as I assign it back to disk 1, it fails within minutes.


Interesting. I have taken a second computer's power supply and moved 3 hard drives over to it. It looks like it might work: it has been rebuilding the drive for about 5 minutes now, where before it would fail after 10-20 seconds.

 

The rebuild must draw more power than a pre-clear. I bet my original drive is still good too; I guess it can become a 2nd parity drive.

 

I'll keep you updated on its success/failure.  But it's looking a lot better already.

1 minute ago, DougG said:

Interesting. I have taken a second computer's power supply and moved 3 hard drives over to it. It looks like it might work: it has been rebuilding the drive for about 5 minutes now, where before it would fail after 10-20 seconds.

 

The rebuild must draw more power than a pre-clear. I bet my original drive is still good too; I guess it can become a 2nd parity drive.

 

I'll keep you updated on its success/failure.  But it's looking a lot better already.

A rebuild runs all of the drives at once, whereas a pre-clear is confined to a single drive. As a result, a rebuild is about the most power-consuming operation you can run on an Unraid system.
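
To put rough numbers on that, here is a back-of-the-envelope sketch in Python. Every wattage figure is an assumption pulled from typical 3.5" NAS-drive spec sheets, not a measurement of any particular model, so treat the output as an illustration of the shape of the problem rather than a sizing tool.

# Rough comparison of array power draw: pre-clear vs. rebuild vs. boot.
# All wattage figures are assumptions based on typical 3.5" NAS-drive
# spec sheets, not measurements of any specific model.

ACTIVE_W = 6.0   # assumed per-drive draw while reading/writing
IDLE_W = 3.0     # assumed per-drive draw while spun up but idle
SPINUP_W = 20.0  # assumed per-drive peak draw during spin-up

def preclear_draw(n_drives: int) -> float:
    """One drive active, the rest idle."""
    return ACTIVE_W + (n_drives - 1) * IDLE_W

def rebuild_draw(n_drives: int) -> float:
    """Every drive reading (or writing) at the same time."""
    return n_drives * ACTIVE_W

def spinup_surge(n_drives: int) -> float:
    """Worst case: all drives spinning up together at power-on."""
    return n_drives * SPINUP_W

n = 9
print(f"pre-clear: ~{preclear_draw(n):.0f} W")  # ~30 W
print(f"rebuild:   ~{rebuild_draw(n):.0f} W")   # ~54 W
print(f"spin-up:   ~{spinup_surge(n):.0f} W")   # ~180 W, mostly on the 12 V rail

The steady-state difference looks modest, but a marginal or aging PSU that can just barely hold one active drive plus idle spindles can sag the moment every drive starts seeking at once, which matches a rebuild failing within seconds while a pre-clear completes fine.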


Thanks for all your input. The drive has rebuilt and everything is back to normal. Now I need to purchase a PSU that can handle 9+ drives. I've been doing some research, and many say 350 W is big enough; I have a 250 W unit now. I do not think I will ever go above 10 drives.

 

I have 2 questions. 

1) How much benefit do you actually get from 2 parity drives?

2) How should I size a PSU for an approximately 10-drive system? (2 of the drives are SSDs)

 

Again, thanks for all your help.


1) Having 2 parity drives allows you to recover from 2 simultaneous drive failures without data loss. In particular, it covers a second drive failing while you are already in the middle of rebuilding the first. It is up to you to decide how likely that actually is and what level of risk you are prepared to live with.
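
If it helps to frame that decision, here is a toy estimate (Python) of the specific risk dual parity covers: a second drive failing during the rebuild window. The annual failure rate and rebuild time are assumed values for illustration, and the model treats failures as independent, which drives sharing heat, power, and age often violate, so the real risk is likely higher.

# Toy estimate: chance a second drive fails while the first rebuilds.
# The AFR and rebuild-time figures are assumptions for illustration only.

AFR = 0.03            # assumed annual failure rate per drive (3%)
REBUILD_HOURS = 12.0  # assumed time to rebuild one failed drive
HOURS_PER_YEAR = 24 * 365

def second_failure_risk(surviving_drives: int) -> float:
    """P(at least one surviving drive fails during the rebuild window),
    assuming independent failures at a constant rate."""
    p_one = AFR * REBUILD_HOURS / HOURS_PER_YEAR  # per-drive risk in the window
    return 1 - (1 - p_one) ** surviving_drives

# 9-drive array, one already failed, 8 still at risk during the rebuild:
print(f"{second_failure_risk(8):.3%}")  # ~0.033% under these assumptions

Small on paper, but a rebuild stresses every drive at once, and with single parity an unreadable sector on any surviving disk during the rebuild also costs data, so many people with larger arrays decide the second parity drive is cheap insurance.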

 

2) You need to size the power supply for the worst-case load, even though most of the time the system will run well below it. The worst case tends to be initial spin-up at power-on or a parity check. I tend to assume something like 20-25 watts per drive as reasonable for modern drives (the exact figure depends on your drive models). Once you allow for the motherboard, graphics, etc., I would look for something around a 450-500 watt PSU to give yourself a little headroom, although you may get away with a little less.
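
As a worked example of that arithmetic, here is a short Python sketch. All of the component wattages are assumptions in line with the per-drive figure above; check your actual motherboard, CPU, and drive datasheets before buying.

# Rough PSU sizing for a ~10-drive Unraid box. Every figure is an
# assumed worst-case number, not a spec for any particular hardware.

HDD_PEAK_W = 25    # per spinning drive, worst case (spin-up / parity check)
SSD_PEAK_W = 5     # SSDs draw far less and have no spin-up surge
BOARD_CPU_W = 100  # motherboard + modest CPU + RAM, assumed
FANS_MISC_W = 20   # fans, expansion card, USB, assumed
HEADROOM = 1.3     # ~30% margin so the PSU never runs at its limit

def psu_watts(n_hdd: int, n_ssd: int) -> float:
    """Suggested minimum PSU rating for the given drive mix."""
    load = (n_hdd * HDD_PEAK_W + n_ssd * SSD_PEAK_W
            + BOARD_CPU_W + FANS_MISC_W)
    return load * HEADROOM

# 8 spinners + 2 SSDs, as described above:
print(f"suggested minimum: ~{psu_watts(8, 2):.0f} W")  # ~429 W

That lands in the same 450-500 W range, and it also shows why the 250 W unit struggled: 9 spinners alone can pull over 200 W at spin-up before anything else is counted.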

