DougG Posted January 31, 2019
Hello all, hope you can help. I had a disk (disk1) fail. I removed it and purchased a new one (WD Red WD30EFRX). Everything is running fine in emulated mode on disk1, but when I put the new drive in and assign it, it starts the rebuild and gets errors within seconds. Thinking it was a bad drive, I purchased another one (same model) and put that in. Same thing: the drive goes to Disabled/Emulated. I then moved the disk to a different SATA port and assigned it to disk1. Same thing happens. I'm at a loss as to how to get the drive back to "good". Attached is my diagnostics report. Thanks in advance. dransik-diagnostics-20190130-2131.zip
trurl Posted January 31, 2019
Disk1 isn't responding. Check the connections, SATA and power, at both ends. There is probably nothing wrong with any of the disks; bad connections are much more common than bad disks. If you had asked to begin with, you might have saved yourself from buying two disks.
JorgeB Posted January 31, 2019
It could also be the controller; Marvell controllers are known to drop disks for no apparent reason.
Stan464 Posted February 1, 2019
Seems to be a good few threads with disk1 dropping offline. Granted, all could be the same cable issue.
trurl Posted February 1, 2019
1 hour ago, Stan464 said: "Seems to be a good few threads with disk1 dropping offline."
If you are referring to that specific disk number, I can think of some reasons why disk1 might have a slight "advantage" in the statistics. More people have a disk1 than a disk10, for one example. And when people are first starting out and haven't worked the problems out of their builds, disk1 is the disk most often written to under the default settings, for another. Having said that, I have read a large number of threads and haven't noticed an obvious "advantage" for disk1, though for the reasons above, and probably others, I would expect there to be one.
Stan464 Posted February 1, 2019
1 hour ago, trurl said: "I can think of some reasons why disk1 might have a slight 'advantage' in the statistics."
That actually makes a fair bit of sense! I hadn't thought of it that way; it just seems that the top-most disk is usually the one that fails, regardless of how many are already installed.
DougG Posted February 2, 2019
Appreciate the input. I tried a second SATA port on the motherboard, then one on the expansion card. The system had been running fine for a year. I pre-cleared the drive again, swapped SATA cables, and checked that the connections are solid. Everything seems fine before I add the drive to the array, but as soon as I assign it back to disk1, it fails within minutes.
trurl Posted February 2, 2019
A rebuild is a bigger power draw than anything else your server does. Maybe your PSU is underpowered or going bad.
JorgeB Posted February 2, 2019
Please post new diags; the previous ones didn't have a SMART report for disk1 since it had already dropped offline.
DougG Posted February 3, 2019
Interesting. I took a second computer's power supply and moved three of the hard drives over to it, and it looks like that might work: the rebuild has been running for about 5 minutes now, where before it would fail after 10-20 seconds. The rebuild must draw more power than a pre-clear. I bet my original drive is still good too; I guess it can become a second parity drive. I'll keep you updated on its success or failure, but it's looking a lot better already.
itimpi Posted February 3, 2019
1 minute ago, DougG said: "The rebuild must draw more power than a pre-clear."
A rebuild runs all drives at once, whereas a pre-clear is confined to a single drive. As a result, a rebuild is the most power-consuming operation you can run on an Unraid system.
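A toy illustration of why a rebuild has to read every remaining drive at once: with single (XOR) parity, the scheme Unraid's first parity drive uses, a missing disk's contents can only be recomputed byte-by-byte from the parity drive plus every surviving data drive. This is a simplified sketch, not Unraid's actual code:

```python
from functools import reduce

# Toy model of single-parity reconstruction: parity is the byte-wise
# XOR of every data disk's contents at the same offset.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_disks = [b"\x11\x22\x33", b"\x44\x55\x66", b"\x77\x88\x99"]
parity = xor_blocks(data_disks)

# Disk index 1 "fails": rebuilding it requires reading parity AND every
# surviving data disk -- which is why all drives spin during a rebuild.
survivors = [d for i, d in enumerate(data_disks) if i != 1]
rebuilt = xor_blocks(survivors + [parity])
print(rebuilt == data_disks[1])  # → True
```

Because XOR is its own inverse, XOR-ing the parity block with all surviving disks cancels everything except the missing disk's data.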
DougG Posted February 3, 2019
Thanks for all your input; the drive has rebuilt and everything is back to normal. Now I need to purchase a PSU that will support 9+ drives. I've been doing some research: many say 350 W is big enough, and I have a 250 W unit now. I do not think I will ever go above 10 drives. I have two questions: 1) How much benefit do you actually get from two parity drives? 2) How should I size a PSU for an approximately 10-drive system (2 of the drives are SSDs)? Again, thanks for all your help.
itimpi Posted February 3, 2019
1) Having two parity drives allows you to recover from two simultaneous drive failures without data loss. In particular, it covers a second drive failing while you are already in the middle of rebuilding the first. It is up to you to decide how likely that is and what level of risk you are prepared to live with.
2) You need to size the power supply for the worst-case load, even though most of the time the system will run well below it. The worst case tends to be initial spin-up or a parity check. I tend to assume something like 20-25 watts per drive with modern drives (the exact figure depends on your drive models). By the time you allow for the motherboard, graphics, etc., I would look for something around a 450-500 watt PSU to give yourself a little headroom, although you may get away with a little less.
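itimpi's per-drive estimate can be turned into a quick back-of-the-envelope calculation. All of the wattage figures below (25 W per spinning disk at spin-up/parity-check load, 5 W per SSD, 60 W for motherboard/CPU/fans, 30% headroom) are illustrative assumptions, not measurements of this build; a beefier base system or add-in cards pushes the result toward the 450-500 W figure:

```python
# Rough PSU sizing sketch for an Unraid-style array.
# All wattage figures are illustrative assumptions, not measurements.

def recommended_psu_watts(hdd_count, ssd_count,
                          watts_per_hdd=25,   # worst case: spin-up / parity check
                          watts_per_ssd=5,    # SSDs draw far less than spinning disks
                          base_system=60,     # motherboard, CPU, RAM, fans
                          headroom=0.30):     # margin so the PSU never runs at its limit
    peak = hdd_count * watts_per_hdd + ssd_count * watts_per_ssd + base_system
    return peak * (1 + headroom)

# DougG's planned build: ~10 drives, 2 of them SSDs.
print(round(recommended_psu_watts(hdd_count=8, ssd_count=2)))  # → 351
```

Even with these conservative per-component numbers, the 250 W unit in the original build sits well under the estimated worst-case draw.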
DougG Posted February 3, 2019
Appreciate all your help on this. After the drive successfully mounted and rebuilt, it went into a read-only state; I had to run xfs_repair, but now everything is checking out. I ordered a new PSU, which seems like it will solve all my issues.