dodgeman Posted January 8, 2022

I have an array of 22 drives with 2 parity drives. I've been unable to replace drive 10 without it being disabled, sometimes seconds and sometimes hours after the start of a rebuild. I've run the xfs repair on the drive after it reported errors and tried again, and I've run a 3-pass clear and then tried to add the drive. I've tried 3 new WD Red drives and all have failed; I then checked them on another system without any problems being reported. Attached a diag file: mhs-diagnostics-20220108-1347.zip
Squid Posted January 8, 2022

It really looks like a cabling issue. SATA isn't known for being a great cabling solution, and if you're not using a hotswap chassis, a non-locking cable works far better on WD drives than the locking variety (due to how WD designs the shroud and how the majority of cable manufacturers ignore the specifications). Also reseat the power connector.
dodgeman Posted January 8, 2022

Using a Norco 24-bay, and I've tried moving the drives in and out of the swappable bays to try to eliminate a bad bay or cable. I've also tried different disks on the motherboard SATA controller vs the LSI controller and expander. I'll try replacing the power cable on the backplane(s).
trurl Posted January 8, 2022

With so many disks, you might also consider the PSU itself.
dodgeman Posted January 8, 2022

It's a 700 W PSU drawing only 260 W, peaking at 300 W, with the array on 3 separate legs, so I'm fairly sure that is not the issue, but I can check the amperage on each leg. Thanks for the ideas; I'm guessing at this one.
Frank1940 Posted January 9, 2022

16 hours ago, dodgeman said: "It's a 700 W PSU drawing only 260 W, peaking at 300 W"

One problem is that hard drives require considerably more current on start-up on the +12V bus than they do once the drive spindle approaches its rated rotational speed. At one time, this startup current surge was said to be 3 amperes per drive. (Today, I would suspect it is about 2 amperes per drive.) However, understand that all PSUs monitor all of their buses for current, and if any bus exceeds its rating, action will be taken within a millisecond at most.

Also look at PSU ratings. You will quickly find that there is a lot of 'specmanship' going on. Often the wattage rating on the +12V bus will be the wattage rating of the total PSU, which allows nothing for the +5V for the CPU and fans. Plus, the +12V bus is often actually two buses, with one intended for the GPU.

Looking at your idle-state power, you could be requiring over 800 W at startup: (24 × 2 A × 12 V) + 260 W = 836 W.
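To make that arithmetic concrete, here is a minimal sketch of the startup power budget, assuming the 2 A-per-drive spin-up figure and the 260 W idle draw quoted above (the constants are just this thread's numbers, not measured values):

```python
# Rough startup power estimate for a 24-bay array, following the
# figures in this thread. The 2 A per-drive spin-up surge is an
# assumption; the 260 W idle draw is dodgeman's reported number.

DRIVES = 24          # bays in the chassis
SURGE_AMPS = 2.0     # assumed +12V spin-up current per drive
RAIL_VOLTS = 12.0    # +12V bus
IDLE_WATTS = 260.0   # reported idle draw of the rest of the system

spinup_watts = DRIVES * SURGE_AMPS * RAIL_VOLTS   # 576 W on +12V alone
total_watts = spinup_watts + IDLE_WATTS           # ~836 W at startup

print(f"Spin-up surge on +12V: {spinup_watts:.0f} W "
      f"({DRIVES * SURGE_AMPS:.0f} A)")
print(f"Estimated total at startup: {total_watts:.0f} W")
```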
dodgeman Posted March 27, 2022

Frank, you were correct: it was the +12V. The old rail was 54 A, I moved to a new PSU with an 82 A rail, and the problem is gone. So I don't think the PSUs were delivering less than stated; I just think there was not enough on the 12V legs. Thanks
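The rail numbers bear this out. A back-of-the-envelope comparison, using the same assumed 2 A-per-drive spin-up surge as the sketch above:

```python
# Compare the two +12V rails mentioned above against the estimated
# spin-up surge: 54 A on the old PSU, 82 A on the replacement.

SPINUP_AMPS = 24 * 2.0  # ~48 A surge at startup (assumed 2 A/drive)

for rail_amps in (54, 82):
    headroom = rail_amps - SPINUP_AMPS
    print(f"{rail_amps} A rail: {rail_amps * 12:.0f} W capacity, "
          f"{headroom:.0f} A left for everything else on +12V")

# 54 A rail: 648 W, ~6 A headroom -> any extra load can trip the rail
# 82 A rail: 984 W, ~34 A headroom -> comfortable margin
```

On the old rail, drive spin-up alone consumed nearly the entire +12V budget, which is consistent with the drive dropping out at unpredictable points during a rebuild.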