Squnkraid Posted January 11, 2021

Hi, I've been using Unraid for almost a year now. It started off as a low-powered server with a couple of drives, for which I used 2.5" 5TB Seagate drives. It has since grown to 12 drives (dual parity), all of them 5TB Seagates. They run very quiet and are very power efficient.

This setup has been working fine, until I looked into the SMART data this weekend after I had to replace a bad cable (UDMA CRC error count message). There I noticed something strange: all the drives are set to never go into standby, but the Load Cycle Count is going through the roof, ranging from 80,000 to 175,000 load cycles on the oldest drives. The parity drives are getting hit the hardest. After watching all drives for three days, these are the results:

Both parity drives = ~72 load cycles per HOUR
All other drives = ~25 load cycles per HOUR

The drives are rated at 600,000 LCC, so I'm a bit concerned about longevity, and I'm not sure what I can do about this. Shouldn't setting the spindown time to NEVER prevent this sort of thing from happening? Any thoughts on how to proceed? The only thing I can think of is to set the spindown time to something like 15 min and see if that changes anything, but I can't imagine it would.
JorgeB Posted January 11, 2021

2.5" drives usually have much more aggressive power saving enabled. You can limit or disable that by lowering or disabling the APM level; you can do that with hdparm -B. It will need to be re-applied after every reboot with a user script or by using the go file.
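For illustration, a minimal sketch of what this looks like on the command line, assuming a drive at /dev/sdb (the device name is just an example):

# Query the current APM level without changing it
hdparm -B /dev/sdb

# Set APM to 254 (maximum performance, least aggressive power management)
hdparm -B 254 /dev/sdb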
Squnkraid Posted January 11, 2021 Author

Hi @JorgeB, thanks for your reply. I haven't dealt with "hdparm" before; I thought setting the spindown time to 'never' would also prevent the head parking. Guess I was wrong.

I've tried Google and this forum, but I can't seem to find all the information needed to apply this, just bits and pieces. These are the options that seem to be relevant:

1. hdparm -B254
2. hdparm -B255
3. hdparm -Z

Found this post in which all the options are listed:

Not sure if -Z will do anything, because here it didn't do anything:

But these are all posts from 2012... so I'm not sure if that information still applies today? And I think the order in which to try this is 3, 2, and as a last resort 1?

I'm also not sure how to apply this in a script. All I can find is that the command needs to be something like:

hdparm -B 255 /dev/xxx

In which 'xxx' stands for the particular drive, I assume? So in my case I need to add this line 12 times, once for each drive?
JorgeB Posted January 11, 2021

11 minutes ago, Squnkraid said:
In which 'xxx' stands for the particular drive, I assume?

Yes, hdparm -B 0 /dev/sdX to disable it, or hdparm -B 254 /dev/sdX for maximum performance; either should achieve the desired results.

12 minutes ago, Squnkraid said:
So in my case I need to add this line 12 times, once for each drive?

Yes, look at the User Scripts plugin, it's good for that.
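As a sketch, a script for the User Scripts plugin that applies the setting to several drives could look something like the following (the device names are placeholders, and a loop avoids repeating the command line twelve times):

#!/bin/bash
# Hypothetical example: apply the APM setting to each listed drive.
# Replace the placeholder device names with your own.
for d in sdb sdc sdd; do
  hdparm -B 254 "/dev/$d"
done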
Squnkraid Posted January 11, 2021 Author

25 minutes ago, JorgeB said:
Yes, hdparm -B 0 /dev/sdX to disable it, or

Is this the same as "hdparm -B 255 /dev/sdX"? Because I've read that that disables it too, and I haven't seen "-B 0" mentioned anywhere. And do you agree with the following: first try "hdparm -Z /dev/sdX" before trying "-B 255/0" and "-B 254"?
JorgeB Posted January 11, 2021

7 minutes ago, Squnkraid said:
Is this the same as "hdparm -B 255 /dev/sdX"?

Yes, it should be 255 to disable, not 0, my mistake.

8 minutes ago, Squnkraid said:
And do you agree with the following: first try "hdparm -Z /dev/sdX"

Never tried that, but it won't hurt.
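As a quick sanity check either way, hdparm -B with no value only queries the drive, so it can be used to confirm what a setting actually did (device name is again a placeholder):

# Query-only: reports the drive's current APM_level,
# which should read as disabled/off after hdparm -B 255
hdparm -B /dev/sdb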
Squnkraid Posted January 11, 2021 Author Share Posted January 11, 2021 (edited) @JorgeB So I ran the following script: #!/bin/bash hdparm -Z /dev/sdo hdparm -Z /dev/sdm hdparm -Z /dev/sdd hdparm -Z /dev/sde hdparm -Z /dev/sdc hdparm -Z /dev/sdb hdparm -Z /dev/sdn hdparm -Z /dev/sdl hdparm -Z /dev/sdf hdparm -Z /dev/sdh hdparm -Z /dev/sdj hdparm -Z /dev/sdk All drives, except sdk for some reason, return the following: disabling Seagate auto powersaving mode SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 04 53 40 00 21 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Drive sdk only reports disabling Seagate auto pwersaving mode? I'm not sure what this means. I found the following post regarding the last message: https://askubuntu.com/questions/768373/hard-drive-error-bad-missing-sense-data but can't really make heads or tales of it unfortunately. One answer mentions "sdparm" instead of "hdparm". Any thoughts on this? Edit: seems like sdparm is for SAS devices, so it's not that. Edited January 11, 2021 by Squnkraid Quote Link to comment
Squnkraid Posted January 12, 2021 Author

Changed the script to "hdparm -B 255 /dev/sdX" and it seems to work! Since running the script at boot this morning, the LCC numbers have stayed the same for all drives. Wish I had known about this from the start; 2020 has been a bad year for my drives as well, haha. Thanks for the help @JorgeB