doron Posted November 17, 2019
This has been discussed in other threads, e.g. here, but I didn't find an entry in Feature Requests, so here goes.
Unraid is not spinning down SAS drives. It appears to try, and the GUI indicates that the drive is spun down (grey ball, temperature not shown), but in reality these drives keep spinning, staying warm and drawing full power, 24x7.
The problem seems to be that hdparm, which is used to spin down drives, does not affect SAS drives. A solution might be to use the sg_start command (I haven't tested this thoroughly, but it seems to be doing the right thing):
sg_start -s /dev/sdX <== spin up
sg_start -S /dev/sdX <== spin down
(Unfortunately the above does not seem to do the right thing for SATA drives, so we'll either need conditional logic, or maybe just run both tools in sequence for each spindown/spinup operation.)
I'm sure adding SAS spindown capability will be met with massive gratitude from a lot of us.
EDIT 2020-09-20: A temporary stopgap solution is now available; you can get it here.
EDIT 2020-09-29: The temporary solution is now available as a plugin. You can install the plugin from this URL.
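For anyone wanting to script the conditional logic mentioned above, here's a rough sketch. The smartctl-based transport check is my own assumption of a workable test (SAS drives typically report a "Transport protocol: SAS" line from smartctl -i), not anything Unraid does internally:

```shell
#!/bin/bash
# Sketch: pick the right spin-down tool per drive based on its transport.
# Assumes smartctl (smartmontools) is installed.

transport_of() {
  # Print "sas" if smartctl reports a SAS transport, else "sata".
  if smartctl -i "$1" 2>/dev/null | grep -q '^Transport protocol:.*SAS'; then
    echo sas
  else
    echo sata
  fi
}

spindown_cmd() {
  # Map a transport type to the spin-down command we'd run for it.
  case "$1" in
    sas) echo "sg_start -S" ;;  # SCSI STOP UNIT for SAS drives
    *)   echo "hdparm -y"   ;;  # ATA standby-immediate for SATA drives
  esac
}

# Usage sketch: $(spindown_cmd "$(transport_of /dev/sdX)") /dev/sdX
```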
tah Posted November 27, 2019
This method really works!!!! Hope the devs make this a feature in 6.8. Maybe use a user script to solve the problem for now?
limetech Posted November 27, 2019
On 11/17/2019 at 9:51 AM, doron said: Unraid is not spinning down SAS drives. [...] A solution might be to use the sg_start command [...]
Nice! Somehow I never knew about that 'sg_start' command - indeed looks like it will do the trick. Thanks!
doron Posted November 28, 2019
On 11/27/2019 at 4:45 PM, tah said: Maybe use some user script to solve the problem for now?
That might be a bit tricky. I thought about wrapping hdparm as a stopgap hack, but for most spinup/spindown actions Unraid does not call the userland hdparm program - it uses its kernel code (and /proc/mdcmd) to issue the relevant ATA command to the drive. Front-ending that interface would be a more intrusive hack. Also, there doesn't seem to be an "Event" script upcall for spinup/spindown events (there's a thought...). Let's hope @limetech adds this capability to core Unraid Soon.
limetech Posted November 28, 2019
4 minutes ago, doron said: Soon
Hey, we have that trademarked!
doron Posted November 28, 2019
Just now, limetech said: Hey we have that trademarked!
You sure do. I did capitalize, but you're right - it should have been Soon™. Mea culpa.
jvdivx Posted December 2, 2019
Sorry, I didn't see this thread and asked for the same thing here: https://forums.unraid.net/topic/85673-sas-disks-spin-down-68-rc7/
I have installed version 6.8.0-rc7, which has support for sdparm. Of the 27 disks in my array, 4 are SAS disks, but I can't spin them down. I tested the sg_start command on both 6.7.2 and 6.8-rc7 without satisfactory results: the command executes without errors, but the disks do not spin down.
If needed, I can help as a tester for your developments. Thanks.
doron Posted December 2, 2019
1 minute ago, jvdivx said: I was testing the sg_start command in both version 6.7.2 and 6.8-rc7, without satisfactory results.
Interesting. When you say the disk did not spin down - how did you verify that it hasn't actually spun down?
jvdivx Posted December 2, 2019
51 minutes ago, doron said: Interesting. When you say the disk did not spin down - how did you verify that it hasn't actually spun down?
Because the disk LED on the server does not go out; I also see the disk as active in Grafana.
SYSLOG:
Spies Posted December 2, 2019
You could use this in User Scripts to spin them down automatically: https://gist.github.com/viljoviitanen/4570091
Spies Posted December 2, 2019
I must add, this doesn't seem to work for me. On typing the command sg_start --stop /dev/sdg I get the following in the log:
Dec 2 15:55:31 Tower kernel: sd 13:0:2:0: [sdg] Spinning up disk...
Dec 2 15:55:42 Tower kernel: ...........ready
Dec 2 15:55:42 Tower kernel: sdg: sdg1
jvdivx Posted December 2, 2019
1 hour ago, Spies said: I must add, this doesn't seem to work for me [...]
The unsuccessful result of launching the command on my system is the following:
sg_start --stop /dev/sdaa
Dec 2 17:56:11 jvdivx-unraid kernel: sd 2:1:44:0: [sdaa] Spinning up disk...
Dec 2 17:56:26 jvdivx-unraid kernel: .ready
Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: detected capacity change from 10000831348736 to 0
Dec 2 17:56:26 jvdivx-unraid kernel: sd 2:1:44:0: [sdaa] 2441609216 4096-byte logical blocks: (10.0 TB/9.10 TiB)
Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: detected capacity change from 0 to 10000831348736
Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: sdaa1
Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: sdaa1
limetech Posted December 2, 2019
30 minutes ago, jvdivx said: The unsuccessful result of launching the command on my system is the following:
Try adding the -r option, which means 'readonly'. SCSI is a PITA - you can see why there hasn't been much appetite on my part to fix this.
<rant> Why anyone would want to use SAS drives is beyond me - they are more expensive for nothing. The only reason I can think of is that someone got a "deal" on a bunch of them and now they want to use them (the reason there is a "deal" is because there is no demand for this tech). </rant>
Anyway, we might eventually get this working. Our issue is that we have a single SAS hard drive, and that one appears to not work correctly, meaning constant timeouts, etc. Not much appetite to go purchase a bunch of SAS gear either. The first phase of working on this will be to prevent Unraid from attempting spin-down on SAS drives - that will eliminate the syslog messages, anyway.
<honest> As you might imagine, we have much bigger things to work on than trying to get SAS working. </honest>
jvdivx Posted December 2, 2019
2 hours ago, limetech said: Try adding -r option, this means 'readonly'. [...]
With -r I got the same result. I planned to change the SATA drives to SAS drives - much more reliable and faster.
JonathanM Posted December 2, 2019
1 minute ago, jvdivx said: I planned to change the SATA drives to SAS drives. Much more reliable and faster.
Source?
jvdivx Posted December 2, 2019
17 minutes ago, jonathanm said: Source?
Among other things, because I have system heating problems, and these disks withstand higher temperatures - they can work at more than 80º. In addition, the life of these disks is much longer, and they are built for 24/7 operation. Here is a source comparing SATA and SAS disks: https://www.diffen.com/difference/SATA_vs_Serial_Attached_SCSI
JonathanM Posted December 2, 2019
3 minutes ago, jvdivx said: I put a source where you compare SATA and SAS disks
https://www.backblaze.com/blog/enterprise-drive-reliability/
doron Posted December 2, 2019
6 hours ago, limetech said: SCSI is PITA - you can see why there hasn't been much appetite on my part to fix this.
Yes, understood. There are good reasons to prefer SAS over SATA, although admittedly most of them(*) reside in the ballpark of enterprise computing, as opposed to home / SOHO, where I'm guessing most of the Unraid install base lives. (I'm a diehard veteran of the former, yet my Unraid is the latter, ergo...) I'll concede that in retrospect, I would have been better off if my 12TB HGSTs were SATA - if not for any other reason, then for this spindown one.
(*) Much larger and more complex drive topologies, including multihomed and multi-tier connections and enclosures, faster bus transfer speeds (12Gb/s on SAS3), more reliable performance in the presence of those complex configurations, and more. Again - most or all reside in the enterprise computing realm.
Quote: First phase of working on this will be to prevent Unraid from attempting spin-down on SAS drives - that will eliminate the syslog messages anyway.
That's a good start, yes. Thanks! It is well understood and appreciated that you have bigger fish to fry at this time. Hence:
<plea> Would you consider adding, on top of the above (not attempting to spin them down), an upcall hook - in the spirit of the EVENT script calls - for all spindown/spinup actions? This would allow those of us who are knee deep in this (and paying the electricity bills...) to script this up (currently this is Hard™, as the action takes place in kernel code). I will definitely take a stab - others might too. Once this is (hopefully) brushed up, you could consider adopting it into the core product. Does this make sense? </plea>
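To make the plea concrete, here is a sketch of what such a hook script could look like on our side, assuming Unraid were to invoke it with an action and a device. The path and calling convention are entirely hypothetical; no such upcall exists today.

```shell
#!/bin/bash
# Hypothetical hook: imagine Unraid invoking this as
#   disk_event.sh <spinup|spindown> /dev/sdX
# on every spin-state change. This is the shape of what's being
# requested, not anything Unraid actually does.

hook_cmd() {
  # Map the event to the SAS command we'd issue (sg_start, as above).
  case "$1" in
    spindown) echo "sg_start -S" ;;
    spinup)   echo "sg_start -s" ;;
    *)        return 1 ;;
  esac
}

main() {
  local cmd
  cmd=$(hook_cmd "$1") || { echo "usage: $0 <spinup|spindown> /dev/sdX" >&2; return 1; }
  $cmd "$2"
}

# main "$@"   # left commented out so sourcing this sketch has no side effects
```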
Spies Posted December 2, 2019
It's my understanding that SAS drives have better fault tolerance and can correct on the fly, much like ECC memory. My case is that they were pulled from enterprise kit and they are 3TB drives, so I'm not just going to bin them 😉
limetech Posted December 2, 2019
3 minutes ago, Spies said: correct on the fly
All HDDs do this.
Spies Posted December 2, 2019
6 minutes ago, limetech said: All HDDs do this.
https://www.google.com/amp/s/www.techrepublic.com/google-amp/blog/data-center/how-sas-near-line-nl-sas-and-sata-disks-compare/
"In reliability, SAS disks are an order of magnitude safer than either NL-SAS or SATA disks. This metric is measured in bit error rate (BER), or how often bit errors may occur on the media. With SAS disks, the BER is generally 1 in 10^16 bits. Read differently, that means you may see one bit error out of every 10,000,000,000,000,000 (10 quadrillion) bits. By comparison, SATA disks have a BER of 1 in 10^15 (1,000,000,000,000,000 or 1 quadrillion). Although this does make it seem that SATA disks are pretty reliable, when it comes to absolute data protection, that factor of 10 can be a big deal."
But admittedly, for what we use Unraid for, it's completely unnecessary. Send me 3x 3TB SATA and I'll gladly send you these SAS drives 😂
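To put the quoted BER figures in perspective, here is a quick back-of-the-envelope conversion (my own arithmetic, not from the article) of "1 error in 10^N bits" into terabytes read per expected bit error:

```shell
#!/bin/bash
# Convert a bit error rate of 1 in 10^N bits into the amount of data
# you'd read, on average, before hitting one unrecoverable bit error.

tb_per_error() {
  # $1 = BER exponent N; prints decimal terabytes (10^12 bytes).
  echo $(( 10 ** $1 / 8 / 10 ** 12 ))
}

# tb_per_error 16  -> 1250 (1.25 PB per expected error, the SAS figure)
# tb_per_error 15  -> 125  (125 TB per expected error, the SATA figure)
```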
limetech Posted December 2, 2019
4 minutes ago, Spies said: https://www.google.com/amp/s/www.techrepublic.com/google-amp/blog/data-center/how-sas-near-line-nl-sas-and-sata-disks-compare/
Yeah, a lot of marketing in there. Maybe times have changed, but "back in the day" when I was more directly involved in HDD tech, ATA/SCSI and later SATA/SAS HDA assemblies were all made in the same factories, of which there are only a handful in the world. The main differences had to do with electronics: how large a RAM buffer, whether the device was dual-ported, etc. Maybe they screen the HDAs somehow to pick out the "best" ones (I doubt it), but the BER mainly relates to the length and cost of the warranty. The extra cost for "enterprise-class" devices made up for warranty replacement and then some. That said, it's very possible I've become way too cynical and indeed there are measurable differences that justify the $high price tag.
doron Posted December 2, 2019
17 minutes ago, Spies said: https://www.google.com/amp/s/www.techrepublic.com/google-amp/blog/data-center/how-sas-near-line-nl-sas-and-sata-disks-compare/
I believe that article is both quite dated (2/2012) and quite inaccurate. It compares things that should not really be compared; the SAS drives considered are the 10K RPM and 15K RPM little beasts, which are indeed very different animals from the 7.2K RPM spindles in terms of actual drive technology. But this has little to do with SAS per se: they were manufactured only with SAS interfaces, simply because no home user would spend the $$$$ for these ultrafast, yet relatively lower capacity, drives. Before SSDs became the rage, those critters were your tier-1 storage of choice (e.g. cache). When you compare apples to apples, e.g. 7.2K RPM 12TB enterprise-level drives, the difference is only in the attached electronics, as @limetech said. You can actually buy the same drive and select your electronics. Case in point: check out this datasheet, and see the bottom for ordering options. As mentioned above, the main differences lie in the bus protocol performance and the extremely different configuration and topology options - not the performance or reliability of the actual spindle.
GairyS Posted February 6, 2020
On 12/2/2019 at 3:42 PM, doron said: Would you consider adding [...] an upcall hook - in the spirit of the EVENT script calls - for all spindown/spinup actions? [...]
I am also hunting for this fix, and would be happy to test. I run unRAID on 2 Dell R515s with LSI cards and SAS disks (4TB). I'm in the process of moving one of my servers to an HPE DL380 G9 with an LSI card attached to a Supermicro 24-disk JBOD with IPMI. I'd love to be able to spin my disks down when not in use. I have SAS in my environment because we have a blanket policy that no disks live in our environment after 3 years, which leaves me with a lot of older spares. I normally do a 7-pass write before adding them to unRAID and running preclear, so they are thoroughly tested before I use them - and they're free, so I can't pass that up...
GairyS Posted February 6, 2020
On 12/2/2019 at 12:43 PM, limetech said: Anyway eventually might get this working. [...] Not much appetite to go purchase a bunch of SAS gear either.
....if someone were so inclined, perhaps an upgrade license from Plus to Pro could result in a few 4TB SAS drives arriving for testing....