[Plugin] Spin Down SAS Drives


doron


53 minutes ago, doron said:

Thanks for reporting! I'm really happy to hear that.

 

Would you mind running


/usr/local/emhttp/plugins/sas-spindown/sas-util

and sending me the resulting file (/tmp/sas-util-out)?

(when run without parameters it doesn't do anything intrusive; it just reports the HDD and controller models in JSON format)

You can pm me or post here (no sensitive info such as serial numbers etc. is shared). Thanks

Sure thing.

Doesn't look like it is showing the controller. I have Unraid running in a VM under Proxmox now. The controller is an LSI SAS9200-16e flashed to P20 IT mode.

Happy to help any way I can. Thanks again very very much!

 

sas-util-out

4 minutes ago, nlcjr said:

Doesn't look like it is showing the controller.

Sure is... Check out the output file (or see below). Thanks for your help!

 

      "controller": {
        "Slot": "01:00.0",
        "Class": "Serial Attached SCSI controller [0107]",
        "Vendor": "Broadcom / LSI [1000]",
        "Device": "SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] [0064]",
        "SVendor": "Broadcom / LSI [1000]",
        "SDevice": "9200-16e 6Gb/s SAS/SATA PCIe x8 External HBA [3030]",
        "Rev": "02"

5 hours ago, doron said:

Sure is... Check out the output file (or see below). Thanks for your help!

 

      "controller": {
        "Slot": "01:00.0",
        "Class": "Serial Attached SCSI controller [0107]",
        "Vendor": "Broadcom / LSI [1000]",
        "Device": "SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] [0064]",
        "SVendor": "Broadcom / LSI [1000]",
        "SDevice": "9200-16e 6Gb/s SAS/SATA PCIe x8 External HBA [3030]",
        "Rev": "02"

Not sure if you need this info or not. After the sas-util command, the drives didn't go back to standby, as Unraid still had them grey-balled. I started using some of the drives and those went to standby after the time set in Unraid (1hr). For the others, I just clicked on the grey ball in Unraid and they went green; after 1hr they also went to standby.

 

Working Great!

 

 


@doron @limetech

 

Additional changes have been added to smartctl 7.2 for SCSI drives: the -s option now complements the existing ATA setup.

 

This now supports setting the drive to standby.

 

-s standby,now spins down the drive.

-s standby,off spins up the drive.

 

-n standby has a bug, which I have logged. I am hoping this will be fixed before 7.2 is released. So far only SEAGATE drives are affected and get spun up. HGST and HITACHI seem OK for the ones I have. Ticket ref is [smartmontools] #1413: Some SCSI Drives spin up when using -n option.
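For context, a hedged example of how -n is meant to be used (this is standard smartctl behavior per its man page, not something from this thread): when the drive is already in the requested power state, smartctl skips the check and, by default, exits with status 2, so a polling script can avoid waking a sleeping disk.

# Hedged example: poll SMART health without waking a spun-down drive.
# With -n standby, smartctl skips the check and exits with status 2 if the
# drive is already in standby.
smartctl -n standby -H /dev/sdi
if [ $? -eq 2 ]; then
    echo "/dev/sdi is in standby; SMART poll skipped"
fi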

 

Example of -s output.

root@Tower:/usr/local/sbin# smartctl.r5131 -s standby,off /dev/sdi
smartctl 7.2 2020-12-15 r5131 [x86_64-linux-5.9.13-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Device placed in ACTIVE mode
SCSI device successfully opened

Use 'smartctl -a' (or '-x') to print SMART (and more) information

 

root@Tower:/usr/local/sbin# smartctl.r5131 -s standby,now /dev/sdi
smartctl 7.2 2020-12-15 r5131 [x86_64-linux-5.9.13-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Device placed in STANDBY mode
SCSI device successfully opened

Use 'smartctl -a' (or '-x') to print SMART (and more) information
 


Just letting you all know I installed this plugin on my Dell R510 chassis with a Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) controller.

 

It worked 100% on my system and spun down all six SAS drives immediately.

 

I am attaching my diagnostics file in case any other information would be handy.

cobblednas-diagnostics-20201221-1206_BEFOREPLUGIN.zip cobblednas-diagnostics-20201221-1606_AFTERPLUGIN.zip

Edited by xaositek
17 minutes ago, xaositek said:

Just letting you all know I installed this plugin on my Dell R510 chassis with a Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) controller.

 

It worked 100% on my system and spun down all six SAS drives immediately.

 

I am attaching my diagnostics file in case any other information would be handy.

cobblednas-diagnostics-20201221-1206_BEFOREPLUGIN.zip cobblednas-diagnostics-20201221-1606_AFTERPLUGIN.zip

Thanks for reporting - that's good to hear.

If you could also run:

/usr/local/emhttp/plugins/sas-spindown/sas-util

and send over the resulting file (pm me or post), that'd be great.


root@cobblednas:~# /usr/local/emhttp/plugins/sas-spindown/sas-util

 

SAS Spindown Utility (v20201217.01)

 

 

 

sdc | ST31000424SS | 1000:0072:1028:1f1e |  n/a |

sdd | ST1000NM0023 | 1000:0072:1028:1f1e |  n/a |

sde | ST31000424SS | 1000:0072:1028:1f1e |  n/a |

sdf | ST1000NM0023 | 1000:0072:1028:1f1e |  n/a |

sdg | ST31000424SS | 1000:0072:1028:1f1e |  n/a |

sdh | ST31000424SS | 1000:0072:1028:1f1e |  n/a |

sdi | MBF2300RC | 1000:0072:1028:1f1e |  n/a |

sdb | MBF2300RC | 1000:0072:1028:1f1e |  n/a |

 

Run completed. The output is at /tmp/sas-util-out.

 

I will PM you the text file.

  • 2 weeks later...
On 10/6/2020 at 9:20 AM, doron said:

The manual for this drive (in fact, the entire Constellation ES.3 series) seems to indicate (sec 6.1) that an explicit NOTIFY needs to be sent to the device to recover from the spindown mode we're sending the device into (Standby_Z).

This is in contrast with other devices tested (e.g. WD/HGST) that automatically spin up when sent to this state.

 

Has anyone seen positive results with this drive and this plugin?

 

We might end up having to enumerate the drive types where this works well vs. those that fail, and build a whitelist in the plugin. Nasty 😞

 

Can I get brief messages here from anyone who's using (or has tried using) the plugin, reporting success/failure? Just a one-liner with:

<HDD Model> <Success/Failure> (<optional comment>)

would be great. Example:


HUH721212AL4200  Success

PM would also work if you don't want to post. Thanks!

@doron this plugin looks amazing, great work!

I have a Constellation ES.2 series SAS HDD; its documentation has the same note about requiring an explicit NOTIFY to be sent to the device to recover from spindown.

Is this something outside the scope of this plugin? I don't really understand what the explicit NOTIFY entails.

Thanks again and happy holidays

19 hours ago, magg1e16 said:

@doron this plugin looks amazing, great work!

I have a Constellation ES.2 series SAS HDD; its documentation has the same note about requiring an explicit NOTIFY to be sent to the device to recover from spindown.

Is this something outside the scope of this plugin? I don't really understand what the explicit NOTIFY entails.

Thanks for the kind words. What we've been seeing is kind of hit and miss, so no strict verdict re a certain series of HDDs; you may want to just try.

Re the specific issue, some of the Seagate documentation seems to imply that power mode 3 (the mode we're setting the drive into) is implemented as needing an explicit command to spin the drive back up. This does not jibe too well with how Unraid expects things to behave (basically, a subsequent I/O to the drive is expected to implicitly spin it back up).

 

Bottom line, you may want to give it a try and see.

 

You can also use the provided sas-util (bundled with the plugin at /usr/local/emhttp/plugins/sas-spindown/sas-util, or you can take it from here), with the parameter "test", to try and predict how things will work. (Unfortunately, even this test is not 100% accurate - I have at least one happy camper whose SAS drives merrily spin down and up with the plugin, but still fail the test...). If you do, please post (or pm) the resulting file.
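That is, the exact invocation (as shown later in this thread):

/usr/local/emhttp/plugins/sas-spindown/sas-util test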

 

Happy 2021!

15 hours ago, doron said:

Thanks for the kind words. What we've been seeing is kind of hit and miss, so no strict verdict re a certain series of HDDs; you may want to just try.

Re the specific issue, some of the Seagate documentation seems to imply that power mode 3 (the mode we're setting the drive into) is implemented as needing an explicit command to spin the drive back up. This does not jibe too well with how Unraid expects things to behave (basically, a subsequent I/O to the drive is expected to implicitly spin it back up).

 

Bottom line, you may want to give it a try and see.

 

You can also use the provided sas-util (bundled with the plugin at /usr/local/emhttp/plugins/sas-spindown/sas-util, or you can take it from here), with the parameter "test", to try and predict how things will work. (Unfortunately, even this test is not 100% accurate - I have at least one happy camper whose SAS drives merrily spin down and up with the plugin, but still fail the test...). If you do, please post (or pm) the resulting file.

 

Happy 2021!

After reading this thread, I should have realized that just testing it was the best way to know.
In case it's useful, I'm on Unraid 6.8.3 using Unassigned Devices + ZFS with my SAS drives; no SAS drives are in the Unraid array.

 

I ran "sg_start -r --pc=3 /dev/sdb" last night, waited a bit and ran  


# sdparm --command=sense /dev/sdX
    /dev/sdX: SEAGATE   ST33000650SS      0003


Didn't look like the spin-down occurred, and I didn't see anything in the system logs mentioning SAS Assist or spin down.
I checked the system log again this morning and found nothing new, but the drive did spin down (gray-balled), and I got the below when I checked:

 

#  sdparm --command=sense /dev/sdb
open error: /dev/sdb [read only]: No such file or directory
# sg_start -r --pc=0 /dev/sdb
sg_start failed: No such file or directory

 

So, the drive did spin down sometime last night, but spin-up failed and I didn't see anything in the system logs.

 

EDIT: I restarted my computer (after re-attaching all the cables) and the drive successfully came back online

"/usr/local/emhttp/plugins/sas-spindown/sas-util test" output attached, matches my findings that spin-down request wasn't followed

 

Edited by magg1e16
Found solution to problem (loose cables)
8 hours ago, magg1e16 said:

In case it's useful, I'm on Unraid 6.8.3 using Unassigned Devices + ZFS with my SAS drives; no SAS drives are in the Unraid array.

Ah. This might make the testing somewhat less accurate. If the drives are active during the test (with real data moving, or even just filesystem housekeeping), that activity might spin the drive back up while you're waiting between setting and testing.

Quote

I didn't see anything in the system logs mentioning SAS Assist or spin down.

I presume the plugin is installed?

You wouldn't see a "SAS Assist" message if you fiddle with the drives manually. You should see it if, instead, you hit the green button next to one of your SAS drives (the ones under UD) to cause the drive to spin down. Have you tried that? Do you see a SAS Assist message then?

 

If you do try that, and the GUI does show a grey ball, then chances are it's spun down properly - you can double-check with the sense command (without doing the sg_start thing).

Quote

EDIT: I restarted my computer (after re-attaching all the cables) and the drive successfully came back online

"/usr/local/emhttp/plugins/sas-spindown/sas-util test" output attached, matches my findings that spin-down request wasn't followed

Thanks for posting!

See my first comment above. If you want to test further, you may want to edit the sas-util script, remove (or comment out) the one line that says "sleep 3s", and try again (a sketch of that edit is below). I'd be curious to see the results.
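A hedged sketch of that edit (it assumes the script contains a literal "sleep 3s" line, per the above; back the file up first):

# Back up the bundled script, then comment out its "sleep 3s" line in place.
cp /usr/local/emhttp/plugins/sas-spindown/sas-util /tmp/sas-util.bak
sed -i 's/^[[:space:]]*sleep 3s/#&/' /usr/local/emhttp/plugins/sas-spindown/sas-util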

 

EDIT: Alternatively, you can just try this:

sg_start -rp3 /dev/sdb && sdparm -C sense /dev/sdb

and show the results.

Edited by doron
7 hours ago, doron said:

I presume the plugin is installed?

Yes, the plugin is installed. I should correct my statement and say there were no further lines referencing SAS Assist after the initial install/initialization.

Unless I've missed a really key set-up step, I don't have the option to spin down the UD SAS HDDs. Hovering over the balls doesn't produce a tooltip.

I believe Unraid 6.9.0 uses emhttpd to spin down UD HDDs, so maybe if I were to upgrade I would be able to click to spin down?

 

I did both tests, in case that's helpful

 

# sg_start -rp3 /dev/sdb && sdparm -C sense /dev/sdb
    /dev/sdb: SEAGATE   ST33000650SS      0003

sas-util-out

1 minute ago, magg1e16 said:

Unless I've missed a really key set-up step, I don't have the option to spin down the UD SAS HDDs. Hovering over the balls doesn't produce a tooltip.

I believe Unraid 6.9.0 uses emhttpd to spin down UD HDDs, so maybe if I were to upgrade I would be able to click to spin down?

Sorry, that was my bad. I forgot you're at 6.8.3 (I'm too deep into 6.9 already).

Yeah, 6.9 does this for UD as well.

Not sure it's gonna help much, given that your drives refuse to spin down 🙂

1 minute ago, magg1e16 said:

 

I did both tests, in case that's helpful

 

# sg_start -rp3 /dev/sdb && sdparm -C sense /dev/sdb
    /dev/sdb: SEAGATE   ST33000650SS      0003

sas-util-out

Very helpful. It's an almost definitive testimonial that these Seagates with this controller would not spin down.


Seems Seagate X14 12TB (ST12000NM0038) drives with an LSI00344 9300-8i HBA controller on Unraid v6.8.3 also do not work, unless I am doing something wrong. I have 11 active and 2 unmounted, and it seems they just like spinning lol.

 

I am guessing this is why Unraid does not support this feature natively, as it seems SAS drives do not have it standardized. That's a shame, really.

Edited by Waffle
22 hours ago, Waffle said:

Seems Seagate X14 12TB (ST12000NM0038) drives with an LSI00344 9300-8i HBA controller on Unraid v6.8.3 also do not work, unless I am doing something wrong. I have 11 active and 2 unmounted, and it seems they just like spinning lol.

Thanks for reporting. Do you see any "SAS Assist" messages in the log?

22 hours ago, Waffle said:

I am guessing this is why Unraid does not support this feature natively, as it seems SAS drives do not have it standardized. That's a shame, really.

Indeed.


I have been waiting for this for ages, so thanks, Doron, for coming up with an easy-to-install solution! I do have a problem, however, and I've checked the thread and can't see anything like the error I get.

 

Whenever a spin-down request is made, or after the 15-minute no-activity period has passed and the server tries to spin down the SAS drives, it comes up with the below and absolutely spams the system log.

 

Jan 18 08:51:35 Tower emhttpd: req (1): cmdSpindown=parity&startState=STARTED&csrf_token=****************

Jan 18 08:51:35 Tower kernel: mdcmd (48): spindown 0

Jan 18 08:51:35 Tower rsyslogd: Child 11865 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]

 

I'm running a Dell R510 with a Dell H310 HBA card and 12 Seagate ST6000NM0034 6TB SAS drives, on Unraid version 6.5.3 (I know, I need to update, but it works so I don't want to mess with it).

 

Edited by greg0986
2 hours ago, greg0986 said:

Jan 18 08:51:35 Tower emhttpd: req (1): cmdSpindown=parity&startState=STARTED&csrf_token=****************

Jan 18 08:51:35 Tower kernel: mdcmd (48): spindown 0

Jan 18 08:51:35 Tower rsyslogd: Child 11865 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]

 

This might indicate a problem in the syslog hook used to trigger spindown in versions below 6.9.

 

- Can you indicate which version of the plugin is installed?

 

- Are you getting any "SAS Assist" messages in your syslog?

 

- Can you do this, from the Unraid CLI:

touch /tmp/spindownsas-debug

and then try to spin down a SAS drive and paste the resulting syslog excerpt?

 

After doing that you can just 

rm /tmp/spindownsas-debug

to stop the debug messages.
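(For the curious, a purely illustrative sketch of the idea behind such a syslog hook - this is not the plugin's actual code, and the slot-to-device mapping is deliberately omitted:)

# Illustrative sketch only (not the plugin's actual code): watch the syslog for
# the kernel's "mdcmd ... spindown N" lines and react by issuing a SCSI STANDBY.
tail -Fn0 /var/log/syslog | while read -r line; do
  case "$line" in
    *mdcmd*spindown*)
      # Map the md slot number to its /dev/sdX device here (omitted), then:
      # sg_start -rp3 /dev/sdX
      ;;
  esac
done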

1 minute ago, doron said:

This might indicate a problem in the syslog hook used to trigger spindown in versions below 6.9.

 

- Can you indicate which version of the plugin is installed?

 

V0.83

 

1 minute ago, doron said:

- Are you getting any "SAS Assist" messages in your syslog?

 

None, just the above messages.

 

1 minute ago, doron said:

- Can you do this, from the Unraid CLI:


touch /tmp/spindownsas-debug

and then try to spin down a SAS drive and paste the resulting syslog excerpt?

 

Done:

 

Jan 18 13:26:11 Tower emhttpd: req (10): cmdSpindown=disk1&startState=STARTED&csrf_token=****************

Jan 18 13:26:11 Tower kernel: mdcmd (48): spindown 1

Jan 18 13:26:11 Tower SAS Assist v0.83: debug: syslog filter triggered

Jan 18 13:26:11 Tower SAS Assist v0.83: debug: syslog filter, R=sdf S=1

 

2 hours ago, greg0986 said:

Jan 18 13:26:11 Tower emhttpd: req (10): cmdSpindown=disk1&startState=STARTED&csrf_token=****************

Jan 18 13:26:11 Tower kernel: mdcmd (48): spindown 1

Jan 18 13:26:11 Tower SAS Assist v0.83: debug: syslog filter triggered

Jan 18 13:26:11 Tower SAS Assist v0.83: debug: syslog filter, R=sdf S=1

 

So now you didn't get the "child terminated" message?!


I've just rebooted it, and selected Spin Down on the Main section and this is what it showed:

 

Jan 18 16:16:19 Tower emhttpd: req (2): cmdSpindownAll=true&startState=STARTED&csrf_token=****************
Jan 18 16:16:19 Tower emhttpd: Spinning down all drives...
Jan 18 16:16:19 Tower kernel: mdcmd (48): spindown 0
Jan 18 16:16:19 Tower kernel: mdcmd (49): spindown 1
Jan 18 16:16:19 Tower kernel: mdcmd (50): spindown 2
Jan 18 16:16:19 Tower kernel: mdcmd (51): spindown 3
Jan 18 16:16:19 Tower kernel: mdcmd (52): spindown 4
Jan 18 16:16:19 Tower kernel: mdcmd (53): spindown 5
Jan 18 16:16:19 Tower kernel: mdcmd (54): spindown 6
Jan 18 16:16:19 Tower kernel: mdcmd (55): spindown 7
Jan 18 16:16:19 Tower kernel: mdcmd (56): spindown 8
Jan 18 16:16:19 Tower kernel: mdcmd (57): spindown 9
Jan 18 16:16:19 Tower kernel: mdcmd (58): spindown 10
Jan 18 16:16:19 Tower emhttpd: shcmd (150): /usr/sbin/hdparm -y /dev/nvme0n1
Jan 18 16:16:19 Tower kernel: mdcmd (59): spindown 11
Jan 18 16:16:19 Tower root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
Jan 18 16:16:19 Tower root:
Jan 18 16:16:19 Tower root: /dev/nvme0n1:
Jan 18 16:16:19 Tower root: issuing standby command
Jan 18 16:16:19 Tower emhttpd: shcmd (150): exit status: 25
Jan 18 16:16:19 Tower emhttpd: shcmd (151): /usr/sbin/hdparm -y /dev/nvme1n1
Jan 18 16:16:19 Tower root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
Jan 18 16:16:19 Tower root:
Jan 18 16:16:19 Tower root: /dev/nvme1n1:
Jan 18 16:16:19 Tower root: issuing standby command
Jan 18 16:16:19 Tower emhttpd: shcmd (151): exit status: 25

Jan 18 16:16:19 Tower rsyslogd: Child 7692 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:16:19 Tower rsyslogd: Child 7726 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:16:19 Tower rsyslogd: Child 7760 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:16:19 Tower rsyslogd: Child 7828 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:16:19 Tower rsyslogd: Child 7845 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:16:19 Tower rsyslogd: Child 7862 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]

 

(The /dev/nvme0n1 and /dev/nvme1n1 lines are the two NVMe drives I installed today.)

 

 

After entering the command you mentioned above and doing it again, this is what it shows:

 

Jan 18 16:20:14 Tower emhttpd: req (4): cmdSpindownAll=true&startState=STARTED&csrf_token=****************
Jan 18 16:20:14 Tower emhttpd: Spinning down all drives...
Jan 18 16:20:14 Tower kernel: mdcmd (72): spindown 0
Jan 18 16:20:14 Tower kernel: mdcmd (73): spindown 1
Jan 18 16:20:14 Tower kernel: mdcmd (74): spindown 2
Jan 18 16:20:14 Tower kernel: mdcmd (75): spindown 3
Jan 18 16:20:14 Tower kernel: mdcmd (76): spindown 4
Jan 18 16:20:14 Tower kernel: mdcmd (77): spindown 5
Jan 18 16:20:14 Tower kernel: mdcmd (78): spindown 6
Jan 18 16:20:14 Tower kernel: mdcmd (79): spindown 7
Jan 18 16:20:14 Tower kernel: mdcmd (80): spindown 8
Jan 18 16:20:14 Tower kernel: mdcmd (81): spindown 9
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower kernel: mdcmd (82): spindown 10
Jan 18 16:20:14 Tower kernel: mdcmd (83): spindown 11
Jan 18 16:20:14 Tower emhttpd: shcmd (162): /usr/sbin/hdparm -y /dev/nvme0n1
Jan 18 16:20:14 Tower root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
Jan 18 16:20:14 Tower root:
Jan 18 16:20:14 Tower root: /dev/nvme0n1:
Jan 18 16:20:14 Tower root: issuing standby command
Jan 18 16:20:14 Tower emhttpd: shcmd (162): exit status: 25
Jan 18 16:20:14 Tower emhttpd: shcmd (163): /usr/sbin/hdparm -y /dev/nvme1n1
Jan 18 16:20:14 Tower root: HDIO_DRIVE_CMD(standby) failed: Inappropriate ioctl for device
Jan 18 16:20:14 Tower root:
Jan 18 16:20:14 Tower root: /dev/nvme1n1:
Jan 18 16:20:14 Tower root: issuing standby command
Jan 18 16:20:14 Tower emhttpd: shcmd (163): exit status: 25

Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdb S=0
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdf S=1
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdg S=2
Jan 18 16:20:14 Tower rsyslogd: Child 8893 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdd S=3
Jan 18 16:20:14 Tower rsyslogd: Child 8912 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdl S=4
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdk S=5
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdh S=6
Jan 18 16:20:14 Tower rsyslogd: Child 8969 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdm S=7
Jan 18 16:20:14 Tower rsyslogd: Child 8988 has terminated, reaped by main-loop. [v8.33.1 try http://www.rsyslog.com/e/0 ]
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdi S=8
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdj S=9
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sdc S=10
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter triggered
Jan 18 16:20:14 Tower SAS Assist v0.83: debug: syslog filter, R=sde S=11

 

(The /dev/nvme0n1 and /dev/nvme1n1 lines are the two NVMe drives I installed today.)


Okay, two different things.

Re the NVMe drives - unrelated to the plugin. You want to specify a spindown delay of "never" for those, or Unraid will try to spin them down (as you saw) and fail.

 

Re the other thing ("Child terminated"), I will try to figure this out.

On 12/17/2020 at 12:58 PM, SimonF said:

@doron @limetech

 

Additional changes have been added to smartctl 7.2 for SCSI drives: the -s option now complements the existing ATA setup.

 

This now supports setting the drive to standby.

 

-s standby,now spins down the drive.

-s standby,off spins up the drive.

 

-n standby has a bug, which I have logged. I am hoping this will be fixed before 7.2 is released. So far only SEAGATE drives are affected and get spun up. HGST and HITACHI seem OK for the ones I have. Ticket ref is [smartmontools] #1413: Some SCSI Drives spin up when using -n option.

 

Example of -s output.

root@Tower:/usr/local/sbin# smartctl.r5131 -s standby,off /dev/sdi
smartctl 7.2 2020-12-15 r5131 [x86_64-linux-5.9.13-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Device placed in ACTIVE mode
SCSI device successfully opened

Use 'smartctl -a' (or '-x') to print SMART (and more) information

 

root@Tower:/usr/local/sbin# smartctl.r5131 -s standby,now /dev/sdi
smartctl 7.2 2020-12-15 r5131 [x86_64-linux-5.9.13-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Device placed in STANDBY mode
SCSI device successfully opened

Use 'smartctl -a' (or '-x') to print SMART (and more) information
 

The patch for the Seagate issue has now been applied.

 

https://www.smartmontools.org/changeset/5179

 

@limetech Source for r5179 can be found here: https://circleci.com/gh/smartmontools/smartmontools/1232

 


My "Seagate Constellation ES ST32000444SS" needs NOTIFY (ENABLE SPINUP) to spinup , too.

I found that I can use "sdparm_64.exe --command=stop PDx" to spin it down, and it is automatically spun up by Windows Server 2019 when there is a disk request.

 

This scheme is feasible. I want to know whether its mechanism can be transplanted to Unraid.
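For reference, a hedged sketch of the Linux-side equivalents of those Windows commands (standard sdparm options; whether Unraid would then implicitly spin the drive back up on I/O is exactly the open question discussed above):

# Hedged sketch: Linux equivalents of the Windows sdparm_64.exe commands above.
sdparm --command=stop /dev/sdX     # STOP UNIT: spin the drive down
sdparm --command=start /dev/sdX    # START UNIT: the explicit, NOTIFY-style spin-up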

 

 

