[Plugin] Spin Down SAS Drives


doron

Recommended Posts

9 hours ago, doron said:

@SimonF, are you seeing the same behavior (with your SATA drives) if you remove the plugin?

 

If NOT (i.e. removing the plugin makes the issue go away), can you please test something for me --

1. Remove the plugin

2. Manually spin down one of the SATA drives on which you've seen the problem. 

3. Issue this command against this SATA drive:

sdparm -ip di_target /dev/sdX

4. Check whether this command caused (a) the drive to spin up, and/or (b) a "read SMART /dev/sdX" message to be logged.
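
In case it helps, here is the same sequence as a small sketch (replace sdX with the drive you saw the problem on; hdparm -y is just one way to do the manual spin-down in step 2):

dev=/dev/sdX
hdparm -y "$dev"                                       # step 2: put the SATA drive into standby
sleep 5
sdparm -ip di_target "$dev"                            # step 3: the command under test
hdparm -C "$dev"                                       # step 4a: "standby" means it did not spin up
grep "read SMART $dev" /var/log/syslog | tail -n 3     # step 4b: any SMART reads logged for it?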

 

Thanks! (Un)fortunately I can't reproduce this issue locally.

 

(There's also a related report that I'd like to get to the bottom of.)

I don't have SATA drives in my test system, only SAS, apart from two cache drives.

 

Will see if I have a spare SATA drive I can put in. Found a 320GB drive, but it has to be connected via a SATA port, not the HBA. It spins down OK.

 

I saw the flashing LEDs report also. I don't have a rack-type chassis, so I can't help there.

 

bonienl was having issues with SATA drives not spinning down, but he doesn't have any SAS drives.

Edited by SimonF
Link to comment

@doron Test results below, but my SATA drive works fine and spins down. Will try to get it working on the HBA and try again.

 

1. Remove the plugin

Plugin removed; all SAS drives spun up as soon as it was removed.

2. Manually spin down one of the SATA drives on which you've seen the problem. 

Test SATA spins down fine.

3. Issue this command against this SATA drive:

sdparm -ip di_target /dev/sdX

root@Tower:~# sdparm -ip di_target /dev/sdk
    /dev/sdk: ATA       ST3320310CS       SC14
Device identification VPD page:

4. Check whether this command caused (a) the drive to spin up, and/or (b) a "read SMART /dev/sdX" message to be logged.

Apr 10 09:45:15 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin remove sas-spindown.plg
Apr 10 09:45:15 Tower root: plugin: running: anonymous
Apr 10 09:45:15 Tower sas-spindown plugin: Removing the smartctl wrapper...
Apr 10 09:45:15 Tower sas-spindown plugin: Restoring Unraid OS spindown script...
Apr 10 09:45:43 Tower kernel: sd 7:0:4:0: attempting task abort!scmd(0x000000007e425516), outstanding for 15025 ms & timeout 15000 ms
Apr 10 09:45:43 Tower kernel: sd 7:0:4:0: [sdh] tag#631 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
Apr 10 09:45:43 Tower kernel: scsi target7:0:4: handle(0x000c), sas_address(0x5000cca027baa9a9), phy(6)
Apr 10 09:45:43 Tower kernel: scsi target7:0:4: enclosure logical id(0x5003048011c7eb00), slot(5)
Apr 10 09:45:43 Tower kernel: sd 7:0:4:0: task abort: SUCCESS scmd(0x000000007e425516)
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdj
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdh
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdg
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdf
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdc
Apr 10 09:46:44 Tower emhttpd: read SMART /dev/sdi
Apr 10 09:49:40 Tower emhttpd: spinning up /dev/sdk
Apr 10 09:49:45 Tower emhttpd: read SMART /dev/sdk
Apr 10 09:49:51 Tower emhttpd: spinning down /dev/sdk

 

Let me know if there are any other tests you need. Hi @bonienl, would you be able to run sdparm -ip di_target /dev/sdX against one of your SATA drives that doesn't spin down?

  • Thanks 1
Link to comment
2 hours ago, SimonF said:

@doron Test results below, but my SATA drive works fine and spins down. Will try to get it working on the HBA and try again.

 

Thanks for going to the trouble! For some reason I deduced from your previous post that you're seeing the new issue on a SATA drive.

 

My purpose with the test is to figure out whether there are circumstances, with the 6.9.2 kernel, in which the plugin would cause a spun-down SATA drive to spin up. There's a single point where the plugin interacts with a SATA drive, which is with this sdparm command (used to reliably determine whether a drive is SAS or not).
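
To illustrate (this is a simplified sketch, not the plugin's actual code), the idea is that - as in the output posted above - a SATA drive returns an empty di_target page, while a SAS drive lists target port designators there:

dev=/dev/sdX
body=$(sdparm -ip di_target "$dev" 2>/dev/null | tail -n +3)   # drop the two header lines
if [ -n "$body" ]; then echo "$dev looks like SAS"; else echo "$dev looks like SATA"; fi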

 

BTW, the reason all SAS drives immediately spin up when the plugin is removed is that vanilla Unraid issues "hdparm -C" to test a device's spin-up/down state. SAS drives spin up in response to this command.
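
For reference, the ATA check and its SCSI-level counterparts look like this (the sdparm lines illustrate the general approach rather than the exact commands Unraid or the plugin issue):

hdparm -C /dev/sdX                 # ATA CHECK POWER MODE; prints "standby" or "active/idle"
sdparm --command=sense /dev/sdX    # SCSI REQUEST SENSE - query state without START STOP UNIT
sdparm --command=stop /dev/sdX     # SCSI START STOP UNIT (start bit 0) - spin the drive down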

Link to comment

I am the proud new owner of an LSI 9305-16i (my first adventure into HBAs) - I'm super grateful for your plugin, but wanted to know if this is common:

  • works great if I manually spin down the whole array or individual drives
  • does not work when drives 'time out' and should spin down

Is this a config error on my part, or is it just not included in normal functionality?

 

Thanks in advance for any help - dave

Link to comment
Definitely not the intended behavior... "Should" work with the Spindown timers.

Can you elaborate on what you're seeing?

- which SAS drives do you have?
- what have you configured as Spindown times?
- what do you see on syslog when the timer expires?

(I presume you aren't seeing the green ball turning grey when the timer expires.)
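
A couple of one-liners that may help collect the above (the device name is just an example, and I'm assuming the default syslog location):

smartctl -i /dev/sdb | grep -iE "vendor|product|model"     # identify the drive
grep -iE "spin|SAS Assist" /var/log/syslog | tail -n 20    # recent spin-down activity around timer expiry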
Link to comment
1 hour ago, doron said:

Definitely not the intended behavior... "Should" work with the Spindown timers.

Can you elaborate on what you're seeing?

- which SAS drives do you have?
- what have you configured as Spindown times?
- what do you see on syslog when the timer expires?

(I presume you aren't seeing the green ball turning grey when the timer expires.)

  • I have the 4-to-1 SATA-to-SAS cables (I guess technically that whole thing should be reversed)
  • 15 minutes (in Settings --> Disk Settings --> Default spin down delay)
  • As an addendum to the above: Enable Spinup Groups is set to 'No'
  • In the system logs (from the GUI) -- nothing
  • dmesg -- nothing of value relating to the drives

And... right, no gray ball - just constant green (with drive temp visible)... is it just tricking me?

 

I find it a bit disturbing not to see anything in the logs (which is where you'd check whether it issued the spindown... hmm). This used to work on the same system when the drives were connected directly via SATA?

 

Thanks again -dave

P.S. From other posts (and I'm not sure if it's relevant): I don't have any SATA drives connected to the on-motherboard SATA ports... the only other drives are NVMe on the motherboard.

 

P.P.S. After doing the manual spin-down (by clicking on the green ball), the logs show:

Apr 11 19:20:34 Tower4 emhttpd: shcmd (144): /usr/local/sbin/set_ncq sdd 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (145): echo 128 > /sys/block/sdd/queue/nr_requests
Apr 11 19:20:34 Tower4 emhttpd: shcmd (146): /usr/local/sbin/set_ncq sdc 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (147): echo 128 > /sys/block/sdc/queue/nr_requests
Apr 11 19:20:34 Tower4 emhttpd: shcmd (148): /usr/local/sbin/set_ncq sde 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (149): echo 128 > /sys/block/sde/queue/nr_requests
Apr 11 19:20:34 Tower4 emhttpd: shcmd (150): /usr/local/sbin/set_ncq sdf 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (151): echo 128 > /sys/block/sdf/queue/nr_requests
Apr 11 19:20:34 Tower4 emhttpd: shcmd (152): /usr/local/sbin/set_ncq sdg 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (153): echo 128 > /sys/block/sdg/queue/nr_requests
Apr 11 19:20:34 Tower4 emhttpd: shcmd (154): /usr/local/sbin/set_ncq sdb 1
Apr 11 19:20:34 Tower4 emhttpd: shcmd (155): echo 128 > /sys/block/sdb/queue/nr_requests
Apr 11 19:20:34 Tower4 kernel: mdcmd (37): set md_num_stripes 2048
Apr 11 19:20:34 Tower4 kernel: mdcmd (38): set md_queue_limit 80
Apr 11 19:20:34 Tower4 kernel: mdcmd (39): set md_sync_limit 5
Apr 11 19:20:34 Tower4 kernel: mdcmd (40): set md_write_method
Apr 11 19:20:38 Tower4 emhttpd: shcmd (156): /usr/local/sbin/set_ncq sdd 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (157): echo 128 > /sys/block/sdd/queue/nr_requests
Apr 11 19:20:38 Tower4 emhttpd: shcmd (158): /usr/local/sbin/set_ncq sdc 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (159): echo 128 > /sys/block/sdc/queue/nr_requests
Apr 11 19:20:38 Tower4 emhttpd: shcmd (160): /usr/local/sbin/set_ncq sde 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (161): echo 128 > /sys/block/sde/queue/nr_requests
Apr 11 19:20:38 Tower4 emhttpd: shcmd (162): /usr/local/sbin/set_ncq sdf 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (163): echo 128 > /sys/block/sdf/queue/nr_requests
Apr 11 19:20:38 Tower4 emhttpd: shcmd (164): /usr/local/sbin/set_ncq sdg 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (165): echo 128 > /sys/block/sdg/queue/nr_requests
Apr 11 19:20:38 Tower4 emhttpd: shcmd (166): /usr/local/sbin/set_ncq sdb 1
Apr 11 19:20:38 Tower4 emhttpd: shcmd (167): echo 128 > /sys/block/sdb/queue/nr_requests
Apr 11 19:20:38 Tower4 kernel: mdcmd (41): set md_num_stripes 2048
Apr 11 19:20:38 Tower4 kernel: mdcmd (42): set md_queue_limit 80
Apr 11 19:20:38 Tower4 kernel: mdcmd (43): set md_sync_limit 5
Apr 11 19:20:38 Tower4 kernel: mdcmd (44): set md_write_method
Apr 11 19:20:57 Tower4 emhttpd: spinning down /dev/sdc
Apr 11 19:20:58 Tower4 emhttpd: spinning down /dev/sdf
Apr 11 19:20:59 Tower4 emhttpd: spinning down /dev/sdg
Apr 11 19:21:01 Tower4 emhttpd: spinning down /dev/sdb
Apr 11 19:21:03 Tower4 kernel: mdcmd (45): set md_write_method 0

Edited by ds679
Link to comment
2 minutes ago, ds679 said:
  • I have the 4-to-1 SATA-to-SAS cables (I guess technically that whole thing should be reversed)
  • 15 minutes (in Settings --> Disk Settings --> Default spin down delay)
  • As an addendum to the above: Enable Spinup Groups is set to 'No'
  • In the system logs (from the GUI) -- nothing
  • dmesg -- nothing of value relating to the drives

And... right, no gray ball - just constant green (with drive temp visible)... is it just tricking me?

 

Hang on. I may be missing something. Do you have SAS hard drives in your system? 

(or have you connected your SATA drives to the new LSI HBA - in which case, the plugin is not really applicable, although it has been known to help in some corner cases.) 

 

On a related note - which Unraid version are you running? If it's 6.9.2 (latest when writing this), there's an open issue on SATA drives not spinning down (or, actually, spinning back up immediately) in this particular version. Not applicable to any other version. 

 

The fact that you're not seeing anything in the log is peculiar - at the very least you should have seen "spinning down /dev/sdX" type messages.

Link to comment
5 minutes ago, doron said:

 

Hang on. I may be missing something. Do you have SAS hard drives in your system? 

(or have you connected your SATA drives to the new LSI HBA - in which case, the plugin is not really applicable, although it has been known to help in some corner cases.) 

 

On a related note - which Unraid version are you running? If it's 6.9.2 (latest when writing this), there's an open issue on SATA drives not spinning down (or, actually, spinning back up immediately) in this particular version. Not applicable to any other version. 

 

The fact that you're not seeing anything in the log is peculiar - at the very least you should have seen "spinning down /dev/sdX" type messages.

 

Sorry... maybe I misunderstood (I'm still trying to learn even though I've been using Unraid forever). I have all my SATA drives connected to the LSI HBA, so I'm guessing this plugin is not the right one. Thanks for letting me know!

 

Yes... running the latest/greatest. Sounds like I'm going to go back to 6.9.1 and test (I guess things happened at the same time, so I thought it was the new hardware).

 

Agreed on the weirdness in logs!

 

Thanks again - dave

Link to comment
1 hour ago, ds679 said:

Yes... running the latest/greatest. Sounds like I'm going to go back to 6.9.1 and test (I guess things happened at the same time, so I thought it was the new hardware).

 

Ding...ding...winner-winner, chicken dinner

 

Back to 6.9.1 and it works... thanks again.

  • Like 1
Link to comment
  • 2 weeks later...

Hi All

 

I have an HP DL380 G8 with the P420i/2GB controller and an array of 18 disks from a variety of manufacturers.

 

I recently found this addon and thought it would be great, as I have 18 disks permanently spun up... so I installed it - but none of the disks ever spin down.

 

I have an unused cache disk, so I clicked on spin down, but it shows as still spun up - log below:

 

Apr 20 19:58:43 DL380p-Rack kernel: sd 2:0:19:0: [sdu] 2344225968 512-byte logical blocks: (1.20 TB/1.09 TiB)
Apr 20 19:58:43 DL380p-Rack kernel: sd 2:0:19:0: [sdu] Write Protect is off
Apr 20 19:58:43 DL380p-Rack kernel: sd 2:0:19:0: [sdu] Mode Sense: e7 00 10 08
Apr 20 19:58:43 DL380p-Rack kernel: sd 2:0:19:0: [sdu] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 20 19:58:43 DL380p-Rack kernel: sdu: sdu1
Apr 20 19:58:43 DL380p-Rack kernel: sd 2:0:19:0: [sdu] Attached SCSI disk
Apr 20 19:58:43 DL380p-Rack kernel: BTRFS: device fsid 12ff0c5a-8607-4875-93b9-c90cb75a3929 devid 1 transid 42 /dev/sdu1 scanned by udevd (2160)
Apr 20 19:59:08 DL380p-Rack emhttpd: EG001200JWJNQ_WFK27GGJ_35000c500bc3cdd27 (sdu) 512 2344225968
Apr 20 19:59:09 DL380p-Rack emhttpd: import 30 cache device: (sdu) EG001200JWJNQ_WFK27GGJ_35000c500bc3cdd27
Apr 20 19:59:09 DL380p-Rack emhttpd: read SMART /dev/sdu
Apr 20 19:59:27 DL380p-Rack emhttpd: shcmd (107): mount -t btrfs -o noatime,space_cache=v2 /dev/sdu1 /mnt/cache
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): enabling free space tree
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): using free space tree
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): has skinny extents
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): creating free space tree
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): setting compat-ro feature flag for FREE_SPACE_TREE (0x1)
Apr 20 19:59:27 DL380p-Rack kernel: BTRFS info (device sdu1): setting compat-ro feature flag for FREE_SPACE_TREE_VALID (0x2)
Apr 20 20:51:34 DL380p-Rack emhttpd: spinning down /dev/sdu
Apr 20 20:51:34 DL380p-Rack SAS Assist v0.85: Spinning down device /dev/sdu
Apr 20 20:51:38 DL380p-Rack emhttpd: read SMART /dev/sdu
Apr 20 20:56:31 DL380p-Rack emhttpd: spinning down /dev/sdu
Apr 20 20:56:31 DL380p-Rack SAS Assist v0.85: Spinning down device /dev/sdu
Apr 20 20:56:35 DL380p-Rack emhttpd: read SMART /dev/sdu

 

I have seen varying reports of disk incompatibility... but can anyone confirm if this works with the P420, or have I done something wrong?

 

Thanks

Edited by SliMat
picture not showing
Link to comment
40 minutes ago, SliMat said:

Hi All

 

I have an HP DL380 G8 with the P420i/2GB controller and an array of 18 disks from a variety of manufacturers.

 

I recently found this addon and thought it would be great, as I have 18 disks permanently spun up... so I installed it - but none of the disks ever spin down.

 

I have an unused cache disk, so I clicked on spin down, but it shows as still spun up - log below:

 


[log snipped - see the post above]

 

I have seen varying reports of disk incompatibility... but can anyone confirm if this works with the P420, or have I done something wrong?

 

Thanks

Which version of the OS are you using? There is a known issue with 6.9.2 spinning some drives back up. There is a bug post for that.

Link to comment

Hi @SimonF

 

I upgraded to 6.9.2 and installed this addon at the same time, so I have never used it under 6.9.1 (or earlier)... so I suspect that this is the issue. Thanks - I will try to find where the issue is noted and look at the bug post.

 

I may try going back to 6.9.1 - but as this is a production machine I will have to try this late one night when I can take it offline.

 

Thanks

Edited by SliMat
Link to comment
On 4/30/2021 at 4:41 PM, shezzannn said:

Jumping in on this - I've been running this plugin since 6.9.1 and it has been working flawlessly.

Once I upgraded to 6.9.2, all my SAS drives continue to spin.

Thanks for reporting. Just to make sure you're seeing an instance of the recently reported 6.9.2 issue (you probably are; see above for more details): in your system log, do you see the "SAS Assist" spindown messages immediately followed by a "read SMART" message for the same drive?
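
A quick way to check, assuming the default syslog location:

grep -E "SAS Assist|read SMART|spinning down" /var/log/syslog | tail -n 40
# The symptom: a "SAS Assist ... Spinning down" line followed within seconds by "read SMART" for the same device.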

Link to comment
On 5/1/2021 at 7:52 PM, doron said:

In your system log, do you see the "SAS Assist" spindown messages immediately followed by a "read SMART" message for the same drive?

I'm having the same issue after upgrading to 6.9.2 as well.

Broadcom / LSI SAS2308 card plus 6x Seagate SAS drives.

From the logs:
 


May 8 12:44:40 Vault emhttpd: spinning down /dev/sde
May 8 12:44:40 Vault SAS Assist v0.85: Spinning down device /dev/sde
May 8 12:44:45 Vault emhttpd: read SMART /dev/sde
May 8 12:44:46 Vault emhttpd: spinning down /dev/sdg
May 8 12:44:46 Vault SAS Assist v0.85: Spinning down device /dev/sdg
May 8 12:44:53 Vault emhttpd: read SMART /dev/sdg
May 8 12:44:53 Vault emhttpd: spinning down /dev/sdh
May 8 12:44:53 Vault SAS Assist v0.85: Spinning down device /dev/sdh
May 8 12:45:00 Vault emhttpd: read SMART /dev/sdh
May 8 12:45:01 Vault emhttpd: spinning down /dev/sdi
May 8 12:45:01 Vault SAS Assist v0.85: Spinning down device /dev/sdi
May 8 12:45:07 Vault emhttpd: read SMART /dev/sdi
May 8 12:45:08 Vault emhttpd: spinning down /dev/sdj
May 8 12:45:08 Vault SAS Assist v0.85: Spinning down device /dev/sdj
May 8 12:45:14 Vault emhttpd: read SMART /dev/sdj
May 8 12:45:15 Vault emhttpd: spinning down /dev/sdd
May 8 12:45:15 Vault SAS Assist v0.85: Spinning down device /dev/sdd
May 8 12:45:47 Vault emhttpd: read SMART /dev/sdd


Edit: reverted to 6.9.1; all drives go to standby again :)

Edited by Failquail
Link to comment
  • 2 weeks later...
1 hour ago, SFord said:

So is it the plugin (v0.85, 2021-02-06) or Unraid 6.9.2 (2021-04-07)? This issue has been open for what, 40 days? Can we get some help here, even if we need to edit some files by hand?

 

It is a 6.9.2 issue. See above for an open bug thread about it. The issue occurs with both SATA and SAS drives (although not all drives), and is seemingly unrelated to this plugin.

Link to comment
  • 1 month later...

Hi Guys, I'm late to the SAS party and recently snagged two ST6000NM0014 drives for a very small sum. 
As I noticed the drives wouldn't spin down, I found this plugin, and the drives do now spin down, but the error count on the WebUI for those disks slowly started to creep up. 
Manually spinning the drives down in the WebUI also swamps the syslog with IO errors, which led to me having to rebuild the array from parity.

Removing the plugin leaves the drives spun up, but obviously it would be better if these played nice!

 

System details:

  • Unraid 6.9.1 (downgraded from 6.9.2 due to the other spin down issue)
  • LSI SAS2008 controller, with 2x SAS ST6000NM0014's, and the rest are SATA (which spin down fine)

The OP mentions that the issue can be caused by a combination of controller/disk, rather than by the non-standard implementation of power management across different brands, but the thread seems to lean heavily towards the latter?
 

I guess my main questions are:

  • Should I hang onto my ST6000NM0014's?
  • Is there a reason the Constellation ES.3 is currently #'d out of the exclusion list if it's still misbehaving?
  • Would things be better with a different SAS controller?
  • Can the OP be updated with a list of SAS drives that are known to play nice with this plugin?
  • Is this being addressed at a core unraid OS level for 6.10?

Apologies if these have been answered already but the last few pages of this thread are hard to follow with the 6.9.2 issue being added to the mix!

Link to comment

Hi, thanks for posting.

 

51 minutes ago, billington.mark said:

I guess my main questions are:

  • Should I hang onto my ST6000NM0014's?

Sure, they're generally decent drives. But you will probably need to live with them spinning 24x7 😞

 

51 minutes ago, billington.mark said:
  • Is there a reason the Constellation ES.3 is currently #'d out of the exclusion list if it's still misbehaving?

Basically, there's conflicting data as to their behavior. I started with excluding them, then started a mini-project of collecting data points from users. Since I did receive a couple of positive data points for these drives, I commented the exclusion out, "for now".

51 minutes ago, billington.mark said:
  • Would things be better with a different SAS controller?

My controller is based on the same chip. I use HGST drives. They spin down/up without a hitch. 

So it could be a combination of the controller/HDDs, or just the latter. I tend to believe it's the latter (the HDDs), but the jury's still out. At any rate, these drives have produced far more thumbs-down data points than thumbs-up.

51 minutes ago, billington.mark said:
  • Can the OP be updated with a list of SAS drives that are known to play nice with this plugin?

As I said, I started collecting this data. Whatever seemed conclusive is in the exclusions file. Perhaps compiling it into a list of "what works" may indeed be a good idea, time permitting.

51 minutes ago, billington.mark said:
  • Is this being addressed at a core unraid OS level for 6.10?

Not as far as I'm aware, but that's up to Limetech to answer authoritatively.

  • Like 1
Link to comment
