• [6.9.2] Parity Drive will not spin down via GUI or Schedule


    SimonF
    • Minor

    Since upgrading to 6.9.2 I cannot spin down the parity drive via the GUI; as soon as I spin it down, it comes back up.

     

    Apr 8 10:18:45 Tower emhttpd: spinning down /dev/sdf
    Apr 8 10:18:58 Tower SAS Assist v0.85: Spinning down device /dev/sdf
    Apr 8 10:18:58 Tower emhttpd: read SMART /dev/sdf

     

    Reverting to 6.9.1, the issue no longer happens.

     

    I can manually spin down the drive. All other array drives, which are also SAS, spin down fine.

    root@Tower:~# sg_start -rp 3 /dev/sdf
    root@Tower:~# sdparm -C sense /dev/sdf
        /dev/sdf: SEAGATE   ST4000NM0023      XMGJ
    Additional sense: Standby condition activated by command

     

     

    Also, is it possible to get an updated version of smartctl added?

     

    Will continue to do more testing.

     





    Recommended Comments



    12 hours ago, chris0583 said:

    Absolutely. Diags attached. Whatever you need from me to help diagnose the issue, I am willing to put in the time and effort to resolve it.

    storage2-diagnostics-20220506-1508.zip

    So you are running the disks via a RAID controller; it may be that the spin-down command is being ignored. I'm guessing you have the disks set up in JBOD mode?

     

    From what I can see of the smartctl settings, you have only set up 3 drives and specified that the controllers are Areca.

     

    [1:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg10
      state=running queue_depth=32 scsi_level=0 type=3 device_blocked=0 timeout=90
      dir: /sys/bus/scsi/devices/1:0:16:0  [/sys/devices/pci0000:00/0000:00:01.0/0000:03:00.0/host1/target1:0:16/1:0:16:0]

     

    [8:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg19
      state=running queue_depth=32 scsi_level=0 type=3 device_blocked=0 timeout=90
      dir: /sys/bus/scsi/devices/8:0:16:0  [/sys/devices/pci0000:00/0000:00:02.0/0000:02:00.0/host8/target8:0:16/8:0:16:0]

     

    Would you also be able to provide a screen grab of the System Devices page showing which devices are connected to which controllers?

     

    Are the parity drives on internal SATA ports?

     

    Unraid uses hdparm -y /dev/sdX to spin down devices; what output do you get?
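    The state query can be scripted across several devices at once, which makes it easy to see which drives actually honored a spin-down. A minimal sketch, assuming a POSIX shell and that `hdparm -C` prints a "drive state is" line as in the output later in this thread (the helper names are made up for illustration):

```shell
# parse_hdparm_state: pull the state word out of `hdparm -C` output,
# ignoring any SG_IO sense-data noise on other lines.
parse_hdparm_state() {
    awk -F': *' '/drive state is/ {print $2; exit}'
}

# check_disks: report the power state of each device given on the
# command line, e.g.  check_disks /dev/sd[b-r]
check_disks() {
    for dev in "$@"; do
        state=$(hdparm -C "$dev" 2>/dev/null | parse_hdparm_state)
        printf '%s: %s\n' "$dev" "${state:-unknown}"
    done
}
```

    Run as root on a live system; `active/idle` versus `standby` tells you whether the spin-down actually took.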

     

    1 hour ago, SimonF said:

    Are the parity drives on internal SATA ports?

    According to the diags, both parity drives and the cache are on the onboard SATA; all the others are on the RAID controllers, which is very likely the reason they are not spinning down.

    6 hours ago, SimonF said:

    So you are running the disks via a RAID controller; it may be that the spin-down command is being ignored. I'm guessing you have the disks set up in JBOD mode?

    [...]

    Are the parity drives on internal SATA ports?

    Unraid uses hdparm -y /dev/sdX to spin down devices; what output do you get?

     

    Parity drives are connected to the onboard controller and the data drives are on Areca 1882 controllers. This config has not changed since the day the system was built.

     

    root@STORAGE2:~# hdparm -y /dev/sdh

     

    /dev/sdh:

     issuing standby command

    SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     

     

    Screen Shot 2022-05-07 at 10.30.23 AM.png

    5 hours ago, JorgeB said:

    According to the diags, both parity drives and the cache are on the onboard SATA; all the others are on the RAID controllers, which is very likely the reason they are not spinning down.

    Thanks for assisting with this issue.

     

    Drive spin down always worked until an Unraid OS update. The config of the system has not changed since the system was built years ago. Something changed in the OS. Parity drives, and other drives not connected to the RAID controllers, spin down as they should.

    3 minutes ago, chris0583 said:

    Thanks for assisting with this issue.

     

    Drive spin down always worked until an Unraid OS update. The config of the system has not changed since the system was built years ago. Something changed in the OS. Parity drives, and other drives not connected to the RAID controllers, spin down as they should.

    Which version did it last work on, 6.8.3? The spin-down process changed in 6.9.x.

    Just now, SimonF said:

    Which version did it last work on, 6.8.3? The spin-down process changed in 6.9.x.

    6.9.1 was the last version of Unraid on which the drives spun down properly.


    If you run

     

     

    smartctl -s standby,now /dev/sdh

    and

    smartctl -n stanby /dev/sdh

     

    does it report that the device is in standby?

     

    Another thing we can test is whether hdparm -C and -y work with the generic /dev/sgX device for the disk.

     

    Use sg_map or lsscsi -g to show the sdh -> sgX mapping.
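    That lookup can also be scripted instead of read by eye. A small sketch (the function name is invented; it assumes `lsscsi -g` prints the block node and sg node as the last two columns, as in the listings in this thread):

```shell
# sd_to_sg: given a block device name, print its matching /dev/sgX.
# Reads `lsscsi -g` output on stdin.
sd_to_sg() {
    awk -v dev="$1" '$(NF-1) == dev {print $NF; exit}'
}

# On a live system:
#   lsscsi -g | sd_to_sg /dev/sdj
```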

     

    3 hours ago, SimonF said:

    If you run

    smartctl -s standby,now /dev/sdh
    smartctl -n stanby /dev/sdh

    does it report that the device is in standby?

    [...]

    Use sg_map or lsscsi -g to show the sdh -> sgX mapping.

     

    Thank you for all the help!

     

    root@STORAGE2:~# smartctl -s standby,now /dev/sdj

    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)

    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

     

    SCSI STANDBY command failed: 

    SCSI device successfully opened

     

    Use 'smartctl -a' (or '-x') to print SMART (and more) information

     

    root@STORAGE2:~# smartctl -n stanby /dev/sdj

    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)

    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

     

    =======> INVALID ARGUMENT TO -n: stanby

    =======> VALID ARGUMENTS ARE: never, sleep[,STATUS[,STATUS2]], standby[,STATUS[,STATUS2]], idle[,STATUS[,STATUS2]] <=======

     

    Use smartctl -h to get a usage summary

     

    root@STORAGE2:~# hdparm -C /dev/sdj

     

    /dev/sdj:

    SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     drive state is:  standby

     

    root@STORAGE2:~# hdparm -y /dev/sdj

     

    /dev/sdj:

     issuing standby command

    SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     

    root@STORAGE2:~# lsscsi -g|grep "Areca"
    [1:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg10  < Areca 1881i drives 1 - 8 (sdb > sdi) >
    [8:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg19 < Areca 1881i drives 1 - 8 (sdj > sdq) >

     

    root@STORAGE2:~# smartctl -a -d areca,1,2 /dev/sg19 < device 1 on controller 2 >

    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

     

    === START OF INFORMATION SECTION ===

    Model Family:     Seagate Enterprise Capacity 3.5 HDD

    Device Model:     ST10000NM0016-1TT101

    Serial Number:    ZA24S56L

    LU WWN Device Id: 5 000c50 0afaeae7f

    Firmware Version: SND0

    User Capacity:    10,000,831,348,736 bytes [10.0 TB]

    Sector Sizes:     512 bytes logical, 4096 bytes physical

    Rotation Rate:    7200 rpm

    Form Factor:      3.5 inches

    Device is:        In smartctl database 7.3/5360

    ATA Version is:   ACS-3 T13/2161-D revision 5

    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)

    Local Time is:    Sat May  7 17:28:13 2022 EDT

    SMART support is: Available - device has SMART capability.

    SMART support is: Enabled

     

    Warning! SMART Attribute Data Structure error: invalid SMART checksum.

    13 hours ago, chris0583 said:

    smartctl -a -d areca,1,2 /dev/sg19

     

    For hdparm -y and -C, can you try using the sg device for the disk?

     

    Looking at your diags, sdj maps to sg9; you may want to run sg_map to confirm this is still the case.

     

    [1:0:0:7]    disk    Seagate  ST10000NM0086-2A R001  /dev/sdj   /dev/sg9 
      state=running queue_depth=32 scsi_level=6 type=0 device_blocked=0 timeout=90
      dir: /sys/bus/scsi/devices/1:0:0:7  [/sys/devices/pci0000:00/0000:00:01.0/0000:03:00.0/host1/target1:0:0/1:0:0:7]

     

    Then run

     

    hdparm -y /dev/sg9

    hdparm -C /dev/sg9

     

    We may be able to use the sg names.

     

    Does smartctl -n never /dev/sg9 show the power mode?

    2 hours ago, SimonF said:

     

    For hdparm -y and -C, can you try using the sg device for the disk? Looking at your diags, sdj maps to sg9; you may want to run sg_map to confirm this is still the case.

    [...]

    hdparm -y /dev/sg9
    hdparm -C /dev/sg9

    Does smartctl -n never /dev/sg9 show the power mode?

    root@STORAGE2:~# lsscsi -g|grep "Areca"

    [1:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg10

    [8:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg19

    root@STORAGE2:~# sg_map

    /dev/sg0  /dev/sda

    /dev/sg1  /dev/sdb

    /dev/sg2  /dev/sdc

    /dev/sg3  /dev/sdd

    /dev/sg4  /dev/sde

    /dev/sg5  /dev/sdf

    /dev/sg6  /dev/sdg

    /dev/sg7  /dev/sdh

    /dev/sg8  /dev/sdi

    /dev/sg9  /dev/sdj

    /dev/sg10

    /dev/sg11  /dev/sdk

    /dev/sg12  /dev/sdl

    /dev/sg13  /dev/sdm

    /dev/sg14  /dev/sdn

    /dev/sg15  /dev/sdo

    /dev/sg16  /dev/sdp

    /dev/sg17  /dev/sdq

    /dev/sg18  /dev/sdr

    /dev/sg19

    root@STORAGE2:~# hdparm -y /dev/sg9

     

    /dev/sg9:

     issuing standby command

    SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    root@STORAGE2:~# hdparm -C /dev/sg9

     

    /dev/sg9:

    SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     drive state is:  standby

    root@STORAGE2:~# smartctl -n never  /dev/sg9

    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)

    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

     

    Device is in ACTIVE mode

    SCSI device successfully opened

     

    Use 'smartctl -a' (or '-x') to print SMART (and more) information

     

    root@STORAGE2:~# 


    OK, I don't think hdparm is going to be the solution.

     

    I suspect this may fail also, so we may have to use the -d option; I'm not sure if the code supports that, but I will take a look.

     

    smartctl -s standby,now  /dev/sg9

    smartctl -n never  /dev/sg9

    1 hour ago, SimonF said:

    OK, I don't think hdparm is going to be the solution.

     

    I suspect this may fail also, so we may have to use the -d option; I'm not sure if the code supports that, but I will take a look.

     

    smartctl -s standby,now  /dev/sg9

    smartctl -n never  /dev/sg9

    Something odd happened: drives 9 & 10 failed on me. Thank the lord for 2 parity drives. The system is rebuilding; I will try to issue more commands once it is rebuilt and at 100%.
     

    Again, thank you for all the help and suggestions. I am trying to avoid buying a new adapter, but nothing is working with the ones I have. Something changed in 6.9.2 that broke spin down or compatibility for Areca adapters.

    I even logged into the adapter web UI and both have the same config … 

    9 minutes ago, chris0583 said:

    Something odd happened: drives 9 & 10 failed on me. Thank the lord for 2 parity drives. The system is rebuilding; I will try to issue more commands once it is rebuilt and at 100%.
     

    Again, thank you for all the help and suggestions. I am trying to avoid buying a new adapter, but nothing is working with the ones I have. Something changed in 6.9.2 that broke spin down or compatibility for Areca adapters.

    I even logged into the adapter web UI and both have the same config … 

    I suspect the drives/controller do not like the smartctl standby,now option, which may be what caused the X, so I suggest not running it again.

     

    I think the issue is that sdspin is not reporting the correct status; I will look to see if there is a fix.


    Just wanted to report: I upgraded to the latest RC version (6.10.0-rc7) and there is still no change in spin-down behavior. I do get this in the logs; it looks like the system is issuing spin-down commands, but the drives are not listening, except for the parity drives, which are connected to the main board.

     

    May 10 14:54:59 STORAGE2 kernel: sdb: sdb1
    May 10 14:54:59 STORAGE2 kernel: sdp: sdp1
    May 10 14:57:03 STORAGE2 emhttpd: spinning down /dev/sdq
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdm
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdh
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdg
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdd
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdb
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdf
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdn
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdo
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdi
    May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdp
    May 10 15:07:32 STORAGE2 emhttpd: spinning down /dev/sdk
    May 10 15:07:32 STORAGE2 emhttpd: spinning down /dev/sdl
    May 10 15:09:51 STORAGE2 emhttpd: spinning down /dev/sdr
    May 10 15:12:04 STORAGE2 emhttpd: spinning down /dev/sdq
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdm
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdh
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdg
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdd
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdb
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdf
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdn
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdo
    May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdi
    May 10 15:24:52 STORAGE2 emhttpd: spinning down /dev/sdr
    May 10 15:25:37 STORAGE2 kernel: sdb: sdb1
    May 10 15:25:37 STORAGE2 kernel: sdp: sdp1
    May 10 15:27:05 STORAGE2 emhttpd: spinning down /dev/sdq
    May 10 15:29:46 STORAGE2 emhttpd: read SMART /dev/sdk
    May 10 15:29:46 STORAGE2 emhttpd: read SMART /dev/sdl
    May 10 15:39:53 STORAGE2 emhttpd: spinning down /dev/sdr
    May 10 15:42:06 STORAGE2 emhttpd: spinning down /dev/sd

     

    image.png.75bbfb451b8980057986ba914c0a58b6.png

    10 hours ago, chris0583 said:

    I upgraded to the latest RC version (6.10.0-rc7) and there is still no change in spin-down behavior.

    To be honest, the surprise for me is that it was working before; as far as I remember, spin down is known not to work with most RAID controllers, including Areca.

    10 hours ago, chris0583 said:

    Just wanted to report: I upgraded to the latest RC version (6.10.0-rc7) and there is still no change in spin-down behavior. [...]

    Found a cheap Areca ARC-1880DIX on eBay; it should be here at the weekend, and I will look to see if I can find a solution.

    On 5/10/2022 at 9:06 PM, chris0583 said:

    Just wanted to report: I upgraded to the latest RC version (6.10.0-rc7) and there is still no change in spin-down behavior. [...]

    Not specific to spin down, as I am still looking, but there is a bug in the way SMART is set up for Areca, for which I will log a new bug report.

     

    I need to run the following, but Unraid puts a comma between 16 and 2 instead of a slash.

     

    root@computenode:~# smartctl -ad areca,16/2 /dev/sg7 | grep Temp
    190 Airflow_Temperature_Cel 0x0022   059   036   045    Old_age   Always   In_the_past 41 (Min/Max 41/42 #4)
    194 Temperature_Celsius     0x0022   041   064   000    Old_age   Always       -       41 (128 0 0 0 0)
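    For reference, smartmontools documents the Areca device type as `-d areca,N` for disk N, and `-d areca,N/E` for disk N in enclosure E on the SAS models, i.e. a slash between disk and enclosure rather than a second comma. A tiny hypothetical helper that builds the string either way:

```shell
# areca_dev: build the smartctl -d argument for an Areca-attached disk.
# $1 = disk number, $2 = optional enclosure number.
areca_dev() {
    if [ -n "${2:-}" ]; then
        printf 'areca,%s/%s\n' "$1" "$2"
    else
        printf 'areca,%s\n' "$1"
    fi
}

# e.g.  smartctl -a -d "$(areca_dev 16 2)" /dev/sg7
```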

    On 5/11/2022 at 2:51 AM, SimonF said:

    Found a cheap Areca ARC-1880DIX on eBay; it should be here at the weekend, and I will look to see if I can find a solution.

    I have an extra ARC-1280ML V2; I would have shipped it to you. Thank you for going the extra mile on this.

    On 5/11/2022 at 2:42 AM, JorgeB said:

    To be honest, the surprise for me is that it was working before; as far as I remember, spin down is known not to work with most RAID controllers, including Areca.

    One of the main reasons I went to Unraid was for the spin-down feature. I have been running QNAPs for years, and those drives would spin all day. Now I cannot live without it. I have turned so many people on to it; unfortunately, I am their tech support arm whenever they want to do something new with their systems.

    50 minutes ago, chris0583 said:

    One of the main reasons I went to Unraid was for the spin-down feature. I have been running QNAPs for years, and those drives would spin all day. Now I cannot live without it. I have turned so many people on to it; unfortunately, I am their tech support arm whenever they want to do something new with their systems.

    I downgraded to 6.8.3, and this was the only release I found where the drives appear to spin down in the GUI but were physically still spinning.

     

    I have found a way to spin down the disks in rc8, but I am checking for stability, as the card seems to stop responding to smartctl commands, so I am not sure if it is the card or the process.

    11 hours ago, chris0583 said:

    One of the main reasons I went to Unraid was for the spin-down feature.

    If spin down is important to you, and if that's a possibility, I would suggest replacing those RAID controllers with LSI HBAs; spin down with RAID controllers is hit and miss, mostly miss.

    On 5/13/2022 at 8:01 PM, chris0583 said:

    One of the main reasons I went to Unraid was for the spin-down feature. I have been running QNAPs for years, and those drives would spin all day. Now I cannot live without it. I have turned so many people on to it; unfortunately, I am their tech support arm whenever they want to do something new with their systems.

    Hi, I have a way to spin down the drives, but the card seems to lock out access after a while; the disks are still accessible from the system, but status updates do not seem to be reliable.

     

    I am not sure how it would have worked before; maybe the disks didn't spin down, but the GUI showed them as spun down.

     

    I have found an issue with the way smartctl is called for my card and have raised a bug report for it.

     

    My controller is on the latest firmware version for my card, the ARC-1880.

     

    image.thumb.png.a74347ed0013fa7f84bd13494c00e5e9.png

     

    So I am not sure if the issue is the expander within my card.

     

    I have installed cli64 and will look to see if I can create a process to update the smart-one settings, as the sgX device name seems to change on reboots for me.
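    One way to cope with the sg name moving between reboots is to resolve the controller's handle at run time rather than hard-coding it. A sketch only (the function name is invented; it assumes the `lsscsi -g` column layout shown elsewhere in this thread, with "Areca" in the vendor column):

```shell
# areca_sg: print the sg node of the first Areca RAID controller
# found in `lsscsi -g` output (read on stdin).
areca_sg() {
    awk '$3 == "Areca" {print $NF; exit}'
}

# On a live system:
#   ctrl=$(lsscsi -g | areca_sg)
#   smartctl -d areca,15/2 -n standby "$ctrl"
```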

     

    Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
    Areca CLI, Version: 1.86, Arclib: 310, Date: Nov  1 2011( Linux )
    
     S  #   Name       Type             Interface
    ==================================================
    [*] 1   ARC-1880   Raid Controller  PCI
    ==================================================
    
    CMD     Description
    ==========================================================
    main    Show Command Categories.
    set     General Settings.
    rsf     RaidSet Functions.
    vsf     VolumeSet Functions.
    disk    Physical Drive Functions.
    sys     System Functions.
    net     Ethernet Functions.
    event   Event Functions.
    hw      Hardware Monitor Functions.
    mail    Mail Notification Functions.
    snmp    SNMP Functions.
    ntp     NTP Functions.
    exit    Exit CLI.
    ==========================================================
    Command Format: <CMD> [Sub-Command] [Parameters].
    Note: Use <CMD> -h or -help to get details.
    
    
    root@computenode:~# cli64 disk info
      # Enc# Slot#   ModelName                        Capacity  Usage
    ===============================================================================
      1  01  Slot#1  N.A.                                0.0GB  N.A.      
      2  01  Slot#2  N.A.                                0.0GB  N.A.      
      3  01  Slot#3  N.A.                                0.0GB  N.A.      
      4  01  Slot#4  N.A.                                0.0GB  N.A.      
      5  01  Slot#5  N.A.                                0.0GB  N.A.      
      6  01  Slot#6  N.A.                                0.0GB  N.A.      
      7  01  Slot#7  N.A.                                0.0GB  N.A.      
      8  01  Slot#8  N.A.                                0.0GB  N.A.      
      9  02  SLOT 01 N.A.                                0.0GB  N.A.      
     10  02  SLOT 02 N.A.                                0.0GB  N.A.      
     11  02  SLOT 03 N.A.                                0.0GB  N.A.      
     12  02  SLOT 04 N.A.                                0.0GB  N.A.      
     13  02  SLOT 05 N.A.                                0.0GB  N.A.      
     14  02  SLOT 06 N.A.                                0.0GB  N.A.      
     15  02  SLOT 07 N.A.                                0.0GB  N.A.      
     16  02  SLOT 08 N.A.                                0.0GB  N.A.      
     17  02  SLOT 09 N.A.                                0.0GB  N.A.      
     18  02  SLOT 10 N.A.                                0.0GB  N.A.      
     19  02  SLOT 11 N.A.                                0.0GB  N.A.      
     20  02  SLOT 12 N.A.                                0.0GB  N.A.      
     21  02  SLOT 13 N.A.                                0.0GB  N.A.      
     22  02  SLOT 14 N.A.                                0.0GB  N.A.      
     23  02  SLOT 15 ST3000DM001-9YN166               3000.6GB  JBOD      
     24  02  SLOT 16 N.A.                                0.0GB  N.A.      
     25  02  EXTP 01 N.A.                                0.0GB  N.A.      
     26  02  EXTP 02 N.A.                                0.0GB  N.A.      
     27  02  EXTP 03 N.A.                                0.0GB  N.A.      
     28  02  EXTP 04 N.A.                                0.0GB  N.A.      
    ===============================================================================
    GuiErrMsg<0x00>: Success.
    root@computenode:~# cli64 sys info
    The System Information
    ===========================================
    Main Processor     : 800MHz
    CPU ICache Size    : 32KB
    CPU DCache Size    : 32KB
    CPU SCache Size    : 0KB
    System Memory      : 1024MB/800MHz/ECC
    Firmware Version   : V1.56 2019-07-30
    BOOT ROM Version   : V1.56 2019-07-30
    Serial Number      : E107CACRAR600082
    Controller Name    : ARC-1880
    Current IP Address : 192.168.1.27
    ===========================================
    GuiErrMsg<0x00>: Success.
    root@computenode:~# 
    

     

    I have been testing on a test system, so there is no impact to real data, but the following does spin down the drives. However, after a while the card does not respond unless I disconnect the drives, and I also see timeouts in the event log.

     

    root@computenode:~# lsscsi -g
    [0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda   /dev/sg0 
    [1:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdb   /dev/sg1 
    [2:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdc   /dev/sg2 
    [3:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdd   /dev/sg3 
    [4:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sde   /dev/sg4 
    [8:0:2:6]    disk    Seagate  ST3000DM001-9YN1 R001  /dev/sdg   /dev/sg7 
    [8:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg5 
    [10:0:0:0]   disk    ATA      ST96812AS        3.14  /dev/sdf   /dev/sg6 
    [N:0:1:1]    disk    CT500P2SSD8__1                             /dev/nvme0n1  -     

     

    Spin down

    smartctl -d areca,15/2 -s standby,now /dev/sg5

     

    Spin up

    smartctl -d areca,15/2 -s standby,off /dev/sg5

     

    Status

    smartctl -d areca,15/2 -n standby /dev/sg5
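    The three commands above can be wrapped in one helper so the mode strings don't get mistyped. This is a sketch only, with the `15/2` slot and `/dev/sg5` handle from this test rig as placeholders; note the earlier caveat that `standby,now` appeared to upset at least one controller, so treat the `down` action with care:

```shell
# areca_spin: run the spin-down/up/status commands above for one
# Areca-attached disk.  $1 = down|up|status, $2 = disk/enclosure
# (e.g. 15/2), $3 = the controller's /dev/sgX node.
# SMARTCTL can be overridden (e.g. with `echo`) for a dry run.
areca_spin() {
    case "$1" in
        down)   set -- -d "areca,$2" -s standby,now "$3" ;;
        up)     set -- -d "areca,$2" -s standby,off "$3" ;;
        status) set -- -d "areca,$2" -n standby "$3" ;;
        *)      echo "usage: areca_spin down|up|status N/E /dev/sgX" >&2
                return 1 ;;
    esac
    ${SMARTCTL:-smartctl} "$@"
}

# e.g.  areca_spin status 15/2 /dev/sg5
```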

     

    I am working to see if I can make a reliable version, but I suspect the firmware is causing an issue on my card, and it will need to be able to get a reliable sgX name.

     

    Do you have two cards, or is it a single card that reports as two controllers?

     

    You can download the cli64 tool from the Areca website.

     

    My event log.

     

    Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
    Areca CLI, Version: 1.86, Arclib: 310, Date: Nov  1 2011( Linux )
    
     S  #   Name       Type             Interface
    ==================================================
    [*] 1   ARC-1880   Raid Controller  PCI
    ==================================================
    
    CMD     Description
    ==========================================================
    main    Show Command Categories.
    set     General Settings.
    rsf     RaidSet Functions.
    vsf     VolumeSet Functions.
    disk    Physical Drive Functions.
    sys     System Functions.
    net     Ethernet Functions.
    event   Event Functions.
    hw      Hardware Monitor Functions.
    mail    Mail Notification Functions.
    snmp    SNMP Functions.
    ntp     NTP Functions.
    exit    Exit CLI.
    ==========================================================
    Command Format: <CMD> [Sub-Command] [Parameters].
    Note: Use <CMD> -h or -help to get details.
    CLI> event info
    Date-Time            Device           Event Type            Elapsed Time Errors
    ===============================================================================
    2022-05-21 09:43:12  E2 SLOT 15       Device Inserted                          
    2022-05-21 07:51:56  192.168.001.041  HTTP Log In                              
    2022-05-20 19:20:07  H/W MONITOR      Raid Powered On                          
    2022-05-20 18:52:41  E2 SLOT 13       Device Removed                           
    2022-05-20 18:52:36  E2 SLOT 15       Device Removed                           
    2022-05-20 13:09:58  E2 SLOT 15       Time Out Error                           
    2022-05-20 06:17:32  H/W MONITOR      Raid Powered On                          
    2022-05-20 06:09:09  E2 SLOT 15       Time Out Error                           
    2022-05-19 21:01:28  E2 SLOT 15       Device Inserted                          
    2022-05-19 21:01:21  E2 SLOT 15       Device Removed                           
    2022-05-19 18:39:13  E2 SLOT 15       Time Out Error                           
    2022-05-19 18:37:13  E2 SLOT 15       Time Out Error                           
    2022-05-19 18:03:39  E2 SLOT 15       Device Inserted                          
    2022-05-19 18:03:39  E2 SLOT 15       Device Removed                           
    2022-05-19 18:03:18  E2 SLOT 13       Time Out Error                           
    2022-05-19 18:02:29  E2 SLOT 15       Device Inserted                          
    2022-05-19 18:02:29  E2 SLOT 15       Device Removed                           
    2022-05-19 18:01:25  E2 SLOT 15       Time Out Error                           
    2022-05-19 16:25:51  H/W MONITOR      Raid Powered On                          
    2022-05-19 15:11:39  E2 SLOT 13       Time Out Error                           
    2022-05-19 15:10:49  E2 SLOT 15       Time Out Error                           
    2022-05-19 12:29:09  E2 SLOT 13       Device Inserted                          
    2022-05-19 12:29:09  E2 SLOT 15       Device Inserted                          
    2022-05-19 12:27:51  E2 SLOT 15       Device Removed                           
    2022-05-19 12:26:48  E2 SLOT 15       Device Inserted                          
    2022-05-19 12:26:37  E2 SLOT 15       Device Removed                           
    2022-05-19 11:25:34  H/W MONITOR      Raid Powered On                          
    2022-05-18 19:22:06  H/W MONITOR      Raid Powered On                          
    2022-05-17 21:08:28  E2 SLOT 15       Time Out Error                           
    2022-05-17 20:05:00  H/W MONITOR      Raid Powered On                          
    2022-05-17 21:03:26  E2 SLOT 15       Time Out Error                           
    2022-05-16 12:49:50  SW API Interface API Log In                               
    2022-05-16 12:10:30  SW API Interface API Log In                               
    2022-05-16 07:55:00  E2 SLOT 15       Time Out Error                           
    2022-05-15 22:14:09  E2 SLOT 15       Time Out Error                           
    2022-05-15 06:46:47  192.168.001.041  HTTP Log In                              
    2022-05-15 06:40:19  E2 SLOT 15       Device Inserted                          
    2022-05-15 05:50:22  E2 SLOT 16       Device Removed                           
    2022-05-14 14:41:32  H/W MONITOR      Raid Powered On                          
    2022-05-14 14:33:50  H/W MONITOR      Raid Powered On                          
    2022-05-14 13:49:30  E2 SLOT 16       Time Out Error                           
    2022-05-13 20:53:11  H/W MONITOR      Test Event                               
    ===============================================================================
    GuiErrMsg<0x00>: Success.
    
    CLI> exit
    root@computenode:~# 

     

    On 5/21/2022 at 4:59 AM, SimonF said:

    Hi, I have a way to spin down the drives, but the card seems to lock out access after a while; the disks are still accessible from the system, but status updates do not seem to be reliable.

     

    Not sure how it would have worked before; maybe the disks didn't spin down but the GUI showed them as spun down.

     

    I have found an issue with the way smartctl is called for my card and have raised a bug report for it.

     

    My ARC-1880 controller is on the latest firmware available for the card.

     


     

    So I am not sure if the issue is the expander within my card.

     

    I have installed cli64 and will look to see if I can create a process to update the smart-one settings, as the sgX device name seems to change on reboots for me.
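
    Since the sgX number can move between boots, one option is to resolve it at runtime instead of storing it. A sketch (hypothetical helper; it just picks the Areca line out of `lsscsi -g` output like the listings in this thread):

    ```shell
    # Sketch: print the SCSI generic node (/dev/sgN) of the first Areca RAID
    # controller line in "lsscsi -g" output read from stdin.
    areca_sg() {
      awk '/Areca/ { print $NF; exit }'
    }
    ```

    Usage: `lsscsi -g | areca_sg` (on the system above this would print `/dev/sg5`).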

     

    Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
    Areca CLI, Version: 1.86, Arclib: 310, Date: Nov  1 2011( Linux )
    
     S  #   Name       Type             Interface
    ==================================================
    [*] 1   ARC-1880   Raid Controller  PCI
    ==================================================
    
    CMD     Description
    ==========================================================
    main    Show Command Categories.
    set     General Settings.
    rsf     RaidSet Functions.
    vsf     VolumeSet Functions.
    disk    Physical Drive Functions.
    sys     System Functions.
    net     Ethernet Functions.
    event   Event Functions.
    hw      Hardware Monitor Functions.
    mail    Mail Notification Functions.
    snmp    SNMP Functions.
    ntp     NTP Functions.
    exit    Exit CLI.
    ==========================================================
    Command Format: <CMD> [Sub-Command] [Parameters].
    Note: Use <CMD> -h or
    
    
    root@computenode:~# cli64 disk info
      # Enc# Slot#   ModelName                        Capacity  Usage
    ===============================================================================
      1  01  Slot#1  N.A.                                0.0GB  N.A.      
      2  01  Slot#2  N.A.                                0.0GB  N.A.      
      3  01  Slot#3  N.A.                                0.0GB  N.A.      
      4  01  Slot#4  N.A.                                0.0GB  N.A.      
      5  01  Slot#5  N.A.                                0.0GB  N.A.      
      6  01  Slot#6  N.A.                                0.0GB  N.A.      
      7  01  Slot#7  N.A.                                0.0GB  N.A.      
      8  01  Slot#8  N.A.                                0.0GB  N.A.      
      9  02  SLOT 01 N.A.                                0.0GB  N.A.      
     10  02  SLOT 02 N.A.                                0.0GB  N.A.      
     11  02  SLOT 03 N.A.                                0.0GB  N.A.      
     12  02  SLOT 04 N.A.                                0.0GB  N.A.      
     13  02  SLOT 05 N.A.                                0.0GB  N.A.      
     14  02  SLOT 06 N.A.                                0.0GB  N.A.      
     15  02  SLOT 07 N.A.                                0.0GB  N.A.      
     16  02  SLOT 08 N.A.                                0.0GB  N.A.      
     17  02  SLOT 09 N.A.                                0.0GB  N.A.      
     18  02  SLOT 10 N.A.                                0.0GB  N.A.      
     19  02  SLOT 11 N.A.                                0.0GB  N.A.      
     20  02  SLOT 12 N.A.                                0.0GB  N.A.      
     21  02  SLOT 13 N.A.                                0.0GB  N.A.      
     22  02  SLOT 14 N.A.                                0.0GB  N.A.      
     23  02  SLOT 15 ST3000DM001-9YN166               3000.6GB  JBOD      
     24  02  SLOT 16 N.A.                                0.0GB  N.A.      
     25  02  EXTP 01 N.A.                                0.0GB  N.A.      
     26  02  EXTP 02 N.A.                                0.0GB  N.A.      
     27  02  EXTP 03 N.A.                                0.0GB  N.A.      
     28  02  EXTP 04 N.A.                                0.0GB  N.A.      
    ===============================================================================
    GuiErrMsg<0x00>: Success.
    root@computenode:~# cli64 sys info
    The System Information
    ===========================================
    Main Processor     : 800MHz
    CPU ICache Size    : 32KB
    CPU DCache Size    : 32KB
    CPU SCache Size    : 0KB
    System Memory      : 1024MB/800MHz/ECC
    Firmware Version   : V1.56 2019-07-30
    BOOT ROM Version   : V1.56 2019-07-30
    Serial Number      : E107CACRAR600082
    Controller Name    : ARC-1880
    Current IP Address : 192.168.1.27
    ===========================================
    GuiErrMsg<0x00>: Success.
    root@computenode:~# 
    

     

    I have been testing on a test system, so there is no impact to real data, and the following does spin down the drives. After a while, though, the card stops responding unless I disconnect the drives, and I also see timeouts in the event log.

     

    root@computenode:~# lsscsi -g
    [0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda   /dev/sg0 
    [1:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdb   /dev/sg1 
    [2:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdc   /dev/sg2 
    [3:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sdd   /dev/sg3 
    [4:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sde   /dev/sg4 
    [8:0:2:6]    disk    Seagate  ST3000DM001-9YN1 R001  /dev/sdg   /dev/sg7 
    [8:0:16:0]   process Areca    RAID controller  R001  -          /dev/sg5 
    [10:0:0:0]   disk    ATA      ST96812AS        3.14  /dev/sdf   /dev/sg6 
    [N:0:1:1]    disk    CT500P2SSD8__1                             /dev/nvme0n1  -     

     

    Spin down

    smartctl -d areca,15/2 -s standby,now /dev/sg5

     

    Spin up

    smartctl -d areca,15/2 -s standby,off /dev/sg5

     

    Status

    smartctl -d areca,15/2 -n standby /dev/sg5

     

    I am working to make this a reliable version, but I suspect the firmware on my card is causing an issue, and the process will need a way to get a reliable sgX name.

     

    Do you have two cards, or is it a single card that reports as two controllers?

     

    You can download the cli64 tool from the Areca website.

     

    My event log.

     

    Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
    Areca CLI, Version: 1.86, Arclib: 310, Date: Nov  1 2011( Linux )
    
     S  #   Name       Type             Interface
    ==================================================
    [*] 1   ARC-1880   Raid Controller  PCI
    ==================================================
    
    CMD     Description
    ==========================================================
    main    Show Command Categories.
    set     General Settings.
    rsf     RaidSet Functions.
    vsf     VolumeSet Functions.
    disk    Physical Drive Functions.
    sys     System Functions.
    net     Ethernet Functions.
    event   Event Functions.
    hw      Hardware Monitor Functions.
    mail    Mail Notification Functions.
    snmp    SNMP Functions.
    ntp     NTP Functions.
    exit    Exit CLI.
    ==========================================================
    Command Format: <CMD> [Sub-Command] [Parameters].
    Note: Use <CMD> -h or -help to get details.
    CLI> event info
    Date-Time            Device           Event Type            Elapsed Time Errors
    ===============================================================================
    2022-05-21 09:43:12  E2 SLOT 15       Device Inserted                          
    2022-05-21 07:51:56  192.168.001.041  HTTP Log In                              
    2022-05-20 19:20:07  H/W MONITOR      Raid Powered On                          
    2022-05-20 18:52:41  E2 SLOT 13       Device Removed                           
    2022-05-20 18:52:36  E2 SLOT 15       Device Removed                           
    2022-05-20 13:09:58  E2 SLOT 15       Time Out Error                           
    2022-05-20 06:17:32  H/W MONITOR      Raid Powered On                          
    2022-05-20 06:09:09  E2 SLOT 15       Time Out Error                           
    2022-05-19 21:01:28  E2 SLOT 15       Device Inserted                          
    2022-05-19 21:01:21  E2 SLOT 15       Device Removed                           
    2022-05-19 18:39:13  E2 SLOT 15       Time Out Error                           
    2022-05-19 18:37:13  E2 SLOT 15       Time Out Error                           
    2022-05-19 18:03:39  E2 SLOT 15       Device Inserted                          
    2022-05-19 18:03:39  E2 SLOT 15       Device Removed                           
    2022-05-19 18:03:18  E2 SLOT 13       Time Out Error                           
    2022-05-19 18:02:29  E2 SLOT 15       Device Inserted                          
    2022-05-19 18:02:29  E2 SLOT 15       Device Removed                           
    2022-05-19 18:01:25  E2 SLOT 15       Time Out Error                           
    2022-05-19 16:25:51  H/W MONITOR      Raid Powered On                          
    2022-05-19 15:11:39  E2 SLOT 13       Time Out Error                           
    2022-05-19 15:10:49  E2 SLOT 15       Time Out Error                           
    2022-05-19 12:29:09  E2 SLOT 13       Device Inserted                          
    2022-05-19 12:29:09  E2 SLOT 15       Device Inserted                          
    2022-05-19 12:27:51  E2 SLOT 15       Device Removed                           
    2022-05-19 12:26:48  E2 SLOT 15       Device Inserted                          
    2022-05-19 12:26:37  E2 SLOT 15       Device Removed                           
    2022-05-19 11:25:34  H/W MONITOR      Raid Powered On                          
    2022-05-18 19:22:06  H/W MONITOR      Raid Powered On                          
    2022-05-17 21:08:28  E2 SLOT 15       Time Out Error                           
    2022-05-17 20:05:00  H/W MONITOR      Raid Powered On                          
    2022-05-17 21:03:26  E2 SLOT 15       Time Out Error                           
    2022-05-16 12:49:50  SW API Interface API Log In                               
    2022-05-16 12:10:30  SW API Interface API Log In                               
    2022-05-16 07:55:00  E2 SLOT 15       Time Out Error                           
    2022-05-15 22:14:09  E2 SLOT 15       Time Out Error                           
    2022-05-15 06:46:47  192.168.001.041  HTTP Log In                              
    2022-05-15 06:40:19  E2 SLOT 15       Device Inserted                          
    2022-05-15 05:50:22  E2 SLOT 16       Device Removed                           
    2022-05-14 14:41:32  H/W MONITOR      Raid Powered On                          
    2022-05-14 14:33:50  H/W MONITOR      Raid Powered On                          
    2022-05-14 13:49:30  E2 SLOT 16       Time Out Error                           
    2022-05-13 20:53:11  H/W MONITOR      Test Event                               
    ===============================================================================
    GuiErrMsg<0x00>: Success.
    
    CLI> exit
    root@computenode:~# 

     

     

     

    I have two ARC-1882i cards, both on the latest firmware. The cards are identical except for IP and S/N.

     

    I will download the CLI tool today and try to spin down the 4 TB drive, which is not part of the array; I use it as a standalone backup drive for some data. The 6 TB drive is passed through to a BlueIRIS Windows VM for my security system.

     

    Again, I cannot thank you enough for spending cycles on this.

     

     


     

    IOMMU group 29:[17d3:1880] 03:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)

    [1:0:0:0] disk Seagate ST6000NM0024-1HT R001 /dev/sdb 6.00TB

    [1:0:0:1] disk Seagate ST14000NM001G-2K R001 /dev/sdc 14.0TB

    [1:0:0:2] disk Seagate ST12000NM0008-2H R001 /dev/sdd 12.0TB

    [1:0:0:3] disk Seagate ST12000NM001G-2M R001 /dev/sde 12.0TB

    [1:0:0:4] disk Seagate ST10000NM0016-1T R001 /dev/sdf 10.0TB

    [1:0:0:5] disk Seagate ST10000NM0016-1T R001 /dev/sdh 10.0TB

    [1:0:0:6] disk Seagate ST10000NM0086-2A R001 /dev/sdi 10.0TB

    [1:0:0:7] disk Seagate ST10000NM0086-2A R001 /dev/sdj 10.0TB

    IOMMU group 30:[17d3:1880] 02:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)

    [8:0:0:0] disk Seagate ST10000NM0016-1T R001 /dev/sdm 10.0TB

    [8:0:0:1] disk Seagate ST10000NM0016-1T R001 /dev/sdn 10.0TB

    [8:0:0:2] disk Seagate ST10000NM0016-1T R001 /dev/sdo 10.0TB

    [8:0:0:3] disk Seagate ST10000NM0016-1T R001 /dev/sdp 10.0TB

    [8:0:0:5] disk WDC WD4002FYYZ-01B7C R001 /dev/sdq 4.00TB

    [8:0:0:7] disk Seagate ST6000VN0041-2EL R001 /dev/sdr 6.00TB

     

    7 hours ago, chris0583 said:

     

     

    I have two ARC-1882i cards, both on the latest firmware. The cards are identical except for IP and S/N.

     

    I will download the CLI tool today and try to spin down the 4 TB drive, which is not part of the array; I use it as a standalone backup drive for some data. The 6 TB drive is passed through to a BlueIRIS Windows VM for my security system.

     

    Again, I cannot thank you enough for spending cycles on this.

     

     


     

    IOMMU group 29:[17d3:1880] 03:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)

    [1:0:0:0] disk Seagate ST6000NM0024-1HT R001 /dev/sdb 6.00TB

    [1:0:0:1] disk Seagate ST14000NM001G-2K R001 /dev/sdc 14.0TB

    [1:0:0:2] disk Seagate ST12000NM0008-2H R001 /dev/sdd 12.0TB

    [1:0:0:3] disk Seagate ST12000NM001G-2M R001 /dev/sde 12.0TB

    [1:0:0:4] disk Seagate ST10000NM0016-1T R001 /dev/sdf 10.0TB

    [1:0:0:5] disk Seagate ST10000NM0016-1T R001 /dev/sdh 10.0TB

    [1:0:0:6] disk Seagate ST10000NM0086-2A R001 /dev/sdi 10.0TB

    [1:0:0:7] disk Seagate ST10000NM0086-2A R001 /dev/sdj 10.0TB

    IOMMU group 30:[17d3:1880] 02:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)

    [8:0:0:0] disk Seagate ST10000NM0016-1T R001 /dev/sdm 10.0TB

    [8:0:0:1] disk Seagate ST10000NM0016-1T R001 /dev/sdn 10.0TB

    [8:0:0:2] disk Seagate ST10000NM0016-1T R001 /dev/sdo 10.0TB

    [8:0:0:3] disk Seagate ST10000NM0016-1T R001 /dev/sdp 10.0TB

    [8:0:0:5] disk WDC WD4002FYYZ-01B7C R001 /dev/sdq 4.00TB

    [8:0:0:7] disk Seagate ST6000VN0041-2EL R001 /dev/sdr 6.00TB

     

    FYI, I only have test drives on my controller, so I cannot confirm this will not cause issues for other drives on the same controller.
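
    For a two-controller layout like the one quoted above, note that smartctl also accepts `areca,N` (disk number only, for cards without an expander) in addition to `areca,N/E`. A state survey could be sketched roughly like this; the controller nodes `/dev/sg10` and `/dev/sg19` come from the diagnostics earlier in the thread, while the slot range is a guess, so adjust it to match `cli64 disk info`:

    ```shell
    # Sketch: report the power state of each slot behind two Areca
    # controllers. Slot numbering is hypothetical; exit code 2 can also
    # mean "open failed", so empty slots may show up as standby.
    report_states() {
      for sg in /dev/sg10 /dev/sg19; do
        for slot in 1 2 3 4 5 6 7 8; do
          smartctl -d "areca,$slot" -n standby "$sg" >/dev/null 2>&1
          case $? in
            0) echo "$sg slot $slot: active" ;;
            2) echo "$sg slot $slot: standby (or no disk)" ;;
          esac
        done
      done
    }
    ```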





