Comments posted by chris0583
-
-
On 5/21/2022 at 4:59 AM, SimonF said:
Hi, I have a way to spin down the drives, but the card seems to lock out access after a while; the disks are still accessible from the system, but status updates do not seem to be reliable.
Not sure how it would have worked before; maybe the disks didn't spin down but the GUI showed them as spun down.
I have found an issue with the way smartctl is called for my card and have raised a bug report for it.
My controller, an ARC-1880, is on the latest firmware version for the card.
So I am not sure if the issue is the expander within my card.
I have installed cli64 and will look to see if I can create a process to update the smart-one settings, as the sgX device name seems to change on reboots for me.
Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.86, Arclib: 310, Date: Nov 1 2011( Linux )
 S  #   Name       Type             Interface
==================================================
[*] 1   ARC-1880   Raid Controller  PCI
==================================================
CMD      Description
==========================================================
main     Show Command Categories.
set      General Settings.
rsf      RaidSet Functions.
vsf      VolumeSet Functions.
disk     Physical Drive Functions.
sys      System Functions.
net      Ethernet Functions.
event    Event Functions.
hw       Hardware Monitor Functions.
mail     Mail Notification Functions.
snmp     SNMP Functions.
ntp      NTP Functions.
exit     Exit CLI.
==========================================================
Command Format: <CMD> [Sub-Command] [Parameters].
Note: Use <CMD> -h or -help to get details.

root@computenode:~# cli64 disk info
  # Enc# Slot#    ModelName           Capacity  Usage
===============================================================================
  1 01   Slot#1   N.A.                   0.0GB  N.A.
  2 01   Slot#2   N.A.                   0.0GB  N.A.
  3 01   Slot#3   N.A.                   0.0GB  N.A.
  4 01   Slot#4   N.A.                   0.0GB  N.A.
  5 01   Slot#5   N.A.                   0.0GB  N.A.
  6 01   Slot#6   N.A.                   0.0GB  N.A.
  7 01   Slot#7   N.A.                   0.0GB  N.A.
  8 01   Slot#8   N.A.                   0.0GB  N.A.
  9 02   SLOT 01  N.A.                   0.0GB  N.A.
 10 02   SLOT 02  N.A.                   0.0GB  N.A.
 11 02   SLOT 03  N.A.                   0.0GB  N.A.
 12 02   SLOT 04  N.A.                   0.0GB  N.A.
 13 02   SLOT 05  N.A.                   0.0GB  N.A.
 14 02   SLOT 06  N.A.                   0.0GB  N.A.
 15 02   SLOT 07  N.A.                   0.0GB  N.A.
 16 02   SLOT 08  N.A.                   0.0GB  N.A.
 17 02   SLOT 09  N.A.                   0.0GB  N.A.
 18 02   SLOT 10  N.A.                   0.0GB  N.A.
 19 02   SLOT 11  N.A.                   0.0GB  N.A.
 20 02   SLOT 12  N.A.                   0.0GB  N.A.
 21 02   SLOT 13  N.A.                   0.0GB  N.A.
 22 02   SLOT 14  N.A.                   0.0GB  N.A.
 23 02   SLOT 15  ST3000DM001-9YN166  3000.6GB  JBOD
 24 02   SLOT 16  N.A.                   0.0GB  N.A.
 25 02   EXTP 01  N.A.                   0.0GB  N.A.
 26 02   EXTP 02  N.A.                   0.0GB  N.A.
 27 02   EXTP 03  N.A.                   0.0GB  N.A.
 28 02   EXTP 04  N.A.                   0.0GB  N.A.
===============================================================================
GuiErrMsg<0x00>: Success.
root@computenode:~# cli64 sys info
The System Information
===========================================
Main Processor     : 800MHz
CPU ICache Size    : 32KB
CPU DCache Size    : 32KB
CPU SCache Size    : 0KB
System Memory      : 1024MB/800MHz/ECC
Firmware Version   : V1.56 2019-07-30
BOOT ROM Version   : V1.56 2019-07-30
Serial Number      : E107CACRAR600082
Controller Name    : ARC-1880
Current IP Address : 192.168.1.27
===========================================
GuiErrMsg<0x00>: Success.
root@computenode:~#
I have been testing on a test system, so there is no impact to real data, but the following does spin down the drives. After a while, though, the card does not respond unless I disconnect the drives, and I also see timeouts in the event log.
root@computenode:~# lsscsi -g
[0:0:0:0]    disk    SanDisk  Cruzer Blade      1.00  /dev/sda   /dev/sg0
[1:0:0:0]    disk    SanDisk  Cruzer Blade      1.00  /dev/sdb   /dev/sg1
[2:0:0:0]    disk    SanDisk  Cruzer Blade      1.00  /dev/sdc   /dev/sg2
[3:0:0:0]    disk    SanDisk  Cruzer Blade      1.00  /dev/sdd   /dev/sg3
[4:0:0:0]    disk    SanDisk  Cruzer Blade      1.00  /dev/sde   /dev/sg4
[8:0:2:6]    disk    Seagate  ST3000DM001-9YN1  R001  /dev/sdg   /dev/sg7
[8:0:16:0]   process Areca    RAID controller   R001  -          /dev/sg5
[10:0:0:0]   disk    ATA      ST96812AS         3.14  /dev/sdf   /dev/sg6
[N:0:1:1]    disk    CT500P2SSD8__1                   /dev/nvme0n1  -
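Since the controller's /dev/sgX name can change between reboots, one way to make spin-down scripting survive a reboot is to resolve it from `lsscsi -g` each time. A minimal sketch, assuming the controller always shows up as a "process"-type "Areca RAID controller" entry; `find_areca_sg` is a hypothetical helper, not part of cli64 or Unraid, and the sample text is pasted from the listing above:

```shell
#!/bin/sh
# Hypothetical helper: pick out the Areca controller's /dev/sgX device
# from `lsscsi -g` output, since the name can change across reboots.
find_areca_sg() {
  # $1: text of `lsscsi -g`; print the last field of the first Areca controller line
  printf '%s\n' "$1" | awk '/Areca +RAID controller/ { print $NF; exit }'
}

# Sample taken from the output above; in practice use: find_areca_sg "$(lsscsi -g)"
sample='[8:0:2:6]    disk    Seagate  ST3000DM001-9YN1  R001  /dev/sdg  /dev/sg7
[8:0:16:0]   process Areca    RAID controller   R001  -         /dev/sg5'

ARECA_SG=$(find_areca_sg "$sample")
echo "$ARECA_SG"   # prints /dev/sg5
```

With `ARECA_SG` resolved at script start, the smartctl commands below no longer hard-code /dev/sg5.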
Spin down:
smartctl -d areca,15/2 -s standby,now /dev/sg5
Spin up:
smartctl -d areca,15/2 -s standby,off /dev/sg5
Status:
smartctl -d areca,15/2 -n standby /dev/sg5
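The three invocations above differ only in the smartctl option and the disk/enclosure address, so they can be wrapped in one small helper. A sketch only, assuming the `areca,DISK/ENCLOSURE` addressing and the /dev/sg5 name from the post above; `areca_smartctl` is an illustrative name, and it prints the command (dry run) rather than executing it:

```shell
#!/bin/sh
# Illustrative wrapper around the smartctl spin-down/up/status commands above.
# Usage: areca_smartctl down|up|status SLOT ENCLOSURE SG_DEVICE
# Dry run: prints the command; drop the `echo` to actually execute it.
areca_smartctl() {
  case "$1" in
    down)   opt='-s standby,now' ;;
    up)     opt='-s standby,off' ;;
    status) opt='-n standby' ;;
    *)      echo "usage: areca_smartctl down|up|status slot enc sgdev" >&2; return 1 ;;
  esac
  echo "smartctl -d areca,$2/$3 $opt $4"
}

areca_smartctl down 15 2 /dev/sg5   # prints: smartctl -d areca,15/2 -s standby,now /dev/sg5
```

Keeping it as a dry run makes it safe to test the slot/enclosure arithmetic before pointing it at real hardware.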
I am working to see if I can make a reliable version, but I suspect the firmware is causing an issue on my card, and it will need a way to get a reliable sgX name.
Do you have two cards, or is it a single card that reports as two controllers?
You can download the cli64 tool from the Areca website.
My event log.
Copyright (c) 2004-2011 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.86, Arclib: 310, Date: Nov 1 2011( Linux )
 S  #   Name       Type             Interface
==================================================
[*] 1   ARC-1880   Raid Controller  PCI
==================================================
CMD      Description
==========================================================
main     Show Command Categories.
set      General Settings.
rsf      RaidSet Functions.
vsf      VolumeSet Functions.
disk     Physical Drive Functions.
sys      System Functions.
net      Ethernet Functions.
event    Event Functions.
hw       Hardware Monitor Functions.
mail     Mail Notification Functions.
snmp     SNMP Functions.
ntp      NTP Functions.
exit     Exit CLI.
==========================================================
Command Format: <CMD> [Sub-Command] [Parameters].
Note: Use <CMD> -h or -help to get details.

CLI> event info
Date-Time            Device           Event Type       Elapsed Time  Errors
===============================================================================
2022-05-21 09:43:12  E2 SLOT 15       Device Inserted
2022-05-21 07:51:56  192.168.001.041  HTTP Log In
2022-05-20 19:20:07  H/W MONITOR      Raid Powered On
2022-05-20 18:52:41  E2 SLOT 13       Device Removed
2022-05-20 18:52:36  E2 SLOT 15       Device Removed
2022-05-20 13:09:58  E2 SLOT 15       Time Out Error
2022-05-20 06:17:32  H/W MONITOR      Raid Powered On
2022-05-20 06:09:09  E2 SLOT 15       Time Out Error
2022-05-19 21:01:28  E2 SLOT 15       Device Inserted
2022-05-19 21:01:21  E2 SLOT 15       Device Removed
2022-05-19 18:39:13  E2 SLOT 15       Time Out Error
2022-05-19 18:37:13  E2 SLOT 15       Time Out Error
2022-05-19 18:03:39  E2 SLOT 15       Device Inserted
2022-05-19 18:03:39  E2 SLOT 15       Device Removed
2022-05-19 18:03:18  E2 SLOT 13       Time Out Error
2022-05-19 18:02:29  E2 SLOT 15       Device Inserted
2022-05-19 18:02:29  E2 SLOT 15       Device Removed
2022-05-19 18:01:25  E2 SLOT 15       Time Out Error
2022-05-19 16:25:51  H/W MONITOR      Raid Powered On
2022-05-19 15:11:39  E2 SLOT 13       Time Out Error
2022-05-19 15:10:49  E2 SLOT 15       Time Out Error
2022-05-19 12:29:09  E2 SLOT 13       Device Inserted
2022-05-19 12:29:09  E2 SLOT 15       Device Inserted
2022-05-19 12:27:51  E2 SLOT 15       Device Removed
2022-05-19 12:26:48  E2 SLOT 15       Device Inserted
2022-05-19 12:26:37  E2 SLOT 15       Device Removed
2022-05-19 11:25:34  H/W MONITOR      Raid Powered On
2022-05-18 19:22:06  H/W MONITOR      Raid Powered On
2022-05-17 21:08:28  E2 SLOT 15       Time Out Error
2022-05-17 20:05:00  H/W MONITOR      Raid Powered On
2022-05-17 21:03:26  E2 SLOT 15       Time Out Error
2022-05-16 12:49:50  SW API Interface API Log In
2022-05-16 12:10:30  SW API Interface API Log In
2022-05-16 07:55:00  E2 SLOT 15       Time Out Error
2022-05-15 22:14:09  E2 SLOT 15       Time Out Error
2022-05-15 06:46:47  192.168.001.041  HTTP Log In
2022-05-15 06:40:19  E2 SLOT 15       Device Inserted
2022-05-15 05:50:22  E2 SLOT 16       Device Removed
2022-05-14 14:41:32  H/W MONITOR      Raid Powered On
2022-05-14 14:33:50  H/W MONITOR      Raid Powered On
2022-05-14 13:49:30  E2 SLOT 16       Time Out Error
2022-05-13 20:53:11  H/W MONITOR      Test Event
===============================================================================
GuiErrMsg<0x00>: Success.
CLI> exit
root@computenode:~#
I have two 1882i cards on the latest firmware. Both cards are identical except for IP and S/N.
I will download the CLI tool today and try to spin down the 4 TB drive, which is not part of the array; I use it as a standalone backup drive for some data. The 6 TB drive is passed through to a BlueIris Windows VM for my security system.
Again, I cannot thank you enough for spending cycles on this.
IOMMU group 29:[17d3:1880] 03:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)
[1:0:0:0] disk Seagate ST6000NM0024-1HT R001 /dev/sdb 6.00TB
[1:0:0:1] disk Seagate ST14000NM001G-2K R001 /dev/sdc 14.0TB
[1:0:0:2] disk Seagate ST12000NM0008-2H R001 /dev/sdd 12.0TB
[1:0:0:3] disk Seagate ST12000NM001G-2M R001 /dev/sde 12.0TB
[1:0:0:4] disk Seagate ST10000NM0016-1T R001 /dev/sdf 10.0TB
[1:0:0:5] disk Seagate ST10000NM0016-1T R001 /dev/sdh 10.0TB
[1:0:0:6] disk Seagate ST10000NM0086-2A R001 /dev/sdi 10.0TB
[1:0:0:7] disk Seagate ST10000NM0086-2A R001 /dev/sdj 10.0TB
IOMMU group 30:[17d3:1880] 02:00.0 RAID bus controller: Areca Technology Corp. ARC-188x series PCIe 2.0/3.0 to SAS/SATA 6/12Gb RAID Controller (rev 05)
[8:0:0:0] disk Seagate ST10000NM0016-1T R001 /dev/sdm 10.0TB
[8:0:0:1] disk Seagate ST10000NM0016-1T R001 /dev/sdn 10.0TB
[8:0:0:2] disk Seagate ST10000NM0016-1T R001 /dev/sdo 10.0TB
[8:0:0:3] disk Seagate ST10000NM0016-1T R001 /dev/sdp 10.0TB
[8:0:0:5] disk WDC WD4002FYYZ-01B7C R001 /dev/sdq 4.00TB
[8:0:0:7] disk Seagate ST6000VN0041-2EL R001 /dev/sdr 6.00TB
-
On 5/11/2022 at 2:42 AM, JorgeB said:
To be honest, the surprise for me is that it was working before; as far as I remember, spin down is known not to work with most RAID controllers, including Areca.
One of the main reasons I went to Unraid was for the spin-down feature. I have been running QNAPs for years, and those drives would spin all day. Now I cannot live without it. I have turned so many people on to it; unfortunately, I am their tech support arm whenever they want to do something new with their systems.
-
On 5/11/2022 at 2:51 AM, SimonF said:
Found a cheap Areca ARC-1880DIX on eBay; it should be here at the weekend, and I will look to see if I can find a solution.
I have an extra ARC-1280ML V2; I would have shipped it to you. Thank you for going the extra mile on this.
-
Just wanted to report: I upgraded to the latest RC version (6.10.0-rc7) and there is still no change in spin-down behavior. I do get this in the logs; it looks like the system is issuing commands to spin down, but the drives are not listening, except for the parity drives, which are connected to the main board.
May 10 14:54:59 STORAGE2 kernel: sdb: sdb1
May 10 14:54:59 STORAGE2 kernel: sdp: sdp1
May 10 14:57:03 STORAGE2 emhttpd: spinning down /dev/sdq
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdm
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdh
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdg
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdd
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdb
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdf
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdn
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdo
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdi
May 10 15:07:02 STORAGE2 emhttpd: spinning down /dev/sdp
May 10 15:07:32 STORAGE2 emhttpd: spinning down /dev/sdk
May 10 15:07:32 STORAGE2 emhttpd: spinning down /dev/sdl
May 10 15:09:51 STORAGE2 emhttpd: spinning down /dev/sdr
May 10 15:12:04 STORAGE2 emhttpd: spinning down /dev/sdq
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdm
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdh
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdg
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdd
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdb
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdf
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdn
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdo
May 10 15:22:03 STORAGE2 emhttpd: spinning down /dev/sdi
May 10 15:24:52 STORAGE2 emhttpd: spinning down /dev/sdr
May 10 15:25:37 STORAGE2 kernel: sdb: sdb1
May 10 15:25:37 STORAGE2 kernel: sdp: sdp1
May 10 15:27:05 STORAGE2 emhttpd: spinning down /dev/sdq
May 10 15:29:46 STORAGE2 emhttpd: read SMART /dev/sdk
May 10 15:29:46 STORAGE2 emhttpd: read SMART /dev/sdl
May 10 15:39:53 STORAGE2 emhttpd: spinning down /dev/sdr
May 10 15:42:06 STORAGE2 emhttpd: spinning down /dev/sd
-
1 hour ago, SimonF said:
Ok, I don't think hdparm is going to be the solution.
I suspect this may fail also, so we may have to use the -d option, but I am not sure if the code supports that; I will take a look.
smartctl -s standby,now /dev/sg9
smartctl -n never /dev/sg9
Something odd happened: drives 9 & 10 went X on me. Thank the lord for two parity drives. The system is rebuilding; I will try to issue more commands once it is rebuilt and at 100%.
Again, thank you for all the help and suggestions. I am trying to avoid buying a new adapter, and nothing is working with the ones I have. Something changed in 6.9.2 that broke spin down or compatibility for Areca adapters.
I even logged into the adapter web UI, and both have the same config …
-
2 hours ago, SimonF said:
For hdparm -y and -C, can you try using the sg device for the disk?
Looking at your diags, sdj maps to sg9; you may want to run sg_map to confirm this is still the case.
[1:0:0:7] disk Seagate ST10000NM0086-2A R001 /dev/sdj /dev/sg9
state=running queue_depth=32 scsi_level=6 type=0 device_blocked=0 timeout=90
dir: /sys/bus/scsi/devices/1:0:0:7 [/sys/devices/pci0000:00/0000:00:01.0/0000:03:00.0/host1/target1:0:0/1:0:0:7]
Then run:
hdparm -y /dev/sg9
hdparm -C /dev/sg9
We may be able to use sg names.
smartctl -n never /dev/sg9 — does that show power mode?
root@STORAGE2:~# lsscsi -g|grep "Areca"
[1:0:16:0] process Areca RAID controller R001 - /dev/sg10
[8:0:16:0] process Areca RAID controller R001 - /dev/sg19
root@STORAGE2:~# sg_map
/dev/sg0 /dev/sda
/dev/sg1 /dev/sdb
/dev/sg2 /dev/sdc
/dev/sg3 /dev/sdd
/dev/sg4 /dev/sde
/dev/sg5 /dev/sdf
/dev/sg6 /dev/sdg
/dev/sg7 /dev/sdh
/dev/sg8 /dev/sdi
/dev/sg9 /dev/sdj
/dev/sg10
/dev/sg11 /dev/sdk
/dev/sg12 /dev/sdl
/dev/sg13 /dev/sdm
/dev/sg14 /dev/sdn
/dev/sg15 /dev/sdo
/dev/sg16 /dev/sdp
/dev/sg17 /dev/sdq
/dev/sg18 /dev/sdr
/dev/sg19
root@STORAGE2:~# hdparm -y /dev/sg9
/dev/sg9:
issuing standby command
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@STORAGE2:~# hdparm -C /dev/sg9
/dev/sg9:
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: standby
root@STORAGE2:~# smartctl -n never /dev/sg9
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
Device is in ACTIVE mode
SCSI device successfully opened
Use 'smartctl -a' (or '-x') to print SMART (and more) information
root@STORAGE2:~#
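Because the whole exercise above hinges on finding the right sgN for a given sdX, the sg_map pairing can also be resolved in a script. A minimal sketch; `sd_to_sg` is a hypothetical helper, and the sample lines are copied from the sg_map output above (controller-only lines like /dev/sg10 have no block device and simply produce no match):

```shell
#!/bin/sh
# Hypothetical helper: given `sg_map` output, print the /dev/sgN that
# corresponds to a /dev/sdX block device.
sd_to_sg() {
  # $1: text of `sg_map`, $2: /dev/sdX to look up
  printf '%s\n' "$1" | awk -v dev="$2" '$2 == dev { print $1; exit }'
}

# Sample from the session above; in practice use: sd_to_sg "$(sg_map)" /dev/sdj
sample='/dev/sg9  /dev/sdj
/dev/sg10
/dev/sg11  /dev/sdk'

sd_to_sg "$sample" /dev/sdj   # prints /dev/sg9
```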
-
3 hours ago, SimonF said:
If you run
smartctl -s standby,now /dev/sdh
and
smartctl -n stanby /dev/sdh
does it report the device is in standby?
Another thing we can test is whether
hdparm -C and -y work with the generic /dev/sgX device for the disk.
Use sg_map or lsscsi -g to show the sdh -> sgX names.
Thank you for all the help!
root@STORAGE2:~# smartctl -s standby,now /dev/sdj
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
SCSI STANDBY command failed:
SCSI device successfully opened
Use 'smartctl -a' (or '-x') to print SMART (and more) information
root@STORAGE2:~# smartctl -n stanby /dev/sdj
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.35-Unraid] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=======> INVALID ARGUMENT TO -n: stanby
=======> VALID ARGUMENTS ARE: never, sleep[,STATUS[,STATUS2]], standby[,STATUS[,STATUS2]], idle[,STATUS[,STATUS2]] <=======
Use smartctl -h to get a usage summary
root@STORAGE2:~# hdparm -C /dev/sdj
/dev/sdj:
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: standby
root@STORAGE2:~# hdparm -y /dev/sdj
/dev/sdj:
issuing standby command
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@STORAGE2:~# lsscsi -g|grep "Areca"
[1:0:16:0] process Areca RAID controller R001 - /dev/sg10   < Areca 1881i drives 1 - 8 (sdb > sdi) >
[8:0:16:0] process Areca RAID controller R001 - /dev/sg19   < Areca 1881i drives 1 - 8 (sdj > sdq) >
root@STORAGE2:~# smartctl -a -d areca,1,2 /dev/sg19   < device 1 on controller 2 >
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Enterprise Capacity 3.5 HDD
Device Model: ST10000NM0016-1TT101
Serial Number: ZA24S56L
LU WWN Device Id: 5 000c50 0afaeae7f
Firmware Version: SND0
User Capacity: 10,000,831,348,736 bytes [10.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5360
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sat May 7 17:28:13 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Warning! SMART Attribute Data Structure error: invalid SMART checksum.
-
Just now, SimonF said:
Which version did it last work on, 6.8.3? The spindown process changed in 6.9.x.
6.9.1 was the last version of Unraid on which the drives spun down properly.
-
5 hours ago, JorgeB said:
According to diags, both parity drives and the cache are on the onboard SATA; all others are on the RAID controllers, which is very likely the reason they are not spinning down.
Thanks for assisting with this issue.
Drive spin down always worked until an Unraid OS update. The config of the system has not changed since the system was built years ago; something changed in the OS. The parity drives and the other drives not connected to the RAID controllers spin down as they should.
-
6 hours ago, SimonF said:
So you are running the disks via a RAID controller; it may be that the spin-down command is being ignored. Guessing you have the disks set up in JBOD mode?
You have only set up 3 drives and specified the controllers are Areca, from what I can see of the smartctl settings.
[1:0:16:0] process Areca RAID controller R001 - /dev/sg10
state=running queue_depth=32 scsi_level=0 type=3 device_blocked=0 timeout=90
dir: /sys/bus/scsi/devices/1:0:16:0 [/sys/devices/pci0000:00/0000:00:01.0/0000:03:00.0/host1/target1:0:16/1:0:16:0][8:0:16:0] process Areca RAID controller R001 - /dev/sg19
state=running queue_depth=32 scsi_level=0 type=3 device_blocked=0 timeout=90
dir: /sys/bus/scsi/devices/8:0:16:0 [/sys/devices/pci0000:00/0000:00:02.0/0000:02:00.0/host8/target8:0:16/8:0:16:0]Would you also be able to provide a screen grab of the system devices page showing which devices are connect to which controllers.
Are the parity drives on internal SATA ports?
Unraid uses hdparm -y /dev/sdX to spin down devices; what output do you get?
Parity drives are connected to the onboard controller, and the data drives are on Areca 1882 controllers. This config has not changed since the day the system was built.
root@STORAGE2:~# hdparm -y /dev/sdh
/dev/sdh:
issuing standby command
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-
Absolutely. Diags attached. Whatever you need from me to help diagnose the issue, I am willing to put in the time and effort to resolve it.
-
I know this issue is specifically about the parity drives not spinning down, but my system is still suffering from the data drives not spinning down. It is very frustrating. Has there been any progress made on this issue? Everything was working just fine with spin down until an Unraid OS update that is now preventing it.
-
BC, thanks for the update. Glad your issue has been resolved. I have a bit of hope that maybe there is a fix out there for me; I would prefer not to buy a new card or cards to get this to work.
-
Are there any updates on the spindown issue? I have upgraded to the latest RC, and my drives still continue to spin all day, every day.
-
I removed the spindown plugin, as it did not help anyway.
-
15 minutes ago, doron said:
Thanks. Nope, not SAS protocol (admittedly it'd have been weird if it were), so the SAS Spindown plugin won't help.
Still, does:
sg_start -rp3 /dev/sdf
sdparm -C sense /dev/sdf
do anything helpful?
root@STORAGE2:~# sdparm -ip di_target /dev/sdf
/dev/sdf: Seagate ST10000NM0016-1T R001
Device identification VPD page:
root@STORAGE2:~# sdparm -C sense /dev/sdf
/dev/sdf: Seagate ST10000NM0016-1T R001
Decode response as sense data:
Fixed format, current; Sense key: Illegal Request
Additional sense: Invalid command operation code
-
1 minute ago, doron said:
Yes, it seems like this controller may be presenting SATA drives as SAS protocol. @chris0583, please paste the result of the sdparm command I mentioned above, which will determine it clearly:
sdparm -ip di_target /dev/sdf
root@STORAGE2:~# sdparm -ip di_target /dev/sdf
/dev/sdf: Seagate ST10000NM0016-1T R001
Device identification VPD page:
-
4 minutes ago, chris0583 said:
Here ya go
root@STORAGE2:~# smartctl -id test /dev/sdf
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.8-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
/dev/sdf: Device of type 'scsi' [SCSI] detected
/dev/sdf: Device of type 'scsi' [SCSI] opened
And just to give some more info on my setup (which has not changed since the day I built the server over 2 years ago):
root@STORAGE2:~# lsscsi -g|grep "Areca"
[1:0:16:0] process Areca RAID controller R001 - /dev/sg9    < Areca 1881i drives 1 - 8 (sdb > sdi) >
[8:0:16:0] process Areca RAID controller R001 - /dev/sg19   < Areca 1881i drives 1 - 8 (sdj > sdq) >
root@STORAGE2:~# smartctl -a -d areca,5,1 /dev/sg9   < device 5 on controller 1 /dev/sdf >
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.8-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate Enterprise Capacity 3.5 HDD
Device Model: ST10000NM0016-1TT101
Serial Number: ZA22MCLK
LU WWN Device Id: 5 000c50 0a49e2c70
Firmware Version: SNCC
User Capacity: 10,000,831,348,736 bytes [10.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Aug 24 16:01:27 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 575) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 858) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x50bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 078 064 044 Pre-fail Always - 63226944
3 Spin_Up_Time 0x0003 093 091 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 094 094 020 Old_age Always - 6360
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 095 060 045 Pre-fail Always - 3274812715
9 Power_On_Hours 0x0032 068 068 000 Old_age Always - 28268 (58 45 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 180
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 0 0
189 High_Fly_Writes 0x003a 008 008 000 Old_age Always - 92
190 Airflow_Temperature_Cel 0x0022 061 044 040 Old_age Always - 39 (Min/Max 33/52)
191 G-Sense_Error_Rate 0x0032 089 089 000 Old_age Always - 22800
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 102
193 Load_Cycle_Count 0x0032 061 061 000 Old_age Always - 79774
194 Temperature_Celsius 0x0022 039 056 000 Old_age Always - 39 (0 23 0 0 0)
195 Hardware_ECC_Recovered 0x001a 009 003 000 Old_age Always - 63226944
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 12558h+46m+30.045s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 284713550384
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 1800659171012

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 65 -
# 2 Short offline Completed without error 00% 39 -
# 3 Short offline Completed without error 00% 31 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
-
16 minutes ago, SimonF said:
Can you do a test with smartctl to see what type of device it detects?
Which version did spin down work on previously?
root@computenode:/usr/local/emhttp/plugins/snapshots# smartctl -id test /dev/sdf
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.12-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
/dev/sde: Device of type 'scsi' [SCSI] detected
/dev/sde [SAT]: Device open changed type from 'scsi' to 'sat'
/dev/sde [SAT]: Device of type 'sat' [ATA] opened
Here ya go
root@STORAGE2:~# smartctl -id test /dev/sdf
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.8-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
/dev/sdf: Device of type 'scsi' [SCSI] detected
/dev/sdf: Device of type 'scsi' [SCSI] opened
-
1 hour ago, SimonF said:
What output do you get for the following
smartctl -in never /dev/sdf
smartctl -is standby,now /dev/sdf
root@STORAGE2:~# smartctl -in never /dev/sdf
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.8-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: Seagate
Product: ST10000NM0016-1T
Revision: R001
Compliance: SPC-3
User Capacity: 10,000,831,348,736 bytes [10.0 TB]
Logical block size: 512 bytes
Rotation Rate: 10000 rpm
Logical Unit id: 0x001b4d20611aecf9
Serial number: ZA22MCLK
Device type: disk
Transport protocol: Fibre channel (FCP-2)
Local Time is: Tue Aug 24 14:42:34 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported
Power mode is: ACTIVE
root@STORAGE2:~# smartctl -is standby,now /dev/sdf
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.8-Unraid] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: Seagate
Product: ST10000NM0016-1T
Revision: R001
Compliance: SPC-3
User Capacity: 10,000,831,348,736 bytes [10.0 TB]
Logical block size: 512 bytes
Rotation Rate: 10000 rpm
Logical Unit id: 0x001b4d20611aecf9
Serial number: ZA22MCLK
Device type: disk
Transport protocol: Fibre channel (FCP-2)
Local Time is: Tue Aug 24 14:42:46 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported

Device placed in STANDBY mode
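Notably, smartctl reports the state as a "Power mode is:" line here, where hdparm only returned SG_IO sense-data errors through this controller. A script can read the state from that line; a minimal sketch, with the sample text copied from the output above (`power_mode` is an illustrative name, not a smartctl feature):

```shell
#!/bin/sh
# Illustrative parser: extract the power state from `smartctl -i -n never` output.
power_mode() {
  # $1: smartctl output; prints e.g. ACTIVE or STANDBY
  printf '%s\n' "$1" | sed -n 's/^Power mode is:[[:space:]]*//p'
}

# Sample from the output above; in practice use: power_mode "$(smartctl -in never /dev/sdf)"
sample='SMART support is: Enabled
Power mode is:    ACTIVE'

power_mode "$sample"   # prints ACTIVE
```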
-
On 8/18/2021 at 6:48 PM, doron said:
Your drives are SATA. The SAS Spindown plugin will not do anything for you, unfortunately (will do no harm either) - it deals exclusively with SAS drives.
I'm assuming the screenshots you attached are from a 6.10.0-rc1 system? Please correct me if I'm wrong, this is an important data point.
If the assumption above is correct, then
(a) This seems to be the exact same issue that started at 6.9.2; apparently it remains in 6.10.0-rc1.
(b) Apparently this does not have to do with the mpt2sas module (its version in 6.9.1 and 6.9.2 is the same, while 6.10.0-rc1 has a newer version; the problem did not exist in 6.9.1).
(c) We need to look for another change that happened between kernels 5.10.21 and 5.10.28.
Meanwhile, would you mind selecting one of the drives that does not have any activity against it, and on an Unraid command shell, issue these commands (replace /dev/sdX with the drive you selected):
hdparm -y /dev/sdX
hdparm -C /dev/sdX
sleep 1
hdparm -C /dev/sdX
(yes, the last two hdparm commands are identical)
and post the output?
I'm assuming the screenshots you attached are from a 6.10.0-rc1 system? Please correct me if I'm wrong, this is an important data point.
You are Correct 6.10.0-rc1
Your drives are SATA. The SAS Spindown plugin will not do anything for you, unfortunately (will do no harm either) - it deals exclusively with SAS drives.
2 parity drives are connected to the onboard SATA ports (these drives spin down), as is the cache drive.
The rest of the drives are connected to two (2) Areca 1882i's in JBOD mode. Those drives have stopped spinning down.
Meanwhile, would you mind selecting one of the drives that does not have any activity against it, and on an Unraid command shell, issue these commands (replace /dev/sdX with the drive you selected):
root@STORAGE2:~# hdparm -y /dev/sdf
/dev/sdf:
issuing standby command
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@STORAGE2:~# hdparm -C /dev/sdf
/dev/sdf:
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: standby
root@STORAGE2:~# sleep 1
root@STORAGE2:~# hdparm -C /dev/sdf
/dev/sdf:
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
drive state is: standby
root@STORAGE2:~#
-
On 8/18/2021 at 12:59 PM, SimonF said:
I have re-opened the thread, as I forgot some were having issues with SATA also.
Can you provide the output from these commands?
/usr/local/sbin/sdspin /dev/sdx down
echo $?
/usr/local/sbin/sdspin /dev/sdx
echo $?
Once your parity check is complete
Sorry for the late reply; I was traveling.
I picked one drive on each of the two Areca 1882i controllers in the system.
root@STORAGE2:~# /usr/local/sbin/sdspin /dev/sdd down
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@STORAGE2:~# echo $?
0
root@STORAGE2:~# /usr/local/sbin/sdspin /dev/sdd
root@STORAGE2:~# echo $?
0
root@STORAGE2:~# /usr/local/sbin/sdspin /dev/sdm down
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@STORAGE2:~# echo $?
0
root@STORAGE2:~# /usr/local/sbin/sdspin /dev/sdm
root@STORAGE2:~# echo $?
0
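Worth noting in the output above: sdspin exits 0 even though SG_IO printed sense data, so `echo $?` alone does not prove the spin-down worked. A hedged sketch of a stricter check that also scans the captured output for the sense-data message; `spun_down_cleanly` is a hypothetical helper, not part of Unraid's sdspin:

```shell
#!/bin/sh
# Hypothetical stricter check: treat any SG_IO sense-data message in the
# command's output as a failure, even when the exit status is 0.
spun_down_cleanly() {
  # $1: exit status of the spin-down command, $2: its captured stdout+stderr
  [ "$1" -eq 0 ] || return 1
  case "$2" in
    *"bad/missing sense data"*) return 1 ;;
  esac
  return 0
}

# Example with the output seen above: exit status 0, but sense data present
out='SG_IO: bad/missing sense data, sb[]: f0 00 05 ...'
if spun_down_cleanly 0 "$out"; then echo ok; else echo suspect; fi   # prints suspect
```

In practice the output would be captured with something like `out=$(/usr/local/sbin/sdspin /dev/sdd down 2>&1)` before calling the check.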
root@STORAGE2:~#
-
I loaded the SAS plug-in, and my drives connected to my Areca 1882s are NOT spinning down. The two parity drives connected to the main board are spinning down like they always did. (The parity drives were never the problem for me.)
I bounced the box yesterday to perform a fresh load of the plugins, and the system went into a parity check.
I paused it and waited, and no drives spun down.
I tried to manually spin them down … not working. I resumed the parity check; it is still running. I will report when it is completed.
As I saw this topic is marked "solved", should I/we start a new one to continue the problem-resolution discussion?
[6.9.2] Parity Drive will not spin down via GUI or Schedule
in Stable Releases
I did see that Areca has a "driver" for 6.10. Does this driver differ from what is in the main distro?