
[Plugin] Spin Down SAS Drives


doron


I've got WD SATA and SAS drives along with a Dell-branded Hitachi SAS drive, and they all go to sleep on the HBA330 in HBA mode, except the 3 x Seagate 6TB SATA IronWolfs. It would be interesting to see whether the HBA card would act differently with generic firmware.

Link to comment

One more strange thing happened today. As I promised, I bought SAS cables to change controllers, but it seems my disks have been sleeping for over 2 days. I checked the iDRAC power graph, which shows the Dell was waking up disks for 1.5 days after the migration, and then it stopped. I did nothing because of work; I just turned the server on and let it be.

 

[attached image: iDRAC power graph]

 

Does anyone know exactly what these HBA330 controllers do in the background? Do they check SMART or run other tests on disks they consider new? Right now everything is working like it should, so I have no complaints.

Edited by MoherPower
  • 3 weeks later...

Another update.

 

Until yesterday everything was OK; all disks slept like they should. Yesterday I changed the hardware configuration of my Dell (took out 1 processor and its RAM), so I had to shut down the server and pull all the power plugs. After the restart, the whole process of waking up and shutting down disks started again. I also noticed that my disks are making noise just like they do when they read or write data, but in Unraid nothing is being written (I checked the Unraid reads/writes and the numbers are not growing).

[attached image]

 

I think I'll change the RAID controller and report back after that with new info.

 

Here is the power graph from iDRAC. (I wonder why I lost all power usage data from before the hardware change? The whole change took me 30 minutes.)

[attached image: iDRAC power graph]

Edited by MoherPower

I was just reading the description of this plugin and I'm wondering whether it's even needed on Unraid 6.12.

from the description:

Quote

For Unraid version 6.9.0 and up, the built-in sdspin script is enhanced with SAS support.

That indicates to me that spin down is part of the core OS and the plugin is no longer needed. Am I missing something, or is there other functionality I'm just not making use of?

26 minutes ago, Sivivatu said:

I was just reading the description of this plugin and I'm wondering whether it's even needed on Unraid 6.12.

from the description:

That indicates to me that spin down is part of the core OS and the plugin is no longer needed. Am I missing something, or is there other functionality I'm just not making use of?

The base OS does not support spinning down SAS drives; it only supports SATA.





For Unraid version 6.9.0 and up, the built-in sdspin script is enhanced with SAS support.



That text means that the plugin enhances the built-in sdspin function, which handles SATA drives spin down, with SAS support. sdspin was introduced around version 6.9.0 of Unraid.
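For reference, sdspin's exit-code convention (0 = spun up / success, 1 = failure, 2 = spun down, per the header of the script itself) makes status checks easy to script. This tiny helper is illustrative only, not part of the plugin:

```shell
# Illustrative only: map sdspin's documented exit codes to labels.
# Convention per the sdspin header: 0 = up/success, 1 = failure, 2 = spun down.
spin_state() {
  case "$1" in
    0) echo "up" ;;
    2) echo "down" ;;
    *) echo "error" ;;
  esac
}

# Typical use on a live system:
#   sdspin sdb status; spin_state $?
```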

Sent from my tracking device using Tapatalk
10 hours ago, doron said:

That text means that the plugin enhances the built-in sdspin function, which handles SATA drives spin down, with SAS support. sdspin was introduced around version 6.9.0 of Unraid.

Sent from my tracking device using Tapatalk

 

 

Ah, that makes sense and clears up my confusion. Thanks heaps!

 

  • 1 month later...
  • 3 weeks later...
  • 4 weeks later...

Trying to figure out why, with 2 identical-model Seagate SAS drives (ST10000NM0096, NetApp X377, 10TB), one drive spins down and the other doesn't. Has anyone ever encountered something like this?

 

Running:

sdparm --command=sense /dev/sdy

/dev/sdy: NETAPP X377_STATE10TA07 NA00  (This drive spins down without issues)

 

sdparm --command=sense /dev/sdm

additional sense: Failure prediction threshold exceeded (This is the drive that will not spin down)

 

Running: sdparm --flexible -6 -v -S -p po for both drives yields the same output below:

Power condition [0x1a] mode page [PS=1]:
  PM_BG         0  [cha: n, def:  0, sav:  0]
  STANDBY_Y     0  [cha: y, def:  0, sav:  0]
  IDLE_C        0  [cha: y, def:  0, sav:  0]
  IDLE_B        0  [cha: y, def:  0, sav:  0]
  IDLE_A        0  [cha: y, def:  0, sav:  0]
  STANDBY_Z     0  [cha: y, def:  0, sav:  0]
  IACT          1  [cha: y, def:  1, sav:  1]
  SZCT          9000  [cha: y, def:9000, sav:9000]
  IBCT          1200  [cha: y, def:1200, sav:1200]
  ICCT          6000  [cha: y, def:6000, sav:6000]
  SYCT          6000  [cha: y, def:6000, sav:6000]
  CCF_IDLE      1  [cha: y, def:  1, sav:  1]
  CCF_STAND     1  [cha: y, def:  1, sav:  1]
  CCF_STOPP     2  [cha: y, def:  2, sav:  2]

 

Running: smartctl -i returns identical info except for attributes unique to each drive like logical unit and serial number.

 

=== START OF INFORMATION SECTION ===
Vendor:               NETAPP
Product:              X377_STATE10TA07
Revision:             NA00
Compliance:           SPC-4
User Capacity:        10,000,831,348,736 bytes [10.0 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000c50094a753f3
Serial number:        <<different for each drive of course>>
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Fri Jul 26 08:18:17 2024 PDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled
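Given that only the non-spinning drive reports "Failure prediction threshold exceeded", one way to triage a larger set of drives is to classify each device's sense output. A sketch; the helper name and the loop are made up, not plugin code:

```shell
# Sketch: classify the text returned by `sdparm --command=sense <dev>`.
classify_sense() {
  if grep -qi "failure prediction threshold exceeded" <<< "$1" ; then
    echo "predictive-failure"   # drive is flagging impending failure
  elif grep -qi "standby condition activated" <<< "$1" ; then
    echo "standby"              # drive reports it is spun down
  else
    echo "ok"
  fi
}

# Hypothetical loop over two devices:
#   for d in /dev/sdm /dev/sdy ; do
#     echo "$d: $(classify_sense "$(sdparm --command=sense $d 2>&1)")"
#   done
```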

 

 

Edited by timc6896
  • 5 weeks later...

So I am having issues with my Seagate Exos 16 TB drives (ST16000NM004J).
I can manually spin these down and they stay down for ages, but as soon as I set them to spin down automatically after an hour, they spin down and then come straight back up again.

Has anyone else had any issues with this in the past?

1 hour ago, InvaderZim21 said:

So I am having issues with my Seagate Exos 16 TB drives (ST16000NM004J).
I can manually spin these down and they stay down for ages, but as soon as I set them to spin down automatically after an hour, they spin down and then come straight back up again.

Has anyone else had any issues with this in the past?

Please post some tech details about the issue, at least a syslog excerpt from around the time of auto spin down/up, but much preferably, diagnostics.


Diagnostics attached! 

 

I'm having an issue where my Toshiba MG09 drives will occasionally count up read errors on spindown.

 

This has happened three times now, where I'll get a bundle of read errors, and it has only happened with these drives so far. The first time it happened was with a single 12 TB drive that I assumed was just defective and swapped for a Seagate of the same capacity. 

 

A couple of months later I picked up three 18 TB drives of the same model, two for parity and one for data, to start upgrading my array disk sizes. A week after installation it happened with the one that was assigned to data (two months ago). It had clear SMART data so I ran it through another preclear cycle to make sure it was actually working okay before reassigning it to the array. All has been well since then. 

 

This morning I awoke to both parity disks in a failed state and both showing 80 read errors. 

 

It may bear mentioning that the first time this occurred was with an LSI 9300-16i HBA in IT mode that has since been replaced with an integrated Intel JBOD-only SAS HBA. I can't recall offhand which LSI chipset is in the new HBA, but it has no RAID capability. While it's certainly possible that both HBAs had issues, it's unlikely that both would only affect a specific brand of drive. 

 

Thanks in advance for your help! 

 

glizzyxl-diagnostics-20240824-0612.zip


Not sure if this is the correct place to ask but I am having some issues with spinning down my disks (all SAS).

 

Unraid version: 6.12.13 (servername = maroon)

Spin Down SAS Drive Plugin version: 2024.02.18 

 

While manually spinning down my disks (Unraid GUI -> Main -> click the green circle per disk), some disks spin down fine (sdg, sdn), but I always (100% repro) get this error on Disk 6 (sde). It takes a while (~15-20 seconds) and then the log shows the kernel errors below. It also spins up all the previously spun-down drives and does SMART checks -- is this normal/expected?

 

What are those kernel errors?

The only thing I can think of is that Disk 6 is my ZFS pool backup for appconfig -- is that preventing it from spinning down?

 

here is the syslog:

Aug 26 09:33:22 maroon emhttpd: spinning down /dev/sdg
Aug 26 09:33:22 maroon SAS Assist v2024.02.18: Spinning down device /dev/sdg
Aug 26 09:33:41 maroon emhttpd: spinning down /dev/sdn
Aug 26 09:33:41 maroon SAS Assist v2024.02.18: Spinning down device /dev/sdn
Aug 26 09:33:51 maroon emhttpd: spinning down /dev/sde
Aug 26 09:33:51 maroon SAS Assist v2024.02.18: Spinning down device /dev/sde
Aug 26 09:34:08 maroon emhttpd: read SMART /dev/sde
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#764 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=DRIVER_OK cmd_age=10s
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#764 CDB: opcode=0x28 28 00 37 bc e6 9a 00 00 05 00
Aug 26 09:34:12 maroon kernel: I/O error, dev sde, sector 7481013456 op 0x0:(READ) flags 0x0 phys_seg 5 prio class 2
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013392
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013400
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013408
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013416
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013424
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#763 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=DRIVER_OK cmd_age=10s
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#763 CDB: opcode=0x28 28 00 37 bc e6 90 00 00 05 00
Aug 26 09:34:12 maroon kernel: I/O error, dev sde, sector 7481013376 op 0x0:(READ) flags 0x0 phys_seg 5 prio class 2
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013312
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013320
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013328
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013336
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013344
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#762 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=DRIVER_OK cmd_age=10s
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#762 CDB: opcode=0x28 28 00 37 bc e6 89 00 00 05 00
Aug 26 09:34:12 maroon kernel: I/O error, dev sde, sector 7481013320 op 0x0:(READ) flags 0x0 phys_seg 5 prio class 2
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013256
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013264
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013272
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013280
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013288
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#761 UNKNOWN(0x2003) Result: hostbyte=0x05 driverbyte=DRIVER_OK cmd_age=10s
Aug 26 09:34:12 maroon kernel: sd 1:1:14:0: [sde] tag#761 CDB: opcode=0x28 28 00 37 bc e6 6c 00 00 05 00
Aug 26 09:34:12 maroon kernel: I/O error, dev sde, sector 7481013088 op 0x0:(READ) flags 0x0 phys_seg 5 prio class 2
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013024
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013032
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013040
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013048
Aug 26 09:34:12 maroon kernel: md: disk6 read error, sector=7481013056
Aug 26 09:34:23 maroon emhttpd: read SMART /dev/sdg
Aug 26 09:34:23 maroon emhttpd: read SMART /dev/sdn
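When comparing incidents like this, it can help to tally the md read-error lines per disk from a saved syslog excerpt. A small sketch; the function name is made up:

```shell
# Tally "md: diskN read error" kernel lines per disk in a saved syslog file.
count_md_read_errors() {
  grep -o 'md: disk[0-9]\+ read error' "$1" | sort | uniq -c
}

# e.g.: count_md_read_errors /var/log/syslog
```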

 

  • 2 weeks later...
6 hours ago, jmztaylor said:

I added a controller and sas drives to my server.  Installed the plugin but the sas drives still refuse to spin down.  I don't see anything in syslog unless I missed something.  Attached is diagnostics.

tower-diagnostics-20240909-1033.zip 216.58 kB · 0 downloads

Can you please send the contents of the files:

/usr/local/emhttp/plugins/sas-spindown/drive-types
/usr/local/sbin/sdspin

and also, run the command:

/usr/local/emhttp/plugins/sas-spindown/sas-util

and send the output you get, and also the content of the resulting file:

/tmp/sas-util.out

 

On 8/26/2024 at 12:40 PM, tone said:

Not sure if this is the correct place to ask but I am having some issues with spinning down my disks (all SAS).

 

Unraid version: 6.12.13 (servername = maroon)

Spin Down SAS Drive Plugin version: 2024.02.18 

 

While manually spinning down my disks (Unraid GUI -> Main -> click the green circle per disk), some disks spin down fine (sdg, sdn), but I always (100% repro) get this error on Disk 6 (sde). It takes a while (~15-20 seconds) and then the log shows the kernel errors below. It also spins up all the previously spun-down drives and does SMART checks -- is this normal/expected?

 

What are those kernel errors?

The only thing I can think of is that Disk 6 is my ZFS pool backup for appconfig -- is that preventing it from spinning down?

 

here is the syslog: [snipped; quoted in full earlier in the thread]
 

 

I'm having a similar problem: I'm logging read errors on or immediately after spindown, and in my case it's causing Unraid to disable the drives. For now I just have those three drives set to not spin down. Hoping to get some feedback.

16 hours ago, doron said:

Can you please send the contents of the files:

/usr/local/emhttp/plugins/sas-spindown/drive-types
/usr/local/sbin/sdspin

and also, run the command:

/usr/local/emhttp/plugins/sas-spindown/sas-util

and send the output you get, and also the content of the resulting file:

/tmp/sas-util.out

 

 

/usr/local/emhttp/plugins/sas-spindown/drive_types

#!/bin/bash

# Automatically generated, Mon Sep  9 09:41:48 CDT 2024

declare -A DRIVE_TYPE
DRIVE_TYPE[sda]=OTHER
DRIVE_TYPE[sdb]=SAS
DRIVE_TYPE[sdc]=SAS
DRIVE_TYPE[sdd]=SAS
DRIVE_TYPE[sde]=OTHER
DRIVE_TYPE[sdf]=OTHER
DRIVE_TYPE[sdg]=OTHER
DRIVE_TYPE[sdh]=OTHER
DRIVE_TYPE[sdi]=OTHER

 

 

/usr/local/sbin/sdspin

#!/bin/bash
#
# Deal with spin up/down status of HDDs
#
# This script is initiated from emhttpd, like so:
#
#    sdspin <device> [up | down | status ]
#
# "device" is the HDD rdev name, such as "sdd".
#
#  up == Spin the drive up
#  down == Spin the drive down
#  status == return the current status via rc
#
# Default (if no $2) is "status".

# Exit code:
#   0 - Success (if up/down), device spun up (if status)
#   1 - Failure
#   2 - Device spun down (if status)

# Spin down/up SAS drives plugin
# v2024.02.18
#
# (c) 2019-2024 @doron - CC BY-SA 4.0

. /usr/local/emhttp/plugins/sas-spindown/functions

RDEVNAME=/dev/${1#'/dev/'}      # So that we can be called with either "sdk" or "/dev/sdk"

Hdparm () {

  OUTPUT=$($HDPARM $1 $RDEVNAME 2>&1)
  if [[ $? != 0 || ${OUTPUT,,} =~ "bad/missing sense" ]] ; then
    RC=1
  fi
  $DEBUG && { Log "debug: $HDPARM $1 $RDEVNAME"
              Log "debug: $OUTPUT" ; }

}

RC=0
case ${2,,} in

  "up")

    if IsSAS $RDEVNAME ; then

      $DEBUG && Log "debug: $SG_START -rp1 $RDEVNAME"
      $SG_START -rp1 $RDEVNAME > /dev/null ||
        RC=1

    else

      Hdparm -S0

    fi
    ;;

  "down")

    if IsSAS $RDEVNAME ; then

      if ! IsExcluded $RDEVNAME ; then
        if IsRotational $RDEVNAME ; then

          Log "Spinning down device $RDEVNAME"
          $DEBUG && Log "debug: $SG_START -rp3 $RDEVNAME"
          $SG_START -rp3 $RDEVNAME > /dev/null ||
            RC=1

        fi

      else

        Log "Device $RDEVNAME cannot be spun down - excluded"
        RC=1

      fi

    else  # Not SAS

      Hdparm -y

    fi
    ;;

  "status" | "")

    if IsSAS $RDEVNAME ; then

      OUTPUT=$($SDPARM -C sense $RDEVNAME 2>&1)
      if [[ $? != 0 ]] ; then
        RC=1
      elif [[ ${OUTPUT,,} =~ "standby condition activated" ]] ; then
        RC=2
      fi
      $DEBUG && { Log "debug: $SDPARM -C sense $RDEVNAME"
                  Log "debug: $OUTPUT" ; }

    else

      Hdparm -C
      if [[ $RC == 0 &&
            ${OUTPUT,,} =~ "standby" &&
            ! ${OUTPUT,,} =~ "bad/missing sense" ]] ; then
        RC=2
      fi

    fi
    ;;

  *)
    Log "Invalid op code $2"
    RC=1
    ;;

esac

$DEBUG && Log "debug: exit $RC"
exit $RC
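Two bash idioms in the script above are worth a note: ${1#'/dev/'} strips an optional /dev/ prefix so the script accepts either "sdk" or "/dev/sdk", and ${2,,} lowercases the operation word. Shown in isolation:

```shell
# The parameter expansions sdspin relies on, in isolation.
dev="/dev/sdk"
echo "/dev/${dev#'/dev/'}"   # prefix stripped, then re-added: /dev/sdk
dev="sdk"
echo "/dev/${dev#'/dev/'}"   # no prefix to strip, same result: /dev/sdk
op="Down"
echo "${op,,}"               # lowercased operation word: down
```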


/usr/local/emhttp/plugins/sas-spindown/sas-util
 

SAS Spindown Utility (v20240218.01)



sdb     | HUC101890CSS200       | 1000:0073:1014:040d   |  n/a  |
sdc     | HUC10189 CLAR900      | 1000:0073:1014:040d   |  n/a  |
sdd     | HUC10189 CLAR900      | 1000:0073:1014:040d   |  n/a  |

Run completed. The output is at /tmp/sas-util-out.


/tmp/sas-util-out
 

{
  "utility-run": {
    "date": "20240910-09:27 CDT",
    "version": "20240218.01",
    "Unraid version": "7.0.0-beta.2",
    "message": "",
    "drives": [
      {
        "drive": {
          "model": "HUC101890CSS200",
          "sdparm-i": "/dev/sdb: HGST HUC101890CSS200 A3F0|Device identification VPD page:| Addressed logical unit:| designator type: NAA, code set: Binary| 0x5000cca0360472b0| Target port:|designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca0360472b1| designator type: Relative target port, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| Relative target port: 0x1| Target device that contains addressed lu:| designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca0360472b3| designator type: SCSI name string, code set: UTF-8| SCSI name string:| naa.5000CCA0360472B3|RC=0",
          "controller-id": "1000:0073:1014:040d",
          "controller-slot": "02:00.2"
        }
      },
      {
        "drive": {
          "model": "HUC10189 CLAR900",
          "sdparm-i": "/dev/sdc: HITACHI HUC10189 CLAR900 L7SS|Device identification VPD page:| Addressed logical unit:| designator type: NAA, code set: Binary| 0x5000cca07f0f8578| Target port:| designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca07f0f8579| designator type: Relative target port, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| Relative target port: 0x1| Target device that contains addressed lu:| designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca07f0f857b| designator type: SCSI name string, code set: UTF-8| SCSI name string:| naa.5000CCA07F0F857B|RC=0",
          "controller-id": "1000:0073:1014:040d",
          "controller-slot": "02:00.2"
        }
      },
      {
        "drive": {
          "model": "HUC10189 CLAR900",
          "sdparm-i": "/dev/sdd: HITACHI HUC10189 CLAR900 L7SS|Device identification VPD page:| Addressed logical unit:| designator type: NAA, code set: Binary| 0x5000cca07f19124c| Target port:| designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca07f19124d| designator type: Relative target port, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| Relative target port: 0x1| Target device that contains addressed lu:| designator type: NAA, code set: Binary| transport: Serial Attached SCSI Protocol (SPL-4)| 0x5000cca07f19124f| designator type: SCSI name string, code set: UTF-8| SCSI name string:| naa.5000CCA07F19124F|RC=0",
          "controller-id": "1000:0073:1014:040d",
          "controller-slot": "02:00.2"
        }
      }
    ]
  }
}
{
  "controllers": [
    {
      "controller": {
        "Slot": "02:00.2",
        "Class": "PCI bridge [0604]",
        "Vendor": "Advanced Micro Devices, Inc. [AMD] [1022]",
        "Device": "500 Series Chipset Switch Upstream Port [43e9]",
        "SVendor": "ASMedia Technology Inc. [1b21]",
        "SDevice": "Device [0201]",
        "ProgIf": "00",
        "IOMMUGroup": "16"
      }
    }
  ]
}

 

