
Kernel patch to use drives on a P410i controller in "HBA" mode



  • 1 year later...

Hello, this is exactly what I'd like to do: use my P410i in HBA mode with UnRaid. I managed to get the card into HBA mode, but UnRaid does not (yet) see the disks, probably because I didn't patch the kernel as described in https://github.com/im-0/hpsahba

 

"However, to get system actually see and use disks in HBA mode, few kernel patches required" 

 

I have tried to do this, but I'm stuck as the tutorials are written for Ubuntu.
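
In case it helps anyone stuck at the same point: applying such a patch to a kernel source tree works the same on any distribution. A generic sketch (the paths and the patch file name below are placeholders, not the exact files from the repo's kernel/ directory):

# Generic kernel-patch sketch; adjust paths to your setup:
cd /usr/src/linux                                     # unpacked kernel source tree
patch -p1 < /path/to/hpsahba/kernel/<version>.patch   # apply the matching patch
make M=drivers/scsi modules                           # rebuild the scsi drivers, including hpsa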

Would be great to get some help here.

 

Rgds,

 

Michael


Changing the mode worked; I was able to do that. But now the unRaid kernel has to be patched, and I don't know how. It would be nice if someone could guide me.

Having this patch implemented as standard in unRaid would be best, as it would then survive updates/upgrades of the platform.

1 hour ago, M3350 said:

Changing the mode worked; I was able to do that. But now the unRaid kernel has to be patched, and I don't know how. It would be nice if someone could guide me.

Having this patch implemented as standard in unRaid would be best, as it would then survive updates/upgrades of the platform.

I can look into this, but keep in mind that the GitHub repo also states this:

Quote

CAUTION: This tool will destroy your data and may damage your hardware!

so I don't think Limetech will integrate this into Unraid.

 

 

What does this tool do exactly? (I haven't had the time to read everything here.)

Isn't this changed on the controller itself? On my HP ProLiant Gen6 I had to boot into an "old" live DVD and change the mode of the controller; this change was also persistent after updates.


The tool itself is not the issue here. It's a one-time effort just to put the controller in HBA/IT mode. But then, obviously, you want unRaid to access the disks directly. The standard hpsa driver module needs a small patch to make that possible. Applying this little patch is one thing, but imagine what would happen if you upgraded your box one day and, in doing so, your patched hpsa module got replaced by the standard hpsa module, making your disks inaccessible again. You would then have to re-apply the patch manually. So it would be nice to have some kind of tickbox: "hpsa patch active".
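
As an aside, it's easy to check whether the currently loaded kernel carries the patch: the flag is a module parameter, so it should show up in modinfo (a small sketch; the parameter name is taken from the hpsahba instructions):

# On a patched build the parameter is listed; on a stock Unraid kernel
# this grep should print nothing:
modinfo -p hpsa | grep hpsa_use_nvram_hba_flag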

 

FYI: Replacing the P410i with another card is not an option because it's on the motherboard of my blade server and I have a 2200sb storage blade connected to it.


Only on the P410i indeed. The P420 supports HBA mode as standard. Although the P410i is an old, cranky piece of hardware, I can't get rid of it because it's built into my blade servers (BL460 and BL465 G7) and 2200sb storage blade.

I know I can configure every single disk as its own RAID 0 set and have the P410i present these to the OS, but then I'd have to reboot the NAS each time a disk is added. I want the disks to be directly accessible by the OS. I know it can be done; the OP Mathias Nielsen confirmed that in August 2019 (see above). But after two days of trying, I'd appreciate some help with the kernel-patch part of the solution 🙂


Thumbs up to this change. It would be nice to have it implemented, even as a plugin.

The mode can easily be changed with an Ubuntu Desktop Live USB, but the driver patch needs to be done within UnRAID.

 

The patch is from 2018 but was sadly rejected.

 

https://patchwork.kernel.org/project/linux-scsi/patch/[email protected]/

 

root@Tower:~# lsscsi 
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda 
[1:0:0:0]    storage HP       P410i            6.64  -

 

root@Tower:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 11.4M  1 loop /lib/modules
loop1    7:1    0 18.6M  1 loop /lib/firmware
sda      8:0    1  7.5G  0 disk 
└─sda1   8:1    1  7.5G  0 part /boot

 

2 hours ago, grugo said:

Thumbs up to this change. It would be nice to have it implemented, even as a plugin.

The mode can easily be changed with an Ubuntu Desktop Live USB, but the driver patch needs to be done within UnRAID.

 

The patch is from 2018 but was sadly rejected.

 

https://patchwork.kernel.org/project/linux-scsi/patch/[email protected]/

You can integrate any patch, as simply as possible, with the Unraid-Kernel-Helper into, let's say, Unraid 6.9.0-RC2, but please keep in mind that this patch has to be compatible with kernel 5.x.

(You have to turn on User Patches)

 

 

58 minutes ago, ich777 said:

You can integrate any patch, as simply as possible, with the Unraid-Kernel-Helper into, let's say, Unraid 6.9.0-RC2, but please keep in mind that this patch has to be compatible with kernel 5.x.

(You have to turn on User Patches)

 

 

 

In the repo https://github.com/im-0/hpsahba there are instructions for kernels 4.x and 5.x, so there's no problem for 6.8.2, 6.8.3, 6.9.0-rc1, and 6.9.0-rc2.

 

The problem with that approach is that you need Docker, and because the driver is not patched, the array won't start since no disks are listed. Am I wrong @ich777?


OK, I made it work.

Sadly, I had to pop out one of the drives, use a drive-to-USB adapter, and create the array in order to use Docker.

After reading how that plugin/Docker works, it was quite easy.

Once you have it working, create a file in /etc/modprobe.d/ called hpsa.conf and place the following in it:

options hpsa hpsa_use_nvram_hba_flag=1

With that, the parameter will be applied whenever the module loads.

So, after that, 

modprobe -r hpsa
modprobe hpsa

and you should see the drives.
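
One caveat, assuming Unraid still runs the go script from the flash drive at startup: the root filesystem lives in RAM, so /etc/modprobe.d/hpsa.conf will be gone after a reboot. Recreating it from /boot/config/go is one way to make it stick:

# Sketch: re-create the modprobe option on every boot via the go script
echo 'echo "options hpsa hpsa_use_nvram_hba_flag=1" > /etc/modprobe.d/hpsa.conf' >> /boot/config/go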

 

 

Thanks a lot @ich777!

25 minutes ago, sota said:

For those doing this, do me a favor please: dump the folder /dev/disk/by-id

and paste it here. I want to see if something still happens.

 

Thanks!

 

Before or after the patch? And if after, before or after setting the parameter?

On 1/8/2021 at 7:11 PM, sota said:

After everything. I'm curious to see if certain things are still badly reported.

root@Tower:~# uptime
 09:48:35 up 7 min,  0 users,  load average: 0.00, 0.06, 0.04
 
root@Tower:~# lsscsi 
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda 
[1:0:0:0]    storage HP       P410i            6.64  -        

root@Tower:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 11.1M  1 loop /lib/modules
loop1    7:1    0 21.4M  1 loop /lib/firmware
sda      8:0    1  7.5G  0 disk 
└─sda1   8:1    1  7.5G  0 part /boot

root@Tower:~# lspci 
00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
00:02.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 2 (rev 13)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
00:04.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 4 (rev 13)
00:05.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 5 (rev 13)
00:06.0 PCI bridge: Intel Corporation 5520/X58 I/O Hub PCI Express Root Port 6 (rev 13)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
00:08.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 8 (rev 13)
00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 9 (rev 13)
00:0a.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 10 (rev 13)
00:0d.0 Host bridge: Intel Corporation Device 343a (rev 13)
00:0d.1 Host bridge: Intel Corporation Device 343b (rev 13)
00:0d.2 Host bridge: Intel Corporation Device 343c (rev 13)
00:0d.3 Host bridge: Intel Corporation Device 343d (rev 13)
00:0d.4 Host bridge: Intel Corporation 7500/5520/5500/X58 Physical Layer Port 0 (rev 13)
00:0d.5 Host bridge: Intel Corporation 7500/5520/5500 Physical Layer Port 1 (rev 13)
00:0d.6 Host bridge: Intel Corporation Device 341a (rev 13)
00:0e.0 Host bridge: Intel Corporation Device 341c (rev 13)
00:0e.1 Host bridge: Intel Corporation Device 341d (rev 13)
00:0e.2 Host bridge: Intel Corporation Device 341e (rev 13)
00:0e.3 Host bridge: Intel Corporation Device 341f (rev 13)
00:0e.4 Host bridge: Intel Corporation Device 3439 (rev 13)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 13)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.3 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
01:03.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ES1000 (rev 02)
02:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 04)
02:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 04)
02:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 01)
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
3e:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
3e:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
3e:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
3e:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
3e:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
3e:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
3e:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
3e:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
3e:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
3e:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
3e:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
3e:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
3e:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
3e:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
3e:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
3e:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
3e:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
3e:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
3e:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
3e:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
3e:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
3e:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
3e:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
3e:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)
3f:00.0 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture Generic Non-core Registers (rev 02)
3f:00.1 Host bridge: Intel Corporation Xeon 5600 Series QuickPath Architecture System Address Decoder (rev 02)
3f:02.0 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 0 (rev 02)
3f:02.1 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 0 (rev 02)
3f:02.2 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 0 (rev 02)
3f:02.3 Host bridge: Intel Corporation Xeon 5600 Series Mirror Port Link 1 (rev 02)
3f:02.4 Host bridge: Intel Corporation Xeon 5600 Series QPI Link 1 (rev 02)
3f:02.5 Host bridge: Intel Corporation Xeon 5600 Series QPI Physical 1 (rev 02)
3f:03.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Registers (rev 02)
3f:03.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Target Address Decoder (rev 02)
3f:03.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller RAS Registers (rev 02)
3f:03.4 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Test Registers (rev 02)
3f:04.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Control (rev 02)
3f:04.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Address (rev 02)
3f:04.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Rank (rev 02)
3f:04.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 0 Thermal Control (rev 02)
3f:05.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Control (rev 02)
3f:05.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Address (rev 02)
3f:05.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Rank (rev 02)
3f:05.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 1 Thermal Control (rev 02)
3f:06.0 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Control (rev 02)
3f:06.1 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Address (rev 02)
3f:06.2 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Rank (rev 02)
3f:06.3 Host bridge: Intel Corporation Xeon 5600 Series Integrated Memory Controller Channel 2 Thermal Control (rev 02)

root@Tower:~# lsscsi -d
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda [8:0]
[1:0:0:0]    storage HP       P410i            6.64  -        

root@Tower:~# lsscsi -l
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda 
  state=running queue_depth=1 scsi_level=7 type=0 device_blocked=0 timeout=30
[1:0:0:0]    storage HP       P410i            6.64  -        
  state=running queue_depth=32 scsi_level=6 type=12 device_blocked=0 timeout=120

root@Tower:~# cat /proc/devices 
Character devices:
  1 mem
  4 /dev/vc/0
  4 tty
  4 ttyS
  5 /dev/tty
  5 /dev/console
  5 /dev/ptmx
  6 lp
  7 vcs
 10 misc
 13 input
 21 sg
 29 fb
128 ptm
136 pts
180 usb
189 usb_device
202 cpu/msr
203 cpu/cpuid
226 drm
248 hidraw
249 vfio
250 uio
251 bsg
252 ptp
253 pps
254 rtc

Block devices:
  7 loop
  8 sd
  9 md
 65 sd
 66 sd
 67 sd
 68 sd
 69 sd
 70 sd
 71 sd
128 sd
129 sd
130 sd
131 sd
132 sd
133 sd
134 sd
135 sd
259 blkext

root@Tower:~# ls /dev/disk/by-
by-id/    by-label/ by-path/  by-uuid/  

root@Tower:~# ls /dev/disk/by-id/
usb-SanDisk_Cruzer_Blade_4C531001640627114584-0:0@  usb-SanDisk_Cruzer_Blade_4C531001640627114584-0:0-part1@

 

I'm not sure what you mean by a "dump" of /dev/disk/by-id, so I'm going to post some info from before and after reloading the module. That was the before; now the after:

 

root@Tower:~# modprobe -r hpsa
root@Tower:~# modprobe hpsa hpsa_use_nvram_hba_flag=1
root@Tower:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  11.1M  1 loop /lib/modules
loop1    7:1    0  21.4M  1 loop /lib/firmware
sda      8:0    1   7.5G  0 disk 
└─sda1   8:1    1   7.5G  0 part /boot
sdb      8:16   0 465.8G  0 disk 
└─sdb1   8:17   0 465.8G  0 part 
sdc      8:32   0   4.5T  0 disk 
└─sdc1   8:33   0   4.5T  0 part 
sdd      8:48   0 931.5G  0 disk 
└─sdd1   8:49   0 931.5G  0 part 
sde      8:64   0 931.5G  0 disk 
└─sde1   8:65   0 931.5G  0 part 
sdf      8:80   0 111.8G  0 disk 
└─sdf1   8:81   0 111.8G  0 part 
sdg      8:96   0 931.5G  0 disk 
└─sdg1   8:97   0 931.5G  0 part 
sdh      8:112  0 931.5G  0 disk 
└─sdh1   8:113  0 931.5G  0 part 
root@Tower:~# lsscsi 
[0:0:0:0]    disk    SanDisk  Cruzer Blade     1.00  /dev/sda 
[1:0:0:0]    storage HP       P410i            6.64  -        
[1:0:1:0]    disk    ATA      HGST HTS725050A7 A340  /dev/sdb 
[1:0:2:0]    disk    ATA      ST5000LM000-2AN1 0001  /dev/sdc 
[1:0:3:0]    disk    ATA      ST1000LM048-2E71 0001  /dev/sdd 
[1:0:4:0]    disk    ATA      ST1000LM048-2E71 0001  /dev/sde 
[1:0:5:0]    disk    ATA      Samsung SSD 850  1B6Q  /dev/sdf 
[1:0:6:0]    disk    ATA      ST1000LM048-2E71 0001  /dev/sdg 
[1:0:7:0]    disk    ATA      ST1000LM048-2E71 0001  /dev/sdh 
root@Tower:~# ls /dev/disk/by-id/
ata-ST5000LM000-2AN170_WCJ2M109@        scsi-35000c500d4d8c8cd-part1@  scsi-35000cca85ec07e21@                             usb-SanDisk_Cruzer_Blade_4C531001640627114584-0:0-part1@  wwn-0x5000c500d4d8c8cd@        wwn-0x5000c500d4da58ff-part1@
ata-ST5000LM000-2AN170_WCJ2M109-part1@  scsi-35000c500d4d98336@        scsi-35000cca85ec07e21-part1@                       wwn-0x5000c500c3ad1f6d@                                   wwn-0x5000c500d4d8c8cd-part1@  wwn-0x5000cca85ec07e21@
scsi-35000c500d4cf60f5@                 scsi-35000c500d4d98336-part1@  scsi-35002538d40245b34@                             wwn-0x5000c500c3ad1f6d-part1@                             wwn-0x5000c500d4d98336@        wwn-0x5000cca85ec07e21-part1@
scsi-35000c500d4cf60f5-part1@           scsi-35000c500d4da58ff@        scsi-35002538d40245b34-part1@                       wwn-0x5000c500d4cf60f5@                                   wwn-0x5000c500d4d98336-part1@  wwn-0x5002538d40245b34@
scsi-35000c500d4d8c8cd@                 scsi-35000c500d4da58ff-part1@  usb-SanDisk_Cruzer_Blade_4C531001640627114584-0:0@  wwn-0x5000c500d4cf60f5-part1@                             wwn-0x5000c500d4da58ff@        wwn-0x5002538d40245b34-part1@

 


Your before by-id listing doesn't show as many disks. I wanted to see whether the wwn entries received unique numbers.

I had a case where identical disks (with different serial numbers, obviously) would get the same wwn entry, which prevented both disks from being mounted correctly. I'm pretty confident it was a firmware reporting problem.
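
For anyone who wants to check for that here, printing only duplicated WWNs should make it obvious. A small sketch using lsblk, which ships with Unraid:

# -d: whole disks only, -n: no header, -o WWN: print just the WWN column;
# any non-empty line from uniq -d is a WWN shared by more than one disk:
lsblk -d -n -o WWN | grep -v '^$' | sort | uniq -d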

On 1/6/2021 at 11:31 PM, grugo said:

 

In the repo https://github.com/im-0/hpsahba there are instructions for kernels 4.x and 5.x, so there's no problem for 6.8.2, 6.8.3, 6.9.0-rc1, and 6.9.0-rc2.

 

The problem with that approach is that you need Docker, and because the driver is not patched, the array won't start since no disks are listed. Am I wrong @ich777?

I completely overlooked this mention, sorry for that.

Yes, but you could temporarily build it in RAM and create a temporary Docker.img of, let's say, 1 GB.

 

Is this patch or module working for the drives? I could theoretically build it in as an extra option in the Kernel-Helper itself.

33 minutes ago, ich777 said:

I completely overlooked this mention, sorry for that.

Yes, but you could temporarily build it in RAM and create a temporary Docker.img of, let's say, 1 GB.

 

Is this patch or module working for the drives? I could theoretically build it in as an extra option in the Kernel-Helper itself.

 

Absolutely. 
 

4 hours ago, ich777 said:

@grugo So only the kernel module is needed, but what if someone needs to change the mode of the disk itself?

Can this also be done somewhere in iLO?

 

I don't think a plugin would be the correct way, since a few kernel modules need to be replaced, but I will look into it...

 

You can't change modes using iLO. Nor can it be done with the HPESSA (HPE Smart Storage Administrator) live CD / software. Officially, HBA mode was only available on Itanium-based servers.

 

I changed the mode using an Ubuntu Desktop Live USB.

8 minutes ago, grugo said:

 

Nor can it be done with the HPESSA (HPE Smart Storage Administrator) live CD / software.

Are you sure? The description says that it is possible, but you have to pass an argument at boot, in our case for Unraid via syslinux.cfg: 'hpsa.hpsa_use_nvram_hba_flag=1'. Or do I understand the following wrong:

 

Quote

hpsahba itself is able to work on any modern Linux system.

However, to get system actually see and use disks in HBA mode, few kernel patches required: https://github.com/im-0/hpsahba/tree/master/kernel.

This functionality is disabled by default. To enable, load module hpsa with parameter hpsa_use_nvram_hba_flag set to "1". Or set it in the kernel command line: "hpsa.hpsa_use_nvram_hba_flag=1".

Quote
  • hpsahba -h
    Show help message and exit.
  • hpsahba -v
    Show version and exit.
  • hpsahba -i DEVICE_PATH
Show some information about the device. This includes the HBA mode support bit (supported/not supported) and the current state of HBA mode (enabled/disabled). It is recommended to run this before trying to enable or disable HBA mode.
  • hpsahba -E DEVICE_PATH
    Enable HBA mode.
  • hpsahba -d DEVICE_PATH
    Disable HBA mode.
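
Putting the quoted commands together, a hypothetical session could look like this (the /dev/sg1 path is an assumption; check which sg node belongs to the controller first, e.g. with lsscsi -g):

# Inspect first: does the controller support HBA mode, and is it enabled?
./hpsahba -i /dev/sg1
# Enable HBA mode (CAUTION: per the repo, this destroys your data!)
./hpsahba -E /dev/sg1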

 

2 minutes ago, ich777 said:

Are you sure? The description says that it is possible, but you have to pass an argument at boot, in our case for Unraid via syslinux.cfg: 'hpsa.hpsa_use_nvram_hba_flag=1'. Or do I understand the following wrong:

 

 

 

The repo has two parts.

The first part is the tool that enables passthrough (aka HBA) mode on the controller. The tool "tells" the RAID controller to change its internal configuration, and the change takes effect immediately, without requiring a reboot. That is what I was referring to when I said changing modes.
However, to get the system to actually see and use the disks in HBA mode, the kernel patches are required. That is where https://github.com/im-0/hpsahba/tree/master/kernel comes into play.
Even after the patch, this functionality is disabled by default. To enable it, load the hpsa module with the parameter hpsa_use_nvram_hba_flag set to "1", set it on the kernel command line ("hpsa.hpsa_use_nvram_hba_flag=1"), or edit syslinux.cfg to include the flag. But the patches need to be applied first; otherwise the module has no idea what flag you are talking about.
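
For the syslinux.cfg route on Unraid, the flag would go on the append line. A sketch, assuming the stock boot entry (yours may differ):

label Unraid OS
  menu default
  kernel /bzimage
  append hpsa.hpsa_use_nvram_hba_flag=1 initrd=/bzroot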

 

HPESSA (HPE Smart Storage Administrator) is a tool to manage, diagnose, and monitor HPE array controllers and SAS host bus adapters. You can use it with the P410i to create, modify, and remove arrays, but you can't put the controller into HBA mode with it.

