HP ProLiant / Workstation & unRaid Information Thread



2 hours ago, Myleslewis said:

Hi all,

 

Purchased and installed an H220, and I was wondering if the drive lights should be on for drives attached to the card?

 

Not a problem at all if they aren't; I was just curious more than anything :) 

 

Thanks,

 

Myles 

IIRC, when directly hooked up to the cages, yes; when run through an expander, no.

48 minutes ago, 1812 said:

IIRC, when directly hooked up to the cages, yes; when run through an expander, no.

So what I've done is add a second cage from another DL380 G7 I bought, then run the two SAS cables from the back of the cage to the H220 directly. It's set up the same as it would be as standard, but goes to the H220 instead of the motherboard.

On boot-up the H220 found the drives no problem; unRaid found them without any changes, cleared and formatted them, and they're added to the array.

As I said, it's not the end of the world if they don't work; it's just a nice-to-have feature! If there's something I've not done, or a setting in the BIOS or on the H220, for example, that would make it work, that would be great. Otherwise, no worries!

 

Thanks,

 

Myles 

  • 1 month later...

Hello, has anyone been able to adjust their fan thresholds with ipmitool? I can't see what they're supposed to be, as they just show up as "na" when I run the command, so I'm not sure what a safe value to set would be. I also tried using raw commands, but none went through, and I can't seem to find HP's documentation for them. I have a DL380 Gen9.
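For anyone else poking at this, here is a minimal sketch of the ipmitool commands involved. Hedged: the sensor name and threshold values below are placeholders, and iLO on many ProLiant generations simply refuses threshold writes over IPMI, so expect these to be rejected:

# list all sensors with their current thresholds (the columns showing "na")
ipmitool sensor list

# attempt to set the lower thresholds for one fan sensor, in the order
# non-recoverable, critical, non-critical -- the name must match the list output
ipmitool sensor thresh "Fan 1" lower 200 300 400

If iLO returns an error on the thresh command, that matches the usual behaviour of HP's BMC firmware, which manages fan curves itself rather than exposing them over IPMI.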

 

If this should be posted elsewhere, let me know; I'm new.

Capture.PNG

  • 2 weeks later...

Morning,

I have added a second riser card to my DL385 G7 (basically bought a second server, as it was cheaper than buying caddies, cables, and cages).
The LED is green, so the card is detected, but nothing in the riser works, LSI or NIC. Is there something I am doing wrong?

I have 2 x LSI cards in riser 1 now and they show all 16 drives.

On 3/31/2021 at 4:24 AM, Biff0r said:

Morning,

I have added a second riser card to my DL385 G7 (basically bought a second server, as it was cheaper than buying caddies, cables, and cages).
The LED is green, so the card is detected, but nothing in the riser works, LSI or NIC. Is there something I am doing wrong?

I have 2 x LSI cards in riser 1 now and they show all 16 drives.

Do you have 2 processors installed? (On these servers the second riser's PCIe slots are typically wired to the second CPU.)

  • 2 months later...

So I am running an HP H222 HBA on my DL380p G8 with 3.5" HP SAS disks attached. They show up just fine in unRaid, though SMART is only partially readable compared to my Crucial MX500s. On the HP disks, I am getting the following error:

Non-medium error count: 518.

 

The number doesn't seem to be increasing; it only goes up by about one count on each boot. Should I be worried, or is it possible to get rid of those errors?
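On SAS drives the non-medium error count tracks transport and protocol events rather than media defects, so a small count that only ticks up at boot is usually harmless. A hedged sketch of how to inspect the full SAS error counters with smartctl (the device path is a placeholder):

# full output, including the SAS error counter log and protocol-specific pages
smartctl -x /dev/sdb

# just the error counter log (read/write/verify plus non-medium error count)
smartctl -l error /dev/sdb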

 

Screenshot 2021-06-18 135742.png

  • 2 weeks later...

Hi,

 

I'm new to virtualisation with unRaid. In my HP ProLiant ML310a Gen8 there is a 4-port Intel Pro Gbit network card installed. In the VM Manager settings I have PCIe ACS Override set to Downstream and VFIO allow unsafe interrupts set to Yes.

 

Now I have 4 separate IOMMU groups, one for each Ethernet port on the Intel network card:

 

IOMMU group 17:			 	[8086:10d6] 0d:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 18:			 	[8086:10d6] 0d:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 19:			 	[8086:10d6] 0e:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 20:			 	[8086:10d6] 0e:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
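(For reference, a hedged one-liner that reproduces a listing like this on any Linux box, assuming lspci is available:)

for g in /sys/kernel/iommu_groups/*; do
  for d in "$g"/devices/*; do
    # print each PCI device together with the IOMMU group it belongs to
    echo "IOMMU group ${g##*/}: $(lspci -nns "${d##*/}")"
  done
done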

 

Groups 17 and 18 are used for my server's network connection to my switch. Groups 19 and 20 I want to pass through to a virtual machine (Ubuntu). For this I have ticked the checkboxes for IOMMU groups 19 & 20 in Tools/System Devices and applied the setting with BIND SELECTED TO VFIO AT BOOT. After rebooting unRaid I can select the two PCI devices in the VM's settings. But when I start the VM I get an error message:

 

Execution error

internal error: qemu unexpectedly closed the monitor: 2021-06-27T12:16:08.524836Z qemu-system-x86_64: -device vfio-pci,host=0000:0e:00.0,id=hostdev0,bus=pci.4,addr=0x0: vfio 0000:0e:00.0: failed to setup container for group 19: Failed to set iommu for container: Operation not permitted

 

The same thing happens if I try to pass through the HP Broadcom network devices.

 

What is wrong here?

 

Thank you!

Bildschirmfoto 2021-06-27 um 14.14.52.png


Hi,

 

I am currently trying to get unRaid running on my ProLiant DL360e Gen8. I have managed to get it to boot once from the SD card, but that is not usable due to the unique GUID requirement.

 

To solve that, I made an image of the working SD card and wrote it to a USB stick. That worked once: the system rebooted and I was able to activate. However, I have now restarted the server and can't get it to boot again.

 

I have followed the wiki (https://wiki.unraid.net/USB_Flash_Drive_Preparation) and used the HP tool, as well as creating a 1 GB partition on my USB stick and manually copying the files, and I can't seem to get it to boot more than once.
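For comparison, a hedged sketch of the manual flash preparation that wiki describes, run from a Linux machine. Assumptions: /dev/sdX1 is the stick's first partition, and the script name matches recent unRaid releases; double-check the device name before running anything destructive:

# unRaid requires the flash volume label to be exactly UNRAID
mkfs.vfat -F 32 -n UNRAID /dev/sdX1

# after copying the extracted release files onto the stick,
# run the bundled script to write the syslinux boot sector
cd /path/to/mounted/stick && ./make_bootable_linux

If it only ever boots once, the boot sector and label are usually fine, and the culprit is more often the BIOS boot mode or the stick itself, as discussed in the reply below.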

On 6/27/2021 at 8:20 AM, Shantarius said:

Hi,

 

I'm new to virtualisation with unRaid. In my HP ProLiant ML310a Gen8 there is a 4-port Intel Pro Gbit network card installed. In the VM Manager settings I have PCIe ACS Override set to Downstream and VFIO allow unsafe interrupts set to Yes.

 

Now I have 4 separate IOMMU groups, one for each Ethernet port on the Intel network card:

 


IOMMU group 17:			 	[8086:10d6] 0d:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 18:			 	[8086:10d6] 0d:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 19:			 	[8086:10d6] 0e:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 20:			 	[8086:10d6] 0e:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)

 

Groups 17 and 18 are used for my server's network connection to my switch. Groups 19 and 20 I want to pass through to a virtual machine (Ubuntu). For this I have ticked the checkboxes for IOMMU groups 19 & 20 in Tools/System Devices and applied the setting with BIND SELECTED TO VFIO AT BOOT. After rebooting unRaid I can select the two PCI devices in the VM's settings. But when I start the VM I get an error message:

 


Execution error

internal error: qemu unexpectedly closed the monitor: 2021-06-27T12:16:08.524836Z qemu-system-x86_64: -device vfio-pci,host=0000:0e:00.0,id=hostdev0,bus=pci.4,addr=0x0: vfio 0000:0e:00.0: failed to setup container for group 19: Failed to set iommu for container: Operation not permitted

 

The same thing happens if I try to pass through the HP Broadcom network devices.

 

What is wrong here?

 

Thank you!

Bildschirmfoto 2021-06-27 um 14.14.52.png

 

Look in your logs to see if the following appears:

 

Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
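A quick, hedged way to check from the unRaid console (the exact log wording varies a little between kernel versions):

# search the kernel ring buffer and the syslog for the RMRR complaint
dmesg | grep -i rmrr
grep -i "ineligible for IOMMU" /var/log/syslog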

 

If so, this is the fix: 

 

13 hours ago, PandaGod said:

Hi,

 

I am currently trying to get unRaid running on my ProLiant DL360e Gen8. I have managed to get it to boot once from the SD card, but that is not usable due to the unique GUID requirement.

 

To solve that, I made an image of the working SD card and wrote it to a USB stick. That worked once: the system rebooted and I was able to activate. However, I have now restarted the server and can't get it to boot again.

 

I have followed the wiki (https://wiki.unraid.net/USB_Flash_Drive_Preparation) and used the HP tool, as well as creating a 1 GB partition on my USB stick and manually copying the files, and I can't seem to get it to boot more than once.

 

Could be a mismatch between boot options on the server and unRaid (legacy vs UEFI). Check the BIOS and try a different setting.

 

Could also be a USB thumb drive issue. I'd start by eliminating variables: make a backup copy of your USB, wipe it, then use the unRaid USB Creator tool to make a new install.

Set the boot option in the BIOS to legacy and check if it boots. If so, do it a few times. If it keeps working, then re-download your license.
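A hedged sketch of the backup step, run from the unRaid console with the array started (the destination path is just an example):

# archive the entire flash device so the config can be restored later
mkdir -p /mnt/user/backups
tar czf /mnt/user/backups/flash-$(date +%Y%m%d).tar.gz -C /boot .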

  • 4 weeks later...

I bought an LSI card and migrated to an HP ProLiant ML310e. The drives connected to the LSI card show up in the boot screen; however, they were not showing up in unRaid. In the BIOS I have enabled AHCI and drive cache under the SATA controller. See below for the syslog. Can someone please help? I disabled Intel VT-d in the BIOS and unRaid is now able to detect the drives and start the array. Any ideas? Is the unRaid driver interfering with Intel VT-d? Does this mean I won't be able to spin up any VMs?

 

LSI Card: Supermicro AOC-S2308L-L8e


DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: ata1: SATA link down (SStatus 0 SControl 300)
kernel: ata2: SATA link down (SStatus 0 SControl 300)
kernel: ata3: SATA link down (SStatus 0 SControl 300)
kernel: ata4: SATA link down (SStatus 0 SControl 300)
kernel: ata5: SATA link down (SStatus 0 SControl 300)
kernel: ata6: SATA link down (SStatus 0 SControl 300)
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: sending diag reset !!
kernel: mpt2sas_cm0: diag reset: SUCCESS
kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: _config_request: attempting retry (1)
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: _config_request: attempting retry (2)
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf:

  • 2 months later...
  • 5 weeks later...
On 6/3/2018 at 11:24 AM, sse450 said:

 

In case you wondered, please find below an update on my issue:

 

HPE replaced the server with a new ML150 Gen9. It comes with an H240 card and a 1 TB LFF SATA hard disk with an HPE sticker.

 

The problem started from day 1.

1. I removed the H240, as I don't need any RAID for unRAID.

2. Connected the mini-SAS cable from the drive cage to the mini-SAS port on the motherboard.

3. I opted for AHCI in the BIOS instead of B140i.

(No other changes, like upgrading the BIOS, replacing the hard disk, etc.)

4. Bummer. The server wouldn't recognize the hard disk. Tried both mini-SAS ports on the motherboard. No way.

 

After hours of fiddling, I just wanted to try the H240. It worked in HBA mode. Then I removed the H240 and connected to the motherboard's mini-SAS port. It worked again. Crazy!

Anyway, I decided to use the H240, as it seemed the safer way. There was no problem for 2-3 days. As the H240 doesn't report any hard disk SMART attributes back to unRAID, I replaced it with an LSI card. It worked fully. So far, so good.

After 10 days of good use, something happened and the server lost the LFF SATA hard disks again. There was a flashing red light on the front. I checked iLO; there were some critical power supply problems painted in red. The final message from iLO was:

Critical,178,17991,0x0014,System Error,,,06/02/2018 07:51:00,9: Server Critical Fault (Service Information: Runtime Fault, System Board,  P5V/P3V3/Chipset/AUX Regulators 1 (04h)) 

Now the motherboard port, the H240, and the LSI card are all failing to detect the LFF SATA hard disks.

I contacted online HPE support last night. The tech examined the iLO report and decided that the motherboard needs to be replaced because of the above error. He said I would be contacted on Monday (tomorrow).

I am having a hard time believing that two of my ML150 Gen9 servers had the same motherboard problem. How probable is that?

 

All the best.

 

 

@sse450 I'm wondering if it was a problem with ML150 G9 + unRAID compatibility, or if you really had the bad luck of getting two defective motherboards. The reason I ask is that I got myself an ML150 G9 very cheap, and after reading your posts I am afraid to try unRAID on it.

  • 2 weeks later...
On 7/26/2021 at 5:15 PM, mr.x said:

I bought an LSI card and migrated to an HP ProLiant ML310e. The drives connected to the LSI card show up in the boot screen; however, they were not showing up in unRaid. In the BIOS I have enabled AHCI and drive cache under the SATA controller. See below for the syslog. Can someone please help? I disabled Intel VT-d in the BIOS and unRaid is now able to detect the drives and start the array. Any ideas? Is the unRaid driver interfering with Intel VT-d? Does this mean I won't be able to spin up any VMs?

 

LSI Card: Supermicro AOC-S2308L-L8e


DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: ata1: SATA link down (SStatus 0 SControl 300)
kernel: ata2: SATA link down (SStatus 0 SControl 300)
kernel: ata3: SATA link down (SStatus 0 SControl 300)
kernel: ata4: SATA link down (SStatus 0 SControl 300)
kernel: ata5: SATA link down (SStatus 0 SControl 300)
kernel: ata6: SATA link down (SStatus 0 SControl 300)
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: sending diag reset !!
kernel: mpt2sas_cm0: diag reset: SUCCESS
kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: _config_request: attempting retry (1)
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf: 04000000 00000000 00000000 00000000 00000000 09000000 00000000 d3000000
kernel:     ffffffff ffffffff 00000000
kernel: mpt2sas_cm0: _config_request: attempting retry (2)
kernel: DMAR: DRHD: handling fault status reg 2
kernel: DMAR: [DMA Write] Request device [07:00.0] PASID ffffffff fault addr ffc78000 [fault reason 12] non-zero reserved fields in PTE
kernel: mpt2sas_cm0: config_request: manufacturing(0), action(0), form(0x00000000), smid(10129)
kernel: mpt2sas_cm0: _config_request: command timeout
kernel: mpt2sas_cm0: Command Timeout
kernel: mf:

Have you found a solution for the problem?

  • 2 weeks later...

Hello all,

I'm coming back to the hardware passthrough problem, since I'm having a weird issue.

I'm running unRaid Pro 6.9.2 on an HP ML350 with 16 disks installed; BIOS P92 (2015), out of warranty and no longer upgradeable. A 10 GbE Mellanox card is the only Ethernet connection in use. Two VMs are up and running reasonably stably. The integrated quad NIC is not in use at the moment; the four NICs are in the same IOMMU group and are bound to VFIO at startup, so I can't see them in the unRaid dashboard. So I tried to set up a pfSense VM to play with. The machine did not want to pass through the NIC, giving the "ineligible hardware" error, which is related to the RMRR issue on HP servers.

I then patched the image and added "intel_iommu=relax_rmrr vfio_iommu_type1.allow_unsafe_interrupts=1" to the flash.

A grep says the RMRRs are relaxable, so I assume the patch worked.
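For reference, a hedged sketch of where those flags end up on the unRaid flash, in /boot/syslinux/syslinux.cfg (the label and file names follow the stock layout; the relax_rmrr option only exists with the patched kernel):

label Unraid OS
  menu default
  kernel /bzimage
  append intel_iommu=relax_rmrr vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

And a quick verification after reboot:

# should report that the RMRRs are being treated as relaxable
dmesg | grep -i rmrr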

When I set up the pfSense VM I ticked the Broadcom cards (NetXtreme BCM5719) for passthrough and removed the virtual network adapter.

As soon as I start the VM to begin the installation, the whole system hangs, crashes, and reboots, with a parity check running and the VM tab in the dashboard not enabled.

If I don't pass through the NIC, just use the virtual NIC adapter, the pfSense install starts normally without issue.

Any help would be welcome. I'm posting my log file and some screenshots.

Thanks in advance.

34FC04A7-5981-4906-80CC-80092896E9F1.png

452E9077-3F55-4AD3-97EF-C76947C832FE.png

D33A3CAA-F617-47D0-93B8-F741F4D820E7.png

96FA0610-2988-4512-80FC-E0C97C4A4CE5.png

homeserver-syslog-20211208-0857.zip

5 hours ago, mo679 said:

Hello all,

I'm coming back to the hardware passthrough problem, since I'm having a weird issue.

I'm running unRaid Pro 6.9.2 on an HP ML350 with 16 disks installed; BIOS P92 (2015), out of warranty and no longer upgradeable. A 10 GbE Mellanox card is the only Ethernet connection in use. Two VMs are up and running reasonably stably. The integrated quad NIC is not in use at the moment; the four NICs are in the same IOMMU group and are bound to VFIO at startup, so I can't see them in the unRaid dashboard. So I tried to set up a pfSense VM to play with. The machine did not want to pass through the NIC, giving the "ineligible hardware" error, which is related to the RMRR issue on HP servers.

I then patched the image and added "intel_iommu=relax_rmrr vfio_iommu_type1.allow_unsafe_interrupts=1" to the flash.

A grep says the RMRRs are relaxable, so I assume the patch worked.

When I set up the pfSense VM I ticked the Broadcom cards (NetXtreme BCM5719) for passthrough and removed the virtual network adapter.

As soon as I start the VM to begin the installation, the whole system hangs, crashes, and reboots, with a parity check running and the VM tab in the dashboard not enabled.

If I don't pass through the NIC, just use the virtual NIC adapter, the pfSense install starts normally without issue.

Any help would be welcome. I'm posting my log file and some screenshots.

Thanks in advance.

34FC04A7-5981-4906-80CC-80092896E9F1.png

452E9077-3F55-4AD3-97EF-C76947C832FE.png

D33A3CAA-F617-47D0-93B8-F741F4D820E7.png

96FA0610-2988-4512-80FC-E0C97C4A4CE5.png

homeserver-syslog-20211208-0857.zip

 

For some reason I'm unable to view your syslog... I can't unzip it; not sure why.

 

Are you using the onboard RAID controller? A long time ago I ran into problems where it merely being enabled caused issues when also trying to use the onboard networking.

 

Are you booting in legacy mode?


Thanks for your reply, sir. I'm not using the onboard RAID; I removed the board and connected directly to the motherboard. I'm also using an LSI PCIe card.

I patched the image following the aforementioned instructions, and it seems active.

I will post a new syslog.


Hello, just to check that the installer was valid, I tried using the virtual adapter without passthrough; the pfSense VM booted up but then hangs, since it finds no NIC cards.

I'm booting in UEFI mode.

I found a ROM firmware, P92 2.72, from 2019; my BIOS is from 2015. Might the RMRR issue have been addressed there?

Thanks in advance.
