HP Proliant / Workstation & unRaid Information Thread


1812

Recommended Posts

This thread is a work in progress. Updated information will be added at the top in this post.

 

Feel free to ask questions or post further problems in this thread.

 

Workstation information is near the bottom.


Below are common problems associated with HP servers, and where available, known working fixes. I am not a certified IT technician. The information below was either discovered by scouring the Internet and/or trial and error by myself and several others on this board. Utilize this information at your own risk, though I have done almost everything discussed below.

 

 

UnRaid Versions

 

Proliants (and HP servers) should run 6.2.4 for the time being. 6.3 caused issues with CPU core numbering and other mild problems, due to a change in the Linux kernel. Preliminary tests on the 6.4 release show that most, if not all, of the problems from 6.3 have been resolved, but I would wait to update until the official 6.4 stable release comes out.

 

Proliants (and HP servers) should run 6.4. Many previous issues have been resolved from 6.3.x with a few minor things still prevalent.

 

6.5.1+ are stable and recommended. As new versions of unRaid are produced, I will update this section if there are any known issues, otherwise assume latest available works.

 

HP Raid Controllers (P410i)

 

Most HP raid controllers cannot do JBOD. If you intend to use one, you have to use the raid controller to create RAID-0 volumes, which are then presented to unRaid. Recovery of data then becomes more difficult.

 

Alternatives include using a pci-e host bus adapter that works in unRaid (consult the hardware wiki for working cards.) I currently use an H220, but any compatible one will do. -UPDATE July 2018- I ran into issues using the H220 HBA with HP SAS expanders on my MD1000 (which is just a box with expanders) when using unRaid 6.3.x. I swapped out several H220's and SAS expanders to no avail. I also attempted a direct connection with an 8087 to 8088 adapter; it would see the enclosure but not the disks. I then swapped in an H310 in IT mode, which worked using the connection adapter, and ran that for a while. Later in 2018 I determined the fault was in the MD1000 enclosure itself. I currently run H220 adapters in 2 different servers without issues, including with an HP expander.

 

External connectivity can be gained via an HP SAS expander or via 8087 to 8088 adapters.

 

 

HP PCI-E raid cards (P212, P411)

 

[this section untested under 6.4 and above, relevant to 6.2.4]

Under certain BIOS versions, unRaid will not play nice with some HP PCI-E raid cards. This includes unRaid hanging on boot and lockup failures on the controller. If you intend to use an HP pci-e raid card, you must update to the latest BIOS available for the server. BUT doing this may lead to other issues, including "Device is ineligible for IOMMU domain attach due to platform RMRR" problems.

 

Thread on HP raid cards:

 

-UPDATE-

I was working on setting up a temporary server (DL120 G6) using an P212 and H220, just to see if it would work. The server booted on 6.2.4, 6.3.2, and 6.3.5 and showed drives on both controllers. When I attempted to setup a vm using passthrough, I modified the syslinux.cfg as required to include: 

append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

From that point on, I could not get the server to finish loading unRaid and kept getting errors (call traces/lockups/etc.) I gave up on combining the cards in that server and moved them to a Dell T300 that doesn't have hardware passthrough. It again booted successfully on 6.2.4, 6.3.2, and 6.3.5. So the issue does not appear to be between the cards and unRaid so much as with allowing unsafe interrupts for hardware passthrough, which conflicts with the cards. This was verified when I re-added the unsafe interrupts line to syslinux.cfg and the errors returned.

 

HP Networking

 

Integrated gigabit ethernet seems to work in 6.2.4 when the port(s) are directly attached to the motherboard. If the port(s) are located on a riser card, drivers were implemented beginning in 6.3.x to allow functionality.

 


VM problems - pass-through

 

"internal error: process exited while connecting to monitor/vfio: failed to setup container for group 1/vfio: failed to get group 1/Device initialization failed"


Step 1

 

Typically using PCIE-ACS override does not fix gpu pass-through issues. But regardless, the first step is to ensure that your video card and audio component are in their own IOMMU group. You can view the groups at Tools>System Devices, then search for the maker of your GPU. For example, my 1050 is:

 

09:00.0 USB controller [0c03]: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100] (rev 10)
12:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050] [10de:1c81] (rev a1)
12:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb9] (rev a1)
80:00.0 PCI bridge [0604]: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 0 [8086:3420] (rev 22)


This shows both the GPU portion and the audio set as 12:00.0 & 12:00.1. Then scroll down to the IOMMU groups and verify that only those 2 devices are listed in their own group.


 

/sys/kernel/iommu_groups/16/devices/0000:04:00.3
/sys/kernel/iommu_groups/17/devices/0000:12:00.0
/sys/kernel/iommu_groups/17/devices/0000:12:00.1
/sys/kernel/iommu_groups/18/devices/0000:09:00.0


In my example, the gpu and its audio portion are the only 2 devices in group 17. Your group number and device assignment may be different.
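If you prefer checking from the console, here's a quick sketch (the `iommu_list` helper name is my own, not an unRaid command) that prints every device alongside its group number; on unRaid the base path is /sys/kernel/iommu_groups:

```shell
# iommu_list BASE: print "group N: device" for each device under BASE.
# On unRaid, call it as: iommu_list /sys/kernel/iommu_groups
iommu_list() {
  for d in "$1"/*/devices/*; do
    [ -e "$d" ] || continue            # skip if no groups exist under BASE
    g=${d#"$1"/}                       # strip the base path prefix
    g=${g%%/*}                         # keep only the group number
    echo "group ${g}: $(basename "$d")"
  done
}
```

Your gpu and audio addresses (12:00.0/12:00.1 in my case) should show up sharing a single group number.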

 

If your gpu/audio are in their own IOMMU group, SKIP to STEP 3.

 

 

Step 2

 

If you have other devices in the same IOMMU group as your GPU, you will either need to pass those through as well, or isolate your GPU/audio into a separate group. You can try enabling PCIE-ACS override, rebooting, and rechecking. If that does not work, disable acs override and proceed to Step 2a:


Step 2a

 

Obtain the unique device id's of the GPU and audio component. You can find them at Tools>System Devices, then search for the maker of your GPU. For example, my 1050 shows:

09:00.0 USB controller [0c03]: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100] (rev 10)
12:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050] [10de:1c81] (rev a1)
12:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fb9] (rev a1)
80:00.0 PCI bridge [0604]: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI Express Root Port 0 [8086:3420] (rev 22)


 

The unique id is the bracketed vendor:device pair (two 4-digit hex values) at the end of the device line. For my 1050, the GPU is 10de:1c81 and the audio is 10de:0fb9.
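If you'd rather pull those ids from the console, here's a small sketch (the `extract_ids` function name is mine) that filters them out of `lspci -nn` output:

```shell
# extract_ids: read lspci -nn lines on stdin and print the bracketed
# vendor:device pairs, one per line (e.g. 10de:1c81 for my 1050's GPU).
extract_ids() {
  grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]'
}
# On the server: lspci -nn | grep -i nvidia | extract_ids
```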

 

Once you have obtained your id's, you will need to edit your syslinux.cfg file. To do this, go to Main>Boot Device>Flash. Under Syslinux Configuration, edit the text there for the first "append initrd=/bzroot" to the following (XXXX:XXXX is your GPU id, YYYY:YYYY is your audio component):

 

append pcie_acs_override=id:XXXX:XXXX,YYYY:YYYY initrd=/bzroot


My complete edited syslinux.cfg file looks like:

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=id:10de:1c81,10de:0fb9 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest

 

Save and reboot. Verify that the GPU and audio component are either in their own IOMMU group or, more likely, each in an independent group of their own (2 groups total, with a single device in each.)

 

You can try your pass-through now, but it probably won't work.

 


Step 3


The server needs to allow "unsafe" interrupts. This is achieved by modifying the syslinux.cfg file. To do this, go to Main>Boot Device>Flash. Under Syslinux Configuration, edit the text there for the first "append initrd=/bzroot" to the following:


 

append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot


The full syslinux.cfg would then look like:

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest


OR if you had to follow step 2, it would look like this

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=id:10de:1c81,10de:0fb9 vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest


Save and reboot. You can now try your pass-through. If the vm shows that it started on the dashboard, you're good to go. If you have no output or distorted video, that's an unrelated issue.

 

If it doesn't work, go to step 4.

 

Step 4

 

After you have reverified that your gpu and audio component are in their own IOMMU group(s), examine your system logs after a failed vm start: Tools>System Log. Scroll to the bottom or search with your web browser for "Device is ineligible for IOMMU domain attach due to platform RMRR"
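You can also grep for it from the console; here's a sketch (the `check_rmrr` helper name is mine; the live log path on stock unRaid is /var/log/syslog):

```shell
# check_rmrr LOGFILE: report whether the RMRR refusal appears in the log.
# On unRaid: check_rmrr /var/log/syslog
check_rmrr() {
  if grep -qi 'ineligible for IOMMU domain attach' "$1" 2>/dev/null; then
    echo 'RMRR problem found'
  else
    echo 'no RMRR messages'
  fi
}
```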

 

If it is there, it is most likely your audio component listed as the problem. There are a couple of ways to try to resolve it, depending on your hardware. According to HP, the RMRR problem stems from an upgrade of the Linux kernel above version 3.16. There are discussions around the Internet about an unofficial patch, with varying results.


HP has a published sheet on this, with a fix for some: https://h20565.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=7271259&docId=emr_na-c04781229&docLocale=en_US

 

If you are on Gen 8+ hardware, there is a bios fix: https://docs.hpcloud.com/hos-4.x/helion/networking/enabling_pcipt_on_gen9.html

 

If you are pre-Gen 8, or the previous link does not work/apply, you have the following options:

 

1. Roll back to a previous bios. During boot, access the bios menu and select an older bios if available. It appears that bios versions from 2011 and earlier do not have the RMRR issue, and pass-through of both video and audio works.

 

2. If your bios is from 2012 or newer: look online for an older bios and the HP procedure to roll it back. If that is not an option, then:

 

3. Don't pass through the audio component of the video card. This can be achieved by following Step 2a, then in the vm manager passing through only the video component to the vm, leaving out the ineligible audio device. You can get audio in your vm by utilizing a usb audio device. Not elegant, but functional. If using acs override for just the card doesn't separate out the gpu and its audio component, try the following acs setting instead: pcie_acs_override=multifunction
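For reference, a sketch of the boot stanza with the multifunction setting (stock syslinux.cfg layout assumed; keep the unsafe interrupts flag from Step 3 if your server needed it):

```text
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
```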

 

4. Alternatively, you can skip the previous 3 options and attempt to use the unRaid Proliant Edition patch located here:

 

 

 

 

Further reading:

 

 

 

 

Bios - Firmware Updates

 

The 2017.04.0 SPP is the last production SPP to contain components for the G7 and Gen8 server platforms. For additional information, refer to Reducing Server Updates: http://h20564.www2.hpe.com/hpsc/swd/public/detail?swItemId=MTX_3f6b4074ed734dc3baf007612d#tab5 At the time of writing this was true, but it appears they have issued some other updates along the way, probably for mitigations or similar. I don't follow them anymore for this older hardware, so you'll have to google the latest for your particular machine.

 

Currently there is no reason to update the server rom to a "newer" bios, but you can update controller drivers/etc. using the SPP. Do not use the auto installer, as it will update the server rom and introduce the issues outlined above. After the system runs a scan, you can select updates for your ethernet/raid/etc., which have shown no unwanted effects in unRaid so far.

 

HP ProLiant DL120 G6/ProLiant DL120 G6, BIOS O26 07/01/2013 works on 6.2.4/6.4 with pass through.

 

 

 ACPI Error Messages 

 

Some servers complain with an error message similar to the following:

 

Oct 28 23:06:10 Tower kernel: ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20170531/exfield-427)
Oct 28 23:06:10 Tower kernel: ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20170531/psparse-550)
Oct 28 23:06:10 Tower kernel: ACPI Exception: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20170531/power_meter-338)

 

Bug details are here: https://community.hpe.com/t5/ProLiant-Servers-Netservers/ACPI-Error-SMBus-or-IPMI-write-requires-Buffer-of-length-66/td-p/6943959

 

This is essentially a misreading and does not affect anything except spamming your system log. To stop it, disable the "acpi_power_meter" kernel module by adding this line to the /boot/config/go file:

 

rmmod acpi_power_meter
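For context, a minimal /boot/config/go with the fix added would look something like this (assuming the stock go file, which just launches the web interface):

```text
#!/bin/bash
# unload the module that misreads the power meter and spams the log
rmmod acpi_power_meter
# Start the Management Utility (stock go file contents)
/usr/local/sbin/emhttp &
```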

 

HT to @perfecblue for this fix. Further reading can be done here: https://lime-technology.com/forums/topic/59375-hp-proliant-unraid-information-thread/?do=findComment&comment=634360

 

 

 

 

 

HPZ420 workstation info

 

While not technically a proliant, I picked one of these up recently and here are the important discoveries:

 

You do not need the RMRR patch

 

If attempting to use the onboard sata ports, you must boot unRaid in UEFI mode. Otherwise it just sits there and blinks the cursor at you regardless of any specified boot order (3 hours of my time on that one....) It would be nice to find a fix since, as noted below, legacy boot is needed for some stability.

 

I was also getting unexpected hard server resets and the following:

 

928- fatal pcie error
pcie error detected slot 5
completion timeout

 

This was on a known good GPU. It would occur when shutting down a vm the GPU was assigned to. Multiple places on the Internet suggested a bad board, a bios error, etc. I changed some bios settings but it still happened. Only after I applied the unsafe interrupts fix from above did it stop.

 

 

CPU 0 stuck at 100% usage

 

TLDR: add the following to your syslinux.cfg

acpi=force

Have time to read? https://lime-technology.com/forums/topic/73537-cup-0-100-on-2-different-usb-licenses/?tab=comments#comment-676484
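In context, the edited boot stanza would look like this (stock syslinux.cfg assumed):

```text
label unRAID OS
  menu default
  kernel /bzimage
  append acpi=force initrd=/bzroot
```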

 

 

Weird VM issues

I started getting weird crashes and an inability to launch vm's when the server was booted in UEFI mode. Returning to Legacy appears to solve the issue. I'll retest in subsequent versions of unRaid to determine if it persists (last checked 6.5.3)

 

Edited by 1812
  • Like 1
Link to comment

Hi

 

My current setup has the following:

 

IOMMU group 1
    
[8086:0151] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
[10de:104a] 07:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
[10de:0e08] 07:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)

 

So using Step 2.a I added the following line to my syslinux.cfg file

 

append pcie_acs_override=id:10de:104a,10de:0e08 initrd=/bzroot

 

But when I rebooted the box it's still the same, any ideas? Oh, I also have Enable PCIe ACS Override switched on.

 

 

 

Link to comment
58 minutes ago, IrishBiker said:

Hi

 

My current setup has the following:

 

IOMMU group 1
    
[8086:0151] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
[10de:104a] 07:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
[10de:0e08] 07:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)

 

So using Step 2.a I added the following line to my syslinux.cfg file

 

append pcie_acs_override=id:10de:104a,10de:0e08 initrd=/bzroot

 

But when I rebooted the box its still the same, any ideas? Oh I also have Enable PCIe ACS Override  switched on.

 

 

 

 

Pci bridges are never fun to work around. I have never directly had the issue, but a couple of folks on here have.

 

First, turn acs override off. Then go to syslinux.cfg, try adding the device id of the pci bridge to the two from the video card, and reboot. It probably won't fix it, but it's worth trying.

 

When that doesn't work, look in your bios for an SR-IOV function. If it's disabled, enable it and recheck the iommu groupings. If that doesn't do it, re-read the HP solution posted above and see if it's applicable to your server.

 

And if none of that works, we may get creative and try something else.

 

What version of unRaid are you on, and what bios version and date?

Link to comment
1 minute ago, IrishBiker said:

Thanks for the update. I'm using 6.3.5  and the bios version is J06 11/02/2015

 
I've viewed the bios but I cannot see any option for SR-IOV
 
Also Unraid now won't let me switch off the "Enable PCIe ACS Override" option!

 

You can turn it off yourself if needed

 

It's listed in this file as pcie_acs_override=downstream

root@Tower:/boot/syslinux# pwd
/boot/syslinux
root@Tower:/boot/syslinux# cat syslinux.cfg
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS (With ACS)
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream isolcpus=8,9,10,11,12,13,14,15 initrd=/bzroot
label unRAID OS (without ACS)
  menu default
  kernel /bzimage
  append isolcpus=8,9,10,11,12,13,14,15 initrd=/bzroot

Link to comment
  • 1 month later...

I've updated to the most recent bios, found SR-IOV in the new bios and enabled it, have pcie acs override on, plus vfio_iommu_type1.allow_unsafe_interrupts=1. Tried the pcie_acs_override= option as well. I have disabled the main ethernet in bios and am running off another hp ethernet card, and have changed to 6.2.4, but I'm still not getting anything out of it. Would really like to get it working if possible, but I'm running out of options. I've looked at the other gen9 instructions, but I'm on a g7 and they don't align. Any other ideas or directions to go? I wasn't able to find any pre-2011 bios either.

Link to comment
3 minutes ago, burningstarIV said:

Ive updated to the most recent bios  - found SR-IOV in the new bios and enabled, have pcie acs override on, and the other vfio_iommu_type1.allow_unsafe_interrupts=1. Tried the pcie_acs_override= as well. Have disabled the main ethernet in bios , and running off another hp ethernet card, have changed to 6.2.4 but I'm still not getting anything out of it. Would really like to get working if possible but running out of options. Ive looked at the other gen9 instructions, but I'm on a g7 and doesn't align. Any other ideas or directions to go ? I wasn't able to find any pre 2011 bios either. 

 

Post Tools>diagnostics

 

From what I remember, the hp riser-style networking cards don't work in 6.2.4, but the driver was added in 6.3.x. You might need to get an intel pro 1000 or similar card on eBay and wait out 6.4 going stable, or upgrade to 6.3.5 and deal with the issues that has.

Link to comment
8 hours ago, burningstarIV said:

I've messed with it a good bit - I've got the latest bios and have sr-iov enabled. Take a look and see if its worth breathing any life into it. If nothing jumps out, I've tried everything i can with it - thanks 

octalquad-diagnostics-20171015-1104.zip

 

I have 2 580 g7's running unRaid, but on 6.2.4 with no "real" issues. One runs 3 VMs, all with gpu passthrough, also on an older bios (P65 05/23/2011). I'm not sure if 6.3.5 is causing the problem or something else.

 

 

Your rmrr error is on the audio component of your video card. Following the steps above should fix it; right now it's not splitting it out into different groups. When it splits the GPU into different groups for video/audio, you can work around it by omitting the sound portion in the vm.

 

 

Tomorrow, I'll notate all my bios settings and post them up here. What is the earliest bios version you have?

Link to comment

Hey - I wasn't able to find an earlier bios. I had a 2013 or something on there and updated to the 2016/17 latest hoping it might work, as was the case w/ g8 and g9... but it does not. Do you have a firmware update for the previous bios that I could force?

 

I tried older installs of unRaid and only disabled ethernet, and went to another hp intel nic which worked, but no pass-through. I think the bios downgrade would definitely help if possible.

Link to comment

I'm trying to run pfSense in a VM on my HP ProLiant MicroServer Gen8. (System ROM: J06 11/02/2015) I installed an Intel Pro/1000 Dual Port NIC that I bought off Amazon into its single PCI-Express x16 slot, hoping to simply pass it through to the VM. However, I've run into issues with the VM being able to grab the NIC. First off, when doing `lspci` I see the following:

 

Quote

00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
07:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
07:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)

 

And when checking the IOMMU groups I can see that the PCI bridge shares the same IOMMU groups:

 

Quote

/sys/kernel/iommu_groups/1/devices/0000:07:00.0
/sys/kernel/iommu_groups/1/devices/0000:07:00.1
/sys/kernel/iommu_groups/1/devices/0000:00:01.0

 

So I followed your guide (and many other guides) and enabled the ACS override in my syslinux.cfg:

 

Quote

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest

 

This fixes my IOMMU group issue!

 

Quote

/sys/kernel/iommu_groups/11/devices/0000:07:00.0

/sys/kernel/iommu_groups/12/devices/0000:07:00.1

 

So I go in and add my NIC to my VM with the following XML:

 

Quote

<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
            <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
            <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
</hostdev>

 

Go to start the VM and nope I get this in my `syslog` when trying to start my VM:

 

Quote

kernel: vfio-pci 0000:07:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.

 

So with a bunch of google searching last night (before I found this thread), I found the advisory linked above. (https://support.hpe.com/hpsc/doc/public/display?sp4ts.oid=7271259&docId=emr_na-c04781229&docLocale=en_US) Sweet, there is a fix for it! I loaded the latest version of Fedora onto a flash drive, put the latest rpms for hp-health and hp-scripting-tools for RHEL7 onto another flash drive, and followed the instructions. I created an `exclude.dat` file and added the slot number to it; however, I wasn't sure if it meant the slot number on the iLO System Information screen (PCI Slot 1) or the bus number (0x07), so I excluded both:

 

Quote

<Conrep>
<Section name="RMRDS_Slot1" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot7" helptext=".">Endpoints_Excluded</Section>
</Conrep>

 

I ran the command and everything seemed to work; I even ran step 6 to save it back out, and it looked like it took the changes. I rebooted the server back into unRaid, and it's as if nothing changed: I still get the RMRR error as above. After this I went back into Fedora and proceeded to exclude every slot, to no avail:

 

Quote

<Conrep>
<Section name="RMRDS_Slot1" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot2" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot3" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot5" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot6" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot7" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot8" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot9" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot10" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot11" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot12" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot13" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot14" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot15" helptext=".">Endpoints_Excluded</Section>
<Section name="RMRDS_Slot16" helptext=".">Endpoints_Excluded</Section>
</Conrep>

 

Any help would be appreciated. I'm not sure what else to do.

Edited by StevenMattera
  • Like 1
Link to comment
On 10/17/2017 at 12:11 AM, burningstarIV said:

got it running - more details tomorrow - thanks for helping with it - ill still have to do some tweaking but its all up and running correctly. It was just the bios that was a problem 

GOOD because work got busy and I won't be able to get back to my server until next week.

Link to comment
4 hours ago, StevenMattera said:

I'm trying to run PFSense in a VM on my HP ProLiant MicroServer Gen 8. (System ROM: J06 11/02/2015) I installed an Intel Pro/1000 Dual Post NIC that I bought off Amazon into it's single PCI-Express x16 slot hoping to be able to simply pass it through to the VM. However have ran into issues with it being able to grabbing the NIC. First off when doing `lspci` I see the following:

 

 

And when checking the IOMMU groups I can see that the PCI bridge shares the same IOMMU groups:

 

 

So I followed your guide (and many other guides) and enabled the ACS override in my syslinux.cfg:

 

 

This fixes my IOMMU group issue!

 

 

So I go in and add my NIC to my VM with the following XML:

 

 

Go to start the VM and nope I get this in my `syslog` when trying to start my VM:

 

 

So with a bunch of google searching last night (before I found this thread) I found the advisory linked above. (https://support.hpe.com/hpsc/doc/public/display?sp4ts.oid=7271259&docId=emr_na-c04781229&docLocale=en_US) Sweet there is a fix for it! I load up the latest version of fedora on to a flash drive and load up the latest rpms for hp-health and hp-scripting-tools for RHEL7 on to another flash drive and followed the instructions. I created a `exclude.dat` file and added the slot number on it, however I wasn't sure if it was talking about the slot number on the ILO System Information screen (PCI Slot 1) or the bus number (0x07) so I excluded both:

 

 

I ran the command and everything seemed to work I even ran step 6 to save it back out and it looked like it took the changes. I reboot the server back into Unraid and it's as if nothing has changed I still get the IOMMU error like above. After this I also went back into Fedora and proceeded to exclude every slot to no avail

 

 

Any help would be appreciated. I'm not sure what else to do.

 

 

Post --  System>Tools> Diagnostics

 

Are you trying to pass through half the card or the whole card? Did you isolate it from unRaid (as in, unRaid isn't trying to use it for server communication)?

 

Do you have an older bios you can try?

 

 

Link to comment
15 minutes ago, 1812 said:

 

 

Post --  System>Tools> Diagnostics

 

Are you trying to pass through half the card or the whole card? Did you isolate it from unRaid ( as in, unRaid isn't trying to use it for server communication)?

 

do you have an older bios you can try?

 

 

 

The whole card, as I want to use the two ports as WAN and LAN ports in pfSense. I did try to stub the card so unRaid wouldn't use it as a NIC, but that didn't help and produced the same error. Unfortunately, the 2015 bios is all I have. The oldest revision I can find on HPE's website is 2013.04.02 (A) (24 Jun 2013), but I don't have an active contract or warranty. I will post the System -> Tools -> Diagnostics as soon as I get home.

Edited by StevenMattera
Link to comment
  • 2 weeks later...
4 hours ago, burningstarIV said:

Any ideas on how to get turbo speed to work on the cpu? It's enabled in bios, and I have tried high performance mode. I installed tips and tweaks, but it doesn't seem to allow the turbo setting for the xeon cpu, even with performance settings turned on there. Has anyone been able to get it working?

 

 

read this and test as outlined:

 

Following the test procedure above confirms that it's working on mine (currently a dl580 g7, previously a dl380 g6). In the bios, you need to set power to os control, or at least I had to. My tips and tweaks shows the following:

screen.png.98ec514181a89437f059660e22757283.png

 

 

unRaid won't show you turbo frequencies, for some reason I don't remember. And vm's won't report it correctly either, nor show boosted frequencies even when it is boosting.

 

Link to comment
  • 2 weeks later...

I'm back for another round.

 

This time it's a dl360 g6 - downgraded bios to p64 05/05/2011. Put in all the regular tricks to pass through an hp quad intel ethernet card, but it keeps coming up with "ineligible for IOMMU domain attach due to platform RMRR".

 

Using vfio-pci.ids= to exclude each port to pass through on the pfsense vm, plus pcie_acs_override= and vfio_iommu_type1.allow_unsafe_interrupts=1.

 

Any ideas, or has anyone done it? I've got another slightly earlier bios, but it won't downgrade to it; it fails with some failure-to-erase-bios error.

Link to comment

Oh yeah - got the dl580 g7 working pretty well now. It hangs for about 10-60 mins on the gui when starting, but otherwise is working well.

 

Dl580 g7

4x X7560 (2.26GHz/8-core) 

192 gb ram 

Raid 0 2x Hynix 512GB SSD NVMe PCIe Cache 

2x 128gb Sandisk x300 ssd 

Dell H310 6Gbps SAS HBA w/ LSI 9211-8i P20 IT Mode

1x Seagate 4TB (ST4000LM024)

2x Egva gtx 1060 

1x Radeon hd 7750

 

 

 

Anyone have much experience with Citrix xendesktop remote machine setup ? 

 

IMG_3979.JPG

Link to comment
5 hours ago, burningstarIV said:

oh yeah - got the dl580 g7 working pretty well now. It hangs for about 10-60 mins on the gui when starting - but otherwise is working well. 

 

This is odd. Post Tools>Diagnostics.

 

The only time I've had issues with the web GUI not being available is when I was running pfsense as my router with no backup and had multiple dockers trying to check for updates at boot time. They can't reach the Internet because they check before vm's start in 6.2.4, and then each has to time out. Like 300 seconds each, I think?

 

 

What is your normal idle power usage? For my main server running 2 E7-4870's (40 threads and 2 psu's), I idle at 300ish watts depending on how many desktops I'm running. But my other, running 4 E7-4870's (used for massive video editing projects, 80 threads, 3-4 psu's), idles at 550 watts.... so just curious.

 

5 hours ago, burningstarIV said:

Anyone have much experience with a Citrix XenDesktop remote machine setup?

 

not me!


In iLO it's showing about 750 W mostly idling, on 4x 1200 W power supplies. The few times I've checked it running VMs and everything, it shows about 850 W, which I thought seemed a little high. I've put it into power-saving mode before and it went down to about 700 W.

 

... I've got a couple of different ones running ... all connected into a single WRT54G.


  • 4 weeks later...

Picked up a DL360e Gen8 for some pfSense passthrough, but I'm having some issues with its IOMMU groups and it's not working. I've tried the stub and vfio-pci.ids, with vfio_iommu_type1.allow_unsafe_interrupts=1 - neither has worked. I think it's on a 2014 BIOS and I didn't see a newer one that I could get in the 2017 updates ... passing through an HP NC364T Intel 4-port. Do you have any experience with it, or would a Dell-branded Broadcom work any differently?

 

I was trying on my 360 G6 but kept running into RMRR issues even with a pre-2012 BIOS .. gave up and got this Gen8, which is great, but it would be better if the network card worked ...

 

IOMMU group 25
    [111d:8018] 09:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
    [8086:10bc] 0a:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
    [8086:10bc] 0a:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
IOMMU group 26
    [111d:8018] 09:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0e)
    [8086:10bc] 0b:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
    [8086:10bc] 0b:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
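A listing like the one above can be dumped on any box with a loop over sysfs; a minimal sketch, run on the unRaid host (nothing HP-specific about it):

```shell
# List every PCI device with its IOMMU group, so you can see whether the
# NIC ports share a group with the PES12N3A PCIe switch sitting above them.
shopt -s nullglob    # if no IOMMU groups exist, print nothing instead of the glob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    echo "IOMMU group ${group%%/*}: ${dev##*/}"
done
```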
 

 

 


default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append  intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream pci-stub.ids=8086:10bc initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
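The append line above stubs the four NIC functions with the older pci-stub driver. On kernels where vfio-pci is built in, the same thing is usually done by binding them to vfio-pci directly; a sketch of the equivalent append line (same 8086:10bc vendor:device ID, which matches all four 82571EB ports):

```
append intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream vfio-pci.ids=8086:10bc initrd=/bzroot
```

Either way, the goal is the same: keep the host's e1000e driver off those ports so libvirt can hand them to the VM.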

 

 

 

 

<domain type='kvm'>
  <name>Pfsense</name>
  <uuid>52c7ad92-88b0-c265-c74a-504464704cd6</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
  </metadata>
  <memory unit='KiB'>4718592</memory>
  <currentMemory unit='KiB'>4718592</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='10'/>
    <vcpupin vcpu='1' cpuset='22'/>
    <vcpupin vcpu='2' cpuset='11'/>
    <vcpupin vcpu='3' cpuset='23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/52c7ad92-88b0-c265-c74a-504464704cd6_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Pfsense/pfsensemain.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/pfSense-CE-2.3.5-RELEASE-amd64.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:52:42:78'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>
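Before starting a VM like the one above, it's worth confirming which kernel driver owns each passed-through function; the addresses here are taken from the `<hostdev>` entries in the XML, so adjust them for your box:

```shell
# Check the driver bound to each NIC function; they should show vfio-pci
# (or pci-stub) before the VM starts, not the host's e1000e driver.
for addr in 0000:0a:00.0 0000:0a:00.1 0000:0b:00.0 0000:0b:00.1; do
    link="/sys/bus/pci/devices/$addr/driver"
    if [ -e "$link" ]; then
        echo "$addr -> $(basename "$(readlink -f "$link")")"
    else
        echo "$addr -> no driver bound (or device not present)"
    fi
done
```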

 

 

 

 

