HP Proliant / Workstation & unRaid Information Thread



3 hours ago, sota said:

there's one problem with that:

every time I hot swap a disk in, it restarts that module and spams the log again.

 

Won't be an issue for me as the drive bays on my Gen8 Microserver are non-hot swap.  Can't recall exactly which model it is but it only has 4 bays and is running a Pentium G2020T CPU.

Link to comment
  • 2 weeks later...

I am attempting to pass through a PNY Nvidia GT 1030 graphics card in my HP DL380p (Gen8) to a VM in Unraid. I feel like I have been trying forever and have read just about every thread out there. At this point I think I may have tinkered too much. So far this is what I have done. Please help.

 

Error:

internal error: qemu unexpectedly closed the monitor: 2021-01-15T17:59:22.466043Z qemu-system-x86_64: -device vfio-pci,host=0000:04:00.1,id=hostdev1,bus=pci.0,addr=0x6.0x1: vfio 0000:04:00.1: failed to setup container for group 33: Failed to set iommu for container: Operation not permitted

 

Bios:

SR-IOV - enabled

Embedded graphics primary, installed graphics secondary (before this, the server was using the 1030 as primary and I was losing all output)

 

Unraid: 

OS: 6.9-rc1

syslinux config: Unraid OS: append vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=id:10de:1d01,10de:0fb8 initrd=/bzroot

System Devices: bound the audio and video functions (same IOMMU group) to vfio

PCIe ACS override - enabled (also seen above)

Allow unsafe interrupts (also seen above) 
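 

(For context, the relevant stanza of my syslinux.cfg looks roughly like this; the label and kernel lines are the stock Unraid ones, and the append line is the same one quoted above:)

label Unraid OS
  menu default
  kernel /bzimage
  append vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=id:10de:1d01,10de:0fb8 initrd=/bzroot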

 

VM: 

VM XML: added multifunction='on' and set sound and GPU to the same slot (0x06) with different functions (0x0 and 0x1)

VM XML: tried changing my default slot of 5 to slot 6

Tried with and without a vBIOS

 

<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>

 

 

 

unraid.jerry-diagnostics-20210115-1228.zip

Link to comment
8 hours ago, JerryRay59 said:

I am attempting to pass through a PNY Nvidia GT 1030 graphics card in my HP DL380p (Gen8) to a VM in Unraid. I feel like I have been trying forever and have read just about every thread out there. At this point I think I may have tinkered too much. So far this is what I have done. Please help.

 

Error:

internal error: qemu unexpectedly closed the monitor: 2021-01-15T17:59:22.466043Z qemu-system-x86_64: -device vfio-pci,host=0000:04:00.1,id=hostdev1,bus=pci.0,addr=0x6.0x1: vfio 0000:04:00.1: failed to setup container for group 33: Failed to set iommu for container: Operation not permitted

 

Bios:

SR-IOV - enabled

Embedded graphics primary, installed graphics secondary (before this, the server was using the 1030 as primary and I was losing all output)

 

Unraid: 

OS: 6.9-rc1

syslinux config: Unraid OS: append vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=id:10de:1d01,10de:0fb8 initrd=/bzroot

System Devices: bound the audio and video functions (same IOMMU group) to vfio

PCIe ACS override - enabled (also seen above)

Allow unsafe interrupts (also seen above) 

 

VM: 

VM XML: added multifunction='on' and set sound and GPU to the same slot (0x06) with different functions (0x0 and 0x1)

VM XML: tried changing my default slot of 5 to slot 6

Tried with and without a vBIOS

 

<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>

 

 

 

unraid.jerry-diagnostics-20210115-1228.zip 143.66 kB · 1 download

 

You did not follow the steps in the first post for "VM problems - pass-through".

 

your logs show

0000:04:00.1: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.

0000:04:00.1: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.

 

your fix is: 
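 

(For reference, the short version of that fix is excluding the GPU's slot from RMRR using HP's conrep utility from the Scripting Toolkit. This is a rough sketch from memory, assuming the Gen8 RMRDS definition file; treat the file names and the slot entry as examples and check the first post for the full walkthrough:)

# dump the current RMRDS settings into a data file
./conrep -s -x conrep_rmrds.xml -f rmrds.dat

# edit rmrds.dat and set the slot holding the GPU to Endpoints_Excluded,
# then write the change back and power cycle the server
./conrep -l -x conrep_rmrds.xml -f rmrds.dat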

 

Link to comment
1 hour ago, trurl said:

Do not use any controller RAID mode for any of your disks 

I think it says in one of the first posts that, because the HP controllers don't have a JBOD mode, the only way around this without using an add-on card (LSI / HP H210/220) is to create individual RAID 0 volumes which are then presented to Unraid, which can make data recovery difficult. Is there another way?
 

Thanks,

 

Myles

Link to comment
5 hours ago, Myleslewis said:

I think it says in one of the first posts that, because the HP controllers don't have a JBOD mode, the only way around this without using an add-on card (LSI / HP H210/220) is to create individual RAID 0 volumes which are then presented to Unraid, which can make data recovery difficult. Is there another way?
 

Thanks,

 

Myles

 

Correct. Currently there is no other way. But there is something in development by other forum members to use/force the controller in JBOD mode even though it's not available in the BIOS. If that becomes viable then I'll definitely share it here.
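 

(In the meantime, for anyone going the per-disk RAID 0 route, it looks roughly like this from the Smart Storage Administrator CLI (ssacli, or hpssacli/hpacucli on older toolsets). The slot number and drive IDs below are examples; find your own with the first command:)

# list the controller, its slot, and the physical drive IDs
ssacli ctrl all show config

# create one single-drive RAID 0 logical drive per physical disk
ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
ssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0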

 

 

Link to comment

I don’t know if it’s perhaps a problem with the SSD not having HP firmware on it? Has anyone else had much luck using non-HP drives in a DL380 G6/G7 etc.?
 

The fact that the controller is seeing the drive kind of makes me think it’s an issue with Unraid. I’ve not got any other drives or SSDs I can try it with :(
 

Thanks,

 

Myles 

Link to comment
27 minutes ago, Myleslewis said:

I don’t know if it’s perhaps a problem with the SSD not having HP firmware on it? Has anyone else had much luck using non-HP drives in a DL380 G6/G7 etc.?

 

I use a slew of Samsung EVO SSDs in my ML350p Gen8 without issue, but this is using an HBA. After my initial issues with the onboard RAID controller, I avoid it like the plague. But there are some others running disks in RAID 0 using the controller without issues.

 

43 minutes ago, xqplc said:

When I put the disk in RAID 0, Unraid still doesn't recognise it.

 

I don't have a fix for you aside from getting an HBA or waiting to see if the patch works out. Do you have other disks you can try, to see if it's the disk itself or something else?

 

 

Link to comment
6 minutes ago, 1812 said:

 

I use a slew of Samsung EVO SSDs in my ML350p Gen8 without issue, but this is using an HBA. After my initial issues with the onboard RAID controller, I avoid it like the plague. But there are some others running disks in RAID 0 using the controller without issues.

 

 

I don't have a fix for you aside from getting an HBA or waiting to see if the patch works out. Do you have other disks you can try, to see if it's the disk itself or something else?

 

 

 

I currently use the onboard controller in my DL380 G7, but the plan is to move to an HBA. I actually bought another G7 off social media with the aim of taking the drive cage out and expanding my drives to 16, so I've been waiting to do it all together.
 

What HBA do you use? Can you make a recommendation for the G7?

 

Thanks,

 

Myles 

Link to comment
1 hour ago, Myleslewis said:

 

I currently use the onboard controller in my DL380 G7, but the plan is to move to an HBA. I actually bought another G7 off social media with the aim of taking the drive cage out and expanding my drives to 16, so I've been waiting to do it all together.
 

What HBA do you use? Can you make a recommendation for the G7?

 

Thanks,

 

Myles 

 

I use an H220. I've actually used them in several HP servers, from G6 to this one, for years without issue, and they are pretty cheap. I bought an H240 once but it didn't send drive temps through at the time, so now it sits in a box until that's better supported or I figure out what needs to be configured. But there is no real need for a card like that which does 12Gb/s in most unRaid servers, mine included. Even on 10GbE networking, my transfers to an SSD cache pool are limited by network speed.
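 

(Side note: if anyone wants to sanity-check whether temps are coming through an HBA, smartctl from the Unraid console will show it; /dev/sdb below is just an example device name:)

# print the SMART attributes and pull out the temperature line
smartctl -A /dev/sdb | grep -i temperature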

Link to comment
On 1/17/2021 at 2:16 AM, 1812 said:

 

Correct. Currently there is no other way. But there is something in development by other forum members to use/force the controller in JBOD mode even though it's not available in the BIOS. If that becomes viable then I'll definitely share it here.

 

 

 

FWIW - this worked for me with the onboard controller on a DL380 G8 to get it into HBA mode.

 

The drive temperatures would just disappear after a few days though, so I've since moved to an H220.

Link to comment

Recently I had a 600GB HP SAS drive fail on me, so I bought a few replacements (found 3 for £30 on eBay, supposedly in good working condition). I shut down the server, replaced the drive and booted it back up.

 

The P410i detected the replacement drive, so I didn't have to re-create the RAID 0 volume, and I went on to boot into Unraid. I stopped the array, selected the newly installed drive and then clicked Start Array... Unraid crashed, and the drive, which was previously fine and gave no SMART warnings, then had its fault light solid, suggesting it had failed. When I rebooted the server I was also given a SMART warning saying that imminent failure was detected...?

 

Am I missing something with how I can replace the failed drive ?

 

Thanks,

 

Myles

Edited by Myleslewis
Link to comment
1 hour ago, Myleslewis said:

Recently I had a 600GB HP SAS drive fail on me, so I bought a few replacements (found 3 for £30 on eBay, supposedly in good working condition). I shut down the server, replaced the drive and booted it back up.

 

The P410i detected the replacement drive, so I didn't have to re-create the RAID 0 volume, and I went on to boot into Unraid. I stopped the array, selected the newly installed drive and then clicked Start Array... Unraid crashed, and the drive, which was previously fine and gave no SMART warnings, then had its fault light solid, suggesting it had failed. When I rebooted the server I was also given a SMART warning saying that imminent failure was detected...?

 

Am I missing something with how I can replace the failed drive ?

 

Thanks,

 

Myles

 

You could try another disk to see if you just got a bad one. I have no further guidance, because I don't use the RAID controller with disks in RAID 0, precisely because weird things like this can happen.
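 

(If you want a second opinion from the controller itself before blaming the disk or Unraid, the Smart Storage CLI status commands are a quick check; slot=0 is an example:)

# physical and logical drive health as the P410i sees it
ssacli ctrl slot=0 pd all show status
ssacli ctrl slot=0 ld all show status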

Link to comment

Have tried another disk and this one seems to be okay.

 

Drive is currently being rebuilt at ~93MB/sec. I did manage to get the previous drive to start rebuilding, but it was writing at ~300KB/sec, so it very well could have been a bad drive.

 

I'm definitely going to look to buy an HBA at the end of the month. Thankfully I've currently only got 474GB total in the array, so I can move it all to a portable hard drive I have while I switch all the drives over to the HBA.
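 

(The move itself should just be an rsync of the user shares onto the portable drive, something like the below, assuming the drive mounts under /mnt/disks via Unassigned Devices:)

# copy everything from the array's user shares to the portable disk
rsync -avh --progress /mnt/user/ /mnt/disks/portable/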

 

Thanks,

 

Myles

Link to comment
