***GUIDE*** Passing Through Network Controllers to unRAID 6 Virtual Machines


jonp

Recommended Posts

2 minutes ago, shaunmccloud said:

Not a huge deal for me, I have another dual port NIC I can drop in until I get my switch with two SFP+ ports so I can use my SolarFlare NIC in a virtualized pfSense install.

To be clear, my issue was getting my CentOS VM to have network access.  It was possible by passing the NIC through directly to the VM (as I indicated in a previous post).

But that functionality for me was broken with the previous Unraid update.

I'm using a VM on my QNAP NAS to serve as a replacement.

Link to comment
1 minute ago, eds said:

To be clear, my issue was getting my CentOS VM to have network access.  It was possible by passing the NIC through directly to the VM (as I indicated in a previous post).

But that functionality for me was broken with the previous Unraid update.

I'm using a VM on my QNAP NAS to serve as a replacement.

I'd rather not take the chance on passing a single i350 through right now since my current switch won't link with my SolarFlare NICs if the connection is made with a 10GbE DAC (even when forced to 1GbE).

Link to comment
  • 4 weeks later...
On 4/14/2020 at 6:55 AM, shaunmccloud said:

Just realized I will be adding a second SolarFlare 10GbE dual-port NIC. How do I pass just one of them through to a VM (so 2 ports to a VM and 2 ports to unRAID)?

The method in this thread is both outdated and generally not required for NICs. Stubbing, GPUs aside, is only *required* when a device shares an IOMMU group with other devices.

 

In the event it is required, it's achieved via the vfio-pci.ids=xxxx:XXXX append statement, in lieu of the former pci-stub.ids statement. This binds the hardware to the vfio driver at boot, which is the currently accepted method.
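For illustration, here's a minimal syslinux.cfg sketch; the 8086:1533 vendor:device ID is just an example (it's the Intel i210 that comes up later in this thread), so substitute the ID reported by lspci -n for your own card:

label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:1533 initrd=/bzroot

Keep in mind that vfio-pci.ids matches every device with that ID, so identical multi-port controllers are all claimed together.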

 

I also want to pull one key piece of info from the OP that stands today even more than it did five years ago: direct passthrough of network controllers is NOT a requirement for most use cases under Windows & Unix (I can't comment on the current state of BSD), and in fact forgoes benefits you'd get from a vNIC. See the OP for more.
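For comparison, a vNIC needs nothing more than a stock interface definition in the VM's XML. A minimal sketch, assuming Unraid's default br0 bridge and an example MAC address:

<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>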

 

For VMs of PC type (not Q35), here's a quick and simple way to pass through non-GPU devices:

 

Now, with all that out of the way...

 

Your question is valid, and as of v6.7 there's a fancy new method for exactly this. I've linked to a thread rather than the changelog post, as there's some good info posted by Limetech, but be sure to also read the changelog.

 

With regard to passthrough, there isn't going to be one all-encompassing or future-proof method, for a variety of reasons.

 

For anyone reading this post in the future, or not covered by the above: search for the currently accepted method of performing passthrough (there will be a relevant thread here), as this will give you the best results.

Link to comment
  • 2 weeks later...

Hi all,

Thanks for all the information and tips shared. Unfortunately, for the first time ever, I couldn't find anything that would help me solve my problem.

 

Just like many others, I am trying to create a pfSense VM and pass through a quad-port NIC.

I followed numerous tutorials and tried countless tweaks to my XML, but I just can't get pfSense to detect my card.

 

My card is in its own IOMMU group, as shown below:

IOMMU group 13:	[108e:abcd] 02:00.0 Ethernet controller: Oracle/SUN Multithreaded 10-Gigabit Ethernet Network Controller (rev 01)
	[108e:abcd] 02:00.1 Ethernet controller: Oracle/SUN Multithreaded 10-Gigabit Ethernet Network Controller (rev 01)
	[108e:abcd] 02:00.2 Ethernet controller: Oracle/SUN Multithreaded 10-Gigabit Ethernet Network Controller (rev 01)
	[108e:abcd] 02:00.3 Ethernet controller: Oracle/SUN Multithreaded 10-Gigabit Ethernet Network Controller (rev 01)

 

and my auto-generated XML looks like this:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Pfsense</name>
  <uuid>edef17bf-972b-18f0-8ccb-2db77e95496a</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="FreeBSD" icon="pfsense.png" os="freebsd"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='13'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/edef17bf-972b-18f0-8ccb-2db77e95496a_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/pfSense-CE-2.4.5-RELEASE-amd64.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/domains/Pfsense/vdisk1.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='fr'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

Do you guys have any idea what I am doing wrong?

 

Thanks for your valuable support!

Link to comment
  • 2 months later...
On 2/26/2020 at 10:23 PM, skois said:

Change the second address tags to

<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>

 

so they will be on the same virtual bus.

Now you should see 2 ports when booted

Thanks a lot, you really helped me.

 

I also had an issue with a dual-port i350 NIC.

 

Here are the original and modified configurations:

 

Original:

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>

 

Modified:

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
    </hostdev>
  

 

I hope this helps others, and also that Unraid solves it (I have seen the new beta in a video, and given the new included tools, maybe they already have).

 

Thanks.

Link to comment
8 minutes ago, Golfonauta said:

Thanks a lot, you really helped me.

I also had an issue with a dual-port i350 NIC. [...]

You're welcome! 
Hopefully it will run stable! I haven't had the chance to test it; by the time I figured out how to make it work, my dedicated mini PC had arrived and I did the whole pfSense setup on that instead.
PS: running rock solid for 4 months now (the minibox).

Link to comment
  • 3 weeks later...

Sorry if it's already been discussed (I couldn't find it in the original post), but my motherboard has 4 Intel i210 controllers, each with its own port. With lspci -n I see this:

02:00.0 0200: 8086:1533 (rev 03)
03:00.0 0200: 8086:1533 (rev 03)
04:00.0 0200: 8086:1533 (rev 03)
05:00.0 0200: 8086:1533 (rev 03)

Now, after adding 8086:1533 to my syslinux.cfg, will Unraid still be able to use the first 2 ports (02:00 and 03:00, which I'm currently using in bond0) while I pass 04:00 and 05:00 through to a pfSense VM?

Link to comment
6 hours ago, dnLL said:

Sorry if it's already been discussed (I couldn't find it in the original post), but my motherboard has 4 Intel i210 controllers, each with its own port. With lspci -n I see this:


02:00.0 0200: 8086:1533 (rev 03)
03:00.0 0200: 8086:1533 (rev 03)
04:00.0 0200: 8086:1533 (rev 03)
05:00.0 0200: 8086:1533 (rev 03)

Now, after adding 8086:1533 to my syslinux.cfg, will Unraid still be able to use the first 2 ports (02:00 and 03:00, which I'm currently using in bond0) while I pass 04:00 and 05:00 through to a pfSense VM?

If you stub the device by ID, Unraid won't be able to use any of the ports, since all four share the same vendor:device ID.

I think I saw a way to split it somewhere, but I don't recommend it at all; it may have very bad results.
Maybe some other gurus here can confirm or elaborate.
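For what it's worth, newer Unraid releases can also bind by PCI address rather than by ID, which does allow splitting identical ports. A sketch, assuming the v6.7+ config-file method (check the changelog for the exact syntax on your release): a file at /boot/config/vfio-pci.cfg containing a single BIND line, e.g.

BIND=0000:04:00.0 0000:05:00.0

would bind only those two ports to vfio-pci at boot and leave 02:00.0 and 03:00.0 to Unraid. On 6.9+ the same thing can be done from the GUI via Tools > System Devices.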

Link to comment

Hi

 

Sorry if this has already been asked; I did do a search but couldn't see an answer.

 

I currently have 2 NICs in my system, one for UNRAID and one being passed through to a VM using this guide. 

 

If I wanted to pass a 3rd NIC through to a 2nd VM, would that be possible? I am sure the answer is simple; I just wanted to make sure. What would I need to add to the syslinux file for the second NIC?

 

Thanks for the guide; it was super easy to follow and worked so well.

Link to comment
  • 4 weeks later...
  • 2 weeks later...

Hi all. Please help me with my problem. I have a Supermicro X9SCM-F motherboard, which has a dual NIC onboard, and I installed a PCIe Intel dual NIC. I am trying to create a pfSense VM and pass through the dual NIC card plus one NIC from the motherboard. I'm using Unraid Version: 6.9.0-beta25.

I followed numerous tutorials and tried countless tweaks to my XML, but pfSense doesn't detect any card. If I disable the motherboard NIC, pfSense detects only one NIC.

My XML:

 

<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>


Link to comment
  • 2 weeks later...

Hi UNRAID Community!

I'm glad to see I am not the only one struggling with this issue! This feels like the biggest challenge I have come across in my relatively short UNRAID journey, and I'm looking forward to the sense of relief and accomplishment when I solve it! 😊

 

I have purchased a quad-port Intel i350 NIC. I'm using UNRAID 6.9.0-beta25, so I was able to pass through NIC ports from the GUI. I can see them when creating the VM but get the 'No network interfaces found' note when I boot pfSense.

 

I was trying to find the correlation between the IOMMU groups and the XML but don't fully understand the concept. The 4 at the start of the number appears to be the bus? The 00 in the middle the slot? And the 0, 1, 2, 3 at the end the function? I could be totally wrong, but that was how my brain was trying to make sense of it. I've tried many combos from previous posts without any luck. Anyway, please see my XML attached and a snip of the IOMMU groups. Any ideas/suggestions would be greatly appreciated!

Cheers, Dan!

 

[attached: screenshot of IOMMU groups]

[attached: XML.txt]

Link to comment
On 9/27/2020 at 4:36 AM, Dan! said:

I was trying to find the correlation between the IOMMU groups and the XML but don't fully understand the concept. The 4 at the start of the number appears to be the bus? The 00 in the middle the slot? And the 0, 1, 2, 3 at the end the function?

That's correct. Your XML looks fine. What's the error message you are getting?
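For anyone following along, here is that mapping spelled out against a hypothetical host address of 04:00.2 (bus:slot.function in lspci notation):

<!-- host device 04:00.2 -> bus 0x04, slot 0x00, function 0x2 -->
<address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>

The address inside <source> must match the host device exactly; the second <address type='pci'> line is where the device appears inside the guest, and that one you can choose, as discussed in the posts above.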

Link to comment

When I start pfSense through the VM I get the "No interfaces found!" message, pictured below.

I also just noticed that when I start the VM it uses 100% of the CPU (1 core, 2 threads) from an Intel i3-10100 (4 cores/8 threads) on the GUI dashboard.

Do I need to be looking at this issue from a pfSense perspective? I thought because it was not recognising the NIC it must have been something in the XML.

 

[attached: screenshot of the pfSense console]

Link to comment

Yeah, I had the same issues and managed to pass my i350 card through to pfSense by adjusting the XML file as described here:

However, looking at your XML, it seems you already made the necessary adjustments, so there may be a different issue. Just poking around in the dark, you could try a couple of other things:

  1. use a different machine type. I am using <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
  2. try to pass through only one entry first to see if this works
  3. try to add an alias to your hostdev entry like below
     

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
  4. I am also using the below entry in my syslinux config (Unraid OS) for my network card (adjust the values to your needs):
    kernel /bzimage
    append xen-pciback.hide=(01:00.1)(01:00.2)(01:00.3) initrd=/bzroot

     

Link to comment

Thanks @user2352

On 9/29/2020 at 7:01 PM, user2352 said:

However, looking at your XML, it seems you already made the necessary adjustments, so there may be a different issue. Just poking around in the dark, you could try a couple of other things:

  1. use a different machine type. I am using <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
  2. try to pass through only one entry first to see if this works

I completed the first two steps. Once I tried passing through only one port at a time, it managed to pass one of the ports through. Doing this also changed the XML. For a while it was only recognising one of the ports, but with a few restarts and trying a few different things, it seemed to click and recognise all 4 ports.

 

I've run out of time for tonight but will finish the rest of the install in a couple of days. I'm just happy I can see the 4 ports.

 

This one had me stuck for a few weeks; I feel like I've learned a lot and I really appreciate your help. It's one thing I have enjoyed about the UNRAID community, and I hope to help others as I build on my knowledge.

 

Sharing my XML in case it helps anyone else.

 

Cheers

Dan!

 

[attached: XML.log]

Link to comment
  • 2 months later...

3 hours of thread research and the solution is so simple. I had the same problem with my HP NC364T: I saw only 1 of the 4 ports.

 

I just put all 4 ports in the XML file on one bus and it worked like magic ;)
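For reference, a sketch of the guest-side address pattern this amounts to, assuming the four functions land on guest bus 0x03 (only the first one carries multifunction='on'):

<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>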

 

Thanks to all for investigating this issue and providing the solution.

 

[attached: screenshot of the pfSense VM XML]

Link to comment
  • 2 months later...

Hello everyone! I'm new here and, like many others, I've been trying to pass through a quad NIC for my pfSense VM.

I've read a ton of forum posts and watched SpaceInvaderOne's videos on YouTube, but I still can't get it to work.

I've stubbed the device ID and tried different forms of ACS override. Downstream seems to be the most fitting for my use, though it seems the ports don't all get placed in the same group. That is a problem for further on; my main priority is to get the VM up and running and to do tweaking afterwards.

label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:10bc vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_override=downstream initrd=/bzroot
IOMMU group 14:	
[8086:10bc] 09:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
[8086:10bc] 09:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
IOMMU group 15:	
[8086:10bc] 0a:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
[8086:10bc] 0a:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)

 

I had a running pfSense VM without any NICs bound to it before I purchased my quad NIC, so this install is not complete.

Here is the XML for the VM.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>pfSense</name>
  <uuid>cf81e79c-dfdd-a84f-4e55-733f613af8a5</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='3'/>
    <vcpupin vcpu='1' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/cf81e79c-dfdd-a84f-4e55-733f613af8a5_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='1' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/pfSense/vdisk1.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='sv'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

Can someone please have a look at this with some fresh eyes? I've been staring at this for way too many hours and I hope it's a small problem somewhere.

 

Thanks!

 

[EDIT] For testing purposes I've only checked 2 of the 4 boxes for the quad NIC in the pfSense VM, which is why there are only two hostdevs.


 

 

UPDATE!!!!
After some digging in the logs I finally found what the problem was. I wasn't thinking it would matter whether this was an HP server or a custom build, but it did: the RMRR problem... Patched it using the guide this excellent forum provided, and I did get it to work!

Link to comment
  • 7 months later...
  • 11 months later...

I'm following the steps of the guide and I'm getting an extra line added when I save the VM.

 

I enter

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1f' function='0x6'/>
      </source>
    </hostdev>

 

and it saves as 

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1f' function='0x6'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/>
    </hostdev>

 

 

I'm trying to pass a i-219 onboard NIC to my VM.

 

00:00.0 Host bridge: Intel Corporation 10th Gen Core Processor Host Bridge/DRAM Registers (rev 03)
00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 03)
00:02.0 VGA compatible controller: Intel Corporation CometLake-S GT2 [UHD Graphics 630] (rev 03)
00:14.0 USB controller: Intel Corporation Tiger Lake-H USB 3.2 Gen 2x1 xHCI Host Controller (rev 11)
00:14.2 RAM memory: Intel Corporation Tiger Lake-H Shared SRAM (rev 11)
00:16.0 Communication controller: Intel Corporation Tiger Lake-H Management Engine Interface (rev 11)
00:17.0 SATA controller: Intel Corporation Device 43d2 (rev 11)
00:1c.0 PCI bridge: Intel Corporation Tiger Lake-H PCI Express Root Port #5 (rev 11)
00:1f.0 ISA bridge: Intel Corporation Device 4388 (rev 11)
00:1f.3 Audio device: Intel Corporation Device f0c8 (rev 11)
00:1f.4 SMBus: Intel Corporation Tiger Lake-H SMBus Controller (rev 11)
00:1f.5 Serial bus controller: Intel Corporation Tiger Lake-H SPI Controller (rev 11)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (14) I219-V (rev 11)
01:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963

00:1f.6 0200: 8086:15fa (rev 11)

 

It's being claimed by the stub when I boot up.

 

Sep 17 15:26:41 Tower kernel: pci-stub: add 8086:15FA sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
Sep 17 15:26:41 Tower kernel: pci-stub 0000:00:1f.6: claimed by stub

 

 

When I start the VM, it terminates due to group 7 not being viable.

2022-09-17 22:41:17.152+0000: starting up libvirt version: 8.2.0, qemu version: 6.2.0, kernel: 5.15.46-Unraid, hostname: Tower
LC_ALL=C \
PATH=/bin:/sbin:/usr/bin:/usr/sbin \
HOME='/var/lib/libvirt/qemu/domain-3-Home Assistant' \
XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-3-Home Assistant/.local/share' \
XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-3-Home Assistant/.cache' \
XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-3-Home Assistant/.config' \
/usr/local/sbin/qemu \
-name 'guest=Home Assistant,debug-threads=on' \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-3-Home Assistant/master-key.aes"}' \
-blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/daec4048-aab8-9201-ae0f-316786352df7_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-6.2,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=pc.ram \
-accel kvm \
-cpu host,migratable=on,host-cache-info=on,l3-cache=off \
-m 3072 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":3221225472}' \
-overcommit mem-lock=off \
-smp 1,sockets=1,dies=1,cores=1,threads=1 \
-uuid daec4048-aab8-9201-ae0f-316786352df7 \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=35,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=17,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=21,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-pci-bridge,id=pci.7,bus=pci.1,addr=0x0 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/haos_ova-9.0.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0,index=0 \
-chardev socket,id=charchannel0,fd=34,server=on,wait=off \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-audiodev '{"id":"audio1","driver":"none"}' \
-vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
-k en-us \
-device qxl-vga,id=video0,max_outputs=1,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
-device usb-host,hostdevice=/dev/bus/usb/001/005,id=hostdev0,bus=usb.0,port=2 \
-device usb-host,hostdevice=/dev/bus/usb/001/004,id=hostdev1,bus=usb.0,port=3 \
-device vfio-pci,host=0000:00:1f.6,id=hostdev2,bus=pci.7,addr=0x1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/4 (label charserial0)
qxl_send_events: spice-server bug: guest stopped, ignoring
2022-09-17T22:41:17.220002Z qemu-system-x86_64: -device vfio-pci,host=0000:00:1f.6,id=hostdev2,bus=pci.7,addr=0x1: vfio 0000:00:1f.6: group 7 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.
2022-09-17 22:41:17.709+0000: shutting down, reason=failed

 

Group 7 has other devices within it.  

Sep 17 15:26:41 Tower kernel: pci 0000:00:1f.0: Adding to iommu group 7
Sep 17 15:26:41 Tower kernel: pci 0000:00:1f.3: Adding to iommu group 7
Sep 17 15:26:41 Tower kernel: pci 0000:00:1f.4: Adding to iommu group 7
Sep 17 15:26:41 Tower kernel: pci 0000:00:1f.5: Adding to iommu group 7
Sep 17 15:26:41 Tower kernel: pci 0000:00:1f.6: Adding to iommu group 7
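(For reference, the group's membership can also be listed straight from sysfs:

ls /sys/kernel/iommu_groups/7/devices/

which should show all five 00:1f.x functions from the log above.)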


Is there something else I need to do in this case?

 

I would like to pass my onboard NIC to the VM so it can be on a separate VLAN and not have that VLAN access any exposed shares.

Link to comment
  • 6 months later...

I've got a Windows 11 gaming VM and wanted to have a passed-through NIC for low-latency applications like game streaming, both from Nvidia's cloud and also using Sunshine/Moonlight to stream from the VM host to my Apple TV or Nvidia Shield in the living room. I also did not want to lose the raw speed of accessing the shares on the host from within the VM.

 

I was able to configure a passthrough NIC as my main NIC and then add a virtio-net adapter using the virbr0 interface. Then, in the VM, on the second NIC I manually configured the IP and removed the DNS and default gateway. I am able to access my host using the virbr0 default gateway IP and map the drive in Windows, getting the full speed of virtio-net while all my other network connectivity happens over the passthrough NIC.
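For anyone wanting to replicate this, a minimal sketch of that second, host-only adapter in the VM's XML; this assumes virbr0 is libvirt's 'default' NAT network and uses an example MAC:

<interface type='network'>
  <mac address='52:54:00:12:34:57'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>

Inside Windows that adapter then gets a static IP on the virbr0 subnet with no gateway or DNS, so only traffic to the host (the mapped shares) uses it.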

 

Not sure if anyone else wants to do something like this, but I figured I'd comment that it was possible.

Link to comment
  • 8 months later...
On 3/23/2023 at 10:45 PM, nickp85 said:

I was able to configure a passthrough NIC as my main NIC and then add a virtio-net adapter using the virbr0 interface. [...]

I'd love to see details; pics or a video of a step-by-step would be amazing.

Link to comment
