"failed to set iommu for container: Operation not permitted" (pfSense + NIC)



Hi Guys!

Trying to add a network card to my pfSense VM.

 

3Com Corporation 3c940              37:09.0 0200: 10b7:1700

Realtek RTL8111/8168/8411          03:00.0 0200: 10ec:8168

 

Added pci-stub.ids=10ec:8168,10b7:1700 to my syslinux.cfg
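
For reference, those IDs have to end up on the kernel append line. Assuming a stock unRAID-style syslinux.cfg (your label name, initrd and other options may differ), the boot entry ends up looking roughly like this:

    label unRAID OS
      menu default
      kernel /bzimage
      append pci-stub.ids=10ec:8168,10b7:1700 initrd=/bzroot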

rebooted

 

Added the following line to my .xml-file

(I'm only testing ONE card at the moment as a proof of concept)

 

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x37' slot='0x09' function='0x0'/>

      </source>

    </hostdev>

 

That was changed to:

(the last two lines were added automatically)

 

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x37' slot='0x09' function='0x0'/>

      </source>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>

    </hostdev>

 

However, I receive the following error when trying to start the VM:

 

virsh start pfSense160325

error: Failed to start domain pfSense160325

error: internal error: early end of file from monitor: possible problem:

2016-03-25T20:01:38.358902Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to set iommu for container: Operation not permitted

2016-03-25T20:01:38.358943Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to setup container for group 13

2016-03-25T20:01:38.358957Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: vfio: failed to get group 13

2016-03-25T20:01:38.358969Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: Device initialization failed

2016-03-25T20:01:38.358982Z qemu-system-x86_64: -device vfio-pci,host=37:09.0,id=hostdev0,bus=pci.2,addr=0x6: Device 'vfio-pci' could not be initialized
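
For what it's worth, the group number in the error can be cross-checked against sysfs. Something along these lines (standard sysfs/lspci, nothing unRAID-specific) shows which IOMMU group the 3Com card sits in and which driver currently owns it:

    # IOMMU group the card landed in (should end in .../iommu_groups/13)
    readlink /sys/bus/pci/devices/0000:37:09.0/iommu_group

    # driver currently bound to the card (should be pci-stub after the syslinux.cfg change)
    lspci -nnk -s 37:09.0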

 

Any clues?!


I have also seen this... What I did (and it might not be right) was update qemu.conf and change the cgroup_device_acl part like this:

 

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/vfio/1","/dev/vfio/2","/dev/vfio/3",
    "/dev/vfio/4","/dev/vfio/5","/dev/vfio/6",
    "/dev/vfio/7","/dev/vfio/8","/dev/vfio/9",
    "/dev/vfio/10","/dev/vfio/11","/dev/vfio/12",
    "/dev/vfio/13","/dev/vfio/14","/dev/vfio/15",
    "/dev/vfio/16","/dev/vfio/17","/dev/vfio/18",
    "/dev/vfio/19","/dev/vfio/20","/dev/vfio/21",
    "/dev/vfio/22","/dev/vfio/23","/dev/vfio/24",
    "/dev/vfio/25","/dev/vfio/26","/dev/vfio/27",
    "/dev/vfio/28","/dev/vfio/29","/dev/vfio/30",
    "/dev/vfio/31","/dev/vfio/32"
]
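
After changing qemu.conf you also need libvirt to re-read it (how you restart libvirt depends on your setup), and it's worth checking that the group node the ACL refers to actually exists:

    # there should be one numbered node per VFIO-bound IOMMU group, e.g. /dev/vfio/13
    ls -l /dev/vfio/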

 

 

@LT, does this cause any harm?

 

//Peter


Hi!

Thanks for your input!

However, I'm unable to see any difference compared to my current setup (mine is just longer):

 

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/vfio/1","/dev/vfio/2","/dev/vfio/3",
    "/dev/vfio/4","/dev/vfio/5","/dev/vfio/6",
    "/dev/vfio/7","/dev/vfio/8","/dev/vfio/9",
    "/dev/vfio/10","/dev/vfio/11","/dev/vfio/12",
    "/dev/vfio/13","/dev/vfio/14","/dev/vfio/15",
    "/dev/vfio/16","/dev/vfio/17","/dev/vfio/18",
    "/dev/vfio/19","/dev/vfio/20","/dev/vfio/21",
    "/dev/vfio/22","/dev/vfio/23","/dev/vfio/24",
    "/dev/vfio/25","/dev/vfio/26","/dev/vfio/27",
    "/dev/vfio/28","/dev/vfio/29","/dev/vfio/30",
    "/dev/vfio/31","/dev/vfio/32","/dev/vfio/33",
    "/dev/vfio/34","/dev/vfio/35","/dev/vfio/36",
    "/dev/vfio/37","/dev/vfio/38","/dev/vfio/39",
    "/dev/vfio/40","/dev/vfio/41","/dev/vfio/42",
    "/dev/vfio/43","/dev/vfio/44","/dev/vfio/45",
    "/dev/vfio/46","/dev/vfio/47","/dev/vfio/48",
    "/dev/vfio/49","/dev/vfio/50","/dev/vfio/51",
    "/dev/vfio/52","/dev/vfio/53","/dev/vfio/54",
    "/dev/vfio/55","/dev/vfio/56","/dev/vfio/57",
    "/dev/vfio/58","/dev/vfio/59","/dev/vfio/60",
    "/dev/vfio/61","/dev/vfio/62","/dev/vfio/63",
    "/dev/vfio/64","/dev/vfio/65","/dev/vfio/66",
    "/dev/vfio/67","/dev/vfio/68","/dev/vfio/69",
    "/dev/vfio/70","/dev/vfio/71","/dev/vfio/72",
    "/dev/vfio/73","/dev/vfio/74","/dev/vfio/75",
    "/dev/vfio/76","/dev/vfio/77","/dev/vfio/78",
    "/dev/vfio/79","/dev/vfio/80","/dev/vfio/81",
    "/dev/vfio/82","/dev/vfio/83","/dev/vfio/84",
    "/dev/vfio/85","/dev/vfio/86","/dev/vfio/87",
    "/dev/vfio/88","/dev/vfio/89","/dev/vfio/90",
    "/dev/vfio/91","/dev/vfio/92","/dev/vfio/93",
    "/dev/vfio/94","/dev/vfio/95","/dev/vfio/96",
    "/dev/vfio/97","/dev/vfio/98","/dev/vfio/99"

 


I think you need to pass through all devices in the same IOMMU group, so you will need to pass all of these through to the same VM:

/sys/kernel/iommu_groups/13/devices/0000:37:05.0
/sys/kernel/iommu_groups/13/devices/0000:37:09.0
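
You can list the group contents yourself with something like:

    # everything that shares IOMMU group 13 with the 3Com card
    ls /sys/kernel/iommu_groups/13/devices/

    # or dump every group at once
    for g in /sys/kernel/iommu_groups/*; do echo "group ${g##*/}:"; ls "$g/devices"; done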

 

 

So put this XML in and try:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x37' slot='0x09' function='0x0'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x37' slot='0x05' function='0x0'/>
      </source>
    </hostdev>
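
Depending on what 37:05.0 actually is, you may also want to add its vendor:device ID to the pci-stub.ids list so the host driver never grabs it; you can look the ID up with:

    lspci -nn -s 37:05.0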

 

 

You shouldn't need to pass through the PCI bridge, so you can skip:

/sys/kernel/iommu_groups/13/devices/0000:00:1e.0
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)

