[Plugin] VFIO-PCI Config


Skitals

Recommended Posts

Seems like I have a quite similar problem.

I am trying to pass through a PCIe USB controller. It works in Unraid: I can bind it with the VFIO plugin, and it finally shows up under other PCI devices in the VM template.
However, when I add it and start the VM, first the VM gets paused, then the VM tab becomes unresponsive, and finally the whole server goes dead.
GPU passthrough and everything else works; I just need the USB controller to work as well, so I can use my audio interface in macOS (the same problem occurs in a Windows VM).

I've tried reading and trying all sorts of things, but either the PCI card doesn't show up at all as a passthrough option (connected devices are recognized, though, so Unraid clearly can work with this card), or the PCI card is selectable (obviously the devices don't show while the card is bound) but the VMs crash.

I also tried SpaceInvader One's method of adding vfio-pci-ids= to the syslinux line, but that didn't really work somehow, since the connected devices were still available in Unraid, so I'm not sure what went wrong there.
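For reference, the kernel parameter is `vfio-pci.ids` (a dot, not a second hyphen), added to the append line in /boot/syslinux/syslinux.cfg. A minimal sketch, with 8086:a36d standing in for the vendor:device ID of your own controller as reported by `lspci -nn`:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:a36d initrd=/bzroot
```

If the devices on the controller are still usable from Unraid after a reboot, the ID likely didn't match, or another driver claimed the device earlier in boot.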

Any advice or waypoints I should check out?

Link to comment
  • 2 weeks later...
  • 4 weeks later...

I'd like access to a USB device from within a VM. The problem, however, is that it's in the same IOMMU group as the Unraid USB stick:

 

Group 4: 00:14.0 [8086:a36d] USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)

USB devices attached to this controller:

Bus 001 Device 004: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

Bus 001 Device 003: ID 0781:5567 SanDisk Corp. Cruzer Blade

Bus 001 Device 002: ID 289b:0505 Dracal/Raphnet technologies

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

 

USB Device 002 needs to be passed through. Why is this so hard to share with a VM in 2020?
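The grouping can be checked from the Unraid console. Below is a sketch of a small script that lists IOMMU groups from sysfs; `list_iommu_groups` is just an illustrative helper name (it reads the real sysfs path unless you pass another root):

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices it contains.
# list_iommu_groups is an illustrative helper name; pass an alternate
# sysfs root for testing, otherwise it reads the real one.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for dev in "$root"/*/devices/*; do
        [ -e "$dev" ] || continue   # skip if the glob matched nothing
        group=$(basename "$(dirname "$(dirname "$dev")")")
        printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
    done
}

list_iommu_groups
```

Anything sharing a group with the controller that hosts the Unraid boot stick cannot be passed through on its own, which is why the ACS override and USB hotplug approaches come up in the replies below.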

Link to comment
10 hours ago, Squid said:

Because that's the way the hardware is designed.

 

If one of the ACS override options doesn't help you, then you can either permanently attach the device to the VM when you edit it, or use the Hotplug USB plugin.

Thanks, the Libvirt Hotplug USB app worked for me.

Link to comment
41 minutes ago, chrisp7 said:

Is there a reason I can't find this in either Community Applications or Docker Hub? I'm using the beta build of Unraid. I can only see one app from Skitals in Community Applications.

You're probably running 6.9, in which case it's already part of the OS.

Link to comment

Hello,

 

Trying to bind the four ports of an Intel I350 Ethernet card, but when doing it this way with the plugin, it only binds the first two ports. All four ports are in their own groups, and when I inspect the vfio-pci.cfg file, it shows all four devices enumerated properly. Unraid version 6.7.2.
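For comparison, the plugin writes everything onto a single BIND= line in /boot/config/vfio-pci.cfg. Here is a sketch with made-up PCI addresses for a four-port I350 (the `address|vendor:device` entry format is an assumption based on files written by recent releases; older plugin versions may write bare addresses), plus a one-liner to confirm all four entries are present:

```shell
#!/bin/sh
# Write an illustrative vfio-pci.cfg for a four-port Intel I350
# (PCI addresses are examples; 8086:1521 is the I350's vendor:device ID).
cat > /tmp/vfio-pci.cfg <<'EOF'
BIND=0000:02:00.0|8086:1521 0000:02:00.1|8086:1521 0000:02:00.2|8086:1521 0000:02:00.3|8086:1521
EOF

# Print one PCI address per line to confirm all four ports are listed.
sed 's/^BIND=//' /tmp/vfio-pci.cfg | tr ' ' '\n' | cut -d'|' -f1
```

If all four entries are in the file but only two ports end up bound, the failure is in the binding step at boot rather than in what the plugin wrote.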

 

Edit: Meant to say each Ethernet port has its own group; four groups for the card.

 

Thanks!

Edited by ClintE
Link to comment
  • 1 month later...

I used it to pass through one Ethernet adapter in 6.8.3 and the plugin works flawlessly (at first I set "PCIe ACS override" to "Both" to get as many IOMMU groups as possible).

 

Since some users have problems with this plugin, would it be possible for it to auto-delete or ignore /config/vfio-pci.cfg on boot if an error happens?

Link to comment
  • 2 weeks later...

Hi, my syslog gets spammed and is 99% filled within minutes of booting up with millions of lines like this:

Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]
Jan 18 23:54:53 server kernel: vfio-pci 0000:09:00.0: BAR 3: can't reserve [mem 0xf0000000-0xf1ffffff 64bit pref]

I am stubbing my graphics card with this plugin on unRAID 6.8.3. The address 09:00.0 is the device "VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)".

HVM and IOMMU are enabled. PCIe ACS override is disabled.

 

The graphics card passthrough (with a dumped vBIOS ROM) works in a VM, but only at a fixed 800x600 resolution (Nvidia drivers are installed, and the Windows VM reports driver error code 43), and the VM logs say

2021-01-19T21:57:24.002296Z qemu-system-x86_64: -device vfio-pci,host=0000:09:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/disk5/isos/vbios/EVGA_GeForce_GTX_1070.vbios: Failed to mmap 0000:09:00.0 BAR 3. Performance may be slow

Anybody seen this before? Can't find anything like it on the forum.

 

EDIT: Found some more info. According to a linked post, booting the server without HDMI plugged in removed the spamming line. However, after plugging HDMI back in and booting the VM, the VM logs repeat lines like

2021-01-19T22:17:27.637837Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101afe, 0x0,1) failed: Device or resource busy
2021-01-19T22:17:27.637849Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x101aff, 0x0,1) failed: Device or resource busy
2021-01-19T22:17:27.648663Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
2021-01-19T22:17:27.648690Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x4810, 0x1fef8c01,8) failed: Device or resource busy
2021-01-19T22:17:27.648784Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102000, 0xabcdabcd,4) failed: Device or resource busy
2021-01-19T22:17:27.648798Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region3+0x102004, 0xabcdabcd,4) failed: Device or resource busy

Windows Device Manager still says there are driver errors, and there are console-like artifacts horizontally across the screen, including a blinking cursor, on top of Windows. It seems like the Unraid console and the Windows VM (or is it the vfio stubbing?) are fighting over the GPU. I have yet to try the recommendation in the linked post to unbind the console at boot with the go script.
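For anyone hitting the same conflict: the workaround usually posted for the `BAR 3: can't reserve` spam is to release the console framebuffer from the GPU before the VM starts, for example with lines like these in /boot/config/go (the sysfs paths are the standard ones, but treat this as a community workaround, not an official fix):

```
# Detach the virtual terminal consoles from the GPU framebuffer
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
# Unbind the EFI framebuffer so vfio-pci can reserve the GPU's BARs
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
```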

Edited by ZooMass
More info about VM
Link to comment
  • 1 month later...

Been getting this error since I started using 6.9 RC1:

 

Quote

The author (or moderators of Community Applications) of the plugin template (https://raw.githubusercontent.com/Skitals/unraid-vfio-pci/master/plugins/vfio.pci.plg) has specified that this plugin is incompatible with your version of unRaid (6.9.1). You should uninstall the plugin here: Minimum OS Version: 6.7.0 Maximum OS Version: 6.9.0-beta1

 

VFIO-PCI Config has been working fine for me, including on the 6.9.1 official release.

 

Thanks,

craigr

Link to comment
  • 1 month later...

unRAID 6.9.2.

 

Can't find the plugin...

[screenshot]

 

There's this option in Unraid under System Devices. Is it the same thing?

[screenshot]

 

This warning is appearing for me:

Quote

Warning: Your system has booted with the PCIe ACS Override setting enabled. The below list does not reflect the way IOMMU would naturally group devices.
To see natural IOMMU groups for your hardware, go to the VM Manager page and set the PCIe ACS override setting to Disabled.

 

Link to comment
  • 2 weeks later...

If I stub my Nvidia Quadro 1000 from Unraid, can I still use it in Plex to transcode? After I installed the Nvidia drivers for Plex, every time I use IPMI I get a black screen after the Unraid GRUB boot-up finishes. I made sure the BIOS is set to use the onboard VGA, but it never works over IPMI or my KVM. The web interface works fine. Thanks in advance.

Link to comment
17 hours ago, MrxFantatsicx said:

If I stub my Nvidia Quadro 1000 from Unraid, can I still use it in Plex to transcode?

 

No. Stubbing a device by binding it to vfio-pci means the device is completely hidden from Unraid (and therefore from Docker). The only thing you can do with it is pass the device through to a VM. There is no value in installing Nvidia drivers if you stub the device.
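A quick way to confirm what owns a device is to check its driver symlink in sysfs. A sketch follows; `driver_of` is an illustrative helper name and 0000:09:00.0 is an example address:

```shell
#!/bin/sh
# Report which kernel driver a PCI device is currently bound to.
# driver_of is an illustrative helper; the address below is an example.
driver_of() {
    if [ -L "$1/driver" ]; then
        basename "$(readlink "$1/driver")"   # e.g. "vfio-pci" or "nvidia"
    else
        echo '(none)'
    fi
}

driver_of /sys/bus/pci/devices/0000:09:00.0
```

A stubbed card reports vfio-pci; for Plex transcoding it would need to report nvidia instead.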

 

17 hours ago, MrxFantatsicx said:

After I installed the Nvidia drivers

 

You should read the first post in the thread for Nvidia drivers:

  https://forums.unraid.net/topic/98978-plugin-nvidia-driver/

and post any questions in that thread. Be sure to describe the issue in detail and upload your diagnostics (from Tools -> Diagnostics) 

 

Link to comment
  • 2 weeks later...
On 4/5/2020 at 5:38 PM, ljm42 said:

Sure, on the flash drive delete the file /config/vfio-pci.cfg

 

Note that this plugin is just a front-end that writes the config file for you. More details about the config file here:

 

 

I don't seem to have that file on my drive. I made a mistake with the config and now I'm unable to boot. All I see are the files in the plugins folder; I deleted those but I'm still unable to boot.

Link to comment
11 hours ago, louij2 said:

I don't seem to have that file on my drive. I made a mistake with the config and now I'm unable to boot. All I see are the files in the plugins folder; I deleted those but I'm still unable to boot.


If there is no config/vfio-pci.cfg file then your problems are not related to your vfio-pci setup. I'd recommend starting a new thread in the general support.

 

If the system won't boot then you won't be able to upload diagnostics, but it will help if you can post a screenshot of whatever is in the root folder of the flash drive along with the config folder

 

Then connect a monitor and keyboard to the system and post a photo of whatever is on the screen when the text stops scrolling.

 

This needs to be in a new thread though, it will get lost in this discussion of vfio-pci

  • Like 1
Link to comment
On 5/2/2021 at 10:19 PM, ljm42 said:


If there is no config/vfio-pci.cfg file then your problems are not related to your vfio-pci setup. [...]

I had to just pull my flash backup off My Servers and copy everything over. I did try copying just the kernel first but it wasn't that.

Link to comment
  • 1 year later...

Hi

 

I have two GPU cards, an RX 560 and an RTX 1063.

motherboard:  msi b460m mortar
unraid: 6.10.2

 

I bind the RX 560 to my VM and start it, but it shows errors:
 

[screenshot]

 

Then I go to the vfio-pci bind list at http://192.168.1.10/Tools/SysDevs, and it shows this:

[screenshot]

 

I can't select the RX 560 to bind; it's greyed out... I want to know why that is. Can anyone help me?

Thanks very much!!!
 

Edited by aikin
lost some messages
Link to comment
57 minutes ago, aikin said:

I have two GPU cards, an RX 560 and an RTX 1063... I can't select the RX 560 to bind; it's greyed out... I want to know why that is. Can anyone help me?
 

Why are you using this plugin? It's no longer required for the version you are on. Also, we need to see the IOMMU groups; you have more devices bound to vfio than just the required devices.

The RX is not bound. You may need to change the bindings if you recently added the second card, as that could change the allocations.

 

Link to comment
3 hours ago, SimonF said:

Why are you using this plugin? It's no longer required for the version you are on. Also, we need to see the IOMMU groups; you have more devices bound to vfio than just the required devices.

The RX is not bound. You may need to change the bindings if you recently added the second card, as that could change the allocations.

 

 

OK, I can give you more screenshots. These are all the bindings.

 

[screenshot]
 

And my VM device settings:

[screenshot]


[screenshot]

And the full XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Macinabox Monterey</name>
  <uuid>91f99595-ecf1-4f45-8a02-ef10f4da2a6b</uuid>
  <description>MacOS Monterey</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="default.png" os="osx"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
    <nvram>/mnt/user/system/custom_ovmf/Macinabox_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='1' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Macinabox Monterey/Monterey-opencore.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-PLEXTOR_PX-128M5S_P02352107534'/>
      <target dev='hdd' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:c9:fe:91'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc338'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04d9'/>
        <product id='0xa168'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'/>
  <qemu:commandline>
    <qemu:arg value='-usb'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='************************'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,vendor=GenuineIntel,+invtsc,kvm=on'/>
  </qemu:commandline>
</domain>


Finally, you said I need to change the bindings? How? Thanks for your quick reply!!

 

 

Edited by aikin
messages
Link to comment
3 hours ago, SimonF said:

Why are you using this plugin? It's no longer required for the version you are on. Also, we need to see the IOMMU groups; you have more devices bound to vfio than just the required devices.

The RX is not bound. You may need to change the bindings if you recently added the second card, as that could change the allocations.

 

 

I have tried swapping the positions of the two graphics cards; then the RTX 1063 can't be selected. Is my motherboard the cause? Is my motherboard not supported?

[screenshot]

Link to comment
