Can't get GPU passthrough working



I've been working through setting up my first Unraid server. I've configured some plugins, precleared and SMART-tested my HDDs, started the array, set up several cache pools, and added some shares.

 

I'm currently in the process of setting up a Windows 10 Pro VM. I managed to install Windows, and it works fine when I use either Unraid's VNC or RDP into it, as long as "VNC" is selected as my graphics option in Unraid's VM template. Unfortunately, when I select my actual card (EVGA GTX 1070), I can't connect via RDP. Any suggestions as to what the problem might be?

9 hours ago, Turnspit said:

IOMMU enabled in BIOS?

 

GPU IOMMU group bound to VFIO at boot?

 

Did you pass through all parts of the GPU (audio controller, possible USB-port, ...)?

 

Post your .xml-file of the VM as well.

 

I'm not certain, but I believe IOMMU is enabled in the BIOS. I can see my GPU listed under an IOMMU group in Unraid (Tools > System Devices), with its checkboxes unchecked.

 

IOMMU group 74:

[10de:1b81] 50:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)

[10de:10f0] 50:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)

 

I don't know if the GPU's IOMMU group is bound to VFIO at boot (and don't know how to check). For the record, I looked at the VM's log and found this VFIO message. Perhaps my problem?

"2021-08-08T22:55:18.709941Z qemu-system-x86_64: vfio_region_write(0000:50:00.0:region1+0x42478, 0x0,8) failed: Device or resource busy:"

 

 

When you say "pass through all parts", I'm assuming you mean in the VM template, right? If so, I've selected my GPU and the NVIDIA audio device, and tried all three available USB controller options in the template's dropdown.

 

Below is the XML view of the VM's template.

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='31'>
  <name>Windows 10 Pro</name>
  <uuid>b178c5a5-184a-f35a-04d3-5ab9b25d87f2</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>34</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='33'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='34'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='35'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='36'/>
    <vcpupin vcpu='8' cpuset='5'/>
    <vcpupin vcpu='9' cpuset='37'/>
    <vcpupin vcpu='10' cpuset='6'/>
    <vcpupin vcpu='11' cpuset='38'/>
    <vcpupin vcpu='12' cpuset='7'/>
    <vcpupin vcpu='13' cpuset='39'/>
    <vcpupin vcpu='14' cpuset='8'/>
    <vcpupin vcpu='15' cpuset='40'/>
    <vcpupin vcpu='16' cpuset='9'/>
    <vcpupin vcpu='17' cpuset='41'/>
    <vcpupin vcpu='18' cpuset='10'/>
    <vcpupin vcpu='19' cpuset='42'/>
    <vcpupin vcpu='20' cpuset='11'/>
    <vcpupin vcpu='21' cpuset='43'/>
    <vcpupin vcpu='22' cpuset='12'/>
    <vcpupin vcpu='23' cpuset='44'/>
    <vcpupin vcpu='24' cpuset='13'/>
    <vcpupin vcpu='25' cpuset='45'/>
    <vcpupin vcpu='26' cpuset='14'/>
    <vcpupin vcpu='27' cpuset='46'/>
    <vcpupin vcpu='28' cpuset='15'/>
    <vcpupin vcpu='29' cpuset='47'/>
    <vcpupin vcpu='30' cpuset='16'/>
    <vcpupin vcpu='31' cpuset='48'/>
    <vcpupin vcpu='32' cpuset='17'/>
    <vcpupin vcpu='33' cpuset='49'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/b178c5a5-184a-f35a-04d3-5ab9b25d87f2_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='17' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Pro Template/vdisk1.img' index='1'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ac:e6:5f'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-31-Windows 10 Pro/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x50' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x50' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hub type='usb'>
      <alias name='hub0'/>
      <address type='usb' bus='0' port='2'/>
    </hub>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

 

UPDATE: 


I just tried checking the boxes for the following items (in Tools > System Devices) and then rebooting:

IOMMU group 74:

[10de:1b81] 50:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)

[10de:10f0] 50:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)

 

I still can't connect via RDP, but I think it has bound the devices to VFIO as mentioned. Below is a copy of the VFIO-PCI log (found under Tools > System Devices).

 

 

Loading config from /boot/config/vfio-pci.cfg
BIND=0000:50:00.0|10de:1b81 0000:50:00.1|10de:10f0
---
Processing 0000:50:00.0 10de:1b81
Vendor:Device 10de:1b81 found at 0000:50:00.0

IOMMU group members (sans bridges):
/sys/bus/pci/devices/0000:50:00.0/iommu_group/devices/0000:50:00.0
/sys/bus/pci/devices/0000:50:00.0/iommu_group/devices/0000:50:00.1

Binding...
Successfully bound the device 10de:1b81 at 0000:50:00.0 to vfio-pci
---
Processing 0000:50:00.1 10de:10f0
Vendor:Device 10de:10f0 found at 0000:50:00.1

IOMMU group members (sans bridges):
/sys/bus/pci/devices/0000:50:00.1/iommu_group/devices/0000:50:00.0
/sys/bus/pci/devices/0000:50:00.1/iommu_group/devices/0000:50:00.1

Binding...
0000:50:00.0 already bound to vfio-pci
0000:50:00.1 already bound to vfio-pci
Successfully bound the device 10de:10f0 at 0000:50:00.1 to vfio-pci
---
vfio-pci binding complete

Devices listed in /sys/bus/pci/drivers/vfio-pci:
lrwxrwxrwx 1 root root 0 Aug 9 07:54 0000:50:00.0 -> ../../../../devices/pci0000:40/0000:40:03.1/0000:50:00.0
lrwxrwxrwx 1 root root 0 Aug 9 07:54 0000:50:00.1 -> ../../../../devices/pci0000:40/0000:40:03.1/0000:50:00.1

ls -l /dev/vfio/
total 0
crw------- 1 root root 249, 0 Aug 9 07:54 74
crw-rw-rw- 1 root root 10, 196 Aug 9 07:54 vfio

UPDATE 2:

 

Now I'm getting a message from the Fix Common Problems plugin that the log is 100% full. I checked my syslog in /var/log/ and it's over 125 MB. When I tail it, the last 200 records show this message:

 

Aug  9 09:11:16 Threadripper kernel: vfio-pci 0000:50:00.0: BAR 1: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]


Found the solution. 

 

Speaking as an experienced dev, this was way too complicated to set up. I can only imagine how many non-techies give up trying to figure this out. Hopefully future versions of Unraid make this far more intuitive. For anyone else experiencing a similar problem, here is a brief walkthrough of the major steps I took to get an EVGA GTX 1070 SC running on an AMD Threadripper board on Unraid 6.9.2.

 

1. Enable IOMMU, CSM, and Legacy boot mode in the BIOS (note: for some GPUs, UEFI rather than Legacy might work better).

 

2.1 In Unraid, go to Tools > System Devices.

2.2 Scroll down until you find the IOMMU group containing your GPU.

2.3 Check the checkbox for every device in that IOMMU group and save. (A quick way to verify the binding after rebooting is sketched below.)
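If you want to double-check after rebooting that everything really got bound, you can run lspci -nnk -s 50:00.0 and look for "Kernel driver in use: vfio-pci", or use a rough Python sketch like the one below (Python isn't on stock Unraid, so treat this purely as an illustration; 0000:50:00.0 is my card's address, substitute yours):

#!/usr/bin/env python3
# List the driver bound to each device in the GPU's IOMMU group.
# 0000:50:00.0 is my GTX 1070's address - substitute your own.
from pathlib import Path

GPU = "0000:50:00.0"
group = Path(f"/sys/bus/pci/devices/{GPU}/iommu_group/devices")

for dev in sorted(group.iterdir()):
    driver = dev / "driver"
    bound = driver.resolve().name if driver.exists() else "(no driver)"
    print(f"{dev.name}: {bound}")

# Every device should report vfio-pci before the VM will start cleanly.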

 

3. Go to the TechPowerUp website and download the VBIOS file for your GPU:

https://www.techpowerup.com/vgabios/

 

4.1 (Note: this step might only apply to systems with a single NVIDIA GPU.) Download and install a hex editor.

4.2 Use the hex editor to modify the VBIOS file from step 3, deleting the leading header section (search online for the exact section that needs to be removed for your card; a rough scripted version of the same edit is sketched below).

4.3 Save the modified VBIOS file, then upload it to a location on your Unraid server.
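For reference, the manual hex edit boils down to this: the TechPowerUp dump has a vendor header in front of the actual ROM, and the real ROM starts at the 0x55AA signature just before the ASCII text "VIDEO". A rough Python sketch of that same edit (file names are examples; double-check the result against the usual guides before using it):

#!/usr/bin/env python3
# Strip the vendor header from an NVIDIA VBIOS dump: keep everything from the
# 0x55AA ROM signature immediately preceding the ASCII "VIDEO" marker onward.
# This mirrors the manual hex-editor edit - verify the result before using it.

data = open("EVGA.GTX1070.rom", "rb").read()           # example input name

marker = data.find(b"VIDEO")
if marker == -1:
    raise SystemExit("No 'VIDEO' marker found - is this an NVIDIA VBIOS dump?")

start = data.rfind(b"\x55\xaa", 0, marker)
if start == -1:
    raise SystemExit("No 0x55AA ROM signature found before the marker")

with open("EVGA.GTX1070_patched.rom", "wb") as out:    # example output name
    out.write(data[start:])

print(f"Dropped {start} header bytes, patched ROM written.")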

 

5.1 Go to the VM Manager screen and create a new Windows 10 VM.

5.2 For the initial installation, select VNC for the "Graphics Card" field.

5.3 Install Windows 10 and enable Remote Desktop.

5.4 Confirm you can connect via RDP (a small reachability check is sketched below).
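A tiny check I use to confirm the RDP port is actually answering before I start blaming the GPU (the IP address is an example; use your VM's):

#!/usr/bin/env python3
# Confirm the VM's RDP port (3389) is reachable before digging into GPU issues.
import socket

VM_IP = "192.168.1.50"   # example address - use your VM's IP

with socket.socket() as s:
    s.settimeout(3)
    result = s.connect_ex((VM_IP, 3389))

print("RDP port open" if result == 0 else f"no answer (errno {result})")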

 

6.1 Go back to the VM template you created and edit it.

6.2 For the "Graphics Card" field, select your GPU from the dropdown (in my case an EVGA GTX 1070 SC).

6.3 For "Graphics ROM BIOS", select your modified VBIOS file.

6.4 For "Sound Card", you must select the sound device that is in the same IOMMU group as the GPU (you can add a second card if needed).

6.5 Click "UPDATE" to save the template.

 

7.1 Reopen the VM template you just saved. (This extra step is needed because of a bug: the manual edits below get reset if you try to make them through "form view".)

7.2 Click on "XML View".

7.3 Scroll down to the entries that list the slots for your GPU and sound device. (In my case it looks like the snippet below; yours may differ and may need additional edits if more devices share the same IOMMU group.)

 

<rom file='/mnt/transfer_pool/Transfer/Unraid/VBIOS/EVGA.GTX1070.8192.161103_1.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x50' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>

 

7.4 My GPU also has a sound device built in. When the card is plugged into a physical motherboard, both devices sit in the same PCIe slot. Unfortunately, when Unraid maps them onto the virtual motherboard it places them in different slots, which confuses the NVIDIA driver. To fix this, you have to put all the devices from the same IOMMU group into the same virtual slot. For my particular GPU, these were the two lines I modified from above:

      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      (changed to below)

      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
 

      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>

      (changed to below)

      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>

 

7.5 Click "Update" to save the XML. Note: it's essential not to go back to the form view to save the VM template afterwards, as that will discard these manual edits. If you ever change the template again in form view, you will need to save it and then redo the manual edits in "XML View". (A quick way to sanity-check the resulting addresses is sketched below.)
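If you want to sanity-check that the edit stuck, something like this run on the Unraid host will print the guest-side address of every passed-through device (the domain name is mine, and it assumes Python is available there; virsh dumpxml on its own shows the same thing):

#!/usr/bin/env python3
# Print the guest-side PCI address of every passed-through device so you can
# confirm the GPU and its audio share one slot, with multifunction on function 0.
import subprocess
import xml.etree.ElementTree as ET

xml_text = subprocess.run(
    ["virsh", "dumpxml", "Windows 10 Pro"],    # my VM's name - use yours
    capture_output=True, text=True, check=True,
).stdout

for hd in ET.fromstring(xml_text).findall("./devices/hostdev"):
    src = hd.find("source/address").attrib
    dst = hd.find("address").attrib
    print(f"host {src['bus']}:{src['slot']}.{src['function']} -> "
          f"guest slot {dst['slot']} function {dst['function']} "
          f"multifunction={dst.get('multifunction', 'off')}")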

 

8.1 Start the VM and try to connect via RDP after a minute or so (Windows does some sort of driver update, so it takes a bit longer than normal to connect).

8.2 If you can't connect via RDP, force-stop the VM and start it again (for some reason I had to do this twice before it worked).

 

 

 

 


Again, good that you found a solution, but I would advise watching some of SpaceInvader One's videos on the subject. It's super easy once you know the process.

 

I would personally recommend running your GPU for the VM as a secondary video device first, to check that it works. Then dump your own VBIOS using the command-line method. This has proven much more reliable for me.
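Roughly, the command-line dump works by exposing the card's ROM through sysfs and reading it out, along the lines of this sketch (run as root with the card idle and not bound to a VM; the PCI address and output path are examples, and the usual echo 1 > rom / cat rom / echo 0 > rom sequence does the same thing):

#!/usr/bin/env python3
# Sketch of a command-line VBIOS dump: expose the card's ROM via sysfs, read
# it out, then hide it again. Run as root with the card idle (not in use by
# a VM or by the host). The PCI address and output path below are examples.
from pathlib import Path

rom = Path("/sys/bus/pci/devices/0000:50:00.0/rom")

rom.write_text("1")                        # make the ROM readable
data = rom.read_bytes()                    # read the VBIOS image
rom.write_text("0")                        # hide it again

Path("/mnt/user/isos/vbios_dump.rom").write_bytes(data)
print(f"Dumped {len(data)} bytes")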

 

A few of Ed's videos on the topic. Some are a little outdated now, but still relevant:

 

 


Dumping your own VBIOS

 

 

 

Newer method (didn't work for my RTX 3080)

 

 

Advanced GPU passthrough

 

 

VFIO config and binding (now built into Unraid)

 

 

 

 

 

  • 5 months later...

 

On 8/12/2021 at 6:17 PM, gray squirrel said:

Again, good that you found a solution, but I would advise watching some of SpaceInvader One's videos on the subject. It's super easy once you know the process.

 

I would personally recommend running your GPU for the VM as a secondary video device first, to check that it works. Then dump your own VBIOS using the command-line method. This has proven much more reliable for me.

 

A few of Ed's videos on the topic. Some are a little outdated now, but still relevant.

I've watched them all and carefully followed each and every step for almost a month now, and I still can't get my VM with Asus GTX 780 passthrough to work properly.

 

The screen starts and shows Windows booting; sometimes I can even manage to log in before the screen goes black... It's very frustrating and discouraging.

 

If anyone has a clue, I am more than open to try any solution!

 

Thanks!

 

 <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/cache/domains/Gaming VM-2/Asus.780.Direct CU II OC.dump'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>

 

weblibrary2-diagnostics-20220205-1342.zip

15 hours ago, startsomewhere said:

The screen starts and shows Windows booting; sometimes I can even manage to log in before the screen goes black...

Just a couple of questions since it's not entirely obvious from your post

 

When the screen goes black, is the VM still actually running? Have you tried a VM without passing through the video card, and is it stable?

 

Unfortunately, device passthrough tends to be highly dependent on the hardware itself, and there are certain video cards that just don't like it (and possibly the older the card, the more likely this is to happen).

5 hours ago, Squid said:

When the screen goes black, is the VM still actually running? Have you tried a VM without passing through the video card, and is it stable?

 

Using VNC, the VM is stable and runs just fine.  I've even connected remotely from another computer in the house and it's working perfectly.  

 

I've also tried to connect remotely while the VM was using GPU passthrough, thinking that it was maybe an issue with the NVIDIA drivers, but strangely I couldn't connect to it. The VM was showing as running, but I couldn't log into it...

 

One thing I need to mention: I've installed the VirtIO Red Hat driver on the display adapter... Could this be the issue?

44 minutes ago, startsomewhere said:

One thing I need to mention: I've installed the VirtIO Red Hat driver on the display adapter... Could this be the issue?

TBH, not sure -> it shouldn't matter, but it wouldn't hurt to uninstall it. (Or try with a fresh install of another VM to rule everything out.)

On 2/5/2022 at 12:48 PM, startsomewhere said:

 

I've watched them all and carefully followed each and every step for almost a month now, and I still can't get my VM with Asus GTX 780 passthrough to work properly. The screen starts and shows Windows booting; sometimes I can even manage to log in before the screen goes black...

 

The issue is your GPU.  ASUS GPUs back in the day did not support UEFI and frankly I had nothing but problems trying to get them to work with VFIO properly.  I have always had good success with EVGA branded devices.  Even when some of my oldest 6xx series GTX devices were missing UEFI firmware, EVGA sent me a custom patched firmware BIOS to let me add UEFI support to it.  Whereas when I had a brand new ASUS card without UEFI support at one point, they said "too bad, so sad."  Really a shame.

11 hours ago, jonp said:

I have always had good success with EVGA branded devices. 

I also tried with someone's EVGA 970: I downloaded the VBIOS from TechPowerUp and modified it to remove the header in a hex editor, but I haven't been able to get the screen to show the boot menu like I could with my Asus. This is what led me to think that my Asus hardware was OK but not my VM setup... Still really confused. I'm not a tech guy and I'm learning my way through...

1 hour ago, startsomewhere said:

Wow! That's far from good customer service...

 

Edit: So even if I download the latest VBIOS from TechPowerUp that shows UEFI, it will not work?

 

I mean, you can always try. The advanced GPU passthrough video from SpaceInvaderOne is hit or miss when it comes to specific GPUs. Sometimes you get lucky, other times not so much.
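One rough way to at least see whether a particular dump contains a UEFI image at all is to walk its PCI expansion ROM headers and look for a code type 0x03 entry, along these lines (the file name is just an example):

#!/usr/bin/env python3
# Walk the PCI expansion ROM images inside a VBIOS dump and report their code
# types (0x00 = legacy x86 BIOS, 0x03 = EFI). If no type-0x03 image shows up,
# the ROM carries no UEFI/GOP support. The file name is just an example.
import struct

data = open("vbios_dump.rom", "rb").read()
off = 0
while off + 0x1a <= len(data) and data[off:off + 2] == b"\x55\xaa":
    pcir = off + struct.unpack_from("<H", data, off + 0x18)[0]
    if data[pcir:pcir + 4] != b"PCIR":
        break
    length = struct.unpack_from("<H", data, pcir + 0x10)[0] * 512
    code_type = data[pcir + 0x14]
    print(f"image at 0x{off:06x}: code type 0x{code_type:02x}, {length} bytes")
    if data[pcir + 0x15] & 0x80:           # indicator bit 7 set = last image
        break
    off += length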

 

1 hour ago, startsomewhere said:

I also tried with someone's EVGA 970: I downloaded the VBIOS from TechPowerUp and modified it to remove the header in a hex editor, but I haven't been able to get the screen to show the boot menu like I could with my Asus. This is what led me to think that my Asus hardware was OK but not my VM setup... Still really confused. I'm not a tech guy and I'm learning my way through...

 

For the EVGA 970, I'm shocked you can't get that to work just straight out of the box without any download of VBIOS firmware.  Does the system you're using have a built-in integrated GPU on the processor or no?

9 hours ago, jonp said:

Check your BIOS settings and make sure that GPU is enabled and set to act as the primary GPU for your system. If it's not, the add-on GPU may get used by the host, and problems can then occur with GPU passthrough.

I've done it and it still doesn't work. To do so, I had to activate UEFI boot, otherwise my motherboard wouldn't let me select the iGPU as the first screen. I've also disabled the secondary screen.
 

Something strange appeared on the boot screen when rebooting Unraid, though. I don't know if it has something to do with the issue, but just in case... at this point, nothing to lose!!

 

 

6E9E84CE-4F63-4BCB-8238-9EB4EBB75DF3.jpeg


One last thing that doesn't seem right is my GPU in the VM's XML setup. It was assigned to slot 0x00, and the bus of the audio device (bus=0x06) was different from the GPU's, so I changed the bus to match the GPU (bus=0x05).

 

I also changed the GPU line to add multifunction='on', and changed the audio line to function='0x1':

 

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/cache/domains/Gaming VM-2/Asus.780.Direct CU II OC.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>

 

  • 6 months later...
On 2/6/2022 at 11:56 PM, jonp said:

ASUS GPUs back in the day did not support UEFI and frankly I had nothing but problems trying to get them to work with VFIO properly

I had a NIGHTMARE of a time getting my Asus 1060 to work. I was on 6.8.3. I finally built a new rig and threw in an EVGA 3080, and it was much easier (I just had to modify a line in my flash config).
@jonp QUESTION: Should I checkmark/bind my GPU (and its audio) to VFIO at boot in System Devices for my VMs? I pass it through in the VM settings and it seems to be working. I only have Win10, but plan to try OS X and Ubuntu. Thank you!
 

 

On 2/8/2022 at 7:11 AM, startsomewhere said:

my GPU in the VM's XML setup. It was assigned to slot 0x00, and the bus of the audio device (bus=0x06) was different from the GPU's, so I changed the bus to match the GPU (bus=0x05).

As the man said, Asus is very tough to figure out. Did you add "allow unsafe interrupts"? Did you modify the XML to use multifunction? Someone in the thread I linked noticed the motherboard or GPU was separating the IOMMU group... IDK, it was so long ago. You can read through my issue and the thread that followed. I did eventually get it working very stably. Maybe also try another similar GPU BIOS. I was using 6.8.3, which didn't have the built-in VFIO binding that 6.9 has, so I don't know if what I'm saying is relevant. NVIDIA just had sales on GPUs; it might just be worth picking up a new one if you haven't figured it out by now.

 

  • 1 month later...

I had the opposite experience for a bit. 

- I had my DP and HDMI cables plugged in, and I could see the console prompt while Unraid booted up.
- I went into one of my Win11 VMs' settings and used the GUI dropdowns to add the GTX 1660 and its sound device.
- I clicked Update and then Start. My monitor immediately went dark, but I noticed that my Windows VM was running and had started without any pop-up error.
- I was able to RDP into the machine, and sure enough there was my NVIDIA card. I was able to install the drivers and reboot several times, all working.

 

Update:

I confirmed that the GUI settings in this build work fine out of the box, with no XML hacks. Can anyone confirm? I am now able to assign the GPU plus sound in the main GUI and fire up the machine fine. I couldn't connect last time because of DHCP; once I assigned a static IP, I could RDP every time without issue. When you first set up the Win11/x machine you'll need VNC; after that, GPU passthrough works.

 
