GPU Passthrough doesn't work after updating to unRAID 6.9



Hello folks,

 

I went ahead and upgraded my unRAID server yesterday from 6.8.3 to 6.9. I skipped every 6.9 beta and RC release and waited for the stable version to come out. The update went very smoothly, but on first boot I noticed one of my Win10 VMs, which has a GTX 1050 Ti passed through, trying to start but falling into an endless boot loop with CPU usage peaking at 100%. I then tried VNC instead, and the machine booted without any problems. I suspected that the new QEMU version might not be compatible and installed a fresh VM with the same card passed through. Same problem there. I started researching and found that disabling Hyper-V in the template solves the boot loop. I tried that, and Windows booted, but with the Nvidia driver showing Error 43. Uninstalling the drivers with DDU and reinstalling them didn't make any difference.

 

Next, I tried binding the GPU under Tools -> System Devices, switching unRAID from UEFI boot to legacy boot, and even a different GPU (GT 1030), and it's always the same: a boot loop with Hyper-V enabled and Error 43 when it's disabled.
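
For what it's worth, whether a bind actually took effect can be checked from the unRAID console; a quick sketch, assuming the card sits at 02:00.0 as in the XML below:

lspci -nnk -s 02:00.0
# when the card is bound, the output ends with:
#   Kernel driver in use: vfio-pci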

 

I tried downgrading to 6.8.3, but then all my Docker and VM configurations were gone, and I don't know how to recover them without setting everything up again.

 

Any help is really appreciated.

 

My VM XML config (as it worked on 6.8.3):

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>DiGi-Server</name>
  <uuid>2f2c3e6c-de2b-dfb5-53b3-ad81a3d6531d</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows Server 2016" icon="windows.png" os="windows2016"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='5'/>
    <vcpupin vcpu='3' cpuset='13'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/2f2c3e6c-de2b-dfb5-53b3-ad81a3d6531d_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/SanDisk_SD7SB2Q-512G-1006_152978401993/DiGi-Server/vdisk1.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='usb' index='0' model='nec-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:43:94:40'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

 

Link to comment

Good afternoon everyone,

 

I've also run into this issue: I was previously on 6.9.0-RC2 (not working) and upgraded to the stable 6.9 (still not working) in an attempt to get my GPU passthrough working (2 GPUs / 2 VMs). After countless attempts of trial and error, I may consider downgrading to 6.8.3 until a fix is in place.

If anyone requires any screenshots or info, I'd be happy to provide.

 

Thanks,

Link to comment
17 hours ago, fearlessknight said:

Good afternoon everyone,

 

I've also run into this issue: I was previously on 6.9.0-RC2 (not working) and upgraded to the stable 6.9 (still not working) in an attempt to get my GPU passthrough working (2 GPUs / 2 VMs). After countless attempts of trial and error, I may consider downgrading to 6.8.3 until a fix is in place.

If anyone requires any screenshots or info, I'd be happy to provide.

 

Thanks,

Glad to see I'm not the only one with this problem. What is your hardware configuration?

 

My Server Specs:

Motherboard: Gigabyte Technology Co., Ltd. - X99-UD4-CF

Processor: Intel® Xeon® CPU E5-2640 v3 @ 2.60GHz

Memory: 128 GB DDR4

GPU1: Radeon HD 6450

GPU2: GeForce GTX 1050 Ti

GPU3: GeForce GT 1030

Link to comment
5 hours ago, giafidis said:

Glad to see I'm not the only one with this problem. What is your hardware configuration?

 

My Server Specs:

Motherboard: Gigabyte Technology Co., Ltd. - X99-UD4-CF

Processor: Intel® Xeon® CPU E5-2640 v3 @ 2.60GHz

Memory: 128 GB DDR4

GPU1: Radeon HD 6450

GPU2: GeForce GTX 1050 Ti

GPU3: GeForce GT 1030

Server Specs: 

Gigabyte Technology Co., Ltd. Z490 VISION G Version F20b

Intel® Core™ i9-10850K CPU @ 3.60GHz

64 GiB DDR4

GPU1: Asus GTX 1070

GPU2: Asus GTX 1070

 

** I'm able to log in over VNC with no passthrough. The moment I attempt to install GPU drivers, the system freezes and boots into automatic recovery.

 

Link to comment

@fearlessknight

 

Same here. Try disabling Hyper-V in the XML template; the VM should then boot with GPU passthrough, but show Error 43 in Device Manager:

 

<hyperv>
  <relaxed state='off'/>
  <vapic state='off'/>
  <spinlocks state='off'/>
  <vendor_id state='off'/>
</hyperv>
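
For completeness: the commonly cited Error 43 workaround goes the other way, leaving Hyper-V on but spoofing the vendor ID and additionally hiding the KVM signature from the guest. A sketch of the <features> block (the <kvm> element is the one piece the original template lacks; untested on 6.9 in this thread):

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='none'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>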

 

Edited by giafidis
Link to comment
52 minutes ago, giafidis said:

@fearlessknight

 

Same here. Try disabling Hyper-V in the XML template; the VM should then boot with GPU passthrough, but show Error 43 in Device Manager:

 


<hyperv>
  <relaxed state='off'/>
  <vapic state='off'/>
  <spinlocks state='off'/>
  <vendor_id state='off'/>
</hyperv>

 

Just tried this and was able to get the GPU to pass through successfully, along with the Error 43. I'm going to pass this info along to another thread and see if they can't fix it in the next patch. I will mention you as well. Do you have any issues running software while Hyper-V is off? Or have you moved back to 6.8.3?

Link to comment
9 minutes ago, fearlessknight said:

Just tried this and was able to get the GPU to pass through successfully, along with the Error 43. I'm going to pass this info along to another thread and see if they can't fix it in the next patch. I will mention you as well. Do you have any issues running software while Hyper-V is off? Or have you moved back to 6.8.3?

I already switched back, because my Plex server runs in a VM with GPU passthrough.

 

I didn't test it in detail, but what I did test ran without problems. I believe it's more of a performance concern.

 

If they ask for any logs, I made a copy while I was on 6.9. Hopefully it gets fixed.

 

 

Link to comment
3 hours ago, giafidis said:

I already switched back, because my Plex server runs in a VM with GPU passthrough.

 

I didn't test it in detail, but what I did test ran without problems. I believe it's more of a performance concern.

 

If they ask for any logs, I made a copy while I was on 6.9. Hopefully it gets fixed.

 

 

Thanks again for finding the temporary workaround. I'll do some more in-depth testing and follow up with my results.

 

Link to comment

You two are not the only ones. I upgraded to 6.9 and the VM was working fine; then I upgraded my hardware from Intel to AMD a few days later, and now I'm in the same boat you two are. Rolling back to try 6.8.3 now...

 

Intel 6700k

ASUS Maximus Hero VIII

 

upgraded to:

 

Ryzen 7 3700x

ASUS Prime X570-Pro

 

GPU is an ASUS GTX 1060.

 

-------------------------------------------------------------------------

 

Edit:

I resolved my issue. Digging deeper, I found the following in my VM log:

2021-03-06T06:32:32.186442Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region1+0x1be6ba, 0x0,1) failed: Device or resource busy

 

Some searching yielded another thread:

 

Adding a user script to the User Scripts plugin that runs when the array starts seems to have solved my issue: I was able to successfully boot into the Windows 10 VM with the GTX 1060 passed through, and I installed the drivers successfully.

 

The script:

#!/bin/bash
# release the host's virtual consoles so the console framebuffer lets go of the GPU
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
# unbind the EFI framebuffer from the boot GPU
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
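
A slightly more defensive variant, as a sketch, only touches the sysfs nodes that actually exist (hosts differ in how many vtconsoles they expose, and a legacy/non-UEFI boot has no EFI framebuffer):

#!/bin/bash
# unbind every virtual console that is present
for vt in /sys/class/vtconsole/vtcon*/bind; do
  [ -e "$vt" ] && echo 0 > "$vt"
done
# the EFI framebuffer only exists on UEFI boots
if [ -e /sys/bus/platform/drivers/efi-framebuffer/unbind ]; then
  echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
fi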

 

Edit 2:

For clarification, my VM would not load on 6.8.3 or 6.9.0 with the GTX 1060 as the primary and only GPU in the system. This script fixed the problem for me on both 6.8.3 and 6.9.0.

Edited by Celsian
Additional information
  • Like 2
Link to comment
9 hours ago, Celsian said:

You two are not the only ones. I upgraded to 6.9 and the VM was working fine; then I upgraded my hardware from Intel to AMD a few days later, and now I'm in the same boat you two are. Rolling back to try 6.8.3 now...

 

Intel 6700k

ASUS Maximus Hero VIII

 

upgraded to:

 

Ryzen 7 3700x

ASUS Prime X570-Pro

 

GPU is an ASUS GTX 1060.

 

-------------------------------------------------------------------------

 

Edit:

I resolved my issue. Digging deeper, I found the following in my VM log:

2021-03-06T06:32:32.186442Z qemu-system-x86_64: vfio_region_write(0000:09:00.0:region1+0x1be6ba, 0x0,1) failed: Device or resource busy

 

Some searching yielded another thread:

 

Adding a user script to the User Scripts plugin that runs when the array starts seems to have solved my issue: I was able to successfully boot into the Windows 10 VM with the GTX 1060 passed through, and I installed the drivers successfully.

 

The script:

#!/bin/bash
# release the host's virtual consoles so the console framebuffer lets go of the GPU
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
# unbind the EFI framebuffer from the boot GPU
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

This is excellent! I'm able to boot my VM and GPU without any driver problems. Thank you! I will share this in the new 6.9.0 thread!

  • Like 1
Link to comment
2 hours ago, fearlessknight said:

This is excellent! I'm able to boot my VM and GPU without any driver problems. Thank you! I will share this in the new 6.9.0 thread!

Amazing! I'll try this out first thing tomorrow when I get back to the server!

 

@Celsian:

Thanks for the valuable info!!!

Edited by giafidis
  • Like 1
Link to comment

I upgraded back to 6.9 and tried it out, but unfortunately it doesn't work for me, and I'm not sure if I did something wrong. As suggested, I created the user script and set it to start before the array does. I'm getting this output:

 

Script location: /tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
/tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script: line 4: echo: write error: No such device
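
(Those two errors suggest the host exposes only one vtconsole and has no EFI framebuffer to unbind; the latter is expected on a legacy boot, which would match the earlier switch from UEFI to legacy. A quick check, as a sketch from the unRAID console:)

# /sys/firmware/efi only exists when the host booted via UEFI;
# if it is missing, there is no efi-framebuffer.0 to unbind.
ls /sys/firmware/efi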

 

Starting the VM has the same effect as before (black screen and boot loop).

Edited by giafidis
Link to comment
4 hours ago, giafidis said:

I upgraded back to 6.9 and tried it out, but unfortunately it doesn't work for me, and I'm not sure if I did something wrong. As suggested, I created the user script and set it to start before the array does. I'm getting this output:

 



Script location: /tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
/tmp/user.scripts/tmpScripts/GPU Passthrough Fix 6.9/script: line 4: echo: write error: No such device

 

Starting the VM has the same effect as before (black screen and boot loop).

This is exactly how I have it set up on my end, with the script running at the start of the array. It looks like you have it pointing to a different location.

It should be pointing to the config/plugins directory on the unRAID USB.
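
For reference, the User Scripts plugin keeps the persistent copy of each script on the flash drive and appears to copy it to /tmp to execute (hence the /tmp/user.scripts path in the output above). Assuming the script name used above, the flash copy would live at:

/boot/config/plugins/user.scripts/scripts/GPU Passthrough Fix 6.9/script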

chrome_A3oFXCQJHv.png

chrome_sStQ1SODq6.png

Edited by fearlessknight
Link to comment
41 minutes ago, fearlessknight said:

This is exactly how I have it set up on my end, with the script running at the start of the array. It looks like you have it pointing to a different location.

It should be pointing to the config/plugins directory on the unRAID USB.

chrome_A3oFXCQJHv.png

chrome_sStQ1SODq6.png

I set it up exactly like yours (see attachment). Can you post the script log?

2021-03-07 16_38_45-SnakeMountain_Userscripts — Mozilla Firefox.png

Link to comment
12 minutes ago, giafidis said:

I set it up exactly like yours (see attachment). Can you post the script log?

2021-03-07 16_38_45-SnakeMountain_Userscripts — Mozilla Firefox.png

Script location: /tmp/user.scripts/tmpScripts/GPUPass/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/GPUPass/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
/tmp/user.scripts/tmpScripts/GPUPass/script: line 4: echo: write error: No such device

 

Same as yours. What do you normally use to remote into your VM? Maybe try a fresh VM install and run the script.

Link to comment
4 hours ago, fearlessknight said:

Script location: /tmp/user.scripts/tmpScripts/GPUPass/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/GPUPass/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
/tmp/user.scripts/tmpScripts/GPUPass/script: line 4: echo: write error: No such device

 

Same as yours. What do you normally use to remote into your VM? Maybe try a fresh VM install and run the script.

 

I would agree: try a new VM. Make sure Hyper-V is set back to Yes as well.

 

Here is what my VM config looks like on 6.9.0; perhaps you'll see something that will help:

 

Y5M5x4u.png

Edited by Celsian
Link to comment
4 hours ago, fearlessknight said:

Script location: /tmp/user.scripts/tmpScripts/GPUPass/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/GPUPass/script: line 3: /sys/class/vtconsole/vtcon1/bind: No such file or directory
/tmp/user.scripts/tmpScripts/GPUPass/script: line 4: echo: write error: No such device

 

Same as yours. What do you normally use to remote into your VM? Maybe try a fresh VM install and run the script.

I use TightVNC for VMs that are using VNC alongside GPU passthrough.

 

I just finished installing a fresh Win10 VM. I used VNC for the installation and switched to my 1050 Ti afterwards. After the driver installation, the VM locks up and reboots. I don't know what else to try...

 

This is the log of the new VM. It looks normal to me:



-smp 4,sockets=1,dies=1,cores=2,threads=2 \
-uuid e8002a45-a8da-7e91-2076-7fbd4f5077de \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
-device ahci,id=sata0,bus=pci.0,addr=0x3 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/en_windows_10_enterprise_ltsc_2019_x64_dvd_74865958.iso","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-cd,bus=sata0.0,drive=libvirt-3-format,id=sata0-0-0,bootindex=2 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=sata0.1,drive=libvirt-2-format,id=sata0-0-1 \
-blockdev '{"driver":"file","filename":"/mnt/disks/SanDisk_SD7SB2Q-512G-1006_152978401993/Windows 10/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=sata0.2,drive=libvirt-1-format,id=sata0-0-2,bootindex=1,write-cache=on \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=xx:xx:xx:xx:xx:xx,bus=pci.0,addr=0x2 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-device 'vfio-pci,host=0000:02:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/user/My Stuff/Dokumente/VGA BIOS/Gigabyte GeForce GTX 1050 Ti 4GB/1050ti _owndump.rom' \
-device vfio-pci,host=0000:02:00.1,id=hostdev1,bus=pci.0,addr=0x6 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-03-07 20:19:06.395+0000: Domain id=1 is tainted: high-privileges
2021-03-07 20:19:06.395+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

 

Edited by giafidis
Link to comment
6 minutes ago, giafidis said:

I use TightVNC for VMs that are using VNC alongside GPU passthrough.

 

I just finished installing a fresh Win10 VM. I used VNC for the installation and switched to my 1050 Ti afterwards. After the driver installation, the VM locks up and reboots. I don't know what else to try...

 

This is the log of the new VM. It looks normal to me:




-smp 4,sockets=1,dies=1,cores=2,threads=2 \
-uuid e8002a45-a8da-7e91-2076-7fbd4f5077de \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
-device ahci,id=sata0,bus=pci.0,addr=0x3 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/en_windows_10_enterprise_ltsc_2019_x64_dvd_74865958.iso","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-cd,bus=sata0.0,drive=libvirt-3-format,id=sata0-0-0,bootindex=2 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=sata0.1,drive=libvirt-2-format,id=sata0-0-1 \
-blockdev '{"driver":"file","filename":"/mnt/disks/SanDisk_SD7SB2Q-512G-1006_152978401993/Windows 10/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=sata0.2,drive=libvirt-1-format,id=sata0-0-2,bootindex=1,write-cache=on \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=xx:xx:xx:xx:xx:xx,bus=pci.0,addr=0x2 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-device 'vfio-pci,host=0000:02:00.0,id=hostdev0,bus=pci.0,addr=0x5,romfile=/mnt/user/My Stuff/Dokumente/VGA BIOS/Gigabyte GeForce GTX 1050 Ti 4GB/1050ti _owndump.rom' \
-device vfio-pci,host=0000:02:00.1,id=hostdev1,bus=pci.0,addr=0x6 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-03-07 20:19:06.395+0000: Domain id=1 is tainted: high-privileges
2021-03-07 20:19:06.395+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

 

I will look more into this when I get home tonight and try out TightVNC to give you my results.

I would also try remoting in with RDP and see if you can view your GPU drivers. Can you ping your VM after it's running?

Link to comment

For the record, I have the same problem passing through an Nvidia GT 710. I tried the script above in User Scripts at boot; no change. I tried binding the GT 710 to the vfio-pci driver at boot using the new System Devices checkbox feature, but no luck... Very odd. It worked flawlessly in 6.8.3.

Edited by DoeBoye
referenced wrong video card
Link to comment

Note: I tried running MSI Util inside Windows to see if something changed with the interrupts etc. (I ran it before to take care of some performance and sound issues, and it worked great), but it gets stuck halfway through loading and ends up 'Not responding'... Interesting.

Link to comment
12 hours ago, DoeBoye said:

For the record, I have the same problem passing through an Nvidia GT 710. I tried the script above in User Scripts at boot; no change. I tried binding the GT 710 to the vfio-pci driver at boot using the new System Devices checkbox feature, but no luck... Very odd. It worked flawlessly in 6.8.3.

What motherboard do you have?

I also tried the binding feature on all my GPUs (Nvidia and AMD), and it doesn't change anything.

 

@Celsian

Thanks for your support, mate!

 

I already dumped the vBIOS with the script. It didn't make any difference...

Side note: the script doesn't work for me on 6.9. I always get an error message saying I should bind the GPU with vfio-pci first and then try again. I tried that, but no luck. On 6.8.3, however, the script works without any binding at all...
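
(For reference, a vBIOS can also be dumped by hand through sysfs; a sketch, assuming the card sits at 0000:02:00.0 as earlier in the thread, is not in use by the host, and an arbitrary output path. Nvidia dumps taken this way may still need the NVIDIA header trimmed before use as a romfile:)

cd /sys/bus/pci/devices/0000:02:00.0
# enable reads of the expansion ROM, copy it out, then disable again
echo 1 > rom
cat rom > /mnt/user/isos/gtx1050ti_dump.rom
echo 0 > rom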

 

Link to comment
