Trying to get GPU passthrough working on Windows 10 - blank monitor



I want to get my 2nd GPU working for passthrough, but it isn't putting out any signal to the monitor. (On a normal PC without Unraid it runs fine with no display issues.)

 

I tried using the vBIOS method. (For some reason it actually booted correctly once, then stopped working; it has never worked again since.) I tried other hard drives and SSDs as well, reinstalling Windows each time, and nothing helps. The only way to access the VM is through the VNC client, but I can't do that while using the passthrough. I'm currently at a loss. Some other settings I tried:

 

vfio-pci.ids=10de:1e07,10de:10f7 (I adjust this according to the GPU and its audio function)

with PCIe ACS override enabled (this is what lets me actually boot the VM)

VFIO allow unsafe interrupts set to Yes
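For reference, on Unraid these options all end up on the append line of /boot/syslinux/syslinux.cfg. A sketch of what that section would look like with the options above set - the IDs are the ones from this post, and the exact flags are examples, so substitute your own GPU/audio IDs from Tools > System Devices:

```shell
# /boot/syslinux/syslinux.cfg (sketch - IDs and flags are examples, adjust for your card)
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:1e07,10de:10f7 pcie_acs_override=downstream,multifunction initrd=/bzroot
```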

 

I tried reinstalling Unraid as well. Nothing works.

 

I tried passing through both GPUs, then one or the other on its own. I even tried using the motherboard's integrated graphics first, then the GPUs.

 

tower-diagnostics-20200112-0232.zip

Edited by newunraidusers
Link to post

I found this popping up seemingly every time I reboot:

 

Jan 12 03:04:22 Tower kernel: vfio-pci 0000:02:00.0: BAR 1: can't reserve [mem 0x6000000000-0x600fffffff 64bit pref]
Jan 12 03:04:22 Tower kernel: vfio-pci 0000:02:00.0: BAR 1: can't reserve [mem 0x6000000000-0x600fffffff 64bit pref]
Jan 12 03:04:22 Tower kernel: vfio-pci 0000:02:00.0: BAR 1: can't reserve [mem 0x6000000000-0x600fffffff 64bit pref]
Jan 12 03:04:22 Tower kernel: vfio-pci 0000:02:00.0: BAR 1: can't reserve [mem 0x6000000000-0x600fffffff 64bit pref]

and

 

2020-01-12T11:04:22.026175Z qemu-system-x86_64: vfio_region_write(0000:02:00.0:region1+0x12cc60, 0x0,8) failed: Device or resource busy
2020-01-12T11:04:22.026183Z qemu-system-x86_64: vfio_region_write(0000:02:00.0:region1+0x12cc68, 0x0,8) failed: Device or resource busy
2020-01-12T11:04:22.026192Z qemu-system-x86_64: vfio_region_write(0000:02:00.0:region1+0x12cc70, 0x0,8) failed: Device or resource busy
2020-01-12T11:04:22.026209Z qemu-system-x86_64: vfio_region_write(0000:02:00.0:region1+0x12cc78, 0x0,8) failed: Device or resource busy

 

Here is my XML for the VM:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='5'>
  <name>Windows 10</name>
  <uuid>159bb68a-55cd-8e9a-f2d9-595621eb1b2f</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>47710208</memory>
  <currentMemory unit='KiB'>47710208</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>14</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='10'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='11'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='12'/>
    <vcpupin vcpu='8' cpuset='5'/>
    <vcpupin vcpu='9' cpuset='13'/>
    <vcpupin vcpu='10' cpuset='6'/>
    <vcpupin vcpu='11' cpuset='14'/>
    <vcpupin vcpu='12' cpuset='7'/>
    <vcpupin vcpu='13' cpuset='15'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/159bb68a-55cd-8e9a-f2d9-595621eb1b2f_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='7' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/iso/Win10_1903_V2_English_x64.iso' index='2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/iso/virtio-win-0.1.171.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:d3:39:98'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-5-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/iso/EVGA.RTX2080Ti.11264.181023.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1395'/>
        <product id='0x740a'/>
        <address bus='1' device='7'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x006c'/>
        <address bus='1' device='11'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0228'/>
        <address bus='1' device='8'/>
      </source>
      <alias name='hostdev4'/>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0c00'/>
        <address bus='1' device='2'/>
      </source>
      <alias name='hostdev5'/>
      <address type='usb' bus='0' port='5'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x8087'/>
        <product id='0x0aaa'/>
        <address bus='1' device='9'/>
      </source>
      <alias name='hostdev6'/>
      <address type='usb' bus='0' port='6'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

Edited by newunraidusers
Link to post

Hello,

 

The '2020-01-12T11:04:22.026175Z qemu-system-x86_64: vfio_region_write(0000:02:00.0:region1+0x12cc60, 0x0,8) failed: Device or resource busy' lines indicate that something else still has hold of the GPU.

 

Maybe unRAID itself?


Try running the following in a terminal prior to booting the VM:

 

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

 

Ensure you have your vBIOS set correctly as per the replies above. You will likely lose the local display after running these commands, so you might need to use a phone, tablet, or another machine on the same network to start the VM and test this.
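If you end up doing this every boot, the three commands can be wrapped in a small pre-start helper. A hedged sketch (run as root on the Unraid host; the sysfs paths are the standard ones from the commands above, and the loop just handles however many vtconsoles exist):

```shell
# Release the host's console framebuffers so vfio can claim the GPU's BAR.
# Run as root before starting the VM; the local display will go blank.
for con in /sys/class/vtconsole/vtcon*/bind; do
    [ -w "$con" ] && echo 0 > "$con"      # detach each virtual console
done
FB=/sys/bus/platform/drivers/efi-framebuffer/unbind
[ -w "$FB" ] && echo efi-framebuffer.0 > "$FB"   # release the EFI framebuffer
STATUS="framebuffers released"
echo "$STATUS"
```

Paths that do not exist (e.g. on a legacy-boot host with no efi-framebuffer) are simply skipped.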

 

Cheers,

Ross.

Edited by Ross Cannizzaro
Link to post

What @Ross Cannizzaro said. Are you by chance booting up in UEFI mode? If so, it's likely that the kernel is taking control of the card and not letting it go for the VM. Take a look at this post: https://passthroughpo.st/explaining-csm-efifboff-setting-boot-gpu-manually/

 

You likely need to tell the bootloader not to touch the card by adding this to the boot options:

video=efifb:off
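On Unraid the boot options live on the append line of /boot/syslinux/syslinux.cfg. A sketch with the flag added (the vfio IDs are the ones from earlier in the thread; keep whatever else is already on your append line):

```shell
# append line in /boot/syslinux/syslinux.cfg (sketch)
append video=efifb:off vfio-pci.ids=10de:1e07,10de:10f7 initrd=/bzroot
```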

-JesterEE

Link to post
  • 2 weeks later...

I have been working to get my ASUS GTX 1050 Ti working for several weeks. It has been an extremely frustrating experience: always a black screen after starting the VM. To note, I did get the VM to successfully boot Windows 10 using VNC as the video driver, then updated Windows and enabled Remote Desktop. This was to ensure I could still get to the VM if I got a black screen once I passed through the GTX 1050 Ti. So under VNC graphics everything was working correctly, and I tested Remote Desktop and could reach the VM no problem.

After all that was working OK, I passed through my GTX 1050 Ti video card and started the VM. Black screen! I remoted into the machine using RDP, logged in, and saw in Device Manager that Windows only saw the RDP display adapter and not the Nvidia graphics card. Therefore I surmised that Windows was not seeing the hardware. BTW, check your IOMMU groups to ensure that the video card is shown in its own IOMMU group along with its audio and, optionally, USB functions if your card has them.

I closely followed SpaceInvader One's videos to no avail (or at least I thought). Going back to recheck everything after several weeks of frustrating effort, I finally got it working. There was a lot on the forums, and from SpaceInvader One as well, about the motherboard BIOS possibly breaking IOMMU. I tried all valid BIOSes, from the oldest supporting my CPU (Threadripper 2950X) to the latest, with no positive result. My motherboard is a ROG STRIX X399-E Gaming and I used BIOS 808, which was on the board when I received it. Anyway, I edited the XML per SpaceInvader One's instructions to ensure the video card's functions were on the same bus/slot.

 

My comments in the XML below are in curly braces: {like this}

 

An extract of your XML (ORIGINAL):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> {bare-metal address}
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/iso/EVGA.RTX2080Ti.11264.181023.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> {VM address}
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> {bare-metal address}
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> {invalid: see the edited XML below} {VM address}
</hostdev>

 

As per SpaceInvader One's instructions, this XML extract is invalid, and he says (if I understand him correctly) there is a bug here. Your video card is a multifunction device: it has video, audio and, I think, USB functions, so it is performing three functions. The card in the bare-metal machine is inserted into one slot, therefore the card must reside in ONE SLOT in the VM. So your XML above should be edited to the following:

 

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> {bare-metal address}
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/iso/EVGA.RTX2080Ti.11264.181023.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> {the video card is a multifunction device}
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> {bare-metal address}
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> {same slot as the video above, with function='0x1' for audio}
</hostdev>

It looks like your card's USB controller is missing from the XML. Did you select it in Form view? Does your card have a USB port? If not, disregard.

 

Note that in the VM you currently have the 2080 Ti sitting in two slots (3 and 6). They must be in one slot. Note the multifunction parameter added on the VM's bus/slot as well (make sure a space is included before the multifunction='on' attribute).

 

Also make one additional change as recommended by SpaceInvader One: make sure your Unraid server boots in Legacy mode and not UEFI mode. After making the above changes to the XML it did not work, but once I booted in Legacy mode it did. At least it worked for me; others here in the forums might have had a different experience.

 

One point of further note: I see that the 2080 Ti has a USB-C port. As you selected the audio device in Form view, you need to check further in Form view and select the USB controller associated with your 2080 Ti. As the video in the VM has function='0x0' and the audio has function='0x1', I assume the USB controller will have function='0x2' in the same slot as the previous two. If you don't have this set up properly with all three components, then when you or Windows install the Nvidia drivers it is most likely going to fail because the USB function is missing. The same will happen if the card is not shown in the same bus/slot for all three functions. SpaceInvader One clearly states this. Anyway, it worked for me.
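To illustrate, a third hostdev entry for the card's USB controller would look something like this. The function='0x2' host address is an assumption on my part - confirm the real address in Tools > System Devices before using it:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x2'/> <!-- assumed host address of the card's USB controller -->
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> <!-- same VM slot as video/audio, function 0x2 -->
</hostdev>
```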

 

One final point: you selected an i440fx machine, which is "correct" for Windows 10. Some have said in the forums that the Q35 machine type can give better performance, but I could never get it to work in all the troubleshooting I did until...

 

I went into Form view on the VM with Q35 selected and reselected everything (or, at minimum, made a change to reset the XML). Started the VM: black screen. After much tearing out of hair (I don't have much left) I saw that the XML was showing bus=4, slot=0, function=0 for the video on the VM and bus=5, slot=0, function=0 for the audio. On a whim I changed these to bus=0, slot=5, function=0 for video and bus=0, slot=5, function=1 for audio, and it worked. Don't ask me why, but it worked for me; maybe someone can elaborate. I tried this with multiple VMs using the Q35 machine type, and with this change I was able to get all of them to work when none of them did previously.
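To make that concrete, a sketch of the VM-side address lines described above. The "before" values illustrate what the Q35 template generated for me; I have also kept multifunction='on' on the video line as in the earlier edit, which is my assumption rather than something I re-checked on Q35:

```xml
<!-- before: functions split across two buses - black screen -->
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <!-- video -->
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> <!-- audio -->

<!-- after: both functions on bus 0, slot 5 - this is what worked -->
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> <!-- video -->
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> <!-- audio -->
```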

 

Regarding booting in Legacy mode in Unraid: go to Main > Flash, scroll to the bottom, and uncheck "Permit UEFI boot mode" after setting your motherboard BIOS to boot in Legacy mode.

 

Note that I have a single graphics card in slot 1 of the motherboard with a passed-through ROM BIOS. I have worked with that as the simplest configuration, to eliminate as many variables as possible. If you have one graphics card you MUST pass through the ROM BIOS; check TechPowerUp as SpaceInvader One recommends. If your card still gives a black screen, try the multiple ROM BIOSes TechPowerUp lists for it. Also edit the ROM BIOS to remove the header info as per SpaceInvader One's instructions, otherwise it will fail from the get-go. Or you can dump the ROM BIOS from your own card: a little more complicated, but very doable.
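If you want to dump the ROM from your own card rather than download one, reading it through sysfs is one route. A hedged sketch, assuming the card sits at 0000:02:00.0 (yours will differ - check with lspci), nothing is currently bound to it, and you are root:

```shell
# Dump the vBIOS from the card via the PCI sysfs 'rom' attribute.
DEV=/sys/bus/pci/devices/0000:02:00.0
if [ -w "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"              # enable reads of the ROM BAR
    cat "$DEV/rom" > /tmp/gpu.rom
    echo 0 > "$DEV/rom"              # disable reads again
    RESULT="dumped $(wc -c < /tmp/gpu.rom) bytes to /tmp/gpu.rom"
else
    RESULT="no writable ROM at $DEV - check the address with lspci"
fi
echo "$RESULT"
```

A ROM dumped this way from your own card typically does not need the header edit that downloaded NVIDIA ROMs do, but verify against SpaceInvader One's video.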

 

Anyway, the above worked for me after many weeks of trial and error. Very sorry for the long post; hope this helps someone.

Edited by Sparkie
Clarification
Link to post

Further to my post above, a clarification: please check the bus/slot combo in the edited version and make sure nothing else is occupying it. I did not check that when I did the edit. In other words, you cannot have two "cards" inserted into the same bus/slot combo in the VM.

Link to post
  • 9 months later...

Hi. I followed the above thread and finally got my Windows 10 VM out of the black screen with an MSI 3070 Ventus 2X card. Now I have the screen working, but it is stuck at 640x480 resolution, and Device Manager is not showing the 3070 card - just a generic Microsoft display adapter.

 

Any tips? 

Link to post
  • 3 weeks later...
On 11/2/2020 at 12:49 PM, Amit said:

Hi. I followed the above thread and finally got my Windows 10 VM out of the black screen with an MSI 3070 Ventus 2X card. Now I have the screen working, but it is stuck at 640x480 resolution, and Device Manager is not showing the 3070 card - just a generic Microsoft display adapter.

 

Any tips? 

Hi,

 

A bit late to the party here, but I had a similar issue with my GTX 980. I was basically using a downloaded BIOS as per the SpaceInvaderOne video. I ended up snapshotting my own BIOS directly from the card, and that did the trick. Worth trying anyway.

Link to post

Sorry for the late reply. I would second Scoopsy13: snapshot the BIOS from your particular card and edit it as per SpaceInvader One's instructions. He also has a video on how to use a program available on TechPowerUp's website to snapshot your BIOS and edit it to remove the header info. Further note that if you edit the VM in Form view, the XML will revert back to default and you will have to make all the changes again. Good luck. Since I got mine working it has been rock solid, even through a number of Windows updates and upgrades - flawless.

Edited by Sparkie
Clarification on reliability.
Link to post
