GPU Passthrough is EASY - Here's how



3 hours ago, SimonF said:

You should be able to pass that card through as well. You can bind it to vfio, but you don't have to bind to vfio to pass it through to a VM.

So far I am able to pass that card through and the VM boots, but I'm getting Code 43.

 

This is what I am using:

 


kernel /bzimage
append vfio-pci.ids=1022:148c,10de:1ad6,10de:1ad7 video=efifb:off isolcpus=2-11,26-35 initrd=/bzroot kvm_amd.nested=1
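As a sanity check after booting with an append line like the one above, you can compare the IDs in `vfio-pci.ids` against the driver the kernel actually bound. A rough sketch, assuming standard Linux procfs/sysfs paths:

```shell
# Sketch: for each device listed in vfio-pci.ids on the live kernel
# command line, report which kernel driver has claimed it.
ids=$(sed -n 's/.*vfio-pci\.ids=\([^ ]*\).*/\1/p' /proc/cmdline | tr ',' ' ')
for id in $ids; do
    vendor="0x${id%%:*}"
    device="0x${id##*:}"
    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/vendor" 2>/dev/null)" = "$vendor" ] || continue
        [ "$(cat "$dev/device" 2>/dev/null)" = "$device" ] || continue
        drv=none
        [ -L "$dev/driver" ] && drv=$(basename "$(readlink "$dev/driver")")
        echo "${dev##*/} ($id): bound to $drv"   # should say vfio-pci
    done
done
```

If a device shows anything other than `vfio-pci` here, the host still owns it and passthrough will misbehave.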

tower-diagnostics-20230904-1848.zip

 

 


<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='12'>
  <name>WinAP</name>
  <uuid>c52df59f-c118-5e6a-ba7a-97e4cf49c880</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='14'/>
    <vcpupin vcpu='1' cpuset='38'/>
    <vcpupin vcpu='2' cpuset='15'/>
    <vcpupin vcpu='3' cpuset='39'/>
    <vcpupin vcpu='4' cpuset='16'/>
    <vcpupin vcpu='5' cpuset='40'/>
    <vcpupin vcpu='6' cpuset='17'/>
    <vcpupin vcpu='7' cpuset='41'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/c52df59f-c118-5e6a-ba7a-97e4cf49c880_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/vm_ssd/domains_ssd/WinAP_082023.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <serial>vdisk1</serial>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:8f:32:e8'/>
      <source bridge='br0'/>
      <target dev='vnet11'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-12-WinAP/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
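For what it's worth, Code 43 is typically the NVIDIA driver refusing to initialize because it detects a hypervisor. Note the XML above has `<vendor_id state='on' value='none'/>`, where 'none' is a literal string, not "disabled". A commonly suggested tweak (a sketch, not a guaranteed fix for this particular setup) is to set a real vendor_id string and hide the KVM signature:

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv mode='custom'>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <!-- any arbitrary string up to 12 characters -->
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

Newer NVIDIA drivers (465 and later) reportedly no longer block passthrough this way, so this mainly matters for older driver versions.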

 

 

 

 

Are you adding a virtual card (VNC) as the first graphics card, and then the passed-through GPU as the second? That should sidestep most problems: it lets you boot into the Windows VM (or another VM) via VNC first and update it to get the drivers in order.
Edited by dopeytree

Possibly something to do with UEFI vs BIOS boot.

Check your BIOS settings, then also check the USB stick...

 

 

Quote

Just something I found: all you need to make an existing Unraid install on a USB stick compatible with UEFI boot is to rename the folder "EFI-" to "EFI", and it will boot on a UEFI-only system.
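That rename can be done from any machine with the stick mounted. A minimal sketch, assuming a hypothetical mount point of /mnt/usb (point it at wherever your stick is actually mounted):

```shell
# Rename EFI- to EFI on the Unraid USB stick to enable UEFI boot.
# BOOT is a hypothetical mount point; adjust it for your system.
BOOT="${BOOT:-/mnt/usb}"
if [ -d "$BOOT/EFI-" ] && [ ! -d "$BOOT/EFI" ]; then
    mv "$BOOT/EFI-" "$BOOT/EFI"
    echo "renamed $BOOT/EFI- to $BOOT/EFI"
else
    echo "nothing to do (check $BOOT)"
fi
```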

 


Trying to pass my Quadro GPU through to a VM. I followed the steps in the OP, but after rebooting, my Frigate docker (which also uses the Quadro) no longer starts.

Is this intended? I.e. can you only pass the card through to either Docker or a VM, not both?

The GUI also freezes after passing the Quadro through and starting the Windows VM.

30 minutes ago, Toady001 said:

Trying to pass my Quadro GPU through to a VM. I followed the steps in the OP, but after rebooting, my Frigate docker (which also uses the Quadro) no longer starts.

Is this intended? I.e. can you only pass the card through to either Docker or a VM, not both?

The GUI also freezes after passing the Quadro through and starting the Windows VM.

Yes, Docker needs to use the host's drivers. As soon as you bind the card to vfio, the host can no longer use it, and hence Docker does not see the card.

 

You either need two cards, or a GPU that supports being split up (GVT-g/SR-IOV), but that would not provide video output to a screen for the VM. It would just provide GPU acceleration.
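A quick way to see which side currently owns a card is to list the driver bound to each display device. A sketch assuming the standard Linux sysfs layout:

```shell
# List display-class PCI devices and the kernel driver bound to each.
# A card showing vfio-pci is reserved for VMs and invisible to host
# drivers, so Docker containers (e.g. Frigate) will not see it.
SYSFS="${SYSFS:-/sys}"
for dev in "$SYSFS"/bus/pci/devices/*; do
    [ -e "$dev/class" ] || continue
    case "$(cat "$dev/class")" in
        0x030*)  # 0x03xxxx = display controller class
            drv=none
            [ -L "$dev/driver" ] && drv=$(basename "$(readlink "$dev/driver")")
            echo "${dev##*/}: $drv"
            ;;
    esac
done
```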


What are you intending to use the VM for?

 

Is it anything that can be done in Docker containers?

 

If you want to run Plex, there's a container for it.

If you want to game, check out the steam-headless container.

 

The benefit of Docker is that the containers can all share the GPU, whereas a VM takes it exclusively.

 

Personally, I've moved away from GPU passthrough in VMs. I only use VMs when I need to run a Windows program.

 

I do my gaming on a Steam Deck, but I did play with the steam-headless container and actually got better ray-tracing performance than in a Windows VM.

Obviously some online games (like COD) don't work properly on Linux due to anti-cheat, but most games work well: Cyberpunk 2077, Hogwarts, etc.

 

Definitely worth having a play with both options to see which you prefer.

Edited by dopeytree

Tried it with an Intel Arc A380.

It worked like a charm the first time: the GPU was visible in the VM and I installed current drivers. No output on a monitor plugged into the GPU, but no big issue.

 

After restarting the VM I got the famous "pci header type 127" error. A full reboot of the host machine solved it, but it comes back after every VM reboot.

I guess that's a byproduct of Unraid still not supporting Arc GPUs.
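The "header type 127" read (the device returning all 1s) usually means the card failed to reset after the VM released it. One workaround sometimes suggested, instead of a full host reboot, is to remove the device from the PCI tree and rescan the bus. A sketch, using a hypothetical PCI address you would replace with the Arc card's own (from lspci):

```shell
# Remove a wedged GPU from the PCI tree and rescan the bus (run as root).
# 0000:03:00.0 is a placeholder address; substitute your card's.
DEV="${DEV:-0000:03:00.0}"
if [ -e "/sys/bus/pci/devices/$DEV" ]; then
    echo 1 > "/sys/bus/pci/devices/$DEV/remove"
    sleep 1
    echo 1 > /sys/bus/pci/rescan
else
    echo "device $DEV not present"
fi
```

Whether the card comes back cleanly depends on the device's reset support, so this may or may not help with an Arc.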

I also got a strange error once, that it was unable to bind the card to vfio, but I think that was because it simply didn't detect the card and tried to bind some CPU function to vfio instead 😂

 

I'll let the card sit in the host; it doesn't make performance worse, I think. And when the beta branch with Arc support finally arrives, I'll give it another try.

On 3/23/2024 at 3:51 AM, Toady001 said:

Trying to pass my Quadro GPU through to a VM. I followed the steps in the OP, but after rebooting, my Frigate docker (which also uses the Quadro) no longer starts.

Is this intended? I.e. can you only pass the card through to either Docker or a VM, not both?

The GUI also freezes after passing the Quadro through and starting the Windows VM.

I am having the same issue with my single Nvidia Quadro K5000 GPU when attaching it to my Windows 10 VM. The VM shows as started but is inaccessible. VNC works fine, of course.


Have you got the virtual driver set as the first graphics card in the VM options?

 

You should be able to log in via VNC, then set up Microsoft Remote Desktop for a better-quality remote experience.

 

Then getting video out to a monitor should be as simple as going to the graphics or video settings.

I'm no expert, just trying to point you in the right direction.

Edited by dopeytree
2 hours ago, dopeytree said:

Have you got the virtual driver set as the first graphics card in the VM options?

 

You should be able to log in via VNC, then set up Microsoft Remote Desktop for a better-quality remote experience.

 

Then getting video out to a monitor should be as simple as going to the graphics or video settings.

I'm no expert, just trying to point you in the right direction.

First off, I want to thank you for reaching out, @dopeytree. Much appreciated. I followed the instructions in this post, and that seems to have resolved my issue. I am hoping my Plex media server uses the Nvidia GPU when serving content to my daughters for the occasional movie or TV show (wink wink). You were definitely pointing me in the right direction.

 

GPU Passthrough

Edited by peterbata

Jesus, this took me a couple of hours to troubleshoot...
I couldn't figure out what you meant by adding a second video card.

 

There's a tiny, easily missed plus button to add more cards in the VM settings:

 

[screenshot: the add-card button in the VM settings]

 

I kept getting a "Guest not initialized" error, so I had to go into XML view (top right of the VM settings) and switch the bus from 7 to 0:

[before/after screenshots of the edited <address> line in the XML]

It gave me an error for slot 1, so I switched it to slot 2.

I have to manually redo this edit every time I change a setting. Does anyone know a fix?
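The screenshots above didn't survive, but going by the description, the change in XML view is on the second graphics device's address line, something like this (reconstructed from the text, not the exact screenshots):

```xml
<!-- failing: pc-i440fx machines only have PCI bus 0 -->
<address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/>
<!-- working: moved to bus 0; slot 0x01 was already taken, so slot 0x02 -->
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
```

The edit getting lost on every settings change is expected behavior: the Unraid form view regenerates the XML, discarding manual address changes.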

 

Now I can start the VM using VNC in Unraid, as well as with the remote viewer of my choice.

 

I was having a hell of a time getting the Nvidia drivers installed; I had to go back and forth to VNC about 15 times. Now I have the VNC and Nvidia drivers installed and showing up in Device Manager.

 

It now even loads Star Citizen, which was my entire reason for making this VM.

 

 

Edit: After a single restart, the VM keeps losing the Nvidia drivers (Device Manager shows a question mark). The Nvidia control panel won't open, and Task Manager no longer has a GPU tab, even right after reinstalling the GPU driver (which does show up in Device Manager).

Edited by RaidUnnewb
