GPU passthrough with only one card?


d4rkf


OK, so I managed to extract the vbios rom; however, I still get a black screen after powering on the VM. Under hostdev in my XML I use the option like this...

 


<rom file='/boot/vbios.rom'/>

 

I tried both SeaBIOS and OVMF, and both exhibit a black screen.

 

OK, but is that in your VM configuration XML file? It should be ",romfile=/boot/vbios.rom" (no spaces), added to an existing line as in my post. I don't know about other VM configuration file formats; you'd have to look that up.

 

A good way to verify the bios file is to try it while the GPU is still in the second slot. If passthrough then works, it should still work after you add the "romfile=" part to the VM configuration. You could also try specifying a deliberately bad file as the rom to check that the option is actually used (the VM should not boot).
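If you want a quicker sanity check of the dumped file itself: a valid PCI option ROM starts with the signature bytes 0x55 0xAA. A minimal sketch under that assumption (the helper name and path are just examples):

```shell
# check_vbios FILE -- report whether FILE starts with the PCI
# option-ROM signature 0x55 0xAA (a quick validity hint, not a
# full check of the image).
check_vbios() {
    magic=$(od -An -tx1 -N2 "$1" | tr -d ' ')
    if [ "$magic" = "55aa" ]; then
        echo "valid"
    else
        echo "invalid"
    fi
}

# e.g. check_vbios /boot/vbios.rom
```

A dump that fails this check, or that is suspiciously small, is unlikely to work as a romfile.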

 

For me, using the vbios file solved the black screen issue, but there may be other issues on your system. The page I linked to mentions: "But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about unmappable ROM BAR)."

 

My devices, including the 980ti GPU, are all passed through using the hostdev method. I tried the qemu method, but that did not work either.

In fact, when I specify the ROM file, the OVMF BIOS does not even appear on the screen, so I know the vbios rom is being read.



So normally, when the 980ti is in the second slot, you can pass it through without problems, but when you add the rom file option to the configuration it breaks? I'd say it's a bad romfile then. Are you sure you read it from the card while the card was in the secondary slot and able to pass through? If you read it while the card is not working for passthrough, you probably get the file that doesn't work in the first place. For my GTX950 that file was significantly smaller.

 

I have no experience with using romfiles with the hostdev method; maybe someone else can step in here.

 


The only way I could unbind the card and read the rom file was with the GPU in the 1st slot and no other GPU installed. If I tried it in the second slot, the unbind would not work and reading the rom with cat would also fail. The GPU works fine in passthrough in the 2nd slot at the moment. As soon as I swap it to the 1st slot and specify the ROM file, the VM does not even show the OVMF BIOS and is stuck on a black screen.


Ah, but then if the card was used to boot, you're probably reading the shadowed copy, which may be the cause of all the problems.

When you boot off another card and do 'lspci -v', what does it say for 'Kernel driver in use' for the 980ti?

 


Here you go... The 980ti is currently in another slot, and I'm using an AMD card to boot unRAID.

 

03:00.0 VGA compatible controller: NVIDIA Corporation Device 17c8 (rev a1) (prog-if 00 [VGA controller])

        Subsystem: eVga.com. Corp. Device 4998

        Flags: bus master, fast devsel, latency 0, IRQ 61

        Memory at f8000000 (32-bit, non-prefetchable)

        Memory at b0000000 (64-bit, prefetchable)

        Memory at c0000000 (64-bit, prefetchable)

        I/O ports at c000

        Expansion ROM at f9000000 [disabled]

        Capabilities: [60] Power Management version 3

        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+

        Capabilities: [78] Express Legacy Endpoint, MSI 00

        Capabilities: [100] Virtual Channel

        Capabilities: [258] #1e

        Capabilities: [128] Power Budgeting <?>

        Capabilities: [420] Advanced Error Reporting

        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>

        Capabilities: [900] #19

        Kernel driver in use: vfio-pci

 

 


OK, that's about the same as I had. If you do:

cd /sys/bus/pci/drivers/vfio-pci/
ls -al

Do you see "0000:03:00.0" listed somewhere, or something else ending with "03:00.0"?

I can't test right now on my unRAID box, but as I remember there were symlinks in that directory for the cards bound to vfio-pci.
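Since the bound devices show up as symlinks named after their full PCI address, a small helper can list them. The directory argument is only there so the sketch can be tried against any path; on a real box you'd use the default:

```shell
# vfio_bound [DIR] -- print the PCI addresses currently bound to
# vfio-pci; each bound device appears as a symlink named after its
# full address (e.g. 0000:03:00.0) in the driver directory.
vfio_bound() {
    dir=${1:-/sys/bus/pci/drivers/vfio-pci}
    for entry in "$dir"/0000:*; do
        [ -e "$entry" ] && basename "$entry"
    done
}
```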

If there is a link I would expect that this would work:

echo "0000:03:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

 

I'm not sure why you actually need to unbind the card to read the rom, but for the GTX950 this step was required.

 


Yes, it's there, and this is what I attempted with the card in another slot. It wasn't until I moved the card to slot 1 and removed the other graphics cards that the unbind actually worked. I'm pretty sure I have a valid vbios rom file too, since I followed your instructions exactly.


No, I don't think the rom you have is valid if you read it from the card while it was in the primary slot and not working for passthrough. When I tried that, I did not get the same file as when the card was not used for booting. You probably got the rom that is used when you don't specify a rom file, which results in a black screen.

 

One thing worth trying: do you have any VM that is configured to use the device as its GPU? What happens if you remove all references to the card from every VM and reboot; is it still using the vfio-pci kernel driver? If not, try reading the rom, and don't forget the "echo 1 > rom" to activate it.

 

If the vfio-pci unbind trick doesn't work for you, you might try downloading the bios from http://www.techpowerup.com/vgabios/ or reading it with GPU-Z. Try it; if it doesn't work, then I think you have a 'hybrid' bios like I had. Using a hex editor, I determined that the file I read from my card's rom was also located somewhere in the middle of the file I had read using GPU-Z. Maybe you can find a way to extract it.
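If you do end up with a hybrid dump from GPU-Z or techpowerup, one way to locate the embedded images is to scan for the 0x55 0xAA option-ROM signature at 512-byte boundaries (image starts are 512-byte aligned). This is only a rough sketch under that assumption, with example filenames:

```shell
# find_rom_headers FILE -- print the 512-byte-aligned offsets where
# the PCI option-ROM signature 0x55 0xAA occurs; each hit is a
# candidate start of an embedded image inside a hybrid dump.
find_rom_headers() {
    size=$(stat -c%s "$1")
    off=0
    while [ "$off" -lt "$size" ]; do
        magic=$(od -An -tx1 -j "$off" -N2 "$1" | tr -d ' ')
        [ "$magic" = "55aa" ] && echo "$off"
        off=$((off + 512))
    done
}

# once you know an offset, carve the image out with dd, e.g.:
# dd if=gpuz-dump.rom of=vbios.rom bs=512 skip=2
```

You would then compare each candidate against a dump taken from the bare card (for instance by size) to pick the right one.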

 


I tried the card in another vacant slot, and this time the unbind worked. I extracted the ROM and added romfile to hostdev in my XML. The vbios ROM is detected and the VM POSTs with the graphics card in slot 1. Thanks for your assistance.


OK, no problem! Good to see it works for other cards too.

 


Sorry if this is a dumb question, but would this ROM trick help with the problem I have? Any time I start a VM with GPU passthrough, I can only do it once; if I turn the VM off and try to start it again, nothing shows up.


A little stale in here; has anyone else had any luck with the commands for the NVIDIA card? I threw my card in the second slot and I get errors with the echo "0000:04:00.0" > ..unbind as well as the cat rom > ...

 

I even tried nano rom, and the file is just empty.


I don't have this "<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>" line in my XML. Any ideas?

 

<domain type='kvm' id='14'>

  <name>OpenELEC 6.0</name>

  <uuid>712159f2-29a3-2536-c9fa-dc78f56ee6a1</uuid>

  <metadata>

    <vmtemplate xmlns="unraid" name="OpenELEC" icon="openelec.png" openelec="6.0.0_1"/>

  </metadata>

  <memory unit='KiB'>4718592</memory>

  <currentMemory unit='KiB'>4718592</currentMemory>

  <memoryBacking>

    <nosharepages/>

    <locked/>

  </memoryBacking>

  <vcpu placement='static'>2</vcpu>

  <cputune>

    <vcpupin vcpu='0' cpuset='0'/>

    <vcpupin vcpu='1' cpuset='1'/>

  </cputune>

  <resource>

    <partition>/machine</partition>

  </resource>

  <os>

    <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>

    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>

    <nvram>/etc/libvirt/qemu/nvram/712159f2-29a3-2536-c9fa-dc78f56ee6a1_VARS-pure-efi.fd</nvram>

  </os>

  <features>

    <acpi/>

    <apic/>

  </features>

  <cpu mode='host-passthrough'>

    <topology sockets='1' cores='1' threads='2'/>

  </cpu>

  <clock offset='utc'>

    <timer name='rtc' tickpolicy='catchup'/>

    <timer name='pit' tickpolicy='delay'/>

    <timer name='hpet' present='no'/>

  </clock>

  <on_poweroff>destroy</on_poweroff>

  <on_reboot>restart</on_reboot>

  <on_crash>restart</on_crash>

  <devices>

    <emulator>/usr/local/sbin/qemu</emulator>

    <disk type='file' device='disk'>

      <driver name='qemu' type='raw' cache='writeback'/>

      <source file='/mnt/user/vm/OpenELEC-unRAID.x86_64-6.0.0_1.img'/>

      <backingStore/>

      <target dev='hdc' bus='virtio'/>

      <readonly/>

      <boot order='1'/>

      <alias name='virtio-disk2'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>

    </disk>

    <controller type='usb' index='0' model='nec-xhci'>

      <alias name='usb'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>

    </controller>

    <controller type='sata' index='0'>

      <alias name='ide'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>

    </controller>

    <controller type='pci' index='0' model='pcie-root'>

      <alias name='pcie.0'/>

    </controller>

    <controller type='pci' index='1' model='dmi-to-pci-bridge'>

      <model name='i82801b11-bridge'/>

      <alias name='pci.1'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>

    </controller>

    <controller type='pci' index='2' model='pci-bridge'>

      <model name='pci-bridge'/>

      <target chassisNr='2'/>

      <alias name='pci.2'/>

      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>

    </controller>

    <controller type='virtio-serial' index='0'>

      <alias name='virtio-serial0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>

    </controller>

    <filesystem type='mount' accessmode='passthrough'>

      <source dir='/mnt/user/vm/'/>

      <target dir='appconfig'/>

      <alias name='fs0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>

    </filesystem>

    <interface type='bridge'>

      <mac address='52:54:00:d8:1e:27'/>

      <source bridge='br0'/>

      <target dev='vnet0'/>

      <model type='virtio'/>

      <alias name='net0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>

    </interface>

    <serial type='pty'>

      <source path='/dev/pts/0'/>

      <target port='0'/>

      <alias name='serial0'/>

    </serial>

    <console type='pty' tty='/dev/pts/0'>

      <source path='/dev/pts/0'/>

      <target type='serial' port='0'/>

      <alias name='serial0'/>

    </console>

    <channel type='unix'>

      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-OpenELEC 6.0/org.qemu.guest_agent.0'/>

      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>

      <alias name='channel0'/>

      <address type='virtio-serial' controller='0' bus='0' port='1'/>

    </channel>

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>

      </source>

      <alias name='hostdev0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>

    </hostdev>

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>

      </source>

      <alias name='hostdev1'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>

    </hostdev>

    <memballoon model='virtio'>

      <alias name='balloon0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>

    </memballoon>

  </devices>

</domain>


I solved this on my system: Asus Rampage IV Formula / Intel Core i7-4930k / 4 x NVIDIA Gigabyte GTX 950 Windforce, with all graphics cards passed through to Windows 10 VMs. The problem I was having was that the 3 cards in slots 2, 3 and 4 pass through fine, but passing through the card in slot 1, which is used to boot unRAID, freezes the connected display.

 

I explored the option of adding another graphics card. A USB card won't be recognized by the system BIOS for POST. The only other card I could add would be connected via a PCIe 1x to PCIe 16x riser card (which did work for passthrough, by the way, but I need to pass through an x16 slot), and it would require modding the mainboard BIOS to get it used as primary. So I looked for another solution.

 

The problem was caused by the VBIOS on the video card, as mentioned on http://www.linux-kvm.org/page/VGA_device_assignment:

To re-run the POST procedures of the assigned adapter inside the guest, the proper VBIOS ROM image has to be used. However, when passing through the primary adapter of the host, Linux provides only access to the shadowed version of the VBIOS which may differ from the pre-POST version (due to modifications applied during POST). This has been observed with NVIDIA Quadro adapters. A workaround is to retrieve the VBIOS from the adapter while it is in secondary mode and use this saved image (-device pci-assign,...,romfile=...). But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about unmappable ROM BAR).

 

In my case I could not use the VBIOS from http://www.techpowerup.com/vgabios/. The file I got from there, and also the ones read using GPU-Z, are probably hybrid BIOSes, containing the legacy image as well as the UEFI one. It's probably possible to extract the required part from the file, but it's pretty simple to read it from the card using the following steps:

 

1) Place the NVIDIA card in the second PCIe slot, using another card as primary graphics card to boot the system.

2) Stop any running VMs and open an SSH connection.

3) Type "lspci -v" to get the PCI ID for the NVIDIA card. It is assumed to be 02:00.0 here; otherwise change the numbers below accordingly.

4) If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS in my case I had to unbind it from vfio-pci:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

5) Read out the VBIOS:

cd /sys/bus/pci/devices/0000:02:00.0/
echo 1 > rom
cat rom > /boot/vbios.rom
echo 0 > rom

6) Bind it back to vfio-pci if required:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
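Steps 4 to 6 can be sketched as one helper. This is only a condensed restatement of the commands above; the PCI address and output path are examples, and the sysfs root is a parameter purely so the flow can be exercised outside a real box:

```shell
# dump_vbios DEV OUT [SYSFS] -- unbind DEV from vfio-pci, enable and
# read its ROM BAR into OUT, then rebind. DEV is the full PCI address.
dump_vbios() {
    dev=$1; out=$2; sysfs=${3:-/sys}
    drv=$sysfs/bus/pci/drivers/vfio-pci
    node=$sysfs/bus/pci/devices/$dev
    echo "$dev" > "$drv/unbind"   # step 4: release from vfio-pci
    echo 1 > "$node/rom"          # enable reads of the ROM BAR
    cat "$node/rom" > "$out"      # step 5: dump the VBIOS
    echo 0 > "$node/rom"          # disable again
    echo "$dev" > "$drv/bind"     # step 6: hand back to vfio-pci
}

# e.g. dump_vbios 0000:02:00.0 /boot/vbios.rom
```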

 

The card can now be placed back as primary, and a small modification must be made to the VM that will use it, so it uses the VBIOS file read in the steps above. In the XML for the VM, change the following line:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>

To:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on,romfile=/boot/vbios.rom'/>

 

After this modification, the card is passed through without any problems on my system. This may work for more NVIDIA cards used as primary adapters!

 

Hi, I am a newbie to vfio. I have a few questions:

(1) Which Linux distribution did you use for this experiment?

(2) Which Linux kernel version and QEMU version?

Thanks very much!


1. He used unRAID for this.

2. The latest unRAID version, 6.1.9, uses kernel 4.1.17 and QEMU 2.3.0.


Big thanks for sharing this method of passing through a single NV card!

 

I managed to extract the rom and also succeeded in passing through the card this way, but now I'm facing some harsh performance drops on the guest (Windows 10). In fact, the GPU seems to perform at around 40-50% of its real bare-metal power (depending on which benchmark you believe).

Is anybody else facing these performance issues?

 

Greetings


Can you tell me how you managed to copy the rom file? No matter what I do, I don't have SSH permissions. I can access the card and disable passthrough, but I can't access the file.

 

This is what I tried last:

/sys/bus/pci/devices/0000:02:00.0# cat rom > /boot/vbios.rom

cat: rom: Invalid argument

 

I don't know how to get permissions on the rom file, or why it works for other people.

 


Hi,

 

Has anybody figured out a way to extract the bios while the card is in the primary PCIe slot?

 

Extracting the bios while it's in another slot seems to be a problem on AMD boards, because most AMD boards don't supply multiple x16 slots. If the bios is extracted in (as in my case) an x4 slot, you'll get some kind of crippled x4-lanes version of the bios, which leads to really bad performance if that rom is then used for the card while it's in an x16 slot.

 

Another possibility would be a bios extracted by somebody else. Is there someone using an MSI GTX 950 who could extract a bios while the card is in a secondary x16 slot?

 

Kind regards


I have the same problem.

 

I will try this:

 

"PCI Express 1X Male To PCI-E 16X Female Riser Ribbon Extender Cable Adapter"

http://www.ebay.com/itm/PCI-Express-1X-Male-To-PCI-E-16X-Female-Riser-Ribbon-Extender-Cable-Adapter-/182073676051?hash=item2a646fd113:g:xhgAAOSwgApW~QfU

 

Then add a simple, cheap PCIe card just for unRAID. Could it work? Has anyone tried? I have 3 free PCIe x1 slots.

 

Thanks!

 

 


If you do use a riser/adapter, you may have to do the following trick to have the card initialized by your motherboard.

https://lime-technology.com/forum/index.php?topic=43948.msg419720#msg419720


Just wanted to chime in here to say that I have also used hupster's method and successfully passed my NVIDIA GTX970 through to a Windows 10 VM while it was the only video card in the system (and in slot 1). The hardware in question is an X99 board (ASRock Extreme 6) and an i7 5820K, so no onboard video.

 

I had to fart around a bit with this because, for some reason, even with the 970 in slot 2 and another (AMD) video card in slot 1, I could not get an OVMF VM to work at all (even though my card supports UEFI) with either or both cards installed. Once I switched over to SeaBIOS, things started working. But then I still had issues when I moved the 970 to slot 1 (which I wanted for the x16 performance).

 

I'll recap my steps here so that it's all in one place, as I had to put things together from the posts above, the wiki, and one or two other spots.

 

1) I started by placing the AMD video card in PCIe x16 slot 1 and the NVIDIA GTX 970 in the second PCIe x16 slot.

 

2) stopped the VMs

 

3) ssh into the unraid machine

 

3)  Type "lspci -v" and note the PCI ID for the NVIDIA card. In my case the NVIDIA card had an ID of 01:00.0. The output looked like this:

 

01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: ZOTAC International (MCO) Ltd. GM204 [GeForce GTX 970]
        Flags: bus master, fast devsel, latency 0, IRQ 42, NUMA node 0
        Memory at fa000000 (32-bit, non-prefetchable) [size=16M]
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        Memory at f0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=128]
        Expansion ROM at fb000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [258] L1 PM Substates
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] #19

 

Note that in my case the "Kernel driver in use: vfio-pci" line was not present, so I didn't have to unbind it like hupster did. If you see that line for your card, you'll need to follow his instructions (I'll put them here as well):

 

3a)

If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS in my case I had to unbind it from vfio-pci:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

 

4) This is the key step. The card won't currently work in slot 1 because its vbios is shadowed during bootup. So we need to capture its bios while it's working "normally"; then, when we move the card to slot 1, we can start the VM using the dumped vbios.

 

So do the following to dump the card's vbios (again, note that instead of 01:00.0 you should use whatever PCI ID was assigned to your card):

 

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /mnt/user/Public/drivers/vbios.dump
echo 0 > rom

 

4a) Bind the card back to vfio-pci if required. (Note that since I did not do step 3a above, I did not need to do this.)

 

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

 

5) At this point I removed the AMD card, put the NVIDIA card in slot 1, and restarted the system.

 

6) Now that we have the proper vbios, we can tell the VM to use it when it starts up. hupster's XML was not correct for me. I assume the newer versions of unRAID (I'm using v6 beta 20) use the libvirt API to configure the VMs, which is a little different, so I will show what I did here. If your VM's XML has the same format as hupster's (look for the qemu:arg element in your XML), then see his post on page 2.

 

I found the hostdev block in the VM's XML which pertained to my video card. It looked like this:

 

  <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>

 

Note that the source address lists function 0. Watch for this: there was another hostdev block for the same card, but its function was 1; I assume that one was for the HDMI audio. I modified this XML to add the bios reference from step 4:

 

   <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/Public/drivers/vbios.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>

 

7) That's it! Save the XML and restart the VM. Worked like a charm.

 

Special thanks go out to hupster; I never would have gotten this far without his excellent steps above.

 

Frank


Can't dump the ROM

 

root@Tower:/sys/bus/pci/devices/0000:04:00.0# cat rom > /boot/vbios.dump

cat: rom: Input/output error

 

I can dump the ROM of the GPU that unRAID is using; I don't understand why I can't dump this one.

 

Any ideas? Do I need to run a VM and then shut it down before doing the dump?

 

Also, I have a spare card that I want to use, but it's 3 slots wide, so I would have to remove my SATA port cards to fit it. Does the unRAID array need to be up to do a dump? That way I could use 2 GPUs, get the dump, and then put my SATA port cards back.


Can't dump the ROM

 

root@Tower:/sys/bus/pci/devices/0000:04:00.0# cat rom > /boot/vbios.dump

cat: rom: Input/output error

 

i can dump the ROM of the GPU that Unraid is using i don't understand why i can't dump this one.

 

any ideas?  do i need to run a VM and then shut it down then do a dump?

 

also i have a spare card that i want to use but its 3 slot sized so i will have to remove my SATA ports cards to fit it does Unraid Array need to be up to do a dump? so that i can use  2 GPU's get the dump and then install back my SAtA ports ?

 

A little late perhaps, but you could try getting the correct bios for your card with GPU-Z, or from here as stated earlier: http://www.techpowerup.com/vgabios/

What GPU are you using?

 

I'm going to give this a go next week with 2 980ti's and i7-6800k (no onboard graphics), no room for an extra gpu for unraid.

 

EDIT: worked like a charm. I had the same error at first because I forgot to unbind the GPU:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
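Putting the steps from this thread together, the whole unbind/dump/rebind cycle can be sketched as one small function. This is a hedged sketch, not Unraid tooling: the PCI ID and output path in the example are just the ones from this thread, and the sysfs root is a parameter only so the sketch is easy to dry-run against a plain directory.

```shell
# dump_vbios PCI_ID OUT_FILE [SYSFS_ROOT]
# Sketch of the full cycle: unbind the card from vfio-pci, expose its
# ROM, copy it out, then restore the original state.
dump_vbios() {
    id=$1
    out=$2
    sysfs=${3:-/sys}
    echo "$id" > "$sysfs/bus/pci/drivers/vfio-pci/unbind"
    echo 1 > "$sysfs/bus/pci/devices/$id/rom"    # make the ROM readable
    cat "$sysfs/bus/pci/devices/$id/rom" > "$out"
    echo 0 > "$sysfs/bus/pci/devices/$id/rom"    # lock it again
    echo "$id" > "$sysfs/bus/pci/drivers/vfio-pci/bind"
}

# Example (ID and path from this thread - substitute your own):
# dump_vbios 0000:02:00.0 /boot/vbios.dump
```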

Link to post
  • 4 weeks later...

Just wanted to chime in here to say that I have also used hupster's method and successfully passed my NVIDIA GTX 970 through to a Windows 10 VM while it was the only video card in the system (and in slot 1). The hardware in question is an X99 board (ASRock Extreme 6) and an i7 5820K, so no onboard video.

 

I had to fart around a bit with this because, for some reason, even with the 970 in slot 2 and another (AMD) video card in slot 1, I could not get an OVMF VM to work at all (even though my card supports UEFI) with either or both cards installed. Once I switched over to SeaBIOS things started working. But then I still had issues when I moved the 970 to slot 1 (which I wanted for the x16 performance).

 

I'll recap my steps here so that it's all in one place, as I had to piece things together from the posts above, the wiki, and one or two other spots.

 

1) I started by placing the AMD video card in PCIe x16 slot 1 and the NVIDIA GTX 970 in the second PCIe x16 slot.

 

2) Stopped the VMs.

 

3) SSH into the Unraid machine and type "lspci -v". Note the PCI ID for the NVIDIA card. In my case the NVIDIA card had an ID of 01:00.0. The output looked like this:

 

01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: ZOTAC International (MCO) Ltd. GM204 [GeForce GTX 970]
        Flags: bus master, fast devsel, latency 0, IRQ 42, NUMA node 0
        Memory at fa000000 (32-bit, non-prefetchable) [size=16M]
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        Memory at f0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=128]
        Expansion ROM at fb000000 [disabled] [size=512K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [258] L1 PM Substates
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] #19

 

Note that in my case the "Kernel driver in use: vfio-pci" line was not present, so I didn't have to unbind the card like hupster did. If you see that line for your card, you'll need to follow his instructions (I'll put them here as well):

 

3a)

If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS in my case I had to unbind it from vfio-pci:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

 

4) This is the key step. The card won't currently work in slot 1 because its vbios is shadowed during bootup. So we need to capture its BIOS while the card is working "normally"; then, once we move the card to slot 1, we can start the VM using the dumped vbios.

 

So do the following to dump the card's vbios (again, instead of 01:00.0 use whatever PCI ID was assigned to your card):

 

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom                                   # make the ROM readable
cat rom > /mnt/user/Public/drivers/vbios.dump  # copy it out
echo 0 > rom                                   # disable ROM access again
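Before moving the card, it is probably worth sanity-checking the dump: a valid PCI option ROM starts with the bytes 55 aa. A minimal sketch of that check, using `od` since it is available everywhere (the path is the one from the dump step above):

```shell
# rom_magic FILE - print the first two bytes of FILE as hex;
# a valid PCI option ROM begins with 55 aa.
rom_magic() { od -An -tx1 -N2 "$1" | tr -d ' '; }

# Example (path from the dump step; substitute your own):
# [ "$(rom_magic /mnt/user/Public/drivers/vbios.dump)" = "55aa" ] && echo "looks valid"
```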

 

4a)  Bind the card back to vfio-pci if required, substituting your own PCI ID (note that since I did not do step 3a above, I did not need to do this).

 

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind

 

5) At this time I removed the AMD card and put the NVIDIA card in slot 1 and restarted the system.

 

6)  Now that we have the proper vbios, we can tell the VM to use it when it starts up. hupster's XML was not correct for me. I assume that the newer versions of Unraid (I'm using v6 beta 20) use the libvirt API to configure the VMs, which is a little different, so I will show what I did here. If your VM's XML has the same format as hupster's (look for a qemu:arg element in your XML), see his post on page 2.

 

I found the hostdev block in the VM's XML which pertained to my video card. It looked like this:

 

  <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>

 

Note that the address lists function as 0. Watch for this: there was another hostdev block for the same card whose function was 1; I assume this was for the HDMI audio. I modified this XML to add the BIOS reference from step 4:

 

   <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/Public/drivers/vbios.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
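One gotcha worth knowing: libvirt re-parses and rewrites the domain XML when you save it, and elements it doesn't understand can be silently dropped. So after saving, it's worth confirming the rom element actually survived. A tiny grep helper for that (the VM name in the usage example is hypothetical; check yours with `virsh list --all`):

```shell
# check_rom - read libvirt domain XML on stdin and print any
# <rom file='...'/> elements it contains.
check_rom() { grep -o "<rom file='[^']*'/>"; }

# Example usage against a running libvirt (VM name is an example):
# virsh dumpxml "Windows 10" | check_rom
```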

 

7) That's it! Save the XML and restart the VM. Worked like a charm.

 

Special thanks goes out to hupster; I never would have gotten this far without his excellent steps above.

 

Frank

 

You are a beautiful human being. I was seriously kicking myself for going with a Xeon instead of an i7 that would have had onboard graphics.

Link to post

I can't get any BIOS to attach to any card. I followed the instructions to the letter. Can anyone see what I'm doing wrong? The card is on bus 2.

 

<domain type='kvm' id='1'>
  <name>Windows 10</name>
  <uuid>0dea7339-5a49-bc19-1a0c-d4ea1c1942b0</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>3145728</memory>
  <currentMemory unit='KiB'>3145728</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='4'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/0dea7339-5a49-bc19-1a0c-d4ea1c1942b0_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/BackUps/Windows10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.118-1.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:46:a6:44'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/Junk/1080.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x06a3'/>
        <product id='0x0cd9'/>
        <address bus='5' device='5'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
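When a rom file seems to be ignored like this, two hedged checks are worth trying: confirm the file is readable from the host, and look at the per-VM QEMU log, which usually says why a hostdev or ROM load failed. The helper below is a sketch; the VM name and rom path in the usage example are just the ones from the XML above:

```shell
# last_rom_msgs LOGFILE - show the most recent ROM-related lines from a
# libvirt per-VM QEMU log (conventionally /var/log/libvirt/qemu/<name>.log).
last_rom_msgs() { grep -i 'rom' "$1" | tail -n 5; }

# Example (VM name and rom path from the XML above):
# [ -r /mnt/user/Junk/1080.rom ] || echo "libvirt cannot read the rom file"
# last_rom_msgs "/var/log/libvirt/qemu/Windows 10.log"
```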

 

Link to post
