GPU passthrough with only one card?


d4rkf

Recommended Posts

Hi everybody!

 

I'm new to the whole KVM virtualization topic and got interested in unRAID after seeing the video from LinusTechTips. I've been playing around with unRAID for the last couple of days but couldn't get GPU passthrough to a Windows guest to work. My question is whether it's even possible to achieve with my current setup:

 

CPU: Xeon E3 1241 v3

MB: AsRock H97M-ITX/ac

RAM: 16 GB KINGSTON HyperX

GPU: Gigabyte GV-N970IXOC-4GD

 

My CPU does not have an integrated GPU and I have only one PCIe slot on my board, which my GTX 970 is plugged into. Is there any way to pass through my GPU to a guest system? Would something like this work if I need a second GPU? http://www.i-tec-europe.eu/?t=3&v=358

 

Thanks in advance!

 

Greetings,

Fabian

Link to comment

Thanks for your reply. Both say enabled:

 

Model: Custom

M/B: ASRock - H97M-ITX/ac

CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz

HVM: Enabled

IOMMU: Enabled

Cache: 256 kB, 1024 kB, 8192 kB

Memory: 16384 MB (max. installable capacity 16 GB)

Network: eth0: 1000Mb/s - Full Duplex

Kernel: Linux 4.1.7-unRAID x86_64

OpenSSL: 1.0.1p

 

I have a Windows 10 VM running with VNC. But as soon as I try to pass through the GPU and start the machine, I get a black screen (where the unRAID login terminal was). After that I get no output on my screen until I reboot the machine. Is it possible to run unRAID headless when I only have one GPU?

Link to comment

Unfortunately I can't add an additional GPU to my system. The only thing I could do is use some kind of external GPU, like this: http://www.i-tec-europe.eu/?t=3&v=358 But will unRAID be able to use a USB GPU?

If your BIOS supports choosing the USB graphics adapter as the default adapter it could work, but I'm pretty sure it doesn't. So no, it's not possible.

The only options I see are to change the CPU to one with built-in graphics, or to try your luck with an AMD graphics card. On the 2-3 motherboards I have tested, graphics passthrough works even when it's the only adapter installed (and in the first PCIe slot). The cards used were an HD 6450 and an R9 280X.

There is no guarantee that it will work though...

Link to comment

Thanks for your reply! I'll try tomorrow with the new USB GPU I ordered and will report.

 

My board has a mini PCIe slot which could be used for an additional GPU (with a riser card). Since this would be a GPU attached directly to the PCIe lanes, this may work. Has anybody tried that already?

 

Also I found this thread: https://lime-technology.com/forum/index.php?topic=41708.0

 

The last reply sounds promising. Does anybody know what the user means by this?

 

Greetings,

Fabian

Link to comment

I have another card now (an NVIDIA Quadro NVS 290). Unfortunately I can't put it in the first PCIe slot, since I only have one full x16 slot, which is needed by my GTX 970. I can pass through the Quadro card with ease, but how can I pass through the other card? Can I somehow tell unRAID to use the second card instead? I really want to get this working...

Link to comment

Unfortunately I'm quite busy right now. I'll test again on Friday and report back with the log from when my screen goes black. Over the last few days I also tested another KVM-based solution (Proxmox VE), but I had the same issues.

 

Thanks for your help!

Link to comment

What we've found is that the limitation of needing a dedicated GPU for unRAID applies only to NVIDIA devices in systems with no on-board graphics. If you have an AMD device, passthrough seems to work in that scenario, but for some reason, NVIDIA devices do not.

 

We are still experimenting with other configuration tweaks to see if we can coax this to work with NVIDIA in a single GPU setup like this, but without any test hardware (we have no systems that don't have integrated graphics chips), it'll be hard.

Link to comment


If I am reading this correctly you say that AMD standalone cards do not have this issue but NVIDIA ones do? If so, I am glad I looked at this thread as I have a new NVIDIA card in my cart on Newegg to use as my new pass through card for my unraid box. My mobo does not have onboard graphics, should I be shopping for an AMD card then?

Link to comment

That is what we are hearing. I don't have a test system that doesn't have on-board graphics, so I can't really verify anything first hand, but that appears to be the situation so far. We are still testing things out and seeing if there are other ways to get around this with NVIDIA GPUs, but that's just a big hope at this point.

Link to comment

Thanks for the reply Jon!

 

I understand that passing through the primary card (if it's an NVIDIA card) is not possible at the moment. But I was able to pass through an NVIDIA Quadro NVS 290 that was sitting in the second slot (PCIe x1). Do you see a way to tell unRAID it should use the Quadro card instead of my GTX 970? I can't put the GTX in the second slot because I'm using an adapter from mini PCIe to PCIe x1...

Link to comment
  • 2 months later...


Any updates on this? The requirement for a second GPU is preventing me from moving to unRAID, as I'm using an ITX system with a single PCIe slot.

 

Xeon E3-1230 v3 / GTX 960

Link to comment
  • 1 month later...
  • 3 weeks later...

I solved this on my system: Asus Rampage IV Formula / Intel Core i7-4930K / 4 x NVIDIA Gigabyte GTX 950 Windforce, with all graphics cards passed through to Windows 10 VMs. The problem I was having was that the 3 cards in slots 2, 3 and 4 pass through fine, but passing through the card in slot 1, which is used to boot unRAID, freezes the connected display.

 

I explored the option of adding another graphics card. A USB card won't be recognized by the system BIOS for POST. The only other card I could add would be connected via a PCIe 1x to PCIe 16x riser card (which did work for passthrough, by the way, but I need to pass through the x16 slot), and it would require modding the mainboard BIOS to get it to use that card as primary. So I looked for another solution.

 

The problem was caused by the VBIOS on the video card, as mentioned on http://www.linux-kvm.org/page/VGA_device_assignment:

To re-run the POST procedures of the assigned adapter inside the guest, the proper VBIOS ROM image has to be used. However, when passing through the primary adapter of the host, Linux provides only access to the shadowed version of the VBIOS, which may differ from the pre-POST version (due to modifications applied during POST). This has been observed with NVIDIA Quadro adapters. A workaround is to retrieve the VBIOS from the adapter while it is in secondary mode and use this saved image (-device pci-assign,...,romfile=...). But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about unmappable ROM BAR).

 

In my case I could not use the VBIOS from http://www.techpowerup.com/vgabios/. The file I got from there, and also the ones read using GPU-Z, are probably hybrid BIOSes that include the legacy image as well as the UEFI one. It's probably possible to extract the required part from the file, but it's pretty simple to read it from the card using the following steps:

 

1) Place the NVIDIA card in the second PCIe slot, using another card as primary graphics card to boot the system.

2) Stop any running VMs and open a SSH connection

3) Type "lspci -v" to get the PCI ID for the NVIDIA card. It is assumed to be 02:00.0 here; otherwise change the numbers below accordingly.

4) If the card is configured for passthrough, the above command will show "Kernel driver in use: vfio-pci". To retrieve the VBIOS, in my case I had to unbind it from vfio-pci first:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

5) Read out the VBIOS:

cd /sys/bus/pci/devices/0000:02:00.0/
echo 1 > rom
cat rom > /boot/vbios.rom
echo 0 > rom

6) Bind it back to vfio-pci if required:

echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
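For reference, the commands in steps 4-6 can also be generated from the PCI address with a short script. This is just a sketch of the steps above, not something from the original post (the function name and the `bound_to_vfio` flag are my own):

```python
def vbios_dump_commands(pci_addr, out="/boot/vbios.rom", bound_to_vfio=True):
    """Return the shell commands from steps 4-6 for a given PCI address.

    pci_addr is the full sysfs form, e.g. "0000:02:00.0". If the card is
    currently bound to vfio-pci, it is unbound first and rebound afterwards.
    """
    dev = f"/sys/bus/pci/devices/{pci_addr}"
    drv = "/sys/bus/pci/drivers/vfio-pci"
    cmds = [f'echo "{pci_addr}" > {drv}/unbind'] if bound_to_vfio else []
    cmds += [
        f"echo 1 > {dev}/rom",     # enable reading the ROM
        f"cat {dev}/rom > {out}",  # dump the VBIOS to the flash drive
        f"echo 0 > {dev}/rom",     # disable ROM access again
    ]
    if bound_to_vfio:
        cmds.append(f'echo "{pci_addr}" > {drv}/bind')
    return cmds
```

Printing the result for your own card's address gives a copy-pasteable version of the steps.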

 

The card can now be placed back as primary, and a small modification must be made to the VM that will use it, so that it uses the VBIOS file read in the steps above. In the XML for the VM, change the following line:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>

To:

<qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on,romfile=/boot/vbios.rom'/>

 

After this modification, the card is passed through without any problems on my system. This may be the case for more NVIDIA cards used as primary adapters!
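On the hybrid-BIOS point above: a hybrid ROM file is a chain of PCI option-ROM images, each starting with the 0x55AA signature and carrying a "PCIR" data structure that records the image length and code type (0x00 = legacy x86, 0x03 = UEFI). As an illustration of how the legacy part could be extracted from such a file, here is a sketch of my own based on the PCI firmware spec layout, not something from this thread:

```python
import struct

LEGACY_X86, EFI = 0x00, 0x03

def split_option_rom(data: bytes):
    """Split a (possibly hybrid) PCI option ROM into (code_type, image) pairs."""
    images, offset = [], 0
    while data[offset:offset + 2] == b"\x55\xaa":
        # 16-bit pointer to the PCIR data structure, stored at image offset 0x18
        pcir = offset + struct.unpack_from("<H", data, offset + 0x18)[0]
        if data[pcir:pcir + 4] != b"PCIR":
            raise ValueError("missing PCIR signature")
        # Image length (in 512-byte units) at PCIR+0x10, code type at PCIR+0x14
        length = struct.unpack_from("<H", data, pcir + 0x10)[0] * 512
        code_type = data[pcir + 0x14]
        images.append((code_type, data[offset:offset + length]))
        if data[pcir + 0x15] & 0x80:  # indicator byte, bit 7: last image in chain
            break
        offset += length
    return images

def legacy_image(data: bytes) -> bytes:
    """Return the legacy x86 image, i.e. the part a SeaBIOS guest would execute."""
    return next(img for t, img in split_option_rom(data) if t == LEGACY_X86)
```

Whether a VM accepts such an extracted image as a romfile will still depend on the card; reading the ROM from the card in secondary mode, as described above, remains the more reliable route.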

 

Link to comment

I get the following error when trying to unbind... I checked that my 980 Ti is 02:00.0.

 

root@MOUNRAID01:/sys/bus/pci/devices/0000:02:00.0# echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind

 

bash: echo: write error: No such device

 

What am i doing wrong?

Never mind. Didn't fully read your post...

Link to comment

If the card is not assigned for passthrough to a VM then it's not required to unbind it from vfio-pci.

Maybe that's the case here; does it work to enable and cat the rom file anyway?

 

Link to comment

OK, so I managed to extract the VBIOS ROM; however, I still get the black screen after powering on the VM. Under hostdev in my XML I use it like this...

 

</source>

<rom file='/boot/vbios.rom'/>

 

I tried with SeaBIOS and OVMF, and both exhibit a black screen.

Link to comment


Ok, but is that in your VM configuration XML file? It should be ",romfile=/boot/vbios.rom" (no spaces), added to an existing line like in my post. I don't know about other VM configuration file formats; you'd have to look that up.

 

A good way to verify the BIOS file is to try it while the GPU is still in the second slot. If it works for passthrough there, it should still work after you add the "romfile=" part to the VM configuration. You could also try specifying a bad file as the ROM to check that the option is actually being used (the VM should then fail to boot).
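A quick sanity check on the dumped file itself (my own suggestion, not from the post above): every PCI option ROM begins with the two-byte signature 0x55 0xAA, so a file that fails this test is certainly not a usable VBIOS, e.g. an empty or truncated read:

```python
def looks_like_vbios(path: str) -> bool:
    """True if the file starts with the PCI option-ROM signature 0x55AA.

    This only rules out obviously bad dumps; passing the check does not
    prove the ROM actually matches the card.
    """
    with open(path, "rb") as f:
        return f.read(2) == b"\x55\xaa"
```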

 

For me, using the VBIOS file solved the black screen issue, but there may be other issues on your system. The page I linked to mentions: "But even that may fail, either due to problems of the host chipset or BIOS (host kernel complains about unmappable ROM BAR)."

 

Link to comment
