All GPUs are bound on boot, passthrough impossible



So I have been attempting to set up a VM on my new build with GPU passthrough. On my previous build (FX-6300 on an AMD 970 board) this was a simple task. On my new TR 1920X and ASRock X399 Fatal1ty Professional Gaming it has proven to be quite a challenge. It seems that Unraid 6.8.1 binds all of my GPUs on boot and they cannot be passed through at all. If I attempt to unbind them ("echo <pci_device_id> > /sys/bus/pci/drivers/nvidia/unbind") it hard locks the Nvidia driver (Unraid Nvidia build). I get the same result when I start a VM with any GPU assigned to it. The diagnostic files are attached. Does anyone know how to prevent Unraid from stealing all of my GPUs? I have tried the method in Spaceinvader One's (@SpaceinvaderOne) video on GPU passthrough, but it doesn't work because I get stuck at the unbind step.
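Before fighting the unbind, it's worth confirming which driver actually claimed each card. A rough sketch (the address 0000:43:00.0 is just a placeholder; substitute yours from `lspci` or Tools > System Devices):

```shell
#!/bin/sh
# Hypothetical PCI address; replace with your card's address
DEV="0000:43:00.0"
# Show which kernel driver currently claims the card
# ("Kernel driver in use: nvidia" means it was grabbed at boot)
lspci -nnk -s "$DEV" 2>/dev/null || echo "No device at $DEV"
# The sysfs unbind path from the post takes this full PCI address:
echo "To unbind: echo $DEV > /sys/bus/pci/drivers/nvidia/unbind"
```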


I had what appears to be a similar issue, but I learned I had just installed the GPUs wrong, with card 1 in slot 1 and card 2 in slot 2. My GPUs showed up and everything appeared fine according to Unraid. It turns out that for my motherboard I had to install the card in slot 5; once I did that, everything worked fine. Check your motherboard's manual: for 2 cards you need a card in slot 1 and slot 4, and for 3 cards, slots 1, 2, and 4.


I hope this helps.

9 hours ago, xl3b4n0nx said:

AsRock X399 Fatal1ty Professional Gaming

I have the same board with a 1950X and have 2 GPUs installed: a 1080 Ti in the first slot and a 1050 Ti in the third slot. Both can be passed through to VMs. For the GPU in the first slot I need a vbios to pass it to a VM. You can find the BIOS on TechPowerUp for your specific model, or dump it yourself directly from your card. You might have to modify the BIOS with a hex editor to remove some NVIDIA headers.
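If you'd rather dump the vbios yourself, the card's ROM is exposed through sysfs. A minimal sketch, assuming the GPU sits at the hypothetical address 0000:43:00.0 (substitute your own, and the card should not be driving your console while you read it):

```shell
#!/bin/sh
# Hypothetical PCI address of the GPU to dump; replace with yours
GPU="0000:43:00.0"
ROM="/sys/bus/pci/devices/$GPU/rom"
if [ -e "$ROM" ]; then
  echo 1 > "$ROM"                 # make the ROM readable
  cat "$ROM" > /boot/vbios.rom    # copy it to the flash drive
  echo 0 > "$ROM"                 # disable access again
  echo "Dumped to /boot/vbios.rom"
else
  echo "No ROM node at $ROM; check the PCI address"
fi
```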


I never tried the "Unraid Nvidia build", so I can't really tell if there is something special you have to account for. That build adds native GPU transcoding support for Docker containers, if I remember correctly. Unbinding the GPU will break things if something else depends on it.


If you want to prevent Unraid from using a specific card, get the PCI IDs from Tools > System Devices:



IOMMU group 49:	[10de:1b06] 43:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
IOMMU group 50:	[10de:10ef] 43:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)

and put them in your syslinux config. Under Main, click on the flash device and scroll down to the syslinux configuration. Add the IDs so it looks like the following, then restart the server:

kernel /bzimage
append vfio-pci.ids=10de:1b06,10de:10ef isolcpus=8-15,24-31 pcie_acs_override=downstream,multifunction initrd=/bzroot

Unraid won't initialize the device on the next boot and you should be able to pass it through.
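One way to confirm it worked after the reboot, using the 1080 Ti's vendor:device ID from the example above (swap in your own):

```shell
#!/bin/sh
# Hypothetical vendor:device ID from the vfio-pci.ids line; replace with yours
ID="10de:1b06"
# Ask lspci for the kernel driver bound to that device; expect "vfio-pci"
DRIVER=$(lspci -nnk -d "$ID" 2>/dev/null | grep 'Kernel driver in use' | awk '{print $NF}')
if [ "$DRIVER" = "vfio-pci" ]; then
  echo "Card is stubbed out; ready for passthrough"
else
  echo "Driver in use: ${DRIVER:-none/not found}"
fi
```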


You might not need the isolcpus and ACS parts. In my example, cores 8-15 and their threads 24-31 (the second die) are isolated so that my main VM is the only thing with access to them. And I need the ACS patch to split up my IOMMU groups so that a specific USB controller ends up in its own group for passthrough.
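If you do add isolcpus, you can sanity-check it after a reboot; the kernel reports the isolated set in sysfs:

```shell
#!/bin/sh
# The kernel lists isolated CPUs here (blank means isolcpus was not applied)
ISOLATED=/sys/devices/system/cpu/isolated
if [ -r "$ISOLATED" ]; then
  echo "Isolated CPUs: $(cat "$ISOLATED")"
else
  echo "Cannot read $ISOLATED (not a Linux host?)"
fi
```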



I will try this later. This looks like the most promising solution yet. I will reply with results.


Edit: This worked! Thank you! The VM framework doesn't crash when I boot a VM with a GPU. Now the VNC view just barely works.

