VIDEO GUIDE *** How to pass through an NVIDIA GPU as the primary or only GPU in unRAID


SpaceInvaderOne


On 01/08/2017 at 7:46 PM, Matoking said:

I was pointed towards this thread when I had trouble isolating my 1070 for PCI passthrough.

 

Long story short, I tried dumping my vBIOS as instructed in the video, but couldn't do so (the `cat` command printed I/O errors instead). So I resorted to dumping the full vBIOS under Windows and using a hex editor to splice the relevant part of the ROM into a new file, using some of the partial vBIOS files uploaded here as samples. This finally allowed me to pass the GPU to the Windows VM!

 

---

 

Anyway, I wrote a Python script that should automate this process (you give it a full ROM from TechPowerUp or one you dumped using nvflash under Windows), and it should create a patched ROM that you can use to make GPU passthrough work.

 

I passed a few ROMs I downloaded from TechPowerUp through the script and compared them to what you guys uploaded here, and so far the Pascal vBIOS files appeared to match, bit by bit. Still, I can't stress it enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. It does a few rudimentary sanity checks, but I would recommend dumping the partial ROM yourself if you can. But for those who are pulling their hair out over not being able to do that, this may be a lifesaver.

 

https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher

 

 

Great work! I have linked this in the OP. :)


I was also pointed here after running into issues with the install of an EVGA GTX 1080 Ti. I also subscribe to your YouTube channel; amazing work, and thank you for all the help you have already given me!

 

It seems that the problem I have encountered may be related to the vBIOS. Then again, I haven't attempted a dump, because my BIOS offers the ability to boot from the onboard VGA port, so I don't think that is necessary... In case it's relevant, my motherboard is an ASUS Z9PA-D8. HVM and IOMMU are both enabled according to unRAID's 'Info' tab, and the card is the only PCI device in its IOMMU group (other than the NVIDIA audio device, which is in the same group).
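For anyone who wants to double-check that grouping from the shell, here's a rough sketch that lists every device per IOMMU group. It assumes the standard sysfs layout; the optional path argument exists purely so it can be exercised against a test tree:

```shell
# Print every PCI device together with its IOMMU group number.
# Assumes the standard /sys/kernel/iommu_groups layout; the optional
# argument lets the function run against a fake tree for testing.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for d in "$root"/*/devices/*; do
        [ -e "$d" ] || continue
        g=${d%/devices/*}     # strip the /devices/<addr> tail
        echo "IOMMU group ${g##*/}: ${d##*/}"
    done
}
```

A GPU is a good passthrough candidate when, as here, it shares its group only with its own HDMI audio function.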

 

After installing an Ubuntu VM with VNC (per your introduction to unRAID VMs video), and then enabling the discrete card after the install, the GRUB bootloader displays and I'm able to navigate its options successfully. To my novice mind, this seems to indicate that the GPU passthrough is working, right? But as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen color and becomes unresponsive. Even a 'force stop' of the VM doesn't clear/reset the screen. If the VM is force-stopped and then started again, I am able to successfully view/interact with the GRUB bootloader, but as soon as I try to boot into Ubuntu, the screen goes blank.

 

Any ideas or suggestions of how to fix?

 

 

Edited by entegral
On 10/08/2017 at 1:36 AM, entegral said:

I was also pointed here after running into issues with the install of an EVGA GTX 1080 Ti. I also subscribe to your YouTube channel; amazing work, and thank you for all the help you have already given me!

 

It seems that the problem I have encountered may be related to the vBIOS. Then again, I haven't attempted a dump, because my BIOS offers the ability to boot from the onboard VGA port, so I don't think that is necessary... In case it's relevant, my motherboard is an ASUS Z9PA-D8. HVM and IOMMU are both enabled according to unRAID's 'Info' tab, and the card is the only PCI device in its IOMMU group (other than the NVIDIA audio device, which is in the same group).

 

After installing an Ubuntu VM with VNC (per your introduction to unRAID VMs video), and then enabling the discrete card after the install, the GRUB bootloader displays and I'm able to navigate its options successfully. To my novice mind, this seems to indicate that the GPU passthrough is working, right? But as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen color and becomes unresponsive. Even a 'force stop' of the VM doesn't clear/reset the screen. If the VM is force-stopped and then started again, I am able to successfully view/interact with the GRUB bootloader, but as soon as I try to boot into Ubuntu, the screen goes blank.

 

Any ideas or suggestions of how to fix?

 

 

Hi @entegral, yes, if you can see the GRUB bootloader then GPU passthrough is working. When setting up an Ubuntu VM from the template, it defaults to the OVMF BIOS type.

I would use the SeaBIOS type for Ubuntu instead. So make a new Ubuntu VM, and when creating it, toggle the advanced view in the top right of the template; there you can choose the BIOS type and select SeaBIOS. Give this a try :)
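For reference, the BIOS type you pick in the template ends up in the `<os>` section of the VM's XML. A sketch of the difference (the machine type and firmware paths below are the unRAID 6.3 defaults as best I recall; yours may differ):

```xml
<!-- SeaBIOS VM: no <loader> element; QEMU falls back to SeaBIOS -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
</os>

<!-- OVMF VM: explicit UEFI firmware and per-VM NVRAM paths -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/..._VARS-pure-efi.fd</nvram>
</os>
```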

  • 2 weeks later...
On 8/1/2017 at 1:46 PM, Matoking said:

I was pointed towards this thread when I had trouble isolating my 1070 for PCI passthrough.

 

Long story short, I tried dumping my vBIOS as instructed in the video, but couldn't do so (the `cat` command printed I/O errors instead). So I resorted to dumping the full vBIOS under Windows and using a hex editor to splice the relevant part of the ROM into a new file, using some of the partial vBIOS files uploaded here as samples. This finally allowed me to pass the GPU to the Windows VM!

 

---

 

Anyway, I wrote a Python script that should automate this process (you give it a full ROM from TechPowerUp or one you dumped using nvflash under Windows), and it should create a patched ROM that you can use to make GPU passthrough work.

 

I passed a few ROMs I downloaded from TechPowerUp through the script and compared them to what you guys uploaded here, and so far the Pascal vBIOS files appeared to match, bit by bit. Still, I can't stress it enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. It does a few rudimentary sanity checks, but I would recommend dumping the partial ROM yourself if you can. But for those who are pulling their hair out over not being able to do that, this may be a lifesaver.

 

https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher

 

This looks pretty cool - I looked at the GitHub page, but I am really dumb with stuff like this. Can you explain the steps you used in Windows to create the patched BIOS?

 

I have an NVIDIA EVGA 1050 Ti (https://www.techpowerup.com/gpudb/b3905/evga-gtx-1050-ti-sc-acx-2-0) that I am trying to pass through to my Win 10 VM. Thanks in advance.

2 hours ago, ice pube said:

This looks pretty cool - I looked at the GitHub page, but I am really dumb with stuff like this. Can you explain the steps you used in Windows to create the patched BIOS?

 

I have an NVIDIA EVGA 1050 Ti (https://www.techpowerup.com/gpudb/b3905/evga-gtx-1050-ti-sc-acx-2-0) that I am trying to pass through to my Win 10 VM. Thanks in advance.

 

I have an EVGA 1050 Ti SC card. I found a vBIOS on TechPowerUp, but it said it was untested, and it didn't work (the bottom half of the screen looked fine, but the upper half was all screwed up).

So I pulled my own using GPU-Z. I installed the card in my old Windows box, ran GPU-Z, and extracted the vBIOS. Then I edited it with HxD to remove the "NVIDIA header". I put it on my server, added the reference in my VM XML, and it works perfectly.
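For anyone wanting to script that hex-editor step, a rough shell sketch of the same idea. It is naive on purpose: it assumes the first 0x55AA signature in the GPU-Z dump really is the start of the option ROM, and that GNU grep with PCRE support is available, so compare the result against a known-good dump before using it:

```shell
# strip_header: cut everything before the first PCI option ROM signature
# (bytes 0x55 0xAA) out of a full GPU-Z dump, mimicking the HxD step.
# Naive assumptions: the first 0x55AA is the real image start, and GNU
# grep with PCRE (-P) is available. Verify the output before using it!
strip_header() {
    in="$1"; out="$2"
    # grep -b reports the byte offset of each match; keep the first one.
    off=$(grep -aboP '\x55\xAA' "$in" | head -n1 | cut -d: -f1)
    [ -n "$off" ] || { echo "no option ROM signature found" >&2; return 1; }
    # Copy from that offset to the end of the file.
    dd if="$in" of="$out" bs=1 skip="$off" 2>/dev/null
}
```

The resulting file is what goes in the `<rom file='...'/>` line of the VM XML.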

  • 4 weeks later...

Seems to work. No error Code 43 in the guest OS's Device Manager.

I followed the instructions from the 2nd video, but I didn't try with a monitor connected, just with a VNC remote connection.

I had tried before to pass through the GPU with another Linux-based distribution, but in that case it didn't work, or I didn't succeed.

 

Host:

OS: unRAID version: 6.3.5

System: Dell Power Edge T20, CPU: Xeon 1225 v3 (with integrated GPU)

 

Guest:

OS: Win 8.1 Pro x64

GPU: GTX 1050 Ti (4 GB), NVIDIA driver: 376.09

 

Edited by Dorin
  • 2 weeks later...

Hi all!

First, many thanks to gridrunner for the great tutorial on the first page :)

I have downloaded the trial of unRAID 6.3.5 to experiment with GPU passthrough on a Dell Precision T5600 (C600 chipset, 64 GB DDR3, dual Xeon E5-2620, GTX 770).

I get error Code 43 in my Win10 VM after successfully installing the NVIDIA drivers.

Do you know if it's supposed to work with my hardware?

Do I need a newer motherboard?

Thanks :)

On 05/10/2017 at 11:22 AM, Dual_Shock said:

Hi all!

First, many thanks to gridrunner for the great tutorial on the first page :)

I have downloaded the trial of unRAID 6.3.5 to experiment with GPU passthrough on a Dell Precision T5600 (C600 chipset, 64 GB DDR3, dual Xeon E5-2620, GTX 770).

I get error Code 43 in my Win10 VM after successfully installing the NVIDIA drivers.

Do you know if it's supposed to work with my hardware?

Do I need a newer motherboard?

Thanks :)

It finally works for me with a GTX 970 instead of my GTX 770!!! And I didn't even need to put the dumped BIOS in the XML...

However, the performance is very poor. In the Unigine Heaven benchmark in DX11, I am at 20 FPS average... :( (normally 60-80)


I am not a gamer, but this is not typical of the VM slowdowns I have read about; I'd expect reductions of maybe 20% or so. So there might still be something not quite right in your config. If this is the sole video card, you might try the ROM file in the XML. It could also be that you are not giving the VM enough cores or memory, or not allocating matched core/hyper-thread pairs properly.

 

Review carefully and experiment and you might find something that could pump up the video performance.


@Dual_Shock   please post your XML, IOMMU groups, and your CPU thread pairings so we can see :)

Definitely try passing through the vBIOS. Your 770 probably didn't work because its vBIOS doesn't support EFI, so it would only work using SeaBIOS and not OVMF. Passing through a 770 vBIOS that does support EFI will make the card start with an EFI BIOS, so it will work. Alternatively you could flash the card, but it's much easier to use the ROM in the XML.

Check in your BIOS settings that your primary GPU is the onboard one, if you have that, and make sure that multi-monitor is off.

Also, don't mix cores from across your two CPUs.
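On the thread pairings: the goal is that each pair of vCPUs maps to a real core plus its hyper-thread sibling, all from the same physical CPU. A sketch of what that looks like in the VM XML (the cpuset numbers here are hypothetical; check your own pairings on unRAID's VM settings page or with `lscpu -e`):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- vCPUs 0/1 pinned to core 2 and its HT sibling, vCPUs 2/3 to core 3
       and its sibling; all four threads come from the same physical CPU -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
</cputune>
```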

  • 2 weeks later...

I followed the instructions. Hope I didn't brick anything.

I have a GTX 1050 in my primary PCIe slot. I dumped the BIOS using the command line. I didn't move the card to a secondary slot, which I hope was OK? Everything actually worked and I succeeded in dumping the BIOS. The only thing that didn't work was binding the card again; I get an error message that this card doesn't exist. I had initially unbound it.

Everything seems to still be working, but I am worried that I broke something by not binding the card again?

On 10/31/2017 at 7:35 PM, steve1977 said:

I followed the instructions. Hope I didn't brick anything.

I have a GTX 1050 in my primary PCIe slot. I dumped the BIOS using the command line. I didn't move the card to a secondary slot, which I hope was OK? Everything actually worked and I succeeded in dumping the BIOS. The only thing that didn't work was binding the card again; I get an error message that this card doesn't exist. I had initially unbound it.

Any thoughts on the above? My GPU (primary-slot GTX 1050) is no longer bound and I don't know how to bind it again. I had unbound it to dump the BIOS, but then failed to bind it again. Any thoughts on how to do so? Thanks in advance!


Hope to get this sorted out. Let me provide you some more information.

Context on the hardware: only one GPU (GTX 1050), used in the primary PCIe slot; the GPU is used by unRAID and not assigned to a VM.

Below is what `lspci -v` gives me for the GPU. You will notice that the kernel driver is not in use (this was different before I unbound it).

 

https://pastebin.com/8XFap1JA

 

Followed the comments to bind the card again. See error message below:

 

root@Tower:~# cd /sys/bus/pci/devices/0000:65:00.0/
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 1 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 0 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo "0000:65:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
-bash: echo: write error: No such device
 

And some more info from tools/system devices in case this helps trouble-shooting:

 

IOMMU group 36
    [10de:1c81] 65:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
    [10de:0fb9] 65:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

 

 

How can I "bind" the GPU again? What happened when I "successfully" unbound my card?
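For reference, the "No such device" write error usually means the device isn't attached to (or matched by) the driver you're echoing into. Since kernel 3.16 there is a `driver_override` attribute that makes the bind stick; a rough sketch, using the PCI address from the lspci output above (the `SYSFS` variable is overridable purely for testing):

```shell
# Re-attach a PCI device (e.g. the GPU at 0000:65:00.0) to a driver after
# it was unbound. A bare write to .../drivers/<drv>/bind fails with
# "No such device" unless the driver matches the device, so set
# driver_override first. SYSFS is overridable purely for testing.
SYSFS="${SYSFS:-/sys/bus/pci}"

rebind_gpu() {
    dev="$1"; drv="$2"        # e.g. rebind_gpu 0000:65:00.0 vfio-pci
    echo "$drv" > "$SYSFS/devices/$dev/driver_override"
    echo "$dev" > "$SYSFS/drivers/$drv/bind"
}
```

To hand the card back to the host driver instead, clear the override (write an empty string to `driver_override`) and write the address to `/sys/bus/pci/drivers_probe` so the kernel re-matches it.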

On 1/11/2017 at 2:09 PM, gridrunner said:

so was it working fine before 6.4.0-rc9f or is this the first time you have tried?

 

For some reason Hyper-V was enabled and it didn't work, not even after disabling it. I created a new VM with it disabled from the start, and that one worked.

Weird.

  • 2 weeks later...

I'm switching the card to my primary slot, and I have

 

<alias name='hostdev0'/>

above the address line in my XML. Do I leave this in? The other VM that I previously had in the primary slot didn't have this line:

	<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
	  <rom file='/mnt/disks/sm961/system/gt730bios.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>

Thanks


I've tried the different methods... dumping the BIOS from the machine itself, and dumping it from another machine using GPU-Z and editing with a hex editor.

With an ASUS Sabertooth 990FX r1 and an AMD FX-8150, nothing will persuade the first GPU to pass through to a Windows VM. I've genuinely spent many hours attempting it.

Either the VM never starts and hogs the CPU, never initialising the displays/GPUs, or I get error Code 43. I've tried different Windows client versions, including pre-Creators-Update Win 10, Win 10 Enterprise, and Windows Server 2016.

I've resigned myself to the fact that of the 3 GPUs installed, only 2 of them will work.

I don't know if this is a BIOS limitation of my motherboard (no newer BIOS exists) or if it's a CPU issue. If anyone has a matching/similar setup with any insights, I'd welcome your comments.

My path from here is an upgrade to a Ryzen CPU and an ASRock Taichi board (unless anyone knows of another board that properly supports unbuffered ECC RAM?).


I have also spent many hours (if not literally days). I still don't have it fully working, but I've at least made some meaningful improvements. Have you tried using OVMF instead of SeaBIOS? OVMF appears to work a lot better. Also, are you using RDC, which may be another source of issues?

