SpaceInvaderOne (Author) Posted August 4, 2017

On 01/08/2017 at 7:46 PM, Matoking said:

I was pointed towards this thread when I had trouble isolating my 1070 for PCI passthrough. Long story short, I tried dumping my vBIOS as instructed in the video, but couldn't do so (the `cat` command printed I/O errors instead). Instead, I resorted to dumping the full vBIOS under Windows and using a hex editor to splice the relevant part of the ROM into a new file, using some of the partial vBIOS files uploaded here as samples. This finally allowed me to pass the GPU to the Windows VM!

Anyway, I wrote a Python script that should automate this process (you give it a full ROM from techPowerUp or one you dumped using nvflash under Windows), and it creates a patched ROM that you can use to make GPU passthrough work. I passed a few ROMs I downloaded from techPowerUp through the script and compared them to what you uploaded here, and so far the Pascal vBIOS files matched, bit for bit.

Still, I can't stress enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. It does a few rudimentary sanity checks, but I would recommend dumping the partial ROM yourself if you can. Still, for those who are pulling their hair out over not being able to do that, this may be a lifesaver.

https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher

Great work! I have linked this in the OP.
entegral Posted August 10, 2017 (edited)

I was also pointed here after running into issues with the install of an EVGA GTX 1080 Ti. I also subscribe to your YouTube channel; amazing work, and thank you for all the help you have already given me!

It seems the problem I have encountered may be related to the vBIOS. Then again, I haven't attempted a dump, because my BIOS offers the ability to boot from the onboard VGA port, so I don't think that is necessary. In case it's relevant, my motherboard is an Asus Z9PA-D8. HVM and IOMMU are both enabled according to unRAID's 'Info' tab, and the card is the only PCI device in its IOMMU group (other than the NVIDIA audio, which is in the same group).

After installing an Ubuntu VM with VNC (per your introduction to unRAID VMs video), and then enabling the discrete card after the install, the GRUB bootloader displays and I'm able to navigate its options successfully. To my novice mind, this seems to indicate that GPU passthrough is working, right? But as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen colour and becomes unresponsive. Even a 'force stop' of the VM doesn't clear/reset the screen. If the VM is force-stopped and then started again, I am able to view/interact with the GRUB bootloader, but as soon as I try to boot into Ubuntu, the screen goes blank. Any ideas or suggestions on how to fix this?

Edited August 10, 2017 by entegral
SpaceInvaderOne (Author) Posted August 12, 2017

On 10/08/2017 at 1:36 AM, entegral said:

I was also pointed here after running into issues with the install of an EVGA GTX 1080 Ti. ... as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen colour and becomes unresponsive. ... Any ideas or suggestions of how to fix?

Hi @entegral, yes: if you can see the GRUB bootloader, then GPU passthrough is working. When setting up an Ubuntu VM from the template, it defaults to the OVMF BIOS type; I would use SeaBIOS for Ubuntu. So make a new Ubuntu VM, and when creating it, toggle the advanced view in the top right of the template so you can choose the BIOS type, and select SeaBIOS. Give this a try.
kri kri Posted August 24, 2017

On 8/1/2017 at 1:46 PM, Matoking said:

Anyway, I wrote a Python script that should automate this process ... Still, I can't stress it enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. ... https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher

This looks pretty cool. I looked at the GitHub, but I am really dumb with stuff like this. Can you explain the steps you used in Windows to create the flashed BIOS? I have an NVIDIA EVGA 1050 Ti (https://www.techpowerup.com/gpudb/b3905/evga-gtx-1050-ti-sc-acx-2-0) that I am trying to pass through to my Win 10 VM. Thanks in advance.
SSD Posted August 24, 2017

2 hours ago, ice pube said:

Can you explain the steps you used in Windows to create the flashed BIOS? ...

I have the EVGA 1050 Ti SC card. I found a vBIOS on techPowerUp, but it said it was untested, and it didn't work (the bottom half of the screen looked fine, but the upper half was all screwed up). So I pulled my own using GPU-Z: I installed the card in my old Windows box, ran GPU-Z, and extracted the vBIOS. Then I edited it with HxD to remove the NVIDIA header, put it on my server, added the reference in my VM XML, and it works perfectly.
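For anyone curious what the HxD step above actually does, the cut point can be found programmatically. This is a sketch of my understanding of the format, not code taken from the patcher script: a PCI expansion ROM image starts with the 0x55AA signature, and the 16-bit pointer at offset 0x18 of that image leads to a "PCIR" data structure, so we scan for the first offset that passes both checks and drop everything before it (the NVIDIA header that GPU-Z includes).

```python
import struct

ROM_SIGNATURE = b"\x55\xaa"

def strip_nvidia_header(data: bytes) -> bytes:
    """Return the dump starting at the first valid PCI expansion ROM image.

    A valid image begins with the 0x55AA signature, and the little-endian
    16-bit pointer at image offset 0x18 leads to a "PCIR" data structure.
    Everything before that point (the vendor header) is discarded.
    """
    offset = 0
    while True:
        offset = data.find(ROM_SIGNATURE, offset)
        if offset == -1:
            raise ValueError("no valid PCI expansion ROM image found")
        if offset + 0x1A <= len(data):
            (pcir_ptr,) = struct.unpack_from("<H", data, offset + 0x18)
            if data[offset + pcir_ptr : offset + pcir_ptr + 4] == b"PCIR":
                return data[offset:]
        offset += 1

# Usage (file names are examples):
# with open("full_dump.rom", "rb") as f:
#     patched = strip_nvidia_header(f.read())
# with open("patched.rom", "wb") as f:
#     f.write(patched)
```

I'd still compare the result against a known-good partial dump before feeding it to a VM; treat this as a sanity aid, not a guarantee.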
Dorin Posted September 21, 2017 (edited)

Seems to work: no error code 43 in the guest OS's Device Manager. I followed the instructions from the second video, but I didn't try with a monitor connected, just with a VNC remote connection. I had tried before to pass through the GPU with another Linux-based distribution, but in that case it didn't work, or I didn't succeed.

Host:
OS: unRAID version 6.3.5
System: Dell PowerEdge T20
CPU: Xeon 1225 v3 (with integrated GPU)

Guest:
OS: Win 8.1 Pro x64
GPU: GTX 1050 Ti (4 GB), NVIDIA driver: 376.09

Edited September 21, 2017 by Dorin
Dual_Shock Posted October 5, 2017

Hi all! First, many thanks to gridrunner for the great tutorial on the first page. I have downloaded the trial of unRAID 6.3.5 to experiment with GPU passthrough on a Dell Precision T5600 (C600 chipset, 64 GB DDR3, dual Xeon E5-2620, GTX 770). I get error code 43 in my Win10 VM after successfully installing the NVIDIA drivers. Do you know if it's supposed to work with my hardware, or do I need a newer motherboard? Thanks
ren88 Posted October 5, 2017

I need help setting up my 1050 Ti on my laptop as a GPU passthrough on QEMU. I am using Revenge OS (Arch Linux).
SSD Posted October 8, 2017

On 10/5/2017 at 10:50 AM, ren88 said:

I need help setting up my 1050 Ti on my laptop as a GPU passthrough on QEMU...

Are you using unRAID?
ren88 Posted October 9, 2017

15 hours ago, SSD said:

Are you using unRAID?

Yes.
Dual_Shock Posted October 16, 2017

On 05/10/2017 at 11:22 AM, Dual_Shock said:

I get error code 43 in my Win10 VM after successfully installing the NVIDIA drivers. Do you know if it's supposed to work with my hardware? ...

It finally works for me with a GTX 970 instead of my GTX 770! And I didn't even need to put the dumped BIOS in the XML. However, the performance is very poor: in the Unigine Heaven benchmark in DX11, I am at 20 FPS average (normally 60-80).
SSD Posted October 16, 2017

I am not a gamer, but this is not typical of the VM slowdowns I have read about; I'd expect reductions of maybe 20% or so. So there might still be something not quite right in your config. If this is the sole video card, you might try the ROM file in the XML. It could also be that you are not giving it enough cores or memory, or not allocating matching cores and their hyper-thread siblings properly. Review carefully and experiment, and you might find something that pumps up the video performance.
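On the pinning point: one quick way to see which host CPU numbers are hyper-thread pairs (so the VM gets a physical core together with its sibling, rather than two halves of different cores) is to read the topology from sysfs. These paths are standard on Linux, so they should work from the unRAID console; the actual pair numbering will differ per machine.

```shell
# Print each physical core's hyper-thread siblings, one pair per line.
# A line like "0,8" means CPU 0 and CPU 8 share one physical core, so
# they should be pinned to the same VM together.
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -un
```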
Dual_Shock Posted October 16, 2017

Thanks for your help.
I tried with 4 cores, 4 GB = 26 FPS average.
I tried with 24 cores, 16 GB = 27 FPS average.
I will test with the ROM BIOS included in the XML.
SSD Posted October 16, 2017

@gridrunner may have other ideas. A reduction in gaming performance from over 60 FPS to 26 FPS is not typical.
SpaceInvaderOne (Author) Posted October 18, 2017

@Dual_Shock please post your XML, IOMMU groups, and your CPU thread pairings so we can see. Definitely try passing through the vBIOS. Your 770 probably didn't work because it didn't support EFI, so it would only work using SeaBIOS and not OVMF. Passing through a 770 vBIOS that does support EFI will make the card start with an EFI BIOS, so it will work that way; or you could flash the card, but it's much easier to use the ROM in the XML. Check your BIOS settings so that your primary GPU is the onboard one if you have that, and make sure that multi-monitor is off. Also, don't mix cores from across your two CPUs.
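For reference, the thread pairings end up in the VM's XML as a <cputune> block. This is only a hypothetical sketch; the CPU numbers below are made up, so substitute your own pairings (and keep them all on one physical CPU on a dual-socket board):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Pin each vCPU to one host thread. Here vCPUs 0/1 and 2/3 land on
       hyper-thread siblings of the same physical cores (example numbering). -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='10'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='11'/>
</cputune>
```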
steve1977 Posted October 31, 2017

I followed the instructions. Hope I didn't brick anything. I have a GTX 1050, which is in my primary PCIe slot. I dumped the BIOS using the command line. I didn't move the card to a secondary slot, which I hope was OK? Everything actually worked, and I succeeded in dumping the BIOS. The only thing that didn't work was binding the card again; I get an error message that this card doesn't exist. I initially unbound it. Everything seems to still be working, but I am worried that I broke something by not binding the card again?
Josecitox Posted November 1, 2017

I'm getting error code 43 with the latest unRAID beta release. Drivers install just fine, but I get that error code.
SpaceInvaderOne (Author) Posted November 1, 2017

So was it working fine before 6.4.0-rc9f, or is this the first time you have tried?
steve1977 Posted November 3, 2017

On 10/31/2017 at 7:35 PM, steve1977 said:

The only thing that didn't work was binding the card again; I get an error message that this card doesn't exist. I initially unbound it. ...

Any thoughts on the above? My GPU (a GTX 1050 in the primary slot) is no longer bound, and I don't know how to bind it again. I had unbound it to dump the BIOS, but then failed to bind it again. Any thoughts on how to do so? Thanks in advance!
steve1977 Posted November 3, 2017

Hope to get this sorted out. Let me provide some more information.

Context on the hardware: only one GPU (GTX 1050), sitting in the primary PCIe slot, used by unRAID and not assigned to a VM.

Below is what "lspci -v" gives me for the GPU. You will notice that the kernel driver is not in use (this was different before I first unbound it): https://pastebin.com/8XFap1JA

I followed the comments to bind the card again. See the error message below:

root@Tower:~# cd /sys/bus/pci/devices/0000:65:00.0/
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 1 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 0 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo "0000:65:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
-bash: echo: write error: No such device

And some more info from Tools > System Devices in case this helps troubleshooting:

IOMMU group 36
[10de:1c81] 65:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
[10de:0fb9] 65:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

How can I bind the GPU again? What happened when I "successfully" unbound my card?
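A note on that "No such device" error, based on how the standard Linux sysfs interface behaves (I can't verify this on your exact box): vfio-pci only accepts a bind for device IDs it has been told about, so you either register the ID with new_id first, or simply let the kernel re-probe the card with whichever driver normally claims it. A sketch, with the address and IDs taken from your lspci output:

```shell
# Device address and vendor/device IDs from `lspci -nn` (yours may differ).
DEV=0000:65:00.0

# Option 1: ask the kernel to re-probe the device, so whatever driver
# originally held it can claim it again.
echo "$DEV" > /sys/bus/pci/drivers_probe

# Option 2: bind to vfio-pci explicitly. The bind only works after the
# device ID has been registered with the driver, hence new_id first.
echo "10de 1c81" > /sys/bus/pci/drivers/vfio-pci/new_id
echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/bind
```

Either way, nothing here is persistent: a reboot re-runs the normal driver probing, which is why the card still works even while it shows as unbound.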
Josecitox Posted November 3, 2017

On 1/11/2017 at 2:09 PM, gridrunner said:

So was it working fine before 6.4.0-rc9f, or is this the first time you have tried?

For some reason Hyper-V was enabled and it didn't work, not even after disabling it. I created a new VM with it disabled from the start, and that worked. Weird.
Fatherof4 Posted November 14, 2017

I have two of this card in my tower. GPU-Z couldn't save the BIOS; there was some error. Following gridrunner's video, I managed to dump the BIOS, and everything is working now. Here's the BIOS: https://www.dropbox.com/s/uiuh9qa4qin6vus/NVIDIA GeForce GTX 1060 6GB.dump?dl=0

Thank you, gridrunner. I appreciate your effort and help.
DZMM Posted November 22, 2017

I'm switching the card in my primary slot, and I have <alias name='hostdev0'/> above the address line in my XML. Do I leave this in? The other VM that I previously had in the primary slot didn't have this line:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/disks/sm961/system/gt730bios.dump'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>

Thanks
DigitalStefan Posted November 26, 2017

I've tried the different methods: dumping the BIOS from the machine itself, and dumping it from another machine using GPU-Z and editing it with a hex editor. With an ASUS Sabertooth 990FX r1 and an AMD FX-8150, nothing will persuade the first GPU to pass through to a Windows VM. I've genuinely spent many hours attempting it. Either the VM never starts and hogs the CPU, never initialising the displays/GPUs, or I get error 43. I've tried different Windows client versions, including pre-Creators Update Win 10, Win 10 Enterprise, and Windows Server 2016. I've resigned myself to the fact that of the 3 GPUs installed, only 2 of them will work. I don't know if this is a BIOS limitation of my motherboard (no newer BIOS exists) or a CPU issue. If anyone has a matching/similar setup with any insights, I'd welcome your comments. My path from here is an upgrade to a Ryzen CPU and an ASRock Taichi board (unless anyone knows of another board that properly supports unbuffered ECC RAM?).
steve1977 Posted November 26, 2017

I have also spent many hours (if not literally days). I still don't have it fully working, but at least I've made some meaningful improvements. Have you tried using OVMF instead of SeaBIOS? OVMF appears to work a lot better. Also, are you using RDC, which may be another source of issues?