Siwat2545 Posted February 25, 2018 (edited)

TL;DR: This issue is caused by the Nvidia driver refusing to load when it detects a virtual environment.

The easy workaround

For most scenarios, setting the Hyper-V option to No is enough:
1. Go to the VMs tab
2. Edit the VM's configuration
3. Switch to "Advanced Mode"
4. Disable Hyper-V

The more advanced workaround

This method is for those for whom turning Hyper-V off does not help. Here we'll be patching the Nvidia driver itself.

Requirements:
- Another Windows 10 machine
- A decent internet connection
- Decent disk space available
- An OVMF VM (SeaBIOS won't work)
- A USB flash drive

Step 1 [on the external machine]: Install the Windows Driver Kit
Navigate to https://docs.microsoft.com/en-us/windows-hardware/drivers/download-the-wdk and download the WDK for Windows 10.

Download Nvidia CUDA:
Navigate to https://developer.nvidia.com/cuda-zone. After it has downloaded, run it, but change the extraction path to C:\NVIDIA. If the following windows are shown, DO NOT CLICK CANCEL. You must quickly end its process tree with Task Manager (clicking Cancel will result in the setup deleting the temp files).

Now let's enable test signing. Open a CMD window as Administrator, type the following, and press Enter:

Bcdedit.exe -set TESTSIGNING ON

It should say "The operation completed successfully". Reboot your computer. If this step is done correctly, you will see the words "Test Mode" watermarked near the bottom of the desktop wallpaper.

Next, download nvidia-kvm-patcher by sk1080 from https://codeload.github.com/sk1080/nvidia-kvm-patcher/zip/master and extract "patcher.ps1" and "gencert.ps1" to C:\. Then open a PowerShell window as Administrator and type:

cd /

Set the execution policy to RemoteSigned with:

set-executionpolicy remotesigned

Press Y, then Enter, then type:

./patcher.ps1 C:\NVIDIA\Display.Driver

The following should happen:

Copy the NVIDIA folder to your USB drive, unplug it, and open CMD as Administrator again. Type:

Bcdedit.exe -set TESTSIGNING OFF

Open PowerShell as Administrator and type "set-executionpolicy remotesigned" again; this time answer N.

NOW LET'S CREATE OUR VM!
- Attach the USB drive you just copied your driver onto to the unRAID system
- Use Advanced Mode
- Set Hyper-V to No and set BIOS to OVMF; attach your USB drive to the VM as well
- Use VNC as your graphics and select None as sound
- Install Windows as usual (DO NOT INSTALL THE NETWORK DRIVER)

Disable Windows 10 automatic driver installation [in the VM, controlled via VNC]:
1. Open Control Panel
2. Navigate by following the picture
3. Click Save Changes

[Still in the VM] Next we need to disable driver signature enforcement. Run CMD as Administrator, type:

bcdedit.exe /set nointegritychecks on

then reboot. Next, move the driver onto your desktop. Shut down the VM, enable the ACS override patch, reboot unRAID, then attach the graphics card and sound device. Install the driver, then install the network card driver. DO NOT UPDATE YOUR DRIVER. DONE!

If these steps don't work, try again with driver version 372.54.

******If all that doesn't work, here's a list of things that have fixed client systems (TL;DR: system-specific fixes):
1. Moving the GPU to the second slot and putting another GPU in the first slot
2. Binding the card to vfio-pci
3. Disabling Hyper-V
4. Dumping the vbios (works quite well)
5. Using VNC alongside the GPU
6. Disabling the option ROM for the GPU (available on some server or workstation boards, rarely on consumer boards)
7. Using an older driver

Edited March 15, 2018 by Siwat2545 Quote
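As a footnote to the workarounds above: on systems managed through libvirt (which is what unRAID uses under the hood), the "hide the VM from the driver" idea can also be expressed directly in the VM's XML. This is a minimal sketch of the relevant fragment, not a drop-in config; the vendor_id value is an arbitrary placeholder, and availability of these elements depends on your libvirt version:

```xml
<features>
  <acpi/>
  <hyperv>
    <!-- spoof the hypervisor vendor string so the driver does not
         recognise a known virtualization platform -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest entirely -->
    <hidden state='on'/>
  </kvm>
</features>
```

Note that keeping `<hyperv>` enabled with a spoofed vendor_id is the opposite approach to simply disabling Hyper-V; some setups need one, some the other.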
suender Posted March 7, 2018 Thank you very much, you are my hero! Now I have a headless Ryzen server with only one GPU. Quote
SSD Posted March 8, 2018 Thanks for taking the time to post this guide! I haven't tested it myself, but it looks detailed and comprehensive! (#ssdindex - GPU Passthru Guide for overcoming Error 43) Quote
Warrentheo Posted March 9, 2018 I was having this issue, all I needed to do was dump the rom for the card from the local command line, then edit the VM xml to include the <rom file='/mnt/user/.....'/> code for it... Card has worked perfectly for me ever since... Does this solution resolve some issue I am unaware of? Quote
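For anyone wanting to try the same approach, the ROM dump Warrentheo mentions is typically done through sysfs on the unRAID console. A rough sketch follows; the PCI address 01:00.0 and the output path /mnt/user/vbios/ are placeholders you would substitute with your own, and the card generally must not be in use by a VM while dumping:

```shell
# locate the card's PCI address first
lspci | grep -i nvidia

# enable ROM read access, copy the ROM out, then disable access again
echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
cat /sys/bus/pci/devices/0000:01:00.0/rom > /mnt/user/vbios/mycard.rom
echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
```

The resulting file is then referenced from the VM XML, as described above, with a line such as `<rom file='/mnt/user/vbios/mycard.rom'/>` inside the GPU's hostdev block.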
Siwat2545 Posted March 12, 2018 (Author) On 3/10/2018 at 1:40 AM, Warrentheo said: I was having this issue, all I needed to do was dump the rom for the card from the local command line, then edit the VM xml to include the <rom file='/mnt/user/.....'/> code for it... Card has worked perfectly for me ever since... Does this solution resolve some issue I am unaware of? This fix resolves the issue where Nvidia somehow detects that the machine is a virtual one even when Hyper-V is turned off Quote
Starlord Posted March 12, 2018 Is there a reason that you are using the quadro drivers and not the standard geforce ones? Quote
Warrentheo Posted March 13, 2018 On 3/12/2018 at 2:38 AM, Siwat2545 said: This fix resolves the issue where Nvidia somehow detects that the machine is a virtual one even when Hyper-V is turned off I am fairly new to this stuff... Now you have me wondering if I have to look forward to nVidia randomly killing off my setup Quote
Siwat2545 Posted March 14, 2018 (Author) On 3/13/2018 at 3:09 AM, Starlord said: Is there a reason that you are using the quadro drivers and not the standard geforce ones? There are some optimizations for workstation use, but CUDA IS NOT a Quadro driver Quote
jm9843 Posted March 29, 2018 On 3/13/2018 at 10:26 AM, Warrentheo said: I am fairly new to this stuff... Now you have me wondering if I have to look forward to nVidia randomly killing off my setup This just happened to me. I was happily passing an Nvidia GPU thru to a Windows 10 VM when it began reporting error code 43 out of nowhere. I'm not sure if it was an Nvidia driver update (I'm in the habit of updating GPU drivers) or if it coincided with a recent (planned) server shutdown. It sucks and I'm not sure where to go from here. If it is the driver, is it possible to uninstall it and roll back until I find the last one that works? I'm assuming that I can download older driver packages from Nvidia? Quote
Warrentheo Posted March 30, 2018 (edited) I have an EVGA GTX 1070 with the current driver... Still working... I have the boot menu blacklisting the open-source driver, and then passed the device ID to the vfio-pci driver instead... I then have a ROM file I dumped from the actual card in unRAID listed in the XML file for the domain... This seems to be working with no issues... I have OVMF BIOS, i440fx-2.11, and Hyper-V extensions turned off... Edited March 30, 2018 by Warrentheo Quote
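For reference, the blacklist-plus-vfio-pci setup described here is usually done through unRAID's boot configuration. A sketch with hypothetical device IDs; 10de:1b81 and 10de:10f0 stand in for a GPU and its HDMI audio function, which you would look up on your own hardware:

```shell
# find the [vendor:device] IDs of the GPU and its audio function
lspci -nn | grep -i nvidia
# e.g. "01:00.0 VGA ... [10de:1b81]" and "01:00.1 Audio ... [10de:10f0]"

# then edit /boot/syslinux/syslinux.cfg and add the IDs to the append line:
#   append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot
```

Binding both functions to vfio-pci at boot means no host display driver ever claims the card, which is what makes it cleanly available for passthrough.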
jm9843 Posted March 30, 2018 Thanks for the reply. I got it working again by using an edited vbios from TechPowerUp per Spaceinvader's instruction. I also left Hyper-V extensions turned off for good measure even though that alone didn't fix it. I have no idea why it worked okay for some time without the bios dump but requires it now. But I'm just glad to have my gaming vm functional again. Quote
Warrentheo Posted March 30, 2018 I think what I am about to say is becoming less of a concern, which is why nVidia SLI now works with some cards with different BIOS versions, however using a BIOS that didn't come from the card itself can have some pretty big consequences... Some of the things a BIOS does is regulate fan speeds before windows, and before you think that is not an issue, I would point you here: https://www.evga.com/thermalmod/ This is a recall and BIOS update EVGA did for my card, because without them the card was cooking itself to death... I am sure there are other examples of Bios updates being more than cosmetic or minor... I personally would only EVER use Latest Current Bios written directly for my card, and since that should be the one installed on the card, I would only use a Bios dumped directly off the card in question... That said, it is very unlikely that you will come across a card frying issue from using the wrong Bios... But because the possibility is there, I would always recommend dumping the Bios from the card itself... Quote
steve1977 Posted April 7, 2018 (edited) On 3/30/2018 at 8:59 AM, Warrentheo said: I have an EVGA GTX 1070 with the current driver... Still working... I have the boot menu blacklisting the open-source driver, and then passed the device ID to the vfio-pci driver instead... I then have a ROM file I dumped from the actual card in unRAID listed in the XML file for the domain... I also face the infamous error 43 for my primary GPU. Interestingly enough, it is working well for my secondary card. I am looking for a solution, as I'd like to remove the secondary GPU from my server to free up a slot. I'd like to look into both solutions shared here. Two questions: 1) BIOS dump: can you @Warrentheo help elaborate, please? How do I blacklist the open-source drivers (which menu, which drivers)? How do I pass the device ID to the vfio-pci driver? I have successfully dumped the BIOS but am still seeing error 43. 2) Advanced option: does this require me to build a new VM? I have two existing Win10 VMs and I'd prefer not to create a new one. Possible? Edited April 7, 2018 by steve1977 Quote
Warrentheo Posted April 7, 2018 @steve1977 This is where I got most of that info: (Love his videos in general, helped me a lot) MSI_util.exe Quote
steve1977 Posted April 8, 2018 Thanks for your help. His videos are indeed fantastic and I have watched many of them for many different unRAID things. Very well done! I watched the second video twice again, but didn't find any reference to blacklisting or open-source drivers. Are you sure that this was in these videos? Quote
dbeattie71 Posted April 14, 2018 Quote attach the Graphic card and Sound Device Install The driver, Install Network Card Driver Dumb question, but with the card passed through, how are you remoting into the VM with the network driver not installed? Quote
JonathanM Posted April 15, 2018 19 hours ago, dbeattie71 said: Dumb question, but with the card passed through, how are you remoting into the VM with the network driver not installed? He's not. He's connecting to the KVM VNC console for that VM hosted on unRAID. If you watch the video closely, you'll see. Quote
planetwilson Posted April 15, 2018 I am getting this as well. My Windows VM, which has worked for months, has recently stopped working with a code 43. I tried uninstalling the drivers and reinstalling them. I noticed that my VM config now said Hyper-V was on (no idea what it said previously; was this changed in a recent unRAID release?), so I turned that off, but it's still not working. I am using a bios dump that I have used previously with no issues. Quote
Jcloud Posted April 16, 2018 7 hours ago, planetwilson said: I am getting this as well. My Windows VM, which has worked for months, has recently stopped working with a code 43. I tried uninstalling the drivers and reinstalling them. I noticed that my VM config now said Hyper-V was on (no idea what it said previously; was this changed in a recent unRAID release?), so I turned that off, but it's still not working. I am using a bios dump that I have used previously with no issues. That happened to me as well, about two weeks ago. Still don't know what I did to set it off. I had to make the VM all over again, using a Windows build 1607 ISO and then Windows Update to get current; if I used the 1709 build ISO I was right back to code 43. While I'm not certain, I think it was the "Windows Upgrade Assistant" application which patched in 1709 that caused my headache. Quote
planetwilson Posted April 16, 2018 Well, mine is working again after I tried an earlier dumped bios instead of the more recent one I had. Now I'm wondering if the more recent one was a patched one from the website GridRunner talks about in his videos, and whether the properly dumped one is fine. Quote
steve1977 Posted April 16, 2018 On 4/8/2018 at 9:03 AM, steve1977 said: Thanks for your help. His videos are indeed fantastic and I have watched many of them for many different unRAID things. Very well done! I watched the second video twice again, but didn't find any reference to blacklisting or open-source drivers. Are you sure that this was in these videos? Any thoughts on the "blacklisting of open-source drivers"? I would love to use just one GPU to free up one slot, but I struggle to get the GPU to work in the primary slot (error 43). Quote
steve1977 Posted April 21, 2018 On 2/25/2018 at 3:32 PM, Siwat2545 said: ******If all that doesn't work Here's a list of thing that It fixed client system (TLDR : System specific Fixes) 2. vfio-pci the card Quote
fr05ty Posted April 22, 2018 (edited) I was having an issue with code 43. I tried hyper-visor on/off and it didn't make a difference, but I did read in a Reddit post to check whether the unRAID OS boot USB was booting in UEFI mode in the motherboard BIOS, and if so to change it to legacy mode or the equivalent. That fixed my problem; the card is all passed through now, I just had to do a driver reinstall. edit: unraid OS boot usb Edited October 3, 2018 by fr05ty Quote
steve1977 Posted April 22, 2018 I changed the GPU (from a 1050 to a 1060) and now it seems to work with the newly dumped vbios. I even left Hyper-V enabled. Quote