GPU passthrough doesn't work after updating to unRAID 6.9



14 hours ago, Celsian said:

@giafidis & @DoeBoye If you aren't using the vBIOS from your GFX card, I recommend using Space Invader One's script to pull it. The script is here: https://github.com/SpaceinvaderOne/Dump_GPU_vBIOS

 

Use the User Scripts plugin, add the script, modify the first section, run it once, and then assign the vBIOS in your VM.

Good idea! Unfortunately, I tried it, and even with the vBIOS file, no change. The VM loads, but everything is extremely slow: I'll type in my login and it takes about 10 seconds to appear. Thanks for the suggestion though! It looked promising on the surface! :)

Link to comment
6 hours ago, giafidis said:

What Motherboard do you have?

I also tried the Binding Feature on all my GPUs (Nvidia and AMD) and it doesn't change anything.

 

@Celsian

Thanks for your support, mate!

 

I already dumped it with the Script. It didn't make any difference...

Side note: The script doesn't work for me on 6.9. I always get an error message saying I should bind the GPU to vfio-pci first and then try again. I tried that, but no luck. On 6.8.3, however, the script works without any binding at all...

 

I have an ASRock EP2C602-4L/D16.

 

Also, as an aside, the script didn't work for me either, but I did it the old-fashioned way and the vBIOS export worked (though it didn't solve the issue).
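For anyone curious, the "old-fashioned way" is a manual dump of the card's ROM through sysfs on the unRAID host. A rough sketch, assuming the GPU isn't driving the host console; the PCI address below is a placeholder you'd swap for your card's (see Tools > System Devices):

```shell
# Manual vBIOS dump via sysfs -- run on the unRAID host, not inside the VM.
# 0000:03:00.0 is a placeholder; substitute your GPU's PCI address.
GPU="0000:03:00.0"

echo 1 > "/sys/bus/pci/devices/$GPU/rom"              # make the ROM readable
cat "/sys/bus/pci/devices/$GPU/rom" > /tmp/vbios.rom  # copy it out
echo 0 > "/sys/bus/pci/devices/$GPU/rom"              # lock it again
```

Note that Nvidia ROMs dumped this way usually still contain a header that has to be trimmed before the file is usable in a VM; Space Invader One's script handles that part for you.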

Edited by DoeBoye
Link to comment
10 hours ago, Celsian said:

Jeez, I'm sorry friends. Not sure where to go from here.

 

@DoeBoye It sounds like our problems are a little different. My VM would only load if the Nvidia driver wasn't installed; otherwise it would freeze on the loading screen, then boot loop.

No worries! It was a good suggestion. One more thing confirmed not to be the issue :)

 

Link to comment

@DoeBoye I just saw the announcement and was thinking of trying it out, but I switched back to 6.8.3 because I tried some things in the meantime and got super frustrated:

I checked my system log and saw some repeated error messages: pcieport 0000:00:03.0: [ 6] Bad TLP, PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Transmitter ID). They show up after some activity on the PCIe bus. After researching a bit, it turned out that X99 chipsets are known for this issue, but I couldn't remember it being present on 6.8.3. With the kernel parameter "pcie_aspm=off" in the flash boot menu, you disable Active State Power Management for PCIe at boot and the errors no longer occur. I was convinced this was causing the passthrough issues, but after some testing, nothing changed...
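For anyone who wants to try the same workaround: on unRAID the parameter goes on the append line under Main > Flash > Syslinux configuration. A sketch of the default boot entry with it added (your append line may already carry other parameters; keep those too):

```
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_aspm=off initrd=/bzroot
```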

Link to comment

This still seems odd to me, as it happened even after upgrading and despite using the script:

I'm able to use one VM with one GPU, thankfully with the user script having worked. However, if I try to run two VMs with two GPUs, only one works; the other I can remote into, but the second GPU shows an Error 43. No matter how I swap the hardware around, it's still the same. Is anyone else experiencing this? Unfortunately, I can't revert back to 6.8.3.
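Error 43 on an Nvidia card in a VM is classically the driver refusing to start because it detects the hypervisor, so the usual first thing to try (no guarantee it's the cause here) is hiding KVM in that VM's XML. A sketch of the commonly suggested libvirt settings, merged into the VM's existing <features> block:

```xml
<features>
  <hyperv>
    <!-- any 12-character string works; it masks the KVM vendor id -->
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM hypervisor signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```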

Link to comment

I had successful GPU passthrough (tested acceleration) in an Ubuntu guest on 6.9.1, one time. It did not have the audio device configured correctly (a strange UNRAID bug that's apparently been lingering for years), so I did a clean shutdown, tweaked it per the Space Invader advanced GPU tweaks video (added multifunction='on' and set correct values for the audio device), and started it back up again. No output whatsoever, not even the BIOS. Even reverting to the broken audio configuration (or removing audio altogether) still resulted in no output at all. Rebooted UNRAID and same thing again, no output. Been pulling my hair out over this maddening inconsistency. Why would it work once and then not again?

 

The configuration that did work once was the latest Q35 with OVMF BIOS and no vBIOS attached (AMD GPU).

 

I am not seeing any errors or even warnings anywhere in UNRAID log or VM log that would indicate an issue. It should work - but it doesn't. I've spent probably about 16 hours across 3 days hammering on various Windows 10 Pro configurations to get a GPU passed through and never had any success. I swear I must have tried every combination by now... Why would a core feature be broken in the latest stable release?

 

This is my first foray into UNRAID and I just started with the latest which happened to be 6.9.1 - so no previous success from which to compare to older versions. Having a miserable time so far 😆.

Link to comment
56 minutes ago, Celsian said:

Econaut, are you able to remote into the station when it's giving you the black screen?

 

Not currently. I did install TeamViewer along with VNC when initially installing Ubuntu, and while I can see the TeamViewer ID populate in the systems list when I start the VM in passthrough mode, I can't actually connect to it. I could probably SSH in, if that's what you mean, but I don't have that enabled currently.

Link to comment
21 minutes ago, Econaut said:

 

Not currently. I did install teamviewer with VNC when initially installing Ubuntu [...]

 

Well, it's good that it's showing up. Can you post a screenshot of your VM settings?

 

This checklist might help as well: https://mediaserver8.blogspot.com/2020/07/problems-passing-through-gpu-to-unraid.html

 

You don't seem to be having the same issues as the folks who started this thread. The three of us couldn't get our VMs to boot at all; the video driver was causing the VM to terminate prematurely. Have a look at the link above, it might help.

Link to comment

Thanks for that guide - I actually found it referenced somewhere else and followed it pretty much to the letter on the most recent Win10 VM attempt (stopping short of the virt-manager section). After reading countless forum posts and articles, and watching videos, I've seen everything under the sun recommended for every possible scenario (SeaBIOS vs. OVMF, Q35 vs. i440fx, use a vBIOS or don't) - the community doesn't seem to have any reliable consensus aside from 'use whatever works' and 'try everything', which is infuriating haha.

 

I may not be having the exact same issue - in fact, I don't think anyone in the world is having my exact issue, since my hardware configuration (like most folks') is rather unique - but I am on 6.9.1 and I cannot get GPU passthrough working for the life of me, despite following every guide I could get my hands on.

 

Behold:

(screenshot: VM settings)

Edited by Econaut
Link to comment
15 hours ago, Econaut said:

Thanks for that guide - I actually found it referenced somewhere else and followed it pretty much to the letter [...] I am on 6.9.1 and I cannot get GPU passthrough working for the life of me despite following every guide I could get my hands on.


Sounds like you've tried it all; I don't see any glaringly obvious issues with your current setup. I never could get OVMF to work with Windows 10 - I'm stuck on SeaBIOS. From what you said, you've already given that a try though, so I'm not sure what else to suggest.

Link to comment

Success! :) I finally had some time to take a good hard look at this issue, and with patience and perseverance, my VM is running again the same as it did on 6.8.3! Here's what I did, though I suspect not all of these steps are necessary.

 

  1. Download DDU (Display Driver Uninstaller) on another PC and copy it over to a shared drive on the server. I made sure it was unzipped and ready to go, as doing anything in my VM took FOREVER.
  2. Boot your VM.
  3. Set it to boot into Safe Mode:
    1. Hit the Windows key.
    2. Type 'msconfig' (without the quotes).
    3. Select Safe boot (I added networking in case I needed network access).
  4. Reboot into Safe Mode and run DDU. Remove all drivers (AMD, then Intel, finally Nvidia). I may have had bits and pieces of Radeon drivers in there, as I had a Radeon card running in the VM at some point, and I wanted to remove all potential issues.
  5. Download MSI Util v2 on another PC (I already had it, but the link can be found in the wiki; I noticed there's a v3 as well, but I used the one I had). Copy it over to a shared drive and prep it the same as DDU. My VM was extremely slow at this point.
  6. Go back to the VM, go back into msconfig, and set boot back to normal.
  7. Reboot the VM. It should perform normally at this point (video will be stuck at 800x600 with no drivers).
  8. Grab the MSI Util app from the shared drive and save it to the desktop.
  9. Go to Nvidia and grab the current drivers.
  10. Install the current drivers and reboot.
  11. Performance will degrade to virtually unusable again. Right-click on MSI Util and RUN AS ADMIN.
  12. Now get up, get a coffee/grab a beer/make lunch for your kids. This part took over an hour for the app to launch and find all the devices in my system.
  13. Once it stops adding things to the list, look for anything related to audio/video. I had two A/V entries that did not have MSI checked and no interrupt priority set. I checked the MSI box and set them to high priority. MAKE SURE TO HIT APPLY in the top right corner.
  14. Reboot and enjoy the sweet, sweet speed of your Windows 10 VM.
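For the curious, MSI Util is essentially a front end for a documented Windows registry value: checking 'msi' for a device writes something like the fragment below (the device instance path is abbreviated here, as yours will differ, which is also why the tool has to run as Administrator to write under HKLM):

```
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_xxxx&DEV_xxxx&...\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```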

Of course, our issues may be completely unrelated. My VM never crashed or permanently froze, but any mouse input was extremely delayed (about 10 seconds between moving the mouse and having it register), and apps took at least 10 times as long to load. Completely unusable.

 

The funny thing is I had already run this utility when I first built the VM. It usually takes care of any stutter or audio artifacts you might find in your Win10 VM. Odd that I had to re-run it, and it had never behaved so poorly before. Usually I might just have a bit of a stutter or audio weirdness.

 

Finally, as I mentioned at the start, most people could more than likely skip straight to running MSI Util, checking 'MSI', and setting interrupts to high priority for any A/V entries, but I thought I would document exactly what I did in case that isn't enough.

 

Good Luck!

(screenshot: MSI Util)

Edited by DoeBoye
Link to comment
On 3/13/2021 at 5:28 PM, DoeBoye said:

Success! :) I finally had some time to take a good hard look at this issue, and with patience and perseverance, my VM is running again the same as it was in 6.8.3! Here's what I did [...]


 

OMG!! You found the fix for my GTX 1070!! I had one of my two gaming VMs completely locking up after the 6.9 update unless I set it to 'VNC'. (For some reason my VM using a GTX 1650 was working fine after the update.)

 

Your instructions (specifically enabling MSI mode and interrupt priority in the utility) fixed it immediately. 

 

Notes for the next person who finds this:

- Any custom resolutions will need to be set up again. (I remote in through an iPad Pro, so I have some custom 4:3 resolutions set up.)

- If your VM is accessible via HDMI (even at 800x600) but unavailable via any remote software, run the MSI mode utility first. Upon reboot, the previously installed drivers will likely be detected, removing the need for the driver reinstall step.

 

Again, THANK YOU!!  I was just about to create a new Windows 10 VM from scratch to rule out any other variables.  You saved me HOURS!

Edited by TruSnake
Link to comment
On 3/13/2021 at 11:28 PM, DoeBoye said:

Success! :) I finally had some time to take a good hard look at this issue, and with patience and perseverance, my VM is running again the same as it was in 6.8.3! Here's what I did [...]


I tried checking the MSI option, changing the interrupt priority, and hitting Apply, but upon refresh or restart both go back to their default selections. Any idea what's going on?

 

Link to comment
8 hours ago, tanvirh5 said:

I tried checking the MSI option, changing the interrupt priority, and hitting Apply, but upon refresh or restart both go back to their default selections. Any idea what's going on?

 

Did you make sure you ran the MSI util as Administrator? If you don't, it won't save the changes back to the registry.

Link to comment
9 hours ago, Noggers said:

Did you make sure you ran the MSI util as Administrator? If you don't, it won't save the changes back to the registry.

Yup, made sure it's run as Administrator. Tried multiple times, even after a fresh boot; still the same, the settings don't stick.

Link to comment
On 3/9/2021 at 5:09 PM, Celsian said:

Jeez, I'm sorry friends. Not sure where to go from here.

 

@DoeBoye It sounds like our problems are a little different. My VM would only load if the Nvidia driver wasn't installed; otherwise it would freeze on the loading screen, then boot loop.

 

I have the same issue. Setting video to VNC works fine. Using DDU to remove all video drivers and then rebooting with my MSI Gaming 1070 Ti works up until the OS tries to install the video card. At that point the VM crashes and tries to reboot. Boot loop from then on; it never gets to login.

 

None of the suggestions in this thread have worked.

I have been able to load a number of Linux distros using the video card and ROM I'm passing without issue. Only Windows VMs on 6.9 and 6.9.1 refuse to work.

Link to comment
