
Win10 VM graphics pass-through broke after AMD BIOS update


Just to inform everyone. 

 

Since BIOS version 3.10 (and the latest, 3.20, as well), the same bug is present on this board:

 

ASRock Rack X470D4U

 

PS: downgrading the BIOS to the last working firmware, 1.50, worked for me, but only via DOS.

 

 

Edited by Trashor
Additional info


I will post in here to keep up to date on this issue, and hopefully ASUS will release a BIOS update in the near future, since I may eventually upgrade to a 3rd Gen Ryzen processor. I'm also not sure whether I could downgrade the BIOS to get GPU passthrough working, since I'm using a 2nd Gen Ryzen processor.


Just an update: I installed my new Ryzen 3900X on my X370 Taichi with AGESA 1.0.0.3ABB and passthrough works fine, but I had to enable the PCIe ACS Override setting.
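For anyone who needs it: on Unraid the ACS Override setting corresponds to a kernel parameter in syslinux.cfg on the flash drive. A sketch (the label and other flags depend on your install; note the override weakens IOMMU isolation between devices, so use it with care):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction initrd=/bzroot
```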

On 8/18/2019 at 10:52 PM, Leoyzen said:

I'm using an RTX 2070 for Win 10/Ubuntu and an RX 560 for Win 10/Hackintosh

Hi,

 

Do you use a specific BIOS file to pass through your RTX 2070?

 

What is your IOMMU group for this card?

 

Can I see your VM XML config file, please?

 

See my post, I'm having trouble starting the VM.

 

++
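For anyone wondering how to find the IOMMU group of a card, one common way is to walk sysfs. A sketch (the base directory is a parameter here so the logic can be tried against any tree; on a real system it is /sys/kernel/iommu_groups):

```
# list_iommu_groups BASE: print every IOMMU group and the PCI devices
# in it. On a real system BASE is /sys/kernel/iommu_groups.
list_iommu_groups() {
  base="$1"
  for group in "$base"/*/; do
    [ -d "$group" ] || continue
    g=$(basename "$group")
    for dev in "$group"devices/*; do
      [ -e "$dev" ] || continue
      echo "IOMMU group $g: $(basename "$dev")"
    done
  done
}

# On a live system:
#   list_iommu_groups /sys/kernel/iommu_groups
```

If the GPU shares a group with other devices, everything in that group has to be passed through together (or you need the ACS override mentioned earlier in this thread).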

 

 

6 hours ago, Fidelix said:

Upgrading the BIOS and the kernel fixed the issue for me.

That's nice to know, but it would be more useful to others if you could give the version numbers and your motherboard model.

On 10/11/2019 at 3:23 AM, Fidelix said:

Upgrading the BIOS and the kernel fixed the issue for me.

Can you elaborate on upgrading the kernel as well? What steps or guide did you follow? (Any help would be appreciated!)


Here is my tale, if it helps someone else:

 

I have an ASUS Prime X470-Pro board that had a 1700X in it along with an RX 570, and all was well until I upgraded to BIOS 5220 with AGESA 1.0.0.3 ABBA. I then received the well-documented "Unknown PCI header type 127" error when running a VM with GPU passthrough enabled.

 

I then upgraded the system to a 3900X with no other changes except for fixing my CPU pinning settings and removing a passed-through generic USB device that mysteriously appeared in each of my VM settings. Once I had done that, I fired up my Hackintosh VM and, surprisingly, Clover booted and displayed fine, but when booting into macOS the VM hung at the Apple logo.

 

After another attempt at starting that VM, I received the dreaded "Unknown PCI header type 127" error. I attempted to shut down Unraid and it also hung during shutdown, requiring a forced power-off.

 

After booting back up, I attempted to boot a Linux Mint VM with GPU passthrough and it booted successfully. I was able to update the system and run a few programs, but when I attempted to reboot the VM via its menu, the screen went dark and never came back. I performed a forced shutdown of the VM, tried to boot it again, and once more got the "Unknown PCI header type 127" error.

 

So far, this is as much testing as I've been able to do. I seem to be getting a bit farther than some here, but the issue still isn't solved, even with the latest AGESA update.
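For what it's worth, the "Unknown PCI header type 127" message can be probed directly. A sketch (the path and device address are examples, adjust for your GPU):

```
# header_type CONFIG: print the PCI header-type byte (config-space
# offset 0x0e) from a config file such as
# /sys/bus/pci/devices/0000:0a:00.0/config.
# A healthy single-function GPU reads 00 (80 if multifunction). A card
# stuck in this state reads back all-ones; with the multifunction bit
# masked off that is 0x7f = 127, hence "Unknown PCI header type 127".
header_type() {
  od -An -tx1 -j14 -N1 "$1" | tr -d ' '
}

# On a live system:
#   header_type /sys/bus/pci/devices/0000:0a:00.0/config
```

If this prints 7f before you even start the VM, the card is already wedged and no amount of VM-side tinkering will help until it is reset.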

 

 


This sounds like the AMD GPU reset bug more than anything. Rebooting a VM with certain AMD GPUs leaves the card in a bad state, and you have to reboot the host to be able to use the GPU again.
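One commonly tried (and often unsuccessful) workaround is to remove the stuck device from the PCI bus and rescan instead of rebooting the whole host. A sketch (the sysfs root is a parameter so the logic is easy to try; the device address is an example):

```
# pci_remove_rescan ROOT DEV: detach DEV from the PCI bus and trigger
# a bus rescan. On a real system ROOT is /sys/bus/pci and DEV something
# like 0000:0a:00.0. This often does NOT recover a GPU stuck in the
# reset-bug state, but it is cheap to try before rebooting the host.
pci_remove_rescan() {
  root="$1"; dev="$2"
  echo 1 > "$root/devices/$dev/remove"
  echo 1 > "$root/rescan"
}

# On a live system (as root):
#   pci_remove_rescan /sys/bus/pci 0000:0a:00.0
```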

 

 


Looks like I got mine working.

Build:
Mobo: B450M Mortar
BIOS: 7B89v1B
GPU: RX 460

I was getting the constant header 127 problem, and all VMs with the GPU attached failed even to start.

Solution:
1. Updated to the latest BIOS
2. Updated to a custom-built kernel (described below; it's easy, just copying a couple of files)

3. After that the GPU was able to boot into the VM, but I noticed graphical tearing on screen, and the logs showed vfio_region_write failures. This was solved by adding a script to ensure Unraid didn't try to reserve my only GPU (even though I have the `append vfio-pci.ids` entry). Following the guide below helped me here (just make a script and run it).


After doing the above three steps, I finally got it working and was able to boot back into my Windows 10 gaming VM. Hope that helps someone, and thanks all!
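For anyone looking for the kind of script referenced in step 3: the usual approach is to unbind the GPU from whatever host driver grabbed it and hand it to vfio-pci via driver_override. A sketch (the sysfs root is a parameter so the logic is easy to try; the PCI address is an example, find yours first):

```
# bind_vfio ROOT DEV: unbind DEV from its current driver, restrict it
# to vfio-pci via driver_override, then ask the kernel to re-probe.
# On a real system ROOT is /sys/bus/pci and DEV is your GPU's address,
# e.g. 0000:26:00.0 (find it with: lspci -nn | grep VGA).
bind_vfio() {
  root="$1"; dev="$2"; sys="$root/devices/$dev"
  if [ -e "$sys/driver" ]; then
    echo "$dev" > "$sys/driver/unbind"     # release from current driver
  fi
  echo vfio-pci > "$sys/driver_override"   # only vfio-pci may bind now
  echo "$dev" > "$root/drivers_probe"      # trigger the re-probe
}

# On a live system (as root), repeat for every function of the card,
# e.g. the GPU's HDMI audio function as well:
#   bind_vfio /sys/bus/pci 0000:26:00.0
```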

