AMD APU Ryzen 5700G iGPU Passthrough on 6.9.2



I tried it now for some hours with the Asus B550-I and a 220GE.

No chance. The screen goes black, no RDP connection, no error.

When I stop the VM it crashes the server. Just nope.

 

 

Some more testing:

Still not working.
After passing the GPU through, Windows won't even boot. No screen, not pingable, nothing.

Feels bad

Edited by Morrian
Link to comment

Hi, I've been monitoring this AM4 iGPU passthrough thread since I'm thinking of getting a 5700G + AM4 board once my GPU fails.

So CMIIW, it seems like it's motherboard-dependent.

From the previous posts it looks like MSI B450 is working; is there any other known working brand?

Has anyone tried ASRock Rack AM4? I'm thinking of getting their B550 board.

Since it's server-grade, perhaps there's a greater chance of passthrough being supported.

Link to comment
On 6/28/2022 at 9:13 AM, cemaranet said:

Hi, I've been monitoring this AM4 iGPU passthrough thread since I'm thinking of getting a 5700G + AM4 board once my GPU fails.

So CMIIW, it seems like it's motherboard-dependent.

From the previous posts it looks like MSI B450 is working; is there any other known working brand?

Has anyone tried ASRock Rack AM4? I'm thinking of getting their B550 board.

Since it's server-grade, perhaps there's a greater chance of passthrough being supported.

 

According to my tests, MSI B450 and B550 can support 5700G passthrough, but the 3400G cannot be passed through.

So basically it has nothing to do with the chipset; maybe it has something to do with AMD's vBIOS setup in the iGPU.

Link to comment
On 7/1/2022 at 8:48 AM, wx-Rmt said:

 

According to my tests, MSI B450 and B550 can support 5700G passthrough, but the 3400G cannot be passed through.

So basically it has nothing to do with the chipset; maybe it has something to do with AMD's vBIOS setup in the iGPU.

 

Thanks for the insight, that would make motherboard selection easier. 

But how come I read about some people having issues extracting the vBIOS from the 5700G, while others succeed?

Is this like a silicon-lottery thing, where we can get different OC results from two identical CPUs?

 

Link to comment
On 7/3/2022 at 10:06 AM, cemaranet said:

 

Thanks for the insight, that would make motherboard selection easier. 

But how come I read about some people having issues extracting the vBIOS from the 5700G, while others succeed?

Is this like a silicon-lottery thing, where we can get different OC results from two identical CPUs?

 

The iGPU's vBIOS lives in the motherboard BIOS and is updated along with motherboard BIOS upgrades, so you should use a vBIOS extracted from the exact BIOS version your own board is running. Someone else's vBIOS dump may not match the vBIOS version in your current motherboard BIOS.

  • Like 1
Link to comment
  • 1 month later...

Simply switching over to the Cezanne APU results in a failed attempt for me:

 

I can't RDP to the VM anymore after making the switch; this is the last log message:

 

2022-08-16T16:20:02.284507Z qemu-system-x86_64: -device vfio-pci,host=0000:04:00.0,id=hostdev0,bus=pci.4,addr=0x0: Failed to mmap 0000:04:00.0 BAR 0. Performance may be slow

 

I'll go through this thread again and will give it another try.

 

  • Like 1
Link to comment

  

  

Hi, after reading this thread over and over and trying every method mentioned here, it still failed for me.

I didn't want to accept that it worked on 6.9.2 but not on 6.10.3, and I absolutely didn't want to use a dedicated GPU, so I kept fiddling with the settings and finally got it to work on 6.10.3 as well as 6.11-rc3. My 5700G is also the only GPU in the system; no dedicated GPU is installed.

But it only worked with Windows 10, as I haven't found a way yet to prevent the Radeon reset bug on Linux; once I have fixed this for Linux distros I will update my post. As I am using the German version of Windows 10, I may have translated some button or window names wrong; if something stays unclear, feel free to ask and I will supplement some pictures.

 

The steps for both Unraid versions were exactly the same.

 

Prerequisites:

 

1. You need to extract the vBIOS for your iGPU from the respective BIOS version of your mainboard; for this step I just followed the steps mentioned by wx-Rmt in this post. Just download the BIOS version you currently have installed from your motherboard manufacturer and follow the steps:

After extracting the vBIOS, place it somewhere on your cache drive. I like to have it close to the VMs, so I placed it in a folder called vbios in the domains share from Unraid. My extracted vBIOS had the extension .dat, so I had to rename it and replace .dat with .rom.
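For reference, a minimal sketch of that rename from the Unraid terminal, assuming the hypothetical path /mnt/user/domains/vbios/ and the file name 5700G.dat:

    # rename the extracted vBIOS so Unraid's VM manager picks it up as a ROM file
    mv /mnt/user/domains/vbios/5700G.dat /mnt/user/domains/vbios/5700G.rom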

 

 

2. You have to bind your iGPU's VGA controller and audio device to VFIO at boot of Unraid.

For that, go to Tools -> System Devices.

Now your actions depend on what you are seeing: if the VGA controller and the audio controller for your iGPU are already in separate groups, you just mark them and hit "Bind selected to VFIO at boot".

If they are not in separate groups, you can do multiple things:

- update your BIOS; more recent BIOS versions often have better IOMMU groupings

- if the BIOS update alone did not help, or you don't want to update your BIOS, go to Settings -> VM Manager, select Both in the PCIe ACS override setting, and set VFIO allow unsafe interrupts to Yes

After then rebooting your Unraid server, the IOMMU groups should be separated.

So go back to Tools -> System Devices, select your iGPU's VGA and audio controller there, and hit "Bind selected to VFIO at boot"; then reboot your Unraid server again and you should be good to go (the binding ends up in a config file on the flash drive; see the sketch below).
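For reference, the binding is stored in /boot/config/vfio-pci.cfg on the flash drive. A minimal sketch of what it ends up containing, assuming the iGPU sits at 04:00.0/04:00.1 with the 5700G's usual vendor:device IDs (1002:1638 for the VGA controller, 1002:1637 for its HDMI audio); your addresses and IDs will likely differ:

    BIND=0000:04:00.0|1002:1638 0000:04:00.1|1002:1637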

 

 

Once you have your iGPU's vBIOS extracted and the IOMMU groups set up, you can begin creating the VM.

 

 

1. Create a Windows 10 VM with the following settings:

Machine: Q35-6.2

BIOS: SeaBIOS

Hyper-V: Yes

The rest were completely standard settings. For now, don't pass through the iGPU yet; keep it on VNC to set everything up.

 

2. Start the VM and install Windows 10

During the installation, Windows will not see the hard drive to install on until you have loaded the virtio drivers, so you have to select the option to load drivers and then select them from the virtio CD.

I installed the virtio drivers in the following order:

1. Balloon

2. NetKVM

3. vioserial

4. viostor

viostor makes Windows able to see the virtual hard drive, and the installation can continue.

Just follow the installation until it is done.

 

3. Setting up Windows

Once Windows is installed and you have booted into it, open the Control Panel, in the top right select large icons, and then click on System.

In the System window, go to Advanced system settings; in the newly opened window click on Hardware, then on Device installation settings, select No, and press Save changes.

Still inside the advanced system settings, open the Device Manager.

If under network adapters your LAN connection is still not working, right-click it and uninstall it, including deleting the drivers.

After uninstalling your network device, go to the top of the Device Manager and look for the option "Scan for hardware changes"; your network device, still without a driver, should then show up again.

Right-click your network device and click on update/install drivers, select the option to manually search for the driver on your computer, choose the virtio CD as the source, and press OK; Windows will automatically install the network device drivers.

 

4. Download the latest amd driver for your igpu as well as the RadeonResetBugFix

https://drivers.amd.com/drivers/whql-amd-software-adrenalin-edition-22.5.1-win10-win11-may10.exe

https://github.com/inga-lovinde/RadeonResetBugFix

After downloading those two things, just place RadeonResetBugFixService.exe on your virtual Windows hard drive (so just in C:/ )

 

5. Activate Remote Desktop

Again go to the Control Panel, then click on System, then on Remote Desktop, and activate Remote Desktop.

At the bottom, click on select users for Remote Desktop access; in the new window click Add, then Advanced, and in the new window on the right click Find Now. If Administrator is selected, just press OK, again OK, and one more time OK.

To be able to use Remote Desktop, your user needs to have a password set, so if you did not set a login password during the installation, go to Control Panel -> User Accounts and add a password for your user.

Finally, check the IP of your Windows VM and test whether the Remote Desktop connection is working by using Remote Desktop to log in to your VM. If your Remote Desktop connection worked, you can shut down the Windows VM.

 

6. Changing the VM settings for passthrough

Once your VM is shut down, edit it:

instead of VNC, you now select your iGPU as the graphics card.

Just below the graphics card, in "Path to ROM BIOS file", you must now select the vBIOS you extracted earlier.

(Remember that you had to change the .dat extension of your vBIOS to .rom; if you skipped this step, Unraid will not recognize the vBIOS file.)

Once you've selected the vBIOS file, you also add your iGPU audio controller as the sound card.

 

Save the settings for the VM, but don't start it yet; we still need to edit the XML.

 

After saving the VM, click Edit again, and once the VM settings open, click the button in the top right corner to switch from "Form View" to "XML View".

 

In the XML we now have to add some lines and edit some as well.

 

6.1. Add a random combination of letters and digits as your vendor_id value; you should find it just above </hyperv>.

It should be exactly 12 letters and digits; they can be completely random as long as there are 12:

      <vendor_id state='on' value='3D6L09A17K3O'/>
    </hyperv>

 

6.2. Add the following lines just below </hyperv>:

    <kvm>
      <hidden state='on'/>
    </kvm>

 

6.3. Add the following lines at the bottom, just after </devices> and before </domain>:

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-vga=on'/>
  </qemu:commandline>
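One thing worth checking: libvirt only accepts qemu:commandline when the qemu XML namespace is declared on the root domain element. Unraid normally writes it already, but if it is missing, saving the XML should fail with a namespace error. The first line should look like this:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>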

 

6.4. A bit above the last addition you should find your iGPU devices; they should look something like this (your buses or slots may have different numbers):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/Win10TV/5700G.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>

In this part we have to edit a few things and add something. For each hostdev you will have an address inside the source element (the physical host PCI address of the device) and an address outside of it (the slot the guest will see).

We only need to edit the outer hostdev address; the address in the source part must not be touched.

In the hostdev address below the line with the ROM file, we have to add multifunction='on': after function='0x0', just add a space and then the parameter.

This is your VGA controller, and it should look like this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/Win10TV/Cezanne.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>

 

Then we edit the hostdev for the audio controller below it. We have to make sure that in the outer address the bus and slot IDs match those of your VGA controller. Once you have changed the bus and slot to the same values as your VGA controller's, you only have to change function from 0x0 to 0x1.

After editing, it should look like this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>

 

 

After making all the changes to the XML, we can save the VM settings and start the VM again.

 

If you encounter crashes when booting the VM, you might have to add the kernel parameter "video=efifb:off".

To do that, click on Main, then your flash drive, scroll down, and add this parameter to the boot option that you use for booting Unraid (a sketch follows below).
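For reference, a minimal sketch of the relevant syslinux.cfg section after the change; the label and the rest of the append line depend on your setup:

    label Unraid OS
      kernel /bzimage
      append video=efifb:off initrd=/bzroot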

 

 

7. Once the VM has booted, you may already have video output on your display, but ignore that for now and use Remote Desktop to log in to your VM.

Once logged in, install the previously downloaded AMD driver.

IMPORTANT: After installing the driver, DO NOT REBOOT YET!!!

Once the driver installation is finished, we still have to install the RadeonResetBugFix.

If you have moved RadeonResetBugFixService.exe onto your Windows drive, you can follow these steps to install it:

 

7.1. In the search bar type cmd, right-click it and select Run as administrator.

7.2. In the cmd window type "cd C:/"

7.3. In the cmd window type "RadeonResetBugFixService.exe install" and wait for the installation of the service to finish; this can take a minute (the whole session is sketched below). IMPORTANT: do not delete RadeonResetBugFixService.exe after installing it, because the service will then stop working.

 

That's it, your Windows VM should now be working.

When you start your VM, it can sometimes take 1-2 minutes until the RadeonResetBugFix activates your GPU, so during that time your display could stay black until it is activated; but most of the time the display should turn on immediately.

 

 

 

I hope this explanation, or more likely wall of text, of the steps I took to get it working will help someone. It may be that some steps are not needed, but these are all the steps I used to get it working on my machine, so if someone with more knowledge about VMs can point out what is perhaps no longer needed, I would appreciate it.

If you have any questions, or I used a wrong translation for something, feel free to ask and I will try to clarify with pictures or better translations.

 

Tanne

 

Edited by Tanne
  • Thanks 1
Link to comment

I don't know if it will work with any mainboard; I can only say that for mine it definitely works.

But I think as long as the IOMMU groups are separated nicely, it should work in theory. Even if they are not separated, you can set PCIe ACS Override to Both in the VM Manager settings (Settings -> VM Manager); then you should be able to pass them through separately. If you are encountering crashes but the passthrough does work, you might also want to use "video=efifb:off" as a kernel parameter. For that, click on Main, then the flash drive, scroll down, and add this parameter to the boot option that you use for Unraid.

  • Like 1
Link to comment

 

I was able to test this weekend and followed your instructions. First of all, I'm happy to say that I am once again a step further. The Radeon graphics card was found in the Device Manager and I was able to install the AMD driver. However, I still have one main problem, namely error 43.

Apparently this means that the graphics card was not passed through cleanly. I have also entered the syntax according to the instructions, so that Unraid really starts headless and releases the graphics card. Here I found a problem: Unraid starts cleanly (headless), but I still see a blinking cursor after the POST.

Is this an indicator that the graphics card is still in use? I had already read that the last image in the graphics card's framebuffer stays frozen on the monitor. For me the boot POST ends with graphics output and goes no further; only the blinking cursor unsettles me.

 

I also have a question about the addresses for the devices. You write that the sound card and graphics card should get the same bus.

I would like to understand which syntax is responsible for which things; I always get confused. One time we have the physical device, and another time a "virtual address", so the latter I can apparently adjust and/or change myself?

 

 <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/Win10TV/Cezanne.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>


What does the domain at the beginning of the line mean? And what does the function parameter mean?

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>

 

Maybe I can post my settings here once and you can help me.

Thank you guys! and thanks for any feedback.

Alex   

 

Edited by Alex8464
Link to comment

Mine worked up to the point of installing the AMD driver and the RadeonResetBugFix. I was able to install the AMD driver just fine; then the driver had a 2-minute countdown to restart my VM, so I used Task Manager and killed the installer to prevent that. Then I installed the RadeonResetBugFix service and waited until it went into the Started state. Then I rebooted the VM, and now it won't come back up with a display anymore; it just shows a green garbled mess. Not sure if it's related to the RadeonResetBugFix service or what.

 

Edit:

I logged back in via RDP, the AMD driver finished its install, and it all started working as expected.

Edited by chaosclarity
  • Like 1
Link to comment

Not sure what's going on with mine. I can't even get the Unraid host to release the iGPU. All I get is the console output. I've added video=efifb:off to my syslinux config, but it doesn't seem to release. I get the boot console and a blinking cursor when it's done. When I start my VM, it does start, but it never "takes over" the HDMI output and I still see the Unraid console.

  • Like 1
Link to comment

I think the release of the internal graphics (APU) does not work correctly in most cases. This made me think about how to tell whether the GPU actually gets released.

If the boot POST freezes and is still displayed on the monitor, then it probably worked.

If the boot POST hangs and leaves a blinking cursor, then probably not.

Let's look together for a solution to support the AMD 5000G CPU family.

 

Link to comment
9 minutes ago, Alex8464 said:

I think the release of the internal graphics (APU) does not work correctly in most cases. This made me think about how to tell whether the GPU actually gets released.

If the boot POST freezes and is still displayed on the monitor, then it probably worked.

If the boot POST hangs and leaves a blinking cursor, then probably not.

Let's look together for a solution to support the AMD 5000G CPU family.

 

I was able to get it working again. I was originally using a Windows 11 VM, but have now tried Windows 10. I honestly don't think it mattered which version of Windows (10 vs. 11), because what I noticed is that my XML configuration was reverting when I tried adding passthrough devices (a USB controller), thus breaking the GPU passthrough configuration.

If you have a blinking cursor, that is what you want on the console output screen. It should "freeze", i.e. stop scrolling output, right at the PCI VGA device and then show a blinking cursor.

Once you start the VM, it takes over the screen output, but you will never see the BIOS/boot screens of Windows; the Windows login screen will just suddenly appear.

  • Like 1
Link to comment

That sounds very good. So for me the graphics card seems to be released in any case, and I can dedicate myself to the VM settings.

Again: I can boot into my Windows VM and connect via RDP. In the hardware overview I can see that the APU is detected. The driver (Radeon) is installed, and the RadeonResetBugFix is installed as well.

Unfortunately, I have already looked at the configuration of the startup XML several times and tried to make changes.

I don't have deep knowledge of resource addressing. I only know that there is a hardware resource whose values I logically can't change.

And then there are addresses for the devices (graphics and sound card), which are more or less aliases for the VM.

These seem to be redirected to the hardware addresses. Right?
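To put that next to the example (domain:bus:slot.function are the standard parts of a PCI address; function distinguishes the sub-devices of one card, e.g. .0 for video and .1 for its HDMI audio), a sketch annotated with my understanding:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- physical location of the iGPU on the host: fixed by hardware, not to be changed -->
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/Win10TV/Cezanne.rom'/>
      <!-- slot the guest sees: freely assignable, as long as audio is added as function 0x1 on the same bus/slot -->
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>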

 

Here I need a workshop ;-) unfortunately I can't figure out Spaceinvader One's instructions.

 

Okay, I also installed the VM as OVMF.

Here are some explanations about that...

https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF 

 

I think the Windows 10 VM works for you because you started it with SeaBIOS and thus without TPM.

Windows 11 requires TPM unless you tweak it. Maybe you could tell me whether you boot via SeaBIOS.

Link to comment
53 minutes ago, Alex8464 said:

That sounds very good. So for me the graphics card seems to be released in any case, and I can dedicate myself to the VM settings.

Again: I can boot into my Windows VM and connect via RDP. In the hardware overview I can see that the APU is detected. The driver (Radeon) is installed, and the RadeonResetBugFix is installed as well.

Unfortunately, I have already looked at the configuration of the startup XML several times and tried to make changes.

I don't have deep knowledge of resource addressing. I only know that there is a hardware resource whose values I logically can't change.

And then there are addresses for the devices (graphics and sound card), which are more or less aliases for the VM.

These seem to be redirected to the hardware addresses. Right?

Here I need a workshop ;-) unfortunately I can't figure out Spaceinvader One's instructions.

Okay, I also installed the VM as OVMF.

Here are some explanations about that...

https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF

I think the Windows 10 VM works for you because you started it with SeaBIOS and thus without TPM.

Windows 11 requires TPM unless you tweak it. Maybe you could tell me whether you boot via SeaBIOS.

Can you post your XML?

Link to comment

Of course, for sure 🙂

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>Windows 11</name>
  <uuid>6a699403-4459-35c8-01a4-c0c0b1d78598</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>17301504</memory>
  <currentMemory unit='KiB'>17301504</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='10'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='12'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='14'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/6a699403-4459-35c8-01a4-c0c0b1d78598_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 11/vdisk1.img' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows 11/Windows11.iso' index='2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.221-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:e0:cc:57'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Windows 11/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
      <alias name='tpm0'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/disk1/isos/VBIOS/vbios_1638.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

Link to comment

I'm leaving a reply here because this thread really helped me a lot with 5700G iGPU passthrough. Thanks to everyone who put time, effort, and contributions into this long thread.

I was successful on PVE 7.2. The iGPU can be passed through to both a Windows 10 and an Ubuntu 20.04 VM, and it works well.

 

A few tricks:

- Motherboard set to CSM

- Use the correct vBIOS ROM.

   - You can follow wx-Rmt's steps on page 6: https://forums.unraid.net/topic/112649-amd-apu-ryzen-5700g-igpu-passthrough-on-692/?do=findComment&comment=1134762

- I patched ACS on my motherboard because the original IOMMU groups suck.

- Machine: Q35

- BIOS: SeaBIOS (a sketch of the resulting VM config follows below)
 

One issue is I still can't find a workable solution for the AMD reset bug. I tried following wx-Rmt's steps to install RadeonResetBugFix on Windows 10, but it doesn't work well; it crashes my whole machine every time, not just the VM.

On 6/5/2022 at 1:23 PM, wx-Rmt said:

Download the latest AMD 22.5.2 driver and the RadeonResetBugFix in the Win10 VM

https://github.com/inga-lovinde/RadeonResetBugFix

Unzip the AMD driver to any directory.

 

 

I also made a video showing how to do 5700G iGPU passthrough on PVE 7. Maybe you can use it as a reference for Unraid.
The video is in Chinese, but I have marked the key points in English as much as possible.

Also, in the video I only show Windows 10. If you are using an Ubuntu VM, the configuration is the same, except that there is no need to download and install the driver from the AMD website; all you need to do is "apt update && apt upgrade" in your VM.

- Here is the full video: https://www.bilibili.com/video/BV11d4y1G7Nk

- Or you can just jump to 29:33 to watch the final demo: https://www.bilibili.com/video/BV11d4y1G7Nk?share_source=copy_web&vd_source=37c57e6564de58a018f5b76ac5bfd5e2&t=1773

 

My Hardware and System environment

- CPU: AMD Ryzen 7 5700G

- GPU: Only integrated graphics card (iGPU), no dedicated graphics card (dGPU)

- Motherboard: ASRock B550 Phantom Gaming-ITX/ax

- BIOS: P2.30

- PVE version: Proxmox 7.2-3

- Linux kernel: Linux 5.15.30-2-pve #1 SMP PVE 5.15.30-3 (Fri, 22 Apr 2022 18:08:27 +0200)

You can leave me a reply under the video or in this post if you have questions.

Link to comment

Thank you very much for your help! Today I will try it this way. I made some small notes about the addressing.

As you can see, I really try to understand the XML startup script.

One question about the alias address types: can they be assigned however I want?

For example, I tried different settings for the bus, and yes, they were assigned in the VM.

And I also have no idea about the "function=0x0" parameter.

In the end, my last hope is that it will work with a SeaBIOS installation. And I forgot to assign a vendor_id value; maybe that's the tipping point...

Thanks to all! And of course to chaosclarity and Name_0901. You did good work explaining everything 🙂

 


Edited by Alex8464
Link to comment
On 9/2/2022 at 3:58 AM, Name_0901 said:

 

 

One issue is I still can't find a workable solution for the AMD reset bug. I tried following wx-Rmt's steps to install RadeonResetBugFix on Windows 10, but it doesn't work well; it crashes my whole machine every time, not just the VM.

 

 

 

 

The now accepted and feasible method to prevent the amdgpu reset bug is here:

https://github.com/gnif/vendor-reset/

Compile it into a kernel module and load it on the host (a build sketch follows below).
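For reference, a minimal sketch of building and loading it on a general Linux host (on Unraid, ich777's plugin takes care of this); it assumes git, dkms, and the kernel headers are installed:

    git clone https://github.com/gnif/vendor-reset
    cd vendor-reset
    sudo dkms install .
    # load it now and on every boot
    sudo modprobe vendor-reset
    echo vendor-reset | sudo tee /etc/modules-load.d/vendor-reset.conf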

 

 

But the problem is that the original author of https://github.com/gnif/vendor-reset/ did not add the hardware IDs of AMD iGPUs.

Only some hardware IDs of AMD discrete GPUs are included; the author did not have that many AMD GPUs to test at the time.

 

On Unraid, ich777 built a plugin from the above source code.

So some time ago I asked ich777 to add the hardware IDs of AMD's iGPUs to the reset plugin.

 

 

In fact, if you look closely, the reset grouping of the iGPU hardware IDs may be different from that of the discrete GPUs.

 

 

The 5700G's hardware ID, 1638, appears in the main amdgpu driver:

https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

around line 1887:

 

    /* Renoir */
    {0x1002, 0x15E7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
    {0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
    {0x1002, 0x1638, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
    {0x1002, 0x164C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},

 

It is grouped as CHIP_RENOIR|AMD_IS_APU.

 

 

In the reset patch there are only the following groups:

https://github.com/gnif/vendor-reset/blob/master/src/device-db.h

There is no hardware ID for the 5700G; only these GPU families are covered:

AMD_POLARIS10, AMD_POLARIS11, AMD_POLARIS12, AMD_VEGA10, AMD_VEGA20, AMD_NAVI10, AMD_NAVI14, AMD_NAVI12

 

 

 

 

Edited by wx-Rmt
  • Like 1
Link to comment
