QEMU PCIe Root Port Patch


  • 2 weeks later...

Weird, my PCIe link still shows up as PCI Express x1.

It won't let me add these lines anymore:

<qemu:arg value='-global'/>
<qemu:arg value='pcie-root-port.speed=8'/>
<qemu:arg value='-global'/>
<qemu:arg value='pcie-root-port.width=16'/>

I'm checking in the NVIDIA Control Panel.

Using Unraid Nvidia version 6.8.0-rc5.


Edited by fortegs

@darthcircuit Starting with 6.8 RC1 I had to remove the pcie-root-port patch. Sure, it shows x1 in the Nvidia Control Panel, but I can't notice any performance drop compared to the QEMU 3.x versions. A couple of people are running qcow2 on top of the XFS filesystem, which is the main issue with QEMU 4.1; compressed qcow2 files are only an issue on top of the vdisk corruption.


4.1.1 will maybe have a fix, and if not, 4.2 for sure, which is already being worked on.



The custom QEMU arguments still work. Starting with QEMU 4.0 the options were renamed to x-speed and x-width.


From the official QEMU 4.0 changelog:



Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links.
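The changelog's experimental options map directly to -global properties on the raw QEMU command line. A rough sketch of an equivalent invocation (the machine type is illustrative and the rest of the command line is elided); a value of 8 selects 8 GT/s (PCIe 3.0), while 16 would select 16 GT/s (PCIe 4.0):

```sh
# Cap all generic pcie-root-port devices at PCIe 3.0 x16
# (sketch only; the rest of the VM definition is elided)
qemu-system-x86_64 \
  -machine q35 \
  -global pcie-root-port.x-speed=8 \
  -global pcie-root-port.x-width=16 \
  ...
```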

This is how it looks in 6.8 RC5 now:

    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=16'/>
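For libvirt to accept these qemu:arg entries at all, the domain XML has to declare the QEMU namespace on its root element and wrap the args in a qemu:commandline block. A minimal sketch (the domain name is a placeholder):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows10</name>
  <!-- ...the rest of the domain definition... -->
  <qemu:commandline>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=16'/>
  </qemu:commandline>
</domain>
```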

Nvidia System Info is reporting the correct link speeds again



  • 1 month later...

After following this thread I thought I would spin up 2 new, identically configured Windows 10 VMs in Unraid 6.8.0 to see if the i440fx vs Q35 debate shows anything for my system under QEMU 4.1.


Originally I had some issues on my QEMU 3.0 i440fx Windows 10 VM with video glitching, so an update was necessary anyway.  Before getting entrenched in a VM architecture, I wanted to see which was going to pan out best at the outset of my new build.


For these benchmarks I ran a "gaming oriented" (Hyper-V enlightenments, isolated CPUs, emulator pinning, iothread pinning) Windows 10 VM with 4 HT cores of an AMD 3800X in performance mode and pass-through of a Gigabyte NVIDIA GeForce GTX 1060 6GB Windforce.


Here are AIDA64 results which show negligible difference in performance between the 2 VMs.  Likely, if I ran a statistically relevant number of tests, they would be the same.  Notably, in the Q35 VM, NVIDIA System Info reports my PCI-E lanes correctly (x16 PCI-E 3.0 for this card).


In the end, I went with the i440fx machine.  I'm not saying I agree with getting rid of Q35 as an option.  Options are good for everyone (as long as they aren't broken)!  Just a data point.





  • 4 months later...
  • 1 year later...

Reviving this old discussion because I just noticed something strange in my Windows VM.

My motherboard only supports PCIe 3.0 x16.

Since QEMU 4.0 the code was changed so that pcie-root-port(s) default to PCIe 4.0, x32 at 16 GT/s (for Q35):





Then, as pointed out above, speed and width were renamed to x-speed and x-width.


In fact, since I'm passing through my GPU, I had already noticed that the Radeon software was reporting this:



So both HWiNFO and the Radeon software report bus type PCIe 4.0, with the GPU running at 4.0 x16 at 16 GT/s (it seems the x32 width is automatically reduced to x16).

Obviously, since the host hardware doesn't support 16 GT/s, this speed can't actually be reached.

GPU-Z (strangely) doesn't report the bus type/speed, only "PCI-Express".

I don't have any particular issue for now, but I modified the custom QEMU args to reflect the speed/width of the host machine:


  <qemu:arg value='-global'/> 
  <qemu:arg value='device.ua-gfx0.x-speed=8'/> 
  <qemu:arg value='-global'/>
  <qemu:arg value='device.ua-gfx0.x-width=16'/> 


having assigned the alias ua-gfx0 to the pcie-root-port the GPU is attached to.
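For reference, the alias is set on the pcie-root-port controller in the domain XML; libvirt only preserves user-defined aliases that start with ua-. A sketch (the index, bus, slot and function values are placeholders and must match the controller your GPU's address actually points at):

```xml
<controller type='pci' index='4' model='pcie-root-port'>
  <model name='pcie-root-port'/>
  <!-- user-defined alias; must be prefixed with 'ua-' -->
  <alias name='ua-gfx0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
```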

Now software reports bus type PCIe 4.0 (which should be fine, since it's an emulated PCIe port), but running at PCIe 3.0 x16 (8 GT/s).

GPU-Z still reports only PCI-Express.


I was wondering if leaving the default PCIe 4.0 x16 16 GT/s could cause any issues (compatibility?) in the VM on hardware that doesn't support that PCIe bus.

What do you think?

Edited by ghost82
2 hours ago, ghost82 said:

I was wondering if leaving the default PCIe 4.0 x16 16 GT/s could cause any issues (compatibility?) in the VM on hardware that doesn't support that PCIe bus.

Maybe a placebo effect, but since adding those global arguments my Windows 11 VM can now reboot without issues and also no longer panics on shutdown (it happened very randomly and was partially solved by removing isolcpus from the boot args).

It never happened with my GTX Titan Black, but that card was only PCIe 3.0 compatible.

So maybe related issues can happen with PCIe 4.0 compatible hardware when the emulated bus is PCIe 4.0 but the physical bus is only PCIe 3.0.

