testdasi Posted October 24, 2019

27 minutes ago, Rhynri said:
"Do you still need to have the extra root hub verbiage in the xml?"

Nope.
fortegs Posted November 5, 2019 (edited)

Weird, my PCI Express still shows up as PCI Express x1. It won't let me add these lines anymore:

<qemu:commandline>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.width=16'/>
</qemu:commandline>

I'm checking in the NVIDIA Control Panel. Using Unraid Nvidia, version 6.8.0-rc5.

Edited November 5, 2019 by fortegs
m0ngr31 Posted November 5, 2019

Those lines won't work in 6.8 since it uses the newer version of QEMU.
fortegs Posted November 5, 2019

18 minutes ago, m0ngr31 said:
"Those lines won't work in 6.8 since it uses the newer version of QEMU."

So I'm confused about what to do then. I removed them, but my NVIDIA Control Panel is still showing PCI Express x1.
GHunter Posted November 5, 2019

QEMU 4.1 was downgraded to 4.0.1 in Unraid 6.8-RC5, so you may have to use them now.
darthcircuit Posted November 5, 2019

Do we know why it was downgraded? I was hoping to have that fixed as well.
david279 Posted November 5, 2019

4 minutes ago, darthcircuit said:
"Do we know why it was downgraded? I was hoping to have that fixed as well."

People here were also seeing the bug on the 6.8 RC builds.
darthcircuit Posted November 6, 2019

Interesting. Thanks for that. It would be nice to have the option anyway for those that aren't using qcow2. I'm just passing through my NVMe controller natively, plus a SATA drive.
bastl Posted November 6, 2019

9 hours ago, darthcircuit said:
"Do we know why it was downgraded? I was hoping to have that fixed as well."
darthcircuit Posted November 6, 2019

Is there any way of manually updating QEMU to 4.1? I won't be using qcow2, especially compressed. I just pass through the whole controller for my Windows 10 VM, and if my PCIe lanes are only running at x1 speed, that will be a problem.
bastl Posted November 6, 2019

@darthcircuit Starting with 6.8 RC1 I had to remove the pcie-root-port patch. Sure, it shows x1 in the Nvidia control panel, but I can't notice any performance drop compared to the QEMU 3.x versions. A couple of people are using qcow2 on an underlying XFS filesystem, which is the main issue with QEMU 4.1. Compressed qcow2 files are only an issue on top of the vdisk corruption. 4.1.1 may have a fix, and if not, 4.2 for sure, which is already being worked on.

https://bugs.launchpad.net/qemu/+bug/1847793
darthcircuit Posted November 6, 2019

Fingers crossed.
bastl Posted November 7, 2019

The custom QEMU arguments are still working. Starting with QEMU 4.0 they changed the naming to x-speed and x-width.

From the official QEMU 4.0 changelog, under PCI/PCIe:
"Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links."

This is how it looks in 6.8 RC5 now:

<qemu:commandline>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.x-speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.x-width=16'/>
</qemu:commandline>

Nvidia System Info is reporting the correct link speeds again.
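For anyone who wants to double-check the negotiated link from a Linux guest rather than the NVIDIA panel, the LnkSta line in `lspci -vv` output shows the speed and width the emulated root port actually negotiated. A small sketch (not from this thread; the helper name and sample line are illustrative) that parses that line:

```python
import re

def parse_lnksta(lspci_output):
    """Pull the negotiated PCIe speed and width out of `lspci -vv` text.

    Returns a (speed, width) tuple like ("8GT/s", 16), or None if
    no LnkSta line is present in the given output.
    """
    m = re.search(r"LnkSta:\s*Speed\s+([\d.]+GT/s)[^,]*,\s*Width\s+x(\d+)",
                  lspci_output)
    return (m.group(1), int(m.group(2))) if m else None

# Example LnkSta line as lspci prints it for a Gen3 x16 link:
sample = "LnkSta: Speed 8GT/s (ok), Width x16 (ok)"
print(parse_lnksta(sample))  # -> ('8GT/s', 16)
```

With the x-speed=8 / x-width=16 globals applied, you would expect the guest to report 8GT/s and x16 here, matching what the NVIDIA control panel shows.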
darthcircuit Posted November 7, 2019

Thank you very much for that.
JesterEE Posted December 22, 2019

After following this thread, I thought I would spin up two new, identically configured Windows 10 VMs in Unraid 6.8.0 to see if the i440fx vs. Q35 debate shows anything for my system under QEMU 4.1. Originally I had some issues on my QEMU 3.0 i440fx Windows 10 VM with video glitching, so an update was necessary anyway. Before getting entrenched in a VM architecture, I wanted to see which was going to pan out best at the onset of my new build.

For these benchmarks I ran a "gaming oriented" (Hyper-V enlightenments, isolated CPUs, emulator pinning, iothread pinning) Windows 10 VM with 4 HT cores of an AMD 3800X in performance mode and pass-through of a Gigabyte NVIDIA GeForce GTX 1060 6GB Windforce.

The AIDA64 results show a negligible difference in performance between the two VMs. Likely, if I ran a statistically relevant number of tests, they would be the same. Notably, in the Q35 VM, NVIDIA System Info reports my PCIe lanes correctly (x16 PCIe 3.0 for this card).

In the end, I went with the i440fx machine. I'm not saying I agree with getting rid of Q35 as an option. Options are good for everyone (as long as they aren't broken)! Just a data point.

-JesterEE
Frag-O-Byte Posted April 28, 2020

Thanks for the information, it is appreciated.
ghost82 Posted December 30, 2021 (edited)

Reviving this old discussion because I just noticed something strange in my Windows VM. My mainboard has only PCIe 3.0 x16.

After QEMU 4.0, the code was changed so that pcie-root-port(s) default to PCIe 4.0, x32, 16 GT/s (for Q35): https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg00339.html

+ DEFINE_PROP_PCIE_LINK_SPEED("speed", PCIESlot, speed, PCIE_LINK_SPEED_16),
+ DEFINE_PROP_PCIE_LINK_WIDTH("width", PCIESlot, width, PCIE_LINK_WIDTH_32),

Then, as pointed out above, speed and width changed their names to x-speed and x-width.

Since I'm passing through my GPU, both HWiNFO and the Radeon software were reporting bus type PCIe 4.0, running the GPU at 4.0 x16 at 16 GT/s (it seems width x32 is automatically reduced to x16). Obviously, since the host hardware doesn't support 16 GT/s, that speed can't actually be reached. GPU-Z (strangely) is not reporting the bus type/speed at all, only "PCI-Express".

I don't have any particular issue for now, but I modified the custom QEMU args to reflect the speed/width of the host machine:

<qemu:commandline>
  <qemu:arg value='-global'/>
  <qemu:arg value='device.ua-gfx0.x-speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='device.ua-gfx0.x-width=16'/>
</qemu:commandline>

having assigned the alias ua-gfx0 to the pcie-root-port to which the GPU is attached. Now software reports bus type PCIe 4.0 (which should be fine, since it's an emulated PCIe port), but running at PCIe 3.0 x16 (8 GT/s). GPU-Z still reports only PCI-Express.

I was wondering if leaving the default PCIe 4.0 x16 16 GT/s could cause any issue (compatibility?) in the VM on hardware that doesn't support that PCIe bus. What do you think?

Edited December 30, 2021 by ghost82
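For reference, a user-defined alias like ua-gfx0 is set on the root-port controller in the libvirt domain XML; libvirt requires user aliases to start with the "ua-" prefix. A sketch (the index value is illustrative and must match the port your GPU is actually attached to):

```xml
<controller type='pci' index='4' model='pcie-root-port'>
  <model name='pcie-root-port'/>
  <!-- user-supplied aliases must begin with "ua-" -->
  <alias name='ua-gfx0'/>
</controller>
```

The -global arguments above then target exactly this one port via device.ua-gfx0, instead of changing every pcie-root-port in the machine.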
ghost82 Posted December 30, 2021

2 hours ago, ghost82 said:
"I was wondering if leaving the default PCIe 4.0 x16 16 GT/s could cause any issue (compatibility?) in the VM on hardware that doesn't support that PCIe bus."

Maybe a placebo effect, but since adding those global arguments my Windows 11 VM can now reboot without issues and also no longer panics on shutdown (it happened very randomly and was partially solved by removing isolcpus from the boot args). It never happened with my GTX Titan Black, but that card was only PCIe 3.0 compatible. So maybe related issues can occur when PCIe 4.0 compatible hardware sits behind emulated PCIe 4.0 buses on PCIe 3.0 physical hardware.