Everything posted by billington.mark

  1. I had this a while ago and couldn't get anywhere near bare metal performance. You can get quite complex with this and start looking into IOThread pinning to help, but there's only so much you can do with a virtual disk controller vs a hardware one. You'll probably notice a bit of a difference if you use the emulatorpin option to take the workload off CPU 0 (which will be competing with Unraid's own workload). If you have a few cores to spare, give it a hyperthreaded pair (one that you're not already using in the VM). In the end I got a PCIe riser for an NVMe drive and passed that through to the VM. You get about 90% of the way there performance-wise compared to bare metal, as the controller is part of the NVMe itself. Good luck.
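     For anyone wanting to try the pinning side, a rough sketch of the relevant VM XML, assuming a spare hyperthreaded pair and a virtio disk (the cpuset numbers are placeholders for whatever cores you actually have free):
     <iothreads>1</iothreads>
     <cputune>
       <vcpupin vcpu='0' cpuset='2'/>
       <vcpupin vcpu='1' cpuset='10'/>
       <emulatorpin cpuset='4,12'/>
       <iothreadpin iothread='1' cpuset='4,12'/>
     </cputune>
     ...
     <driver name='qemu' type='raw' cache='writeback' iothread='1'/>
     The iothread='1' attribute on the virtio disk's <driver> line is what actually ties that disk to the pinned IOThread.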
  2. I have read reports in the Plex forums that the Vega iGPU can be used for transcoding, but with it being quite a niche situation where you have a Plex server AND one of the very few CPUs with the Vega iGPU, reports are few and far between. However, to be in a position to test, and to be able to talk with people in the Plex forum about how to get it working, I need to be able to expose it to my docker container! Also, like you've said, it's not just Plex which could make use of the iGPU.
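     Exposing it should just be a case of mapping the render device into the container once the driver is loaded and /dev/dri exists on the host; as a rough sketch, something like this in the container's Extra Parameters (or the equivalent --device flag on a plain docker run):
     --device=/dev/dri
     (That's the standard DRI node location; I can't verify it on Unraid yet since the driver isn't there.)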
  3. I'm struggling to see any downside to seeing this get included. Especially if it's only enabled by a boot option?
  4. I've dropped the price on the memory to get these sold so they're not gathering dust. 32GB DDR3 PC3-12800R (8x M393B5170GB0-CK0 4GB) £60. 32GB DDR3 PC3-8500R (8x M393B5170EH1-CF8 4GB) £75. All 64GB for £130.
  5. So that docker containers can make use of iGPU transcoding.
  6. Is it possible to enable the AMD APU iGPU Drivers in this build somehow? @eschultz
  7. Please could the drivers for the Ryzen APUs be added? I believe the prerequisite kernel version is 4.15, and we're on 4.19.33 on the latest RC. It was mentioned here by @eschultz, but I've never seen any mention of it getting implemented, or, if it has been, how to enable it. I'd like to use the GPU to aid transcoding in my Plex docker (which, while undocumented on the Plex side, does apparently work). Even if it wasn't enabled by default, and required adding boot code(s) in syslinux or a modprobe command in the go file, I'd be happy! Or even if there was documentation somewhere on creating a custom kernel with the driver enabled? The 2400G is a little workhorse, and adding GPU transcoding would make it pretty amazing!
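     For example, assuming amdgpu were compiled as a module rather than built in, something as small as this in /boot/config/go would presumably be enough to load it at boot:
     # load the AMD iGPU driver so the /dev/dri nodes get created
     modprobe amdgpu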
  8. Having an upgrade, so I'm selling my dual Xeon setup... All items are working great and haven't been overclocked. I will post internationally, but please bear in mind that international postage from the UK is expensive! Motherboard and CPUs (sold as a bundle as I don't have the CPU socket protectors): ASRock Rack EP2C602-4L/D16. https://www.asrockrack.com/general/productdetail.asp?Model=EP2C602-4L/D16#Specifications 2x Xeon E5-2670. £340 posted SOLD. Memory (new prices as of 09/05/19): 32GB DDR3 PC3-12800R (8x M393B5170GB0-CK0 4GB) £60. 32GB DDR3 PC3-8500R (8x M393B5170EH1-CF8 4GB) £75. All 64GB for £130. These are also for sale on eBay, but listed here slightly cheaper. All payments through PayPal.
  9. QEMU 4.0 RC0 has been released - https://www.qemu.org/download/#source And there's a nice specific mention in the changelog of things discussed in this thread (https://wiki.qemu.org/ChangeLog/4.0). Now that these changes are standard with the Q35 machine type in 4.0, I think this could also be an additional argument against potentially forcing Windows-based VMs onto the i440fx machine type, if this brings things into performance parity? If @limetech could throw this into the next RC for people to test out, that would be much appreciated!
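     Once a 4.0 build is bundled, I'd expect it to just be a case of the VM picking up the new machine type, with the XML ending up looking something like this (the exact version string depends on what actually ships):
     <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>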
  10. It was me. I think the current behaviour in the UI is perfect. Pick an OS, and the sensible, least-hassle settings are there for you to use. I don't think options to change the machine type should be removed. At worst, they could possibly be hidden behind an "advanced" switch (which I think currently flips between the form and the XML), with another tab to view the XML instead?... I know there's a balance to be found to accommodate all levels of Unraid users here, and I don't envy the UI decisions to try and keep everyone happy! It is worth pointing out that it's documented that the drivers DO behave differently based on what PCIe link speed they detect, and personally I get better performance numbers and prefer running a Q35-based VM... I think the long-term fix for this is either to allow the option to run modules such as QEMU, libvirt and docker from the master branch, and allow them to be updated independently of the OS, or to have "bleeding edge" builds where these modules are compiled from master. Easier for me to say than it is to implement, though.
  11. @jonp I've been under the impression for a long time that latency and performance improvements in QEMU needed the Q35 machine type to be taken advantage of. All the development I've seen, and the tips to improve performance, seem to be around using the Q35 machine type. At the end of the day, I want to get as close to bare metal performance as possible; that's my aim. I'm in no way preaching that we should all move to Q35. Now I have my own performance numbers pre and post patch, I'll happily test the i440fx machine type too. I've also posted this over in the Level1Tech forum to ask them the same question, seeing as it's them who've pushed for the development on the Q35 machine type to get these PCIe fixes in the first place. As for removing the option in the GUI for Q35 for Windows... I think it would be more appropriate to show a warning if Q35 was selected, as opposed to removing the ability to choose it altogether.
  12. Thank you for this. This is a great baseline to compare my Xeon build to.
  13. I'm seeing around a 5-10% increase in performance on GPU tests with my RTX 2080.
  14. Yep, looks like it's fixed the driver crippling memory scaling (in Windows anyway). I'm seeing a 5-10% increase in GPU benchmarks after updating to RC4. I was hoping for more, but it looks like my bottleneck is my aging CPU now (2x E5-2670)! I've been meaning to put my hand in my pocket and upgrade to a Threadripper build for a while now... I'm very interested to see what performance gains you guys are getting after this patch... Thank you @limetech
  15. Having a build with QEMU from master would benefit everyone, not just you guys with Threadripper builds.
  16. The original topic of this post was to highlight a particular problem I was having (and still am), but the main underlying point here is that over the last couple of years, development on QEMU, the introduction of new hardware from AMD, and the general love for virtualisation on workstation hardware has meant development in this space is moving at quite a pace. Short term, a build which included virtualisation modules from master would make a lot of people happy, but the same is inevitably going to happen when 3rd gen Ryzen, 3rd gen Threadripper, PCIe 4, PCIe 5, etc. drop in the coming months. Personally, I think the long-term holy grail here is the ability to choose which branch we run key modules like QEMU, libvirt and docker from... then be able to update and get the latest patches/performance improvements independently of an Unraid release. Short term though... a build to keep us all quiet would be lovely.
  17. I've been pushing for the changes detailed in that level1tech forum post for a while... https://forums.unraid.net/topic/77499-qemu-pcie-root-port-patch/ Feel free to post in there to push the issue... the next stable release of QEMU doesn't look like it's coming until April/May: https://wiki.qemu.org/Planning/4.0. So fingers crossed there's an Unraid release offering that soon after. The alternative is for the @limetech guys to be nice to us and include QEMU from the master branch rather than from a stable release in the next RC... Considering how many issues it would fix around Threadripper, as well as the PCIe passthrough performance increases, it would make a lot of people happy...
  18. i440fx doesn't have any PCIe 'slots' as such; it presents the GPU to the OS on a PCI slot, again causing latency and a performance hit compared to bare metal. The CLI tool is to show that when you use Q35, the PCIe root ports are x1, not x16. The issue here is that the NVIDIA driver doesn't correctly initialise the card (on Windows anyway) unless it detects it's on an x8 or x16 slot. The comments on the patch do a good job of explaining what's going on and what's being changed here: https://patchwork.kernel.org/cover/10683043/ I'm by no means complaining, but if there's a way to improve performance and get as close to bare metal as possible, I think it's worth implementing. 👍
  19. Are you using Q35 or i440fx? The issue here is that the NVIDIA driver behaves differently if the bus reported is anything less than x8. Also, latency on the VM as a whole is greatly improved when using Q35 with the patches. It's a long read, but you can see the evolution of these changes on the level1tech forum I linked in the original post.
  20. Yep, and because of that, the NVIDIA driver is reining in performance. I don't use MacOS, so I'm not sure if you're able to see this info on the driver... but in either case, x1 root ports will be presented to the VM guest regardless of the OS it's running. Depending on what checks the driver is doing on MacOS, it might have different performance implications than on Windows.
  21. That's still not fixed (as much as I'd like it to have been that easy!). Have a look in the NVIDIA control panel under System Information at the bus in use (I'd put money on it being x1!). (Image is from the level1 forum as I'm not at home and can't take a screenshot currently.) You can also do a speed test using the EVGA utility: https://forums.evga.com/PCIE-bandwidth-test-cuda-m1972266.aspx The patch to add the ability to set PCIe root port speeds wasn't present in the 3.1 release (which is what we're on, as of 6.7.0-rc2).
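      For anyone running a Linux guest instead, the same check is just lspci -vv against the GPU; the LnkCap/LnkSta lines show the advertised and negotiated speed and width. A sketch (the 01:00.0 address is only an example; use whatever bus address the GPU shows up on in the guest):
      lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'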
  22. Please can the following patch be applied to QEMU (until QEMU 4.0 is bundled with Unraid, as this fix is already present in master). PCIe root ports are only exposed to VM guests as x1, which results in GPU passthrough performance degradation, and in some cases on higher end NVIDIA cards the driver doesn't initialise some features of the card. https://patchwork.kernel.org/cover/10683043/ Once applied, the following would be added to the VM's XML to modify the PCIe root ports to be x16 ports:
      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
      The patch is well documented over here too: https://forum.level1techs.com/t/increasing-vfio-vga-performance/133443 This would also increase performance of any other passed-through PCIe devices which need more bandwidth than an x1 port provides (NVMe, 10Gb NICs, etc.). If we could have QEMU compiled from master instead of the releases though... that would be even better!
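      One thing worth noting for anyone adding that block by hand: the qemu XML namespace needs declaring on the <domain> element, otherwise libvirt won't accept the <qemu:commandline> section when the XML is saved:
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>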
  23. Any chance of having QEMU from the master branch rather than 3.1 in the next release? Or, can these patches be applied: https://patchwork.kernel.org/cover/10683043/
  24. Looks like we're waiting for QEMU 4.0... https://wiki.qemu.org/Planning/4.0 I don't think the Unraid guys compile from source; they'll just grab the latest stable version, which is currently 3.1. The commits I'm interested in got pushed after the 3.1 release, now I've cross-referenced all the dates! @jonp Is there any way we could get a 'bleeding edge' build of QEMU built from the master git branch in the next RC maybe? Being able to test the Threadripper and PCIe root port lane size fixes would be great. Looking at the current schedule for 4.0, it looks like we'll be waiting a few months before the next official release...