billington.mark

Members
  • Content Count

    321
  • Joined

  • Last visited

Community Reputation

20 Good

About billington.mark

  • Rank
    Advanced Member
  • Birthday 07/30/1985

Converted

  • Gender
    Male
  • Location
    United Kingdom


  1. billington.mark

    Terrible gaming performance

    FYI, GPU-Z lies... if you really want to see what your PCIe lane situation is for your passed-through NVIDIA card, have a look in NVIDIA Control Panel > Help > System Information, then scroll down to Bus. The reason is that the PCIe root ports created on a Q35 machine are x1 ports by default. In QEMU 3.2 (I think), you can add some extra XML to force a root port to be x16, and in 4.0 all root ports will be x16 by default.
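For reference, the kind of override people have been using looks something like the sketch below. It relies on QEMU's experimental x-speed/x-width properties for pcie-root-port, passed via libvirt's qemu:commandline escape hatch, so treat it as a sketch under those assumptions rather than a supported setting:

```xml
<!-- The xmlns:qemu attribute on <domain> must be present, or libvirt
     will silently drop the <qemu:commandline> section. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- existing VM definition unchanged -->
  <qemu:commandline>
    <!-- Make emulated PCIe root ports advertise Gen3 (speed 8 GT/s) x16
         instead of the default x1 -->
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=16'/>
  </qemu:commandline>
</domain>
```

Note this only changes what the link advertises to the guest; the real bandwidth was never limited by the emulated x1 port to begin with, which is why GPU-Z's reading is misleading rather than a real bottleneck.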
  2. billington.mark

    Terrible gaming performance

    I have a very similar setup to you and have been diagnosing NUMA headaches for longer than I care to remember! A few things to try (which improved my performance):

    Switch to a Q35 VM. It might not yield any performance increase right now, but there are some changes in the pipeline for QEMU 3.2/4.0 which will increase the performance of passed-through PCIe devices (and which should be included in the next version of unRAID).

    After you've flipped to Q35, add an emulatorpin value to take the pressure off core 0 (which the emulator will be using by default). Keeping it on the same NUMA node as your passed-through CPUs would most likely be best, so it'll look like this:

    <vcpu placement='static'>12</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='10'/>
      <vcpupin vcpu='1' cpuset='26'/>
      <vcpupin vcpu='2' cpuset='11'/>
      <vcpupin vcpu='3' cpuset='27'/>
      <vcpupin vcpu='4' cpuset='12'/>
      <vcpupin vcpu='5' cpuset='28'/>
      <vcpupin vcpu='6' cpuset='13'/>
      <vcpupin vcpu='7' cpuset='29'/>
      <vcpupin vcpu='8' cpuset='14'/>
      <vcpupin vcpu='9' cpuset='30'/>
      <vcpupin vcpu='10' cpuset='15'/>
      <vcpupin vcpu='11' cpuset='31'/>
      <emulatorpin cpuset='9,25'/>
    </cputune>

    Personally, I have my main workstation VM running off cores on NUMA node 0, so I have my emulatorpin there. With the QEMU service running on node 0 too, it might be worth testing your emulatorpin on that node as well, so 7,23 maybe. I also stub those CPU cores the same as the rest, to ensure nothing else is stealing cycles from my VM.

    Add some additional Hyper-V enlightenments (I can't remember if all of these are standard with unRAID, but here they are anyway):

    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <vendor_id state='on' value='none'/>
      <frequencies state='on'/>
    </hyperv>

    The MSI fix will most likely need to be applied to your GPU and its audio device:
    https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts.378044/ (use the v2 utility)

    Last but by no means least: your storage is attached to NUMA node 0, and everything else is on node 1, so latency will be an issue here. Not sure how viable this is, but if you can, flip your 1070 into a PCIe slot associated with NUMA node 0, change your CPUs (and your emulatorpin) to that node too, and see how things are there. Another alternative, if you have a spare HDD controller, is to attach only the SSD you're using and pass the controller through if you're able to, as it'll cut out the QEMU middleman between Windows and the SSD.

    I think you'll notice the biggest difference with the emulatorpin change.
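Before moving cards between slots, it's worth confirming which NUMA node each device actually sits on; sysfs and lscpu are enough for that. A quick sketch, assuming the GPU is at PCI address 0000:03:00.0 (substitute your own address from lspci):

```shell
# Which NUMA node is the GPU attached to? Prints a node number,
# or -1 if the platform reports no NUMA affinity for the device.
dev=/sys/bus/pci/devices/0000:03:00.0
if [ -e "$dev/numa_node" ]; then
  cat "$dev/numa_node"
fi

# Show how CPU numbers map to NUMA nodes, so vcpupin/emulatorpin
# values can be kept on the same node as the GPU.
lscpu | grep -i 'numa'
```

Matching the vcpupin, emulatorpin, and GPU slot to the same node is the whole point of the exercise above.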
  3. billington.mark

    Oculus Rift performance in VM

    You're going to struggle for performance with only a 4-core CPU... even more so if you're assigning all 4 cores to your VM. You'll have a lot more luck upgrading to something like an i7 8700, which has a lot more CPU threads to play with. Before delving into your wallet, I'd post in the hardware forum and ask for some advice on where to go hardware-wise; AMD are offering compelling options at very good price points. In the short term, you could try:

      • Isolating the CPU cores you're using for your VM (search the forum for isolcpus). I'd isolate cores 2 and 3, leaving 0 and 1 available for unRAID/Docker to use.
      • Assigning cores 2 and 3 to the VM.
      • Setting the emulatorpin value in your VM XML to core 1.

    Post your VM XML and I'm sure people will chime in with some more suggestions... but you're going to struggle with such a small CPU core count to play with.
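For the isolcpus step, the change goes on the kernel append line in the flash drive's syslinux configuration. A minimal sketch, assuming a 4-core CPU with cores numbered 0-3 and no hyperthreading (the label and initrd lines should match whatever your existing config already has):

```
label unRAID OS
  kernel /bzimage
  append isolcpus=2,3 initrd=/bzroot
```

isolcpus is a standard kernel boot parameter: cores 2 and 3 are removed from the general scheduler, so after a reboot only the VM (via its vcpupin entries) will run on them, while unRAID and Docker stay on cores 0 and 1.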
  4. billington.mark

    RTX2080 passthrough surprise

    Nothing special to get it to work... stubbed the device like you usually would, and passed it through like any other device. No issues at all, and no extra config needed in Windows either. Just plugged in, and it worked instantly like any normal USB port.
  5. billington.mark

    RTX2080 passthrough surprise

    Having a look, it doesn't seem to be standard across the board on all 20-series cards; it might only be on the higher-end SKUs (2070/2080/2080 Ti). EVGA doesn't seem to have it on their 2060s.
  6. billington.mark

    RTX2080 passthrough surprise

    All working as expected, no issues at all.
  7. billington.mark

    RTX2080 passthrough surprise

    So, I received an RTX 2080 today (took advantage of the EVGA Step-Up programme, as I got a 1080 in July). This is the hardware it presents:

    IOMMU group 20:
    [10de:1e87] 03:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] (rev a1)
    [10de:10f8] 03:00.1 Audio device: NVIDIA Corporation Device 10f8 (rev a1)
    [10de:1ad8] 03:00.2 USB controller: NVIDIA Corporation Device 1ad8 (rev a1)
    [10de:1ad9] 03:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1ad9 (rev a1)

    Previous NVIDIA cards I've had presented as two devices: one graphics device and one audio device. The 'new' extra two are the serial bus controller (which I assume is the RGB controller) and a USB controller. To my surprise, the USB Type-C port on the back of the card actually functions as a full-fledged USB port, so I'm able to connect a USB 3 hub to it using a Type-C-to-A adapter and no longer need to pass through an additional PCIe USB card! The hub is being powered by the USB port, and has a keyboard, mouse and USB DAC connected with zero issues. Seeing as these cards are quite new and virtualization is a bit niche, I thought I'd put this down in a post for people to see.
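Since all four functions sit in the same IOMMU group, they should all be bound to vfio-pci together. A sketch of the syslinux append line, using the vendor:device IDs from the listing above (kernel/initrd names should match your existing config; vfio-pci.ids is the standard kernel parameter for this):

```
label unRAID OS
  kernel /bzimage
  append vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9 initrd=/bzroot
```

After a reboot, all four devices (GPU, audio, USB controller, serial bus) are stubbed and can be passed through to the VM as a set, which is what makes the card's Type-C port usable directly inside the guest.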
  8. billington.mark

    QEMU Patch for Epyc/Threadripper

    A newer QEMU version has been confirmed for 6.7: Linky link
  9. billington.mark

    Qemu 3.1

    +1 to this. There's a lot of work going on to improve PCIe bandwidth on Q35-based VMs for passed-through PCIe devices too, so I'm keen to get these updates as an Intel user as well. Not sure if those fixes are present in 3.1 or if they're coming in 4.0, but staying up to date with QEMU/libvirt versions looks like it should yield a pretty big bump in performance in the near future.
  10. billington.mark

    NVME M.2 Passthrough

    Seems to be a hardware issue with the SMI SM2262 controller on these NVMe devices. A few posts up, this was resolved by swapping to a Samsung PM961. I'd stick to known-good hardware like Samsung/Intel NVMe drives, tbh.
  11. billington.mark

    [6.6.1] GUI doesnt ever finish loading, causes stutter in VMs

    Changed Status to Solved
  12. billington.mark

    [6.6.1] GUI doesnt ever finish loading, causes stutter in VMs

    Can confirm this is resolved in 6.6.2. Thanks all :)
  13. billington.mark

    [6.6.1] GUI doesnt ever finish loading, causes stutter in VMs

    Also, the behavior is the same with a safe-mode boot.
  14. billington.mark

    [6.6.1] GUI doesnt ever finish loading, causes stutter in VMs

    Xorg.0.log.old, Xorg.0.log: Xorg log(s) attached. It's just creating these over and over again; I assume every time the screen refreshes (as shown in the original post).
  15. billington.mark

    [6.6.1] GUI doesnt ever finish loading, causes stutter in VMs

    No change, regardless of legacy or UEFI boot.