Tritech

Members
  • Content Count: 41
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Tritech
  • Rank: Advanced Member


  1. Tritech

    QEMU PCIe Root Port Patch

    Just chiming in to say I'm waiting for this and any other Threadripper performance-related changes as well.
  2. Step one would be keeping this on the first page and visible here as well 😁 Devs, you pickin' up what we're putting down?
  3. Yeah, I was just wondering if it's driver-related on the host side/VFIO.
  4. Actually, I let it run a bit longer, and both of the highest-execution drivers are network-related: ndis.sys and adf.sys. Come to think of it, you're using a different Ethernet port than I am; I wonder if that may be part of the issue. I'm using the 10G port, which I don't really have a use for right now, since the rest of my network is gigabit.
  5. I get what you're saying; I think it means they should be in the included range. You know how you left out cores 8/24? I think they have to be on the same "domain" to be used at all, or at least to get the most out of them. That's how I interpret it, anyway. I've tweaked my config for now so they're all on the same domain; I'll fix it properly when I change my isolcpus at reboot (rough sketch of the pinning below). Here are some updates as well: it seems storport.sys is what's giving me the highest execution time. Gonna see if I can track down any gains there.
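    For anyone following along, the shape I'm talking about looks roughly like this. A sketch only, not my actual config; I'm assuming host cores 8-11 and their SMT siblings 24-27 sit on the same NUMA node, so adjust the IDs for your topology:

      <vcpu placement='static'>8</vcpu>
      <cputune>
        <!-- vcpu is the guest index (starts at 0); cpu is the host core ID. -->
        <!-- Pair each physical core with its SMT sibling, all on one node. -->
        <vcpupin vcpu='0' cpu='8'/>
        <vcpupin vcpu='1' cpu='24'/>
        <vcpupin vcpu='2' cpu='9'/>
        <vcpupin vcpu='3' cpu='25'/>
        <vcpupin vcpu='4' cpu='10'/>
        <vcpupin vcpu='5' cpu='26'/>
        <vcpupin vcpu='6' cpu='11'/>
        <vcpupin vcpu='7' cpu='27'/>
        <!-- Keep the emulator thread off the pinned cores. -->
        <emulatorpin cpuset='0-1'/>
      </cputune>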
  6. I cross-referenced that several times. Really helpful stuff. I reapplied the EPYC "hack" and that brought my latency down further, to ~300µs, dipping as low as ~125µs. (Sketch of what I mean below.) https://pastebin.com/dLWncwhV
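    For anyone who missed the earlier discussion, the "hack" (as I understand it) is forcing the guest CPU model to EPYC rather than letting QEMU pick one, since there was no proper Threadripper model at the time. A sketch only; the topology values are illustrative, not copied from my pastebin:

      <cpu mode='custom' match='exact' check='none'>
        <model fallback='allow'>EPYC</model>
        <!-- Illustrative topology: 8 cores with SMT on one socket. -->
        <topology sockets='1' cores='8' threads='2'/>
        <!-- topoext exposes the AMD SMT topology to the guest. -->
        <feature policy='require' name='topoext'/>
      </cpu>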
  7. Evidently it does start with 1. I think the "0-15" refers to the vcpupin'd CPUs, not the physical ones.
  8. I saw that, and yes, that's what it looks like to me. Lemme test.
  9. You're right... mine seems to be grabbing almost 1.5 GB from node0. (Sketch of a fix below.)
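    If you want to stop that, numatune should force guest memory onto the node the VM's cores live on. A sketch, assuming the VM is pinned to host node 1:

      <numatune>
        <!-- 'strict' fails allocation rather than spilling onto other nodes. -->
        <memory mode='strict' nodeset='1'/>
      </numatune>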
  10. Yeah, I didn't grasp the concept the initial post was making about creating a PCI root bus and assigning it, versus a card. The more recent activity there does make it seem like the bulk of the improvements will come with QEMU updates... whenever we get those. The guy I got it from said the last lines in his XML were for a patched QEMU. I was also recommended "hugepages" (snippet below), but after a cursory search it seems Unraid enables that by default. Couldn't get a VM to load with it enabled.

      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
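    For the record, the hugepages config I tried is the standard libvirt memoryBacking block; this is the part that wouldn't boot for me:

      <memoryBacking>
        <!-- Back guest RAM with hugepages from the host pool. -->
        <hugepages/>
      </memoryBacking>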
  11. @bastl Thanks! I'll try switching some things around and see if that improves anything. Check my post above yours for some updates. IIRC I had my Unraid USB where yours is, but I moved it so I can pass through the whole controller.
  12. Ladies and gentlemen, we got 'em. Massive thanks to Reddit user setzer for his help on this. I don't think he's on Unraid, but his help was invaluable. Latency is now down to at least manageable levels; I'll continue tweaking. His .xml: https://pastebin.com/GT1dySwt My .xml: https://pastebin.com/yGcL0GNj He also sent along some additional reading for us: https://forum.level1techs.com/t/increasing-vfio-vga-performance/133443
  13. I MAY be onto something. I can't seem to get the GPU passed through, so I'm not sure yet. I tried some of the things that guy had in his XML at Level1Techs. This is by far the lowest latency I've seen, but the GPU isn't really in the equation yet; it's throwing an error 43. (Common workarounds sketched below.)
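    For anyone hitting the same thing: error 43 is usually the NVIDIA driver refusing to load once it detects the hypervisor, and the usual workaround is hiding the KVM signature in the features block. Standard libvirt syntax; the vendor string is arbitrary:

      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <!-- Any string up to 12 chars; masks the Hyper-V vendor ID from the driver. -->
          <vendor_id state='on' value='whatever1234'/>
        </hyperv>
        <kvm>
          <!-- Hide the KVM signature itself. -->
          <hidden state='on'/>
        </kvm>
      </features>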
  14. I've noticed that as well, but in this case I was only editing a single VM, and it was deleted during boot. Probably related to changes that only apply to Q35 being applied to an i440fx VM.
  15. @bastl If you get a chance, can you show me or just tell me where things are plugged into your rear USB?