Tritech

Everything posted by Tritech

  1. Yeah, I was just wondering if it's driver-related on the host side/VFIO.
  2. Actually, I let it run a bit longer, and both of the highest execution times are network-related: ndis.sys and afd.sys. Come to think of it, you're using a different ethernet port than I am. I wonder if that may be part of the issue. I'm using the 10G port, which I don't really have a use for right now; the rest of my network is gigabit.
  3. I get what you're saying; I think it's saying that they should be in the included range. You know how you left out cores 8/24? Well, I think they have to be on the same "domain" to be used at all, or at least to get the most out of them. At least that's the way I interpret it. I've tweaked my config for now just so they're all on the same domain (rough sketch below); I'll fix it later when I change my isolcpus at reboot. Here are some updates as well: it seems that storport.sys is what's giving me the highest execution time. Gonna see if I can track down any gains there.
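     To illustrate what I mean by keeping them on one domain, here's a minimal <cputune> sketch (the vCPU count and core numbers are placeholders based on my 9-15/25-31 isolation, not a recommendation):

       <vcpu placement='static'>4</vcpu>
       <cputune>
         <!-- every cpuset below should land on the same NUMA node; -->
         <!-- physical cores pair with their SMT siblings (e.g. 9/25) -->
         <vcpupin vcpu='0' cpuset='9'/>
         <vcpupin vcpu='1' cpuset='25'/>
         <vcpupin vcpu='2' cpuset='10'/>
         <vcpupin vcpu='3' cpuset='26'/>
         <!-- emulator threads kept off the isolated cores -->
         <emulatorpin cpuset='8,24'/>
       </cputune>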
  4. I cross-referenced that several times. Really helpful stuff. I reapplied the EPYC "hack" and that brought my latency down further, to ~300µs, as low as ~125. https://pastebin.com/dLWncwhV
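     For anyone searching later, the core of the "hack" is the custom CPU model block in the .xml (just the fragment; my full config is in the pastebin above):

       <cpu mode='custom' match='exact' check='full'>
         <model fallback='forbid'>EPYC-IBPB</model>
         <!-- your <topology>/<feature> lines go here; see the pastebin for the full block -->
       </cpu>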
  5. Evidently it does start with 1. I think the "0-15" refers to the vcpupin'd CPUs, not the physical ones.
  6. You're right... mine seems to be grabbing almost 1.5 GB from node0.
  7. Yeah, I didn't grasp the concept the initial post was making about creating a PCI root bus and assigning that instead of a card. The more recent activity there does make it seem like the bulk of the improvements will come with QEMU updates... whenever we get those. The guy I got it from said the last lines in his xml were for a patched QEMU. I was also recommended "hugepages", but after a cursory search it seems Unraid enables that by default; I couldn't get a VM to load with it enabled.
      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
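     If anyone else wants to test the hugepages side of it, the way it's usually toggled in a libvirt .xml is a <memoryBacking> block (rough sketch, the memory size is just an example):

       <memory unit='KiB'>16777216</memory>  <!-- 16 GB, illustrative -->
       <memoryBacking>
         <hugepages/>
       </memoryBacking>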
  8. @bastl Thanks! I'll try switching some things around and see if that improves anything. Check my post above yours for some updates. IIRC I had my Unraid USB where yours is, but I moved it so I can pass through the whole controller.
  9. Ladies and gentlemen, we got 'em. Massive thanks to reddit user setzer for helping with this. I don't think he's on Unraid, but his help was invaluable. Latency is now down to at least manageable levels; I'll continue tweaking. His .xml: https://pastebin.com/GT1dySwt My .xml: https://pastebin.com/yGcL0GNj He also sent along some additional reading for us: https://forum.level1techs.com/t/increasing-vfio-vga-performance/133443
  10. I MAY be onto something. I can't seem to get the GPU passed through, so I'm not sure yet. I tried some of the things that guy had in his xml at level1techs. This is by far the lowest latency I've seen, but the GPU isn't really in the equation; it's throwing an Error 43.
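     In case anyone else hits Error 43: the usual workaround for Nvidia cards detecting the hypervisor is hiding KVM in the <features> block. A sketch (the vendor_id value is arbitrary, anything up to 12 characters works):

       <features>
         <hyperv>
           <!-- spoof the hypervisor vendor string the Nvidia driver checks -->
           <vendor_id state='on' value='whatever'/>
         </hyperv>
         <kvm>
           <!-- hide the KVM signature from the guest -->
           <hidden state='on'/>
         </kvm>
       </features>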
  11. I've noticed that as well. But in this case I was only editing a single VM, and it was deleted during boot. Probably related to changes that would only apply to q35 being applied to an i440 VM.
  12. @bastl If you get a chance, can you show or just tell me where things are plugged into your rear USB?
  13. Got it. My board already had a 2.x BIOS when I got it, and I updated to 3.5 before I really got into most of this. It's kind of confusing with old and new information being mixed in, especially on a pinned thread. Just about to try a new VM with the q35 machine type. BTW, I noticed something with my i440 VM. Remember how you said I forgot a portion of the EPYC hack? Well, I checked my new VM, and it turns out that portion just gets deleted in an i440 .xml. Edit: With q35 the .xml changes; specifically, these lines stayed in the .xml:
       <cpu mode='custom' match='exact' check='full'>
       <model fallback='forbid'>EPYC-IBPB</model>
      In both instances, Windows still recognized it as an EPYC.
  14. Is this true? My lstopo says the first is correct. The pinned thread here also mentions sequential pairings in the second post. I've tried this with terrible results.
  15. I'd be fine with that latency. Really the only difference between our setups is my passed-through NVMe drive. I haven't tried an image on the drive itself; I just can't imagine that being faster. Probably my project for tomorrow.
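     For reference, mine is passed through as a plain PCI device; something like this <hostdev> block (sketch only, the bus/slot address is whatever your IOMMU group shows, not these values):

       <hostdev mode='subsystem' type='pci' managed='yes'>
         <source>
           <!-- address of the NVMe controller on your system, e.g. from lspci -->
           <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </source>
       </hostdev>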
  16. One is probably the video card's audio. Mine only shows up as a USB device and doesn't output anything. I've disabled all my Nvidia stuff.
  17. Weird, I installed the most recent one from ASRock and the driver still says it's from MS.
  18. Well it was an idea. We did figure out UEFI booting. Unfortunately for me, latency is still the same.
  19. Awesome! I won't be back home for a few hours to test and see if this even makes an improvement. I guess those few commands could be automated into a script that runs at boot.
  20. Shit. I guess the only way is two GPUs, with one shitty one in the top slot. I have a hard time believing UEFI support is this lacking.
  21. Yeah, that's exactly what mine was doing. I just had an idea, but I'm heading out and can't test atm. To get UEFI working, I read somewhere that you need to add the IDs in your syslinux, like:
       kernel /bzimage
       append vfio-pci.ids=10de:1b06,10de:10ef isolcpus=9-15,25-31 initrd=/bzroot
      Those IDs being your GPU and GPU audio.
  22. It's crazy that the Zenith isn't getting support. That was the flagship Gen1 board.
  23. Thanks man, I figured it out. Well, the most recent issue anyway. Disabling CSM only lets you boot the Unraid USB in UEFI, and evidently it shits the bed when you do so. I managed to get into a VM with CSM disabled and latency was great. Not sure if that was just due to no Nvidia drivers being loaded, or if that's actually the fix. Can you actually boot and USE Unraid if it boots as UEFI?
       CSM - disabled = Big Problems
       Launch PXE OpROM Policy - Legacy only = all network boot stuff disabled
       Launch Storage OpROM Policy - Legacy only
       Launch Video OpROM Policy - Legacy only
       Fast Boot - disabled = yep
       Secure Boot - disabled = yep
       SD Configuration Mode - disabled = there were two of these, "SD Configuration Mode" and "eMMC/SD Configuration" (currently re-enabled to get shit working again, will test in a few) EDIT: disabled both
       ACS enabled - auto = think it was auto (will check on this next reboot)
       NVMe Raid Mode - disabled = yep
       ACPI HPET Table - enabled = yep
       Deep Sleep - disabled = didn't see it, but anything suspend-related was off
       AMD fTPM switch - disabled = yep
       SMT Mode - auto = yep
      Also of note: the choices for "memory interleaving" are none, channel, die, socket, or auto. I selected channel.