
Tritech

Members
  • Content Count: 46
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Tritech
  • Rank: Advanced Member


  1. I'm on Pro. I have all Windows updates turned off, and I usually update all the machines in the house monthly. Like I said, this last time the machine went down and the screen went black while it was in use. It's just odd that it pegs the cores when it crashes.
  2. Hey, bastl. We've exchanged a few posts here on the "at my wits end..." thread dealing with latency. I have the same board as you. I haven't changed anything in over a month or so. Software-wise in the VM, I haven't really installed anything new, just Steam/browser updates. The Windows logs looked fine, no weird errors other than the one above when trying to restart it. Maybe it was just a fluke. I keep my VM running 24/7. Should I just bring it down every few days as a preventative measure?
  3. Currently on 6.7 RC5. My VM has crashed twice this week. The first time was while I was asleep; I woke up to a functioning server. All Dockers and shares were accessible, but the main Win 10 VM was down and all of its allocated cores were pegged at 100%. Stopping the VM fixed the CPU usage, but it was unable to restart. The only thing I could see in the VM log was "host-cpu char device redirected to /dev/pts/0 (label charserial0)", followed by a line stating that it crashed. Only a hard reboot solved the issue. The second time was just now: I was just browsing, the screen went black, and the same CPU cores were maxed out, just the ones pinned to the VM. Any ideas of what to look for the next time it happens?
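
     If it helps gather data next time: libvirt has an on_crash lifecycle setting that can capture a guest core dump when QEMU reports a crash. This is only a hedged suggestion, since a hung guest with pegged cores may never trigger it, and the snippet below is a minimal sketch rather than anything from the actual XML:

         <domain type='kvm'>
           <!-- ...rest of the VM definition... -->
           <on_poweroff>destroy</on_poweroff>
           <on_reboot>restart</on_reboot>
           <!-- if libvirt detects a crash, dump a guest core file, then restart the VM -->
           <on_crash>coredump-restart</on_crash>
         </domain>
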
  4. Tritech

    Improving Windows VM performance for gaming

    I've converted to a VM and back a few times to test with the same Windows install. I'm not sure what you mean by this; I'm assuming you mean how to configure the VM. This is my current VM. The XML is heavily customized, but you should get the idea from it. There is no vdisk location because I've passed through the NVMe controller that you see at the bottom. As far as extra vdisk images go, just check when they were last modified and whether any existing VMs point to that .img.
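
     For reference, passing the whole NVMe controller through shows up in the domain XML as a PCI hostdev entry rather than a vdisk. The snippet below is only a generic sketch with placeholder PCI addresses, not the entry from that XML:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <!-- host address of the NVMe controller (placeholder values) -->
             <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
           </source>
           <!-- address the guest sees it at (optional; libvirt can assign one) -->
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </hostdev>
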
  5. <qemu:commandline>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.speed=8'/>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.width=16'/>
     </qemu:commandline>

     Tested the new RC4 and looks like it's working!
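
     For anyone pasting that block into their own XML: libvirt only accepts <qemu:commandline> if the qemu namespace is declared on the <domain> element, so the surrounding context needs to look roughly like this (a minimal sketch, everything else omitted):

         <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
           <!-- ...rest of the VM definition... -->
           <!-- the qemu:commandline block above goes here, as a direct child of <domain> -->
         </domain>
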
  6. Tritech

    QEMU PCIe Root Port Patch

    Just chiming in to say I am waiting for this and any other Threadripper performance-related changes as well.
  7. Step one would be keeping this on the first page and visible here as well 😁 Devs, you pickin' up what we're putting down?
  8. Yeah, I was just wondering if it's driver-related on the host side/VFIO.
  9. Actually, I let it run a bit longer, and both of the highest execution times are network-related: ndis.sys and adf.sys. Come to think of it, you're using a different Ethernet port than I am. I wonder if that has something to do with it. I'm using the 10G port, which I don't really have a use for right now; the rest of my network is gigabit.
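
     If it does come down to which port the bridge sits on, the VM side of it is just the interface definition below; the bridge name is a placeholder, and which physical NIC that bridge uses is decided in the host's network settings, not in the VM XML:

         <interface type='bridge'>
           <source bridge='br0'/>
           <model type='virtio'/>
         </interface>
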
  10. I get what you're saying. I think it's saying that they should be in the included range. You know how you left out cores 8/24? Well, I think they have to be on the same "domain" to be used at all, or at least to get the most out of them. At least that's the way I interpret it. I've tweaked my config for now so they're all on the same domain; I'll fix it later when I change my isolcpus at reboot. Here's an update as well: it seems that storport.sys is what's giving me the highest execution time. Gonna see if I can track down any gains there.
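
     In case "domain" here means a guest NUMA cell in the libvirt XML, a rough sketch of keeping everything in one cell could look like the following; the vCPU count, host core numbers, and memory size are illustrative only:

         <vcpu placement='static'>16</vcpu>
         <cputune>
           <!-- vcpu is the guest vCPU index; cpuset is the host core it pins to -->
           <vcpupin vcpu='0' cpuset='8'/>
           <vcpupin vcpu='1' cpuset='24'/>
           <!-- ...remaining pins... -->
         </cputune>
         <cpu mode='host-passthrough'>
           <numa>
             <!-- cpus counts guest vCPUs (0-15), so it should cover every pinned vCPU -->
             <cell id='0' cpus='0-15' memory='16777216' unit='KiB'/>
           </numa>
         </cpu>
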
  11. I cross-referenced that several times. Really helpful stuff. I reapplied the EPYC "hack" and that brought my latency down further, to ~300 µs, dipping as low as ~125 µs. https://pastebin.com/dLWncwhV
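
     Assuming the EPYC "hack" here is the usual Threadripper workaround of forcing the EPYC CPU model in the XML, it looks roughly like this; the topology values are placeholders and should match the cores/threads actually given to the VM:

         <cpu mode='custom' match='exact' check='none'>
           <model fallback='forbid'>EPYC</model>
           <topology sockets='1' cores='8' threads='2'/>
           <feature policy='require' name='topoext'/>
         </cpu>
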
  12. Evidently it does start at 1. I think the "0-15" refers to the vcpupin'd vCPUs, not the physical cores.
  13. I saw that, and yes, that's what it looks like to me. Lemme test.
  14. You're right... mine seems to be grabbing almost 1.5 GB from node0.
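
     One hedged way to keep guest RAM off node0 entirely is libvirt's numatune element, roughly as below; the nodeset is only an example and would need to match whichever node the pinned cores live on:

         <numatune>
           <memory mode='strict' nodeset='1'/>
         </numatune>
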
  15. Yeah, I didn't grasp the concept the initial post was making about creating a PCI root bus and assigning it, versus a card. The more recent activity there does make it seem like the bulk of the improvements will come with QEMU updates... whenever we get those. The guy I got it from said the last lines in his XML were for a patched QEMU. I was also recommended "hugepages", but after a cursory search it seems Unraid enables that by default. Couldn't get a VM to load with it enabled (there's a note on that below the XML).

      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
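
      For reference, turning hugepages on for a single VM is normally just the memoryBacking element below; whether the VM then starts also depends on the host having enough hugepages reserved, which might explain the failure to load, though that's only a guess:

          <memoryBacking>
            <hugepages/>
          </memoryBacking>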