
About Tritech


  1. 0 issues on RC4. For what it's worth, I've never had any of the DB corruption issues either.
  2. I just get the simple reCAPTCHA checkbox, but I haven't done anything special. EVGA 1080 Ti.
  3. With the way XMLs get modified when you alter any setting in the form view, can we get a way to clone or copy an XML before making changes? I've got over a hundred XML variants from trying to track down better latency, and being able to take a known-good working XML, make a copy of it, and then work off that would be nice. Maybe even being able to set a template for your common settings as well.
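Until there's a built-in clone button, a tiny script can snapshot an XML before each edit. This is a sketch; the paths in the example call are typical Unraid locations, not confirmed for your setup:

```shell
#!/bin/sh
# backup_xml FILE DEST_DIR: copy FILE into DEST_DIR with a timestamp
# suffix, so a known-good VM XML survives the next form-view edit.
backup_xml() {
    mkdir -p "$2"
    cp "$1" "$2/$(basename "$1" .xml)-$(date +%Y%m%d-%H%M%S).xml"
}

# Example (paths are assumptions for a typical Unraid box):
# backup_xml /etc/libvirt/qemu/Windows10.xml /boot/config/xml-backups
```

Putting the backups on the flash drive (/boot) means they survive a reboot, since /etc/libvirt lives in RAM on Unraid.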
  4. GeForce Experience works fine on my end. Did you try again later? Maybe they were having an outage.
  5. Can anyone point me in the right direction for enabling hugepages? Will 32GB be enough with only one VM allocating 10GB and a handful of Dockers?
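For reference, the usual libvirt route (a sketch based on standard kernel/libvirt documentation, not verified on this exact box): reserve hugepages on the host, then tell the VM to back its memory with them. With the default 2MB hugepages, a 10GB VM needs 5120 pages reserved up front, which would leave roughly 22GB of the 32GB for Unraid and the Dockers.

```
# On the host (on Unraid this can also go on the syslinux.cfg
# append line as hugepages=5120 so it's reserved at boot):
echo 5120 > /proc/sys/vm/nr_hugepages    # 5120 x 2MB = 10GB

# In the VM's XML, directly inside <domain>:
<memoryBacking>
  <hugepages/>
</memoryBacking>
```

Reserving at boot is more reliable, since memory fragments over uptime and a late reservation can fail to find enough contiguous 2MB blocks.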
  6. https://pastebin.com/7CG7ntKw Take a look at mine; I'm running a similar system (ASRock X399 Fatal1ty, 1950X, 1080 Ti, passed-through NVMe controller and a few USB controllers). Specifically, look at the differences in the <hyperv> and <cpu> sections. I've got latency down to reasonable levels. I still have audio pops and crackles now and then, but it brought my latency down from over 2000 to around 100 on average. You could also experiment with the "<topology sockets='1' cores='8' threads='1'/>" line and try <topology sockets='1' cores='4' threads='2'/>, or however many cores you allocated.
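For anyone following along, those topology lines sit inside the <cpu> element, and sockets x cores x threads must equal the VM's <vcpu> count. A sketch of both variants for an 8-vCPU allocation (the host-passthrough mode shown is an assumption; use whatever your XML already has):

```
<cpu mode='host-passthrough' check='none'>
  <!-- 8 vCPUs exposed as 8 single-threaded cores: -->
  <topology sockets='1' cores='8' threads='1'/>

  <!-- or the same 8 vCPUs exposed as 4 hyperthreaded cores,
       closer to Zen's real core/SMT layout when you pin
       sibling thread pairs: -->
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```

Only one <topology> line goes in the XML at a time; the threads='2' form generally makes sense when each pinned pair of vCPUs maps to real SMT siblings on the 1950X.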
  7. Change any instances of "pcie-root" to "pci-root".
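If there are several XMLs to fix, sed can do the swap. One caveat: a bare pcie-root substitution would also mangle pcie-root-port entries, so anchoring on the surrounding quotes avoids that. A sketch, with hypothetical file paths:

```shell
#!/bin/sh
# fix_root FILE: print FILE with the pcie-root controller model changed
# to pci-root, leaving any pcie-root-port controllers untouched.
fix_root() {
    sed "s/model='pcie-root'/model='pci-root'/g" "$1"
}

# In-place across all VM definitions (path is an assumption):
# sed -i "s/model='pcie-root'/model='pci-root'/g" /etc/libvirt/qemu/*.xml
```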
  8. @jbartlett Off topic, but do you have a single controller card you can recommend, preferably 4-port, that plays nice with USB 3.0? Looking for something in the sub-$50 USD range.
  9. I'm on Pro. I have all Windows updates off, and I usually update all the machines in the house monthly. Like I said, in this last instance the machine went down and the screen went black while it was in use. It's just odd that it pegs the cores when it crashes.
  10. Hey, bastl. We've exchanged a few times here on the "at my wits end..." thread dealing with latency. I have the same board as you. I haven't changed anything in over a month or so. Software-wise in the VM, I haven't really installed anything new, just Steam and browser updates. The Windows logs looked fine, no weird errors other than the one above when trying to restart it. Maybe it was just a fluke. I keep my VM running 24/7. Should I just bring it down every few days as a preventative measure?
  11. Currently on 6.7 RC5. My VM has crashed twice this week. Once while I was asleep: I woke up to a functioning server, all Dockers and shares were accessible, but the main Win 10 VM was down and all its allocated cores were pegged at 100%. Stopping the VM fixed the CPU usage, but it wouldn't restart. The only thing I could see in the VM log was "host-cpu char device redirected to /dev/pts/0 (label charserial0)", followed by a line stating that it crashed. Only a hard reboot solved the issue. The second time was just now: while I was browsing, the screen went black, and the same cores maxed out, just the ones pinned to the VM. Any ideas of what to look for the next time it happens?
  12. I've converted to a VM and back a few times to test with the same Windows install. I'm not sure what you mean by this; I'm assuming you mean configuring the VM. This is my current VM; the XML is heavily customized, but you should get the idea from it. There is no vdisk location because I've passed through the NVMe controller you see at the bottom. As for extra vdisk images, just check when they were last modified and whether any existing VMs point to that .img.
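That last check can be scripted. A sketch that flags images no VM definition references; the Unraid paths in the example call are assumptions:

```shell
#!/bin/sh
# find_orphans IMG_DIR XML_DIR: print any .img under IMG_DIR whose
# filename appears in none of the VM definitions in XML_DIR, along
# with its last-modified date.
find_orphans() {
    for img in "$1"/*.img "$1"/*/*.img; do
        [ -e "$img" ] || continue
        if ! grep -qs "$(basename "$img")" "$2"/*.xml; then
            echo "possibly orphaned: $img (modified $(date -r "$img" +%F))"
        fi
    done
}

# Typical Unraid locations (assumptions):
# find_orphans /mnt/user/domains /etc/libvirt/qemu
```

Matching on the basename keeps it working even if the XMLs reference the image through a different mount path.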
  13. <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
      Tested the new RC4 and it looks like it's working!
  14. Just chiming in to say I'm waiting for this and any other Threadripper performance-related changes as well.