bastl

Everything posted by bastl

  1. @sonicsjuuh Just as an option, you can also use a plugin and the ControlR app to start/restart/stop your VMs or Dockers from your phone.
  2. I know that feeling. There is always that one guy who has another little tweak 😂 Already seeing a 19% improvement in the memory score with this set. I'll also test hugepages. By using "interleave" you spread the RAM across all memory controllers from all nodes, even the ones from the node you're maybe not using in the VM. On first gen TR4 this was a big issue, because it added a lot of RAM latency. Sure, you get the higher memory bandwidth by using "quad channel", but in most of my test scenarios the lower latency was the preferred option. Not exactly sure how big of a difference it is on second gen TR4, but using "Preferred" or "Strict" was the better choice for me. Every program, game or benchmark is more or less affected by the lower bandwidth of basically turning the RAM into a dual channel configuration, but I saw the bigger impact from reducing the latency with the "strict" setting. Maybe have a look at the "Cache & Memory Benchmark" which comes with AIDA64 to test this. This is part of the extra CPU flags I have been using for a while now:

        <cpu mode='custom' match='exact' check='full'>
          <model fallback='forbid'>EPYC</model>
          <topology sockets='1' cores='7' threads='2'/>
          <cache level='3' mode='emulate'/>
          <feature policy='require' name='topoext'/>
          <feature policy='disable' name='monitor'/>
          <feature policy='require' name='hypervisor'/>
          <feature policy='disable' name='svm'/>
          <feature policy='disable' name='x2apic'/>
        </cpu>

     By forcing Windows to recognize the CPU as an EPYC with these tweaks, it also recognizes the correct L1, L2 and L3 cache sizes which the node has to offer. Without it, wrong cache sizes and wrong mapping numbers were shown. Without these tweaks and the correct readings, starting up 3DMark for example always crashed or froze the VM completely at the point where it gathers the system info. Not sure which other software might be affected, but this helped me in this scenario. Obviously the vcore is reported wrong, but the cache info is reported correctly with this tweak. One core is used for the iothread and emulatorpin

        <emulatorpin cpuset='8,24'/>
        <iothreadpin iothread='1' cpuset='8,24'/>

     and the rest only for this one VM. One of the two 8 core dies of the 1950X is dedicated to this VM only, and by adding up the numbers it exactly matches AMD's specs. BUT this isn't the complete list of tweaks. There are way more you can play around with 😂😂😂

        <cpu mode='custom' match='exact' check='full'>
          <model fallback='forbid'>EPYC-IBPB</model>
          <vendor>AMD</vendor>
          <topology sockets='1' cores='4' threads='2'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='tsc_adjust'/>
          <feature policy='require' name='arch-capabilities'/>
          <feature policy='require' name='cmp_legacy'/>
          <feature policy='require' name='perfctr_core'/>
          <feature policy='require' name='virt-ssbd'/>
          <feature policy='require' name='skip-l1dfl-vmentry'/>
          <feature policy='require' name='invtsc'/>
        </cpu>

     At some point I stopped, because back then I had no time to fiddle around with it any further and the system was stable enough anyway. Main programs run fine and games performed great.

     Edit: Forgot to mention, CoreInfo reports "Hyperthreaded" for me.
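     For context, a minimal sketch of what the matching <cputune> pinning could look like for that dedicated die. The core numbering (8-15 with HT siblings 24-31) is an assumption for a 1950X; check the CPU pinning page on your own system before copying anything:

        <vcpu placement='static'>14</vcpu>
        <iothreads>1</iothreads>
        <cputune>
          <!-- 7 cores plus their HT siblings from the second die for the guest -->
          <vcpupin vcpu='0' cpuset='9'/>
          <vcpupin vcpu='1' cpuset='25'/>
          <vcpupin vcpu='2' cpuset='10'/>
          <vcpupin vcpu='3' cpuset='26'/>
          <vcpupin vcpu='4' cpuset='11'/>
          <vcpupin vcpu='5' cpuset='27'/>
          <vcpupin vcpu='6' cpuset='12'/>
          <vcpupin vcpu='7' cpuset='28'/>
          <vcpupin vcpu='8' cpuset='13'/>
          <vcpupin vcpu='9' cpuset='29'/>
          <vcpupin vcpu='10' cpuset='14'/>
          <vcpupin vcpu='11' cpuset='30'/>
          <vcpupin vcpu='12' cpuset='15'/>
          <vcpupin vcpu='13' cpuset='31'/>
          <!-- core 8 and its sibling 24 are kept for the emulator and iothread -->
          <emulatorpin cpuset='8,24'/>
          <iothreadpin iothread='1' cpuset='8,24'/>
        </cputune>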
  3. @jbartlett Did you by any chance set a strict RAM allocation to the node whose cores you're using? If not, you might have to test this again. Without this setting, unraid will use RAM from all nodes.

        <numatune>
          <memory mode='strict' nodeset='1'/>
        </numatune>

     The following shows you from which node the VMs take their RAM:

        numastat qemu
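     Just as a rough sketch of where that block sits in the VM's XML, right alongside the memory settings at the top level of the domain (node 1 here is only an example; use the node your pinned cores actually belong to):

        <memory unit='KiB'>16777216</memory>
        <currentMemory unit='KiB'>16777216</currentMemory>
        <numatune>
          <!-- allocate guest RAM only from NUMA node 1 -->
          <memory mode='strict' nodeset='1'/>
        </numatune>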
  4. @luca2 The issue the Threadripper chips had is that they used 2 or 4 dies where the resources (RAM, PCIe lanes etc.) are shared between them. On 1st gen you ended up with a higher latency which could cause a small performance decrease when mixing cores from 2 dies. With Ryzen 2nd gen this shouldn't be that big of an issue anymore, and with 3rd gen the latency is reduced even further. In your case I guess cores 0/1/2 and their HT siblings are on the same chiplet and the rest on the second. You can kinda test it: set up a VM with, let's say, cores 1/7 and 2/8, run some memory benchmarks, and run the same benchmarks again after changing the cores to 1/7 and 3/9 (see the sketch below). You will maybe see slightly higher RAM latencies, but in general you won't notice it in games anyway. Always keep in mind not to use core 0 and its HT sibling, because unraid itself always uses them for doing stuff in the background.
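     A minimal sketch of the two test configurations in the VM's XML, assuming 7 is the HT sibling of core 1, 8 of core 2 and 9 of core 3 (that pairing depends on your CPU, so verify it on the CPU pinning page first). First run with cores 1/7 and 2/8:

        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='7'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='8'/>
        </cputune>

     Second run with cores 1/7 and 3/9:

        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='7'/>
          <vcpupin vcpu='2' cpuset='3'/>
          <vcpupin vcpu='3' cpuset='9'/>
        </cputune>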
  5. @cap089 You can limit your dockers to specific cores like you can do with VMs. If you don't do that, they will grab whatever cores are free. In a situation where a docker uses a lot of CPU cycles or RAM, it might affect your gaming VM. Go to Settings > CPU Pinning and make sure the cores for your dockers and VMs don't overlap, or shut down your dockers when gaming.
  6. @jbartlett Just an idea: switch the slot for your card. Maybe the one you are using is wired to the CPU via the chipset and limits the card. Other devices like USB or network controllers often share that same x4 connection to the CPU. Maybe that's your bottleneck.
  7. @jbartlett Which "CPU Scaling Governor" are you using?
  8. @jbartlett Maybe you have set some cores which are not directly attached to the memory? Did you manually change the topology for the VM?

        <topology sockets='1' cores='8' threads='2'/>

     I always do this for all my VMs to match the actual core/thread count. The default is always all selected cores with 1 thread each.
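     For reference, a sketch of how that line sits inside the <cpu> block, with the vcpu count matching cores x threads (8 x 2 = 16 in this example; adjust both to your pinned cores):

        <vcpu placement='static'>16</vcpu>
        <cpu mode='host-passthrough' check='none'>
          <!-- present the 16 pinned threads to the guest as 8 cores with 2 threads each -->
          <topology sockets='1' cores='8' threads='2'/>
        </cpu>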
  9. So you're only experiencing this with i440fx VMs? Maybe I'll have some time tomorrow to test i440fx on the 6.8 RC. I'm running all my VMs on Q35 for quite some time now without any problems.
  10. @jbartlett Never experienced this issue on my 1950x with OC. Windows VMs with and without passthrough, default template or tweaked settings, I never noticed anything like you described. Maybe a temp issue?! What cooling solution are you using?
  11. @iJumbo Did you try to set the machine type of the VM to Q35? Some users with AMD passthrough issues had success with this. Trying a different slot can also help, or if you have an old GPU lying around, put that one in the first slot.
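     If you want to check what the VM currently uses, the machine type shows up in the <os> block of the XML, roughly like this (the exact version suffix depends on your unraid/QEMU version, so treat this as an example only):

        <os>
          <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        </os>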
  12. The custom QEMU arguments are still working. Starting with QEMU 4.0 they changed the naming to x-speed and x-width. From the official QEMU 4.0 changelog:

     PCI/PCIe: Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links.

     This is how it looks in 6.8 RC5 now:

        <qemu:commandline>
          <qemu:arg value='-global'/>
          <qemu:arg value='pcie-root-port.x-speed=8'/>
          <qemu:arg value='-global'/>
          <qemu:arg value='pcie-root-port.x-width=16'/>
        </qemu:commandline>

     Nvidia System Info is reporting the correct link speeds again.
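     One thing worth double-checking if the arguments seem to be ignored: libvirt only accepts a <qemu:commandline> block when the QEMU namespace is declared on the root <domain> element, which unraid's templates normally already include:

        <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>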
  13. @justvano Try to use a different PCI slot.
  14. @darthcircuit 3-5 fps more or less? Come on 🤨 A 1080 Ti is the bare minimum for 4K, and even then 60 fps on highest settings isn't reachable in most games. If you have a high refresh rate monitor and want a good 4K experience, get a 2080 Ti 😂
  15. True for dedicated cards, but not for onboard controllers, which may be connected to the chipset, grouped with other devices and only split up by the ACS override. Lots of people have problems passing through onboard controllers. ACS is only a workaround which won't work in all scenarios. A dedicated USB card is the way to go. This is why I told him to test without it and reduce the passed through devices to a minimum to narrow down the issues. Yesterday a guy had issues where a VM shutdown or restart caused the server to freeze, and guess what the culprit was? A passed through USB controller 😉
  16. @cap089 From your XML it looks like you're passing a lot of stuff to your VM. It's advised to reduce it to a minimum. Besides the GPU and its audio, you're passing through

     IOMMU group 23: [10ec:8125] 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8125
     IOMMU group 26: [1022:149c] 06:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Device 149c
     IOMMU group 27: [1022:149c] 06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 149c

     Are you sure you really need all of these? Ethernet can be handled via a virtual NIC and is faster (up to 10gig) when you transfer files to unraid, provided you're using a fast cache for your shares. Onboard USB controllers are often really tricky to get working. Did you try to pass through only the GPU without anything else, get that working, and add the onboard audio next? Mouse and keyboard you can add via unraid's web UI.
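     Just to illustrate the "GPU plus its audio only" starting point, the hostdev entries in the XML would look roughly like this (the bus/slot numbers are made up for the example; use the addresses of your own card from the system devices listing):

        <!-- GPU, example address 0a:00.0 -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
        <!-- GPU audio function, example address 0a:00.1 -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
          </source>
        </hostdev>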
  17. I bet you have a free USB header on your motherboard which you can use 😉 Most modern boards have 2 or even more, and in most cases only one is used for the front panel USB ports of the case.
  18. Ever thought about using an external USB drive, or buying a cheap adapter for your existing one and passing that through? CD/DVD drives in 2019 😂 just sayin'
  19. Starting with 6.8 RC1 I had to remove the extra qemu command line arguments and haven't noticed any graphics performance decrease yet. Not in RC1, RC4 or RC5.
  20. @gambit820 Only first gen Ryzens and older BIOS versions are affected by this.
  21. @darthcircuit Starting with 6.8 RC1 I had to remove the pcie-root-port patch. Sure, it shows x1 in the Nvidia control panel, but I can't notice any performance drop like on the qemu 3.x versions. A couple of people are using qcow2 on an underlying xfs filesystem, which is the main issue with qemu 4.1. Compressed qcow2 files are only an issue on top of the vdisk corruption. 4.1.1 will maybe have a fix, and if not, 4.2 for sure, which is already being worked on. https://bugs.launchpad.net/qemu/+bug/1847793