Everything posted by jbartlett

  1. Already seeing a 19% improvement in the memory score with this set. I'll also test hugepages.
  2. This is what it took to get it to divide up the memory between the nodes:

     <numatune>
       <memory mode='interleave' nodeset='0,2'/>
     </numatune>

     Couldn't use any of the "auto" methods because numad isn't part of the Unraid package.
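     For context, a minimal sketch of where that snippet sits in a libvirt domain definition. The domain name and memory size here are placeholders I've chosen to match the 12GB VM discussed below, not values from an actual config:

     ```xml
     <domain type='kvm'>
       <name>win10-example</name>            <!-- placeholder name -->
       <memory unit='KiB'>12582912</memory>  <!-- 12 GiB -->
       <!-- interleave guest memory across host NUMA nodes 0 and 2 -->
       <numatune>
         <memory mode='interleave' nodeset='0,2'/>
       </numatune>
       <!-- cpu, devices, etc. omitted -->
     </domain>
     ```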
  3. Ya know, I had a feeling someone would pop in and tell me all my tests were invalid because there was another optimization. Ha! Sa'right. I'm in the process of recreating the VM with the NUMA setting in place from the start, and I'll retest the NUMA config with the memory pinned to nodes 0 & 2. It was grabbing all 12G of RAM from node 0.
  4. CPU: Threadripper 2990WX with a VM running Windows 10 fully patched. RAM: G.SKILL Ripjaws 4 Series 64GB (8 x 8GB) DDR4 2133 (PC4 17000). Motherboard: ASUS ROG Zenith Extreme Alpha X399. MB & RAM are at stock settings, CPU governor set to Performance. The VM is pinned to NUMA nodes 0 & 2, which have the PCIe & RAM attached, utilizing all CPUs, and the emulator pin is on NUMA node 1, CPU 16. Total OS memory assigned is 12GB.

     No NUMA:
     Benchmark   CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM
     CPU Topo    1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2
     Average       6572   20944    1261 |   6515   20831    1257
     Highest       6620   21085    1263 |   6443   20873    1258
     Lowest        6537   20810    1255 |   6484   20805    1254
     Variance        83     275       8 |     59      68       4

     NUMA:
     Benchmark   CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM
     CPU Topo    1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2
     Average       6408   20617    1389 |   6539   20958    1300
     Highest       6525   20728    1391 |   6589   21144    1306
     Lowest        6438   20455    1385 |   6511   20746    1297
     Variance        87     273       6 |     78     398       9

     The first set has no NUMA node configuration. The 1/16/2 pairing shows a roughly 0.7% drop in CPU performance and a negligible difference in RAM performance. However, the variance - the difference between the high & low scores - was much lower, which indicates increased stability in processing speeds at a slight performance cost. With a NUMA configuration, things were different. Cinebench showed roughly the same performance, but PerformanceTest 9.0 showed a larger variance in scores, with several much higher scores that were dropped in order to bring the variance down below a thousand. Where the NUMA configuration clearly helped is RAM test scores, if you create NUMA nodes in the guest OS to match the host.
     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='32' threads='1'/>
       <numa>
         <cell id='0' cpus='0-15' memory='6291456' unit='KiB'/>
         <cell id='1' cpus='16-31' memory='6291456' unit='KiB'/>
       </numa>
     </cpu>

     In short, since AMD CPUs do not present hyperthreaded CPUs to the guest OS, setting cores=16/threads=2 shows mixed results depending on whether you specify a NUMA node or not. It's probably best to always use threads=1 so it matches what the guest OS sees.
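     As a quick sanity check on the cell sizing above (a small sketch; the helper name is mine, not from libvirt):

     ```python
     # Each libvirt <cell> above gets 6291456 KiB of memory; verify the
     # two cells together equal the 12GB assigned to the VM.
     def kib_to_gib(kib: int) -> float:
         """Convert KiB to GiB (1 GiB = 1048576 KiB)."""
         return kib / 1048576

     cells = [6291456, 6291456]  # memory= values from the two <numa> cells
     print(kib_to_gib(sum(cells)))  # 12.0
     ```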
  5. I'll give that a shot. For information's sake, having an emulated CPU gave a 9% boost in CPU performance with PerformanceTest 9.0 on 1/16/2, but only on every OTHER test. On the odd tests, it scored the same as 1/32/1. Twenty tests over two runs showed the same pattern. Cinebench R20 showed comparable scores between 1/32/1 & 1/16/2.
  6. CPU is half if threads=2. Did you mean a quarter? I tested whether such a thing would even boot with cores & threads reversed; it did, but I didn't do any benchmarks with it. That was back when I was trying to figure out how to get the AMD guest OS to see hyperthreaded CPUs, before I discovered that AMD doesn't support it.
  7. My testing shows that setting up a NUMA configuration in your guest benefits memory speed but not really CPU performance on an AMD system. Best to leave threads=1 for slightly improved CPU performance over threads=2. Edit: ARGH! Somehow, the CPU Mode ended up set to Emulated instead of Passthrough. I didn't make that change, but let me re-do these tests yet again.
  8. @limetech - I've been experiencing network drops on my 10G network card, and from my investigation it's due to a memory leak in the driver which was patched in version 1.6.13. It looks like Unraid is loading 1.5.44.

     Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1499321 - the meat of the discussion is just a little past halfway. Affects onboard NICs & PCIe cards.

     Nov 12 01:42:57 VM1 kernel: atlantic: link change old 10000 new 0
     Nov 12 01:42:57 VM1 kernel: br0: port 1(eth0) entered disabled state
     Nov 12 01:43:11 VM1 kernel: atlantic: link change old 0 new 10000
     Nov 12 01:43:11 VM1 kernel: br0: port 1(eth0) entered blocking state
     Nov 12 01:43:11 VM1 kernel: br0: port 1(eth0) entered forwarding state

     root@VM1:~# ethtool -i eth0
     driver: atlantic
     version: 5.3.8-Unraid-kern
     firmware-version: 1.5.44
     expansion-rom-version:
     bus-info: 0000:0a:00.0
     supports-statistics: yes
     supports-test: no
     supports-eeprom-access: no
     supports-register-dump: yes
     supports-priv-flags: no

     vm1-diagnostics-20191112-1903.zip
  9. Just learned something new. Using Microsoft's "Coreinfo" utility, Intel CPUs that support hyperthreading will show up as hyperthreaded CPUs to the guest regardless of whether you have threads=1 or threads=2. AMD CPUs, notably the Threadripper series (the only ones I have), do not. I was clued in when I was viewing the VM logs and saw a warning that AMD doesn't support the feature (which doesn't show up all the time either). For the Unraid GUI, it seems to have auto-flagged my Intel CPU for threads=2, 'cause I don't recall making that change myself (my mileage will vary).
  10. SeaBIOS VM scores roughly 2% higher than OVMF BIOS on CPU benchmarks with Windows 10.
 11. If a VM name contains a + sign (example: Arch+Test), the following is displayed if you click on the VM icon and select "Logs":

     /usr/bin/tail: cannot open '/var/log/libvirt/qemu/Arch Test.log' for reading: No such file or directory
     /usr/bin/tail: no files remaining

     Created a VM named "Arch-Test". Able to view logs.
     Created a VM named "Arch+Test". Unable to view logs.
     Created a VM named "Arch Test". Able to view logs.
     Viewed logs on "Arch+Test", saw logs for "Arch Test".

     The + sign is being translated into a space.
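     That + → space translation is exactly what form-style URL decoding does, which suggests the VM name is being passed through a URL decode somewhere along the way (where in the Unraid code that happens is my assumption). A quick illustration:

     ```python
     from urllib.parse import quote, unquote_plus

     name = "Arch+Test"
     # Form-style URL decoding treats '+' as an encoded space,
     # which reproduces the mix-up with "Arch Test".
     print(unquote_plus(name))         # Arch Test
     # A literal '+' survives only if it was percent-encoded first:
     print(unquote_plus(quote(name)))  # Arch+Test
     ```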
 12. I see many people commenting that they always change their VM XML from something like the following to improve system performance. I've done the same, but I came across Microsoft's "Coreinfo" utility, which revealed that the VM wasn't actually seeing any hyper-threaded CPUs. So I decided to benchmark it using the following topologies on an existing Win10 VM.

     Iteration 1: <topology sockets='1' cores='32' threads='1'/>
     Iteration 2: <topology sockets='1' cores='16' threads='2'/>

     Based on my testing, I do not see any improvements. I used Cinebench R20 & PerformanceTest 9.0, running each ten times and taking the max & average score, discarding any test that scored too far off the high & low end of the variance (the difference between the high & low scores) until the variance fell into an acceptable value, based on my observations from running these benchmarks dozens of times. I picked these because I needed a couple and they were quick & easy to set up (I've lost count of how many Win10 VMs I've set up in the past month). For the initial test, I ran with cores=32 threads=1 and then tried to get close to or better variance on the cores=16 threads=2 test.

     In my scenario, I'm running a Threadripper 2 2990WX with CPUs 0-15 & 32-47 pinned (NUMA nodes 0 & 2, which have the PCIe & RAM attached) and the emulator pin on CPU 16 (NUMA node 1). This particular config is my intended use case: video broadcasting with Livestream Studio outputting a 1080p@60fps stream to YouTube plus at least two NDI streams of a 1080p@60fps video feed to be consumed by other VMs/PCs. OS is Windows 10 fully updated, GPU is a Quadro P2000.

     Benchmark   CB R20  PT CPU  PT RAM | CB R20  PT CPU  PT RAM
     CPU Topo    1/32/1  1/32/1  1/32/1 | 1/16/2  1/16/2  1/16/2
     Average       6572   20944    1261 |   6515   20831    1257
     Highest       6620   21085    1263 |   6443   20873    1258
     Lowest        6537   20810    1255 |   6484   20805    1254
     Variance        83     275       8 |     59      68       4

     The average scores are lower in the 1/16/2 config but they're also tighter.
I'm currently running this test again passing in the numa configuration to match the host. Past test runs have shown marked improvements.
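     The aggregation described above (average, highest, lowest, and "variance" as the high-low spread) can be sketched as follows. The scores here are made-up placeholders, not my actual benchmark runs:

     ```python
     def summarize(scores):
         """Summarize benchmark runs: average, highest, lowest, and
         'variance' as used in the tables (highest minus lowest)."""
         return {
             "average": sum(scores) / len(scores),
             "highest": max(scores),
             "lowest": min(scores),
             "variance": max(scores) - min(scores),
         }

     # Placeholder example scores (NOT real benchmark data)
     runs = [6540, 6600, 6570]
     print(summarize(runs))
     # → {'average': 6570.0, 'highest': 6600, 'lowest': 6540, 'variance': 60}
     ```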
 13. FYI - These posts came from a 6.8.0 release thread, but I'm replying here as it's more on-topic. I've been having issues getting VMs to run with video passthrough if I OC the MB by any amount. I haven't seen any difference between halving the cores and setting the threads to 2. Based on Microsoft's "Coreinfo" utility, the VM still sees single CPUs, none being hyper-threaded. I have not been successful in finding an XML config that passed the CPU pairing to the OS such that the VM saw a hyper-threaded CPU. I'm doing another round of benchmarks with 10 iterations; I'll comment again when done. I still wasn't able to OC with the P2000 in the 3rd PCIe slot after moving it from the 2nd PCIe slot. In addition, I wasn't able to run any SeaBIOS VMs at all passing the card in slot #3 - the most I'd see is the monitor activate, but never even a POST. Graphics scores on the Quadro P2000 were in the same ballpark running in the x8 slot and the x16 slot. The 2nd & 3rd slots attach to different NUMA nodes, but taking that into consideration with CPU pinning made no difference. My current guess is that it's due to having a different GPU (GeForce GT 1030) in the 1st PCIe slot for Unraid to bind to. Unfortunately, I don't have two identical GPUs that I could put in to test that theory.
 14. Based on my own benchmarks with a 2990WX, the Infinity Fabric that lets the different dies communicate has a barely noticeable effect when I forced GPUs and memory to talk off-die: no difference in GPU performance, a slight dip in memory performance.
  15. I'll take this off-thread for any further notes and share it on my thread for this MB.
 16. I don't really even know what that is, other than I've seen it posted in the forums a few times. Did some searching, installed the Tips & Tweaks plugin. The governor was set to "On Demand". I set it to Performance, rebooted & restarted the VM to make sure it still loaded as intended, then OC'ed the CPU to 6% using its wizard. It took a couple minutes before I saw the BIOS load, and I could see the text outputting character by character, probably a quarter second per line. It reminds me of the old 2400 baud modem days.

     Did some tinkering around, creating a blank VM with no drives attached, 2 CPUs, and 2GB of RAM. It was all slow until I removed the Quadro P2000 - the VNC client then showed it booting really fast.

     I actually DO want to squeeze as much juice out of this as I can. I'm a little stoked because I found a combination that allowed my multiple Brio 4K cams to work without flickering. My goal with this build is to have a 32-CPU VM running Livestream Studio for my 24x7 Foster Kitten Cam on the two main NUMA nodes, and smaller Ubuntu VMs running OBS on the off-nodes feeding other streams to YouTube (multi-cam viewing, ya'll!). I'm going to dig up the MB manual and dig into the BIOS to see if I can find anything causing the slowdown on OC with video passthrough.
  17. Meh, I've got something going on here. Tried OC'ing with SeaBios and got the same laggy thing. I'm going to wipe my hands of this and stick to stock speeds.
 18. I had the VM pinned directly to the two NUMA nodes that have the memory attached (all CPUs). Hrm, the emulatorpin'd CPU was on one of the other NUMA nodes, but it didn't pose any other issues if that is the root cause. The Infinity Fabric on the Threadripper 2 chips seems to be so fast that I couldn't find any degradation even when pinning a VM off-node, forcing everything to go through the Infinity Fabric.
 19. Q35-4.0.1 + OVMF + emulated CPU. I just started tinkering with i440fx VMs and haven't gotten far enough into my test scenarios yet to even try an OC.
 20. Noctua cooler & fans. Had no issues with other types of VMs with OVMF BIOS or bare metal, even with a much higher OC. Just started tinkering with SeaBIOS to test whether ingesting two Brio 4K cams set to 1080p@60 is steadier via a passed-through USB3 4-port controller card. A Quadro P2000 is also passed through.
  21. Just an observation on my Threadripper 2990WX with a test VM using QEMU Emulated CPU's. If I overclock the CPU via the BIOS, even just a teeny little bit, my Win10 VM becomes extremely sluggish, even just to start the spinning circles at boot. The one time it actually managed to get to the Desktop, the mouse pointer was jumping around like it was running at 5fps. Go back to stock settings in the BIOS, no issues.
 22. Now here's something interesting. Switching bonding from N to Y worked. Immediately changing it from Y to N also worked. Repeated successfully several times. Rebooted, was still able to change it successfully. vm1-diagnostics-20191108-1140.zip

     Removed the two network cfg files and rebooted. I was once again not able to change bonding from Y to N as above. Rebooted and was able to repeat the behavior above - after rebooting, I was able to switch between bonding & non-bonding dynamically at will with no issues.

     So I looked into what's different between the two. I took snapshots of /boot/config, /usr/local/emhttp, and /var/local/emhttp and compared them before & after. Rebooting after removing the network*.cfg files caused things to default; network.cfg did not exist after booting, probably because no changes had been made yet. The notable differences I could see before & after were in network.ini once bonding was enabled again. I don't think there is anything wrong with your code. I suspect the issue is in the defaults applied when no network.cfg file exists.
  23. It was already set to eth0. Changing it to "lo" and then back to "eth0" corrected it. It worked properly again after a reboot. I'm assuming that there's a persistent setting that gets updated when the interface to monitor is changed but when changing the bonding from Y to N, that setting isn't updated to something other than "bond0".
 24. Spoke too soon. Pulled up the Dashboard after rebooting and I see this:

     Warning: file_get_contents(/sys/class/net/bond0/statistics/rx_bytes): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/include/DashUpdate.php on line 338
     Warning: file_get_contents(/sys/class/net/bond0/statistics/tx_bytes): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/include/DashUpdate.php on line 339

     0.0 bps
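     The warnings above come from reading a sysfs counter for an interface (bond0) that no longer exists after bonding is disabled. A guard along these lines would avoid them - Python used purely for illustration (the actual Dashboard code is PHP, and the helper name is mine):

     ```python
     import os

     def read_iface_counter(iface: str, counter: str) -> int:
         """Return a sysfs network counter, or 0 if the interface is
         gone (e.g. bond0 after bonding is switched off)."""
         path = f"/sys/class/net/{iface}/statistics/{counter}"
         if not os.path.exists(path):
             return 0
         with open(path) as f:
             return int(f.read().strip())

     # No warning, just a zero rate, when bond0 doesn't exist:
     print(read_iface_counter("bond0", "rx_bytes"))
     ```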
  25. If you're feeling spunky, try downgrading your BIOS a version.