jbartlett

Community Developer

Everything posted by jbartlett

  1. It's the same kernel. 6.8.1 RC1 mainly contains fixes for the hypervisor. The 5.x kernel is being introduced in 6.9.
  2. 6.8.1 RC1: My passed-through USB keyboard was still returned to the OS during the Windows 10 boot process.
  3. Did some on/off node testing with a TR 2990WX (off node = no direct access to RAM/devices). Configured a VM with four NUMA nodes of two CPUs each (4 with HT) and 4 GB of RAM (1 GB per VM NUMA node), all set to hit against the same physical NUMA node. Tests were done on UNRAID 6.8.1 RC1 with the TR being passed through and the guest seeing both the hyperthreaded CPUs and the NUMA nodes. Benchmarks were Cinebench R20 for CPU and AIDA64 for memory. Each score is an average of 5 runs with outliers dropped (results too far off the variance). Each pass moved one VM node from a NUMA node with direct memory access to one without.

     On Numa   R20    Read (MB/s)   Write (MB/s)   Copy (MB/s)   Latency (ns)
        4      3663      31944         32558          32159          93.5
        3      3645      30786         30483          30292          93.7
        2      3587      27387         29982          27557          93.6
        1      3607      19153         19809          20805          93.5
        0      3526      14693         15097          17033         162.1

     CPU scores saw a diminishing gain as the CPUs got moved off to a NUMA node without direct memory access, as expected, but the evenly spread score was surprisingly low. Memory scores show a clear benefit to having at least one CPU on a node with direct memory access, with the times being negatively impacted by nearly 100% when completely isolated.
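     For reference, a guest NUMA layout like the one described above can be defined in the VM XML roughly as follows. This is a simplified sketch, not my exact config - the vCPU numbering and memory sizes are illustrative, and the host-side pinning is omitted.

     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='8' threads='2'/>
       <feature policy='require' name='topoext'/>
       <numa>
         <cell id='0' cpus='0-3' memory='1' unit='GiB'/>
         <cell id='1' cpus='4-7' memory='1' unit='GiB'/>
         <cell id='2' cpus='8-11' memory='1' unit='GiB'/>
         <cell id='3' cpus='12-15' memory='1' unit='GiB'/>
       </numa>
     </cpu>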
  4. I'm passing through my TR on 6.8.1 RC1 with no hacks or workarounds (but it does need the topoext CPU flag) and getting hyperthreading. I recommend upgrading. I haven't finished my benchmarking yet but am seeing some small improvements.
  5. Here's the controller link report for the 0000:08:00.0 device:

     08:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
             Capabilities: [68] Express (v2) Endpoint, MSI 00
                     LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns, L1 <1us
                             ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                     LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

     I have a lookup table that, for PCIe 3.0 with a transfer rate of 8 GT/s and a width of x8, returns "7.88 GB/s". Based on information from https://paolozaino.wordpress.com/2013/05/21/converting-gts-to-gbps/ I should be able to compute it and verify the reference table. The transfer rate of 8 GT/s identifies it as a PCIe 3.0 controller. 8 GT/s multiplied by 8 lanes equals 64 GT/s; multiplied by the line-code efficiency (LineCodeL/LineCodeH) of 128b/130b, that gives 63.02 Gb/s, which divided by 8 bits per byte works out to 7.88 GB/s, matching the table.
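     A minimal sketch of that computation as code (a hypothetical helper for illustration, not the actual DiskSpeed lookup):

     <?php
     // Hypothetical helper: effective PCIe link bandwidth in GB/s from the
     // transfer rate (GT/s), lane count, and line-code efficiency.
     // PCIe 3.0 uses 128b/130b; PCIe 1.x/2.x would use 8b/10b instead.
     function pcieBandwidthGBs(float $gtPerSec, int $lanes, int $codeLow, int $codeHigh): float
     {
         $gbitPerSec = $gtPerSec * $lanes * ($codeLow / $codeHigh); // usable Gb/s on the link
         return $gbitPerSec / 8; // 8 bits per byte -> GB/s
     }

     echo round(pcieBandwidthGBs(8.0, 8, 128, 130), 2) . " GB/s\n"; // prints "7.88 GB/s"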
  6. @icemansid - Please upload/email a debug file from the DiskSpeed app (link at the bottom of the page). Use the left button to create a regular/smaller report. The link speed uses the results from an "lspci -vmm" command, and the debug file will have what it returned.
  7. The DiskSpeed app simply reports what the dd command is outputting. If you're having issues reading all the drives at once, try removing half of the drives and then run a controller benchmark to see if you get the same results or better. If the results are the same, swap the halves: disconnect the drives you just tested and hook the other half back up. If they're better, add a couple of drives back and do another benchmark. Rinse & repeat. See if there's a magic number of drives or a specific drive that's causing the issue.
  8. @bonienl The VM GUI editor is hard coded to set the thread count to 1 if it detects an AMD processor in libvirt.php:

     // detect if the processor is AMD, and if so, force single threaded
     $strCPUInfo = file_get_contents('/proc/cpuinfo');
     if (strpos($strCPUInfo, 'AuthenticAMD') !== false) {
         $intCPUThreadsPerCore = 1;
     }

     This was due to AMD reporting no support for hyperthreading in a VM. With UNRAID 6.8.1 RC1, hyperthreading is supported with CPU passthrough as is (as is the CPU cache) if the CPU feature topoext is enabled. Previously, the CPU had to be forced to report as an EPYC to get it to support hyperthreading.

     <cpu mode='host-passthrough' check='none'>
       <topology sockets='1' cores='6' threads='2'/>
       <cache mode='passthrough'/>
       <feature policy='require' name='topoext'/>
     </cpu>

     Microsoft's CoreInfo returns:

     Coreinfo v3.31 - Dump information on system CPU and memory topology
     Copyright (C) 2008-2014 Mark Russinovich
     Sysinternals - www.sysinternals.com

     Logical to Physical Processor Map:
     **----------  Physical Processor 0 (Hyperthreaded)
     --**--------  Physical Processor 1 (Hyperthreaded)
     ----**------  Physical Processor 2 (Hyperthreaded)
     ------**----  Physical Processor 3 (Hyperthreaded)
     --------**--  Physical Processor 4 (Hyperthreaded)
     ----------**  Physical Processor 5 (Hyperthreaded)

     Note that changes to the CPU layout may not be detected in the VM until the VM is rebooted from inside the VM itself (for example: Start > Power > Restart).

     Prior to 6.8.1 RC1, I could not get CPU-Z to run; it would always hang at the 10%/Processors stage on load. It still takes a bit but does return now.
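     As a rough sketch of the kind of change I mean (variable names follow the snippet above; $arrCPUFeatures is a hypothetical placeholder for wherever the editor builds the <cpu> block, and the actual fix is of course up to you):

     // Sketch only: keep the detected threads per core on AMD and request
     // the topoext CPU feature instead of forcing single-threaded.
     $arrCPUFeatures = [];
     $strCPUInfo = file_get_contents('/proc/cpuinfo');
     if (strpos($strCPUInfo, 'AuthenticAMD') !== false) {
         // consumed later when the <cpu> XML block is generated
         $arrCPUFeatures[] = "<feature policy='require' name='topoext'/>";
     }
     // $intCPUThreadsPerCore keeps its detected value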
  9. CONFIRMED: On 6.8.1 RC1, I don't need the EPYC workaround to get hyperthreading enabled, but the TR does need the following CPU feature:

     <feature policy='require' name='topoext'/>

     I was also able to get CPU-Z to run with CPU passthrough and hyperthreading detected, though it takes a bit at the 10%/Processors stage where in the past it apparently hung. I'm going to look to see if HT support has already been recommended for Threadripper CPUs and request it if not.
  10. Thanks for mentioning "virtio"; that reminded me to check for a new virtio-win ISO file. A new one came out last October, version 0.1.173.2.
  11. Ya know, you just invalidated 4 benchmark runs with 25 passes each run. At least you caught me before the next two runs! 😁 Might as well upgrade to 6.8.1 RC1 too. Though this would also mean a lot fewer edits in the XML every time I needed to use the GUI editor.
  12. Can you install Microsoft's CoreInfo and see if it detects a hyperthreaded setup? I use this batch file for a quick report since it's a command line utility and the screen will close after running.

     Coreinfo.bat:
     @echo off
     coreinfo.exe -ncs
     pause

     My results with the EPYC hyperthreading workaround (weird NUMA mapping due to cross-NUMA node testing):

     Coreinfo v3.31 - Dump information on system CPU and memory topology
     Copyright (C) 2008-2014 Mark Russinovich
     Sysinternals - www.sysinternals.com

     Logical to Physical Processor Map:
     **----------  Physical Processor 0 (Hyperthreaded)
     --**--------  Physical Processor 1 (Hyperthreaded)
     ----**------  Physical Processor 2 (Hyperthreaded)
     ------**----  Physical Processor 3 (Hyperthreaded)
     --------**--  Physical Processor 4 (Hyperthreaded)
     ----------**  Physical Processor 5 (Hyperthreaded)

     Logical Processor to Socket Map:
     ************  Socket 0

     Logical Processor to NUMA Node Map:
     **----------  NUMA Node 0
     --**********  NUMA Node 1

     Press any key to continue . . .
  13. AMD Threadripper doesn't support hyperthreading out of the box to VMs, but it can be tricked into working with the EPYC workaround. As such, the code behind the VM editor is hard-coded to set the CPU threads to 1.
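     For reference, the EPYC workaround amounts to something like the following in the VM XML. This is a sketch from memory, so treat the exact attributes and the core count as illustrative rather than a drop-in config.

     <cpu mode='custom' match='exact' check='none'>
       <model fallback='forbid'>EPYC</model>
       <topology sockets='1' cores='6' threads='2'/>
     </cpu>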
  14. I have not, but AMD agrees with you on the latency issues from having to utilize the Infinity Fabric. They're not creating multiple NUMA nodes for TR3.
  15. (Ref "on node": NUMA 0 & 2 with direct access to PCI & RAM; "off node": NUMA 1 or 3.) Some interesting findings from last night. I had three VMs running: Cam1 (hogging NUMA 0 & 2) had four Brios connected, each set to output the cam over NDI; Cam2 was running OBS taking in an NDI feed from Cam1, with its CPU % steady. When I started OBS on Cam3 taking in an NDI feed from Cam1, Cam2's CPU utilization jumped. When I closed OBS on Cam3, the CPU % on Cam2 returned to normal. The effect could also be seen in reverse. Cam2 & Cam3 were running "off node" with respect to memory access. I just ran a benchmark which showed that memory latencies as well as read/write/copy times were negatively affected by around 50% if the memory access had to utilize the Infinity Fabric of the TR2.
  16. 2990WX memory speeds are seriously impacted if the VM has to utilize the Infinity Fabric. This is likely true for all TR2 models. Memory latency is 58.8% slower, and the read/write/copy scores drop by 47% / 48.9% / 54% respectively.
  17. 1. I got slightly better CPU scores under i440fx SeaBIOS than Q35 OVMF. 2. Still fine-tuning CPU placement. One live test I ran last night showed that CPU utilization on Cam2, which was taking in an NDI video feed, was steady but jumped when Cam3 also started taking in an NDI video feed. Cam1 is hogging the CPUs with direct access to the memory, and I may spread it out so it takes one NUMA node with memory and one without, so I can move Cam2 & Cam3 onto the NUMA nodes with memory. Ran a test just now: memory latency is 58.8% slower when it has to cross the Infinity Fabric, and the read/write/copy scores drop by 47% / 48.9% / 54% respectively.
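     Moving a VM onto specific NUMA nodes comes down to the <cputune> pinning and <numatune> memory binding in the VM XML. A simplified sketch of what that looks like - the host CPU numbers and node ID here are illustrative, not my actual layout:

     <cputune>
       <vcpupin vcpu='0' cpuset='8'/>
       <vcpupin vcpu='1' cpuset='40'/>
       <vcpupin vcpu='2' cpuset='9'/>
       <vcpupin vcpu='3' cpuset='41'/>
     </cputune>
     <numatune>
       <memory mode='strict' nodeset='0'/>
     </numatune>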
  18. I saw a 0.73% drop with the additional Hyper-V settings compared to what the GUI put in.
  19. Test it without passing through any other devices. I have a motherboard where one of the LAN ports may not always survive a VM reboot if I'm also passing through a graphics card - when I stopped passing through the card, it survived over & over. Just something to troubleshoot.
  20. My guess is that the read speed of the faster drives was capped by the OS in order to maximize the output of the other, slower drives, which gives better overall performance. You're only utilizing 2 GB/s of the stated 7.8 GB/s of bandwidth, so you may have a bottleneck elsewhere in the PCI chain.
  21. Ah, I missed that it was under Process Lasso. My mind clicked in on the unraid VM editor.