mkfelidae

Members • Posts: 50

  1. Emulator pinning and iothread pinning can help further improve performance, and unless you are crossing NUMA nodes they too should be pinned to the same node your CPUs are pinned to; that said, they do not need to be isolated in most cases. You can use numactl to pin processes like Plex to node 1 if you choose. Ultimately Plex is pretty lightweight in terms of CPU usage, so crossing NUMA nodes might not really change its performance.
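A minimal sketch of what that pinning can look like in the VM's XML, assuming the guest's vCPUs are pinned to node-0 cores 0-7 and one iothread has been defined with <iothreads> (values are illustrative):

```xml
<cputune>
  <!-- vcpupin entries for the guest's vCPUs go here -->
  <emulatorpin cpuset='0-7'/>
  <iothreadpin iothread='1' cpuset='0-7'/>
</cputune>
```

For pinning an ordinary process like Plex to node 1, the general shape is: numactl --cpunodebind=1 --membind=1 <command>.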
  2. Excellent! So with that switch flipped, does Unraid correctly discover the layout of your computer?
  3. To be honest, I have found that nothing is to be expected when dealing with old hardware. That said, check the BIOS to make sure that any NUMA-related options are enabled. I have seen a board that allowed you (for some reason) to configure the memory profile in UMA, the opposite of NUMA, and all it did was hide the node structure from the OS without actually helping with memory access at all. Do you have memory attached to both sockets? And if so, is Unraid detecting all of it? There may be a BIOS mode that controls memory access policy at the hardware level and dictates how the system appears to the OS above. To offer more help I would ideally need the first 50-100 lines of your syslog and a look at your whole BIOS, though the BIOS screens would probably be quite difficult to collect given the number of submenus. Just the top stub of the syslog would let me see whether Linux configured your system as NUMA from the start, or detected it as a single node.
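To illustrate what I would look for in that syslog stub, here is a minimal sketch with hypothetical log lines (your real output will differ; on a live box you would grep /var/log/syslog itself):

```shell
# Hypothetical first syslog lines from a two-node system -- illustrative only.
# On a live Unraid box you would run:
#   grep -i numa /var/log/syslog | head -n 100
cat <<'EOF' > /tmp/syslog_head.txt
Jan  1 00:00:01 Tower kernel: NUMA: Initialized distance table, cnt=2
Jan  1 00:00:01 Tower kernel: NUMA: Node 0 [mem 0x00000000-0x7fffffff]
Jan  1 00:00:01 Tower kernel: NUMA: Node 1 [mem 0x80000000-0xffffffff]
Jan  1 00:00:01 Tower kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
EOF

# Count the NUMA-related kernel lines; zero would suggest Linux saw a single node.
grep -ci numa /tmp/syslog_head.txt
# -> 3
```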
  4. I didn't see it in there either. I see you have several USB controllers bound to VFIO; are those all going to one VM, or are they being passed to different VMs? The Intel AX200 chipset is designed to expose Wi-Fi over PCIe and Bluetooth over USB, so unless Gigabyte did something strange with that chip, it should be connected to the rest of the motherboard via both PCIe and USB. Check the VMs those USB controllers are connected to; you're looking for a USB device with the ID 8087:0029. Hope this helps
  5. That particular card exposes (like most Wi-Fi/Bluetooth cards) the Bluetooth portion of the card as a USB device. Check your USB devices and you should find one named "Intel Bluetooth" or something similar; you have to pass that through separately.
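For reference, passing that Bluetooth device through by its USB ID can look something like this inside the <devices> section of the VM's XML (a sketch assuming the 8087:0029 ID of the Intel card; check lsusb for yours):

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x8087'/>
    <product id='0x0029'/>
  </source>
</hostdev>
```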
  6. I used double quotes without a problem; the XML spec accepts either, though single quotes are the convention in libvirt's documentation.
  7. I feel kind of foolish for not including the answer to this. The command you are looking for is: numastat -c qemu. This will list all of the processes whose names start with qemu (like qemu-system-x86) and display their NUMA node memory usage. If you have multiple VMs running you may need to differentiate VMs by process ID (PID) instead (figuring out which "qemu-system-x86" process is which specific VM can be frustrating). In that case the command would look something like: numastat -p [PID]. It is normal for a process whose memory is bound to a node other than 0 with the <numatune> tag to still have a small amount of memory allocated from node 0. I suspect that this is due to the qemu emulation process itself being bound to node 0; I did not find a way to eliminate allocation from node 0 entirely. Hope this helps
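To make matching a PID to a VM a bit less frustrating: libvirt normally starts guests with "-name guest=<vmname>,debug-threads=on" on the qemu command line, so the name can be pulled back out of ps output. A minimal sketch (the helper name vm_name is mine, and the VM name is hypothetical):

```shell
# Extract the VM name from a qemu command line so a PID from
# "ps -ef | grep qemu" can be matched to a VM before running "numastat -p <PID>".
# Assumes libvirt's usual "-name guest=<vmname>,debug-threads=on" convention.
vm_name() { echo "$1" | sed -n 's/.*guest=\([^,]*\).*/\1/p'; }

vm_name "qemu-system-x86_64 -name guest=Windows10,debug-threads=on -m 8192"
# -> Windows10
```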
  8. I see questions on Reddit and here on the forums asking about VM performance on dual- (or multi-) socket motherboards, so I figured I'd write up a basic tuning guide for NUMA environments. In this guide I will discuss three things: CPU tuning, memory tuning and IO tuning. The three intertwine quite a bit, so while I will try to write them up separately, they really should be considered as one complex tuning. Also, if there is one rule I recommend you follow, it is this: don't cross NUMA nodes for performance-sensitive VMs unless you have no choice.

For CPU tuning, it is fairly simple to determine from the command line which CPUs are connected to which node. Issuing the following command: numactl -H will give you a readout of which CPUs are connected to which nodes (screenshot omitted; yes, they are hyperthreaded 10-cores from 2011, and your first warning should be that they are only $25 USD a pair on eBay: Xeon E7-2870, LGA1567). This shows us that CPUs 0-9 and their hyperthreads 20-29 are on node 0; it also shows the memory layout of the system, which will be useful later. With this layout, pinning a VM to cores 0-7 and 20-27 would give the VM 8 cores and 8 threads all from one CPU. If you were to pin cores 7-14 and 27-34, your VM would still have 8 cores and 8 threads, but now you have a problem: without tuning the XML, the VM has no idea that the CPU it was given really spans two CPUs. One other thing you can do to help with latency is to isolate an entire CPU in the Unraid settings (Settings > CPU Pinning). That basically reserves the CPU for that VM and helps reduce unnecessary cache misses by the VM.

For memory tuning, you will need to add some XML to the VM to force allocation of memory from the correct node.
That XML will look something like: <numatune> <memory mode='strict' nodeset='0' /> </numatune> In this snippet, mode='strict' means that if there isn't enough memory on this node for the VM to allocate all of it there, the VM will fail to start; you can change this to 'preferred' if you would like it to start anyway with some of its memory allocated on another NUMA node.

Lastly, IO tuning is a bit different from the last two. Before, we were choosing CPUs and memory to assign to the VM based on their node, but for IO tuning the node of the device you want to pass through (be it PCI or USB) is fixed, and you may not have the same kind of resource (a graphics card, say) on the other node. This means that the IO devices you want to pass through will, in most cases, actually determine which node your VM should prefer to be assigned to. To determine which node a PCI device is connected to, you will first need that device's bus address, which should look like this: 0000:0e:00.0. To find the device's address in the Unraid webGUI, go to Tools > System Devices, then search for your device in the PCI Devices and IOMMU Groups box. Then open a terminal and issue the following commands: cd /sys/bus/pci/devices/[PCI bus address] followed by cat numa_node (output screenshot omitted). For my example here you can see that my device is assigned to NUMA node 0. I will point out that if you are passing multiple devices (GPU, USB controller, NVMe drive), they might not all be on the same node; in that case I would prioritize which node you ultimately assign your VM to based on the use of that VM. For gaming I would personally prioritize having the GPU on the same node, but YMMV. It can be easy to visualize NUMA nodes as their own computers.
Each node may have its own CPU, RAM and even IO devices. The nodes are connected through high-speed interconnects, but if one node wants memory or IO from a device on another node, it has to ask the other node for it rather than address it directly. This request adds latency and costs performance. In the CPU section of this guide we issued the command numactl -H; this command also shows the distance from one node to another, abstractly, with the nodes laid out in a grid. The greater the distance, the longer the turnaround time for cross-node requests and the higher the latency.

Advanced tuning: it is possible, if you have need of it, to craft the XML for a VM in such a way as to make the guest NUMA-aware, so that the VM is able to properly use two or more socketed CPUs. This is done by changing the <vcpu> and <cputune> tags and defining a guest NUMA topology under the <cpu> tag. This is outside the scope of a basic tuning guide, so I will just include a link to https://libvirt.org/formatdomain.html, which contains the entire specification for the libvirt domain XML that Unraid VMs are written in.
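A minimal sketch of what that advanced, NUMA-aware configuration can look like, for a hypothetical 16-vCPU guest split evenly across two guest NUMA cells (all values illustrative, not from the post):

```xml
<vcpu placement='static'>16</vcpu>
<cpu>
  <topology sockets='2' cores='8' threads='1'/>
  <numa>
    <cell id='0' cpus='0-7' memory='8388608' unit='KiB'/>
    <cell id='1' cpus='8-15' memory='8388608' unit='KiB'/>
  </numa>
</cpu>
```

Each <cell> would then be pinned (via <cputune> vcpupin entries) to host cores on the matching physical node, so the guest's view of its topology lines up with the hardware underneath.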
  9. How does this work with DVB drivers? If we aren't using customized bz* files, then are you going to include DVB drivers directly? Or will one of the community devs build a plug-in for those too?
  10. If you just want to retrieve the copy of the flash backup, any Linux live distro (like Ubuntu or Manjaro) should be able to mount the array disks one at a time (because they are XFS). On one of them you will find the flash backup, which you can extract onto the new flash drive; then make the flash drive bootable and you should be all set.
  11. I would appreciate this as well.
  12. I use two USB tuner sticks, both from Hauppauge: a WinTV-HVR (a single ATSC tuner with composite video input as well) and a WinTV-DualHD (a dual ATSC tuner with no other features). They appear to use different drivers, and the DualHD shows up as the same driver twice. This uses the LibreELEC drivers as far as I know; I have always used the LibreELEC build before, as that was the only build that showed my tuners.
  13. Looks good here: it shows the Nvidia GPU information I would need to pass a GPU to a Docker container, shows my ZFS information (currently no pools, which is correct; I haven't set any up yet), and also shows that there are DVB adapters on my system. Fine work, I must say.
  14. I would be completely down to test anything new, as I will continue to have a use for both an Nvidia GPU and a Hauppauge TV tuner that requires the LibreELEC driver pack.