mkfelidae

Everything posted by mkfelidae

  1. Emulator pinning and iothread pinning can help further improve performance, and unless you are crossing NUMA nodes they should be pinned to the same node as your vCPUs; that said, they do not need to be isolated in most cases. You can use numactl to pin processes like Plex to node 1 if you choose. Ultimately Plex is pretty lightweight in terms of CPU usage, so crossing NUMA nodes might not really change its performance.
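     As a rough illustration of the numactl approach mentioned above (the command path is just a placeholder, not a specific Plex invocation):

         # launch a process with its CPUs and memory restricted to NUMA node 1
         numactl --cpunodebind=1 --membind=1 /path/to/some-command

     For a process that is already running, numastat -p <PID> will show which node its memory actually came from.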
  2. Excellent! So with that switch flipped does Unraid correctly discover the layout of your computer?
  3. To be honest, I have found that nothing is to be expected when dealing with old hardware. That said, check inside the BIOS to make sure that any NUMA-associated options are enabled or selected. I have seen a board that allowed you (for some reason) to configure the memory profile in UMA, the opposite of NUMA, and all it did was hide the node structure from the OS without actually helping with memory access at all. Do you have memory attached to both sockets? And if so, is Unraid detecting all of it? There may be a mode in the BIOS that controls memory access policies at the hardware level and dictates how the system appears to the OS above. I would offer more help, but I would ideally need the first 50-100 lines of your syslog and a look at your whole BIOS, though the BIOS screens would probably be quite difficult to collect due to the number of anticipated submenus. Just the top of the syslog would let me see whether Linux configured your system as NUMA from the start or detected it as a single node.
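     If it helps, something like this from the Unraid terminal is usually enough to see how the kernel enumerated the nodes at boot (the exact wording of the messages varies by kernel version):

         # the first lines of the current syslog
         head -n 100 /var/log/syslog
         # or just the NUMA/SRAT-related lines from the kernel ring buffer
         dmesg | grep -iE 'numa|srat'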
  4. I didn't see it in there either. I see you have several USB controllers bound to VFIO; are those all going to one VM, or are they being passed to different VMs? The Intel AX200 chipset is designed to expose Wi-Fi over PCIe and Bluetooth over USB, so unless Gigabyte did something strange with that chip, it should be connected to the rest of the motherboard via both PCIe and USB. Check the VMs those USB controllers are connected to; you're looking for a USB device with the ID 8087:0029. Hope this helps.
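     A quick way to check from the Unraid terminal (8087:0029 is the ID commonly reported for the AX200's Bluetooth half, so treat it as a starting point and verify against your own output):

         # list only that Intel Bluetooth device, if present
         lsusb -d 8087:0029
         # or scan everything and look for an Intel Bluetooth entry
         lsusb | grep -i bluetooth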
  5. That particular card exposes (like most Wi-Fi/Bluetooth cards) the Bluetooth portion of the card as a USB device. Check your USB devices and you should find one labelled something like "Intel Bluetooth"; you have to pass that through separately.
  6. I used double quotes without a problem; XML accepts either, libvirt's own output just uses single quotes.
  7. I feel kind of foolish for not including the answer to this. The command you are looking for is:

         numastat -c qemu

     This will list all of the processes whose name starts with qemu (like qemu-system-x86) and display their NUMA node memory usage. If you have multiple VMs started you may need to differentiate VMs by process ID (PID) instead, since figuring out which "qemu-system-x86" process is which specific VM can be frustrating (see the sketch below). In that case the command would look something like:

         numastat -p [PID]

     It is normal for a process whose memory is bound to a node other than 0 with the <numatune> tag to still have a small amount of memory allocated from node 0. I suspect this is due to the QEMU emulation process itself being bound to node 0; I did not find a way to eliminate allocation from node 0 entirely. Hope this helps.
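     One way (certainly not the only one) to match a PID to a VM is to look for the guest name that libvirt passes to each QEMU process on its command line:

         # list qemu processes with their full arguments; the VM name appears after "-name guest="
         ps -eo pid,args | grep "[q]emu-system"
         # then check the memory placement of the one you care about
         numastat -p <PID>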
  8. I see questions on Reddit and here on the forums asking about VM performance on dual- (or multi-) socket motherboards, so I figured I'd write up a basic tuning guide for NUMA environments. In this guide I will discuss three things: CPU tuning, memory tuning and IO tuning. The three intertwine quite a bit, so while I will try to write them up separately, they really should be considered as one complex tuning. Also, if there is one rule I recommend you follow, it is this: don't cross NUMA nodes for performance-sensitive VMs unless you have no choice.

     CPU tuning: it is fairly simple to determine, from the command line, which CPUs are connected to which node. Issue the following command:

         numactl -H

     This gives you a readout of which CPUs are connected to which node (an illustrative example of that output is included at the end of this post). (Yes, mine are hyperthreaded 10-cores, they are from 2011, and your first warning should be that they are only $25 USD a pair on eBay: Xeon E7-2870, LGA1567.) On my system it shows that CPUs 0-9 and their hyperthreads 20-29 are on node 0; it also shows the memory layout of the system, which will be useful later. With this layout, pinning a VM to cores 0-7 and 20-27 would give the VM 8 cores and 8 threads all from one CPU. If you were to pin cores 7-14 and 27-34, your VM would still have 8 cores and 8 threads, but now it has a problem: without tuning the XML, it has no idea that the CPU it was given really spans two physical CPUs. One other thing you can do to help with latency is to isolate an entire CPU in the Unraid settings (Settings > CPU Pinning). That basically reserves the CPU for that VM and helps reduce unnecessary cache misses.

     Memory tuning: you will need to add some XML to the VM to force allocation of memory from the correct node. That XML will look something like:

         <numatune>
           <memory mode='strict' nodeset='0' />
         </numatune>

     In this snippet, mode='strict' means that if there isn't enough memory on that node for the VM to allocate all of its memory there, the VM will fail to start; change it to 'preferred' if you would like the VM to start anyway with some of its memory allocated on another NUMA node.

     IO tuning is a bit different from the last two. Before, we were choosing CPUs and memory to assign to the VM based on their node, but for a device you want to pass through (be it PCI or USB) the node is fixed, and you may not have the same kind of resource (a graphics card, say) on the other node. This means that the IO devices you want to pass through will, in most cases, actually determine which node your VM should prefer. To determine which node a PCI device is connected to, you first need that device's bus address, which should look like this: 0000:0e:00.0. To find the address in the Unraid webGUI, go to Tools > System Devices and search for your device in the "PCI Devices and IOMMU Groups" box. Then open a terminal and issue the following commands:

         cd /sys/bus/pci/devices/[PCI bus address]
         cat numa_node

     For my example device the output is 0, meaning it is attached to NUMA node 0. I will point out that if you are passing multiple devices (GPU, USB controller, NVMe drive) they might not all be on the same node; in that case, prioritize which node you assign your VM to based on what the VM is for. For gaming I would prioritize keeping the GPU on the same node, personally, but YMMV.

     It can be easy to visualize NUMA nodes as their own computers. Each node may have its own CPU, RAM and even IO devices. The nodes are joined by high-speed interconnects, but if one node wants memory or IO from a device on another node, it has to ask that node for it rather than address it directly. That request adds latency and costs performance. The numactl -H command from the CPU section also shows the distance from one node to another, abstractly, as a grid of node-to-node distances: the greater the distance, the longer the turnaround time for cross-node requests and the higher the latency.

     Advanced tuning: it is possible, if you have need of it, to craft the XML for a VM in such a way as to make the VM NUMA-aware, so that it can properly use two or more socketed CPUs. This can be done by changing the <cputune> and <vcpu> tags. That is outside the scope of a basic tuning guide, so I will just link to https://libvirt.org/formatdomain.html, which includes the entire specification for the libvirt domain XML that Unraid VMs are written in.
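     Since the screenshots did not carry over, here is an illustrative numactl -H readout for a generic two-socket box with the node 0 CPU layout described above; the memory sizes, free amounts and distances are placeholder values and will differ on your hardware:

         available: 2 nodes (0-1)
         node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
         node 0 size: 65536 MB
         node 0 free: 40120 MB
         node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
         node 1 size: 65536 MB
         node 1 free: 51480 MB
         node distances:
         node   0   1
           0:  10  21
           1:  21  10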
  9. How does this work with DVB drivers? If we aren't using customized bz* files then are you going to be including DVB drivers directly? Or will one of the community devs be building a plug-in for those too?
  10. If you just want to retrieve the copy of the flash backup, any Linux live distro (like Ubuntu or Manjaro) should be able to mount the array disks one at a time (because they are XFS). On one of them you will find the flash backup, which you can extract onto the new flash drive; then make the flash drive bootable and you would be all set.
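      A rough sketch of the mount step from a live distro (the device name /dev/sdb1 is only an example; identify the actual array disk with lsblk first):

          # mount one array data disk read-only and look for the flash backup on it
          sudo mkdir -p /mnt/unraid-disk
          sudo mount -t xfs -o ro /dev/sdb1 /mnt/unraid-disk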
  11. I would appreciate this as well.
  12. I use two USB tuner sticks, both from Hauppauge: a WinTV-HVR (a single ATSC tuner with composite video input as well) and a WinTV-DualHD (a dual ATSC tuner with no other features). They appear to use different drivers, and the DualHD shows up as the same driver twice. This uses the LibreELEC drivers as far as I know; I have always used the LibreELEC build before, as that was the only build that showed my tuners.
  13. Looks good here: it shows the Nvidia GPU information I would need to pass a GPU to a Docker container, shows my ZFS information (currently no pools, which is correct, I haven't set any up yet), and also shows that there are DVB adapters on my system. Fine work, I must say.
  14. I would be completely down to test anything new, as I will continue to have a use for both an Nvidia GPU and a Hauppauge TV tuner that requires the LibreELEC driver pack.
  15. It works better than my hodge-podge'd together Nvidia / DVB build that required a modprobe script at array start.
  16. I would love to see you and @ich777 collaborate, as he just released a Docker container that builds a combined NVIDIA / DVB kernel from scratch. It does not seem to work completely with either plugin, as it probably doesn't have the code or scripts needed to interface with the plugins correctly, but it did create a functional kernel that allowed me to use a Hauppauge WinTV-DualHD AND provide an NVIDIA GPU to Plex to offload transcoding.
  17. For those of us who would like to combine DVB drivers with the NVIDIA GPU drivers, I would like to thank @ich777. Huge shout out for all the help I got getting it configured.
  18. Shout out to @ich777 for the awesome Docker container he just put up in Community Applications that helps you compile your own custom kernel.
  19. I have a script that calls modprobe em28xx when the array is started. Then I have my Plex docker set up with an autostart delay of 30 seconds, to give the script time to fire and for modprobe to load not only em28xx but all of its dependencies, such as the dvb_core driver. For me, /dev/dvb isn't an actual path until the modprobe script fires. That is the real reason my Plex docker has an autostart delay: the docker crashes on start if the drivers haven't been loaded and /dev/dvb hasn't been created. When I said that my USB stick doesn't show up in the DVB plugin, I meant it ONLY doesn't show up in the adapters list on the plugin GUI. If I had to take a guess why, it would be that the GUI is looking for the devices at boot time, but the device doesn't show as a DVB device until I load the driver at array start. I hope this helps; if not, let me know.
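     For what it's worth, such an array-start script boils down to something this small (the User Scripts plugin is one convenient way to run it at array start; em28xx is the driver for my particular tuner, so substitute your own):

         #!/bin/bash
         # load the tuner driver; modprobe also pulls in dvb_core and the other dependencies
         modprobe em28xx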
  20. I AM THE CAT!!! So, starting with the NVIDIA build for 6.8.3 AND the DVB build for 6.8.3, I was able to smush the DVB drivers into the NVIDIA build. I had to modprobe the specific driver (em28xx) for my Hauppauge WinTV-Dual-USB, as I didn't manage to get it to auto-load, but it works, SEE! My only problem is that the adapter does not show up in the Unraid DVB plugin, but Plex detects and streams from it just as well as it did previously. I would like to point out that there is no polish on this setup: I have to manually modprobe the driver after boot, before I start my Plex docker (thank you LSIO for the docker, that was the last thing I needed to make this work; the official Plex docker wouldn't do the HW transcode), and the issue does not go away on a reboot. Here is how I did it, bask in the simple power of duct-tape-based jank:
     - I first downloaded both the NVIDIA and DVB plugins.
     - I selected the LibreELEC 6.8.3 build in the DVB plugin (because that is what works with my tuner) and installed it.
     - Rebooted the server.
     - Confirmed that my DVB device was showing in the DVB plugin.
     - Sifting through the syslog (check my snip, it helps to show what I mean), I wrote down the DVB driver that my device was being assigned to. (The snip is from after the combination, but it is identical to the way it would look beforehand.)
     - Copied the bzfirmware and bzmodules files from my flash share.
     - Installed the NVIDIA 6.8.3 build in the NVIDIA plugin.
     - Rebooted the server.
     - Confirmed that my graphics card was showing in the NVIDIA plugin.
     - Copied the bzfirmware and bzmodules files from my flash share again (yes, I did this twice: once for the DVB setup and once for the NVIDIA setup).
     - Unpacked both sets in Windows into separate folders (you could do this in whatever you want, but I like the simplicity of the Windows GUI).
     - Copied the contents of the DVB bzfirmware into the NVIDIA bzfirmware folder, choosing NOT to overwrite any existing files.
     - Copied the contents of the DVB bzmodules \4.19.107-Unraid\kernel\drivers\media to the same folder in the NVIDIA bzmodules folder.
     - Shared both folders to a Linux VM I spun up for this and used mksquashfs to pack them both back up (make sure to use the -all-root option to avoid contaminating the permissions of the output bzfirmware and bzmodules files); see the sketch below.
     - Copied the two brand-new bz files over the ones on my flash drive.
     After a final reboot (and some related troubleshooting that led me to migrate my Plex container to the LSIO version), all I have to do is modprobe my driver back in after boot EVERY SINGLE TIME! and start my Plex container. My next task will be to set up a user script that runs on array start to modprobe that driver so I don't have to do it by hand, but it is almost midnight here and I have to work in the morning.
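      A minimal sketch of the unpack/repack step, if you prefer to do both on the Linux VM instead of unpacking in Windows (the folder names are just placeholders):

          # unpack the stock images into working folders
          unsquashfs -d bzmodules-root bzmodules
          unsquashfs -d bzfirmware-root bzfirmware
          # ...merge the DVB files into the NVIDIA trees here...
          # repack, forcing root ownership so the permissions stay clean
          mksquashfs bzmodules-root bzmodules.new -all-root -noappend
          mksquashfs bzfirmware-root bzfirmware.new -all-root -noappend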
  21. @CHBMB I went to try out your combined Nvidia / DVB build based on 6.7.1rc2 but the Unraid Nvidia plugin no longer shows that particular build. Do you have a link somewhere to that build that I can try and install myself?
  22. Someone further up in this thread said that they used mksquashfs to add a firmware file for one of their devices. I'm not sure how it worked, but look through the forum and you should be able to find somebody who has done it.
  23. I have used modprobe before for devices I couldn't get to show up at boot. If you know how to get the module for it, the only thing to remember is that Unraid unpacks its OS into memory at boot, so you may need to copy the module back in every time before you modprobe.
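      A rough sketch of what that can look like in practice (the module name and source path are placeholders for whatever driver you extracted or built):

          # copy the module into the live module tree, refresh the dependency map, then load it
          install -D /boot/extra/mydriver.ko /lib/modules/$(uname -r)/extra/mydriver.ko
          depmod -a
          modprobe mydriver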