mkfelidae

Everything posted by mkfelidae

  1. Alright, I have it fixed, though it's a bit more hacked together than un-get for sure. I first went and found a bluez package that was both built for Slackware and newer than 5.63 (the busted version). I narrowed it down to this one: https://slackware.uk/slackware/slackware64-current/slackware64/n/bluez-5.70-x86_64-1.txz
     I then removed the one that un-get had installed, dropped the newer package somewhere convenient (in my case /boot/config), and added the following line to my go file (/boot/config/go), before the rc.d call:
     installpkg /boot/config/bluez-5.70-x86_64-1.txz
     (If you're using Bluetooth inside Home Assistant you probably modified the go file already to add /etc/rc.d/rc.bluetooth start.) I am running Unraid 6.11.5 and haven't tested anything more up to date. Hope this helps.
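     For reference, a minimal sketch of how the relevant end of the go file would look, assuming the package was copied to /boot/config and that you also start Bluetooth there (illustrative only, not my exact file):
     # /boot/config/go (last lines)
     installpkg /boot/config/bluez-5.70-x86_64-1.txz
     /etc/rc.d/rc.bluetooth start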
  2. I have the same issue; it is currently reported upstream in bluez, but it is definitely the dongle you are using (mine is doing it too). We have to wait for someone to repackage a newer version of bluez for Slackware, and then we will need to update it from our end (I used un-get for my bluez).
  3. I am truly stumped on this one. I had a functioning remote access to LAN setup running for over a year until changing my ISP (Comcast -> CenturyLink GPON fiber). After changing, I could no longer access all of the devices on my LAN that I wanted to, including an Ubuntu server. Some devices respond to pings, some do not, and even more strange, the list of devices is not set in stone. I haven't changed the settings within the WireGuard clients nor in Unraid itself. I previously had a static route configured in my old router, and I set up that same static route in my new router; when this didn't work I actually switched out for another router entirely (a Cradlepoint AER2100) and configured a static route for that one too (my previous Comcast setup is broken). I can ping some devices on the LAN; the two that are always available are the default LAN gateway (either the C4000Z or the AER2100) and the Unraid host. I cannot reach most of the other hosts on my network from a remote WireGuard client. What am I doing wrong? Let me know what I can include to better help troubleshoot this issue.
  4. So I have the virsh commands for changing the floppies/CD-ROMs in a running VM. Here it is with a breakdown:
     virsh change-media [domain name] [vm-device-tag] [fully qualified path to disk to attach] --live
     It should look like this (at least for me it did):
     virsh change-media "Windows 3.11 For Workgroups" fda "/mnt/user/isos/Microsoft MS-DOS 6.22 Plus Enhanced Tools (3.5)/Disk1.img" --live
     This command should return a reply of "successfully updated media" if it was successful. The XML for a floppy drive should look something like this:
     <disk type="file" device="floppy">
       <driver name="qemu" type="raw"/>
       <target dev="fda" bus="fdc"/>
       <readonly/>
       <boot order="1"/>
       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
     </disk>
     There are other flags like --update and --eject that may be useful for higher-level OSes that track drive states; MS-DOS does not seem to do so. Hope these help.
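     For ejecting rather than swapping, a hedged example using the same domain and device tag as above (I haven't verified this exact invocation on Unraid, but it follows the standard change-media syntax):
     virsh change-media "Windows 3.11 For Workgroups" fda --eject --live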
  5. Here is my WFW 3.11 XML, roughly ported to Unraid from VMM; it may still need some cleanup of the VM console, as noVNC didn't like this quite yet. I attached floppies through the GUI in VMM, but you could likely do it through virsh editing while the VM is live; I can do some troubleshooting on that later if you would like, but I have to go to work soon. At me specifically if I can help further, as I sometimes don't get notifications from this thread. WFW 3.11 tcp-ip.xml
  6. I have floppy images for Windows 3.11 WFW and DOS 6.22. I also have drivers for TCP/IP and the RTL8139 network interface card for WFW 3.11. I have an operational VM that was native to VMM (KVM/QEMU) and would likely just need some Unraid-specific tweaks (like changing out SPICE for VNC and changing the emulator from kvm-spice to qemu-system-x86_64). Let me know if that would be something you would like @sasdakota
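     For reference, a minimal sketch of what those two Unraid-side tweaks look like in the domain XML; the emulator path is an assumption and may differ on your install:
     <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
     <emulator>/usr/bin/qemu-system-x86_64</emulator>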
  7. Emulator pinning and iothread pinning can help further improve performance, and unless you are crossing NUMA nodes they too should be pinned to the same node your CPUs are pinned to; that said, they do not need to be isolated in most cases. You can use numactl to pin processes like Plex to node 1 if you choose. Ultimately Plex is pretty lightweight in terms of CPU usage, and crossing NUMA nodes might not really change Plex performance.
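     A minimal sketch of both ideas; the cpuset values are made up, so adjust them to your own core layout:
     <cputune>
       <emulatorpin cpuset='0,20'/>
       <iothreadpin iothread='1' cpuset='1,21'/>
     </cputune>
     (iothreadpin also needs an <iothreads>1</iothreads> element in the domain XML.) From the command line, pinning an arbitrary process to node 1 would look something like:
     numactl --cpunodebind=1 --membind=1 [command that launches Plex or any other process]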
  8. Excellent! So with that switch flipped does Unraid correctly discover the layout of your computer?
  9. To be honest, I have found that nothing is to be expected when dealing with old hardware. That said, check inside the BIOS to make sure that any NUMA-associated options are enabled or selected. I have seen a board that allowed you (for some reason) to configure the memory profile in UMA, the opposite of NUMA, if you wanted, and all it did was hide the node structure from the OS without actually helping with memory access at all. Do you have memory attached to both sockets? And if so, is Unraid detecting all of it? There may be a mode in the BIOS that controls memory access policies at the hardware level and dictates how the system appears to the OS above. I would offer more help, but I would ideally need a copy of your syslog's first 50-100 lines and a look at your whole BIOS, though the BIOS screens would probably be quite difficult to collect due to the number of anticipated submenus. Just the top stub of the syslog would let me see whether Linux configured your system as NUMA to start, or detected it as a single node.
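     If it helps, one way to grab that stub from an Unraid terminal (the path and line count are just my suggestion):
     head -n 100 /var/log/syslog > /boot/syslog-head.txt
     dmesg | grep -i -E 'numa|node'
     The second command is a quick check of whether the kernel saw more than one node at boot.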
  10. I didn't see it in there either. I see you have several USB controllers bound to VFIO; are those all going to one VM, or are they being passed to different VMs? The Intel AX200 chipset is designed to expose Wi-Fi over PCIe and Bluetooth over USB, so unless Gigabyte did something strange with that chip it should be connected to the rest of the motherboard via both PCIe and USB. Check the VMs those USB controllers are connected to; you're looking for a USB device with the vendor:product ID 8087:0029. Hope this helps
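      On the host (before the controller gets bound to VFIO), a quick way to confirm the Bluetooth half is present would be something like:
      lsusb | grep -i 8087:0029
      which should print an Intel Bluetooth entry if the AX200's USB side is wired up.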
  11. That particular card exposes (like most Wi-Fi/Bluetooth cards) the Bluetooth portion of the card as a USB device. Check your USB devices and you should find one that is "Intel Bluetooth" or something similar; you have to pass that through separately.
  12. I used double quotes without a problem, though libvirt itself writes single quotes when it generates the XML, so that is the usual convention.
  13. I feel kind of foolish for not including the answer to this. The command you are looking for is:
     numastat -c qemu
     This will list all of the processes whose name starts with qemu (like qemu-system-x86) and display their NUMA node memory usage. If you have multiple VMs started you may need to differentiate VMs by process ID (PID) instead (figuring out which "qemu-system-x86" process is which specific VM can be frustrating). In that case the command would look something like:
     numastat -p [PID]
     It is normal for a process with its memory bound to a node other than 0 with the <numatune> tag to have a small amount of memory allocated from node 0. I suspect that this is due to the qemu emulation process itself being bound to node 0. I did not find a way to eliminate allocation from node 0 entirely. Hope this helps
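     One way to find the PID for a specific VM: on my systems libvirt launches qemu with the VM name on its command line (as "guest=<name>"), so something like the following should work, with the VM name swapped in for this example:
     pgrep -af 'guest=Windows 10'
     The first field of the matching line is the PID to hand to numastat -p.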
  14. I see questions on Reddit and here on the forums asking about VM performance for dual- (or multi-) socket motherboards, so I figured I'd write up a basic tuning guide for NUMA environments. In this guide I will discuss three things: CPU tuning, memory tuning, and IO tuning. The three intertwine quite a bit, so while I will try to write them up separately, they really should be considered as one complex tuning. Also, if there is one rule I recommend you follow, it is this: don't cross NUMA nodes for performance-sensitive VMs unless you have no choice.

      CPU tuning: it is fairly simple to determine, from the command line, which CPUs are connected to which node. Issuing the following command:
      numactl -H
      will give you a readout of which CPUs are connected to which nodes, along with the memory layout for the system, which will be useful later. On my system (yes, they are hyperthreaded 10-cores; they are from 2011, and your first warning should be that they are only $25 USD a pair on eBay: Xeon E7-2870, LGA1567) this shows that CPUs 0-9 and their hyperthreads 20-29 are on node 0. With this layout, pinning a VM to cores 0-7 and 20-27 would give the VM 8 cores and 8 threads, all from one CPU. If you were to pin cores 7-14 and 27-34, your VM would still have 8 cores and 8 threads, but now you have a problem: without tuning the XML, the VM has no idea that the CPU it was given really spans two CPUs. One other thing you can do to help with latency, especially for something like gaming, is to isolate an entire CPU in the Unraid settings (Settings > CPU Pinning). That basically reserves the CPU for that VM and helps reduce unnecessary cache misses.

      Memory tuning: you will need to add some XML to the VM to force allocation of memory from the correct node. That XML will look something like:
      <numatune>
        <memory mode='strict' nodeset='0' />
      </numatune>
      In this snippet, mode='strict' means that if there isn't enough memory on that node for the VM to allocate all of it there, the VM will fail to start; you can change this to 'preferred' if you would like it to start anyway with some of its memory allocated on another NUMA node.

      IO tuning is a bit different from the last two. Before, we were choosing CPUs and memory to assign to the VM based on their node, but for the device you want to pass through (be it PCI or USB) the node is fixed, and you may not have the same kind of resource (a graphics card, say) on the other node. This means that the IO devices you want to pass through will, in most cases, actually determine which node your VM should prefer. To determine which node a PCI device is connected to, you will first need that device's bus address, which should look like this: 0000:0e:00.0. To find the device's address in the Unraid webGUI, go to Tools > System Devices and search for your device in the "PCI Devices and IOMMU Groups" box. Then open a terminal and issue the following commands:
      cd /sys/bus/pci/devices/[PCI bus address]
      cat numa_node
      The output is just a node number; for my example, the device is assigned to NUMA node 0. I will point out that if you are passing through multiple devices (GPU, USB controller, NVMe drive), they might not all be on the same node; in that case I would prioritize which node you ultimately assign your VM to based on the use of that VM. For gaming I would personally prioritize having the GPU on the same node, but YMMV.

      It can be easy to visualize NUMA nodes as their own computers. Each node may have its own CPU, RAM, and even IO devices. The nodes are interconnected through high-speed interconnects, but if one node wants memory or IO from a device on another node, it has to ask the other node for it rather than address it directly. That request causes latency and costs performance. The "numactl -H" command from the CPU section also shows the distance from one node to another, laid out in a grid; the greater the distance, the longer the turnaround time for cross-node requests and the higher the latency.

      Advanced tuning: it is possible, if you have need of it, to craft the XML for a VM in such a way as to make the VM NUMA-aware, so that the VM is able to properly use two or more socketed CPUs. This can be done by changing both the <cputune> and <vcpu> tags, plus a rough sketch below. This is outside the scope of a basic tuning guide, so I will just include a link to https://libvirt.org/formatdomain.html, which contains the entire specification for the libvirt domain XML that Unraid VMs are written in.
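      As a very rough illustration of that advanced case, a NUMA-aware guest topology can be declared with a <numa> block inside <cpu>; the cell sizes and cpu ranges below are made-up values, not a drop-in config:
      <vcpu placement='static'>16</vcpu>
      <cpu>
        <topology sockets='2' cores='4' threads='2'/>
        <numa>
          <cell id='0' cpus='0-7' memory='8' unit='GiB'/>
          <cell id='1' cpus='8-15' memory='8' unit='GiB'/>
        </numa>
      </cpu>
      Each guest cell can then be mapped to a host node with <numatune><memnode cellid='0' mode='strict' nodeset='0'/></numatune>, alongside <vcpupin> entries that keep each cell's vCPUs on the matching host node.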
  15. How does this work with DVB drivers? If we aren't using customized bz* files then are you going to be including DVB drivers directly? Or will one of the community devs be building a plug-in for those too?
  16. If you just want to retrieve the copy of the flash backup, any Linux live distro (like Ubuntu or Manjaro) should be able to mount the array disks one at a time (because they are XFS), and on one of them you will find the flash backup, which you can extract onto the new flash drive; then make the flash drive bootable and you should be all set.
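     A minimal sketch of that from a live distro; the device name /dev/sdb1, the mount point, and the search pattern for the backup's filename are all assumptions, and mounting read-only keeps the array data untouched:
     sudo mkdir -p /mnt/arraydisk
     sudo mount -o ro -t xfs /dev/sdb1 /mnt/arraydisk
     sudo find /mnt/arraydisk -iname '*flash*backup*'
     Repeat with the next disk if nothing turns up.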
  17. I would appreciate this as well.
  18. I use two USB tuner sticks, both from Hauppauge: a WinTV-HVR (a single ATSC tuner with composite video input as well) and a WinTV-dualHD (a double ATSC tuner with no other features). They appear to use different drivers, and the dualHD shows up as the same driver twice. This uses the LibreELEC drivers as far as I know; I have always used the LibreELEC build before, as that was the only build that showed my tuners.
  19. Looks good here; it shows the Nvidia GPU information I would need to pass a GPU to a Docker container, shows my ZFS information (currently no pools is correct, I haven't set any up yet), and also shows that there are DVB adapters on my system. Fine work, I must say.
  20. I would be completely down to test anything new, as I will continue to have a use for both an Nvidia GPU and a Hauppauge TV tuner that requires the LibreELEC driver pack.
  21. It works better than my hodge-podged-together Nvidia/DVB build that required a modprobe script at array start.
  22. I would love to see you and @ich777 collaborate, as he just released a Docker container that builds a combined Nvidia/DVB kernel from scratch. It does not seem to work completely with either plugin, as it probably doesn't have the code or scripts needed to interface with the plugins correctly, but it did create a functional kernel that allowed me to use a Hauppauge WinTV-dualHD AND provide an Nvidia GPU to Plex to offload transcoding.
  23. For those of us who would like to combine DVB drivers as well as Nvidia GPU drivers, I would like to thank @ich777. Huge shout-out for all the help I got getting it configured.
  24. Shout out to @ich777 for the awesome Docker container he just put up in Community Applications that helps you compile your own custom kernel.