Koenig

Everything posted by Koenig

  1. I have this issue as well. I have two monitors attached to an Nvidia 2070 Super. The VM (a Windows VM) becomes unresponsive when this happens. I have tried to use remote desktop when it happens, but it doesn't seem to connect (it says "configuring session" or something like that the whole time until I abort it). A normal shutdown doesn't work either, but a forced one does.
  2. For a long time a WINS server has worked for me by adding "wins support = yes" in the "SMB Extras" section, but after I upgraded from 6.8.3 to 6.9.2 I keep getting this in my log:

     May 26 08:31:54 NAS nmbd[15831]: process_name_registration_request: unicast name registration request received for name SAMSUNG_6260ND<20> from IP 192.168.8.65 on subnet UNICAST_SUBNET. Error - should be sent to WINS server
     May 26 08:33:27 NAS nmbd[15831]: [2021/05/26 08:33:27.527778, 0] ../../source3/nmbd/nmbd_incomingrequests.c:170(process_name_refresh_request)
     May 26 08:33:27 NAS nmbd[15831]: process_name_refresh_request: unicast name registration request received for name DAILY-DRIVER<20> from IP 192.168.8.205 on subnet UNICAST_SUBNET.
     May 26 08:33:27 NAS nmbd[15831]: [2021/05/26 08:33:27.527875, 0] ../../source3/nmbd/nmbd_incomingrequests.c:173(process_name_refresh_request)
     May 26 08:33:27 NAS nmbd[15831]: Error - should be sent to WINS server
     May 26 08:33:32 NAS nmbd[15831]: [2021/05/26 08:33:32.051536, 0] ../../source3/nmbd/nmbd_incomingrequests.c:170(process_name_refresh_request)
     May 26 08:33:32 NAS nmbd[15831]: process_name_refresh_request: unicast name registration request received for name DAILY-DRIVER<00> from IP 192.168.8.205 on subnet UNICAST_SUBNET.
     May 26 08:33:32 NAS nmbd[15831]: [2021/05/26 08:33:32.051696, 0] ../../source3/nmbd/nmbd_incomingrequests.c:173(process_name_refresh_request)
     May 26 08:33:32 NAS nmbd[15831]: Error - should be sent to WINS server
     May 26 08:33:36 NAS nmbd[15831]: [2021/05/26 08:33:36.571248, 0] ../../source3/nmbd/nmbd_incomingrequests.c:170(process_name_refresh_request)
     May 26 08:33:36 NAS nmbd[15831]: process_name_refresh_request: unicast name registration request received for name TOK<00> from IP 192.168.8.205 on subnet UNICAST_SUBNET.
     May 26 08:33:36 NAS nmbd[15831]: [2021/05/26 08:33:36.571325, 0] ../../source3/nmbd/nmbd_incomingrequests.c:173(process_name_refresh_request)
     May 26 08:33:36 NAS nmbd[15831]: Error - should be sent to WINS server
     May 26 08:33:51 NAS nmbd[15831]: [2021/05/26 08:33:51.694575, 0] ../../source3/nmbd/nmbd_incomingrequests.c:211(process_name_registration_request)
     May 26 08:33:51 NAS nmbd[15831]: process_name_registration_request: unicast name registration request received for name SAMSUNG_6260ND<00> from IP 192.168.8.65 on subnet UNICAST_SUBNET. Error - should be sent to WINS server

     Does anyone know if support for WINS has been removed, or if something else has changed that breaks this? (I use WINS only to get name resolution when connecting to my network via VPN, so if there's another way to get that working I would be thankful for some tips on how to do that.)
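     For reference, the "SMB Extras" entry in question is just the one Samba option (a minimal sketch of what I have there):

         # tell Samba's nmbd to also act as a WINS server for the LAN
         wins support = yes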
  3. Thank you, changing the CPU governor to "on demand" seems to bring the CPU levels back to where they were before the upgrade. However, that also seems to increase the power draw a bit. Not unexpected, and I can't really say if it is more now than before the upgrade (it should be, at least somewhat, as it was on "power save" before but not stuck at the lowest p-state). Again - thank you for the tip!
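     In case it helps anyone else, a rough sketch of checking and switching the governor from the console via the standard Linux cpufreq sysfs interface (not necessarily how it is meant to be done in Unraid, and the change does not persist across reboots):

         # show the current governor for each core
         cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

         # switch every core to the "ondemand" governor
         for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
           echo ondemand > "$g"
         done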
  4. Hi! I upgraded one of my servers from 6.8.3 to 6.9.2 and I thought everything went just fine, but now, a couple of days in, I see that the CPU usage on my 6-core Intel 4930K has gone from ~6% to ~20% when "idle", or maybe it should be expressed as a "normal" state. This is with 2 VMs (hassio and unifi) and 3 dockers running, the same things both before and after the upgrade. I can't wrap my head around what is using the CPU so much; "top" and "htop" give different answers, and I'm not that familiar with Linux. So maybe someone could be kind enough to check my attached diagnostics and help me figure out what is going on? nas-diagnostics-20210415-0744.zip
  5. On one of my servers I do, but on the other it is gone....
  6. I have a Gigabyte Aorus Xtreme, also with dual audio (at least one of them an ALC1220), and for me they show up as USB devices; see the attached image.
  7. As I understand it you want to pass through the NVMe and use it as the primary disk for the VM; in that case you should set the option "Primary vDisk Location:" to "none" instead.
  8. I can confirm this behaviour: powered off I have "host-model" for the CPU (my own edit to get Win 10 to work with Ryzen 3), but if I look at the XML with the machine powered on I can see the same "features" as the OP, and this is all reverted when the machine is powered off again. And Windows reports it as "EPYC", same as the OP.
  9. I do have a Gigabyte Aorus Xtreme with dual 10GbE NICs, so I might just test this suggestion, thank you. I had some other plans for the second NIC, but that is further in the future and perhaps this issue will be ironed out by then.
  10. Well, I only have the one docker with static IP....
  11. I just answered in another thread about the "unexpected GSO type" issue, where someone claimed that using Q35-5.0 would solve it; I'll just paste that post here: "I just tried yesterday with 2 newly created Q35-5.0 VMs (Windows 10) on Beta25 and I still get "unexpected GSO type" flooding my logs when I use "virtio", so I don't see how using Q35-5.0 would be a solution. The only way I get rid of that in the logs is to use "virtio-net", with the severely diminished performance. Edit: Just tried again and still got the same results, attaching my diagnostics if you wish to see for yourself." unraid-diagnostics-20200821-0848.zip
  12. Not that this is really the thread to address this, but anyway: I just tried yesterday with 2 newly created Q35-5.0 VMs (Windows 10) on Beta25 and I still get "unexpected GSO type" flooding my logs when I use "virtio", so I don't see how using Q35-5.0 would be a solution. The only way I get rid of that in the logs is to use "virtio-net", with the severely diminished performance. Edit: Just tried again and still got the same results, attaching my diagnostics if you wish to see for yourself. unraid-diagnostics-20200821-0848.zip
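     For anyone comparing, this is roughly what the two variants look like in the VM's libvirt XML (a sketch; the MAC address and bridge name are placeholders, not taken from my config):

         <!-- "virtio": fast paravirtualized NIC, but floods my log with "unexpected GSO type" -->
         <interface type='bridge'>
           <mac address='52:54:00:00:00:01'/>
           <source bridge='br0'/>
           <model type='virtio'/>
         </interface>

         <!-- "virtio-net": no log flood here, but with severely diminished performance -->
         <interface type='bridge'>
           <mac address='52:54:00:00:00:01'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
         </interface>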
  13. There is a thread about a custom kernel for this issue in another subforum: https://forums.unraid.net/forum/15-motherboards-and-cpus/ I also have a faint memory of reading that this is something that is fixed in more recent kernels and will probably be included in the next release.
  14. In trying to find the best way to get past the Ryzen 3000 bug that gives Win10 "KERNEL_SECURITY_CHECK_FAILURE" I found this post: https://forum.level1techs.com/t/amd-fix-for-host-passthrough-on-qemu-5-0-0-kernel-5-6/157710 I have posted before, in the beta22 thread, about how I solved it by changing host-passthrough to host-model. But it seems that the solution given in the linked post is a more "accurate" one, so I tried that as well; it works, and it doesn't break my Nvidia GPU passthrough either. However, at each reboot my changes to "/usr/share/libvirt/cpu_map/x86_features.xml" revert to the original state, thus breaking my VMs again. So how can I make the changes I made to that file permanent?
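     One workaround that comes to mind (just a sketch; the /boot/config/custom path is a made-up location, and it assumes the go file runs early enough, before libvirt is started) would be to keep the patched file on the flash drive and copy it back at every boot via /boot/config/go:

         # /boot/config/go -- runs at every boot
         # copy the patched CPU map back over the stock one before any VMs start
         cp /boot/config/custom/x86_features.xml /usr/share/libvirt/cpu_map/x86_features.xml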
  15. Yes, it doesn't help. I also have another issue, or perhaps it is related: when I try to add a custom network for Dockers (192.168.8.0/24, gateway: 192.168.8.1, DHCP: not set) I lose all network connectivity to Unraid. If I reboot the machine via the console, the console says "ipv4 not set" after the reboot, but if I log in and run "ifconfig eth0" I can see it gets the correct IP (I have set it to always get the same IP via the DHCP server on my LAN), yet I still cannot reach the web GUI. EDIT: A change to the network settings seems to have solved it. Not that any real change was made, just toggling a random setting back and forth to get the update button to become enabled and then pressing update, and then it was solved....
  16. After I updated to beta24, br0 disappeared and I do not understand how to get it back; I can't run any of my VMs because it is missing. What I have done so far is to delete and recreate the docker.img (found this in another thread), but it didn't help. What should I do next? unraid-diagnostics-20200712-0908.zip
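     For what it's worth, a quick way to check from the console whether the bridge exists at all (standard iproute2 commands, nothing Unraid-specific):

         # list all bridge interfaces known to the kernel
         ip -br link show type bridge

         # show whether br0 exists and has an address
         ip addr show br0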
  17. Somehow it seems you have read my solution but misunderstood it or something. From the looks of your XML, you should start from the top of my solution post.
  18. Yes. I got rid of these lines:

      <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
      </qemu:commandline>

      And I changed the CPU mode from host-passthrough to host-model and removed the cache line, so the cpu section looks like this:

      <cpu mode='host-model' check='none'>
        <topology sockets='1' dies='1' cores='4' threads='2'/>
        <feature policy='require' name='topoext'/>
      </cpu>

      I also added a line to run kvm hidden:

      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vpindex state='on'/>
        <synic state='on'/>
        <reset state='on'/>
        <vendor_id state='on' value='1234567890ab'/>
        <frequencies state='on'/>
      </hyperv>
      <kvm>
        <hidden state='on'/>
      </kvm>

      The part about hyper-v I had before, but I thought it might be good to have here for reference. I have also, since earlier, stubbed all devices in the actual IOMMU group and then passed them through with the multifunction option, like this:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x21' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x21' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x21' slot='0x00' function='0x2'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x21' slot='0x00' function='0x3'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
      </hostdev>

      No BIOS is passed since it is a secondary card. I still got error code 43 when booting the VM, but going into the device manager, deleting the card showing error 43 and then doing a rescan of hardware got it back to a functioning state again. I hope someone else will be helped by this; it has taken me many hours of googling, reading and many newly created VMs to get to this solution. On the plus side, I'm also seeing a rather large increase in performance for the VM by changing to host-model instead of host-passthrough. I was under the assumption that passthrough was the best option. (It might not be that change that caused the uptick in performance, it might as well be the updated KVM; I can't really be sure, since my processor won't boot in passthrough mode on this kernel, but going from the previous kernel with passthrough to this kernel with host-model I'm seeing a significant uptick in performance.)
  19. I got the dreaded error code 43 after updating to this beta and applying this... What to do now? It was sort of my work-from-home production machine, with some software from work installed: Autocad, Magicad and some other stuff. Is there a way to get rid of error 43? (I have dumped the BIOS and I have the part of the .xml where you specify the vendor ID; the graphics card in question is an RTX 2070.) Pretty please? My guess would be that something in those arguments tipped the driver off... EDIT: Yes, I'm pretty sure something in there breaks Nvidia passthrough; I tried it on my other VM with a GT 1030 passed through, with the exact same result, error code 43.... EDIT2: Attaching my diagnostics if it could help someone to solve this. unraid-diagnostics-20200624-2210.zip
  20. I just built a server with a Threadripper and I'm letting Unraid, the dockers and some VMs that don't use much resources (mail server, home-assistant, XP, tvheadend and various others) share my first 8 cores; then I have two other VMs that need all the performance they can get out of their assigned cores. My question is: is it better to pin the emulator of those last-mentioned VMs to just a single core of the first eight, or to all eight and let Unraid handle the load balancing?
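     For context, this is roughly what the two options look like in the <cputune> section of the VM XML (a sketch; the core numbers are just illustrative):

         <cputune>
           <!-- option 1: pin the emulator threads to a single core of the first eight -->
           <emulatorpin cpuset='0'/>
         </cputune>

         <cputune>
           <!-- option 2: let the emulator threads float over all of the first eight cores -->
           <emulatorpin cpuset='0-7'/>
         </cputune>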
  21. So I'm building a new setup for my Unraid server and I'm in need of more SATA ports, and it seems that LSI is the general consensus on how to solve that. I'm looking at the three cards in the title and I don't really know which one to get... Or rather, I'm looking at the 9201 vs the 9400, as the 9300 and the 9400 seem to cost about the same. My issue is that I spent a little too much on other hardware and sort of forgot about this part, and I'm in Europe so it's hard to find the 9400 at a low cost. I have 12 drives that need to be connected to the card and I need room to connect at least another 2. All drives connected to it will be mechanical. So the main question is: is the 9201 going to be a bottleneck, as it is PCIe 2.0 vs. PCIe 3.0 for the 9400?
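     A rough back-of-the-envelope check (assuming the card sits in an x8 slot and that a mechanical drive peaks at around 250 MB/s sequential):

         PCIe 2.0 x8: 8 lanes x ~500 MB/s = ~4000 MB/s
         14 drives:   14 x ~250 MB/s      = ~3500 MB/s worst case (all drives streaming at once, e.g. during a parity check)

     So even on PCIe 2.0 there should be a little headroom for 14 mechanical drives, and in normal use only a few drives are busy at the same time.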
  22. I think it would be possible to do with any docker that has ffmpeg, like "liquid dl" for instance. A small script would do it, something like:

      # remux every .mkv in the current directory to .mp4 without re-encoding
      for i in *.mkv; do
        ffmpeg -i "$i" -codec copy "${i%.*}.mp4"
      done
  23. Is there anywhere to report this issue? I tried updating to 138 today but still had the same issue ("address not available"), so I rolled back to 136 again.