Nephilgrim

Everything posted by Nephilgrim

  1. You just fixed all the issues I had! I changed both mount points and now it performs as it should!
  2. Someone may correct me, but I think you need this tag inside <cpu> for nested virtualization to work, before anything else you have tried: <cpu mode='host-passthrough' check='none'> ... <feature policy='require' name='vmx'/> ... </cpu> At least it was needed when I tried some nested virtualization a year or so ago.
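     For reference, a minimal sketch of how that could look inside the domain XML, assuming an Intel host (the topology line is just a placeholder for whatever you already pin; on an AMD host the feature would be 'svm' instead of 'vmx'):

        <cpu mode='host-passthrough' check='none'>
          <!-- placeholder topology; keep whatever matches your core pinning -->
          <topology sockets='1' cores='4' threads='2'/>
          <!-- expose the host's VT-x capability so the guest can run its own hypervisor -->
          <feature policy='require' name='vmx'/>
        </cpu>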
  3. Change to Q35 instead of i440fx. I cannot start my 1070 if I don't create the VM as Q35 + the modified ROM file (as in the SpaceInvaderOne tutorial).
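     For illustration, if you edit the XML by hand instead of using the template dropdown, the machine type lives in the <os> block, roughly like this (the exact version string depends on your QEMU build, so treat 'pc-q35-3.1' as a placeholder):

        <os>
          <!-- Q35 machine type; an i440fx VM would show something like 'pc-i440fx-3.1' here -->
          <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        </os>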
  4. Today I booted the server + dockers + VMs like always, and there is no trace of the errors.
  5. Tried to force it: launched a speedtest + fast.com + a Steam game download to stress the connection (300/300 fiber), and there have been no new log entries since I started the server + main VM 2 hours ago. Edit: Also tried launching a speedtest from the Unraid plugin (outside the VM), but no more logs either. So far I only see those log messages once (or once per NIC port) when a VM is booted. I suppose this could be expected behaviour, since they stop.
  6. I'm on RC7. For me the errors only appear 4 times and then stop when I start a VM (I have 4 NIC ports bonded, I don't know if that's a coincidence). Maybe it's because I only have 2 dockers with a custom IP, but so far I have no log spamming as reported by other people here.
  7. Still, it seems a valid workaround is to create a dummy VM pointing to nothing, just so you never end up with ALL VMs running.
  8. Indeed, disabling the NAT option (and, as a consequence, UPnP too) fixes the issue for any docker in bridge mode.
  9. Same here with Pi-hole when a VM is on too. The web page is inaccessible until I shut down any VM that is running; I'm not even able to ping the IP of the Pi-hole docker. Edit: Just noticed that about 30 seconds after I disabled the WireGuard plugin, everything started to work. Can someone confirm the same behaviour?
  10. I just noticed this part. Did you try using MSI_util? https://www.dropbox.com/s/gymaipg6vprd508/MSI_util.zip?dl=0 Apply it to your Nvidia sound output like this and reboot.
  11. I'm glad the emulatorpin change made some difference, but you're right, there's still room to improve. Check a couple of things when the VM is up and running.
      1- When the VM is up, go to an Unraid console and type: lspci -s 42:00 -vv | grep Lnk
      If I'm not mistaken, that is your GPU + GPU audio, and it will return the current link speed. I hope it will match x16 width and x8 speed. This is just to be sure the GPU is running at the correct speed. I had an issue some time ago where the GPU got "loose" and, even while working properly, the speed and width varied with each boot. I had to unplug and reattach it to solve it, and it took me 2 weeks to realize.
      2- Run a test with software like PassMark to check raw CPU performance, just to see if you are close to bare metal. Even if you are only using ~half of your cores, it can help to see if you are being CPU-bottlenecked somehow.
      3- AFTER these 2 tests, try this (I'm unsure it will make a difference, but it is more "realistic"). Change this in your XML:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/user/appdata/Zotac.GTX1080Ti.11264.170316-fixed.rom'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
      to this:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <rom file='/mnt/user/appdata/Zotac.GTX1080Ti.11264.170316-fixed.rom'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
        </hostdev>
      The only change is the "bus" & "function" of the 2nd device. Those 2 are your GPU + GPU audio (HDMI/DisplayPort); they are in the same place on the host: 42:00.0 and 42:00.1. With this change, the VM will see a 03:00.0 and a 03:00.1 instead of splitting the device in two (03 and 04, as if they were different PCIe devices). It (normally) should not make any difference at all, but since in your case there's no way to know where the computer is losing performance, I think it's better to try everything. Back up your XML before changing it.
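      One note on the second variant: depending on the libvirt version, putting a second function on the same guest slot may also require multifunction='on' on the function 0 address of the first device, i.e. something like:

        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>

      Otherwise libvirt may refuse to define the domain.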
  12. I'm on mobile, but the first thing I noticed is that you set the emulatorpin to a core/thread from node 0. Try using 11 cores on your VM from node 1 instead of 12, and change the emulatorpin to that freed core/thread. That will probably get rid of your latency issue, since right now a core in node 0 is driving a VM that lives on node 1.
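      A rough sketch of what that looks like in <cputune>, assuming (purely as an example) that node 1 owns CPUs 12-23 and their hyperthreads 36-47; the real numbers come from your own topology (lscpu or numactl --hardware):

        <cputune>
          <!-- vCPUs pinned only to node-1 cores/threads (example numbering) -->
          <vcpupin vcpu='0' cpuset='13'/>
          <vcpupin vcpu='1' cpuset='37'/>
          <!-- ...remaining vcpupin lines for the other node-1 cores... -->
          <!-- emulator threads pinned to the node-1 core left out of the VM -->
          <emulatorpin cpuset='12,36'/>
        </cputune>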
  13. Yes. This is what numad does, but it does it automatically. I forced it manually by creating the node allocations. Strict only works for me with node 0; strict on node 1 still pulls everything from node 0. It only works sometimes for the first VM, and even then it's inconsistent.
  14. For the people experiencing random NUMA allocation, this is what I did to work around it. I had to add two things in the XML template: one inside the cpu tag, and one up in the numatune tag. With this I avoided the random NUMA node allocation between nodes and forced it to node 1. Pastebin example here. The only thing I noticed is that my VM now takes 10~15 more seconds to start booting. Gaming performance looks good.
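      Based on the description in this and the following posts (preferred mode, node 1), the two additions were presumably along these lines (a sketch only; the memory size, topology, and CPU list are placeholders to match to your own VM):

        <numatune>
          <!-- prefer (rather than strictly require) memory from physical node 1 -->
          <memory mode='preferred' nodeset='1'/>
        </numatune>
        ...
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='6' threads='2'/>
          <numa>
            <!-- one guest NUMA cell covering all vCPUs; memory is in KiB and is a placeholder -->
            <cell id='0' cpus='0-11' memory='16777216' unit='KiB'/>
          </numa>
        </cpu>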
  15. That would be good if strict worked, but strict on node 1 allocates my whole VM in node 0, and setting it to preferred is the only way to, at least, allocate it where I want. Sent from my Mi MIX 2 via Tapatalk
  16. I would be satisfied if they added numad first. I'm getting tired of having to restart unRAID so my VMs get all their RAM from the right NUMA node. For me there are clear performance benefits when the RAM I'm getting is strictly local to the cores I'm pinning to the VMs.
  17. Same here. Somehow, setting it to "preferred" for node 1 in the XML makes it take 10~20 MB from node 0 and the rest from node 1. Sometimes it just splits randomly, and I don't know what causes it (probably dockers, or just Plex... or something else allocating RAM on node 1 even though all dockers are pinned to the CPU 0 cores). If set to "strict" it will take random allocations that can end up entirely on node 0 or split something like 75/25 between the 2 CPUs. When that happens the gaming VM suffers a performance hit, with some stuttering, FPS loss, and sometimes audio latency with USB headsets. I hope someday we can get this fixed, or at least enforce it properly, to avoid the performance and latency issues. In my case this is the last step to finally reach near bare-metal performance in the VMs: the times when the RAM allocation is almost 100% on the proper CPU node, I am able to reach it. But, since this is sometimes just random, I have to stop/start the array a couple of times to get it. Sent from my Mi MIX 2 via Tapatalk
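      If you want to see how the split actually looks on the host, something like this from the Unraid console should show per-node memory usage for the running QEMU processes, assuming numastat/numactl are available on your build (-p accepts a PID or a process-name pattern):

        # per-NUMA-node memory usage of every process matching "qemu"
        numastat -p qemu
        # overall size and free memory of each node on the host
        numactl --hardware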
  18. lspci -s 00:00.0 -vv | grep Lnk
      Change 00:00.0 to the address that matches your device. On that motherboard it should be 84:00.0, 83 and 82 for the CPU2 PCIe slots; check what link speed is detected by Unraid. Example, my GTX 1070:
        # lspci -s 84:00.0 -vv | grep Lnk
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
        LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
        LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)
        LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
      Speed appears as downgraded since the card isn't currently being used by the system; if I run the same command while a game is running, it shows the full 8GT/s. Try that with both your card and the NVMe disk to see if the PCI bandwidth is correctly detected. If not, maybe something in the BIOS is not right. Example for a USB 3.0 PCIe card I have in the 1x slot of CPU2:
        # lspci -s 82:00.0 -vv | grep Lnk
        LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <4us, L1 unlimited
        LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
        LnkSta: Speed 5GT/s (ok), Width x1 (ok)
        LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
  19. I just signed up to confirm this. I had to create all my VMs with Q35 due to this issue when drivers are installed under i440fx with an MSI RX 470 Gaming X; the only way to work around the problem was converting/creating my VMs as Q35. Additionally, with Q35 I can have Hyper-V turned ON with a GTX 1070 without suffering error 34 (or was it 43?). I don't know if it's just a coincidence (the combination of my motherboard and GPU, some BIOS option, or whatever), but I currently get 30 or 40 more FPS in games thanks to being able to enable Hyper-V in a gaming VM. I'm using the latest drivers without having to fix anything at all. For me at least, being unable to use Q35 would be a serious NO to continuing with unRAID right now.
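      For context, the usual way to keep the Nvidia driver's Code 43 check happy while leaving the Hyper-V enlightenments enabled is the vendor_id trick, optionally combined with hiding the KVM signature, roughly like this (the vendor_id value is an arbitrary string of up to 12 characters, and whether the kvm hidden element is also needed depends on the driver version):

        <features>
          <acpi/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <!-- arbitrary string; masks the Hyper-V vendor from the Nvidia driver -->
            <vendor_id state='on' value='none'/>
          </hyperv>
          <kvm>
            <!-- hide the KVM hypervisor signature from the guest -->
            <hidden state='on'/>
          </kvm>
        </features>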