Nephilgrim

Members
  • Posts: 18
  • Joined
  • Last visited

  • Reputation: 2

  1. Someone may have to correct me, but I think you need this tag inside <cpu> for nested virtualization to work, before anything else you have tried:

         <cpu mode='host-passthrough' check='none'>
           ...
           <feature policy='require' name='vmx'/>
           ...
         </cpu>

     At least it was needed when I tried some nested virtualization a year or so ago.
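     For reference, a fully spelled-out <cpu> element might look like the hypothetical sketch below; the topology values are placeholders for illustration and should match your actual pinning:

         <!-- hypothetical example: host-passthrough CPU exposing nested virtualization -->
         <cpu mode='host-passthrough' check='none'>
           <topology sockets='1' cores='4' threads='2'/>
           <!-- 'vmx' is the Intel flag; on AMD hosts the equivalent feature is 'svm' -->
           <feature policy='require' name='vmx'/>
         </cpu>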
  2. Change to q35 instead of i440fx. I cannot start my 1070 unless I create the VM as q35 and use the modified ROM file (as in spaceinvaderone's tutorial). A sketch of the two XML pieces involved is below.
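     This is purely illustrative; the machine version and the ROM path are placeholder assumptions, not values from the original post:

         <!-- hypothetical: select the q35 machine type in the <os> block -->
         <os>
           <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
         </os>

         <!-- hypothetical: attach the dumped/patched vBIOS inside the GPU's <hostdev> -->
         <rom file='/mnt/user/isos/gtx1070-patched.rom'/>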
  3. Today I booted the server + dockers + VMs like always, and no trace of the errors.
  4. Tried to force it: launched a speedtest + fast.com + downloaded a game from Steam to stress the connection (300/300 fiber), and no new log entries since I started the server + main VM 2 hours ago. Edit: Also tried to launch a speedtest from the unraid plugin (outside the VM), but no more log entries either. So far I only see those log messages once (or once per NIC port) when a VM is booted. I suppose this could be expected behaviour, since they stop.
  5. I'm on RC7. For me the errors only appear 4 times and stop when I start a VM (I have 4 NIC ports bonded; I don't know if it's a coincidence). Maybe it's because I only have 2 dockers with custom IPs, but so far I have no log spamming as reported by other people here.
  6. Still, it seems that a valid workaround is to keep a dummy VM defined (pointing to nothing) just so that not ALL VMs are running.
  7. Indeed, disabling the NAT option (and, because of this, UPnP too) fixes the issue for any docker in bridge mode.
  8. Same here with PiHole when a VM is on too. The webpage is inaccessible until I shut down any VM that is running. I'm not even able to ping the IP of the PiHole docker. Edit: Just noticed that about 30 seconds after I disabled the WireGuard plugin, everything started to work. Can someone confirm the same behaviour?
  9. I just noticed this part. Did you try to use MSI_util? https://www.dropbox.com/s/gymaipg6vprd508/MSI_util.zip?dl=0 Apply it to your NVIDIA sound output and reboot.
  10. I'm glad the emulatorpin change made some difference, but you are right, there's still room to improve. Check a couple of things when the VM is up and running.

      1- When the VM is up, go to an unraid console and type:

          lspci -s 42:00 -vv | grep Lnk

      If I'm not mistaken, that is your GPU + GPU audio, and it will return the current link speed (a sample of the output is at the end of this post). I hope it will match x16 width and x8 speed. This is just to be sure the GPU is applying the correct speed. I had an issue some time ago where the GPU got "loose" and, even while working properly, the speeds and width varied with each boot. I had to unplug and reattach it to solve it, and it cost me 2 weeks to realize.

      2- Try a test with software like PassMark to check the raw CPU performance, just to see if you are close to bare metal. Even if you are using ~half of your cores, it can help to see if you're being CPU-bottlenecked somehow.

      3- AFTER these 2 tests, try this (I'm unsure it can make a difference, but it is more "realistic"). Change this in your XML:

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <rom file='/mnt/user/appdata/Zotac.GTX1080Ti.11264.170316-fixed.rom'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
            </source>
            <alias name='hostdev1'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </hostdev>

      to this:

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <rom file='/mnt/user/appdata/Zotac.GTX1080Ti.11264.170316-fixed.rom'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
            </source>
            <alias name='hostdev1'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
          </hostdev>

      The only change is the "bus" & "function" of the 2nd device (plus multifunction='on' on function 0, which libvirt requires when two functions share a guest slot). Those 2 are your GPU + GPU audio (HDMI, DisplayPort). They are in the same place: 42:00.0 and 42:00.1. With this change, the VM will mount a 03:00.0 and a 03:00.1 instead of splitting the device in two (03 and 04, as if they were different PCIe devices). It (normally) should not make any difference at all, but since in your case there's no way to know where the computer is suffering the loss, I think it's better to try everything. Back up your XML before changing it.
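      As a reference for step 1, the output of that lspci command looks roughly like this hypothetical excerpt (the exact lines vary by card and kernel):

          LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
          LnkSta: Speed 8GT/s, Width x16

      LnkCap is what the link can do; LnkSta is what was actually negotiated at boot.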
  11. I'm on mobile, but the first thing I noticed is that you set the emulatorpin to a core/thread from node 0. Try using 11 cores on your VM from node 1 instead of 12, and change the emulatorpin to that freed core/thread. Probably that will get rid of your latency issue, since right now a core in node 0 is taking control of your VM in node 1. A sketch of the idea is below.
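      A purely hypothetical sketch of that layout, assuming node 1 owns cores 8-15 with hyperthread siblings 24-31 (check lscpu for your real numbering):

          <!-- hypothetical: pin guest vCPUs to node-1 cores, leaving core 8/24 out of the VM -->
          <cputune>
            <vcpupin vcpu='0' cpuset='9'/>
            <vcpupin vcpu='1' cpuset='25'/>
            <vcpupin vcpu='2' cpuset='10'/>
            <vcpupin vcpu='3' cpuset='26'/>
            <!-- ...and so on for the remaining vCPUs... -->
            <!-- the emulator threads get the node-1 core/thread pair the VM doesn't use -->
            <emulatorpin cpuset='8,24'/>
          </cputune>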
  12. Yes. This is what numad does, but it does it automatically; I forced it manually by creating the node allocations. Strict only works for me with node 0; a strict node 1 ends up using all of node 0 too. It only works sometimes on the first VM, and it's inconsistent.
  13. For the people experiencing random NUMA allocation, this is what I did to work around it. In the XML template I had to add two things: one inside the cpu tag, and one up in the numatune tag (see the sketch below). With this I avoided the random NUMA node allocations between nodes, forcing it to node 1. Pastebin example here. The only thing I noticed is that my VM now takes 10~15 more seconds to start booting. Gaming performance looks good.
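      The original snippets did not survive here; given the description, and the 'preferred' mode mentioned in the next post, they plausibly looked something like this hypothetical reconstruction (nodeset, CPU range and memory size are placeholders):

          <!-- hypothetical: steer guest memory allocation toward host NUMA node 1 -->
          <numatune>
            <memory mode='preferred' nodeset='1'/>
          </numatune>

          <!-- hypothetical: a guest NUMA cell declared inside the <cpu> element (memory in KiB) -->
          <cpu mode='host-passthrough'>
            <numa>
              <cell id='0' cpus='0-11' memory='16777216'/>
            </numa>
          </cpu>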
  14. That would be good if strict worked, but strict on node 1 allocates my whole VM in node 0, and setting it to preferred is the only way to at least allocate it where I want. Sent from my Mi MIX 2 using Tapatalk
  15. I would be satisfied if they added numad first. I'm getting tired of having to restart unRAID so my VMs get all their RAM from the right NUMA node. For me there are more performance benefits if the RAM I'm getting is strictly local to the cores I'm pinning to the VMs.