Bozo

Members
  • Posts: 3


  1. Hey @SpaceInvaderOne, sorry for bumping like this, but is there anything else I could try? I found two other threads with a similar problem, but one was never resolved and the other just worked around it by switching the PCIe slot.
  2. Warm thanks for the reply. So far, both VMs were running without vBIOS files; Unraid says a vBIOS is optional, so I tried running the VMs without one and it worked. The VM with the GTX 980 Ti is set up with OVMF. The VM with the GTX 650 is indeed set up with SeaBIOS, as it's the 1 GB version; it runs fine as long as the 650 isn't in the top PCIe slot.

     Following your advice, I tried running the first VM with the 980 Ti as the only discrete GPU (iGPU for Unraid):
       • Without vBIOS: interestingly, I had to reinstall the GPU driver (the system acted as if a new GPU had been installed), but after the installation exactly the same problem appeared: the system is extremely laggy and unresponsive.
       • With the provided vBIOS (selected in the dropdown in the VM config; I'm not sure if that's the proper way to do it): no display.

     With the 980 Ti as the only GPU (iGPU disabled):
       • Without vBIOS: no display.
       • With the provided vBIOS (selected in the dropdown in the VM config; again, I'm not sure if that's the proper way to do it): no display.

     Initially both GPUs were in the same IOMMU group, so I selected the "Downstream" setting in the PCIe ACS override dropdown and added "pcie_acs_override=id:8086:1905" to the configuration. My IOMMU groups with both GPUs installed and the iGPU set as primary (980 Ti in the top slot, VM laggy; 650 in the 2nd slot, VM working fine).

     Sorry for the late reply; it took me quite a bit of time to configure and test all the options above.

     Edit: I tried updating the BIOS, but it seems to have failed. After the update the PC restarted twice, and when I checked, the BIOS version still says F8 (I was trying to update to F10), so I think one of the BIOS chips (the motherboard has dual BIOS) might be cooked now, and I'm afraid to try that again.
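     For anyone following along: the IOMMU grouping discussed above can be dumped from the Unraid console with a short shell script. This is a sketch using standard Linux sysfs paths, not an Unraid-specific tool; it prints nothing if the IOMMU is disabled, and `lspci` must be available for the device names.

     ```shell
     #!/bin/sh
     # List every IOMMU group and the PCI devices it contains.
     # On a host without an IOMMU enabled, the glob matches nothing
     # and the script simply produces no output.
     for g in /sys/kernel/iommu_groups/*; do
       [ -d "$g" ] || continue
       echo "IOMMU group ${g##*/}:"
       for d in "$g"/devices/*; do
         [ -e "$d" ] || continue
         # lspci -nns prints the device name plus its vendor:device IDs
         echo "  $(lspci -nns "${d##*/}")"
       done
     done
     ```

     Two GPUs sharing a group, as described above, show up under the same group number; after enabling the ACS override and rebooting, they should appear in separate groups.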
  3. Hello guys, I'm completely new to Unraid and I wanted to host 2 gaming VMs for LAN parties, so that there's one less tower to bring. In the end I got it to work following those excellent guides from Spaceinvader, but funnily enough, my GPUs have to be in the 2nd and 3rd PCIe slots (I set the iGPU as primary for Unraid). When either of the GPUs is in the first slot, that VM's graphics start lagging horribly after installing the NVIDIA driver (with just the basic Windows driver it runs smoothly, but I cannot run any games like that). If the GPUs are in the 2nd and 3rd PCIe slots it runs fine with no problems; I've already run games like this. It doesn't matter which of the 2 cards is in which slot: I switched them around with exactly the same results, and the VM with the GPU in the first PCIe slot lags horribly, no matter which GPU it is.

     The problem is that in my computer case the PSU interferes with the GPU in the 3rd slot, so I could only test it with the PSU lying around loosely, which is not acceptable for LAN parties. Is there any way to make it run fine in the 1st slot? I'll post any configuration files that are needed; I just don't know what to post.

     My setup is:
       • Aorus Z390 PRO WIFI
       • i9-9900K
       • 2x8 GB 3333 MHz HyperX Predator
       • ASUS GTX 980 Ti STRIX OC
       • Palit GTX 650 (just for testing; it'll probably be a friend's 1060 for the LAN parties)

     I sincerely thank you in advance for any help or responses.
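  As a side note for anyone debugging a similar passthrough setup: a quick sanity check from the Unraid console is to see which kernel driver each display controller is bound to (a GPU reserved for passthrough should show vfio-pci on the host, not nvidia or nouveau). This is a minimal sketch using standard sysfs paths, so it should work on any Linux host, not just Unraid:

  ```shell
  #!/bin/sh
  # Print each PCI display controller and the kernel driver bound to it.
  # PCI class 0x0300xx = VGA controller, 0x0302xx = 3D controller.
  for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/class" ] || continue
    case "$(cat "$dev/class")" in
      0x0300*|0x0302*)
        if [ -L "$dev/driver" ]; then
          # The driver entry is a symlink into /sys/bus/pci/drivers/<name>
          drv=$(basename "$(readlink "$dev/driver")")
        else
          drv="(no driver bound)"
        fi
        echo "${dev##*/}: $drv"
        ;;
    esac
  done
  ```

  If the card destined for a VM still shows a host GPU driver here, the VM will not get clean access to it, which is one common cause of the kind of lag described above.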