Ravi Parmar

Members
  • Posts: 5
  • Joined
Everything posted by Ravi Parmar

  1. Thank you for the response. I went back and redid everything you suggested, and the good news is that the VM log no longer shows the error it used to. But the VM itself still does not recognize the GPU: inside the VM, the display adapter shown is still the standard virtual display, not the NVIDIA GPU.
  2. Hi guys, I am running into the exact same issue that chis34 reported. I have the exact same GPU as well, and I have tried different combinations of BIOS versions and drivers, but to no avail. For the life of me I can't figure this out. I thought I was the only one with this issue, so thank you, chis34, for posting it. Please, someone help!
  3. Hey guys, I am new to Unraid and have been drinking from the fire hose for the last couple of months, but I love what it stands for and what it lets us do. I have watched almost all of the video tutorials from @Spaceinvaderone and many other folks, and they sparked a lot of my interest in getting my Unraid setup done. I followed his video on unlocking the NVIDIA GPU to allow more than three sessions at a time, and that seems to be working. Plex seems to be doing well in a Docker container (it seems to crash the server in the middle of the night, but that's another topic!). I have also installed the NVIDIA driver app from the community, which installs the drivers for Docker containers. Interestingly, I believe it states that it only works with Docker: if a Docker container and a VM try to use the same GPU at the same time, it won't work. Yet I see plenty of YouTube videos where some sort of hack seems to work for people, though 100% of them are on older Unraid versions like 6.8 or 6.9; you're lucky to find anything even for 6.10.

     Anyway, back to my issue. I think I am running into one issue with three flavors. I am trying to set up a Windows 11 VM on Unraid 6.11.1, using the latest ISO from Microsoft and the latest VirtIO driver ISO for the VM template (virtio-win-0.1.221-1.iso). I can get through the install fine the way @Spaceinvaderone does: first pass through VNC, then shut down the VM and change the graphics and sound devices to the NVIDIA GPU and its audio device.

     Flavor 1: The VM boots and I can reach it over RDP, but the VM knows nothing about the GPU. If I try to install the NVIDIA drivers inside the VM, the installer reports that it cannot find an NVIDIA device. Meanwhile, the VM log in Unraid shows:

     2022-10-19T17:38:14.494279Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:01:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0"}: Failed to mmap 0000:01:00.0 BAR 1. Performance may be slow

     I have attached a screenshot of the VM template setup.

     Flavor 2: If I add the optional GPU BIOS ROM file in the VM template, I get the same error with the ROM file appended, and the VM will not boot:

     qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:01:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0","romfile":"/mnt/disk1/Download/RandomStuff/nvidia_ventus_2x_oc_3060_12gb_ddr6_pcie4/vbios/MSI.RTX3060.12288.210118_2.rom"}: Failed to mmap 0000:01:00.0 BAR 1. Performance may be slow.

     Flavor 3: Depending on the BIOS selection, I think, with my NVIDIA RTX 3060 selected as the GPU the VM boots and I can RDP in, but then the log shows:

     qemu-system-x86_64: vfio_region_write(0000:01:00.0:region1+0x4552b, 0x0,1) failed: Device or resource busy

     This message repeats constantly for as long as the VM is running, to the point that the syslog fills the entire /var/log to its default 128 MB.

     My apologies in advance if the same case has been reported in other threads; any help in figuring this out is appreciated. I need the ability to run multiple Docker containers and VMs at the same time, all using the NVIDIA GPU via passthrough. Or is that just a pipe dream?
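For context on the "Failed to mmap ... BAR 1" message in flavors 1 and 2: this error typically means something on the host still holds the GPU's memory region when QEMU tries to map it, and a frequent culprit is the EFI boot-console framebuffer staying attached to the card. A commonly suggested workaround (a sketch only, not confirmed for this poster's system) is to add kernel parameters to the Unraid boot configuration in /boot/syslinux/syslinux.cfg so the host console never claims the passthrough GPU; the label name and initrd line below mirror a typical default Unraid entry and may differ on a given install:

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off video=vesafb:off initrd=/bzroot
```

Back up syslinux.cfg before editing, and note that with the EFI framebuffer disabled the local console output on that GPU goes dark after boot.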
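On the flavor 3 symptom (the repeating vfio_region_write line filling /var/log): until the underlying device conflict is resolved, a stopgap is to truncate the runaway log before it exhausts the 128 MB tmpfs. A minimal sketch, with a hypothetical helper name and an illustrative size cap (neither is an Unraid default):

```shell
#!/bin/sh
# rotate_if_big: truncate a log file in place once it passes a byte cap,
# so a repeating error message cannot fill the /var/log tmpfs.
# Truncating in place keeps the same inode, so the syslog daemon
# continues writing to the file without a restart.
rotate_if_big() {
  logfile="$1"
  cap="$2"
  size=$(wc -c < "$logfile")
  if [ "$size" -gt "$cap" ]; then
    : > "$logfile"   # empty the file without deleting it
  fi
}
```

Run from cron (e.g. every few minutes) against /var/log/syslog with a cap well under 128 MB; it is a band-aid, not a fix for the busy-device error itself.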