a_g Posted January 4, 2022 (edited)

I've been bitten by the Unraid bug thanks to a friend, and we have spent the past 3 days trying to solve this issue to no avail. I currently have a SFF PC built that has 3 drives, with one of those housing my Windows 10 OS. I wanted to avoid reinstalling Windows if possible and would rather keep the current image. I'm going to post pretty much anything I can think of that would be helpful to know regarding my current setup. I'd really appreciate any help, as I would love to get Unraid working.

One thing I do want to note is that I've tried my VM on both the i440fx machine type and Q35, but I tend to have better results on i440fx. As long as I don't change any settings from the initial config further below, I can reboot the VM itself without issue. If I reboot my server, it defaults back to the 800x600 resolution. I confirmed I am getting a Code 43 in devmgmt.msc once I sign into my VM.

----------------------------------------------------------------------

#Current PC Specs:
CPU | Ryzen 3600x
Motherboard | Asus Strix X570-i
GPU | Nvidia RTX 2070s
Memory | G.Skill 32 GB (2x16 GB 3200 MHz)

----------------------------------------------------------------------

#BIOS Settings:
ACS = Enabled
SR-IOV = Enabled

----------------------------------------------------------------------

**VM XML Configuration**

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10 Personal</name>
  <uuid>My_UUID_REDACTED</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>20971520</memory>
  <currentMemory unit='KiB'>20971520</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='10'/>
    <vcpupin vcpu='6' cpuset='5'/>
    <vcpupin
vcpu='7' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e63f1e8b-03bc-ba94-7452-4bb2993a2245_VARS-pure-efi.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:96:11:18'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/rtx2070s.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0x085b'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

----------------------------------------------------------------------

**VM Log**

-boot strict=on \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:96:11:18,bus=pci.0,addr=0x2 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device vfio-pci,host=0000:0a:00.0,id=hostdev0,bus=pci.0,addr=0x4,romfile=/mnt/user/isos/vbios/rtx2070s.rom \
-device vfio-pci,host=0000:0a:00.1,id=hostdev1,bus=pci.0,addr=0x5 \
-device vfio-pci,host=0000:01:00.0,id=hostdev2,bus=pci.0,addr=0x6 \
-device vfio-pci,host=0000:07:00.1,id=hostdev3,bus=pci.0,addr=0x8 \
-device vfio-pci,host=0000:07:00.3,id=hostdev4,bus=pci.0,addr=0x9 \
-device vfio-pci,host=0000:0a:00.2,id=hostdev5,bus=pci.0,addr=0xa \
-device vfio-pci,host=0000:0a:00.3,id=hostdev6,bus=pci.0,addr=0xb \
-device usb-host,hostbus=7,hostaddr=6,id=hostdev7,bus=usb.0,port=1 \
-device usb-host,hostbus=7,hostaddr=5,id=hostdev8,bus=usb.0,port=2 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

2022-01-04 00:11:09.947+0000: Domain id=1 is tainted: high-privileges
2022-01-04 00:11:09.947+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2022-01-04T00:11:12.985283Z qemu-system-x86_64: -device vfio-pci,host=0000:0a:00.0,id=hostdev0,bus=pci.0,addr=0x4,romfile=/mnt/disk1/isos/vbios/rtx2070s.rom: Failed to mmap 0000:0a:00.0 BAR 3.
Performance may be slow
2022-01-04T00:11:26.185589Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x41a8, 0x1fdf6f01,8) failed: Device or resource busy
2022-01-04T00:11:26.185624Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x41a8, 0x1fdf6f01,8) failed: Device or resource busy
2022-01-04T00:11:26.185678Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x41b0, 0x1fdf6e01,8) failed: Device or resource busy
2022-01-04T00:11:26.185866Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x36000, 0xabcdabcd,4) failed: Device or resource busy
2022-01-04T00:11:26.185887Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x36004, 0xabcdabcd,4) failed: Device or resource busy
2022-01-04T00:11:26.185903Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x36008, 0xabcdabcd,4) failed: Device or resource busy
2022-01-04T00:11:26.185915Z qemu-system-x86_64: vfio_region_write(0000:0a:00.0:region3+0x3600c, 0xabcdabcd,4) failed: Device or resource busy
2022-01-04T00:11:26.186039Z qemu-system-x86_64: vfio_region_read(0000:0a:00.0:region3+0x35000, 4) failed: Device or resource busy
2022-01-04T00:11:28.018207Z qemu-system-x86_64: vfio_region_read(0000:0a:00.0:region3+0x35000, 4) failed: Device or resource busy

----------------------------------------------------------------------

Unraid Configuration:

**Settings**

* Added the "User Scripts" plugin so I could dump the GPU vBIOS per SpaceInvaderOne's video. During script execution I was prompted to restart the server as expected, since I only have the single GPU.

PCIe ACS Override | No
VFIO allow unsafe interrupts | No

**Tools**

**System Devices (bound to VFIO at boot)**

* Group 14: Non-Volatile memory controller: Sandisk Corp WD Blue SN500 (512 GB M.2 that has Windows installed)
* Group 19: 07:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
  07:00.1 USB controller: Advanced Micro Devices, Inc.
[AMD] Matisse USB 3.0 Host Controller. This controller is bound to vfio; connected USB devices are not visible.
  07:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller. This controller is bound to vfio; connected USB devices are not visible.
* Group 25: 0a:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
  0a:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
  0a:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1). This controller is bound to vfio; connected USB devices are not visible.
  0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)

Edited January 4, 2022 by a_g: added discovery of Code 43 in devmgmt.msc
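For readers checking their own hardware against a post like this, the IOMMU group membership above can be listed with a short loop over /sys/kernel/iommu_groups. The sketch below builds a mock sysfs tree under /tmp (so it runs anywhere); group 25 and the two GPU functions mirror the listing above, and on a real Unraid host you would point GROUPS_DIR at /sys/kernel/iommu_groups instead.

```shell
#!/bin/sh
# Sketch: enumerate IOMMU groups and their member devices.
# The mock tree stands in for /sys/kernel/iommu_groups on a real host.
GROUPS_DIR=/tmp/mock_iommu_groups
rm -rf "$GROUPS_DIR"
mkdir -p "$GROUPS_DIR/25/devices/0000:0a:00.0" \
         "$GROUPS_DIR/25/devices/0000:0a:00.1"

for group in "$GROUPS_DIR"/*; do
  echo "IOMMU group $(basename "$group"):"
  for dev in "$group"/devices/*; do
    echo "  $(basename "$dev")"
  done
done
```

On a live system, set GROUPS_DIR=/sys/kernel/iommu_groups; every function you pass through should sit in a group with nothing you want to keep on the host.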
ghost82 Posted January 4, 2022

Try setting your GPU at 0a:00.x as a multifunction device:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/rtx2070s.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
</hostdev>
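A quick way to sanity-check that a multifunction layout like the one above was applied: all four guest-side address elements should share bus 0x00 / slot 0x04, with functions 0x0 through 0x3. A minimal sketch (the scratch file path is made up for illustration; on a live host you could instead pipe `virsh dumpxml <vm-name>` into the same grep):

```shell
#!/bin/sh
# Write the guest-side addresses from the suggested config to a scratch
# file, then confirm all four functions land on the same slot (0x04).
cat > /tmp/gpu-hostdev-check.xml <<'EOF'
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
EOF

count=$(grep -c "slot='0x04'" /tmp/gpu-hostdev-check.xml)
echo "functions on slot 0x04: $count"   # expect 4
```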
ghost82 Posted January 4, 2022

7 hours ago, a_g said:
vfio_region_write(0000:0a:00.0:region3+0x36008, 0xabcdabcd,4) failed: Device or resource busy

Your GPU may be in use by something else: paste the output of cat /proc/iomem, and also attach your diagnostics file.
a_g Posted January 4, 2022 (Author)

@ghost82 I modified the XML to enable multifunction and also modified the next hostdev block per what you copied above; no dice. I ended up having to delete the VM and recreate it, as it froze up a few times, which required a hard reset. Same issue with the newly created VM. (I enabled multifunction from the very start of this new VM.)

Output of cat /proc/iomem:

root@Apollo:~# cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000cdbff : Video ROM
000f0000-000fffff : System ROM
00100000-09d1efff : System RAM
04000000-04a00816 : Kernel code
04c00000-04e4afff : Kernel rodata
05000000-05127f7f : Kernel data
05471000-055fffff : Kernel bss
09d1f000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a210fff : ACPI Non-volatile Storage
0a211000-b7138017 : System RAM
b7138018-b7157e57 : System RAM
b7157e58-b7158017 : System RAM
b7158018-b7169067 : System RAM
b7169068-c347afff : System RAM
c347b000-c347bfff : Reserved
c347c000-c4ce5fff : System RAM
c4ce6000-c4ce6fff : Reserved
c4ce7000-ca021fff : System RAM
ca022000-ca35dfff : Reserved
ca33c000-ca33ffff : MSFT0101:00
ca340000-ca343fff : MSFT0101:00
ca35e000-ca5b8fff : ACPI Tables
ca5b9000-cacd4fff : ACPI Non-volatile Storage
cacd5000-cbbfefff : Reserved
cbbff000-ccffffff : System RAM
cd000000-cfffffff : Reserved
d0000000-fec02fff : PCI Bus 0000:00
d0000000-e20fffff : PCI Bus 0000:0a
d0000000-dfffffff : 0000:0a:00.0
d0000000-dfffffff : vfio-pci
e0000000-e1ffffff : 0000:0a:00.0
e1000000-e12fffff : efifb
e2000000-e203ffff : 0000:0a:00.2
e2000000-e203ffff : vfio-pci
e2040000-e204ffff : 0000:0a:00.2
e2040000-e204ffff : vfio-pci
f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
f0000000-f7ffffff : Reserved
f0000000-f7ffffff : pnp 00:00
fb000000-fc0fffff : PCI Bus 0000:0a
fb000000-fbffffff : 0000:0a:00.0
fb000000-fbffffff : vfio-pci
fc080000-fc083fff : 0000:0a:00.1
fc080000-fc083fff : vfio-pci
fc084000-fc084fff : 0000:0a:00.3
fc084000-fc084fff : vfio-pci
fc200000-fc8fffff : PCI Bus 0000:02
fc200000-fc8fffff : PCI Bus 0000:03
fc200000-fc3fffff : PCI Bus 0000:07
fc200000-fc2fffff : 0000:07:00.3
fc200000-fc2fffff : vfio-pci
fc300000-fc3fffff : 0000:07:00.1
fc300000-fc3fffff : vfio-pci
fc400000-fc4fffff : PCI Bus 0000:09
fc400000-fc4007ff : 0000:09:00.0
fc400000-fc4007ff : ahci
fc500000-fc5fffff : PCI Bus 0000:08
fc500000-fc5007ff : 0000:08:00.0
fc500000-fc5007ff : ahci
fc600000-fc6fffff : PCI Bus 0000:06
fc600000-fc61ffff : 0000:06:00.0
fc600000-fc61ffff : igb
fc620000-fc623fff : 0000:06:00.0
fc620000-fc623fff : igb
fc700000-fc7fffff : PCI Bus 0000:05
fc700000-fc703fff : 0000:05:00.0
fc800000-fc8fffff : PCI Bus 0000:04
fc800000-fc803fff : 0000:04:00.0
fc800000-fc803fff : nvme
fc900000-fcbfffff : PCI Bus 0000:0c
fc900000-fc9fffff : 0000:0c:00.3
fc900000-fc9fffff : xhci-hcd
fca00000-fcafffff : 0000:0c:00.1
fca00000-fcafffff : ccp
fcb00000-fcb07fff : 0000:0c:00.4
fcb00000-fcb07fff : vfio-pci
fcb08000-fcb09fff : 0000:0c:00.1
fcb08000-fcb09fff : ccp
fcc00000-fccfffff : PCI Bus 0000:01
fcc00000-fcc03fff : 0000:01:00.0
fcc00000-fcc03fff : vfio-pci
fd200000-fd2fffff : Reserved
fd200000-fd2fffff : pnp 00:01
fd380000-fd3fffff : amd_iommu
fd400000-fd5fffff : Reserved
fea00000-fea0ffff : Reserved
feb80000-fec01fff : Reserved
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec10000-fec10fff : Reserved
fec10000-fec10fff : pnp 00:04
fed00000-fed00fff : Reserved
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
fed81500-fed818ff : AMDI0030:00
fedc0000-fedc0fff : pnp 00:04
fedc2000-fedcffff : Reserved
fedd4000-fedd5fff : Reserved
fee00000-ffffffff : PCI Bus 0000:00
fee00000-fee00fff : Local APIC
fee00000-fee00fff : pnp 00:04
ff000000-ffffffff : Reserved
ff000000-ffffffff : pnp 00:04
100000000-82f37ffff : System RAM
82f380000-82fffffff : Reserved
root@Apollo:~#

Diagnostics attached. Thanks for your help!

apollo-diagnostics-20220104-0624.zip
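One way to read a dump like this is to look at which kernel driver has claimed each of the GPU's memory regions. A small sketch that filters an excerpt of the output above (the excerpt is hardcoded so the snippet is self-contained; on the host you would grep /proc/iomem directly):

```shell
#!/bin/sh
# Filter a /proc/iomem excerpt for regions belonging to the GPU at
# 0000:0a:00.0 and show which driver claims each one. The here-doc is an
# excerpt of the output posted above; substitute /proc/iomem on a host.
cat > /tmp/iomem.sample <<'EOF'
d0000000-dfffffff : 0000:0a:00.0
  d0000000-dfffffff : vfio-pci
e0000000-e1ffffff : 0000:0a:00.0
  e1000000-e12fffff : efifb
fb000000-fbffffff : 0000:0a:00.0
  fb000000-fbffffff : vfio-pci
EOF

# Each GPU BAR plus the line claiming it; anything other than vfio-pci
# nested under a BAR (here: efifb) will block passthrough of that region.
grep -A1 '0000:0a:00\.0' /tmp/iomem.sample
```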
ghost82 Posted January 4, 2022

OK, as expected the GPU is in use by efifb.

From your log (qemu):
2022-01-04T11:21:31.329168Z qemu-system-x86_64: vfio_region_read(0000:0a:00.0:region3+0x35000, 4) failed: Device or resource busy

From your log (syslog):
Jan 4 03:21:11 Apollo kernel: vfio-pci 0000:0a:00.0: BAR 3: can't reserve [mem 0xe0000000-0xe1ffffff 64bit pref]

From your /proc/iomem:
e1000000-e12fffff : efifb

As you can see, the VM can't use that memory region because it's in use by efifb. You need to add the following to your syslinux config, on the line that starts with "append":

video=efifb:off

Something like this:

append video=efifb:off initrd=/bzroot

Then reboot Unraid. Note that you will lose local display output for Unraid.
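After editing the syslinux config and rebooting, the change can be confirmed from the running kernel's command line. A minimal check, using a sample string so the snippet is self-contained (on the host, substitute the contents of /proc/cmdline for the hardcoded value):

```shell
#!/bin/sh
# Confirm video=efifb:off made it onto the kernel command line.
# The cmdline contents are simulated here; on Unraid read /proc/cmdline.
cmdline='BOOT_IMAGE=/bzimage video=efifb:off initrd=/bzroot'

case "$cmdline" in
  *video=efifb:off*) echo 'efifb disabled' ;;
  *)                 echo 'efifb still active' ;;
esac
```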
a_g Posted January 4, 2022 (Author)

Adding video=efifb:off eliminated the "failed: Device or resource busy" errors in the log, and I am no longer seeing any "failed to mmap" errors that refer to the GPU vBIOS. The only remaining issue seems to be that the resolution is still locked at 800x600, with a Code 43 error in devmgmt.msc. From what I've read, this seems to be an issue with Nvidia allowing the card to be used in a VM.
ghost82 Posted January 4, 2022 (Solution)

Did you install the Nvidia proprietary drivers? The newest drivers allow consumer cards to be used in a VM.
a_g Posted January 4, 2022 (Author)

@ghost82 Interesting... I did that on my last VM, but through the GeForce Control Panel, and it didn't work. I just manually downloaded the latest drivers from the Nvidia site and that worked perfectly; it even persisted through a server reboot! This is great! Thank you so much for your help!
ghost82 Posted January 4, 2022

21 minutes ago, a_g said:
I did that on my last VM but through the GeForce Control Panel

It should work through the GeForce panel too: maybe you tried to install them without setting the GPU as a multifunction device. Sometimes drivers fail because they expect to see the different portions of the GPU on the same bus/slot but on different functions.
a_g Posted January 4, 2022 (Author)

Ah yes, I actually did try to install them prior to setting the GPU as a multifunction device. That explains it. Thanks again!