Robin R Posted October 25, 2022

System context (if relevant):
Unraid Version: 6.11.1
AM5 platform: Ryzen 7950X + 128 GB RAM, MSI X670E MEG MB
slot 0 - MSI GeForce RTX 3090
slot 1 - PNY T1000 8GB
slot 2 - Allegro Pro USB-C 8-Port PCIe Card

From unraid info:
Model: Custom
M/B: Micro-Star International Co., Ltd. MEG X670E ACE (MS-7D69) Version 1.0 - s/n: [REMOVED]
BIOS: American Megatrends International, LLC. Version 1.25. Dated: 09/26/2022
CPU: AMD Ryzen 9 7950X 16-Core @ 4500 MHz
HVM: Enabled
IOMMU: Enabled
Cache: 1 MB, 16 MB, 64 MB
Memory: 128 GiB DDR5 (max. installable capacity 128 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500
Kernel: Linux 5.19.14-Unraid x86_64
OpenSSL: 1.1.1q

BOUND IOMMU:
IOMMU group 12: [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71) [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 14: [10de:2204] 01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1) [10de:1aef] 01:00.1 Audio device: NVIDIA Corporation GA102 High Definition Audio Controller (rev a1)
IOMMU group 16: [10de:1ff0] 03:00.0 VGA compatible controller: NVIDIA Corporation TU117GL [T1000 8GB] (rev a1) [10de:10fa] 03:00.1 Audio device: NVIDIA Corporation Device 10fa (rev a1)
IOMMU group 32: [1b21:2142] 17:00.0 USB controller: ASMedia Technology Inc. ASM2142/ASM3142 USB 3.1 Host Controller This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 33: [1b21:2142] 19:00.0 USB controller: ASMedia Technology Inc. ASM2142/ASM3142 USB 3.1 Host Controller This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 34: [1b21:2142] 1c:00.0 USB controller: ASMedia Technology Inc. ASM2142/ASM3142 USB 3.1 Host Controller This controller is bound to vfio, connected USB devices are not visible.
IOMMU group 35: [1b21:2142] 1d:00.0 USB controller: ASMedia Technology Inc. 
ASM2142/ASM3142 USB 3.1 Host Controller This controller is bound to vfio, connected USB devices are not visible. IOMMU group 36: [1002:164e] 1e:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raphael (rev c1) IOMMU group 37: [1002:1640] 1e:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller IOMMU group 38: [1022:1649] 1e:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] VanGogh PSP/CCP IOMMU group 41: [1022:15e3] 1e:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller UNBOUND/UNRAID BOUND IOMMU: IOMMU group 0: [1022:14da] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14da IOMMU group 1: [1022:14db] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14db IOMMU group 2: [1022:14db] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14db IOMMU group 3: [1022:14db] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14db IOMMU group 4: [1022:14da] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14da IOMMU group 5: [1022:14db] 00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14db IOMMU group 6: [1022:14db] 00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14db IOMMU group 7: [1022:14da] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14da IOMMU group 8: [1022:14da] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14da IOMMU group 9: [1022:14da] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14da IOMMU group 10: [1022:14dd] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14dd IOMMU group 11: [1022:14dd] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14dd IOMMU group 13: [1022:14e0] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e0 [1022:14e1] 00:18.1 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Device 14e1 [1022:14e2] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e2 [1022:14e3] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e3 [1022:14e4] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e4 [1022:14e5] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e5 [1022:14e6] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e6 [1022:14e7] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e7 IOMMU group 15: [15b7:5030] 02:00.0 Non-Volatile memory controller: Sandisk Corp Device 5030 (rev 01) [N:0:8224:1] disk WD_BLACK SN850X 4000GB__1 /dev/nvme0n1 4.00TB IOMMU group 17: [1022:43f4] 04:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f4 (rev 01) IOMMU group 18: [1022:43f5] 05:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [15b7:5030] 06:00.0 Non-Volatile memory controller: Sandisk Corp Device 5030 (rev 01) [N:1:8224:1] disk WD_BLACK SN850X 4000GB__1 /dev/nvme1n1 4.00TB IOMMU group 19: [1022:43f5] 05:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1d6a:94c0] 07:00.0 Ethernet controller: Aquantia Corp. AQC113CS NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 03) IOMMU group 20: [1022:43f5] 05:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) IOMMU group 21: [1022:43f5] 05:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f4] 09:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f4 (rev 01) [1022:43f5] 0a:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:07.0 PCI bridge: Advanced Micro Devices, Inc. 
[AMD] Device 43f5 (rev 01) [1022:43f5] 0a:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f5] 0a:0d.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [15b7:5030] 0b:00.0 Non-Volatile memory controller: Sandisk Corp Device 5030 (rev 01) [N:2:8224:1] disk WD_BLACK SN850X 4000GB__1 /dev/nvme2n1 4.00TB [14c3:0616] 0c:00.0 Network controller: MEDIATEK Corp. MT7922 802.11ax PCI Express Wireless Network Adapter [1b21:0612] 0d:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02) [2:0:0:0] disk ATA WDC WD101EFAX-68 0A81 /dev/sdb 10.0TB [3:0:0:0] disk ATA WDC WD101EFAX-68 0A81 /dev/sdc 10.0TB [1bb1:5018] 10:00.0 Non-Volatile memory controller: Seagate Technology PLC FireCuda 530 SSD (rev 01) [N:3:1:1] disk Seagate FireCuda 530 ZP4000GM30013__1 /dev/nvme3n1 4.00TB [1022:43f7] 11:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43f7 (rev 01) Bus 001 Device 001 Port 1-0 ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002 Port 1-2 ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub Bus 001 Device 003 Port 1-6 ID 1462:7d69 Micro Star International MYSTIC LIGHT Bus 002 Device 001 Port 2-0 ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 002 Port 2-2 ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub Bus 002 Device 003 Port 2-3 ID 0bda:9210 Realtek Semiconductor Corp. RTL9210 M.2 NVME Adapter Bus 002 Device 004 Port 2-2.3 ID 090c:1000 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.) Flash Drive [1022:43f6] 12:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43f6 (rev 01) IOMMU group 22: [1022:43f5] 05:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f7] 13:00.0 USB controller: Advanced Micro Devices, Inc. 
[AMD] Device 43f7 (rev 01) Bus 003 Device 001 Port 3-0 ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 002 Port 3-6 ID 0db0:961e Micro Star International USB Audio Bus 003 Device 003 Port 3-7 ID 0e8d:0616 MediaTek Inc. Wireless_Device Bus 004 Device 001 Port 4-0 ID 1d6b:0003 Linux Foundation 3.0 root hub IOMMU group 23: [1022:43f5] 05:0d.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43f5 (rev 01) [1022:43f6] 14:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43f6 (rev 01) IOMMU group 24: [10b5:8724] 15:00.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 25: [10b5:8724] 16:01.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 26: [10b5:8724] 16:02.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 27: [10b5:8724] 16:03.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 28: [10b5:8724] 16:04.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 29: [10b5:8724] 16:08.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 30: [10b5:8724] 16:09.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 31: [10b5:8724] 16:0a.0 PCI bridge: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA (rev ca) IOMMU group 39: [1022:15b6] 1e:00.3 USB controller: Advanced Micro Devices, Inc. 
[AMD] Device 15b6 Bus 013 Device 001 Port 13-0 ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 013 Device 002 Port 13-1 ID 051d:0002 American Power Conversion Uninterruptible Power Supply Bus 014 Device 001 Port 14-0 ID 1d6b:0003 Linux Foundation 3.0 root hub IOMMU group 40: [1022:15b7] 1e:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b7 Bus 015 Device 001 Port 15-0 ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 015 Device 002 Port 15-1 ID 2109:0100 VIA Labs, Inc. USB-C dongle Bus 016 Device 001 Port 16-0 ID 1d6b:0003 Linux Foundation 3.0 root hub IOMMU group 42: [1022:15b8] 1f:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b8 Bus 017 Device 001 Port 17-0 ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 018 Device 001 Port 18-0 ID 1d6b:0003 Linux Foundation 3.0 root hub VM definition: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>Windows 11 Work</name> <uuid>1cddcec7-6540-da7d-1179-3ccd0d2f9ea7</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/> </metadata> <memory unit='KiB'>33554432</memory> <currentMemory unit='KiB'>33554432</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>22</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='17'/> <vcpupin vcpu='2' cpuset='3'/> <vcpupin vcpu='3' cpuset='19'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='20'/> <vcpupin vcpu='6' cpuset='5'/> <vcpupin vcpu='7' cpuset='21'/> <vcpupin vcpu='8' cpuset='6'/> <vcpupin vcpu='9' cpuset='22'/> <vcpupin vcpu='10' cpuset='7'/> <vcpupin vcpu='11' cpuset='23'/> <vcpupin vcpu='12' cpuset='8'/> <vcpupin vcpu='13' cpuset='24'/> <vcpupin vcpu='14' cpuset='9'/> <vcpupin vcpu='15' cpuset='25'/> <vcpupin vcpu='16' cpuset='11'/> <vcpupin vcpu='17' cpuset='27'/> <vcpupin vcpu='18' cpuset='13'/> <vcpupin vcpu='19' cpuset='29'/> <vcpupin vcpu='20' cpuset='15'/> <vcpupin vcpu='21' cpuset='31'/> </cputune> <os> <type 
arch='x86_64' machine='pc-i440fx-7.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader> <nvram>/etc/libvirt/qemu/nvram/1cddcec7-6540-da7d-1179-3ccd0d2f9ea7_VARS-pure-efi-tpm.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='11' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 11 Work/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 11 Work/vdisk2.img'/> <target dev='hdd' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/Win11_22H2_English_x64v1.iso'/> <target dev='hda' bus='ide'/> <readonly/> <boot order='2'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.221-1.iso'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' 
index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:d6:a5:b5'/> <source bridge='br0'/> <model type='virtio-net'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <tpm model='tpm-tis'> <backend type='emulator' version='2.0' persistent_state='yes'/> </tpm> <audio id='1' type='none'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </source> <rom file='/mnt/user/isos/vbios/T1000 8GB.extracted.gpuz.rom'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' 
function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x1c' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </hostdev> <memballoon model='none'/> </devices> </domain>

Context: I used GPU-Z extracted BIOSes for the 3090 and T1000 and removed the BIOS header (as per instructions from SpaceInvaderOne). The VM works with the T1000/3090 until I add the Windows feature "Virtual Machine Platform", i.e.:

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Once the feature is enabled inside Windows (with no change to the VM settings), GPU passthrough no longer works. The system starts with the Windows spinner, then freezes. I can "remote" into the machine and see it is stuck at 800x600, with the NVIDIA card reporting the dreaded error 43. However, this is my 2nd Unraid system. My other AM4-based system works perfectly: it has an identically configured VM (using the Win11 template) with an identical PNY T1000 passed through and "Virtual Machine Platform" working perfectly, and prior to enabling the feature the PNY T1000 also works perfectly. Switching to the RTX 3090 does not help. I am required to run this Windows feature for my work, so it's not optional for me, and it doesn't matter whether the system is actively using the feature or not.

NVIDIA driver 2022-09-12 31.0.15.1748. I also tried a newer 2022-10 driver and it doesn't help. Neither the latest stable Quadro driver nor the latest stable RTX driver helps.
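For anyone reproducing the header-removal step: it can be scripted instead of hex-edited by hand. This is only a sketch of the idea, assuming the clean vBIOS begins at the first 0x55AA PCI option-ROM signature (which is where a GPU-Z dump's vendor header ends); the file names are examples, and the result should still be sanity-checked in a hex editor before use.

```shell
# Sketch: strip the vendor header from a GPU-Z vBIOS dump by cutting at the
# first 0x55 0xAA PCI option-ROM signature. File names are examples only.
strip_rom_header() {
  in="$1"; out="$2"
  # grep -abo prints "byteoffset:match"; the pattern is the two raw signature bytes
  off=$(grep -abo "$(printf '\125\252')" "$in" | head -n1 | cut -d: -f1)
  if [ -z "$off" ]; then
    echo "no 0x55AA signature found in $in" >&2
    return 1
  fi
  # tail -c +N is 1-based, so this skips exactly $off header bytes
  tail -c +$((off + 1)) "$in" > "$out"
}
# Example usage (hypothetical paths):
# strip_rom_header "T1000 8GB.gpuz.rom" "T1000 8GB.extracted.gpuz.rom"
```

Note this takes the *first* signature match; if the same two bytes happen to occur inside the vendor header, the cut point will be wrong, which is why verifying the output is worth the extra minute.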
I have tried:
- enabling/disabling the driver [with or without reboot] -- if I enable it again, the screen paints once (from the initial frozen spinner) but stops updating even though the driver shows as working, and the screen resolution options are still not available, as if the driver isn't working
- uninstalling the driver + re-enabling [with or without reboot] -- boots without freezing the screen on the spinner, but then the screen goes black as soon as the driver re-initializes
- KVM VM hiding:
<features> <acpi/> <apic/> <hyperv mode='custom'> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='2D76A8B352F1'/> </hyperv> <kvm> <hidden state='on'/> </kvm> <vmport state='off'/> <ioapic driver='kvm'/> </features>
- Enabling/removing pcie_acs_override / unsafe_interrupts:
kernel /bzimage append isolcpus=1,3-15,17,19-31 pcie_acs_override=downstream,multifunction vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
Normally it's set to:
kernel /bzimage append isolcpus=1,3-15,17,19-31 initrd=/bzroot

VM logs:
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.221-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \ -device '{"driver":"ide-cd","bus":"ide.0","unit":1,"drive":"libvirt-1-format","id":"ide0-0-1"}' \ -netdev tap,fd=37,id=hostnet0 \ -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:d6:a5:b5","bus":"pci.0","addr":"0x2"}' \ -chardev pty,id=charserial0 \ -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \ -chardev socket,id=charchannel0,fd=35,server=on,wait=off \ -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \ -chardev 'socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/6-Windows 11 Work-swtpm.sock' \ 
-tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \ -device '{"driver":"tpm-tis","tpmdev":"tpm-tpm0","id":"tpm0"}' \ -audiodev '{"id":"audio1","driver":"none"}' \ -device '{"driver":"vfio-pci","host":"0000:03:00.0","id":"hostdev0","bus":"pci.0","addr":"0x6","romfile":"/mnt/user/isos/vbios/T1000 8GB.extracted.gpuz.rom"}' \ -device '{"driver":"vfio-pci","host":"0000:03:00.1","id":"hostdev1","bus":"pci.0","addr":"0x8"}' \ -device '{"driver":"vfio-pci","host":"0000:17:00.0","id":"hostdev2","bus":"pci.0","addr":"0x9"}' \ -device '{"driver":"vfio-pci","host":"0000:1c:00.0","id":"hostdev3","bus":"pci.0","addr":"0xa"}' \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on char device redirected to /dev/pts/1 (label charserial0) 2022-10-25T18:51:50.029303Z qemu-system-x86_64: terminating on signal 15 from pid 7479 (/usr/sbin/libvirtd) 2022-10-25 18:51:52.449+0000: shutting down, reason=shutdown 2022-10-25 19:40:46.888+0000: Starting external device: TPM Emulator /usr/bin/swtpm socket --ctrl 'type=unixio,path=/run/libvirt/qemu/swtpm/7-Windows 11 Work-swtpm.sock,mode=0600' --tpmstate dir=/var/lib/libvirt/swtpm/1cddcec7-6540-da7d-1179-3ccd0d2f9ea7/tpm2,mode=0600 --log 'file=/var/log/swtpm/libvirt/qemu/Windows 11 Work-swtpm.log' --terminate --tpm2 2022-10-25 19:40:46.901+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 5.19.14-Unraid, hostname: AM5 LC_ALL=C \ PATH=/bin:/sbin:/usr/bin:/usr/sbin \ HOME='/var/lib/libvirt/qemu/domain-7-Windows 11 Work' \ XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-7-Windows 11 Work/.local/share' \ XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-7-Windows 11 Work/.cache' \ XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-7-Windows 11 Work/.config' \ /usr/local/sbin/qemu \ -name 'guest=Windows 11 Work,debug-threads=on' \ -S \ -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-7-Windows 11 Work/master-key.aes"}' \ -blockdev 
'{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \ -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/1cddcec7-6540-da7d-1179-3ccd0d2f9ea7_VARS-pure-efi-tpm.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \ -machine pc-i440fx-7.1,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \ -accel kvm \ -cpu host,migratable=on,topoext=on,hv-time=on,host-cache-info=on,l3-cache=off \ -m 32768 \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":34359738368}' \ -overcommit mem-lock=off \ -smp 22,sockets=1,dies=1,cores=11,threads=2 \ -uuid 1cddcec7-6540-da7d-1179-3ccd0d2f9ea7 \ -display none \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=36,server=on,wait=off \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=localtime \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pci.0","addr":"0x7.0x7"}' \ -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pci.0","multifunction":true,"addr":"0x7"}' \ -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pci.0","addr":"0x7.0x1"}' \ -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pci.0","addr":"0x7.0x2"}' \ -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x3"}' \ -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 11 Work/vdisk1.img","node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ 
-blockdev '{"node-name":"libvirt-4-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-4-storage"}' \ -device '{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0x4","drive":"libvirt-4-format","id":"virtio-disk2","bootindex":1,"write-cache":"on"}' \ -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 11 Work/vdisk2.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \ -device '{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0x5","drive":"libvirt-3-format","id":"virtio-disk3","write-cache":"on"}' \ -blockdev '{"driver":"file","filename":"/mnt/user/isos/Win11_22H2_English_x64v1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \ -device '{"driver":"ide-cd","bus":"ide.0","unit":0,"drive":"libvirt-2-format","id":"ide0-0-0","bootindex":2}' \ -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.221-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \ -device '{"driver":"ide-cd","bus":"ide.0","unit":1,"drive":"libvirt-1-format","id":"ide0-0-1"}' \ -netdev tap,fd=37,id=hostnet0 \ -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:d6:a5:b5","bus":"pci.0","addr":"0x2"}' \ -chardev pty,id=charserial0 \ -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \ -chardev socket,id=charchannel0,fd=35,server=on,wait=off \ -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \ 
-chardev 'socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/7-Windows 11 Work-swtpm.sock' \ -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \ -device '{"driver":"tpm-tis","tpmdev":"tpm-tpm0","id":"tpm0"}' \ -audiodev '{"id":"audio1","driver":"none"}' \ -device '{"driver":"vfio-pci","host":"0000:03:00.0","id":"hostdev0","bus":"pci.0","addr":"0x6","romfile":"/mnt/user/isos/vbios/T1000 8GB.extracted.gpuz.rom"}' \ -device '{"driver":"vfio-pci","host":"0000:03:00.1","id":"hostdev1","bus":"pci.0","addr":"0x8"}' \ -device '{"driver":"vfio-pci","host":"0000:17:00.0","id":"hostdev2","bus":"pci.0","addr":"0x9"}' \ -device '{"driver":"vfio-pci","host":"0000:1c:00.0","id":"hostdev3","bus":"pci.0","addr":"0xa"}' \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on char device redirected to /dev/pts/1 (label charserial0)

I have also tried:
- Legacy vs UEFI booting -- I normally boot my Unraid USB in legacy (non-UEFI) mode
- Removing "Hyper-V" support from the VM
- Making the PNY T1000 "multifunction='on'", binding it to the same virtual PCIe address with the audio as function 1 on the same virtual PCIe bus
- Re-creating the VM image from scratch with the VM template, fresh installation, etc. (just in case of corruption)

Absolutely nothing I have tried helps when "Virtual Machine Platform" is enabled. If I revert back to my pre-"Virtual Machine Platform" install then everything works once again, but that doesn't solve the issue, and given this feature works on my AM4 platform (with "Virtual Machine Platform" fully enabled), I don't understand why it won't work on my AM5 system. Normally I just pass through the vBIOS and I'm golden, with no extra hoops to jump through. As I understand it, NVIDIA removed the VM checks that used to cause error 43, but in my case it's come back. Help is very much appreciated, as I'm completely stumped and don't have a clue what to do next!
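Not a fix, but a host-side sanity check worth running before each VM start: confirm every passed-through device is actually bound to vfio-pci. A small sketch (the addresses below are the T1000 and ASMedia controllers from this system; adjust to your own layout, and note the SYSFS_PCI variable is a hypothetical override that exists only so the helper can be exercised outside a real host):

```shell
# Print the kernel driver currently bound to each passed-through PCI device.
# SYSFS_PCI is overridable for testing; on a real host it is the standard path.
SYSFS_PCI="${SYSFS_PCI:-/sys/bus/pci/devices}"
pci_driver() {
  link="$SYSFS_PCI/$1/driver"
  if [ -e "$link" ]; then
    basename "$(readlink -f "$link")"   # e.g. vfio-pci, nvidia, xhci_hcd
  else
    echo "(no driver)"
  fi
}
for dev in 0000:03:00.0 0000:03:00.1 0000:17:00.0 0000:1c:00.0; do
  echo "$dev -> $(pci_driver "$dev")"   # expect vfio-pci for all four
done
```

If any of these reports something other than vfio-pci at VM start, the passthrough problem is upstream of Windows entirely.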
Robin R Posted October 26, 2022 (edited)

I'm attaching an Unraid diagnostics file in case there's something in it that can help. I really hope someone has some ideas.

am5-diagnostics-20221026-1453.zip

Edited October 26, 2022 by Robin R
Robin R Posted October 29, 2022

Another update - I tried copying a working VM from the AM4-based system to the new non-working AM5-based system (including the referenced TPM in the XML). Once again, the graphics card in the copied VM does not work on the AM5 system and reports error 43. I'm going to investigate differences between the two systems. I really need this fixed, otherwise WSL2 won't work...
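Since "Virtual Machine Platform" boots a hypervisor inside the guest, one concrete difference worth comparing between the AM4 and AM5 hosts is whether KVM's nested SVM support is enabled. A sketch for checking it (the sysfs path is the standard one for the in-tree kvm_amd module; the optional file argument is there purely so the helper can be tested):

```shell
# Report whether the kvm_amd module has nested virtualization enabled.
# The guest needs nested SVM to run its own Hyper-V for VMP/WSL2.
nested_svm_state() {
  f="${1:-/sys/module/kvm_amd/parameters/nested}"
  if [ -r "$f" ]; then
    cat "$f"        # "1" or "Y" means nested virtualization is on
  else
    echo "unknown"  # module not loaded, or not an AMD host
  fi
}
nested_svm_state
```

If the two hosts report different values here, that alone could explain why the nested-hypervisor feature behaves differently on them.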
Robin R Posted October 30, 2022 (edited)

Another test - I installed Windows on an empty drive, booted directly into Windows (bare metal), installed the NVIDIA drivers, and turned on "Virtual Machine Platform". It works fine. Something about Unraid is causing the issue. The same issue also occurs under Unraid if you install the Hyper-V platform, which makes sense, as Virtual Machine Platform is a cut-down version of Hyper-V.

Edited October 30, 2022 by Robin R
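One detail that may matter here: once VMP/Hyper-V is enabled, Windows itself boots as a guest of its own hypervisor, so the NVIDIA driver is effectively looking at two hypervisors stacked. Something I'd diff between the working AM4 VM and this one is whether the guest CPU definition exposes the svm flag that the nested hypervisor needs. This is a hypothetical fragment to compare against, not a confirmed fix -- it takes the existing `<cpu>` block from this VM's XML and adds one `require` line:

```xml
<!-- Hypothetical: explicitly require nested SVM in the guest CPU definition.
     host-passthrough usually forwards it when the host allows nesting, but
     requiring it makes a missing capability fail loudly at VM start
     instead of silently producing a broken nested hypervisor. -->
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='11' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
  <feature policy='require' name='svm'/>
</cpu>
```

If the domain then refuses to start with an "svm not supported" style error, the problem is on the host side rather than in Windows.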
Robin R Posted October 31, 2022 (edited)

NVIDIA display container log (nvdisplay.container.exe) from inside the failing VM:

2022-10-31 12:19:27.214 ( 0.150) | INFO: [system] 391@Nvidia::Logging::Logger::Logger : 2022-Oct-31 12:19:27 : Logging init OK. Using configuration from HKLM for DefaultProcess, for the nvdisplay.container.exe.
2022-10-31 12:19:27.217 ( 0.153) | DEBUG: [UXD.NvXDCore.Module] 701@Nvidia::UXDriver::Core::NvXDCorePlugin::OnInitialize : Received OnInitialize() from NvContainer.
2022-10-31 12:19:27.219 ( 0.155) | DEBUG: [UXD.NvXDCore.Module] 703@Nvidia::UXDriver::Core::NvXDCorePlugin::OnInitialize : service name from NvContainer is NVDisplay.ContainerLocalSystem.
2022-10-31 12:19:27.220 ( 0.156) | DEBUG: [UXD.NvXDCore.Module] 711@Nvidia::UXDriver::Core::NvXDCorePlugin::OnInitialize : Registering AppId for NvXDCoreModule.
2022-10-31 12:19:27.221 ( 0.157) | DEBUG: [UXD.NvXDCore.Module] 716@Nvidia::UXDriver::Core::NvXDCorePlugin::OnInitialize : Registering server for NvXDCoreModule.
2022-10-31 12:19:27.224 ( 0.160) | DEBUG: [UXD.NvXDCore.Module] 727@Nvidia::UXDriver::Core::NvXDCorePlugin::OnInitialize : Plugin initialized successfully.
2022-10-31 12:19:27.239 ( 0.175) | DEBUG: [UXD.NvXDCore.Module] 740@Nvidia::UXDriver::Core::NvXDCorePlugin::OnStart : Received OnStart() from NvContainer.
2022-10-31 12:19:27.241 ( 0.177) | DEBUG: [UXD.NvXDCore.Module] 1110@Nvidia::UXDriver::Core::NvXDCorePlugin::WorkerThreadProc : Creating child processes in worker thread.
2022-10-31 12:19:27.242 ( 0.178) | DEBUG: [NvXDCore] 269@CreateChildProcesses : child processes Session id is 1.
2022-10-31 12:19:27.247 ( 0.183) | DEBUG: [NvXDCore] 275@CreateChildProcesses : launching sync from child processes.
2022-10-31 12:19:27.250 ( 0.186) | DEBUG: [UXD.SyncProxy] 70@Nvidia::UXDriver::Core::SyncProxy::SyncProxy : Create SyncProxy.
2022-10-31 12:19:27.250 ( 0.186) | DEBUG: [UXD.SyncProxy] 305@Nvidia::UXDriver::Core::SyncProxy::GetClassForHandler : Specifying StateDataSessionFilter as the handler object.
2022-10-31 12:19:27.257 ( 0.193) | DEBUG: [UXD.SyncProxy] 224@Nvidia::UXDriver::Core::SyncProxy::OnSessionChange : Session is changing from 0 to 1.
2022-10-31 12:19:27.257 ( 0.193) | WARNING: [NvXDCore] 44@PowerOnDGpu : Failed to open NvPciFilter driver interface.
2022-10-31 12:19:27.258 ( 0.194) | WARNING: [UXD.NvXDCore.Module] 1120@Nvidia::UXDriver::Core::NvXDCorePlugin::WorkerThreadProc : Power-On DGPU control failed.
2022-10-31 12:19:27.259 ( 0.195) | DEBUG: [UXD.NvXDCore.Module] 975@Nvidia::UXDriver::Core::NvXDCorePlugin::WaitControl : Starting WaitControl().
2022-10-31 12:19:27.259 ( 0.195) | DEBUG: [UXD.WmiBrightnessControl.Module] 699@Nvidia::UXDriver::Core::WmiBrightnessControl::HandleBrightnessChange : ACELOG: Begin.
2022-10-31 12:19:27.370 ( 0.306) | DEBUG: [UXD.WmiBrightnessControl.Module] 581@Nvidia::UXDriver::Core::WmiBrightnessControl::GetSetBrightnessSourceViaWmi : ACELOG: Failed to enumerate.
2022-10-31 12:19:27.371 ( 0.307) | INFO: [UXD.WmiBrightnessControl.Module] 733@Nvidia::UXDriver::Core::WmiBrightnessControl::HandleBrightnessChange : ACELOG: Current brightness source : 0 bBrighSrcSupported: 0.
2022-10-31 12:19:27.373 ( 0.309) | DEBUG: [UXD.WmiBrightnessControl.Module] 804@Nvidia::UXDriver::Core::WmiBrightnessControl::HandleBrightnessChange : ACELOG: End.
2022-10-31 12:19:33.408 ( 6.344) | DEBUG: [UXD.NvXDCore.Module] 884@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WM_WTSSESSION_CHANGE for : 1.
2022-10-31 12:19:33.409 ( 6.345) | DEBUG: [UXD.NvXDCore.Module] 889@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WTS_CONSOLE_CONNECT for session : 1.
2022-10-31 12:19:33.410 ( 6.346) | DEBUG: [UXD.NvXDCore.Module] 890@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : child processes gets called from session console connect.
2022-10-31 12:19:33.411 ( 6.347) | DEBUG: [NvXDCore] 269@CreateChildProcesses : child processes Session id is 1.
2022-10-31 12:19:33.412 ( 6.348) | DEBUG: [NvXDCore] 275@CreateChildProcesses : launching sync from child processes.
2022-10-31 12:19:33.415 ( 6.351) | DEBUG: [UXD.SyncProxy] 305@Nvidia::UXDriver::Core::SyncProxy::GetClassForHandler : Specifying StateDataSessionFilter as the handler object.
2022-10-31 12:19:33.417 ( 6.353) | DEBUG: [UXD.SyncProxy] 224@Nvidia::UXDriver::Core::SyncProxy::OnSessionChange : Session is changing from 1 to 1.
2022-10-31 12:19:33.418 ( 6.354) | DEBUG: [UXD.NvXDCore.Module] 884@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WM_WTSSESSION_CHANGE for : 1.
2022-10-31 12:19:33.419 ( 6.355) | DEBUG: [UXD.NvXDCore.Module] 889@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WTS_CONSOLE_CONNECT for session : 1.
2022-10-31 12:19:33.421 ( 6.357) | DEBUG: [UXD.NvXDCore.Module] 890@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : child processes gets called from session console connect.
2022-10-31 12:19:33.422 ( 6.358) | DEBUG: [NvXDCore] 269@CreateChildProcesses : child processes Session id is 1.
2022-10-31 12:19:33.423 ( 6.359) | DEBUG: [NvXDCore] 275@CreateChildProcesses : launching sync from child processes.
2022-10-31 12:19:33.425 ( 6.361) | DEBUG: [UXD.SyncProxy] 305@Nvidia::UXDriver::Core::SyncProxy::GetClassForHandler : Specifying StateDataSessionFilter as the handler object.
2022-10-31 12:19:33.426 ( 6.362) | DEBUG: [UXD.SyncProxy] 224@Nvidia::UXDriver::Core::SyncProxy::OnSessionChange : Session is changing from 1 to 1.
2022-10-31 12:19:33.649 ( 0.206) | INFO: [system] 391@Nvidia::Logging::Logger::Logger : 2022-Oct-31 12:19:33 : Logging init OK. Using configuration from HKLM for DefaultProcess, for the nvdisplay.container.exe.
2022-10-31 12:19:33.651 ( 0.208) | DEBUG: [UXD.NvXDSyncPlugin.Module] 118@Nvidia::UXDriver::Sync::NvXDSyncPlugin::GetPluginInfo : Received GetPluginInfo.
2022-10-31 12:19:33.799 ( 0.356) | DEBUG: [UXD.NvXDSyncPlugin.Module] 486@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnInitialize : Received OnInitialize() from NvContainer for session 1.
2022-10-31 12:19:33.802 ( 0.359) | DEBUG: [UXD.NvXDSyncPlugin.Module] 492@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnInitialize : Registering AppId for NvXDsyncModule.
2022-10-31 12:19:33.804 ( 0.361) | DEBUG: [UXD.NvXDSyncPlugin.Module] 497@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnInitialize : Registering server for NvXDsyncModule.
2022-10-31 12:19:33.813 ( 0.370) | DEBUG: [UXD.NvXDSyncPlugin.Module] 508@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnInitialize : Plugin initialized successfully.
2022-10-31 12:19:34.515 ( 1.072) | DEBUG: [UXD.NvXDSyncPlugin.Module] 517@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnStart : Received OnStart() from NvContainer for session 1.
2022-10-31 12:19:34.515 ( 1.072) | DEBUG: [UXD.NvXDSyncPlugin.Module] 534@Nvidia::UXDriver::Sync::NvXDSyncPlugin::OnStart : sync mutex handle is 000000000000066C.
2022-10-31 12:19:34.523 ( 1.080) | DEBUG: [UXD.NvXDSyncPlugin.Module] 551@Nvidia::UXDriver::Sync::NvXDSyncPlugin::WorkerThreadProc : worker thread started.
2022-10-31 12:19:34.546 ( 1.103) | DEBUG: [UXDriver.Sync.Startup] 308@CNvXDSyncModule::PreMessageLoop : premessageloop result is 0.
2022-10-31 12:19:34.547 ( 1.104) | DEBUG: [UXDriver.Sync.Startup] 351@CNvXDSyncModule::RegisterWithSyncProxy : trying for sync registration mutex.
2022-10-31 12:19:34.548 ( 1.105) | DEBUG: [UXDriver.Sync.Startup] 353@CNvXDSyncModule::RegisterWithSyncProxy : acquired sync registration mutex.
2022-10-31 12:19:34.570 ( 7.506) | DEBUG: [UXD.SyncProxy] 305@Nvidia::UXDriver::Core::SyncProxy::GetClassForHandler : Specifying StateDataSessionFilter as the handler object.
2022-10-31 12:19:34.586 ( 1.143) | DEBUG: [UXDriver.Sync.Startup] 364@CNvXDSyncModule::RegisterWithSyncProxy : Created an instance of SyncProxy class.
2022-10-31 12:19:34.587 ( 1.144) | DEBUG: [UXDriver.Sync.Startup] 370@CNvXDSyncModule::RegisterWithSyncProxy : GetClassObject for nvxdsync Finished. 2022-10-31 12:19:34.589 ( 1.146) | DEBUG: [UXDriver.Sync] 105@Nvidia::UXDriver::Sync::CNvXDSyncEngine::WorkerThreadProc : waiting for sync plugin to stop in worker thread. 2022-10-31 12:19:34.589 ( 1.146) | DEBUG: [UXDriver.Sync] 96@Nvidia::UXDriver::Sync::CNvXDSyncEngine::FinalConstruct : unload event was created for sync engine. 2022-10-31 12:19:34.591 ( 1.148) | DEBUG: [UXDriver.Sync.Startup] 376@CNvXDSyncModule::RegisterWithSyncProxy : Found Class object for Sync class and created an instance. 2022-10-31 12:19:34.592 ( 7.528) | DEBUG: [UXD.SyncProxy] 152@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : sync register session id: 1. 2022-10-31 12:19:34.593 ( 7.529) | DEBUG: [UXD.SyncProxy] 165@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : sync session is valid. 2022-10-31 12:19:34.598 ( 7.534) | DEBUG: [UXD.SyncProxy] 178@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : sync mutex handle is 00000000000007FC. 2022-10-31 12:19:34.599 ( 7.535) | DEBUG: [UXD.SyncProxy] 179@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : creating sync mutex handle error 183. 2022-10-31 12:19:34.600 ( 7.536) | DEBUG: [UXD.SyncProxy] 183@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : Locking the mutex to store Sync object for session 1. 2022-10-31 12:19:34.609 ( 7.545) | DEBUG: [UXD.SyncProxy] 213@Nvidia::UXDriver::Core::SyncProxy::RegisterWorker : Successfully set the Sync object for session 1 . Notifying listeners. 2022-10-31 12:19:34.610 ( 1.167) | DEBUG: [UXDriver.Sync.Startup] 384@CNvXDSyncModule::RegisterWithSyncProxy : Successfully registered worker with sync proxy. 2022-10-31 12:19:34.611 ( 1.168) | DEBUG: [UXDriver.Sync.Startup] 433@CNvXDSyncModule::StartUxdService : Creating APIX. 2022-10-31 12:19:34.659 ( 1.216) | INFO: [system] 391@Nvidia::Logging::Logger::Logger : 2022-Oct-31 12:19:34 : Logging init OK. 
Using configuration from HKLM for DefaultProcess, for the nvdisplay.container.exe. 2022-10-31 12:19:34.661 ( 1.218) | INFO: [UXDriver.ApiX.Features.HCloneEventHandler] 56@Nvidia::UXDriver::ApiX::Features::HCloneEventHandler::OpenHCloneGdiProc : Error loading UmdShim Library !!!. 2022-10-31 12:19:34.662 ( 1.219) | DEBUG: [UXDriver.ApiX.Engine] 133@Nvidia::UXDriver::ApiX::CNvApixEngine::FinalConstruct : in APIX final construct. 2022-10-31 12:19:34.665 ( 1.222) | DEBUG: [UXDriver.ApiX.Engine] 180@Nvidia::UXDriver::ApiX::CNvApixEngine::FinalConstruct : NvAPI_Initialize failed with error : -6 intimating nvvsvc. 2022-10-31 12:19:34.666 ( 1.223) | DEBUG: [UXDriver.ApiX.Engine] 197@Nvidia::UXDriver::ApiX::CNvApixEngine::FinalRelease : CALL. 2022-10-31 12:19:34.667 ( 1.224) | DEBUG: [UXDriver.ApiX.Engine] 234@Nvidia::UXDriver::ApiX::CNvApixEngine::FinalRelease : Done. 2022-10-31 12:19:34.661 ( 1.218) | DEBUG: [UXDriver.Sync.Startup] 460@CNvXDSyncModule::StartUxdService : failed in setting data source as APIX with -2147467259. 2022-10-31 12:19:34.662 ( 1.219) | ERROR: [UXDriver.Sync.Startup] 390@CNvXDSyncModule::RegisterWithSyncProxy : triggering uxd service start failed with -2147467259. 2022-10-31 12:19:34.663 ( 1.220) | DEBUG: [UXD.NvXDSyncPlugin.Module] 585@Nvidia::UXDriver::Sync::NvXDSyncPlugin::WorkerThreadProc : failed to open apix unload event. 2022-10-31 12:19:58.232 ( 31.168) | DEBUG: [UXD.NvXDCore.Module] 884@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WM_WTSSESSION_CHANGE for : 5. 2022-10-31 12:19:58.233 ( 31.169) | DEBUG: [UXD.NvXDCore.Module] 897@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WTS_SESSION_LOGON for session : 1. 2022-10-31 12:19:58.234 ( 31.170) | DEBUG: [UXD.NvXDCore.Module] 884@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WM_WTSSESSION_CHANGE for : 5. 
2022-10-31 12:19:58.235 ( 31.171) | DEBUG: [UXD.NvXDCore.Module] 897@Nvidia::UXDriver::Core::NvXDCorePlugin::OnSystemMessage : Received WTS_SESSION_LOGON for session : 1. I enabled NVIDIA logging to see if it would give anything meaningful. This is what I noticed:

2022-10-31 12:19:34.665 ( 1.222) | DEBUG: [UXDriver.ApiX.Engine] 180@Nvidia::UXDriver::ApiX::CNvApixEngine::FinalConstruct : NvAPI_Initialize failed with error : -6 intimating nvvsvc.

NvAPI_Initialize returned -6, which is NVAPI_NVIDIA_DEVICE_NOT_FOUND: "No NVIDIA display driver, or NVIDIA GPU driving a display, was found." That seems pretty pertinent to the issue, but I don't know what would cause this error. Edited October 31, 2022 by Robin R Quote Link to comment
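A host-side sanity check helps narrow a device-not-found error like this: if the card is still owned by a host driver rather than vfio-pci, the guest driver cannot see it. Below is a minimal sketch (Python, run on the Unraid host) that reads the standard Linux sysfs layout; the PCI addresses are the ones from the IOMMU listing above and are specific to this system, so adjust them for your own hardware.

```python
# Sketch: report which host driver owns each passed-through PCI function.
from pathlib import Path

def bound_driver(addr: str) -> str:
    """Return the driver bound to a PCI device, or 'none' if unbound/absent."""
    link = Path(f"/sys/bus/pci/devices/{addr}/driver")
    return link.resolve().name if link.exists() else "none"

def not_vfio(bindings: dict) -> list:
    """Addresses whose driver is anything other than vfio-pci."""
    return [a for a, d in bindings.items() if d != "vfio-pci"]

if __name__ == "__main__":
    addrs = ["0000:01:00.0", "0000:01:00.1"]  # RTX 3090 VGA + audio functions
    bindings = {a: bound_driver(a) for a in addrs}
    for a, d in bindings.items():
        print(a, "->", d)
    if not_vfio(bindings):
        print("WARNING: not bound to vfio-pci:", not_vfio(bindings))
```

Every address should report vfio-pci; anything else means the host claimed the device before vfio could, and the guest will see driver errors like the one in the log.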
SimonF Posted October 31, 2022 Share Posted October 31, 2022 (edited) 19 minutes ago, Robin R said: [quoted log snipped; identical to the post above] Have you tried turning on nesting? Edited October 31, 2022 by SimonF 1 Quote Link to comment
Robin R Posted October 31, 2022 Author Share Posted October 31, 2022 @SimonF I had not, but now I have tried. Unfortunately it appears not to help (assuming I did things correctly). I modified my flash boot as follows (using kvm_amd instead of kvm_intel, as I'm on an AMD chip) and rebooted Unraid:

kernel /bzimage append isolcpus=1,3-15,17,19-31 kvm_amd.nested=1 initrd=/bzroot

And I added <feature policy='require' name='vmx'/> to the CPU section:

<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='11' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
  <feature policy='require' name='vmx'/>
</cpu>

Assuming I did those steps correctly, it doesn't help. I also tried removing the driver and adding it back, but without any luck. Quote Link to comment
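Whether the kvm_amd.nested=1 flag actually took effect after the reboot can be confirmed from the module-parameter path in sysfs. A small sketch (assumes the standard Linux sysfs layout; substitute kvm_intel on Intel hosts):

```python
# Sketch: check whether nested virtualization is enabled for the KVM module.
from pathlib import Path

def parse_nested(raw: str) -> bool:
    """Kernels report the parameter as '1'/'0' or 'Y'/'N'."""
    return raw.strip() in ("1", "Y")

def nested_enabled(module: str = "kvm_amd") -> bool:
    p = Path(f"/sys/module/{module}/parameters/nested")
    return p.exists() and parse_nested(p.read_text())

if __name__ == "__main__":
    state = "on" if nested_enabled() else "off (or module not loaded)"
    print("nested virtualization:", state)
```

Running `cat /sys/module/kvm_amd/parameters/nested` at the Unraid console does the same thing in one line.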
Solution Robin R Posted November 1, 2022 Author Solution Share Posted November 1, 2022 (edited) WORKING NOW! Thank you @SimonF Actually, @SimonF was correct. I misunderstood the instructions and blindly added <feature policy='require' name='vmx'/>, which is wrong for two reasons. First, 'vmx' is an Intel CPU feature; the AMD equivalent is 'svm'. Second, specifying 'svm' is not compatible with 'host-passthrough', which does not work on my AM5/Ryzen 7950X system anyway. Once I changed the CPU section inside the XML from:

<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='11' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
</cpu>

to:

<cpu mode='host-model' check='partial'>
  <topology sockets='1' dies='1' cores='11' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>

and started my VM, a custom CPU specification was auto-generated, modeling my CPU's features:

<cpu mode='custom' match='exact' check='full'>
  <model fallback='forbid'>EPYC-Milan</model>
  <vendor>AMD</vendor>
  <topology sockets='1' dies='1' cores='11' threads='2'/>
  <feature policy='require' name='x2apic'/>
  <feature policy='require' name='tsc-deadline'/>
  <feature policy='require' name='hypervisor'/>
  <feature policy='require' name='tsc_adjust'/>
  <feature policy='require' name='avx512f'/>
  <feature policy='require' name='avx512dq'/>
  <feature policy='require' name='avx512ifma'/>
  <feature policy='require' name='avx512cd'/>
  <feature policy='require' name='avx512bw'/>
  <feature policy='require' name='avx512vl'/>
  <feature policy='require' name='avx512vbmi'/>
  <feature policy='require' name='avx512vbmi2'/>
  <feature policy='require' name='gfni'/>
  <feature policy='require' name='vaes'/>
  <feature policy='require' name='vpclmulqdq'/>
  <feature policy='require' name='avx512vnni'/>
  <feature policy='require' name='avx512bitalg'/>
  <feature policy='require' name='avx512-vpopcntdq'/>
  <feature policy='require' name='spec-ctrl'/>
  <feature policy='require' name='stibp'/>
  <feature policy='require' name='arch-capabilities'/>
  <feature policy='require' name='ssbd'/>
  <feature policy='require' name='avx512-bf16'/>
  <feature policy='require' name='cmp_legacy'/>
  <feature policy='require' name='virt-ssbd'/>
  <feature policy='disable' name='lbrv'/>
  <feature policy='disable' name='tsc-scale'/>
  <feature policy='disable' name='vmcb-clean'/>
  <feature policy='disable' name='pause-filter'/>
  <feature policy='disable' name='pfthreshold'/>
  <feature policy='require' name='rdctl-no'/>
  <feature policy='require' name='skip-l1dfl-vmentry'/>
  <feature policy='require' name='mds-no'/>
  <feature policy='require' name='pschange-mc-no'/>
  <feature policy='disable' name='pcid'/>
  <feature policy='require' name='topoext'/>
  <feature policy='disable' name='svm'/>
  <feature policy='disable' name='npt'/>
  <feature policy='disable' name='nrip-save'/>
  <feature policy='disable' name='svme-addr-chk'/>
</cpu>

WARNING: DO NOT COPY THE "custom" CPU SPECIFICATION ABOVE. It was auto-generated specifically for my CPU; it models my CPU and may or may not model other systems' CPUs correctly.
So the correct steps in my case were:

1) Extract the vbios from the GPU using Spaceinvader One's process.
2) Bind the GPU(s) to VFIO at boot.
3) Pass the vbios to the GPU in the virtual machine (in my case I had to remove the nvidia bios header, as per Spaceinvader One's videos).
4) Set my bios to boot my system in Legacy mode (non-UEFI mode).
5) Enable SVM in my bios (since I'm AMD).
6) Enable nested virtualization, in my case by adding kvm_amd.nested=1 (since I'm AMD) into my flash drive's boot settings:

kernel /bzimage append kvm_amd.nested=1 initrd=/bzroot

then changing the CPU mode from 'host-passthrough':

<cpu mode='host-passthrough' check='none' migratable='on'>

to 'host-model':

<cpu mode='host-model' check='partial'>

and removing this line inside the cpu tag (since it's not compatible with 'host-model'):

<cache mode='passthrough'/>

Edited November 1, 2022 by Robin R 1 1 Quote Link to comment
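The XML edit at the heart of this fix is small enough to express mechanically. Below is a hedged sketch of the same change (host-passthrough to host-model, drop the <cache> element) using only the Python standard library. It is purely illustrative; in practice you make the edit by hand in the Unraid VM editor, and the sample XML here is a trimmed stand-in, not a full libvirt domain.

```python
# Sketch: apply the cpu-mode change described in the steps above.
import xml.etree.ElementTree as ET

def use_host_model(domain_xml: str) -> str:
    root = ET.fromstring(domain_xml)
    cpu = root.find("cpu")
    cpu.set("mode", "host-model")            # was host-passthrough
    cpu.set("check", "partial")              # was check='none'
    cpu.attrib.pop("migratable", None)       # only meaningful for passthrough
    cache = cpu.find("cache")
    if cache is not None:                    # <cache> isn't valid with host-model
        cpu.remove(cache)
    return ET.tostring(root, encoding="unicode")

before = """<domain>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='11' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
</domain>"""

print(use_host_model(before))
```

On the next VM start, libvirt expands host-model into the long mode='custom' specification shown above, matched to the host CPU.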
buetzel Posted December 2, 2022 Share Posted December 2, 2022 (edited) Thank you SO MUCH. I had been having the worst time since setting up a Win11 VM four days ago: crashes in all browsers (Chrome, Edge, Firefox), especially on websites with games that use WebGL/Unity or video, but also on ordinary sites. Performance was just bad, too. I looked for the problem mostly in the GPU department, but what finally seems to be the solution for me are those two changes to the <cpu> and <cache> lines. I didn't enable nested virtualization because I don't need it, and SVM and GPU passthrough were already done. So if someone else experiences browser crashes in a Win11 VM, or general slowness, try this one. CPU usage was also really high before, most of the time between 30% and 60%; now it's below 10%, even with a browser game open (kind of idling, though). For the record: this is on an MSI MEG X570 Unify (AM4 platform) with a Ryzen 9 3900X (undervolted to 1.000V for power efficiency), a GeForce GTX 1050 Ti, and 32GB DDR4/3600/CL16 RAM, on Unraid 6.11.5. Difference like night and day. Thanks again. Edited December 2, 2022 by buetzel Quote Link to comment