ccsnet Posted December 20, 2021

Hi all, I'm having problems getting any VMs working with my new 1060 on 6.10.0-rc2. I've removed the AMD reset workaround from the last card I had, and dumped the ROM using SpaceinvaderOne's script (it does seem a little small though, as I was expecting 100+ KB and not 58 KB). I dumped the ROM without the Nvidia drivers installed, but have since installed them and get the Unraid GUI fine. I have also tried the following without the Unraid Nvidia drivers and the result is the same.

Where possible, testing has been from a clean boot: no dockers running, no parity check, and the VM on a dedicated SSD. I've created new Win 10 and 11 VMs in the same way (latest Q35/i440fx, qcow2, etc.) except I used OVMF for 10 and OVMF TPM for 11. I have tried both with and without the BIOS file linked in the config while trying to create a fresh install of the OS.

When I start the VM, no image seems to be created on the drive and the VM part of Unraid seems to hang until reboot - everything else is fine. I did think it was a Windows 11 thing, or even that VNC drivers were installed, but as it's both 10 and 11 I'm wondering if I am missing something at a hardware level?

I've attached the ROM and diags for comment if anyone has any thoughts on this, as I suspect this is not an RC problem either. Thanks in advance.

T

gpu_vbios_MSI_1060.zip diagnostics-20211220-1211.zip
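A quick aside on the "it seems a little small" worry: a vBIOS dump declares its own length inside the standard PCI option-ROM header, so the file size alone doesn't prove much either way. This is only a sketch - it assumes the generic PCI option-ROM layout (0x55AA signature at offset 0, pointer to a "PCIR" structure at offset 0x18), nothing specific to this card - of how a dump could be sanity-checked before pointing a VM at it:

```python
import struct

def check_option_rom(data: bytes):
    """Sanity-check a PCI option ROM dump (e.g. a GPU vBIOS).

    Returns (ok, message). A valid image starts with the 0x55AA
    signature; a 16-bit pointer at offset 0x18 locates the 'PCIR'
    data structure, which records the image length in 512-byte
    units at offset 0x10.
    """
    if len(data) < 0x1A:
        return False, "file too short to be an option ROM"
    if data[0] != 0x55 or data[1] != 0xAA:
        return False, "missing 0x55AA signature at offset 0 (vendor header still attached?)"
    pcir_off = struct.unpack_from("<H", data, 0x18)[0]
    if data[pcir_off:pcir_off + 4] != b"PCIR":
        return False, "PCIR structure not found"
    img_len = struct.unpack_from("<H", data, pcir_off + 0x10)[0] * 512
    return True, f"looks like an option ROM, declared image length {img_len} bytes"
```

If the first two bytes aren't 0x55AA, the dump probably still has a vendor wrapper in front of it.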
ccsnet Posted December 20, 2021

So with some extra playing with the machine type I can now hang the whole UI. I will still carry on and diagnose this, but any hints would be appreciated, as I am beginning to think the machine does not like the card.

Terran
runamuk Posted December 21, 2021

I have never even needed a vBIOS when set up correctly. Please post a screenshot of your VM page, and also a screenshot of your IOMMU groups (Tools -> System Devices).

I see you have not turned on multifunction. I took part of your VM XML and changed it below: I moved the secondary (audio) function under the same slot as the GPU and changed its function number.

Lastly, I would never recommend having a pfSense VM and a Windows VM on the same Unraid server. In fact I wouldn't recommend having pfSense on your Unraid server at all tbh, but that's just me - do whatever makes you happy 😄.

<source>
  <address domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</source>
<rom file="/mnt/user/isos/vbios/gpu_vbios_MSI_1060.rom"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0" multifunction='on'/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
  <driver name="vfio"/>
  <source>
    <address domain="0x0000" bus="0x02" slot="0x00" function="0x1"/>
  </source>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x1"/>
</hostdev>
<memballoon model="none"/>
JonathanM Posted December 21, 2021

50 minutes ago, runamuk said: In fact I wouldn't recommend having PFsense being on your unraid sever at all tbh,

Curious as to your reasons for that. I have a low-power bare-metal pf box that I keep updated periodically, but my daily pf is a VM in Unraid: much faster VPN and better overall performance. When Unraid needs to go down for maintenance, the bare-metal box is fired up to keep everyone connected, but for day-to-day use it seems like a waste NOT to use the VM.
runamuk Posted December 21, 2021

4 minutes ago, JonathanM said: Curious as to your reasons for that. I have a low power bare metal pf box that I keep updated periodically, but my daily pf is a VM in Unraid, much faster VPN and just overall performance. When the Unraid needs to go down for maintenance, the bare metal box is fired up to keep everyone connected, but for day to day use, it seems like a waste NOT to use the VM.

For an advanced user such as yourself, a second bare-metal setup as a backup is a viable solution - but at that point, why not run the bare-metal system full time? pfSense uses so few resources that any old machine can run it smoothly. Personally, using pfSense in a VM as my network routing seems high risk: if anything happens to Unraid or the VM, it would completely bork my entire network, and if Unraid is your only network solution you could be up a creek without a paddle very quickly. Simply put, my concept is don't put all my eggs in one basket; the last thing I want is to be trying to fix my network and whatever happened to my Unraid system at the same time.
ccsnet Posted December 21, 2021

8 hours ago, runamuk said: I have never even needed a vBIOS when set up correctly. Please post a screenshot of your VM page, and also a screenshot of your IOMMU groups (Tools -> System Devices). I see you have not turned on multifunction. I took part of your VM XML and changed it below... Lastly, I would never recommend having a pfSense VM and a Windows VM on the same Unraid server. In fact I wouldn't recommend having pfSense on your Unraid server at all tbh, but that's just me - do whatever makes you happy 😄.

Thanks... I will have a look at this and grab my groups. I'm also going to check for a firmware update for this card today and see whether it applies or not, in case that helps.

Re pfSense: being a home setup, I'm playing with it to see if it's better than my ISP router, which is reaching its limits. I do have a dedicated network card for it, just not in use at this time. I do see the points made, but for now it's not an issue for me.

T
runamuk Posted December 21, 2021

5 minutes ago, ccsnet said: Thanks... I will have a look at this and grab my groups. I'm also going to check for a firmware update for this card today and see whether it applies or not, in case that helps. Re pfSense: being a home setup, I'm playing with it to see if it's better than my ISP router, which is reaching its limits. I do have a dedicated network card for it, just not in use at this time. I do see the points made, but for now it's not an issue for me. T

OK. Also, if you're set on having a vBIOS (not sure what model you had): https://www.techpowerup.com/vgabios/185951/msi-gtx1060-6144-160630 - download it there and you can hex-edit out the first part.
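The "hex edit out the first part" step usually means removing the vendor header that sits in front of the standard ROM signature, since QEMU expects the 0x55AA signature at offset 0. A minimal sketch of that idea - it assumes the common layout where the real image starts at the first 0x55AA occurrence, so always double-check the result in a hex editor rather than trusting it blindly:

```python
def strip_vbios_header(data: bytes) -> bytes:
    """Drop any vendor wrapper before the standard PCI ROM signature.

    Some vBIOS files carry an NVIDIA-specific header in front of the
    0x55AA signature. This keeps everything from the first signature
    onward; a coincidental 0x55AA inside the wrapper would fool it,
    so verify the output manually.
    """
    sig = data.find(b"\x55\xAA")
    if sig < 0:
        raise ValueError("no 0x55AA signature found - not an option ROM?")
    return data[sig:]
```

Calling it on a file that already starts with the signature is a no-op, so it is safe to run on a clean dump too.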
ccsnet Posted December 21, 2021

9 hours ago, runamuk said: I have never even needed a vBIOS when set up correctly. Please post a screenshot of your VM page, and also a screenshot of your IOMMU groups (Tools -> System Devices). I see you have not turned on multifunction. I took part of your VM XML and changed it below... Lastly, I would never recommend having a pfSense VM and a Windows VM on the same Unraid server. In fact I wouldn't recommend having pfSense on your Unraid server at all tbh, but that's just me - do whatever makes you happy 😄.

Same result - when it comes back I will try without the BIOS. In the meantime, here is a copy of my groups:

Quote
PCI Devices and IOMMU Groups

IOMMU group 0: [1002:5a14] 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD9x0/RX980 Host Bridge (rev 02)
IOMMU group 1: [1002:5a16] 00:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GFX port 0)
IOMMU group 2: [1002:5a17] 00:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0 PCI to PCI bridge (PCI Express GFX port 1)
IOMMU group 3: [1002:5a18] 00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 0)
IOMMU group 4: [1002:5a19] 00:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 1)
IOMMU group 5: [1002:5a1a] 00:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 2)
IOMMU group 6: [1002:4391] 00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
  [1:0:0:0] disk ATA WDC WDS240G2G0A- 0000 /dev/sdb 240GB
  [2:0:0:0] disk ATA WDC WDS240G2G0A- 0400 /dev/sdc 240GB
  [3:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdd 4.00TB
  [4:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sde 4.00TB
  [5:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdf 4.00TB
  [6:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdg 4.00TB
IOMMU group 7: [1002:4397] 00:12.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
  Bus 004 Device 001 Port 4-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
  Bus 004 Device 002 Port 4-3 ID 413c:2107 Dell Computer Corp. Dell USB Entry Keyboard
  Bus 004 Device 003 Port 4-4 ID 046d:c05a Logitech, Inc. M90/M100 Optical Mouse
  [1002:4396] 00:12.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
  Bus 001 Device 001 Port 1-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
IOMMU group 8: [1002:4397] 00:13.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
  Bus 005 Device 001 Port 5-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
  [1002:4396] 00:13.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
  Bus 002 Device 001 Port 2-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
  Bus 002 Device 002 Port 2-3 ID 0951:1666 Kingston Technology DataTraveler 100 G3/G4/SE9 G2
IOMMU group 9: [1002:4385] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 SMBus Controller (rev 42)
IOMMU group 10: [1002:4383] 00:14.2 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 Azalia (Intel HDA) (rev 40)
IOMMU group 11: [1002:439d] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
IOMMU group 12: [1002:4384] 00:14.4 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SBx00 PCI to PCI Bridge (rev 40)
IOMMU group 13: [1002:4399] 00:14.5 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
  Bus 006 Device 001 Port 6-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
IOMMU group 14: [1002:43a0] 00:15.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
  [1002:43a1] 00:15.1 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] SB700/SB800/SB900 PCI to PCI bridge (PCIE port 1)
  [1969:e091] 07:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 13)
  [10ec:8168] 08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
IOMMU group 15: [1002:4397] 00:16.0 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
  Bus 007 Device 001 Port 7-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
  [1002:4396] 00:16.2 USB controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 USB EHCI Controller
  Bus 003 Device 001 Port 3-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
IOMMU group 16: [8086:105e] 01:00.0 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (rev 06)
IOMMU group 17: [8086:105e] 01:00.1 Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) (rev 06)
IOMMU group 18: [10de:1c03] 02:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)
  [10de:10f1] 02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
IOMMU group 19: [1106:3483] 03:00.0 USB controller: VIA Technologies, Inc. VL805/806 xHCI USB 3.0 Controller (rev 01)
  Bus 008 Device 001 Port 8-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
  Bus 008 Device 002 Port 8-1 ID 2109:3431 VIA Labs, Inc. Hub
  Bus 009 Device 001 Port 9-0 ID 1d6b:0003 Linux Foundation 3.0 root hub
IOMMU group 20: [1b21:0612] 04:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
  [7:0:0:0] disk ATA WDC WD10EZEX-00B 1A01 /dev/sdh 1.00TB
  [8:0:0:0] disk ATA WDC WD10EACS-00D 1A01 /dev/sdi 1.00TB
IOMMU group 21: [1106:3483] 05:00.0 USB controller: VIA Technologies, Inc. VL805/806 xHCI USB 3.0 Controller (rev 01)
  Bus 010 Device 001 Port 10-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
  Bus 010 Device 002 Port 10-1 ID 2109:3431 VIA Labs, Inc. Hub
  Bus 011 Device 001 Port 11-0 ID 1d6b:0003 Linux Foundation 3.0 root hub

CPU Thread Pairings
Pair 1: cpu 0 / cpu 1
Pair 2: cpu 2 / cpu 3
Pair 3: cpu 4 / cpu 5
Pair 4: cpu 6 / cpu 7

USB Devices
Bus 001 Device 001 Port 1-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001 Port 2-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 002 Port 2-3 ID 0951:1666 Kingston Technology DataTraveler 100 G3/G4/SE9 G2
Bus 003 Device 001 Port 3-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001 Port 4-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 002 Port 4-3 ID 413c:2107 Dell Computer Corp. Dell USB Entry Keyboard
Bus 004 Device 003 Port 4-4 ID 046d:c05a Logitech, Inc. M90/M100 Optical Mouse
Bus 005 Device 001 Port 5-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001 Port 6-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001 Port 7-0 ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001 Port 8-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 008 Device 002 Port 8-1 ID 2109:3431 VIA Labs, Inc. Hub
Bus 009 Device 001 Port 9-0 ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 010 Device 001 Port 10-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 010 Device 002 Port 10-1 ID 2109:3431 VIA Labs, Inc. Hub
Bus 011 Device 001 Port 11-0 ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 012 Device 001 Port 12-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 013 Device 001 Port 13-0 ID 1d6b:0003 Linux Foundation 3.0 root hub

SCSI Devices
[0:0:0:0] disk Kingston DataTraveler 3.0 PMAP /dev/sda 15.5GB
[1:0:0:0] disk ATA WDC WDS240G2G0A- 0000 /dev/sdb 240GB
[2:0:0:0] disk ATA WDC WDS240G2G0A- 0400 /dev/sdc 240GB
[3:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdd 4.00TB
[4:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sde 4.00TB
[5:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdf 4.00TB
[6:0:0:0] disk ATA TOSHIBA HDWQ140 FJ1M /dev/sdg 4.00TB
[7:0:0:0] disk ATA WDC WD10EZEX-00B 1A01 /dev/sdh 1.00TB
[8:0:0:0] disk ATA WDC WD10EACS-00D 1A01 /dev/sdi 1.00TB

Thanks for your help...

T
ghost82 Posted December 21, 2021

You should attach IOMMU group 18 to vfio and reboot the server; otherwise I think the Nvidia plugin you installed in Unraid will use your 1060 and cause issues with passthrough.
ccsnet Posted December 21, 2021

16 minutes ago, ghost82 said: You should attach IOMMU group 18 to vfio and reboot the server; otherwise I think the Nvidia plugin you installed in Unraid will use your 1060 and cause issues with passthrough.

Not too up on what the grouping does... I assume it's some kind of reserve? Anyway, the result is that the GUI no longer loads locally (I assume this is expected), and when I start the VM the screen is blank - but it's doing something, as it refreshed when I started it. The VM screen is no longer hanging. I'm just shutting down the machine to run this Nvidia firmware tool and see if that helps.

T
ghost82 Posted December 21, 2021

1 minute ago, ccsnet said: Anyway, the result is that the GUI no longer loads locally (I assume this is expected), and when I start the VM the screen is blank - but it's doing something, as it refreshed when I started it. The VM screen is no longer hanging.

Yes, it's expected, because the GPU is now isolated from the host. Attach new diagnostics after running the VM with this new config.
ccsnet Posted December 21, 2021

Just going through the syslog and found this...

Quote
Dec 21 17:24:21 tbmaindoma kernel: vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Dec 21 17:24:21 tbmaindoma kernel: vfio-pci 0000:02:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff 64bit pref]
Dec 21 17:24:21 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM
Dec 21 17:24:23 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM
Dec 21 17:24:23 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM

Quote
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 11</name>
  <uuid>e48d1b87-4979-09d6-036b-1be269ab4658</uuid>
  <description>Windows 11</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-6.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e48d1b87-4979-09d6-036b-1be269ab4658_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='3' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/disk5/domains/Windows 11/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.208-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:38:32:55'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

That's about the time I started the VM - not sure if it helps. I've also attached new diags as requested, as well as a copy of the VM XML above. There must be something strange about this, as I know the 1060 is used by quite a few people on here. I might take the spare network card out of the PCIe slot at some point, but I doubt it will make any difference.

T

diagnostics-20211221-1911.zip
ghost82 Posted December 21, 2021

Perfect - I was hoping to see the "can't reserve memory" error. Can you type this in an Unraid terminal and paste the output here?

cat /proc/iomem
ccsnet Posted December 21, 2021

2 hours ago, ghost82 said: Perfect - I was hoping to see the "can't reserve memory" error. Can you type this in an unraid terminal and paste the output here? cat /proc/iomem

Hi - thanks - I assume you have some thoughts on this? I'm not an in-depth Unraid person but I am techy, so just wondering what they are? Memory clashing?

Quote
root@tbmaindoma:~# cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000ce5ff : Video ROM
000f0000-000fffff : System ROM
00100000-a124b017 : System RAM
04000000-04a01f66 : Kernel code
04c00000-04fa0fff : Kernel rodata
05000000-051aecbf : Kernel data
055f7000-057fffff : Kernel bss
a124b018-a1257057 : System RAM
a1257058-a1258017 : System RAM
a1258018-a1260057 : System RAM
a1260058-a1261017 : System RAM
a1261018-a1280a57 : System RAM
a1280a58-aef36fff : System RAM
aef37000-af177fff : Reserved
af178000-bd3f4fff : System RAM
bd3f5000-bd424fff : Reserved
bd425000-bd704fff : System RAM
bd705000-bd826fff : ACPI Non-volatile Storage
bd827000-bea26fff : Reserved
bea27000-bea27fff : System RAM
bea28000-bec2dfff : ACPI Non-volatile Storage
bec2e000-bf035fff : System RAM
bf036000-bf7b6fff : Reserved
bf7b7000-bf7fffff : System RAM
bf800000-bfffffff : RAM buffer
c0000000-ffffffff : PCI Bus 0000:00
c0000000-d1ffffff : PCI Bus 0000:02
c0000000-cfffffff : 0000:02:00.0
d0000000-d1ffffff : 0000:02:00.0
d1000000-d12fffff : efifb
d2100000-d21fffff : PCI Bus 0000:08
d2100000-d2103fff : 0000:08:00.0
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
e0000000-efffffff : pnp 00:00
fd000000-fe0fffff : PCI Bus 0000:02
fd000000-fdffffff : 0000:02:00.0
fe080000-fe083fff : 0000:02:00.1
fe100000-fe1fffff : PCI Bus 0000:08
fe100000-fe100fff : 0000:08:00.0
fe100000-fe100fff : r8169
fe200000-fe2fffff : PCI Bus 0000:07
fe200000-fe23ffff : 0000:07:00.0
fe200000-fe23ffff : alx
fe300000-fe3fffff : PCI Bus 0000:05
fe300000-fe300fff : 0000:05:00.0
fe300000-fe300fff : xhci-hcd
fe400000-fe4fffff : PCI Bus 0000:04
fe400000-fe40ffff : 0000:04:00.0
fe410000-fe4101ff : 0000:04:00.0
fe410000-fe4101ff : ahci
fe500000-fe5fffff : PCI Bus 0000:03
fe500000-fe500fff : 0000:03:00.0
fe500000-fe500fff : xhci-hcd
fe600000-fe6fffff : PCI Bus 0000:01
fe600000-fe61ffff : 0000:01:00.1
fe600000-fe61ffff : e1000e
fe620000-fe63ffff : 0000:01:00.1
fe620000-fe63ffff : e1000e
fe640000-fe65ffff : 0000:01:00.0
fe640000-fe65ffff : e1000e
fe660000-fe67ffff : 0000:01:00.0
fe660000-fe67ffff : e1000e
fe700000-fe703fff : 0000:00:14.2
fe704000-fe7040ff : 0000:00:16.2
fe704000-fe7040ff : ehci_hcd
fe705000-fe705fff : 0000:00:16.0
fe705000-fe705fff : ohci_hcd
fe706000-fe706fff : 0000:00:14.5
fe706000-fe706fff : ohci_hcd
fe707000-fe7070ff : 0000:00:13.2
fe707000-fe7070ff : ehci_hcd
fe708000-fe708fff : 0000:00:13.0
fe708000-fe708fff : ohci_hcd
fe709000-fe7090ff : 0000:00:12.2
fe709000-fe7090ff : ehci_hcd
fe70a000-fe70afff : 0000:00:12.0
fe70a000-fe70afff : ohci_hcd
fe70b000-fe70b3ff : 0000:00:11.0
fe70b000-fe70b3ff : ahci
feb00000-feb03fff : amd_iommu
fec00000-fec01fff : Reserved
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec10000-fec10fff : Reserved
fec10000-fec10fff : pnp 00:05
fed00000-fed00fff : Reserved
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed61000-fed70fff : Reserved
fed61000-fed70fff : pnp 00:05
fed80000-fed8ffff : Reserved
fed80000-fed8ffff : pnp 00:05
fee00000-fee00fff : Local APIC
fee00000-fee00fff : pnp 00:05
fef00000-ffffffff : Reserved
ff800000-ffffffff : pnp 00:05
100001000-83effffff : System RAM
83f000000-83fffffff : RAM buffer

Thanks

Terran
runamuk Posted December 21, 2021

1 hour ago, ccsnet said: <type arch='x86_64' machine='pc-i440fx-6.1'>hvm</type>

I promise you, change your machine type to the highest Q35 available on your Unraid version. This solves most video card passthrough issues.
ghost82 Posted December 21, 2021

11 hours ago, ccsnet said: vfio-pci 0000:02:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff

8 hours ago, ccsnet said: d1000000-d12fffff : efifb

The GPU is in use by efifb. Add video=efifb:off to your syslinux configuration (Main -> Boot Device -> Flash -> Syslinux Configuration -> edit the green label: unRAID OS, unRAID OS GUI, or whatever you are booting), so the append line becomes, for example:

append video=efifb:off initrd=/bzroot

Reboot Unraid and passthrough should work.
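The clash ghost82 spotted can be checked mechanically: the efifb claim at d1000000-d12fffff sits inside the GPU's BAR 3 window (d0000000-d1ffffff), which is exactly the range vfio-pci failed to reserve. A small sketch (a hypothetical helper, not an Unraid tool) that lists the /proc/iomem entries overlapping a given range:

```python
def claimants(iomem_text: str, start: int, end: int):
    """List /proc/iomem entries overlapping the range [start, end].

    Each line looks like 'd1000000-d12fffff : efifb'. Useful to see
    who is holding a GPU BAR that vfio-pci later fails to reserve.
    """
    hits = []
    for line in iomem_text.splitlines():
        rng, _, name = line.partition(" : ")
        try:
            lo, hi = (int(x, 16) for x in rng.strip().split("-"))
        except ValueError:
            continue  # skip anything that isn't a range line
        if lo <= end and hi >= start:
            hits.append((lo, hi, name.strip()))
    return hits
```

Feeding it the dump from the previous post with the BAR 3 range from the syslog would surface the `efifb` entry, which is what `video=efifb:off` removes.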
ghost82 Posted December 22, 2021

11 hours ago, ccsnet said:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>

And please enable multifunction as was suggested:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
</hostdev>
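For anyone templating this rather than hand-editing XML: the pattern is simply that the GPU's video (function 0) and audio (function 1) end up on the same guest slot, with multifunction='on' set on function 0, mirroring how the card appears on the host. A throwaway sketch using Python's `xml.etree` just to show the shape (the bus/slot values default to this thread's card at 02:00, guest slot 0x06; both are assumptions you would adjust):

```python
import xml.etree.ElementTree as ET

def paired_hostdev(src_bus: str, guest_slot: str):
    """Build the two <hostdev> elements for a GPU's video and audio
    functions, keeping both on the same guest slot and enabling
    multifunction on function 0.
    """
    devs = []
    for fn in ("0x0", "0x1"):
        hd = ET.Element("hostdev", mode="subsystem", type="pci", managed="yes")
        ET.SubElement(hd, "driver", name="vfio")
        src = ET.SubElement(hd, "source")
        ET.SubElement(src, "address", domain="0x0000", bus=src_bus,
                      slot="0x00", function=fn)
        # guest-side address: same slot for both, multifunction on fn 0
        attrs = {"type": "pci", "domain": "0x0000", "bus": "0x00",
                 "slot": guest_slot, "function": fn}
        if fn == "0x0":
            attrs["multifunction"] = "on"
        ET.SubElement(hd, "address", **attrs)
        devs.append(hd)
    return devs
```

Serializing the two elements with `ET.tostring()` reproduces the corrected block above; the point is just that the guest slot is shared and only the function number differs.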
ccsnet Posted December 22, 2021

Thanks... I did start off on Q35, and I did edit the XML, save, and recheck it, so I'm not sure why it reverted. That said, I've made so many changes, including rebuilding the VM settings, that I might have missed it. I'll get this done today. Thanks again.

T
ccsnet Posted December 24, 2021

Hi - a couple of days since I posted, as I've been quite busy, but I wanted to update you. I 'think' it's booting - there is a lot of VM disk activity and it's grabbing an IP - however there is nothing on the display, nor can I RDP in. I'm going to see if a fresh build helps. For reference (mainly for myself) I'll just post the XML here. I'm hoping I'm closer and that all of this will be of some use to others.

T

Quote
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>Windows 11</name>
  <uuid>e48d1b87-4979-09d6-036b-1be269ab4658</uuid>
  <description>Windows 11</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e48d1b87-4979-09d6-036b-1be269ab4658_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='3' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/disk5/domains/Windows 11/vdisk1.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.208-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:38:32:55'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Windows 11/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
      <alias name='tpm0'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
ghost82 Posted December 24, 2021 Share Posted December 24, 2021

29 minutes ago, ccsnet said: For ref (mainly for myself) I'll just post the XML here

Please, when you find an issue, always attach the diagnostics file without rebooting Unraid, so as to have a more complete overview and try to understand what is happening. 1 Quote Link to comment
ccsnet Posted December 24, 2021 Author Share Posted December 24, 2021 (edited)

Hi @ghost82 - thanks for all your help. So, diags attached. Steps taken today: booted up a previously working Win 11 image - could not log on, although I could see it running. For anyone else following, it seems it was trying to go through the PIN and app setup again as it saw itself on new hardware. It also removed the RDP access, which had to be re-enabled. I did that by leaving the video card as the sound device and VNC as the graphics. While in, I managed to confirm, like last time, that the sound element of the 1060 loads fine but the graphics will not start (although it is seen) as Windows sees a problem. To that end I deleted the image and VM in Unraid, created a new one (adding in the hostdev info above) and restarted. Nothing - the VM just seems to stop and I have the following errors in the log:

Quote

Dec 24 13:12:21 tbmaindoma kernel: vfio-pci 0000:02:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Dec 24 13:12:21 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM
Dec 24 13:12:23 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM
Dec 24 13:12:23 tbmaindoma kernel: vfio-pci 0000:02:00.0: No more image in the PCI ROM

I know the card (MSI GeForce GTX 1060 6GT OCV1) is fine as I tested it in a standalone box when checking the firmware, and this is a card that others have used. As I understand it, the steps so far have been to isolate the card so it cannot be used by Unraid, so I would have thought this would be fine. All very odd. Not sure if this is a bug for the RC or not.

Thanks T

EDIT - Just reading and actioning this -
EDIT 2 - Same result
EDIT 3 - Reading for later https://www.google.com/search?q=Unraid+kernel:+vfio-pci+No+more+image+in+the+PCI+ROM+site:forums.unraid.net&rlz=1C1ONGR_en-GBGB964GB964&sxsrf=AOaemvJcenf3tqBCnPpgEQBZrxQzbZqHRw:1640354868447&sa=X&ved=2ahUKEwjIyvSJzvz0AhXah1wKHZFxDnQQrQIoBHoECAoQBQ&biw=1920&bih=929&dpr=1
Edit 4 - Diag guide https://docs.google.com/document/d/17Wh9_5HPqAx8HHk-p2bGlR0E-65TplkG18jvM98I7V8/edit
Edit 5
Edit 6 - Code 43

diagnostics-20211224-1314.zip Edited January 1, 2022 by ccsnet Quote Link to comment
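A note for anyone landing here with the same "No more image in the PCI ROM" messages: given the earlier worry that the dump looked small (58K rather than the expected 100K+), it can be worth sanity-checking the dumped file itself. The sketch below is my own, not part of any Unraid tooling; `check_rom` is a hypothetical helper name, and the path in the usage comment is simply the one from the XML in this thread.

```shell
# Sketch (not from the thread): sanity-check a dumped vBIOS file.
# A bare PCI option ROM image starts with the signature bytes 0x55 0xAA;
# a dump that still carries an NVIDIA flashing header has them later in
# the file instead. check_rom is a hypothetical helper name.
check_rom() {
  # Read the first two bytes as hex, e.g. "55aa".
  sig=$(od -An -tx1 -N2 "$1" | tr -d ' ')
  if [ "$sig" = "55aa" ]; then
    echo "valid option ROM signature"
  else
    echo "missing 55aa signature - header may need trimming"
  fi
}

# Usage (path assumed from the XML posted in this thread):
# check_rom "/mnt/user/isos/vbios/gpu_vbios_MSI_1060.rom"
```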
ccsnet Posted December 24, 2021 Author Share Posted December 24, 2021

Ok - not 100% trusting it yet, but I seem to have it working. I cleaned up a lot of the settings, including removing the Nvidia plugin, changing the machine type from Q35-6.x to Q35-5.1 and using the BIOS I had dumped. I have also uninstalled and reinstalled the Nvidia drivers (no GeForce) in the VM too. I'm still getting the PCI error but it works, so no biggy. Diags attached if anyone wants a look - in the meantime I'm going to do a few reboots etc. and see what happens. T Quote Link to comment
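For anyone wanting to repeat the machine-type change described above, it amounts to editing one attribute in the `<os>` block of the domain XML. The fragment below is adapted from the XML posted earlier in this thread; only the machine attribute changes:

```xml
<os>
  <!-- was machine='pc-q35-6.1'; changed to the 5.1 machine type -->
  <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/e48d1b87-4979-09d6-036b-1be269ab4658_VARS-pure-efi-tpm.fd</nvram>
</os>
```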
ccsnet Posted December 24, 2021 Author Share Posted December 24, 2021 (edited)

1 hour ago, ccsnet said: Ok - not 100% trusting it yet but I seem to have it working. [...]

Well that didn't last for long... A couple of reboots / cold starts later and nothing again. I'm going to see if I can find a DVI to VGA adaptor and move away from the HDMI to VGA one - long shot, I know. T

diagnostics-20211224-1636.zip Edited December 24, 2021 by ccsnet Quote Link to comment
ghost82 Posted December 24, 2021 Share Posted December 24, 2021 (edited)

Hi, no suggestion was implemented - efifb off, multifunction, no vfio... I'm looking at the latest diagnostics. In order:

1. Replace the whole xml with this:

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit e8495946-fef8-c2e5-b7f3-36639b8386c5
or other application using the libvirt API.
-->
<domain type='kvm'>
  <name>Windows 11</name>
  <uuid>e8495946-fef8-c2e5-b7f3-36639b8386c5</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e8495946-fef8-c2e5-b7f3-36639b8386c5_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='3' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/disk5/domains/Windows 11/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.208-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:7b:bb:81'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

2. Go to Tools > System Devices and put a tick for: IOMMU group 18.

3. Go to Main > Boot Device > Flash > Syslinux Configuration --> edit the green label (unRAID OS). Replace the line starting with append with:

append video=efifb:off initrd=/bzroot

4. Reboot Unraid and connect to the server from an external device on the LAN.

5. Start the VM with a monitor attached.

Note 1: in your syslinux configuration you also have this line:

vfio-pci.ids=8086:105e

These are Ethernet controllers; you have two with the same ids:

01:00.0 Ethernet controller [0200]: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) [8086:105e] (rev 06)
	Subsystem: Intel Corporation PRO/1000 PT Dual Port Server Adapter [8086:135e]
	Kernel driver in use: vfio-pci
	Kernel modules: e1000e
01:00.1 Ethernet controller [0200]: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) [8086:105e] (rev 06)
	Subsystem: Intel Corporation PRO/1000 PT Dual Port Server Adapter [8086:135e]
	Kernel driver in use: vfio-pci
	Kernel modules: e1000e

If you want to isolate these too, do not use the syslinux configuration but put a tick in Tools > System Devices for IOMMU group 16 and/or IOMMU group 17.

Do this and if it doesn't work reattach diagnostics.

Edited December 24, 2021 by ghost82 Quote Link to comment
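The "put a tick for iommu group 18" step can be cross-checked from a terminal before rebooting. The sketch below is mine, not from the thread: it walks the standard Linux sysfs layout and prints each IOMMU group with the PCI addresses it contains, so you can confirm the 1060's video and audio functions (02:00.0 and 02:00.1 here) really sit in the group being ticked. The optional root argument exists only so the function can be exercised against a fake tree; `list_iommu_groups` is a hypothetical helper name.

```shell
# Sketch: print each IOMMU group and the PCI devices it contains.
# The default root is the standard Linux sysfs path; an alternate root
# can be passed in (useful only for testing the function itself).
list_iommu_groups() {
  root=${1:-/sys/kernel/iommu_groups}
  for g in "$root"/*/; do
    # basename of the group directory is the group number
    printf 'group %s: %s\n' "$(basename "$g")" "$(cd "$g/devices" && echo *)"
  done
}

# On the server itself you would run:
# list_iommu_groups
```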
ccsnet Posted December 24, 2021 Author Share Posted December 24, 2021

52 minutes ago, ghost82 said: Hi, no suggestion was implemented - efifb off, multifunction, no vfio... [...]

Hi - I had put the changes in but also took them out while I tried other things (I spent a few hours on it today), which is why they are not in the diags. Note 1 - thanks - this was based on SpaceInvader One's video, but I believe there was a recent change, so I follow what you're saying. I've added them back and attached the diags for you, as the screen is not displaying. I really do appreciate your time on this and I am trying your suggestions, but I'm also trying other actions after looking around the forums, Reddit and Nvidia sites. Thanks T

diagnostics-20211224-1838.zip Quote Link to comment
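Because the efifb change only takes effect after a reboot, a quick way to confirm the new append line actually stuck is to check the running kernel's command line. This is my own sketch, not from the thread; `has_flag` is a hypothetical helper name.

```shell
# Sketch: report whether a given flag is present on a kernel command line.
# On a live system the file to check is /proc/cmdline.
has_flag() {
  # $1 = file to search, $2 = literal flag text
  if grep -qF -- "$2" "$1"; then
    echo "present"
  else
    echo "missing"
  fi
}

# After rebooting you would run:
# has_flag /proc/cmdline video=efifb:off
```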