johnsanc Posted October 28, 2023 (edited)

I have an issue that I believe has been present with every 6.12.x release. My Windows VM will not start properly unless it is first shut down with a Force Stop command. Restarting reboots the VM, but it will not start properly. A graceful shutdown followed by a restart boots the VM, but it still does not start properly, and there is zero information in the logs. This VM uses one of my graphics cards, which is passed through. The only thing I noticed when I plugged in a monitor to see what's going on: the times it doesn't boot properly, I see a big Unraid logo and then the screen goes black. When it does work properly after a Force Stop and restart, it goes straight to a Windows spinner icon and loads up just fine. Any idea what could be causing this? It's really annoying, and I'm not comfortable always force-stopping the VM. It makes Windows updates very nerve-wracking.

Edited October 28, 2023 by johnsanc
ghost82 Posted November 2, 2023

It seems the GPU is not properly isolated: as soon as you restart or shut down the VM, the host starts using it and does not release it properly for the next VM boot.
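One quick way to check whether the host has grabbed the card back is the "Kernel driver in use" line in `lspci -k` output: for a properly isolated GPU it should read `vfio-pci`, not `nvidia` or `nouveau`. A minimal sketch of extracting that line (the sample output below is illustrative, not taken from this server; on the server you would run `lspci -k -s 33:00.0` directly):

```shell
# Illustrative lspci -k excerpt; real output comes from: lspci -k -s 33:00.0
sample='33:00.0 VGA compatible controller: NVIDIA Corporation TU104 (sample)
        Kernel driver in use: vfio-pci'

# Pull out the driver currently bound to the device.
driver=$(printf '%s\n' "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}')
echo "$driver"   # vfio-pci means the card is reserved for passthrough
```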
johnsanc Posted November 2, 2023 (Author)

Any pointers on how to properly isolate it? I have it bound here:

and vfio-pci.cfg:

BIND=0000:2f:00.1|1022:149c 0000:2f:00.3|1022:149c 0000:33:00.0|10de:1e84 0000:33:00.1|10de:10f8 0000:33:00.2|10de:1ad8 0000:33:00.3|10de:1ad9

Long ago I had something in my go file... do I need to do something there?
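For anyone reading along, the BIND= line packs devices as `address|vendor:device` pairs separated by spaces; a small sketch of splitting it back into one PCI address per line (the sample line is copied from the post above; on a live server you would read /boot/config/vfio-pci.cfg instead, and could then check each address with `lspci -ks <addr>`):

```shell
# Sample BIND line from this thread's vfio-pci.cfg.
bind_line='BIND=0000:2f:00.1|1022:149c 0000:2f:00.3|1022:149c 0000:33:00.0|10de:1e84 0000:33:00.1|10de:10f8 0000:33:00.2|10de:1ad8 0000:33:00.3|10de:1ad9'

# Strip the BIND= prefix, split on spaces, keep only the address part.
addrs=$(printf '%s\n' "$bind_line" | sed 's/^BIND=//' | tr ' ' '\n' | cut -d'|' -f1)
printf '%s\n' "$addrs"
```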
ghost82 Posted November 2, 2023

1 minute ago, johnsanc said: "do I need to do something there?"

No. Please attach diagnostics so I can have a look.
johnsanc Posted November 3, 2023 (Author, edited)

Attached is an older diagnostics zip from a couple of weeks ago. I tried downloading a new one, but it was taking forever due to literally hundreds of thousands of error lines from Dynamix File Integrity not finding export files. Also, this other post of mine may be related to the VM issue:

tower-diagnostics-20231016-1916.zip

Edited November 3, 2023 by johnsanc
ghost82 Posted November 3, 2023

I need recent diagnostics; the one you attached doesn't have any VM defined.
johnsanc Posted November 3, 2023 (Author)

Apologies. Updated diagnostics attached.

tower-diagnostics-20231102-1947.zip
ghost82 Posted November 3, 2023 (edited)

Try this:

1- Add:

video=efifb:off

to the syslinux configuration. You find it under: Main - Boot Device - Flash - Syslinux Configuration. Add video=efifb:off to the Unraid OS label, in the 'append' line, so it results in this:

append video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1

This may not be necessary, since the boot VGA is not the one you are trying to pass through, but let's reduce the chance of errors. If it works, you could try removing video=efifb:off and see if it still works.

2- Modify your VM in advanced view (XML mode) and replace the whole XML with this:

<domain type='kvm'>
  <name>Windows 11</name>
  <uuid>2b7d02e7-ce93-6934-5afb-641e9b93ab6e</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows11.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='13'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='14'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='15'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='16'/>
    <vcpupin vcpu='8' cpuset='5'/>
    <vcpupin vcpu='9' cpuset='17'/>
    <vcpupin vcpu='10' cpuset='6'/>
    <vcpupin vcpu='11' cpuset='18'/>
    <vcpupin vcpu='12' cpuset='7'/>
    <vcpupin vcpu='13' cpuset='19'/>
    <vcpupin vcpu='14' cpuset='8'/>
    <vcpupin vcpu='15' cpuset='20'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/2b7d02e7-ce93-6934-5afb-641e9b93ab6e_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/iso/Win11_22H2_English_x64v2.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/iso/virtio-win-0.1.240.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/loaders/spaces_win_clover.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xf'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:1f:a0:a3'/>
      <source bridge='br1'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x33' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x33' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x2f' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x32' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x33' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x33' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

3- Reboot the server and start Unraid OS (NO GUI).

4- Try to run the VM.

I don't see any other configuration error. Basically, it could hang at reboot because the drivers may expect a multifunction GPU device, as on bare metal, and the GPU in your VM was not configured as a multifunction device. I hope it's not dependent on the Clover bootloader you set up in the VM (it should not be...), which I don't think is needed anymore with recent Windows. It could also be related to the passed-through NVMe controller; sometimes passing both a GPU and an NVMe can create issues.

Edited November 3, 2023 by ghost82
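After the reboot in step 3, it is worth confirming that the new append line actually took effect by checking the running kernel command line. A sketch of that check (the sample cmdline below is illustrative; on the server you would use `cmdline=$(cat /proc/cmdline)`):

```shell
# Sample kernel command line mirroring the append line suggested above.
cmdline='BOOT_IMAGE=/bzimage video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1 initrd=/bzroot'

msg='video=efifb:off missing - re-check the syslinux append line'
case " $cmdline " in
  *' video=efifb:off '*) msg='efifb is disabled' ;;
esac
echo "$msg"
```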
johnsanc Posted November 3, 2023 (Author, edited)

Unfortunately that did not work. When looking at the monitor I can see the Unraid logo flash, then the screen goes black. If I force stop and restart, it also doesn't work. For now I will revert to my old config. I just want to note that this only started happening with 6.12.x; it was never an issue with earlier releases.

Edited November 3, 2023 by johnsanc
johnsanc Posted November 3, 2023 (Author)

One other thing I noticed today: once I start the VM after force stopping it, my other cheap graphics card, which I use just for local terminal access, cuts out. Based on my configuration, do you see any explanation for this?
johnsanc Posted November 12, 2023 (Author)

For anyone else running into this issue, I figured out how to fix it. In my case I was still using the old Clover bootloader and my NVMe controller was not bound at boot. So to fix it I:

1. Changed the primary vDisk to None, since I wanted to boot from the NVMe
2. Went into the System Devices page and bound my NVMe controller at boot
3. Set the NVMe to boot order 1
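For readers verifying step 2, the driver symlink under /sys shows whether a controller was actually claimed at boot (it should resolve to vfio-pci once bound). A hedged sketch, assuming 0000:32:00.0 is the NVMe controller (that address appears in the VM XML earlier in the thread; adjust for your own system):

```shell
# Hypothetical NVMe controller address from this thread; change to yours.
dev='0000:32:00.0'
link="/sys/bus/pci/devices/$dev/driver"

# The symlink target names the bound driver (e.g. vfio-pci or nvme).
if [ -e "$link" ]; then
  drv=$(basename "$(readlink -f "$link")")
else
  drv='(no driver bound or device not present)'
fi
echo "$dev: $drv"
```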