Rick Sanchez Posted September 30, 2021

Hi there 😁 I'm hoping this is a simple issue that you can help me fix! 👏 I created a brand new VM following Spaceinvader One's Windows 10 and 11 guide, and I'm getting the same error on both, and I can't work out what is wrong!

[quote]
internal error: qemu unexpectedly closed the monitor: 2021-09-30T15:40:30.278619Z qemu-system-x86_64: -device vfio-pci,host=0000:03:00.0,id=hostdev0,bus=pci.0,addr=0x6: vfio 0000:03:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.
[/quote]

Please can I have some help? Is this a missing driver issue? Thank you
ghost82 Posted September 30, 2021

4 hours ago, Rick Sanchez said: vfio 0000:03:00.0: group 1 is not viable [...] Is this a missing driver issue?

Please attach the diagnostics file and the output of "cat /proc/iomem" from the terminal.
Rick Sanchez Posted October 1, 2021 Author

15 hours ago, ghost82 said: Please attach diagnostic file and output of terminal "cat /proc/iomem"

Thanks for your help!

[quote]
root@#####:~# cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009afff : System RAM
0009b000-0009ffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000ce5ff : Video ROM
000ce800-000d45ff : Adapter ROM
000e0000-000fffff : Reserved
000f0000-000fffff : System ROM
00100000-3fffffff : System RAM
01000000-01a00816 : Kernel code
01c00000-01e4afff : Kernel rodata
02000000-02127f7f : Kernel data
02471000-025fffff : Kernel bss
40000000-403fffff : Reserved
40000000-403fffff : pnp 00:00
40400000-5f4e5fff : System RAM
5f4e6000-5f4e6fff : Reserved
5f4e7000-5f8dffff : System RAM
5f8e0000-5f8e0fff : Reserved
5f8e1000-6979efff : System RAM
6979f000-6979ffff : ACPI Non-volatile Storage
697a0000-697c2fff : System RAM
697c3000-697c3fff : Reserved
697c4000-6d324fff : System RAM
6d325000-6efe1fff : Reserved
6efe2000-6f111fff : ACPI Tables
6f112000-6f5d5fff : ACPI Non-volatile Storage
6f5d6000-6fffdfff : Reserved
6fffe000-6fffefff : System RAM
6ffff000-70ffffff : Reserved
71000000-dfffffff : PCI Bus 0000:00
71000000-715fffff : PCI Bus 0000:04
71000000-7107ffff : 0000:04:00.0
71080000-710bffff : 0000:04:00.0
71080000-710bffff : mpt2sas
710c0000-714bffff : 0000:04:00.0
714c0000-714c3fff : 0000:04:00.0
714c0000-714c3fff : mpt2sas
714c4000-71503fff : 0000:04:00.0
80000000-901fffff : PCI Bus 0000:01
80000000-901fffff : PCI Bus 0000:02
80000000-901fffff : PCI Bus 0000:03
80000000-8fffffff : 0000:03:00.0
90000000-901fffff : 0000:03:00.0
94000000-c20fffff : PCI Bus 0000:0c
c2400000-c28fffff : PCI Bus 0000:78
c2400000-c27fffff : 0000:78:00.0
c2400000-c27fffff : atlantic_mmio
c2800000-c283ffff : 0000:78:00.0
c2840000-c284ffff : 0000:78:00.0
c2840000-c284ffff : atlantic_mmio
c2850000-c2850fff : 0000:78:00.0
c2850000-c2850fff : atlantic_mmio
c2900000-c2bfffff : PCI Bus 0000:05
c2900000-c29fffff : 0000:05:00.0
c2a00000-c2afffff : 0000:05:00.0
c2a00000-c2afffff : igc
c2b00000-c2b03fff : 0000:05:00.0
c2b00000-c2b03fff : igc
c2c00000-c2dfffff : PCI Bus 0000:06
c2c00000-c2dfffff : PCI Bus 0000:07
c2c00000-c2cfffff : PCI Bus 0000:0a
c2c00000-c2c001ff : 0000:0a:00.0
c2c00000-c2c001ff : ahci
c2d00000-c2dfffff : PCI Bus 0000:08
c2d00000-c2d03fff : 0000:08:00.0
c2e00000-c2ffffff : PCI Bus 0000:01
c2e00000-c2efffff : PCI Bus 0000:02
c2e00000-c2efffff : PCI Bus 0000:03
c2e00000-c2e7ffff : 0000:03:00.0
c2ea0000-c2ea3fff : 0000:03:00.1
c2f00000-c2f03fff : 0000:01:00.0
c3000000-c30fffff : PCI Bus 0000:77
c3000000-c3003fff : 0000:77:00.0
c3000000-c3003fff : nvme
c3200000-c3201fff : 0000:00:17.0
c3200000-c3201fff : ahci
c3202000-c32027ff : 0000:00:17.0
c3202000-c32027ff : ahci
c3203000-c32030ff : 0000:00:17.0
c3203000-c32030ff : ahci
c3204000-c3204fff : 0000:00:16.3
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
e0000000-efffffff : Reserved
e0000000-efffffff : pnp 00:08
fc800000-fe7fffff : PCI Bus 0000:00
fd000000-fd69ffff : pnp 00:05
fd6a0000-fd6affff : pnp 00:07
fd6b0000-fd6bffff : pnp 00:07
fd6c0000-fd6cffff : pnp 00:05
fd6d0000-fd6dffff : pnp 00:07
fd6e0000-fd6effff : pnp 00:07
fd6f0000-fdffffff : pnp 00:05
fe000000-fe010fff : Reserved
fe010000-fe010fff : 0000:00:1f.5
fe038000-fe038fff : pnp 00:09
fe200000-fe7fffff : pnp 00:05
fec00000-fec00fff : Reserved
fec00000-fec003ff : IOAPIC 0
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed10000-fed17fff : pnp 00:08
fed18000-fed18fff : pnp 00:08
fed19000-fed19fff : pnp 00:08
fed40000-fed44fff : MSFT0101:00
fed91000-fed91fff : dmar0
fee00000-fee00fff : Local APIC
fee00000-fee00fff : Reserved
ff000000-ffffffff : Reserved
ff000000-ffffffff : pnp 00:05
100000000-88affffff : System RAM
88b000000-88bffffff : RAM buffer
4000000000-7fffffffff : PCI Bus 0000:00
4000000000-4049ffffff : PCI Bus 0000:0c
404a000000-404a0fffff : 0000:00:1f.3
404a100000-404a10ffff : 0000:00:14.0
404a100000-404a10ffff : xhci-hcd
404a110000-404a113fff : 0000:00:1f.3
404a114000-404a115fff : 0000:00:14.2
404a116000-404a1160ff : 0000:00:1f.4
404a117000-404a117fff : 0000:00:16.0
404a118000-404a118fff : 0000:00:14.2
404a119000-404a119fff : 0000:00:12.0
404a119000-404a119fff : Intel PCH thermal driver
404a11a000-404a11afff : 0000:00:08.0
[/quote]
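For anyone reading along: in a dump like the one above, the interesting rows are the ones naming the passthrough target. A quick filter, shown here against a few sample lines copied from the output above (on the server itself you would grep `/proc/iomem` directly):

```shell
# Extract the MMIO regions claimed by the GPU at 0000:03:00.0/.1.
# The sample lines are copied from the dump above; on the host use:
#   grep '0000:03:00\.' /proc/iomem
cat <<'EOF' > /tmp/iomem.sample
80000000-8fffffff : 0000:03:00.0
90000000-901fffff : 0000:03:00.0
c2e00000-c2e7ffff : 0000:03:00.0
c2ea0000-c2ea3fff : 0000:03:00.1
EOF
grep '0000:03:00\.' /tmp/iomem.sample
```

Note that no driver name (amdgpu/nouveau/vfio-pci) appears indented under these ranges, which is consistent with the card not being bound to anything yet.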
ghost82 Posted October 1, 2021

16 hours ago, ghost82 said: Please attach diagnostic file
Rick Sanchez Posted October 1, 2021 Author

Thank you
server-diagnostics-20211001-1235.zip
ghost82 Posted October 1, 2021

19 minutes ago, Rick Sanchez said: Thank you server-diagnostics-20211001-1235.zip

Ok. You are not using acs override, and your iommu group 1 contains the gpu and other bridges. It may work as-is, but first of all I would set acs override to downstream, multifunction (settings --> vm manager). Once acs override is applied, reboot and post the diagnostics again, and we will be able to modify the xml of the vms.
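The "group 1 contains the gpu and other bridges" diagnosis comes from sysfs. A loop like the following prints group membership the same way Unraid's System Devices page does; here it runs against a mock directory tree reproducing the poster's group 1 (on a real host, point `IOMMU_DIR` at `/sys/kernel/iommu_groups` and skip the mkdir lines):

```shell
# List devices per IOMMU group. IOMMU_DIR would be
# /sys/kernel/iommu_groups on a real host; the mock tree below
# stands in for a GPU grouped with its upstream bridges.
IOMMU_DIR=/tmp/iommu_mock
rm -rf "$IOMMU_DIR"
mkdir -p "$IOMMU_DIR"/1/devices/0000:01:00.0 \
         "$IOMMU_DIR"/1/devices/0000:02:00.0 \
         "$IOMMU_DIR"/1/devices/0000:03:00.0 \
         "$IOMMU_DIR"/1/devices/0000:03:00.1
for d in "$IOMMU_DIR"/*/devices/*; do
  g="${d%/devices/*}"   # strip the /devices/... tail, leaving .../<group>
  printf 'IOMMU group %s: %s\n' "${g##*/}" "${d##*/}"
done | tee /tmp/iommu_groups.txt
```

A "not viable" error means every entry printed for the group must either be bound to vfio or be a bridge QEMU can tolerate; the ACS override patch splits such groups apart.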
Rick Sanchez Posted October 1, 2021 Author

52 minutes ago, ghost82 said: You are not using acs override, and your iommu group 1 contains the gpu and other bridges [...]

Which option should I select?

"your iommu group 1 contains the gpu and other bridges, it may work" — can you dumb this down for me, please? Lol
ghost82 Posted October 1, 2021

19 minutes ago, Rick Sanchez said: Which option should I select?

In unraid go to settings --> vm manager
Change pcie acs override to both
Save
Reboot
Attach diagnostics again
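For the curious, that dropdown is just editing the kernel append line on the flash drive. A sketch of what `/boot/syslinux/syslinux.cfg` ends up containing (exact label and initrd line depend on your existing config; "Both" corresponds to `downstream,multifunction`):

```text
# /boot/syslinux/syslinux.cfg -- default boot entry (sketch)
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction initrd=/bzroot
```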
Rick Sanchez Posted October 2, 2021 Author

10 hours ago, ghost82 said: In unraid go to settings --> vm manager, change pcie acs override to both, save, reboot, attach diagnostics again

Thanks for your help!
server-diagnostics-20211002-0217.zip
ghost82 Posted October 2, 2021

Ok, now it's split. In unraid go to the vm page, edit the win10 vm, click the advanced view (xml view) (top right), select the whole code and replace with this:

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit df5c1f38-a6f0-9667-757e-0c4b4cc9c912
or other application using the libvirt API.
-->
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>df5c1f38-a6f0-9667-757e-0c4b4cc9c912</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='12'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='14'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/df5c1f38-a6f0-9667-757e-0c4b4cc9c912_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='3' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows/Windows10.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:9c:03:84'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

For the win11 vm you are using vnc; if you want to pass through the gpu, replace this:

<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-gb'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>

With this:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
</hostdev>

Since this is the only gpu in your system you wouldn't want to attach it to vfio at boot, otherwise you will lose unraid's video output. But some gpus will not work for passthrough if they are not isolated — try and report back. Moreover, I think that even if it works, you will not get unraid's video output back once the vm is shut down.
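One way to double-check an edit like the above before booting the VM is to pull the passed-through host addresses back out of the XML. A rough sed filter, run here on the two `<source>` address lines from the hostdev entries (on the server you could feed it `virsh dumpxml "Windows 10"` instead, though that would then also print the guest-side `<address>` lines):

```shell
# Sanity-check which host PCI functions a domain XML passes through.
# Sample is the <source> address lines from the hostdev entries above.
cat <<'EOF' > /tmp/hostdev.sample
<address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
<address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
EOF
# Rewrite each line as a dddd:bb:ss.f PCI address.
sed -n "s/.*bus='0x\([0-9a-f]*\)' slot='0x\([0-9a-f]*\)' function='0x\([0-9a-f]*\)'.*/0000:\1:\2.\3/p" /tmp/hostdev.sample
```

Both printed addresses should appear in the same IOMMU group on the System Devices page, or the VM will fail with the "group is not viable" error from the first post.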
Rick Sanchez Posted October 2, 2021 Author

5 hours ago, ghost82 said: Ok, now it's split. In unraid go to the vm page, edit the win10 vm, click the advanced view (xml view) [...]

Absolute legend. Thank you! Everything booted, but no image on my monitor... But one step closer!

5 hours ago, ghost82 said: Moreover, I think that if it works, you will not be able to get the video output of unraid once the vm is shut down.

The server has onboard graphics, so hopefully that'll work!
ghost82 Posted October 2, 2021

59 minutes ago, Rick Sanchez said: Everything booted, but no image on my monitor

Can you clarify? No image when unraid boots, or when your vm boots?

1 hour ago, Rick Sanchez said: The server has onboard graphics, so hopefully that'll work!

If you have an onboard gpu, set it to primary in the bios, and once booted into unraid go to system devices and put a check next to iommu groups 16 and 17. Reboot unraid. This will isolate the gpu at boot. Then try to boot the vm.
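For reference, ticking those System Devices checkboxes writes the selections to a small file on the flash drive, roughly like this sketch (assuming groups 16/17 hold the GPU's video and audio functions at 0000:03:00.0/.1; the `ffff:ffff` vendor:device IDs below are placeholders — Unraid records the card's real IDs):

```text
# /boot/config/vfio-pci.cfg -- written by Tools > System Devices (sketch)
BIND=0000:03:00.0|ffff:ffff 0000:03:00.1|ffff:ffff
```

On the next boot, Unraid binds those addresses to vfio-pci before any graphics driver can claim them, which is what "isolating the gpu at boot" means here.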
Rick Sanchez Posted October 4, 2021 Author

I've done that and now UnRAID won't boot at all lol
####-diagnostics-20211004-1138.zip
Rick Sanchez Posted October 4, 2021 Author

It auto boots into the BIOS now 🤷‍♂️
ghost82 Posted October 4, 2021

4 hours ago, Rick Sanchez said: I've done that and now UnRAID won't boot at all lol

You mean you changed the onboard gpu to primary and it doesn't boot? Maybe the onboard gpu is not uefi capable and you are booting unraid in uefi mode? If you cannot boot, what do the diagnostics refer to? The win 10 / win 11 xml are not edited as I wrote in a previous post.
Rick Sanchez Posted October 4, 2021 Author

I edited and saved it?
Rick Sanchez Posted October 5, 2021 Author

I've wiped the UnRAID USB to start afresh and troubleshoot. After backing it up, I formatted it and installed the latest version of UnRAID, ran make bootable, etc. It still won't boot from USB, even if I manually select the USB at boot.
ghost82 Posted October 5, 2021

Did you change something in the bios?? Revert the changes!
Rick Sanchez Posted October 5, 2021 Author

5 hours ago, ghost82 said: Did you change something in the bios?? Revert the changes!

I have. But still nothing.
Rick Sanchez Posted October 11, 2021 Author

Third time lucky! Hopefully my new USBs will arrive quickly, as one has a broken case!
Rick Sanchez Posted October 13, 2021 Author

From ASRock:

[quote]
Hello,

Could it be related to CSM? BIOS > Boot > CSM

If CSM is enabled then the system should be able to boot from both legacy and EFI bootable devices. If CSM is disabled then the system can boot only from EFI bootable devices. Devices without the required EFI folder/file structure will be ignored.

Please note that the integrated graphics in your CPU do not support legacy mode properly. So when using this integrated GPU (iGPU) please make sure to install the OS in UEFI/GPT mode, and boot from EFI bootable devices.

If you have a PCIe graphics card installed and want to keep the iGPU active as well then please set: BIOS > Advanced > Chipset Configuration > IGPU Multi-Monitor > Enabled (you probably know that already)

Thanks. Kind regards, ASRock Support
[/quote]

Stops UnRAID from working!
ghost82 Posted October 13, 2021

11 minutes ago, Rick Sanchez said: From ASRock: "Could it be related to CSM? [...]" Stops UnRAID from working!

I don't understand... did you change the boot options by enabling csm? Does your unraid usb stick have the EFI folder? (Note: EFI, not EFI-.) I would just back up the config folder on the unraid usb stick, delete everything, copy the new files onto it, run the make-bootable script and allow uefi boot when asked, copy back the config folder, then disable csm in the bios.

Is unraid your only os, or are you booting other bare-metal oses? If those were installed in legacy bios mode you won't be able to boot them with csm disabled until you convert the partition scheme from mbr to gpt and create the fat32 efi partition with the bootloader file(s) in it.
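The EFI-vs-EFI- point can be checked (and fixed) by hand. A sketch against a mock directory standing in for the flash drive root (on the real stick, that's `/boot` on a running server, or the drive root when plugged into another machine):

```shell
# UEFI boot needs the folder named exactly "EFI"; Unraid's
# make-bootable script leaves it as "EFI-" when UEFI boot is unticked.
# FLASH is a mock stand-in for the flash drive root.
FLASH=/tmp/flashmock
rm -rf "$FLASH"
mkdir -p "$FLASH/EFI-/boot"          # simulate a stick with UEFI disabled
if [ -d "$FLASH/EFI-" ] && [ ! -d "$FLASH/EFI" ]; then
  mv "$FLASH/EFI-" "$FLASH/EFI"      # re-enable UEFI boot
fi
ls "$FLASH"
```

With CSM disabled (as ASRock suggests), the firmware only considers sticks that have this `EFI/boot/...` structure, so a stick still carrying `EFI-` is simply ignored.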
Rick Sanchez Posted October 13, 2021 Author

4 hours ago, ghost82 said: I don't understand... did you change the boot options by enabling csm? Does your unraid usb stick have the EFI folder? [...]

Yes, and yes. I did a fresh install.

Quote: If you have a PCIe graphics card installed and want to keep the iGPU active as well then please set: BIOS > Advanced > Chipset Configuration > IGPU Multi-Monitor > Enabled (you probably know that already)

This is what keeps knackering my system! UnRAID refuses to boot when this is enabled?!
Rick Sanchez Posted December 13, 2021 Author

[quote]
Execution error
operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd0
[/quote]

Any ideas why this still isn't working?
###-diagnostics-20211213-0012.zip
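A note on that message: libvirt keeps a list pairing each OVMF CODE image with a master VARS template, and "unable to find any master var store for loader" means the `<loader>` path in the XML isn't on that list. Notice the path in the error ends in `.fd0`, which suggests a stray character crept into the XML during editing. For comparison, the pairing used earlier in this thread:

```xml
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/df5c1f38-a6f0-9667-757e-0c4b4cc9c912_VARS-pure-efi.fd</nvram>
```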
Squid Posted December 13, 2021

Did you fix this

2021-12-12 23:39:55.344+0000: 13274: error : qemuProcessReportLogError:2097 : internal error: qemu unexpectedly closed the monitor: 2021-12-12T23:39:55.300339Z qemu-system-x86_64: -device vfio-pci,host=0000:03:00.0,id=hostdev0,bus=pci.4,addr=0x0,romfile=/mnt/disk3/isos/Windows/Saphire.RX5708GB.8192.191031.rom: vfio 0000:03:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.

by isolating (Tools - system devices) what you're attempting to pass through from the OS?