Kung-Fubick Posted October 21, 2021

Hi, I have been trying to pass this card through for some time but am only met with a black screen.

What I have done:
- Watched SpaceInvader One's videos on passthrough.
- Passed through two IOMMU groups (the GPU and the GPU's audio device). These are groups 23 and 24.
- Tried with and without a vbios (pretty sure this only applies to Nvidia cards).

My attempts to solve it:
- Tried different Linux distros and safe-graphics installs.
- Installed a distro over VNC and then installed the drivers, then removed VNC, but still the same black screen (no output).
- Tried the same on Windows; it recognized my card after the drivers were installed, but still a black screen after removing VNC.

Attaching the logs and files I was able to find. I can upload more logs or info if needed, just post what you would like to see. I think I have made an error while passing through the card. Any tips and help much appreciated.

iummo.txt log.txt nas-diagnostics-20211022-0034.zip nas-syslog-20211021-2234.zip vfio-log.txt Win10_log.txt
ghost82 Posted October 22, 2021

7 hours ago, Kung-Fubick said:
but I am only met with a black screen

From your log:

Oct 22 00:15:41 NAS kernel: vfio-pci 0000:08:00.0: BAR 0: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]

Output of "cat /proc/iomem" from an unraid terminal, please.
Kung-Fubick Posted October 22, 2021 (Author)

cat /proc/iomem

00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000dffff : PCI Bus 0000:00
    000c0000-000cffff : Video ROM
  000f0000-000fffff : System ROM
00100000-09c7efff : System RAM
  04000000-04a00816 : Kernel code
  04c00000-04e4afff : Kernel rodata
  05000000-05127f7f : Kernel data
  05471000-055fffff : Kernel bss
09c7f000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a20ffff : ACPI Non-volatile Storage
0a210000-0affffff : System RAM
0b000000-0b01ffff : Reserved
0b020000-a7b4f017 : System RAM
a7b4f018-a7b6d457 : System RAM
a7b6d458-a7b6e017 : System RAM
a7b6e018-a7b7f057 : System RAM
a7b7f058-b920dfff : System RAM
b920e000-b920efff : Reserved
b920f000-baf98fff : System RAM
baf99000-bc5a7fff : Reserved
bc5a8000-bc6dafff : ACPI Tables
bc6db000-bcd6afff : ACPI Non-volatile Storage
bcd6b000-bd9fefff : Reserved
bd9ff000-beffffff : System RAM
bf000000-bfffffff : Reserved
c0000000-fec2ffff : PCI Bus 0000:00
  d0000000-e01fffff : PCI Bus 0000:08
    d0000000-dfffffff : 0000:08:00.0
      d0000000-d08c9fff : efifb
    e0000000-e01fffff : 0000:08:00.0
  f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
    f0000000-f7ffffff : Reserved
      f0000000-f7ffffff : pnp 00:00
  fc900000-fcbfffff : PCI Bus 0000:0b
    fc900000-fc9fffff : 0000:0b:00.3
      fc900000-fc9fffff : xhci-hcd
    fca00000-fcafffff : 0000:0b:00.1
      fca00000-fcafffff : ccp
    fcb00000-fcb07fff : 0000:0b:00.4
    fcb08000-fcb09fff : 0000:0b:00.1
      fcb08000-fcb09fff : ccp
  fcc00000-fcdfffff : PCI Bus 0000:01
    fcc00000-fccfffff : PCI Bus 0000:02
      fcc00000-fccfffff : PCI Bus 0000:03
        fcc00000-fcc1ffff : 0000:03:00.0
          fcc00000-fcc1ffff : igb
        fcc20000-fcc23fff : 0000:03:00.0
          fcc20000-fcc23fff : igb
    fcd00000-fcd7ffff : 0000:01:00.1
    fcd80000-fcd9ffff : 0000:01:00.1
      fcd80000-fcd9ffff : ahci
    fcda0000-fcda7fff : 0000:01:00.0
  fce00000-fcefffff : PCI Bus 0000:09
    fce00000-fce03fff : 0000:09:00.0
      fce00000-fce03fff : nvme
  fcf00000-fcffffff : PCI Bus 0000:08
    fcf00000-fcf3ffff : 0000:08:00.0
    fcf60000-fcf63fff : 0000:08:00.1
  fd200000-fd2fffff : Reserved
    fd200000-fd2fffff : pnp 00:01
  fd500000-fd57ffff : amd_iommu
  fd600000-fd7fffff : Reserved
  fea00000-fea0ffff : Reserved
  feb80000-fec01fff : Reserved
    fec00000-fec003ff : IOAPIC 0
    fec01000-fec013ff : IOAPIC 1
  fec10000-fec10fff : Reserved
    fec10000-fec10fff : pnp 00:05
fec30000-fec30fff : Reserved
  fec30000-fec30fff : AMDIF030:00
fed00000-fed00fff : Reserved
  fed00000-fed003ff : HPET 0
    fed00000-fed003ff : PNP0103:00
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
  fed81500-fed818ff : AMDI0030:00
fedc0000-fedc0fff : pnp 00:05
fedc2000-fedcffff : Reserved
fedd4000-fedd5fff : Reserved
fee00000-ffffffff : PCI Bus 0000:00
  fee00000-fee00fff : Local APIC
    fee00000-fee00fff : pnp 00:05
  ff000000-ffffffff : Reserved
    ff000000-ffffffff : pnp 00:05
100000000-83f37ffff : System RAM
83f380000-83fffffff : Reserved
Kung-Fubick Posted October 22, 2021 (Author)

1 hour ago, ghost82 said:
Output of "cat /proc/iomem" from unraid terminal please.

This is the full output.
ghost82 Posted October 22, 2021

6 minutes ago, Kung-Fubick said:
This is the full output

From the log:

Oct 22 00:15:41 NAS kernel: vfio-pci 0000:08:00.0: BAR 0: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]

And from /proc/iomem:

d0000000-d08c9fff : efifb

That region is in use by efifb, i.e. unraid itself is using the gpu. If you want to pass it through you need to prevent efifb from attaching, so disable it in your syslinux config by appending:

video=efifb:off

so the line becomes something like (no gui):

append video=efifb:off initrd=/bzroot

Save and reboot.
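As an aside, the check done here by eye, matching the BAR range from the vfio error against /proc/iomem, can be scripted. A small sketch (the iomem excerpt is inlined sample data taken from the output above; on a real host you would read /proc/iomem directly, as root so the addresses are not zeroed):

```shell
# Find which driver has claimed the start of the GPU's BAR 0.
# 0xd0000000 is taken from the vfio-pci "can't reserve" error above.
iomem_sample='c0000000-fec2ffff : PCI Bus 0000:00
  d0000000-e01fffff : PCI Bus 0000:08
    d0000000-dfffffff : 0000:08:00.0
      d0000000-d08c9fff : efifb
    e0000000-e01fffff : 0000:08:00.0'

# The innermost (most specific) claimant of a range is listed last in
# /proc/iomem, so keep the last match.
claimant=$(printf '%s\n' "$iomem_sample" \
  | awk -F' : ' '/^ *d0000000-/ {c=$2} END{print c}')
echo "$claimant"   # prints: efifb
```

If this prints a driver name like efifb instead of vfio-pci, something on the host still owns the GPU's memory region.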
Kung-Fubick Posted October 22, 2021 (Author)

18 minutes ago, ghost82 said:
...you need to prevent efifb from attaching, so disable it in your syslinux config...

I tried this (added it under "Unraid OS Safe Mode (no plugins, no GUI)"), so the full entry under that label is:

kernel /bzimage
append video=efifb:off initrd=/bzroot unraidsafemode

I assumed that's what you meant; saved and rebooted. Is this the wrong place to edit it? Checked /proc/iomem: the same region is still in use by efifb. Thank you for helping me.
ghost82 Posted October 22, 2021

10 minutes ago, Kung-Fubick said:
I tried this (added it under Unraid OS Safe Mode (no plugins, no GUI))

There are several labels. I'm not in front of unraid now so I cannot be specific, but you should have label unraid, label unraid with gui, and, as you mention, label unraid safe mode. Add video=efifb:off to the label you actually boot:

- If you boot unraid (label unraid), add video=efifb:off to that label.
- If you boot unraid with gui (label unraid with gui), add video=efifb:off to that label.
- If you boot unraid in safe mode (label unraid safe mode), add video=efifb:off to that label.

I don't think you booted unraid in safe mode, so your change didn't apply. Fix this by adding video=efifb:off to the correct label.
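For reference, a stock unraid syslinux/syslinux.cfg looks roughly like the sketch below (this is not copied from the poster's flash drive; label names and defaults vary by install). The append line of the label marked menu default, i.e. the one that actually boots, is the one that needs video=efifb:off:

```text
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append video=efifb:off initrd=/bzroot,/bzroot-gui
label Unraid OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
```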
Kung-Fubick Posted October 22, 2021 (Author)

2 hours ago, ghost82 said:
There are several labels [...] Fix this by applying video=efifb:off to the correct label.

Yeah, changed it on the regular label and it seems like that did something good: searched /proc/iomem for efifb, no result. However, when I now try to start the VM the result is still the same. Here is the output of the VM log; it has changed a little, I think the lines starting with "blockdev" are new.

-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
-blockdev '{"driver":"file","filename":"/mnt/user/cacheShare/VM/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-3-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.0,addr=0x3,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/Windows.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=ide.0,unit=0,drive=libvirt-2-format,id=ide0-0-0,bootindex=2 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:0b:57:c3,bus=pci.0,addr=0x2 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.0,addr=0x5 \
-device vfio-pci,host=0000:08:00.1,id=hostdev1,bus=pci.0,addr=0x6 \
-device vfio-pci,host=0000:01:00.0,id=hostdev2,bus=pci.0,addr=0x8 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-10-22 10:43:37.681+0000: Domain id=1 is tainted: high-privileges
2021-10-22 10:43:37.681+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2021-10-22T10:43:41.008138Z qemu-system-x86_64: vfio: Cannot reset device 0000:01:00.0, depends on group 15 which is not owned.
2021-10-22T10:43:42.032887Z qemu-system-x86_64: vfio: Cannot reset device 0000:01:00.0, depends on group 15 which is not owned.
Also pasting the /proc/iomem in case it is relevant:

less /proc/iomem

00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000dffff : PCI Bus 0000:00
    000c0000-000cffff : Video ROM
  000f0000-000fffff : System ROM
00100000-09c7efff : System RAM
  04000000-04a00816 : Kernel code
  04c00000-04e4afff : Kernel rodata
  05000000-05127f7f : Kernel data
  05471000-055fffff : Kernel bss
09c7f000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a20ffff : ACPI Non-volatile Storage
0a210000-0affffff : System RAM
0b000000-0b01ffff : Reserved
0b020000-a7b4f017 : System RAM
a7b4f018-a7b6d457 : System RAM
a7b6d458-a7b6e017 : System RAM
a7b6e018-a7b7f057 : System RAM
a7b7f058-b920dfff : System RAM
b920e000-b920efff : Reserved
b920f000-baf98fff : System RAM
baf99000-bc5a7fff : Reserved
bc5a8000-bc6dafff : ACPI Tables
bc6db000-bcd6afff : ACPI Non-volatile Storage
bcd6b000-bd9fefff : Reserved
bd9ff000-beffffff : System RAM
bf000000-bfffffff : Reserved
c0000000-fec2ffff : PCI Bus 0000:00
  d0000000-e01fffff : PCI Bus 0000:08
    d0000000-dfffffff : 0000:08:00.0
      d0000000-dfffffff : vfio-pci
    e0000000-e01fffff : 0000:08:00.0
      e0000000-e01fffff : vfio-pci
  f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
    f0000000-f7ffffff : Reserved
ghost82 Posted October 22, 2021

53 minutes ago, Kung-Fubick said:
2021-10-22T10:43:42.032887Z qemu-system-x86_64: vfio: Cannot reset device 0000:01:00.0, depends on group 15 which is not owned.

This is the USB controller. That's strange, because in the diagnostics file you attached the only device in IOMMU group 15 is that controller, and it is attached to vfio. Did you change something in your configuration? Can you upload another diagnostics file? Can you also run this in an unraid terminal and report the output?

for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do
  echo "IOMMU group $(basename "$iommu_group")"
  for device in $(\ls -1 "$iommu_group"/devices/); do
    if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then echo -n "[RESET]"; fi
    echo -n $'\t'
    lspci -nns "$device"
  done
done

Update: 01:00.0 is in IOMMU group 14 (not 15); the error says it depends on IOMMU group 15. Please attach new diagnostics.
It may be that you can't pass through only the 01:00.0 USB controller without also passing the 01:00.1 SATA controller (group 15), and you don't want that. If that's the case, the USB controller cannot be passed through, so delete this from your VM template and unbind the USB controller from vfio:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>

Also replace this:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>

with this, so that the GPU is a multifunction device in the VM as well:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
Kung-Fubick Posted October 22, 2021 (Author)

Removed the USB controller and replaced the earlier GPU section with the one you wrote; I recognize that part from SpaceInvader One's video. I didn't change anything else in the VM. It still starts after the changes, but I still get a black screen. Attaching diagnostics taken after trying to start the VM with these changes. Here is the output of the command you gave me:

IOMMU group 17
[RESET] 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 7
        00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 25
[RESET] 09:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU group 15
        01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller [1022:43c8] (rev 01)
IOMMU group 5
[RESET] 00:03.4 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 23
[RESET] 08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tonga PRO [Radeon R9 285/380] [1002:6939] (rev f1)
IOMMU group 13
        00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
        00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
        00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
        00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
        00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
        00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
        00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
        00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU group 3
        00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 21
        02:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 11
[RESET] 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU group 1
[RESET] 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 28
[RESET] 0b:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU group 18
        02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 8
        00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 26
[RESET] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU group 16
        01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge [1022:43c6] (rev 01)
IOMMU group 6
        00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 24
        08:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tonga HDMI Audio [Radeon R9 285/380] [1002:aad8]
IOMMU group 14
[RESET] 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller [1022:43d5] (rev 01)
IOMMU group 4
[RESET] 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 22
[RESET] 03:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU group 12
        00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
        00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU group 30
        0b:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU group 2
        00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 20
        02:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 10
        00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 29
[RESET] 0b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU group 0
        00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 19
        02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 9
[RESET] 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU group 27
[RESET] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]

nas-diagnostics-20211022-1401.zip
ghost82 Posted October 22, 2021

Everything looks good, but... it could be that you need to dump the vbios of the gpu and set it in the vm xml. Do not download a vbios from the internet; are you able to dump the vbios of the card yourself? See here (EDIT E): https://www.reddit.com/r/VFIO/comments/6iyd9m/screen_goes_black_after_starting_vm/
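For reference (not from this thread), the usual sysfs technique for dumping a vbios looks like the sketch below. The device path assumes the GPU at 0000:08:00.0 from this thread; the card must not be in use by a driver, and on many boards you need to boot with a different card as primary or the ROM reads back garbage. The dump_vbios helper name is made up for illustration; it takes the device's sysfs directory as a parameter so the mechanics can be exercised against a fake directory.

```shell
# dump_vbios DEVDIR OUTFILE
# Writes 1 to DEVDIR/rom to enable expansion-ROM reads, copies the ROM
# image out, then writes 0 to disable reads again. On real hardware
# DEVDIR would be /sys/bus/pci/devices/0000:08:00.0 (requires root).
dump_vbios() {
  echo 1 > "$1/rom"        # enable reading the expansion ROM
  cat "$1/rom" > "$2"      # copy the ROM contents out
  echo 0 > "$1/rom"        # disable again
}

# Real-world usage would look like (path and filename are examples):
#   dump_vbios /sys/bus/pci/devices/0000:08:00.0 /tmp/vbios.rom
```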
Kung-Fubick Posted October 22, 2021 (Author)

I have the SpaceInvader One script for that, so I will use it. Will update after.
Kung-Fubick Posted October 22, 2021 (Author)

36 minutes ago, ghost82 said:
It could be you need to dump the vbios of the gpu and set it in the vm xml.

I'm getting some display output now: I was in the Windows repair screen, which I was able to click around in once I forwarded the USB controller. Tried reinstalling Windows, and it all went fine until some kind of crash where the screen went fully black; this was after I had finished installing and was browsing in the OS. All of this happened after I added the vbios. However, right now when I start either Windows VM (I created a new one to reinstall Windows) it freezes while booting: I get the Windows loading ring and the TianoCore picture. Going to try again and see if I did something wrong after installing Windows. I did install a display driver that might have been intended for VNC (followed an old SpaceInvader One video). Attaching another diagnostics file if it is of any interest.

nas-diagnostics-20211022-1513.zip
ghost82 Posted October 22, 2021

9 minutes ago, Kung-Fubick said:
However right now when I start either Windows VM (created a new one to reinstall windows) it freezes while booting.

That's the AMD reset bug; unfortunately your GPU suffers from this issue. The GPU is not able to properly reset after a shutdown or restart of a VM, so you need to reboot the whole host. And unfortunately gnif's vendor-reset patch does not seem to support your GPU. Not the friendliest GPU to play with for passthrough... Some people are reporting some success with your GPU and resets by following this guide: https://forum.level1techs.com/t/linux-host-windows-guest-gpu-passthrough-reinitialization-fix/121097
Kung-Fubick Posted October 22, 2021 (Author)

2 minutes ago, ghost82 said:
That's the AMD reset bug, unfortunately your GPU suffers from this issue.

Oh well, that's a bummer. Tried with the Ubuntu VM as well: the screen went black and the GPU fans maxed out, so I had to reboot anyway. So if I were to use this card, I would have to reboot the host every time I shut down the VM? Also, is there any easy way to find out which cards are supported?
ghost82 Posted October 22, 2021

11 minutes ago, Kung-Fubick said:
So if I were to use this card, I would have to reboot the host every time I shut down the VM?

Check the link above if you want to try it in the Windows VM. In general, yes: without any patch, the only way to reinitialize the GPU is a restart or shutdown of the host.

11 minutes ago, Kung-Fubick said:
Also, is there any easy way to find out which cards are supported?

As far as I know, every AMD GPU prior to the 6000 series suffers from the reset bug, but a 6000 series GPU requires that you sell parts of your body... prices are crazy. Nvidia GPUs don't suffer from the reset bug; if you are not running a macOS VM (Kepler GPUs are supported until Big Sur; Monterey will not support them) I would go for Nvidia. I'm not an expert on this reset bug, I've only read around the issues and don't own any AMD GPU myself.
Kung-Fubick Posted October 22, 2021 (Author)

I will keep trying a bit and see if it gets me anywhere. Yeah, those prices are truly insane; I got this card to be able to run a Mac VM alongside Linux and Windows. I do have an Nvidia GPU that currently occupies my workstation, so I might get a new card, put the old Nvidia card in for Ubuntu and Windows, and then try with the R9 for the Mac. Big thanks for all the effort you put into helping me!
ghost82 Posted October 22, 2021

No problem, happy to help. Check that your Nvidia card is not too old, so that it supports UEFI.