
ghost82

Members
  • Posts: 2,726
  • Joined
  • Last visited
  • Days Won: 19

Everything posted by ghost82

  1. I would suggest completely uninstalling the nvidia drivers with DDU and installing them again, maybe testing different versions, starting with the version that works on bare metal. Make sure to first delete all nvidia devices (even hidden ones) in the Windows Device Manager too. The vbios should be ok: it contains valid legacy and efi vbioses.
  2. Try to increase the ram of the vm: 1 GB doesn't seem appropriate, set it to 4 GB at least. Check the full report (screenshot) to see if it points to anything useful. If it doesn't work, try to disable the nic, i.e. delete:
      <interface type='bridge'>
        <mac address='52:54:00:53:33:b7'/>
        <source bridge='br0'/>
        <model type='virtio'/>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </interface>
  3. Check in the guest vm if the virtual network adapter is there: open a terminal, type lspci, then press enter. It should list the adapter at 02:01.0 if you didn't change your xml.
  4. Well, if you don't want too much trouble, go with nvidia, or go with an amd series 6000 card (could be expensive depending on your budget). Note that a kernel fix is included in unraid that should fix some older amd gpus, but this may or may not work, depending on brand, firmware, revision, etc... A quick search on google, or even here in the forum, for "amd gpu reset bug" will find a lot of info. If you are going to buy a second-hand nvidia gpu, prefer one for which updated drivers still exist: this is because old nvidia gpus without newer drivers (i.e. with older drivers) cannot be passed through to a vm unless you modify the xml to hide the hypervisor. Only with newer nvidia drivers (by "newer" I mean from v. 465) did nvidia allow its consumer gpus (geforce, titan) to be passed through in vms.
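      Hiding the hypervisor for those older drivers is usually done in the <features> section of the vm xml; a sketch of the commonly used libvirt options (the vendor_id value is an arbitrary example):

```xml
<features>
  <hyperv>
    <!-- arbitrary string, up to 12 chars; masks the KVM vendor id from the guest -->
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature so older nvidia drivers don't error out (code 43) -->
    <hidden state='on'/>
  </kvm>
</features>
```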
  5. Consider, when searching for gpus, the amd reset bug: older amd gpus (<series 6000) may not reset properly on vm shutdown/restart, and this requires the whole server to be restarted.
  6. Not when the vm is running. Moreover, when the vm is shut down you need to detach the gpu from the vfio driver to make it available again on the host.
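      A sketch of doing that detach manually from the host shell, assuming the gpu is at pci address 0000:01:00.0 (check yours with lspci) and the vm is shut down; run as root:

```shell
GPU=0000:01:00.0   # hypothetical pci address of the gpu
# release the device from vfio-pci
echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind
# clear any driver override so the kernel can pick the native driver again
echo > /sys/bus/pci/devices/$GPU/driver_override
# re-probe the device: binds nvidia/amdgpu if the module is loaded
echo "$GPU" > /sys/bus/pci/drivers_probe
```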
  7. Can you send me, in pm if you want, the reddit link? Just curious about what is written there. Update: found it, but it doesn't seem sponsored in any way. It seems a simple review, not good, not too bad.
  8. Sorry, ignore this, the position you wrote is the right one! As for the other issue, I'm sorry, I didn't try it myself but only reported some findings.
  9. It's normal that you have some video output when unraid boots; vfio attaches after. Yes, multifunction is applied correctly. You didn't attach diagnostics and the vbios file you are using. Note that if you dump the vbios using gpu-z you still need to remove the header.
  10. Is the issue that you don't have internet inside the vm, or the message "guest agent not installed"? If it's the latter, just install the qemu guest agent into your linux vm. The package name can differ between linux distributions; for example it can be qemu-guest-agent. After installation enable it, for example:
      systemctl enable qemu-guest-agent
      systemctl start qemu-guest-agent
      Then check if it's running correctly, for example:
      systemctl status qemu-guest-agent
      If you don't have internet, change the network type from e1000 to virtio, virtio-net, or e1000-82545em.
  11. See if this applies: https://forums.unraid.net/topic/125626-fehler-mit-einer-win-10-vm-bei-unraid-seit-ver-6103/?do=findComment&comment=1145374
  12. You edit it from the unraid gui, but editing the file is fine too... can you attach the file? I think there could be some incorrect formatting.
  13. As you can see from your command output, efifb attaches to your gpu. Looking at your syslog, you are not applying the kernel arguments you pasted in your post #1.
  14. Are you asking if it will work if you hibernate the vm from inside the guest instead of from the host? My reply is... I don't know. But several users reported it working with virsh commands; dompmsuspend and dompmwakeup are virsh commands to be given from the host, and the guest requires the guest agent installed. Here are posts where I got some info: https://www.reddit.com/r/VFIO/comments/568mmt/saving_vm_state_with_gpu_passthrough/
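      For reference, a minimal sketch of that virsh flow from the host, assuming a vm named "Windows 10" (substitute your own) with the guest agent running inside it:

```shell
# suspend the guest to ram (S3); needs qemu-guest-agent inside the guest
virsh dompmsuspend "Windows 10" --target mem
# later, wake it back up from the host
virsh dompmwakeup "Windows 10"
```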
  15. mmm... this shouldn't happen... if hibernation is set to disk, the vm should report as shut down and the gpu should be free for other uses. Did you enable suspend to disk in the xml? Check this, it might help: https://forums.unraid.net/topic/130134-switching-from-gpu-passthrough-local-to-vnc-in-linux-pop/?do=findComment&comment=1184943
  16. Attach a diagnostics and the output of the command "cat /proc/iomem"
  17. In addition to hot22shot's suggestion, which I think is necessary (otherwise you could get a code 12 error in windows), pay attention to the layout in the guest os: you can't have the audio of gpu 2 in the same bus and slot as the video of gpu 1. Moreover, the addresses and multifunction are in the wrong place. So change to this:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
      </hostdev>
  18. This could suggest a kernel issue:
      6.10.3 --> kernel 5.15.46
      6.11.x --> kernel 5.19.x
  19. Me too, and it could be a great idea to share results anonymously. For what it's worth, all I wrote has already been written here in the forum.
  20. Restore the syslinux parameters you need, in particular video=efifb:off. Set PCIe ACS override to "both", restart the server, and see if iommu group 16 is split, with your video and audio ending up in a group without anything else.
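      On unraid those parameters go on the append line in /boot/syslinux/syslinux.cfg (also editable from Main -> Flash -> Syslinux Configuration); a sketch of what the default boot entry might look like with video=efifb:off and the acs override set to "both" (label and layout may differ on your system):

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off pcie_acs_override=downstream,multifunction initrd=/bzroot
```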
  21. Attach new diagnostics. Fix the syslinux line: video=efifb:off is repeated twice.
  22. Read above; it seems virtualization has been reset to off in your bios, or you changed it.
  23. One step back... sorry, looking at your diagnostics it seems iommu is not available (?). Make sure virtualization is enabled in the bios (vt-x+vt-d / svm+amd-v).
  24. Tools -> System Devices; put checkmarks next to the devices to bind and reboot.
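      Checking those boxes writes the selection to /boot/config/vfio-pci.cfg, which binds the devices to vfio-pci at boot. An example of what the file ends up containing, for a hypothetical gpu + audio pair (the pci addresses and vendor:device ids here are made up):

```
BIND=0000:01:00.0|10de:1b81 0000:01:00.1|10de:10f0
```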