Everything posted by ghost82

  1. vfio: Cannot reset device 0000:0b:00.1, depends on group 31 which is not owned. Even though they are in different IOMMU groups, you may need to attach to vfio also the USB controller at 0b:00.2 and the serial bus controller at 0b:00.3 of the GPU:

     0b:00.2 USB controller [0c03]: NVIDIA Corporation TU116 USB 3.1 Host Controller [10de:1aec] (rev a1)
       Subsystem: NVIDIA Corporation Device [10de:2182]
       Kernel driver in use: xhci_hcd
     0b:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU116 USB Type-C UCSI Controller [10de:1aed] (rev a1)
       Subsystem: NVIDIA Corporation Device [10de:2182]

     Attach them to vfio and change your Ubuntu VM from this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/gtx1660ti.rom'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </hostdev>

     to this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/gtx1660ti.rom'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
     </hostdev>

     Reboot before running the Ubuntu VM.
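The pattern above (same target bus and slot, one function per component, multifunction='on' only on function 0x0) can be sketched programmatically. This is a minimal illustration, not an Unraid tool; the helper name `hostdev_xml` and the generated fragments are an assumption for demonstration, and the ROM line is omitted for brevity:

```python
# Sketch: build libvirt <hostdev> fragments for a multifunction GPU.
# Source bus/slot come from lspci on the host; the target bus is the
# address the guest will see. Only function 0x0 carries multifunction='on'.

def hostdev_xml(src_bus, tgt_bus, functions, slot=0):
    blocks = []
    for fn in functions:
        multi = " multifunction='on'" if fn == 0 else ""
        blocks.append(
            "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
            "  <driver name='vfio'/>\n"
            "  <source>\n"
            f"    <address domain='0x0000' bus='0x{src_bus:02x}' "
            f"slot='0x{slot:02x}' function='0x{fn:x}'/>\n"
            "  </source>\n"
            f"  <address type='pci' domain='0x0000' bus='0x{tgt_bus:02x}' "
            f"slot='0x{slot:02x}' function='0x{fn:x}'{multi}/>\n"
            "</hostdev>"
        )
    return "\n".join(blocks)

# The GPU from the post: source bus 0x0b, target bus 0x03, four functions.
xml = hostdev_xml(src_bus=0x0b, tgt_bus=0x03, functions=[0, 1, 2, 3])
print(xml)
```

The key point the sketch encodes: every component keeps its own function number, all components share one target bus/slot, and exactly one block (function 0x0) declares the device multifunction.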
  2. Yes, they are the same. As for the NvVars file, you don't need it; EFI variables are written into the OVMF_VARS file. NvVars is needed when no VARS file is specified and you have only a single big read-only firmware file (OVMF): here OVMF is already split into CODE and VARS, the latter being writable.
  3. A VGA port output never existed on real Macs, so it won't work out of the box; it may work by patching connectors with WhateverGreen (and I think that's only possible for iGPUs).
  4. I would suggest not passing through the NVMe controller, but saving a vdisk file onto the NVMe, attached to a virtio controller; it won't be as fast as passing through the controller, but I don't think you'll notice any particular performance issue. Pass your GPU through, fix the multifunction setup as described above, and attach new diagnostics if it doesn't boot.
  5. That line has nothing to do with your NVMe address: it specifies the PCI address inside the VM (it's the target address, not the source!). You are using a vdisk saved in /mnt/user/domains/Windows 11 - Test/, attached to a virtio controller, at address 03:00.0 in the VM. You can change that address in the XML to something else not in use, for example 04:00.0 or 05:00.0, or whatever you want, obviously also providing a pcie-root-port to attach it to. To pass through the NVMe, you simply delete this:

     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' cache='writeback'/>
       <source file='/mnt/user/domains/Windows 11 - Test/vdisk1.img'/>
       <target dev='hdc' bus='virtio'/>
       <boot order='1'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </disk>

     and add this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
       <boot order='1'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </hostdev>

     It's better to isolate the NVMe at 03:00.0 (bind it to vfio) at boot.
  6. Make sure the GPU is not used by anything else on the host. Update the drivers in the VM; the latest version is R510 U2 (511.65). If it doesn't work you may need to hide the hypervisor in the XML, although Quadros are not consumer GPUs.
  7. I think you misunderstood; he couldn't have meant that... If you attach diagnostics we can look at some data and see whether you made a mistake somewhere.
  8. 17G14033 is older than 17G14042, so update to the latest. nvda_drv should be for older Nvidia driver versions; following the Dortania guide, with v. 387.10.10.10.40.140, use the boot arg nvda_drv_vrl=1.
  9. And I think you also need the boot arg nvda_drv_vrl=1. Make sure you have an up-to-date High Sierra version (17G14042), then download and install web driver v. 387.10.10.10.40.140: https://images.nvidia.com/mac/pkg/387/WebDriver-387.10.10.10.40.140.pkg
  10. I mean installing a new PCIe card with a separate USB controller, so you can pass it through. Stop your VM, attach your device to the USB port, go to your VM tab, edit your VM, scroll to the bottom of the VM GUI and you should see your USB device listed; put a check on it and save. NOTE: this is not plug and play! If you remove the USB device, you will not be able to start the VM again unless you also remove the code block pointing to that USB device from the XML. It is generally fine for a USB mouse dongle, a webcam, etc., that you plug in and forget there; not for removable USB pen drives, hard drives, etc.
  11. So, it doesn't work with qxl/vnc graphics? Is the VM booting to the BIOS? Where does it hang? For GPU passthrough, since you have the Nvidia driver plugin, try to isolate (attach to vfio) the GPU at boot, IOMMU groups 90 and 91, reboot and try. Attach new diagnostics if it doesn't work. But first you need to make it work with qxl+vnc. About the "Windows 10vnc" VM, what's the output of:

      fdisk -l "/mnt/user/domains/Windows 10vnc/vdisk1.img"
  12. It's there... It could be the virtio type that doesn't play well with Mojave. You can check whether the AppleQEMUGuestAgent is running, in a macOS terminal:

      pgrep AppleQEMUGuestAgent

      If it returns a process id number, it's running. If it's not running, try to start it manually; the binary should be inside /usr/libexec/ (check it):

      cd /usr/libexec/
      ./AppleQEMUGuestAgent

      Things you can try:
      - change the machine version from pc-q35-4.2 to 5.2; sometimes changing version slightly changes the DSDT
      - update OpenCore: you can try this 0.7.7 version: https://github.com/SpaceinvaderOne/Macinabox/raw/7aab68aa382a07862d15998fdb28bd7b366f8715/bootloader/OpenCore.img.zip
        First back up your current OpenCore img. Disconnect from the internet if you log in with your Apple ID, since the image above has generic SMBIOS data.

      In the first place I would focus on making it work "natively" instead of playing with scripts.
  13. Happy to read that you solved it. A thank you, which you already said, is more than enough for me.
  14. I have had macOS VMs installed since High Sierra and it never happened; since most of the hardware is emulated, we should all have nearly the same DSDT, with the same ACPI tables. virsh shutdown in fact works for me. The guest agent is included in the macOS system, at least I'm 100% sure it's there in Big Sur and Monterey; no need to install it. If you grep your CPU features in the VM and you have VMM, the guest agent is started automatically. What version of macOS do you have? Which bootloader? Is it updated? Do you have the guest agent defined in the XML? Which q35 version did you set in the XML?
  15. All this to say... no, it's wrong. Understand the logic behind this and it will be easy! I had issues in understanding this at the beginning too. Starting from the working XML, all you have to do is change all the source addresses, which will probably be 01:00.0, 01:00.1, 01:00.2, 01:00.3 instead of 04:00.0, 04:00.1, 04:00.2, 04:00.3.
  16. There are two addresses for each component of the GPU: one is the source address, the other is the target address. Your GPU has 4 components: video, audio, USB controller and serial bus controller. Each component is inside a hostdev block. Let's take as an example this hostdev block:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </source>
        <rom file='/mnt/user/isos/vbios/Geforce_RTX2070.rom'/>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
      </hostdev>

      This is the block of the video component of the GPU.

      The source address, specified inside the <source></source> block, is the address of the video component as seen by Unraid. Where can you find it? For example with the lspci command, or in the Unraid diagnostics in the /system/lspci.txt file, or even in the Unraid GUI. In one of your diagnostics you had this in the lspci.txt file:

      04:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2070] [10de:1f02] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:3fda]
        Kernel driver in use: vfio-pci
      04:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:3fda]
        Kernel driver in use: vfio-pci
      04:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1 Host Controller [10de:1ada] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:3fda]
        Kernel driver in use: vfio-pci
      04:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller [10de:1adb] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:3fda]
        Kernel driver in use: vfio-pci

      VGA: the video component of the GPU; its source address is 04:00.0 (bus:slot.function), i.e. bus=0x04, slot=0x00, function=0x00.
      Audio device: the audio component of the GPU; its source address is 04:00.1, i.e. bus=0x04, slot=0x00, function=0x01.
      USB controller: the USB controller component of the GPU; its source address is 04:00.2, i.e. bus=0x04, slot=0x00, function=0x02.
      Serial bus controller: the serial bus controller component of the GPU; its source address is 04:00.3, i.e. bus=0x04, slot=0x00, function=0x03.

      Back to the XML code: as you can see, for the video component you set the source address to this:

      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>

      which is correct.

      The target address, specified outside the <source></source> block, is the address of the video component as seen by the VM:

      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>

      In this example that block says: Unraid, pick the PCIe device you have at 04:00.0, attach it to the vfio driver and put it in the VM at address 05:00.0. Note that the target address line also has multifunction='on': it means that the whole GPU is a multifunction device made of different components. A multifunction device is a device with different components, each having the same bus, the same slot and a different function. The source device (the GPU):

      04:00.0
      04:00.1
      04:00.2
      04:00.3

      is multifunction: all the components are at bus 0x04, slot 0x00, but each has a different function: 0x00, 0x01, 0x02, 0x03. So, in the XML, you specify multifunction='on' only in the video component, and you change the bus/slot/function of the other components so that they have the same bus, the same slot and different functions.

      In the target VM the components of the GPU are at:

      05:00.0
      05:00.1
      05:00.2
      05:00.3

      In other words the source-->target map is:

      04:00.0 --> 05:00.0
      04:00.1 --> 05:00.1
      04:00.2 --> 05:00.2
      04:00.3 --> 05:00.3
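The source-->target mapping above is mechanical, so it can be expressed as a short sketch. The lspci lines below are sample data taken from the post, and the target bus 0x05 is just the chosen free bus in the VM; nothing here is a real Unraid or libvirt API:

```python
# Sketch: derive the source->target address map for a multifunction GPU.
# Source addresses (bus:slot.function) come from lspci on the host;
# only the bus changes in the guest, slot and function stay the same.
import re

lspci_lines = [
    "04:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106",
    "04:00.1 Audio device [0403]: NVIDIA Corporation TU106 HD Audio",
    "04:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1",
    "04:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 UCSI",
]

TARGET_BUS = 0x05  # the free bus chosen for the VM side

mapping = {}
for line in lspci_lines:
    m = re.match(r"([0-9a-f]{2}):([0-9a-f]{2})\.([0-9a-f])", line)
    bus, slot, fn = (int(x, 16) for x in m.groups())
    src = f"{bus:02x}:{slot:02x}.{fn:x}"
    tgt = f"{TARGET_BUS:02x}:{slot:02x}.{fn:x}"
    mapping[src] = tgt

print(mapping)
# {'04:00.0': '05:00.0', '04:00.1': '05:00.1',
#  '04:00.2': '05:00.2', '04:00.3': '05:00.3'}
```

This reproduces exactly the map in the post: same slot, same function, new bus on the VM side.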
  17. It disables memory-mapped PCI configuration registers, so the kernel can't read those registers. I don't know, you added it manually, not me. It's usually added for buggy BIOSes/buggy PCIe devices. If your system runs well, without any flood of errors in the logs, you are fine without it.
  18. You can't. Not sure why you highlighted IOMMU groups 5, 6 and 14; these have nothing to do with the USB controller. You have 2 options: 1. install a second USB controller; 2. pass through single USB devices to the VM.
  19. Hi, that is not the usual behavior. Can you try, in the Unraid terminal:

      virsh shutdown MacOSVMName

      and see what happens? Also try:

      virsh shutdown MacOSVMName --mode agent

      The second command will use the QEMU guest agent inside the macOS VM instead of ACPI. Do you have custom SSDT/DSDT injected that may interfere with the shutdown?
  20. First try it as it is, since we know you got a video output; afterwards, if it boots, experiment with whatever you want. You will always have "a restore point".
  21. This one: https://forums.unraid.net/topic/119422-unable-to-pass-through-graphics-board-to-win10-vm/?do=findComment&comment=1092051 But first, try the fdisk command after you copy the vdisk and see if it lists partitions.
  22. OK, that explains everything: the passthrough with OVMF works, because you can output the UEFI shell to your screen; it works with the DP port only because the layout of your GPU sets the DP port as the primary port. The issue is the disk, which is blank, with no partitions at all. You need to reinstall Windows on that disk, or try copying vdisk1 from your "Windows 10 Office" folder to your "Windows 10 Test" folder.
  23. Impossible, SeaBIOS doesn't have a UEFI shell. Anyway, if you installed it with OVMF there's no need to test SeaBIOS. This is wrong: one VM is using /mnt/user/domains/Windows 10 Test/vdisk1.img, the other is using /mnt/user/domains/Windows 10 Office/vdisk1.img. So it's not the same disk; maybe you copied the disk? Run this in an Unraid terminal and paste the output here:

      fdisk -l "/mnt/user/domains/Windows 10 Test/vdisk1.img"
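What the fdisk -l call above tells you can also be checked by hand on a raw image (not qcow2): an MBR/GPT disk ends its first 512-byte sector with the 0x55 0xAA boot signature, while a freshly created blank vdisk is all zeros. This is a minimal sketch; the `has_boot_signature` helper and the throwaway file name are made up for illustration:

```python
# Sketch: check whether a raw vdisk image contains a partition table at all.
# A disk with an MBR (or GPT's protective MBR) has bytes 0x55 0xAA at
# offsets 510-511 of its first sector; a blank image is all zeros.
import os

def has_boot_signature(path):
    with open(path, "rb") as f:
        sector = f.read(512)
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# Demo with a throwaway blank "vdisk" (sparse 1 MiB file, all zeros):
with open("blank.img", "wb") as f:
    f.truncate(1024 * 1024)

print(has_boot_signature("blank.img"))  # prints False: no partition table
os.remove("blank.img")
```

If this returns False on the raw vdisk (as fdisk reporting no partitions would), the disk was never partitioned, which matches the "blank disk" diagnosis in the posts above.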
  24. Try to change from OVMF to SeaBIOS and see if it boots:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10 Test</name>
  <uuid>2edc8daf-caf0-4d2e-70ea-c7ab8cb5bac1</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='14'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Test/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xf'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='10' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:00:05:7f'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/Geforce_RTX2070.rom'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>