ghost82

Members · 1274 posts · 243 reputation · 5 days won · last won the day on October 22


  1. System Devices > PCI Devices and IOMMU Groups: put a check on iommu group 72. Sometimes even the revision number may cause issues, so it's better to dump your own rom. Most probably you can't dump it because the gpu is attached to and in use by the host. Try to dump it again after attaching it to vfio (bullet point 1 + restart): you'll need an additional primary gpu, or access unraid from another device, since the gpu will be attached to vfio and can no longer be used by unraid. (There's a rom dump sketch after this list.)
  2. Can you please try to:
     1. attach to vfio iommu group 72
     2. in the xml of the vm replace this:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/isos/vbios/gtx1070z2.rom'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        With this:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/isos/vbios/gtx1070z2.rom'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
        </hostdev>
     3. make sure /mnt/user/isos/vbios/gtx1070z2.rom is your own bios and not a downloaded one
     4. reboot unraid and start the vm
     (A quick lspci check of the two gpu functions is sketched after this list.)
  3. You could try latencymon and follow the advice here: https://www.sweetwater.com/sweetcare/articles/solving-dpc-latency-issues/ It's probably irq conflicts: MSI util v2/v3 can help with this.
  4. Make sure to attach to vfio groups 26, 27, 28 and 29. Reboot. Replace this:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
     </hostdev>
     With this:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x09' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x3'/>
     </hostdev>
     Start the vm.
  5. I understand it can be difficult to manually edit the config.plist to update the smbios data. As I wrote, I pushed a pr to the container. Since it's not merged yet, you can "manually" apply the changes by following these steps:
     0. clean the container, delete all files
     1. setup Macinabox in Apps (uncheck Autostart)
     2. wget https://github.com/SpaceinvaderOne/Macinabox/raw/2a2400c44af497a00f2610523cb0c0844d2aae27/bootloader/OpenCore.img.zip
     3. docker cp ./OpenCore.img.zip macinabox:/Macinabox/bootloader/OpenCore.img.zip
     4. start the docker container and proceed as normal
     Then you can use opencore configurator. Rather than the latest version at the time of writing (2.52.0.0), it's better to use v. 2.51.0.0: https://mackie100projects.altervista.org/download/opencore-configurator-2-51-0-0/ since 2.52.0.0 targets the opencore development version (not stable), which added a couple of new options to the config.plist.
  6. It should be related to the nvidia drivers; it seems that, for now, only disabling hardware acceleration in the browser solves the issue. Btw, are you using the latest nvidia drivers? Does it hang if 3d support is off in the browser?
  7. Why do you say it's freezing? When using efifb:off, in all the cases I've seen, the output on the monitor stops exactly at what's in your screenshot (vfio-pci blablabla, vga decodes blablabla). In my opinion it's working as expected.
  8. It has been said several times: STOP USING OPENCORE CONFIGURATOR. Start over, or download a fresh opencore img and boot from it, and after this: STOP USING OPENCORE CONFIGURATOR. Opencore in macinabox is v. 0.7.0. Maybe I will push a pr with the latest opencore just so I don't have to read these msgs all over again, let's do it. PR submitted, let's wait for it to be merged (if it is accepted): https://github.com/SpaceinvaderOne/Macinabox/pull/53
  9. Ok, at least there are partitions on it. Now we need to know the filesystem of the 240G partition, where the data should be. You can use the parted command:
     parted /path/to/vdisk2.img print
     Check the "File system" column for the 240G partition. If it's ntfs, try to mount it (create a directory for your mount point first, e.g. /mount/point/):
     mount -t ntfs -o ro,offset=$((512*32768)) /path/to/vdisk2.img /mount/point/
     Does it mount? (How the offset is computed is sketched after this list.)
  10. Basically you have 2 options for the graphics: 1. emulated graphics, 2. gpu passthrough.
      1. When you emulate the graphics you connect to the vm through vnc, either from the host, meaning from the same pc where the vm is installed (case 1a), or from another pc in the lan, or even outside the lan, that has connectivity to the unraid host (case 1b).
      Case 1a: when you connect through vnc from the same box, all you have to do is go to the virtual machines tab in unraid and click on the logo of the vm; you will see "VNC remote": if you click on it a new browser window will open connecting to the vm via vnc.
      Case 1b: the vnc server listens on the host (unraid) -- not on the vm -- on port 5900 (for the first vm) --> if you have multiple vms running at the same time they will listen on ports 5900, 5901, 5902, etc. So you can connect with a vnc client from an external device to the host that is hosting your vm and manage the vm from there. (See the connection example after this list.)
      2. When you use gpu passthrough you can get the video output of the vm on a monitor attached to the gpu; if you need to manage the vm remotely you can install vnc or any remote management software inside the os of the vm. Sometimes gpu passthrough is needed on some OSes to get video acceleration; in that case one can use a dummy plug attached to the gpu (if the vm is accessed remotely) or a monitor attached to the gpu.
  11. Why did you use opencore configurator... No, but you could mount the EFI partition of the mac os disk and replace the bootloader files (mounting the EFI is sketched after this list). imacpro1,1 is the best choice for a vm. Do not play too much with the smbios data, especially if you are logged in with your apple id: apple detects all your devices, and if you change the smbios data too many times apple may lock your account server side. For the cpu you have different choices: you can use Penryn (easiest way); you can use a newer emulated cpu (some users reported success using "Skylake-Server" instead of "Penryn" in the xml); you can pass through your amd cpu, but this will require a lot of patches; or you can pass through your amd cpu and spoof it as an intel cpu, as described here at the end of the message by pavo (doesn't require any patch):
  12. Well.. you were right, apparently you are booting without any kext injection: that sounded strange to me, but it's possible, and I remember it being possible. Let's try something simple and modify the vm template by removing some of the virtual usb controllers; maybe you have too many usb ports defined, exceeding the mac os limit.
      1. copy and paste your current xml somewhere as a backup
      2. replace the xml with this:
      <!--
      WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
      OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
        virsh edit f3aa0671-06f6-a8ef-6bb1-76050b958a5a
      or other application using the libvirt API.
      -->
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
        <name>El Capitan</name>
        <uuid>f3aa0671-06f6-a8ef-6bb1-76050b958a5a</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
        </metadata>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>8</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='12'/>
          <vcpupin vcpu='2' cpuset='5'/>
          <vcpupin vcpu='3' cpuset='13'/>
          <vcpupin vcpu='4' cpuset='6'/>
          <vcpupin vcpu='5' cpuset='14'/>
          <vcpupin vcpu='6' cpuset='7'/>
          <vcpupin vcpu='7' cpuset='15'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/f3aa0671-06f6-a8ef-6bb1-76050b958a5a_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='4' threads='2'/>
          <cache mode='passthrough'/>
          <feature policy='require' name='topoext'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='writeback'/>
            <source file='/mnt/vm/El Capitan/clover.qcow2'/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/vm/El Capitan/El Capitan.img'/>
            <target dev='hdd' bus='sata'/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='3'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/vm/El Capitan/vdisk3.img'/>
            <target dev='hde' bus='sata'/>
            <boot order='3'/>
            <address type='drive' controller='0' bus='0' target='0' unit='4'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0x13'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:cc:37:24'/>
            <source bridge='br0'/>
            <model type='vmxnet3'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/vm/BIOS/Nvidia Geforce 970 - UnraidNAS.rom'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
        <qemu:commandline>
          <qemu:arg value='-device'/>
          REPLACE HERE
          <qemu:arg value='-smbios'/>
          <qemu:arg value='type=2'/>
          <qemu:arg value='-cpu'/>
          <qemu:arg value='Penryn,vendor=GenuineIntel,kvm=on,+invtsc,+avx,+avx2,+aes,+xsave,+xsaveopt,vmware-cpuid-freq=on,'/>
        </qemu:commandline>
      </domain>
      At the end of the xml you see "REPLACE HERE": delete that line and paste this (highlighted in yellow): https://github.com/SpaceinvaderOne/Macinabox/blob/master/xml/Macinabox BigSur.xml#L139
      Save and start the vm. Test whether remote mouse and keyboard work in the vm. If they do, try this with ioregistry explorer open:
      1. disconnect your hub
      2. in ioreg find pci1022,149c:
      3. plug a usb 2 device into the usb port the hub was connected to and see if something gets detected (something in green should appear in ioreg)
      4. plug a usb 3 device into the usb port the hub was connected to and see if something gets detected (something in green should appear in ioreg)
      5. if these tests are successful, plug the hub into that port
      6. plug a usb 2 device into the hub and see if something gets detected (something in green should appear in ioreg)
      7. plug a usb 3 device into the hub and see if something gets detected (something in green should appear in ioreg)
      Another test you can try: plug a usb 2 device into the hub, start the vm with the device plugged in, and see whether it gets detected in ioreg. If usb still doesn't work but remote mouse and keyboard do, attach a new copy of ioreg.
  13. I'm assuming your vdisk2 has only one partition; is it ntfs or something else? In unraid, in the terminal, what's the output of:
      fdisk -l /path/to/image/vdisk2.img
  14. Mmm, no... and it's difficult to say what's happening... if you get no errors during the vm creation I would exclude errors in the template. I would check the network: how are you connecting to the vm? From localhost (the same box), or from another device via the lan? If it's the second, remember that you need to point vnc at the ip of the host, not at that of the vm.
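
A minimal sketch of the rom dump mentioned in reply 1, run from the unraid terminal after the gpu has been bound to vfio and the host rebooted. The 0000:81:00.0 address and the output path are assumptions taken from the xml in reply 2; substitute your own card's address and destination:

    # find the gpu's pci address first (assumed 0000:81:00.0 here)
    lspci -Dnn | grep -i vga

    # enable reading of the rom via sysfs, dump it, then disable it again
    cd /sys/bus/pci/devices/0000:81:00.0
    echo 1 > rom
    cat rom > /mnt/user/isos/vbios/dumped.rom
    echo 0 > rom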
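
For the multifunction change in reply 2, it can help to confirm that the card really exposes both source functions (video and audio) before editing the xml. A quick check, assuming the gpu sits at bus 81 as in that xml:

    # list every function of the device at bus 81, slot 00
    lspci -s 81:00
    # expect two lines: 81:00.0 (VGA compatible controller) and 81:00.1 (Audio device)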
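
About the offset=$((512*32768)) in reply 9: 512 is the sector size and 32768 is the start sector of the data partition inside the image, both read from fdisk. A sketch, with the start sector and paths as placeholders to adapt to your own fdisk output:

    # read the partition table of the raw image
    fdisk -l /path/to/vdisk2.img
    # note the "Start" sector of the 240G partition (32768 in this example)
    # and the sector size (usually 512 bytes)

    # mount it read-only at byte offset = sector_size * start_sector
    mkdir -p /mount/point
    mount -t ntfs -o ro,offset=$((512*32768)) /path/to/vdisk2.img /mount/point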
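
For case 1b in reply 10, any vnc client pointed at the unraid host will do. A one-line example, assuming a TigerVNC-style vncviewer client and an unraid host at 192.168.1.10 (both assumptions, adapt to your setup):

    # the first running vm listens on 5900, the second on 5901, and so on
    vncviewer 192.168.1.10:5900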
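
For mounting the EFI partition mentioned in reply 11, from a terminal inside the mac os guest. disk0s1 is an assumption; check diskutil's output for the real identifier of the EFI partition on the mac os disk:

    # list disks and find the EFI partition of the mac os disk (usually the s1 slice)
    diskutil list
    # mount it; it will show up under /Volumes/EFI
    sudo diskutil mount /dev/disk0s1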