IronRooster

Members
  • Posts: 12
Everything posted by IronRooster

  1. I ran into this recently while changing graphics cards in my host. In my case, it was referring to a source (host) device that was listed twice in the XML. Once I found it (please can you let us search the whole XML at once!?) and changed it as well, I could save it with no problem. (Edit: if you have changed hardware in the form view, you may need to switch to XML view so you can see all the hardware - IIRC, the form view hides hardware that isn't there any more, but it's still there in the XML.)
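     In case it helps anyone searching for the same thing later, the duplicate I'm describing looked roughly like this: two passthrough entries whose <source> blocks pointed at the same host PCI address. The addresses below are placeholders for illustration, not my actual config - the fix was updating the stale entry's source address to the new card.

     <!-- Illustrative sketch only: host address values here are made up -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>  <!-- old card's host address -->
       </source>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </hostdev>
     <!-- ...further down in the XML, a second entry still pointing at the same host device... -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>  <!-- same host address repeated -->
       </source>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </hostdev>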
  2. Thank you for that! I wound up figuring it out by putting the drive into a new VM under the Form View and then copying the XML it created. It's a bit frustrating that the upgrade didn't catch and update it, but I'm very happy to have my machines back up and running!
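     For anyone who hits the same error after an upgrade: I don't have the old and new XML side by side any more, but the gist of what the form view generated for a /dev/disk/by-id/ passthrough disk is a block-device definition rather than a file-backed one, roughly like this (my best reconstruction; target dev and unit will differ per VM):

     <!-- Disk passed through by its /dev/disk/by-id/ path, declared as a block device -->
     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source dev='/dev/disk/by-id/ata-Samsung_SSD_870_QVO_2TB_S6R4NJ0R501149J'/>
       <target dev='hdc' bus='sata'/>
       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
     </disk>

     whereas my old config had type='file' with <source file='/dev/disk/by-id/...'/>, which the newer libvirt in 6.10.3 apparently no longer accepts for a symlink to a block device.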
  3. Hello! I recently upgraded to 6.10.3 and now my primary VM is not starting, and gives the following error:

     Other VMs start fine, and I'm not sure why this one isn't. It has been running without a problem for the past several versions of Unraid. The /dev/disk/by-id/ link should not be a regular file - those are always symlinks to the /dev/sdX device, so I don't understand why it is complaining. Any thoughts or insights would be great! Config is below:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>Uther BigSur (macinabox)</name>
       <uuid>f3e5d01d-c836-41bc-a37e-b9260713f970</uuid>
       <description>MacOS Big Sur</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="BigSur (macinabox)" icon="/mnt/user/appdata/vm_custom_icons/icons/BigSur.png" os="BigSur"/>
       </metadata>
       <memory unit='KiB'>67108864</memory>
       <currentMemory unit='KiB'>67108864</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='16'/>
         <vcpupin vcpu='2' cpuset='1'/>
         <vcpupin vcpu='3' cpuset='17'/>
         <vcpupin vcpu='4' cpuset='2'/>
         <vcpupin vcpu='5' cpuset='18'/>
         <vcpupin vcpu='6' cpuset='3'/>
         <vcpupin vcpu='7' cpuset='19'/>
         <vcpupin vcpu='8' cpuset='4'/>
         <vcpupin vcpu='9' cpuset='20'/>
         <vcpupin vcpu='10' cpuset='5'/>
         <vcpupin vcpu='11' cpuset='21'/>
         <vcpupin vcpu='12' cpuset='6'/>
         <vcpupin vcpu='13' cpuset='22'/>
         <vcpupin vcpu='14' cpuset='7'/>
         <vcpupin vcpu='15' cpuset='23'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/f3e5d01d-c836-41bc-a37e-b9260713f970_VARS-pure-efi.fd</nvram>
         <boot dev='hd'/>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='8' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/isos/BigSur-opencore.img'/>
           <target dev='hda' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/dev/disk/by-id/ata-Samsung_SSD_870_QVO_2TB_S6R4NJ0R501149J'/>
           <target dev='hdc' bus='sata'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='pci' index='8' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='8' port='0xf'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
         </controller>
         <controller type='pci' index='9' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='9' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='10' model='pcie-to-pci-bridge'>
           <model name='pcie-pci-bridge'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='pci' index='11' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='11' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='12' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='12' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='13' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='13' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='14' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='14' port='0x14'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <controller type='pci' index='15' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='15' port='0x15'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
         </controller>
         <controller type='pci' index='16' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='16' port='0x16'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <audio id='1' type='none'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
           </source>
           <rom file='/mnt/user/roms/msi-vega64.2.rom'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x10' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x11' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x12' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x10' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'/>
       <qemu:commandline>
         <qemu:arg value='-usb'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='isa-applesmc,osk=xxx(c)AppleComputerInc'/>
         <qemu:arg value='-smbios'/>
         <qemu:arg value='type=2'/>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
       </qemu:commandline>
     </domain>
  4. I bought one of these cards and it worked in Windows & Linux VMs for me, but did not work in a macOS VM.
  5. Hey, sorry for the slow reply; it was a crazy weekend. The kext worked for me too! My card has 8086,1533 for the IDs, so it must have just been the new kext in 11.4. Woo! Thank you so much!
  6. Anybody have any ideas? Does anyone use Ethernet passthrough for their Macs? The card works just fine passed through to Windows and Linux.
  7. Hi! I have a macinabox-based Big Sur install that I've customized to use a PCIe graphics card (Radeon Vega 64), an NVMe drive for the OS, and an Intel i210 network card so that I can get decent throughput (the emulated e1000 just never performed well enough for me). Everything had been working great until I updated to 11.4 yesterday, and now the VM no longer finishes booting up. :-/ Long story short, I set up the same hardware on a new VM using VNC (to get around having to power down the whole system due to the Radeon not resetting properly), and after a bunch of testing discovered that it crashes whenever I plug the ethernet cable into the i210 card and it gets an IP address. Really weird. Of note, I did also update from 6.9.1 to 6.9.2, but I've reverted back and the behaviour is the same, so I don't think that's related. The hardware behind this is a Dell T7910; it has the i210 network card and an i217-LM, and from what I remember when I first researched which to pass to macOS, the i210 was more desirable. I suppose I can see about passing the other instead, but haven't had the energy to do that yet. Any ideas where I can start to debug this?
  8. I did finally figure out what I needed to do to get it to work reliably. It is very ugly and hacky and makes me think twice about doing it long term, but it is reliable for now. There are two big things: 1) I have to completely power the physical machine off to get it to work. Nothing short of a complete power-off works - I am assuming this has to do with the reset not working nicely on AMD cards. 2) I cannot have my 4K monitor plugged in at VM boot-up, otherwise it fails to boot! (I have a secondary 1080p monitor that I leave plugged in.) I have my machine type set to q35 5.1, and used OpenCore from SpaceInvaderOne's macinabox. I read the ROM from the card and have specified that. I also set up the graphics card and the audio part together like this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <rom file='/mnt/user/roms/msi-vega64.rom'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>

     Hope this helps anyone else struggling with Vega cards!
  9. Hi! Yes, I did finally figure out what I needed to do to get it to work reliably under a macOS VM. The biggest thing I found was that I have to completely power the machine off, and then, when the Unraid server comes up, my macOS VM will display video output on the card. But only once. After that, I have to power the machine completely off and then turn it back on. I have my machine type set to q35 5.1, and used OpenCore from SpaceInvaderOne's macinabox. I read the ROM from the card and have specified that. I also set up the graphics card and the audio part together like this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <rom file='/mnt/user/roms/msi-vega64.2.rom'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
     </hostdev>

     FWIW, I have the MSI air-cooled version of the card. Oh, and I almost forgot the most important thing - I cannot have my 4K monitor plugged in (or at least, for me, not turned on), otherwise it doesn't work. I have a secondary 1080p monitor on DisplayPort that I leave plugged in, and then I get the OpenCore boot screen and everything works fine. I can plug in the 4K monitor after it is up and it works perfectly. Hope this helps!
  10. Hello, all! So, long story short, I have been using a Dell T7610 (dual E5 v2) as my primary workstation with Win10 and macOS VMs so I can easily swap between them. I decided I wanted some newer CPUs, so I bought a Dell T7910 (dual E5 v4) and planned to swap the two graphics cards (one Nvidia 980 Ti, one Radeon Vega 64) into the newer T7910 machine once everything was set up. To keep things simple, I decided to start with fresh versions of both the Windows 10 and macOS VMs. I got both new VMs working on the new machine with older video cards (again, one Nvidia, one Radeon (an RX 570)), and after swapping the newer graphics cards from the T7610 into the T7910, setting up Win10 with its Nvidia 980 Ti went swimmingly, but the macOS-with-Vega-64 transition has been a no-go. The Radeon Vega worked fine running macOS in the older T7610, and in the newer T7910 macOS works great with a Radeon RX 580. When I put the Vega 64 into the T7910, it acts very strangely - details below, but first what I've done. I've dumped the video BIOS and use it, even though the Vega 64 is the secondary video card. I've tried two different PCIe slots on CPU1 (and even a CPU2 slot) with no change in behaviour. I've tried using the Vega 64 within Windows 10 and Linux VMs with similar bad behaviour, but not always, which is weird. I don't see the TianoCore logo indicating it's starting - what seems to be going on is that the Vega 64 does not want to display graphics until the VM is completely up, which makes me suspect the vBIOS, but it behaves the same both with and without me supplying one. What is weird is that it will sometimes work (with Win10 or Linux), and sometimes the VM completely fails to boot. It never works with macOS. It really doesn't seem to work well when it is the only/primary card in the system - VMs never boot when it isn't the secondary card. Does anyone have any suggestions on what I might do? Anyone seen anything like this? Thanks! (P.S. I moved this from the VM forum to here because there was no response there, and this seems to be more of a compatibility problem between the Dell and the Radeon than something specific to the VMs - I apologize if that isn't appropriate!)
  11. I think this is more appropriate for the General Support forum - could someone please delete it?
  12. Hi! So, long story short, my Mac Pro died unexpectedly and I needed a new workstation Right Now. After some frustrations with bare-metal hackintosh, I found Unraid, and it was pretty easy (with some help from Space Invader One's videos) to get set up and running on a Dell Precision T7610 with 2x E5-2680 v2 CPUs. Things were great, I was running macOS High Sierra 10.13, and then I bought an AMD Vega 64 and had the great idea of upgrading to Catalina! Because I'm here, you can imagine how that went. (I also have the problem of the Vega 64 not working on VM reboot, which exacerbates things.) The main problem is that it doesn't actually display out on the monitors - the VNC server is still running and I can log in, but under About This Mac the graphics line is missing. The card does appear under the System Report, but the monitors aren't there, and it only lists a built-in display at 1280x1024. FWIW, the machine shows up as a late-2013 27" iMac. :-/ The frustrating thing is that 10.14 works normally... I'm a little hesitant to start over with macinabox from Space Invader One because I feel like this ought to be something I can make work. X-D A little about my background - I'm fairly new to Unraid, but am quite familiar with Linux.