[Support] SpaceinvaderOne - Macinabox



3 minutes ago, ghost82 said:

Installation is not from disk to disk, but a network installation: you download the base system image, then download the installer from Apple's servers.

OK, I understand now. 

Catalina-install.img & HighSierra-install.img are just part of the DVD install. Thanks!


I tried wiping out the OpenCore image by setting the option to 'yes' in the docker template and then back to 'no' as it directed me, but, despite the log claiming it put the image in my vm share:

 

OpenCore bootloader image named BigSur-opencore.img was put in your Unraid vm share in the folder named BigSur

 

it did not appear there. 

11 hours ago, PicPoc said:

 

On first reboot, you have to choose "macOS Installer" to complete installation.

 

Correction, for both me and anyone else with a similar issue: I had to choose "macOS Installer" *twice* (the first reboot + ~30 min of install, the second reboot + ~5 min of install) before I saw my disk relabeled BigSur. Then I had to go through the usual multiple reboots (familiar from previous Macinabox install attempts) to get it to install. I'll see how it goes from here, but so far so good; I just didn't expect this after previous attempts...

3 hours ago, ghost82 said:

There's no more method 1 and 2

That is where I am looking. In any case, I am almost sure that the media is downloading OK.

High Sierra also throws up the same message:

"Execution error

operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.f"

 

What is the "master var store"?

 

Or should I attempt to completely remove Macinabox so that I can start again? And does anyone know precisely what that entails? I tried deleting a bunch of stuff, but I suspect that has just got me into more of a mess... 😒

3 minutes ago, Black Aura said:

What is the "master var store"?

It's the .fd file where NVRAM variables are stored. It's defined in the XML between <nvram></nvram> tags, for example:

    <nvram>/opt/macos/OVMF_VARS.fd</nvram>

In your case it's missing at that path; why, I don't know.
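A quick way to check this (a sketch, assuming `virsh` is available on the Unraid host; the VM name in the example is a placeholder) is to pull the <nvram> path out of the VM's XML and test that the file exists:

```shell
# Minimal sketch: extract the <nvram> path from libvirt domain XML on stdin.
# In practice you'd feed it `virsh dumpxml "Your VM Name"`.
nvram_path() {
  sed -n 's|.*<nvram>\(.*\)</nvram>.*|\1|p'
}

# Example usage (on the Unraid host; "Macinabox BigSur" is a placeholder):
#   path=$(virsh dumpxml "Macinabox BigSur" | nvram_path)
#   [ -f "$path" ] && echo "var store present: $path" || echo "var store MISSING: $path"
```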

 

5 minutes ago, Black Aura said:

Or should I attempt to completely remove MacInABox so that i can start again?

If you still see methods 1 and 2, yes, remove the container and install the new one.


So, about the "master var store" issue: I deleted Macinabox with the templates and everything, but I still keep getting it. Catalina is the only one that does this for me so far. Any kind of suggestions would be appreciated.

What I tried, knowing it wouldn't work, was to copy the same path that my working Big Sur install has for the var store. Once you do that, it starts giving an error about the directory not existing for opencore.img.

On 1/31/2022 at 5:08 PM, EG-Thundy said:

Are you trying to install Catalina? I'm having the same issue on Catalina. Big Sur installed nicely, but I'm having this issue with Catalina. Also, the default ISO location is not used in the XML (it's looking for the ISO under /isos/macinabox catalina/, which it shouldn't), so I'm guessing something is off with the template.

The Catalina XML is now fixed and I have pushed an update. If you update the container and run the Catalina install, it should be okay.

 

1 hour ago, EG-Thundy said:

So, about the "master var store" issue: I deleted Macinabox with the templates and everything, but I still keep getting it. Catalina is the only one that does this for me so far. Any kind of suggestions would be appreciated.

What I tried, knowing it wouldn't work, was to copy the same path that my working Big Sur install has for the var store. Once you do that, it starts giving an error about the directory not existing for opencore.img.

Do you see the files in your system share?

[screenshot]


@SpaceInvaderOne Okay so unsure if I'm correct but I thought I might as well report this:

 

The helper script to fix XMLs doesn't seem to be working? I initially got this to work, but it crashed my server because it was using the cores unRAID seems to like more. Still, everything was showing up: the OpenCore boot menu, and after the Apple logo the progress bar did appear.

So then I rebooted, edited the XML to change the RAM and CPU core allocations, reran the script, and all I got was a black screen with the REL date of OpenCore at the bottom. Pressing enter does bring up the Apple logo, but nothing else happens.

Thinking the server crash might have corrupted something, I uninstalled everything, removed all the Macinabox-related files including the ones in usr/system, and did a clean reinstall of everything. This time, before booting up the VM, I set the RAM and CPU core allocations first, then ran the script to fix the XML, and still nothing.

 

The only thing the script said was something about the VM already existing, but I assume that's normal.

21 hours ago, kftX said:

@SpaceInvaderOne Okay so unsure if I'm correct but I thought I might as well report this:

The helper script to fix XMLs doesn't seem to be working? [...]

The only thing the script said was something about the VM already existing but I assume that's normal.

OK, I think what happened for you is this:

You uninstalled Macinabox and its appdata, getting rid of the container and related files.

However, the VM template was still there.

 

What the helper script does is first check whether a folder called autoinstall exists in the appdata. This folder contains the newly generated VM XML. If it is present, the script attempts to define the VM from the template in that folder, then deletes the autoinstall folder and exits.

So, since it said the VM was already present, it couldn't define the VM and just exited. It didn't replace your existing template, nor did it run the fixes on it.

The reason I think it stops and goes no further than the Apple logo is that your existing template was missing this at the bottom:

[screenshot]

Running the helper script a second time would then fix this XML, adding it back, since the autoinstall folder wouldn't be there any more.

 

I hope this makes sense
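The two-pass behaviour described above boils down to something like this (an illustrative sketch, not the actual script's code; the appdata path is the conventional one, and the messages and `run_helper` name are made up):

```shell
# Illustrative sketch of the helper script's two-pass logic described above.
run_helper() {
  appdata="$1"   # e.g. /mnt/user/appdata/macinabox
  if [ -d "$appdata/autoinstall" ]; then
    # Pass 1: a freshly generated template exists. The real script tries to
    # define the VM from it (which fails if a VM with that name is already
    # defined), then deletes the folder and exits without touching the
    # existing XML.
    rm -r "$appdata/autoinstall"
    echo "pass 1: defined VM from template"
  else
    # Pass 2: no autoinstall folder left, so apply the XML fixes
    # (qemu args, custom OVMF, etc.) to the already-defined VM.
    echo "pass 2: fixed existing VM XML"
  fi
}
```

Which is why a second run is needed when an old VM template is still defined: the first run only consumes the autoinstall folder, and only the next run touches the existing XML.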

 

 


Hi, is the VM power management working correctly with this kind of installation (i.e. with Macinabox)?

I installed Mojave, everything works fine, except the "Stop" (shutdown) of the VM.

If I try to "Stop" the VM from the webUI, or even issue the shell command "virsh shutdown", only the "Restart / Sleep / Cancel / Shut Down" dialog appears in the guest macOS, and no automatic shutdown ever happens; it just gets stuck there. So there's no graceful/automatic shutdown possible, as explained in this post:

Any help really appreciated! Thanks!

 

Edited by Audio01

@SpaceInvaderOne Thanks a lot for the container update and your guides, as usual! Appreciated!

@ghost82 Thanks for all of your good and explanatory input! You have helped a lot of rookies like me!


Just want to report a success for once and not an issue/error.

Steps I took when updating to the new Macinabox and using the new scripts on current VMs, with complete success:

  1. Remove Macinabox plus the scripts, and also rm -r /mnt/user/appdata/macinabox.
  2. Download Macinabox with the desired new settings; wait for the scripts to load.
  3. Copy the name of whatever macOS VM you want to update and input it into the new helper script.
  4. Run the script twice; now the XML of that VM is updated.
  5. Now I can make whatever changes I want in the VM (remove/attach PCI & USB devices, HDDs/SSDs) and then just run the script once or twice, without further manual XML editing.

All input is welcome if I have forgotten or misunderstood something in the update.

Goodnight!

On 2/1/2022 at 2:05 PM, ghost82 said:

It's the .fd file where nvram variables are stored, it's defined in the xml, between <nvram></nvram>, for example:

    <nvram>/opt/macos/OVMF_VARS.fd</nvram>

In your case, it's missing in that path, why I don't know.

 

If you still see methods 1 and 2, yes, remove the container and install the new one.

OK.

So I have removed Macinabox, the Macinabox appdata, deleted the docker template and so on, and reinstalled.

 

I no longer have methods 1 and 2, so that looks consistent.

 

But i am still getting the error

"operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd".

 

 

My XML has this in it:

 <nvram>/mnt/user/system/custom_ovmf/Macinabox_VARS-pure-efi.fd</nvram>

 

 

and the file referenced exists

root@Tower:/mnt/user/system/custom_ovmf# ls -larth
total 3.5M
-rwxrwxrwx 1 root   root   786 Dec 19  2020 readme.txt*
-rwxrwxrwx 1 root   root  3.5M Dec 19  2020 Macinabox_CODE-pure-efi.fd*
-rwxrwxrwx 1 root   root  6.1K Dec 19  2020 .DS_Store*
drwxrwxrwx 1 nobody users   48 May 14  2021 ../
drwxrwxrwx 1 nobody root    90 Jan 26 16:17 ./

 

Not sure where to go now...

 

 

Edited by Black Aura
28 minutes ago, Black Aura said:

So I have removed Macinabox, the Macinabox appdata, deleted the docker template and so on, and reinstalled. [...]

But I am still getting the error "operation failed: unable to find any master var store for loader: /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd". [...]

 

I deleted the custom_ovmf dir. I believe I am making progress. I will report back later. Thanks all.

[screenshot]

19 hours ago, Black Aura said:

 

I deleted the custom_ovmf dir.  I believe i am making progress. I will report back later. Thanks all

[screenshot]

 

I am guessing my Macinabox_VARS-pure-efi.fd was somehow corrupted/broken. Or maybe my entire Macinabox install was broken!

 

Either way, I am the proud owner of a clean install of Monterey.

 

It's just a shame I don't have a compatible GPU...

 

Thanks again

 

 

Edited by Black Aura
On 2/3/2022 at 9:34 AM, Black Aura said:

 

I deleted the custom_ovmf dir.  I believe i am making progress. I will report back later. Thanks all

[screenshot]

I'm having the same issue with Monterey. Where did you delete this dir? And does it fully work now?


Hello everyone, thanks for all the wonderful work. I have an error: I can't pass my GPU to the VM, and I always get the error below. If someone could help me, please :)

 

 

-boot strict=on \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x8,chassis=5,id=pci.5,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0xa,chassis=1,id=pci.1,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x3 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,multifunction=on,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/Macinabox Monterey/Monterey-opencore.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-hd,bus=ide.2,drive=libvirt-3-format,id=sata0-0-2,bootindex=1,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/Monterey-install.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-hd,bus=ide.3,drive=libvirt-2-format,id=sata0-0-3,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/Macinabox Monterey/macos_disk.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-1-format,id=virtio-disk4,write-cache=on \
-fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/data/ \
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=data,bus=pci.1,addr=0x0 \
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=37 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:6c:8f:4b,bus=pci.3,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=38,server=on,wait=off \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-audiodev id=audio1,driver=none \
-device vfio-pci,host=0000:04:00.0,id=hostdev0,bus=pci.5,addr=0x0,romfile=/mnt/user/isos/vbios/AMDRX580.rom \
-device vfio-pci,host=0000:04:00.1,id=hostdev1,bus=pci.6,addr=0x0 \
-device usb-host,hostdevice=/dev/bus/usb/001/002,id=hostdev2,bus=usb.0,port=2 \
-device usb-host,hostdevice=/dev/bus/usb/001/003,id=hostdev3,bus=usb.0,port=3 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
2022-02-06T17:03:35.629628Z qemu-system-x86_64: vfio_err_notifier_handler(0000:04:00.1) Unrecoverable error detected. Please collect any data possible and then kill the guest
2022-02-06T17:03:35.629714Z qemu-system-x86_64: vfio_err_notifier_handler(0000:04:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest

 

Edited by ptichalouf
full boot log from Unraid

Having problems with the helper script. I have tried a couple of times, starting fresh, but still have the issue.

 

I tried changing the VM name in my script as per the docker name, but I still get the error below. I am sure it is something stupid :) Any help appreciated.

Thanks

 

[screenshot]


Same error message as above. 

Added custom qemu:args for macOS
topolgy line left as is
custom ovmf added
error: Failed to define domain from /tmp/Macinabox BigSurfixed.xml
error: (domain_definition):3: Extra content at the end of the document

 

edit: never mind... deleted the container and /mnt/user/appdata/macinabox; problem resolved.

 

edit2: made an edit to the VM and then reran the script; the error came back.

Edited by ziggie216
7 hours ago, ptichalouf said:

Hello everyone, thanks for all the wonderful work. I have an error: I can't pass my GPU to the VM, and I always get the error below. If someone could help me, please :)

2022-02-06T17:03:35.629628Z qemu-system-x86_64: vfio_err_notifier_handler(0000:04:00.1) Unrecoverable error detected. Please collect any data possible and then kill the guest
2022-02-06T17:03:35.629714Z qemu-system-x86_64: vfio_err_notifier_handler(0000:04:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest

 

So it looks like QEMU is having an issue accessing your card (assuming it is on addresses 04:00.0 and 04:00.1).

 

When trying to debug an issue with passing a video card to a VM, there are many things to consider/try.

 

1) Ensure the card is in its own IOMMU group and not sharing a group with any other hardware.

Also ensure that it is stubbed so that the host OS doesn't take control of the card.

[screenshot]
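For reference, stubbing by hand means adding the card's vendor:device IDs to the vfio-pci.ids kernel option in the syslinux config (Main -> Flash). The IDs below are for an RX 580 and its HDMI audio function and are only an example; substitute the pairs shown on your own System Devices page:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1002:67df,1002:aaf0 initrd=/bzroot
```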

 

2) Ensure your config is correct. Here is an example from my config. My card is on bus 0x12 and I pass it to bus 0x04.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x12' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/Yeston-RX550-4G-LP-D5.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x12' slot='0x00' function='0x1'/>
      </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>

 

3) Try passing the VBIOS

The go-to place for VBIOS files is TechPowerUp, but most people recommend that you dump your own BIOS. Spaceinvader has written some software and made a video that is easy to follow.


4) If it is the primary GPU in the system, you will need to prevent the Linux OS from using the virtual frame buffer to display the boot process. Under Main -> Flash you can add video=efifb:off to the Unraid boot options. More details can be found in this blog. [screenshot]

 

If you need more help than this, you will need to provide us with more information:

- The motherboard, CPU and video card(s) details.

- Also the relevant logs from the Unraid OS and VM.

 

Also be aware that weird things can happen when you upgrade Unraid (i.e. upgrade the host OS kernel). I ran for years without needing to do anything, but then after one upgrade I suddenly needed to stub the card. Then another upgrade and I had to pass a VBIOS. Then, on the last upgrade to 6.10.0-rc2, I needed to add video=efifb:off.

Good Luck!

Edited by cat2devnull