[Support] SpaceinvaderOne - Macinabox



20 hours ago, ghost82 said:

There should be a way, but it's not so easy.

As described by Dortania, you should spoof the device-id of your R7 370: the R7 370 is listed as compatible with the latest macOS but needs a fake ID and maybe the -raddvi boot arg.

https://dortania.github.io/GPU-Buyers-Guide/modern-gpus/amd-gpu.html#r7-r9

 

Basically, you need to get the ACPI address of your GPU, modify the source code of the SSDT template (.dsl file) available at Dortania, compile it with MaciASL, iasl or anything else able to compile .dsl files (--> .aml file), and inject the compiled SSDT with OpenCore.

https://dortania.github.io/Getting-Started-With-ACPI/Universal/spoof.html
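
For reference, a minimal sketch of those steps from a Linux shell, assuming iasl is installed and the EFI partition of the OpenCore image is mounted at /mnt/oc-efi (the mount point is just an example):

iasl SSDT-GPU-SPOOF.dsl                          # compiles the source to SSDT-GPU-SPOOF.aml
cp SSDT-GPU-SPOOF.aml /mnt/oc-efi/EFI/OC/ACPI/   # place it next to the other SSDTs
# then add an entry for SSDT-GPU-SPOOF.aml under ACPI --> Add in config.plist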

 

I totally messed up my config.plist.

The VM is not booting anymore. I have a copy of my old config.plist, but I don't know how to mount the proper image, or where it is...

I don't get into the boot menu, because I configured OpenCore to boot directly into the image...

 

Any help appreciated

3 hours ago, DrMucki said:

Any help appreciated

I hope you didn't overwrite the EFI of macOS with that of OpenCore.

If you didn't overwrite the EFI and you kept the OpenCore img separate, simply download the original Macinabox OpenCore image:
https://github.com/SpaceinvaderOne/Macinabox/raw/master/bootloader/OpenCore.img.zip

Extract it and overwrite yours.

This will be a fresh img file for the bootloader.

 

Otherwise you need to mount your img vdisk, mount the EFI partition inside that img, mount the downloaded OpenCore.img, and copy the files from OpenCore.img over to the EFI.

 

Or, if you are passing through a physical disk, mount the EFI partition of that disk, mount the downloaded OpenCore.img, and copy the files from OpenCore.img over to the EFI.
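
A rough sketch of that copy operation from the Unraid terminal, assuming raw .img files and example paths (adjust all of them to your setup; the EFI is usually the first partition):

mkdir -p /mnt/oc /mnt/vm-efi
LOOP_OC=$(losetup -f --show -P /mnt/user/isos/OpenCore.img)        # downloaded image
LOOP_VM=$(losetup -f --show -P /mnt/user/domains/macos_disk.img)   # your vdisk
mount ${LOOP_OC}p1 /mnt/oc
mount ${LOOP_VM}p1 /mnt/vm-efi
cp -r /mnt/oc/EFI /mnt/vm-efi/      # overwrite the broken bootloader files
umount /mnt/oc /mnt/vm-efi
losetup -d $LOOP_OC $LOOP_VM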

 

Quote

I don't get into the boot menu, because I configured OpenCore to boot directly into the image

OpenCanopy is only hidden; if you press the Esc key it will show, provided the system is still able to reach the boot picker.
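
If the picker was hidden by setting ShowPicker to false, one possible way to turn it back on permanently, from a macOS terminal with the EFI mounted (the /Volumes/EFI path is an assumption), is:

sudo /usr/libexec/PlistBuddy -c "Set :Misc:Boot:ShowPicker true" /Volumes/EFI/EFI/OC/config.plist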

Edited by ghost82
4 hours ago, ghost82 said:

Otherwise you need to mount your img vdisk, mount the EFI partition inside that img, mount the downloaded OpenCore.img, and copy the files from OpenCore.img over to the EFI.

 

Or, if you are passing through a physical disk, mount the EFI partition of that disk, mount the downloaded OpenCore.img, and copy the files from OpenCore.img over to the EFI.

 

OpenCanopy is only hidden; if you press the Esc key it will show, provided the system is still able to reach the boot picker.

 

 

I just started from scratch and the VM is working again. I also managed to mount the disk, but that ended with a frozen Apple logo... so starting from scratch only cost about an hour...

 

But I did not manage to get it to start with the AMD card passed through.

Here is what I did to try to get it to work:

1. Found the device ID of my GPU: Curacao PRO [Radeon R7 370 / R9 270/370 OEM] [6811]

2. Discovered the firmware path:

cat /sys/bus/pci/devices/0000:01:00.0/firmware_node/path

\_SB_.PCI0.GPP0.X161

3. Edited the provided SSDT-GPU-SPOOF.dsl file

 

a) Changed the firmware path, for which I edited both the "External" declaration and the "Scope"

b) Changed the spoofed "device-id" to 0x11, 0x68 and renamed the model

 

(Did I forget anything here or make a mistake? I attached the files at the end...)

4. Downloaded MaciASL, double-clicked my .dsl file and saved it as an .aml file

5. Opened OpenCore Configurator, mounted the EFI partition and opened it. Looked for the config.plist and double-clicked it to open it with OC

6. On the first page I saw several .aml files, so I dragged and dropped my .aml file there, saved the config.plist and unmounted the partition (I am not sure about this step, so I just did it as described; maybe it is wrong...)

7. Did a shutdown and started the VM again... I did not change anything in the VM, because I wanted to see if it still comes up. It does, and it started without any problems. TeamViewer starts up automatically and I was able to get in via TeamViewer. OK, shut down...

8. Edited the VM: changed graphics to AMD Radeon, added the VBIOS, saved changes, ran the helper script and started the VM again. The VM is starting, but there is no way to get in via TeamViewer.

9. Changing back to VNC (after fixing the known bug in the XML file...) brings the machine back... but with no AMD GPU.

 

What am I doing wrong?

 

Thank you for your help!

 

 

 

SSDT-GPU-SPOOF.aml SSDT-GPU-SPOOF.dsl config.plist config.original.plist


Hi, I'm not sure it will work; let's just think about what may be wrong.

First of all, make a backup of everything if you have important data, so you don't have to spend time restoring it all if something goes wrong.

 

10 hours ago, DrMucki said:

1. Found the device ID of my GPU: Curacao PRO [Radeon R7 370 / R9 270/370 OEM] [6811]

OK

10 hours ago, DrMucki said:

2. Discovered the firmware path:

cat /sys/bus/pci/devices/0000:01:00.0/firmware_node/path

\_SB_.PCI0.GPP0.X161

OK, how did you find it? Linux or Windows? Here are my first doubts about the ACPI path: I'm not sure it's right, since we are using qemu but applying a method meant for a real hackintosh; but let's take it as it is for now.

10 hours ago, DrMucki said:

3. Edited the provided SSDT-GPU-SPOOF.dsl file

 

a) Changed the firmware path, for which I edited both the "External" declaration and the "Scope"

b) Changed the spoofed "device-id" to 0x11, 0x68 and renamed the model

Here I'm sure there's an error: your actual device-id is 0x6811, but you need to spoof it to a different one, otherwise injecting this SSDT will have the same effect as not injecting it; change it to 0x6810 --> the "X" equivalent of your card (i.e. bytes 0x10, 0x68 in the buffer; attached files).

10 hours ago, DrMucki said:

4. Downloaded MaciASL, double-clicked my .dsl file and saved it as an .aml file

OK

10 hours ago, DrMucki said:

5. Opened OpenCore Configurator, mounted the EFI partition and opened it. Looked for the config.plist and double-clicked it to open it with OC

OK

10 hours ago, DrMucki said:

6. On the first page I saw several .aml files, so I dragged and dropped my .aml file there, saved the config.plist and unmounted the partition (I am not sure about this step, so I just did it as described; maybe it is wrong...)

OK, I'm not a fan of OpenCore Configurator, but the config.plist is correctly modified:
 

			<dict>
				<key>Comment</key>
				<string></string>
				<key>Enabled</key>
				<true/>
				<key>Path</key>
				<string>SSDT-GPU-SPOOF.aml</string>
			</dict>

Just to make sure: mount the EFI, go to \EFI\OC\ACPI and check that SSDT-GPU-SPOOF.aml is in that folder with all the other .aml files.
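
For example, from a terminal inside the macOS VM (a sketch; disk0s1 is only the usual identifier of an EFI partition, check diskutil list first):

diskutil list                  # find the identifier of the EFI partition
sudo diskutil mount disk0s1    # usually mounts it at /Volumes/EFI
ls /Volumes/EFI/EFI/OC/ACPI    # SSDT-GPU-SPOOF.aml should be listed here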

10 hours ago, DrMucki said:

7. Did a shutdown and started the VM again... I did not change anything in the VM, because I wanted to see if it still comes up. It does, and it started without any problems. TeamViewer starts up automatically and I was able to get in via TeamViewer. OK, shut down...

OK

10 hours ago, DrMucki said:

8. Edited the VM: changed graphics to AMD Radeon, added the VBIOS, saved changes, ran the helper script and started the VM again. The VM is starting, but there is no way to get in via TeamViewer.

9. Changing back to VNC (after fixing the known bug in the XML file...) brings the machine back... but with no AMD GPU.

Let's hope it's only due to the wrong spoofed device-id...

SSDT-GPU-SPOOF.dsl SSDT-GPU-SPOOF.aml

18 minutes ago, ghost82 said:

Here are my first doubts about the ACPI path: I'm not sure it's right, since we are using qemu but applying a method meant for a real hackintosh; but let's take it as it is for now.

Regarding this, when I pass through my GPU and extract the DSDT (qemu) once inside macOS, my path should be \_SB_.PCI0.GPE0

[dsdt.png: screenshot of the extracted DSDT]

 

So let's also try with this path, but I'm not sure the path will be the same...

 

SSDT-GPU-SPOOF.aml SSDT-GPU-SPOOF.dsl

Edited by ghost82

Otherwise, try to inject the spoofed device-id property with the config.plist only, without the SSDT.

 

You could add the following snippet of code under DeviceProperties --> Add in the config.plist:

		<dict>
			<key>PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0)</key>
			<dict>
				<key>device-id</key>
				<data>10680000</data>
			</dict>
		</dict>

So that it becomes:

	<key>DeviceProperties</key>
	<dict>
		<key>Add</key>
		<dict>
			<key>PciRoot(0x1)/Pci(0x1F,0x0)</key>
			<dict>
				<key>compatible</key>
				<string>pci8086,2916</string>
				<key>device-id</key>
				<data>FikA</data>
				<key>name</key>
				<string>pci8086,2916</string>
			</dict>
			<key>PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0)</key>
			<dict>
				<key>device-id</key>
				<data>10680000</data>
			</dict>
		</dict>
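
A hedged side note on the <data> values: plist editors such as OpenCore Configurator or ProperTree show the bytes as hex (10680000, i.e. device-id 0x6810 written least-significant byte first), but in the raw XML the <data> element has to contain the base64 of those four bytes (compare the existing FikA entry). A quick way to get it from a shell:

printf '\x10\x68\x00\x00' | base64    # prints EGgAAA==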

Here comes another problem... as you can see, I wrote the GPU address as PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0),

but this is MY GPU address, because I found it in ioreg with the GPU attached... But how can you know your GPU address if the GPU is not attached?

By guessing?

All we know is that the GPU will be under a bridge in qemu: PciRoot(0x1) is OK; what may change is the second address, Pci(0x1,0x4), which corresponds to pci-bridge@1,4.

I would try, in order (see the sketch at the end of this post for a way to narrow the guess):

Pci(0x1,0x4)

Pci(0x1,0x1)

Pci(0x1,0x2)

Pci(0x1,0x3)

Pci(0x1,0x5)

 

Not sure about the last part, Pci(0x0,0x0), whether it can change or not... probably not.

 

Sorry, I know there are a lot of variables to test...
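
One possible way to narrow the guess from the Unraid terminal instead of brute-forcing, by reading the guest-side PCI topology that libvirt assigns (a sketch; "Macinabox BigSur" is only an example VM name):

virsh dumpxml "Macinabox BigSur" | grep -A6 "<hostdev"
# note the guest-side <address type='pci' ... bus='0xNN' slot='0x00'/> of the GPU,
# then look at the pcie-root-port controller with index NN:
virsh dumpxml "Macinabox BigSur" | grep -B1 -A4 "pcie-root-port"
# that controller's own slot/function on bus 0x00 should be the middle Pci(x,y)
# element of the OpenCore device path, with the GPU itself as the final Pci(0x0,0x0).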

Edited by ghost82

I just tried it.

55 minutes ago, ghost82 said:

Hi, I'm not sure it will work; let's just think about what may be wrong.

First of all, make a backup of everything if you have important data, so you don't have to spend time restoring it all if something goes wrong.

 

OK

OK, how did you find it? Linux or Windows? Here are my first doubts about the ACPI path: I'm not sure it's right, since we are using qemu but applying a method meant for a real hackintosh; but let's take it as it is for now.

I did it on the Unraid server using the terminal...

 

55 minutes ago, ghost82 said:

Here I'm sure there's an error: your actual device-id is 0x6811, but you need to spoof it to a different one, otherwise injecting this SSDT will have the same effect as not injecting it; change it to 0x6810 --> the "X" equivalent of your card (i.e. bytes 0x10, 0x68 in the buffer; attached files).

OK

OK

OK, I'm not a fan of OpenCore Configurator, but the config.plist is correctly modified:
 


			<dict>
				<key>Comment</key>
				<string></string>
				<key>Enabled</key>
				<true/>
				<key>Path</key>
				<string>SSDT-GPU-SPOOF.aml</string>
			</dict>

Just to make sure: mount the EFI, go to \EFI\OC\ACPI and check that SSDT-GPU-SPOOF.aml is in that folder with all the other .aml files.

OK

Let's hope it's only due to the wrong spoofed device-id...

SSDT-GPU-SPOOF.dsl SSDT-GPU-SPOOF.aml

I just copied your .aml file to the EFI folder, unmounted and rebooted...

Am I doing this correctly? I always switch back to VNC, boot the VM, make the changes in OC, power down the VM, edit the VM to passthrough with VBIOS, run the helper script and start the machine... but then the only way to check is TeamViewer... and it is not coming up again, so with your 1st file no success.

Where can I check whether the path is correct? Can I do this within the VM? (I don't think so, because I'm not getting into the VM with the card passed through.)

Meanwhile I will try the other file you provided... Thanks for your help so far...

3 minutes ago, DrMucki said:

Where can I check whether the path is correct? Can I do this within the VM? (I don't think so, because I'm not getting into the VM with the card passed through.)

I'm not sure, this is the main issue!

The Hamlet-like question is: how can you check the device path if the device is not attached? :D

This is why in my last reply I wrote "by guessing"...

If I were you, I would try the DeviceProperty Add:
https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=932679

 

without messing with the SSDT

4 minutes ago, DrMucki said:

Am I doing this correctly? I always switch back to VNC, boot the VM, make the changes in OC, power down the VM, edit the VM to passthrough with VBIOS, run the helper script and start the machine... but then the only way to check is TeamViewer... and it is not coming up again, so with your 1st file no success.

Yes, this is correct; if it doesn't come up, something is wrong with the SSDT or with the path.

15 minutes ago, ghost82 said:

I'm not sure, this is the main issue!

The Hamlet-like question is: how can you check the device path if the device is not attached? :D

This is why in my last reply I wrote "by guessing"...

If I were you, I would try the DeviceProperty Add:
https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=932679

 

without messing with the SSDT

Thank you. I will give it a try... please stand by 🙂 this will take a while...

Doing this while working from home... but now, breakfast...

1 minute ago, DrMucki said:

Thank you. I will give it a try... please stand by 🙂 this will take a while...

Doing this while working from home... but now, breakfast...

No problem, take your time and check the code carefully: please note that I edited the DeviceProperties code because of an error, so refresh the page and check.

1 hour ago, ghost82 said:

Otherwise, try to inject the spoofed device-id property with the config.plist only, without the SSDT.

 

You could add the following snippet of code under DeviceProperties --> Add in the config.plist:


		<dict>
			<key>PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0)</key>
			<dict>
				<key>device-id</key>
				<data>10680000</data>
			</dict>
		</dict>

So that it becomes:


	<key>DeviceProperties</key>
	<dict>
		<key>Add</key>
		<dict>
			<key>PciRoot(0x1)/Pci(0x1F,0x0)</key>
			<dict>
				<key>compatible</key>
				<string>pci8086,2916</string>
				<key>device-id</key>
				<data>FikA</data>
				<key>name</key>
				<string>pci8086,2916</string>
			</dict>
			<key>PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0)</key>
			<dict>
				<key>device-id</key>
				<data>10680000</data>
			</dict>
		</dict>

Here comes another problem... as you can see, I wrote the GPU address as PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0),

but this is MY GPU address, because I found it in ioreg with the GPU attached... But how can you know your GPU address if the GPU is not attached?

By guessing?

All we know is that the GPU will be under a bridge in qemu: PciRoot(0x1) is OK; what may change is the second address, Pci(0x1,0x4), which corresponds to pci-bridge@1,4.

I would try in order:

Pci(0x1,0x4)

Pci(0x1,0x1)

Pci(0x1,0x2)

Pci(0x1,0x3)

Pci(0x1,0x5)

 

Not sure about the last part, Pci(0x0,0x0), whether it can change or not... probably not.

 

Sorry, I know there are a lot of variables to test...

I tried the other .aml... with no success; I will go on with the other method without the SSDT...

Just to get it right... I will boot the VM, restore my old config, add the code snippet to the config.plist, add the GPU and restart...

I think it could be a good idea to mount the disk in the Unraid terminal, modify the config.plist there and try it that way, so I do not have to restart the VM in VNC to make the changes there...

 

<key>PciRoot(0x1)/Pci(0x1,0x4)/Pci(0x0,0x0)</key>: the Pci(0x1,0x4) part is the one to change... right?

 

 

 

1 hour ago, ghost82 said:

No problem, take your time and check the code carefully: please note that I edited the DeviceProperties code because of an error, so refresh the page and check.

Before I start with it, I have a further question.

 

When changing from VNC to passthrough and back to VNC, the following part of the XML ends up wrong after the script:

 

<video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x01' function='0x0'/>
    </video>

 

It leads to the "Guest has not initialized the display (yet)" error.

 

Changing that address to bus='0x00' slot='0x02' fixes the problem...

 

When swapping to the GPU, the XML changes like this:

 <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/disk1/isos/vbios/ATI.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>

 

I am just wondering if this error also occurs when changing to the GPU and I just cannot see it, because I do not have VNC... and it gets stuck somewhere there...

What do you think?

1 hour ago, ghost82 said:

It should only apply to the VNC display.

OK. So the only way to check whether the machine starts is TeamViewer coming up, right?

I edited the config.plist, but I think I misunderstood you somewhere.

First I checked it with VNC without the GPU; the machine was starting, but when opening the config.plist with OC it complained:

 

"OpenCore Configurator was unable read because it isn’t in the correct format.

Please fix the following code before saving: Found non-key inside <dict> at line 212"

 

I did not manage to fix it... maybe I forgot something or added it wrongly. Would you mind having a look at my edited config.plist, please?

I thought I had to check my config before adding the GPU...

 

 

config.plist

2 hours ago, ghost82 said:

It should be fixed now; you had messed up the DeviceProperties section.

 

 

config.plist

Thank you, with this file OC shows no error (of course :-)). I went back to the GPU and tested

(0x1, 0x4) and the numbers from 0-5 in the last position...

Now going on to check 0x2, 0x0...

Which numbers are logical to test? I mean in the first and the second positions...

0xa, 0xb (where a and b lie in which range?), to put it mathematically...

50 minutes ago, DrMucki said:

Now going on to check 0x2, 0x0...

If that didn't work I would not have too much hope... I'm afraid I'm out of ideas :(

If you can replace the OpenCore files with those of a debug version and obtain a full log of a GPU failure, maybe we will have some more info to evaluate.

Edited by ghost82
29 minutes ago, ghost82 said:

If that didn't work I would not have too much hope... I'm afraid I'm out of ideas :(

If you can replace the OpenCore files with those of a debug version and obtain a full log of a GPU failure, maybe we will have some more info to evaluate.

It's OK... I have taken many hours of your time... and mine... but if it does not work I will give it up.

I am trying everything from 0x0,0x0 to 0x5,0x5, all combinations, and then I will stop.

I always have to wait 60 seconds to give the VM a chance to come up...

With VNC I get a response after 30 seconds...

 

What do you mean by "if you can replace the OpenCore files with those of a debug version and obtain a full log of a GPU failure, maybe we will have some more info to evaluate"? I need a little help with it...

Google was not so helpful.

Otherwise, maybe you have a suggestion for a GPU for max 150 euros which will work out of the box 🙂

I am sorry for not being as smart as you with these things. BTW, where are you from? Seems to be the same time zone 🙂 as here in Germany.

Edited by DrMucki
12 minutes ago, DrMucki said:

Seems to be the same time zone 🙂 as here in Germany.

Italy, yes same time zone!

 

OpenCore is compiled in two versions: debug and release.

Macinabox includes the release version.

The debug version should be used when issues appear, because by changing the key "Target" to 83 in the config.plist, you will be able to log to a file everything the bootloader is doing, saved in the root of the EFI partition.

The release version is able to log too, but with less information saved.

I don't know which version Macinabox includes, maybe 0.6.4: if you go to github.com/acidanthera/OpenCorePkg and click on "releases" on the right of the page, you can download the precompiled "debug" version and replace the files in the EFI; then changing Target to 83 will let you save to a log file.
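
Once the debug files are in place, a sketch of the config change from a macOS terminal with the EFI mounted (the /Volumes/EFI path is an assumption):

sudo /usr/libexec/PlistBuddy -c "Set :Misc:Debug:Target 83" /Volumes/EFI/EFI/OC/config.plist
# after the next boot attempt, look for an opencore-<date>.txt file in the root of the EFI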

16 hours ago, ghost82 said:

OpenCore is compiled in two versions: debug and release.

Macinabox includes the release version.

The debug version should be used when issues appear, because by changing the key "Target" to 83 in the config.plist, you will be able to log to a file everything the bootloader is doing, saved in the root of the EFI partition.

The release version is able to log too, but with less information saved.

I don't know which version Macinabox includes, maybe 0.6.4: if you go to github.com/acidanthera/OpenCorePkg and click on "releases" on the right of the page, you can download the precompiled "debug" version and replace the files in the EFI; then changing Target to 83 will let you save to a log file.

OK, did that and installed the debug version (replaced the files mentioned here https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/debug.html and followed the instructions).

Finally I was able to create some log files.

For comparison I created the following files:

1st (VNC): I started up the VM with the standard config.plist without any changes (except the ones for the logging)

2nd (VNC with new config): I changed the config.plist by adding the part in DeviceProperties and booted up with this in VNC mode

3rd (GPU with new config): I modified the VM (added GPU and VBIOS), ran the helper script and started the VM.

4th (VNC spoofed GPU): I modified the VM back to VNC and used the config.plist with the spoofed-GPU aml part, and deleted the part which was added in 2)

5th (GPU spoofed): Added the GPU with VBIOS and started again...

 

I looked through them, and the 3rd and the 5th ones are complaining about a missing compatible GOP.

Googling around for the missing GOP and the R7 370 didn't get me any hints.

opencore-2021-01-09-101113(VNC).txt opencore-2021-01-09-103445(VNC with new config).txt opencore-2021-01-09-103843(GPU with new config).txt opencore-2021-01-09-121610(VNC spoofed GPU).txt opencore-2021-01-09-124847(GPU spoofed).txt

3 hours ago, DrMucki said:

Googling around for the missing GOP and the R7 370 didn't get me any hints.

Mmm... OK, thanks for the logs.

First: can you check whether the R7 370 is UEFI compatible?

In other words, if you force the BIOS to boot in UEFI mode instead of legacy BIOS, does the R7 work?

 

If your R7 supports UEFI mode, then make sure to boot Unraid in UEFI mode (go into your Unraid settings to allow UEFI boot).
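
One way to check that from the Unraid side is to look for an EFI (GOP) image inside the dumped vbios, for example with the third-party rom-parser tool (a sketch; assumes you can build it on your system and reuses the vbios path from your XML):

git clone https://github.com/awilliam/rom-parser
cd rom-parser && make
./rom-parser /mnt/disk1/isos/vbios/ATI.rom
# a "PCIR: type 3 (EFI)" entry means the ROM carries a UEFI GOP driver;
# only "type 0 (x86 PCI-AT)" means it is a legacy-only vbios.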

Edited by ghost82
25 minutes ago, ghost82 said:

Mmm... OK, thanks for the logs.

First: can you check whether the R7 370 is UEFI compatible?

In other words, if you force the BIOS to boot in UEFI mode instead of legacy BIOS, does the R7 work?

 

If your R7 supports UEFI mode, then make sure to boot Unraid in UEFI mode (go into your Unraid settings to allow UEFI boot).

I did not think about UEFI vs. legacy before... so everything was set to UEFI when setting up. Unraid is already booting in UEFI mode, and in a Windows VM the GPU passthrough works. Should I switch to legacy, and how would I do that? In the morning I had a quick look at the BIOS of my motherboard (ASUS), but I did not find anything to change there...

21 minutes ago, DrMucki said:

so everything was set to UEFI when setting up. Unraid is already booting in UEFI mode, and in a Windows VM the GPU passthrough works.

So, sorry, just to understand: the BIOS is set to boot UEFI only, Unraid is set to boot in UEFI, and your R7 just works in UEFI mode, did I understand correctly?

If this is the case it's strange that you got the GOP error, because the GOP is provided by the card's UEFI driver.

No, do not change to legacy BIOS mode.

Edited by ghost82
