cat2devnull

Everything posted by cat2devnull

  1. Again, unfortunately there isn't enough here for us to go on. It's complaining about the OS.dmg.root_hash, which makes me think there is an issue with the image that was downloaded...
     Given that you are working with a VM, most of the normal hardware compatibility issues should not exist because it's all virtualized. Hence there is most likely something about your VM config that is incorrect and causing an issue. Start again from scratch with a fresh docker and OS download. Don't try to pass any hardware through at first; just get the system booting and then worry about USB/GPU/etc.
     Apple have a tendency to keep changing the product IDs of their OS images, and of course new versions come out all the time. It is a lot of work to keep the Macinabox code up to date, and probably more than Spaceinvader One was counting on. @ghost82 has been hard at work fixing a lot of these issues;
     https://github.com/SpaceinvaderOne/Macinabox/commit/2aba67bc2738d3ecc7a156a1a9b897665d6982ff
     https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/page/84/?tab=comments#comment-1010581
     This was for OpenCore 0.7.0 and Mac OS 11.4. I'm not sure of the state of this work now that OpenCore 0.7.2 and Mac OS 11.5 are out.
     Take a look at the Dortania debug guide. You may need to install the debug version of OpenCore to get more information.
     https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/extended/kernel-issues.html#stuck-on-eb-log-exitbs-start
     You can also just try building your own setup from scratch. It isn't that bad if you don't mind investing some time to tinker.
     https://dortania.github.io/OpenCore-Install-Guide/
     It's pretty rewarding.
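     If you go the debug route, here's a minimal sketch of pointing OpenCore's logger at the screen and a log file, done from inside macOS with plutil. The EFI mount path is assumed (same as in my later config.plist posts), and Target=67 is the value the Dortania guide uses:
         # Mount the EFI partition first; the disk identifier varies (check diskutil list)
         sudo diskutil mount disk0s1
         # 67 = 0x01 (enable logging) + 0x02 (console output) + 0x40 (log to file)
         plutil -replace Misc.Debug.Target -integer 67 /Volumes/EFI/EFI/OC/config.plist
         # Verify the change
         plutil -extract Misc.Debug.Target xml1 -o - /Volumes/EFI/EFI/OC/config.plist
     With the debug OpenCore binaries installed you should then find an opencore-*.txt log on the EFI partition after a boot attempt.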
  2. Hi @ritty, Not sure exactly what is going on here because you haven't given us enough to work with... I can't say for sure if this is a show-stopping error or just a warning, so I don't know if the error is with the VM or the OS.
     What is your system, AMD or Intel, and what video cards do you have installed?
     What type of GT640 do you have? Fermi GF108/GF116 with support up to High Sierra, or Kepler GK107/GK208 with support up to Big Sur?
     Have you tried dumping the GPU ROM and loading it in your VM XML? (There's a sketch of the manual method below.)
     https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/page/83/?tab=comments#comment-1007917
     Have you tried getting the card working in a Win10 VM?
     Are you passing both the video and audio component of the card using multifunction? Point #3 on this post;
     https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/page/82/?tab=comments#comment-1000526
     Have you tried stubbing the card under System Devices?
     Passing through Nvidia cards that are your primary GPU can be a bit tricky. Not sure if you have seen this post, along with the SpaceInvaderOne video.
     https://forums.unraid.net/topic/71371-resolved-primary-gpu-passthrough/
     Just some ideas to get you started...
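     For reference, dumping the ROM by hand from the Unraid terminal is roughly this. It's a sketch of the generic sysfs method that Spaceinvader One's script automates; the PCI address and output path are examples, and Nvidia ROMs dumped while the card is the primary GPU usually need their header stripped:
         cd /sys/bus/pci/devices/0000:0b:00.0    # your GT640's address from System Devices
         echo 1 > rom                            # unlock the ROM for reading
         cat rom > /mnt/user/isos/vbios/gt640.rom
         echo 0 > rom                            # lock it again
     The resulting file is what the <rom file='...'/> line inside the <hostdev> block of your VM XML points at.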
  3. It was discussed a few pages back. See here; https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/page/84/?tab=comments#comment-1011949
  4. So you need to look at the permissions to understand why you can't edit the file. Also check that the filesystem is mounted as read/write.
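     For example (the path here is hypothetical; substitute the file you're actually trying to edit):
         ls -la /Volumes/EFI/EFI/OC/config.plist   # check the owner and permission bits
         mount | grep EFI                          # "read-only" in the options means you need to remount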
  5. Actually there are 20 items in that plist file, as you can see in the "value" column. You just need to click on the "> Add" to get it to expand. But I am not sure why you are editing the pcidevices.plist file. You need to add the built-in settings to your config.plist on your EFI disk/partition. If you look at my earlier post, you can see I am editing /Volumes/EFI/EFI/OC/config.plist
     Also, I noticed that you are still using the VMX ethernet driver. This should have been updated to the Intel e1000 driver when you ran the Macinabox helper script. I believe that you need to be using the Intel virtual NIC to avoid issues with iServices. You can check your virtual machine's XML configuration in Unraid and make sure the NIC model is set to e1000-82545em as per my example below. You may also just be able to run the helper script again to fix it. Just make sure you update the FIRSTINSTALL variable to "no". Spaceinvader One talks about this here;
     https://www.youtube.com/watch?v=7OunFLG84Qs&t=840s
     <interface type='bridge'>
       <mac address='60:03:08:aa:bb:cc'/>
       <source bridge='br0'/>
       <target dev='vnet0'/>
       <model type='e1000-82545em'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </interface>
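     You can confirm what the VM is actually using from the Unraid terminal (the VM name below is an example; use yours) and look for the <model type='...'/> line:
         virsh dumpxml "Macinabox BigSur" | grep -A 6 "<interface"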
  6. Actually it does talk about how to fix the en0 not having a check mark, but I agree that it's not well written.
     https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#fixing-en0
     "Here under Network Interfaces (network card icon), look for en0 under BSD and check whether the device has a check mark under Builtin. If there is a check mark, skip to Fixing ROM section otherwise continue reading."
     After running the two commands to delete the .plist files, it actually gives instructions about what to do if you don't have an en0 interface, so you can ignore the section about NullEthernet.kext and ssdt-rmne.aml since that isn't your issue. Jump to "Now head under the PCI tab of Hackintool and export your PCI DeviceProperties, this will create a pcidevices.plist on your desktop..."
     You just need to add the en0 interface PCI root to your config.plist to tell the OS that it is built-in. In my case I had to define the built-in option for the WiFi, Bluetooth and the NVMe SSD.
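     If you'd rather do that from the command line than a plist editor, here's a hedged sketch using plutil from macOS. The EFI mount path is the same as in my earlier post, the PciRoot path is illustrative (copy the real one for your en0 out of the exported pcidevices.plist), and the one-byte 01 Data value (base64 AQ==) is what the Dortania guide uses:
         OC=/Volumes/EFI/EFI/OC/config.plist
         # Create the device entry under DeviceProperties > Add (path is illustrative)
         plutil -insert 'DeviceProperties.Add.PciRoot(0x0)/Pci(0x1,0x0)' -xml \
           '<dict><key>built-in</key><data>AQ==</data></dict>' "$OC"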
  7. Everything you need to fix it is here;
     https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#using-gensmbios
     1) Set SystemProductName to iMacPro1,1 unless you really know what you are doing.
     2) Generate a unique serial number, UUID and board number using GenSMBIOS (see the sketch after this list).
     3) Check the serial number with the Apple tool. It is fine to use an invalid serial number if your Apple ID is in good standing. I've been doing this for years on multiple VMs and never had an issue.
     4) Set the MAC address of your VM's virtual network card to an Apple Inc address.
     5) Set the ROM address to be the same.
     6) Make sure your ethernet interface is "en0" and marked as internal.
     Then you should be good to go.
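     For step 2, a minimal GenSMBIOS run looks something like this (assumes git and python3 on whatever machine you run it from):
         git clone https://github.com/corpnewt/GenSMBIOS
         cd GenSMBIOS
         python3 GenSMBIOS.py
         # In the menu: option 2 to select your config.plist, then option 3 and
         # type "iMacPro1,1" -- it generates the serial, board serial and SmUUID
         # and writes them into the config for you.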
  8. Yep, of course... Feel like a bit of an idiot.
     My understanding is that there is nothing special about this file between Macinabox and a generic install. It just holds the NVRAM vars. As such, all @Kodiak51103 should need to do is check to see if he has a VARS file under;
     /mnt/user/system/custom_ovmf/
     and that they are correctly defined in the <os> section of his VM config. Or use the copy made when the VM was created under;
     /etc/libvirt/qemu/nvram/
     In the interest of keeping everything stock standard, I would recommend pulling the file(s) down from GitHub, putting them back into the custom_ovmf directory, and then returning the VM config back to default.
     <os>
       <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
       <nvram>/mnt/user/system/custom_ovmf/Macinabox_VARS-pure-efi.fd</nvram>
     </os>
  9. Ok, so we have ruled out the obvious. I did a quick bit of googling and nothing jumped out at me. You could always try using the default OVMF file rather than the Macinabox version, and you could try using a newer q35 machine model. v4.2 is pretty old and you may be hitting a weird bug.
     <os>
       <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/055c9315-8848-0404-267f-4ddbae41e8bf_VARS-pure-efi.fd</nvram>
     </os>
     Of course, update the VM identifier accordingly for the nvram file, or keep using your existing Macinabox nvram file.
  10. Please don't take this as me being mean... Just trying to get to the bottom of the issue... So you haven't demonstrated that the file is in the right place; the pic only shows the directory, not the file. Also I'm not familiar with the app you are using and what this "Custom Path" business is about.
      If you open a terminal window for your server from the Unraid web GUI you should see the following;
      root@Tower:~# ls -la /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd
      -rwxrwxrwx 1 root root 3653632 Apr  5 06:54 /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd
      Also check what you have defined in the <os> section of the VM template as it will need to match.
      <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
      Unraid is just looking to read this file. There has to be something simple happening. Eg the file is missing, the permissions are set incorrectly, etc.
  11. Macinabox uses custom OVMF files. Have you checked to see if the files exist? The error message is pretty clear... It can't find /mnt/user/system/custom_ovmf/Macinabox_Code-pure-efi.fd
      Usually when people get these types of errors it is because they have changed the default locations for files during the installation or deleted the files by mistake. You can always download them again from GitHub.
      https://github.com/SpaceinvaderOne/Macinabox/tree/master/ovmf
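      Something like this from the Unraid terminal should put them back (the raw URLs are assumed from the repo layout above -- check the filenames against the GitHub page):
          cd /mnt/user/system/custom_ovmf/
          wget https://raw.githubusercontent.com/SpaceinvaderOne/Macinabox/master/ovmf/Macinabox_CODE-pure-efi.fd
          wget https://raw.githubusercontent.com/SpaceinvaderOne/Macinabox/master/ovmf/Macinabox_VARS-pure-efi.fd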
  12. Unraid can pass a virtual e1000 Intel ethernet device to your OS X VM that will work just fine. You still need a WiFi card and Bluetooth if you want to use iServices and Handoff.
      You may be able to pass the WiFi card... It depends on the IOMMU groups. Also, in general, most onboard Bluetooth is actually attached to the motherboard via a dedicated internal virtual (no connector) USB port. This means that you will have to pass through that entire USB controller as well.
      OS X is very fussy about WiFi and Bluetooth cards, and they almost exclusively use a subset of Broadcom chips. If you want OS X to work well then you may need to add a combo card. Eg;
      https://www.ebay.com.au/itm/264398687275
      https://dortania.github.io/Wireless-Buyers-Guide/unsupported.html#supported-chipsets
      You will still need to connect this to an internal USB2 header to see the Bluetooth, and that USB controller has to be passed through.
      https://www.ebay.com.au/itm/272407930666
      Any card based on the FL1100 chip should work OTB. Inateck have a good rep but there are other options.
      If you get the hardware right, then everything just works. Otherwise you will have to use kexts to support other chipsets from Intel/Realtek/etc and you will likely just keep hitting all sorts of weird compatibility issues. I've just thrown hardware at the problem (dedicated GPU, USB, WiFi, SSD)...
      IOMMU group 31: [1b73:1100] 0b:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
      IOMMU group 32: [14e4:43ba] 0c:00.0 Network controller: Broadcom Inc. and subsidiaries BCM43602 802.11ac Wireless LAN SoC
      IOMMU group 34: [1bb1:5012] 11:00.0 Non-Volatile memory controller: Seagate Technology PLC FireCuda 510 SSD
      IOMMU group 35: [1002:67ff] 12:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X]
                      [1002:aae0] 12:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X]
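      If you want to check your own groups, Unraid lists them under Tools > System Devices, or you can print them from the terminal with the usual loop:
          for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#*/iommu_groups/}; g=${g%%/*}      # pull the group number out of the path
            printf 'IOMMU group %s: ' "$g"
            lspci -nns "${d##*/}"                   # describe the device at that PCI address
          done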
  13. Yep... On my rig I run dual GPUs, one for OS X and one for Windows. The Unraid server is headless. You will end up with the same setup but for a VM and a docker. You might want to keep an eye on how many slots wide the GPUs are, as you don't want to block off a PCIe port that you need for an add-in card. In my case I have a PCIe USB controller and a WiFi card being passed direct to the OS X VM (that's how I get iServices and Handoff working). So I have a single-slot RX550 GPU. The onboard USB and second GPU are for Windows.
  14. I believe Macinabox uses SecureBootModel=Disabled for compatibility reasons. As per the instructions from Opencore I have been using SecureBootModel=x86legacy which is recommended for VMs on 11.0 and above. (Virtual Machines will want to use x86legacy for Secure Boot support.) Also don't forget to set ForceSecureBootScheme to false. https://dortania.github.io/OpenCore-Post-Install/universal/security/applesecureboot.html#securebootmodel
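      If your EFI partition is mounted, both changes are one-liners with plutil (path assumed; the keys live under Misc > Security and Booter > Quirks respectively):
          OC=/Volumes/EFI/EFI/OC/config.plist
          plutil -replace Misc.Security.SecureBootModel -string x86legacy "$OC"
          plutil -replace Booter.Quirks.ForceSecureBootScheme -bool NO "$OC"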
  15. You don't have to have a dedicated GPU for Unraid. It can be accessed via a web interface post boot so that's fine. The issue will be that the GPU can only be assigned to one role at a time. If you turn on the VM then it isn't available for the docker, and vice versa. Also, in my experience, switching hardware back and forth between the base Unraid OS and a VM tends to expose bugs in motherboards' PCIe implementations. Aka you will get weird hardware lockups, reboots, etc.
      Getting Plex hardware transcoding working has been covered by Spaceinvader One;
      https://www.youtube.com/watch?v=VkC5Hi-rO2c
      I don't use Macinabox but I did take a look at it when I switched from Clover to OpenCore to speed up my migration. That being said, Macinabox is fantastic and really lowered the bar for a lot of people to get into OS X VMs. I wanted to be on a much more vanilla OpenCore installation that didn't require a bunch of custom hacks to work. That way I could upgrade OS X and OpenCore whenever I wanted to without having to worry about breaking something. I also just like to understand things for myself and not be too reliant on someone else's "black box" product. It did take me a good few days to get my head around... But the result was worth it. My OS X install is 100% functional and stable. It's my daily driver for work and personal computing.
  16. I don't need a cache drive for performance given the main array works at the line rate of the SSD. I suppose my preference is to have the reliability that the redundancy provides. If a drive fails the machine will keep going until I can swap it out. I do of course have both local and off-site backups.
  17. Agreed, this is not the deployment that Limetech had in mind when they designed Unraid. I keep reading warnings about SSDs being an issue in the parity array, but everything I have read to date is not backed up by any real technical information. With the notable exception that you need to make sure that TRIM is disabled, as you don't want the parity drive randomly changing bits on you. There is a lot of chatter about the lack of TRIM causing performance issues, but that is not a problem when my bottleneck is the SATA bus on the data drive. The drive can still write at full speed. I suspect that you would hit speed issues with an NVMe-only array on large writes that exceed the drive's DRAM cache. I'd be interested if you can point me at any deep dives into the issues.
      I suspect that in this specific case, and with my workload (a few GBs of disk R/W per day), I am unlikely to uncover any bizarre edge cases with the hardware. I suppose I created this post just to document that it is possible to create a reliable system with Unraid on a NUC, since the majority of the posts are vague warnings to the effect that it is just a bad idea.
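      For what it's worth, you can see whether the kernel thinks a drive advertises discard/TRIM at all with lsblk (non-zero DISC-GRAN/DISC-MAX means it does):
          lsblk --discard /dev/sdb    # device name is an example; run it bare to list all drives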
  18. You have gone with an interesting design. I'm not sure about the logic for a dedicated Plex metadata drive (it's not that demanding) and VM drive. You would get much better performance and reliability from running dual NVMe drives in a RAID cache pool and setting your VMs to be stored on the cache.
      Are you planning on running Plex as a docker or within a VM? Will it need to do transcoding? Are you going to run Unraid without a dedicated GPU? Is the Quadro a CAD requirement?
      I would throw money at a pair of the largest NVMe drives that you can afford, use them as a cache pool, and just run everything off that. You will never regret having larger cache drives, and the price difference between 250 and 500GB drives is $20. In my own rig I have 2 x 1TB 970 Evo as a cache running all my VMs/dockers etc. The HDD array just holds media, ISOs etc. I also have a 3rd 2TB NVMe drive dedicated to my primary OS X install, which is my work machine (also with a dedicated USB/GPU/etc). Just my 2c
  19. I just published a post about my NUC server build which might give you some food for thought.
      https://forums.unraid.net/topic/110427-unraid-intel-nuc-firewall-and-application-server-build/?tab=comments#comment-1007912
      If you use the dual M.2 drives as data and parity for Unraid and then just connect the USB drive array as a big share, that should work fine. You would be relying on the hardware RAID of the TerraMaster with all its pros/cons, rather than Unraid and XFS/BTRFS.
  20. The go-to place for vbios files is https://www.techpowerup.com/vgabios/ but most people recommend that you rip your own bios. Spaceinvader One has written some software and made a video that is easy to follow.
      https://www.youtube.com/watch?v=FWn6OCWl63o
      I had run my system for over a year without needing to pass a bios to my Mac VM with an RX550. Then I decided to upgrade from 6.9.0 RC to 6.9.2, and the Mac VM refused to boot with a bunch of difficult-to-debug hardware errors related to being unable to communicate with the video card. I tried debugging the issue for days and even downgraded back to 6.9.0 without any luck. So as a last resort I pulled the bios using Spaceinvader One's instructions, passed it to the VM and bingo, everything working again. Go figure...
  21. This is not your typical Unraid build. I needed a small server to run my firewall, internet link, smart house, etc. It needed to be low power since it would be on 24x7, reliable with minimal moving parts that could fail, and small so it could fit in my crowded rack. There is a lot of discussion in forums that NUCs are not great for Unraid, but this build is proof otherwise.
      OS at Time of Build: unRAID 6.9.1
      CPU: Intel® Core™ i5-8259U (3.8GHz boost 4C/8T)
      Motherboard: Intel NUC NUC8BEB (NUC NUC8i5BEH)
      RAM: Crucial CT8G4SFS8266 (2 x 8GB 2666MHz DDR4)
      Data Drive: Crucial MX500 250GB SSD CT250MX500SSD1
      Parity Drive: Seagate BarraCuda 510 500GB NVMe SSD ZP500CM30001
      Cache Drive: None
      Onboard NIC: Intel I219-V
      USB 3.0 NIC: TP-Link UE305 (ASIX AX88179 GigE)
      VMs: pfSense
      Dockers: NextCloud, HomeBridge, Unifi Controller, Plex, SQL Database, Guacamole, etc
      Likes: Tiny, quiet, fast, low power, inexpensive, set and forget
      Dislikes: One moving part (the fan); would love a version with 2 x NVMe rather than NVMe and SATA
      Future Plans: Upgrade the data drive when I need more space
      Active (avg): ~15 W
      Idle (avg): <10 W
      I stubbed and passed the onboard Intel GbE NIC direct to the pfSense VM so it can make use of hardware ethernet functions such as checksum offloading. I'm using 802.1q VLANs to my managed switch to break out ports for each VLAN and my VDSL internet link. All remaining services go via the USB 3.0 Gigabit NIC.
      Costs ($AU): NUC $330, NVMe $80, RAM $85, SSD/NIC (recycled from old machine). Total $500AU or $380US
      About 6 months in now and no regrets. I haven't bothered with a picture since everyone knows what a NUC looks like.
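      For anyone curious about the NIC stubbing: on Unraid 6.9+ you just tick the device under Tools > System Devices, which writes /boot/config/vfio-pci.cfg. The result looks something like this (the PCI address and vendor:device ID below are illustrative -- use the values shown for your own I219-V):
          BIND=0000:00:1f.6|8086:15be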
  22. If you have a modern CPU there is no reason why you have to emulate Penryn. I believe Macinabox has it set that way for maximum compatibility. I'm running an AMD 3950X so I emulate a Skylake processor. If I didn't use emulation then I would need to apply a bunch of patches to OS X to support AMD.
      <qemu:arg value='Skylake-Client,kvm=on,vendor=GenuineIntel,+invtsc,+hypervisor,vmware-cpuid-freq=on,+pcid,+popcnt,check'/>
      This results in the OS reporting my CPU as - 3.5 GHz Intel Core i3
      From sysctl -a;
      machdep.cpu.brand_string: Intel Core Processor (Skylake)
      machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH MMX FXSR SSE SSE2 SSE3 PCLMULQDQ SSSE3 FMA CX16 SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES VMM XSAVE OSXSAVE TSCTMR AVX1.0 RDRAND F16C
      machdep.cpu.leaf7_features: RDWRFSGS BMI1 AVX2 SMEP BMI2 RDSEED ADX SMAP
      Obviously there is no AVX512 support, being AMD Zen2 underneath. Still, I got about a 15% speed improvement by shifting from Penryn to Skylake emulation. I'm not sure exactly where that came from, but I expect there are features available in Skylake that are otherwise missing.
  23. Ok, so I am not very experienced with this situation, but what has happened is that the VM is booting into the UEFI, but then the UEFI cannot find a valid OS (BOOTx64.efi) to boot, so it stops. This means that you have either damaged your EFI partition, deleted or remapped your virtual disk to somewhere it isn't expected, or similar. This wouldn't happen from just adding the boot-args to your config.plist file since the boot process hasn't even got that far yet.
      OpenCore booting (from https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/boot.html#xnu-kernel-handoff);
      1. System powers on and searches for boot devices <- you're stuck here
      2. System locates BOOTx64.efi on your OpenCore USB under EFI/BOOT/
      3. BOOTx64.efi is loaded, which then chain-loads OpenCore.efi from EFI/OC/
      4. NVRAM properties are applied <- this is where you made the change
      5. EFI drivers are loaded from EFI/OC/Drivers
      6. Graphics Output Protocol (GOP) is installed
      7. ACPI tables are loaded from EFI/OC/ACPI
      8. SMBIOS data is applied
      9. OpenCore loads and shows you all possible boot options
      10. You now boot your macOS installer
      I would start by checking your EFI partition/vdisk and checking that you somehow didn't break your disk mappings in your VM XML config.
  24. I've used the terms debugging and verbose interchangeably, which may have been confusing. Add the boot-args "-v keepsyms=1" to your config.plist as I mentioned above; it's documented here:
      https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/kernel-debugging.html#config-plist-setup
      This will turn on verbose mode when OS X boots, and then you should be able to see what is causing the kernel panic.
      This isn't an error. It's just the UEFI logging which HDD it's booting from.
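      If you want to do it from the command line rather than a plist editor, this is the equivalent plutil one-liner (EFI mount path assumed; the GUID is OpenCore's standard NVRAM section for boot-args):
          plutil -replace NVRAM.Add.7C436110-AB2A-4BBB-A880-FE41995C9F82.boot-args \
            -string '-v keepsyms=1' /Volumes/EFI/EFI/OC/config.plist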