cat2devnull

Everything posted by cat2devnull

  1. Yep, of course... Feel like a bit of an idiot. My understanding is that there is nothing special about this file between Macinabox and a generic install; it just holds the NVRAM vars. As such, all @Kodiak51103 should need to do is check that he has a VARS file under /mnt/user/system/custom_ovmf/ and that it is correctly referenced in the <os> section of his VM config, or use the copy made when the VM was created under /etc/libvirt/qemu/nvram/. In the interest of keeping everything stock standard, I would recommend pulling the file(s) down from GitHub, putting them back into the custom_ovmf directory, and then returning the VM config to the default:
     <os>
       <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
       <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
       <nvram>/mnt/user/system/custom_ovmf/Macinabox_VARS-pure-efi.fd</nvram>
     </os>
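     Alternatively, if the VM has booted before, the copy libvirt keeps under /etc/libvirt/qemu/nvram/ can simply be copied back. A minimal sketch, assuming the default Unraid paths above (the exact VARS filename includes the VM's UUID, so list the directory first; <vm-uuid> below is a placeholder):
        # List the nvram copies libvirt keeps for each VM
        ls -la /etc/libvirt/qemu/nvram/
        # Copy the Macinabox VARS file back to the shared custom_ovmf location
        cp /etc/libvirt/qemu/nvram/<vm-uuid>_VARS-pure-efi.fd /mnt/user/system/custom_ovmf/Macinabox_VARS-pure-efi.fd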
  2. Ok, so we have ruled out the obvious. I did a quick bit of googling and nothing jumped out at me. You could always try using the default OVMF file rather than the Macinabox version and you could try using a newer q35 machine model. v4.2 is pretty old and you may be hitting a weird bug.
     <os>
       <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/055c9315-8848-0404-267f-4ddbae41e8bf_VARS-pure-efi.fd</nvram>
     </os>
     Of course update the VM identifier accordingly for the nvram file, or keep using your existing Macinabox nvram file.
  3. Please don't take this as me being mean... Just trying to get to the bottom of the issue... So you haven't demonstrated that the file is in the right place; the pic only shows the directory, not the file. Also, I'm not familiar with the app you are using and what this "Custom Path" business is about. If you open a terminal window for your server from the Unraid web GUI you should see the following:
     root@Tower:~# ls -la /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd
     -rwxrwxrwx 1 root root 3653632 Apr 5 06:54 /mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd
     Also check what you have defined in the <os> section of the VM template, as it will need to match.
     <loader readonly='yes' type='pflash'>/mnt/user/system/custom_ovmf/Macinabox_CODE-pure-efi.fd</loader>
     Unraid is just looking to read this file. There has to be something simple happening, e.g. the file is missing, the permissions are set incorrectly, etc.
  4. Macinabox uses custom OVMF files. Have you checked to see if the files exist? The error message is pretty clear... It can't find /mnt/user/system/custom_ovmf/Macinabox_Code-pure-efi.fd
     Usually when people get these types of errors it is because they have changed the default locations for files during the installation or deleted the files by mistake. You can always download them again from GitHub.
     https://github.com/SpaceinvaderOne/Macinabox/tree/master/ovmf
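     If the files are simply missing, re-downloading them from the Unraid terminal is quick. A minimal sketch, assuming the raw-file layout of the Macinabox GitHub repo linked above still holds (check the repo if the paths have moved):
        cd /mnt/user/system/custom_ovmf/
        # Fetch the custom OVMF CODE and VARS images from the Macinabox repo
        wget https://github.com/SpaceinvaderOne/Macinabox/raw/master/ovmf/Macinabox_CODE-pure-efi.fd
        wget https://github.com/SpaceinvaderOne/Macinabox/raw/master/ovmf/Macinabox_VARS-pure-efi.fd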
  5. Unraid can pass a virtual e1000 Intel ethernet device to your OS X VM that will work just fine. You still need a WiFi card and Bluetooth if you want to use iServices and Handoff. You may be able to pass through your existing WiFi card... It depends on the IOMMU groups. Also, in general, most onboard Bluetooth is actually attached to the motherboard via a dedicated internal virtual (no connector) USB port. This means that you will have to pass through that entire USB controller as well. OS X is very fussy about WiFi and Bluetooth cards; they almost exclusively use a subset of Broadcom chips. If you want OS X to work well then you may need to add a combo card. E.g.:
     https://www.ebay.com.au/itm/264398687275
     https://dortania.github.io/Wireless-Buyers-Guide/unsupported.html#supported-chipsets
     You will still need to connect this to an internal USB2 header to see the Bluetooth, and that USB controller has to be passed through.
     https://www.ebay.com.au/itm/272407930666
     Any card based on the FL1100 chip should work OTB. Inateck have a good rep but there are other options. If you get the hardware right, then everything just works. Otherwise you will have to use kexts to support other chipsets from Intel/Realtek/etc and you will likely just keep hitting all sorts of weird compatibility issues. I've just thrown hardware at the problem (dedicated GPU, USB, WiFi, SSD)...
     IOMMU group 31: [1b73:1100] 0b:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
     IOMMU group 32: [14e4:43ba] 0c:00.0 Network controller: Broadcom Inc. and subsidiaries BCM43602 802.11ac Wireless LAN SoC
     IOMMU group 34: [1bb1:5012] 11:00.0 Non-Volatile memory controller: Seagate Technology PLC FireCuda 510 SSD
     IOMMU group 35: [1002:67ff] 12:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X]
                     [1002:aae0] 12:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X]
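     If you want to check your own IOMMU groups, Unraid lists them under Tools > System Devices, or you can dump them from the terminal. A minimal sketch using a generic sysfs loop (not Unraid-specific):
        #!/bin/bash
        # Print every IOMMU group and the PCI devices it contains
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -n "    "
                lspci -nns "${d##*/}"
            done
        done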
  6. Yep... On my rig I run dual GPUs, one for OS X and one for Windows. The Unraid server is headless. You will end up with the same setup but for a VM and docker. You might want to keep an eye on how many slots wide the GPUs are, as you don't want to block off a PCIe port that you need for an add-in card. In my case I have a PCIe USB controller and a WiFi card being passed directly to the OS X VM (that's how I get iServices and Handoff working). So I have a single-slot RX550 GPU. The onboard USB and second GPU are for Windows.
  7. I believe Macinabox uses SecureBootModel=Disabled for compatibility reasons. As per the instructions from OpenCore, I have been using SecureBootModel=x86legacy, which is recommended for VMs on 11.0 and above. ("Virtual Machines will want to use x86legacy for Secure Boot support.") Also don't forget to set ForceSecureBootScheme to false.
     https://dortania.github.io/OpenCore-Post-Install/universal/security/applesecureboot.html#securebootmodel
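     If you'd rather script the change than hand-edit the plist, macOS's bundled PlistBuddy can do it. A minimal sketch, assuming the stock OpenCore key layout (both keys sit under Misc > Security) and a config.plist in the current directory:
        # Set SecureBootModel to x86legacy and disable ForceSecureBootScheme
        /usr/libexec/PlistBuddy -c "Set :Misc:Security:SecureBootModel x86legacy" config.plist
        /usr/libexec/PlistBuddy -c "Set :Misc:Security:ForceSecureBootScheme false" config.plist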
  8. You don't have to have a dedicated GPU for Unraid. It can be accessed via a web interface post boot so that's fine. The issue will be that the GPU can only be assigned to one role at a time. If you turn on the VM then it isn't available for the docker, and vice versa. Also, in my experience, switching hardware back and forth between the base Unraid OS and a VM tends to expose bugs in motherboards' PCIe implementations. Aka you will get weird hardware lockups, reboots, etc. Getting Plex hardware transcoding working has been covered by Spaceinvader One: https://www.youtube.com/watch?v=VkC5Hi-rO2c
     I don't use Macinabox but I did take a look at it when I switched from Clover to OpenCore to speed up my migration. That being said, Macinabox is fantastic and really lowered the bar for a lot of people to get into OS X VMs. I wanted to be on a much more vanilla OpenCore installation that didn't require a bunch of custom hacks to work. That way I could upgrade OS X and OpenCore whenever I wanted to without having to worry about breaking something. I also just like to understand things for myself and not be too reliant on someone else's "black box" product. It did take me a good few days to get my head around... But the result was worth it. My OS X install is 100% functional and stable. It's my daily driver for work and personal computing.
  9. I don't need a cache drive for performance given the main array works at line rate of the SSD. I suppose my preference is to have the reliability that the redundancy provides. If a drive fails the machine will keep going until I can swap it out. I do of course have both local and off site backups.
  10. Agreed, this is not the deployment that Limetech had in mind when they designed Unraid. I keep reading warnings about SSDs being an issue in the parity array, but everything I have read to date is not backed up by any real technical information, with the notable exception that you need to make sure that TRIM is disabled, as you don't want the parity drive randomly changing bits on you. There is a lot of chatter about the lack of TRIM causing performance issues, but that is not a problem when my bottleneck is the SATA bus on the data drive. The drive can still write at full speed. I suspect that you would hit speed issues with an NVMe-only array on large writes that exceed the drive's DRAM cache. I'd be interested if you can point me at any deep dives into the issues. I suspect that in this specific case and with my workload (a few GB of disk R/W per day) I am unlikely to uncover any bizarre edge cases with the hardware. I suppose I created this post just to document that it is possible to create a reliable system with Unraid on a NUC, since the majority of the posts are vague warnings to the effect that it is just a bad idea.
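     If you want to check what your SSDs report for yourself, the discard/TRIM capabilities are easy to inspect from the Unraid console. A minimal sketch using generic Linux tooling (nothing Unraid-specific):
        # Show whether each block device advertises discard (TRIM) support;
        # non-zero DISC-GRAN/DISC-MAX means the drive supports it. The point is
        # simply to confirm nothing is issuing TRIM against parity-protected drives.
        lsblk --discard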
  11. You have gone with an interesting design. I'm not sure about the logic for a dedicated Plex metadata drive (it's not that demanding) and a dedicated VM drive. You would get much better performance and reliability from running dual NVMe drives in a RAID cache pool and setting your VMs to be stored on the cache. Are you planning on running Plex as a docker or within a VM? Will it need to do transcoding? Are you going to run Unraid without a dedicated GPU? Is the Quadro a CAD requirement? I would throw money at a pair of the largest NVMe drives that you can afford, use them as a cache pool, and just run everything off that. You will never regret having larger cache drives, and the price difference between 250 and 500GB drives is $20. In my own rig I have 2 x 1TB 970 Evo as a cache running all my VMs/dockers etc. The HDD array just holds media, ISOs etc. I also have a 3rd 2TB NVMe drive dedicated to my primary OS X install, which is my work machine (also dedicated USB/GPU/etc). Just my 2c
  12. I just published a post about my NUC server build which might give you some food for thought. https://forums.unraid.net/topic/110427-unraid-intel-nuc-firewall-and-application-server-build/?tab=comments#comment-1007912
     If you use the dual M.2 drives as data and parity for Unraid and then just connect the USB drive array as a big share, then that should work fine. You would be relying on the hardware RAID of the TerraMaster with all its pros/cons, rather than Unraid and XFS/BTRFS.
  13. The go-to place for vbios files is https://www.techpowerup.com/vgabios/ but most people recommend that you rip your own bios. Spaceinvader One has written some software and made a video that is easy to follow. https://www.youtube.com/watch?v=FWn6OCWl63o
     I had run my system for over a year without needing to pass a bios to my Mac VM with an RX550. Then I decided to upgrade from 6.9.0 RC to 6.9.2 and the Mac VM refused to boot with a bunch of difficult-to-debug hardware errors related to being unable to communicate with the video card. I tried debugging the issue for days and even downgraded back to 6.9.0 without any luck. So as a last resort I pulled the bios using Spaceinvader's instructions, passed it to the VM and bingo, everything working again. Go figure...
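     For reference, the mechanism behind ripping your own bios is the PCI sysfs rom interface, and it can also be done by hand. A minimal sketch, assuming the card is idle (not bound to a running VM at the time); the PCI address is a made-up example, so substitute your own from Tools > System Devices, and the output path just matches the vbios directory used elsewhere in these posts:
        # Dump the GPU's vbios via sysfs (0000:12:00.0 is just an example address)
        cd /sys/bus/pci/devices/0000:12:00.0/
        echo 1 > rom                                  # make the ROM readable
        cat rom > /mnt/user/isos/vbios/my-gpu.rom     # copy it out
        echo 0 > rom                                  # lock it again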
  14. This is not your typical Unraid build. I needed a small server to run my firewall, internet link, smart house, etc. It needed to be low power since it would be on 24x7, reliable with minimal moving parts that could fail, and small so it could fit in my crowded rack. There is a lot of discussion in forums that NUCs are not great for Unraid, but this build is proof otherwise.
     OS at Time of Build: unRAID 6.9.1
     CPU: Intel® Core™ i5-8259U (3.8GHz boost 4C/8T)
     Motherboard: Intel NUC NUC8BEB (NUC NUC8i5BEH)
     RAM: Crucial CT8G4SFS8266 (2 x 8GB 2666MHz DDR4)
     Data Drive: Crucial MX500 250GB SSD CT250MX500SSD1
     Parity Drive: Seagate BarraCuda 510 500GB NVMe SSD ZP500CM30001
     Cache Drive: None
     Onboard NIC: Intel I219-V
     USB 3.0 NIC: TP-Link UE305 (ASIX AX88179 GigE)
     VMs: pfSense
     Dockers: NextCloud, HomeBridge, UniFi Controller, Plex, SQL Database, Guacamole, etc
     Likes: Tiny, quiet, fast, low power, inexpensive, set and forget
     Dislikes: One moving part (the fan) and would love a version with 2 x NVMe rather than NVMe and SATA
     Future Plans: Upgrade the data drive when I need more space
     Active (avg): ~15 W
     Idle (avg): <10 W
     I stubbed and passed the onboard Intel GbE NIC direct to the pfSense VM so it can make use of hardware ethernet functions such as checksum offloading. I'm using 802.1q VLANs to my managed switch to break out ports for each VLAN and my VDSL internet link. All remaining services go via the USB 3.0 Gigabit NIC.
     Costs ($AU): NUC $330, NVMe $80, RAM $85, SSD/NIC (recycled from old machine). Total $500AU or ~$380US
     About 6 months in now and no regrets. I haven't bothered with a picture since everyone knows what a NUC looks like.
  15. If you have a modern CPU there is no reason why you have to emulate Penryn. I believe Macinabox has it set that way for maximum compatibility. I'm running an AMD 3950 so I emulate a Skylake processor. If I didn't use emulation then I would need to apply a bunch of patches to OS X to support AMD.
     <qemu:arg value='Skylake-Client,kvm=on,vendor=GenuineIntel,+invtsc,+hypervisor,vmware-cpuid-freq=on,+pcid,+popcnt,check'/>
     This results in the OS reporting my CPU as "3.5 GHz Intel Core i3". From sysctl -a:
     machdep.cpu.brand_string: Intel Core Processor (Skylake)
     machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH MMX FXSR SSE SSE2 SSE3 PCLMULQDQ SSSE3 FMA CX16 SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES VMM XSAVE OSXSAVE TSCTMR AVX1.0 RDRAND F16C
     machdep.cpu.leaf7_features: RDWRFSGS BMI1 AVX2 SMEP BMI2 RDSEED ADX SMAP
     Obviously no AVX512 support, being AMD Zen3 underneath. Still, I got about a 15% speed improvement by shifting from Penryn to Skylake emulation. I'm not sure exactly where that came from, but I expect there are features available in Skylake that are otherwise missing.
  16. Ok, so I am not very experienced with this situation, but what has happened is that the VM is booting into the UEFI, but then the UEFI cannot find a valid OS (BOOTx64.efi) to boot, so it stops. This means that you have either damaged your EFI partition, deleted or remapped your virtual disk to somewhere it isn't expected, or similar. This wouldn't happen from just adding the boot-args to your config.plist file since the boot process hasn't even got that far yet.
     OpenCore booting (from https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/boot.html#xnu-kernel-handoff):
     System powers on and searches for boot devices <- you're stuck here
     System locates BOOTx64.efi on your OpenCore USB under EFI/BOOT/
     BOOTx64.efi is loaded, which then chain-loads OpenCore.efi from EFI/OC/
     NVRAM properties are applied <- this is where you made the change
     EFI drivers are loaded from EFI/OC/Drivers
     Graphics Output Protocol (GOP) is installed
     ACPI tables are loaded from EFI/OC/ACPI
     SMBIOS data is applied
     OpenCore loads and shows you all possible boot options
     You now boot your macOS installer
     I would start by checking your EFI partition/vdisk and checking that you somehow didn't break your disk mappings in your VM XML config.
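     A quick way to rule out broken disk mappings from the Unraid side before digging into the EFI partition itself. A minimal sketch using standard libvirt tooling ("YourVM" is a placeholder, so use the name from virsh list --all; the domains share path is the Unraid default):
        # Confirm which vdisk and OpenCore image the VM is actually pointed at
        virsh domblklist "YourVM"
        # Then confirm those files still exist where the config says they are
        ls -la /mnt/user/domains/YourVM/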
  17. I've used the terms debugging and verbose interchangeably, which may have been confusing. Add the boot-args "-v keepsyms=1" to your config.plist as I mentioned above; it's documented here: https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/kernel-debugging.html#config-plist-setup
     This will turn on verbose mode when OS X boots, and then you should be able to see what is causing the kernel panic. This isn't an error; it's just the UEFI logging which HDD it's booting from.
  18. Your VM log looks fine. With what info I have to go on at this stage, I don't think this is an OC bug. This is likely something between OS X and the VM (kernelspace/userspace). At this point you will need to get your hands dirty. Start by reading the Dortania troubleshooting guide. https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/troubleshooting.html#table-of-contents (keep in mind that some options in this guide will differ as it is based on OC 0.7.0 and you are using 0.6.4)
     You should look at enabling the OS X debugging first as that is likely to give you what you need. Only if that doesn't work would I bother to install the debugging version of OC. In your config.plist there is a string under NVRAM -> Add -> 7C436110-AB2A-4BBB-A880-FE41995C9F82 -> boot-args
     Change it to "-v keepsyms=1"
     This will enable verbose mode and hopefully allow you to see what's breaking.
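     If you'd rather script the edit than open the plist by hand, macOS's PlistBuddy can set the key directly. A minimal sketch, assuming the stock OpenCore key layout:
        # Turn on verbose boot and keep kernel symbols for panic traces
        /usr/libexec/PlistBuddy -c "Set :NVRAM:Add:7C436110-AB2A-4BBB-A880-FE41995C9F82:boot-args -v keepsyms=1" config.plist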
  19. That's not an error message; it's just logging about which drive it is going to boot from. See my post above about needing more information to help you debug further.
  20. See my post about common issues. I would try cleaning up the CPU topology as a first step. Beyond that, we would need more debugging info. Good luck.
  21. Yes... The variable is "built-in" not "build-in". You made a simple typo. For further reading, check out the Dortania guide. https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html#fixing-en0
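     Once the property name is corrected, you can confirm from inside the VM that it has been applied. A minimal sketch using generic macOS tooling (the grep context width is arbitrary, just enough to see which device the property sits on):
        # Look for the built-in property on the NIC in the IO registry
        ioreg -l | grep -B 10 "built-in"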
  22. It would be interesting to diff the old XML and new, to see what was broken (a quick way to do that is sketched at the end of this post). I'm sure it would be something fairly obvious. The common culprits are:
     1) CPU topology issues. It's not uncommon to see people with CPU topologies defined which can upset the OS.
     <cpu mode='host-passthrough' check='none' migratable='on'>
       <topology sockets='1' dies='1' cores='4' threads='2'/>
       <cache mode='passthrough'/>
       <feature policy='require' name='topoext'/>
     </cpu>
     Often it is safer to just pass through without any topology.
     <cpu mode='host-passthrough' check='none' migratable='on'/>
     2) Missing custom OS loader/nvram settings. I don't use Macinabox anymore so I might be incorrect, but I think it still requires custom versions.
     3) Incorrect bus settings on the video card passthrough - usually missing bios, multifunction settings, etc.
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x12' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/Yeston-RX550-4G-LP-D5.rom'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x12' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
     </hostdev>
     4) Network card not updated to e1000-82545em after the initial build.
     5) Missing or corrupt QEMU settings.
     <qemu:commandline>
       <qemu:arg value='-usb'/>
       <qemu:arg value='-device'/>
       <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
       <qemu:arg value='-device'/>
       <qemu:arg value='isa-applesmc,osk=xxxxxx'/>
       <qemu:arg value='-smbios'/>
       <qemu:arg value='type=2'/>
       <qemu:arg value='-cpu'/>
       <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
     </qemu:commandline>
     And I'm sure there are more gotchas to watch out for. I'm glad to hear you got it going. I must admit I did a complete rebuild of my Unraid Hackintosh a couple of months back. Now on stock standard OC and it's flawless. I was even able to fix the NVMe drive so it appears to the OS as an internal drive, which was something that used to bug me. Ta,
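     On the diffing point: if you still have a copy of the old XML, libvirt makes the comparison easy. A minimal sketch ("YourVM" is a placeholder name, and old.xml is assumed to be a copy saved before the change):
        # Dump the current definition and compare it against the saved copy
        virsh dumpxml "YourVM" > new.xml
        diff -u old.xml new.xml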
  23. If you need help from the community, you will need to give us more information to go on...
     1) The VM logs.
     2) The Apple boot loader debugging. config.plist -> NVRAM -> Add -> 7C436110-AB2A-4BBB-A880-FE41995C9F82 -> boot-args: -v debug=0x100 keepsyms=1
     3) And if that doesn't point us in the right direction, the OpenCore debug logs. https://dortania.github.io/OpenCore-Install-Guide/troubleshooting/debug.html#file-swaps
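     For the first item, the VM log can also be pulled straight from the terminal rather than through the web GUI. A minimal sketch, assuming the standard libvirt log location ("YourVM" is a placeholder for the VM's name):
        # Tail the libvirt/QEMU log for the VM (the filename matches the VM name)
        tail -n 100 "/var/log/libvirt/qemu/YourVM.log"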
  24. It would be interesting to know if any of these patches are required if you don't try and force the CPU to Penryn. I've been using 10.15.x through 11.3.0 without issue, but I have taken a slightly different route to SpaceinvaderOne. I'm trying to keep everything running as stock standard (no custom OVMF etc) with the latest versions of QEMU/OpenCore etc. I'm passing the CPU through as Skylake and it's been flawless for me. Installs, upgrades, all WOB, no weird patching required. I also haven't needed to disable as much of the OS security.
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='20'/>
       <vcpupin vcpu='2' cpuset='5'/>
       <vcpupin vcpu='3' cpuset='21'/>
     </cputune>
     <os>
       <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/xxxx_VARS-pure-efi.fd</nvram>
     </os>
     <cpu mode='host-passthrough' check='none' migratable='on'/>
     <qemu:arg value='Skylake-Client,kvm=on,vendor=GenuineIntel,+invtsc,+hypervisor,vmware-cpuid-freq=on,+pcid,+popcnt,check'/>
  25. So first you need to create a new blank HFS image in Disk Utility. It needs to be large enough to fit the installer. Then convert it from a .dmg to a .img:
     hdiutil convert <input.dmg> -format UDTO -o <output.img>
     Then mount it again and copy the installer into it:
     sudo /Applications/Install\ macOS\ Catalina.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume
     Then you're ready to go.
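     If you prefer to do the whole thing from the terminal rather than Disk Utility, the same flow can be scripted end to end. A minimal sketch, assuming a Catalina installer sitting in /Applications and adjusting the image name/size to suit (note that hdiutil appends .cdr when converting to UDTO):
        # 1. Create a blank HFS+ image big enough for the installer
        hdiutil create -o Catalina -size 12g -layout SPUD -fs HFS+J
        # 2. Attach it and write the installer onto it
        hdiutil attach Catalina.dmg -noverify -mountpoint /Volumes/install_build
        sudo /Applications/Install\ macOS\ Catalina.app/Contents/Resources/createinstallmedia --volume /Volumes/install_build --nointeraction
        # createinstallmedia renames the volume, so detach it by its new name
        hdiutil detach "/Volumes/Install macOS Catalina"
        # 3. Convert the .dmg to a raw image and give it a .img extension for the VM
        hdiutil convert Catalina.dmg -format UDTO -o Catalina
        mv Catalina.cdr Catalina.img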