
iamtim2

Members
  • Posts: 7
  • Joined
  • Last visited

  1. Are you passing through a GPU? If so, you have to make some edits to the XML to ensure that both the graphics and sound functions of the GPU sit on the same virtual PCIe slot (see the sketch below this post). Spaceinvaderone has a good tutorial on this in his advanced GPU passthrough video: https://youtu.be/QlTVANDndpM?si=q0MefOd-abbb4wk6 I don't know how sound behaves without passed-through hardware, but I expect the VM is too laggy to use without it anyway.
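
     (Not from the original post: a minimal sketch of what that same-slot edit usually looks like in the VM's libvirt XML. Both hostdev entries are given the same guest bus/slot, with the video function at 0x0, marked multifunction, and the audio function at 0x1. The host source addresses here are hypothetical; use the ones from your own system's IOMMU listing.)

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <!-- GPU video function; host address is hypothetical -->
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
       </source>
       <!-- guest slot shared with the audio device below -->
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <!-- GPU audio function -->
         <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
     </hostdev>
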
  2. I don't, but there are guides for creating a macOS USB installer. You could create the installer and then pass the USB drive through to the VM. You still need to download the .img in the first place (a sketch of the standard command is below this post).
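
     (Not from the original post: once a macOS installer app is in /Applications, Apple's documented createinstallmedia tool builds the USB installer. The volume name is a placeholder; the command erases that volume.)

     # Erases /Volumes/MyUSB and writes a bootable Big Sur installer to it
     sudo /Applications/Install\ macOS\ Big\ Sur.app/Contents/Resources/createinstallmedia --volume /Volumes/MyUSB
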
  3. The current Macinabox ships a very old version of OpenCore (0.7.8), so you'll have to upgrade OpenCore to a much newer version (the latest is 0.9.8) and update the drivers and kexts; just trying to apply the update to the newly created Big Sur install will fail. I wrote a basic guide here: https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=1364505 (previous page) on how I got Ventura working. I also got Sonoma working, but Sonoma is much pickier about hardware, as Apple has dropped support for quite a lot of legacy hardware in its continuing push to Apple Silicon, especially Broadcom WiFi adapters. There are some workarounds in Sonoma using OpenCore Legacy Patcher, but these didn't work for me. Mileage will vary depending on hardware, Intel or AMD CPU etc., but if Big Sur works well for you and any passed-through hardware, Ventura will work; Sonoma will probably require quite a lot of fiddling.
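
     (An aside, not from the original post: if you're unsure which OpenCore version a VM is currently booting, OpenCore publishes it to NVRAM, readable from the macOS guest's Terminal, provided the config's ExposeSensitiveData setting exposes it, which the sample config does.)

     # Prints the running OpenCore version string
     nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
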
  4. No, the other way around: Proxmox is the bare-metal install, and macOS and Unraid run alongside each other as VMs under Proxmox.
  5. Although I have got Ventura working under Unraid, I wasn't very happy with the performance. The GUI was laggy, with frequent micro-pauses of the mouse cursor and of UI elements such as scrolling windows, and it generally just seemed a bit slow! I also had a couple of lock-ups that took the whole server down with them. I love Unraid for the NAS side and Docker handling, but I'm finding it a bit disappointing as a hypervisor for running VMs; a Windows 11 VM wasn't that great either. After a bit of investigation I tried a couple of things:
     1. I installed Ventura bare metal, and with 128GB RAM and 18 cores it runs butter smooth, so it's not a hardware issue; but bare metal is no good, as I need the NAS functionality.
     2. I installed Proxmox and then installed Ventura as a VM with the same RAM and core count as the VM running under Unraid. As with the Unraid VM, my RX580 is passed through and the VM disk is hosted on an NVMe drive. Performance is night and day: under Proxmox the VM runs very smoothly, with no glitches, slowdowns or crashes, despite identical settings and passed-through hardware.
     Conclusion: Unraid running on my server does not do VMs very well! This is obviously an N of 1 and it might just be me 🙂 I therefore took a different tack, probably off topic in this thread: I now have Unraid hosted as a VM in Proxmox, with my disk controllers passed through along with an Nvidia Quadro P400 for Plex transcoding. As a NAS it is working really well, running a 10-disk ZFS pool and 12 Docker containers without a hitch for a week now. Best of all, I'm able to run macOS Ventura in a Proxmox VM at the same time as Unraid, with excellent performance.
  6. To rename, click Edit on the VM and simply change the name. If the original VM was created with the qcow2 disk type, you'll then need to follow the instructions at step 17 of my guide to get it working again (the before/after of the XML line is sketched below this post).
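
     (To illustrate, adapted from step 17 of the guide in the next post, not from this one: the rename bug flips the disk driver line to raw, and it needs editing back.)

     <!-- after renaming, the bug leaves: -->
     <driver name='qemu' type='raw' cache='writeback'/>
     <!-- edit it back to: -->
     <driver name='qemu' type='qcow2' cache='writeback'/>
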
  7. So I’m a bit of an Unraid noob, but I’ve been running bare-metal and Proxmox Hackintoshes for a while now, so I’m fairly confident editing plist files and the OpenCore bootloader. Having recently built a new Unraid server to replace my creaking old Synology NAS, I figured I may as well get macOS running on it, as it’s on all the time anyway and I wouldn’t need to fire up my gaming PC/Hackintosh every time I needed to do something. Fast forward: I have Unraid running great with loads of Dockers etc., and, following Spaceinvaderone’s excellent tutorials, a fully working Big Sur VM with passed-through WiFi card, USB PCIe card and GPU (RX580). As I’d been running Ventura on Proxmox for ages, I thought I’d tinker with the VM to see if I could get Ventura running. There seems to be no guide out there to show you how to do this, and all the searching I did pointed at a couple of Docker options that were limited in functionality. I have skimmed through this thread, so apologies if it's already covered!

     Anyway… a long while later: success. My setup is an Intel server: a Xeon E5-2697 v4 on an ASUS X99-E WS motherboard with 128GB 2300 ECC memory. There is no reason why an AMD setup shouldn’t also work with a few tweaks. Here’s what I did:

     1. I used the ‘Backup VM’ app from Community Applications to back up my Big Sur VM.

     2. I then stopped all PCIe passthrough to the VM, to mitigate any issues with passthrough breaking during the install. There is a bug in the current version of Unraid where, if you deselect GPU passthrough and go back to VNC, you get the ‘guest has not initialized the display’ message and won’t be able to do anything. Luckily I had a copy of the VM XML from before I passed through the GPU and just copied it back into the VM; I then tweaked the CPU cores and memory, reran the Macinabox helper script, and that fixed the problem. There is a thread (search for the no-display error) that tells you how to fix the XML, but it was simpler for me to just copy back the original from the initial Big Sur install.

     3. I then created a new user script, copied in the script content from Spaceinvaderone’s helper script, and changed the custom XML entries that the script injects into the VM XML when run. I got the new values from Nick Sherlock’s excellent tutorial on installing Ventura on Proxmox: https://www.nicksherlock.com/2022/10/installing-macos-13-ventura-on-proxmox/ This is where you may be able to use the AMD values for the CPU arguments if running an AMD CPU. Essentially, I changed my script to use the Intel CPU argument:

     echo "<qemu:arg value='host,kvm=on,vendor=GenuineIntel,+invtsc,+hypervisor'/>" | tee --append /tmp/"$XML"4.xml

     This replaced the old line:

     echo "<qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+kvm_pv_unhalt,+kvm_pv_eoi,+hypervisor,+invtsc,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+avx2,+aes,+fma,+fma4,+bmi1,+bmi2,+xsave,+xsaveopt,+rdrand,check'/>" | tee --append /tmp/"$XML"4.xml

     This passes the host CPU straight through to the VM rather than emulating one (a sketch of how the injected lines end up in the finished VM XML is at the end of this post).

     4. I then used OpenCore Configurator to mount the EFI partition and delete all the unnecessary kexts (as I’m using a Fenvi T919 WiFi/Bluetooth card that is natively supported in Ventura, I didn’t need all the Bluetooth or WiFi/Broadcom kexts).

     5. I updated all the kexts to the latest versions, especially Lilu, WhateverGreen, AppleALC and VirtualSMC. Follow the guide at https://dortania.github.io/OpenCore-Install-Guide/
     6. Macinabox uses an old version of OpenCore (0.7.8) and the latest version is 0.9.7, so next it was time to update OpenCore and my EFI folder and files.

     7. I downloaded the latest version of OpenCore from https://github.com/acidanthera/OpenCorePkg/releases

     8. I downloaded the ProperTree plist editor (https://github.com/corpnewt/ProperTree) and used it to open the config.plist from the Big Sur EFI disk (mount the disk with OpenCore Configurator as shown in Spaceinvaderone’s install video). Then open the Sample.plist in the Docs folder of the OpenCore download and place the two side by side.

     9. I followed the OpenCore install guide (I used Intel HEDT/Broadwell-E under the configs sidebar), but follow whichever guide matches your CPU. The thing to bear in mind is that I’m passing the CPU through to the VM as ‘host’, so the VM sees my real CPU, not the emulated Intel ‘Penryn’ CPU of the current Big Sur Macinabox VM. The motherboard is obviously emulated, but I followed the guide for X99 (my actual motherboard) where motherboard parameters are used and it seemed to work. You may need to tweak slightly.

     10. There is quite a bit of discrepancy between the Dortania guide and some of the settings in the Macinabox Big Sur config.plist, so I used the guide as the main reference, changing values where it told me to; where it was a bit ambiguous, I matched the settings from the Macinabox config.plist.

     11. Once all the edits to the Sample.plist were done, I renamed it config.plist and saved it to my desktop. I then followed the guide to build the EFI folder, finally adding the new config.plist once done.

     12. I then replaced the Big Sur EFI folder with the newly constructed EFI folder containing the new config.plist and the updated kexts and driver files.

     13. I rebooted the VM to check it all worked, and Big Sur booted fine, now running on OpenCore 0.9.7.

     14. Next, I opened up a Terminal window and used a command that downloads the latest Ventura install app into your Applications folder (the likely command is sketched at the end of this post).

     15. I then simply went to the downloaded app in Applications, double-clicked it, and followed the prompts to install Ventura over the top of Big Sur.

     16. To my complete surprise, it went through the full update with no errors and I was able to boot into Ventura 13.6.4.

     17. I renamed the VM from 'Macinabox BigSur' to 'Ventura'. If you used qcow2 in the Macinabox Docker configuration when creating the initial VM, you’ll have a problem when renaming: there is a bug that messes up the disk setting in the XML when renaming a VM, changing it from qcow2 to raw. If you try to boot now, you go straight to a UEFI screen and your boot disk can’t be seen. You need to go into the XML and change the disk type back to qcow2 in the line that starts <driver name=:

     <devices>
       <emulator>/usr/local/sbin/qemu</emulator>
       <disk type='file' device='disk'>
         <driver name='qemu' type='qcow2' cache='writeback'/>
         <source file='/mnt/user/domains/BigSur/macos_disk.img'/>
         <target dev='hdc' bus='sata'/>
         <boot order='1'/>
         <address type='drive' controller='0' bus='0' target='0' unit='2'/>
       </disk>

     18. I then came across another issue: when passing through the USB, WiFi and RX580 again, I just got a black screen. I checked my settings and they were the same as before, so I was a bit stumped. I then compared the original Big Sur XML, with working passthrough, to the new Ventura XML and noticed that some of the lines had subtly changed for some reason. I replaced all the lines below the second <devices> entry, down to </disk>, in the Ventura XML with those from the Big Sur XML. Voila! GPU passthrough now works.
     19. Any changes from now on, as long as I use the updated helper script, are fine, and the VM performs well.

     20. I used GenSMBIOS (https://github.com/corpnewt/GenSMBIOS) to generate serials etc. for an iMac19,1 machine type, and all iMessage services, Handoff and AirDrop work fine. The OpenCore guide tells you how to do this (the config.plist fields involved are sketched at the end of this post). There’s an issue with the latest kernel that causes a crash when connecting the WiFi to an encrypted WiFi network (it works fine with WiFi with no password), so I can’t connect the WiFi; but I’m running hard-wired, so that's not an issue for me, and all iServices seem to work fine.

     I did a second update of another Macinabox Big Sur install to Sonoma, and that also works following the guide above, but Sonoma has dropped support for Broadcom cards and I had issues with Bluetooth that I couldn't seem to fix, even with the OpenCore Legacy Patcher workaround, so that was a non-starter for me; Ventura is fine for my needs. Hopefully this mini guide will help others get Ventura (or Sonoma) up and running on Unraid.
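
     (Not from the original post: a sketch of how the injected arguments from step 3 typically end up inside the qemu:commandline section of the finished VM XML; the surrounding elements are standard libvirt QEMU passthrough syntax.)

     <qemu:commandline>
       <!-- pass the host CPU straight through instead of emulating Penryn -->
       <qemu:arg value='-cpu'/>
       <qemu:arg value='host,kvm=on,vendor=GenuineIntel,+invtsc,+hypervisor'/>
     </qemu:commandline>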
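
     (The command in step 14 didn't survive in the post as archived. The standard macOS tool for pulling a full installer into /Applications is softwareupdate; the exact version number below is an assumption.)

     # Downloads 'Install macOS Ventura.app' into /Applications
     # (omit --full-installer-version to fetch the latest available)
     softwareupdate --fetch-full-installer --full-installer-version 13.6.4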
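
     (Not from the original post: the serials GenSMBIOS generates in step 20 go into the PlatformInfo > Generic section of config.plist. A sketch with placeholder values; the real ones must come from GenSMBIOS.)

     <key>PlatformInfo</key>
     <dict>
         <key>Generic</key>
         <dict>
             <key>SystemProductName</key>
             <string>iMac19,1</string>
             <key>SystemSerialNumber</key>
             <string>PLACEHOLDER-SERIAL</string>
             <key>MLB</key>
             <string>PLACEHOLDER-BOARD-SERIAL</string>
             <key>SystemUUID</key>
             <string>00000000-0000-0000-0000-000000000000</string>
             <!-- ROM is base64 of a 6-byte MAC-style value; placeholder -->
             <key>ROM</key>
             <data>AAAAAAAA</data>
         </dict>
     </dict>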