
Everything posted by doesntaffect

  1. @RiDDiX Did you use a guide to install your VM? If so, can you share the link? I am currently reading through this https://github.com/osx86-ijb/Hackinabox#5-making-the-recovery-usb-on-linux But on the page about creating the installer it says that Monterey should be avoided since USB port mapping is broken. I wonder how relevant this is in a VM? Meanwhile, I am reinstalling a VM based on this video from Spaceinvader, since I want to trial the Geforce GPU and the HDMI/USB-over-IP extender to get a GPU-accelerated GUI. Basically, I am trialing a few installs before settling on a "final" production VM with a passthrough GPU. Thanks for any advice. If you feel more comfortable replying in German, just shoot me a message.
  2. I deleted the VM (it was my 1st Hackintosh ever) and started from scratch with a Big Sur installation, with the help of the SpaceInvader Macinabox docker & helper scripts. So far so good. The thread shared above says that MacPro7,1 isn't supported and one should stick with 1,1. How did you install your system @RiDDiX, since you are already running Monterey? What kind of CPU is detected in your VM? I'd like to set up my VM with the fewest possible patches / hacks. All this OpenCore / kext stuff screams instability and issues from my view. But I might be wrong.
  3. Added a 2x 120mm fan holder to improve the airflow around the NVME adapter, GPU and X570 chipset. I sometimes get a temperature warning for the NVME disks in the adapter card (just above 40°C, not really concerning) and want to address that. X570 temps dropped from 61° to 45-47°. The fans are 2x Noctua NF-S12B redux-1200 PWM. Also added a 4x PWM splitter with 3M tape on the backplate. Worth noting: the system would not boot if a fan is connected to the first "CPU" connector of the splitter. Updated photos below, also showing the final cabling.
  4. Reading into the RestrictedEvents kext now. Used this video to install the kext, aiming to be able to update to Monterey. Config.plist shows the restricted entry now, but the VM does not restart. Any chance to revert the changes through recovery mode? A few general questions: Can I change my iMac1,1 "version" to a newer iMac, and does that make sense / would that e.g. improve performance or compatibility? How do I change the CPU to a newer model? I read through pages in this thread, but it sounds to me like I am stuck with Penryn. Will also get an HDMI/USB-over-IP extender, which should allow me to get native speed at my desk vs. using a remote connection (Unraid is sitting in a closet next door). Thanks
  5. CPU is still reported as "Unknown". Can / should this be addressed?
  6. What can I say, it works. Performance is good, quite close to a native 2017 MacBook Pro. Can I change the # of CPU cores in the XML or does this require changes in OpenCore?
  7. 1. Done 2. Done 3. Done, and stuck in the boot process at this stage: I also added a USB keyboard and mouse to the VM (edited through the XML view) so I can pick the boot disk. What concerns me a little bit is that the Unraid host restarted after the screen stayed like this for a few minutes. Can the VM / GPU passthrough cause this?
  8. The GPU is recognized in Unraid, incl. the sound card, and I can assign and use it, for example, in a Win11 VM. The reason I didn't dump the vBIOS was simply to try to skip this step, since there are already vBIOS files available to download. Will look into this and retry with the dumped vBIOS. Regarding memory ballooning - since my host has 64GB of memory and I am not using all of it for the other VMs, and probably not for the Dockers either, I think I can leave that deactivated. edit / The dump seems to work, even though my GPU config is not standard. I am using the onboard ASPEED VGA as primary GPU (without a monitor) and plugged the GPU into the PCIe x1 port (the x16 port is occupied by an NVME card). Used the script from Spaceinvader and the dump looks OK to me. I didn't make any edits to it. Question to clarify: So I need to attach a monitor and configure the GPU in the OSX VM, then I can switch back to a VNC or Parsec connection? I do not plan to sit next to my Unraid host and want to work with OSX like I do with a Win VM through RDP.
  9. I built a GT710 into my system and it's being recognized. So I downloaded a BIOS from TechPowerUp and removed the header with a hex editor (hopefully I removed the right content). Edited my XML with some copy-paste from @RiDDiX's file. The VM is starting, but in the VNC window I see the following, where the boot process is stuck. The GPU part of my XML:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x25' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <rom file='/mnt/user/isos/Bios/GT710.rom'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x25' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
     </hostdev>

     And I changed:

     <memballoon model='virtio'>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </memballoon>

     to

     <memballoon model='none'/>

     Can I leave the VNC part in the XML to be able to monitor the VM remotely? Any other advice on how to fix this? Are there pre-edited BIOS files available (I'd like to rule this out as a source of failure)? I added the GT710 to a Win11 VM and it gets installed incl. the Nvidia apps. I haven't done any gaming testing so far, since I use the VMs for productivity only. Thanks
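The header strip done in the hex editor can also be sketched in a few lines of Python. This is only a sketch under one assumption: that the dump's sole problem is leading vendor data before the standard PCI option-ROM signature 0x55 0xAA. The function name is made up for illustration.

```python
def trim_vbios(data: bytes) -> bytes:
    """Return the ROM starting at the first 0x55AA option-ROM signature.

    NVIDIA dumps taken while the card was the boot GPU often carry a
    vendor header in front of this signature, which vfio's ROM loader
    rejects. Caveat: 0x55AA could in principle also occur inside that
    header, so eyeball the result in a hex editor before using it.
    """
    offset = data.find(b"\x55\xaa")
    if offset < 0:
        raise ValueError("no 0x55AA ROM signature found")
    return data[offset:]
```

Usage would be reading the dump, calling trim_vbios, and writing the result to the path referenced by the `<rom file='…'/>` entry in the XML.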
  10. OK, understood about the manual XML edits. How do I need to adjust the XML to get from 8 to 12 cores? Tried a few changes in the XML - the VM starts but does not boot up.
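For reference, a minimal sketch of the standard libvirt domain XML fields that control the core count. The element names are standard libvirt; the specific values here are assumptions, not this VM's actual config:

```xml
<!-- Hypothetical example: expose 12 cores to the guest.
     vcpu must equal sockets x cores x threads. -->
<vcpu placement='static'>12</vcpu>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='12' threads='1'/>
</cpu>
```

If the vcpu count and the topology product disagree, the VM can start but fail to boot. macOS guests are also sometimes reported to be picky about core counts, so if 12 fails, a power-of-two count like 8 or 16 is worth trying.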
  11. The adjustment to the XML fixed the issue and I could sign into iCloud / App Store etc. However, in the settings the NIC is still shown as Vmxnet. I'll treat this as cosmetic for now. Do I need to run the helper script every time, or will that override the manual changes I made to the XML?
  12. I'll stick with Big Sur for now, at least until I've gained decent knowledge about virtual Hackintoshes. So the GT710 should be fine, given the prices for other supported GPUs. Monterey isn't a big change as I see it. Will report back on the fixes for the virtual NIC.
  13. The QEMU log says this: -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:fd:62:ab,bus=pci.6,addr=0x1 \ The VM config XML is attached as plain text. Also, in terms of GPU - I am planning to add two Nvidia GT710s (like this: GeForce® GT 710 1GB PCIE x 1 | ZOTAC) to the setup. One for Big Sur, one for a Win11 VM. I read that the GT710 is supported - any advice / opinions on that?
  14. Looks like you know what you are talking about. I checked the network preferences. No DHCP IP assigned.
  15. My Big Sur VM is still using the Vmxnet network adapter, even though I changed it manually to e1000 and ran the helper script. And is there a chance to get rid of this pre-boot screen? Any advice?
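For comparison, a sketch of the libvirt `<interface>` block typically used for macOS guests. The e1000-82545em model is the Intel NIC variant macOS ships a driver for (plain e1000 is often not picked up). The MAC address and bridge name below are placeholders, not this VM's actual values:

```xml
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>   <!-- placeholder MAC -->
  <source bridge='br0'/>               <!-- typical Unraid bridge name; adjust -->
  <model type='e1000-82545em'/>        <!-- Intel model macOS recognizes -->
</interface>
```

If a helper script regenerates the XML on each run, it may be overwriting this model back to vmxnet3, which would explain the manual edit not sticking.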
  16. Hi folks, is there a VM snapshot feature that actually works in 6.10 - natively or through plugins? The VM Backup plugin from @JTok seems to be broken. Thanks
  17. Looks like all reasonable (max. TDP) AMD GPUs are sold out or priced like a decent car. I haven't followed that area too closely, but the mining / supply shortage still seems to have an impact on the market.
  18. Got it. I only want to remote into the VM, not sit in front of the Unraid host. Would a GPU also improve the VNC performance? If so, which cheap (no gaming) GPU should I pick? Ideally a passive one like the GeForce GT1030 or similar.
  19. Should a Macinabox Big Sur VM have similar performance to a physical Mac? I own both (the VM and a MacBook Pro) and there are light-years between them, although the VM sits on an NVME SSD and has 8GB of memory and 8 cores (Ryzen 9 CPU). I am not using any GPU. Any advice?
  20. I spent a few hours today updating both the BIOS and BMC to 1.20. Result - so far, everything works fine. Details: The most time-consuming part was creating a bootable DOS USB stick. I tried it through VMs and my Mac setup, but ended up with an old Windows laptop I had to revive, and used the Rufus tool. The BMC update took 10 minutes, and I did it remotely through the JViewer. The JViewer disconnects during the update; I pinged the BMC IP and reconnected once it came up again. The BIOS update also took approx. 10 minutes through BIOS / Instaflash, with the file on the same USB stick; that worked as well. A few things to note: some BIOS settings got reset, like the PCIe bifurcation I am using for my NVMEs. That caused a little heart attack when I first started the host. Had to go back to the BIOS and re-enable it. Also, the BMC admin password got reset to the default "admin", so I had to change that again. Below are 2 screens from the BIOS flash and the finished BMC update. Update regarding stability - the system has been stable since day one; I haven't had any issues so far, even with the more complex NVME setup.
  21. I updated the BIOS of my host, which turned the PCIe bifurcation off (I'm using it for addtl. NVME disks), and then rebooted. I realized that some NVME disks were missing and corrected the setting in the BIOS, then rebooted again. The NVME disks are listed now; however it says that when I start the array, all data on the NVME disks will be wiped. Any chance to get around that data loss? I have a backup which is a few days old, but would prefer not to lose the data from the cache drives. Thanks for any advice. /edit: after approx. 10 minutes the message disappeared and the devices turned from blue/new to green. I started the array and all services came up. Could this be a bug? My host is running the 6.10 RC1.
  22. @SpaceInvaderOne can you add a feature to add the build version to the filename, and an option on the container config page to keep N versions before deleting old ISOs? Thanks
  23. Thanks @alturismo. I came across the other thread and will wait for RC2, since I don't want to mess with rolling back the manual changes once swTPM becomes official. Meanwhile, I updated the VMs this morning with an ISO where I removed the TPM requirements (used NTLite). That's a hassle to do every few weeks, so I'm keeping my fingers crossed that RC2 isn't too far out.
  24. Seems like I don't have the extra folder in /boot/. Did you guys create this folder manually? And if so, how? I am using Cloud Commander as the primary file manager on my host, but it seems this container has no permissions to create a new folder in /boot/. Any advice? I am running 6.10 RC1. Thanks!
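One way around the container permission issue is to create the folder from the Unraid web terminal (Tools > Terminal) instead of a Docker-based file manager, since containers usually don't get write access to the flash device. A minimal sketch; on the real host the target would be /boot/extra, but a demo path is used here so the snippet is safe to run anywhere:

```shell
# Hypothetical sketch: create the "extra" folder. On the actual Unraid
# server, set BOOT=/boot (the USB flash device's mount point).
BOOT="${BOOT:-/tmp/flash-demo}"
mkdir -p "$BOOT/extra"
ls -ld "$BOOT/extra"
```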
  25. Similar issue here after the 6.10.0-rc1 update. Both automatic and manual backups don't create a disk snapshot, only the .fd and .xml files for the VM. Log attached. 20210830_1416_unraid-vmbackup.txt