
SpaceInvaderOne

Community Developer
  • Posts: 1,747
  • Joined
  • Days Won: 30

Everything posted by SpaceInvaderOne

  1. If you already have the full installer created from the App Store on a Mac, you can just use that instead of the base image that the container downloads. Just link it here:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxCatalina/Catalina-install.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>

Have a look at my earlier macOS videos (the last was for Mojave) for how to manually install macOS and create install media from the App Store. The video is a year old but still relevant. https://www.youtube.com/watch?v=YWT4oOz2VK8

Unfortunately I can't include any local install media directly in the container, as it would then contain copyrighted material. I can only have it download from the Apple servers; that way the container itself doesn't actually contain any software. However, giving the user the option to say they already have install media would be possible. I could add an option to the template that lets the user select their own image on the server. The container could then copy that image, rename it, and place it in the correct location so it is compatible with the XML.
  2. With a macOS VM there are custom entries in the XML. If you look at the end of the first XML you posted and the second, you will see the difference. Because of this, any changes you need to make to the template can't be done in the Unraid template manager, as it removes custom edits at present. So edit the XML and put this in its place:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>MacinaboxMojave</name>
  <uuid>fe4cdace-a7b4-456d-9885-c28ea78fde83</uuid>
  <description>MacOS Mojave</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="MacOS" icon="/mnt/user/vms/MacinaboxMojave/icon/mojave.png" os="Mojave"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
    <nvram>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/vms/MacinaboxMojave/Clover.qcow2'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vms/MacinaboxMojave/Mojave-install.img'/>
      <target dev='hdd' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/mnt/user/vms/MacinaboxMojave/macos_disk.qcow2'/>
      <target dev='hde' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='4'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:c8:13:3a'/>
      <source bridge='br0'/>
      <model type='vmxnet3'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-usb'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-mouse,bus=usb-bus.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=Taken out as not allowed on the forums'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
  </qemu:commandline>
</domain>

Please note I have changed one line that you will need to change back: <qemu:arg value='isa-applesmc,osk=Taken out as not allowed on the forums'/> This should contain the OSK key. You can see it in your first post with the XML (but please edit your post to remove it, due to forum rules... thanks).
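After hand-editing a block of XML this large, it is easy to drop a quote or a closing tag, so a quick well-formedness check before saving the template can save a failed VM start. A minimal sketch (the file path and contents below are just an example, not the actual template; python3's stdlib parser is used because it is present on most systems, and `xmllint --noout <file>` does the same basic check if libxml2 is installed):

```shell
# Write a small example domain XML, then confirm it parses.
cat > /tmp/demo-domain.xml <<'EOF'
<domain type='kvm'>
  <name>MacinaboxMojave</name>
</domain>
EOF
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/demo-domain.xml')" \
  && echo "XML OK"
```

If the parse fails, python3 prints the line and column of the first error, which is usually enough to find the broken tag.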
  3. OK, the template that the container makes assumes OVMF is in the /mnt/user/domains share, which has been the default for Unraid for a while now. You will just have to make a small adjustment in the template to reflect the location of your OVMF files and disk images.

<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
  <nvram>/mnt/user/domains/MacinaboxMojave/ovmf/OVMF_VARS.fd</nvram>
</os>

You should change this to the below (the only difference is swapping the location from domains to vms, as you have on your server):

<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
  <nvram>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_VARS.fd</nvram>
</os>

The template's disk locations also need to be changed from this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/Clover.qcow2'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/Mojave-install.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/macos_disk.qcow2'/>
  <target dev='hde' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='4'/>
</disk>

to the below (again, /domains swapped to your location, /vms):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/Clover.qcow2'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/Mojave-install.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/macos_disk.qcow2'/>
  <target dev='hde' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='4'/>
</disk>

I mention the non-standard VM location in the video which accompanies this container (working on it as I type this! It will be at the top of the post when finished).
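Since the /domains to /vms swap is a mechanical find-and-replace, it can also be done in one pass with sed rather than by hand. A sketch, demonstrated on a single sample line (on the server you would run the same expression with `sed -i` against a backup copy of the VM's XML file):

```shell
# One sample line from the template; the same substitution also covers the
# <loader>, <nvram>, and the other two <source file=...> paths.
line="<source file='/mnt/user/domains/MacinaboxMojave/Clover.qcow2'/>"
echo "$line" | sed 's|/mnt/user/domains/|/mnt/user/vms/|g'
# prints: <source file='/mnt/user/vms/MacinaboxMojave/Clover.qcow2'/>
```

Using `|` as the sed delimiter avoids having to escape every `/` in the paths.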
  4. Hi, please ignore the linked video in the webui of the template for now. That is not the correct video guide for this container. I was testing and linked to my old macOS video from last year when making this container, and then forgot to remove it from the template. (I have removed it from the template now and will only add it back when the new video guide is finished and uploaded later today.) You don't need to change the resolution in the bios; the resolution is already set to 1080 in both the bios and the Clover config. I have also seen the boot loop happen; it happened to me once yesterday, after the recovery media had downloaded the Catalina install and attempted to reboot to finish. I believe this just means the image hasn't downloaded correctly for some reason (maybe the Apple servers are very busy) and the image is corrupt. I have only had this happen once during my testing, and after reading your issue I ran a Catalina install again to test. During the install the image took a very long time to download, over an hour, and I have quite fast internet (450 down). However, after the reboot there was no boot loop and the install finished. Unfortunately I would suggest just trying again. Remove the directory in the domains share where the files are: rm -r /mnt/user/domains/MacinaboxCatalina/ If using a qcow2 image, maybe try a raw image instead (although I have installed to both image types successfully). Also, what CPU do you have in the server? It must support SSE 4.2 & AVX2.
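To confirm the host CPU has those instruction sets, you can check the feature flags Linux reports. A minimal sketch (sse4_2 and avx2 are how these features are named in /proc/cpuinfo; check_flag is just a hypothetical helper name for this example):

```shell
# Report whether a given CPU feature flag appears in /proc/cpuinfo.
check_flag() {
  if grep -qm1 "\b$1\b" /proc/cpuinfo; then
    echo "$1: supported"
  else
    echo "$1: missing"
  fi
}
check_flag sse4_2
check_flag avx2
```

If either flag comes back missing, the macOS VM's Penryn CPU line would need adjusting or the install is likely to fail.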
  5. 09 Dec 2020 Basic usage instructions. Macinabox needs the following other apps to be installed:
CA User Scripts (macinabox will inject a user script; this is what fixes the xml after edits are made in the Unraid VM manager)
Custom VM icons (install this if you want the custom icons for macOS in your VM)
Install the new macinabox, then:
1. In the template, select the OS which you want to install.
2. Choose auto (default) or manual install. (Manual install will just put the install media and OpenCore into your iso share.)
3. Choose a vdisk size for the VM.
4. In VM Images: put the VM image location here (the vdisk for the VM will go in this path).
5. In VM Images Again: re-enter the same location as above. Here it is stored as a variable, which will be used when macinabox generates the XML template.
6. In Isos Share Location: put the location of your iso share here. Macinabox will put the named install media and OpenCore here.
7. In Isos Share Location Again: again, this must be the same as above. Here it is stored as a variable; macinabox will use it when it generates the template.
8. Download method: leave as default unless for some reason method 1 doesn't work.
9. Run mode: choose between macinabox_with_virtmanager or virtmanager only. (When I started rewriting macinabox I was going to use only virt-manager to make changes to the XML. However, I thought it much easier and better to be able to use the Unraid VM manager to add a GPU, cores, RAM etc., and then have macinabox fix the XML afterwards. I decided to leave virt-manager in anyway, in case it's needed. For example, there is a bug in Unraid 6.9 beta (including beta 35): when you have a VM that uses VNC graphics and you change that to a passed-through GPU, it adds the GPU as a second GPU, leaving the VNC in place. This was another major reason I left virt-manager in macinabox; for situations like this it's nice to have another tool. I show all of this in the video guide.)
After the container starts it will download the install media and put it in the iso share. Big Sur seems to take a lot longer than the other macOS versions, so to know when it's finished, go to User Scripts and run the macinabox notify script (in the background); a message will pop up on the Unraid webui when the download is done. At this point you can run the macinabox helper script. It will check whether there is a new autoinstall ready, then install the custom XML template into the VM tab. Go to the VM tab now and run the VM. It will boot into the OpenCore bootloader and then the install media. Install macOS as normal. After install you can change the VM in the Unraid VM manager: add cores, RAM, a GPU etc. if you want. Then go back to the macinabox helper script, put the name of the VM at the top of the script, and run it. It will add back all the custom XML to the VM, and it's ready to run. Hope you guys like the new macinabox.
  6. PLEASE - PLEASE - PLEASE, EVERYONE POSTING IN THIS THREAD: IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED... THANK YOU.
The first macinabox has now been replaced with a newer version, as below.
Original Macinabox, October 2019 -- no longer supported.
New Macinabox, added to CA on December 09 2020.
Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old macinabox, especially its template, or else the old and new templates combine. While this won't break macinabox, you will have old variables in the template that are no longer used. I recommend removing the old macinabox appdata as well.
  7. Hi, great job. @limetech, the Vega 10 reset bug patch is in this release. Is the Navi patch also here, please? https://forum.level1techs.com/t/navi-reset-kernel-patch/147547
  8. No, it's the peer setting in the plugin. Check the post here
  9. Set the peer type to 'remote tunneled access' rather than 'remote access to server' (but you must add the peer tunnel address).
  10. It's still in heavy development and hasn't reached 1.0 yet, but people do think it is very secure, and it uses proven cryptographic protocols. Peers are identified to other peers using small public keys, a bit like key-based authentication in ssh. It is very difficult even to see it running on another machine, because it doesn't respond to packets from peers it doesn't know, so a network scan won't show that wireguard is running. .................but............lol, shouldn't you have asked that before setting it up! 😉
  11. Just add the 'peer tunnel address' manually. It says it's not used, but add it as below and the config and QR code will be generated and work fine.
  12. Here is a video showing how to move from the deprecated Limetech plex container to either Linuxserver's, Binhex's or the official plex container.
  13. A series of videos about creating and using encrypted disks with the unassigned devices plugin:
How to format disks
How to mount disks
How to use multiple encrypted disks at once with different keyfiles
Auto mounting using unassigned devices scripts
Using encrypted disks with multiple partitions
Hope it's useful
  14. So this is a three-part series of videos on using encrypted unassigned disks on your server.
Part 1 covers the basics: how to easily format an unassigned encrypted disk using the cache-swap technique, how to use an unassigned encrypted disk on a server which doesn't have an encrypted array, and basic mounting.
Part 2 is more advanced, showing how to easily mount any encrypted drive, how to mount and use multiple encrypted unassigned disks with different key-files, and how to automount disks easily.
Part 3 goes on to creating unassigned encrypted disks with multiple partitions, and how to create and format encrypted drives from the command line.
Hope it's useful
  15. The users section is where you can create users. These users can then be assigned permissions determining whether or not they can access the various shares you have on the server (should you want to do so). Here you can also set a password for the root user, which is highly recommended; after that, to access the Unraid webui, ssh, etc. you will need to enter the username root and the password you have set. The other users set up here will not have access to the webui etc.; their permissions are only used for the security settings when connecting to the shares. Edit -- I really should read the question before answering! Didn't see the dashboard part lol !!
  16. For those who want to downgrade the bios on motherboards using an AMI bios, I have put together a short guide here
  17. I think he was using the AFUWINGUI version, which is maybe why there was a difference.
  18. You can't use AFUWIN in Windows to do this (Windows has a system protection that stops the bios being changed to one that doesn't match, and setting AFUWIN to ignore this will just reboot you into the motherboard's bios upgrade page, so no good). I have almost finished a video guide on how to do this, which I will upload and link here later today. But for now, here is how.
Format a usb stick with a GPT partition (use Rufus in Windows to do this), then put the file attached here into the root of the flash drive (obviously unzip it first): EFI.zip
You will then have a folder on the flash drive called EFI, and inside it a folder called BOOT. Copy the downgrade bios that you want to use into that folder, but rename it bios.rom. Then boot the computer from the usb flash drive you have just made, and you will see the shell.
Now you need to change to the usb drive. Type:
fs0:
(yours may be different -- possibly fs2:, fs4:, etc.) then:
cd EFI
cd BOOT
Then type ls to list the files there. You should see the files, including bios.rom; then you will know you are in the correct directory. Now, to flash your bios (remember, flashing a motherboard bios has risks and it is possible to brick a board, so do this at your own risk), type:
Afuefix64 bios.rom /P /B /N /K /X
This will flash the bios onto the board. It will warn you, as in Windows, but just type y to continue. After flashing I would clear the CMOS and then set up the bios settings again as desired.
  19. I have made an updated video guide for setting up this great container. It covers setting up the container, port forwarding, and setting up clients on Windows, macOS, Linux (Ubuntu MATE) and on cell phones (Android and iOS). Hope this guide helps people new to setting up OpenVPN.
  20. Hi @johnnie.black, I thought I would chip in on this post. I logged into @uaeproz's server yesterday morning to help do some testing with a new array and a clean install. This was because his normal array (170tb, 11 data drives and 2 parity drives, encrypted xfs) will never start on any version of Unraid above 6.5.3; it always hangs after mounting the first or second drive in the array. He has been trying to upgrade to each new release of Unraid as they come out, hitting the same problem, and then having to downgrade back to 6.5.3 for his server to work correctly. What we decided to do yesterday was a clean install of 6.7.0 stable, then make a one-drive array and see if the problem persisted. He removed all of his normal data and parity drives from the server, and one 4tb drive was attached. An array was created with just that one data drive, on a clean install of Unraid 6.7.0 on the flash drive. The file system chosen was encrypted xfs (to match the normal array). On clicking 'start the array', the drive was formatted, but as the array started, the services began to start and it hung there, the gui saying "starting services". The array never fully became available. I looked at the data on the disk and saw that the system share/folder only had the docker folder and had not created the libvirt folder, so I assumed the vm service was unable to start but the docker service had. The server wouldn't shut down from the gui or the command line, so it had to be hard reset. On restarting the server, before starting the array, I disabled the vm service. This time the array started as expected. However, on stopping the array it hung on stopping the services and the array wouldn't stop; again it needed a hard reset. Next, with neither the docker service nor the vm service running, the array would start and stop fine. So then I tried starting the array without the docker or vm service running, and once the array had finished starting, manually starting the docker and vm services. This worked fine, and so long as these services were manually stopped before attempting to stop the array, the array would stop fine.
Next I deleted that array and made a new one using standard xfs (not encrypted) with the same 4tb drive. The array started fine with both the docker and vm services running, without issue, and could stop fine too. So basically everything worked as expected when the drive was not encrypted. From the results of those tests I was hoping that when the original drives went back in with the original flash drive and the system was upgraded to 6.7.0, the array would start if the docker and vm services were disabled. This wasn't the case. The array didn't finish mounting the drives; it stopped after mounting the first drive and had to be hard reset. So this is a strange problem. The OP has also tried removing all non-essential hardware such as the GPU, and tried moving the disk controller to a different PCIe slot. He has run memtest on the RAM, which passed. The diag file he attached to the post, if I remember correctly, was taken with one drive in the server formatted as encrypted xfs, starting the array with the vm service enabled; the array never finished starting, just stuck on starting services. That is when the file was downloaded, before the hard reset. Hope that helps.