Everything posted by SpaceInvaderOne

  1. OK, the template that the container makes assumes OVMF is in the /mnt/user/domains share, which has been the default for Unraid for a while now. You will just have to make a small adjustment in the template to reflect the location of your OVMF files and also where the disk images are.

```xml
<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
  <nvram>/mnt/user/domains/MacinaboxMojave/ovmf/OVMF_VARS.fd</nvram>
</os>
```

You should change this to the below (note the difference is swapping the location from domains to vms, as you have on your server):

```xml
<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
  <nvram>/mnt/user/vms/MacinaboxMojave/ovmf/OVMF_VARS.fd</nvram>
</os>
```

The template's disk locations also need to be changed from this:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/Clover.qcow2'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/Mojave-install.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/MacinaboxMojave/macos_disk.qcow2'/>
  <target dev='hde' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='4'/>
</disk>
```

to the below (again, with /domains swapped to your location, /vms):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/Clover.qcow2'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/Mojave-install.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/vms/MacinaboxMojave/macos_disk.qcow2'/>
  <target dev='hde' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='4'/>
</disk>
```

I mention the non-standard VM location in the video which accompanies this container (working on it as I type this! It will be at the top of the post when finished).
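If you would rather not edit every path by hand, the same swap can be scripted. A minimal sketch, assuming the template XML has been saved as MacinaboxMojave.xml (the filename and the two-line sample are just for illustration):

```shell
# Demo of the path swap on a stand-in copy of the template XML.
cat > MacinaboxMojave.xml <<'EOF'
<loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxMojave/ovmf/OVMF_CODE.fd</loader>
<source file='/mnt/user/domains/MacinaboxMojave/Clover.qcow2'/>
EOF
# Swap every occurrence of the default domains share for the vms share.
sed -i 's#/mnt/user/domains/#/mnt/user/vms/#g' MacinaboxMojave.xml
cat MacinaboxMojave.xml
```

Using `#` as the sed delimiter avoids having to escape all the slashes in the paths.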
  2. Hi, please ignore the linked video in the webui of the template for now. That is not the correct video guide for this container. I was testing and linked to my old macOS video from last year when making this container, then forgot to remove it from the template. (I have removed it from the template now and will only add it back when the new video guide is finished and uploaded later today.)

You don't need to change the resolution in the bios. The resolution is already set to 1080p in both the bios and the Clover config.

I have also seen the boot loop happen; it happened to me once yesterday. This was after the recovery media had downloaded the Catalina install and attempted to reboot to finish. I believe this just means the image hasn't downloaded correctly for some reason (maybe the Apple servers are very busy) and the image is corrupt. I have only had this happen once during my testing, and after reading your issue I ran a Catalina install again to test. During the install the image took a very long time to download, over an hour, and I have quite fast internet (450 down). However, after the reboot there was no boot loop and the install finished. Unfortunately, I would suggest just trying again. Remove the directory in the domains share where the files are:

```shell
rm -r /mnt/user/domains/MacinaboxCatalina/
```

If using a qcow2 image, maybe try a raw image instead (although I have installed to both image types successfully). Also, what CPU do you have in the server? It must support SSE 4.2 & AVX2.
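To confirm that CPU requirement, the flags can be checked from the Unraid terminal; sse4_2 and avx2 are the flag names as they appear in /proc/cpuinfo:

```shell
# Quick check that the host CPU advertises the SSE4.2 and AVX2
# instruction sets that the macOS installer needs.
if grep -qm1 'sse4_2' /proc/cpuinfo && grep -qm1 'avx2' /proc/cpuinfo; then
  echo "CPU supports SSE4.2 and AVX2"
else
  echo "CPU is missing a required instruction set"
fi
```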
  3. 09 Dec 2020. Basic usage instructions. Macinabox needs the following other apps to be installed:

CA User Scripts (Macinabox will inject a user script; this is what fixes the XML after edits are made in the Unraid VM manager)
Custom VM Icons (install this if you want the custom icons for macOS in your VM)

Install the new Macinabox, then:

1. In the template, select the OS which you want to install.
2. Choose auto (default) or manual install. (Manual install will just put the install media and OpenCore into your iso share.)
3. Choose a vdisk size for the VM.
4. In VM Images: put the VM image location (the vdisk for the VM will be put in this path).
5. In VM Images Again: re-enter the same location as above. Here it is stored as a variable, which will be used when Macinabox generates the XML template.
6. In Isos Share Location: put the location of your iso share. Macinabox will put named install media and OpenCore here.
7. In Isos Share Location Again: again, this must be the same as above. Here it is stored as a variable, which Macinabox will use when it generates the template.
8. Download method: leave as default unless for some reason method 1 doesn't work.
9. Run mode: choose between macinabox_with_virtmanager or virtmanager only. (When I started rewriting Macinabox I was going to use only virt-manager to make changes to the XML. However, I thought it much easier and better to be able to use the Unraid VM manager to add a GPU, cores, RAM etc., then have Macinabox fix the XML afterwards. I decided to leave virt-manager in anyway, in case it's needed. For example, there is a bug in Unraid 6.9 beta (including beta 35): when you have a VM that uses VNC graphics and you change that to a passed-through GPU, it adds the GPU as a second GPU, leaving the VNC in place. This was also a major reason I left virt-manager in Macinabox; for situations like this it's nice to have another tool. I show all of this in the video guide.)

After the container starts it will download the install media and put it in the iso share. Big Sur seems to take a lot longer than the other macOS versions, so to know when it's finished, go to User Scripts and run the macinabox notify script (in the background); a message will pop up on the Unraid webui when it's finished. At this point you can run the macinabox helper script. It will check whether there is a new autoinstall ready, then install the custom XML template into the VM tab. Go to the VM tab now and run the VM. This will boot into the OpenCore bootloader and then the install media. Install macOS as normal. After install you can change the VM in the Unraid VM manager: add cores, RAM, GPU etc. if you want. Then go back to the macinabox helper script, put the name of the VM at the top of the script, and run it. It will add back all the custom XML to the VM, and it's ready to run. Hope you guys like the new Macinabox.
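As a rough way to tell whether the install media has finished downloading without the notify script, you could poll the file size from a terminal. This is only a sketch: the image path and name below are assumptions for illustration, not necessarily what Macinabox names the file.

```shell
# Rough "is the download finished?" check: the install media is treated
# as complete once its size stops changing between two polls.
is_finished() {
  local f=$1 wait=${2:-10} s1 s2
  s1=$(stat -c%s "$f" 2>/dev/null || echo 0)   # size now (0 if missing)
  sleep "$wait"
  s2=$(stat -c%s "$f" 2>/dev/null || echo 0)   # size after the wait
  [ "$s1" -eq "$s2" ] && [ "$s1" -gt 0 ]       # unchanged and non-empty
}
# Hypothetical path/name; substitute whatever appears in your iso share.
if is_finished /mnt/user/isos/BigSur-install.img 5; then
  echo "download looks complete"
else
  echo "still downloading (or file not found)"
fi
```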
  4. PLEASE - PLEASE - PLEASE, EVERYONE POSTING IN THIS THREAD: IF YOU POST YOUR XML FOR THE VM HERE, PLEASE REMOVE/OBSCURE THE OSK KEY AT THE BOTTOM. IT IS AGAINST THE RULES OF THE FORUM FOR THE OSK KEY TO BE POSTED. THANK YOU.

The first Macinabox has now been replaced with a newer version, as below:

Original Macinabox, October 2019 -- no longer supported
New Macinabox, added to CA on December 09 2020

Please watch this video for how to use the container; it is not obvious from just installing the container. It is really important to delete the old Macinabox, especially its template, or else the old and new templates combine. Whilst this won't break Macinabox, you will have old variables in the template that are not used anymore. I recommend removing the old Macinabox appdata as well.
  5. Hi, great job! @limetech, the Vega 10 reset bug patch is in this release. Is the Navi patch also here, please?
  6. No, it's the peer setting in the plugin. Check the post here.
  7. Set the peer type to "remote tunneled access" rather than "remote access to server" (but you must add the peer tunnel address).
  8. It's still in heavy development and hasn't reached 1.0 yet, but people do think that it is very secure, and it uses proven cryptographic protocols. Peers are identified to other peers using small public keys, a bit like key-based authentication in SSH. It is very difficult to even see it running on a machine, because it doesn't respond to packets from peers it doesn't know, so a network scan won't show that WireGuard is running. Shouldn't you have asked that before setting it up, though! 😉
  9. Just add the 'peer tunnel address' manually. It says it's not used, but add it as below; the config and QR code will then be generated and work fine.
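For reference, the peer config that gets generated ends up looking roughly like the fragment below. Every value here is a placeholder for illustration, not something the plugin will produce verbatim; the 'peer tunnel address' you enter becomes the Address line.

```ini
[Interface]
# Peer tunnel address: the VPN-side address this peer uses
Address = 10.253.0.2/32
PrivateKey = <peer-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = your-unraid-server.example.com:51820
# "Remote tunneled access" routes the LAN subnet through the tunnel too
AllowedIPs = 10.253.0.0/24, 192.168.1.0/24
```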
  10. Here is a video showing how to move from the deprecated Limetech Plex container to either Linuxserver's, Binhex's, or the official Plex container.
  11. A series of videos about creating and using encrypted disks with the Unassigned Devices plugin:

How to format disks
How to mount disks
How to use multiple encrypted disks at once with different keyfiles
Auto-mounting using Unassigned Devices scripts
Using encrypted disks with multiple partitions

Hope it's useful.
  12. So this is a three-part series of videos on using encrypted unassigned disks on your server.

Part 1 covers the basics: how to easily format an unassigned encrypted disk using the cache swap technique, how to use an unassigned encrypted disk on a server which doesn't have an encrypted array, and basic mounting.

Part 2 is more advanced, showing how to easily mount any encrypted drive, how to mount and use multiple encrypted unassigned disks with different key-files, and how to automount disks easily.

Part 3 goes on to creating unassigned encrypted disks with multiple partitions, and how to create and format encrypted drives from the command line.

Hope it's useful.
  13. Hi @Nick_J, glad that it's working. One quick thing that you can try too is to check whether write cache is enabled on the drives. On the webui you will see each disk has an id, i.e. sdb, sde, sdf etc. For each disk use this command (the example here is for my disk sdb):

```shell
hdparm -W /dev/sdb
```

If it doesn't report write-caching as on for the drive, you can enable it with:

```shell
hdparm -W 1 /dev/sdb
```
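The check-then-enable step can be rolled into one small loop. The device range below is only an example; substitute your own device letters:

```shell
# Check, and if needed enable, write caching on each data disk.
# sd[b-f] is an example range, not a recommendation.
for d in /dev/sd[b-f]; do
  [ -b "$d" ] || continue      # skip names that don't exist on this host
  hdparm -W "$d" || continue   # show the current write-cache state
  hdparm -W 1 "$d" || true     # turn write caching on
done
echo "write-cache check finished"
```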
  14. Let me know the exact make and model of the GPU and I will send you an edited vbios to use with the card.
  15. A few things to try. Can you try running a VM with just one GPU in the system, the 980 Ti? Have that GPU in slot one and see if it runs OK that way. Here is the vbios for your card, downloaded from here; I have already hex edited it. Please try passing the card through with it: Asus.GTX980Ti.6144.150618.rom

The GTX 650, the other card that you have, may or may not support UEFI. If you can only start the machine with SeaBIOS for that card, then the GPU doesn't support UEFI. You can fix that if your card is the 2 GB version: I found a UEFI vbios here and have hex edited this one too. With this passed through (if yours is the 2 GB version and not the 1 GB), it should allow you to use the OVMF bios rather than SeaBIOS: Palit.GTX650.2048.131018.rom

Also, what do your IOMMU groups look like?

Lastly, I have found that issues like this can be bios related on the motherboard. You could try updating the bios (or even sometimes downgrading it) and it may help.
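For anyone curious what the "hex edit" on an Nvidia vbios actually does: a ROM dumped from a running card carries a vendor header in front of the real ROM image, and the real image always begins with the magic bytes 0x55 0xAA. The edit simply cuts everything before that magic. A toy demonstration on a stand-in file (fake.rom is invented here, not a real dump):

```shell
# Build a fake ROM: junk vendor header, then the real ROM body.
printf 'NVIDIA-HEADER-JUNK' > fake.rom            # 18 bytes of fake header
printf '\x55\xaa\x40VIDEO-ROM-BODY' >> fake.rom   # the real ROM starts here
# Find the byte offset of the first 0x55 0xAA signature...
off=$(LC_ALL=C grep -aboF "$(printf '\x55\xaa')" fake.rom | head -n1 | cut -d: -f1)
# ...and keep everything from that offset onwards.
dd if=fake.rom of=trimmed.rom bs=1 skip="$off" 2>/dev/null
od -An -tx1 -N2 trimmed.rom                       # trimmed ROM now starts 55 aa
```

On a real dump you would do the same thing in a hex editor, locating the 55 AA just above the "VIDEO" string.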
  16. Try removing GPU 3. Use GPU 1 with Unraid/VM 1 (you will need to pass through the vbios for it to work with a VM when it is the primary GPU; see here). Then use GPU 2 with VM 2, and see how that goes. Also, please give more info on your hardware, your IOMMU groups etc. Just post a diagnostics file from Tools > Diagnostics.
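To post your IOMMU groups, the usual snippet below lists every group and the PCI address of each device in it (devices that share a group generally have to be passed through to the same VM together):

```shell
# List every IOMMU group and the PCI devices inside it.
list_iommu_groups() {
  local d g
  for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue              # host has no IOMMU groups exposed
    g=${d#/sys/kernel/iommu_groups/}     # strip the path prefix...
    g=${g%%/*}                           # ...leaving just the group number
    echo "IOMMU group $g: ${d##*/}"
  done | sort -V
}
list_iommu_groups
```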
  17. Just as jonathanm said: config folders can be swapped from stick to stick; just keep the key file with the physical stick it belongs to. So just follow this procedure.

1. On the new machine, register from the trial to the Plus key.
2. Shut down both servers.
3. Put the flash drive from the old server (the Basic key one) into a PC.
4. Make a folder called "old server".
5. a. Copy the config folder from the old server's flash drive into the old server folder. b. Delete the config folder from the flash drive. c. Open the config folder (in the old server folder on your desktop) and move the Basic key into the old server folder (afterwards, make sure it is no longer in the config folder, i.e. it was moved, not copied).
6. Remove the flash drive.
7. Now put in the flash drive from the new server (the Plus key one).
8. Make a folder called "new server".
9. a. Copy the config folder into the new server folder. b. Delete the config folder from the flash drive. c. Open the config folder (in the new server folder on your desktop) and move the Plus key into the new server folder (again, make sure it was moved, not copied).

Now each flash drive has no config folder (they were deleted), and you have the config folders on your desktop in the old server and new server folders, along with their keys.

10. Move the Basic key from the old server folder into the config folder in the new server folder.
11. Move the Plus key from the new server folder into the config folder in the old server folder.

The flash drive which was originally from the new server should still be in the PC. Move the config folder from the old server folder onto this flash drive, then remove the drive. This flash drive now has a Plus key and the config of your old server. Put it in the old server and start up.

Now put the old server's original flash drive into the PC and copy the config folder from the new server folder onto it. This drive now has the config for the new server but with the Basic key. Put it in the new server and start the server.

If for any reason a server doesn't boot, put its flash drive in a PC and run the file makebootable.bat (but I don't think you should have to do this).
  18. Here is a video that shows what to do if you have a data drive that fails, you want to swap/upgrade it, and the disk you want to replace it with is larger than your parity drive. This shows the swap parity procedure: basically, you add the new larger drive, then have Unraid copy the existing parity data over to it. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive. Hope this is useful.
  19. The users section is where you can create users. These users can then be assigned permissions as to whether or not they can access the various shares that you have on the server (should you want to do so). Here you can also set a password for the root user, which is highly recommended; after that, to access the Unraid webui, ssh etc. you will need to enter the username root and the password you have set. The other users set up here will not have access to the webui etc.; their permissions are only used as the security settings for connecting to the shares.

Edit -- I really should read the question before answering! Didn't see the dashboard part, lol!!
  20. Just one thing to check. As you are using a Ryzen CPU, if you haven't already, you will need to make a change to your go file, or the server can freeze up, which may be what yours is doing. The go file is on the flash drive in the config folder. You need to add this line:

```shell
/usr/local/sbin/zenstates --c6-disable
```

So the go file should look like this:

```shell
#!/bin/bash
# Start the Management Utility
/usr/local/sbin/zenstates --c6-disable
/usr/local/sbin/emhttp &
```

Hope that helps.
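If you prefer to make the change from the terminal, the line can be inserted just before the emhttp line with sed. The demo below works on a local copy named go so it is safe to try anywhere; on a real server the file is /boot/config/go:

```shell
# Create a stock go file as a stand-in, then insert the C-state fix
# just before the emhttp line (only if it isn't already present).
cat > go <<'EOF'
#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
EOF
grep -q 'zenstates --c6-disable' go || \
  sed -i '/emhttp/i /usr/local/sbin/zenstates --c6-disable' go
cat go
```

The grep guard makes the edit idempotent, so running it twice won't add the line twice.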
  21. Hi @sibi78, so you want to use some security cams on your server. I wouldn't use a VM to do what you want; I would suggest using the Docker container Shinobi. It's very, very good in my opinion: it's lightweight and doesn't use many resources. I have mine recording to an old unassigned drive in the server (so I am not recording to the array, avoiding wear on that).

But if you want to use two VMs with the same physical disk passed through, then you can. You would pass the disk through by using its disk id. Open a terminal and type:

```shell
ls /dev/disk/by-id
```

That will show you your disks by their id; for example, here is what one of mine looks like. To pass through a whole disk (all partitions) to one VM, set the location to manual, then add a path like this, but with your own disk id:

/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0ZDV2PF

With this you could first pass the disk to a live GParted instance and partition it up. Afterwards you can pass through each partition to a different VM (installing the OS on each partition). Individual partitions are passed through by adding -partX, X being the number of the partition. For example, you would use this for the location of partition 1:

/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0ZDV2PF-part1

and if you had a second partition you would add it like this:

/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0ZDV2PF-part2

and so on for each partition that you had created. But don't have two VMs running and try to pass through the same partition to both.

I would really suggest giving Shinobi a try, though. I have been meaning to finish a video for it that I started making months ago, then kinda forgot about it and did something else. But I'm glad that your post has reminded me that I must finish it off.
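The -partX naming is purely mechanical, as this small sketch shows (the disk id is the example one from above; the ls at the end only runs on systems that actually have a /dev/disk/by-id directory):

```shell
# The stable by-id path for partition N is the whole-disk id plus "-partN".
disk=/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N0ZDV2PF
for n in 1 2; do
  echo "VM $n gets: ${disk}-part${n}"
done
# On a real server, list whole-disk ids (partition entries filtered out):
{ [ -d /dev/disk/by-id ] && ls /dev/disk/by-id | grep -v -- '-part'; } || true
```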