SpaceInvaderOne

Community Developer
  • Posts

    1,750
  • Joined

  • Days Won

    30

Everything posted by SpaceInvaderOne

  1. HomeAssistant_inabox

HomeAssistant_inabox is a powerful, easy-to-use Docker container designed to seamlessly download and install a fully functional Home Assistant VM onto an Unraid server. It simplifies the installation and management of Home Assistant by integrating a virtual machine (VM) setup directly from the official Home Assistant source.

Key Features
• Direct Download & Installation: Automatically downloads the Home Assistant .qcow2 image from the official Home Assistant release repository and installs it onto your Unraid server.
• Automated VM Setup: Handles the creation of a new VM configuration, dynamically building the XML template based on your environment and the highest QEMU version available.
• Automatic VM Monitoring: Regularly checks the status of the VM and restarts it if it has been shut down unexpectedly, ensuring Home Assistant is always available.
• Seamless Integration with Docker & VM WebUI: Combines Docker container management with VM monitoring. Clicking the "WebUI" link on the Unraid Docker tab automatically redirects to the Home Assistant WebUI inside the VM.
• Dynamic IP Management: Automatically determines the internal IP address of the Home Assistant VM and updates the Docker WebUI redirect accordingly.

Getting Started
Follow these instructions to install and configure the HomeAssistant_inabox container on your Unraid server.

Installation
1. Go to the Unraid Apps tab (also known as CA Community Applications).
2. Search for HomeAssistant_inabox and click Install.
3. Configure the container variables as described in the "Configuration" section below.
4. Start the container. HomeAssistant_inabox will automatically download the latest Home Assistant image, create a VM, and set up the necessary configuration files.
5. Once the setup is complete, click the container's "WebUI" button to access your Home Assistant instance.

Configuration
HomeAssistant_inabox relies on a few essential variables that are set through the Unraid Docker template. Below is a description of each option and its purpose:

1. VMNAME
• Description: Sets the name of the Home Assistant VM.
• Default: Home Assistant
• Purpose: The VM name displayed in Unraid's VM manager.
2. VM Images Location
• Description: The location where VM images are stored (e.g., your domains share).
• Example: /mnt/user/domains/
• Purpose: Defines the storage path for the Home Assistant VM files on your Unraid server.
3. Appdata Location
• Description: The path where HomeAssistant_inabox stores its appdata and configuration files.
• Default: /mnt/user/appdata/homeassistantinabox/
• Purpose: Specifies where the container's internal configuration and scripts are stored.
4. Keep VM Running
• Description: If set to Yes, the container automatically monitors the Home Assistant VM and restarts it if it is not running.
• Default: Yes
• Purpose: Ensures Home Assistant remains available, even after unexpected shutdowns.
5. Check Time
• Description: The frequency (in minutes) at which the status of the Home Assistant VM is checked.
• Default: 15
• Purpose: Determines how often the container checks whether the VM is running and needs to be started.
6. WEBUI_PORT
• Description: The port for accessing the Home Assistant WebUI through the container's WebUI redirect.
• Default: 8123
• Purpose: Lets you configure the WebUI access port for Home Assistant.

How It Works
HomeAssistant_inabox provides a robust solution by combining a Docker container with a full VM environment:
1. Direct Download & Installation:
• When the container is started, it automatically downloads the latest Home Assistant .qcow2 disk image from the official Home Assistant source.
• It then extracts the image and moves it to the specified domains location on your Unraid server.
2. Dynamic VM Setup:
• The container dynamically builds a VM XML template for Home Assistant using the latest QEMU version available.
• This template is then used to automatically define a new VM on your Unraid server.
3. Automatic IP Detection:
• After the VM starts, the container uses the QEMU guest agent to retrieve the VM's internal IP address.
• That IP address is used to configure a redirect within the Docker container, making the "WebUI" link on Unraid's Docker tab point directly to the Home Assistant WebUI inside the VM.
4. Monitoring & Restart Functionality:
• If Keep VM Running is set to Yes, the container regularly checks whether the Home Assistant VM is running.
• If the VM is found shut down or paused, the container will attempt to start it automatically.
5. WebUI Integration:
• When you click the WebUI button for the HomeAssistant_inabox container, it dynamically redirects to the Home Assistant WebUI inside the VM using the IP address retrieved during the last check.
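The "Keep VM Running" behaviour described above can be sketched roughly like this (a minimal illustration, not the container's actual script; the VM name and interval are just the template defaults):

```shell
#!/bin/bash
# Sketch of the "Keep VM Running" check. Not the real container code;
# VM_NAME and CHECK_MINUTES mirror the template defaults.
VM_NAME="Home Assistant"
CHECK_MINUTES=15

ensure_running() {
  # virsh domstate prints e.g. "running", "paused", or "shut off"
  local state
  state=$(virsh domstate "$VM_NAME" 2>/dev/null)
  case "$state" in
    running) echo "ok" ;;
    paused)  virsh resume "$VM_NAME" && echo "resumed" ;;
    *)       virsh start  "$VM_NAME" && echo "started" ;;
  esac
}

# Loop example (commented out so the sketch is safe to source):
# while true; do ensure_running; sleep $((CHECK_MINUTES * 60)); done
```

The real container also queries the QEMU guest agent for the VM's IP after each check, which a sketch like this would bolt on separately.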
  2. NEW MACINABOX AT LAST !!

Now I know many people have been asking for updates to this container for a long time. Firstly, sorry for the wait, but hopefully it's worth it. I decided to fully rewrite it and add new features.

It now supports all macOS versions from High Sierra to Sonoma. Macinabox is now self-contained and no longer requires any helper scripts, as it can fix the XML itself. It dynamically builds the XML based not only on the choices made in the container template but also by checking the host for the latest QEMU version and building accordingly. Rerunning the container also fixes XML problems: replacing missing custom XML and making sure the NIC is on the correct bus so macOS sees it and can use it. And if you have an odd core count, Macinabox will detect this and remove the topology line to ensure the VM still boots.

I have built the container so it will be much easier for me to update in future, so expect more regular updates going forward. Any suggestions, please let me know and open a request on GitHub: https://github.com/SpaceinvaderOne/Macinabox

Here is a video showing its use.

Basic Usage
• Set notifications to Detailed.
• Configure Docker update notifications to any option other than Never (e.g., Once per Day).

Installation Steps
1. Compliance: Ensure you are compliant with the Apple EULA for macOS virtual machines. Select whether you are compliant or not.
2. Select OS: In the template, choose the macOS version you wish to install.
3. VM Name: By default, Macinabox uses the OS name for the VM. If you prefer a different name, enter it in the Custom VM Name field (default is blank).
4. VDisk Type: Choose the vdisk type: Raw or QCOW2. The default is Raw.
5. VDisk Size: Specify the size for the macOS vdisk in gigabytes. The default is 100 GB.
6. Default Variables: You can leave all other settings at their defaults. If needed, you can change the default locations for your domains, ISOs, or appdata shares.

Running the Container
• Launch the container. It will download the recovery media from Apple's servers, name it, and place it in your ISO share.
• The container will create your vdisk and an OpenCore image for your VM, which will be placed in your domains share within a folder named after your VM.
• The XML for the VM will be dynamically generated based on your settings, and the container will perform checks on your server:
  • It will check the installed QEMU version and calculate the highest compatible version of the Q35 machine type.
  • It will set the default VM network source in the XML.
  • It will add the appropriate custom QEMU command-line arguments to the XML.

Notifications
During the container's operation, you will receive notifications:
• When the installation media has been successfully downloaded.
• When the VM has been defined and is available on the VMs tab of your server.
• Or if any errors occur during the process.

Additional VM Configuration
Once the VM is installed, you may want to adjust it to your preferences:
1. On the Unraid VMs tab, click the VM and choose edit, then modify the CPU core count and RAM amount as needed.

Fixing Broken XML Configuration
When you make changes to the VM in the Unraid VM manager, the XML will be rewritten, making it incorrect for macOS, which requires specific configurations. To fix this:
• Run Macinabox again. It will check whether a macOS VM with the specified name already exists. If it does, it will fix the XML instead of attempting to install another VM. (This step is necessary on both Unraid 6 and 7.)
• Any time you change the VM in the Unraid VM manager, be sure to run the container again to update the XML.

Starting the VM
1. Start the VM and open a VNC window to proceed with the installation.
2. Boot into the OpenCore boot loader and select the bootable image labelled macOS Base System. Press Enter to continue.

Installing macOS
1. Once booted into the recovery media, open Disk Utility to format your vdisk.
2. After formatting, close Disk Utility and select Reinstall macOS. Follow the wizard to complete the installation. The installation process may cause the VM to reboot approximately four times.

Hope you guys like the new Macinabox!
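For reference, the odd-core-count fix mentioned above concerns the CPU topology element in the VM XML. With an even vCPU count, the XML normally carries a line like the one below (the values here are illustrative); when the count cannot be split cleanly into cores and threads, Macinabox simply omits the line so libvirt's defaults apply and the VM still boots:

```xml
<cpu mode='host-passthrough' check='none'>
  <!-- removed by Macinabox when the vCPU count is odd -->
  <topology sockets='1' dies='1' cores='2' threads='2'/>
</cpu>
```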
  3. When you see "Guest has not initialised the display yet", it is due to the NVRAM the VM is using. This can happen with any VM: Windows, Mac, or Linux. Easiest fix: just delete the VM (but don't select the option to also delete its disks), then run Macinabox again. It will see the disks are already there and will not need to download anything again; it will recreate the NVRAM and the XML template. Then run the VM and you should be good.
  4. Check that NetBIOS is set to disabled in the SMB settings on your server. When enabled, it causes an issue with Samba over things like ZeroTier, Tailscale, etc.
  5. I had a support session with @mauriceatkinson@btconnect. this evening to address this problem. Maurice and I tested a script I made, which you can run using the User Scripts plugin, that fixes the issue. I have posted the script and how to use it on my GitHub: https://github.com/SpaceinvaderOne/Unraid-ZeroTier-Server-Restart-fix
  6. The xen-pciback.hide parameter is not necessary to "stub" PCI devices for passthrough. Instead, you can simply bind them to VFIO, which can be done directly from the GUI. Navigate to Tools > System Devices and select the devices you want to pass through by ticking the box next to each. After rebooting your system, these devices will appear in the VM template, ready for use.
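After rebooting, you can confirm a device really is held by vfio-pci; the 03:00.0 address below is just an example, so substitute your own device's address:

```shell
# Pull out the "Kernel driver in use" line from lspci's verbose output;
# for a correctly bound device it should read vfio-pci.
driver_line() {
  grep 'Kernel driver in use'
}

lspci -nnk -s 03:00.0 2>/dev/null | driver_line
```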
  7. Hi, looking in your XML for the VM, I see that your CD-ROM is set as the 2nd boot device:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/isos/ubuntu-23.10.1-desktop-amd64.iso'/>
  <target dev='hda' bus='sata'/>
  <readonly/>
  <boot order='2'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

This should be the first boot device so you can install your OS. Go to the template, set the CD to boot order 1 and the vdisk to boot order 2, then save the template and start the VM. It will boot from the CD and you can install.
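After that change, the relevant boot order lines in the XML end up like this (fragment abridged; the other child elements of each disk stay as they are):

```xml
<disk type='file' device='cdrom'>
  <source file='/mnt/user/isos/ubuntu-23.10.1-desktop-amd64.iso'/>
  <boot order='1'/>
</disk>
<disk type='file' device='disk'>
  <!-- your existing vdisk definition -->
  <boot order='2'/>
</disk>
```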
  8. We would need to see your XML for the VM; there is no VM XML in your diagnostics. When using the VM, are you passing through any devices?
  9. The script has been updated and can now do the following:
1. Snapshot and replicate either to a remote server or a local server. There can only be one destination in the script at present.
2. Snapshot and replicate a whole ZFS pool. You can add exclusions for datasets you don't want included.
3. Replicate just one specified dataset.
4. Replicate with ZFS replication, or to a location using rsync if the destination doesn't have ZFS. (The source location must still be ZFS, as for the rsync replication it mounts the newest snapshot and then rsyncs that to the destination.)
I will not add much more functionality to the script, otherwise it just becomes too complicated to set up. I do plan on (maybe) converting this to a plugin at some point in the future, and then I will add features.
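The core snapshot-and-replicate idea can be boiled down to something like this (a rough sketch with made-up dataset names, not the actual script):

```shell
#!/bin/bash
# Sketch of snapshot + incremental ZFS replication.
# "tank/domains/myvm" and "backup/domains/myvm" are placeholder datasets.
SRC="tank/domains/myvm"
DST="backup/domains/myvm"

snap_name() {
  # e.g. autosnap_2024-01-31_1200
  echo "autosnap_$(date +%Y-%m-%d_%H%M)"
}

replicate() {
  local snap last
  snap=$(snap_name)
  zfs snapshot "${SRC}@${snap}"
  # Find the newest snapshot the destination already has, if any
  last=$(zfs list -H -t snapshot -o name -s creation "$DST" 2>/dev/null \
         | tail -n 1 | cut -d@ -f2)
  if [ -n "$last" ]; then
    # Incremental send of everything since the common snapshot
    zfs send -I "@${last}" "${SRC}@${snap}" | zfs recv "$DST"
  else
    # No common snapshot yet: full initial send
    zfs send "${SRC}@${snap}" | zfs recv "$DST"
  fi
}
```

The real script layers exclusions, remote destinations over SSH, and the rsync fallback on top of this basic send/recv pattern.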
  10. Hi, glad it's working. You won't see any difference in performance. You are not actually using a vdisk this way; you're directly passing the physical disk to the VM by its ID.
  11. Switching GPU and audio devices to use MSI interrupts in your Windows VM can reduce audio stuttering by optimising how interrupts are managed. Download this: https://www.dropbox.com/s/y8jjyvfvp1joksv/MSI_util.zip?e=2&dl=0 Then run the utility in the VM. It displays a list of devices and their current interrupt mode. Select all GPU and audio devices and set them to use MSI interrupts, save your changes, and reboot the VM to apply the settings.
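For reference, what the utility changes under the hood is the per-device MSISupported registry value. The device instance path below is a placeholder; the utility locates the correct key for each device you tick:

```
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<your-device-instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

Setting the value by hand in regedit achieves the same thing, but the utility is safer since it lists the devices for you.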
  12. Have you tried temporarily passing them through to a different VM to see whether it's a Windows issue or the host (Unraid)?
  13. If it is a Windows VM converted from a physical machine's hard drive, I would use the defaults but choose the Q35 chipset.
  14. Don't bind the SATA controller; simply pass through the SSD directly. Run this command in the terminal:

ls /dev/disk/by-id/

This will display a list of all the disks on the server by their ID. For example, here's what it shows on my server (though I've only included one SSD for clarity):

root@BaseStar:~# ls /dev/disk/by-id/
ata-CT2000MX500SSD1_2117E5992883@  ata-CT2000MX500SSD1_2117E5992883-part1@

To pass this SSD to the VM, go to the vdisk location in the VM template and, instead of selecting "none", choose manual and enter /dev/disk/by-id/ata-CT2000MX500SSD1_2117E5992883 (note that I removed the '@'). Here I am passing the entire disk, not just a partition (the second listing of my disk ends with -part1). I hope this helps!
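If the by-id listing is long, you can filter it down to the whole-disk entries with a small helper (the -partN lines are the partitions you don't want to pass):

```shell
# Drop partition entries so only whole-disk IDs remain.
whole_disks() {
  grep -v -- '-part'
}

ls /dev/disk/by-id/ 2>/dev/null | whole_disks
```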
  15. I would probably use the Q35 chipset over i440fx.
  16. IMO you have too much pinned on the first core. Unraid OS itself uses this core a lot for its own functions, which can contribute to your stuttering.
1. Isolate only the last 3 cores and pin those to the VM.
2. Leave the first core unpinned (0,6).
3. Pin your containers to cores 2 and 3 (1,7 and 2,8).
4. I wouldn't bother using emulatorpin; let Unraid handle that itself, as it will most likely use the first core anyway.
Give that a try. It may help.
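On a 6-core/12-thread CPU with thread pairs laid out as above, the resulting pinning section of the VM XML would look something like this illustrative fragment (cores 3, 4 and 5 with their hyperthreads 9, 10 and 11 going to the VM):

```xml
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='3'/>
  <vcpupin vcpu='1' cpuset='9'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='10'/>
  <vcpupin vcpu='4' cpuset='5'/>
  <!-- no emulatorpin element: left for Unraid to schedule -->
  <vcpupin vcpu='5' cpuset='11'/>
</cputune>
```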
  17. It's difficult to know why your VM is freezing; more info would be needed. But as an example of troubleshooting this, we can see the OP's problem from the log. The log he posted shows:

2020-09-15T18:08:05.594959Z qemu-system-x86_64: vfio_err_notifier_handler(0000:03:00.0) Unrecoverable error detected. Please collect any data possible and then kill the guest
2020-09-15T18:08:05.599453Z qemu-system-x86_64: vfio_err_notifier_handler(0000:03:00.1) Unrecoverable error detected. Please collect any data possible and then kill the guest

The message points to an unrecoverable error occurring with passthrough devices 0000:03:00.0 and 0000:03:00.1, which the diagnostics show is a 1050 Ti GPU:

03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
  Subsystem: eVga.com. Corp. GP107 [GeForce GTX 1050 Ti] [3842:6255]
  Kernel driver in use: vfio-pci
  Kernel modules: nvidia_drm, nvidia
03:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
  Subsystem: eVga.com. Corp. GP107GL High Definition Audio Controller [3842:6255]
  Kernel driver in use: vfio-pci

So for the original poster, this was causing his freezes. Why he was having this error is hard to know; it can be caused by different things, for example a hardware fault with the GPU or the motherboard. If I were advising the OP, I would say to:
1. Update the motherboard BIOS to the latest version.
2. Make sure to also pass through a vBIOS for the GPU.
3. Try putting the GPU in a different PCIe slot.
4. Try another GPU if possible and see if the problem continues.
What I have said here is specific to the OP, and your problem could be totally different. But if you find your issue is from passthrough, you could try this.
  18. Yes, it's very straightforward. We can use the qemu-img tool to convert a virtual disk image onto a physical disk. If the image is a raw image (the default in Unraid), you would use this command:

qemu-img convert -f raw -O raw /mnt/user/domains/vm_name/vdisk1.img /dev/sdX

If the image is a qcow2 image, the command would be:

qemu-img convert -f qcow2 -O raw /mnt/user/domains/vm_name/vdisk1.img /dev/sdX

Here /dev/sdX refers to the disk you want to copy to. Plug the disk that you want to use for the real machine into the server and it will get a letter, for instance /dev/sdg. Be careful that it's the correct one, as you don't want to write over an Unraid array disk etc. Also make sure the location of the vdisk for the VM is correct in this part: /mnt/user/domains/vm_name/vdisk1.img
  19. Please try a full reboot on the server. Also clear the cache on your browser.
  20. I wonder if a Windows update was responsible for this. The reason I think so is the restore point that you don't remember making: for significant Windows updates, like feature updates or semi-annual releases, Windows typically creates a restore point automatically. So I would guess this update caused the issue.
  21. Also a possibility: you are passing through both the GPU's audio device (which you should when passing through a GPU) and the motherboard sound device, so in Windows there will be two audio devices. I assume you are passing through the motherboard audio because that is your preference to use it. Windows will probably set the default output to the HDMI sound from your 3070, with the onboard as secondary. So if the sound is going through the Nvidia audio and you don't have speakers in your monitor, that is why. Try switching the default audio output to the secondary device. Just a possibility, but maybe why.
  22. I like keeping my VMs on a ZFS pool. I have a dataset "domains", then a dataset for each individual VM. From there you can snapshot each directly from the ZFS Master plugin, or, as I do, automatically using Sanoid. You can then use ZFS replication to replicate the VMs with their snapshots to another pool. This is very efficient and a great solution in my opinion. To make this easier, I have made some scripts to do it automatically. If you are interested in doing this, check out these two videos.
  23. Ha lol. Oh no Skynet knows about me, I must hide from the AI !! Thanks for your kind words and I am glad that you like the tutorials. Have a great day
  24. Hi. Let me see if I understand the issue. You are passing through an Intel WiFi device to a macOS VM. I assume you are also passing through an AMD GPU as well. The AMD card is resetting correctly, but the Intel WiFi device is not. Is the Intel WiFi device built into the motherboard? I wonder why you want to pass through a WiFi device when you can use a virtual NIC for internet on the VM. Is it because the WiFi device is Bluetooth and WiFi combined? If so, I would recommend just not using the WiFi device. Use the virtual NIC and buy a cheap £10 USB Bluetooth adapter for Bluetooth. Hope this helps!
  25. To anyone using EmbyServerBeta: there was an update about 8 hours ago which, for me, broke Emby:

Cannot get required symbol ENGINE_by_id from libssl
Aborted
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
** Press ANY KEY to close this window **

Temporary fix: change the repository from emby/embyserver:beta to emby/embyserver:4.9.0.5. Emby will then start and run. In a few days, after a new version is up and the issue is fixed, we can just set the tag back to beta.