
Everything posted by SpaceInvaderOne
-
Unraid 7.1: 9070XT Won't Passthrough
Last week I bought a Sapphire 9070 XT for testing on a server with a Z690 motherboard and a 13900K CPU. In early testing, I ran into what looks like the reset bug affecting this card. I didn't have much time to troubleshoot, but I was able to pass the GPU through to a VM without doing anything special: no need to bind it, no vbios. I simply added it to the VM as normal and it worked. However, there's a major issue for me: when I shut down the VM, the entire server hangs. This also happens when running a custom 6.14 kernel. I'm away on holiday for about 10 days, but once I'm back I'll be taking a proper look at this. If I find a workaround or solution, I'll post an update.
-
Unraid 7.1: 9070XT Won't Passthrough
Hi @Bala Bob. That's great to hear the GPU is passing through successfully now. One thing I'd be really interested to know: after you've shut the VM down, can you start it again without rebooting the server? That would tell us whether the GPU resets properly after the first passthrough.
-
Unraid 7.1: 9070XT Won't Passthrough
Oh, also the logviewer pic is not VM related, so don't worry about that.
-
Unraid 7.1: 9070XT Won't Passthrough
Hi Rick. OK, so that didn't work. Let's see if the GPU is in D0 when it first boots, and if and when it's in D3 after a boot. Let's try these steps and note the results of each.

Step 1 – Full Shutdown and Power On
1. Shut down your Unraid server completely
2. Wait 10-15 seconds after it shuts down
3. Power it back on
4. Do NOT start any VM
5. Open a terminal and run: cat /sys/bus/pci/devices/0000:03:00.0/power_state
6. Note whether it says D0 or D3

---

Step 2 – Unbind VFIO
1. Go to Tools > System Devices in the Unraid web UI
2. Uncheck "Bind to VFIO at boot" for:
- Your GPU
- Your GPU audio device
3. Reboot the server
4. After reboot, run again: cat /sys/bus/pci/devices/0000:03:00.0/power_state
5. Note the result again (D0 or D3)

---

Step 3 – Remove Framebuffer Block (if still D3)
1. Go to Main > Flash
2. Scroll to "Syslinux Configuration"
3. Find the line starting with: append initrd=/bzroot ...
4. Remove: video=efifb:off,vesafb:off
5. Click Apply and reboot
6. Run again: cat /sys/bus/pci/devices/0000:03:00.0/power_state

---

Step 4 – Allow AMDGPU Driver to Initialise GPU
1. Make sure VFIO is still not bound and the framebuffer block is still removed
2. Reboot the server
3. After booting, run: cat /sys/bus/pci/devices/0000:03:00.0/power_state
4. The GPU should now be in D0

Step 5 – BIOS Settings
Reboot and enter the BIOS. Change these settings if they are not already like this:
- Primary Display -- IGFX (iGPU)
- Multi-Monitor -- Enabled
- CSM -- Disabled
- Above 4G Decoding -- Enabled
- Resizable BAR (ReBAR) -- Disabled
Save and reboot. Run the power state check again: cat /sys/bus/pci/devices/0000:03:00.0/power_state

---

Step 6 – Try the VM
Without the vbios attached, and again without binding the GPU to VFIO at boot, try starting the VM with the GPU passed through.

---

After each test, report:
- The GPU power state (D0 or D3) on boot of the server
- Whether the VM starts
- Any error messages seen

Hopefully we can get it to work. But AMD GPUs are known to have reset issues, which your D3 state is closely related to.
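To save retyping the same cat command after every step above, here's a small helper sketch (it assumes the 0000:03:00.0 address from this thread; adjust it for your GPU):

```shell
#!/bin/bash
# Print a PCI device's power state, or "unknown" if the sysfs file
# can't be read (wrong address, device absent, etc).
power_state_of() {
  local f="$1"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "unknown"
  fi
}

# 0000:03:00.0 is the example GPU address from the steps above
power_state_of /sys/bus/pci/devices/0000:03:00.0/power_state
```

Run it after each step and note whether it prints D0 or D3.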
-
Unraid 7.1: 9070XT Won't Passthrough
Hey there, I noticed the title mentions a 9070 XT, but in the body you say you're passing through a Sapphire 7900 XT. Just wondered which model you're using? If it's actually a 7900 XT, you're not alone; a lot of people have reported similar passthrough issues with this card on the net.

A couple of things to try:

1. Bind GPU to VFIO
Go to Tools > System Devices in Unraid. Find your GPU and GPU audio device. Check "Bind to VFIO at Boot" and reboot the server.

2. Disable Framebuffer to Prevent Host from Using GPU
Modify the Syslinux Configuration. This prevents Unraid from using the GPU for console output and helps avoid D3 power state issues. Go to Main Tab > Flash and edit the syslinux config, adding video=efifb:off,vesafb:off, i.e.
label Unraid OS
menu default
kernel /bzimage
append video=efifb:off,vesafb:off initrd=/bzroot

3. Disable Resizable BAR (ReBAR)
Some RDNA3 GPUs have issues with passthrough when Resizable BAR is enabled. Try disabling it in the BIOS and see if it helps. You can also do it in the XML by using <rom bar="off"/>

Also, what is the card like after a cold boot of the server? Do you still have the issue, or is it only after restarting a VM etc.?

Also have a read here for a discussion on the 7900 and passthrough: https://forum.level1techs.com/t/the-state-of-amd-rx-7000-series-vfio-passthrough-april-2024/210242
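As a quick sanity check after step 1 (a sketch; 03:00.0 is an assumed address, check yours in System Devices), you can confirm which kernel driver currently claims the card. After a successful VFIO bind you'd expect vfio-pci; amdgpu means the host still owns it:

```shell
#!/bin/bash
# Extract the "Kernel driver in use" line from `lspci -k` style output on stdin.
driver_in_use() {
  awk -F': ' '/Kernel driver in use/ {print $2; exit}'
}

# Adjust 03:00.0 to your GPU's address
lspci -k -s 03:00.0 2>/dev/null | driver_in_use
```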
-
Help Needed: Moving a .OVA VM to unRAID
Do you know which OS the VM is? The size looks small compared to what's listed in the .ovf file (even with thin provisioning). In the .ovf file the sizes are:
test-disk1.vmdk 563 MB
test-disk2.vmdk 1 GB
test-disk3.vmdk 102 MB
test-disk4.vmdk 307 MB
But to convert, as Kilrah said, you should convert each to raw vdisks in Unraid. Put them in a temp share or folder, i.e. /mnt/user/temp_share, and cd into that share:
cd /mnt/user/temp_share
Then for each vmdk run:
qemu-img convert -f vmdk -O raw test-disk1.vmdk test-disk1.img
That will make them raw vdisks. Then make a new VM. Again, as said by Kilrah, the XP template should have the defaults that you need:
CPU: qemu64 (if that doesn't work you may have to emulate a 32-bit CPU, but that's more difficult as you need to change the XML)
RAM: 256 MB
BIOS: SeaBIOS
Chipset: i440fx
NIC: there is no pcnet32 in Unraid, so I would choose rtl8139
Best of luck
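The four conversions can be looped in one go. A minimal sketch, assuming the files sit in /mnt/user/temp_share as above:

```shell
#!/bin/bash
# Map a vmdk filename to its raw image name,
# e.g. test-disk1.vmdk -> test-disk1.img
raw_name() { printf '%s.img' "${1%.vmdk}"; }

# Convert every .vmdk in a directory to a raw vdisk alongside it.
convert_all() {
  local dir="${1:-/mnt/user/temp_share}"
  local f
  for f in "$dir"/*.vmdk; do
    [ -e "$f" ] || continue   # skip cleanly if no vmdk files are present
    qemu-img convert -f vmdk -O raw "$f" "$(raw_name "$f")"
  done
}

# Example: convert_all /mnt/user/temp_share
```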
-
Possible Corruption?
Hi, looking at your disk 3 in the diagnostics, the drive (a Seagate Barracuda) is an SMR drive, which has a big impact on performance. SMR drives maximise storage density by overlapping data tracks, but that makes them slower for sustained writes, because when writing they often need to rewrite adjacent tracks, leading to random slowdowns. This makes them particularly bad in high-write scenarios like large file copies, torrents, and NZB downloads, where frequent writes can cause speeds to drop drastically. This would explain why your downloads start fast and then slow to a crawl if they're writing to the array and not the cache. SMART shows the drive has been running for about five and a half years, which is quite old for a hard drive. There are no reallocated or pending sectors, so there's no immediate sign of total failure, but the performance issues you're seeing suggest the drive is degrading. The read and seek error rates are quite high, meaning the drive may be struggling to read data efficiently, but these aren't necessarily critical SMART errors. The load cycle count is also extremely high at 92,765 cycles, meaning the drive's heads have been parked and unparked a huge number of times, which contributes to mechanical wear. Overall, it just seems like an aging drive that is starting to struggle.
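If you want to eyeball those attributes yourself, here's a sketch (sdX is a placeholder for your disk 3 device):

```shell
#!/bin/bash
# Filter a `smartctl -A` report down to the wear/health attributes
# discussed above. Reads the report on stdin; `|| true` keeps the
# exit status clean when a report has none of these lines.
wear_attrs() {
  grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Load_Cycle_Count|Seek_Error_Rate|Raw_Read_Error_Rate' || true
}

# On a real system: smartctl -A /dev/sdX | wear_attrs
```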
-
Unraiders, What’s Your Server’s Biggest Job at the Moment?
I was thinking the other day, Unraid has come a long way since its inception, evolving from a simple NAS solution into a powerful platform for all sorts of things: media servers, virtualisation, self-hosting, automation, and so much more. But I'm curious, what's everyone's main use case in 2025? Are you running VMs, hosting game servers, experimenting with AI, or maybe even doing a bit of everything? So please vote, and let us know in the comments if you've got a unique setup!
-
Exciting News: Ed Rawlings Expands His Role with Unraid!
I just want to say a huge thank you to everyone for the kind words and support 😀 It truly means a lot. The Unraid community has always been something special, and I'm super excited to officially be part of the team. I can't wait to dive deeper into creating content, sharing guides, and engaging with all of you even more. Whether it's through tutorials, discussions, or helping with technical challenges, my goal is to make Unraid as accessible and powerful as possible for everyone everywhere, from home users to businesses. I'm really looking forward to what's ahead, and I appreciate you all.
-
Nextcloud-AIO Spaceinvader version with Tailscale domain on local network.
Hi, for anyone who would like to use a Tailscale subdomain with your Nextcloud AIO container, it can be done. Tailscale isn't installed inside the AIO containers; instead we use Tailscale on the server itself and enable serve. I thought I would put a quick video together to show how.
-
What is your default browser of choice?
Just Googled it (yeah, ironic, I know), and you're right. Firefox is Netscape's descendant. I feel like I just found out Clark Kent is Superman! lol
-
What is your default browser of choice?
Forget Chrome, Safari or Firefox, I’m just over here still waiting for Netscape Navigator to make its glorious comeback. 🤔
-
Docker Updates Fail
I don't think the Docker image is causing your current problem, @xrqp. If Docker were unable to write to the image, you would typically see space/write-related errors like:
No space left on device
Input/output error

Instead, having read @Squid asking @LanceG0d if jumbo frames are enabled, I think maybe he suspects an MTU mismatch, which I also think could possibly be your issue if you are using jumbo frames too. Consumer routers usually don't support jumbo frames. If you are using jumbo frames on your server, to test this, try running a ping command to the router that forces the do-not-fragment flag with a large packet size. That way you can see if your router supports the large MTU set on the server:
ping -M do -s 8000 your_routers_ip
For me, my router IP is 10.10.20.1, and running the command looks like this:
root@Nebuchadnezzar:~# ping -M do -s 8000 10.10.20.1
PING 10.10.20.1 (10.10.20.1) 8000(8028) bytes of data.
ping: sendmsg: Message too long
ping: sendmsg: Message too long
ping: local error: message too long, mtu=1500
ping: local error: message too long, mtu=1500
ping: sendmsg: Message too long
ping: sendmsg: Message too long
ping: sendmsg: Message too long

So for me I get the errors ("message too long" and "mtu=1500") because my router doesn't support the MTU of 8192 set on my server. So if you get "message too long" or "mtu=1500" errors, your router doesn't support jumbo frames either. In that case the server would be sending large packets that must be broken into smaller fragments. All these fragments need to arrive successfully and be reassembled; losing a single fragment means retransmitting the entire packet, so this can cause delays or failures, and may lead to the timeout errors you're currently experiencing when updating the containers. I would try setting the server back to 1500 and see if the issue still happens.
@LanceG0d, if you find the Docker image isn't your issue, I would also try setting the MTU back to 1500, as your router doesn't support a high MTU. I hope this helps.
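To check what your server's interface is currently set to, here's a sketch (the br0 interface name is an assumption, common on Unraid bridged setups; check yours with `ip link`):

```shell
#!/bin/bash
# Parse the MTU number out of `ip link show <iface>` output on stdin.
current_mtu() { grep -o 'mtu [0-9]*' | awk '{print $2; exit}'; }

iface="br0"   # assumption: adjust to eth0/bond0 for your setup
mtu=$(ip link show "$iface" 2>/dev/null | current_mtu)
if [ -n "$mtu" ] && [ "$mtu" -gt 1500 ]; then
  echo "$iface is using jumbo frames (mtu $mtu); consider dropping it to 1500"
fi
```

Note that changing the MTU with `ip link set` doesn't survive a reboot; on Unraid the persistent place to change it is Settings > Network Settings.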
-
[Support] SpaceinvaderOne - Macinabox
Hi, just pushed a fix, and now Macinabox supports Sequoia correctly, as well as the previous versions.
-
[Support] HomeAssistant_inabox
HomeAssistant_inabox

HomeAssistant_inabox is a powerful and easy-to-use Docker container designed to seamlessly download and install a fully functional Home Assistant VM onto an Unraid server. It simplifies the installation and management of Home Assistant by integrating a virtual machine (VM) setup directly from the official Home Assistant source.

Key Features
• Direct Download & Installation: Automatically downloads the Home Assistant .qcow2 image from the official Home Assistant release repository and installs it onto your Unraid server.
• Automated VM Setup: Handles the creation of a new VM configuration, dynamically building the XML template based on your environment and the highest QEMU version available.
• Automatic VM Monitoring: Regularly checks the status of the VM and restarts it if it has been shut down unexpectedly, ensuring Home Assistant is always available.
• Seamless Integration with Docker & VM WebUI: Combines Docker container management with VM monitoring. Clicking the “WebUI” link from the Unraid Docker tab will automatically redirect to the Home Assistant WebUI inside the VM.
• Dynamic IP Management: Automatically determines the internal IP address of the Home Assistant VM and updates the Docker WebUI redirect.

Getting Started
Follow these instructions to install and configure the HomeAssistant_inabox container on your Unraid server.

Installation
1. Go to the Unraid Apps tab (also known as CA Community Applications).
2. Search for HomeAssistant_inabox and click Install.
3. Configure the container variables as described below in the “Configuration” section.
4. Start the container. HomeAssistant_inabox will automatically download the latest Home Assistant image, create a VM, and set up the necessary configuration files.
5. Once the setup is complete, click the container’s “WebUI” button to access your Home Assistant instance.
Configuration
HomeAssistant_inabox relies on a few essential variables that need to be set through the Unraid Docker template. Below is a detailed description of each option and its purpose:

Container Variables
1. VMNAME
• Description: Set the name of the Home Assistant VM.
• Default: Home Assistant
• Purpose: This is the VM name that will be displayed in Unraid’s VM manager.
2. VM Images Location
• Description: Specify the location where the VM images are stored (e.g., your Domains share).
• Example: /mnt/user/domains/
• Purpose: Defines the storage path for the Home Assistant VM files on your Unraid server.
3. Appdata Location
• Description: Set the path where HomeAssistant_inabox stores its appdata and configuration files.
• Default: /mnt/user/appdata/homeassistantinabox/
• Purpose: Specifies where the container’s internal configuration and scripts are stored.
4. Keep VM Running
• Description: If set to Yes, the container will automatically monitor the Home Assistant VM and restart it if it’s not running.
• Default: Yes
• Purpose: Ensures that Home Assistant remains available, even after unexpected shutdowns.
5. Check Time
• Description: Defines the frequency (in minutes) for checking the status of the Home Assistant VM.
• Default: 15
• Purpose: Determines how often the container checks to see if the VM is running and needs to be started.
6. WEBUI_PORT
• Description: Set the port for accessing the Home Assistant WebUI through the container’s WebUI redirect.
• Default: 8123
• Purpose: Allows you to configure the WebUI access port for Home Assistant.

How It Works
HomeAssistant_inabox provides a robust solution by combining a Docker container with a full VM environment:
1. Direct Download & Installation:
• When the container is started, it automatically downloads the latest Home Assistant .qcow2 disk image from the official Home Assistant source.
• It then extracts and moves the image to your Unraid server’s specified domains location.
2. Dynamic VM Setup:
• The container dynamically builds a VM XML template for Home Assistant using the latest QEMU version available.
• This template is then used to automatically define a new VM on your Unraid server.
3. Automatic IP Detection:
• After the VM is started, the container uses the QEMU guest agent to retrieve the internal IP address of the VM.
• The IP address is then used to configure a redirect within the Docker container, making the “WebUI” link in Unraid’s Docker tab point directly to the Home Assistant WebUI inside the VM.
4. Monitoring & Restart Functionality:
• If Keep VM Running is set to Yes, the container will regularly check to see if the Home Assistant VM is running.
• If the VM is found to be shut down or paused, the container will attempt to start it automatically.
5. WebUI Integration:
• When you click the WebUI button for the HomeAssistant_inabox container, it dynamically redirects to the Home Assistant WebUI inside the VM using the IP address retrieved during the last check.
-
[Support] SpaceinvaderOne - Macinabox
NEW MACINABOX AT LAST !!

Now I know many people have been asking for updates to this container for a long time. So firstly, sorry for the wait, but hopefully it's worth it. I decided to fully rewrite it and add new features.

It now supports all macOS versions from High Sierra to Sonoma. Macinabox is now self-contained and no longer requires any helper scripts, being able to fix the XML itself. It dynamically builds the XML based not only on the choices made in the container template but also by checking the host for things like the latest QEMU version and building accordingly. Also, rerunning the container fixes XML problems, replacing missing custom XML and making sure the NIC is on the correct bus so macOS sees it and can use it. And if you have an odd core count, Macinabox will see this and remove the topology line to ensure the VM still boots.

I have made the container so it will be much easier for me to update in future, so expect more regular updates going forward. Any suggestions, please let me know and open a request on GitHub: https://github.com/SpaceinvaderOne/Macinabox

Here is a video showing its use.

Basic Usage
• Set notifications to Detailed.
• Configure Docker update notification to any option other than Never (e.g., Once per Day).

Installation Steps
1. Compliance: Ensure you are compliant with the Apple EULA for macOS virtual machines. Select whether you are compliant or not.
2. Select OS: In the template, choose the macOS version you wish to install.
3. VM Name: By default, Macinabox uses the OS name for the VM. If you prefer a different name, enter it in the Custom VM Name field (default is blank).
4. VDisk Type: Choose the vdisk type: Raw or QCOW2. The default option is Raw.
5. VDisk Size: Specify the size for the macOS vdisk in gigabytes. The default size is 100 GB.
6. Default Variables: You can leave all other settings as default. If needed, you can change the default locations for your domains, ISOs, or appdata shares.

Running the Container
• Launch the container. It will download the recovery media from Apple’s servers, name it, and place it in your ISO share.
• The container will create your vdisk and an OpenCore image for your VM, which will be placed in your domains share within a folder named after your VM.
• The XML for the VM will be dynamically generated based on your settings, and the container will perform checks on your server:
• It will check the version of QEMU installed and calculate the highest compatible version of Q35.
• It will set the default VM network source in the XML.
• It will add the appropriate custom QEMU command line arguments to the XML.

Notifications
During the container’s operation, you will receive notifications:
• When the installation media has been successfully downloaded.
• When the VM has been defined and is available on the VMs tab of your server.
• Or if any errors occur during the process.

Additional VM Configuration
Once the VM is installed, you may want to adjust it to your preferences:
1. On the Unraid VMs tab, click the VM and click Edit. Now modify the CPU core count and RAM amount as needed.

Fixing Broken XML Configuration
When making changes to the VM in the Unraid VM manager, the XML will be changed, making the VM XML incorrect for macOS, which requires specific configurations. To fix this:
• Run Macinabox again. It will check if a macOS VM with the specified name already exists. If it does, it will fix the XML instead of attempting to install another VM. (This step is necessary for both Unraid 6 and 7.)
• If you make any changes to the VM in the Unraid VM manager, be sure to run the container again to update the XML.

Starting the VM
1. Start the VM and open a VNC window to proceed with the installation.
2. Boot into the OpenCore boot loader and select the bootable image labelled macOS Base System. Press Enter to continue.

Installing macOS
1. Once booted into the recovery media, open Disk Utility to format your vdisk.
2. After formatting, close Disk Utility and select Reinstall macOS. Follow the wizard to complete the installation. The installation process may cause the VM to reboot approximately four times.

Hope you guys like the new Macinabox!
-
[Support] SpaceinvaderOne - Macinabox
When you see "Guest has not initialised the display yet", this is due to the nvram the VM is using. This can happen with any VM: Windows, Mac, Linux. Easiest fix: just delete the VM (but don't select to delete the disks). Run Macinabox again. It will see the disks are already there and will not need to download again. It will recreate the nvram and XML template. Then run the VM and you should be good.
-
Unraid listening interfaces (6.12 and later)
Check that NetBIOS is set to disabled in the SMB settings on your server. When enabled, it causes an issue with Samba over things like ZeroTier, Tailscale, etc.
-
Unraid listening interfaces (6.12 and later)
I had a support session with @mauriceatkinson@btconnect. this evening to address this problem. Maurice and I tested a script I made, which you can run using User Scripts, that fixes the issue. I have posted the script and how to use it on my GitHub: https://github.com/SpaceinvaderOne/Unraid-ZeroTier-Server-Restart-fix
-
Passthru considerations before moving to 6.90
The xen-pciback.hide parameter is not necessary to "stub" PCI devices for passthrough. Instead, you can simply bind them to VFIO, which can be done directly from the GUI. To do this, navigate to Tools > System Devices, and select the devices you want to pass through by ticking the box next to each. After rebooting your system, these devices will appear in the VM template, ready for use.
-
Windows 11 UEFI Shell
Hi, looking in your XML for the VM, I see that your cdrom is set as the 2nd boot device:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/isos/ubuntu-23.10.1-desktop-amd64.iso'/>
  <target dev='hda' bus='sata'/>
  <readonly/>
  <boot order='2'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

This should be the first boot device so you can install your OS. Go to the template, set the CD to boot order 1 and the vdisk to boot order 2. Save the template and start the VM; it will boot from the CD and you can install.
-
VM start crashes Unraid webui and all docker become unresponsive
I would need to see your XML for the VM; there is no VM XML in your diagnostics. When using the VM, are you passing through any devices?
-
ZFS snapshot and replication script
The script has had a change so that it can now do the following:
1. Snapshot and replicate to either a remote server or a local server. There can only be one destination in the script at present.
2. Snapshot and replicate a whole ZFS pool. You can add exclusions for datasets which you don't want included.
3. Replicate just one specified dataset.
4. Replicate with ZFS replication, or to a location using rsync if the destination doesn't have ZFS. (The source location must still be ZFS: for the rsync replication it mounts the newest snapshot, then rsyncs that to the destination.)
I will not add much more functionality to the script, otherwise it just becomes too complicated to set up. I do plan on (maybe) converting this to a plugin at some point in the future, and then I will add features.
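For anyone curious what the script automates under the hood, the core snapshot-and-send flow is roughly this (a minimal sketch; tank/data and backup/data are placeholder dataset names, not the script's actual configuration):

```shell
#!/bin/bash
# Build a dated snapshot name for a dataset,
# e.g. tank/data@auto-2025-01-31_1200
snap_name() { printf '%s@auto-%s' "$1" "$(date +%Y-%m-%d_%H%M)"; }

# Snapshot a source dataset and replicate it to a destination dataset.
# Assumes zfs is available; run only on a real system.
replicate() {
  local src="$1" dest="$2"
  local snap
  snap="$(snap_name "$src")"
  zfs snapshot "$snap"                    # take the snapshot
  zfs send "$snap" | zfs recv -u "$dest"  # local replication
  # remote instead: zfs send "$snap" | ssh otherhost zfs recv -u "$dest"
}

# Example (on a real system): replicate tank/data backup/data
```

The real script adds the incremental sends, exclusions, and rsync fallback on top of this basic flow.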
-
SSD IOMMU Passthrough can't see drive on boot (but can during Windows 11 install)
Hi, glad it's working. You won't see any difference in performance. You are not actually using a vdisk this way, you're directly passing the physical disk to the VM by its ID
-
VMS audio is stuttering and popping
Switching GPU and audio devices to use MSI interrupts in your Windows VM can reduce audio stuttering by optimising how interrupts are managed. Download this: https://www.dropbox.com/s/y8jjyvfvp1joksv/MSI_util.zip?e=2&dl=0 Then run the utility in the VM. It displays a list of devices and their current interrupt mode. Select all GPU and audio devices and set them to use MSI interrupts. Save your changes and reboot the VM to apply these settings.