danioj
Posted May 25, 2020 (edited)

What a monumental waste of time. Fun, but a waste of time. Unstable as can be. The system kept dropping the USB, and eventually I ended up with about 20 million reads and writes to the stick, which busted it completely. So I went back to my original plan: Ubuntu Desktop OS, Portainer and KVM. Nice to know: unRAID virtualised on a NUC just doesn't work! LOL! 🙂

I wanted to share the results of some of the tinkering I have been doing this weekend with an old Intel Celeron NUC, Ubuntu Server, KVM and unRAID. I will preface the remainder of the post by saying that I know there are many ways to do this, and that some people will find this silly and/or pointless from their perspective. I write this post more as a reference point for myself in the future - just in case I have to repeat it - but also because I like to share and write. When I read what I have written, it is almost like I am writing it to myself at some point in the future.

A number of things happened over the past week:

- I installed 2 x new Raspberry Pi 4s around the house to replace some old Intel Celeron NUCs that were running LibreELEC and PlexKodiConnect (the driver for this was no native 4K playback, which put significant strain on the unRAID server to transcode)
- Both I and the family started to get frustrated when I took down / restarted my main unRAID server a lot this week. I run my DNS server (Pi-hole) on it via a Docker container, so every restart meant loss of "Internet" - aka Instagram and trash TV - as the clients couldn't resolve DNS until the unRAID server came back up
- I was nicknamed Wreck-It Ralph as I "broke the Internet"
- I had spare hardware (2 x NUCs) lying around doing nothing.

I had only recently deployed Pi-hole on my network (which I did with more haste than thought) and I knew my setup was flawed from the beginning. Very similar to the days when I deployed pfSense in a VM on my unRAID server: every time the server went down or was restarted, there was an interruption to Internet service for the family in the household. Much like the pfSense situation, something had to change anyway.

My first consideration: given I had just deployed 2 x new Raspberry Pi 4s around the house that would run pretty much 24x7, I thought I could deploy Pi-hole on one of them. It is called "Pi"-hole after all. These use a 5GHz wireless connection to access the network (as they are in bedrooms). I paused. I quickly realised that a DNS server requires a more stable, wired connection. Plus, if I ever need to upgrade or reconfigure them, that may mean restarting the box, which will inevitably mean I'll hear "Ralph, have you broken the internet again..?", "my TV just went off" or "I was in the middle of the latest Made in Chelsea". Nah, the Pis were out.

I figured I could just deploy Pi-hole on my backup unRAID server, which gets much less attention: all it does is sit there turned off most of the day until 3am, when it wakes up, receives backups from the main unRAID server, pushes key files to a removable USB 3.0 drive for offsite storage, then turns itself off. Nah, didn't want to do that either. The backup unRAID server setup is stable and running just as I want it to.

Lightbulb moment. What if I utilised one of my NUCs? The little buggers could be powerful enough to do the job: they have 4GB of RAM, a nice fast Gigabit ethernet port and SSDs in them.
Intel NUC DN2820FYKH, 4GB RAM, 250GB SSD
https://ark.intel.com/content/www/us/en/ark/products/78953/intel-nuc-kit-dn2820fykh.html

Options I initially considered for how to use the NUC:

- Install Linux and install Pi-hole directly on the OS
- Install Linux, install Docker and deploy a Pi-hole container
- Install Linux, create a VM and deploy Pi-hole within the VM.

I considered the above options because, in the back of my mind (much like with my main and backup unRAID server setup), I am always thinking about availability and being able to get back up and running if something fails. The ability to back up a virtual deployment and redeploy quickly without having to reconfigure is gold to me. Have you tried making your home (and the people in it) completely used to and dependent on your setup, and then not having a plan to bring it all back when it fails? Because it will. Nope, it had to be wife proof.

Now, I have gone down a route like this before: playing with something new and, through deployment of a VM, an old PC etc., introducing something new to my setup. You know the drill: install Linux, deploy a VM, deploy something on it. Inevitably get frustrated when maintaining that VM or PC and/or the applications on it is not as simple as maintaining unRAID. It ends up getting removed/deleted, you revert your setup, and the fruitless time you spent was just the price you paid to keep yourself entertained one rainy weekend.

*stamps feet* I want what I currently have - i.e. Pi-hole installed via a Docker container on unRAID with everything that brings - but I don't want it impacted by me taking down the server. *repeats ramblings of above*

I'm taken back to previous frustrations (which I have gone over in my head ad nauseam):

- Why won't the unRAID software let me leave Docker or a VM running when the array is stopped?
- Why isn't running unRAID as a VM (so I could run other VMs alongside it) officially supported?
- Why isn't there some software out there (as good as unRAID) that I can use to manage VMs and Docker via a web GUI?

The third bullet above is the closest I have come to having an unRAID-esque setup on another machine, but as I also mention above, I often get sick of maintaining it. I am of course referring to:

- A Linux desktop host with KVM, Docker, VNC/RDP, SSH + some display emulator (required to allow me to VNC in)
- The command line for installing containers
- Portainer and Virt-Manager on the host to manage VM installation and installed Docker containers.

Urgh. So much more complicated than unRAID.

Then I remembered another thought I had some time ago. What if unRAID released a "Lite" version which was all about being a tool for virtualisation and not about being a NAS? LOL - I didn't share this idea with the community because I can imagine the responses I would get.

At that point, I got bored with thinking, opened YouTube and started watching some Home Assistant videos (another project of mine for a different time), and a SpaceinvaderOne video from back in 2016 came up, titled: "How to install unRAID as a virtual machine on another unRAID host"

https://www.youtube.com/watch?v=ZFzwihcphrg

I'd seen this before but chuckled again at the Russian Doll joke at the beginning of the video. I switched to some Googling while listening to the video, just randomly using search terms like 'VMs', 'Bare Metal', 'KVM', 'ESXi' etc. I started thinking AGAIN: given Grid (who I can hear taking me through the steps) can install unRAID on KVM (i.e. the hypervisor used by unRAID), then why can't I do the same on another PC? (My eyes start going to the NUC and back to the screen.)
Surely anything you can do with KVM in unRAID can be done with KVM elsewhere. (The ESXi route was out as it's not free, I've seen so many issues with it, and I just didn't want to explore it - I had a working solution right here from back in 2016.) If only KVM could be installed on a PC without an OS?

The first hit on Google for "Can you install KVM without an OS" gets you this:

https://stackoverflow.com/questions/26696477/can-we-install-kvm-without-any-operating-system-installed

Quote: "No, you don't need to have an operating system to install KVM, because it is part of the Linux kernel itself. The problem is you normally get a kernel with a Linux distribution; so people assume to use KVM you need to first install ____ (insert your favorite distribution). In fact, all you need to use KVM is a bootable USB stick with a KVM enabled kernel. You don't need an operating system on the host at all. You can do this yourself, or use something like Proxmox which is a bootable image that includes KVM and a GUI."

So it CAN be done. I didn't know that. Some of that looked a bit outside of my technical comfort zone though: a USB stick, a KVM-enabled kernel, no 'apt'-like software installation tool, etc. I was starting to shy away again. Then, in another forum, someone argued that given KVM is a type 1 hypervisor with direct access to hardware (as we all know, since we run unRAID), running a very light OS that consumes little to no resources is just like running KVM with no OS at all, with only the smallest of penalties. The suggestion was to go over and look at Ubuntu Server.

Plans started to form in my head. What if I install Ubuntu Server (headless, with no desktop environment) on the NUC and get it running with the most minimal of installs? Install KVM. Then (using what I've seen before as a guide) install unRAID on top of it. Given Ubuntu Server in this case is (and will always be) doing little to nothing, wouldn't this be just like running unRAID natively? I could create a small custom disk image to emulate a disk so the unRAID array would start (disk failure would never be much of an issue here, meaning the unRAID array would always start, and start fast), and then I could deploy Docker containers on unRAID as normal, using another virtual disk image to emulate a cache disk. With unRAID in memory and the SSD in the NUC, it should be fast. "Should be". LOL!

It should be easy to get Ubuntu Server to autostart KVM, KVM to autostart the unRAID VM, and then unRAID to autostart my Docker containers. Again, given I'd be passing pretty much every resource from the NUC that I can to the unRAID VM, and the host would be doing nothing, hopefully there would not be a performance hit. Hopefully the NUC is up to it. Hopefully. This is either going to work or be another one of those projects I throw away on a rainy weekend.
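A quick aside if you are following along on different hardware: it is worth confirming your CPU exposes hardware virtualisation at all before going any further. This check is my own addition - generic Linux, nothing NUC-specific:

egrep -c '(vmx|svm)' /proc/cpuinfo
# anything greater than 0 means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
ls /dev/kvm
# this device node should appear once a KVM-enabled kernel is running with the kvm module loaded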
I decided to try it out. Out comes the NUC. Off I go.

What did I use:

- My NUC, ready to go with the SSD and RAM already installed
- A small LCD display (needed to install the OS, but also useful later when we play with the network config)
- A USB stick (for unRAID)
- A second USB stick (for Ubuntu Server)
- A USB keyboard (I used a Logitech all-in-one via a Unifying USB receiver)
- An ethernet cable connected to my switch (for LAN and Internet access).

NOTE: You don't need to use the software I used. I chose it based on my own knowledge/preference and the laptop OS (OSX) I am running. You can of course substitute different software and get the same outcome. However, everything below is my procedure using my choice of software.

Here is a summary of the steps I took, which I detail in the rest of the post:

Step 1: Create the Ubuntu Server installation USB
Step 2: Install Ubuntu Server
Step 3: Install required software to use KVM
Step 4: Install and configure Network Manager
Step 5: Set up the network bridge
Step 6: Configure KVM to use the bridge we have just set up
Step 7: Time to pause and take a breath
Step 8: Create a bootable unRAID USB using your laptop or PC
Step 9: Install the Virt-Manager Docker container on your unRAID server
Step 10: Connect Virt-Manager to KVM on our NUC via SSH
Step 11: Plug in the unRAID USB stick
Step 12: Create the unRAID virtual machine
Step 13: Watch unRAID boot
Step 14: Configure unRAID

Step 1: Create the Ubuntu Server installation USB

I chose Ubuntu Server 20.04 to be the base OS of the NUC.

- Download Ubuntu Server from here: https://ubuntu.com/download/server

I chose balenaEtcher to burn the Ubuntu Server image we just downloaded to one of our USB drives.

- Download balenaEtcher here: https://www.balena.io/etcher/

It is VERY easy to use once installed.

- Plug in the USB stick: It doesn't matter whether it is USB 2.0 or USB 3.0 for Ubuntu Server (it does for unRAID).
- Select the image: Browse for the Ubuntu Server image file we just downloaded.
- Select the USB stick we want to "burn" the Ubuntu Server image to: Make sure you select the correct USB device, as this process will erase it.
- Burn the image: Click 'Flash'.
- Eject the USB stick from the computer: Use the safe-eject procedure for the OS being used. I used OSX.

Once the process has completed, you have a bootable USB stick which is ready to use to install Ubuntu Server 20.04.
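If you would rather not install balenaEtcher, dd from the command line does the same job. A sketch of what that looks like on OSX - the disk number here is hypothetical, so check yours with diskutil list first, because dd will happily overwrite whatever disk you point it at:

diskutil list
# identify your USB stick, e.g. /dev/disk2 (a placeholder - yours may differ)
diskutil unmountDisk /dev/disk2
sudo dd if=~/Downloads/ubuntu-20.04-live-server-amd64.iso of=/dev/rdisk2 bs=4m
diskutil eject /dev/disk2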
Step 2: Install Ubuntu Server

- Put the newly flashed USB stick into the NUC: Again, there is no need to be picky about whether the USB port is 2.0 or 3.0.
- Turn on the NUC.
- Hit F2 to enter the BIOS: You may find this is difficult if you have Fast Boot enabled. If so, just keep the power button pressed on the NUC for 2 seconds on power-on and it will automatically give you a menu to select from.
- Configure the BIOS to boot from the USB device: I had to turn off Fast Boot first, then turn on USB booting, as well as deal with the device boot order. (This could also have been achieved by selecting the option to 'Always boot USB devices first'.)
- Save and exit.

The NUC will now boot from the USB and take you to the text-based installation GUI.

WARNING: The procedure I am going through here will completely ERASE my SSD. I am OK with this; make sure you are too if you are following along. If not, don't proceed - back up your files first.

The steps below are a summary of the installation steps for Ubuntu Server 20.04.

On boot, you are presented with a familiar Ubuntu boot menu:
- Select 'Install Ubuntu Server'
- Press 'Enter'

You will then be asked to select the language for the system:
- Scroll through the list and select a language: You can select any language.
- Press 'Enter'

Should you have an image that was compiled with an older-than-current installer, the setup process will ask you if you want to update it before commencing the installation:
- Select 'Update to the new installer'
- Press 'Enter': This may take a little time.

You will then be asked to select your keyboard configuration:
- Select your keyboard layout: I chose Australia.
- Select your keyboard variant.
- Select 'Done'
- Press 'Enter'

You will then be asked to configure your network connections:
- Note down your interface name: Mine was "enp3s0", and it should be auto-configured.
- Note down the given IP address: Mine was "192.168.1.50", allocated by the network DHCP server.
- Select 'Done'
- Press 'Enter'

You will then be given the option to configure a proxy:
- Select 'Done': I don't use a proxy, but if you do, you may have some edits to make which I am not going to cover here.
- Press 'Enter'

You will then be given the option to configure a different Ubuntu archive mirror repository:
- Select 'Done': I was quite happy to leave the default options.
- Press 'Enter'

You will then be asked to use the guided storage configurator to prepare the NUC disk for the installation of Ubuntu Server:
- Select 'Use an entire disk'
- Press 'Spacebar': This will put an 'X' in the field. You will note that the SSD of the NUC has already been populated to be used. It is good practice to check this field if you have a multi-disk system, but as I knew I only had one, I was happy that it had selected the correct disk.
- Select 'Done': I was quite happy to leave the other fields with the default options.
- Press 'Enter'

You will then be presented with a file system summary:
- Select 'Done': This screen is just a summary of what you have selected, to review before continuing.
- Press 'Enter'

You will then be presented with a WARNING and asked to CONFIRM the pending destructive action:
- Select 'Continue': WARNING: This will completely ERASE the SSD. As above - make sure you are OK with this before proceeding.
- Press 'Enter'

You will then be asked to set up your Ubuntu profile:
- Enter your name: You don't have to use your real name.
- Enter a server name: I like to put some thought into this. I have naming conventions for all the devices on my network. This is what you would use to refer to the NUC on your network if you're not using its IP address. Write down what you choose.
- Enter a password: Make it strong, but remember that you ARE going to have to type it again a fair bit (via sudo, at the terminal prompt, or via SSH). Write down what you choose.
- Enter the password again.
- Select 'Done'
- Press 'Enter'

You will then be asked if you want to install 'OpenSSH server':
- Select 'Install OpenSSH server': We are going to need this later.
- Press 'Spacebar': This will put an 'X' in the field.
- Select 'Done'
- Press 'Enter'

You will then be asked to select featured Server Snaps to install:
- Select 'Done': Remember, this NUC is not going to do anything itself; it is the unRAID VM running on it that is going to have services running inside. Therefore we don't want to install any more software than is needed to host the VM.
- Press 'Enter'

The installation will begin.
- Wait, or go and grab a coffee.

The installation will complete.
- Select 'Reboot'
- Press 'Enter'
- Quickly remove your USB stick from the NUC: Do this before the system starts to boot again, otherwise you will boot straight back into the USB stick.

Ubuntu Server is now installed, and you should boot to the login prompt. You can test logging in with the username and password that you selected (and noted down) earlier.
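Optional, but before installing anything in the next step you can double-check that the box is ready for KVM using the cpu-checker package. This is my own sanity check, not part of the original procedure:

sudo apt install cpu-checker
sudo kvm-ok
# expected output if all is well:
# INFO: /dev/kvm exists
# KVM acceleration can be used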
Step 3: Install required software to use KVM

I decided to do this step at the console, as I knew I would be playing around with the network connections shortly afterwards, which makes using SSH problematic - especially if you mess up.

Update the package information for the sources we configured at install:

sudo apt update

Upgrade all packages that are currently installed:

sudo apt upgrade

Although we know that KVM is a module built into the kernel, that doesn't mean Ubuntu Server has all the packages it needs by default:

sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils

I like to do a reboot now:

sudo reboot

Log back in.

Step 4: Install and configure Network Manager

Just like with our unRAID VMs, we want unRAID to be able to get its own IP address on our network. I will utilise a network bridge to do this, and I installed Network Manager to make this easy (for me).

NOTE: I much prefer to use Network Manager rather than netplan for this - mainly because I don't know how to use netplan, as I am more familiar with Ubuntu Desktop. You can create a bridge in a number of other ways, and in fact I have read that what I am about to do is NOT recommended on Ubuntu Server (although I haven't found it explained why), BUT this is the way I know, so this is the way I have followed.

First we have to stop the current network services:

sudo systemctl disable systemd-networkd.service
sudo systemctl mask systemd-networkd.service
sudo systemctl stop systemd-networkd.service

Then we install Network Manager:

sudo apt-get install network-manager

Open up the netplan config in nano (my favourite command line text editor). My config file was called '00-installer-config.yaml':

cd /etc/netplan/
sudo nano 00-installer-config.yaml

Then edit the file until it looks like this:

network:
  version: 2
  renderer: NetworkManager

Now we need to exit nano and save the changes:
- Press Ctrl+X: Command to exit nano.
- Press Y: To confirm that you want to write the changes.

You should now be back at the command prompt. Funnily enough, we now use netplan to generate the required configuration files to use Network Manager:

sudo netplan generate

Now we start the Network Manager service:

sudo systemctl unmask NetworkManager
sudo systemctl enable NetworkManager
sudo systemctl start NetworkManager

Reboot:

sudo reboot

Log back in.

Network Manager doesn't manage all networks by default. If you tried the following command now, you would probably find that it shows nothing:

nmcli con show

If you ran this command though, you would find it lists your interfaces fine:

ip a

So we have to tell Network Manager to manage the interfaces. We do this by editing the Network Manager configuration as follows:

sudo nano /etc/NetworkManager/NetworkManager.conf

Then edit the following lines in the file:

[ifupdown]
managed=false

To this:

[ifupdown]
managed=true

Now we need to exit nano and save the changes:
- Press Ctrl+X: Command to exit nano.
- Press Y: To confirm that you want to write the changes.

You should now be back at the command prompt. Reboot:

sudo reboot

Log back in.
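As an extra check (my own addition), nmcli can also show you the device-level view, including whether each interface is now managed:

nmcli device status

It should output something like this:

DEVICE  TYPE      STATE      CONNECTION
enp3s0  ethernet  connected  Wired connection 1
lo      loopback  unmanaged  --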
If you now repeat the command we typed earlier to see if Network Manager is managing our interfaces, you will get a different result than nothing this time:

nmcli con show

It should output something like this:

NAME                UUID                                  TYPE      DEVICE
virbr0              b9203b8d-cab7-41eb-ab53-6038ef4f2b0d  bridge    virbr0
Wired connection 1  172d2699-df49-4d8e-b3dd-85825972fc08  ethernet  enp3s0

Now that Network Manager is managing our network interfaces, we can use it to set up our bridge.

Step 5: Set up the network bridge

Get your current network configuration and note it down (if you don't remember the output from earlier):

nmcli con show

From the output (see above for an example), note down the Name and the Device of the wired connection. In my case it was (note that the name is case sensitive):

Name: "Wired connection 1"
Device: "enp3s0"

Now we will add a bridge called br0, just like we are used to with unRAID:

sudo nmcli con add ifname br0 type bridge con-name br0

You will get a notice saying that the bridge has been successfully added.

Set the ethernet interface to be a slave to br0 (remember to substitute the interface name you captured above):

sudo nmcli con add type bridge-slave ifname enp3s0 master br0

You will get a notice saying that the bridge slave has been successfully added.

I have been advised that one should disable STP when creating a bridge like this. I admit to not knowing the science behind it, so here I am blindly following:

sudo nmcli con modify br0 bridge.stp no

You can check that this has been done by looking at the output of this command:

nmcli -f bridge con show br0

Using the case-sensitive name you captured earlier, we first have to take the wired connection down (you will see why we do this in the terminal):

sudo nmcli con down "Wired connection 1"

Now we turn on the bridge:

sudo nmcli con up br0

Now let's check that everything looks fine and is working:

sudo nmcli con show

It should output something like this:

NAME                 UUID                                  TYPE      DEVICE
virbr0               b9203b8d-cab7-41eb-ab53-6038ef4f2b0d  bridge    virbr0
Wired connection 1   172d2699-df49-4d8e-b3dd-85825972fc08  ethernet  enp3s0
bridge-slave-enp3s0  b0dbd471-20f9-4741-bdc1-8fc155928dc8  ethernet  enp3s0

Now let's test that it is working with a simple ping command:

ping google.com

It should output something like this:

PING google.com (142.250.67.14) 56(84) bytes of data.
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=1 ttl=53 time=29.2 ms
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=2 ttl=53 time=27.5 ms
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=3 ttl=53 time=38.1 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 27.521/31.590/38.065/4.628 ms

- Press Ctrl+C: To stop the ping command.
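For future reference (mine as much as yours), here are the bridge commands from this step gathered into one block, with the two machine-specific values pulled out as variables. The interface and connection names below are mine - substitute your own from nmcli con show:

IFACE="enp3s0"              # your wired interface
WIRED="Wired connection 1"  # your existing wired connection name
sudo nmcli con add ifname br0 type bridge con-name br0
sudo nmcli con add type bridge-slave ifname "$IFACE" master br0
sudo nmcli con modify br0 bridge.stp no
sudo nmcli con down "$WIRED" && sudo nmcli con up br0

The && on the last line matters if you ever run this over SSH rather than at the console: chaining the two commands means the bridge still comes up even though your session drops the moment the wired connection goes down.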
Step 6: Configure KVM to use the bridge we have just set up

Now we need to create an XML file in our home directory containing the bridge configuration:

nano ~/br0.xml

Add the following XML to the empty file:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

Now we need to exit nano and save the changes:
- Press Ctrl+X: Command to exit nano.
- Press Y: To confirm that you want to write the changes.

You should now be back at the command prompt. Now we will define the network in KVM, using the XML file to provide the configuration:

virsh net-define ~/br0.xml

Now let's start the newly defined bridge in KVM:

virsh net-start br0

Now let's set the newly defined bridge to autostart:

virsh net-autostart br0

Now let's check that everything looks fine:

virsh net-list --all

It should output something like this:

Name      State    Autostart   Persistent
-------------------------------------------
br0       active   yes         yes
default   active   yes         yes

Step 7: Time to pause and take a breath

There is now an Ubuntu Server running on the NUC, KVM is installed on it, a network bridge has been set up, and KVM is configured to use it. You can now unplug your monitor from the NUC and perhaps even put it in its permanent home. We will not need to physically access it again (barring sticking a USB into it), as we will be doing everything else via web interfaces and the command line over SSH.

Step 8: Create a bootable unRAID USB using your laptop or PC

I am not going to detail the steps for this - you have already done it before. For reference, the link to the steps is here:

https://wiki.unraid.net/USB_Flash_Drive_Preparation

NOTE: Just make sure you make the USB UEFI bootable. The procedure for this differs depending on whether you use the manual or automated process.

Step 9: Install the Virt-Manager Docker container on your unRAID server

The next step is going to allow us to use a brilliant Linux tool called virt-manager to create the unRAID virtual machine on the little Ubuntu Server without needing to use the command line.

Now, if you are working from a Linux machine with a GUI (or have one handy), you could skip this step, as installing it is as easy as:

sudo apt install virt-manager

However, if you don't, then this is a nice, easy and fast alternative (for us unRAIDers) which doesn't require installing a VM on your non-Linux PC or laptop, or using dodgy OSX Homebrew builds (the option I tried, wasting a good hour of my life). There are two Docker containers available in Community Applications that will let you access the virt-manager desktop application from a web browser on your network, served up by your unRAID server. This is one of the reasons why we love the virtualisation side of unRAID so much and are working so hard here to have another instance of it going, right? 🙂

Log into your unRAID server using your browser.
- Go to Community Applications.
- Search for "virt-manager".

As I mentioned, two options will appear. I chose the one that was not labelled BETA - the one made and maintained by djaydev:

djaydev/docker-virt-manager

Installing the Docker container with the default values is fine (unless you have something running on the port it wants to use, in which case you will have to change that). Once it is installed, access the container's WebUI by going to the unRAID Docker tab, finding the container, clicking on its icon and selecting WebUI. You will now see virt-manager - a Linux desktop application - served up for you to use in your web browser!
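Before wiring virt-manager up, you can prove from any machine on your network that libvirt on the NUC answers over SSH. This check is my own addition - substitute your own username and IP:

ssh youruser@192.168.1.50 virsh -c qemu:///system list --all
# an empty table back means SSH and libvirt are both working
# if you get a permissions error instead, the usual fix on Ubuntu is to add
# your user to the libvirt group, then log out and back in:
# sudo adduser youruser libvirt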
Step 10: Connect Virt-Manager to KVM on our NUC via SSH

Remember when we were installing Ubuntu Server, I mentioned that we would need SSH installed specifically for this process at some point? Well, here it is. Of course, you would want to install SSH anyway, as otherwise maintaining or accessing the thing from your laptop on the couch (or wherever you are) would be very difficult, unless you are specialised in astral projection.

I am assuming that you have virt-manager up in your browser and are ready to go:
- Click 'File'
- Select 'Add Connection'

A dialog box will appear for you to select the properties of a connection to your Ubuntu Server:
- Select 'QEMU/KVM' from the Hypervisor drop-down menu.
- Put a tick in the 'Connect to remote host over SSH' checkbox.
- Enter your Ubuntu username in the Username field: This is the username you noted down during the Ubuntu Server installation.
- Enter your Ubuntu hostname in the Hostname field: This is the hostname you noted down during the Ubuntu Server installation.
- Put a tick in the 'Autoconnect' checkbox.
- Click 'Connect'

Virt-manager will now make an SSH connection to the Ubuntu Server. You will then be asked to "Confirm authenticity of host...":
- Type 'yes'
- Press 'Enter'

You will then be asked to enter a password:
- Enter your Ubuntu password: This is the password you noted down during the Ubuntu Server installation.
- Press 'Enter'

You will now see a line appear in the virt-manager window indicating the SSH connection has been made. It should say something like this:

QEMU/KVM: 192.168.1.3

Step 11: Plug in the unRAID USB stick

Now is the time to put the unRAID USB stick that we created earlier into the NUC.

At this point you might want to consider what you are doing. As this is not your unRAID server, you don't have to worry about there being two unRAID USB sticks in the one machine. You do, however, have to remember that this USB stick is bootable. So, unless your BIOS is set to either not boot from USB, or to try the internal disk first, all that is likely to happen when you reboot the NUC is that it will boot straight into unRAID. That is of course not what we want here. So, in complete contrast to what we did to prepare the NUC to boot from the Ubuntu Server USB, we now want to turn that off.

- Put the unRAID USB stick into the NUC: Make sure you plug this into a USB 2.0 port.
- Turn on the NUC.
- Hit F2 to enter the BIOS: You may find this is difficult if you have Fast Boot enabled. If so, just keep the power button pressed on the NUC for 2 seconds on power-on and it will automatically give you a menu to select from.
- Configure the BIOS to not boot from the USB device: I turned off USB booting, then turned Fast Boot back on, as well as dealt with the device boot order.
- Save and exit.

Ubuntu Server will now start as normal, even though the unRAID USB is connected to the NUC.
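A quick optional check (my own addition) before heading into virt-manager: confirm the host can actually see the stick, and note the bus and device numbers, as they correspond to what virt-manager will show later in its Host Device list. SSH into the NUC and run:

lsusb
# look for a line like the following (the xxxx product id is a placeholder):
# Bus 001 Device 003: ID 0718:xxxx Imation Corp. Nano Pro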
Step 12: Create the unRAID virtual machine

This is probably the simplest part of the whole process, and it has been covered by others with varying degrees of success. While I took inspiration from others, I ended up (with some trial and error) coming up with my own formula of settings that worked.

Open the virt-manager Docker container WebUI as per the previous step.

Establish virt-manager's connection to the Ubuntu Server:
- Double-click on the 'QEMU/KVM: <IP address>' line: The IP address is the one you noted down earlier.
- Confirm the host and enter the password as per the previous instructions.

Create a virtual machine:
- Click the iMac-looking icon with a yellow box in it, which brings up the help text 'Create a new virtual machine' when you hover over it.

A dialog box will appear for you to decide your installation approach:
- Select 'Manual Install'
- Click 'Forward'

A dialog box will appear for you to enter the type of virtual machine you are creating:
- Type "Generic"
- Select the 'Generic' entry from the further dialog box that appears.
- Click 'Forward'

A dialog box will appear for you to select the amount of memory and the CPU cores to be allocated to the virtual machine:
- Select the maximum amount of memory (as indicated under the Memory text box): 3825 in my case. NOTE: What I entered here is consistent with what I am trying to achieve - running unRAID on the NUC with as much of the host's resources as possible, given I don't intend to use the host for anything else.
- Select 2 (of 2) CPU cores.
- Click 'Forward'

A dialog box will appear for you to configure storage for the virtual machine:
- Tick the 'Create a disk image for the virtual machine' option.
- Give it a size of 1 GB (the smallest virt-manager allows).
- Click 'Forward'

A dialog box will appear for you to name the virtual machine, configure the network and start the installation:
- Enter a name for the VM: I named the VM UNRAID. (Note: this can be anything and is not linked to the unRAID installation itself.)
- Put a tick in the 'Customise configuration before install' checkbox.
- Click on 'Network selection'.
- Select 'Virtual Network br0: Bridge Network' in the drop-down menu that has now appeared.
- Click 'Finish'

A dialog box will appear for you to customise the virtual machine's other options before it is created:
- Click 'Overview' in the left-hand menu.
- Select 'Q35' from the 'Chipset' drop-down menu.
- Select 'UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd' from the BIOS drop-down menu.
- Click 'Apply'
- Click the 'Add Hardware' button.

A new dialog box will appear for you to add new virtual hardware:
- Select 'USB Host Device' from the left-hand menu.
- Select your unRAID USB stick from the 'Host Device' box: Mine was the '001:003 Imation Corp. Nano Pro' entry.
- Click 'Finish'

You will be returned to the dialog to continue customising the virtual machine's other options before it is created:
- Click 'Boot Options' in the left-hand menu.
- Put a tick in the 'Start virtual machine on host boot up' box.
- Remove the tick from the 'IDE Disk 1' entry in the 'Boot device order' list.
- Put a tick in the 'USB XXXX' entry in the 'Boot device order' list.
- Click 'Apply'

NOTE: You have just selected the unRAID USB stick you previously added to the virtual machine as the boot device for the VM.

- Click 'IDE Disk 1' in the left-hand menu.
- Select 'SATA' in the 'Disk bus' drop-down menu.
- Click 'Apply'
- Click the 'Add Hardware' button.

A new dialog box will appear for you to add new virtual hardware:
- Select 'Storage' from the left-hand menu.
- Click the 'Create a disk image for the virtual machine' radio button (it should already be selected).
- Enter 5GB less than the total amount you can allocate (which is shown under the input box): for me, that was ~240GB.
- Click 'Finish'

NOTE: You have just created a disk that you can add to unRAID as a cache device (the previous small disk - which will never be used for storage - is there only to start the array), and it will store your Docker system image, containers etc. Again, what I entered here is consistent with what I am trying to achieve - running unRAID on the NUC with as much of the host's resources as possible, given I don't intend to use the host for anything else. However, I felt leaving the host at least 5GB on top of what it had already allocated itself at installation was prudent.

- Click 'Display Spice' in the left-hand menu.
- Select 'All interfaces' in the 'Address' drop-down menu.
- Click 'Begin Installation'
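For reference (and for the day you need to point a restored VM at a replacement stick), the USB passthrough and boot-order settings virt-manager just configured end up in the domain XML looking roughly like this. The vendor/product ids below are placeholders - check your own with virsh dumpxml UNRAID, and edit with virsh edit UNRAID:

<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0718'/>
    <product id='0xffff'/>
  </source>
  <boot order='1'/>
</hostdev>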
Step 13: Watch unRAID boot

From the virt-manager main screen, your virtual machine (named whatever you named it - in my case, UNRAID) now appears as a monitor icon with its status (it should be 'Running') under it.

To access the virtual machine:
- Click 'Open'

Watch unRAID boot. There is no need to interact with the window (as you know), as unRAID will bring itself up irrespective of your input. Once finished, the unRAID command prompt will be visible and you will be able to see the LAN IP address that has been allocated to this unRAID VM.

Step 14: Configure unRAID

I am not going to go into too much detail in this last step, as most of you will be fairly familiar with the process, having unRAID servers yourselves. I will just list what I thought the important points and configurations were, given unRAID is now running in a VM, before we start the array:

- unRAID recognises the USB GUID, meaning you can request a trial key against the GUID of the USB you used, or (as I did) use a USB which is assigned to an existing licence - mine was my third Pro key.
- Allocate the server a static IP address (I also set the DHCP server in my router to allocate an IP to the VM's MAC address).
- Give it a different hostname (you don't want two 'Tower' hostnames on your network).
- Configure Docker.
- Configure VMs.
- Set the array to auto-start.

Now configure the array:

- Leave the parity slots empty - there is no need for parity, as we are not using this instance for storage.
- Select the small 1GB QEMU disk we created as disk 1.
- Select the large ~240GB QEMU disk we created as the cache disk.
- Start the array.

You will notice, as normal, that the disks need to be formatted. This is fine and happens quickly. Then you're done.

I now have a working virtual unRAID instance that is: on a tiny NUC; running on a minimal Ubuntu Server 20.04 installation; running under KVM in a VM with most of the host's resources attached to it; booted from an unRAID USB; able to let unRAID query the GUID of the USB; using two virtual disks (one to keep the array started and one allocating the rest of the host SSD for cache and Docker-related activities); and getting an IP from the LAN DHCP server.

Also, given the options selected throughout: restart the host, and not only will Ubuntu Server come up, so will KVM, and therefore so will unRAID, which will get an IP and auto-start the array, which will then auto-start Docker and subsequently any containers it is running! It happens quickly, too!

Should the little NUC fail for whatever reason, redeploying will be so much easier. All I have to do is restore the VM from a backup onto a new NUC or PC with Ubuntu Server installed and configured. I intend (for now) to keep my spare NUC sitting configured and waiting for such an eventuality.

I have installed Pi-hole and it is currently working just like it did on my unRAID server. The NUC CPU is not maxing out, nor is the memory usage that high. This looks like it might work and might be viable. Time will tell.
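On the "restore the VM from a backup" point: a minimal sketch of what that backup looks like, assuming the default libvirt image location and my VM name - your disk image names and paths may differ, so check them with virsh domblklist UNRAID:

virsh dumpxml UNRAID > UNRAID.xml
sudo cp /var/lib/libvirt/images/UNRAID*.qcow2 /path/to/backup/location/
# on a rebuilt host (Ubuntu Server + KVM + br0 set up as above), copy the
# images back into place and re-register the VM with:
virsh define UNRAID.xml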
25th May 2020 - EDITED for minor mistakes and out-of-order steps.
Edited May 31, 2020 by danioj

binhex
Posted May 25, 2020

Insanely cool write-up 😁. I always cringe at people running EVERYTHING on their unRAID boxes; as you have found out, whilst this may make your inner geek smile with pride, in practical terms it's nasty when you have to turn off your unRAID for maintenance and nothing works. I have a separate hardware firewall, separate HTPC and separate storage for recorded TV; if my server goes down, I can still access the internet and the wife can still watch TV - that is worth gold right there. Did you consider Proxmox running Pi-hole, or maybe Rancher?

Sent from my CLT-L09 using Tapatalk
T0a
Posted May 25, 2020

Pretty cool post - it will take some time to work through. I wanted to post a comment after reading the first few paragraphs. I have a failover Pi-hole setup in place utilising keepalived: two Pi-hole instances, each on a separate RPi. When one goes down, the floating IP switches in an instant to the other instance. You might want to check it out, so your family can browse the web while you work on your server.
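(For anyone wanting to try the setup T0a describes: a minimal keepalived.conf sketch for the primary Pi. The interface name, router id, priority and floating IP are all example values to adapt; the second Pi gets state BACKUP and a lower priority, and clients then use the floating IP as their DNS server.)

vrrp_instance PIHOLE {
    state MASTER             # BACKUP on the second Pi
    interface eth0           # the Pi's LAN interface
    virtual_router_id 51     # must match on both Pis
    priority 150             # e.g. 100 on the BACKUP
    advert_int 1
    virtual_ipaddress {
        192.168.1.2/24       # the floating IP clients use for DNS
    }
}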
danioj (Author)
Posted May 31, 2020 (edited)

Updated OP. Project = FAIL! 🙂

Edited May 31, 2020 by danioj
Pducharme
Posted June 17, 2020

@danioj Hi, I was looking for a way to run unRAID in a VM on unRAID itself (which uses KVM). I want to have a "test server" with multiple vdisks to simulate multiple hard disks. Is this a possibility?
itimpi
Posted June 17, 2020

Definitely possible - I do this all the time! You DO have to have another USB stick with its own unRAID licence for use by the VM. There is a post somewhere in the forum that details how to set this up - I will see if I can track down the relevant post.