Everything posted by danioj

  1. This is the most obvious question and I’m sure it will result in the most obvious answer. I ask only because, while errors were reported, the Parity Sync succeeded. Should I replace the disk?
  2. Doh. Long SMART test failed. unraid-smart-20200530-1831.zip
  3. Thanks for the review. Read errors again, this time on the parity check. I’m running a long SMART test now. I know disks fail, but the absence of anything obvious in the SMART data makes me wonder if I did something when I was in there. Is there anything physical I could have done to cause this!? Sent from my iPhone using Tapatalk
  4. Would someone mind taking the trouble to explain to me how unRAID enables 'Host access to custom networks'? It's just not viable for me to have Pihole running on unRAID, so I have set up a separate machine, am running Docker on that system and am administering it via Portainer. However, the host Ubuntu OS can't access Pihole itself and therefore can't resolve DNS (as I push all DNS traffic on my network through Pihole). I can't for the life of me figure out how it's done in Docker. All unRAID has is a checkbox.
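As far as I can tell, what that checkbox does under the hood is the standard macvlan "shim" workaround: the host gets its own macvlan leg on the same parent interface and routes to the container go via that leg, because macvlan otherwise isolates the host from its own macvlan children. A rough sketch of the same idea on a plain Docker host - the interface name eno1, the spare address 192.168.1.250 and the Pihole container at 192.168.1.2 are all placeholders, not values from my setup:
ip link add pihole-shim link eno1 type macvlan mode bridge   # host-side leg on the same parent NIC
ip addr add 192.168.1.250/32 dev pihole-shim                 # give the host an address on that leg
ip link set pihole-shim up
ip route add 192.168.1.2/32 dev pihole-shim                  # reach the Pihole container via the shim
With a route like that in place the Ubuntu host can query the container for DNS even though macvlan normally blocks host-to-container traffic. The commands don't persist across reboots, so they would need to go into a systemd unit or a NetworkManager dispatcher script.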
  5. No worries. Note, I just switched the order of the steps around a little just so you check the share properties of the appdata share before copying data.
  6. Your Docker settings are in: Settings > Docker > Default appdata storage location. The default unRAID location for app data is '/mnt/user/appdata/' and this share is then usually set to cache preferred so it never sits on your array. I would guess that you probably changed this on setup to something like '/mnt/user/Apps' but have since installed a few containers without changing the default config location, which in turn created the default appdata folder/share again. What I would do is this (I assume that you haven't also changed the path of your docker image file and that it is not contained in your App share - if it is you will have to deal with that too - and that the only thing you have in your Apps share is working container config folders):
- stop Docker;
- change the default appdata storage location in Docker settings back to the default /mnt/user/appdata/;
- check the share properties of the appdata share to ensure it is either cache only or cache preferred (as is the unRAID default);
- move all your folders from your Apps share to your appdata share (a rough copy command is sketched below);
- turn off autostart on any containers you have;
- start Docker;
- check that each container's config path is set to the appdata path and not the Apps path;
- start your containers;
- set them back to start automatically if required;
- delete your empty Apps share if required.
All your containers should then be working fine from the appdata share and you are less likely to miss changing the default location when installing new containers.
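For the move step, something along these lines from the unRAID terminal would do it - a sketch only, assuming the old share really is /mnt/user/Apps and Docker is already stopped:
rsync -a /mnt/user/Apps/ /mnt/user/appdata/   # copy every container config folder across, preserving permissions
# only once every container is confirmed working from appdata:
# rm -r /mnt/user/Apps
Doing it as a copy first (and deleting the old share only at the end) means you can fall back if a container still points at the wrong path.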
  7. Hi all, I'm afraid I need the brains trust on this one. I woke this morning to one of my very stable (albeit old) data drives having exhibited read errors overnight while the server was undergoing a parity sync. For background, I made some hardware changes to my main server yesterday:
- Added a new 4-port SATA expansion card (Skymaster PCIe 4-Ports SATA 6G Card EST11B)
- Added a new (SMART tested and precleared) 8TB Seagate Barracuda Compute
- Added a Silverstone riser cable so I could move the graphics card I have in there (which was taking two PCIe slots) and allow me to drop the new SATA expansion card in there (SilverStone RC04B PCI-e Riser Cable 400mm SST-RC04B-400)
- Added a new 250GB SSD to run my living room TV LibreELEC VM via UAD
My overall goal is to return to dual parity. I was running 1 parity drive (after releasing my second one some months ago for a data drive). I am intending to add 2 x new Barracudas to be parity and drop my single archive 8TB parity disk to be a data disk. Just doing it slow and sure over time, step by step, hence why I am adding a new parity drive before replacing the current one. As the SATA expansion card has a Marvell chipset, I had to apply the 'iommu=pt' fix to my syslinux config to let unRAID see the drive. It is worth noting that the new 8TB and SSD drives are on the new SATA card but the drive which is exhibiting problems is not. So, as you would have expected, after installing the new 8TB drive - I added it as a parity disk - and began the parity sync. Then I left it alone. I cannot see anything obvious in the diagnostics. The disk's SMART data looks fine. There were only 88 read errors. These errors coincided with the commencement of my daily CA docker backup sequence and only lasted a short time. I mention this only because it happened at around the same time and it was the only other thing the server was doing. I guess I could have knocked a cable while I was in there, but I am pretty diligent and I checked the cables before I packed up. Plus, I could usually expect to see the UDMA CRC error count be high if there were cabling issues. Which there weren't. I think I know the protocol. Let the parity sync finish, check the cables again and then do a correcting parity check. I'd be grateful if, in the meantime, anyone has any other insights as to what might be the issue? Diagnostics attached. Thank you in advance. Daniel unraid-diagnostics-20200530-0711.zip
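For reference, the 'iommu=pt' fix mentioned above is just an extra kernel parameter on the append line of the boot config. A sketch of what the relevant stanza of /boot/syslinux/syslinux.cfg typically looks like with it applied (your existing append line may carry other parameters - keep those):
label Unraid OS
  menu default
  kernel /bzimage
  append iommu=pt initrd=/bzroot
This can also be edited from the flash device page (Syslinux Configuration) in the webGUI rather than by hand.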
  8. What a monumental waste of time. Fun, but a waste of time. Unstable as can be. The system kept dropping the USB and eventually I ended up with about 20 million writes and reads to the USB, which busted the USB completely. So, I went back to my original plan: Ubuntu Desktop OS, Portainer and KVM. Nice to know, unRAID virtualised on a NUC just doesn't work! LOL! 🙂 I wanted to share the results of some of the tinkering I have been doing this weekend with an old Intel Celeron NUC, Ubuntu Server, KVM and unRAID. I will preface the remainder of the post by saying that I know there are many ways to do this, and I know that some people will find this silly and/or pointless from their perspective etc etc. I write this post more as a reference point for myself in the future - JIC I have to repeat it - but also because I like to share and write. When I read what I have written - it is almost like I am writing it to myself at some point in the future. A number of things happened over the past week:
- I installed 2 x new Raspberry Pi 4's around the house to replace some old Intel Celeron NUCs that were running LibreELEC and PlexKodiConnect (no native 4K playback, which resulted in these clients putting significant strain on the unRAID server to transcode, was the driver for this)
- Both I and the family started to get frustrated when I took down / restarted my main unRAID server a lot this week, as I run my DNS server (Pihole) on it via a Docker container, which resulted in loss of "Internet" aka Instagram and trash TV etc as the clients couldn't resolve DNS until the unRAID server came back up
- I was nicknamed Wreck-It Ralph as I "Broke the Internet"
- I had spare hardware (2 x NUCs) lying around doing nothing.
I had only recently deployed Pihole on my network (which I did with more haste than thought) and I knew my setup was flawed from the beginning. Very similar to the days when I deployed pfSense in a VM on my unRAID server: every time the server went down or was restarted there was an interruption to internet service for the family in the household. Much like the pfSense situation, something had to change anyway. My first consideration. Well, given I had just deployed 2 x new Raspberry Pi 4's around the house and they would run pretty much 24x7, I thought I could deploy Pihole on one of them. It is called "Pi"hole after all. These utilise a 5GHz wireless connection to access the network (as they are in bedrooms). I paused. I realised quickly that a DNS server requires a more stable wired connection. Plus, if I ever need to upgrade, configure etc, this may result in a need to restart the box, which will inevitably mean I'll hear "Ralph, have you broken the internet again ..?", "my TV just went off" or "I was in the middle of the latest Made in Chelsea". Nah, the Pi's were out. I figured I could just deploy Pihole on my backup unRAID server, which gets much less attention as all it does is sit there turned off most of the day until 3am, when it wakes up, receives backups from the main unRAID server, pushes key important files to a removable USB 3.0 drive for offsite storage and then turns itself off. Nah, didn't want to do that either. The backup unRAID server setup is stable and running just as I want it to. Lightbulb moment. What if I utilise one of my NUCs? The little buggers could be powerful enough to do the job, have 4GB of RAM, and they have a nice fast Gigabit ethernet port and SSDs in them.
Intel NUC DN2820FYKH 4GB RAM 250GB SSD https://ark.intel.com/content/www/us/en/ark/products/78953/intel-nuc-kit-dn2820fykh.html
Options I initially considered for how to use the NUC:
- Install Linux and install Pihole on the OS
- Install Linux and install Docker and deploy a Pihole container
- Install Linux and install a VM and deploy Pihole within the VM.
I considered the above options, as in the back of my mind (much like my main and backup unRAID server setup) I am always thinking about availability and being able to get back up and running if something fails. The ability to back up a virtual deployment and redeploy quickly without having to reconfigure is gold to me. Have you tried making your home (and the people in it) completely used to and dependent on your setup and then not having a plan to bring it all back if it fails, which it will? Nope, it had to be wife proof. Now, I have gone down a route like this before, playing with something new and, either through deployment of a VM, an old PC etc, introducing something new to my setup. You know the drill: install Linux / deploy a VM, deploy something on it. Inevitably get frustrated when maintaining this VM, PC and/or the applications on it is not as simple as maintaining unRAID. It ends up getting removed / deleted, you revert your setup back, and the time you spent with no fruit was just the price you paid to keep yourself entertained one rainy-day weekend. *stamps feet* I want what I currently have - i.e. Pihole installed via a Docker container on unRAID with everything that brings - but I don't want it impacted by me taking down the server etc *repeats ramblings of above*. I'm taken back to previous frustrations (which I have gone over in my head ad nauseam):
- Why won't unRAID software let me leave Docker or a VM running when the Array is stopped?
- Why isn't running unRAID as a VM (so I could run other VMs alongside it) officially supported?
- Why isn't there some software out there (as good as unRAID) that I can use to manage VMs and Docker via a Web GUI?
The third bullet above is the closest I have come to having an unRAID-esque setup on another machine, but as I also mention above - I often get sick of maintaining this setup. I am of course referring to: a Linux Desktop host with KVM, Docker, VNC/RDP, SSH + some display emulator (required to allow me to VNC in); the command line for installing containers; Portainer and Virt-Manager on the host to manage VM installation and manage installed Docker containers. Urgh. So much more complicated than unRAID. Then I remembered another thought I had some time ago. What if unRAID released a "Lite" version which was all about being a tool for virtualisation and not about being a NAS? LOL - I didn't share this idea with the community because I can imagine the responses I would get. At that point, I got bored with thinking, opened YouTube and started watching some HomeAssistant videos (another project of mine for a different time), and a SpaceinvaderOne video from back in 2016 came up, titled: "How to install unRAID as a virtual machine on another unRAID host" https://www.youtube.com/watch?v=ZFzwihcphrg I'd seen this before but chuckled again at the Russian Doll joke at the beginning of the video. I switched to some Googling while listening to the video and was just randomly using search terms like 'VMs', 'Bare Metal', 'KVM', 'ESXi' etc. I started thinking 'AGAIN': given Grid (whose voice I can hear talking through the steps) can install unRAID on KVM (i.e. the hypervisor used by unRAID), then why can't I do the same on another PC (my eyes start going to the NUC and back to the screen)? Surely anything you can do on KVM in unRAID can be done on KVM elsewhere. (The ESXi route was out as it's not free and I've seen so many issues with it that I just didn't want to explore that - I had a working solution right here from back in 2016.) If only KVM could be installed on a PC without an OS? First hit on Google for "Can you install KVM without an OS" gets you this: https://stackoverflow.com/questions/26696477/can-we-install-kvm-without-any-operating-system-installed I was starting to shy away again. Then, in another forum, someone argued that given KVM is a type 1 hypervisor and has direct access to hardware (as we all know, as we run unRAID), running a very light OS (that consumes little to no resources) is just like (with the smallest of penalties) running KVM with no OS at all. The suggestion was to go over and look at Ubuntu Server. Plans started to form in my head. What if I install Ubuntu Server (headless, with no desktop environment) on the NUC and get it running with the most minimal of installs. Install KVM. Then (using what I've seen before as a guide) install unRAID on top of it. Given Ubuntu Server in this case is (and will always be) doing little to nothing - wouldn't this be just like running unRAID native? I could create a small custom disk image to emulate a disk so the unRAID array would start (there would never be much of an issue of disk failure here, meaning the unRAID array would always start and start fast) and then I can deploy Docker containers on unRAID as normal, using another virtual disk image to emulate a cache disk. With unRAID in memory and the SSD in the NUC - it should be fast. "Should be". LOL! It should be easy to get Ubuntu Server to autostart KVM, KVM to autostart the unRAID VM and then unRAID to autostart my Docker containers. Again, given I'd be passing pretty much every resource from the NUC that I can to the unRAID VM and the host is doing nothing, hopefully there would not be a performance hit. Hopefully the NUC is up to it. Hopefully. This is either going to work or be another one of those projects I throw away to a rainy-day weekend. I decided to try it out. Out comes the NUC. Off I go. What did I use:
- My NUC that is ready to go with the SSD and RAM already installed
- A small LCD display (needed to install the OS but also useful for later when we play with network config)
- A USB stick (for unRAID)
- A second USB stick (for Ubuntu Server)
- A USB keyboard (I used a Logitech all-in-one via a Unify USB receiver)
- An ethernet cable connected to my switch (for LAN and Internet access).
NOTE: You don't need to use the software I used. I chose the software based on my own knowledge / preference and the laptop OS (OSX) that I am running. You can of course substitute different software and get the same outcome. However, everything below was my choice of procedure using my choice of software.
Here is a summary of the steps I took, which I detail in the rest of the post:
Step 1: Create the Ubuntu Server installation USB
Step 2: Install Ubuntu Server
Step 3: Install required software to use KVM
Step 4: Install and configure Network Manager
Step 5: Setup the Network Bridge
Step 6: Configure KVM to use the bridge we have just set up
Step 7: Time to pause and take a breath
Step 8: Create a bootable unRAID USB using your laptop or PC
Step 9: Install the Virt-Manager Docker Container on your unRAID server
Step 10: Connect Virt-Manager to KVM on our NUC via SSH
Step 11: Plug in the unRAID USB stick
Step 12: Create the unRAID Virtual Machine
Step 13: Watch unRAID boot
Step 14: Configure unRAID
Step 1: Create the Ubuntu Server installation USB
I chose Ubuntu Server 20.04 to be the base OS of the NUC.
- Download Ubuntu Server from here: https://ubuntu.com/download/server
I chose balenaEtcher to burn the Ubuntu Server image we just downloaded to one of our USB drives.
- Download balenaEtcher here: https://www.balena.io/etcher/
It is VERY easy to use once installed.
- Plug in the USB stick: It doesn't matter whether it is a USB 2.0 or a USB 3.0 for Ubuntu Server (it does for unRAID).
- Select the image: Search for the Ubuntu Server image file we have just downloaded.
- Select the USB stick we want to "burn" the Ubuntu Server image to: Make sure you select the correct USB device as this process will erase the USB.
Burn the image:
- Click 'Flash'
- Eject the USB stick from the computer: Use the safe procedure for the OS being used. I used OSX.
Once the process has completed, you have a bootable USB stick which is ready to use to install Ubuntu Server 20.04.
Step 2: Install Ubuntu Server
- Put the newly flashed USB stick into the NUC: Again, there is no need to be picky about whether the USB port is 2.0 or 3.0.
- Turn on the NUC.
- Hit F2 to enter the BIOS: You may find that this is difficult if you have Fast Boot enabled. If you do, you just need to keep the power button pressed on the NUC for 2 seconds on power on and it will automatically give you a menu to select from.
- Configure the BIOS to boot from the USB device: I had to turn off Fast booting first, then turn on USB booting, as well as deal with the device boot order (this could also have been achieved by selecting the option to 'Always boot USB devices first').
- Save and exit
The NUC will now boot from the USB and take you to the installation text-based GUI.
WARNING: The procedure I am going through here will completely ERASE my SSD. I am ok with this; make sure you are too if you are following along. If not, don't proceed and back up your files first.
The steps below are a summary of the installation steps for Ubuntu Server 20.04. On boot, you are presented with a familiar Ubuntu boot menu:
- Select 'Install Ubuntu Server'
- Press 'Enter'
You will then be asked to select the language for the system:
- Scroll through the list and select a language: You can select any language.
- Press 'Enter'
Should you have an image that was compiled with an older-than-current installer, the setup process will ask you if you want to update it before commencing the installation:
- Select 'Update to the new installer'
- Press 'Enter': This may take a little time.
You will then be asked to select your keyboard configuration:
- Select your keyboard layout: I chose Australia
- Select your keyboard variant
- Select 'Done'
- Press 'Enter'
You will then be asked to configure your network connections:
- Note down your interface name: Mine was "enp3s0" and should be auto configured.
- Note down the given IP address: Mine was "192.168.1.50", which was allocated by the network DHCP server.
- Select 'Done'
- Press 'Enter'
You will then be given the option to configure a proxy:
- Select 'Done': I don't use a proxy, but if you do you may have some edits to make which I am not going to cover here.
- Press 'Enter'
You will then be given the option to configure a different Ubuntu archive mirror repository:
- Select 'Done': I was quite happy to leave it with the default options.
- Press 'Enter'
You will then be asked to use the guided storage configurator to prepare the NUC disk for the installation of Ubuntu Server:
- Select 'Use an entire disk'
- Press 'Spacebar': This will put an 'X' in the field. You will note that the SSD of the NUC has already been populated to be used. It is good practice to check this field if you have a multi-disk system, but as I knew I only had one, I was happy that it had selected the correct disk.
- Select 'Done': I was quite happy to leave the other fields with the default options.
- Press 'Enter'
You will then be presented with a File system Summary:
- Select 'Done': This screen is just a summary of what you have selected to review before continuing.
- Press 'Enter'
You will then be presented with a WARNING and asked to CONFIRM the pending destructive action:
- Select Continue: WARNING: The procedure I am going through here will completely ERASE my SSD. I am ok with this; make sure you are too if you are following along. If not, don't proceed and back up your files first.
- Press 'Enter'
You will then be asked to set up your Ubuntu profile:
- Enter your name: You don't have to use your real name
- Enter a Server Name: I like to put some thought into this. I have naming conventions for my network for all my devices. This is what you would use to refer to the NUC on your network if you're not using its IP address. Write what you choose down.
- Enter a password: Make it strong but remember that you ARE going to have to type it again a fair bit (either via sudo, at the terminal prompt or via SSH). Write what you choose down.
- Enter the password again
- Select 'Done'
- Press 'Enter'
You will then be asked if you want to install 'OpenSSH server':
- Select 'Install OpenSSH server': We are going to need this later.
- Press 'Spacebar': This will put an 'X' in the field.
- Select 'Done'
- Press 'Enter'
You will then be asked to select Featured Server Snaps to install:
- Select 'Done': Remember, this NUC is not going to do anything itself; it is the unRAID VM running on it that is going to have services running in it. Therefore we don't want to install any more additional software than is needed to host the VM.
- Press 'Enter'
The installation will begin.
- Wait or go and grab a coffee
The installation will complete.
- Select 'Reboot'
- Press 'Enter'
- Quickly remove your USB stick from the NUC: Do this before the system starts to boot again, otherwise you will boot straight into the USB stick again.
Ubuntu Server is now installed and you should boot to the login prompt. You can test logging in with the username and password that you selected (and noted down) earlier.
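Since we ticked OpenSSH server, this is also a good point to check that you can reach the NUC over the network before going any further. A quick sketch (the username, and the 192.168.1.50 address noted during install, are of course whatever yours are):
# on the NUC itself, confirm the SSH service is running
sudo systemctl status ssh
# from your laptop on the same LAN
ssh youruser@192.168.1.50
If that works you could do the remaining command-line steps over SSH instead of at the console, although I stayed at the console because of the network changes coming up in the next steps.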
Step 3: Install required software to use KVM
I decided to do this step at the console, as I knew I would be playing around with the network connections shortly afterwards, which makes using SSH problematic at this point, especially if you mess up.
Update the package information for the sources we configured at install:
sudo apt update
Upgrade all packages that are currently installed:
sudo apt upgrade
Although we know that KVM is a module built into the kernel, it doesn't mean that Ubuntu Server has all the packages it needs by default:
sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils
I like to do a reboot now:
sudo reboot
Log back in.
Step 4: Install and configure Network Manager
Just like with our unRAID VMs, we want unRAID to be able to get its own IP address on our network. I will utilise a network bridge to do this. I installed Network Manager to make this easy (for me).
NOTE: I much prefer to use Network Manager rather than netplan to do this - mainly because I don't know how to use netplan, as I am more familiar with Ubuntu Desktop. You can create a bridge in a number of other ways, and in fact I have read that what I am going to do next is NOT recommended on Ubuntu Server (although I haven't found it explained as to why), BUT this is the way I know, so this is the way I have followed.
First we have to stop the current network services:
sudo systemctl disable systemd-networkd.service
sudo systemctl mask systemd-networkd.service
sudo systemctl stop systemd-networkd.service
Then we install Network Manager:
sudo apt-get install network-manager
Open up the netplan config in nano (my favourite command line text editor). My config file was called '00-installer-config.yaml':
cd /etc/netplan/
sudo nano 00-installer-config.yaml
Then edit the file until it looks like this:
network:
  version: 2
  renderer: NetworkManager
Now we need to exit nano and save the changes:
- Press Ctrl X: Command to exit nano
- Press Y: To confirm that you want to write the changes
You should now be back at the command prompt. Funnily enough, now we use netplan to generate the required configuration files to use Network Manager:
sudo netplan generate
Now we start the Network Manager service:
sudo systemctl unmask NetworkManager
sudo systemctl enable NetworkManager
sudo systemctl start NetworkManager
Reboot:
sudo reboot
Log back in. Network Manager doesn't manage all networks by default. If you try the following command you would probably find that it shows nothing:
nmcli con show
If you ran this command though, you would find it lists your interfaces fine:
ip a
So we have to tell Network Manager to manage the interfaces. We do this by editing the Network Manager configuration as follows:
sudo nano /etc/NetworkManager/NetworkManager.conf
Then edit the following lines in the file:
[ifupdown]
managed=false
To this:
[ifupdown]
managed=true
Now we need to exit nano and save the changes:
- Press Ctrl X: Command to exit nano
- Press Y: To confirm that you want to write the changes
You should now be back at the command prompt.
Reboot:
sudo reboot
Log back in.
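Before moving on to the bridge, a couple of quick optional checks (a sketch) that the Step 3 packages are actually in place and libvirt is running:
lsmod | grep kvm                # should list kvm_intel (or kvm_amd) plus kvm
sudo systemctl status libvirtd  # the libvirt daemon should be "active (running)"
sudo virsh list --all           # should print an empty table rather than an error
If virsh complains about permissions when run without sudo, adding your user to the libvirt group (sudo usermod -aG libvirt youruser) and logging back in usually sorts it.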
If you type the command we typed earlier to see if Network Manager was managing our interfaces, you will get a different result than nothing this time:
nmcli con show
It should output something like this:
NAME                 UUID                                  TYPE      DEVICE
virbr0               b9203b8d-cab7-41eb-ab53-6038ef4f2b0d  bridge    virbr0
Wired connection 1   172d2699-df49-4d8e-b3dd-85825972fc08  ethernet  enp3s0
Now that Network Manager is managing our network interfaces, we can use Network Manager to set up our bridge.
Step 5: Setup the Network Bridge
Get your current network configuration and note it down (if you don't remember the output from earlier):
nmcli con show
From the output (see above for an example), note down the Name and the Device of the wired connection. In my case it was (note that it is case sensitive):
Name: "Wired connection 1"
Device: "enp3s0"
Now we will add a bridge called br0, just like we are used to with unRAID:
sudo nmcli con add ifname br0 type bridge con-name br0
You will get a notice saying that the bridge has been successfully added.
Set the ethernet interface to be a slave to br0 (remember to substitute the interface name you captured above here):
sudo nmcli con add type bridge-slave ifname enp3s0 master br0
You will get a notice saying that the bridge slave has been successfully added.
I have been advised when creating a bridge that one should disable STP. I admit to not knowing the science behind this, so in this I am blindly following:
sudo nmcli con modify br0 bridge.stp no
You can check that this has been done by looking at the output of this command:
nmcli -f bridge con show br0
Using the case-sensitive name you captured earlier, first we have to take the wired connection down (you will see why we do this when you run it in the terminal):
sudo nmcli con down "Wired connection 1"
Now we turn on the bridge:
sudo nmcli con up br0
Now let's check that everything looks fine and is working:
sudo nmcli con show
It should output something like this:
NAME                  UUID                                  TYPE      DEVICE
virbr0                b9203b8d-cab7-41eb-ab53-6038ef4f2b0d  bridge    virbr0
Wired connection 1    172d2699-df49-4d8e-b3dd-85825972fc08  ethernet  enp3s0
bridge-slave-enp3s0   b0dbd471-20f9-4741-bdc1-8fc155928dc8  ethernet  enp3s0
Now let's test that it is working with a simple ping command:
ping google.com
It should output something like this:
PING google.com (142.250.67.14) 56(84) bytes of data.
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=1 ttl=53 time=29.2 ms
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=2 ttl=53 time=27.5 ms
64 bytes from syd15s16-in-f14.1e100.net (142.250.67.14): icmp_seq=3 ttl=53 time=38.1 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 27.521/31.590/38.065/4.628 ms
- Press Ctrl C: To stop the ping command.
Step 6: Configure KVM to use the bridge we have just set up
Now we need to create an XML file in our home directory containing the bridge configuration:
nano ~/br0.xml
Now we need to add the following XML to the empty file:
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
Now we need to exit nano and save the changes:
- Press Ctrl X: Command to exit nano
- Press Y: To confirm that you want to write the changes
You should now be back at the command prompt.
Now we will define the network in KVM, using the XML file to provide the configuration:
virsh net-define ~/br0.xml
Now let's start the newly defined bridge in KVM:
virsh net-start br0
Now let's set the newly defined bridge to autostart:
virsh net-autostart br0
Now let's check that everything looks fine:
virsh net-list --all
It should output something like this:
Name      State    Autostart   Persistent
-------------------------------------------
br0       active   yes         yes
default   active   yes         yes
Step 7: Time to pause and take a breath
Now there is an Ubuntu Server running on the NUC, KVM is installed on it, a network bridge has been set up and KVM is configured to use it. You can now unplug your monitor from the NUC and perhaps even put it in its permanent home. We will not have a need to physically access it again (barring sticking a USB into it) as we will be doing everything else via web interfaces and the command line via SSH over the network.
Step 8: Create a bootable unRAID USB using your laptop or PC
I am not going to detail the steps on how to do this. You have already done this before. For reference, the link to the steps is here: https://wiki.unraid.net/USB_Flash_Drive_Preparation
NOTE: Just make sure you make the USB UEFI bootable. The procedure to do this is different depending on whether you use the manual or automated process.
Step 9: Install the Virt-Manager Docker Container on your unRAID server
The next step is going to allow us to use a brilliant Linux tool called virt-manager to create the unRAID virtual machine on the little Ubuntu Server without the need to use the command line. Now, if you are working from a Linux machine with a GUI (or have one handy), you could skip this step as it is as easy as doing this to install it:
sudo apt install virt-manager
However, if you don't, then this is a nice, easy and fast alternative (for us unRAID'ers) which doesn't require that you install a VM on your non-Linux PC or laptop or use dodgy OSX Homebrew (the option I tried, which wasted a good hour of my life). There are two Docker Containers available in Community Applications that will allow you to access the Virt-Manager desktop application from a web browser on your network, served up by your unRAID server. This is one of the reasons why we love the virtualisation side of unRAID so much and are working so hard to have another instance of it going, right? 🙂
Log into your unRAID server using your browser.
- Go to Community Applications
- Search for "virt-manager"
As I mentioned, 2 options will appear. I decided to choose the one that was not labelled BETA but the one that was made and is maintained by djaydev:
djaydev/docker-virt-manager
Installing the Docker Container using the default values is fine (unless you have something running on the port it wants to use, in which case you will have to change that). Once installed, let us access the Docker Container WebUI by going into the unRAID Docker tab, finding the Docker Container, clicking on the image and selecting WebUI. You will now see Virt-Manager - a Linux desktop application - served up for you to use in your web browser!
Step 10: Connect Virt-Manager to KVM on our NUC via SSH
Remember when we were installing Ubuntu Server and I mentioned that we would need SSH installed specifically for this process at some point? Well, here it is.
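(Side note: if you do have virsh handy on a Linux box, you can also talk to the NUC's libvirt directly over SSH without Virt-Manager at all - a sketch, using the username and the 192.168.1.50 address noted earlier as placeholders:
virsh -c qemu+ssh://youruser@192.168.1.50/system list --all
That same qemu+ssh:// style connection is what Virt-Manager builds for you behind the scenes in this step.)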
Of course, you would want to install SSH anyway, as otherwise maintaining or accessing the thing from your laptop on the couch (or wherever you are) would be very difficult unless you are specialised in astral projection. I am assuming that you have Virt-Manager up in your browser and are ready to go:
- Click 'File'
- Select 'Add Connection'
A dialog box will appear for you to select the properties of a connection to your Ubuntu Server:
- Select 'QEMU/KVM' from the Hypervisor drop down menu
- Put a tick in the 'Connect to remote host over SSH' checkbox
- Enter your Ubuntu username in the Username field: Your Ubuntu Server username is the one you noted down at installation of Ubuntu
- Enter your Ubuntu hostname in the Hostname field: Your Ubuntu Server hostname is the one you noted down at installation of Ubuntu
- Put a tick in the 'Autoconnect' checkbox
- Click 'Connect'
Virt-Manager will now make an SSH connection to the Ubuntu Server. You will then be asked to "Confirm authenticity of host ...":
- Type 'yes'
- Press 'Enter'
You will then be asked to enter a password:
- Enter your Ubuntu password: Your Ubuntu Server password is the one you noted down at installation of Ubuntu
- Press 'Enter'
You will now see a line appear in the Virt-Manager window indicating the SSH connection has been made. It should say something like this:
QEMU/KVM: 192.168.1.3
Step 11: Plug in the unRAID USB stick
Now is the time to put the unRAID USB stick that we created earlier into the NUC. At this point you might want to consider what you are doing. As this is not your unRAID server, you don't have to worry about there being 2 unRAID USB sticks in the NUC. You do, however, have to remember that this USB stick is bootable. So, unless your BIOS is set either to not boot from a USB or to try to boot from the internal disk first, all that is likely to happen when you reboot the NUC is that it will boot straight into unRAID. This is of course not what we want here. So, in complete contrast to what we did to prepare the NUC to boot from the Ubuntu Server USB so we could install it, we now want to turn that off.
- Put the newly flashed unRAID USB stick into the NUC: Make sure that you plug this into a USB 2.0 port.
- Turn on the NUC
- Hit F2 to enter the BIOS: You may find that this is difficult if you have Fast Boot enabled. If you do, you just need to keep the power button pressed on the NUC for 2 seconds on power on and it will automatically give you a menu to select from.
- Configure the BIOS to not boot from the USB device: I turned off USB booting, then I turned on Fast booting, as well as dealt with the device boot order.
- Save and exit
Ubuntu Server will now start as normal even though the unRAID USB is connected to the NUC.
Step 12: Create the unRAID Virtual Machine
This is probably the simplest part of the whole process and has been covered by others with varying degrees of success. While I took inspiration from others for this, I ended up (with some trial and error) coming up with my own formula of settings that worked. Open the Virt-Manager Docker Container WebUI as per the previous step.
Establish Virt-Manager's connection to the Ubuntu Server:
- Double click on the QEMU/KVM <IP Address> line: Your IP address is the one you noted down earlier.
- Confirm the host and enter the password as per the previous instructions
Create a Virtual Machine:
- Click the iMac-looking icon with a yellow box in it, which brings up the help text 'Create a new virtual machine' when you hover over it.
A dialog box will appear for you to decide your installation approach:
- Select Manual Install
- Click 'Forward'
A dialog box will appear for you to enter the type of virtual machine you are creating:
- Type "Generic"
- Select the 'Generic' entry from the further dialog box that appears
- Click 'Forward'
A dialog box will appear for you to select the amount of memory and CPU cores to be allocated to the virtual machine:
- Select the maximum amount of memory (as indicated under the Memory text box): 3825
NOTE: What I entered here is consistent with what I am trying to achieve, and that is to run unRAID on the NUC with as much of the host resources as possible, given I don't intend to use the host for anything.
- Select 2 (of 2) CPU cores
- Click 'Forward'
A dialog box will appear for you to configure storage for the virtual machine:
- Click the "Create a disk image for the virtual machine" dialog box
- Give it a size of 1 GB (the smallest allowable in Virt-Manager).
- Click 'Forward'
A dialog box will appear for you to name the virtual machine, configure the network and start the installation:
- Enter a name for the VM: I named the VM UNRAID (note this can be anything and is not linked to the unRAID installation itself)
- Put a tick in the 'Customise configuration before install' check box
- Click on 'Network selection'
- Select "Virtual Network br0: Bridge Network": You do this in the drop down menu that has now appeared
- Click 'Finish'
A dialog box will appear for you to customise the virtual machine's other options before it is created:
- Click 'Overview' in the left hand menu
- Select 'Q35' from the 'Chipset' drop down menu
- Select 'UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd' from the BIOS drop down menu
- Click 'Apply'
- Click the 'Add Hardware' button
A new dialog box will appear for you to add new virtual hardware:
- Select 'USB Host Device' from the left hand menu
- Select your unRAID USB stick: Mine was the '001:003 Imation Corp. Nano Pro' entry in the 'Host Device' box
- Click 'Finish'
You will be returned to the dialog for you to continue customising the virtual machine's other options before it is created:
- Click 'Boot Options' in the left hand menu
- Put a tick in the 'Start virtual machine on host boot up' box
- Remove the tick from the 'IDE Disk 1' entry in the 'Boot device order' list
- Put a tick in the 'USB XXXX' entry in the 'Boot device order' list
- Click 'Apply'
NOTE: You have just selected the unRAID USB stick you previously added to the virtual machine as the boot device for the VM.
- Click 'IDE Disk 1' in the left hand menu
- Select 'SATA' in the 'Disk bus' drop down menu
- Click 'Apply'
- Click the 'Add Hardware' button
A new dialog box will appear for you to add new virtual hardware:
- Select 'Storage' from the left hand menu
- Click the 'Create a disk image for the virtual machine' radio button (it should already be selected)
- Enter 5GB less than the total amount you can allocate (which is shown under the input box); for me that was ~240GB
- Click 'Finish'
NOTE: You have just created a disk that you can add to unRAID as a cache device (noting that the previous small disk - which will never be used for storage - is there to start the array only) that will store your docker system image, containers etc.
NOTE: What I entered here is consistent with what I am trying to achieve, and that is to run unRAID on the NUC with as much of the host resources as possible, given I don't intend to use the host for anything.
However, I felt leaving the host with at least 5GB on top of what it had already allocated itself at installation was prudent.
- Click 'Display Spice' in the left hand menu
- Select 'All interfaces' in the 'Address' drop down menu
- Click 'Begin Installation'
Step 13: Watch unRAID boot
From the Virt-Manager main screen, your virtual machine (named whatever you named it - in my case UNRAID) now appears as a monitor icon with its status (should be 'Running') under it. To access the virtual machine:
- Click 'Open'
Watch unRAID boot. There is no need to interact with the window (as you know) as unRAID will boot itself irrespective of your input. Once finished, the unRAID command prompt will be visible and you will be able to see the LAN IP address that has been allocated to this unRAID VM.
Step 14: Configure unRAID
I am not going to go into too much detail in this last step, as most of you should be fairly familiar with the process given you have unRAID servers yourself. I will just detail a list of what I thought the important points and/or configurations were, given unRAID is now running in a VM, before we start the array:
- unRAID recognises the USB GUID, meaning that you can request a Trial key against the USB GUID you used or (as I did) use a USB which is already assigned to a licence - mine is assigned to my third Pro key
- Allocate the server a static IP address (I also set the DHCP server in my router to reserve an IP for the VM's MAC address)
- Give it a different host name (you don't want two 'Tower' hostnames on your network)
- Configure Docker
- Configure VMs
- Set the array to auto start.
Now configure the array:
- Leave the Parity slots empty - no need for parity as we are not using this instance for storage
- Select the small 1GB QEMU disk we created as disk 1
- Select the large ~240GB QEMU disk we created as the cache disk
- Start the array.
You will notice, as normal, that the disk will need to be formatted. This is fine and happens quickly. Then you're done. I now have a working virtual unRAID instance that is: on a tiny NUC, running a minimal Ubuntu Server 20.04 installation, running KVM, within a VM with most of the host resources attached to it, booted from an unRAID USB, able to query the GUID of the USB, using two virtual disks (one to keep the array started and one to allocate the rest of the host SSD for cache and Docker related activities), getting an IP from the LAN DHCP server. Also, given the options selected throughout - restart the host and not only will Ubuntu Server come up, so will KVM, therefore so will unRAID, which will get an IP and then auto start the array, which will then autostart Docker and subsequently any containers it is running! It happens quickly too! Should the little NUC die for whatever reason, redeploying will be so much easier. All I have to do is restore the VM from a backup onto a new NUC or PC with Ubuntu Server installed and configured. I intend (for now) to keep my spare NUC sat configured and waiting for such an eventuality. I have installed Pihole and it is currently working just like it did on my unRAID server. The NUC CPU is not maxing out, nor is the memory usage that high. This looks like it might work and might be viable. Time will tell. 25th May 2020 - EDITED for minor mistakes and out of order steps.
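For day-to-day care and feeding once it is all up, the VM can be poked from the NUC's command line (or over the qemu+ssh connection mentioned earlier) with plain virsh - a sketch, assuming the VM really is named UNRAID:
virsh list --all          # shows the UNRAID VM and whether it is running
virsh autostart UNRAID    # same effect as the 'Start virtual machine on host boot up' tick box
virsh shutdown UNRAID     # sends an ACPI shutdown request so unRAID can stop the array cleanly
virsh start UNRAID
virsh dumpxml UNRAID > UNRAID.xml is also a cheap way to keep a copy of the VM definition alongside the disk image backups.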
  9. Hi Mate, The script I wrote ended up being more of a proof of concept that, while a good foundation, was something I didn’t have the time to take forward into something more functional and stable. It appears that it has been taken over by the community (got to love open source code) and has since morphed into a vastly more functional plugin that I am happy to have inspired but can neither take credit for nor know much about. I think the unRAID community member now maintaining the resulting plugin is Jtok. See above posts. I’d suggest searching for this plugin in Community Applications (from which there will no doubt be a support thread linked where you can ask functional questions) and starting there. Ta, Daniel Sent from my iPhone using Tapatalk
  10. No issue with v5.0 for me. The upgrade happened at some point over the last week and I didn’t notice. Logs are clean of errors and all my settings are intact.
  11. I don’t know about the specifics of your two setups, but I have 2 DNS servers (Google’s) in both my unRAID config and also in Pihole. Then I have my whole network pointing to Pihole at the direction of my DHCP server, which is my router (which also forces all DNS queries through Pihole even if the client specifies its own). I also think it’s best to run the docker with its own IP so you can point clients at it. Not sure about any other config. As for the interface not coming up, I’d imagine this is because there is something wrong with the config preventing the application from starting. Then again, if that were the case, I'm not sure the IP would be pingable as the docker wouldn’t start. What does the log say?
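If it helps, the container log is quick to check from the unRAID terminal - a sketch, assuming the container is actually named pihole (use whatever name yours has):
docker ps -a --filter name=pihole   # is the container running, restarting or exited?
docker logs --tail 50 pihole        # the last lines usually say why the web interface / FTL won't start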
  12. Sure, but there is nothing novel I have done to expand on. I followed the @SpaceInvaderOne guides (noting what I have posted above) almost to the letter for a setup with my own domain name and using Cloudflare DNS. The only issue I had was when I used the Cloudflare proxy service - which I ended up turning off - but that had nothing to do with Pihole. I also used LAN IP addresses in my config rather than local DNS (e.g. x.x.x.x rather than mariadb.local.tld) so that resolution was not required. The router provides DHCP and reserves IP addys. I assume you had Nextcloud working perfectly using the Letsencrypt docker before you installed Pihole?
  13. Ah, bugger. Still, I'd love a button on there which would allow me to raise it with one click no matter where I am in the GUI, rather than having to click through the menus. Saying that, not exactly a big issue.
  14. Hey @dlandon, I hope you're well. Not surprised that this thread isn't the most active, as it is a rock solid piece of work and (barring only CA - which makes installing this very easy) is pretty much the first plugin I think someone should install. Following that same train of thought... would it be a difficult enhancement to the plugin to allow the option to replace the current syslog link with the enhanced syslog view? Much like what is done with the terminal link and the command line plugin by @dmacias?
  15. For those, like me, who run Asuswrt-Merlin, I found (and successfully followed) this guide from a Reddit user to set up Pihole on your network utilising this specific firmware's "DNSFilter" function. Paraphrased from a section of the post: It will force all LAN DNS requests back to the router's LAN DNS settings, with your Pi-hole (and unRAID in my case, so that unRAID can still make a successful DNS request before the container starts) as a no-filtering exception, meaning any device on your network, whether it is trying to use its own DNS or not, will be forced upstream to your Pi-hole because of the DNSFilter rule.
  16. This is what I did. I am sure there is a better way, but these are the steps I took.
- Removed the current App (and image) from within the docker tab (this doesn't touch the folder with all the current app settings in the appdata folder).
- Installed the new App from within the Apps tab (CA) with the same network and path settings as the old App.
- Stopped the new App in the docker tab.
- Deleted all config files within the new folder created in appdata for the new App (no need to back them up as they are fresh and can be pulled anytime).
- Copied all files from the old appdata folder to the new one (see the sketch below).
- Started the App within the docker tab.
- Once confirmed that the App is working, deleted the old appdata folder.
All working fine.
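The copy step itself was just a straight copy of the old config into the new folder - something like this from the unRAID terminal, with old-app and new-app standing in for whatever the two appdata folders are actually called:
cp -a /mnt/user/appdata/old-app/. /mnt/user/appdata/new-app/   # -a preserves ownership and permissions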
  17. Just had a notice that there was a new version of Nextcloud, v18.0.4. Successfully upgraded utilising the Nextcloud web GUI updater. Had one hitch that I thought I would share. The upgrade steps check that the right files are in the appropriate dir before upgrading and won't proceed if there are unexpected files found. Fine for most, right? OSX: Hold my beer!! As I had previously browsed those folders, you guessed it, .DS_Store files were present and this stopped the updater from proceeding. In fact the exact error was: "Check for expected files // The following extra files have been found // .DS_Store" You can't set Nextcloud to ignore these files, so I had to manually get rid of them. Didn't want to mess around with Finder and turning on show hidden files, so did this:
root@unraid:~# cd /mnt/user/appdata/nextcloud/
root@unraid:/mnt/user/appdata/nextcloud# find -name ".DS*"
./www/.DS_Store
root@unraid:/mnt/user/appdata/nextcloud# cd www/
root@unraid:/mnt/user/appdata/nextcloud/www# rm .DS_Store
root@unraid:/mnt/user/appdata/nextcloud/www#
I was able to click try again in the Nextcloud updater web GUI and, now the file had gone, it proceeded and completed without an issue.
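If there had been more than one of them scattered about, a recursive find/delete would have done the same job in one go - a sketch (run the -print version first to see what would be removed):
find /mnt/user/appdata/nextcloud -name '.DS_Store' -print    # list every .DS_Store under the Nextcloud appdata folder
find /mnt/user/appdata/nextcloud -name '.DS_Store' -delete   # then delete them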
  18. What would be the easiest way to migrate to the new image without losing any settings or data? Is it as simple as deleting the image, renaming the appdata folder to match the new name and then installing the app again?
  19. What an excellent little app. Time was well invested in setting things up inside the app once installed. I have a newly pregnant partner here who is relishing being able to transparently build the shopping list for me to fulfil (as this is my job while COVID-19 is here) without the usual txts etc. Sigh. LOL! Anyway, thanks for making this into a docker, boys. Special thanks for also including a default proxy.conf file in the letsencrypt docker too. Made setting it up for external access via https a doddle.
  20. Just out of curiosity, what advancement do you think unRAID could make that would impact the ability of SABnzbd to send you a notification email? unRAID has the ability to input email SMTP details for notifications and it works just fine. As I understand it (and I acknowledge that I have not set this up or tested it), SAB also has the ability to do this too. I have many containers set up to send emails, so if SAB doesn't do it I would have thought that sits with the developer of SAB, not unRAID, to fix.
  21. I am not sure of your setup, BUT I have Pihole set up with the default blocklists as well as a great deal more. I have just tried to update a plugin and it worked fine. If you have set up Pihole with its own IP AND you don't have the advanced docker option checked which allows the host to communicate with custom networks, then I am going to guess that your issue relates to your unRAID server not being able to use Pihole as a DNS server and therefore not being able to resolve the address that is used for updates. Assuming I am right, you should add DNS servers - say Cloudflare's (1.1.1.1 and 1.0.0.1) - to your network config, meaning that, independent of your Pihole setup, unRAID can always resolve the addresses it needs to resolve whether the container is started or not. To me that is a no-brainer, as there is nothing unRAID does that you would want to block. In fact, you should do this anyway, otherwise when you go to restart unRAID (when the container is not started) it is not going to be able to do its network call-home thing that it does (assuming it still does that), meaning you might not even be able to start the server.
  22. Another guide followed and another successful and easy installation. Thanks to @SpaceInvaderOne and @spants for helping to make that happen. I did do a few extra things which made the setup a bit more seamless:
- Decided to set static DNS entries for unRAID to Google DNS, to avoid issues with boot up and the unRAID call home "feature" (this works great for my setup as most of my dockers have their own IP and therefore get their DNS set automatically by the router, which hands out Pihole - the only exceptions to that are the services which run on either host, proxynet or br0, which are few).
- Set unRAID to have a static IP as opposed to DHCP-reserved, to try and avoid any future DHCP / DNS complications.
- Set 'Use Conditional Forwarding' in the Advanced DNS settings of Pihole to my router, as my router is still the DHCP server and also stores the local domain, reserved IP assignments and hostname settings. Works great - all my LAN hosts still resolve.
- Updated the blocklist with everything here (https://firebog.net) with a "tick", so as to not interrupt the browsing experience, as well as updating the whitelist with the suggestions at that link too.
What a great piece of software. P.S. I did find that I had to change my Nextcloud settings (and will have to with any other services which run on unRAID that don't have their own IP) for my mariadb hostname to the IP address of the docker, as unRAID was no longer getting its DNS from the router and couldn't resolve the local DNS hostname.
  23. Thank you. It's nice to have the benefit of someone else doing the same to give me a little confidence that it's ok.
  24. I was wondering if someone would mind casting their eye over what I have done to be able to resolve dockers via my external app.domain.tld BUT limit access to just client requests from my LAN. Over in this thread I was investigating how I might do that: In short, I set up the application via the reverse proxy like any other public facing app (e.g. with DNS to my external site and through the reverse proxy). Result: available through http://app.domain.tld as expected, but not desired. I only want it to be accessible via requests from my LAN. So I put this code into the location block of the app.subdomain.conf file that sits within the proxy-confs folder of my letsencrypt setup:
location / {
    # allow anyone in 192.168.1.0/24
    allow 192.168.1.0/24;
    # drop rest of the world
    deny all;
}
The result: I can access https://app.domain.tld from the LAN but NOT the Internet - which returns a 403. Excellent!!!! I just want to make sure that I am not missing some obvious security issue here which would make the app accessible from the internet or worse.
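For what it's worth, the way I'd sanity-check a rule like this is simply to hit the URL from both sides and compare the status codes - a sketch:
# from a machine on the LAN - expect a 200 (or the app's own redirect to its login page)
curl -I https://app.domain.tld
# from outside the LAN (e.g. a phone on 4G) - expect a 403 from nginx
curl -I https://app.domain.tld
One thing worth double-checking is that the allow/deny lines sit in every location block that proxies the app (some of the proxy-confs ship with more than one), otherwise a sub-path could slip through.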
  25. Thanks for the reply. It was very thought provoking and I think I have done it. I have my own domain and, as I mentioned in my OP, I use it to serve the sites I want to access externally. I considered a separate instance of nginx, local IPs in the public DNS record etc etc. I was sat with my glass of wine last night and I settled on the fact that I MUST have been overthinking this. This is a basic access issue. Then I stumbled on it ... I set up the application via the reverse proxy like any other public facing app (e.g. with DNS to my external site and through the reverse proxy). Result: available through http://app.domain.com as expected, but not desired. I only want it to be accessible through the LAN. So I put this code into the corresponding location block of the app.subdomain.conf file that sits within the proxy-confs folder of my letsencrypt setup:
location / {
    # allow anyone in 192.168.1.0/24
    allow 192.168.1.0/24;
    # drop rest of the world
    deny all;
}
The result: I can access https://app.domain.com from the LAN but NOT the Internet - which returns a 403. Excellent. Yes, it uses external DNS, but as you put it, I think that is a good thing. I am going to check in on the support thread of letsencrypt to see if I have missed any major security flaw here, but I don't think I have! Thanks for being my muse in this. I appreciate it.