danioj

Posts posted by danioj

  1. My understanding is that iDRAC ports are similar to Supermicro's (and others') dedicated server-management ports (i.e. you cannot use them as an available interface for your OS - in this case unRAID).

    You “could” (I think) go down the route (if you have a managed switch) of using your spare RJ45 port as the pfSense port (available to the VM as a bridge) and using VLANs for LAN and WAN. I won't elaborate on that.

    I think the best bet, though, is to use your Intel SFP+ card. I'm assuming you're not going 10G here and just want to use your SFP+ ports as additional RJ45 ports for pfSense.

    A quick search turned up this link from someone looking for similar SFP+ to RJ45 modules for the same Intel card:

    https://forums.servethehome.com/index.php?threads/intel-82599es-transceiver-compatability-sfp-to-rj45-1g.25729/

    If you can get ones that work, then I imagine you can just pass that card through to the VM (assuming your server supports it) and use the ports for LAN/WAN respectively.

    Then off you go.


    Sent from my iPhone using Tapatalk

  2. I’d love the ability to assign multiple networks to a single Docker container. 
     

    The simplest use case involves pihole. Currently I run separate instances of a pihole container for each of my VLANs. It would be great if I could just attach one container to each network and configure pihole to listen on all assigned interfaces.
     

    I'm pretty sure you can do it via the CLI using 'docker network connect <network> <container>' or something, but it would be excellent to be able to do it in the GUI.
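
    For anyone curious, a rough sketch of the CLI route (the network and container names below are just examples, so adjust to suit - and I suspect anything added this way would be lost when unRAID recreates the container from its template):

        # attach an already-running pihole container to additional custom networks
        docker network connect br0.10 pihole
        docker network connect br0.20 pihole

        # confirm which networks the container is now attached to
        docker inspect -f '{{json .NetworkSettings.Networks}}' pihole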

  3. I currently have all my networks restricted to using pihole, which is set to use unbound (locally hosted in pfSense) to resolve DNS queries (where the request isn't cached). I do not use any external DNS service. What benefit would introducing this software into my setup give me?

  4. As someone who uses VLANs in my setup, I'd love the ability to choose which interface(s) the management features of unRAID (e.g. the GUI, SSH etc.) are bound to - a single selected interface or all of them.

     

    I'd like to have unRAID on my management VLAN, along with my other network gear, and on my main VLAN, where all my file access occurs.
     

    I can do this, of course, but I'd like to be able to prevent clients on the main VLAN from accessing the GUI, SSH etc. and just have file sharing enabled there (this isn't possible via firewall rules, as traffic on that VLAN doesn't go back to the router to trigger a block rule).
     

    Right now I can deploy unRAID to any number of VLANs, but on each assigned IP (which is required for file sharing) the GUI and SSH are also accessible.

  5. Sometimes the best way to answer a question is to write it down. That worked in my case: I re-read what I wrote, thought "that sounds strange - RTFM!!", and went and read the Docker documentation. Within 2 minutes I found what I needed.

     

    https://docs.docker.com/config/containers/container-networking/

     

    Key part being:

    Quote

    DNS services

    By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.

    Custom hosts defined in /etc/hosts are not inherited. To pass additional hosts into your container, refer to add entries to container hosts file in the docker run reference documentation. You can override these settings on a per-container basis.

     

    This makes it (sort of) clear why the option doesn't exist in unRAID. Either way, I was able to add --dns X.X.X.X to the docker run command via the advanced mode on the container setup page and, lo and behold, external DNS was set in the container.
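
    For completeness, this is the sort of thing it boils down to at the docker run level (the image name and addresses below are examples, not my exact config):

        # run the container with explicit DNS servers instead of inheriting the host's
        docker run -d --name pihole \
          --dns 192.168.20.1 --dns 1.1.1.1 \
          pihole/pihole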

  6. Hello,

     

    The unRAID interface does not allow for specifying DNS settings. One would assume this is by design (or by restriction), as it seems too obvious a thing to have been omitted by accident.

     

    I have set my secondary NIC up against a port on my switch which is assigned to a VLAN that pfSense routes through my VPN gateway. I have the interface set to grab an IP from my VLAN's DHCP server (which it does), but I'd like to just set the VPN DNS server directly rather than going the way the rest of unRAID is set up (unRAID > pihole > pfSense (unbound)).

     

    I have bridging enabled on that interface, which I then use to assign an IP to a Docker container. This works great, and traffic is routed as I expect through my VPN via the VLAN. That is, until I need name resolution.

     

    My question is: how do I achieve what I want (being able to set different DNS servers for the secondary interface), assuming that, as I say above, the current behaviour is by design or restriction?

     

    I am creating a new topic because a number of similar topics have been created in the past but none have answers. I cannot find a topic through the forum search that answers this question.

     

    Any info on this would be helpful.

     

    Thanks

     

    Daniel

  7. Hello All,

     

    I am optimising my network design to ensure that potentially threatening devices (IoT, cameras, guests etc.) are segregated from my main internal network. I will use VLANs for this.

     

    I intend to use a second NIC to give my unRAID server access to both my main LAN and my camera VLAN. It's easier this way, as I don't have to set up inter-VLAN routing.
     

    pfSense makes it easy for me to restrict all but SMB traffic between the cameras and unRAID (thus protecting unRAID).
     

    The only thing I cannot figure out is how to restrict access to shares by network. What I'd like is for my camera "user" to be able to log on only while on the camera VLAN and, once it does, to ONLY be able to access one share. This would also mean no other user could log in to the server from the camera VLAN.
     

    The threat I am trying to defend against is a device on that VLAN becoming compromised, opening access to the server and, through luck and/or other means, getting access to my other shares.

     

    Is there a way to restrict share access based on network in the OS that anyone knows of?
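
    To illustrate the kind of control I mean, plain Samba can do something like the below per share in smb.conf (the share name and subnet are made up for the example) - what I can't see is how to get the equivalent through unRAID's share settings:

        [cameras]
           path = /mnt/user/cameras
           valid users = camera
           # only accept connections from the camera VLAN subnet
           hosts allow = 192.168.30.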

     

    Thanks

     

    Daniel

  8. Are you after backup or file sync? Each has its own use cases and they are different (although they can serve similar goals in some cases).

    If it's the former then I use Duplicati (Docker available).
    If it's the latter then I use Syncthing (Docker available).

    Both are stable and solid. Duplicati is true backup in that it has versioning, archiving etc. Syncthing is best used to keep locations in sync, and can be used for backup with the caveat that it's bidirectional.

    Not sure what feature in Nextcloud you could use as your backup solution, unless there is an app you can install within it that does something similar to the above.

  9. I started a thread in 2015 which culminated in my current setup:

     

    Main Server

     

    M/B: Supermicro X10SL7-F Version 1.01

    CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz

    Memory: 32 GiB DDR3 Single-bit ECC

    Disk Capacity: 71 TB w / Dual Parity

     

    Backup Server

     

    M/B: ASRock C2550D4I Version

    CPU: Intel® Atom™ CPU C2550 @ 2.40GHz

    Memory: 12 GiB DDR3 Single-bit ECC

    Disk Capacity: 15 TB w / Single Parity
     

    I have no issue at all with my data requirements. These machines are NAS workhorses and serve my needs perfectly. Where I am starting to come up short is with the 'extra' things I'd like to do with my servers.
     

    I'm not a 'one box rules them all' fan: given I have a family, and with the way unRAID works, taking the array down (or a problem with a single disk) can bring your house to a halt, so I stay away from running everything from one machine. I'd love to do more with my backup server, but it doesn't support VT-d, so I only have it power on once a day to take the backup from the main server.
     

    I run Home Assistant and piholes (mainly due to the ridiculous writes this thing does) on little NUCs sat alongside my servers.

     

    All my servers connect into one gigabit switch via dual-NIC aggregated bonds (except the dedicated IPMI ports, which I feed into the router), and that switch connects into an Asus AC88U.
     

    On my main server I currently run a LibreELEC VM which powers my living room TV, plus a number of Dockers including the Plex server behind it. I have an Nvidia card in there (on a riser cable), which means I don't transcode, and all my other house clients are RPi4s. A 4-port SATA extender has allowed me to increase my drive capacity to 16 disks (mostly 8TB with a few original 3TB for data and parity, plus a 1TB SSD for cache and a 259GB SSD for the VM).

     

    My backup server is full of WD 3TB Greens, with a mechanical drive as cache (running the backup Docker and a few others that aren't relevant here).
     

    A good setup all in all (except some of my SATA ports on my Backup server are damaged - not an issue as the Case only allows 8 disks).

     

    As I have implied, my needs are expanding. I don't, however, want to move away from unRAID, so I am thinking about how best to do that.
     

    I am preparing to move into a new home. I want to set up an NVR (BlueIris) VM which will run 8 PoE cams. I'm going to need PoE switches for the cams and a better solution for WiFi, as my new home is massive and while the little Asus worked in an apartment it won't work in a large single-storey house. So I'm going to need PoE switches, patch panels and access points; those are a no-brainer, as UI has excellent products for them. I'm going to go with pfSense for my router and, as I mentioned, BlueIris for my NVR - I'm not going anywhere near the 'alpha' UDM Pro and their NVR solution in 'Protect'.
     

    Which means I will need to expand my current setup to add surveillance drive(s) and a VM powerful enough to run BlueIris. I will likely move my TV client to an RPi4, as the server and equipment will end up in the garage (and not near the TV as it is now). I'd also like to be able to virtualise things on my backup server (e.g. while I'm not a 'one box rules all' fan, it would be a perfect host for pfSense if I could pass through a 4-port NIC), but I can't. I keep hoping LT will allow VMs without the array started, but I won't hold my breath.
     

    I'm open to my backup server running 24x7 like my main server (given my pfSense comment above), but I'll need to upgrade it to get virtualisation support.
     

    So ....

     

    What I am thinking is this:

     

    Move my current mITX Supermicro main server setup into my backup server case. Move my mITX Atom backup server setup into an SFF case (with an external PSU) as the base for a pfSense router (using the 4-port and 2-port Intel NICs I have in a drawer upstairs), and ultimately build a new 2020 main server. If I end up getting a rackmount case for the main server (see below), I'll ditch the Silverstone DS380 that houses my backup server and use my main server's Fractal Define R5 case for the backup server instead.

     

    As you can see from my current main server, I didn't cheap out on components. I went with well-recommended server-grade products of the time for all the features (ECC, IPMI etc.), and I don't want to lose them. Unlike back then, the wiki doesn't have a current recommended build and I am not sure where to turn (given we now have Ryzen with more cores than a MF) or how to proceed. My initial thought was to just buy the natural upgrade to the X10SL7-F, but there isn't one!!! Certainly not one with as many SATA ports either, so I'm going to need an expansion card. Before I know it I'm outgrowing my current case (in my head), needing more PCIe slots, and thinking (given the rackmount network equipment I'm buying) that I should build into a rack case - however the selection there isn't simple either. Silverstone had a great hot-swap 16-disk 2U case but they discontinued it. Others I've seen look crap.

     

    Long story short (too late, I know): I believe I built the 2015 SOHO build, and now I am after the 2020 SOHO Plus build.
     

    I know we all (deep down) love talking hardware, and this is the prelude to an updated version of the build thread I did in 2015, so I am after suggestions, comments and general discussion on how to move forward!

     

    Much like my journey in 2015, this should be a fun 6-12 months!!! Glad it's my 40th birthday year and that I have approval to spend some $$!!

  10. Without some specific errors I’m not sure I can help. When I was running Core in Docker in unRAID mine never dropped off or stopped working. I had morning, daytime, evening and night automations and all worked every day.

    I did have an issue with some of my TP-Link switches (configured so HA saw them as lights) not "reconnecting" each day. This was because I had my Dockers set to be backed up at 3am each morning, and there is a bug in the TP-Link firmware when you have them running locally instead of via the cloud.

    Other than that (which really had nothing to do with HA) my setup was solid.


    Sent from my iPhone using Tapatalk

  11. Yes, I did (until recently) and it worked flawlessly for months. When you say unstable, what do you mean?

    Now I run it in a VM on a dedicated NUC running Ubuntu.

    Taking the server down for maintenance, tweaking or WhateverTF was not very wife or family proof. One machine to do it all doesn’t really work for me these days.


    Sent from my iPhone using Tapatalk

  12. On 6/10/2020 at 10:34 PM, jonathanm said:

    No, but the VM sure appreciates being run on a faster system. Think of the KVM host as the motherboard. If you have a slow board, the fastest CPU and huge amounts of RAM don't help. VM's rely on their basic services to be programmatically emulated (created on the fly) by the host. Choke the host, the VM suffers.


    All these folks trying to absolutely maximize the amount of resources they allocate to their VM guests are shooting themselves in the foot. For tuning purposes, give the VM the absolute bare minimum it needs on the box spec, and slowly add more until performance no longer improves. The basic I/O for the VM is all being handled by the host, and as is obvious to anyone who has swapped out an old spinner hard drive for a SSD in a bare metal box, I/O is pretty much the limiting factor in how fast a machine feels. Adding CPU horsepower and extra RAM doesn't help at all once the basic need for them has been met.


    All the RAM and CPU you allocate to the VM are locked away from helping the host perform, pretty much being wasted. Linux does a really good job with "extra" CPU and RAM, it's a whole lot better to let the host have them.

    This was a very interesting and informative post. So much so that I think it is a candidate for a sticky.

     

    My big takeaway from this is that there really is no "minimum hardware requirement" for unRAID when you are using much of its virtualisation capability and have also added various plugins.

     

    Using your advice, I found the "sweet spot" for my configuration, where the VM doesn't stutter and the host has more than enough resources.

     

    Thanks for your help.

  13. On 6/6/2020 at 10:48 PM, jonathanm said:

    As a test, back that down to 2 of 8 and see how the VM performs.

    Woah, that worked. No stutter. It's not like the VM was using that many cores even though it had them allocated, and there were always at least 2 left for unRAID. Surely unRAID doesn't need all 6 of the cores now not allocated to the VM.

     

    Is this a known bug?
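
    For reference, I made the change through the unRAID VM edit form, but I believe the rough CLI equivalent on a plain libvirt/KVM box would be something like this (the domain name is just an example):

        # persistently reduce the guest to 2 vCPUs; takes effect from the next boot
        virsh setvcpus LibreELEC 2 --config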

  14. No worries. We all come a cropper of decisions we made at setup, only to find they sometimes impact smooth operation later.

    AFAIK there is no way of backing up without stopping. If there was I am sure the CA dev would have implemented it that way.

    You could just let all users know that each day there will be downtime. I’d say your only way forward if you don’t want to do that though is to choose a time of day when you have least usage (maybe 5am) to limit impact. Or you could reduce the frequency of your backups.


    Sent from my iPhone using Tapatalk

  15. Hi,

     

    I have a weird issue that I cannot resolve. My living room TV is driven by a VM on my unRAID server running LibreELEC, playing media that lives on the same server. The media is spread across many different disks.
     

    My Server Hardware is:

     

    M/B: Supermicro X10SL7-F Version 1.01 - s/n: NM149S013462

    BIOS: American Megatrends Inc. Version 3.2. Dated: 06/09/2018

    CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz

    HVM: Enabled

    IOMMU: Enabled

    Cache: 256 KiB, 1024 KiB, 8192 KiB

    Memory: 32 GiB DDR3 Single-bit ECC (max. installable capacity 32 GiB)

    Network: eth0: 1000 Mbps, full duplex, mtu 1500 

     

    No other VMs are running, and I don't have a great deal of resource-intensive Dockers running either.

     

    The VM is stable as anything. It plays 4K video perfectly, even while a couple of other clients are streaming, transcoding etc. The VM is allocated 6 of 8 cores and 8GB RAM.

     

    That is, until I access the unRAID GUI. When I do that, the video playback / VM stutters for 2 seconds. 
     

    I've observed a brief spike in CPU usage on one or two of the cores at the same time, but nothing across all of them and nothing reaching 100%.

     

    I thought it might be due to the cache disk also hosting the other Dockers and their IO, so I switched it to an SSD outside the array on UAD. No difference.
     

    Weirdly, it only happens on the Dashboard and Main tabs - none of the others.
     

    There is absolutely nothing in the logs and I have trawled the diagnostics. I've tried different videos from each disk, of varying size and type. I've disabled all Dockers and plugins and run just the VM, and still, when I access the unRAID GUI, the video stutters. Similarly, I've stressed the server to the point of overload and, with my settings, the VM plays perfectly until I access the GUI, then it stutters.
     

    I have no idea how to progress this. 

  16. Would someone mind taking the trouble to explain to me how unRAID enables 'Host access to custom networks'?

     

    It's just not viable for me to have pihole running on unRAID, so I have set up a separate machine running Docker, which I administer via Portainer.

     

    However, the host Ubuntu OS can't access pihole itself and therefore can't resolve DNS (as I push all DNS traffic on my network through pihole). I can't for the life of me figure out how it's done in Docker. All unRAID has is a checkbox.
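
    The closest I've got from poking around is that it seems to come down to a macvlan "shim" interface on the host, roughly like the below (the interface names and addresses are my guesses, and I can't confirm this is exactly what unRAID's checkbox does), so confirmation or correction would be appreciated:

        # create a macvlan interface on the host, attached to the same parent as the Docker custom network
        ip link add shim-br0 link br0 type macvlan mode bridge
        ip link set shim-br0 up

        # give the shim an address of its own, then route the container's address via it
        ip addr add 192.168.1.250/32 dev shim-br0
        ip route add 192.168.1.200/32 dev shim-br0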