Network isolation in unRAID 6.4



Docker containers cannot access the host address; this is by Docker design. They are segregated.

unRAID allows ssh, telnet, and GUI access on any active network interface (VLAN or physical). If you don't want ssh, telnet, or GUI access via a VLAN (or physical) interface, you should create corresponding firewall rules and block the specific ports.

It is possible to bind ssh, telnet, and the GUI to a single interface (IP address) only, but this requires manual changes to the service configuration.
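As a sketch of what such a manual change could look like for sshd (the address is a placeholder, and on unRAID anything under /etc is rebuilt at boot, so it would need re-applying from the go script):

```shell
# Example only: bind sshd to a single IP address (10.10.1.2 is a placeholder).
# On unRAID, /etc/ssh/sshd_config is recreated at boot, so this change must be
# re-applied at startup (e.g. from /boot/config/go).
echo "ListenAddress 10.10.1.2" >> /etc/ssh/sshd_config
/etc/rc.d/rc.sshd restart
```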


40 minutes ago, bonienl said:

Docker containers cannot access the host address; this is by Docker design. They are segregated.

unRAID allows ssh, telnet, and GUI access on any active network interface (VLAN or physical). If you don't want ssh, telnet, or GUI access via a VLAN (or physical) interface, you should create corresponding firewall rules and block the specific ports.

It is possible to bind ssh, telnet, and the GUI to a single interface (IP address) only, but this requires manual changes to the service configuration.

 

I am not too concerned about the Docker containers themselves, but about who is coming in through the open ports the containers require. Since my UniFi Docker container requires ports to be opened on my firewall, I do not want my unRAID server to be in the same network where I have opened ports, even if those ports are mapped to the Docker container.

 

I just don't want my unRAID server to be accessible on the VLAN interface, because I would like to put any services that require open firewall ports into their own network, with no access to my private network, including my unRAID server.

 

Update:

Let's assume a Docker container with its own IP is somehow compromised. Since the unRAID server's VLAN interface is in the same network as the Docker container, how can I protect it with firewall rules? Because the two are in the same network, traffic between them never hits the firewall router.

28 minutes ago, mifronte said:

Let's assume a Docker container with its own IP is somehow compromised. Since the unRAID server's VLAN interface is in the same network as the Docker container, how can I protect it with firewall rules? Because the two are in the same network, traffic between them never hits the firewall router.

 

Correct, but a Docker container is not able to communicate with unRAID; this is prevented locally.

8 minutes ago, bonienl said:

 

Correct, but a Docker container is not able to communicate with unRAID; this is prevented locally.

 

But I don't think it is prevented from the network side. If someone hacks into the Docker container, they can try to attack unRAID through the VLAN interface, because when I run:

docker exec -it unifi bash

I get into the Docker container as if it were a host on the network. Wouldn't unRAID be accessible via the network?


Docker containers are isolated from the host, no matter whether they are on the same or a different network.

 

Right now the only way to disallow unRAID access over different interfaces or VLANs is to set no IP address for those interfaces or VLANs. This, however, requires manual configuration of Docker custom networks through the CLI.
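The manual CLI configuration mentioned here would look roughly like this (a sketch; the sub-interface, subnet, image, and names are examples): create a macvlan network on the VLAN sub-interface, so containers get their own address on that VLAN while unRAID itself holds no IP there.

```shell
# Sketch: Docker custom network on a VLAN sub-interface (names/subnets are
# examples). unRAID itself has no IP on br0.30, so it is unreachable there.
docker network create -d macvlan \
  --subnet=10.10.2.0/24 \
  --gateway=10.10.2.1 \
  -o parent=br0.30 \
  vlan30

# Attach a container with a fixed address on that VLAN.
docker run -d --network=vlan30 --ip=10.10.2.50 --name=unifi linuxserver/unifi
```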

 

I am going to look into improving this situation, avoiding CLI configuration and making easy GUI configuration possible.


@bonienl By any chance, are you seeing that segregated Docker containers on VLAN interfaces use the wrong DNS server? Instead of the VLAN interface's (e.g. br0.30) DNS server, they use unRAID's primary interface's (e.g. br0) DNS server. This breaks DNS for the containers on br0.30, since br0's DNS server is not reachable from the VLAN.

 

I placed my Docker container in a VLAN as specified in the first post, and Docker is trying to use the primary interface's DNS server. Here is my setup:

eth0: 10.10.1.0/24 with DNS 10.10.1.1
VLAN 30: 10.10.2.0/24 with DNS 10.10.2.1

However, Docker containers on br0.30 are configured with DNS 10.10.1.1, not 10.10.2.1. I have to use the --dns option to supply the correct DNS server.
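The workaround looks like this (a sketch, assuming the custom Docker network is named br0.30 and re-using the example image/addresses from this thread):

```shell
# Pass the VLAN's resolver explicitly so the container does not inherit the
# primary interface's DNS server (addresses/names are examples from above).
docker run -d --network=br0.30 --ip=10.10.2.50 --dns=10.10.2.1 linuxserver/unifi
```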

On 1/29/2018 at 10:22 PM, mifronte said:

It would be great to be able to specify, for each VLAN interface, whether the unRAID server should be accessible. I don't really know what this entails, but I have faith.

 

Yes, with the latest update it is possible to do this for interfaces and VLANs.

 

3 hours ago, mifronte said:

@bonienl By any chance, are you seeing that segregated Docker containers on VLAN interfaces use the wrong DNS server? Instead of the VLAN interface's (e.g. br0.30) DNS server, they use unRAID's primary interface's (e.g. br0) DNS server. This breaks DNS for the containers on br0.30, since br0's DNS server is not reachable from the VLAN.

 

I placed my Docker container in a VLAN as specified in the first post, and Docker is trying to use the primary interface's DNS server. Here is my setup:

eth0: 10.10.1.0/24 with DNS 10.10.1.1
VLAN 30: 10.10.2.0/24 with DNS 10.10.2.1

However, Docker containers on br0.30 are configured with DNS 10.10.1.1, not 10.10.2.1. I have to use the --dns option to supply the correct DNS server.

 

No clue. By default Docker reuses the same DNS settings for all interfaces. This works properly in my case.

26 minutes ago, mifronte said:

Are you not running a local DNS, or do you allow DNS queries to traverse your VLANs?

 

I have set up my router as a DNS proxy. It can forward queries on any network (VLAN) connected to it.


Installed 6.4.1 and set the VLAN interface to no IP for unRAID, then configured Docker with the appropriate network. Everything looks good. The UniFi Docker container is in a separate VLAN, and unRAID is not available in that VLAN. Complete segregation achieved! Great job @bonienl!

 

Now, just out of curiosity: if I have another VLAN where I want unRAID selectively available, but with only certain shares and no other services, would that be feasible some time in the future?

 

For example, say I have these shares on my unRAID server: Sales, Finance, Engineering, Executives. In the Sales VLAN, unRAID would be available with just the Sales share, and so on for the other VLANs. The management web GUI would not be available, nor any other services or ports. It would be like a Docker container that lets you specify which SMB shares to expose and the permissions allowed; that way I could run multiple instances of the container, each in its own VLAN.
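If this were done with stock Samba, a hypothetical smb.conf fragment might look like the following (the share path, subnet, and file name are invented for illustration; unRAID does not currently expose such a feature):

```shell
# Hypothetical example: restrict one share to the Sales VLAN's subnet using
# Samba's hosts allow / hosts deny directives.
cat >> /etc/samba/smb-custom.conf <<'EOF'
[Sales]
    path = /mnt/user/Sales
    hosts allow = 10.10.40.0/24
    hosts deny = ALL
EOF
```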

 

Why would I want to do this? I am just looking ahead to a future where I become less trusting of all these "smart devices". I can see having a VLAN where these devices are segregated from my main LAN, but giving them access to just certain shares while unRAID is otherwise completely isolated. Much like a Docker container: if for some reason the container is compromised, my unRAID server is still safe.


I am struggling to connect two of my containers together.

 

Basically, I have NextCloud running on "br0" and MySQL running on "bridge". I'd like to connect to MySQL from the NextCloud container, but it is not reachable when NextCloud runs on "br0". What steps should I take to be able to reach MySQL's TCP port 3306 from my NextCloud container?


Thanks in advance

 

  • 4 weeks later...

So I am currently using Unraid 6.4.1. My networking gear is Ubiquiti, including a USG router and UniFi switches.

I currently run multiple Docker containers for Plex, Sonarr, SABnzbd, Deluge, etc.

The way I have it set up is: when I create a Docker container, I assign it a static IP in the same range as my DHCP server on the USG, which is set to 192.168.1.0/24.

So I set the network type to eth0 with a fixed IP address: 192.168.1.210 for Deluge, 192.168.1.211 for Plex, etc.

On my USG, each container shows up with its corresponding IP address, and I assign it a static IP and a name: Plex server, Deluge server.

 

This seems to work well for me. My question is: am I doing it wrong? Is there any disadvantage to doing it this way?

Is there a better way?

 

Thanks in advance


Only that a Docker container could potentially take an IP already assigned by the DHCP server if you forget to assign the static IP.

Ideally, your DHCP server's range should be a subset of 192.168.1.0/24, like 192.168.1.128/25, with your Docker network for eth0 set to 192.168.1.64/26, so as to prevent possible collisions.

It's just like a network served by a DHCP server where a user adds a device and randomly picks an IP without consulting the DHCP leases. The DHCP server probably checks before handing out a used IP, but if something was already using it and then the container comes online...
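With Docker's macvlan driver, the split suggested above can be expressed directly (a sketch; the interface and network name are examples): the network spans the full /24 so containers can reach everything on it, but automatic assignment only draws from the /26 pool that sits outside the DHCP range.

```shell
# Sketch: containers live in 192.168.1.0/24, but Docker only auto-assigns
# addresses from 192.168.1.64/26, away from the DHCP pool 192.168.1.128/25.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --ip-range=192.168.1.64/26 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  homenet
```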

 

  • 3 months later...

I hope this isn't considered a necropost, but I seem to be having an issue with Unraid connecting to my containers over the br0.5 I created. Is this supposed to be blocked? From the Unraid console:

 

root@Nexus:~# ping 10.0.1.5
PING 10.0.1.5 (10.0.1.5) 56(84) bytes of data.
From 10.0.1.6 icmp_seq=1 Destination Host Unreachable
From 10.0.1.6 icmp_seq=2 Destination Host Unreachable
From 10.0.1.6 icmp_seq=3 Destination Host Unreachable
From 10.0.1.6 icmp_seq=4 Destination Host Unreachable
From 10.0.1.6 icmp_seq=5 Destination Host Unreachable
From 10.0.1.6 icmp_seq=6 Destination Host Unreachable
^C
--- 10.0.1.5 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6136ms
pipe 4

And this is what my route looks like:

 

root@Nexus:~# ip route
default via 192.168.1.1 dev br0 proto dhcp src 192.168.1.44 metric 217
default via 10.0.1.1 dev br0.5 proto dhcp src 10.0.1.6 metric 219
10.0.1.0/24 dev br0.5 proto dhcp scope link src 10.0.1.6 metric 219
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-17bf4a1665ee proto kernel scope link src 172.18.0.1 linkdown
192.168.1.0/24 dev br0 proto dhcp scope link src 192.168.1.44 metric 217

Unraid can ping itself (10.0.1.6) and the gateway (10.0.1.1), but not any of the Docker containers using the same br0.5 tagged network. I don't think it is "by design"; it seems like a configuration issue on my end. I followed all the instructions in the guide, and I'm running a UniFi managed switch, USG, and Cloud Key. Everything else seems to be working; I can access the br0.5 containers from any other device on the network.

 

 

 


This is by design.

What you should do is not assign an IP address to the br0.5 interface under unRAID Network Settings, then set up the interface as a Docker custom network.

It will then work, but traffic must cross the router for unRAID and the Docker containers to talk to each other.

  • 1 month later...

I'm attempting to follow @bonienl's instructions for setting up Docker containers and VMs on a separate VLAN.

The plan is:

  • 192.168.1.1 - main network
  • 192.168.5.0 - docker network

However, in the Unraid VLAN settings, when I try to assign 192.168.5.0 (or anything 192.168.x.0), the routing table always shows this as 192.168.0.0. Why is this?

Additionally, setting the Docker VLAN IPv4 assignment to automatic assigns an IP address, but the network does not show up in the Docker network settings dropdown.

1 hour ago, bonienl said:

 

You need to use a /24 network mask instead of /16.
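Why the /16 produced 192.168.0.0 in the routing table: the network address is the IP ANDed with the netmask, and a /16 mask zeroes the third octet. A quick bash sketch of the arithmetic:

```shell
# Compute the network address for an IP under a given prefix length.
ip_to_int() { local a b c d; IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }
int_to_ip() { local n=$1; echo "$(( (n>>24)&255 )).$(( (n>>16)&255 )).$(( (n>>8)&255 )).$(( n&255 ))"; }
network()   { local ip mask; ip=$(ip_to_int "$1"); mask=$(( (0xFFFFFFFF << (32-$2)) & 0xFFFFFFFF )); int_to_ip $(( ip & mask )); }

network 192.168.5.0 16   # → 192.168.0.0 (the /16 mask discards the third octet)
network 192.168.5.0 24   # → 192.168.5.0 (the /24 mask keeps it)
```

With a /24 mask the route entry stays 192.168.5.0/24, matching the intended Docker network.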

 

Sweet, will give this a crack when I get home.

A further question:

My network is set up with a router hardwired to a Netgear EX7000 extender acting as an AP/switch, with several devices, including my Unraid server, wired to the EX7000. My router runs Tomato firmware and can set up VLANs.

When I set up VLANs on the port the switch is connected to, will that effectively place everything on that switch onto the isolated VLAN?

Because, as described in the tutorial, I'd like Unraid on the main network (192.168.1.1), with certain Docker containers and VMs on their own isolated networks.

Forgive the noobiness; I have only recently started learning about VLANs.


When using VLANs you need them supported on your switch and router too.

 

Your switch must be able to pass through a trunked connection from router to unRAID. A trunked connection means a connection carrying untagged traffic (=your main connection) and tagged traffic (=your VLAN connections) combined.

 

Your router must support sub-interfaces. Each sub-interface is associated with a VLAN connection, and it is treated as a separate connection, like another physical interface on your router.

 

On 8/5/2018 at 10:28 PM, bonienl said:

Your switch must be able to pass through a trunked connection from router to unRAID.

Your router must support sub-interfaces.

Thanks for the explanation.

By sub-interfaces do you mean assigning VLANs to each port? I couldn't find a reference to that terminology. I have an R7000 running Advanced Tomato as my main router, and it supports VLAN port assignment.

Obviously the EX7000 extender does not support VLANs or trunking, so I was looking into buying a cheap managed switch that supports trunking.

The NETGEAR GS108E supports VLANs, but the datasheet says it doesn't support link aggregation/port trunking, so I guess that's a no. The 16-port model supports it, but only "static manual LAGs". What does that mean?

Do you know of a cheap switch or router that supports the functions you mentioned? If a router, even better one that can run Tomato.


First, when your router has enough LAN ports (usually 4), no switch is required, which makes things easier :)

I have good experience with managed switches from TP-Link's Easy Smart series, e.g. their TL-SG108E and TL-SG1016DE models. They are affordably priced and can do what you need.

I use Ubiquiti routers; they have many features, including VLAN support, but are targeted more at 'prosumers', people with sufficient network knowledge. Good stuff, but you may need to do some learning.

 

I used Netgear routers in the past, but don't remember their exact feature set. Perhaps somebody else has more recent experience with Netgear.

A quick look at the Netgear datasheet shows the features you need are:

IEEE 802.1Q VLAN support - this allows an interface to use VLANs and VLAN trunking.

Port trunking - another word for link aggregation. Netgear supports the LAG protocol statically only, which means the 'other' side must also be set to static LAG, without negotiation. unRAID supports static LAG as well as IEEE 802.3ad (LACP).

