[6.3.0+] How to setup Dockers without sharing unRAID IP address


ken-ji


On 14/11/2017 at 11:52 PM, ken-ji said:

The IP address details are:

  • Network: 192.168.1.0/24
  • Router: 192.168.1.1
  • unRAID: 192.168.1.2 (on eth0/br0)
  • DHCP range: 192.168.1.64-192.168.1.127 (this just simplifies some of the math)
  • Docker container range: 192.168.1.128-192.168.1.191 (the /26 block below)

Running on the terminal:


# docker network create \
    -o parent=br1 \
    --driver macvlan \
    --subnet 192.168.1.0/24 \
    --ip-range 192.168.1.128/26 \
    --gateway 192.168.1.1 \
    localnetwork
  • Modify any Docker via the WebUI in Advanced mode
  • Set Network to None
  • Remove any port mappings
  • Fill in the Extra Parameters with: --network localnetwork
  • Apply and start the docker

 

Excellent write up, thanks!

 

I've followed this (using the same network layout, as I was already using 192.168.1.x) and it's working great for my containers. However, when I stop/start Docker, the localnetwork is deleted and I have to manually recreate it and then manually start the containers. Also, the last time I rebooted unRAID (6.4.0_rc20a) it also recreated br1, which I had to delete from Docker before I could recreate localnetwork.

 

Is it supposed to save this network in the docker.img? 

If yes, any idea why mine isn't being saved and I'm having to recreate it each time?

If not, is there a best practice way of automating the recreation of the network and starting of the containers?
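For reference, the manual recreation described above boils down to something like the following, which could be dropped into a script that runs after the Docker service starts (a sketch only; "sonarr" and "deluge" are placeholder container names, and the network details are taken from the quoted how-to):

```shell
#!/bin/bash
# Sketch: recreate the custom macvlan network if it is missing, then
# start the containers that use it. Container names are placeholders.
if ! docker network inspect localnetwork >/dev/null 2>&1; then
  docker network create \
    -o parent=br1 \
    --driver macvlan \
    --subnet 192.168.1.0/24 \
    --ip-range 192.168.1.128/26 \
    --gateway 192.168.1.1 \
    localnetwork
fi

# Start the containers that live on the custom network
for c in sonarr deluge; do
  docker start "$c"
done
```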

Edited by Weavus

Yes, unRAID 6.4 scans all available interfaces when starting the Docker service. Each interface which has an IP address assigned to it is automatically included in the dropdown list of network types when configuring a container.

 

Any manually created macvlan network is removed, because these interfere with the automatic creation.

 

eth0 (br0) is the management interface and always has an IP address. Consequently this interface becomes automatically available to docker containers, which can set a different and unique IP address (either dynamic or static) on this interface.

 

And unRAID 6.4 supports both IPv4 and IPv6 addresses for docker containers.

 

Edited by bonienl
11 hours ago, bonienl said:

Yes, unRAID 6.4 scans all available interfaces when starting the Docker service. Each interface which has an IP address assigned to it is automatically included in the dropdown list of network types when configuring a container.

 

Any manually created macvlan network is removed, because these interfere with the automatic creation.

 

eth0 (br0) is the management interface and always has an IP address. Consequently this interface becomes automatically available to docker containers, which can set a different and unique IP address (either dynamic or static) on this interface.

 

And unRAID 6.4 supports both IPv4 and IPv6 addresses for docker containers.

 

 

Sorry, I was mistaken: on reboot, Unraid/Docker recreated br0 using the gateway 192.168.1.1, which means I can't recreate my localnetwork network, since it can't use that gateway while br0 holds it, until I manually delete br0 and recreate localnetwork. My eth1 interface does not have an IP address assigned, so nothing is automatically created on startup for br1.

 

How is 6.4 supposed to work with two network interfaces to achieve what I want, i.e. my containers having their own assigned IPs in my 192.168.1.x range? Am I missing something about how it's supposed to work in 6.4 to do this automagically? Right now I can't see how the automatic wipe-existing/create-new behavior is helping me, so is there a way to tell Unraid/Docker not to blow my self-created network away when it restarts?

 

Thanks


Two options:

  1. Your docker containers use br0 as network type. In this case you can give each container its own static IP address (see container settings) or if you prefer dynamic IP assignment for containers, you need to define a DHCP range for docker which does not conflict with your regular DHCP server (router). Stop the Docker service and under advanced view you will be able to set a DHCP range for docker.
  2. If you want to use br1 for docker containers, make sure that under network settings you have configured an IP address and gateway for br1 (eth1). These values will be inherited by Docker and make br1 available as a network choice. Again you have the choice to define a DHCP range for br1 to do automatic IP assignment for containers or give each container a fixed IP address.

@bonienl I'm thinking the docker network support should have a more "I know what I'm doing, so let me" mode, or at least an option to create the network for interfaces without IP addresses, so that people with more complex networks can use most of the UI without really ugly workarounds like blowing away docker networks to get what they want. My particular use case is that I run macvlan networks on another VLAN, but unRAID is not allowed to have an IP on that VLAN, which increases security and control.

Edited by ken-ji

The challenge here is that automation through the GUI works fine when it has full control and can keep track of the networks used by docker.

Any manual configuration leads to potential conflicts, because the GUI isn't aware of how these networks are configured.

 

The current workaround is to do the manual creation AFTER the docker service is started. Automation can be achieved by using the appropriate event, "Started", to execute the macvlan creation, which I presume you now have in your go file.
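A minimal sketch of such an event hook (the event-script directory is an assumption about how unRAID wires the "Started" event; adjust the path to your release):

```shell
#!/bin/bash
# Sketch: runs on the Docker "Started" event and recreates the custom
# macvlan network. Saving this as, e.g.,
# /usr/local/emhttp/webGui/event/started/create_localnetwork is an
# assumption; the network details match the earlier example.
docker network create \
  -o parent=br1 \
  --driver macvlan \
  --subnet 192.168.1.0/24 \
  --ip-range 192.168.1.128/26 \
  --gateway 192.168.1.1 \
  localnetwork 2>/dev/null || true
```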

 

Btw, when creating a macvlan network, by design containers are not allowed to talk to the host address (unRAID), so even with an IP address assigned to unRAID, communication isn't possible.

Edited by bonienl
On 1/11/2018 at 6:19 PM, bonienl said:

The challenge here is that automation through the GUI works fine when it has full control and can keep track of the networks used by docker.

Any manual configuration leads to potential conflicts, because the GUI isn't aware of how these networks are configured.

 

The current workaround is to do the manual creation AFTER the docker service is started. Automation can be achieved by using the appropriate event, "Started", to execute the macvlan creation, which I presume you now have in your go file.

 

Btw, when creating a macvlan network, by design containers are not allowed to talk to the host address (unRAID), so even with an IP address assigned to unRAID, communication isn't possible.

 

I'm sorry, but I'm going to consider Docker network support in 6.4 too simple and naive.
The auto cleanup of docker networks completely broke my system post-upgrade.

I'm now going to have to work out a correct way of setting up custom docker networks without assigning unRAID an IP, for my own simplicity.

And still having the containers come up automatically. Do you have any suggestions?


It's not "crippled"... it just now imposes some constraints that I consider naive and annoying.

* Any interface with an IP address will have its custom docker network nuked without asking the user, and a new custom network set up based on the details of the assigned IP address.

** Thus there's a new requirement that unRAID have an unnecessary IP in all custom docker networks. This requirement may sound trivial, since containers can't reach unRAID on that IP anyway, but I think users will still get confused and wonder why they can't reach unRAID on that IP.

* Any other custom network is nuked away without even giving the user the option to use it.

* And worse, there's no behavioral off switch. :(

 

Workaround: the custom network can be defined after the docker service has started, but dockers on this custom network will no longer autostart.

 

So now all my containers running on a separate VLAN won't even run since the custom docker network is now missing.

My PHP sucks, but I think I can make a manually configured plugin to fix this silly limitation for now.

 


The removal of "unknown" networks was done to avoid potential conflicts with the automated network generation in unRAID 6.4. There is nothing naive or silly about that.

 

Look, you are pissed because it doesn't meet YOUR requirements, but doing manual macvlan configuration isn't for the majority of users, and I am convinced the current implementation perfectly suits this majority. However, we'll need to find a way to suit the needs of more advanced usage too, perhaps a setting to allow manual networks.

 

For the time being the most practical solution for you is to comment out the deletion of other (unknown) networks in /etc/rc.d/rc.docker.

# cleanup non-existing custom networks
#for NETWORK in $NETWORKS; do
#  [[ ! "$ACTIVE " =~ "$NETWORK " ]] && docker network rm $NETWORK 1>/dev/null 2>&1
#done

To get that automated on startup you will need to keep a revised rc.docker version on your flash and copy that within your go file before starting emhttp.
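That go-file step might look like this (a sketch; the flash location of the edited copy is an assumption):

```shell
# Sketch of /boot/config/go additions: restore the edited rc.docker
# before emhttp starts. /boot/config/rc.docker.custom is an assumed
# name/location for the revised copy kept on the flash drive.
cp /boot/config/rc.docker.custom /etc/rc.d/rc.docker
chmod +x /etc/rc.d/rc.docker
```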

 


Thanks bonienl, I'll give that workaround a go and see if that works for me until a proper way of handling this advanced use/edge case is provided in a future release.

 

EDIT: Can confirm, commenting out those 3 lines, adding the file to flash, and copying it via the go file means my custom docker network was preserved and my containers auto-started as expected on the old custom network after a reboot. Thanks again for the speedy workaround idea.

 

Edited by Weavus

I guess I was pissed. But I should have seen it coming; I just didn't get to participate in the RC to have seen this actual effect.

It's still annoying that it's an all-or-nothing approach right now. You should have left custom networks on un-addressed interfaces alone; maybe that would be a better compromise.


I went for the safest approach, which is to avoid any potential conflicts with the automatic network generation. :)

 

Your case I consider more the exception; most users can live with the current implementation, but nonetheless support for user-created networks can be part of a future release/update. This means not only keeping user-created networks as they are, but also making them available as a network choice in the GUI.

 

As said earlier try the workaround with modified rc.docker, it should help you for the time being.


I'm basically trying to restructure my dockers, which currently run on the unRAID host, to use separate IPs/VLANs. The dockers in question are the customary sabnzbd/deluge/kodi-headless(mariadb)/sonarr.

 

I'd like to put sabnzbd/deluge on a separate VLAN, but that means I have to put sonarr on a separate IP address too (since, from what I understand, the host cannot communicate directly with those IPs/VLANs).

 

Now if I put sonarr on a separate IP, I need to do the same with kodi-headless(mariadb), because sonarr updates the central DB. But I run into the same communication issue, as kodi-headless needs to access the SMB shares on the unRAID host.

 

Thoughts here? Or am I missing something?

Edited by joelones

Remember that a separate VLAN also means your network gear must support VLANs to make communication possible. You will have complete isolation if you go this way, but do you really need this? It is much simpler to give each docker its own IP address within your existing network. Things like port translations are no longer needed, and if you need port forwarding on your router, it can be done to the dedicated IP address of the specific docker container.

 

When sharing information on the host, this should be done through the regular folder mapping, not SMB shares. I.e. ensure that each container has the correct folder mapping included on its settings page.
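In plain docker terms, that folder mapping is a bind mount rather than an smb:// path; a sketch (the paths, address, network name, and image here are illustrative, not taken from this thread's actual configs):

```shell
# Sketch: give a container direct access to host data via a volume
# mapping instead of SMB. All values below are example placeholders.
docker run -d \
  --network localnetwork \
  --ip 192.168.1.130 \
  -v /mnt/user/media:/media \
  linuxserver/sonarr
```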

 

8 hours ago, bonienl said:

Remember that a separate VLAN also means your network gear must support VLANs to make communication possible. You will have complete isolation if you go this way, but do you really need this? It is much simpler to give each docker its own IP address within your existing network. Things like port translations are no longer needed, and if you need port forwarding on your router, it can be done to the dedicated IP address of the specific docker container.

 

When sharing information on the host, this should be done through the regular folder mapping, not SMB shares. I.e. ensure that each container has the correct folder mapping included on its settings page.

 

 

Thanks for the reply.

 

For security, I'd like to put at least the deluge docker on a separate VLAN, as I will port forward to it from my router. What's more, at the router level I can easily redirect traffic from this VLAN out via the already established VPN connection.

 

I get that sharing information on the host should be done through regular folder mappings, not SMB shares, but I'm trying to think through my particular case with the kodi-headless container. From what I remember, when you initiate a database scan it connects via SMB to the unRAID IP (all entries are stored in the mariadb for kodi as smb:// paths). What's more, when sonarr sends an update request (new episode added), it needs access to the metadata file stored on the SMB share.

 

You're right though, it's probably much simpler to give each docker its own IP address within my existing network. But even then, the kodi-headless docker still can't connect to the SMB shares on the unRAID IP, so I'm back to square one.

 

Unless I'm missing something, the kodi-headless docker needs access to the unRAID IP for SMB shares? If someone knows better, please correct me if I'm wrong.

 

Edit1: I'm running unRAID under ESXi. Any way around this? Perhaps adding a second virtual NIC (eth1), creating a new bridge on that interface, and leaving eth0 as the host IP?

 

Edit2: Well, that didn't work. I added another virtual NIC to my unRAID VM as eth1, recreated a new bridge br1, and gave a container a br1 address (on a VLAN), and still couldn't ping eth0 from within the container. Pinging any IP other than the unRAID IP works from my VLAN docker. Everything was done from the GUI in 6.4. Attached are some screens of the interfaces. Not sure if the fact that both eth0 & eth1 are the same physical adapter has something to do with this not working.

Screen Shot 2018-01-19 at 8.02.42 PM.png

Screen Shot 2018-01-19 at 8.02.46 PM.png

 

NETWORK ID          NAME                DRIVER              SCOPE
f1e931e72f79        br1.10              macvlan             local
a763720d03b3        bridge              bridge              local
f9aa41a1e4cb        host                host                local
f90f49050874        none                null                local

 

Edited by joelones

Ken-ji had written up how to do this very thing using VLANs, but it seems the last update to unRAID trashes any custom docker network settings you may have created. So it's either some script to recreate it all again before launching the containers, or some hacking away at some unRAID scripts to disable the removal of docker networks. I really don't know the details, as it all became a bit too hard for me.

 

Annoying, as I went and bought a second NIC and a smart switch to handle the VLAN, but such is life.


In case nobody has encountered this issue yet: I hit it long before, in 6.3.0.

When unRAID (with IP 192.168.1.2) has an interface (say br1) with an IP of 192.168.2.2, it will attempt to reach the other machines (VMs, containers, or devices) on 192.168.2.x using that interface.

It will then fail silently, as it can't reach any of the containers due to the macvlan network restrictions.

The workaround here is to either:

  • make a routing rule for the docker IP range to be reached via the default gateway 192.168.1.1
  • not assign an IP to the container network's parent interface, which forces the host to reach the non-local subnet via the default router.

This is why I was complaining to bonienl about the current requirement of having an IP on the docker network parent interface.
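The first of those workarounds amounts to a single route on the unRAID host; a sketch using the addresses above (the /26 container range is an assumption about how the docker network was created):

```shell
# Force traffic for the docker container range out via the default
# gateway instead of the locally attached br1, sidestepping the
# macvlan reachability restriction. 192.168.2.128/26 is an assumed
# container range within the 192.168.2.x network.
ip route add 192.168.2.128/26 via 192.168.1.1 dev br0
```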


Wait for 6.4.1

 

PS: This network restriction exists for a reason; it would break the isolation of a container. But sometimes (it should be more exception than rule) host communication is needed, and the workaround of ken-ji will work.
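Another technique sometimes used for host-to-macvlan communication (not described in this thread, so treat it as an outside suggestion) is a host-side macvlan "shim" interface; a sketch assuming the container range from the original how-to and a spare host address:

```shell
# Sketch: host-side macvlan shim so the host can reach containers on
# the macvlan network. 192.168.1.200 is an assumed spare address;
# 192.168.1.128/26 is the container range from the earlier example.
ip link add shim0 link br1 type macvlan mode bridge
ip addr add 192.168.1.200/32 dev shim0
ip link set shim0 up
ip route add 192.168.1.128/26 dev shim0
```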

Edited by bonienl

Perhaps someone could clarify how to implement the workaround to have a container speak to the host. My ESXi box has two physical NICs (LAN/WAN); I've attached my topology diagram. unRAID is virtualized, and both NICs below have the same parent interface. VLAN5 is for my guest wifi, so disregard it. VLAN10 is what I want configured for dockers. I also have pfSense running virtualized using both LAN/WAN, and it has interfaces for VLAN5/10.

 

My details are as follows for my unRAID VM:

Virtual NIC eth0: 

  • bridging member: br0
  • IP: 192.168.3.99
  • Gateway: 192.168.3.1 (pfSense)

Virtual NIC eth1:

  • bridging member: br1
  • IP: None
  • Gateway: None
  • VLAN Enabled: Yes
  • VLAN #: 10
  • VLAN Static IP: 192.168.10.99
  • VLAN gateway: 192.168.10.1

Test Container:

  • Network Type: br1.10
  • Fixed IP address: 192.168.10.96

Bashing into this container, I'm able to ping every client on 192.168.3.0/24 except 192.168.3.99 (unRAID). 

 

Thoughts?

 

docker network ls:

NETWORK ID          NAME                DRIVER              SCOPE
f1e931e72f79        br1.10              macvlan             local
a763720d03b3        bridge              bridge              local
f9aa41a1e4cb        host                host                local
f90f49050874        none                null                local

route:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.10.1    0.0.0.0         UG    1      0        0 br1.10
default         pfsense.lan     0.0.0.0         UG    2      0        0 br0
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0
192.168.3.0     *               255.255.255.0   U     0      0        0 br0
192.168.10.0    *               255.255.255.0   U     0      0        0 br1.10


topology_new_1.png

Edited by joelones
