[6.3.0+] How to setup Dockers without sharing unRAID IP address


ken-ji

Recommended Posts

On 1/20/2018 at 10:43 AM, bonienl said:

My first question would be: which container needs to talk to the host over the network?

 

It would be a container on the VLAN10 subnet (192.168.10.0/24); ideally it would be the kodi-headless docker, which needs access to 192.168.3.99 via SMB shares.

 

EDIT: I added the following route, which appears to semi-work (not sure if this is the right way to do it, though). I'm able to ping 192.168.3.99 from 192.168.10.0/24 but not the other way around, but I think it should be good for my needs.

route add -net 192.168.10.0 netmask 255.255.255.0 gw 192.168.3.1

A couple of questions:

 

  1. Anything else I have to be aware of with respect to this static route after subsequent reboots? I'm assuming it's as simple as adding it to the go file (see the sketch after this list)?
  2. Can anyone elaborate on how to force a specific MAC address for a docker so that it does not change? A better question is: does a particular docker's MAC address change across reboots? Probably on container removal/recreation...
  3. Another point: I'm not sure why I cannot select br0 for a docker's interface. I thought I saw it selectable earlier; not sure if it disappeared for some odd reason. Should I not be able to see it as part of the network type dropdown?
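
Something like this is what I have in mind for persisting it (an untested sketch, using the addresses from above - adjust to your own network):

# appended to /boot/config/go so the route is re-added at every boot
route add -net 192.168.10.0 netmask 255.255.255.0 gw 192.168.3.1
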
Edited by joelones
Link to comment
43 minutes ago, joelones said:

 

Have you tried adding a static route from the Docker IP range to be reached via the default gateway (I guess 192.168.7.1 in your case)? I'm able to get br1-to-br0 communication.

 

I have played around with it but was still not able to get it working :-/

 

Can you be more specific on how you did it?

Link to comment
  • 4 weeks later...

So I'm trying something different here (instead of everything going out through the same pipe).

 

I have a free physical NIC, so I created a new port group in ESXi (VLAN10) and assigned it VLAN ID 10. I then added another NIC to unRAID, created a br1 bridge, and gave it an address on that subnet. So all is fine up to here.

 

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         pfSense.localdo 0.0.0.0         UG    1      0        0 br0
default         192.168.10.1    0.0.0.0         UG    10     0        0 br1
192.168.3.0     *               255.255.255.0   U     0      0        0 br0
192.168.10.0    *               255.255.255.0   U     0      0        0 br1

Now, I figure I could assign a container to br1 and give it an appropriate address in the subnet in question.

 

I do so, but cannot ping it. I also can't ping the default gateway for this VLAN from within the container, although I can ping the default gateway (pfSense) on this subnet from unRAID.

 

I thought having two NICs would be an easier approach; any thoughts on what I'm missing?

 

EDIT: Works now... I reread this thread and noticed the insight about not assigning an IP to br1. That did it...

Edited by joelones
Link to comment

I'm assuming this is what needs to get commented out in /etc/rc.d/rc.docker to persist custom networks now?

 

  # get existing custom networks
  NETWORKS=$(docker network ls --filter driver='macvlan' --format='{{.Name}}'|tr '\n' ' ')
  for NETWORK in $NETWORKS; do
    # remove networks which have been replaced
    [[ $EXCLUDE =~ "$NETWORK " ]] && docker network rm $NETWORK 1>/dev/null 2>&1
    # remove user defined networks
    [[ $DOCKER_USER_NETWORKS != preserve && ! $ACTIVE =~ "$NETWORK " ]] && docker network rm $NETWORK 1>/dev/null 2>&1
  done

 

Not sure , maybe someone could explain?

 

eth1 (VLAN 10 for me) is configured as per the attachment.

 

ifconfig reports:

br1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c8a8:6aff:fe05:7e2d  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a8:e9:71  txqueuelen 1000  (Ethernet)
        RX packets 998135  bytes 1584184370 (1.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 165324  bytes 20552626 (19.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

yet "docker network ls" reports the following without br1

 

NETWORK ID          NAME                DRIVER              SCOPE
b89f4a20a05f        br0                 macvlan             local
6f115d5cc668        bridge              bridge              local
76cfe74c0cc3        host                host                local
cdd2a35da12d        none                null                local

I can manually create a docker network from the command line, using br1 as the parent interface, to get my VLAN working.
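
For reference, the manual command I'm using is roughly this (a sketch from memory; the VLAN10 subnet and gateway are the ones from my earlier posts):

docker network create -d macvlan \
    --subnet=192.168.10.0/24 \
    --gateway=192.168.10.1 \
    -o parent=br1 \
    docker1
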

 

But if I use a non-standard name for the network, it won't survive reboots.

 

Should I instead disable bridging from the GUI for eth1, then use the docker network create command with "eth1" as the parent, and name the docker network "br1" instead of something like "docker1"?

 

Maybe that makes more sense here.

Screen Shot 2018-02-22 at 9.09.23 PM.png

Edited by joelones
Link to comment
  • 10 months later...

A question to add to this: if I have two NICs with two IPs (one for each NIC), how can I separate, say, the Plex docker on one IP and Home Assistant, MQTT, and the HA-supporting dockers on the other NIC? In theory I would like to keep all Plex, Tautulli, and Ombi traffic on one card and all home automation on the other card. Is that possible, or would it be best to divide it like the first post suggests?

Link to comment
2 hours ago, gsd2012 said:

A question to add to this: if I have two NICs with two IPs (one for each NIC), how can I separate, say, the Plex docker on one IP and Home Assistant, MQTT, and the HA-supporting dockers on the other NIC? In theory I would like to keep all Plex, Tautulli, and Ombi traffic on one card and all home automation on the other card. Is that possible, or would it be best to divide it like the first post suggests?

Hmm... the separation you seem to want might require VLAN or 2nd physical LAN support on your network.

Think of it this way. Docker supports any combination of the following:

  • one host network - the container's network is the host itself
  • a bridged network - an internal bridge is created, which NATs outgoing connections and forwards selected host ports to container ports
  • a macvlan network - a virtual sub-interface of the parent NIC is created, so the containers appear to be members of the NIC's network with their own IPs apart from the host, but a built-in security feature prevents packets from the containers from looping back to the parent NIC (i.e. to the host itself)

 

Docker networks cannot have the same subnet defined, so if your two NICs are on the same LAN (even with different IPs), containers on the macvlan network can only be bound to one NIC (usually the 2nd). The other containers will need to run in bridged (or host) mode, which makes them available on the other NIC (the primary).
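
To illustrate the subnet restriction with a rough sketch (the addresses and names here are only examples):

# first macvlan network, bound to the 2nd NIC - this one is accepted
docker network create -d macvlan --subnet=192.168.0.0/24 -o parent=br1 br1
# a second network on the same subnet, even on the other NIC, is refused
# because its address pool overlaps with the network above
docker network create -d macvlan --subnet=192.168.0.0/24 -o parent=br0 br0
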

Link to comment
17 minutes ago, ken-ji said:

Hmm... the separation you seem to want might require VLAN or 2nd physical LAN support on your network.

Think of it this way. Docker supports any combination of the following:

  • one host network - the container's network is the host itself
  • a bridged network - an internal bridge is created, which NATs outgoing connections and forwards selected host ports to container ports
  • a macvlan network - a virtual sub-interface of the parent NIC is created, so the containers appear to be members of the NIC's network with their own IPs apart from the host, but a built-in security feature prevents packets from the containers from looping back to the parent NIC (i.e. to the host itself)

 

Docker networks cannot have the same subnet defined, so if your two NICs are on the same LAN (even with different IPs), containers on the macvlan network can only be bound to one NIC (usually the 2nd). The other containers will need to run in bridged (or host) mode, which makes them available on the other NIC (the primary).

So even with two NICs it is not possible to split the two sections of Docker? I would prefer not to have all the traffic flooding one NIC, so what is your best advice if you have two NICs installed? I remember you saying in another post you had two NICs - what would be the best optimization for just dockers, no VMs?

Link to comment
1 hour ago, gsd2012 said:

So even with two NICs it is not possible to split the two sections of Docker? I would prefer not to have all the traffic flooding one NIC, so what is your best advice if you have two NICs installed? I remember you saying in another post you had two NICs - what would be the best optimization for just dockers, no VMs?

 

It's quite possible - but you have to understand what the limitations are, depending on what your network has or supports.

 

If you do not have VLANs (or a second physical LAN),

you will end up with a mix of containers on eth0/br0 running in bridge or host mode, which means they share the unRAID IP; and another group of containers on eth1/br1 running in a custom network, which means they have their own IPs and can talk to unRAID (as long as unRAID does not have a 2nd IP assigned to that NIC).
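
As a rough sketch of what that mix looks like from the command line (the container names, images and addresses here are only examples):

# Plex in bridge mode: shares the unRAID IP, ports are forwarded by Docker
docker run -d --name=plex --network=bridge -p 32400:32400 plexinc/pms-docker

# Home Assistant on a custom macvlan network bound to br1, with its own IP
docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.1 -o parent=br1 br1
docker run -d --name=homeassistant --network=br1 --ip=192.168.0.200 homeassistant/home-assistant
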

 

Since you mentioned having 2 NICs and 2 IPs... but did not elaborate, I also did not elaborate too much on the possible config, and assumed you did not have VLAN/2nd LAN support.

 

Link to comment
7 hours ago, ken-ji said:

 

It's quite possible - but you have to understand what the limitations are, depending on what your network has or supports.

 

If you do not have VLANs (or a second physical LAN),

you will end up with a mix of containers on eth0/br0 running in bridge or host mode, which means they share the unRAID IP; and another group of containers on eth1/br1 running in a custom network, which means they have their own IPs and can talk to unRAID (as long as unRAID does not have a 2nd IP assigned to that NIC).

 

Since you mentioned having 2 NICs and 2 IPs... but did not elaborate, I also did not elaborate too much on the possible config, and assumed you did not have VLAN/2nd LAN support.

 

Yes, correct, I have a flat network without VLANs (the switches do not support it). Maybe tying the NICs together for load balancing could help, giving the host one IP. It is not a problem right now with the dockers using the one NIC; I just wanted to keep traffic from beating that one up the entire time. Mainly I am looking for the optimal way to use both NICs in the system to support the dockers. Attached are the dockers I am currently running, and yes, you are right, some are host and some are bridged. I am not that knowledgeable about dockers yet, so I was just going with the default settings for most. I have to add the LetsEncrypt docker for home automation next, so what IS the best way to run these? And what is best for the use of two NICs? Will there be a conflict between bridge and host settings? I did like your idea of putting the dockers on their own network, but would my current 192.168.0.0 be the address it sees for usage, or will I have to point it to a new network, say 192.168.1.0, for devices?

Dockers.png

Link to comment

I was just reviewing videos on Docker when I saw that there are advanced features in the docker.img settings. I was a bit concerned that I dorked all this up by just leaving it so all the containers point to the host IP 192.168.0.10. Is it better to put the dockers on their own IP subnet like this topic talks about? Is that the best practice, so each device created in the docker containers gets its own IP address? So if I have two NIC cards, what is the best way to utilize them to work best with Docker? I was looking at doing maybe some VMs in the future. Currently my unRAID has one IP on one NIC, 192.168.0.10, in a /24 network. I have another NIC card installed, but it is disabled at this time. I am not an expert with some parts of unRAID, and in this area I get a bit lost. To get the most out of my unRAID running Plex, Home Assistant, and other dockers, what would the experts do to make each docker run smoothly? Or does it not matter the way I currently have it in the picture in my other post? I cut out the IP on it, but all the 192 addresses are 192.168.0.10 for all the dockers. All of the dockers are set up with the defaults they came with. Thank you for any help you can offer here.

Link to comment
  • 1 year later...

So if I have a pretty vanilla implementation with 3 network interfaces (for no other reason than the motherboard came with two and I had an extra Intel NIC to plug in), what would my gateway be? I am currently only using eth0, with a gateway of 192.168.50.1 and unRAID at 192.168.50.7.


What will my subnet and gateway be for assigning a fixed IP to get two containers running Plex? 

 

Thanks!

Link to comment
16 hours ago, deepbellybutton said:

So if I have a pretty vanilla implementation with 3 network interfaces (for no other reason than the motherboard came with two and I had an extra Intel NIC to plug in), what would my gateway be? I am currently only using eth0, with a gateway of 192.168.50.1 and unRAID at 192.168.50.7.


What will my subnet and gateway be for assigning a fixed IP to get two containers running Plex? 

 

Thanks!

 

Best solution I can think of that uses base-level network devices (i.e. plain switches and router):

Configure eth1 as follows (you can opt not to turn on bridging)

[Screenshot: eth1 interface settings]

Then, with the array stopped, enable it in Docker settings and configure the network interface br1 (eth1 if you don't enable bridging):

[Screenshot: Docker settings showing the custom network on interface br1]

Make sure to use the same subnet as your main LAN (where you want Plex to be),

i.e. Subnet is 192.168.50.0/24

Gateway is 192.168.50.1

DHCP pool should be something like 192.168.50.0/24

  • Ensure the DHCP pool is a range that your DHCP server will never assign (try to prevent collisions now to avoid headaches later on). Again, the Docker DHCP pool is not a real DHCP pool; it has no connection to the network's actual DHCP server, nor will it communicate with it.
  • 192.168.50.0/24 is the easiest value to use, but if you run any containers on this network there is a non-zero chance of an IP address collision if you do not set them up statically.
  • Use http://jodies.de/ipcalc to compute possible subnets that fit your use case (a command-line sketch of the resulting network follows this list).
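
For the curious, what the GUI ends up creating is roughly equivalent to this command (a sketch only; the --ip-range value is just an example of a pool your real DHCP server should never hand out):

docker network create -d macvlan \
    --subnet=192.168.50.0/24 \
    --gateway=192.168.50.1 \
    --ip-range=192.168.50.192/26 \
    -o parent=br1 \
    br1
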

 

When the Docker service is enabled, configure/create your Plex dockers with the following info:

[Screenshot: Plex container settings with network type br1 and a fixed IP address]

Change the IP to whatever you want it to be

 

This will make your Plex servers accessible at the specified IP address instead of the Unraid IP, and as a bonus, all Plex traffic will use that 2nd network connection without being shared with the rest of Unraid.
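
In container terms, the template settings above boil down to something like this (a sketch; the image name and IP are only examples - change them to suit):

docker run -d --name=plex \
    --network=br1 \
    --ip=192.168.50.201 \
    plexinc/pms-docker
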

Link to comment
  • 2 weeks later...

You are the best. I appreciate the response and just know I've been trying.  I have little time to play with this so am just getting to it. 

 

I did everything you suggested perfectly. I understand it all but I cannot get what you are telling me should be there to show up. I'm sure it's something really silly. 

 

I cannot get the network type or fixed IP address fields to show up in any of the app templates. 

 

[Screenshot: container template missing the network type and fixed IP address fields]

 

Thanks so much! 

Link to comment
  • 1 year later...

Hey, I don't know if I'm too late to this topic, but I plan to try putting the ipvlan Docker network driver on unRAID to get past the whole "macvlan can't talk to the unRAID box" issue. I'm guessing this doesn't necessarily isolate your networks, since all networks will be able to talk to each other and it's up to you to set up the isolation in your gateway/router at that point.

 

Here is the guide I found on youtube:

https://youtu.be/bKFMS5C4CG0
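
The command I'm planning to try looks roughly like this (a sketch; the subnet, gateway, parent interface and network name are assumptions - adjust to your own LAN):

docker network create -d ipvlan \
    --subnet=192.168.50.0/24 \
    --gateway=192.168.50.1 \
    -o parent=br0 \
    -o ipvlan_mode=l2 \
    ipvlan50
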

Link to comment
