[6.3.0+] How to set up Dockers without sharing the unRAID IP address


ken-ji

Recommended Posts

On 11/18/2017 at 4:35 PM, dalben said:

I'm now going with the single NIC solution, and getting this error when I try to create the network:


Error response from daemon: failed to allocate gateway (192.168.1.1): Address already in use

On the unRAID server, the DNS is set to 192.168.1.1. Can I assume that's what's creating the error above? If not, what else could it be? I'm using the following command:

 

I got it going finally. Something was conflicting with the unRAID GUI's ability to set something like this up, I guess. Anyway, I have everything running on eth0.

 

Two quirks:

  1. Once I have dockers with their own IPs, I get access denied for any docker whose webgui is still configured with the server's IP
  2. My Kodi database uses SMB and the source path is //TDM/mnt etc. Kodi can't find that directory. Would that be because there's no link access from the dedicated IP to the server, or is it because TDM, the server's host name, isn't resolving?

 

#2 is painful right now. I'd be tempted to put Kodi back on the old shared IP, but as above that would mean I can't access it anymore.

 

Alternatively, I could set up a bridge, as the switch the server is on supports VLANs. Would that solve the issue?

#1 is a known fault of the unRAID Web UI, as it has some (braindead) rules on how the docker webui should be configured.

#2 is because the dockers can't reach unRAID anymore.

 

Your best solution is to either:

#1 set up a new VLAN (seeing as you have switch support) and route them together (you have an L3 router, right?)

  • This will require setting up just about everything again, and your SMB paths might still not work right - blame the protocol for this. (A sketch of what this looks like follows below.)
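
As a rough sketch of the VLAN route (the VLAN ID 100 and the 192.168.2.0/24 subnet are assumptions, not from this thread), the macvlan driver can attach to a VLAN sub-interface and will create it on the fly if it doesn't already exist:

# macvlan on a VLAN sub-interface of br0 (docker creates br0.100 if missing)
docker network create \
--driver macvlan \
-o parent=br0.100 \
--subnet 192.168.2.0/24 \
--gateway 192.168.2.1 \
vlan100net

The L3 router (or switch) then needs an interface in VLAN 100 at 192.168.2.1 to route between the container subnet and the rest of the LAN.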

#2 use two NIC ports like I specified above, and migrate your docker network to the 2nd NIC (a concrete sketch follows after this list):

  • Take note to assign unRAID an IP on eth0/br0 and DO NOT assign an IP to eth1/br1
  • shut down all involved dockers (do not stop the docker service)
  • execute
    docker network rm ~docker-network-name-here~
    docker network create ... -o parent=eth1 (or br1)
    to regenerate the docker network on the other NIC
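
Spelled out, option #2 might look like this (a minimal sketch - the name homenet and the 192.168.1.128/26 range are stand-ins; reuse whatever name and range your original docker network had):

# drop the old network, then recreate it attached to the second NIC
docker network rm homenet
docker network create \
--driver macvlan \
-o parent=br1 \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.128/26 \
--gateway 192.168.1.1 \
homenet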

 

Link to comment

Thanks KenJi-San.

 

I spent most of the weekend on this, but it's now back to the way it was, all in bridge mode. Some details here:

 

My wasted weekend

 

Looking at your post I see I did do a couple of things wrong, so I'll give it another go soon. I don't have any VLANs set up yet, so that's a bit of reading, but the switch the server will be connected to is L3, and that's connected to a USG if needed.

Link to comment
  • 3 weeks later...

 

I've been trying to get this working, but for some reason, once the docker is created it does not have any internet access, nor can I ping it. My server has two NICs, so I assigned the docker network to eth1. I've gone through this thread several times now, and I haven't run into any of the issues others had in terms of getting the docker started or interfaces in use - everything SEEMS like it should be working... but nada.

 

Relevant Info:

  • Unraid IP: 192.168.1.5
  • Subnet: 192.168.1.0/24
  • Router: EdgerouterX @ 192.168.1.1

 

Things I've tried:

  • Turned bridging on and tried using br1
  • Tried a completely different docker container
  • Assigned a static DHCP lease of 192.168.1.224 to the MAC address I found in docker inspect pihole (see the one-liner below)
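
For reference, the MAC can be pulled straight out of the inspect output with a Go template (a sketch, using the container name pihole from this post):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.MacAddress}}{{end}}' pihole

Note, though, that a macvlan network gets its addresses from docker's own IPAM, never from the DHCP server, so a DHCP reservation for that MAC has no effect (ken-ji confirms this near the end of the thread).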

 

Any ideas would be appreciated!  Further information about my system and setup below.

 

EDIT: Problem solved.  Make sure the other NIC is plugged in.  Ugh.

 

Settings for docker1

root@DS9:/mnt/user/apps/pihole# docker network inspect docker1
[
    {
        "Name": "docker1",
        "Id": "4f279733e1460cf0d3754766b833965ebaef8674b15c09d1ae9b3494eaa33f17",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "IPRange": "192.168.1.224/27",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "a2d3a49ff4999963d0ca84bda755533d3afc9c2d9e2e241eed07b97ae7122045": {
                "Name": "pihole",
                "EndpointID": "aac23b2a278e396e80fc1cb60025244c36f5cca0f9f9d6289ab573d3d3ecb737",
                "MacAddress": "02:42:c0:a8:01:e0",
                "IPv4Address": "192.168.1.224/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "eth1"
        },
        "Labels": {}
    }
]
IP Address of the running docker

 

root@DS9:/mnt/user/apps/pihole# docker inspect pihole | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "192.168.1.224",

 

ip addr

 

root@DS9:/mnt/user/apps/pihole# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/32 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
8: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether bc:5f:f4:ea:66:9f brd ff:ff:ff:ff:ff:ff
9: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether bc:5f:f4:ea:66:9e brd ff:ff:ff:ff:ff:ff
78: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:5f:f4:ea:66:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.5/24 scope global br0
       valid_lft forever preferred_lft forever
94: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:84:e5:a6:e8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
96: vethe10b5c3@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 2e:9c:70:84:08:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
98: veth05776bd@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fa:89:84:d1:cb:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
100: vethdc33a7a@if99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether e6:4c:5f:93:7c:19 brd ff:ff:ff:ff:ff:ff link-netnsid 2
102: vethbc6d1f6@if101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether e6:bf:6b:f6:bc:8f brd ff:ff:ff:ff:ff:ff link-netnsid 3
104: veth0eafc2d@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether f2:96:e5:de:df:c9 brd ff:ff:ff:ff:ff:ff link-netnsid 4
106: veth4309524@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether c2:23:3b:f5:b3:4b brd ff:ff:ff:ff:ff:ff link-netnsid 5

ip link

 

root@DS9:/mnt/user/apps/pihole# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
8: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000
    link/ether bc:5f:f4:ea:66:9f brd ff:ff:ff:ff:ff:ff
9: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether bc:5f:f4:ea:66:9e brd ff:ff:ff:ff:ff:ff
78: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether bc:5f:f4:ea:66:9f brd ff:ff:ff:ff:ff:ff
94: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:84:e5:a6:e8 brd ff:ff:ff:ff:ff:ff
96: vethe10b5c3@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 2e:9c:70:84:08:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
98: veth05776bd@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether fa:89:84:d1:cb:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
100: vethdc33a7a@if99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether e6:4c:5f:93:7c:19 brd ff:ff:ff:ff:ff:ff link-netnsid 2
102: vethbc6d1f6@if101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether e6:bf:6b:f6:bc:8f brd ff:ff:ff:ff:ff:ff link-netnsid 3
104: veth0eafc2d@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether f2:96:e5:de:df:c9 brd ff:ff:ff:ff:ff:ff link-netnsid 4
106: veth4309524@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether c2:23:3b:f5:b3:4b brd ff:ff:ff:ff:ff:ff link-netnsid 5

 

/boot/config/network.cfg

 

reading network.cfg
# Generated settings:
IFNAME[0]="br0"
BRNAME[0]="br0"
BRSTP[0]="no"
BRFD[0]="0"
BRNICS[0]="eth0"
DESCRIPTION[0]=""
USE_DHCP[0]="no"
IPADDR[0]="192.168.1.5"
NETMASK[0]="255.255.255.0"
GATEWAY="192.168.1.1"
DHCP_KEEPRESOLV="yes"
DNS_SERVER1="192.168.1.1"
DNS_SERVER2="8.8.8.8"
DNS_SERVER3=""
MTU[0]=""
IFNAME[1]="eth1"
DESCRIPTION[1]=""
USE_DHCP[1]=""
MTU[1]=""
SYSNICS="2"

 

Edited by meestark
Idiocy.
Link to comment

I wish I could get this to work!.....

 

2 network interfaces

eth0: has IP address 192.168.1.22, gateway 192.168.1.1, IPv4 only, bridge enabled

eth1: IPv4, no IP, bridge enabled

 

I am trying to do:

docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.64/28 \
--gateway 192.168.1.1 \
localnetwork

 

...and get "Error response from daemon: failed to allocate gateway (192.168.1.1): Address already in use"

 

 

Link to comment
3 minutes ago, meestark said:

@spants What's the output of "docker network ls"?

NETWORK ID          NAME                DRIVER              SCOPE
f07c68001ad1        br0                 macvlan             local
d8cefac70d2c        bridge              bridge              local
84413264ada5        host                host                local
0fbf02a8dfac        none                null                local

 

Link to comment

And "docker network inspect br0"? I'm guessing that's an old one you may have created during your troubleshooting that is already bound to 192.168.1.1 as the gateway. Assuming that's the case, and you don't have any running dockers currently using br0, you can remove it with docker network rm br0

Link to comment

many thanks for looking at this .....

 

root@Tower:~# docker network inspect br0
[
    {
        "Name": "br0",
        "Id": "f07c68001ad177b9071f64b1837b4cad117f60b7e3648adbc539bef41ac08ff1",
        "Created": "2017-12-06T15:59:21.310581567Z",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "IPRange": "192.168.1.0/25",
                    "Gateway": "192.168.1.1",
                    "AuxiliaryAddresses": {
                        "server": "192.168.1.22"
                    }
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "br0"
        },
        "Labels": {}
    }
]

Link to comment

Yeah, you can go ahead and remove that with "docker network rm br0" - that's what you currently have bound to your gateway.

 

Then try again with

 

docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.64/28 \
--gateway 192.168.1.1 \
localnetwork
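
Once created, it's worth double-checking that the new network picked up the right parent and gateway (both commands are used elsewhere in this thread):

docker network ls
docker network inspect localnetwork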

 

 

I literally just set this up with your pihole docker template yesterday :)

Link to comment
On 11/20/2017 at 7:00 AM, ken-ji said:
  • Take note to assign unRAID an IP on eth0/br0 and DO NOT assign an IP to eth1/br1
  • shut down all involved dockers (do not stop the docker service)
  • execute
    
    docker network rm ~docker-network-name-here~
    docker network create ... -o parent=eth1 (or br1)
    to regenerate the docker network on the other NIC

 

I've now got time to work on this again.  A quick question on the above.

  1. Can I run 2 docker networks at the same time?
    • Ideally I want to set up the new docker network with individual IPs and slowly migrate the dockers one at a time, to make sure everything works before I go too far.
  2. If so, do I follow your two commands above but add another network create for eth0/br0 that doesn't have the -o parent= option?

Totally out of my depth here......

 

Link to comment
22 hours ago, dalben said:

 

I've now got time to work on this again.  A quick question on the above.

  1. Can I run 2 docker networks at the same time?
    • Ideally I want to set up the new docker network with individual IPs and slowly migrate the dockers one at a time, to make sure everything works before I go too far.
  2. If so, do I follow your two commands above but add another network create for eth0/br0 that doesn't have the -o parent= option?

Totally out of my depth here......

 

* Yes, you can run multiple docker networks. What is important - and not allowed - is having multiple docker networks with the same gateway address. (A per-container migration sketch follows below.)

* -o parent= is mandatory, as it tells docker where to attach the macvlan network.
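
For the gradual migration in question #1, containers can be moved over one at a time (a sketch - the container name sonarr and the address 192.168.1.130 are made up, and homenet stands in for your macvlan network):

# move one container from the default bridge to the macvlan network
docker stop sonarr
docker network disconnect bridge sonarr
docker network connect --ip 192.168.1.130 homenet sonarr
docker start sonarr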

 

Link to comment
  • 3 weeks later...
On 2/15/2017 at 6:24 AM, ken-ji said:

With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network.

 

This is my problem: I need to run the netdata docker on the host network to be able to fully see host stats, but I need to run nginx and all my other dockers on the br0 macvlan. Is there ANY way to let nginx proxy/talk to netdata (eth0 to br0)?

This is something I've found online, but I get "device busy" when trying to implement it, even with everything unplugged.

 

Quote

curl: (7) Failed connect to 10.12.0.117:80; No route to host

The host is unable to communicate with macvlan devices via the primary interface. You can create another macvlan interface on the host, give it an address on the appropriate network, and then set up routes to your containers via that interface:


# ip link add em1p1 link em1 type macvlan mode bridge
# ip addr add 10.12.6.144/21 dev em1p1
# ip route add 10.12.0.117 dev em1p1

 

Any help is appreciated.

Link to comment

Ok, did a bit of research.

 

Assume the following:

local network gateway: 192.168.1.1

local network: 192.168.1.0/24

DHCP range: 192.168.1.64/26 (64-127)

unRAID server IP: 192.168.1.5

unRAID interface: br0

Docker network: localnet

Docker network range: 192.168.1.128/26 (128-191)

 

Then we'll need another IP for the unRAID server to use with the docker network:

unRAID server IP2: 192.168.1.6

 

So in this case you'll need to run the following commands:

ip link add br0.host link br0 type macvlan mode bridge
ip addr add 192.168.1.6/24 dev br0.host
ip link set up dev br0.host

The last line is very important: by default, the interface would be down (not connected).
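
One caveat (an assumption based on the example above, not something from this post): because br0 and br0.host share the same /24, host-to-container traffic may still leave via br0 and get dropped by macvlan isolation, so pinning a more specific route through the shim - and verifying with a ping - can help:

# steer host-to-container traffic through the macvlan shim interface
ip route add 192.168.1.128/26 dev br0.host
# then test any container address in that range, e.g.
ping -c 1 192.168.1.130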

Edited by ken-ji
Forgot to enable the macvlan subinterface
Link to comment

It's not a true VLAN, so the use of a different subnet puts two logical networks on the same broadcast domain, allowing them to spy on each other but with no true communication, as subnet 1 has no idea how to route to subnet 2 (the gateway in all likelihood has a single IP on subnet 1 and nothing on subnet 2).

Link to comment

OK, so I found this thread because I need to set up my home automation gateway docker to use port 80, which means it will require its own IP address. Unfortunately I'm not getting anywhere. I tried entering the commands in the first post, and my unRaid manages to accept them all except for when I enter the last line: homenet

 

When I press enter I get the following message:  "docker network create" requires exactly 1 argument(s).

See 'docker network create --help'.
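
That error usually means the shell broke the multi-line command apart: each backslash must be the very last character on its line, with no trailing spaces. Collapsed onto one line with this poster's subnet (the 192.168.0.128/25 range and parent br0 are illustrative assumptions), the equivalent would be:

docker network create --driver macvlan -o parent=br0 --subnet 192.168.0.0/24 --ip-range 192.168.0.128/25 --gateway 192.168.0.1 homenet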

 

Specifics: I am running version 6.3.5

I have 1 nic

Network info:

Unraid: 192.168.0.133

Subnet 192.168.0.0/24

GW: 192.168.0.1

 

I did make the changes to the commands so that they reflected my subnet and not that of the instructions.  Other than that, I've copied everything to the letter. Any suggestions?

 

Link to comment

Thanks for the reply. I must be an idiot, because I don't see anywhere it can be easily changed in the GUI. I did search for the instructions before and after posting this question. Every single article that I found links back to the original instructions in this thread. Is there another article that shows how this is done through the GUI?

Link to comment

Ah, OK.  I glossed over that.  I figured I was on the latest since it was reporting that there were no updates.  

 

Short of installing an RC of version 6.4, is there anything I can do to get this working on my current version? I am running a few mission-critical services on my unRaid (one of them being a security cam DVR) and I'd rather not risk the instability that inherently comes with early-release software.

Link to comment

Post your network details and what you want the container IP to be.

Post the output of

docker network ls

and

docker network inspect <network>

where network is any network listed by the ls command other than bridge, host and none

 

I'll see what you need to run and change

Link to comment

Thanks. After my last post I did some research and decided to test the 6.4 RC, since downgrading seemed simple. True to form (at least as it seems for me), it broke many things, including my dockers (they disappeared), and I could no longer access shares from some of the systems on my network. I spent a couple of hours getting things fixed up; I am now running the latest 6.4 RC and am able to assign IPs to the dockers independent of the unRaid host IP, which is what I wanted to do. The only issue I have now is that the dockers aren't receiving IPs from my DHCP server - the IPs are assigned by docker itself. I do have the ability to assign the IP manually from the GUI, but I prefer to leverage my DHCP server and assign reserved IPs for cleaner management. Not sure if this behavior is by design or if there's something I can do to alter it so that it behaves the way I'd like. I'll do a search, but if you have any advice to give I'd gladly accept it.


Thanks!

 

Link to comment

Yeah, docker networks currently do not communicate with an external DHCP server - there was some talk about somebody writing a docker engine plugin to do that, but it has yet to materialize - so you need to specify an IP range for your docker containers that sits outside of the DHCP server's allocation range.
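
Using the example addressing from ken-ji's earlier post (DHCP handing out 192.168.1.64/26, i.e. .64-.127), carving a docker range out of the remaining space might look like this sketch:

# keep docker's range (.128-.191) clear of the DHCP pool (.64-.127)
docker network create \
--driver macvlan \
-o parent=br0 \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.128/26 \
--gateway 192.168.1.1 \
localnet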

Link to comment
16 hours ago, ken-ji said:

Yeah, docker networks currently do not communicate with an external DHCP server - there was some talk about somebody writing a docker engine plugin to do that, but it has yet to materialize - so you need to specify an IP range for your docker containers that sits outside of the DHCP server's allocation range.

 

OK, thank you for clarifying. I'll manage things manually for now until a solution is found.

 

 

Link to comment
