[6.3.0+] How to setup Dockers without sharing unRAID IP address


ken-ji


Popular Posts

How to setup Dockers to have own IP address without sharing the host IP address: This is only valid in unRAID 6.3 series going forward. 6.4.0 has this built into the GUI but currently have a

Perhaps you would be interested to know that macvlan support is added in the upcoming version of unRAID; it allows you to select additional 'custom' networks from the GUI.

Bridging is just a sample (and a recommended setup since you typically want VMs to use the same physical NICs without having to NAT - the default vmbr0 is a NAT bridge). You can use eth1 just the same


7 minutes ago, wayner said:

I use the LibreElec build if I remember correctly.

 

I just read the DVB thread; the TBS open-source and CrazyCat builds detected the Hauppauge card in some versions. I think it was 6.3.4.

2 hours ago, wayner said:

On May 1 a bunch of this stuff was said to be coming soon - presumably it has landed?  That isn't mentioned in this thread.  If so, in what version?

 

unRAID prerelease 6.4.0 is available which has custom networks built-in.

1 minute ago, bonienl said:

 

unRAID prerelease 6.4.0 is available which has custom networks built-in.

Where do I learn more about these custom networks and how they work?

 

I am not an expert on networking, so I don't totally understand how all of this works. Can you set up the range for docker IPs using non-contiguous bits? What I mean is that I would like to use the IP addresses 192.168.1.224-254, i.e. only the top portion of the last octet. Is that possible, or do you have to use a /25 network, which I think means using the entire range from 128-254? I can't do that, as I have other things using those addresses, and I already have a ton of devices on my LAN, so I can't afford to give up 128 addresses to unRAID. And I don't want to go to a /23 network (and use 192.168.0.1-254), as I am sure that will break things: it would mean changing the subnet mask for everything else on my network.
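For reference, Docker's `--ip-range` flag can be narrower than the `--subnet`, so a /27 block starting at .224 should cover roughly the .224-.254 range described above without touching the LAN's /24 mask. A hypothetical, untested sketch (the network name `homenet27` is illustrative, not from this thread):

```shell
# Containers are allocated IPs only from 192.168.1.224/27,
# while the LAN itself stays a /24 - no subnet mask changes needed.
docker network create \
  -o parent=br0 \
  --driver macvlan \
  --subnet 192.168.1.0/24 \
  --ip-range 192.168.1.224/27 \
  --gateway 192.168.1.1 \
  homenet27
```

Note that .255 is the /24 broadcast address, so the usable container range here is effectively .224-.254.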


Hi,

 

I followed this guide and have set up several sites, each in a separate Docker container with a dedicated IP.

 

Eg.:

.254 = UnRaid

.253 = Nextcloud

.252 = MySql

 

This works fine. I can setup Nextcloud with the MySQL DB.

 

Then I thought: why use a dedicated IP for MySQL, since MySQL doesn't share any ports with unRAID?

So I removed MySQL again and reinstalled it with default settings.

 

From within unRAID I can access MySQL, but when trying to set up Nextcloud, it tells me that it cannot reach the DB (Host Unreachable).

So I logged in to the Nextcloud container and tried to ping unRAID, and sure enough, it was not responding...

 

Is this a known behaviour, or am I doing something wrong?

 

Br,

Johannes


Hi,

 

I was reading this thread again and found this statement:

 

Quote
  • With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network.

 

This is why it is not working. I will keep my MySQL container on a dedicated IP as well; that works.

 

Br,

Johannes

 

  • 3 weeks later...

Hi, 

 

I have two questions here:

 

If I go with this setup, the containers' virtual IP addresses cannot access the unRAID IP and vice versa.

But can I attach two different containers to one virtual IP, e.g. MariaDB and Apache? Are those two containers then able to communicate with each other?

 

Second question, not directly related to this topic but similar: currently I am using these virtual IP addresses because I have only one NIC in my server but need different containers with the same port available (and port mapping is not possible for my scenario).

I will soon get a quad gigabit card.

Can I then attach one container to one of the additional NICs, and is that container then able to talk to the unRAID NIC as well? I guess this is possible then?

 

Br,

Johannes
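On the first question above: containers attached to the same user-defined macvlan network can reach each other directly; the macvlan restriction only blocks container-to-host traffic. A hypothetical sketch, assuming a macvlan network named `homenet` already exists (the container names and addresses are illustrative):

```shell
# Two containers on the same macvlan network, each with its own IP.
# They can talk to each other, but neither can reach the unRAID host.
docker run -d --network homenet --ip 192.168.1.201 --name mariadb mariadb
docker run -d --network homenet --ip 192.168.1.202 --name apache httpd

# Container-to-container traffic on the macvlan network works:
docker exec apache ping -c 1 192.168.1.201
```

If two containers genuinely need to share a single IP, Docker also allows one container to join another's network namespace with `--network container:mariadb`, though that is outside what this thread describes.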

  • 2 weeks later...

 

Quote

 

Some caveats:

  • With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network.
  • I cannot confirm yet what happens in the case of two or more NICs bridged/bonded together (but it should be the same as a single NIC)

 

 

I've got one docker network for all of my VPN traffic and one docker network for my web server. I'm able to reverse proxy everything on the VPN network using the docker network that my web server uses (meaning: my web server container can talk to all containers on the OTHER docker network). I'm using two NICs for this. Both NICs are bridged separately. I have a static IP set up on my second NIC that's within the docker network I created (named "webby").

 

I followed the OP's thread on making a new docker network (using my network as an example). I set up my network using my second NIC's bridged interface.

# docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 10.10.10.0/24 \
--ip-range 10.10.10.160/31 \
--gateway 10.10.10.1 \
webby

I then went into my letsencrypt container and followed the instructions in the OP's post. I statically assigned an IP address to the container that was contained within the "webby" network (since my interface IP is 10.10.10.160 and I have a /31, I used 10.10.10.161) using --ip 10.10.10.161. I can now use my reverse proxy settings while keeping my other docker containers behind my VPN connection.
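The container assignment described above can be sketched roughly as follows (the image name is illustrative; the network name, ip-range, and address match the post):

```shell
# Attach the reverse-proxy container to the "webby" macvlan network,
# using the one other address in the 10.10.10.160/31 ip-range
# (the host's second NIC already holds 10.10.10.160).
docker run -d \
  --network webby \
  --ip 10.10.10.161 \
  --name letsencrypt \
  linuxserver/letsencrypt
```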

 

Thanks for this write-up - I've spent at least 20+ hours thinking about and researching a solution. I wanted Docker to be the answer (and now it finally is!)

  • 3 weeks later...

Hi, 

 

I have a question about using multiple NICs: with the following command I am able to use the second NIC:

 

# docker network create \
-o parent=br1 \
--driver macvlan \
--subnet 10.0.3.0/24 \
--ip-range 10.0.3.128/25 \
--gateway 10.0.3.1 \
docker1

Why is bridging used? Wouldn't it also work if bridging is disabled in the network settings and "eth1" is used instead of "br1" when creating the docker network?

 

Br,

Johannes
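To the question above: the macvlan driver does accept a bare NIC as the parent, so with bridging disabled on the second interface something like the following should work. An untested sketch mirroring the command above (the network name `docker1b` is illustrative, chosen so it does not collide with the existing `docker1`):

```shell
# Same network as above, but parented directly on eth1 instead of br1
docker network create \
  -o parent=eth1 \
  --driver macvlan \
  --subnet 10.0.3.0/24 \
  --ip-range 10.0.3.128/25 \
  --gateway 10.0.3.1 \
  docker1b
```

Bridging is mainly useful when VMs need to share the same physical NIC; for a NIC dedicated to containers, the bridge is not strictly required.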

  • 2 weeks later...

When I try to run the command in the OP, I get the following error:

Error response from daemon: network dm-ba57b5a60b33 is already using parent interface br0

 

I attached my network settings and also an ifconfig readout with docker enabled, but all dockers turned off. Not sure if I have things set up right.

 

I have 2 NICs (both Intel onboard), but I have the second NIC detached from unRAID and available to a VM using append vfio-pci.ids=etc.

 

 

network.png

network2.png

On 2/15/2017 at 1:24 AM, ken-ji said:

# docker network create \
-o parent=br0 \
--driver macvlan \
--subnet 192.168.1.0/24 \
--ip-range 192.168.1.128/25 \
--gateway 192.168.1.1 \
homenet

# docker inspect container | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "192.168.1.128",
# docker exec container ping www.google.com
PING www.google.com (122.2.129.167): 56 data bytes
64 bytes from 122.2.129.167: seq=0 ttl=57 time=36.842 ms
64 bytes from 122.2.129.167: seq=1 ttl=57 time=36.496 ms
^C
#

I tried creating this with -o parent=br0.1 and the network was created. But when I assign an IP to a container, it is not reachable.

 

root@Tower:/mnt/user/appdata/pihole# docker inspect pihole | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "192.168.79.128",
root@Tower:/mnt/user/appdata/pihole# docker exec pihole ping www.google.com
ping: bad address 'www.google.com'

 

I also can't reach the container's web UI.

 

57 minutes ago, ken-ji said:

Can you post your /boot/config/network.cfg file (or the diagnostics)

the output of "docker network inspect homenet"

the output of "ip addr" and "ip link"

 

 

Thanks @ken-ji

 

network.cfg attached,

 

Output of "docker network inspect homenet":

[
    {
        "Name": "towernet",
        "Id": "e700c906426a27f3dd1d61279ab286ae0c403eff7d72cb2ccfc9c50cbc819c54",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.79.0/24",
                    "IPRange": "192.168.79.200/30",
                    "Gateway": "192.168.79.83"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "parent": "br0.1"
        },
        "Labels": {}
    }
]

 

Output of "ip addr":

 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/32 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
    inet 192.168.79.15/24 scope global br0
       valid_lft forever preferred_lft forever
38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2
74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3
76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5
84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8
86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9
88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11
90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12
92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13
94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14
96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15
100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff
104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10
106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16
110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff

"ip link":

 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff
38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff
68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0
70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2
74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3
76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4
78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5
84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8
86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9
88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11
90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12
92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13
94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14
96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15
100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff
102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff
104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10
106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16
110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff

 

network.cfg


OK. Your dockers are on a macvlan subinterface br0.1, which seems to be VLAN id 1 under br0.

By design, macvlan subinterfaces cannot talk to the host on the parent interface. Also, since you are trying to use VLAN id 1, does your switch have VLAN support?

I ask because VLANs can't see each other unless you have an L3 router (or some VLAN bridge) in your network.

In a nutshell: packets on a VLAN subinterface (br0.x) get tagged when they exit the host (br0). The tag is standardized (802.1Q), but it makes the packet look like garbage to devices without VLAN support.

 

This is the reason my samples gave the simple case of using br0 and br1 directly. Using a subinterface assumes VLAN support, and therefore some understanding of what you are trying to achieve.
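For readers who do want the tagged-VLAN route, a subinterface can be created and used as the macvlan parent. A hypothetical sketch assuming VLAN id 5, a VLAN-capable switch, and a made-up 192.168.5.0/24 subnet (none of these values come from this thread):

```shell
# Create a VLAN subinterface on br0 carrying 802.1Q tag 5
ip link add link br0 name br0.5 type vlan id 5
ip link set br0.5 up

# Use it as the parent of a macvlan docker network
docker network create \
  -o parent=br0.5 \
  --driver macvlan \
  --subnet 192.168.5.0/24 \
  --gateway 192.168.5.1 \
  vlan5net
```

Traffic leaving br0.5 is tagged with VLAN id 5, so the switch port must be configured to carry that tag.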

 

On 2/15/2017 at 2:24 PM, ken-ji said:

How to setup Dockers to have own IP address without sharing the host IP address:

This is only valid in unRAID 6.3 series going forward.

 

 

Some caveats:

  • With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network.

 

 

EDIT: Upon re-checking, you're not using VLANs. Since your dockers cannot access the main network, can you run "ip -d link show"? I need a little more detail to understand why it's not working.

also "docker inspect pi_hole"

Edited by ken-ji
Corrected...
31 minutes ago, ken-ji said:

OK. Your dockers are on a macvlan subinterface br0.1, which seems to be VLAN id 1 under br0.

By design, macvlan subinterfaces cannot talk to the host on the parent interface. Also, since you are trying to use VLAN id 1, does your switch have VLAN support?

I ask because VLANs can't see each other unless you have an L3 router (or some VLAN bridge) in your network.

In a nutshell: packets on a VLAN subinterface (br0.x) get tagged when they exit the host (br0). The tag is standardized (802.1Q), but it makes the packet look like garbage to devices without VLAN support.

 

This is the reason my samples gave the simple case of using br0 and br1 directly. Using a subinterface assumes VLAN support, and therefore some understanding of what you are trying to achieve.

 

 

EDIT: Upon re-checking, you're not using VLANs. Since your dockers cannot access the main network, can you run "ip -d link show"? I need a little more detail to understand why it's not working.

also "docker inspect pi_hole"

 

"ip -d link show"

 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    ipip remote any local any ttl inherit nopmtudisc 
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    gre remote any local any ttl inherit nopmtudisc 
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    gretap remote any local any ttl inherit nopmtudisc 
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    vti remote any local any ikey 0.0.0.0 okey 0.0.0.0 
6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 2 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on 
37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 200 hello_time 200 max_age 2000 ageing_time 30000 stp_state 1 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 1 
    tun 
    bridge_slave state disabled priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 
102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff promiscuity 1 
    tun 
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 
104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vlan protocol 802.1Q id 1 <REORDER_HDR> 

and the docker inspect for pihole is attached.

 

BTW, I enabled the VLAN after I posted, while troubleshooting. I should have disabled it but forgot to. But I was having the issues before I enabled it.

 

pihole inspect.txt

6 minutes ago, ken-ji said:

Then my answer stands.

* you can't use VLANs unless you have a VLAN-capable switch.

* with only a single NIC, dockers with dedicated IPs cannot talk to the host, and vice versa.

 

This is in your original post:

"With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network. "

 

I thought the latter part of that paragraph meant it would still work as long as I wasn't trying to have the container talk to unRAID. Shouldn't the container still be able to talk to the outside world? Or did I misunderstand what you're saying?

 

In what cases does your single-NIC example work? Is it not feasible if you don't have a VLAN-capable network? I do have a second NIC and can try your two-NIC recommendation, but I was hoping to isolate the second NIC specifically for a VM running pfSense (which I haven't started yet).

Edited by Kewjoe

Something is wrong with your setup right now...
How did you create the br0.1 interface? The one in your ip commands is created as a VLAN subinterface.

You can reconfigure the pihole container back, delete the homenet docker network with "docker network rm homenet", and stop the array and disable the VLAN network. Then try again.
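The reset steps above, as commands (disabling the VLAN itself is done in the unRAID network settings, not on the command line):

```shell
# Remove the broken macvlan network so it can be recreated cleanly
docker network rm homenet

# Confirm it is gone before recreating it
docker network ls
```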

