(SOLVED) Interesting Network Issue



EDIT: I didn't post this in the OpenVPN-AS container thread as I think this is an unRAID network issue. Mods, please move this if you / future replies indicate I am wrong.

 

Hi All,

 

I have an interesting network issue. 

 

I establish a VPN connection to my unRAID machine via linuxserver.io docker OpenVPN-AS. All has always worked well.

 

Recently (as in a few days ago) I decided to change things and give each of my containers its own LAN IP on the same range as all other machines on my LAN (192.168.1.x). I went further and assigned each its own hostname (via the -h docker switch AND a DNS entry in the router).
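To make the setup concrete, this is roughly what the equivalent CLI commands look like (the network name, IP and image below are placeholders; on unRAID the same thing is normally done through the Docker page, and the DNS entry still has to be added in the router by hand):

    # Create a macvlan network on the LAN range (unRAID creates its own equivalent, usually "br0")
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 lan_net

    # Run a container with its own LAN IP and hostname
    docker run -d --network lan_net --ip 192.168.1.50 -h mycontainer some/image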

 

Now, when I VPN in, I cannot access any docker container UI. I can access other machines on the network fine, and also the unRAID UI. I have tried both the IP address and the local DNS name (I half expected the local DNS name not to work), but to no avail. When I revert to using a bridge or host port, I can access the containers' UIs just fine via VPN.

 

There is absolutely no change to local access on the LAN - where I can access each container perfectly fine using either the hostname or the local IP.

 

I imagine this must have something to do with a container accessing a container, but I am not savvy enough here to figure out what is going on and try to fix it.

 

Any help would be appreciated.

 

Ta,

 

Daniel

Edited by danioj

This is "normal" and expected. The mechanism that allows docker containers to have their own IPs puts up a hard firewall between the host network and any dockers with separate IPs. It's a security function and not bypassable without routing the traffic around the firewall.

 

It's been discussed ad nauseam in various threads around the forum.
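For anyone who wants the detail: the isolation comes from macvlan itself, where the host's parent interface cannot talk to the macvlan child interfaces hanging off it. One commonly described way to "route the traffic around the firewall" is a macvlan shim interface on the host, roughly as sketched below (names and addresses are placeholders, and it isn't persistent across reboots, so treat it as experimental on unRAID):

    # On the host: create a macvlan "shim" next to the containers' macvlan network
    ip link add macvlan-shim link eth0 type macvlan mode bridge
    ip addr add 192.168.1.250/32 dev macvlan-shim
    ip link set macvlan-shim up

    # Route traffic for a container's IP via the shim instead of the parent interface
    ip route add 192.168.1.50/32 dev macvlan-shim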

28 minutes ago, jonathanm said:

This is "normal" and expected. The mechanism that allows docker containers to have their own IPs puts up a hard firewall between the host network and any dockers with separate IPs. It's a security function and not bypassable without routing the traffic around the firewall.

 

It's been discussed ad nauseam in various threads around the forum.

Thanks @jonathanm. My search kung fu must be terrible; in a dozen searches I didn't see any discussion. Can you give me a link to the most authoritative thread so I can read up?

On 6/2/2018 at 11:48 AM, jonathanm said:

Not necessarily most authoritative, but a good starting point with solutions.

 

Thanks for this. However, after reading through the posts I wasn't too taken with the solutions. So, for others' benefit, what I decided to do was the following (rough commands are sketched after the list):

 

  • Use my existing Ubuntu VM, which is always running
  • Install 18.04 LTS in a minimal config
  • Install docker.io via apt-get
  • Install the Portainer management UI container
  • Give the VM a static IP address
  • Deploy the linuxserver.io openvpn-as container into the Ubuntu VM's Docker instance
  • Set up openvpn-as as normal
  • Port forward 1194 to the Ubuntu VM
  • Log in via phone and test
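For reference, the rough shape of the commands inside the VM looks like this (exact flags, paths and ports are taken from the images' documentation / typical defaults rather than a copy-paste of my shell history, so double-check them against the current docs):

    # Docker and Portainer on the Ubuntu 18.04 VM
    sudo apt-get update && sudo apt-get install -y docker.io
    sudo docker run -d --name portainer --restart always \
      -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer

    # linuxserver.io openvpn-as (ports and volume per the image documentation at the time)
    sudo docker run -d --name openvpn-as --restart unless-stopped \
      --cap-add=NET_ADMIN \
      -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
      -p 943:943 -p 9443:9443 -p 1194:1194/udp \
      -v /opt/openvpn-as:/config \
      linuxserver/openvpn-as

The VM's static IP and the 1194/udp port forward are handled outside docker (a static address or DHCP reservation for the VM, and a forwarding rule on the router).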

 

Now all docker containers with their own IP address can be accessed when I VPN in.

 

There are plenty of other solutions to this (e.g. install openvpn-as directly in the VM rather than as a container, or use the router's VPN functionality), but for various reasons (ongoing admin, the power of my router hardware) I didn't want to go those routes.

 

Happy now.

 

EDIT: some people might want to know why I want each of my dockers to have its own LAN IP. It is so I can use my router to route certain dockers' internet connections (selected by their IP) through an external VPN service.
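For anyone wondering what that routing looks like under the hood: on a Linux-based router it is essentially policy routing keyed on the container's source IP, along the lines of the sketch below (interface name, table number and IP are placeholders, and most consumer routers expose this through a GUI instead):

    # Send traffic originating from one container's LAN IP out via the VPN tunnel (tun0 here)
    ip rule add from 192.168.1.50 lookup 100
    ip route add default dev tun0 table 100

    # NAT that traffic out of the tunnel interface
    iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE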

Edited by danioj

Another way of doing this is possible when you have a second ethernet interface.

  • Configure the second ethernet interface as a separate interface with NO IP addresses assigned to it (select: none)
  • Enable bridge function for this second interface (optional)
  • Configure the docker network settings
    • uncheck the IP network settings for the main interface (eth0 or br0)
    • assign the network subnet and gateway settings of br0/eth0 to the second ethernet interface (e.g. br1)
  • Start the docker service
  • Configure the containers to use the second ethernet interface (br1)
  • The above allows containers to communicate with each other AND the unRAID host too (a quick CLI sketch follows)
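To illustrate, once the docker service is restarted you should see a br1 network, and a container can be pinned to it with a LAN address (IP and image below are placeholders; on unRAID you would normally just pick br1 in the container template rather than use the CLI):

    # Confirm docker created the br1 network with the subnet/gateway set above
    docker network ls
    docker network inspect br1

    # Attach a container to br1 with its own LAN IP
    docker run -d --network br1 --ip 192.168.1.60 -h mycontainer some/image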

 

11 minutes ago, bonienl said:

Another way of doing this is possible when you have a second ethernet interface.

  • Configure the second ethernet interface as a separate interface with NO IP addresses assigned to it (select: none)
  • Enable bridge function for this second interface (optional)
  • Configure the docker network settings
    • uncheck the IP network settings for the main interface (eth0 or br0)
    • assign the network subnet and gateway settings of br0/eth0 to the second ethernet interface (e.g. br1)
  • Start the docker service
  • Configure the containers to use the second ethernet interface (br1)
  • The above allows containers to communicate with each other AND the unRAID host too

 

 

Hmmm, I do have two ethernet interfaces on the server, but they are currently bonded. I'm not sure I get much real-life benefit from that bonding setup, so I might remove the bond and try that solution.

4 weeks later...
On 6/3/2018 at 4:24 PM, bonienl said:

Another way of doing this is possible when you have a second ethernet interface.

  • Configure the second ethernet interface as a separate interface with NO IP addresses assigned to it (select: none)
  • Enable bridge function for this second interface (optional)
  • Configure the docker network settings
    • uncheck the IP network settings for the main interface (eth0 or br0)
    • assign the network subnet and gateway settings of br0/eth0 to the second ethernet interface (e.g. br1)
  • Start the docker service
  • Configure the containers to use the second ethernet interface (br1)
  • The above allows containers to communicate with each other AND the unRAID host too

 

 

Hi @bonienl, I followed your instructions to the letter, but I still hit issues.

 

All my docker containers are working fine (as you would expect on br1), but my openvpn docker (which is configured on the host) will not communicate with the containers, which have their own IPs set on my network. It can (once again, as you would expect) communicate with the host.

 

Do you have any suggestions?

 

On 6/26/2018 at 6:58 AM, danioj said:

 

Hi @bonienl, I followed your instructions to the letter, but I still hit issues.

 

All my docker containers are working fine (as you would expect on br1), but my openvpn docker (which is configured on the host) will not communicate with the containers, which have their own IPs set on my network. It can (once again, as you would expect) communicate with the host.

 

Do you have any suggestions?

 

 

Some things to check:

- When the openvpn container is set to the "br1" network, it should be able to communicate with any other container also set to "br1". Communication passes through your router/switch and must be allowed by that device too.

- The "remote" side must have the proper routing to access the complete network range and not just the single host address of unRAID (a config sketch follows).
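On that second point, in plain OpenVPN terms it means the server has to push the LAN route to connecting clients; in OpenVPN-AS the equivalent setting lives in the Admin UI under the VPN/routing settings (the private subnets clients should be given access to). A minimal sketch, using the example LAN from this thread:

    # server.conf equivalent: give VPN clients a route to the whole 192.168.1.0/24 LAN
    push "route 192.168.1.0 255.255.255.0"

    # on a connected (Linux) client, the LAN route should then show up in the routing table
    ip route | grep 192.168.1.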

