IceNine451

Better handling of to-container Docker network binding



I've been working on a solution for routing specific Docker container traffic through a VPN connection for applications that don't have proxy support. This can be done using the --net=container:<containername> parameter (demonstrated here: https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ )
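For anyone who hasn't seen it, the pattern boils down to two docker run commands; the image and container names here are placeholders, not any specific template:

# start the VPN container first; it owns the network stack
docker run -d --name vpn --cap-add=NET_ADMIN --device /dev/net/tun some/vpn-image
# attach the application container to the VPN container's network stack
docker run -d --name myapp --net=container:vpn some/app-image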

 

This works, albeit a bit ungainly. My request is specific to how UnRAID appears to handle this kind of setup. On the container doing the binding, under the Docker tab of UnRAID, the Network column (which normally shows bridge / host etc.) will show "container=<container ID of the bound-to container>". This is a bit awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag.

 

Secondly, since the container doing the binding seems to use Docker IDs rather than container names, the ID changes every time the bound-to (VPN) container updates. The binding container then needs to be force-updated, or have a non-change made to its configuration (such as adding and then removing a space in a field) to enable the "Apply" button, at which point it will re-translate the bound-to container's name to its new ID. If I try to restart or stop-then-start the container instead, it fails, since the bound-to container ID no longer exists.
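You can see what Docker actually stored with docker inspect; this is a sketch with a placeholder container name:

# the binding container stores the target's ID, not its name
docker inspect -f '{{.HostConfig.NetworkMode}}' myapp
# -> container:<64-character ID of the VPN container at creation time>
# once the VPN container is recreated, that ID is gone and a plain start fails
docker start myapp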

 

It's possible what's happening is due to Docker and not UnRAID; if so, I don't expect any real kind of workaround, but wanted to try anyway. It just feels like what I'm seeing is at least partially due to the way UnRAID parses and handles Docker information and requests, and therefore could be fixed.

12 minutes ago, IceNine451 said:


I too saw this behavior and have become frustrated by it. I would like to second this request.

My implementation was for nearly identical purposes.

Basically I set up a NordVPN container, added non-VPN-friendly containers (including my sslh docker) to that docker's network, and was off to the races.
SSLH multiplexes my HTTPS and SSH traffic so they both run on port 443. I'd rather not expose my public IP directly, so I route all external traffic through the VPN connection using this and nginx.
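A rough sketch of that layout from the command line (image names are illustrative, not the actual templates):

# the VPN container owns the network stack, so published ports are declared here
docker run -d --name nordvpn --cap-add=NET_ADMIN --device /dev/net/tun -p 443:443 some/nordvpn-image
# sslh joins the VPN container's stack and listens on 443 inside it
docker run -d --name sslh --net=container:nordvpn some/sslh-image

One quirk of this mode is that -p mappings can only be declared on the container that owns the stack (nordvpn here); Docker refuses port publishing on containers joined with --net=container.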

Problem:
Any time the NordVPN container restarts or updates, everything breaks, and every dependent container's --net=container binding has to be manually corrected.

2 hours ago, Xaero said:


I'm glad I'm not the only one looking for this!

 

Running specific containers through a "gateway" VPN container seems like a popular desire, and while Privoxy and SOCKS cover most things, they aren't as comprehensive as running all traffic from a container through a secure tunnel.

 

I'm no developer so I can't speak to the actual possibilities, but I guess what I would like to see myself is an additional option like "Other container" under the "Network Type" drop-down on the container config page, with a sub-drop-down to select another container that already exists. This should cut down on confusion, and it could even be hidden unless you have Advanced View enabled.

 

That, combined with more transparent handling of the container name / ID translation and update issue we've both noted, seems like it would cover the use case pretty well. Even if containers had to be put in logical groups, where the "client" containers are automatically restarted (or whatever needs to happen) when the "host" container changes ID, that would be fine with me.
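For reference, a rough sketch of what that automation might look like outside the UI; this is only an illustration, the container names are placeholders, and recreate_client is a stand-in for however each container actually gets rebuilt (re-applying the UnRAID template, re-running the original docker run command, etc.):

#!/bin/bash
# Sketch: watch for the "host" container starting and rebuild the clients.
PARENT=nordvpn
CLIENTS="sslh pproxy"

recreate_client() {
  # Placeholder: a plain "docker restart" is only enough if the parent kept
  # its old ID; after an update the client must be recreated with
  # --net=container:$PARENT so the name is resolved to the new ID.
  echo "recreate $1 against the new $PARENT ID"
}

# docker events emits one line per start event of the parent container
docker events --filter "container=$PARENT" --filter "event=start" |
while read -r _; do
  for c in $CLIENTS; do
    recreate_client "$c"
  done
done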

On 10/9/2019 at 7:52 PM, IceNine451 said:



In my opinion, each docker should be listed as a selection inside the "network interfaces" box. That way you can easily select which network to connect to. Perhaps add a "shared network" option to dockers so that the list doesn't get huge with too many dockers.
It just needs to not switch from container name to container ID.


To make this a bit more clear:
[Screenshot: the Docker tab, showing the Network column for the Nordvpn, DDClient and pproxy containers]

In this screenshot we see that Nordvpn is configured for bridge-mode networking. DDClient is configured for host-mode networking (I start it in host mode for updating DNS records with my real IP currently; eventually I will change it to container:nordvpn).
The third docker, pproxy, is configured manually by going into Advanced and putting --net=container:nordvpn. After saving, the --net=container:nordvpn is converted to container:<uuid>.
This UUID changes every time the container is modified, so if I change a setting, update the container, etc., everything that depends on its network must be manually updated again.
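This is easy to confirm from the command line (container names from the screenshot; the IDs themselves are placeholders):

# what pproxy actually stored at creation time
docker inspect -f '{{.HostConfig.NetworkMode}}' pproxy
# -> container:<old 64-character ID>
# the current ID of nordvpn, which differs from the stored one after any recreate
docker inspect -f '{{.Id}}' nordvpn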

Edited by Xaero

Share this post


Link to post

Running into the exact same problem ... it's really annoying and requires a lot of manual babysitting of the docker update process. Even if this can't change for some reason, it would be nice if the update mechanism took it into account and force-updated child dockers when parent dockers are updated.

