IceNine451 Posted October 9, 2019

I've been working on a solution for routing specific Docker container traffic through a VPN connection, for applications that don't have proxy support. This can be done using the --net=container:&lt;containername&gt; parameter (demonstrated here: https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ ). This works, albeit a bit ungainly. My request is specific to how UnRAID appears to handle this kind of setup.

On the container doing the binding, under the Docker tab of UnRAID, the Network column (which normally shows bridge / host etc.) will show "container=&lt;container ID of the bound-to container&gt;". This is a bit awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag.

Secondly, since the binding container seems to reference the Docker ID rather than the container name, the ID changes every time the bound-to (VPN) container updates. The binding container then needs to be force-updated, or needs a non-change made to its configuration (such as adding and then removing a space in a field) to enable the "Apply" button, at which point that container will re-translate the bound-to container's name to its new ID. If I instead try to restart or stop-then-start the binding container, it fails, since the old bound-to container ID no longer exists.

It's possible that what's happening is due to Docker and not UnRAID; if so, I don't expect any real kind of workaround, but I wanted to try anyway. It just feels like what I'm seeing is at least partially due to the way UnRAID parses and handles Docker information and requests, and therefore could be fixed.
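For anyone following along, the setup and the failure mode described above look roughly like this. This is an illustrative sketch only; the container names ("vpn", "app") and image names are placeholders, not anything from UnRAID itself.

```
# 1. Start the VPN container normally (bridge networking).
docker run -d --name vpn vpn-image

# 2. Attach an application container to the VPN container's network stack.
#    All of the app's traffic now flows through "vpn"'s interfaces.
docker run -d --name app --net=container:vpn app-image

# The catch: Docker records the *ID* of "vpn" in the app's config, not the
# name, which you can see with:
docker inspect --format '{{.HostConfig.NetworkMode}}' app
# If "vpn" is later recreated (e.g. after an image update), its ID changes
# and "app" fails to start until it is itself recreated against the new ID.
```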
Xaero Posted October 9, 2019

12 minutes ago, IceNine451 said:

I've been working on a solution for routing specific Docker container traffic through a VPN connection for applications that don't have proxy support. […]

I too saw this behavior and have become frustrated by it. I would like to second this request. My implementation was for nearly identical purposes.
Basically I set up a NordVPN container, added non-VPN-friendly containers (including my sslh docker) to that docker's network, and was off to the races. SSLH multiplexes my HTTPS and SSH traffic so they are both on port 443. I'd rather not expose my public IP directly, so I route all external traffic through the VPN connection leveraging this and nginx.

Problem: any time the NordVPN container restarts or updates, everything breaks and has to be manually corrected back to --net=container.
IceNine451 Posted October 10, 2019 (Author)

2 hours ago, Xaero said:

I too saw this behavior and have become frustrated by it. I would like to second this request. […]

I'm glad I'm not the only one looking for this! Running specific containers through a "gateway" VPN container seems like a popular desire, and while Privoxy and SOCKS cover most things, they aren't as comprehensive as running all traffic from a container through a secure tunnel.

I'm no developer so I can't speak to the actual possibilities, but what I would like to see myself is an additional option like "Other container" under the "Network Type" drop-down on the container config page, with a sub-drop-down to select another container that already exists. This should cut down on confusion, and it could even be hidden unless you have Advanced View enabled. That, combined with more transparent handling of the container name / ID translation and update issue we've both noted, seems like it would cover the use case pretty well. Even if containers had to be put in logical groups, where the "client" containers were automatically restarted (or whatever needs to happen) when the "host" container changes ID, that would be fine with me.
Xaero Posted October 10, 2019 (edited)

On 10/9/2019 at 7:52 PM, IceNine451 said:

I'm glad I'm not the only one looking for this! […]

In my opinion, each docker should be listed inside the "network interfaces" box as a selection. That way you can easily select which network to connect to. Perhaps add a "shared network" option to dockers so that the list doesn't get huge with too many dockers. It just needs to not switch from container name to container ID.

To make this a bit more clear: in this screenshot we see that Nordvpn is configured for bridge mode networking. DDClient is configured for host mode networking (I start it in host mode for updating DNS records with my real IP, currently; eventually I will change it to container:nordvpn). The third docker, pproxy, is configured manually by going into advanced view and putting --net=container:nordvpn.
After saving, the --net=container:nordvpn is converted to container:&lt;uuid&gt;. This UUID changes every time the container is modified, so if I change a setting, update the container, etc., everything that depends on its network must also be manually updated again.

Edited October 15, 2019 by Xaero
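Until the UI handles this, one rough manual workaround is to re-resolve the name to the current ID yourself whenever the VPN container is recreated. An untested sketch, using the container names from this thread as examples (the image name is a placeholder):

```
# Hypothetical recovery script: recreate a dependent container against the
# VPN container's *current* ID after the VPN container has been updated.
VPN_NAME=nordvpn
DEP_NAME=pproxy
DEP_IMAGE=pproxy-image

# Resolve the VPN container's current ID by name.
VPN_ID=$(docker inspect --format '{{.Id}}' "$VPN_NAME") || exit 1

# Recreate the dependent container bound to the fresh ID.
docker rm -f "$DEP_NAME" 2>/dev/null
docker run -d --name "$DEP_NAME" --net="container:${VPN_ID}" "$DEP_IMAGE"
```

Plain docker run accepts the container *name* in --net=container:&lt;name&gt; too; the re-resolution step is only needed because the stored config ends up holding the ID.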
JesterEE Posted October 21, 2019

Running into the exact same problem ... it's really annoying and requires a lot of manual babysitting of the docker update process. Even if this can't change for some reason, it would be nice if the update mechanism took it into account and force-updated child dockers when parent dockers are updated.
Xaero Posted October 31, 2019

I'll note a few things here. Switching to using Docker Compose instead of docker run solves this automatically. Compose supports always using container or service network types by name; it will just resolve the name to the container ID on startup from the docker-compose.yml file. Compose also comes with the benefit of health checks and health-event-based management (VPN docker goes down? Automatically stop all dependent dockers. Comes back up? Start them back up.) Seems worth looking into; obviously not an easy change to implement, as the current docker profile system has been a product of years of development. It would be nice to move to a more standard docker implementation like Compose, though.
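A minimal compose sketch of what this could look like. Service and image names here are placeholders, and the healthcheck command is just an example; `network_mode: "service:vpn"` is the Compose equivalent of --net=container, resolved by service name each time the stack is (re)started:

```yaml
# docker-compose.yml sketch, illustrative only.
services:
  vpn:
    image: vpn-image
    cap_add:
      - NET_ADMIN
    healthcheck:
      test: ["CMD", "ping", "-c", "1", "1.1.1.1"]
      interval: 30s

  app:
    image: app-image
    # Share the vpn service's network stack by *service name*, so there is
    # no stale container ID to chase after an update.
    network_mode: "service:vpn"
    depends_on:
      vpn:
        condition: service_healthy
```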
IceNine451 Posted November 12, 2019 (Author)

I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc), which seems to do about what I am looking for. I guess it would be nice to have this kind of functionality built into the OS, though.
Squid Posted November 12, 2019

59 minutes ago, IceNine451 said:

I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc) which seems to do about what I am looking for,
Xaero Posted November 13, 2019

That also didn't exist until well after this thread was made.
absent Posted February 9, 2020

I requested something similar a little while back... My solution died with the 6.8.x update, so I have a not-so-great solution now using `binhex/arch-qbittorrentvpn`, and luckily Jackett supports a proxy, so it uses the built-in `Privoxy`. It's not a solution I'm fond of; at least with my old one I just turned off auto-update on the VPN container and updated it manually if it ever needed it...
IceNine451 Posted February 9, 2020 (Author)

8 hours ago, absent said:

I requested something similar a little while back... My solution died with the 6.8.x update […]

This setup still works in 6.8.x; the method is a little different, but I made the transition myself with no major issues. You need to make a custom Docker network that has the same name as the VPN container. You can find direct instructions in the Prerequisite section of the instructions for the Rebuild-DNDC container. That container also takes care of the networking breaking when the VPN container updates; it will automatically rebuild any "client" containers for you. I use the binhex-privoxyvpn container as the tunnel and run qbittorrent and nzbget through it with no problem.
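For reference, the prerequisite step amounts to something like the following, run once on the Unraid host. The container name is an example (binhex-privoxyvpn as used above); this mirrors the Rebuild-DNDC documentation rather than anything official from Unraid:

```
# Create a Docker network whose name matches the container:<name> syntax
# for the VPN container. The container template's network selection can
# then reference the VPN container by name instead of by ID.
docker network create container:binhex-privoxyvpn
```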
bonienl Posted February 9, 2020 (edited)

On 10/9/2019 at 11:40 PM, IceNine451 said:

This is a bit awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag.

I made an update which translates the ID to the container name, e.g. container:vpn

On 10/9/2019 at 11:40 PM, IceNine451 said:

every time the bound-to (VPN) updates the ID changes, so the binding container needs to be forced-updated

When the vpn network no longer exists due to an update, all impacted containers are shown as "update available" and can be updated at once using "Update ALL".

On 10/9/2019 at 11:40 PM, IceNine451 said:

It's possible what's happening is due to Docker and not UnRAID, if so I don't expect any real kind of workaround but wanted to try anyway.

Some of these are docker restrictions, but the modifications I have made should make your life a lot easier. It will also be possible again to specify a network in the extra parameters; if present, it will overrule the default assignment.

Edited February 9, 2020 by bonienl
IceNine451 Posted February 9, 2020 (Author)

2 hours ago, bonienl said:

I made an update which translates the ID to the container name […]

Fantastic, thank you so much!
bonienl Posted February 9, 2020

6 hours ago, bonienl said:

When the vpn network no longer exists due to an update, all impacted containers are shown as "update available" […]

More refinements: now impacted containers are automatically rebuilt when the Docker page is opened. No user action required.
absent Posted February 13, 2020

@bonienl forgive my ignorance, but how do we get access to the updates? Or do we need to wait for the next minor release?
bonienl Posted February 13, 2020

1 hour ago, absent said:

@bonienl forgive my ignorance but how do we get access to the updates? […]

You need to wait for the next release of Unraid, which will include the updates.
eafx Posted February 14, 2020

On 2/9/2020 at 10:56 PM, bonienl said:

More refinements. Now impacted containers are automatically rebuilt when the Docker page is opened. No user action required.

Is there a reason why it rebuilds only when the Docker page is opened?
bonienl Posted February 14, 2020

When you go to the Docker page and update the "vpn" container, it will automatically update all other containers which depend on it.
eafx Posted February 14, 2020

15 hours ago, bonienl said:

When you go to the Docker page and update the "vpn" container, it will automatically update all other containers which depend on it.

Will it also work if I set the vpn container to auto-update using the CA Auto Update plugin?
bonienl Posted February 15, 2020

26 minutes ago, eafx said:

Will it also work, if I set it to auto update the vpn container using the CA Auto Update plugin?

No, it doesn't work together with the CA auto updater.
nekromantik Posted October 24, 2020

On 2/15/2020 at 12:02 AM, bonienl said:

No, it doesn't work together with the CA auto updater

Can this feature be added? I am using CA Auto Updater.
tjb_altf4 Posted December 24, 2020

Piecing a couple of threads together on the subject, and after a quick test of what the docker run command gets built as, it looks like you can create the network once (via the CLI) and then select it as the network type in the container template, instead of needing to add custom args on each container:

docker network create container:vpn

Seems like a neater solution. @bonienl, will this still get picked up by the auto-rebuild updates you've implemented?
knaack Posted May 28, 2022

Older topic, but the solutions do not appear to work well with Unraid 6.10+, as noted at https://github.com/elmerfdz/rebuild-dndc/issues/56. Anyone have other solutions for when the main docker is restarted?
biggiesize Posted June 14, 2022 (edited)

On 5/28/2022 at 12:10 PM, knaack said:

Older topic, but the solutions do not appear to work well with unraid 6.10+ […]

I just submitted a pull request to fix the issue.

Edited June 14, 2022 by biggiesize: removed alternative fix due to dev updating the container
eafx Posted June 14, 2022

3 minutes ago, biggiesize said:

I just submitted a pull request to fix the issue. I am not aware if the dev is still actively working on it so I'm not sure it will be brought in. If you want, you can temporarily use my container I used to test the fix by changing the repository to "diamondprecisioncomputing/rebuild-dndc:unraid-m"

Hi there, I'm the dev for this repo. I've currently moved houses and will be moving again later this year, and unfortunately my Unraid server is in storage, so I haven't tested any of the new builds. I've merged your pull request, thanks for that @knaack. The Docker image should've been updated as well now; could you please pull and test when you get a chance?