IceNine451

Members
  • Content Count

    19
  • Joined

  • Last visited

Community Reputation

5 Neutral

About IceNine451

  • Rank
    Member

  1. This setup still works in 6.8.x; the method is a little different, but I made the transition myself with no major issues. You need to make a custom Docker network that has the same name as the VPN container. You can find direct instructions in the Prerequisite section of the Rebuild-DNDC container's readme. That container also takes care of the networking breaking when the VPN container updates; it will automatically rebuild any "client" containers for you. I use the binhex-privoxyvpn container as the tunnel and run qbittorrent and nzbget through it with no problems.
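     If it helps, the network-creation step boils down to a single command run from the UnRAID terminal. The container name below is just my example, and the exact naming convention that dockerMan and Rebuild-DNDC expect is spelled out in that Prerequisite section, so verify it there first:

         # Create the custom Docker network once from the UnRAID CLI
         # (check the exact name against the Rebuild-DNDC readme);
         # it then shows up under "Custom" in the Network Type drop-down
         docker network create container:binhex-privoxyvpn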
  2. I had a similar request myself, although it seems a bit moot now as support for the --net=container:<container-name> parameter no longer seems to work under 6.8.x. You need to make a new Docker network from the command line, and then that will show up as a Custom network in the container settings. Instructions on doing that are in the readme for the Rebuild-DNDC container.
  3. 1. Yes, I can reach all the applications running "behind" the VPN container from the LAN. To make this work you have to remove the port configuration from the "client" container and add that port to the VPN container. For example, if you wanted to run NZBget through the VPN, you would remove port 6789 (leaving the NZBget container with no port configurations at all) and add port 6789 to the VPN container. On UnRAID 6.7.x you would then change the Network Type on the "client" container to None. On UnRAID 6.8.x this no longer works; you need to make a new Docker network from the UnRAID CLI and set the "client" container to use that. A rough CLI sketch of this follows below.
     2. If the VPN container turns off or is updated, all traffic to the "client" containers also stops. I can't say if that is because iptables is shut down or because the "client" containers have no port translations to the host machine through the Docker networking system, since all of the client application ports are handled through the VPN container. If the VPN container is updated, its container ID (which the clients use to bind to it as a network) changes, so the "clients" need to be rebuilt as well so they use the new ID. That is the purpose of the Rebuild-DNDC container. Since the "client" applications are unreachable, I'm not sure how I would test for leaks.
     3. I haven't tried bouncing the VPN from within the container console, but if the VPN container is rebooted (but not rebuilt, so the container ID doesn't change), all the "client" containers will need to be rebooted as well or they will be unreachable, even though the VPN container is up and should be handling the "client" applications' ports. I again can't say for sure if this is because of iptables or something with how internal Docker networking works.
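     To make point 1 concrete, here is a rough docker run equivalent of the port shuffle, using NZBget and my VPN container as examples. On UnRAID you would make the same changes in the container templates rather than on the command line, and your image names, ports, and required VPN settings may differ:

         # The VPN container publishes the client application's port (6789 for NZBget);
         # VPN credentials, NET_ADMIN capability, etc. are omitted here for brevity
         docker run -d --name binhex-privoxyvpn -p 6789:6789 binhex/arch-privoxyvpn

         # The client container publishes no ports at all and joins the VPN
         # container's network stack, so its traffic can only leave via the tunnel
         docker run -d --name nzbget --net=container:binhex-privoxyvpn linuxserver/nzbget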
  4. @binhex, this absolutely works on your container. I am currently using this setup successfully with no major issues. I wanted a VPN "gateway" for specific other containers, where all their traffic could only ever go through the VPN for safety. I can't say I looked that closely at your iptables rules, but I can tell you this method works with what you have set up. The readme I think @alturismo is referring to might be this, which is where I got started. It references a different OpenVPN client container, but the same setup works fine with yours. Things have changed a little in 6.8; I found the correct method for setting up the Docker network in the instructions for this container.

     I'm no Docker networking master, but as I understand it, the way it works is that you set up a bespoke Docker network shared between the VPN container and anything you want going through the VPN. You also have to remove any port translations from the "client" containers and add them to the "host" VPN container. This creates a sub-network bubble, where the VPN container acts as a router of sorts. I confirmed it was working when I was first testing it by attaching a basic desktop VNC container to the VPN container. With the VPN connected, whenever I opened a browser in that VNC desktop, without configuring any proxy settings or the like, the public IP was always the VPN endpoint I was using. I also ran some online privacy tests to make sure no traffic was leaking out of the VPN, and everything came back secure.

     There are some complications that come with this setup, like the fact that every time the VPN container is updated its container ID (which the other "client" containers use as a network connection) changes, so the "clients" have to be rebuilt as well. The second container I linked to above actually automates this process specifically for this purpose, so it seems there is a bit of demand for using VPN containers in this way.
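     If you want to repeat that public-IP check from the command line instead of a VNC desktop, something like this works, assuming the "client" container has curl available (the container name is just an example):

         # Should print the VPN endpoint's public IP, not your WAN IP
         docker exec nzbget curl -s https://ifconfig.me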
  5. I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc) which seems to do about what I am looking for, I guess it would be nice to have this kind of functionality built into the OS though.
  6. I'm glad I'm not the only one looking for this! Running specific containers through a "gateway" VPN container seems like a popular desire, and while Privoxy and SOCKS cover most things, they aren't as comprehensive as running all traffic from a container through a secure tunnel. I'm no developer so I can't speak to the actual possibilities, but I guess what I would like to see myself is an additional option like "Other container" under the "Network Type" drop-down on the container config page, with a sub-dropdown to select another container that already exists. This should cut down on confusion, and it could even be hidden unless you have Advanced View enabled. That, combined with more transparent handling of the container name / ID translation and update issue we've both noted, seems like it would cover the use case pretty well. Even having to put containers into logical groups, where the "client" containers are automatically restarted (or whatever else needs to happen) when the "host" container changes ID, would be fine with me.
  7. I've been working on a solution for routing specific Docker container traffic through a VPN connection for applications that don't have proxy support. This can be done using the --net=container:<containername> parameter (demonstrated here: https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ ). This works, albeit in a somewhat ungainly way. My request is specific to how UnRAID appears to handle this kind of setup.

     On the container doing the binding, under the Docker tab of UnRAID, the Network column (which normally shows bridge / host etc.) will show "container=<container ID of the container that is bound to>". This is a bit awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag. Secondly, since the container doing the binding seems to use Docker IDs rather than container names, every time the bound-to (VPN) container updates, its ID changes, so the binding container needs to be force-updated, or have a non-change made in its configuration (such as adding and then removing a space in a field) to enable the "Apply" button, at which point that container will "re-translate" the name of the bound-to container into its new ID. If I instead try to restart or stop-then-start the container, it fails, since the bound-to container ID no longer exists.

     It's possible that what's happening is due to Docker and not UnRAID; if so, I don't expect any real kind of workaround, but I wanted to try anyway. It just feels like what I'm seeing is at least partially due to the way UnRAID parses and handles Docker information and requests and therefore could be fixed.
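     You can see the name-to-ID translation that causes this directly from the CLI; the container names here are just examples from my own setup, and on my system the stored value matches what the Docker tab displays:

         # The client is created against the VPN container by *name*...
         docker run -d --name nzbget --net=container:binhex-privoxyvpn linuxserver/nzbget

         # ...but Docker stores the resolved *ID*, which is what the UnRAID
         # Docker tab ends up showing and what goes stale after an update
         docker inspect -f '{{.HostConfig.NetworkMode}}' nzbget
         # -> container:<long container ID>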
  8. I don't remember for sure, but I feel like at some point in the past I changed the repository for that container without fully rebuilding the config, and it's possible there was a case change that I didn't even think about at the time. That could explain it, I guess. And it does seem like dockerMan is getting confused between the files: when I make changes to the container it writes them to the lowercase-named file, but when I update the container it reads settings from the uppercase-named file. I'm guessing I can resolve it myself by either removing the uppercase-named file and renaming the lowercase one so there is only a "my-sonarr.xml" left, or by removing all the templates and starting from scratch. I just wanted to bring the issue up in case it is causing other people different issues, or ones I have and don't notice. It's not like I edit host paths on containers every day, but it was annoying to have to remove an empty share that was created every time I updated this container.
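     For reference, the manual cleanup I have in mind would be something along these lines, assuming the user templates live in the usual dockerMan location on the flash drive (worth double-checking the filenames before deleting anything):

         # See which case-variant template files exist
         ls -la /boot/config/plugins/dockerMan/templates-user/ | grep -i sonarr

         # Drop the uppercase-named file first, then rename the other copy
         # so only a single "my-sonarr.xml" is left
         rm "/boot/config/plugins/dockerMan/templates-user/my-Sonarr.xml"
         mv "/boot/config/plugins/dockerMan/templates-user/my-sonarr (1).xml" \
            "/boot/config/plugins/dockerMan/templates-user/my-sonarr.xml"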
  9. I did the deletion test already, although I renamed the file "my-Sonarr.xml" to replace the one with the incorrect data. When I tried adding a new Host Path a *new* "my-sonarr (1).xml" file was created and I ended up with the same inability to alter the container, just now with "correct" data. While it fixes this specific issue it will come up again if I ever need to change the Host Paths in the future, so not really a fix. The container still seems to only write to "my-sonarr (1).xml" and read from "my-Sonarr.xml", regardless of the data in the files. I have not yet tried just deleting the container and the associated templates and starting again from scratch because I thought it might be more useful for the devs to be able to track down the actual issue in case it is causing other underlying problems that I just haven't noticed yet. @Squid did ask about the "my-sonarr (1).xml" file in the first place, so I'm guessing I'm not the first person this has happened to.
  10. Ok, so I did a bit more experimentation and here is what I found: The container seems to only read from "my-Sonarr.xml" and only write to "my-sonarr (1).xml". If I delete "my-Sonarr.xml" and go to edit the container, all the fields are empty. If I delete "my-sonarr (1).xml" and go to edit the container, all the settings (including the Host Path I am trying to remove) are there. If I remove the host path and apply the changes to the container again then the "my-sonarr (1).xml" file is recreated but the Host Path is still shown under the Edit Container page. I haven't yet tried removing the container completely and rebuilding it, and at this point I'm not convinced that would even work.
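     If it helps with debugging, one quick way to confirm which file dockerMan actually touches would be to checksum both templates before and after applying a change (the path is the standard user-template location; adjust if yours differs):

         # Run before and after hitting Apply on the Edit Container page;
         # whichever checksum changes is the file dockerMan wrote to
         md5sum /boot/config/plugins/dockerMan/templates-user/my-*onarr*.xml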
  11. Yup, there is. I didn't notice it before because it sorts to the end of the directory list. Looking at the settings in there, they appear to be "correct", without the Host Path I've been trying to remove. Would the best action at this point be to remove the secondary file (my-sonarr (1).xml), or to replace the "incorrect" one with the "correct" one?
  12. This could be possible (at least from my uneducated standpoint); this container has existed through a bunch of UnRAID updates, basically since Docker support was added. If it comes to it I can probably remove the container completely and rebuild it, but it seems like it should work properly anyway. I have tried adding and removing Host Paths on other containers and it works properly, so it seems stuck specifically on this one. The permissions on the xml file for this container are also the same as on all the others, so it's not like it became read-only at some point.