
IceNine451


  1. Have you tried @SpaceInvaderOne's latest video method? I was under the impression that the "extra parameters" method didn't work beyond 6.7.x, but he appears to be making it work on 6.9.0b1. Otherwise your setup looks the same as mine, and I don't see anything obvious that would cause yours not to work. What I would check next is the logs for the Jackett container, to make sure it is actually starting up properly. If it isn't, that would explain why you can't reach it.
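A quick sketch of that log check from the UnRAID terminal (the container name "binhex-jackett" is an assumption; substitute whatever yours is called):

```shell
# Check whether the Jackett container actually started, from the UnRAID
# terminal or over SSH. "binhex-jackett" is an assumed container name.
CONTAINER="binhex-jackett"

# Guarded so the sketch is a no-op on a machine without Docker.
if command -v docker >/dev/null 2>&1; then
    docker ps -a --filter "name=${CONTAINER}"   # running, or exited with an error?
    docker logs --tail 50 "${CONTAINER}"        # startup errors show up here
fi

echo "checked ${CONTAINER}"
```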
  2. Can you also check your Docker settings? The Docker version changed at 6.8, and your screenshots look like you might still be running an older version. Under Settings -> Docker it will show the version you are using, which can only be changed while Docker is disabled.
  3. If you are on 6.8.x the "Extra Parameters" method of Docker networking won't work anymore. You need to create a custom Docker network from the UnRAID command line, named the same as your VPN container, then remove the extra parameters from your Jackett etc. containers and set their network type to Custom: container:name-of-your-vpn-container. Unless you intentionally changed the Docker version in the Docker settings, the default for UnRAID 6.8.3 should be Docker 19.03.05.
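A minimal sketch of that setup, assuming the VPN container is named "binhex-privoxyvpn" (substitute your own container name):

```shell
# The VPN container the custom network is named after (assumed name).
VPN_CONTAINER="binhex-privoxyvpn"

# The network is literally named "container:<vpn-container-name>", which
# is what makes it appear as "Custom: container:binhex-privoxyvpn" in the
# Network Type drop-down of the client containers.
NET_NAME="container:${VPN_CONTAINER}"

# Run from the UnRAID terminal; guarded so it is harmless without Docker.
if command -v docker >/dev/null 2>&1; then
    docker network create "$NET_NAME"
fi

echo "$NET_NAME"
```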
  4. I feel like I ran into this same issue when I was first getting this running, but I can't remember for sure. First note: "--net=container:vpn" definitely doesn't work on 6.8.3. It does sound like you have the custom network set up properly if you can curl the VPN IP from the console of your client containers. One thing I wanted to make sure of is that the VPN container is still set to Bridge for the networking mode, not the custom network you created; only the "client" containers need to be set to the custom VPN network. On the main Docker tab of UnRAID, the client containers should show nothing in the Port Mappings column. Here is a screenshot of my setup with the Binhex VPN container and three client containers; yours should look similar if you are set up correctly. Hopefully this helps!
  5. I don't know of a way to use a proxy with Plex either, but you can do what I have done with some of my containers and run *all* of the Plex traffic through a VPN container. Since you won't be doing remote access I don't see any issues with this myself, but keep in mind I haven't actually tried Plex specifically. The method is a bit different between UnRAID 6.7.x and 6.8.x; it works best on the latest version of UnRAID (6.8.3 as of this post) because they have added some quality-of-life fixes to the Docker code. I figured out how to do this through these two posts (https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ and https://hub.docker.com/r/eafxx/rebuild-dndc) but I will summarize here since neither completely covers what you need to do.
1) Have a VPN container like Binhex's up and running.
2) Create a dedicated "VPN network" for Plex and anything else you want to run through the VPN. Open the UnRAID terminal or connect via SSH, then run the command docker network create container:master_container_name where "master_container_name" is the name of your VPN container, so "binhex-privoxyvpn" in my case. This name should be all lowercase; if it isn't, change it before creating the new network.
3) Edit your Plex container and change the network type to "Custom: container:binhex-privoxyvpn" if you are on UnRAID 6.8.x. If you are on 6.7.x, change the network type to "Custom" and add "--net=container:binhex-privoxyvpn" to the Extra Parameters box.
4) Remove all the Host Port settings from the Plex container; by default on my setup those are TCP ports 3005, 32400 and 32469 and UDP ports 1900, 32410, 32412, 32413 and 32414.
5) Edit your VPN container and add the required Plex ports to it. You can probably get away with just TCP ports 3005 and 32400 and UDP port 1900 and have it work, but it is probably safer to add them all. Leave the VPN container's network type as it is now, probably Bridge.
6) Do a forced update (visible with Advanced View turned on) on the VPN container first and then the Plex container.
You should still be able to reach your Plex container's web UI now, with the VPN container acting as a "gateway". Now all external traffic will go through the VPN. There are some things to remember with this kind of setup: if the VPN container goes down you will be unable to reach Plex at all, even if it is running. Also, if the VPN container is updated, the Plex container will lose connectivity until it is also updated. There is code in UnRAID 6.8.3 to do this update automatically when you load the Docker tab in the UnRAID UI. Hopefully all that is clear, let me know if you have any questions!
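For reference, steps 4 and 5 amount to moving every published Plex port onto the VPN container while the Plex container publishes nothing. A rough sketch of the equivalent plain docker run commands (the UnRAID UI does this for you; the image names are illustrative assumptions):

```shell
# Ports removed from the Plex container in step 4, republished on the
# VPN container in step 5.
TCP_PORTS="3005 32400 32469"
UDP_PORTS="1900 32410 32412 32413 32414"

PORT_ARGS=""
for p in $TCP_PORTS; do PORT_ARGS="$PORT_ARGS -p $p:$p"; done
for p in $UDP_PORTS; do PORT_ARGS="$PORT_ARGS -p $p:$p/udp"; done

# The VPN container publishes everything; Plex publishes nothing and just
# joins the VPN container's network stack. Image names here
# ("binhex/arch-privoxyvpn", "plexinc/pms-docker") are assumptions.
echo "docker run -d --name binhex-privoxyvpn$PORT_ARGS binhex/arch-privoxyvpn"
echo "docker run -d --name plex --net=container:binhex-privoxyvpn plexinc/pms-docker"
```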
  6. This setup still works in 6.8.x; the method is a little different, but I made the transition myself with no major issues. You need to make a custom Docker network that has the same name as the VPN container. You can find direct instructions in the Prerequisite section of the Rebuild-DNDC container's instructions. That container also takes care of the networking breaking when the VPN container updates; it will automatically rebuild any "client" containers for you. I use the binhex-privoxyvpn container as the tunnel and run qbittorrent and nzbget through it with no problem.
  7. I had a similar request myself, although it seems a bit moot now, as the --net=container:<container-name> parameter no longer seems to work under 6.8.x. You need to make a new Docker network from the command line, and that will then show up as a Custom network in the container settings. Instructions for doing that are in the readme for the Rebuild-DNDC container.
  8. 1. Yes, I can reach all the applications running "behind" the VPN container from the LAN. To make this work you have to remove the port configuration from the "client" container and add that port to the VPN container. For example, if you wanted to run NZBget through the VPN, you would remove port 6789 (leaving the NZBget container with no port configurations at all) and add port 6789 to the VPN container. On UnRAID 6.7.x you would then change the Network Type on the "client" container to None. On UnRAID 6.8.x this no longer works; you need to make a new Docker network from the UnRAID CLI and set the "client" container to use that.
2. If the VPN container turns off or is updated, all traffic to the "client" containers also stops. I can't say whether that is because iptables is shut down or because the "client" containers have no port translations to the host machine through the Docker networking system, since all of the client application ports are handled by the VPN container. If the VPN container is updated, its container ID (which the clients use to bind to it as a network) changes, so the "clients" need to be rebuilt as well so they use the new ID. That is the purpose of the Rebuild-DNDC container. Since the "client" applications are unreachable, I'm not sure how I would test for leaks.
3. I haven't tried bouncing the VPN from within the container console, but if the VPN container is rebooted (not rebuilt, so the container ID doesn't change) all the "client" containers will need to be rebooted as well or they will be unreachable, even though the VPN container is up and should be handling the "client" application ports. Again, I can't say for sure whether this is because of iptables or something in how internal Docker networking works.
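To make point 1 concrete, a sketch with assumed names: the NZBget container publishes no ports at all, and the VPN container publishes 6789 in its place (image names are illustrative, and on UnRAID the UI applies these settings for you):

```shell
# NZBget's web UI port moves from the client container to the VPN container.
CLIENT_PORT=6789
VPN_CONTAINER="binhex-privoxyvpn"   # assumed VPN container name

# On 6.8.x the client joins the custom "container:" network; on 6.7.x it
# would instead use Network Type "None" plus the extra parameter.
echo "VPN:    docker run -d --name $VPN_CONTAINER -p $CLIENT_PORT:$CLIENT_PORT binhex/arch-privoxyvpn"
echo "Client: docker run -d --name nzbget --net=container:$VPN_CONTAINER binhex/arch-nzbget"
```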
  9. @binhex, this absolutely works on your container. I am currently using this setup successfully with no major issues. I wanted a VPN "gateway" for specific other containers, where all their traffic could only ever go through the VPN for safety. I can't say I looked that closely at your iptables rules, but I can tell you this method works with what you have set up. The readme I think @alturismo is referring to might be this, which is where I got started. It references a different OpenVPN client container, but the same setup works fine with yours. Things have changed a little in 6.8; I found the correct method for setting up the Docker network in the instructions for this container. I'm no Docker networking master, but as I understand it, you set up a bespoke Docker network shared between the VPN container and anything you want going through the VPN. You also have to remove any port translations from the "client" containers and add them to the "host" VPN container. This creates a sub-network bubble, where the VPN container acts as a router of sorts. I confirmed it was working when I was first testing it by attaching a basic desktop VNC container to the VPN container. With the VPN connected, when I opened a browser in the desktop VNC container, without configuring any proxy settings, the public IP was always the VPN endpoint I was using. I also ran some online privacy tests to make sure no traffic was leaking out of the VPN, and everything came back secure. There are some complications that come with this setup: every time the VPN container is updated, its container ID (which the other "client" containers use as a network connection) changes, so the "clients" have to be rebuilt as well. The second container I linked to above actually automates this process specifically for this purpose, so it seems there is a bit of demand for using VPN containers in this way.
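That public-IP check can also be done without a VNC desktop, straight from a client container's console. A sketch, where "nzbget" is an assumed client container name and ifconfig.io is just one of several public-IP lookup services:

```shell
# Compare the public IP seen inside a "client" container with the host's.
# If the gateway routing works, the client should report the VPN endpoint
# while the host reports your real WAN IP.
# Guarded so the sketch is a no-op without Docker / network access.
if command -v docker >/dev/null 2>&1; then
    docker exec nzbget curl -s ifconfig.io   # client: should be the VPN endpoint
    curl -s ifconfig.io                      # host: your real WAN IP
fi
echo "leak check sketch done"
```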
  10. I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc) which seems to do about what I am looking for, I guess it would be nice to have this kind of functionality built into the OS though.
  11. I'm glad I'm not the only one looking for this! Running specific containers through a "gateway" VPN container seems like a popular desire, and while Privoxy and SOCKS cover most things, it isn't as comprehensive as running all traffic from a container through a secure tunnel. I'm no developer so I can't speak to the actual possibilities, but what I would like to see myself is an additional option like "Other container" under the "Network Type" drop-down on the container config page, with a sub-drop-down to select another container that already exists. This should cut down on confusion, and it could even be hidden unless Advanced View is enabled. That, combined with more transparent handling of the container name/ID translation and update issue we've both noted, seems like it would cover the use case pretty well. Even if containers had to be put in logical groups, where the "client" containers are automatically restarted (or whatever needs to happen) when the "host" container changes ID, that would be fine with me.
  12. I've been working on a solution for routing specific Docker container traffic through a VPN connection, for applications that don't have proxy support. This can be done using the --net=container:<containername> parameter (demonstrated here: https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ ). This works, albeit a bit ungainly. My request is specific to how UnRAID appears to handle this kind of setup. For the container doing the binding, under the Docker tab of UnRAID, the Network column (which normally shows bridge / host etc.) will show "container=<container ID of the container that is bound to>". This is awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag. Secondly, since the container doing the binding seems to use Docker IDs rather than container names, every time the bound-to (VPN) container updates, the ID changes, so the binding container needs to be force-updated, or have a non-change made to its configuration (such as adding and then removing a space in a field) to enable the "Apply" button, at which point that container will "re-translate" the name to the ID of the bound-to container. If I try to restart or stop-then-start the container, it fails, since the bound-to container ID no longer exists. It's possible what's happening is due to Docker and not UnRAID; if so, I don't expect any real kind of workaround, but wanted to try anyway. It just feels like what I'm seeing is at least partially due to the way UnRAID parses and handles Docker information and requests, and therefore could be fixed.
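You can see the ID-vs-name problem directly with docker inspect: the binding is stored in HostConfig.NetworkMode as "container:<64-char ID>", not as a name. A sketch, where "jackett" is an assumed client container name and the IDs are fakes for illustration:

```shell
# On a live system this prints something like "container:3f9a...<64 hex chars>",
# i.e. the stored binding uses the VPN container's ID, not its name.
# Guarded so the sketch is harmless without Docker.
if command -v docker >/dev/null 2>&1; then
    docker inspect --format '{{.HostConfig.NetworkMode}}' jackett
fi

# Simulated illustration of why a plain restart fails after the VPN updates:
OLD_ID="3f9ab2c1d4e5"   # ID baked into the client at creation time (fake)
NEW_ID="7c0d1e2f3a4b"   # ID after the VPN container is rebuilt (fake)
[ "$OLD_ID" = "$NEW_ID" ] && echo "restart works" || echo "restart fails: stale ID"
```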
  13. I don't remember for sure, but I feel like at some point in the past I changed the repository for that container without fully rebuilding the config, and it's possible there was a case change that I didn't even think about at the time. That could explain it, I guess. It does seem like dockerMan is getting confused between the files: when I make changes to the container it writes them to the lowercase-named file, but when I update the container it reads settings from the uppercase-named file. I'm guessing I can resolve it myself by either removing the uppercase-named file and renaming the lowercase one so there is only a "my-sonarr.xml" left, or by removing all the templates and starting from scratch. I just wanted to bring the issue up in case it is causing other people different issues, or ones I have and haven't noticed. It's not like I edit host paths on containers every day, but it was annoying to have to remove an empty share that was created every time I updated this container.