IceNine451

Everything posted by IceNine451

  1. I'm also having this issue just recently. Files and folders downloaded by qBittorrent are using the wrong mask now for some reason and I don't see any way to set it in the app config, so I am guessing something happened at the container level.
  2. Have you tried @SpaceInvaderOne's latest video method? I was under the impression that the "extra parameters" method didn't work beyond 6.7.x, but he appears to be making it work on 6.9.0b1. Otherwise your setup looks the same as mine and I don't see anything obvious that would cause yours to not work. What I would check next is the logs for the Jackett container, to make sure it is actually starting up properly; if it isn't starting for some reason, you would fail to reach it exactly the way you are describing (a quick sketch of that check follows this post).
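     A rough sketch of that log check from the UnRAID terminal; the container name "jackett" is an assumption, substitute whatever yours is actually called:

     ```sh
     # tail the most recent log lines from the Jackett container to confirm it starts up cleanly
     docker logs --tail 100 jackett
     ```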
  3. Can you also check your Docker settings? The Docker version changed at UnRAID 6.8, and your screenshots look like you might still be running an older version of Docker. Settings -> Docker will show you the version you are using; it can only be changed while the Docker service is disabled.
  4. If you are on 6.8.x the "Extra Parameters" method of Docker networking won't work anymore. You need to create a custom Docker network from the UnRAID command line that is named the same as your VPN container (the command is sketched below this post), then remove the extra parameters from your Jackett etc. containers and set the network type to Custom: container:name-of-your-custom-network. Unless you intentionally changed the version of Docker you are using in the Docker settings, the default for UnRAID 6.8.3 should be Docker 19.03.5.
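     As a rough sketch, the command from the UnRAID terminal looks like the following (this matches the Rebuild-DNDC prerequisite referenced elsewhere in this thread; the container name "binhex-privoxyvpn" is just an example, substitute your own VPN container's name):

     ```sh
     # create a Docker network named in the "container:<vpn-container-name>" form so it
     # shows up as "Custom: container:binhex-privoxyvpn" in the container settings
     docker network create container:binhex-privoxyvpn
     ```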
  5. I feel like I ran into this same issue when I was first getting this running, but I can't remember for sure. First note: the "--net=container:vpn" parameter definitely doesn't work on 6.8.3. It does sound like you have the custom network set up properly if you can curl the VPN IP from the console of your client containers. One thing I wanted to make sure of was that the VPN container is still set to Bridge for the networking mode, not the custom network you created; only the "client" containers need to be set to the custom VPN network. On the main Docker tab for UnRAID the client containers should show nothing in the Port Mappings column (a quick way to check is sketched below this post). Here is a screenshot of my setup with the Binhex VPN container and three client containers; yours should look similar if you are set up correctly. Hopefully this helps!
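     A quick way to sanity-check this from the UnRAID terminal; the container names "nzbget" and "binhex-privoxyvpn" are placeholders for one of your client containers and your VPN container:

     ```sh
     # a "client" container should publish no ports of its own, so this should print nothing
     docker port nzbget
     # the VPN container should still be on the default bridge network
     docker inspect -f '{{.HostConfig.NetworkMode}}' binhex-privoxyvpn
     ```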
  6. I don't know of a way to use a proxy with Plex either, but you can do what I have done with some of my containers and run *all* of the Plex traffic through a VPN container. Since you won't be doing remote access I don't see any issues with this myself, but keep in mind I haven't actually tried Plex specifically. The method for doing this is a bit different between UnRAID 6.7.x and 6.8.x; it works best on the latest version of UnRAID (6.8.3 as of this post) because some quality-of-life fixes have been added to the Docker code. I figured out how to do this through these two posts (https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ and https://hub.docker.com/r/eafxx/rebuild-dndc) but I will summarize here since neither completely covers what you need to do.
     1) Have a VPN container like Binhex's up and running.
     2) Create a dedicated "VPN network" for Plex and anything else you want to run through the VPN. Open the UnRAID terminal or connect via SSH, then run the command docker network create container:master_container_name, where "master_container_name" is the name of your VPN container, so "binhex-privoxyvpn" in my case. This name should be all lowercase; if it isn't, change it before creating the new network.
     3) Edit your Plex container and change the network type to "Custom: container:binhex-privoxyvpn" if you are on UnRAID 6.8.x. If you are on 6.7.x, change the network type to "Custom" and add "--net=container:binhex-privoxyvpn" to the Extra Parameters box.
     4) Remove all the Host Port settings from the Plex container. By default on my setup those are TCP ports 3005, 32400 and 32469 and UDP ports 1900, 32410, 32412, 32413 and 32414.
     5) Edit your VPN container and add the required Plex ports to it. You could probably get away with just TCP ports 3005 and 32400 and UDP port 1900, but it is probably safer to add them all again. Leave the VPN container's network type as it is now, probably Bridge.
     6) Do a forced update (seen with Advanced View turned on) on the VPN container first and then the Plex container.
     You should still be able to reach your Plex container's web UI now, with the VPN container acting as a "gateway", and all external traffic will go through the VPN (a quick check is sketched below this post). There are some things to remember with this kind of setup: if the VPN container goes down you will be unable to reach Plex at all even if it is running, and if the VPN container is updated the Plex container will lose connectivity until it is also updated. There is code in UnRAID 6.8.3 to do this update automatically when you load the Docker tab in the UnRAID UI. Hopefully all that is clear, let me know if you have any questions!
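     A minimal sketch of checking the result, assuming 192.168.1.10 stands in for your UnRAID server's LAN IP:

     ```sh
     # the Plex web UI should still answer on the server's LAN address, because port 32400
     # is now published by the VPN container acting as the gateway
     curl -I http://192.168.1.10:32400/web
     ```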
  7. This setup still works in 6.8.x; the method is a little different, but I made the transition myself with no major issues. You need to make a custom Docker network that has the same name as the VPN container. You can find direct instructions in the Prerequisite section of the instructions for the Rebuild-DNDC container. That container also takes care of the networking breaking when the VPN container updates; it will automatically rebuild any "client" containers for you. I use the binhex-privoxyvpn container as the tunnel and run qbittorrent and nzbget through it with no problem.
  8. I had a similar request myself, although it seems a bit moot now as the --net=container:<container-name> parameter no longer seems to work under 6.8.x. You need to make a new Docker network from the command line, and that will then show up as a Custom network in the container settings. Instructions on doing that are in the readme for the Rebuild-DNDC container.
  9. 1. Yes, I can reach all the applications running "behind" the VPN container from the LAN. To make this work you have to remove the port configuration from the "client" container and add that port to the VPN container. For example, if you wanted to run NZBget through the VPN, you would remove port 6789 (leaving the NZBget container with no port configurations at all) and add port 6789 to the VPN container. On UnRAID 6.7.x you would then change the Network Type on the "client" container to None; on UnRAID 6.8.x this no longer works, and you instead need to make a new Docker network from the UnRAID CLI and set the "client" container to use that.
     2. If the VPN container turns off or is updated, all traffic to the "client" containers also stops. I can't say if that is because iptables is shut down or because the "client" containers have no port translations to the host machine through the Docker networking system, since all of the client application ports are handled through the VPN container. If the VPN container is updated then its container ID (which the clients use to bind to it as a network) changes, so the "clients" need to be rebuilt as well so they use the new ID (comparing the IDs is sketched below this post); that is the purpose of the Rebuild-DNDC container. Since the "client" applications are unreachable, I'm not sure how I would test for leaks.
     3. I haven't tried bouncing the VPN from within the container console, but if the VPN container is rebooted (not rebuilt, so the container ID doesn't change) all the "client" containers will need to be rebooted as well or they will be unreachable, even though the VPN container is up and should be handling the "client" applications' ports. I again can't say for sure whether this is because of iptables or something with how internal Docker networking works.
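     One way to see the binding from the UnRAID terminal (the names "nzbget" and "binhex-privoxyvpn" are placeholders for a client container and the VPN container):

     ```sh
     # the network reference the client was created with (prints something like "container:...")
     docker inspect -f '{{.HostConfig.NetworkMode}}' nzbget
     # the VPN container's current full ID; if a client is still bound to an old ID it needs a rebuild
     docker inspect -f '{{.Id}}' binhex-privoxyvpn
     ```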
  10. @binhex, this absolutely works on your container. I am currently using this setup successfully with no major issues. I wanted a VPN "gateway" for specific other containers, where all their traffic could only ever go through the VPN for safety. I can't say I looked that closely at your iptables rules, but I can tell you this method works with what you have set up. The readme I think @alturismo is referring to might be this, which is where I got started. It references a different OpenVPN client container, but the same setup works fine with yours. Things have changed a little in 6.8; I found the correct method for setting up the Docker network in the instructions for this container. I'm no Docker networking master, but as I understand it you set up a bespoke Docker network shared between the VPN container and anything you want going through the VPN. You also have to remove any port translations from the "client" containers and add them to the "host" VPN container. This creates a sub-network bubble, where the VPN container acts as a router of sorts. I confirmed it was working when I was first testing it by attaching a basic desktop VNC container to the VPN container: with the VPN connected, when I opened a browser in that desktop VNC container, without configuring any proxy settings etc., the public IP was always the VPN endpoint I was using (a rough equivalent of that check is sketched below this post). I also ran some online privacy tests to make sure no traffic was leaking out of the VPN and everything came back secure. There are some complications that come with this setup, like every time the VPN container is updated its container ID (which the other "client" containers use as a network connection) changes, so the "clients" have to be rebuilt as well. The second container I linked to above actually automates this process specifically for this purpose, so it seems there is a bit of demand for using VPN containers in this way.
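     A rough equivalent of that public-IP check without the VNC desktop, assuming the client container is named "nzbget" and its image ships curl:

     ```sh
     # the public IP reported from inside a "client" container should be the VPN endpoint,
     # never your ISP-assigned address
     docker exec -it nzbget curl -s https://ifconfig.me
     ```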
  11. I also found this (https://hub.docker.com/r/eafxx/rebuild-dndc), which seems to do about what I am looking for, though I guess it would be nice to have this kind of functionality built into the OS.
  12. I'm glad I'm not the only one looking for this! Running specific containers through a "gateway" VPN container seems like a popular desire, and while Privoxy and SOCKS cover most things, they aren't as comprehensive as running all traffic from a container through a secure tunnel. I'm no developer so I can't speak to the actual possibilities, but what I would like to see myself is an additional option like "Other container" under the "Network Type" drop-down on the container config page, with a sub-dropdown to select another container that already exists. This should cut down on confusion, and it could even be hidden unless you have Advanced View enabled. That, combined with more transparent handling of the container name / ID translation and update issue we've both noted, seems like it would cover the use case pretty well. Even something where containers had to be put in logical groups, and the "client" containers were automatically restarted (or whatever needs to happen) when the "host" container changes ID, would be fine with me.
  13. I've been working on a solution for routing specific Docker container traffic through a VPN connection for applications that don't have proxy support. This can be done using the --net=container:<containername> parameter (demonstrated here: https://jordanelver.co.uk/blog/2019/06/03/routing-docker-traffic-through-a-vpn-connection/ ). This works, albeit a bit ungainly. My request is specific to how UnRAID appears to handle this kind of setup. For the container doing the binding, on the Docker tab of UnRAID, the Network column (which normally shows bridge / host etc.) shows "container=<container ID of the container that is bound to>". This is awkward in that container IDs aren't nearly as easy to read as container names, plus it makes the UI unnecessarily wide due to the long ID tag (a way to translate the ID back to a name is sketched below this post). Secondly, since the container doing the binding seems to use Docker IDs rather than container names, every time the bound-to (VPN) container is updated its ID changes, so the binding container needs to be force-updated or have a non-change made in its configuration (such as adding and then removing a space in a field) to enable the "Apply" button, at which point that container will "re-translate" the name to the ID of the bound-to container. If I try to restart or stop-then-start the container it fails, since the bound-to container ID no longer exists. It's possible what's happening is due to Docker and not UnRAID; if so I don't expect any real workaround, but I wanted to try anyway. It just feels like what I'm seeing is at least partially due to the way UnRAID parses and handles Docker information and requests, and therefore could be fixed.
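     As a stopgap for reading those long IDs, something like this works from the UnRAID terminal (the ID in the filter is a placeholder copied from the Network column):

     ```sh
     # translate the container ID shown in the Network column back to a container name
     docker ps --no-trunc --filter "id=<id-from-network-column>" --format '{{.Names}}'
     ```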
  14. I don't remember for sure, but I feel like at some point in the past I changed the repository for that container without fully rebuilding the config, and it's possible there was a case change that I didn't even think about at the time. That could explain it, I guess. It does seem like dockerMan is getting confused between the files, where when I make changes to the container it writes them to the lowercase-named file, but when I update the container it reads settings from the uppercase-named file. I'm guessing I can resolve it myself by either removing the uppercase-named file and renaming the lowercase one so there is only a "my-sonarr.xml" left, or by removing all the templates and starting from scratch. I just wanted to bring the issue up in case it is causing other people different issues, or ones I have and don't notice. It's not like I edit host paths on containers every day, but it was annoying to have to remove an empty share that was created every time I updated this container.
  15. I did the deletion test already, although I renamed the file "my-Sonarr.xml" to replace the one with the incorrect data. When I tried adding a new Host Path, a *new* "my-sonarr (1).xml" file was created and I ended up with the same inability to alter the container, just now with "correct" data. While it fixes this specific issue, it will come up again if I ever need to change the Host Paths in the future, so it's not really a fix. The container still seems to only write to "my-sonarr (1).xml" and read from "my-Sonarr.xml", regardless of the data in the files. I have not yet tried just deleting the container and the associated templates and starting again from scratch, because I thought it might be more useful for the devs to be able to track down the actual issue in case it is causing other underlying problems that I just haven't noticed yet. @Squid did ask about the "my-sonarr (1).xml" file in the first place, so I'm guessing I'm not the first person this has happened to.
  16. Ok, so I did a bit more experimentation and here is what I found: The container seems to only read from "my-Sonarr.xml" and only write to "my-sonarr (1).xml". If I delete "my-Sonarr.xml" and go to edit the container, all the fields are empty. If I delete "my-sonarr (1).xml" and go to edit the container, all the settings (including the Host Path I am trying to remove) are there. If I remove the host path and apply the changes to the container again then the "my-sonarr (1).xml" file is recreated but the Host Path is still shown under the Edit Container page. I haven't yet tried removing the container completely and rebuilding it, and at this point I'm not convinced that would even work.
  17. Yup, there is. I didn't notice it before because it sorts to the end of the directory list. Looking at the settings in there, they appear to be "correct", without the Host Path I've been trying to remove. Would the best action at this point be to remove the secondary file (my-sonarr (1).xml) or to replace the "incorrect" one with the "correct" one? (A quick diff of the two files is sketched below this post.)
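     A quick way to compare the two templates before deciding which to keep; the directory is the usual dockerMan user-template location on the UnRAID flash drive, and the filenames are the ones from this thread:

     ```sh
     # diff the two template copies dockerMan appears to be juggling
     cd /boot/config/plugins/dockerMan/templates-user/
     diff "my-Sonarr.xml" "my-sonarr (1).xml"
     ```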
  18. This could be possible (at least from my uneducated standpoint); this container has existed through a bunch of UnRAID updates, basically since Docker support was added. If it comes to it I can probably remove the container completely and rebuild it, but it seems like this should work properly anyway. I have tried adding and removing Host Paths on other containers and it works properly, so it seems stuck specifically on this one. The permissions on the xml file for this container are also the same as for all the others, so it's not like it became read-only at some point.
  19. Attached. I do see the share I'm trying to remove (/mnt/user/Alex TV), so I'm assuming it's not being removed from this xml file when I click the Remove button on the Edit Container page, and is therefore being recreated when I upgrade the container. I'm thinking the upgrade process isn't pulling that info from outside, since I added the host path myself. Should I be able to edit this xml file directly to remove the host path and keep the empty share from getting recreated every time? Also, I just tried something else that yielded interesting results. I tried adding another Host Path to that container. What happened was the container started again with the path I've been trying to remove, the new path I added, and all the paths that should be there. All the paths show normally in the Volume Mappings column on the Docker Containers page, but when I go to the Edit Container page for that container the new path isn't shown at all, and it also wasn't added to the XML file. When I removed the path I've been trying to get rid of, both the one I removed and the one I had just added (which wasn't showing under the Edit Container page) disappeared from the Volume Mappings column on the Docker Containers page. Very weird. my-Sonarr.xml
  20. You are on the right track, sorry if my initial description was unclear. I am following the correct procedure to remove a host path from a container (open the Edit Container page, click the Remove button next to the host path, click the Apply button at the bottom of the page), and while the host path will be removed from the container and no longer show under the Volume Mappings column on the Docker Containers page, the host path remains on the Edit Container page and will recreate the deleted share every time this container is updated. I actually updated the container today, so here is everything I am seeing on my end and what I have done / am doing:
     Done in the past:
     - Removed all files from a given share.
     - Deleted the empty share from the Share Settings page for that share, via the Delete Share checkbox.
     - Removed the Host Path mapping for that share from the Docker container on the Edit Container page. At this point the Host Path doesn't show in the Volume Mappings column on the Docker Containers page but continues to show on the Edit Container page for that container. The share is still deleted.
     Today:
     - Updated the Docker container through the container update function on the Docker Containers page.
     - See the share that was deleted listed in the container launch command; the share also shows again under the Volume Mappings column on the Docker Containers page, the share I deleted has recreated itself on the Shares page, and I can see it again on the network.
     - Go to the Edit Container page for this container, click the Remove button next to the Host Path I want removed, and click the Apply button at the bottom of the page.
     - The empty share is no longer listed in the launch command for the container.
     - The Host Path no longer shows under the Volume Mappings column on the Docker Containers page.
     - The Host Path continues to show on the Edit Container page, even though it doesn't appear to currently be in use by the container.
     - Delete the empty share again.
     I can stop/start or restart the container and the share won't be recreated, but if I force update the container then the phantom host path is used again and the share recreates itself. My current problem is that it seems impossible to remove the host path from the Edit Container page, and that setting is used whenever the container is updated, causing an empty share to be created every time.
  21. I found this issue recently while on 6.5 and it has persisted through to 6.6.3, so I thought I would report it. It is more of an annoyance than a show stopper. I have a Docker container (Sonarr) that has several Host Paths configured. I have since emptied one of the shares that was mapped and removed the share itself, but I seem to be unable to remove the Host Path from the container. I can click the Remove button next to the Host Path on the container configuration page and the container will restart without using the share, and then I'm able to delete the empty share itself. Before removing the mapping I cannot delete the empty share because UnRAID says it is in use. The path is not shown under the Volume Mappings column in the Docker Containers list at this point, but if I go to the container properties again the path is still listed. If I update the container the path will be used again and the empty share is recreated, so then I have to remove the mapping and delete the empty share again. I'm not sure what is causing the issue, and I'm pretty sure I could fix it by rebuilding the container completely, but it seems like this should work the way I'm doing it, by removing the volume mapping from the existing container. Let me know if more details are needed. tower-diagnostics-20181022-1415.zip