  1. Unfortunately Unraid didn't send me an email notification about this. It looks like SABnzbd changed how it is compiled, so I'll have to look into this. I'll let you know once it's updated. Edit: It's updated
  2. This error is caused by the 'WEBUI_PASSWORD' environment variable. Could you verify that your WEBUI_PASSWORD environment variable is set and contains no unusual special characters that might cause this issue? I'll also make the error message clearer, so there won't be any confusion about whether the VPN password or the WEBUI password is at fault.
  4. I see they offer a free 8GB account; I'll create one and look into how to configure it. You are right about the OC_SERVER section in the README, I'll update that later. Once I've figured it out, I'll mention you in a new reply.
  5. I believe that is correct indeed, for OpenVPN at least. I've also played around with that in the past. However, I don't think this would work if you use WireGuard. And since WireGuard and OpenVPN both need different settings (the --device /dev/net/tun, or something else for WireGuard), privileged mode is a universal 'fix'. That is, unless running the container with multiple --device parameters causes no conflict; but I wouldn't know which device WireGuard would use.
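As a hedged sketch of the alternative discussed above (the container name, image name, and capability flag are assumptions, not taken from the actual template), the two approaches could look like:

```shell
# Alternative to privileged mode, for OpenVPN only (sketch; not
# guaranteed to cover WireGuard): pass the TUN device explicitly,
# plus the capability OpenVPN needs to configure networking.
docker run -d \
  --name passthroughvpn \
  --device /dev/net/tun \
  --cap-add NET_ADMIN \
  dyonr/passthroughvpn

# The universal 'fix' described above: privileged mode, which works
# for both OpenVPN and WireGuard.
docker run -d \
  --name passthroughvpn \
  --privileged \
  dyonr/passthroughvpn
```

The two commands are alternatives; only one of the containers would be running at a time.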
  6. Sorry for the late reply; the Unraid forums do not send me emails about replies to posts. Done. Update the container and add a new environment variable named 'INSTALL_PYTHON3' set to 'yes'.
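For anyone running the container outside of the Unraid template, the step above might look like this on the command line (the variable name comes from the post; the container and image names are assumptions):

```shell
# Recreate the container with the INSTALL_PYTHON3 variable from the
# post above set to 'yes' (sketch; other variables omitted).
docker run -d \
  --name passthroughvpn \
  --privileged \
  -e INSTALL_PYTHON3=yes \
  dyonr/passthroughvpn
```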
  7. That is quite odd. I've made a note for myself to look at this later this week. I'll do some testing to see if I can find a solution, or find out what's going wrong.
  8. In Unraid, you should see something like this in the Docker tab:

+----------------+--------------------------+-------------------------------------------+
| Application    | Network                  | Port Mappings                             |
+================+==========================+===========================================+
| Plex           | container:passthroughvpn | :32400/TCP <==> :32400                    |
+----------------+--------------------------+-------------------------------------------+
| passthroughvpn | bridge                   | 172.17.0.x:32400 <==>                     |
+----------------+--------------------------+-------------------------------------------+

(172.17.0.x and would obviously be your own IPs.) This means that in order to access Plex, you must go to http(s):// The you see in Plex is most likely the same 172.17.0.x IP the passthrough container has. How do you try to access your Plex server right now? You should use the 2nd IP shown at the port mappings (for me that would be
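The table above describes Plex running in the network namespace of the passthroughvpn container. As a sketch of how that wiring looks outside of Unraid's template (the Plex image name is an assumption):

```shell
# Plex publishes no ports of its own; it shares the network stack of
# the passthroughvpn container, which is the one mapping port 32400
# on the bridge network. Note: -p cannot be combined with
# --network container:<name>, so the port mapping lives on the
# passthrough container instead.
docker run -d \
  --name plex \
  --network container:passthroughvpn \
  plexinc/pms-docker
```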
  9. > my example will test each of my containers
Ah, yeah, I see! That's quite nice then! 👍 I believe all containers use the IP of the passthrough container. NZBGet, Sonarr and Radarr would all share the same IP. If you pass NZBGet through the VPN, then in Sonarr and Radarr you would need to set the IP of the download client to either or the IP of the passthrough container, which would be the 172-address that you can see on the Docker dashboard in Unraid. If NZBGet and Sonarr/Radarr are passed through the passthrough container, they should still be able to communicate with your actual local network, since the LAN_NETWORK variable of the passthrough container is 'responsible' for that. I hope it makes sense 😉
  10. @sonic6 I've made a script that will check every 10 seconds whether the `passthroughvpn` container has restarted. If it did restart, it will restart all containers that are routed through it. To install this script:
In Unraid, go to the Apps section and install "CA User Scripts" from Squid.
Open the terminal in Unraid and run the following 3 commands:
mkdir -p /boot/config/plugins/user.scripts/scripts/passthrough_restart
echo 'This script will check if the passthroughvpn container has restarted and restart the passed through containers' > /boot/config/plugins/user.scripts/scripts/passthrough_restart/description
wget -q https://raw.githubusercontent.com/DyonR/docker-passthroughvpn/master/restart-passed-through-containers.sh -O /boot/config/plugins/user.scripts/scripts/passthrough_restart/script
In Unraid, go to Settings -> (User Utilities at the bottom) -> User Scripts. Here you will see a script called 'passthrough_restart'. Set the schedule to 'At Startup of Array' and press Apply. Select 'Run In Background' to start the script immediately. I hope this helps 😁
You actually shouldn't run random scripts on your server from a stranger online, so you can read the source of the script here: https://github.com/DyonR/docker-passthroughvpn/blob/master/restart-passed-through-containers.sh
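A watcher like the one described above could be sketched as follows. This is NOT the actual restart-passed-through-containers.sh from the repository, just a simplified illustration of the idea (poll the container's StartedAt timestamp, and restart dependents when it changes):

```shell
#!/bin/sh
# Simplified sketch of a passthroughvpn restart watcher.
VPN_CONTAINER="passthroughvpn"
INTERVAL=10

# Succeeds (exit 0) when the StartedAt timestamp changed, i.e. the
# container was (re)started between the two samples.
has_restarted() {
  # $1 = previously seen StartedAt, $2 = current StartedAt
  [ -n "$2" ] && [ "$1" != "$2" ]
}

monitor() {
  # NetworkMode of a dependent is stored as "container:<full id>",
  # so resolve the VPN container's id once up front.
  vpn_id=$(docker inspect -f '{{.Id}}' "$VPN_CONTAINER")
  last=$(docker inspect -f '{{.State.StartedAt}}' "$VPN_CONTAINER")
  while sleep "$INTERVAL"; do
    cur=$(docker inspect -f '{{.State.StartedAt}}' "$VPN_CONTAINER")
    if has_restarted "$last" "$cur"; then
      for c in $(docker ps -q); do
        mode=$(docker inspect -f '{{.HostConfig.NetworkMode}}' "$c")
        [ "$mode" = "container:$vpn_id" ] && docker restart "$c"
      done
      last="$cur"
    fi
  done
}

# Uncomment to start the watcher:
# monitor
```

For the real, tested version, use the script linked in the post above.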
  11. I am working on a script right now that can be added to the User Scripts plugin. This script will check every x seconds if the passthroughvpn container has restarted. If it did restart, it will also restart all the containers that are being passed through.
No, not possible.
  12. First of all, excuse me all for being late with replies. The Unraid forums do not auto-follow threads you post in, so unfortunately I did not receive any email notifications.

> First question; in what scenario would I not want it to restart automatically

This depends. If you pass a torrent client through this container, it probably won't matter when it restarts, since most torrent clients will just continue downloading the files and move on. (Yes, there are VPN torrent Dockers, but this is an example of a 'less important' program.) Another example would be a small private webserver that you host, or a reverse proxy host. These programs won't be affected by abruptly shutting down. Some game servers are a different story; an example that I've had experience and 'issues' with is Minecraft. Minecraft does not continuously save player activity to disk, so if the container thinks the connection is down and abruptly restarts, this could result in a minor rollback or item loss. I consider programs that continuously write unrecoverable progress to disk 'important'. The problem is that pinging a host to check if the connection is actually up or down is, in my opinion, not really reliable at all, even though this is exactly what the container does. Often enough the container thought the connection was down based on 1 failed ping, but it was actually still up. I tested this by sending myself a Telegram message instead of abruptly shutting down the container. I might look into a better solution, for example doing 10 pings and restarting the container only if >30% failed.

> Second question; If this is set to no and the connection is dropped for some reason such as if the VPN server goes offline, will the service begin working again once that specific VPN server comes online again or does the killswitch mean it stays offline until it restarts?

This is actually an interesting question which I am not 100% certain about; it's not something I ever really tested, since I just wanted the highest uptime for my containers. For now, I believe this is the case: if the connection actually drops (and once again, one failed ping is a bad indicator), the connection will never come back unless the container restarts, since only on restart can it reestablish the connection before applying the iptables killswitch. OpenVPN uses tun0 as its network interface and WireGuard uses the wg0 interface. The iptables killswitch is quite strict: it only allows connections in and out via tun0 or wg0 (with the exception of the local network). So, if OpenVPN or WireGuard actually goes down, it would never be able to communicate again. I don't know if WireGuard or OpenVPN have some auto-reconnection going on; this is something I would actually like to look into, but I can't promise when. I should probably also include something like this in the documentation on the GitHub page.

As @numblock699 said, 'curl ifconfig.io' is probably the quickest way to do this.

Privoxy is something I've literally never worked with, since I have always just been passing containers through this container. I do not know what the other use case for Privoxy would be. I thought it was always used to proxy programs like torrent clients, Sonarr, Radarr, Jackett, etc. before there were any 'VPN' Dockers of those programs available. To be honest, I do not feel like supporting a product that I do not use myself. Also, what about getting a Privoxy Docker and routing that Privoxy container through this passthrough container? 😉 I think that would give the same result, wouldn't it?

This is intended behaviour. It is possible that your container loses connection with your VPN provider (session timeout, for example), so if 1 test ping to HEALTH_CHECK_HOST (defaults to one.one.one.one if unset) fails, it will restart the container if RESTART_CONTAINER is set to 'yes'. Although basing this off one ping is not the best way to do it; to quote my reply above: "The problem is that pinging a host to check if the connection is actually up or down is, in my opinion, not really reliable at all, even though this is exactly what the container does. Often enough the container thought the connection was down based on 1 failed ping, but it was actually still up. I tested this by sending myself a Telegram message instead of abruptly shutting down the container. I might look into a better solution, for example doing 10 pings and restarting the container only if >30% failed."
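The multi-ping idea described above could be sketched like this. It is a hypothetical improvement, not the container's actual health-check logic; the HEALTH_CHECK_HOST default mirrors the variable mentioned in the post, and the 10-ping / 30% numbers come from the quote:

```shell
#!/bin/sh
# Hypothetical multi-ping health check: restart only when more than
# FAIL_THRESHOLD percent of PING_COUNT pings fail, instead of acting
# on a single failed ping.
HEALTH_CHECK_HOST="${HEALTH_CHECK_HOST:-one.one.one.one}"
PING_COUNT=10
FAIL_THRESHOLD=30  # percent

failed_percent() {
  # $1 = number of failed pings, $2 = total pings attempted
  echo $(( $1 * 100 / $2 ))
}

should_restart() {
  # $1 = failed pings, $2 = total pings
  [ "$(failed_percent "$1" "$2")" -gt "$FAIL_THRESHOLD" ]
}

# Example main loop (left commented out so the helpers can be reused):
# fails=0
# i=0
# while [ "$i" -lt "$PING_COUNT" ]; do
#   ping -c 1 -W 2 "$HEALTH_CHECK_HOST" >/dev/null 2>&1 || fails=$((fails + 1))
#   i=$((i + 1))
# done
# if should_restart "$fails" "$PING_COUNT"; then
#   echo "Connection considered down, restarting..."
# fi
```

With these numbers, 4 failed pings out of 10 (40%) would trigger a restart, while exactly 3 out of 10 (30%) would not, since the threshold is strictly greater-than.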
  13. This container solves the problem of containers with no 'VPN' variant. After setting up this container, route your non-VPN Dockers through this one to protect your IP when you have no other 'VPN' containers. Or host, for example, a game server or web server using your VPN provider's IP, if your VPN provider supports port forwarding. Both WireGuard and OpenVPN are supported. Check out https://github.com/DyonR/docker-passthroughvpn for setup instructions.
Base: Debian 10-slim
Docker Hub: https://hub.docker.com/r/dyonr/passthroughvpn/
GitHub: https://github.com/DyonR/docker-passthroughvpn
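Pulling the environment variables mentioned across this thread together, starting the container could look roughly like this. The values are placeholders and the exact variable list should be taken from the GitHub README, not from this sketch:

```shell
# Sketch combining variables mentioned in this thread:
#   LAN_NETWORK       - your local subnet, so LAN access keeps working
#   HEALTH_CHECK_HOST - host pinged to test the VPN connection
#   RESTART_CONTAINER - restart the container when the check fails
# The published port would be for a container routed through this one
# (e.g. Plex on 32400, as in the table earlier in the thread).
docker run -d \
  --name passthroughvpn \
  --privileged \
  -e LAN_NETWORK=192.168.1.0/24 \
  -e HEALTH_CHECK_HOST=one.one.one.one \
  -e RESTART_CONTAINER=yes \
  -p 32400:32400 \
  dyonr/passthroughvpn
```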
  14. @mihu I have updated the container and it seems to work fine now. Please update the container in Unraid 😁 What I've changed: I switched the Docker image from being based on debian:10-slim to debian:bullseye-slim (Bullseye is Debian 11, which is actually still in beta) and changed a minor thing in the run script. Since Debian Bullseye is beta, this will hopefully be a temporary solution for now. I was unable to find out why it stopped working on Debian 10 😔
  15. That's odd. I haven't changed anything about the container for a long time, and it still works fine for me. I assume you just run Unraid, right? Edit: You are right, I am getting the same error after a clean install. I will look into this ASAP.