[Support] binhex - DelugeVPN



Hi,

 

Tried to set up this Docker container with an .ovpn file from my OpenVPN server running on a VPS.

With the VPN disabled it works, but with the VPN enabled it says it is connected, but there is no local binding.

 

Any ideas? I only have the .ovpn file in the /openvpn folder.

 

Sorry if this has already been answered, but I couldn't find it :(

14 hours ago, Tarald said:

Hi,

 

Tried to set up this Docker container with an .ovpn file from my OpenVPN server running on a VPS.

With the VPN disabled it works, but with the VPN enabled it says it is connected, but there is no local binding.

 

Any ideas? I only have the .ovpn file in the /openvpn folder.

 

Sorry if this has already been answered, but I couldn't find it :(

Hi Tarald

 

Welcome to the forums.  The pros will need to see logs to know what is happening.

 

Stop the container, delete supervisord.log, edit the container settings and set DEBUG to true, start the container, wait five minutes, then copy supervisord.log here.
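For reference, the same steps can be run from the command line; this is only a sketch, and the container name and appdata path are illustrative, so adjust them to your setup:

```shell
# Stop the container and remove the stale log (name and path are examples)
docker stop binhex-delugevpn
rm /mnt/user/appdata/binhex-delugevpn/supervisord.log

# Set DEBUG=true in the container's settings (or recreate it with -e DEBUG=true),
# then start it, wait ~5 minutes, and grab the fresh log
docker start binhex-delugevpn
sleep 300
cat /mnt/user/appdata/binhex-delugevpn/supervisord.log
```

unraid users can do all of this from the Docker tab instead of the shell.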

 

Good luck.

 

On 12/25/2018 at 11:51 AM, Tarald said:

My OpenVPN server is not set up with any password. I tried removing the credentials, but they default to vpnuser and vpnpassword.

What do you mean?  Your VPN provider is not using a username and password to authenticate you?

 

There is also a directive in your .ovpn file that doesn't seem to be supported: block-outside-dns
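block-outside-dns is a Windows-only OpenVPN directive, so a Linux client has no use for it; one option is simply stripping it from the .ovpn file before use. A minimal sketch (the file contents here are made up for illustration):

```shell
# Create a throwaway .ovpn containing the offending directive (contents illustrative)
cd "$(mktemp -d)"
cat > server.ovpn <<'EOF'
client
remote vpn.example.com 1194
block-outside-dns
cipher AES-256-CBC
EOF

# Delete the Windows-only directive; every other line is left untouched
sed -i '/^block-outside-dns/d' server.ovpn
cat server.ovpn
```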

 


Hey, I've got a permissions issue.

 

Every time Deluge downloads a torrent, the torrent is not accessible through SMB in the incomplete or complete folder.

 

I am able to access all other files through SMB except the ones downloaded through Deluge, so I'm assuming Deluge is setting the wrong permissions on the downloaded torrents.

 

Error message: "You do not have permission to access this file"

 

I have the container's UMASK set to 000.

 

[screenshot attached]

 

When I run the Docker Safe New Perms tool, the torrents' permissions are fixed and I can access them through SMB. I just don't want to have to run that tool every time I download something new.

 

I restored the appdata folder as well (I was surprised that didn't fix it).

 

Thanks for any help.

On 12/24/2018 at 11:47 PM, Tarald said:

Hi,

 

Tried to set up this Docker container with an .ovpn file from my OpenVPN server running on a VPS.

With the VPN disabled it works, but with the VPN enabled it says it is connected, but there is no local binding.

 

Any ideas? I only have the .ovpn file in the /openvpn folder.

 

Sorry if this has already been answered, but I couldn't find it :(

A VPS is not supported, I'm afraid; with no LAN there is no easy way to access Deluge over a public IP while still preventing IP leakage.

23 minutes ago, axman said:

Hey, I've got a permissions issue.

 

Every time Deluge downloads a torrent, the torrent is not accessible through SMB in the incomplete or complete folder.

 

I am able to access all other files through SMB except the ones downloaded through Deluge, so I'm assuming Deluge is setting the wrong permissions on the downloaded torrents.

 

Error message: "You do not have permission to access this file"

 

I have the container's UMASK set to 000.

 

[screenshot attached]

 

When I run the Docker Safe New Perms tool, the torrents' permissions are fixed and I can access them through SMB. I just don't want to have to run that tool every time I download something new.

 

I restored the appdata folder as well (I was surprised that didn't fix it).

 

Thanks for any help.

OK, so the UMASK looks correct, but what about PUID and PGID? Have you set those to the correct values?

3 minutes ago, binhex said:

OK, and that screenshot is what the permissions look like when you can't access it via SMB?

Yeah, the permissions are drwxrwx--- (I can't screenshot it) when I can't access it via SMB.


The permissions are set to drwxrwxrwx when I can access it, after the Docker Safe New Perms script has been run.
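For reference, a umask removes permission bits from the base mode (666 for new files, 777 for new directories), so UMASK=000 should yield rwxrwxrwx directories; drwxrwx--- corresponds to a mask of 007. A quick demonstration outside Docker:

```shell
# New-file mode = base mode with the umask bits removed:
# files start from 666, directories from 777.
cd "$(mktemp -d)"

umask 000
touch open.txt        # 666 -> rw-rw-rw-
mkdir open.dir        # 777 -> rwxrwxrwx

umask 007
touch limited.txt     # 660 -> rw-rw----
mkdir limited.dir     # 770 -> rwxrwx---

stat -c '%n %a' open.txt open.dir limited.txt limited.dir
```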

1 hour ago, axman said:

Yeah, the permissions are drwxrwx--- (I can't screenshot it) when I can't access it via SMB.


The permissions are set to drwxrwxrwx when I can access it, after the Docker Safe New Perms script has been run.

That looks OK, so check your Shares/Security settings and ensure 'User Shares' and/or 'Disk Shares' are set to 'Public'. Examples from my setup:

 

[screenshot: share security settings]

 

and:-

 

[screenshot: share security settings]


Hi all, I just tried to access the web UI on my unraid box and I get a "site cannot be reached" error. I'm using 8112 as the web UI port and I haven't changed anything, and I can VPN into the Docker container without a problem. I looked through the last few pages of comments and can't tell if the suggestions would solve my problem. Everything on unraid outside of this is functioning well. I am new to unraid, so if you have suggestions, please explain where I need to get the log or run a command; generally, I can find it from there.

On 12/24/2018 at 4:19 AM, spatial_awareness said:

Hi Binhex,

 

I have two instances of this container going: one for movies, another for TV shows.

 

PROBLEM DESCRIPTION

The movies instance writes data to the logs inside the container until docker.img is filled. I have a few hundred torrents going, since I perma-seed.

 

I've attached the .json log.

 

Most of the messages aren't useful.

 

It seems to be working fine. Stuff is being downloaded; it just spams the logs until docker.img fills and crashes :(

OK, and can you tell me what the path to the log file is?

4 hours ago, binhex said:

OK, and can you tell me what the path to the log file is?

root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# ls -lh
total 8.8G
-rw-r----- 1 root root 8.8G Jan  3 10:00 c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc-json.log  < ----
drwx------ 1 root root    0 Dec 31 14:20 checkpoints/
-rw------- 1 root root 4.0K Jan  2 14:22 config.v2.json
-rw-r--r-- 1 root root 1.5K Jan  2 14:22 hostconfig.json
-rw-r--r-- 1 root root   13 Jan  2 14:22 hostname
-rw-r--r-- 1 root root  224 Jan  2 14:22 hosts
drwx------ 1 root root    6 Dec 31 14:20 mounts/
-rw-r--r-- 1 root root  176 Jan  2 14:22 resolv.conf
-rw-r--r-- 1 root root   71 Jan  2 14:22 resolv.conf.hash


root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                                                                                                                          NAMES
c5cb99ad5b06        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   2 days ago          Up 20 hours         0.0.0.0:8111->8112/tcp, 0.0.0.0:8117->8118/tcp, 0.0.0.0:58845->58846/tcp, 0.0.0.0:58945->58946/tcp, 0.0.0.0:58945->58946/udp   binhex-delugevpn-movies   < ----
b02e3b1f816f        limetech/plex           "/sbin/my_init"          10 days ago         Up 2 days                                                                                                                                          PlexMediaServer
6e1a05fe9fe2        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   10 days ago         Up 2 days           0.0.0.0:8113->8112/tcp, 0.0.0.0:8119->8118/tcp, 0.0.0.0:58847->58846/tcp, 0.0.0.0:58947->58946/tcp, 0.0.0.0:58947->58946/udp   binhex-delugevpn-tv

 

1 hour ago, spatial_awareness said:

root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# ls -lh
total 8.8G
-rw-r----- 1 root root 8.8G Jan  3 10:00 c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc-json.log  < ----
drwx------ 1 root root    0 Dec 31 14:20 checkpoints/
-rw------- 1 root root 4.0K Jan  2 14:22 config.v2.json
-rw-r--r-- 1 root root 1.5K Jan  2 14:22 hostconfig.json
-rw-r--r-- 1 root root   13 Jan  2 14:22 hostname
-rw-r--r-- 1 root root  224 Jan  2 14:22 hosts
drwx------ 1 root root    6 Dec 31 14:20 mounts/
-rw-r--r-- 1 root root  176 Jan  2 14:22 resolv.conf
-rw-r--r-- 1 root root   71 Jan  2 14:22 resolv.conf.hash


root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                                                                                                                          NAMES
c5cb99ad5b06        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   2 days ago          Up 20 hours         0.0.0.0:8111->8112/tcp, 0.0.0.0:8117->8118/tcp, 0.0.0.0:58845->58846/tcp, 0.0.0.0:58945->58946/tcp, 0.0.0.0:58945->58946/udp   binhex-delugevpn-movies   < ----
b02e3b1f816f        limetech/plex           "/sbin/my_init"          10 days ago         Up 2 days                                                                                                                                          PlexMediaServer
6e1a05fe9fe2        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   10 days ago         Up 2 days           0.0.0.0:8113->8112/tcp, 0.0.0.0:8119->8118/tcp, 0.0.0.0:58847->58846/tcp, 0.0.0.0:58947->58946/tcp, 0.0.0.0:58947->58946/udp   binhex-delugevpn-tv

 

OK, no problem. That looks like the deluge-web log. I have just pushed some changes which will force the log to be located in /config/, and I've also rolled in two env vars to control the logging level for the deluge daemon and the deluge web UI. The image is now built, so please pull down the latest.
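As a general safeguard, independent of the container change, Docker's json-file logging driver can also be capped host-wide so no single container's log can fill docker.img. A sketch of /etc/docker/daemon.json with illustrative limits (the Docker daemon needs a restart to pick this up, and it only applies to newly created containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```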

7 hours ago, binhex said:

OK, no problem. That looks like the deluge-web log. I have just pushed some changes which will force the log to be located in /config/, and I've also rolled in two env vars to control the logging level for the deluge daemon and the deluge web UI. The image is now built, so please pull down the latest.

 

I pulled it down, and it looks like the logging isn't as frantic; these files aren't growing at the same rate.

 

I don't understand what to adjust the variables to, or where to adjust them; I checked here.

 

I still see weird entries in the tails of the .json log in the container, and in the supervisord log now.

 

(I have a script that deletes the supervisord.XXX log files every hour; that's why we don't see that directory grow.)

 

root@tank:/mnt/user/appdata/binhex-delugevpn-movies# docker image ls
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
binhex/arch-delugevpn   latest              1b4be30790e3        2 hours ago         1.34GB  < ---- 
limetech/plex           latest              622fc6d98c10        2 weeks ago         514MB
binhex/arch-delugevpn   <none>              edb42194fcfd        2 weeks ago         1.34GB

root@tank:/mnt/user/appdata/binhex-delugevpn-movies# docker container ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                                                                                                                          NAMES
9dbeb52ff92c        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   14 minutes ago      Up 14 minutes       0.0.0.0:8113->8112/tcp, 0.0.0.0:8119->8118/tcp, 0.0.0.0:58847->58846/tcp, 0.0.0.0:58947->58946/tcp, 0.0.0.0:58947->58946/udp   binhex-delugevpn-tv
c52761545533        binhex/arch-delugevpn   "/usr/bin/tini -- /b…"   17 minutes ago      Up 5 minutes        0.0.0.0:8111->8112/tcp, 0.0.0.0:8117->8118/tcp, 0.0.0.0:58845->58846/tcp, 0.0.0.0:58945->58946/tcp, 0.0.0.0:58945->58946/udp   binhex-delugevpn-movies  < ---
b02e3b1f816f        limetech/plex           "/sbin/my_init"          11 days ago         Up 3 days                                                                                                                                          PlexMediaServer

root@tank:/mnt/user/appdata/binhex-delugevpn-movies# ls *log
91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log*  deluge-web.log  deluged.log  supervisord.log

root@tank:/mnt/user/appdata/binhex-delugevpn-movies# tail *.log && tail /var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87-json.log
==> 91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log <==
{"log":"rW\n","stream":"stdout","time":"2018-12-24T03:45:13.11357649Z"}
{"log":"2018-12-23 22:45:13,113 DEBG fd 16 closed, stopped monitoring \u003cPOutputDispatcher at 22958395464448 for \u003cSubprocess at 22958395463512 with name watchdog-script in state STOPPING\u003e (stderr)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.113860971Z"}
{"log":"2018-12-23 22:45:13,113 DEBG fd 11 closed, stopped monitoring \u003cPOutputDispatcher at 22958395464160 for \u003cSubprocess at 22958395463512 with name watchdog-script in state STOPPING\u003e (stdout)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.114088118Z"}
{"log":"2018-12-23 22:45:13,114 INFO stopped: watchdog-script (terminated by SIGTERM)\n","stream":"stdout","time":"2018-12-24T03:45:13.114323505Z"}
{"log":"2018-12-23 22:45:13,114 DEBG received SIGCLD indicating a child quit\n","stream":"stdout","time":"2018-12-24T03:45:13.114558589Z"}
{"log":"2018-12-23 22:45:13,114 DEBG killing start-script (pid 138) with signal SIGTERM\n","stream":"stdout","time":"2018-12-24T03:45:13.114910293Z"}
{"log":"2018-12-23 22:45:13,115 DEBG fd 8 closed, stopped monitoring \u003cPOutputDispatcher at 22958395463584 for \u003cSubprocess at 22958395463368 with name start-script in state STOPPING\u003e (stdout)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.115406481Z"}
{"log":"2018-12-23 22:45:13,115 DEBG fd 10 closed, stopped monitoring \u003cPOutputDispatcher at 22958395463872 for \u003cSubprocess at 22958395463368 with name start-script in state STOPPING\u003e (stderr)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.115557003Z"}
{"log":"2018-12-23 22:45:13,115 INFO stopped: start-script (terminated by SIGTERM)\n","stream":"stdout","time":"2018-12-24T03:45:13.115660248Z"}
{"log":"2018-12-23 22:45:13,115 DEBG received SIGCLD indicating a child quit\n","stream":"stdout","time":"2018-12-24T03:45:13.115793026Z"}

==> deluge-web.log <==
[INFO    ] 12:33:59 configmanager:70 Setting config directory to: /config
[INFO    ] 12:33:59 ui:124 Deluge ui 1.3.15
[INFO    ] 12:33:59 ui:127 Starting web ui..
[INFO    ] 12:33:59 server:666 Starting server in PID 835.
[INFO    ] 12:33:59 server:679 Serving on 0.0.0.0:8112 view at http://0.0.0.0:8112
[INFO    ] 12:33:59 client:217 Connecting to daemon at localhost:58846..
[INFO    ] 12:33:59 client:121 Connected to daemon at 127.0.0.1:58846..

==> deluged.log <==
[INFO    ] 12:46:37 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
[INFO    ] 12:46:37 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
[INFO    ] 12:47:08 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
[INFO    ] 12:47:08 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
[INFO    ] 12:47:17 torrentmanager:756 Saving the state at: /config/state/torrents.state
[INFO    ] 12:48:10 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
[INFO    ] 12:48:10 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
[INFO    ] 12:49:47 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
[INFO    ] 12:49:47 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
[INFO    ] 12:50:37 torrentmanager:756 Saving the state at: /config/state/torrents.state

==> supervisord.log <==
2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:
Rw
2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:
RwRwRwRwRwRwrWrWrWrWrWrW
2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:
rWrWrWrWRwrWrWrWr
2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:
W
2019-01-03 12:51:17,338 DEBG 'start-script' stdout output:
RwRw

{"log":"2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.334700502Z"}
{"log":"Rw\n","stream":"stdout","time":"2019-01-03T17:51:17.334716594Z"}
{"log":"2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.334981836Z"}
{"log":"RwRwRwRwRwRwrWrWrWrWrWrW\n","stream":"stdout","time":"2019-01-03T17:51:17.334992136Z"}
{"log":"2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.335304911Z"}
{"log":"rWrWrWrWRwrWrWrWr\n","stream":"stdout","time":"2019-01-03T17:51:17.335319143Z"}
{"log":"2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.335495444Z"}
{"log":"W\n","stream":"stdout","time":"2019-01-03T17:51:17.335506838Z"}
{"log":"2019-01-03 12:51:17,338 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.338673675Z"}
{"log":"RwRw\n","stream":"stdout","time":"2019-01-03T17:51:17.338696095Z"}
root@tank:/var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87# ls -lh
total 836M
-rw-r----- 1 root root 836M Jan  3 18:34 c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87-json.log  < ----
drwx------ 1 root root    0 Jan  3 12:10 checkpoints/
-rw------- 1 root root 4.0K Jan  3 12:33 config.v2.json
-rw-r--r-- 1 root root 1.5K Jan  3 12:33 hostconfig.json
-rw-r--r-- 1 root root   13 Jan  3 12:33 hostname
-rw-r--r-- 1 root root  223 Jan  3 12:33 hosts
drwx------ 1 root root    6 Jan  3 12:10 mounts/
-rw-r--r-- 1 root root  176 Jan  3 12:33 resolv.conf
-rw-r--r-- 1 root root   71 Jan  3 12:33 resolv.conf.hash

root@tank:/var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87# cd /mnt/user/appdata/binhex-delugevpn-movies/

root@tank:/mnt/user/appdata/binhex-delugevpn-movies# ls -lh *.log*
-rwxrwxrwx 1 root   root  416K Dec 23 23:09 91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log*
-rw-rw-rw- 1 nobody users  450 Jan  3 12:33 deluge-web.log
-rw-rw-rw- 1 nobody users 157K Jan  3 18:34 deluged.log
-rw-r--r-- 1 root   root  3.7M Jan  3 18:35 supervisord.log
-rw-r--r-- 1 root   root   11M Jan  3 18:30 supervisord.log.1
-rw-r--r-- 1 root   root   11M Jan  3 18:17 supervisord.log.2
-rw-r--r-- 1 root   root   11M Jan  3 18:04 supervisord.log.3
-rw-r--r-- 1 root   root   11M Jan  3 17:52 supervisord.log.4

 

On 12/31/2018 at 7:49 AM, binhex said:

That looks OK, so check your Shares/Security settings and ensure 'User Shares' and/or 'Disk Shares' are set to 'Public'. Examples from my setup:

 

[screenshot: share security settings]

 

and:-

 

[screenshot: share security settings]

 

Thanks for the help. I was able to resolve the issue from this source, in case anyone else is looking.

 

 


I've not had a chance to search this forum in case you've answered this before, because I'm at work and should be working.

I've had to add a /download path variable and match it up to the Radarr/Sonarr/Lidarr download directories so that the download paths can be passed back. Otherwise, the path to the downloads is passed back as /data instead.

 

It would be good if Docker could formalise some sort of naming convention for this idea of passing paths around, but in the meantime, would it be a good idea to add this /download path to your container template?

15 hours ago, spatial_awareness said:

I don't understand what to adjust the variables to or where to adjust the variables

You need to add the two new env vars via unraid: go to the unraid web UI > Docker > edit container, then add the following two env vars, setting the value to info, warning, error, or debug, depending on the level of logging you want:

 

DELUGE_DAEMON_LOG_LEVEL
DELUGE_WEB_LOG_LEVEL
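For anyone running the image outside unraid, the equivalent is passing the same variables on the command line; a sketch only, with the container name and host paths illustrative and other options (such as the capabilities a VPN container needs) omitted for brevity:

```shell
docker run -d \
  --name=binhex-delugevpn \
  -e DELUGE_DAEMON_LOG_LEVEL=warning \
  -e DELUGE_WEB_LOG_LEVEL=info \
  -v /host/appdata/delugevpn:/config \
  -v /host/downloads:/data \
  binhex/arch-delugevpn
```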

 

15 hours ago, spatial_awareness said:

I still see weird entries on the tails of the .json log in the container, and the supervisord log now.

 

Probably due to the fact that the default log level is info, so you will see info, warning, and error messages; turn it down if you don't want to see these.

 

15 hours ago, spatial_awareness said:

(I have a script that deletes the supervisord.XXX log files every hour; that's why we don't see that directory grow.)

You don't need to do this; supervisor logs will not grow past 10MB per file before they auto-cycle, with a maximum of 5 log files.

 

Just to be clear: you are the first person to mention insanely large log files (I think you mentioned sizes in the gigabytes?!), so I would assume you have something that is hitting Deluge hard and repeatedly, causing the log files to grow. I would encourage you to try to identify what is doing this, as turning down the logging level will just mask the issue; it obviously won't prevent whatever it is from hammering Deluge.

2 hours ago, OFark said:

It would be good if Docker could formalise some sort of naming convention

You can't formalise this; Docker is just a tool, and how you use it is up to the developer/end user. You can name your volume mapping anything you want, and that's a good thing. There is no way Docker will ever hard-set this (and they shouldn't).

 

2 hours ago, OFark said:

but in the meantime would it be a good idea to add this /download path to your container template?

Not really, as this would just cause further confusion as to why both /data and /downloads exist. The issue is that LSIO uses /downloads and I went the route of /data (a very long time ago!), thus the disparity if you use a mix of containers. So, two possible solutions: don't use LSIO and exclusively use my Docker images, as they all use /data :-), or alternatively ensure consistency with either /data or /downloads by simply renaming the volume mapping. The choice is yours.

 

Just an FYI: the reason I decided on 'data' as the name is that I want to be able to use it throughout all containers for saving ANY type of data, not just downloads, thus the name.
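In practice, the renaming fix amounts to mounting the same host directory at the same internal path in every container that exchanges paths; a sketch (container names and host paths illustrative, other required options omitted):

```shell
# Both containers mount the same host folder at the same internal path (/data),
# so a download path reported by Deluge resolves identically inside Sonarr.
docker run -d --name=delugevpn \
  -v /mnt/user/downloads:/data \
  binhex/arch-delugevpn

docker run -d --name=sonarr \
  -v /mnt/user/downloads:/data \
  linuxserver/sonarr
```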


Meh, crap. Boy, did I just screw the pooch. I had an old Docker container that I was fighting with (it wasn't working) and gave up on for a while. The other day I decided to manually update the container to try again, and I didn't bother to look at the documentation. The application seemed to run just fine. When did the container change to allow traffic without the VPN? That just bit me in the ass. 😲

 

I realize it's all my fault. Every bit of it. But the nice thing about this container was that I could trust that nothing could leak out. Using the same environment variables I had with my old setup, I didn't realize I wasn't protected. Before, if the VPN didn't work, neither did Deluge.

 

I should have checked the new variables, since it had been so long. I should probably have tested the connection too. Don't be a dumbass like me.

 

Ugh. 

It hasn't changed; as long as VPN_ENABLED=yes, you will not be able to access the web UI unless the VPN tunnel is established, so there is no IP leakage.
