[Support] binhex - DelugeVPN



I had an error where Deluge was using the container's disk image for my downloads, which set off warnings. Not sure what triggered this change, but when I tried to revert the location to the usual one <screenshot>, I got a command failed error: Error response from daemon: endpoint with name binhex-delugevpn already exists in network bridge. Anything I'm missing?

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='REDACTED' -e 'VPN_PASS'='REDACTED' -e 'VPN_PROV'='pia' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='yes' -e 'LAN_NETWORK'='192.168.86.0/24' -e 'NAME_SERVERS'='209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -v '/mnt/user/Downloads/':'/data':'rw' -v '/mnt/user/appdata/binhex-delugevpn':'/config':'rw' 'binhex/arch-delugevpn' 
13da363b92fbdc7dcf0b37d926e09828d3f6dd11f05cbf1677321a618df3abcf
/usr/bin/docker: Error response from daemon: endpoint with name binhex-delugevpn already exists in network bridge.

The command failed.

 

2020-02-15_21-50-59.png

On 2/16/2020 at 5:56 AM, vvolfpack said:

I got a command failed error: Error response from daemon: endpoint with name binhex-delugevpn already exists in network bridge. Anything I'm missing?

It means you are trying to create a container with the same name as one that already exists. Issue the following command on your host and you will no doubt see you already have a container named 'binhex-delugevpn':-

docker ps -a

so delete the old one if you want to create a new container with the same name.
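For anyone landing here with the same error, a minimal cleanup sketch (the container name is from this thread; the `docker network disconnect` step is an assumption for the case where `docker ps -a` shows nothing but the bridge network still holds a stale endpoint, which matches the "already exists in network bridge" message):

```shell
# Sketch: clearing a duplicate/stale 'binhex-delugevpn' endpoint.
CONTAINER=binhex-delugevpn

if command -v docker >/dev/null 2>&1; then
    # Show every container of that name, including stopped ones.
    docker ps -a --filter "name=${CONTAINER}"

    # Remove an existing container of that name before recreating it.
    docker rm -f "${CONTAINER}" 2>/dev/null || true

    # If no container shows up but the error persists, force-disconnect
    # the stale endpoint from the bridge network (assumption: this is
    # the leftover that blocks re-creation).
    docker network disconnect -f bridge "${CONTAINER}" 2>/dev/null || true
fi
```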

On 2/17/2020 at 3:22 AM, binhex said:

issue the following command on your host and you will no doubt see you already have a container named 'binhex-delugevpn':-

Running the command doesn't show 'binhex-delugevpn' as a container. However, I do see a folder for the container under appdata. Deleting this folder and reinstalling the docker also shows me the same 'command failed' error from before.

 

Attaching my diagnostics file in case it's an error in the docker image itself, since DelugeVPN downloaded items to the docker disk image, causing it to reach 100% storage.

 

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS                    PORTS                                                                  NAMES
dd408724d851        binhex/arch-jackett      "/usr/bin/tini -- /b…"   37 hours ago        Up 27 hours               0.0.0.0:9117->9117/tcp                                                 binhex-jackett
9be5e3a09bc8        binhex/arch-sickchill    "/usr/bin/tini -- /b…"   37 hours ago        Created                                                                                          binhex-sickchill
6b116441524c        linuxserver/openvpn-as   "/init"                  38 hours ago        Up 27 hours               0.0.0.0:943->943/tcp, 0.0.0.0:9443->9443/tcp, 0.0.0.0:1194->1194/udp   openvpn-as
dd876d1256e3        linuxserver/duckdns      "/init"                  3 days ago          Up 27 hours                                                                                      duckdns
3aedc50e459c        binhex/arch-plexpass     "/usr/bin/tini -- /b…"   3 days ago          Up 27 hours                                                                                      binhex-plexpass
9c7295b66cb6        binhex/arch-sonarr       "/usr/bin/tini -- /b…"   6 days ago          Exited (0) 3 days ago                                                                            binhex-sonarr
b1fdead365e7        binhex/arch-radarr       "/usr/bin/tini -- /b…"   6 days ago          Exited (0) 38 hours ago                                                                          binhex-radarr
348ae87c462b        binhex/arch-sabnzbd      "/usr/bin/tini -- /b…"   9 days ago          Up 27 hours               0.0.0.0:8080->8080/tcp, 0.0.0.0:8090->8090/tcp                         binhex-sabnzbd
d64832469e44        binhex/arch-krusader     "/usr/bin/tini -- /b…"   10 days ago         Up 27 hours               5900/tcp, 0.0.0.0:6080->6080/tcp                                       binhex-krusader

vvolfbox-diagnostics-20200217-1058.zip

 

Edit 1: After a reboot, the command passes successfully, but I'm unable to start the docker. After clicking "Start", it spins for a second and returns to the "Stopped" state. Any help would be much appreciated!

 

 

EDIT 2:

Getting these syslogs from trying to start the application

Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered blocking state
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered disabled state
Feb 18 23:02:14 vvolfbox kernel: device veth72bc119 entered promiscuous mode
Feb 18 23:02:14 vvolfbox kernel: IPv6: ADDRCONF(NETDEV_UP): veth72bc119: link is not ready
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered blocking state
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered forwarding state
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered disabled state
Feb 18 23:02:14 vvolfbox kernel: eth0: renamed from veth7da683a
Feb 18 23:02:14 vvolfbox kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth72bc119: link becomes ready
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered blocking state
Feb 18 23:02:14 vvolfbox kernel: docker0: port 5(veth72bc119) entered forwarding state
Feb 18 23:02:15 vvolfbox kernel: docker0: port 5(veth72bc119) entered disabled state
Feb 18 23:02:15 vvolfbox kernel: veth7da683a: renamed from eth0
Feb 18 23:02:15 vvolfbox kernel: docker0: port 5(veth72bc119) entered disabled state
Feb 18 23:02:15 vvolfbox kernel: device veth72bc119 left promiscuous mode
Feb 18 23:02:15 vvolfbox kernel: docker0: port 5(veth72bc119) entered disabled state

Any help would be much appreciated, @binhex!
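For anyone debugging the same immediate-stop behaviour, a hedged sketch of pulling the container's own exit status and log with the standard docker CLI (container name taken from this thread) — the container log usually names the failure (bad VPN credentials, port clash, etc.) even when the syslog only shows veth churn:

```shell
# Sketch: inspect why the container stops right after starting.
CONTAINER=binhex-delugevpn

if command -v docker >/dev/null 2>&1; then
    # Exit code and any daemon-side error from the last run attempt.
    docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' "${CONTAINER}"

    # Tail of the container's own log, which usually names the failure.
    docker logs --tail 50 "${CONTAINER}"
fi
```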


Update: My issue was network-related. I had enabled a 2nd NIC on my server. I disabled the 2nd NIC and rebooted the server.

 

All is well now.  I think I will leave it alone before I create more problems.

 

I have been sailing with Deluge for some time now without issue.  I have VPN enabled in Deluge routing thru Toronto.

 

I rebooted my unraid server today. I am not able to open the Deluge WEBUI without first disabling the VPN. 

 

What is the issue?  This was working fine before I did the re-boot.  I just get the "site cannot be reached" error. Using port 8112.
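A quick reachability sketch for anyone hitting the same "site cannot be reached" symptom (SERVER_IP below is a placeholder for the Unraid host's LAN address, not taken from this thread):

```shell
# Sketch: check whether anything is listening on the web UI port at all.
SERVER_IP=192.168.1.100   # placeholder - substitute your Unraid host IP
WEBUI_PORT=8112

# With VPN_ENABLED=yes the container keeps the UI down until the tunnel
# is up, so a refused or timed-out connection here points at the VPN,
# not at Deluge itself.
curl -m 5 "http://${SERVER_IP}:${WEBUI_PORT}" || echo "web UI not reachable"
```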

 

Here is my log:

 

 

2020-02-19 20:39:33,304 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-02-19 20:39:33,305 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-02-19 20:39:33,425 DEBG 'start-script' stdout output:
[info] Default route for container is 172.17.0.1

2020-02-19 20:39:33,431 DEBG 'start-script' stdout output:
[info] Adding 209.222.18.222 to /etc/resolv.conf

2020-02-19 20:39:33,436 DEBG 'start-script' stdout output:
[info] Adding 84.200.69.80 to /etc/resolv.conf

2020-02-19 20:39:33,442 DEBG 'start-script' stdout output:
[info] Adding 37.235.1.174 to /etc/resolv.conf

2020-02-19 20:39:33,448 DEBG 'start-script' stdout output:
[info] Adding 1.1.1.1 to /etc/resolv.conf

2020-02-19 20:39:33,454 DEBG 'start-script' stdout output:
[info] Adding 209.222.18.218 to /etc/resolv.conf

2020-02-19 20:39:33,459 DEBG 'start-script' stdout output:
[info] Adding 37.235.1.177 to /etc/resolv.conf

2020-02-19 20:39:33,465 DEBG 'start-script' stdout output:
[info] Adding 84.200.70.40 to /etc/resolv.conf

2020-02-19 20:39:33,471 DEBG 'start-script' stdout output:
[info] Adding 1.0.0.1 to /etc/resolv.conf

2020-02-19 20:41:33,599 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error

2020-02-19 20:43:33,724 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error

2020-02-19 20:45:38,847 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error

2020-02-19 20:47:43,970 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error

2020-02-19 20:49:49,092 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error

2020-02-19 20:51:54,221 DEBG 'start-script' stderr output:
Error: error sending query: Could not send or receive, because of network error
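The repeated "error sending query" lines above suggest the container cannot reach its configured name servers at all. A hedged sketch for separating raw connectivity from DNS (assumption: `ping` and `nslookup` are present inside the image; the hostname is just an example):

```shell
# Sketch: test connectivity and DNS from inside the container.
CONTAINER=binhex-delugevpn

if command -v docker >/dev/null 2>&1; then
    # Raw IP reachability (no DNS involved).
    docker exec "${CONTAINER}" ping -c 1 1.1.1.1

    # Name resolution via the container's /etc/resolv.conf.
    docker exec "${CONTAINER}" nslookup www.privateinternetaccess.com
fi
```

If the ping by IP already fails, the problem is host networking (e.g. the second-NIC issue mentioned earlier in this thread) rather than the VPN endpoint.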

 

Then I shut down the Docker and deleted the config files in OpenVPN. Copied the original certificates back and changed my endpoint from Toronto to France. Same results in the logfile.

 

Also looking at my Unraid Log I am seeing the following :


Feb 20 07:36:16 MediaTower kernel: docker0: port 4(veth9c959ce) entered blocking state
Feb 20 07:36:16 MediaTower kernel: docker0: port 4(veth9c959ce) entered forwarding state
Feb 20 07:36:16 MediaTower kernel: docker0: port 4(veth9c959ce) entered disabled state
Feb 20 07:36:17 MediaTower kernel: eth0: renamed from vethde49b44
Feb 20 07:36:17 MediaTower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9c959ce: link becomes ready
Feb 20 07:36:17 MediaTower kernel: docker0: port 4(veth9c959ce) entered blocking state
Feb 20 07:36:17 MediaTower kernel: docker0: port 4(veth9c959ce) entered forwarding state
Feb 20 07:38:37 MediaTower kernel: docker0: port 4(veth9c959ce) entered disabled state
Feb 20 07:38:37 MediaTower kernel: vethde49b44: renamed from eth0
Feb 20 07:38:37 MediaTower kernel: docker0: port 4(veth9c959ce) entered disabled state
Feb 20 07:38:37 MediaTower kernel: device veth9c959ce left promiscuous mode
Feb 20 07:38:37 MediaTower kernel: docker0: port 4(veth9c959ce) entered disabled state
Feb 20 07:38:41 MediaTower kernel: docker0: port 4(veth55c0157) entered blocking state
Feb 20 07:38:41 MediaTower kernel: docker0: port 4(veth55c0157) entered disabled state
Feb 20 07:38:41 MediaTower kernel: device veth55c0157 entered promiscuous mode
Feb 20 07:38:41 MediaTower kernel: IPv6: ADDRCONF(NETDEV_UP): veth55c0157: link is not ready
Feb 20 07:38:41 MediaTower kernel: docker0: port 4(veth55c0157) entered blocking state
Feb 20 07:38:41 MediaTower kernel: docker0: port 4(veth55c0157) entered forwarding state
Feb 20 07:38:41 MediaTower kernel: docker0: port 4(veth55c0157) entered disabled state
Feb 20 07:38:43 MediaTower kernel: eth0: renamed from veth37cc761
Feb 20 07:38:43 MediaTower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth55c0157: link becomes ready
Feb 20 07:38:43 MediaTower kernel: docker0: port 4(veth55c0157) entered blocking state
Feb 20 07:38:43 MediaTower kernel: docker0: port 4(veth55c0157) entered forwarding state
Feb 20 07:57:51 MediaTower kernel: docker0: port 4(veth55c0157) entered disabled state
Feb 20 07:57:51 MediaTower kernel: veth37cc761: renamed from eth0
Feb 20 07:57:52 MediaTower kernel: docker0: port 4(veth55c0157) entered disabled state
Feb 20 07:57:52 MediaTower kernel: device veth55c0157 left promiscuous mode
Feb 20 07:57:52 MediaTower kernel: docker0: port 4(veth55c0157) entered disabled state
Feb 20 07:59:46 MediaTower kernel: docker0: port 4(vethc3f7b23) entered blocking state
Feb 20 07:59:46 MediaTower kernel: docker0: port 4(vethc3f7b23) entered disabled state
Feb 20 07:59:46 MediaTower kernel: device vethc3f7b23 entered promiscuous mode
Feb 20 07:59:46 MediaTower kernel: IPv6: ADDRCONF(NETDEV_UP): vethc3f7b23: link is not ready
Feb 20 07:59:46 MediaTower kernel: docker0: port 4(vethc3f7b23) entered blocking state
Feb 20 07:59:46 MediaTower kernel: docker0: port 4(vethc3f7b23) entered forwarding state
Feb 20 07:59:46 MediaTower kernel: docker0: port 4(vethc3f7b23) entered disabled state
Feb 20 07:59:47 MediaTower kernel: eth0: renamed from veth3940a12
Feb 20 07:59:47 MediaTower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc3f7b23: link becomes ready
Feb 20 07:59:47 MediaTower kernel: docker0: port 4(vethc3f7b23) entered blocking state
Feb 20 07:59:47 MediaTower kernel: docker0: port 4(vethc3f7b23) entered forwarding state


Hi! New to Unraid, but learning on the way and watching many YouTube guides. I've just configured DelugeVPN and everything is working fine except for Privoxy. I just want to use Privoxy for my browsing, but when I configure my network settings on my Mac my internet connection goes down. I think I'm doing everything right but something is failing. The log file for the Docker image says: 2020-02-20 20:21:16,883 DEBG 'watchdog-script' stdout output:
[info] Privoxy process listening on port 8118

 

Any ideas on what may be wrong here? 

4 minutes ago, oskarax said:

Hi! New to Unraid, but learning on the way and watching many YouTube guides. I've just configured DelugeVPN and everything is working fine except for Privoxy. I just want to use Privoxy for my browsing, but when I configure my network settings on my Mac my internet connection goes down. I think I'm doing everything right but something is failing. The log file for the Docker image says: 2020-02-20 20:21:16,883 DEBG 'watchdog-script' stdout output:
[info] Privoxy process listening on port 8118

 

Any ideas on what may be wrong here? 

Exactly what settings are you using on your Mac?

3 minutes ago, wgstarks said:

Exactly what settings are you using on your Mac?

I'm using the Web Proxy (HTTP) and the Secure Web Proxy (HTTPS) settings, filling in my host IP and port number 8118.

 

When I've done that I have no internet access at all. It just fails to connect.
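A hedged check to run from the Mac's terminal before (or instead of) changing the system-wide proxy settings (SERVER_IP is a placeholder, not from this thread):

```shell
# Sketch: verify Privoxy answers before touching macOS network settings.
SERVER_IP=192.168.1.100   # placeholder - substitute your Unraid host IP
PROXY_PORT=8118

# Privoxy is a plain HTTP proxy: the same host:port goes in both the
# "Web Proxy (HTTP)" and "Secure Web Proxy (HTTPS)" fields on macOS.
curl -x "http://${SERVER_IP}:${PROXY_PORT}" -m 10 https://ifconfig.io || \
    echo "proxy unreachable - check LAN_NETWORK covers this machine's subnet"
```

If this curl fails, a likely culprit (assumption based on the template earlier in this thread) is that the container's LAN_NETWORK value doesn't include the Mac's subnet, so the container drops the proxy traffic.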

On 4/19/2019 at 5:19 PM, emteedubs said:

Edit: Narrowed down the issue. For some reason, the error below only happens when running the container on the docker swarm overlay network. When I run it on a docker bridge network, things are fine now. I only wish more containers supported swarm properly. =(

 

I hate to keep asking the same question. But this "write UDP: Operation not permitted (code=1)" error keeps coming back again. I was able to resolve it last time (maybe by luck) by flushing my IPtables on my Ubuntu 18.04 server. It worked for about a week and without me doing much other than maybe turning the containers off and on, it suddenly stopped working again with that error.

 

This time I tried to flush the IPtables, reboot my router, but still no go. I installed a pia-openvpn docker container on the same server as delugevpn to test if it may be a server config issue. But with this other pia-openvpn container, it was able to connect to PIA without this UDP write error using the same credentials. Unfortunately there's nothing more in the log to help me test other ways to figure out the problem. Any thoughts from the experts here? What does that error code really mean?

@emteedubs I know it's been nearly a year since you posted, but did you ever get this setup working with docker swarm? I'm stuck with the same issue, looking for some advice/direction.

 

I don't feel comfortable with separate openvpn and deluge containers, and I want @binhex's container on the overlay network so my other services can communicate with it easily.
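A possible explanation, offered as an assumption rather than something confirmed in this thread: `docker stack deploy` ignores the `cap_add` and `devices` keys, and OpenVPN needs NET_ADMIN plus /dev/net/tun to bring the tunnel up, which would be consistent with the same container working on a plain bridge network but failing on the swarm overlay. The relevant keys in a classic docker-compose sketch:

```yaml
# Sketch (assumption): keys honored by docker-compose but ignored by
# 'docker stack deploy', which may explain the overlay-only failure.
services:
  delugevpn:
    image: binhex/arch-delugevpn
    cap_add:
      - NET_ADMIN      # needed for OpenVPN to create the tun interface
    devices:
      - /dev/net/tun   # ignored when deployed as a swarm service
```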


I hadn't made any changes for a long time, then 2 days ago or so DelugeVPN just completely stopped downloading/uploading anything. If I turn VPN off in the template it starts downloading/uploading instantly. And the weird thing is, even when the VPN is active and DelugeVPN is not downloading anything, the Privoxy proxy still works for clients connected to it.

 

My provider is Mullvad and it seems they had an update (to their desktop app, from what it looks like) just around that time. Could that be related somehow?

 

Any ideas? I've attached the (hopefully) relevant log.

 

Edit: Started a new container with the same settings and it works, so something must've gotten screwed up within the config :/

oh boy here i go troubleshootin again

 

Edit 2: I had to change the incoming port, for whatever reason... it had been running fine for months and then just died. Weird.

 


For those of you using PIA who have had your torrents stop working recently: I've found the beta version (2.0.0) of the ltConfig plugin doesn't work with Deluge 2.0.4, which is what the latest docker release is on.

 

Like another user, I created a brand new docker container which allowed my torrents to start working again, but my speeds were slow.  When I compiled and installed the ltConfig plugin in the new container, the torrents stopped working again.

 

At this point, if you want PIA working at full speeds, your only bet is to roll the docker container back to Deluge 2.0.3. You can do this by stopping the container and editing the repository setting to point to the last 2.0.3 release. The container will automatically roll back to the specified version. The repository should look like this:

 

binhex/arch-delugevpn:2.0.3_23_g5f1eada3e-1-03
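On Unraid the rollback happens through the "Repository" field in the container template; for anyone doing it by hand, a hedged CLI equivalent using the tag quoted above:

```shell
# Sketch: pinning the image to the 2.0.3 release tag from this post.
TAG="binhex/arch-delugevpn:2.0.3_23_g5f1eada3e-1-03"

if command -v docker >/dev/null 2>&1; then
    # Pull the pinned tag; recreating the container from this tag keeps
    # it on 2.0.3 instead of 'latest'.
    docker pull "${TAG}"
fi
```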

2 hours ago, raiditup said:

At this point if you want PIA working with full speeds, your only bet is to roll the docker container back to Deluge 2.0.3.

What do you call "full speeds"? My connection is, at best, 100Mbps (nominally 50Mbps), and I am still seeing downloads at up to 8MBps and uploads running at the full limited rate of 6MBps with v2.0.4.

29 minutes ago, PeterB said:

What do you call "full speeds"?  My connection is, at best 100Mbps (nominally 50Mbps), and I am still seeing downloads at up to 8MBps and uploads running at the full limited rate of 6MBps, with v2.0.4.

Well, that's great for you, but for people like me who see a dramatic decrease in speeds without the custom settings that the ltConfig plugin provides, you should roll back to version 2.0.3. Without it, I see at most 1MBps down, and with it downloads jump to 35-40MBps, so that's what I consider "full speeds".

