[Support] binhex - PrivoxyVPN



Hey all,

 

I'm having an issue with Privoxy not starting properly. I'm getting a lot of this in the logs on repeat:

 

2023-04-10 11:42:38,115 DEBG 'watchdog-script' stdout output:
[info] Privoxy not running

2023-04-10 11:42:38,116 DEBG 'watchdog-script' stdout output:
[info] Attempting to start Privoxy...

2023-04-10 11:42:50,154 DEBG 'watchdog-script' stdout output:
[warn] Wait for Privoxy process to start aborted, too many retries
[info] Showing output from command before exit...

 

MicroSocks works fine.

 

I'm running this with VPN set to off, as it's routed through another VPN container; I'm using it purely for the HTTP and SOCKS5 proxies.

 

Also, what does DEBUG do? Does it show more in-depth logs, or something else? Ideally I'd like to confirm that the proxies are handling connections normally.
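(For anyone wanting to verify the proxies by hand, a sketch: assuming the default published ports of 8118 for Privoxy and 9118 for microsocks, the template's default SOCKS credentials of admin/socks, and substituting your container's host IP for 192.168.1.10, the following run from another machine should print the egress IP of the upstream VPN rather than your WAN IP. Setting -e DEBUG=true on the container also switches the supervisord log to the more verbose [debug] output seen later in this thread.)

curl -x http://192.168.1.10:8118 https://ifconfig.io
curl -x socks5h://admin:socks@192.168.1.10:9118 https://ifconfig.io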

 

Thanks in advance

Link to comment

I'm having some DNS resolution issues using WireGuard. I've struggled with this for days now, and it's really frustrating. I do not know where to go from here...

 

From the container console, I am able to ping devices on the local LAN, but I cannot ping or resolve any external IP or domain.

 

With debug enabled, this is what I am getting from supervisord.log:

2023-04-15 23:08:50.319465 [info] Host is running unRAID
2023-04-15 23:08:50.349622 [info] System information Linux 3414a6b7f1c3 5.19.17-Unraid #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022 x86_64 GNU/Linux
2023-04-15 23:08:50.385046 [info] OS_ARCH defined as 'x86-64'
2023-04-15 23:08:50.420122 [info] PUID defined as '99'
2023-04-15 23:08:50.466915 [info] PGID defined as '100'
2023-04-15 23:08:50.520705 [info] UMASK defined as '000'
2023-04-15 23:08:50.552389 [info] Permissions already set for '/config'
2023-04-15 23:08:50.589095 [info] Deleting files in /tmp (non recursive)...
2023-04-15 23:08:50.637891 [info] VPN_ENABLED defined as 'yes'
2023-04-15 23:08:50.673953 [info] VPN_CLIENT defined as 'wireguard'
2023-04-15 23:08:50.705869 [info] VPN_PROV defined as 'custom'
2023-04-15 23:08:50.747555 [info] WireGuard config file (conf extension) is located at /config/wireguard/wg0.conf
2023-04-15 23:08:50.801073 [info] VPN_REMOTE_SERVER defined as '162.221.202.17'
2023-04-15 23:08:50.836787 [info] VPN_REMOTE_PORT defined as '51820'
2023-04-15 23:08:50.864197 [info] VPN_DEVICE_TYPE defined as 'wg0'
2023-04-15 23:08:50.892954 [info] VPN_REMOTE_PROTOCOL defined as 'udp'
162.221.202.17
2023-04-15 23:08:50.931615 [debug] iptables kernel module 'ip_tables' available, setting policy to drop...
2023-04-15 23:08:50.967869 [debug] ip6tables kernel module 'ip6_tables' available, setting policy to drop...
2023-04-15 23:08:51.009647 [debug] Docker interface defined as eth0
2023-04-15 23:08:51.057866 [info] LAN_NETWORK defined as '10.5.5.0/24,10.3.3.0/24'
2023-04-15 23:08:51.089893 [info] NAME_SERVERS defined as '10.5.5.1,1.1.1.1,1.0.0.1,8.8.8.8'
2023-04-15 23:08:51.122678 [info] VPN_USER defined as 'private'
2023-04-15 23:08:51.158368 [info] VPN_PASS defined as 'also_private'
2023-04-15 23:08:51.193736 [info] ENABLE_PRIVOXY defined as 'yes'
2023-04-15 23:08:51.232310 [warn] ADDITIONAL_PORTS DEPRECATED, please rename env var to 'VPN_INPUT_PORTS'
2023-04-15 23:08:51.264660 [info] ADDITIONAL_PORTS defined as '7878,8090,8686,8989,9117'
2023-04-15 23:08:51.299035 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
2023-04-15 23:08:51.335565 [info] ENABLE_SOCKS defined as 'no'
2023-04-15 23:08:51.370607 [info] ENABLE_PRIVOXY defined as 'yes'
2023-04-15 23:08:51.406926 [info] Starting Supervisor...
2023-04-15 23:08:51,695 INFO Included extra file "/etc/supervisor/conf.d/privoxy.conf" during parsing
2023-04-15 23:08:51,695 INFO Set uid to user 0 succeeded
2023-04-15 23:08:51,697 INFO supervisord started with pid 7
2023-04-15 23:08:52,701 INFO spawned: 'start-script' with pid 192
2023-04-15 23:08:52,704 INFO spawned: 'watchdog-script' with pid 193
2023-04-15 23:08:52,705 INFO reaped unknown pid 8 (exit status 0)
2023-04-15 23:08:52,713 DEBG 'start-script' stdout output:
[info] VPN is enabled, beginning configuration of VPN
[debug] Environment variables defined as follows
ADDITIONAL_PORTS=7878,8090,8686,8989,9117
APPLICATION=privoxy
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:extquote:force_fignore:globasciiranges:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=([0]="0")
BASH_SOURCE=([0]="/root/start.sh")
BASH_VERSINFO=([0]="5" [1]="1" [2]="16" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='5.1.16(1)-release'
DEBUG=true
DIRSTACK=()
ENABLE_PRIVOXY=yes
ENABLE_SOCKS=no
EUID=0
GROUPS=()
HOME=/home/nobody
HOSTNAME=3414a6b7f1c3
HOSTTYPE=x86_64
HOST_CONTAINERNAME=binhex-privoxyvpn
HOST_HOSTNAME=Saturn
HOST_OS=Unraid
IFS=$' \t\n'
LANG=en_GB.UTF-8
LAN_NETWORK=10.5.5.0/24,10.3.3.0/24
MACHTYPE=x86_64-pc-linux-gnu
NAME_SERVERS=10.5.5.1,1.1.1.1,1.0.0.1,8.8.8.8
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
OS_ARCH=x86-64
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PGID=100
PIPESTATUS=([0]="0")
PPID=7
PS4='+ '
PUID=99
PWD=/
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
SOCKS_PASS=socks
SOCKS_USER=admin
SUPERVISOR_ENABLED=1
SUPERVISOR_GROUP_NAME=start-script
SUPERVISOR_PROCESS_NAME=start-script
TERM=xterm
TZ=America/Los_Angeles
UID=0
UMASK=000
VPN_CLIENT=wireguard
VPN_CONFIG=/config/wireguard/wg0.conf
VPN_DEVICE_TYPE=wg0
VPN_ENABLED=yes
VPN_INPUT_PORTS=7878,8090,8686,8989,9117
VPN_OPTIONS=
VPN_OUTPUT_PORTS=
VPN_PASS=
VPN_PROV=custom
VPN_REMOTE_PORT=51820
VPN_REMOTE_PROTOCOL=udp
VPN_REMOTE_SERVER=162.221.202.17
VPN_USER=
_='[debug] Environment variables defined as follows'
[debug] Directory listing of files in /config/wireguard/ as follows

2023-04-15 23:08:52,713 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-04-15 23:08:52,713 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-04-15 23:08:52,719 DEBG 'start-script' stdout output:
total 4
drwxrwxr-x 1 nobody users  16 Apr 15 22:43 .
drwxrwxr-x 1 nobody users 218 Apr 15 23:08 ..
-rwxrwxr-x 1 nobody users 364 Apr 15 22:43 wg0.conf

2023-04-15 23:08:52,719 DEBG 'start-script' stdout output:
[debug] Contents of WireGuard config file '/config/wireguard/wg0.conf' as follows...

2023-04-15 23:08:52,721 DEBG 'start-script' stdout output:
[Interface]
PostUp = '/root/wireguardup.sh'
PostDown = '/root/wireguarddown.sh'
PrivateKey = <my_private_key>
ListenPort = 51820
Address = 10.101.95.63/32

[Peer]
PublicKey = <my_public_key>
PresharedKey = <my_preshared_key>
AllowedIPs = 0.0.0.0/0
Endpoint = 162.221.202.17:51820


2023-04-15 23:08:52,727 DEBG 'start-script' stdout output:
[info] Adding 10.5.5.1 to /etc/resolv.conf

2023-04-15 23:08:52,732 DEBG 'start-script' stdout output:
[info] Adding 1.1.1.1 to /etc/resolv.conf

2023-04-15 23:08:52,738 DEBG 'start-script' stdout output:
[info] Adding 1.0.0.1 to /etc/resolv.conf

2023-04-15 23:08:52,745 DEBG 'start-script' stdout output:
[info] Adding 8.8.8.8 to /etc/resolv.conf

2023-04-15 23:08:52,755 DEBG 'start-script' stdout output:
[debug] Show name servers defined for container

2023-04-15 23:08:52,756 DEBG 'start-script' stdout output:
nameserver 10.5.5.1
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8

2023-04-15 23:08:52,757 DEBG 'start-script' stdout output:
[debug] Show contents of hosts file

2023-04-15 23:08:52,758 DEBG 'start-script' stdout output:
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	3414a6b7f1c3

2023-04-15 23:08:52,788 DEBG 'start-script' stdout output:
[debug] Docker interface defined as eth0

2023-04-15 23:08:52,795 DEBG 'start-script' stdout output:
[info] Default route for container is 172.17.0.1

2023-04-15 23:08:52,801 DEBG 'start-script' stdout output:
[debug] Docker IP defined as 172.17.0.2

2023-04-15 23:08:52,807 DEBG 'start-script' stdout output:
[debug] Docker netmask defined as 255.255.0.0

2023-04-15 23:08:52,969 DEBG 'start-script' stdout output:
[info] Docker network defined as    172.17.0.0/16

2023-04-15 23:08:52,975 DEBG 'start-script' stdout output:
[info] Adding 10.5.5.0/24 as route via docker eth0

2023-04-15 23:08:52,985 DEBG 'start-script' stdout output:
[info] Adding 10.3.3.0/24 as route via docker eth0

2023-04-15 23:08:52,987 DEBG 'start-script' stdout output:
[info] ip route defined as follows...
--------------------

2023-04-15 23:08:52,989 DEBG 'start-script' stdout output:
default via 172.17.0.1 dev eth0 
10.3.3.0/24 via 172.17.0.1 dev eth0 
10.5.5.0/24 via 172.17.0.1 dev eth0 
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
local 172.17.0.2 dev eth0 table local proto kernel scope host src 172.17.0.2 

2023-04-15 23:08:52,989 DEBG 'start-script' stdout output:
broadcast 172.17.255.255 dev eth0 table local proto kernel scope link src 172.17.0.2 

2023-04-15 23:08:52,990 DEBG 'start-script' stdout output:
--------------------
[debug] Modules currently loaded for kernel

2023-04-15 23:08:52,995 DEBG 'start-script' stdout output:
Module                  Size  Used by
xt_connmark            16384  0
xt_comment             16384  0
iptable_raw            16384  0
wireguard              73728  0
curve25519_x86_64      32768  1 wireguard
libcurve25519_generic    49152  2 curve25519_x86_64,wireguard
libchacha20poly1305    16384  1 wireguard
chacha_x86_64          28672  1 libchacha20poly1305
poly1305_x86_64        28672  1 libchacha20poly1305
ip6_udp_tunnel         16384  1 wireguard
udp_tunnel             20480  1 wireguard
libchacha              16384  1 chacha_x86_64
tun                    53248  2
ipvlan                 36864  0
xt_mark                16384  1
veth                   32768  0
xt_CHECKSUM            16384  0
ipt_REJECT             16384  0
nf_reject_ipv4         16384  1 ipt_REJECT
ip6table_mangle        16384  1
ip6table_nat           16384  1
nvidia_uvm           1355776  2
vhost_iotlb            16384  0
macvlan                28672  0
xt_nat                 16384  10
xt_tcpudp              16384  35
xt_conntrack           16384  8
xt_MASQUERADE          16384  18
nf_conntrack_netlink    49152  0
nfnetlink              16384  2 nf_conntrack_netlink
xfrm_user              36864  1
xt_addrtype            16384  2
iptable_nat            16384  1
nf_nat                 49152  4 ip6table_nat,xt_nat,iptable_nat,xt_MASQUERADE
nf_conntrack          139264  6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_connmark,xt_MASQUERADE
nf_defrag_ipv6         16384  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
br_netfilter           32768  0
xfs                  1654784  11
dm_crypt               45056  15
dm_mod                126976  31 dm_crypt
dax                    36864  1 dm_mod
md_mod                 53248  10
iptable_mangle         16384  2
ipmi_devintf           16384  0
efivarfs               16384  1
ip6table_filter        16384  1
ip6_tables             28672  3 ip6table_filter,ip6table_nat,ip6table_mangle
iptable_filter         16384  3
ip_tables              28672  6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle
x_tables               45056  19 ip6table_filter,xt_conntrack,iptable_filter,ip6table_nat,xt_tcpudp,xt_addrtype,xt_CHECKSUM,xt_nat,xt_comment,ip6_tables,ipt_REJECT,xt_connmark,iptable_raw,ip_tables,iptable_nat,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark
af_packet              49152  0
8021q                  32768  0
garp                   16384  1 8021q
mrp                    16384  1 8021q
bridge                262144  1 br_netfilter
stp                    16384  2 bridge,garp
llc                    16384  3 bridge,stp,garp
bonding               151552  0
tls                   106496  1 bonding
ixgbe                 286720  0
xfrm_algo              16384  2 xfrm_user,ixgbe
mdio                   16384  1 ixgbe
nvidia_drm             65536  0
nvidia_modeset       1204224  1 nvidia_drm
ast                    57344  0
drm_vram_helper        20480  1 ast
i2c_algo_bit           16384  1 ast
drm_ttm_helper         16384  2 drm_vram_helper,ast
x86_pkg_temp_thermal    16384  0
nvidia              56016896  14 nvidia_uvm,nvidia_modeset
intel_powerclamp       16384  0
ttm                    73728  2 drm_vram_helper,drm_ttm_helper
coretemp               16384  0
drm_kms_helper        159744  5 drm_vram_helper,ast,nvidia_drm
mpt3sas               282624  11
drm                   475136  8 drm_kms_helper,drm_vram_helper,ast,nvidia,drm_ttm_helper,nvidia_drm,ttm
i2c_i801               24576  0
raid_class             16384  1 mpt3sas
i2c_smbus              16384  1 i2c_i801
crct10dif_pclmul       16384  1
crc32_pclmul           16384  0
crc32c_intel           24576  3
ghash_clmulni_intel    16384  0
aesni_intel           380928  30
crypto_simd            16384  1 aesni_intel
cryptd                 24576  17 crypto_simd,ghash_clmulni_intel
rapl                   16384  0
nvme                   49152  3
ipmi_ssif              32768  0
agpgart                40960  1 ttm
backlight              20480  2 drm,nvidia_modeset
intel_cstate           20480  0
intel_uncore          200704  0
nvme_core             106496  4 nvme
scsi_transport_sas     40960  1 mpt3sas
i2c_core               86016  8 drm_kms_helper,i2c_algo_bit,ast,nvidia,i2c_smbus,i2c_i801,ipmi_ssif,drm
syscopyarea            16384  1 drm_kms_helper
tpm_tis                16384  0
ahci                   45056  2
acpi_ipmi              16384  0
sysfillrect            16384  1 drm_kms_helper
tpm_tis_core           20480  1 tpm_tis
input_leds             16384  0
joydev                 24576  0
sysimgblt              16384  1 drm_kms_helper
libahci                40960  1 ahci
led_class              16384  1 input_leds
fb_sys_fops            16384  1 drm_kms_helper
tpm                    73728  2 tpm_tis,tpm_tis_core
wmi                    28672  0
ipmi_si                57344  1
acpi_power_meter       20480  0
acpi_pad               24576  0
button                 20480  0
unix                   53248  451

2023-04-15 23:08:53,003 DEBG 'start-script' stdout output:
iptable_mangle         16384  2
ip_tables              28672  6 iptable_filter,iptable_raw,iptable_nat,iptable_mangle
x_tables               45056  19 ip6table_filter,xt_conntrack,iptable_filter,ip6table_nat,xt_tcpudp,xt_addrtype,xt_CHECKSUM,xt_nat,xt_comment,ip6_tables,ipt_REJECT,xt_connmark,iptable_raw,ip_tables,iptable_nat,ip6table_mangle,xt_MASQUERADE,iptable_mangle,xt_mark

2023-04-15 23:08:53,003 DEBG 'start-script' stdout output:
[info] iptable_mangle support detected, adding fwmark for tables

2023-04-15 23:08:53,127 DEBG 'start-script' stdout output:
[info] iptables defined as follows...
--------------------

2023-04-15 23:08:53,130 DEBG 'start-script' stdout output:
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-A INPUT -s 162.221.202.17/32 -i eth0 -j ACCEPT
-A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 7878 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 7878 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8090 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8090 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8686 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8686 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8989 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8989 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 9117 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 9117 -j ACCEPT
-A INPUT -s 10.5.5.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 8118 -j ACCEPT
-A INPUT -s 10.3.3.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 8118 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i wg0 -j ACCEPT
-A OUTPUT -d 162.221.202.17/32 -o eth0 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 7878 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 7878 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8090 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8090 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8686 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8686 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8989 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8989 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 9117 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 9117 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 10.5.5.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 10.3.3.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o wg0 -j ACCEPT

2023-04-15 23:08:53,131 DEBG 'start-script' stdout output:
--------------------

2023-04-15 23:08:53,135 DEBG 'start-script' stdout output:
[info] Attempting to bring WireGuard interface 'up'...

2023-04-15 23:08:53,147 DEBG 'start-script' stderr output:
Warning: `/config/wireguard/wg0.conf' is world accessible

2023-04-15 23:08:53,159 DEBG 'start-script' stderr output:
[#] ip link add wg0 type wireguard

2023-04-15 23:08:53,162 DEBG 'start-script' stderr output:
[#] wg setconf wg0 /dev/fd/63

2023-04-15 23:08:53,164 DEBG 'start-script' stderr output:
[#] ip -4 address add 10.101.95.63/32 dev wg0

2023-04-15 23:08:53,176 DEBG 'start-script' stderr output:
[#] ip link set mtu 1420 up dev wg0

2023-04-15 23:08:53,189 DEBG 'start-script' stderr output:
[#] wg set wg0 fwmark 51820

2023-04-15 23:08:53,190 DEBG 'start-script' stderr output:
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820

2023-04-15 23:08:53,193 DEBG 'start-script' stderr output:
[#] ip -4 rule add not fwmark 51820 table 51820

2023-04-15 23:08:53,194 DEBG 'start-script' stderr output:
[#] ip -4 rule add table main suppress_prefixlength 0

2023-04-15 23:08:53,200 DEBG 'start-script' stderr output:
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1

2023-04-15 23:08:53,203 DEBG 'start-script' stderr output:
[#] iptables-restore -n

2023-04-15 23:08:53,206 DEBG 'start-script' stderr output:
[#] '/root/wireguardup.sh'

2023-04-15 23:08:53,212 DEBG 'start-script' stdout output:
[debug] Waiting for valid local and gateway IP addresses from tunnel...

2023-04-15 23:08:54,232 DEBG 'start-script' stdout output:
[debug] Valid local IP address from tunnel acquired '10.101.95.63'

2023-04-15 23:08:54,232 DEBG 'start-script' stdout output:
[debug] Checking we can resolve name 'www.google.com' to address...

2023-04-15 23:08:54,251 DEBG 'watchdog-script' stdout output:
[debug] Checking we can resolve name 'www.google.com' to address...

2023-04-15 23:09:39,290 DEBG 'start-script' stdout output:
[debug] Having issues resolving name 'www.google.com'
[debug] Retrying in 5 secs...
[debug] 11 retries left

2023-04-15 23:09:39,307 DEBG 'watchdog-script' stdout output:
[debug] Having issues resolving name 'www.google.com'
[debug] Retrying in 5 secs...
[debug] 11 retries left

2023-04-15 23:10:29,351 DEBG 'start-script' stdout output:
[debug] Having issues resolving name 'www.google.com'
[debug] Retrying in 5 secs...
[debug] 10 retries left

2023-04-15 23:10:29,365 DEBG 'watchdog-script' stdout output:
[debug] Having issues resolving name 'www.google.com'
[debug] Retrying in 5 secs...
[debug] 10 retries left

 

 

 

Link to comment
  • 1 month later...
2 minutes ago, SeeGee said:

I am still having the issue above. It has been over a month and no posts in this thread. 
@binhex is this a dead project?

 

Nope, it's not dead. Your issue looks to be that you are trying to use a nameserver located on your LAN; this will be blocked. From your log:

2023-04-15 23:08:51.057866 [info] LAN_NETWORK defined as '10.5.5.0/24,10.3.3.0/24'
2023-04-15 23:08:51.089893 [info] NAME_SERVERS defined as '10.5.5.1,1.1.1.1,1.0.0.1,8.8.8.8'

Remove your nameserver '10.5.5.1' from the NAME_SERVERS list.
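With the LAN resolver dropped, the remaining value from the log above would be:

NAME_SERVERS=1.1.1.1,1.0.0.1,8.8.8.8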

Link to comment



Remove your nameserver '10.5.5.1' from the NAME_SERVERS list.


I tried that with no success. Out of frustration, I decided to install a fresh template and start from scratch. Dumped the .conf from VPN Unlimited into the wireguard folder and it fired up nice and clean.

The original container was set up using OpenVPN, and I was trying to migrate to WireGuard. I suspect that something left over from the old OpenVPN setup was interfering with the WireGuard tunnel. Never found out what it was, but we're up and running now. Thank you @binhex
Link to comment
  • 2 weeks later...
On 3/26/2023 at 8:34 PM, Niklas said:

Is it possible to turn this logging off? I have lots of rotated logfiles filled with just info like this (this applies to everything using SOCKS; there is no logging to the terminal for Privoxy). It's not really needed for me. 🙂 Also, it's not good for privacy with everything logged.

 

2023-03-26 20:23:28,168 DEBG 'watchdog-script' stderr output:
client[5] 192.168.1.5: connected to dl2.cdn.filezilla-project.org:443

 

 

Sorry to ask again, but can I do something myself to stop microsocks from filling the log and supervisord file with the domains connected to via SOCKS? I don't need it logged.

Edited by Niklas
Link to comment
13 hours ago, Niklas said:

 

Sorry to ask again, but can I do something myself to stop microsocks from filling the log and supervisord file with the domains connected to via SOCKS? I don't need it logged.

There is currently no flag to silence the logs in the latest release of microsocks, but I see a commit to the master branch has been made which includes this functionality, so I have asked the developer to create a new release. If he does so, I can incorporate an env var to turn logging on/off.

 

For reference, here is the issue raised: https://github.com/rofl0r/microsocks/issues/61
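(Once a release containing that commit is incorporated, silencing microsocks should just be a matter of passing its new quiet flag; per the linked issue this is expected to be -q, but treat the flag name as unconfirmed until the release actually lands:)

microsocks -q -i 0.0.0.0 -p 9118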

  • Thanks 1
Link to comment

Hi all, I have made some nice changes to the core code used for all the VPN docker images I produce. Details as follows:

  • Randomly rotate between multiple remote endpoints (openvpn only) on disconnection - less possibility of getting stuck on a defunct endpoint.
  • Manual round-robin implementation of IP addresses for endpoints - on disconnection, all endpoint IPs are rotated in /etc/hosts, reducing the possibility of getting stuck on a defunct server for the endpoint (see the sketch below).
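
For the curious, a minimal sketch of the /etc/hosts round-robin idea (illustrative only, not the actual image code; 'vpn.example.com' is a hypothetical endpoint name):

#!/bin/bash
# Sketch: rotate an endpoint's IPs in /etc/hosts so that the first
# (preferred) IP changes after every disconnection.
endpoint='vpn.example.com'
# Collect all IPs currently mapped to the endpoint hostname
mapfile -t ips < <(awk -v h="$endpoint" '$2 == h {print $1}' /etc/hosts)
# Remove the old entries, then re-add them with the head IP moved to the tail
sed -i "/[[:space:]]$endpoint\$/d" /etc/hosts
for ip in "${ips[@]:1}" "${ips[0]}"; do
    echo "$ip $endpoint" >> /etc/hosts
done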

I also have a final piece of work around this (not done yet), which is to refresh IP addresses for endpoints on each disconnect/reconnect cycle, further reducing the possibility of getting stuck on defunct servers.
 

In short, the work above should help keep the connection up for longer periods of time (hopefully months!) without the need to restart the container.
 

The work was non-trivial and it is possible I have introduced some bugs (although it has been extensively tested), so please keep an eye out for unexpected issues as I roll out this change (currently rolled out to SABnzbdVPN and PrivoxyVPN). If you see a new image released then it will include the new functionality.

  • Like 2
  • Thanks 1
Link to comment

I am trying to set up a qbit container to route through the privoxyvpn container with PIA and port forwarding. I am having an issue with the port forwarding working; I may not completely understand what is required. I have set up qbit in the UI to use the privoxyvpn container as a proxy server (the screenshot is below), and my 2 containers are set up in accordance with the below settings as well. I keep getting 0 B/s Up Speed. I have verified in the logs that I am connecting to the CA Toronto server with a port forward assigned. What am I doing wrong? Thank you for your assistance.

 

binhex-privoxyvpn

docker run
  -d
  --name='binhex-privoxyvpn'
  --net='dockernet'
  --privileged=true
  -e TZ="America/New_York"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Zion"
  -e HOST_CONTAINERNAME="binhex-privoxyvpn"
  -e 'VPN_ENABLED'='yes'
  -e 'VPN_USER'='pXXXXXX'
  -e 'VPN_PASS'='XXXXXXXXX'
  -e 'VPN_PROV'='pia'
  -e 'VPN_CLIENT'='openvpn'
  -e 'VPN_OPTIONS'=''
  -e 'LAN_NETWORK'='192.168.31.0/24'
  -e 'NAME_SERVERS'='84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1'
  -e 'SOCKS_USER'='XXXXX'
  -e 'SOCKS_PASS'='XXXXXX'
  -e 'ENABLE_SOCKS'='no'
  -e 'ENABLE_PRIVOXY'='yes'
  -e 'VPN_INPUT_PORTS'='31118'
  -e 'VPN_OUTPUT_PORTS'='31119'
  -e 'DEBUG'='false'
  -e 'UMASK'='000'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'STRICT_PORT_FORWARD'='yes'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.webui='http://config.privoxy.org/'
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/privoxy-icon.png'
  -p '8118:8118/tcp'
  -p '9118:9118/tcp'
  -v '/mnt/cache_nvme/appdata/binhex-privoxyvpn':'/config':'rw'
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-privoxyvpn'

 

qbittorrent

docker run
  -d
  --name='qbittorrent'
  --net='dockernet'
  -e TZ="America/New_York"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Zion"
  -e HOST_CONTAINERNAME="qbittorrent"
  -e 'WEBUI_PORT'='8080'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.webui='http://[IP]:[PORT:8080]'
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/qbittorrent-logo.png'
  -p '8080:8080/tcp'
  -p '6881:6881/tcp'
  -p '6881:6881/udp'
  -v '/mnt/user/media/torrent/':'/1_downloads':'rw'
  -v '/mnt/cache_nvme/appdata/qbittorrent':'/config':'rw' 'lscr.io/linuxserver/qbittorrent'

 

[screenshot attached: qBittorrent proxy server settings (qbit.png)]

Link to comment

Sorry if I have missed this in this thread or on GitHub. I've tried setting up the privoxyvpn docker using my AirVPN WireGuard config file, but I think the issue I am having is that the Endpoint port for AirVPN is 1637 by default, not 51820. Is there anything I can do to fix this? Or is there a way we can get a variable added that allows us to change this?
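(Possibly relevant: judging from the debug log earlier in this thread, the container appears to read VPN_REMOTE_PORT from the Endpoint line of the WireGuard config itself, so a non-default port should simply be whatever your AirVPN config specifies, e.g.:)

[Peer]
Endpoint = <airvpn_server_ip>:1637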

Link to comment
  • 2 weeks later...

Good morning,

 

I have been using binhex-privoxyvpn without issue for some time now.

 

My VPN provider has been Mullvad (WireGuard); however, I want to move over to ProtonVPN. I have downloaded the Proton WireGuard config file and replaced the Mullvad one, and PrivoxyVPN starts up fine (screenshot below); however, SABnzbd reports my "Public IPv4 address" as "connection failed", though my downloads do eventually start.

 

For reference, "curl ifconfig.io" under the PrivoxyVPN console returns my VPN IP, however trying the command under Sab's console returns; "Could not resolve host: ifconfig.io". 

 

No other settings have been changed. 

 

I'm sure I'm missing something simple. Any advice would be greatly appreciated. 

 

[screenshot attached: PrivoxyVPN startup log]

Edited by BeardedNoir
Link to comment

Greetings!

 

Came across an issue with my binhex-privoxyvpn setup this morning. 

Everything had been working without issue for a while (set it and forget it), so I'm not sure when the issue started occurring.


The issue: when attempting to access the WebUI of the service I have routed through binhex-privoxyvpn, I receive an "ERR_CONNECTION_TIMED_OUT".

I can confirm from the container logs that traffic is properly being routed through the VPN container (it receives the VPN IP address), but I can't seem to gain access to the WebUI.

Changing the network interface to bridge/custom docker network, I am once again able to access the WebUI, so the container seems to be working properly.

I've also found that setting 'VPN_ENABLED' to no provides a working WebUI, so maybe the issue is with my WireGuard file? I've updated it with a newly acquired config from Mullvad.

Weirder still, as I've continued to troubleshoot I came across this reddit post, which mentioned a very similar issue that someone fixed by trying OpenVPN instead of WireGuard.

Gave that a shot and, surprisingly, had the same result: I am able to access the WebUI if I'm using OpenVPN instead of WireGuard.

 


Cheers

 

docker run
  -d
  --name='privoxyvpn'
  --net='dockernetwork'
  --privileged=true
  -e TZ="America/New_York"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="MyServer"
  -e HOST_CONTAINERNAME="privoxyvpn"
  -e 'VPN_ENABLED'='yes'
  -e 'VPN_PROV'='custom'
  -e 'VPN_CLIENT'='wireguard'
  -e 'VPN_OPTIONS'=''
  -e 'LAN_NETWORK'='10.0.1.0/24,172.18.0.0/24'
  -e 'NAME_SERVERS'='84.200.69.80,37.235.1.174,1.1.1.1,37.235.1.177,84.200.70.40,1.0.0.1'
  -e 'SOCKS_USER'='admin'
  -e 'SOCKS_PASS'='socks'
  -e 'ENABLE_SOCKS'='no'
  -e 'ENABLE_PRIVOXY'='yes'
  -e 'VPN_INPUT_PORTS'='5047'
  -e 'VPN_OUTPUT_PORTS'='5047'
  -e 'DEBUG'='false'
  -e 'UMASK'='000'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.webui='http://config.privoxy.org/'
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/privoxy-icon.png'
  -p '8118:8118/tcp'
  -p '9118:9118/tcp'
  -p '5047:5047/tcp'
  -v '/mnt/user/appdata/privoxyvpn':'/config':'rw'
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-privoxyvpn'
36e042bf62fdc40b575e3d2b88c7c61daca6e5dd903e60db90530d9ea3a901f4


Container Log

-A OUTPUT -o eth0 -p tcp -m tcp --sport 5047 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 5047 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 10.0.1.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 10.0.1.0/24 -o eth0 -p tcp -m tcp --dport 5047 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/24 -o eth0 -p tcp -m tcp --dport 5047 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o wg0 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
-A OUTPUT -d removedfromlog/32 -o eth0 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 5047 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 5047 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 10.0.1.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 10.0.1.0/24 -o eth0 -p tcp -m tcp --dport 5047 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/24 -o eth0 -p tcp -m tcp --sport 8118 -j ACCEPT
-A OUTPUT -s 172.18.0.0/16 -d 172.18.0.0/24 -o eth0 -p tcp -m tcp --dport 5047 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o wg0 -j ACCEPT

2023-07-02 14:35:12,949 DEBG 'start-script' stdout output:
--------------------

2023-07-02 14:35:12,950 DEBG 'start-script' stdout output:
[info] Attempting to bring WireGuard interface 'up'...

2023-07-02 14:35:12,954 DEBG 'start-script' stderr output:
Warning: `/config/wireguard/wg0.conf' is world accessible

2023-07-02 14:35:12,958 DEBG 'start-script' stderr output:
[#] ip link add wg0 type wireguard

2023-07-02 14:35:12,959 DEBG 'start-script' stderr output:
[#] wg setconf wg0 /dev/fd/63

2023-07-02 14:35:12,959 DEBG 'start-script' stderr output:
[#] ip -4 address add 10.67.63.4/32 dev wg0

2023-07-02 14:35:12,962 DEBG 'start-script' stderr output:
[#] ip link set mtu 1420 up dev wg0

2023-07-02 14:35:12,964 DEBG 'start-script' stderr output:
[#] resolvconf -a wg0 -m 0 -x

2023-07-02 14:35:12,973 DEBG 'start-script' stderr output:
[#] wg set wg0 fwmark 51820

2023-07-02 14:35:12,974 DEBG 'start-script' stderr output:
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820

2023-07-02 14:35:12,974 DEBG 'start-script' stderr output:
[#] ip -4 rule add not fwmark 51820 table 51820

2023-07-02 14:35:12,975 DEBG 'start-script' stderr output:
[#] ip -4 rule add table main suppress_prefixlength 0

2023-07-02 14:35:12,977 DEBG 'start-script' stderr output:
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1

2023-07-02 14:35:12,978 DEBG 'start-script' stderr output:
[#] iptables-restore -n

2023-07-02 14:35:12,979 DEBG 'start-script' stderr output:
[#] '/root/wireguardup.sh'

2023-07-02 14:35:14,035 DEBG 'start-script' stdout output:
[info] Application does not require external IP address, skipping external IP address detection

2023-07-02 14:35:14,037 DEBG 'start-script' stdout output:
[info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment

2023-07-02 14:35:14,039 DEBG 'start-script' stdout output:
[info] WireGuard interface 'up'

2023-07-02 14:35:14,068 DEBG 'watchdog-script' stdout output:
[info] Privoxy not running

2023-07-02 14:35:14,069 DEBG 'watchdog-script' stdout output:
[info] Attempting to start Privoxy...

2023-07-02 14:35:15,072 DEBG 'watchdog-script' stdout output:
[info] Privoxy process started

2023-07-02 14:35:15,073 DEBG 'watchdog-script' stdout output:
[info] Waiting for Privoxy process to start listening on port 8118...

2023-07-02 14:35:15,084 DEBG 'watchdog-script' stdout output:
[info] Privoxy process listening on port 8118

 

Edited by CMac413
Link to comment
  • 2 months later...
On 6/8/2023 at 1:11 PM, binhex said:

 

I also have a final piece of work around this (not done yet), which is to refresh IP addresses for endpoints on each disconnect/reconnect cycle, further reducing the possibility of getting stuck on defunct servers.

Is that already implemented? Only working for OpenVPN, or is it still an upcoming feature?

I'm currently using the container to connect to a WireGuard server that's behind a dynamic IP, and it's been a bit of a hassle to restart the container each time the endpoint has a new IP assigned (which happens either once every 24h or, if Telekom decides to be muppets again, 20 times a day).

Link to comment
24 minutes ago, Mainfrezzer said:

Is that already implemented? Only working for OpenVPN, or is it still an upcoming feature?

I'm currently using the container to connect to a WireGuard server that's behind a dynamic IP, and it's been a bit of a hassle to restart the container each time the endpoint has a new IP assigned (which happens either once every 24h or, if Telekom decides to be muppets again, 20 times a day).

Nope, it's still to be done.

  • Like 1
Link to comment
  • 3 months later...

I have an issue that I would welcome some assistance on.

 

I have a number of docker containers set to route traffic through a binhex-privoxyvpn container using the '--net=container:binhex-privoxyvpn' flag in the extra parameters section.  All of that works well and traffic for the dependent containers routes through the binhex-privoxyvpn container network as expected.  I've set up the VPN input ports and can access the WebUI for each of the containers.  So far so good...
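(For anyone replicating this setup, roughly what that looks like, with a hypothetical port number: the dependent container gets the flag in Extra Parameters, and its WebUI port is then opened on the VPN container:)

# Extra Parameters on the dependent container:
--net=container:binhex-privoxyvpn
# On binhex-privoxyvpn, allow the dependent container's WebUI port in:
-e VPN_INPUT_PORTS=8080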

 

I've also set up a WireGuard tunnel to my Unraid server using the 'remote access to LAN' access type. When connected through the tunnel, I couldn't access my Pi-Hole container or any of the dependent containers running their traffic through binhex-privoxyvpn. I read around a bit and followed the WireGuard instructions for configuring complex networks here, so that I can get access to dockers with custom IPs.

Quote

In the WireGuard tunnel config, set "Use NAT" to No

In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For instance, for the default pool of 10.253.0.0/24 you should add this static route:

Destination Network: 10.253.0.0/24 (aka 10.253.0.0 with subnet 255.255.255.0)

Gateway / Next Hop: <IP address of your Unraid system>

Distance: 1 (your router may not have this option)

If you use pfSense, you may also need to check the box for "Static route filtering - bypass firewall rules for traffic on the same interface". See this.

If you have Dockers with custom IPs then on the Docker settings page, set "Host access to custom networks" to "Enabled".

This worked for my pi-hole container which was inaccessible prior to the above changes (a tracert hung at 10.253.0.1) but is accessible post change (I can see the Pi-Hole WebUI and tracert shows the traffic being routed on to the container appropriately).  The changes above did not fix the issue for any containers routed through binhex-privoxyvpn.  I'm still unable to access the containers when connected to my network through a wireguard tunnel. 

 

Any suggestions as to what I need to do here to be able to access the WebUI for those containers?

 

Link to comment
1 hour ago, SirCadian said:

I have an issue that I would welcome some assistance on.

 

I have a number of docker containers set to route traffic through a binhex-privoxyvpn container using the '--net=container:binhex-privoxyvpn' flag in the extra parameters section.  All of that works well and traffic for the dependent containers routes through the binhex-privoxyvpn container network as expected.  I've set up the VPN input ports and can access the WebUI for each of the containers.  So far so good...

 

I've also set up a WireGuard tunnel to my Unraid server using the 'remote access to LAN' access type. When connected through the tunnel, I couldn't access my Pi-Hole container or any of the dependent containers running their traffic through binhex-privoxyvpn. I read around a bit and followed the WireGuard instructions for configuring complex networks here, so that I can get access to dockers with custom IPs.

This worked for my pi-hole container which was inaccessible prior to the above changes (a tracert hung at 10.253.0.1) but is accessible post change (I can see the Pi-Hole WebUI and tracert shows the traffic being routed on to the container appropriately).  The changes above did not fix the issue for any containers routed through binhex-privoxyvpn.  I'm still unable to access the containers when connected to my network through a wireguard tunnel. 

 

Any suggestions as to what I need to do here to be able to access the WebUI for those containers?

 

Just in case anyone else has this problem, I found the answer in this reddit thread.  I had to amend the LAN_NETWORK container variable in the binhex-privoxyvpn docker container to add my wireguard local tunnel network pool (you can find this in Unraid Settings - VPN Manager).  So, assuming your LAN range is 192.168.1.0/24 and your Wireguard Local Tunnel Network Pool is 10.253.0.0/24 then you should set it as follows.

[screenshot attached: LAN_NETWORK setting]
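In text form, that works out to:

LAN_NETWORK=192.168.1.0/24,10.253.0.0/24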

 

Hope this helps anyone having similar problems.

  • Like 2
Link to comment
  • 2 weeks later...

I'm trying to figure out whether I am experiencing a Privoxy issue or an email client issue.

 

So I am running a business, and previously I've always used browser-based webmail, which worked fine, but I wanted an actual IMAP client to work with, and I found one (The Bat!) that is nice for my needs. The only issue is that when you send email from an email client, the actual IP address of your source connection is embedded by design. I don't want this, obviously; it's 2023 and exposing your public IP to strangers is a bad idea in general, especially for me as I have a static IP. So the solution was to put up a Privoxy proxy and send the emails through a VPN.

 

The problem:

I have set up a proxy using this container and everything seems to work fine. I can test the SOCKS5 proxy with a browser and see that if I use it for my traffic, the VPN IP comes up. It never lags or slows down when using a browser, but when I use it with my IMAP mail client, the handshakes to the server can take up to 2 minutes, it times out often, and it just works really, really badly. I have no idea why this would be; whenever I use The Bat! without the proxy it works fine.

 

My question:

Do any of you know of any setting that could cause IMAP email to be really slow with Privoxy default settings? I've asked ChatGPT, but no great ideas there. I tested changing the keep-alive to 0 etc., but none of it seems to change anything; running the email via the proxy is incredibly slow and nearly unusable.
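(One way to take the mail client out of the equation, as a diagnostic sketch with placeholder proxy IP, server, and credentials: time an IMAP fetch through the same SOCKS proxy with curl, which supports imaps:// URLs:)

time curl -v --socks5-hostname 192.168.1.10:9118 --user 'mailuser:password' 'imaps://imap.example.com/INBOX'

If that is also slow, the delay is in the proxy/VPN path; if it is fast, the client's proxy handling is the more likely culprit.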

 

All ideas welcome.

Link to comment
  • 3 weeks later...
On 12/20/2023 at 7:16 PM, SirCadian said:

Just in case anyone else has this problem, I found the answer in this reddit thread.  I had to amend the LAN_NETWORK container variable in the binhex-privoxyvpn docker container to add my wireguard local tunnel network pool (you can find this in Unraid Settings - VPN Manager).  So, assuming your LAN range is 192.168.1.0/24 and your Wireguard Local Tunnel Network Pool is 10.253.0.0/24 then you should set it as follows.

image.png.690d0bef37ce93cb19f019e34b1f4e02.png

 

Hope this helps anyone having similar problems.

 

Thank you so much! I wanted to add that this also works with ZeroTier, by simply adding your ZeroTier network to it. In case anyone using ZeroTier is reading this: you might also be running into what is described here if you're on Unraid >6.12.x and you can't access any service in Docker through ZeroTier.

Link to comment
  • 3 weeks later...
2 hours ago, alturismo said:

download and place the needed ovpn files in the appdata location.

 

OK, here's the appdata location. Where should I place the ovpn files here? Should I create a new folder?

[screenshot attached: appdata folder (hide.png)]

Edited by HHUBS
Link to comment
7 minutes ago, HHUBS said:

OK, here's the appdata location. Where should I place the ovpn files here? Should I create a new folder?

 

I'm not using it (anymore), but I guess when you set the variable to openvpn it'll create the folder for you; for the rest, I would just try it and look in the logs for what's missing ;)

  • Like 1
Link to comment

Thanks. I have already successfully set up privoxyvpn with hide.me. Now I want to route containers, for example qbittorrent, through this VPN, and the guide here => How to Route Any Docker Container Through VPN in Unraid | WhiteMatterTech  says that I need to input 8080 for the qbittorrent GUI in VPN_OUTPUT_PORTS. However, I can't access the GUI unless I set it in VPN_INPUT_PORTS. May I ask what's the difference between the two?

 

I can't seem to understand all of the documentation. Can you explain it in a manner that a newbie can understand?

Quote

IMPORTANT
Please note 'VPN_INPUT_PORTS' is NOT to define the incoming port for the VPN, this environment variable is used to define port(s) you want to allow in to the VPN network when network binding multiple containers together, configuring this incorrectly with the VPN provider assigned incoming port COULD result in IP leakage, you have been warned!.

 

Also, qBittorrent uses ports 8080 and 6881. Should I enter both of these in VPN_OUTPUT_PORTS?
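(For illustration of the distinction as quoted above, not an authoritative answer: VPN_INPUT_PORTS allows traffic in to the shared VPN network, which is why the WebUI on 8080 only works when listed there; VPN_OUTPUT_PORTS allows bound containers to reach ports out on the LAN. Peer traffic on 6881 goes out through the tunnel, so it would not normally need to be in either list:)

-e 'VPN_INPUT_PORTS'='8080'   # LAN -> bound container (e.g. the qBittorrent WebUI)
-e 'VPN_OUTPUT_PORTS'=''      # bound container -> LAN services, if any are needed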

Link to comment
