ati

Everything posted by ati

  1. I am waiting as well, and from what I can tell this issue will not be fixed by LimeTech. I think if we want to move on from 6.8.x to anything newer, we'll have to run through the steps outlined above. I am still holding out for something, but again, I'm not holding my breath. What frustrates me the most is that this worked in 6.8.x and not in 6.9 and onwards, so it is something they could potentially address.
  2. Well, it is working again. I waited a day and then I was able to connect again. No clue why. Regardless, no settings changes; I just was a little more patient. Maybe it was a cache issue with my browser or something.
  3. I am struggling to figure out what happened to my container. Yesterday I had a momentary power loss which took my internet down, but my UPS kept my unRAID server online. I restored the internet and found the container GUI inaccessible a day or so later. I figured it was related to the internet loss breaking the VPN connection. No biggie. I restarted the container to re-establish the connection, but had no luck getting back into the GUI. Nothing has changed, no configuration change, nothing, but now it won't work. I dug through the startup log and cannot find a glaring error either. Any guidance would be appreciated. Container command (passwords removed):

# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='' -e 'VPN_PASS'='' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.10.0/24' -e 'NAME_SERVERS'='85.203.37.1,85.203.37.2' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='7878,5800,5900,9117,8989,9897' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -p '7878:7878/tcp' -p '9117:9117/tcp' -p '8989:8989/tcp' -p '9897:9897/tcp' -p '8686:8686/tcp' -v '/mnt/user/Downloads/Downloads/':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'

Log from startup (passwords removed and IPs changed):
[binhex ASCII-art banner trimmed]
https://hub.docker.com/u/binhex/
2022-08-05 13:41:36.554341 [info] Host is running unRAID
2022-08-05 13:41:36.606632 [info] System information Linux 8290c25a0b63 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 GNU/Linux
2022-08-05 13:41:36.664016 [info] OS_ARCH defined as 'x86-64'
2022-08-05 13:41:36.721235 [info] PUID defined as '99'
2022-08-05 13:41:38.346542 [info] PGID defined as '100'
2022-08-05 13:41:39.759059 [info] UMASK defined as '000'
2022-08-05 13:41:39.814989 [info] Permissions already set for '/config'
2022-08-05 13:41:39.877230 [info] Deleting files in /tmp (non recursive)...
2022-08-05 13:41:39.942277 [info] VPN_ENABLED defined as 'yes'
2022-08-05 13:41:39.999234 [info] VPN_CLIENT defined as 'openvpn'
2022-08-05 13:41:40.052263 [info] VPN_PROV defined as 'custom'
2022-08-05 13:41:40.115871 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn
2022-08-05 13:41:40.218392 [warn] VPN configuration file /config/openvpn/my_expressvpn_usa_-_chicago_udp.ovpn remote protocol is missing or malformed, assuming protocol 'udp'
2022-08-05 13:41:40.266851 [info] VPN remote server(s) defined as 'usa-chicago-ca-version-2.expressnetw.com,'
2022-08-05 13:41:40.313238 [info] VPN remote port(s) defined as '1195,'
2022-08-05 13:41:40.362507 [info] VPN remote protocol(s) defined as 'udp,'
2022-08-05 13:41:40.416098 [info] VPN_DEVICE_TYPE defined as 'tun0'
2022-08-05 13:41:40.469772 [info] VPN_OPTIONS not defined (via -e VPN_OPTIONS)
2022-08-05 13:41:40.524077 [info] LAN_NETWORK defined as '192.168.10.0/24'
2022-08-05 13:41:40.578576 [info] NAME_SERVERS defined as '85.203.37.1,85.203.37.2'
2022-08-05 13:41:40.632005 [info] VPN_USER defined as ''
2022-08-05 13:41:40.686763 [info] VPN_PASS defined as ''
2022-08-05 13:41:40.741505 [info] ENABLE_PRIVOXY defined as 'no'
2022-08-05 13:41:40.801958 [info] VPN_INPUT_PORTS defined as '7878,5800,5900,9117,8989,9897'
2022-08-05 13:41:40.857511 [info] VPN_OUTPUT_PORTS not defined (via -e VPN_OUTPUT_PORTS), skipping allow for custom outgoing ports
2022-08-05 13:41:40.913177 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
2022-08-05 13:41:40.968820 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
2022-08-05 13:41:41.026254 [info] Starting Supervisor...
2022-08-05 13:41:41,528 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
2022-08-05 13:41:41,528 INFO Set uid to user 0 succeeded
2022-08-05 13:41:41,533 INFO supervisord started with pid 6
2022-08-05 13:41:42,535 INFO spawned: 'shutdown-script' with pid 187
2022-08-05 13:41:42,537 INFO spawned: 'start-script' with pid 188
2022-08-05 13:41:42,539 INFO spawned: 'watchdog-script' with pid 189
2022-08-05 13:41:42,539 INFO reaped unknown pid 7 (exit status 0)
2022-08-05 13:41:42,578 DEBG 'start-script' stdout output: [info] VPN is enabled, beginning configuration of VPN
2022-08-05 13:41:42,579 INFO success: shutdown-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-08-05 13:41:42,579 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-08-05 13:41:42,579 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-08-05 13:41:42,666 DEBG 'start-script' stdout output: [info] Adding 85.203.37.1 to /etc/resolv.conf
2022-08-05 13:41:42,671 DEBG 'start-script' stdout output: [info] Adding 85.203.37.2 to /etc/resolv.conf
2022-08-05 13:41:43,045 DEBG 'start-script' stdout output: [info] Default route for container is 172.17.0.1
2022-08-05 13:41:43,069 DEBG 'start-script' stdout output: [info] Docker network defined as 172.17.0.0/16
2022-08-05 13:41:43,075 DEBG 'start-script' stdout output: [info] Adding 192.168.10.0/24 as route via docker eth0
2022-08-05 13:41:43,077 DEBG 'start-script' stdout output: [info] ip route defined as follows...
--------------------
2022-08-05 13:41:43,079 DEBG 'start-script' stdout output:
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
192.168.10.0/24 via 172.17.0.1 dev eth0
2022-08-05 13:41:43,079 DEBG 'start-script' stdout output:
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev eth0 table local proto kernel scope link src 172.17.0.5
local 172.17.0.5 dev eth0 table local proto kernel scope host src 172.17.0.5
broadcast 172.17.255.255 dev eth0 table local proto kernel scope link src 172.17.0.5
2022-08-05 13:41:43,079 DEBG 'start-script' stdout output: --------------------
2022-08-05 13:41:43,084 DEBG 'start-script' stdout output: iptable_mangle 16384 2 ip_tables 24576 5 iptable_filter,iptable_nat,iptable_mangle
2022-08-05 13:41:43,085 DEBG 'start-script' stdout output: [info] iptable_mangle support detected, adding fwmark for tables
2022-08-05 13:41:43,282 DEBG 'start-script' stdout output: [info] iptables defined as follows...
--------------------
2022-08-05 13:41:43,284 DEBG 'start-script' stdout output:
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A INPUT -s 149.19.196.239/32 -i eth0 -j ACCEPT
-A INPUT -s 45.39.44.2/32 -i eth0 -j ACCEPT
-A INPUT -s 45.39.44.105/32 -i eth0 -j ACCEPT
-A INPUT -s 149.19.196.116/32 -i eth0 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 7878 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 7878 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 5800 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 5800 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 5900 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 5900 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 9117 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 9117 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8989 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8989 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 9897 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 9897 -j ACCEPT
-A INPUT -s 192.168.10.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A OUTPUT -d 149.19.196.239/32 -o eth0 -j ACCEPT
-A OUTPUT -d 45.39.44.2/32 -o eth0 -j ACCEPT
-A OUTPUT -d 45.39.44.105/32 -o eth0 -j ACCEPT
-A OUTPUT -d 149.19.196.116/32 -o eth0 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 7878 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 7878 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 5800 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 5800 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 5900 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 5900 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 9117 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 9117 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8989 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8989 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 9897 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 9897 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 192.168.10.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT
2022-08-05 13:41:43,285 DEBG 'start-script' stdout output: --------------------
2022-08-05 13:41:43,286 DEBG 'start-script' stdout output: [info] Starting OpenVPN (non daemonised)...
2022-08-05 13:41:43,335 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 DEPRECATED OPTION: --cipher set to 'AES-256-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-256-CBC' to --data-ciphers or change --cipher 'AES-256-CBC' to --data-ciphers-fallback 'AES-256-CBC' to silence this warning.
2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
2022-08-05 13:41:43 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
2022-08-05 13:41:43 WARNING: file 'credentials.conf' is group or others accessible
2022-08-05 13:41:43,335 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 OpenVPN 2.5.7 [git:makepkg/a0f9a3e9404c8321+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 31 2022
2022-08-05 13:41:43 library versions: OpenSSL 1.1.1q 5 Jul 2022, LZO 2.10
2022-08-05 13:41:43 WARNING: --ns-cert-type is DEPRECATED. Use --remote-cert-tls instead.
2022-08-05 13:41:43 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
2022-08-05 13:41:43,337 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 Outgoing Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
2022-08-05 13:41:43 Incoming Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
2022-08-05 13:41:43,337 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 TCP/UDP: Preserving recently used remote address: [AF_INET]149.19.196.239:1195
2022-08-05 13:41:43 Socket Buffers: R=[212992->1048576] S=[212992->1048576]
2022-08-05 13:41:43 UDP link local: (not bound)
2022-08-05 13:41:43 UDP link remote: [AF_INET]149.19.196.239:1195
2022-08-05 13:41:43,356 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 TLS: Initial packet from [AF_INET]149.19.196.239:1195, sid=3e2c64b4 29850e4d
2022-08-05 13:41:43,379 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 VERIFY OK: depth=1, C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=ExpressVPN CA, emailAddress=support@expressvpn.com
2022-08-05 13:41:43,380 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 VERIFY OK: nsCertType=SERVER
2022-08-05 13:41:43 VERIFY X509NAME OK: C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=Server-11070-0a, emailAddress=support@expressvpn.com
2022-08-05 13:41:43 VERIFY OK: depth=0, C=VG, ST=BVI, O=ExpressVPN, OU=ExpressVPN, CN=Server-11070-0a, emailAddress=support@expressvpn.com
2022-08-05 13:41:43,406 DEBG 'start-script' stdout output:
2022-08-05 13:41:43 Control Channel: TLSv1.3, cipher TLSv1.3 TLS_AES_256_GCM_SHA384, peer certificate: 2048 bit RSA, signature: RSA-SHA256
2022-08-05 13:41:43 [Server-11070-0a] Peer Connection Initiated with [AF_INET]149.19.196.239:1195
2022-08-05 13:41:44,648 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 SENT CONTROL [Server-11070-0a]: 'PUSH_REQUEST' (status=1)
2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 10.122.0.1,comp-lzo no,route 10.122.0.1,topology net30,ping 10,ping-restart 60,ifconfig 10.122.1.218 10.122.1.217,peer-id 122,cipher AES-256-GCM'
2022-08-05 13:41:44 OPTIONS IMPORT: timers and/or timeouts modified
2022-08-05 13:41:44 OPTIONS IMPORT: compression parms modified
2022-08-05 13:41:44 OPTIONS IMPORT: --ifconfig/up options modified
2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 OPTIONS IMPORT: route options modified
2022-08-05 13:41:44 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
2022-08-05 13:41:44 OPTIONS IMPORT: peer-id set
2022-08-05 13:41:44 OPTIONS IMPORT: adjusting link_mtu to 1629
2022-08-05 13:41:44 OPTIONS IMPORT: data channel crypto options modified
2022-08-05 13:41:44 Data Channel: using negotiated cipher 'AES-256-GCM'
2022-08-05 13:41:44 NCP: overriding user-set keysize with default
2022-08-05 13:41:44 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
2022-08-05 13:41:44 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
2022-08-05 13:41:44 net_route_v4_best_gw query: dst 0.0.0.0
2022-08-05 13:41:44 net_route_v4_best_gw result: via 172.17.0.1 dev eth0
2022-08-05 13:41:44,667 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 ROUTE_GATEWAY 172.17.0.1/255.255.0.0 IFACE=eth0 HWADDR=02:42:ac:11:00:05
2022-08-05 13:41:44,668 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 TUN/TAP device tun0 opened
2022-08-05 13:41:44 net_iface_mtu_set: mtu 1500 for tun0
2022-08-05 13:41:44,668 DEBG 'start-script' stdout output:
2022-08-05 13:41:44 net_iface_up: set tun0 up
2022-08-05 13:41:44 net_addr_ptp_v4_add: 10.122.1.218 peer 10.122.1.217 dev tun0
2022-08-05 13:41:44 /root/openvpnup.sh tun0 1500 1557 10.122.1.218 10.122.1.217 init
2022-08-05 13:41:46,914 DEBG 'start-script' stdout output:
2022-08-05 13:41:46 net_route_v4_add: 149.19.196.239/32 via 172.17.0.1 dev [NULL] table 0 metric -1
2022-08-05 13:41:46 net_route_v4_add: 0.0.0.0/1 via 10.122.1.217 dev [NULL] table 0 metric -1
2022-08-05 13:41:46 net_route_v4_add: 128.0.0.0/1 via 10.122.1.217 dev [NULL] table 0 metric -1
2022-08-05 13:41:46,914 DEBG 'start-script' stdout output:
2022-08-05 13:41:46 net_route_v4_add: 10.122.0.1/32 via 10.122.1.217 dev [NULL] table 0 metric -1
2022-08-05 13:41:46 Initialization Sequence Completed
2022-08-05 13:41:50,730 DEBG 'start-script' stdout output: [info] Attempting to get external IP using 'http://checkip.amazonaws.com'...
2022-08-05 13:41:50,870 DEBG 'start-script' stdout output: [info] Successfully retrieved external IP address 85.237.194.94
2022-08-05 13:41:50,872 DEBG 'start-script' stdout output: [info] Application does not require port forwarding or VPN provider is != pia, skipping incoming port assignment
2022-08-05 13:41:50,959 DEBG 'watchdog-script' stdout output: [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP 10.122.1.218 different, marking for reconfigure
2022-08-05 13:41:50,967 DEBG 'watchdog-script' stdout output: [info] Deluge not running
2022-08-05 13:41:50,973 DEBG 'watchdog-script' stdout output: [info] Deluge Web UI not running
2022-08-05 13:41:50,974 DEBG 'watchdog-script' stdout output: [info] Attempting to start Deluge...
[info] Removing deluge pid file (if it exists)...
2022-08-05 13:41:51,973 DEBG 'watchdog-script' stdout output: [info] Deluge key 'listen_interface' currently has a value of '10.122.0.138'
[info] Deluge key 'listen_interface' will have a new value '10.122.1.218'
[info] Writing changes to Deluge config file '/config/core.conf'...
2022-08-05 13:41:52,494 DEBG 'watchdog-script' stdout output: [info] Deluge key 'outgoing_interface' currently has a value of 'tun0'
[info] Deluge key 'outgoing_interface' will have a new value 'tun0'
[info] Writing changes to Deluge config file '/config/core.conf'...
2022-08-05 13:41:52,984 DEBG 'watchdog-script' stdout output: [info] Deluge key 'default_daemon' currently has a value of 'e2015e2ba35049b9aea47ad89d31b6a5'
[info] Deluge key 'default_daemon' will have a new value 'e2015e2ba35049b9aea47ad89d31b6a5'
[info] Writing changes to Deluge config file '/config/web.conf'...
2022-08-05 13:41:54,409 DEBG 'watchdog-script' stdout output: [info] Deluge process started
[info] Waiting for Deluge process to start listening on port 58846...
2022-08-05 13:41:54,741 DEBG 'watchdog-script' stdout output: [info] Deluge process listening on port 58846
2022-08-05 13:42:01,909 DEBG 'watchdog-script' stderr output: <Deferred at 0x14bb4ee22e30 current result: None>
2022-08-05 13:42:02,044 DEBG 'watchdog-script' stdout output: [info] No torrents with state 'Error' found
2022-08-05 13:42:02,044 DEBG 'watchdog-script' stdout output: [info] Starting Deluge Web UI...
2022-08-05 13:42:02,045 DEBG 'watchdog-script' stdout output: [info] Deluge Web UI started
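Since the startup log above looks clean, these are the checks I plan to run next to narrow down whether it's the tunnel or the web UI that's broken. This is just a sketch from my setup: the container name is mine, and I'm assuming curl is present in the image (the init script itself uses it to fetch the external IP).

```shell
#!/bin/bash
# confirm the tunnel is actually carrying traffic (should print the VPN exit IP):
docker exec binhex-delugevpn curl -s http://checkip.amazonaws.com

# confirm the Deluge web UI answers locally inside the container:
docker exec binhex-delugevpn curl -sI http://localhost:8112

# and re-check the most recent supervisor output for anything after startup:
docker logs --tail 50 binhex-delugevpn
```

If the UI answers locally but not from the LAN, that points at the iptables/LAN_NETWORK side rather than Deluge itself.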
  4. Well, I am not sure what I did, but I had 2 versions of the same container: one went through binhex-delugevpn and the other didn't. Somehow they got mixed up. As soon as I deleted both of those, everything was peachy. Strange. I probably screwed something up a while back when setting them up by not keeping the appdata folders unique. Once a rebuild was required it broke everything, as I hadn't rebuilt the containers in months.
  5. I use binhex-delugevpn as a proxy container for many services. Today I went to add binhex-lidarr to binhex-delugevpn. Steps:
     1. Downloaded binhex-lidarr and set its network to none with the extra parameter '--net=container:binhex-delugevpn'
     2. Started the binhex-lidarr container
     3. Realized I forgot to add the port mapping in the binhex-delugevpn container and edited it
     4. Added an 8686:8686 TCP port mapping to binhex-delugevpn and rebuilt it
     5. This is where everything went sideways
     Generally an update to binhex-delugevpn would cause all the containers routed through it to rebuild. This time they didn't: they all said "rebuild ready" and did nothing. My GUI was freaking out. The auto-start icons were flashing, the resource usage counters were flashing, and the unRAID refresh logo was popping in and out every 2-3 seconds. At that point I couldn't select anything on the screen, because by the time I could click it would reload/refresh. So I did the following:
     1. From the Dashboard screen, stopped all the containers (didn't fix anything)
     2. Slowly disabled all the auto-starts by timing my clicks
     3. Stopped and restarted the Docker service from Settings (no change)
     4. Found no obvious errors/issues in the logs
     Even with all my containers stopped, the Docker GUI is not working. I deleted the binhex-lidarr container and removed the port mapping from binhex-delugevpn. Still nothing. I have somehow managed to screw everything up. What is interesting is that if I start the containers from the Dashboard page they work fine, but if I go to the Docker page in the GUI it is unusable.
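For reference, the pattern I follow when routing an app container through the VPN container looks like this. It's a trimmed sketch of my own setup (the real commands carry all the -e VPN flags and volumes shown elsewhere in this thread); container and image names are mine.

```shell
#!/bin/bash
# 1) the VPN container publishes its own ports PLUS the routed app's port
#    (8686 here, which is why I had to edit binhex-delugevpn's mappings):
docker run -d --name=binhex-delugevpn \
  -p 8112:8112/tcp -p 8686:8686/tcp \
  binhex/arch-delugevpn

# 2) the app container gives up its own network stack and joins the VPN
#    container's; it therefore has no -p mappings of its own:
docker run -d --name=binhex-lidarr \
  --net=container:binhex-delugevpn \
  binhex/arch-lidarr
```

The gotcha I hit above: recreating the VPN container invalidates the --net=container reference, so every container joined to it has to be recreated afterwards as well.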
  6. I tried searching around, but couldn't find anything that really matched what I was looking for. If this has already been answered, please just point me there. I have created a custom Docker network for my SWAG container: proxynetwork (172.18.0.0/24). I have 3 containers on the proxynetwork:
     1. SWAG
     2. Service 1
     3. Service 2
     Service 1 and Service 2 are reverse proxied through SWAG, which is mapped to port 1443 on my unRAID server's LAN IP address, and port 443 is forwarded to SWAG via the unRAID LAN IP. What bothers me is that if I SSH into any of the 3 containers on my proxynetwork, I can access any other LAN resource. I'd like to firewall those containers off from accessing any LAN resource, basically making a DMZ of sorts. Because unRAID NATs the container network (proxynetwork) to the LAN subnet unRAID sits on (bridge mode), I am unsure I can make firewall rules at my router. Not to mention I'd prefer to lock it down inside unRAID if possible. I am looking in the unRAID network settings and see the routing table, but no place to add firewall rules/iptables entries. My only other thought is to create a DMZ VLAN, make unRAID VLAN aware, and then put those containers in that VLAN somehow. I am not exactly sure of the process or whether that will even achieve my goal. Thanks.
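One avenue I'm considering, sketched below: Docker provides a DOCKER-USER iptables chain on the host specifically for user rules, and rules there filter all forwarded container traffic. The subnet values are from my setup above; I'd want to sanity-check this before relying on it.

```shell
#!/bin/bash
# Run on the unRAID host (not in a container). Blocks NEW connections from
# the proxynetwork subnet to the LAN, while replies to LAN-initiated
# sessions (e.g. my reverse-proxy hits) still pass via conntrack.
iptables -I DOCKER-USER \
  -s 172.18.0.0/24 -d 192.168.0.0/16 \
  -m conntrack --ctstate NEW -j DROP
```

Caveat I'm aware of: DOCKER-USER only sees forwarded traffic, so this would not stop a container from reaching the unRAID host itself (that path goes through INPUT), and the rule does not survive a reboot unless it is re-applied from a startup script.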
  7. One thing to note. In your guide you never said to add the Photonix container to your photonix_net Docker network. I set a username and password in the Docker settings, but it just stays at loading every time I log in. I've tried restarting the container with no change. It is pointed at a directory with 15 photos for testing, so it shouldn't be taking too long to load them I'd assume. I cannot even log in when I run the container in demo mode.
  8. Lovely. Now I have to rebuild my array because I updated my OS. Might be time to roll back to 6.8 which was rock solid for me...
  9. I have an 8TB IronWolf from June 2020 in my unRAID with no issues since the initial pre-clear. Today I took the plunge and upgraded from 6.8 to 6.9, and within a few hours of the update I got a notification from my system that I had a drive in a disabled state. I am not very strong in this department, so I am hoping people with more HDD knowledge than me can help shed some light on my SMART results and recommend a course of action. The array is currently stopped pending what I learn here. Plus, I am not really sure what the warranty process is like with Seagate. How do I prove a drive failure and get a replacement? Thank you. Disk error log:
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] 4096-byte physical blocks
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Write Protect is off
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Mode Sense: 7f 00 10 08
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun 8 11:14:03 unRAID kernel: sdh: sdh1
Jun 8 11:14:03 unRAID kernel: sd 5:0:6:0: [sdh] Attached SCSI disk
Jun 8 11:14:36 unRAID emhttpd: ST8000VN004-2M2101_WKD1WM01 (sdh) 512 15628053168
Jun 8 11:14:36 unRAID kernel: mdcmd (2): import 1 sdh 64 7814026532 0 ST8000VN004-2M2101_WKD1WM01
Jun 8 11:14:36 unRAID kernel: md: import disk1: (sdh) ST8000VN004-2M2101_WKD1WM01 size: 7814026532
Jun 8 11:14:36 unRAID emhttpd: read SMART /dev/sdh
Jun 8 11:56:57 unRAID emhttpd: spinning down /dev/sdh
Jun 8 14:16:40 unRAID kernel: sd 5:0:6:0: [sdh] tag#2816 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
Jun 8 14:16:43 unRAID kernel: sd 5:0:6:0: [sdh] Synchronizing SCSI cache
Jun 8 14:16:43 unRAID kernel: sd 5:0:6:0: [sdh] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
Jun 8 14:16:43 unRAID kernel: scsi 5:0:6:0: [sdh] tag#3201 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=19s
Jun 8 14:16:43 unRAID kernel: scsi 5:0:6:0: [sdh] tag#3201 CDB: opcode=0x88 88 00 00 00 00 02 49 3c b2 f8 00 00 00 80 00 00
Jun 8 14:16:43 unRAID kernel: blk_update_request: I/O error, dev sdh, sector 9818649336 op 0x0:(READ) flags 0x0 phys_seg 16 prio class 0
Jun 8 14:17:07 unRAID emhttpd: read SMART /dev/sdh
ST8000VN004-2M2101_WKD1WM01-20210608-1429.txt
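For the warranty question, these are the smartctl commands I'd run from the unRAID console to gather evidence either way (sdh is the device from the log above; Seagate generally wants a failed self-test or failing SMART attributes to approve an RMA):

```shell
#!/bin/bash
# full SMART attribute dump plus the drive's internal error log:
smartctl -a /dev/sdh

# kick off self-tests; short takes a couple of minutes,
# long is a full surface scan (many hours on an 8TB drive):
smartctl -t short /dev/sdh
smartctl -t long /dev/sdh

# read the self-test results once a test has completed:
smartctl -l selftest /dev/sdh
```

Note the drive has to stay spun up for the long test, so it's best run with the array stopped, as it is now.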
  10. Well, I mean, that is how subnet masking works. You can summarize (catch all) if you set it up correctly, but whether this docker can support that or not, I don't know. I changed my LAN_NETWORK to "192.168.10.0/24,192.168.130.0/24" and still no luck. The reason I didn't mention the other networks is that I don't feel it matters; I need to get one working first. No use troubleshooting 7 things at once. I logged into the docker and verified my networks were in the iptables rules:
sh-5.1# iptables --list-rules
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
-A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
-A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
-A INPUT -s ***.***.***.***/32 -i eth0 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 8112 -j ACCEPT
-A INPUT -s 192.168.10.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
-A INPUT -s 192.168.130.0/24 -d 172.17.0.0/16 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i tun0 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
-A OUTPUT -d ***.***.***.***0/32 -o eth0 -j ACCEPT
-A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
-A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
-A OUTPUT -d ***.***.***.***/32 -o eth0 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m udp --sport 8112 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 192.168.10.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
-A OUTPUT -s 172.17.0.0/16 -d 192.168.130.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT
I am not sure what else to try. This is rather frustrating, as I don't have this issue with any other container, so it must be in the iptables rules somewhere. I am just not overly familiar with how they're implemented here. I am a little confused that port 58846 is specifically called out as allowed from the remote subnets while 8112 isn't, given that 8112 is the port the Deluge GUI runs on.
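Next thing I intend to try, sketched below: watch the rule counters while I hit the web UI from 192.168.130.50, so I can see whether packets are arriving and which rule (or the DROP policy) is eating them. Container name is from my setup; both commands use standard iptables/curl options.

```shell
#!/bin/bash
# zero the counters, attempt the GUI from the remote subnet, then look at
# per-rule packet counts; a climbing policy-DROP count on INPUT means the
# traffic arrives but matches no ACCEPT rule.
docker exec binhex-delugevpn iptables -Z
docker exec binhex-delugevpn iptables -L INPUT -v -n --line-numbers

# also confirm Deluge answers locally, ruling the daemon itself out:
docker exec binhex-delugevpn curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8112
```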
  11. I'll give that a go, though I don't see the difference. I have other devices on other subnets contained within that /16 network that I would like to be able to access the docker as well. What is the difference between 192.168.0.0/16 and the individual subnets 192.168.10.0/24, 192.168.20.0/24, 192.168.30.0/24, 192.168.110.0/24, 192.168.120.0/24, and 192.168.130.0/24? The br0 network isn't my goal; it was just for testing to try to gather more data. I need the container to run in bridge mode anyway in order to route other dockers through this one.
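To illustrate the summarization point I keep making: every one of my /24s falls inside 192.168.0.0/16, which is easy to verify with a few lines of plain bash (nothing here is specific to the container, it's just the mask arithmetic):

```shell
#!/bin/bash
# convert dotted-quad IPv4 to a 32-bit integer
ip_to_int() {
  local IFS='.' a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# succeed (exit 0) when address $1 lies inside CIDR block $2
in_subnet() {
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# my PC on the "other" subnet is covered by the /16 but not by unRAID's /24:
in_subnet 192.168.130.50 192.168.0.0/16  && echo "130.50 inside 192.168.0.0/16"
in_subnet 192.168.130.50 192.168.10.0/24 || echo "130.50 outside 192.168.10.0/24"
```

So a LAN_NETWORK of 192.168.0.0/16 is a strict superset of listing the /24s individually; if the /16 doesn't work but the comma-separated list does, that would point at how the container parses the value rather than at the masking itself.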
  12. I am running into some trouble with what I believe is the LAN_NETWORK parameter and the iptables rules blocking me from accessing the Deluge web GUI from a different subnet, depending on the Docker network configuration. I tried the binhex FAQ and some other searches, but I just couldn't find anything that applied to my particular situation.
     Here is setup ONE:
     unRAID is at 192.168.10.40/24.
     The PC I am attempting to access the binhex-delugevpn web GUI from is at 192.168.130.50/24.
     LAN_NETWORK is set to 192.168.0.0/16 to encompass everything.
     Docker is configured as a bridge network. Here is the docker run command:
user@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='bridge' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='user' -e 'VPN_PASS'='pass' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.0.0/16' -e 'NAME_SERVERS'='nameserver' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '8112:8112/tcp' -p '58846:58846/tcp' -p '58946:58946/tcp' -p '58946:58946/udp' -p '8118:8118/tcp' -v '/mnt/user/appdata/data':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'
     When I run this docker, the VPN connects (verified via the docker log), but I cannot access it from my PC on a different subnet. However, if I fire up a Firefox docker, also in bridge mode, I can access the Deluge web GUI from it.
     Here is setup TWO:
     unRAID is at 192.168.10.40/24.
     The PC I am attempting to access the binhex-delugevpn web GUI from is at 192.168.130.50/24.
     LAN_NETWORK is set to 192.168.10.0/24.
     Docker is configured as a br0 network with an IP of 192.168.10.210/24. Here is the docker run command:
user@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-delugevpn' --net='br0' --ip='192.168.10.210' --privileged=true -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'TCP_PORT_8112'='8112' -e 'TCP_PORT_58846'='58846' -e 'TCP_PORT_58946'='58946' -e 'UDP_PORT_58946'='58946' -e 'TCP_PORT_8118'='8118' -e 'VPN_ENABLED'='yes' -e 'VPN_USER'='user' -e 'VPN_PASS'='pass' -e 'VPN_PROV'='custom' -e 'VPN_CLIENT'='openvpn' -e 'VPN_OPTIONS'='' -e 'STRICT_PORT_FORWARD'='yes' -e 'ENABLE_PRIVOXY'='no' -e 'LAN_NETWORK'='192.168.10.0/24' -e 'NAME_SERVERS'='nameserver' -e 'DELUGE_DAEMON_LOG_LEVEL'='info' -e 'DELUGE_WEB_LOG_LEVEL'='info' -e 'VPN_INPUT_PORTS'='' -e 'VPN_OUTPUT_PORTS'='' -e 'DEBUG'='false' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/cache/appdata/data':'/data':'rw' -v '/mnt/cache/appdata/binhex-delugevpn/':'/config':'rw' --sysctl="net.ipv4.conf.all.src_valid_mark=1" 'binhex/arch-delugevpn'
     When I run this docker, the VPN connects (verified via the docker log) and I can access it from my PC on a different subnet. What is even more confusing is that if I fire up the same Firefox docker and try to access the Deluge web GUI, I cannot, even though they are both on the very same layer 2 network.
     The main reason I am trying to get this figured out is that I have 2-3 other dockers I'd like to route through this binhex-delugevpn docker to get the VPN benefits. However, for that to work, the dockers must be configured in bridge mode. As referenced above, when I am in bridge mode I am unable to access the web GUIs of the various services contained within those dockers.
  13. Not quite what I was getting at, but I get it. I want to have my unRAID server at 192.168.10.50/24. I want to have a docker (binhex-delugevpn) at 192.168.10.100/24. I want to have another docker run through binhex-delugevpn and be accessible at 192.168.10.100 (because its ports are passed through binhex-delugevpn). They're all on the same network; no layer 3 required, just different IP addresses. I don't want binhex-delugevpn sharing my unRAID server's address at 192.168.10.50/24.
  14. I am slowly trying to learn about routing one docker container through another. I've watched SpaceInvaderOne's video, which was a great help. I have also read up on the changes to the binhex dockers with regard to passthrough. What I am most curious about is the networking configuration. I typically prefer to use br0 networks for my dockers to keep different workloads on different addresses. Right or wrong, this is just how I currently have everything set up. What I am learning is that if I want to have one docker route through another docker (binhex-delugevpn in my case) I need to use bridge networking. Is that correct, and is there no other way around it? I initially set up binhex-delugevpn as a br0 network on my server and got everything working and running fine. Now that I've started playing with inter-docker routing, I tried changing my binhex-delugevpn docker to bridge, and I can no longer access the UI. As soon as I change it back to br0 it's all good again (probably an unrelated issue). Is there a way to maintain a br0 docker network and still route traffic through that docker? I believe it comes down to ports: once you're in br0 mode, the port mappings are ignored, unlike in bridge networking.
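While experimenting, this is how I've been confirming which network mode a container actually ended up on after an edit (container name is from my setup; both are standard docker subcommands):

```shell
#!/bin/bash
# prints the network mode the container is attached to,
# e.g. 'bridge', 'br0', or 'container:<id>' for a routed container:
docker inspect --format '{{.HostConfig.NetworkMode}}' binhex-delugevpn

# lists the published port mappings; my understanding is this comes back
# empty on br0, which matches the "port mappings are ignored" behavior:
docker port binhex-delugevpn
```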
  15. I am not worried about data loss. I am just worried about how unRAID will handle the system, appdata and domains folders going missing. Will it automatically recreate them on the array until I replace the cache setup and move them over or something?
  16. That makes sense. Thank you for the help. I am assuming the hardware errors were on the cache pool, or were you referring to the data array as well? What is the best way to recover from this being that my system, appdata and domains folders are on the failing cache pool?
  17. It shouldn't be a 3 device pool. I set it up with only 2 drives. I did replace one a while back; could that be the third device? There are no historical drives listed if I stop the array. Plus, the drive it thinks is missing is also being reported as present?
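One thing I may try from the console: btrfs itself can report how many devices it thinks belong to the pool, which might explain the phantom third device (e.g. a replaced drive that was never removed from the filesystem's metadata). A sketch, assuming the pool is mounted at /mnt/cache:

```shell
# List every device btrfs believes is part of this filesystem,
# including any "missing" entry left over from a replacement.
btrfs filesystem show /mnt/cache

# If a stale missing device is listed, it can normally be dropped
# while the pool is mounted read-write (NOT while it is read-only):
# btrfs device delete missing /mnt/cache
```

I am only speculating that a leftover device record is the cause; the delete command is commented out on purpose until the output confirms it.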
  18. I believe my 2 drive BTRFS cache pool is failing and is stuck in read-only mode. I cannot get my Docker service to start:

Sep 30 08:35:59 unRAID root: ERROR: unable to resize '/var/lib/docker': Read-only file system
Sep 30 08:35:59 unRAID root: Resize '/var/lib/docker' of 'max'
Sep 30 08:35:59 unRAID emhttpd: shcmd (216): /etc/rc.d/rc.docker start
Sep 30 08:35:59 unRAID root: starting dockerd ...
Sep 30 08:36:14 unRAID emhttpd: shcmd (218): umount /var/lib/docker

I am trying to use the mover to clear out my cache drives so I can replace them, but that will not work either. I figured that would be fine for a read-only file system, but I guess not. It should be moving from cache to disk1:

Sep 30 08:32:32 unRAID root: mover: started
Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE1.mp4
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE1.mp4 error: Read-only file system
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE2.mkv
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE2.mkv error: Read-only file system
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE3.mkv
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE3.mkv error: Read-only file system
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE4.mkv
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE4.mkv error: Read-only file system
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
Sep 30 08:32:32 unRAID move: move: file /mnt/cache/Movies/MOVIE5.mp4
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies/MOVIE5.mp4 error: Read-only file system
Sep 30 08:32:32 unRAID move: move: create_parent: /mnt/cache/Movies error: Read-only file system
Sep 30 08:32:32 unRAID move: move_object: /mnt/cache/Movies: Read-only file system
Sep 30 08:32:33 unRAID move: move_object: /mnt/disk1/isos: Read-only file system
Sep 30 08:32:33 unRAID move: move: file /mnt/disk2/isos/ubuntu-20.04.1-desktop-amd64.iso
Sep 30 08:32:33 unRAID move: move: create_parent: /mnt/disk2/isos error: Read-only file system
Sep 30 08:32:33 unRAID move: move_object: /mnt/disk2/isos: Read-only file system

This issue came about because one of the main drives in my array had some read errors recently. So yesterday I stopped the array, pulled the drive, and replaced it. I started the array and allowed it to rebuild. This morning I noticed my Docker service had failed to start, so I did a little digging. Fix Common Problems called out that my cache drive pool was mounted in read-only mode. I am assuming because of the number of errors? One other strange thing: when I start the array I get a notification that one of the cache pool disks is missing, but it doesn't show as missing after the array starts. I tried stopping and starting the array again with no change. I just rebooted the server as well, just to see; no change either. I'd like to try and move everything off the cache pool into the array so I can replace both cache drives, as both have issues. Looking for some guidance, as I am an unRAID newbie and a little lost in my current situation. unraid-diagnostics-20200930-0844.zip
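My working theory is that the mover fails because it has to delete source files from the cache after copying them, and deletes are still writes as far as the read-only pool is concerned. Reads should still work, so one thing I am considering is copying the data off manually before replacing the drives. A sketch with my paths (copy, verify, and only then wipe the pool):

```shell
# Check per-device btrfs error counters to see how bad things are.
btrfs device stats /mnt/cache

# Copy (not move) everything off the read-only pool onto the array.
# rsync only reads from the source, so the read-only mount is fine.
rsync -avh --progress /mnt/cache/Movies/  /mnt/disk1/Movies/
rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata/

# Dry-run comparison with checksums before trusting the copies.
rsync -avhn --checksum /mnt/cache/appdata/ /mnt/disk1/appdata/
```

I am not certain this is the recommended recovery path, just what seems safe given that the pool still mounts read-only.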
  19. My word, that is embarrassing. That was it. Thank you.
  20. Works fine in shell. The issue appears to be with FileBot...

/tmp # cd /storage/temp_files/
/tmp # touch testfile
/tmp # cp testfile /storage/Movies/
/tmp # ls /storage/Movies/ | grep test
testfile
  21. I am not using any automation, I am trying to do it all manually inside FileBot, so they're all blank. Do I need them even if I am not using the AMC script?
  22. I am running into an issue with permissions, and it doesn't seem to come up in this thread via search. I have 3 shares:

1 - A temporary storage area on my cache drive (/mnt/user/temp_files)
2 - Movies folder (/mnt/user/Movies)
3 - TV Shows folder (/mnt/user/TV Shows)

I have my Docker container set up to pass /mnt/user into the container as /storage. When I bring up the WebUI I can configure FileBot to pull my media files from the folder on my cache drive and tag them, but when I try to copy them to the parity-protected share on the array I get an error. I tried using move in FileBot first, and it created all the folders but never moved the content. When I switch to copy I get the following error: I am a little lost as to what to do.
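In case it helps anyone hitting something similar, these are the checks I would run from the container's console to confirm it really is a permissions problem (the container name "FileBot" and the usual unRAID PUID/PGID of 99/100 are assumptions from my setup):

```shell
# Open a shell inside the running container.
docker exec -it FileBot /bin/sh

# Which user/group is the process actually running as?
# On unRAID this is typically uid=99(nobody) gid=100(users).
id

# Does that user have write access to the target share?
ls -ld /storage/Movies
touch /storage/Movies/.writetest && \
  rm /storage/Movies/.writetest && echo "write OK"
```

If the touch fails while the shell test from a root console succeeds, the mismatch between the container's UMASK/PUID/PGID and the share's ownership would be my first suspect.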
  23. Yeah, the disk has some pending sectors. I can understand that it dropped because it's failing; that's fair. That doesn't explain why the unRAID webUI still wasn't reading correctly on the dashboard page, though. Plus, if the drive reconnected, why didn't unRAID recognize that it was back and add it back into the cache pool? I'm a little lost as to why there is no notification of a missing disk whatsoever on the main page, like there would be for a data drive. It just seems like I'm missing something more...
  24. I went to bed last night and realized I never posted the diagnostics. 🤦‍♂️ Since my first post I have removed 1 unassigned device (not the one in question) and added 3 more that are currently in a pre-clear process, otherwise nothing has changed. unraid-diagnostics-20200819-0621.zip
  25. I recently set up a new cache RAID-1 for running a few dockers. Nothing fancy, so I used some old mechanical drives. I basically slapped them both into unRAID, assigned them as cache drives, and it did the rest of the work making the RAID-1 array. I then used the unBalance plugin to move the default 4 folders to those drives. Overnight, one of the cache drives went missing (screenshot 1). What is strange is that the main page in unRAID doesn't show the drive as missing (screenshot 2), but it does show up under the Unassigned Devices section. What is even more strange to me is that when I go to the main unRAID dashboard, it doesn't even show the same Unassigned Devices as the main page does (screenshot 3). I am super lost and a little confused. I haven't stopped the array or restarted yet, but I am sure that would fix the issue this time around. I am more interested in why it happened and how I can prevent it in the future. What worries me most is that the drive doesn't show as missing in the webUI, and the main and dashboard pages don't agree on the Unassigned Devices.