zyurph

Members
  • Content Count

    21
  • Joined

  • Last visited

Community Reputation

0 Neutral

About zyurph

  • Rank
    Member


  1. I'm running version 6.7.2. Here is some history on what has been going on and led to the current situation. I recently had some hardware issues where my add-on SATA controller failed. I used to have 8 drives connected to my MB and 2 parity + 2 cache connected to a four-port SATA controller. After it failed I purchased a 16-port SAS controller, bought breakout cables, and now have all 12 drives connected to it. When the controller failed I didn't realize that two of my data drives got put into parity slots after a reboot, and the server proceeded to rebuild parity. However, once I got my new controller and put things back the way they were, to my surprise the data on the two drives that had been pulled from the array into parity got rebuilt once the original parity drives were back online with the new controller.
     I then had some issues with Dockers and my VM, which I pass my GPU and NVMe SSD through to. NOTE: while working to resolve these issues I upgraded from 6.7.0 to 6.7.2. I just got all of that redone, and while I was working to restore Plex I noticed a ton of files were missing. I looked around and found that a few folders were blank in the share. After further investigation I found that the files were still on the disk, just not showing in the share (a quick way to confirm this is sketched at the end of this post). I then found a pile of xfs errors in the log (see attached). I did some searching in the forum but wasn't able to find anything directly related to my issue. Any help would be greatly appreciated. Thanks, Zyurph tower-syslog-20190912-0805.zip
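     Here is roughly how I confirmed the files were on disk but not in the share; a minimal sketch, with the disk number and share path as examples (unRAID builds each user share as the union of /mnt/disk* plus cache):

         ls -la /mnt/disk3/Media/Movies   # files are present on the individual disk...
         ls -la /mnt/user/Media/Movies    # ...but missing from the user share view

     And before changing anything, a read-only XFS check can be run with the array started in Maintenance mode (the md device number is an example; use the affected disk's):

         xfs_repair -n /dev/md3           # -n = no-modify, report problems only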
  2. Thanks, that fixed it. I had been running a custom network for a good while without issue; some update in either Deluge or Unraid must have broken it. Thanks again!
  3. Thanks for the reply, strike. My network type is "Custom : br0" and I'm using 10.1.1.201:8112 to connect to the WebUI.
  4. It has been a while since I have had time to post about my WebUI issues, but I really want to get things working again, so I thought I would lay them out once more and see if anyone has any ideas. For the past six months or so I have been unable to get the DelugeVPN WebUI to open with VPN enabled. I'm using PIA with an endpoint that supports port forwarding. I have looked at my log (attached): it connects with no issues and states that the WebUI has started, both with and without VPN enabled. I had this working for over a year prior to this, and I can't pinpoint what changed that made it stop. Thanks in advance to anyone who has time to take a look! supervisord-with-vpn.txt supervisord-without-vpn.txt
  5. @eb3k, thank you for the reply! I have tried this as well, even going so far as to remove the docker and its directory in appdata and re-install. Still the same result.
  6. Agreed, it is very bizarre; there seems to be an issue with the VPN and the WebUI working together, per my previous reply.
  7. My apologies, my more recent reply was a continuation of my previous messages. Note that I had been running binhex-delugevpn for over a year without an issue until recently. I have updated to the latest version of both Unraid and the binhex-delugevpn docker, and still no WebUI, so I now have the same issue on two different Unraid servers. And yes, I do have my PIA creds entered. Attached is my latest supervisord log. supervisord.log
  8. I went ahead and updated Unraid to 6.6.5, which purged all my docker containers and their respective folders in appdata. So I added binhex-delugevpn again and tried all the VPN endpoints listed here: https://www.privateinternetaccess.com/helpdesk/kb/articles/how-do-i-enable-port-forwarding-on-my-vpn and none of them let me reach the WebUI.
  9. If I need to provide more information, please let me know. I have been trying to determine what changed for over a week and I'm unable to figure it out. From my point of view, nothing has changed.
  10. So I have two Unraid servers, both running DelugeVPN and both with the same issue as I mentioned before: the WebUI is not loading. The difference here is that I haven't upgraded Unraid and am still running 6.4. supervisord.log:

     Created by binhex (https://hub.docker.com/u/binhex/)
     2018-11-12 14:55:12.217141 [info] Host is running unRAID
     2018-11-12 14:55:12.247686 [info] System information Linux 61e0a63334e7 4.14.13-unRAID #1 SMP PREEMPT Wed Jan 10 10:27:09 PST 2018 x86_64 GNU/Linux
     2018-11-12 14:55:12.289912 [info] PUID defined as '99'
     2018-11-12 14:55:12.326920 [info] PGID defined as '100'
     2018-11-12 14:55:12.397353 [info] UMASK defined as '000'
     2018-11-12 14:55:12.432684 [info] Permissions already set for volume mappings
     2018-11-12 14:55:12.472108 [info] VPN_ENABLED defined as 'yes'
     2018-11-12 14:55:12.512886 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/CA Toronto.ovpn
     dos2unix: converting file /config/openvpn/CA Toronto.ovpn to Unix format...
     2018-11-12 14:55:12.573468 [info] VPN remote line defined as 'remote ca-toronto.privateinternetaccess.com 1198'
     2018-11-12 14:55:12.607314 [info] VPN_REMOTE defined as 'ca-toronto.privateinternetaccess.com'
     2018-11-12 14:55:12.641545 [info] VPN_PORT defined as '1198'
     2018-11-12 14:55:12.681615 [info] VPN_PROTOCOL defined as 'udp'
     2018-11-12 14:55:12.715259 [info] VPN_DEVICE_TYPE defined as 'tun0'
     2018-11-12 14:55:12.748385 [info] VPN_PROV defined as 'pia'
     2018-11-12 14:55:12.781612 [info] LAN_NETWORK defined as '10.1.1.0/24'
     2018-11-12 14:55:12.815185 [info] NAME_SERVERS defined as '209.222.18.222,37.235.1.174,8.8.8.8,209.222.18.218,37.235.1.177,8.8.4.4'
     2018-11-12 14:55:12.848619 [info] VPN_USER defined as '******************'
     2018-11-12 14:55:12.883130 [info] VPN_PASS defined as '*******************************'
     2018-11-12 14:55:12.917235 [info] VPN_OPTIONS not defined (via -e VPN_OPTIONS)
     2018-11-12 14:55:12.952129 [info] STRICT_PORT_FORWARD defined as 'yes'
     2018-11-12 14:55:12.986867 [info] ENABLE_PRIVOXY defined as 'yes'
     2018-11-12 14:55:13.020477 [info] Starting Supervisor...
     2018-11-12 14:55:13,181 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
     2018-11-12 14:55:13,181 INFO Set uid to user 0 succeeded
     2018-11-12 14:55:13,184 INFO supervisord started with pid 7
     2018-11-12 14:55:14,186 INFO spawned: 'start-script' with pid 139
     2018-11-12 14:55:14,188 INFO spawned: 'deluge-script' with pid 140
     2018-11-12 14:55:14,189 INFO spawned: 'deluge-web-script' with pid 141
     2018-11-12 14:55:14,191 INFO spawned: 'privoxy-script' with pid 142
     2018-11-12 14:55:14,191 INFO reaped unknown pid 8
     2018-11-12 14:55:14,194 DEBG 'start-script' stdout output: [info] VPN is enabled, beginning configuration of VPN
     2018-11-12 14:55:14,194 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2018-11-12 14:55:14,195 INFO success: deluge-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2018-11-12 14:55:14,195 INFO success: deluge-web-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2018-11-12 14:55:14,195 INFO success: privoxy-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2018-11-12 14:55:14,195 DEBG 'deluge-script' stdout output: [info] Deluge config file already exists, skipping copy
     2018-11-12 14:55:14,197 DEBG 'deluge-script' stdout output: [info] VPN is enabled, checking VPN tunnel local ip is valid
     2018-11-12 14:55:14,273 DEBG 'start-script' stdout output: [info] Default route for container is 10.1.1.254
     2018-11-12 14:55:14,280 DEBG 'start-script' stdout output: [info] Adding 209.222.18.222 to /etc/resolv.conf
     2018-11-12 14:55:14,286 DEBG 'start-script' stdout output: [info] Adding 37.235.1.174 to /etc/resolv.conf
     2018-11-12 14:55:14,292 DEBG 'start-script' stdout output: [info] Adding 8.8.8.8 to /etc/resolv.conf
     2018-11-12 14:55:14,296 DEBG 'start-script' stdout output: [info] Adding 209.222.18.218 to /etc/resolv.conf
     2018-11-12 14:55:14,300 DEBG 'start-script' stdout output: [info] Adding 37.235.1.177 to /etc/resolv.conf
     2018-11-12 14:55:14,304 DEBG 'start-script' stdout output: [info] Adding 8.8.4.4 to /etc/resolv.conf
     2018-11-12 14:55:14,520 DEBG 'start-script' stdout output: [info] Adding 10.1.1.0/24 as route via docker eth0
     2018-11-12 14:55:14,521 DEBG 'start-script' stderr output: RTNETLINK answers: File exists
     2018-11-12 14:55:14,522 DEBG 'start-script' stdout output: [info] ip route defined as follows...
     --------------------
     default via 10.1.1.254 dev eth0
     10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.14
     --------------------
     2018-11-12 14:55:14,526 DEBG 'start-script' stdout output:
     iptable_mangle 16384 1
     ip_tables 24576 3 iptable_mangle,iptable_filter,iptable_nat
     2018-11-12 14:55:14,526 DEBG 'start-script' stdout output: [info] iptable_mangle support detected, adding fwmark for tables
     2018-11-12 14:55:14,549 DEBG 'start-script' stdout output: [info] Docker network defined as 10.1.1.0/24
     2018-11-12 14:55:14,689 DEBG 'start-script' stdout output: [info] iptables defined as follows...
     --------------------
     -P INPUT DROP
     -P FORWARD ACCEPT
     -P OUTPUT DROP
     -A INPUT -i tun0 -j ACCEPT
     -A INPUT -s 10.1.1.0/24 -d 10.1.1.0/24 -j ACCEPT
     -A INPUT -i eth0 -p udp -m udp --sport 1198 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
     -A INPUT -i eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
     -A INPUT -s 10.1.1.0/24 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
     -A INPUT -s 10.1.1.0/24 -d 10.1.1.0/24 -i eth0 -p tcp -j ACCEPT
     -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
     -A INPUT -i lo -j ACCEPT
     -A OUTPUT -o tun0 -j ACCEPT
     -A OUTPUT -s 10.1.1.0/24 -d 10.1.1.0/24 -j ACCEPT
     -A OUTPUT -o eth0 -p udp -m udp --dport 1198 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
     -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
     -A OUTPUT -d 10.1.1.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
     -A OUTPUT -s 10.1.1.0/24 -d 10.1.1.0/24 -o eth0 -p tcp -j ACCEPT
     -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
     -A OUTPUT -o lo -j ACCEPT
     --------------------
     2018-11-12 14:55:14,693 DEBG 'start-script' stdout output: [info] Starting OpenVPN...
     Mon Nov 12 14:55:14 2018 WARNING: file 'credentials.conf' is group or others accessible
     Mon Nov 12 14:55:14 2018 OpenVPN 2.4.6 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 24 2018
     Mon Nov 12 14:55:14 2018 library versions: OpenSSL 1.1.0h 27 Mar 2018, LZO 2.10
     2018-11-12 14:55:14,703 DEBG 'start-script' stdout output: [info] OpenVPN started
     Mon Nov 12 14:55:14 2018 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
     Mon Nov 12 14:55:14 2018 TCP/UDP: Preserving recently used remote address: [AF_INET]172.98.67.59:1198
     Mon Nov 12 14:55:14 2018 UDP link local: (not bound)
     Mon Nov 12 14:55:14 2018 UDP link remote: [AF_INET]172.98.67.59:1198
     Mon Nov 12 14:55:15 2018 [68a030da2d6504d0f8ecfa90e2d37ef9] Peer Connection Initiated with [AF_INET]172.98.67.59:1198
     Mon Nov 12 14:55:16 2018 auth-token received, disabling auth-nocache for the authentication token
     Mon Nov 12 14:55:16 2018 TUN/TAP device tun0 opened
     Mon Nov 12 14:55:16 2018 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
     Mon Nov 12 14:55:16 2018 /usr/bin/ip link set dev tun0 up mtu 1500
     Mon Nov 12 14:55:16 2018 /usr/bin/ip addr add dev tun0 local 10.41.10.6 peer 10.41.10.5
     Mon Nov 12 14:55:16 2018 /root/openvpnup.sh tun0 1500 1558 10.41.10.6 10.41.10.5 init
     Mon Nov 12 14:55:16 2018 Initialization Sequence Completed
     2018-11-12 14:55:16,778 DEBG 'privoxy-script' stdout output: [info] Configuring Privoxy...
     2018-11-12 14:55:16,781 DEBG 'deluge-script' stdout output: [info] Deluge not running
     2018-11-12 14:55:16,781 DEBG 'deluge-script' stdout output: [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP 10.41.10.6 different, marking for reconfigure
     2018-11-12 14:55:16,873 DEBG 'start-script' stdout output: [info] Attempting to curl http://209.222.18.222:2000/?client_id=682e86652ac00990c16b49ba2c92702c148703c477183c67e5cc19207b46af28...
     2018-11-12 14:55:17,056 DEBG 'privoxy-script' stdout output: [info] All checks complete, starting Privoxy...
     2018-11-12 14:55:17.057 1460d90e70c0 Info: Privoxy version 3.0.26
     2018-11-12 14:55:17.057 1460d90e70c0 Info: Program name: /usr/bin/privoxy
     2018-11-12 14:55:17,695 DEBG 'start-script' stdout output: [info] Successfully retrieved external IP address 173.239.230.58
     2018-11-12 14:55:18,063 DEBG 'start-script' stdout output: [info] Curl successful for http://209.222.18.222:2000/?client_id=682e86652ac00990c16b49ba2c92702c148703c477183c67e5cc19207b46af28, response code 200
     2018-11-12 14:55:18,783 DEBG 'deluge-script' stdout output: [info] Attempting to start Deluge...
     2018-11-12 14:55:19,228 DEBG 'deluge-script' stdout output:
     [info] Deluge listening interface currently defined as 10.12.10.6
     [info] Deluge listening interface will be changed to 10.41.10.6
     [info] Saving changes to Deluge config file /config/core.conf...
     2018-11-12 14:55:19,825 DEBG 'deluge-web-script' stdout output: [info] Starting Deluge webui...
     2018-11-12 14:55:20,312 DEBG 'deluge-script' stdout output: Setting random_port to False.. Configuration value successfully updated.
     2018-11-12 14:55:20,830 DEBG 'deluge-script' stdout output: Setting listen_ports to (43018, 43018).. Configuration value successfully updated.
     2018-11-12 14:55:20,857 DEBG 'deluge-script' stdout output: [info] Deluge started

     Thoughts anyone?
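     For narrowing this down, a minimal check I can run; the container name and IPs are from my setup (substitute your own), and curl is available inside the container since the start script itself uses it:

         docker exec binhex-delugevpn curl -sI http://localhost:8112   # is the WebUI listening inside the container?
         curl -sI http://10.1.1.14:8112                                # is it reachable from a machine on the LAN?

     If the first returns an HTTP response but the second times out, the problem sits between the LAN and the container (iptables rules or LAN_NETWORK), not in Deluge itself.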
  11. I have a similar problem: the WebUI stopped working. I had been using it for more than a year before this happened. I have removed the docker image, deleted the delugevpn folder on Unraid, then reinstalled, and still no dice. I have confirmed it works with VPN disabled. I have tried many different endpoints, and strict port forwarding disabled, with no luck. Log attached. supervisord.log
  12. Were you ever able to find a solution to this? I'm seeing this too. I'm using hotplug as a workaround, but I have to use a laptop to do it, since my mouse and keyboard will just randomly disconnect sometimes on the passthrough VM I use as my primary workstation.
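      For reference, the hotplug workaround I mentioned boils down to attaching the USB device to the running VM through libvirt. A minimal sketch, assuming a VM named "Windows 10" and example vendor/product IDs (find your own with lsusb):

          cat > /tmp/kbd.xml <<'EOF'
          <hostdev mode='subsystem' type='usb'>
            <source>
              <vendor id='0x046d'/>
              <product id='0xc31c'/>
            </source>
          </hostdev>
          EOF
          virsh attach-device "Windows 10" /tmp/kbd.xml --live   # attach to the live VM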
  13. I have had that issue a few times when I was troubleshooting some vbios issues, but a shutdown and restart usually fixes it. What is your drive configuration for the VM: a user share, or are you passing through a dedicated HDD/SSD?
  14. If you look on your dashboard you can see the CPU cores in two columns; each row is a physical core and its hyperthreaded sibling. I would recommend that you keep related physical and hyperthreaded cores together on any VM. Edit: You can also see this in Tools --> System Devices:

      CPU Thread Pairings
      cpu 0  <===> cpu 14
      cpu 1  <===> cpu 15
      cpu 2  <===> cpu 16
      cpu 3  <===> cpu 17
      cpu 4  <===> cpu 18
      cpu 5  <===> cpu 19
      cpu 6  <===> cpu 20
      cpu 7  <===> cpu 21
      cpu 8  <===> cpu 22
      cpu 9  <===> cpu 23
      cpu 10 <===> cpu 24
      cpu 11 <===> cpu 25
      cpu 12 <===> cpu 26
      cpu 13 <===> cpu 27
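      The same pairings can also be read straight from the unRAID shell, if that's easier; each line of the first command lists a logical CPU together with its sibling on the same physical core:

          cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list   # e.g. "0,14" = cpu 0 and cpu 14 share a core
          lscpu --extended=CPU,CORE                                        # same information, one row per logical CPU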
  15. So I changed to a different vbios from techpowerup, and it's working now. Not sure if the card didn't like the one I was using, since there were three to choose from, or if I just botched it up in the hex editor. Either way, I'm just happy it's working. Again, thanks so much for your help Tuftuf!
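      For anyone else editing a vbios by hand, one quick sanity check (a sketch, with an example file name): a correctly trimmed ROM should begin with the standard PCI expansion-ROM signature bytes 0x55 0xAA, so if the vendor header wasn't fully removed those won't be the first two bytes:

          od -Ax -tx1 -N2 gtx1070-trimmed.rom   # expect: 000000 55 aa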