Rudder2

Everything posted by Rudder2

  1. I'm sorry for doubting you. You are right! The Dallas A02 exit node was broken!! I changed it to the Atlanta-a01 exit node and it says I have a VPN IP and a Deluge IP, but I still can't access the WebUI. And this time when I try to go to the proxy port it says "Invalid header received from client." instead of "This site can’t be reached ERR_CONNECTION_REFUSED"! Things are getting better! Here is my supervisord.log again. Hopefully it's the last time. You're AWESOME! Thank you for your help! supervisord.log
  2. Weird, because when I ping an IP address through the tunnel I get packets back, but I cannot ping web addresses because the names won't resolve. I think the problem is a DNS problem. (A quick way to check this is sketched after this list.)
  3. OK, I fixed the UMASK, deleted all the files in the APPS folder, and started again. I get to "Sun Feb 24 19:32:55 2019 Initialization Sequence Completed" but the WebUI and proxy are still not accessible. Here is the new supervisord.log. The tunnel appears to be passing traffic: when I ping from the container's bash, the tun5 packet counters go up. This has me confused. I'm starting to think I need to build a pfSense router and go that route, but I'd really rather just use this Docker if possible. I think I got my username and password fully out of this file... I hope... (A check of what's actually listening inside the container is sketched after this list.) Thank you for your help and time on this. supervisord.log
  4. I used !'s in it and it still gives the warning. When I bash into the container the tunnel responds to pings... This has me mind-boggled...

     [root@335280eb9ecb /]# ping 1.1.1.1
     PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
     64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=36.7 ms
     64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=35.7 ms
     64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=44.8 ms
     64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=36.0 ms
     64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=37.2 ms
     64 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=36.8 ms
     64 bytes from 1.1.1.1: icmp_seq=7 ttl=57 time=37.4 ms
     64 bytes from 1.1.1.1: icmp_seq=8 ttl=57 time=36.6 ms
     64 bytes from 1.1.1.1: icmp_seq=9 ttl=57 time=36.2 ms
     64 bytes from 1.1.1.1: icmp_seq=10 ttl=57 time=36.8 ms
     64 bytes from 1.1.1.1: icmp_seq=11 ttl=57 time=39.6 ms
     64 bytes from 1.1.1.1: icmp_seq=12 ttl=57 time=37.2 ms
     ^C
     --- 1.1.1.1 ping statistics ---
     12 packets transmitted, 12 received, 0% packet loss, time 22ms
     rtt min/avg/max/mdev = 35.743/37.599/44.827/2.380 ms

     [root@335280eb9ecb /]# ifconfig
     eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
           inet 172.17.0.16  netmask 255.255.0.0  broadcast 172.17.255.255
           ether 02:42:ac:11:00:10  txqueuelen 0  (Ethernet)
           RX packets 171  bytes 25136 (24.5 KiB)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 43  bytes 5317 (5.1 KiB)
           TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
           inet 127.0.0.1  netmask 255.0.0.0
           loop  txqueuelen 1000  (Local Loopback)
           RX packets 0  bytes 0 (0.0 B)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 0  bytes 0 (0.0 B)
           TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     tun5: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
           inet 172.21.94.34  netmask 255.255.254.0  destination 172.21.94.34
           unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
           RX packets 20  bytes 1680 (1.6 KiB)
           RX errors 0  dropped 0  overruns 0  frame 0
           TX packets 20  bytes 1680 (1.6 KiB)
           TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  5. My VPN provider requires special characters in the password.... (A note on passing those safely through the shell is sketched after this list.)
  6. I will change it. Sounds like it was good that I accidentally missed all the entries. Thank you for the tip. I use random passwords for all services.
  7. Here is a new supervisord.log. Please help. When I ping things in the container, the TX and RX counters on the tunnel increase, so the tunnel is working; something else just isn't, and I have no idea what. supervisord.log
  8. Any ideas anyone? More information needed? Please help.
  9. Hello, I've been trying to get this to work for a while. I bash into the container and the VPN is working, but Deluge is not accessible. I need a new set of eyes on it; there is so much information I cannot parse it all. (The kind of in-container checks involved are sketched after this list.) Thank you. supervisord.log
  10. Cool, thank you for the info. I use NIC teaming and never noticed a problem with LAN connectivity. Every once in a while the LAN transfer speed drops but comes right back in a fraction of a second.
  11. Interesting... My NIC is an on-board 4-port Intel NIC. Not sure where to start looking into that. What would cause such a thing? Think it's the class 6 bonding? I noticed every once in a while it would say that eth0 was down. (A quick bond health check is sketched after this list.)
  12. My server was performing great. I have been moving a lot of files, one disk at a time, to update my movie library to be compatible with Radarr. My server went into the weekly parity check while I was working, so I quit and went to bed. This morning my Dockers were failing: they can't access the App Data share. So I started to investigate and found that the Shares menu in unRAID is empty. I SSHed into the server, ran mc, and sure enough /mnt/user is red and inaccessible. I've added 3.7k items to the Radarr library over the last couple of days, which means I've moved 3.7k files to new folders to make the media compatible with Radarr. All the data is still on my /mnt/cache and my /mnt/diskN. Ironically, /mnt/user0 works fine. Here is my diagnostics file. I can't reboot or anything as my parity check is only 88% complete. This has never happened before in the 4.6 years I've been using unRAID; even the initial data copy, 16TB, never gave me a problem. Any help is appreciated. (A few diagnostic commands for the user-share mount are sketched after this list.) Thank you for your time. rudder2-server-diagnostics-20190208-0852.zip
  13. Are you running a Windows 10 VM with GPU passthrough? If so, I had this problem going from 6.5.0 to 6.6.0 through 6.6.6, and it was the Windows 10 VM that was at fault. Here is my bug report and what I did to fix it.
  14. Also, make sure that MSI Interrupts are on. I had the system lock up again: when Windows 10 upgraded, it turned MSI Interrupts off again and the problem returned. I turned MSI Interrupts back on and the problem was cured again. The root problem is 100% confirmed: it's the Windows 10 VM causing the system to hang when it's not using MSI Interrupts. Just posting this because I had the problem again this weekend when Windows 10 updated, for the sake of the next person reading this. (A quick way to verify MSI is in use from the unRAID side is sketched after this list.)
  15. After the reinstall of the Windows 10 VM I'm now on the latest driver without issues. On my old VM I just can't use GPU passthrough anymore. I wish I could figure out what changed, but I don't have the time to compare the two Windows installs.
  16. Great advice! Never thought about that. I should be able to see if all the drives are assigned correctly right before I start the array. Now that I think about it, I might set all my Dockers and VMs to not auto-start as well. (A way to record the drive assignments before the swap is sketched after this list.)
  17. Hello all, First I want to say, updating to 6.6.x has been a bear. I had a VM problem after upgrading that took down my entire system, and once I got it stable I discovered that the poor computer I thought would last me 10 years I outgrew after 4.5 years, so I ordered an upgrade. Now to the question: how do you recommend I move to the new hardware? I plan on just swapping the stuff over and plugging in the unRAID USB. Should I make preparations on the unRAID system before I do this, or should I be OK just swapping everything over? Just trying to make sure this goes smoothly without data loss or confusing anything else.
      My Current System:
      * The base computer: ASRock Z97 Extreme6 with an i7-4790K, 32GB RAM, LSI 9207-8i controller
      The New System:
      * 4U 24-bay X9DRI-LN4F+ Rev 1.20 IPASS server
      * 2x E5-2690 V2 3GHz 10-core
      * 64GB RAM (4x 16GB PC3-10600R)
      * 3x LSI 9210-8i HBA controllers
      * 24x 3.5" caddies
      * Dual PSU
      The Stuff I'm Switching Over:
      * 1x 32GB unRAID USB flash drive
      * 8x 4TB data drives
      * 2x 4TB parity drives
      * 1x 1TB Samsung 860 Pro SSD cache drive
      * Asus ROG 960GTX OC GPU passthrough card
      * 1x Logitech USB for VM keyboard and mouse
      * 1x no-name USB for console keyboard and mouse
      Thank you for your time and help in advance. Thinking I should ask before I run into a problem with such a big hardware change.
  18. I'm playing with Dockers, CPU pinning, and hardware transcoding in Plex to band-aid the problem, but I guess I just need to upgrade to a beefier system to do what I want. It's hard for me to believe that a 4790K couldn't handle this; it has for years. It seems like when I upgraded to 6.6.x a lot of things went wrong. Thank you for pointing out the load logs; I have now learned to read my CPU load logs because of y'all! Thank you! I just wish they would do away with Mono for Lidarr, Sonarr, Radarr, etc. and write them to run natively on Linux. (A CPU-pinning example for a Docker container is sketched after this list.)
  19. OK, so the once-and-for-all fix for this problem was to create a new Windows 10 VM with all the steps I followed above. I hated having to reconfigure a whole new VM, but I did. The lesson learned here is to back up your vdisk once you have everything installed and running properly, so all you have to do is restore and update instead of reconfiguring everything. (A vdisk backup sketch follows this list.)
  20. @SpaceInvaderOne I'm having a problem with GPU passthrough since I upgraded to 6.6.x. I got it to go away by enabling MSI Interrupts for a couple of weeks, but I pushed my luck and updated the nVidia drivers, and it came back. The problem is that within 15 minutes the Windows 10 VM will freeze the unRAID host; if I remove the GPU passthrough it all works fine. I repeated all the steps, including rolling back the driver to the 2018-12-03 version and trying again, and I cannot get it to work. I didn't have to use a ROM dump. My GPU passthrough worked for the last 2 or 3 years without a problem until 6.6.x came out; it doesn't work with any 6.6.x version. Seeing as you seem to be the expert on GPU passthrough, I'm desperately asking for your help. Thank you in advance. Here is the bug report that I closed thinking the problem was solved once and for all. It has everything I did to fix it. unRAID was stumped; we didn't know at first that it was the VM causing the problem...
  21. ***WARNING WARNING WARNING!!!*** Do not install the nVidia 2018-12-12 driver... the problem comes back and cannot be removed by re-enabling MSI Interrupts!! I can't even get it back up by rolling back the driver.
  22. After Googling my problem I might have an answer. I forgot that I configured Jackett on my server just a couple of weeks ago; it's the only new process on the server, and there are comments here saying it causes this behavior: https://github.com/Jackett/Jackett/issues/2198
  23. So, I undid my CPU isolation for my VM and the problem happened again. Honestly, I've never had this happen before, and I've been running everything I have on this server for years. Here is the new diagnostics file; if you could please review it and see if anything stands out to you. Thank you for your time and help in this very important matter. I can't have my server going down every 8 to 18 hours because of this problem. I have a CPU I was told was overkill, and it just started acting up the same week I started this post. I've been considering getting a much beefier system even though I didn't think I had to; I'm starting to think it wouldn't matter, because even after un-isolating my cores it's the same. rudder2-server-diagnostics-20181230-1627.zip
  24. I've found Lidarr takes WAY more CPU than I think it should. Every time the CPU is at 100%, it looks like it's Lidarr. Usually when the system is at 100% CPU it still functions fine; this is the first time I've had a problem with the WebUI hanging... I will stop Lidarr and see what happens. (A per-container CPU check is sketched after this list.)
  25. It just happened in 8 hours this time. Here is the new diagnostics file. rudder2-server-diagnostics-20181227-1744.zip
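
Quick-check sketches

For post 2 (pings to a raw IP work but names don't resolve): a minimal sketch of checking DNS inside the container; the container name delugevpn is illustrative, not taken from the posts above.

    # open a shell inside the running container
    docker exec -it delugevpn /bin/bash
    # which resolvers is the container actually using?
    cat /etc/resolv.conf
    # test resolution directly against a known-good public resolver
    nslookup www.google.com 1.1.1.1    # or: getent hosts www.google.com
    # if the explicit-resolver lookup works but the plain one fails, the
    # configured resolver is unreachable through the tunnel; the container
    # can be started with an explicit resolver (e.g. docker run --dns 1.1.1.1)
    # or via whatever name-server variable the image documents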
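
For post 3 (tunnel up, WebUI and proxy unreachable): a sketch of checking whether anything is actually listening on the expected ports. 8112 and 8118 are the usual Deluge WebUI and Privoxy defaults; the container name is again illustrative.

    # list listening TCP sockets inside the container
    docker exec delugevpn ss -tlnp        # or: netstat -tlnp, if ss is missing
    # expect LISTEN entries on 8112 (Deluge WebUI) and 8118 (Privoxy);
    # if they are there, test the published port from the unRAID host
    curl -v http://127.0.0.1:8112/
    # no LISTEN entries usually means the daemon never started, which points
    # back to the deluge-web / privoxy sections of supervisord.log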
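
For posts 4-6 (special characters such as ! in the VPN password): when credentials are typed on a shell command line, single quotes keep bash from history-expanding ! or expanding $. The variable names and image name below are placeholders, not necessarily what this particular image uses.

    # single quotes pass the value through to the container untouched
    docker run -d --name delugevpn \
        -e VPN_USER='myuser' \
        -e VPN_PASS='p@ss!word$123' \
        some/vpn-image:latest    # placeholder image name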
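
For post 9 (checking the VPN from inside the container): a sketch of the kind of checks that confirm traffic really leaves through the tunnel; container name and IP-lookup service are illustrative.

    # shell into the running container
    docker exec -it delugevpn /bin/bash
    # confirm the tun interface is up and has an address (tun5 in the posts above)
    ifconfig
    # the public IP seen from inside the container should be the VPN
    # provider's address, not the home WAN address
    curl -s https://ifconfig.io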
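
For posts 10-11 (NIC teaming and eth0 occasionally reported down): bond health can be read straight from the kernel on the unRAID host, which usually shows whether one slave link is flapping.

    # bonding mode, active slave, and per-slave link status / failure counts
    cat /proc/net/bonding/bond0
    # recent link up/down events for the ports
    dmesg | grep -iE 'bond0|eth0|link' | tail -n 40
    # negotiated speed and duplex on the physical port
    ethtool eth0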
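
For post 12 (/mnt/user gone while /mnt/user0 and the individual disks still work): on unRAID the user shares are a FUSE mount provided by shfs, so a first, read-only diagnostic pass looks like this (nothing is remounted).

    # is the user-share filesystem still mounted?
    mount | grep /mnt/user
    # is the shfs process still alive?
    ps aux | grep '[s]hfs'
    # any shfs / FUSE errors around the time the shares disappeared?
    grep -iE 'shfs|fuse' /var/log/syslog | tail -n 50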
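
For posts 13-14 (Windows updates silently turning MSI Interrupts back off): whether the passed-through GPU is actually using MSI can be verified from the unRAID host while the VM is running, which makes a silent revert easy to spot. The PCI address 01:00.0 is only an example; use the card's real address from lspci.

    # look at the GPU's interrupt capability while the VM is running
    lspci -vv -s 01:00.0 | grep -i msi
    #   "MSI: Enable+"  means the guest is using message-signaled interrupts
    #   "MSI: Enable-"  means it has fallen back to line-based interrupts
    # the same information appears in the host interrupt table
    grep -i vfio /proc/interrupts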
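
For posts 16-17 (moving the array to new hardware): unRAID tracks drives by serial number rather than controller port, so recording which serial sits in which slot before the swap makes the post-swap check easy. The paths and file names below are just one way to do it; a screenshot of the Main page works too.

    # save every disk's model/serial to the flash drive for reference
    ls -l /dev/disk/by-id/ | grep -v part > /boot/drive-serials-$(date +%Y%m%d).txt
    # keep a copy of the flash config, which holds the array assignments
    cp -r /boot/config /boot/config-backup-$(date +%Y%m%d)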
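
For post 18 (CPU pinning as a band-aid): outside the unRAID GUI, the same pinning can be applied straight through Docker, for example to keep a hungry container off the cores a VM or a Plex transcode needs. Container name and core numbers are illustrative.

    # restrict a running container to cores 4-7 without recreating it
    docker update --cpuset-cpus="4-7" lidarr
    # or cap it at roughly two cores' worth of CPU time
    docker update --cpus="2.0" lidarr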
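
For post 19 (back up the vdisk once the VM finally works): with the VM shut down, a plain copy of the disk image plus the VM definition is enough to restore from. The paths below are common unRAID defaults; adjust to the actual domains share and VM name.

    # copy the virtual disk somewhere safe while the VM is stopped
    rsync -ah --progress /mnt/user/domains/Windows10/vdisk1.img \
        /mnt/user/backups/Windows10-vdisk1-$(date +%Y%m%d).img
    # save the VM definition as well
    virsh dumpxml "Windows 10" > /mnt/user/backups/Windows10-$(date +%Y%m%d).xml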
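
For post 24 (suspecting Lidarr is the CPU hog): rather than stopping containers one at a time, per-container usage can be read in one shot.

    # snapshot of CPU and memory usage for every running container
    docker stats --no-stream
    # narrower view: just container name and CPU percentage
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}"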