Rudder2

Everything posted by Rudder2

  1. Been using unRAID since the summer of 2014. It's the best server solution I've used. Here's to many more years! Keep up the good work!
  2. +1. I know the throughput would be slower and there are lots of other arguments against using WiFi for a server, but for some people the benefit of placing the server in locations where Ethernet is not practical or possible would be nice. My little brother hasn't set up his unRAID server because of the WiFi problem. I know there are ways around it, such as WiFi-to-Ethernet adapters, but the ability for someone to just put a WiFi card in the case would be nice.
  3. Please implement the ability for unRAID to spin down SAS drives automatically when idle. I understand there is an argument that spinning down lowers drive life, but there is already a feature to disable spin-down for those who don't want it. I like the drives spinning down to save power. Also, the parity drives are spun down for most of their life on my unRAID system, as my cache drive is huge and only syncs at 03:00 or when the cache drive hits 70% usage. I've searched the forums and seen others who want this feature, but couldn't find it as a formal feature request. It's already a feature of FreeNAS. I'm not going to switch from unRAID, you guys rock; I'd just like it on the road map so it becomes a feature one day, preferably sooner rather than later. I do understand you're busy, and I'm not a programmer, so I have no idea what I'm asking you to do. Thank you for such a wonderful server product! (A manual workaround is sketched below.)
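     In the meantime, SAS drives can be spun down and back up by hand with the sg3_utils/sdparm tools. A minimal sketch, assuming those packages are available on the console and with /dev/sdX as a placeholder for the actual device:

        # Spin a SAS drive down (issues a SCSI START STOP UNIT "stop")
        sg_start --stop /dev/sdX
        # Spin it back up
        sg_start --start /dev/sdX

        # Equivalent commands via sdparm
        sdparm --command=stop /dev/sdX
        sdparm --command=start /dev/sdX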
  4. Thank you for the information. I have SATA enterprise drives from Seagate and they came with it enabled by default. I picked up these SAS Seagate drives and it was disabled. Thank you for the correction; it makes sense. I have a UPS so no issues. I've never had a problem even with power outages on my computers over many years, so it sounds like the risk is small, but I can understand that in an enterprise environment even a small risk is too much.
  5. You are the man! This was my problem. Man, it would have saved me 24 hours if I could have found this. LOL. I wonder why unRAID is set to have write cache off by default on SAS drives. It's a drive setting, not an unRAID setting. (Corrected by jomathanm below.) It's not a problem now that I know. Thank you so much for your help!
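     For anyone else who hits this, the SAS write cache can be checked and toggled from the console with sdparm. A sketch, assuming sdparm is installed and with /dev/sdX as a placeholder:

        # Show the current Write Cache Enable (WCE) bit
        sdparm --get=WCE /dev/sdX
        # Turn the write cache on for the current settings
        sdparm --set=WCE /dev/sdX
        # Turn it on and save it to the drive's non-volatile mode page so it persists
        sdparm --set=WCE --save /dev/sdX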
  6. I'm experiencing what I consider a slow parity disk rebuild, 50 MB/s. I replaced my parity 1 disk with a bigger disk, and I plan on replacing the parity 2 disk with a bigger one in the next month or two so I can start using bigger disks. I recently upgraded my server to a Supermicro 24-bay system.
     Primary specs:
     CPU: 2x Intel Xeon E5-2690 V2 deca (10) core 3 GHz
     RAM: 64GB DDR3 (4 x 16GB DDR3 PC3-10600R REG ECC)
     Hard drives: not included
     Storage controller: 24 ports via 3x LSI 9210-8i HBA controllers, installed in the PCI-E slots near the power supply so dual graphics cards can still be installed
     NIC: integrated onboard 4x 1Gb Ethernet
     Secondary specs:
     Chassis/case: Supermicro 4U, 24x 3.5" drive bays, single node, CSE-846BA-R920B (upgraded to 920W-SQ quiet power supplies)
     Motherboard: X9DRi-LN4F+ Rev 1.20
     Backplane: BPN-SAS-846A 24-port 4U SAS 6Gbps direct-attached backplane, supports up to 24x 3.5-inch SAS2/SATA3 HDD/SSD
     PCI-Express slots: full height, 4x PCI-E 3.0 x16, 1x PCI-E 3.0 x8, 1x PCI-E 3.0 x4 (in an x8 slot)
     Integrated quad Intel 1000BASE-T ports
     Integrated IPMI 2.0 management
     24x 3.5" Supermicro caddies
     I don't see a reason for the parity rebuild to be only 50 MB/s. I'm including my diagnostics file. I followed the suggested procedure and verified that all my cards are running in PCIe x8 mode. My LSI cards are PCIe 2.0; I want to upgrade to 3.0 someday, but that's not high on the priority list. My BIOS says all the PCIe slots are in PCIe Gen3 x16 mode, with the exception of the x8 slot, which is in x8, but they are definitely over the required standard for the LSI cards. I had been reading and researching for 24 hours before posting here for an extra set of eyes. Thank you for your help in advance. rudder2-server-diagnostics-20190407-1333.tar.gz
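     One way to double-check the negotiated link from the unRAID console, as a sketch (the bus address is a placeholder; use the addresses lspci reports for your own HBAs):

        # Find the LSI HBAs and note their PCI bus addresses
        lspci | grep -i lsi
        # Show the advertised and negotiated link speed/width for one HBA
        lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
        # A PCIe 2.0 x8 card at full speed should report LnkSta: Speed 5GT/s, Width x8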
  7. Port 8112 is in use, so I have port 8112 mapped to 8114 through unRAID.
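     In other words, the container still listens on 8112 internally and only the host-side port changes. As a rough sketch of the equivalent Docker mapping (the image name is only illustrative of this kind of container, and the real template needs the VPN settings as well):

        # Host port 8114 -> container port 8112, so the WebUI is reached at http://SERVER:8114
        docker run -d --name=delugevpn -p 8114:8112 binhex/arch-delugevpn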
  8. I have my LAN Network set to 192.168.2.0/24 and Network Type set to Bridge, not any of the others; I tried them just for shits and giggles and they give errors. My LAN has 192.168.2.1 as the first IP and 192.168.2.254 as the last IP, i.e. 192.168.2.1 with a subnet mask of 255.255.255.0.
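     A quick way to sanity-check that the /24 really corresponds to that mask and host range, assuming python3 is available somewhere handy:

        python3 -c "import ipaddress; n = ipaddress.ip_network('192.168.2.0/24'); h = list(n.hosts()); print(n.netmask, h[0], h[-1])"
        # prints: 255.255.255.0 192.168.2.1 192.168.2.254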
  9. I'm sorry for doubting you. You are right, the Dallas A02 exit node was broken!! I changed it to the Atlanta-a01 exit node and it says I have a VPN IP and a Deluge IP, but I still can't access the WebUI. And this time when I try to go to the proxy port it says "Invalid header received from client." instead of "This site can't be reached ERR_CONNECTION_REFUSED"! Things are getting better! Here is my supervisord.log again. Hopefully it's the last time. You're AWESOME! Thank you for your help! supervisord.log
  10. Weird, because when I ping an IP address through the tunnel I get packets back. I cannot ping web addresses, as it can't resolve the names. I think it's a DNS problem.
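      A quick way to confirm that from inside the container, as a sketch (assuming the usual tools are present in the image):

        # Raw IP works, so the tunnel itself is up
        ping -c 3 1.1.1.1
        # If resolution is broken, pinging by name will fail...
        ping -c 3 github.com
        # ...while asking a public resolver directly should still work
        nslookup github.com 1.1.1.1
        # Check which resolvers the container is actually configured to use
        cat /etc/resolv.conf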
  11. OK, I fixed the UMASK and deleted all the files in the APPS folder and started again. I get to "Sun Feb 24 19:32:55 2019 Initialization Sequence Completed" but the WebUI and proxy are still not accessible. Here is the new supervisord.log. The tunnel appears to be passing traffic; when I ping from the container's bash, the tun5 packet counters go up. This has me confused. I'm starting to think I need to build a pfSense router and go that route, but I'd really rather just use this docker if possible. I think I got my username and password fully out of this file... I hope... Thank you for your help and time on this. supervisord.log
  12. I used !'s in it and it still gives the warning. When I bash into the container the tunnel responds to pings... This has me mind-boggled...

      [root@335280eb9ecb /]# ping 1.1.1.1
      PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
      64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=36.7 ms
      64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=35.7 ms
      64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=44.8 ms
      64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=36.0 ms
      64 bytes from 1.1.1.1: icmp_seq=5 ttl=57 time=37.2 ms
      64 bytes from 1.1.1.1: icmp_seq=6 ttl=57 time=36.8 ms
      64 bytes from 1.1.1.1: icmp_seq=7 ttl=57 time=37.4 ms
      64 bytes from 1.1.1.1: icmp_seq=8 ttl=57 time=36.6 ms
      64 bytes from 1.1.1.1: icmp_seq=9 ttl=57 time=36.2 ms
      64 bytes from 1.1.1.1: icmp_seq=10 ttl=57 time=36.8 ms
      64 bytes from 1.1.1.1: icmp_seq=11 ttl=57 time=39.6 ms
      64 bytes from 1.1.1.1: icmp_seq=12 ttl=57 time=37.2 ms
      ^C
      --- 1.1.1.1 ping statistics ---
      12 packets transmitted, 12 received, 0% packet loss, time 22ms
      rtt min/avg/max/mdev = 35.743/37.599/44.827/2.380 ms

      [root@335280eb9ecb /]# ifconfig
      eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 172.17.0.16  netmask 255.255.0.0  broadcast 172.17.255.255
              ether 02:42:ac:11:00:10  txqueuelen 0  (Ethernet)
              RX packets 171  bytes 25136 (24.5 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 43  bytes 5317 (5.1 KiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

      lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
              inet 127.0.0.1  netmask 255.0.0.0
              loop  txqueuelen 1000  (Local Loopback)
              RX packets 0  bytes 0 (0.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 0  bytes 0 (0.0 B)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

      tun5: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
              inet 172.21.94.34  netmask 255.255.254.0  destination 172.21.94.34
              unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
              RX packets 20  bytes 1680 (1.6 KiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 20  bytes 1680 (1.6 KiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
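      Since the tunnel itself is clearly up, the next thing worth checking (a sketch; whether these tools exist in the image is an assumption) is whether the WebUI is listening inside the container at all, which separates a Deluge problem from a port-mapping/iptables problem:

        # From inside the container: is anything listening on the WebUI port?
        netstat -tlnp | grep 8112
        # Does the WebUI answer locally? A response here means Deluge itself is fine
        curl -sI http://localhost:8112
        # If both work inside the container but not from the LAN, look at the port
        # mapping and the container's iptables rules rather than at Deluge.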
  13. My VPN provider requires special characters....
  14. I will change it. Sounds like it was good that I accidentally missed all the entries. Thank you for the tip. I use random passwords for all services.
  15. Here is a new supervisord.log. Please help. When I ping things in the container, the tunnel's TX and RX counters increase, so the tunnel is working; something else just isn't, and I have no idea what. supervisord.log
  16. Any ideas anyone? More information needed? Please help.
  17. Hello, I've been trying to get this to work for a while. I bash into the container and the VPN is working, but Deluge is not accessible. I need a fresh set of eyes on it; there is so much information I cannot parse it all. Thank you. supervisord.log
  18. Cool, thank you for the info. I use NIC teaming and never noticed a problem with LAN connectivity. Every once in a while the LAN transfer speed drops but comes right back in a fraction of a second.
  19. Interesting... My NIC is an on-board 4-port Intel NIC. Not sure where to start looking into that. What would cause such a thing? Do you think it's the mode 6 bonding? I noticed that every once in a while it would say that eth0 was down.
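      A few places to start looking, as a sketch (bond0/eth0 are the usual unRAID interface names; adjust for your setup):

        # Show the bonding mode and the link state of each slave NIC
        cat /proc/net/bonding/bond0
        # Look for link-flap messages from the kernel on the suspect port
        dmesg | grep -i eth0
        # Current link speed and state as the driver reports it
        ethtool eth0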
  20. My server was running and performing great. I have been moving a lot of files, one disk at a time, to update my movie library to be compatible with Radarr. My server went into the weekly parity check while I was working, so I quit and went to bed. This morning my dockers were failing: they can't access the appdata share. So I started to investigate and found that the Shares menu in unRAID is empty. I SSHed into the server, ran mc, and sure enough /mnt/user is red and inaccessible. I've added 3.7k items to the Radarr library over the last couple of days, which means I've moved 3.7k files to new folders to make the media compatible with Radarr. All the data is still on /mnt/cache and /mnt/diskN. Ironically, /mnt/user0 works fine. Here is my diagnostics file. I can't reboot or anything as my parity check is only 88% complete. This has never happened before in the 4.6 years I've been using unRAID; even the initial data copy, 16TB, never gave me a problem. Any help is appreciated. Thank you for your time. rudder2-server-diagnostics-20190208-0852.zip
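      A rough way to tell whether the user-share filesystem (shfs) has died, as opposed to a permissions problem (a sketch; the error text is what a crashed FUSE mount typically reports, not something captured from this system):

        # A dead shfs mount usually fails with "Transport endpoint is not connected"
        ls /mnt/user
        # Is the shfs process serving /mnt/user still running?
        ps aux | grep '[s]hfs'
        # Is /mnt/user still listed as a mounted filesystem?
        mount | grep '/mnt/user'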
  21. Are you running a Windows 10 VM with GPU passthrough? If so, I had this problem going from 5.6.0 to 6.6.0 through 6.6.6, and it was the Windows 10 VM that was at fault. Here is my bug report and what I did to fix it.
  22. Also, make sure that MSI interrupts are on. I had the system lock up again: when Windows 10 upgraded, it turned MSI interrupts off again and the problem returned. I turned MSI interrupts back on and the problem was cured again. The root cause is 100% confirmed to be the Windows 10 VM causing the system to hang when it's not using MSI interrupts. I just wanted to post this, because I had the problem again this weekend when Windows 10 updated, for the sake of completeness for the next person reading this.
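      One way to confirm from the unRAID host whether the passed-through GPU is actually using MSI, as a sketch (the PCI address is a placeholder for your card):

        # VFIO interrupts show up as vfio-msi/vfio-msix when MSI is in use,
        # and as vfio-intx when the guest has fallen back to legacy interrupts
        grep vfio /proc/interrupts
        # The device's capability list also shows Enable+ or Enable- next to MSI
        lspci -vv -s 01:00.0 | grep -i msi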
  23. After reinstalling the Windows 10 VM I'm now on the latest driver without issues. On my old VM I just can't use GPU passthrough anymore. I wish I could figure out what changed, but I don't have the time to compare the two Windows installs.