UNRAID5

Everything posted by UNRAID5

  1. You're a legend, TurboStreetCar. The htop trick worked for me too. I nuked the files the process was choking on, and so far I appear to be back to normal. Thanks! As for the Mass Editor question: go to Library > Mass Editor, click the Options button, check the Metadata Profile option, then click Close. That should do the trick for you.
  2. I have noticed that I have a Rescan Folders task that takes forever (hours) to run, and to make it worse, multiple Rescan Folders tasks are queued up, so once one finishes another kicks in shortly after. I suspect this is at the heart of my issue. I did some searching and found this thread: https://github.com/lidarr/Lidarr/issues/1105 It mentions having many unmapped files, which I have a bunch of. Do you have the same behavior, I wonder? It's a tough thing to fix: I can't manually map unmapped files until the Rescan Folders task ends, but there's no way to cancel the active scan in the UI. I am seeing if it's possible to end it from the CLI (what I'm experimenting with is sketched below), so I can map my files and see if that helps the situation. Anyway, just thought I would drop an update on what I have learned so far. Let me know if you have learned anything on your end. Thanks!
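     In case it helps anyone, here is roughly what I'm experimenting with. This is an untested sketch: I'm assuming Lidarr's v1 API lists queued tasks under /api/v1/command and that a DELETE cancels one, mirroring how the other *arrs behave; the host, port, API key, and command id are all placeholders.

      # list running/queued commands (host and API key are placeholders)
      curl -s -H "X-Api-Key: YOUR_API_KEY" http://localhost:8686/api/v1/command

      # attempt to cancel a queued Rescan Folders task by the id found above
      curl -s -X DELETE -H "X-Api-Key: YOUR_API_KEY" http://localhost:8686/api/v1/command/123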
  3. Hi there, grasping at straws a bit here, but figured I would post anyway... I have seen a number of posts about unraid writing unduly large amounts of data to cache pools, but I haven't really seen any solutions. The cause and the fix both still seem uncertain, since it only appears to affect a smaller subset of users. Here are my details:

     Gigabyte Z390 DESIGNARE motherboard
     2 x ADATA SX8200 2TB NVMe, plugged into an M.2 PCIe SSD adapter card w/ bifurcation
     BTRFS RAID1
     MBR: 1MiB-aligned
     Power on hours: 11,617
     Data units written: 915,416,617 [468 TB]

     So you can see it writes nearly 1 TB per day, which is a bit excessive. The latest drama is that out of the blue, the SSD temps are skyrocketing into the 60C range with nothing having changed in the server. I can't help but suspect that all the writing has caused accelerated wear and the SSDs are starting down a path of permanent, irreversible failure. In hopes of not having to buy new 2TB NVMe SSDs every 1.5-2 years, I was wondering what others would suggest trying to salvage my SSDs before they call it quits. (The commands I use to watch the write counters are below.) Thanks!
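     For anyone wanting to check their own drives, this is how I've been watching the counters. It assumes smartmontools is available on the host (it has been on my unraid box) and that the device path matches your drive; "Data Units Written" in the SMART output is the number to watch.

      # dump the NVMe SMART/health log; look for "Data Units Written"
      smartctl -A /dev/nvme0n1

      # watch writes accumulate in near-real-time via the kernel's disk stats
      watch -n 5 'grep nvme0n1 /proc/diskstats'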
  4. @clowrym, perhaps you can educate me here. Wouldn't the version of Java running inside this docker be best updated through Docker Hub? Or is upgrading Java manually inside the docker truly the best way (as mentioned by hexparrot in his post on GitHub)? I looked at binhex's post on this forum as well, and he has some valid hesitations listed there. Just trying to make sense of all the data so I can make an informed decision.
  5. For anyone interested, it looks as though someone has opened an issue there: https://github.com/hexparrot/mineos-node/issues/410
  6. From the docker console you can run:

      java -version
      openjdk version "1.8.0_292"

     As you can see, mine is running 1.8. From the Minecraft system requirements page https://help.minecraft.net/hc/en-us/articles/360035131371-Minecraft-Java-Edition-system-requirements- you can see at the bottom that Minecraft 1.17 requires Java 16 or newer. So the docker needs a Java upgrade before a 1.17 server will run successfully (a possible stopgap is sketched below).
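     Until the image itself gets updated, one stopgap might be installing a newer JRE inside the running container. This is an unverified sketch: it assumes the image is Debian/Ubuntu-based with apt available and that its repos actually carry an openjdk-16 package; anything installed this way is lost when the container is rebuilt.

      # open a shell in the container (name is an example)
      docker exec -it mineos bash

      # inside the container: install a newer JRE, if the repos carry it
      apt-get update
      apt-get install -y openjdk-16-jre-headless
      java -version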
  7. How can I disable iptables entirely for this docker, in a way that survives future updates of the container? I am using my upstream firewall to do all of my blocking, as that method is working very well for me and lets me guarantee what traffic is passed upstream. (The manual flush I've been using as a stopgap is below.)
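     For context, this is the stopgap I run by hand today. Flushing the rules and setting ACCEPT policies assumes you're comfortable relying entirely on the upstream firewall, and the container name is an example; none of it survives a container restart or update, which is exactly the problem.

      # open everything up inside the container (name is an example)
      docker exec binhex-sabnzbdvpn bash -c '
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -F
      '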
  8. Yes, that is still the same. To reiterate: all dockers have their own unique IPs on the same subnet (br0.x). All *arrs use the proxy configuration through binhex/arch-qbittorrentvpn via Privoxy. Separately, I also have binhex/arch-sabnzbdvpn using OpenVPN only and no Privoxy. Everything works except the *arrs talking to binhex/arch-sabnzbdvpn as a download client using the API key over port 8080. So far none of my test rules in iptables have worked; I'm still thinking of more to try:

      iptables -I INPUT 1 -s 10.x.x.0/24 -j ACCEPT
      iptables -I OUTPUT 1 -d 10.x.x.0/24 -j ACCEPT
      iptables -I INPUT 1 -p tcp -m tcp --dport 8080 -j ACCEPT
      iptables -I OUTPUT 1 -p tcp -m tcp --sport 8080 -j ACCEPT
      iptables -I INPUT 1 -i eth0 -j ACCEPT

     What I have noticed so far is that the last rule above leaves 0 DROPs on the INPUT chain, which at least makes some sense (I like it when things make sense).
  9. I have tried them all multiple times, but nothing seems to be working for me. 🤷‍♂️
  10. I had a little time to troubleshoot again. This time I used a bigger hammer and disabled iptables altogether. Sure enough, the *arrs can now talk to SABnzbd. So at this point I know the problem is in the SABnzbd docker, and specifically in its iptables rules, but I don't know how to fix it... I suppose I may have to experiment with adding iptables rules until I find the right one, but that doesn't sound very permanent, as a restart or update of the docker will undo my changes (one idea for persisting a rule is sketched below).
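     One idea for making a rule stick: re-apply it from the unraid host after the container starts, for example via the User Scripts plugin. A rough sketch only; the container name, subnet, and the rules themselves are placeholders, and it would still need to be re-run (or scheduled) after a docker update.

      #!/bin/bash
      # re-insert allow rules for the local subnet after the container is up
      # (container name and subnet are examples)
      docker exec binhex-sabnzbdvpn iptables -I INPUT 1 -s 10.0.0.0/24 -j ACCEPT
      docker exec binhex-sabnzbdvpn iptables -I OUTPUT 1 -d 10.0.0.0/24 -j ACCEPT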
  11. I am fairly confident that iptables is dropping my traffic. I rolled a Firefox docker, to rule out the *arrs entirely, and put it on the same VLAN/br interface as SABnzbd so it is layer 2 adjacent with nothing in the way (including not sharing the unraid IP; I assigned it its own IP). When I try to hit the SABnzbd web UI on http://[IP]:[PORT:8080]/ the page times out and doesn't load. Same goes for HTTPS on 8090. I tested other internal web UIs to make sure connectivity was good, and it is. So I hopped on the SABnzbd console and looked at the iptables hit counts, and sure enough I see the DROP counter increment when I attempt to hit SABnzbd from the Firefox docker.

      sh-5.1# iptables -L -v -n --line-numbers
      Chain INPUT (policy DROP 12 packets, 828 bytes)
      num   pkts bytes target  prot opt in   out  source       destination
      1      125  4455 ACCEPT  all  --  *    *    10.*.*.0/24  10.*.*.0/24
      2       60 19355 ACCEPT  all  --  eth0 *    185.*.*.*    0.0.0.0/0
      3        0     0 ACCEPT  all  --  eth0 *    185.*.*.*    0.0.0.0/0
      4        0     0 ACCEPT  all  --  eth0 *    185.*.*.*    0.0.0.0/0
      5        0     0 ACCEPT  all  --  eth0 *    185.*.*.*    0.0.0.0/0
      6        0     0 ACCEPT  all  --  eth0 *    185.*.*.*    0.0.0.0/0
      7       72  8910 ACCEPT  tcp  --  eth0 *    0.0.0.0/0    0.0.0.0/0    tcp dpt:8080
      8        0     0 ACCEPT  udp  --  eth0 *    0.0.0.0/0    0.0.0.0/0    udp dpt:8080
      9       66  9205 ACCEPT  tcp  --  eth0 *    0.0.0.0/0    0.0.0.0/0    tcp dpt:8090
      10       0     0 ACCEPT  udp  --  eth0 *    0.0.0.0/0    0.0.0.0/0    udp dpt:8090
      11       0     0 ACCEPT  icmp --  *    *    0.0.0.0/0    0.0.0.0/0    icmptype 0
      12      26  1360 ACCEPT  all  --  lo   *    0.0.0.0/0    0.0.0.0/0
      13      31  9053 ACCEPT  all  --  tun0 *    0.0.0.0/0    0.0.0.0/0

      Chain FORWARD (policy DROP 0 packets, 0 bytes)
      num   pkts bytes target  prot opt in   out  source       destination

      Chain OUTPUT (policy DROP 0 packets, 0 bytes)

     It doesn't make sense to me why. My traffic should match rule 1; worst case, it should match rule 7 or 9. What's interesting is that I CAN hit the SABnzbd web UI from my workstation, which is on a different VLAN and IP space; from there I successfully match rules 7 and 9, which does indeed make sense. Also of note: when I add 8080 and/or 8090 to the ADDITIONAL_PORTS (which supervisord.log says is deprecated), VPN_INPUT_PORTS, or VPN_OUTPUT_PORTS environment variables, it just repeats rules 7-10 again for both the INPUT and OUTPUT chains, so these appear to have the same effect as Container Port: 8080 and Container Port: 8090. I'm not sure how to do a packet capture at this point so I can see what SABnzbd is seeing, to try and understand why it's dropping traffic on the same network that should be allowed (my current plan is sketched below). The saga continues...
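     This is the capture I'm planning to try. It assumes the container name, that the Arch-based image can install tcpdump with pacman, and that eth0 is the container-side interface; all of that is guesswork on my part.

      # open a shell in the container (name is an example)
      docker exec -it binhex-sabnzbdvpn bash

      # inside the container: install tcpdump if it isn't present (Arch-based image)
      pacman -Sy tcpdump

      # watch for the web UI / API traffic arriving from the Firefox docker
      tcpdump -i eth0 -nn 'port 8080 or port 8090'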
  12. I can ping each docker from each docker, no problem. I have also tried unchecking Use Proxy in the *arrs, and they still won't test successfully against SABnzbd as a download client. That makes me think the issue lies with the SABnzbd docker, but I can't seem to find the resolution. Any ideas?
  13. I am not having any luck getting my *arrs to connect to SABnzbd. All of my dockers have their own IP; nothing shares an IP with unraid. All of the *arrs go through the qBittorrent docker with VPN and Privoxy. I can hit all the web UIs no problem, and all the OpenVPN tunnels are connected successfully. Indexers all work, and torrents work with all the *arrs. The only thing that isn't working is that the *arrs can't see the SABnzbd download client (they could before the updates). Any ideas beyond what is in the FAQ? I have tried every combination I can think of from the FAQ without success. Thanks!
  14. Looks like it's qBittorrent's fault:
  15. I couldn't figure out why, but this appears to be exactly what is happening to me as well.
  16. I assumed the DBs would be built into the docker; is that not the case? Setting the IP as the host for both DBs renders the same result.
  17. I can't get this to work... I get Bad Request (400) regardless of whether I use the IP or the hostname. My logs show this: I tried setting 'Database Host:' to 127.0.0.1, but that didn't help. Any ideas?
  18. For anyone in the future who runs into this: it was a pfSense config issue. I simply went into the DNS Resolver > Access Lists page and added an allow-all entry.
  19. I am using pfSense as the DHCP/DNS/NTP server for all of my VLANs. I don't see any way to tag/untag VLANs in unraid, so I am creating an interface for each of my VLANs in Network Settings, then adding an interface per bridge to my pfSense VM. I can't seem to get beyond 4 interfaces, however; the VM simply won't POST with a 5th NIC. Is there a hard limit I am running into? Is there anything I can do to add more NICs? (One thing I'm considering trying is below.) Thanks!
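     One thing I'm considering, in case the 4-NIC ceiling is just the unraid VM form rather than QEMU itself: attaching extra bridges with virsh from the host. Untested on my end; the VM name and bridge are examples, and the limit could equally be a PCI slot layout issue that this wouldn't fix.

      # attach a 5th virtio NIC on another VLAN bridge to the pfSense VM
      # (VM name and bridge are examples); --config persists it in the VM's XML
      virsh attach-interface pfSense --type bridge --source br0.50 --model virtio --config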
  20. I tried another docker on the same br0.7 just to see how it would behave (jlesage/firefox) and hit the same issue.
  21. I have a docker that is failing to resolve DNS (hexparrot/mineos). I suspect the issue is that the docker is on a different VLAN than the unraid br0 interface. I have multiple VLANs configured, and each docker has its own IP assigned and its appropriate bridge interface set. So when a docker in, say, br0.7 tries to query DNS, it appears to use the internal docker-generated address in /etc/resolv.conf of 127.0.0.x. I assume (please correct me if I'm wrong) this in turn hands DNS queries off to the unraid host, which resolves them through its configured DNS server in Settings > Network Settings. It seems like this should work no matter which interface my docker is configured with, but it's not working: ping google.com fails to resolve, and ping internal.dns.hosts also fails by name, but neither fails to ping by IP.

     I suspect the DNS query is being sent out to my local DNS server on the br0 interface, but with a return address in the br0.7 IP space (which doesn't seem like something it should do). That would obviously cause issues, as the DNS response would return on a different network path (directly on br0.7 to the docker) than it was sent on (directly on br0 from unraid) and likely be discarded. My DNS/DHCP server (pfSense) is a VM within unraid and has an interface on each network it serves, which it needs in order to work correctly.

     If my assumptions are correct, then one way to resolve this would be to specify the DNS server for each of my dockers that isn't on the default br0 network, but I can't seem to get that to work. I tried adding --dns='10.x.x.x' in the "Extra Parameters:" field of my docker; that doesn't change the resolv.conf file, and I still can't resolve DNS (the checks I've been running are below). So I am now seeking advice on how to move forward with this issue. Typing this out makes it seem like I don't have the root cause identified correctly, and that my proposed resolution may not actually resolve the problem, but I am in need of some guidance either way. Thanks for taking the time to read through this.
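     For anyone wanting to reproduce my checks, this is roughly what I've been running. It assumes the container name, that the image has nslookup available, and an example pfSense address of 10.0.7.1 on the docker's own VLAN.

      # what the container thinks its resolver is
      docker exec -it mineos cat /etc/resolv.conf

      # name resolution via the default resolver (fails for me)
      docker exec -it mineos ping -c 1 google.com

      # query the pfSense interface on the docker's own VLAN directly
      # (requires nslookup in the image; address is an example)
      docker exec -it mineos nslookup google.com 10.0.7.1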