UNRAID5


Posts posted by UNRAID5

  1. On 12/8/2022 at 9:42 PM, TurboStreetCar said:

    I was having the same issue.

    For anyone having this issue, I found what I believe to be the cause, at least in my case.

    I imported a large number of files that weren't formatted/tagged quite correctly. After importing them, I used Lidarr to rename all the files one artist at a time; this way it shows a preview of what it's going to do and lets you proceed or cancel. I did this because it sometimes misidentifies the track or uses a strange release, so I was able to verify everything was correct.

    If you enter the container console and run "htop", it shows the running processes, and one of them was what I believe to be the audio fingerprint process. It was stuck running on the same three files indefinitely. I believe that at some point while moving/renaming files, the files it was working on broke the process and it got stuck, because the path of the file it was processing no longer existed.

    Restarting the container fixed it permanently. 

    I do have a question: I'm looking to use the mass editor to set all of the artists' metadata to "None", but it's not an option.

    Is there a way to set all of the hundreds of artists to none without having to do it individually?

     

    You're a legend, TurboStreetCar. The htop trick worked for me too. I nuked the files the process was choking on and so far I appear to be back to normal. Thanks!

     

    As for the mass editor question: go to Library > Mass Editor, click the Options button, check the Metadata Profile option, then click Close. That should do the trick for you.
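
    For anyone else chasing the stuck-fingerprint problem, the console check described above looks roughly like this from the Unraid host. This is just a sketch; the container name "lidarr" is an assumption, so substitute whatever yours is called:

```shell
# Open a shell in the Lidarr container and watch for a process stuck on the same files
docker exec -it lidarr htop
# If htop isn't available, a plain process list works too:
docker exec lidarr ps aux
# If the same few files are stuck indefinitely, restart the container:
docker restart lidarr
```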

  2. On 8/4/2022 at 7:07 PM, hmoney007 said:

    Just wanted to follow up - I am still getting 100% CPU usage from Lidarr on any cores I give it, after somewhere between 1 and 24 hours of container uptime, with no errors in the logs. Any suggestions?

    I have noticed that I have a Rescan Folders task that takes forever (hours) to run, and to make it worse, multiple Rescan Folders tasks are queued up, so once one finishes another kicks in shortly after. I suspect this is at the heart of my issue. I did some searching and found this thread: https://github.com/lidarr/Lidarr/issues/1105
    It mentions having many unmapped files, which I have a bunch of. Do you have the same behavior, I wonder? It's a tough thing to fix: I can't manually map unmapped files until the Rescan Folders task ends, and the active scan can't be cancelled in the UI. I am looking into whether it's possible to end it from the CLI, so I can map my files and see if that helps the situation.

    Anyway, just thought I would drop an update on what I have learned so far. Let me know if you have learned anything on your end.

    Thanks!

  3. Hi there,

     

    Grasping at straws a bit here, but figured I would post anyway... I have seen a number of posts about Unraid writing unduly large amounts of data to cache pools, but I haven't really seen any solutions. Why it happens, and exactly how to resolve it, still appears fairly uncertain, since it seemingly only affects a smaller subset of users. Here are my details:

     

    Gigabyte Z390 DESIGNARE motherboard

    2 x ADATA SX8200 2TB NVMe, plugged into an M.2 PCIe SSD adapter card w/ bifurcation

    BTRFS RAID1

     

    MBR: 1MiB-aligned

    Power on hours 11,617

    Data units written 915,416,617 [468 TB]

     

    So you can see it's writing nearly 1 TB per day, which is a bit excessive. The latest drama is that, out of the blue, the SSD temps are skyrocketing into the 60C range with nothing having changed in the server. I can't help but suspect that all the writing has caused accelerated wear and the SSDs are starting down a path of permanent, irreversible failure. In hopes of not having to buy new 2TB NVMe SSDs every 1.5-2 years, I was wondering what others would suggest trying in order to salvage my SSDs before they call it quits.
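
    As a sanity check on that rate, the SMART counters above can be turned into an average write rate. This assumes the standard NVMe convention that one "data unit" is 1,000 x 512-byte sectors (512,000 bytes):

```shell
# Convert the drive's SMART counters into an average write rate per power-on day
units=915416617   # Data units written
hours=11617       # Power on hours
tb_per_day=$(awk -v u="$units" -v h="$hours" \
  'BEGIN { printf "%.2f", u * 512000 / 1e12 / (h / 24) }')
echo "$tb_per_day TB/day"
```

    That works out to just under 1 TB/day, consistent with the 468 TB total the drive reports.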

     

    Thanks!

     

  4. @clowrym, perhaps you can educate me here. Wouldn't the version of Java running inside this docker best be updated through the hub? Or is upgrading Java manually inside the docker truly the best way (as mentioned by hexparrot in his post on GitHub)? I looked at binhex's post on this forum as well, and he has some valid hesitations listed there. Just trying to make sense of all the information so I can make an informed decision.

  5. From the docker console you can run:
     

    java -version
    openjdk version "1.8.0_292"


    As you can see, mine is running 1.8.
    From the Minecraft website: https://help.minecraft.net/hc/en-us/articles/360035131371-Minecraft-Java-Edition-system-requirements-
    you can see at the bottom that Minecraft 1.17 requires Java 16 or newer.
    So the docker needs a Java upgrade before a 1.17 server will be able to run successfully.
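
    If you want to script that check, here's a small sketch; the only wrinkle is that Java 8 and older report legacy "1.x" version strings, so "1.8.0_292" really means Java 8:

```shell
# Map a `java -version` style string to its major release number
java_major() {
  ver=$1
  major=${ver%%.*}
  # Legacy "1.x" strings (Java 8 and older) put the real major in the second field
  [ "$major" = "1" ] && major=$(echo "$ver" | cut -d. -f2)
  echo "$major"
}
java_major "1.8.0_292"   # -> 8
java_major "16.0.1"      # -> 16
```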

  6. How can I disable iptables entirely for this docker, including any time the docker gets updated in the future? I am using my upstream firewall to do all of my blocking, as that method is working very well for me and I can guarantee which traffic is passed upstream.

  7. Yes, that is still the same. To reiterate: all dockers have their own unique IPs on the same subnet (br0.x). All *arrs are using the proxy configuration through binhex/arch-qbittorrentvpn using Privoxy. Separately, I also have binhex/arch-sabnzbdvpn using OpenVPN only, with no Privoxy. Everything works except the *arrs talking to binhex/arch-sabnzbdvpn as a download client using the API key over port 8080.

    So far none of the test rules I've tried in iptables have worked; I'm still thinking of more to try.

     

    iptables -I INPUT 1 -s 10.x.x.0/24 -j ACCEPT
    iptables -I OUTPUT 1 -d 10.x.x.0/24 -j ACCEPT
    iptables -I INPUT 1 -p tcp -m tcp --dport 8080 -j ACCEPT
    iptables -I OUTPUT 1 -p tcp -m tcp --sport 8080 -j ACCEPT
    iptables -I INPUT 1 -i eth0 -j ACCEPT

    What I have noticed so far is that the last rule above results in 0 drops on the INPUT chain, which at least makes some sense (I like it when things make sense).

  8. I had a little time to troubleshoot again. This time I used a bigger hammer and disabled iptables altogether. Sure enough, the *arrs can now talk to sabnzb. So at this point I know it's the sab docker, and I know the iptables rules are at fault, but I don't know how to fix it... I suppose I may have to experiment with adding iptables rules until I find the right one, but that doesn't sound very permanent, as a restart or update of the docker will undo my changes.
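
    One hedged idea for the persistence problem: re-apply the working rules from the host after every container (re)start, e.g. via the User Scripts plugin. The container name and subnet below are placeholders, not my actual values:

```shell
# Re-insert LAN accept rules into the running container's iptables after each start
docker exec binhex-sabnzbdvpn iptables -I INPUT 1 -s 10.0.7.0/24 -j ACCEPT
docker exec binhex-sabnzbdvpn iptables -I OUTPUT 1 -d 10.0.7.0/24 -j ACCEPT
```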

  9.  

    On 3/15/2021 at 5:49 PM, UNRAID5 said:

     

    I can ping each docker from each docker, no problem. I have also tried unchecking Use proxy in the *arr's and they still won't test successfully to SABnzb as a Download Client. That makes me think the issue lies with the SABnzb docker, but I can't seem to find the resolution. Any ideas?

     

    I am fairly confident that iptables is dropping my traffic. I rolled a Firefox docker to rule out the *arr's entirely, and put it in the same VLAN/br interface as SABnzb so it is layer 2 adjacent with nothing in the way (including not sharing the Unraid IP; I assigned it its own IP). When I try to hit the SABnzb web UI at http://[IP]:[PORT:8080]/ the page times out and doesn't load. Same goes for HTTPS on 8090. I tested other internal web UIs to make sure connectivity was good, and it is. So I hopped on the SABnzb console and looked at the iptables hit counts, and sure enough I see the drop counter increment when I attempt to hit SABnzb from the Firefox docker.

     

    sh-5.1# iptables -L -v -n --line-numbers
    Chain INPUT (policy DROP 12 packets, 828 bytes)
    num   pkts bytes target     prot opt in     out     source               destination         
    1      125  4455 ACCEPT     all  --  *      *       10.*.*.0/24       10.*.*.0/24      
    2       60 19355 ACCEPT     all  --  eth0   *       185.*.*.*        0.0.0.0/0           
    3        0     0 ACCEPT     all  --  eth0   *       185.*.*.*        0.0.0.0/0           
    4        0     0 ACCEPT     all  --  eth0   *       185.*.*.*        0.0.0.0/0           
    5        0     0 ACCEPT     all  --  eth0   *       185.*.*.*        0.0.0.0/0           
    6        0     0 ACCEPT     all  --  eth0   *       185.*.*.*        0.0.0.0/0           
    7       72  8910 ACCEPT     tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080
    8        0     0 ACCEPT     udp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            udp dpt:8080
    9       66  9205 ACCEPT     tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8090
    10       0     0 ACCEPT     udp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            udp dpt:8090
    11       0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 0
    12      26  1360 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
    13      31  9053 ACCEPT     all  --  tun0   *       0.0.0.0/0            0.0.0.0/0           
    
    Chain FORWARD (policy DROP 0 packets, 0 bytes)
    num   pkts bytes target     prot opt in     out     source               destination         
    
    Chain OUTPUT (policy DROP 0 packets, 0 bytes)

     

    It doesn't make sense to me why: my traffic should match rule 1, and worst case it should match rules 7 or 9, respectively. What's interesting is I CAN hit the SABnzb web UI from my workstation, which is on a different VLAN and IP space. From my workstation I successfully match rules 7 and 9, which does indeed make sense.

     

    Also of note: when I add 8080 and/or 8090 to the ADDITIONAL_PORTS (supervisord.log states this variable is deprecated), VPN_INPUT_PORTS or VPN_OUTPUT_PORTS environment variables, it just repeats rules 7-10 again for both the INPUT and OUTPUT chains. So these appear to have the same effect as Container Port: 8080 and Container Port: 8090.

     

    I'm not sure how to do a packet capture at this point, so that I can see what SABnzb is actually receiving and try to understand why it's dropping traffic from the same network, which should be allowed.

    The saga continues...
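
    For the packet capture, one possible approach (a sketch only; the bridge name, container name, and IPs are assumptions for illustration) is to capture on the host-side bridge rather than inside the container:

```shell
# Watch traffic from the Firefox docker toward SABnzb's web ports on the VLAN bridge
tcpdump -ni br0.7 host 10.0.7.10 and '(port 8080 or port 8090)'
# Or, if tcpdump is available inside the container, capture on its eth0 directly:
docker exec binhex-sabnzbdvpn tcpdump -ni eth0 port 8080 -c 20
```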

  10. On 3/10/2021 at 6:08 PM, UNRAID5 said:

    I am not having any luck getting my *arr's to connect to SABnzb. All of my dockers have their own IP, nothing shares IP with unraid. All of the *arr's go through the qbittorrent docker with vpn and privoxy. I can hit all webui's no problem and all openvpn's are connected successfully. Indexers all work and torrents work with all the *arr's. The only thing that isn't working is the *arr's can't see the SABnzb Download Client (they did before updates). Any ideas beyond what is in the FAQ? I have tried every combination that I can think of that is listed in the FAQ and not having any success. Thanks!

     

    I can ping each docker from each docker, no problem. I have also tried unchecking Use proxy in the *arr's and they still won't test successfully to SABnzb as a Download Client. That makes me think the issue lies with the SABnzb docker, but I can't seem to find the resolution. Any ideas?

  11. I am not having any luck getting my *arr's to connect to SABnzb. All of my dockers have their own IP, nothing shares IP with unraid. All of the *arr's go through the qbittorrent docker with vpn and privoxy. I can hit all webui's no problem and all openvpn's are connected successfully. Indexers all work and torrents work with all the *arr's. The only thing that isn't working is the *arr's can't see the SABnzb Download Client (they did before updates). Any ideas beyond what is in the FAQ? I have tried every combination that I can think of that is listed in the FAQ and not having any success. Thanks!

  12. On 10/25/2020 at 10:20 PM, roobix said:

    I can't find the words to explain an issue I'm having, but has anyone run into a similar scenario:

     

    • QBittorrent downloads FavShow E01S01.mp4 into \FavShow.E01S01.WAZZAAAH\FavShow E01S01.mp4 (notice that there are periods in the folder name)
    • Sonarr can't import saying the following:
      • "Import failed, path does not exist or is not accessible by Sonarr:[Snip] /FavShow E01S01 WAZZAAAH"  (notice how the periods are replaced with spaces in the folder name)

    I've looked at both dockers and can't figure out what is causing the confusion between the two or whether it's a setting. Both dockers have been working for months until just this last week.

     

    Thanks for any help.

     

     

    I couldn't figure out why, but this appears to be exactly what is happening to me as well.

  13. On 10/14/2020 at 3:37 PM, saarg said:

    Both your database and redis do not resolve. Are the redis and database containers in the same custom bridge? If not, then use the IP instead of the name.

    I assumed the DBs would be built into the docker; is that not the case? Setting the IP as the host for both DBs renders the same result.

  14. I can't get this to work...

     

    I get Bad Request (400) regardless of using IP or hostname.

     

    My logs show this:

     


    File "/usr/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
    django.db.utils.OperationalError: could not translate host name "postgres" to address: Name does not resolve

    command "/usr/bin/python3 ./manage.py remove_stale_contenttypes --no-input" exited with non-zero code: 1
    Wed Oct 14 15:04:28 2020 - FATAL hook failed, destroying instance
    SIGINT/SIGQUIT received...killing workers...
    WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x55c3465d5ba0 pid: 258 (default app)
    *** uWSGI is running in multiple interpreter mode ***
    spawned uWSGI master process (pid: 258)
    spawned uWSGI worker 1 (pid: 281, cores: 1)
    [uwsgi-daemons] spawning "/usr/bin/python3 ./manage.py rqworker" (uid: 99 gid: 100)
    Error -2 connecting to redis:6379. Name does not resolve.
    daemon "/usr/bin/python3 ./manage.py rqworker" (pid: 282) annihilated

     

    I tried setting 'Database Host:' to 127.0.0.1, but that didn't help. Any ideas?

  15. I am using pfSense as the DHCP/DNS/NTP server for all of my VLANs. I don't see any way to tag/untag in Unraid, so I am creating an interface for each of my VLANs in Network Settings. I then add an interface per br that I created to my pfSense VM. I can't seem to get beyond 4 interfaces, however; the VM simply won't POST with a 5th NIC. Is there a hard limitation that I am running into? Is there anything I can do to add more NICs? Thanks!

  16. I have a docker that is failing to resolve DNS (hexparrot/mineos). I suspect the issue is that the docker is in a different VLAN than the Unraid br0 interface. I have multiple VLANs configured, and each docker has its own IP assigned and its appropriate bridge interface set. So when a docker in, say, br0.7 tries to query DNS, it appears to use the internal docker-generated address of 127.0.0.x in /etc/resolv.conf. I assume (please correct me if I'm wrong) this in turn kicks DNS queries off to the Unraid host, which resolves through its configured DNS server in Settings > Network Settings. It seems like this should work no matter which interface my docker is configured with, but it's not working: ping google.com fails to resolve, and ping internal.dns.hosts also fails by name, but neither fails when pinging by IP.

    I suspect that perhaps the DNS query is being sent out to my local DNS server using the br0 interface, but with a return address in the br0.7 IP space (which doesn't seem like something it should do). That would obviously cause issues, as the DNS response would return on a different network path (directly on br0.7 to the docker) than it was sent on (directly on br0 from Unraid) and would likely be discarded. My DNS/DHCP server (pfSense) is a VM within Unraid and has to have an interface on each network it serves to work correctly.

    If my assumptions are correct, then one way to resolve this would be to specify the DNS server for each of my dockers that isn't in the default br0 network, but I can't seem to get that to work. I tried adding --dns='10.x.x.x' in the "Extra Parameters:" field of my docker, but that doesn't change the resolv.conf file and I still can't resolve DNS. So I am now seeking advice on how to move forward with this issue.

    Typing this out makes it seem like I don't have the root cause identified correctly and that my proposed resolution may not actually resolve the problem, but I am in need of some guidance either way. Thanks for taking the time to read through this.
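
    In case it helps anyone narrow this down, here are a few hedged checks from the host (container name, bridge, and the pfSense IP are placeholders for illustration):

```shell
# Which resolver is the container actually using?
docker exec mineos cat /etc/resolv.conf
# Does resolution work when you bypass resolv.conf and ask pfSense directly?
docker exec mineos nslookup google.com 10.0.7.1
# If the direct query works, the per-container --dns override should be viable:
docker run --dns=10.0.7.1 --network=br0.7 hexparrot/mineos
```

    If the direct query to pfSense succeeds but the default path fails, that would support the asymmetric-routing theory above.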