dyno

Everything posted by dyno

  1. Will do. Unfortunately, this appears to be a bug in qBittorrent’s code. For now, I’d recommend setting a memory limit for your qBit container(s) to prevent it from crashing the server. Aside from that, you may want to try Transmission. For myself, I can live with this for the moment since I have a lot of RAM. That said, migrating to Transmission is painful and not really feasible for me.
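For reference, the memory cap I'm suggesting can be set at the compose level. This is only a sketch; the service name and image tag are illustrative, so adjust them to your setup:

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    mem_limit: 64g        # hard cap; the kernel OOM-kills the app inside past this
    memswap_limit: 64g    # same value disables swap headroom, so the cap is absolute
```

With the cap in place the container gets reaped and restarted instead of taking the whole server down with it.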
  2. No, it has no impact on memory usage.
  3. I have WebUI tabs open for all the qbit containers, but that has no impact on server memory usage. I have some manual scripts that poll the qBit API, but I only run them occasionally to locate unregistered .torrent files and clean them up. So that would not cause this either.
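For what it's worth, the kind of manual script I mean can be sketched against qBittorrent's Web API v2 like this. The WebUI URL is an assumption, auth is omitted for brevity, and the "unregistered" wording match is just a heuristic, since trackers phrase the error differently:

```python
import json
import urllib.request

QBIT_URL = "http://localhost:8080"  # assumed WebUI address; auth omitted

def is_unregistered(tracker_msg: str) -> bool:
    """Heuristic match on common tracker wording for dead torrents."""
    msg = tracker_msg.lower()
    return "unregistered" in msg or "not registered" in msg

def api_get(path: str, query: str = "") -> object:
    """GET a qBittorrent Web API v2 endpoint and decode the JSON reply."""
    with urllib.request.urlopen(f"{QBIT_URL}/api/v2/{path}?{query}") as resp:
        return json.load(resp)

def find_unregistered() -> list[str]:
    """Names of torrents whose trackers report them as unregistered."""
    hits = []
    for t in api_get("torrents/info"):
        trackers = api_get("torrents/trackers", f"hash={t['hash']}")
        if any(is_unregistered(tr.get("msg", "")) for tr in trackers):
            hits.append(t["name"])
    return hits
```

I run something like this occasionally by hand, so it never keeps a persistent connection open.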
  4. Not yet, unfortunately. I posted a thread in the lscr.io Discord server, but they instructed me to make a bug report to qBittorrent. Since this issue occurs across containers from three different maintainers, I'll probably file a bug report on GitHub today.
  5. I've been having some pretty severe memory leak issues with all three qbittorrent docker containers recently (binhex, hotio, and lscr.io). I transferred ~10,000 torrents (47TB total) and fast resume data from another Linux-based (not Unraid) server to my Unraid server. The previous server these torrents ran on was rock-solid stable for a year using 8GB RAM total (including the OS), with no memory leaks on that qbit instance (4.3.9, libtorrent v1). The qbit instance is just seeding (no downloading), and no matter what I try, the container eats up more and more RAM until it almost locks up the server. Even when I have zero active torrents, memory usage grows until the docker OOM killer reaps it (and resets the qbit app within the container). For now, I've set a container memory limit of 64GB, which prevents the server itself from running out of RAM; otherwise qbit will easily eat up all 192GB of RAM before I have to kill it.

     Troubleshooting steps I've tried (no success with any of them):
       • Different containers (lscr.io, binhex, and hotio)
       • Different qbit versions (4.3.9, 4.4.5, 4.6.0, 4.6.2, 4.6.3, and 4.6.4)
       • Different libtorrent versions (v1 and v2)
       • New server hardware (new CPU, RAM, mobo, HBA)

     Server specs:
       • Supermicro H12SSL-NT
       • EPYC 7B13, 64 cores at 2.20GHz
       • 256GB DDR4-3200 ECC RDIMM

     Memory allocation:
       • 64GB to ZFS
       • 64GB to the troublesome qbit container
       • The remainder is unallocated

     Strangely enough, I have two other active lscr qbit containers (details below) and those containers only use around 1-4GB of RAM each (depending on activity). This is all on the same Unraid server. I've also tried endless changes to qbit's advanced config options, but nothing helps. I'm not sure what the problem is, but I've never run across such memory leaks across so many different versions of qbit. The only constant factors for the memory leaks are this set of torrents and Unraid. Any suggestions would be greatly appreciated.
Enclosed are my anonymized diagnostics. Since these issues persist across every new container I create, I wonder whether a rogue torrent is causing the issue? I'm completely stumped at this point.

  Qbit instance 1: 1,000 torrents, seed size 26TB, no memory issues
  Qbit instance 2: 10,000 torrents, seed size 63TB, no memory issues
  Qbit instance 3: 10,000 torrents, seed size 47TB, terrible memory leaks (the problem child)

Other config info (may not be relevant): local file storage for qbit; 16-HDD RAIDZ2 pool; two VDEVs with 8 HDDs each.

galactica-diagnostics-20240402-1951.zip
  6. I've tried that previously and it did not fix the issue.
  7. I'm not sure if anyone else has come across this issue, but I've been having some pretty severe memory leak issues with the linuxserver qbittorrent docker container recently. I transferred ~10,000 torrents (50TB total) and fast resume data from another server and split it across two separate lscr qbit instances (roughly 5k torrents each). The files themselves are stored on an 8-wide raidz2 pool dedicated to torrents. Both instances are just seeding (no downloading), and no matter what I try, the qbit container eats up more and more RAM until it almost locks up the server. I have 128GB RAM (64GB for ZFS + 64GB for everything else) and qbit will easily eat up 40GB of RAM before I have to kill it.

     This happens with every version of qbit I've tried: 4.3.9, 4.4.5, 4.6.0, and 4.6.2, with both libtorrent v1 and v2 on all of those builds. Nothing works. I can limit RAM to 8G or 16G via docker compose, but that makes the container endlessly reset itself when it runs out of RAM. This causes the appearance of phantom torrents with the tracker and wildly fluctuating ul/dl speeds.

     Strangely enough, I have another active lscr qbit container (latest repo, 4.6.2) with around 6,500 active seeding torrents in it, and that container only uses around 3GB of RAM. This is all on the same Unraid server. I tried the binhex container: same issue. As a last resort, I spun up a new hotio qbit container running 4.6.2 (libtorrent v1) and put the original 5k torrents into it. It's been pretty stable so far, only using 4GB RAM at present.

     I'm not sure what the problem is, but I've never run across such memory leaks across so many different versions of qbit. Any suggestions would be greatly appreciated.
  8. I noticed that this plugin displays space in a manner inconsistent with the rest of the Unraid Main page. Specifically, it appears to report data in binary units (GiB/TiB) but labels them GB/TB, which is a bit confusing. If you're going to continue using base-2 units, would you consider changing the labels to be accurate? Or perhaps change the units to base 10 instead?
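To illustrate the gap between the two unit systems, the binary and decimal definitions differ by almost 10% at the terabyte scale:

```python
# Base-2 vs base-10 storage units: the labels differ by ~10% at this scale.
TIB = 2 ** 40   # 1 TiB in bytes (binary, what the plugin appears to compute)
TB = 10 ** 12   # 1 TB in bytes (decimal, what the label claims)

def tib_to_tb(n_tib: float) -> float:
    """Convert tebibytes to terabytes."""
    return n_tib * TIB / TB

print(round(tib_to_tb(1), 4))  # 1.0995
```

So a value computed in TiB but printed with a "TB" suffix understates the decimal size by roughly a tenth, which is exactly the mismatch against the rest of the Main page.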
  9. This is a great plugin, but what is with the constant updates? It would be preferable to revert to a less frequent release schedule. It's annoying getting an update message several times a week.
  10. Well, that was an easy fix. Works perfectly now. Thanks!
  11. I use a VPN to remotely access my Unraid WebUI, *ARR dockers, etc. I'm able to access the WebUIs for all my dockers except qbittorrentvpn. The WebUI is accessible when I'm on the LAN, but not remotely via VPN; I just get a timeout error. Is there something I need to change in order to enable this? What do you need from me in order to provide some guidance?
  12. Had to downgrade to 4.3.9-2-01. The new version keeps overwriting my qbittorrent config file, putting my downloads into my appdata folder, resetting queue settings, etc. I had to restore my config from a backup file. The old version is working fine, thankfully.
  13. Is there a way to force the mover to move only one file at a time? It seems to move more than one file simultaneously, often to different physical disks. When doing this, write speeds tank from 200MB/sec+ to maybe 20MB/sec (due to the simultaneous reads needed for parity calculations).
  14. I've got an issue where this plugin is unnecessarily waking up spun-down disks almost immediately after they've been spun down. I finally traced the issue to Unbalance by scrolling through htop. I'm unsure why this happens, but I'm definitely leaving this plugin disabled for now; disabling it has fixed the issue.

     Dec 1 14:24:22 Galactica emhttpd: spinning down /dev/sdd
     Dec 1 14:25:29 Galactica emhttpd: read SMART /dev/sdd
  15. I'm currently running binhex-qbittorrentvpn and it works quite well. However, since the WebUI currently runs on the same network as my primary 1GbE home network, it can become quite sluggish when downloads saturate my internet connection (1Gb symmetrical). I have a second NIC installed in my Unraid server, direct-connected to my PC (no DHCP server, not part of the home network). I'd like to access the qbittorrent WebUI via this private connection. How do I best configure things so I don't cause any issues with other services (Sonarr, Radarr, Plex, etc.)? FWIW, I can access those other services using the secondary/private NIC address (IP_2) even though they're all set up to use the primary IP (IP_1).

     My setup is as follows:

     Primary NIC (eth0), 1GbE LAN:
       • IP_1: 192.168.1.90
       • Static IP assigned by Unifi router
       • Bridging enabled; members of br0 = eth0

     Secondary NIC (eth2), 10GbE direct connect to PC:
       • IP_2: 192.168.100.90
       • Static IP set in Unraid, no gateway
       • Bridging enabled; members of br2 = eth2
       • IP of PC: 192.168.100.100

     qbittorrentVPN settings:
       • IP: 192.168.1.90 (ports 6881, 8080, 8118), set in Docker config; accessible from my PC at this IP only
       • Web UI IP address: *
       • Port: 8080
       • Network interface: bound to wg0
       • Optional IP address to bind to: All addresses

     Inactive NICs: eth1 (1GbE), eth3 (10GbE)

     To reiterate, I can access Unraid, Sonarr, Radarr, Heimdall, Plex, etc. at either 192.168.1.90 or 192.168.100.90; both IPs resolve to the correct service (depending on the port used). However, the qbittorrent WebUI is only reachable at IP_1 (192.168.1.90). The *arr services access the qbittorrent WebUI via 192.168.1.90. Any idea why this occurs or how to get the desired behavior? Ideally, I'd like to be able to use 192.168.100.90 and 192.168.1.90 interchangeably and have them resolve to the same place.
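For what it's worth, my understanding is that when a Docker port publish pins a specific host IP, the mapping only listens on that interface, while publishing the port with no IP binds all host addresses. A sketch in compose syntax (service name illustrative, and this is only my guess at the cause):

```yaml
services:
  qbittorrentvpn:
    ports:
      - "192.168.1.90:8080:8080"   # pinned: WebUI reachable only via the primary NIC
      # - "8080:8080"              # unpinned: binds 0.0.0.0, so both
      #                            # 192.168.1.90 and 192.168.100.90 would work
```

If the other containers publish their ports without a host IP, that would explain why they answer on both addresses while qbittorrent does not.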
  16. I'm having some unexpected behavior with the Sonarr app and I'd like to see whether this is an Unraid issue, a configuration issue, or something else. I use binhex-Sonarr with binhex-qbittorrentvpn as the download client. Sonarr adds the torrent to qbit and it's grabbed. Upon completion, the torrent is paused and the file is supposed to be copied to the tv share, leaving the original downloaded file intact; I then manage the torrents manually. Rarely, the file is copied (as desired). Most commonly, the downloaded file is moved to the tv share (and deleted from the downloads folder) instead of being copied. How can I correct this?

     Share setup:
       • downloads: /mnt/user/downloads (cache only)
       • tv shows: /mnt/user/tv shows (cache yes)

     Drive setup:
       • downloads: /mnt/cache/downloads
       • tv shows (cache): /mnt/cache/tv shows
       • tv shows (array): /mnt/disk1,2,3,etc/tv shows

     Ideally, Sonarr would make a copy of the files on the cache drive after downloading, so two physical copies would reside on the cache drive until the mover is invoked; then the copy residing at /mnt/cache/tv shows would be moved to the array (disk1, disk2, etc). Is it a problem having both directories on the same cache drive? Should my downloads share use a different/dedicated drive instead? I don't have a spare SSD for torrents right now, but I may get another one soon. When I had Sonarr running on macOS, this never occurred, but there I did have torrents and media residing on different physical drives.
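As I understand it, Sonarr can hard-link instead of copy when the download and library folders sit on the same filesystem (the "Use Hardlinks instead of Copy" option under Media Management), which would keep the seeding file intact without doubling disk use. A quick sketch of why same-filesystem matters; the paths here are throwaway examples:

```python
import os
import tempfile

def hardlink_copy(src: str, dst: str) -> bool:
    """'Copy' src to dst via a hard link (only possible within one filesystem).
    Returns True if both names now share the same inode, i.e. no data was duplicated."""
    os.link(src, dst)
    return os.stat(src).st_ino == os.stat(dst).st_ino

# Demo in a throwaway directory: link, then delete the original name.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "episode.mkv")
    dst = os.path.join(d, "tv", "episode.mkv")
    os.makedirs(os.path.dirname(dst))
    with open(src, "w") as f:
        f.write("payload")
    assert hardlink_copy(src, dst)   # same inode: no extra space used
    os.remove(src)                   # removing the download name...
    assert os.path.exists(dst)       # ...leaves the library file intact
```

This only works because /mnt/cache/downloads and /mnt/cache/tv shows are on the same device; once the mover pushes the file to the array (a different filesystem), a real copy is unavoidable.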