Posts posted by cferrero
-
From what I could find, the bandwidth-limit issue is caused by a buggy libtorrent library (v1.1.5). Linuxserver.io rebased the image onto Alpine Edge, which pulls in that version, and it doesn't work with Deluge 1.3.15.
-
-
I sent limetech diagnostics with shfsExtra=-logging 2.
Now I'm testing 6.5.0-rc1 and it's working fine, no leaks at the moment. In fact, after sending the message to @limetech it went from 6400 to 5776, so it looks like it's working as intended.
-
It looks like it's not only Transmission but SABnzbd too (from the prerelease thread).
-
15 hours ago, BRiT said:
@cferrero make sure to edit your post in this thread to either change or remove the rpc-password field, just in case someone can get to your server address. I was unaware it would have included that field.
I didn't check either; it was a clean test install with auth disabled and there is no outside access, but I edited the post and removed it just in case.
15 hours ago, BRiT said:
Now, as to what preallocation is configured as ...
0 - None - No preallocation, just let the file grow whenever a new packet comes in
1 - Sparse - Preallocate by writing just the final block in the file
2 - Full - Preallocate by writing zeroes to the entire file
A method of Sparse should be fine; however, I have mine set to "2". I would try setting it to "2" and doing a restart to start from a clean slate, then see where it goes from there.
I will test it after a reboot just to be sure, but it also happened while only seeding files.
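For reference, the three modes BRiT lists map to the numeric `preallocation` key in Transmission's settings.json. A minimal fragment (other keys omitted) to switch to full preallocation would be:

```json
{
  "preallocation": 2
}
```

Note that settings.json should be edited while the daemon is stopped, since Transmission rewrites the file on exit.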
15 hours ago, BRiT said:
For reference, from a 6.3.5 system with an uptime of 78 days:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 10342 0.0 0.0 153296 596 ? Ssl 2017 0:00 /usr/local/sbin/shfs /mnt/user0 -disks 14 -o noatime,big_writes,allow_other
root 10352 0.1 0.0 1514560 19240 ? Ssl 2017 157:46 /usr/local/sbin/shfs /mnt/user -disks 15 2048000000 -o noatime,big_writes,allow_other -o remember=0
I think 6.3.5 is free of this; I didn't notice any problems there. But for reference, in my test an example would be:
around 500 bytes in standby, before starting Transmission
around 300 MB after 4 hours with 5-8 torrents (test)
around 5 GB after 4 days (observed)
The main server was using 10 GB after 9 days of uptime ...
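Figures like these can be collected with a quick snapshot of the shfs processes' resident memory (a minimal sketch; it assumes the process is named shfs as on unRAID, and prints 0 KB if it isn't running):

```shell
# Sum the resident set size (RSS, in KB) of all shfs processes.
# Prints "shfs RSS: 0 KB" when no shfs process is running (e.g. array stopped).
rss_kb=$(ps -o rss= -C shfs 2>/dev/null | awk '{s+=$1} END {print s+0}')
echo "shfs RSS: ${rss_kb} KB"
```

Appending a timestamped line like this to a log every few minutes makes the growth curve obvious.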
-
I don't think so; the issue persists after Transmission is killed, and it also happens when only seeding files. I think it was also causing the bizarre behavior on my openHAB VM; that now looks like it's working fine.
The settings are:
{
  "alt-speed-down": 50,
  "alt-speed-enabled": false,
  "alt-speed-time-begin": 540,
  "alt-speed-time-day": 127,
  "alt-speed-time-enabled": false,
  "alt-speed-time-end": 1020,
  "alt-speed-up": 50,
  "bind-address-ipv4": "0.0.0.0",
  "bind-address-ipv6": "::",
  "blocklist-enabled": false,
  "blocklist-url": "http://www.example.com/blocklist",
  "cache-size-mb": 4,
  "dht-enabled": true,
  "download-dir": "/downloads/complete",
  "download-queue-enabled": true,
  "download-queue-size": 5,
  "encryption": 1,
  "idle-seeding-limit": 30,
  "idle-seeding-limit-enabled": false,
  "incomplete-dir": "/downloads/incomplete",
  "incomplete-dir-enabled": true,
  "lpd-enabled": false,
  "message-level": 2,
  "peer-congestion-algorithm": "",
  "peer-id-ttl-hours": 6,
  "peer-limit-global": 200,
  "peer-limit-per-torrent": 50,
  "peer-port": 51413,
  "peer-port-random-high": 65535,
  "peer-port-random-low": 49152,
  "peer-port-random-on-start": false,
  "peer-socket-tos": "default",
  "pex-enabled": true,
  "port-forwarding-enabled": true,
  "preallocation": 1,
  "prefetch-enabled": true,
  "queue-stalled-enabled": true,
  "queue-stalled-minutes": 30,
  "ratio-limit": 3,
  "ratio-limit-enabled": true,
  "rename-partial-files": true,
  "rpc-authentication-required": false,
  "rpc-bind-address": "0.0.0.0",
  "rpc-enabled": true,
  "rpc-host-whitelist": "",
  "rpc-host-whitelist-enabled": true,
  "rpc-password": "{1ddd3f1f6a71d655cde7767242a23a575b44c909n5YuRT.f",
  "rpc-port": 9091,
  "rpc-url": "/transmission/",
  "rpc-username": "",
  "rpc-whitelist": "127.0.0.1",
  "rpc-whitelist-enabled": false,
  "scrape-paused-torrents-enabled": true,
  "script-torrent-done-enabled": false,
  "script-torrent-done-filename": "",
  "seed-queue-enabled": false,
  "seed-queue-size": 10,
  "speed-limit-down": 100,
  "speed-limit-down-enabled": false,
  "speed-limit-up": 100,
  "speed-limit-up-enabled": false,
  "start-added-torrents": true,
  "trash-original-torrent-files": false,
  "umask": 2,
  "upload-slots-per-torrent": 14,
  "utp-enabled": true,
  "watch-dir": "/watch",
  "watch-dir-enabled": true
}
-
Hi.
I'm having some issues with this container. Not sure if anyone else has noticed, but the Transmission container looks like it's triggering a memory leak in the shfs process. I'm not having this issue with other containers (I will keep testing and monitoring). Quoting myself:
Quote:
I have been able to reproduce this with just the Transmission container, the VM engine disabled and no plugins; the Docker image is loaded from an SSD outside of the array, manually mounted before array start.
The test was:
All plugins removed, VM engine disabled, server rebooted to clear shfs from other tests, SSD manually mounted, array started, Transmission started, a few torrents seeding and/or downloading. In just a few minutes it was clear that shfs memory was growing fast, but to be sure I waited 2+ hours and checked again, seeing the RAM over 200 MB and not going lower even after stopping Transmission.
In the exact same test with Deluge instead of Transmission, it never went over 30 MB after 15 hours and loads of torrents.
So, to recap: the "leak" looks like it is triggered by the Transmission container (the Linuxserver.io version) and needs a reboot to clear it (if the container is stopped and Deluge is started, the RAM usage continues to grow).
What the exact problem is, no idea at the moment; as Jeronyson noted, it's an issue not present on unRAID 6.3.5.
-
I have been able to reproduce this with just the Transmission container, the VM engine disabled and no plugins; the Docker image is loaded from an SSD outside of the array, manually mounted before array start.
The test was:
All plugins removed, VM engine disabled, server rebooted to clear shfs from other tests, SSD manually mounted, array started, Transmission started, a few torrents seeding and/or downloading. In just a few minutes it was clear that shfs memory was growing fast, but to be sure I waited 2+ hours and checked again, seeing the RAM over 200 MB and not going lower even after stopping Transmission.
In the exact same test with Deluge instead of Transmission, it never went over 30 MB after 15 hours and loads of torrents.
So, to recap: the "leak" looks like it is triggered by the Transmission container (the Linuxserver.io version) and needs a reboot to clear it (if the container is stopped and Deluge is started, the RAM usage continues to grow).
What the exact problem is, no idea at the moment; as Jeronyson noted, it's an issue not present on unRAID 6.3.5.
Now that I have isolated the issue I will validate it on the main server, switching from Transmission to Deluge while I think about what further tests to run.
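The before/after check in the test above can be sketched as a small script. The container name `transmission` and the settle delay are assumptions, and the docker call is skipped where docker isn't available:

```shell
# Compare shfs RSS (KB) before and after stopping the suspect container.
# A leak shows up as "after" staying high even though the client is gone.
before=$(ps -o rss= -C shfs 2>/dev/null | awk '{s+=$1} END {print s+0}')
if command -v docker >/dev/null 2>&1; then
  docker stop transmission 2>/dev/null   # container name is an assumption
  sleep 5                                # give things a moment to settle
fi
after=$(ps -o rss= -C shfs 2>/dev/null | awk '{s+=$1} END {print s+0}')
echo "shfs RSS before: ${before} KB, after: ${after} KB"
```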
-
[Support] Linuxserver.io - Deluge
in Docker Containers
Posted
Sadly no. I was searching about the bandwidth issue (limits being ignored), and what I could find was several comments about it being a libtorrent library issue; then I found that comment about the rebase (I can't remember where, but it was just that one line). I checked the update logs and tested the previous build and another based on Arch Linux, and both obey the global limit. BUT then I found that something is still off with the upload limit. Let's say my line's upload is 20+ Mb/s and I set the global limit to 5, with 4 torrents seeding: the individual upload of each torrent will be pretty random every few seconds, ranging between 0 and a few hundred, but the total (0-2 Mb/s) is pretty far from the limit (5 Mb/s).
If I remove the limit, the upload quickly goes to 15+ Mb/s. If I set a per-torrent limit (1 Mb/s), I can see the upload of each one close to that limit, 800-900 Kb/s. Right now I have the latest version (Linuxserver) with individual limits.
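For anyone wanting to reproduce the comparison, the global and per-torrent caps can also be set from the shell via deluge-console's `config` command. The key names are Deluge core options, speeds are in KiB/s (-1 = unlimited), and the values here are illustrative; the calls are guarded so this is a no-op where deluge-console isn't installed:

```shell
# Set Deluge's global and per-torrent upload caps via deluge-console.
if command -v deluge-console >/dev/null 2>&1; then
  deluge-console "config -s max_upload_speed 5120"              # ~5 MB/s global
  deluge-console "config -s max_upload_speed_per_torrent 1024"  # ~1 MB/s each
else
  echo "deluge-console not available"
fi
```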