linuxserver.io

[Support] Linuxserver.io - Deluge

367 posts in this topic

Recommended Posts

From what I could find, the issue with bandwidth limits is caused by a buggy libtorrent library (v1.1.5). Linuxserver.io rebased the image to Alpine Edge, which pulls in that version, and it doesn't work with Deluge 1.3.15.

 

On 5/15/2018 at 6:08 PM, cferrero said:
From what I could find, the issue with bandwidth limits is caused by a buggy libtorrent library (v1.1.5). Linuxserver.io rebased the image to Alpine Edge, which pulls in that version, and it doesn't work with Deluge 1.3.15.
 
Got some links to that please?

On 5/16/2018 at 7:56 AM, CHBMB said:
On 5/15/2018 at 6:08 PM, cferrero said:
From what I could find, the issue with bandwidth limits is caused by a buggy libtorrent library (v1.1.5). Linuxserver.io rebased the image to Alpine Edge, which pulls in that version, and it doesn't work with Deluge 1.3.15.
 

Got some links to that please?

 

Sadly no. I was searching about the bandwidth issue (limits being ignored), and all I could find were several comments saying it's a libtorrent library issue. Then I found that comment about the rebase (I can't remember where, but it was just that one line). I checked the update logs and tested the previous build plus another image based on Arch Linux; both obey the global limit. BUT then I found that something is still off with the upload limit. Say my line's upload is 20+ Mb/s and I set the global limit to 5 Mb/s, with 4 torrents seeding: the individual upload of each torrent is pretty random every few seconds, ranging between 0 and a few hundred Kb/s, but the total (0-2 Mb/s) stays pretty far from the 5 Mb/s limit.

If I remove the limit, the upload quickly climbs to 15+ Mb/s. If I put a per-torrent limit of 1 Mb/s on each, I can see the upload of each one close to that limit, 800-900 Kb/s. Right now I'm running the latest (linuxserver) version with individual limits.


What's the usual memory usage of this Docker image for you guys?

For me, it's over 1 GB, even when I'm not running anything.

I seed a few shows for a few days, then delete them, but the container still shows well over 1 GB of usage (docker stats command).

Is this normal?

Shouldn't the usage go down after a while?

 

thanks!
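A generic way to sanity-check this (not specific to this image; the container name "deluge" is an assumption) is to take snapshots of the container's reported usage over time:

```shell
# One-shot snapshot of the container's reported memory/CPU usage
# (container name "deluge" is an assumption - substitute your own)
docker stats --no-stream deluge
```

Worth noting: on some Docker versions the memory figure reported by docker stats includes filesystem page cache, which a torrent client churns through heavily, so a high number after seeding isn't necessarily a leak.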


Having the same issue discussed above, where the global bandwidth limiter isn't doing anything at all. It does work on a per-torrent basis, however.

On 5/16/2018 at 1:56 PM, CHBMB said:
On 5/16/2018 at 12:08 AM, cferrero said:
From what I could find, the issue with bandwidth limits is caused by a buggy libtorrent library (v1.1.5). Linuxserver.io rebased the image to Alpine Edge, which pulls in that version, and it doesn't work with Deluge 1.3.15.
 

Got some links to that please?

Doing some general Googling, I found these links to others in the Deluge world with similar issues. Hope this helps.

 

https://github.com/arvidn/libtorrent/issues/2857

 

https://dev.deluge-torrent.org/ticket/3153

 

https://forum.deluge-torrent.org/viewtopic.php?p=227823

 

if this is unclear, please forgive my noobish research.

Is anyone else having an issue where, once a torrent finishes downloading, it stays in the downloading state at 100%? The only way to get it to move to seeding is to force a recheck on the file.

Any ideas what could cause this, or a solution to fix it?


I'm hoping someone has a fix for this. Whenever I restart the Docker container, several of my torrents will show errors requiring a recheck. It's very annoying, as it can take a day or two for a very large torrent.

 

Does anyone else have this problem?

 

Here are my mappings - appdata is set to R/W Slave on a UD (Unassigned Devices) drive. Thanks in advance for any help.

 

[screenshots of the container template and mappings attached]

DZMM said:
I'm hoping someone has a fix for this. Whenever I restart the Docker container, several of my torrents will show errors requiring a recheck. It's very annoying, as it can take a day or two for a very large torrent.

Does anyone else have this problem?

Here are my mappings - appdata is set to R/W Slave on a UD (Unassigned Devices) drive. Thanks in advance for any help.

[screenshots of the container template and mappings attached]
What happens if you stop it from the command line doing this:

docker stop -t 60 deluge

Will it still have to check the consistency of the torrents?

Sent via Tapatalk because my wife thinks I spend too much time on the computer

1 hour ago, Squid said:

What happens if you stop it from the command line doing this:

docker stop -t 60 deluge

Will it still have to check the consistency of the torrents?

Sent via Tapatalk because my wife thinks I spend too much time on the computer
 

Thanks - I'll have to try this maybe tomorrow/Monday. I've just created a fresh docker instance just in case some weird plugin settings etc were causing problems, so I'm re-adding my old torrents by rechecking.


Hi,

 

I just updated to the new Deluge and it looks like I'm also affected by the ignore-upload-limit issue. I'm also affected by not being able to have more than 13 active torrents.

 

Is there a way for me to downgrade back to my previous version?

 

Thanks.
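In general, yes - Docker images can be pinned to an older tag instead of :latest. A rough sketch of the idea (the tag name below is a placeholder, not a verified linuxserver tag - check the image's tag list on Docker Hub first):

```shell
# Stop and remove the current container; settings in the mounted /config survive on the host
docker stop deluge && docker rm deluge

# Pull a specific older tag instead of :latest
# (SOME-OLDER-TAG is a placeholder - look up the real tag on Docker Hub)
docker pull linuxserver/deluge:SOME-OLDER-TAG
```

On unRaid the same thing is done by editing the container template and appending the tag to the repository field (e.g. linuxserver/deluge:SOME-OLDER-TAG).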

1 hour ago, DZMM said:

Thanks - I'll have to try this maybe tomorrow/Monday. I've just created a fresh docker instance just in case some weird plugin settings etc were causing problems, so I'm re-adding my old torrents by rechecking.

Yeah, I haven't used torrents for years now, but I remember from uTorrent that if the program exited abruptly, a check had to be done. unRaid's default timeout for stopping any container is 10 seconds; if it doesn't stop in that time, the container is forcibly killed. The command tells Docker not to force-kill it for 60 seconds.

 

Due to a quirk of dockerMan, though, you won't be able to restart the container via the GUI if you stop it that way. You'll have to run:

docker start deluge


 

If this fixes it, then I'll reissue my previously rejected PR that added a selectable timeout for stopping a container.
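Put together, the workaround described above is just:

```shell
# Give Deluge up to 60 seconds to exit cleanly (unRaid's default is 10 seconds,
# after which the container is forcibly killed)
docker stop -t 60 deluge

# dockerMan quirk: after a manual CLI stop, restart from the CLI too, not the GUI
docker start deluge
```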

1 hour ago, Squid said:

Yeah, I haven't used torrents for years now, but I remember from uTorrent that if the program exited abruptly, a check had to be done. unRaid's default timeout for stopping any container is 10 seconds; if it doesn't stop in that time, the container is forcibly killed. The command tells Docker not to force-kill it for 60 seconds.

 

Due to a quirk of dockerMan, though, you won't be able to restart the container via the GUI if you stop it that way. You'll have to run:


docker start deluge


 

If this fixes it, then I'll reissue my previously rejected PR that added a selectable timeout for stopping a container.

I think this is my problem. I've noticed that since I moved my appdata to an unassigned drive, I've had problems stopping and editing Dockers, e.g. if I don't stop a container before editing it, I can lose the whole image after editing, i.e. I can't even go back to CA and just restore - I have to re-create it from scratch.

 

I'll do the Deluge stop test when I can, but I think I might have a bigger problem - I'll also try running from my cache to see if that removes the problem.

 

Edit: Just realised I meant the docker image, not appdata - I'll try moving the docker image back to my cache drive, as that's easier.

Edited by DZMM


@Squid I had a window to reboot - no joy with docker stop -t 60 deluge. 

 

I tried moving the docker image and appdata back to my cache from UD. At least that lets me edit Dockers without losing the whole image.

 

Logs attached - nothing looks wrong in there.

 

 

[deluge.log attached]

10 hours ago, DZMM said:

@Squid I had a window to reboot - no joy with docker stop -t 60 deluge. 

 

I tried moving the docker image and appdata back to my cache from UD. At least that lets me edit Dockers without losing the whole image.

 

Logs attached - nothing looks wrong in there.

 

 

[deluge.log attached]

Seems to be working now - I've done a few restarts with no errors. Will monitor.


I'm using Deluge together with Radarr. 

All the logic kind of works: torrents are added, and files are downloaded and moved to the /download/complete folder.

 

I have also enabled the standard Extractor plugin (also tried SimpleExtractor).

 

The movie gets extracted without any problem. 

 

After this, Radarr detects the movie file and creates the new movie folder in my movies folder.

It starts copying the file from the /download/complete folder to my movie folder.

 

The problem starts now. 

Radarr copies the file to movie.partial~

When it reaches 100% it doesn't finish, but restarts the copying from 0%. This goes on indefinitely.

 

If I stop seeding the torrent in Deluge, Radarr completes the copying. 

 

Seems like Deluge locks the file in some way. 

 

I can understand a lock on the seeding rar files, but the extracted movie file really shouldn't be locked while seeding (since this file is not actually being seeded).

 

Does anyone know what could cause this problem and how to fix it?

 

Thanks! 
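One way to test the "Deluge is locking the file" theory is to look at open file handles inside the container while the copy is stalling. A diagnostic sketch (container name and path are assumptions, and it presumes lsof is available in the image):

```shell
# List any processes inside the container holding the extracted file open
# (path is an example - substitute the real movie file)
docker exec deluge lsof /downloads/complete/movie.mkv
```

If lsof isn't present in the image, grepping the /proc/*/fd symlinks inside the container gives similar information.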

 

 


My Deluge container will not write to my NAS.

It is connectable from outside my network and through my local network, but every time I add a torrent it starts downloading and the progress will not increase. Then Deluge drops the connected computer, because no data is being written to my array.

 

I'm using a cache drive and my array is using btrfs.

All appdata is written to the SSD only, and my /mnt/user/media/ share has the cache drive enabled.

 

*Edit: I inserted the correct screenshot*

[Screenshot (2) attached]

Edited by jamse


I'm sorry if this has been answered already, but is there a way for Deluge to start its daemon automatically after the Docker app updates?

I have set the Docker app to auto-update, and almost every morning I need to connect to the web UI to start the daemon.
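As a blunt workaround until the root cause is found, the whole container could be restarted on the host after the nightly auto-update window. A cron sketch (the time and container name are assumptions):

```shell
# Host crontab entry: restart the container at 06:00,
# after the auto-update has usually run ("deluge" is an assumed name)
0 6 * * * docker restart deluge
```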

Edited by avpap

On 8/25/2018 at 6:36 AM, jamse said:

My Deluge container will not write to my NAS.

It is connectable from outside my network and through my local network, but every time I add a torrent it starts downloading and the progress will not increase. Then Deluge drops the connected computer, because no data is being written to my array.

I'm using a cache drive and my array is using btrfs.

[Screenshot (1) attached]

You aren't using our container - you're using binhex's - please go to his thread for support.

On 8/26/2018 at 11:35 AM, j0nnymoe said:

You aren't using our container - you're using binhex's - please goto his thread for support.

 

Whoops, sorry - I inserted the wrong screenshot. I've updated my original post. I'm having issues with binhex's Deluge as well, so I'm assuming it's a permissions issue?

[Screenshot (2) attached]

 

*Solved the issue:

I was pointing the downloads folder in Deluge at the absolute host path that I had set up in the unRaid docker settings.

I didn't realize that once it was mounted in the container, it would show up under a different, container-side path.

Edited by jamse
solved the issue
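For anyone hitting the same thing: the host path and the in-container path are two different namespaces, linked only by the volume mapping. A sketch of the idea (paths and names are examples, not the exact unRaid template):

```shell
# Host path /mnt/user/media/downloads appears inside the container as /downloads
docker run -d --name deluge \
  -v /mnt/user/appdata/deluge:/config \
  -v /mnt/user/media/downloads:/downloads \
  linuxserver/deluge
```

Inside Deluge's preferences, the download folder must then be set to /downloads (the container-side path), not /mnt/user/media/downloads.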

