[Support] binhex - rTorrentVPN



On 15/09/2017 at 1:38 PM, lift said:

binhex,

 

I have your delugevpn docker container working great, but rtorrentvpn isn't able to connect to peers. In the rtorrentvpn container, with the latest Ubuntu Desktop torrent, I see 1600+ seeders and 60+ peers, but not one will connect, which makes me wonder if port forwarding is not working correctly. I'm using PIA as my VPN provider, but I also tried a custom OpenVPN config with another provider and got the same results. The deluge container is able to quickly connect to peers and begin the download. I'm using the same /config/openvpn files in both containers (not the same directory, but the same files copied to the other container's volume).

 

There are 3 files in the below pastebin: docker-compose.yml snippets for deluge and rtorrent, rtorrent's supervisord.log and deluge's supervisord.log.

 

https://pastebin.com/cDx3SNU0

 

OK, let's try with debug turned on; this might shed some more light on this. Please just post the supervisord.log file here, I don't really need any other files (although I appreciate the extra info). Steps in the link below:-
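For anyone unsure what "debug turned on" means here: in the binhex images it is controlled by an environment variable on the container. A minimal sketch (the container name and host path below are examples; adjust to your own template):

```shell
# sketch: re-create the container with debug logging enabled
# DEBUG=true makes supervisord.log verbose
docker run -d \
  --name rtorrentvpn \
  --cap-add=NET_ADMIN \
  -e DEBUG=true \
  -e VPN_ENABLED=yes \
  -v /mnt/cache/appdata/rtorrentvpn:/config \
  binhex/arch-rtorrentvpn

# the verbose log then ends up in the mapped config folder:
# /mnt/cache/appdata/rtorrentvpn/supervisord.log
```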

 

 

Link to comment
5 hours ago, binhex said:

 

 

rtorrent will auto-restart on crash; I'm assuming this is what you're seeing, so you shouldn't need to restart the container, it's all handled for you. As for why it's crashing on downloads in progress, it could be disk I/O related. Are you writing downloads directly to the unRAID array, or to the cache drive? If it's the array, write speeds are not good, and this in turn could cause issues with configuration files as well (if they are also stored on the array). If you do have access to a cache drive I would recommend using that to store downloads, and then use mover/ruTorrent to move the completed downloads to the array.

 

I am writing torrents/downloads to a directory on my cache pool, which uses an SSD, so I don't think I should have any problems there. I'm only running one VM plus the rtorrentvpn docker and its downloads on the SSD at this point, so I would hope it isn't overworked either. I suppose I could try moving the downloads to a separate SSD mounted with Unassigned Devices, but I'm pretty much out of space on my server, so that would be complicated to physically arrange.

 

It actually got worse: yesterday I think it brought networking on my entire unRAID server to its knees, and I had to reboot the whole server. I posted in this thread about it: 

Since rebooting the entire server it has been running OK, though I still get the timeout errors frequently in ruTorrent. Keeping an eye on it for now.

 

 

8 hours ago, RallyAK said:

I'm almost out of space on my cache pool (2x4TB in raid1), 95% of which is being used for long-term seeding. I have 2x2TB drives at my disposal and I'd like to use them to expand capacity for rTorrent, but I'm not sure how to go about that.

 

From what I've read in the FAQs it doesn't sound like it's possible to expand the cache pool with smaller drives, so I'm wondering if it's possible to add the 2TB drives to my current array, create additional /data paths to each, and use them for rTorrent exclusively?

 

As binhex said, I think this is a prime use case for Unassigned Devices. Put in a dedicated non-array drive and mount it with UD so that you aren't constantly reading from/writing to the entire array and causing drives to spin 24/7 (unless you are OK with that, in which case just put it on the array somewhere).

Link to comment
14 hours ago, binhex said:

OK, so the problem you are going to have is that if you do long-term seeding from your array, your disks will be spun up most of the time, using more electricity. If this isn't a problem for you, then you could simply do that: stop the container, copy/move the long-term seeded downloads to the array, add a volume mapping to the array, then start the container and re-locate the long-term seeded torrents to point at the array. This really is your best option, as it means you won't run out of disk space, and you could then use your cache drive for short-term seeding only. You could mess around with Unassigned Devices and perhaps work around it that way, but it might be quite complex to do; I have no experience with it, so you would be on your own.

 

8 hours ago, deusxanime said:

As binhex said, I think this is a prime usage for Unassigned Devices. Put in a dedicated non-array drive and mount with UD so that you aren't constantly reading from/writing to entire array and causing drives to be spinning 24/7 (unless you are ok with that, then just put on array somewhere).

 

Thanks for the feedback; to save power I'd rather not seed from the array. I'd like to go the UD route, but I'm having trouble mapping a new path to one of the 2TB drives. In docker settings I've created a second host container path called "/data2" mapped to a 2TB UD drive, but after a restart ruTorrent does not recognize the new path.

 

Is it even possible to map this docker to multiple /data paths or drives (e.g. cache + UD)?

 

Another option is running DelugeVPN and mapping it to a UD drive but I prefer to stick with rTorrent since that's what I'm familiar with and migrating hundreds of active torrents from rTorrent to Deluge appears to be a tedious process. :-\

Edited by RallyAK
Link to comment

You should be able to add as many mounts/mappings as needed to a docker. When you say restart, do you mean unRAID or just the container? If you mean unRAID, you may have to tell UD to auto-mount the disk. If just the docker, the mapping should survive a restart and come back, as long as you added it to the container definition, as far as I know.
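As a sketch of what that extra mapping looks like on the docker command line (unRAID's template UI generates something equivalent; the names and host paths below are examples):

```shell
# two data mappings: one on the cache pool, one on a UD-mounted drive
docker run -d \
  --name rtorrentvpn \
  --cap-add=NET_ADMIN \
  -v /mnt/cache/torrents:/data \
  -v /mnt/disks/ud_2tb:/data2 \
  -v /mnt/cache/appdata/rtorrentvpn:/config \
  binhex/arch-rtorrentvpn
# inside the container, ruTorrent should then see both /data and /data2
```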

Link to comment

I'm currently using the binhex DelugeVPN docker but investigating alternatives. Deluge is slowing my unRAID server to a crawl. I noticed that my most recent parity check was running at about 5 MB/s, so I stopped the Deluge docker and the parity check speed jumped to over 90 MB/s. This was with no active torrents down or up. :S

 

I'm tired of trying to get Deluge to be less of a resource hog, and I've seen posts suggesting that this docker is much less resource intensive. A few questions, though.

 

1) Is rtorrent less of a resource hog?

 

2) Will my Sonarr and Radarr dockers work with rtorrent?

Looks like both have rtorrent presets so I'm guessing this isn't an issue.

 

3) Is there a plugin for rtorrent similar to AutoRemovePlus on Deluge that will automatically remove torrents based on ratio and/or seed time?

 

I'm working my way through this thread, but these are the only things that really matter. The rest I can probably work out as I go along.

 

Thanks

Edited by wgstarks
Link to comment

Do your downloads write to the cache pool, to your array, or to another disk using Unassigned Devices? If you see my post from a few days back, I've run into some performance issues as well using this rtorrentvpn container, and I'm using the cache pool as my download location. Still testing and monitoring, but I'm starting to wonder if torrents would be better suited to a VM than a container. 

Link to comment

I currently have my downloads writing to an external UD drive. Before that I was getting delays that I think were related to PCIe bus bottlenecks (just a guess). Using the UD-mounted drive has solved that problem, but Deluge still causes problems when running a parity check or rebuild, even when it's not downloading or seeding. Not really sure what it's doing; I don't see any unusual numbers for CPU load or network activity.

Link to comment

Seeing similar performance problems mentioned in linuxserver's rtorrent docker thread. Maybe torrenting is just not suited to containers... though I assume there are many people here who use it just fine?

 

Update on my binhex-rtorrentvpn setup, though. It seems to be running really slowly (when it is going). Some stuff that has been running for a few days isn't even half done, when it would usually have completed overnight easily on my old VM setup. I think rtorrent is constantly crashing and restarting, which is also causing the ruTorrent timeouts. Of course this just builds up and makes things worse, because stuff isn't completing and I keep wanting to add more...

Link to comment
wgstarks said:

1) Is rtorrent less of a resource hog?

2) Will my Sonarr and Radarr dockers work with rtorrent?

3) Is there a plugin for rtorrent similar to AutoRemovePlus on Deluge that will automatically remove torrents based on ratio and/or seed time?
Yes to all

Sent from my SM-G935F using Tapatalk

Link to comment
6 hours ago, deusxanime said:

Might be torrenting is just not suited to containers...

 

No, it's not related to running this in a container; I'm confident of that.

6 hours ago, deusxanime said:

Though I assume there are many people here who use it just fine?

 

Indeed, count me as one. I use this as my daily downloader and speeds are very good (better than Deluge). If you look back a few pages or so, there are reports of people running this exact image with a few hundred torrents with no reported issues.

 

6 hours ago, deusxanime said:

Update on my binhex-rtorrentvpn setup here though. It seems to be running really slow (when it is going).

 

This is odd; one of the benefits of running this container is that it's extremely fast (faster than Deluge in my experience). I can basically flood my connection with this and have to throttle it back to prevent it using up all available bandwidth. If you're seeing slow speeds, I would assume you don't have an active incoming port. Do you have the green tick at the bottom of the screen? If it shows a red cross, then you don't have a working incoming port.

Link to comment
17 hours ago, wgstarks said:

I currently have my downloads writing to an external UD drive. Before that I was getting delays that I think were related to pcie bus bottlenecks (just a guess). Using the UD mounted drive has solved that problem, but Deluge still causes problems when running parity check or rebuild even when it's not downloading or seeding. Not really sure what it's doing. Don't see any unusual numbers for CPU load or network activity.

 

I've never encountered that for rtorrent or deluge. Are you writing to /mnt/user for either /config or /data?

Link to comment
1 hour ago, binhex said:

are you writing to /mnt/user for either /config or /data?

I was originally. I wanted to avoid using /mnt/diskX/ so that I wouldn't have to worry about the disk getting full. Now I'm just mounting a disk in UD for /data. Works very well; I can get about 20 MB/s with well-seeded downloads.

 

The only issue I'm having now is resource related, I think. I have to stop the docker when my system is doing parity checks or rebuilds; otherwise it will slow everything to a crawl. That's the main reason I'm looking at rtorrent.

Link to comment
2 minutes ago, wgstarks said:

The only issue I'm having now is resource related I think. I have to stop the docker when my system is doing parity checks or rebuilds. Otherwise it will slow everything to a crawl.

 

The only reason this would happen is if rtorrent/deluge is writing directly to the array. unRAID is a bit crap at writing to a protected array, and doing lots of little writes (which is what torrent clients do) is going to really slow down things like parity checks. So, two things you could look at:-

 

don't write to the array; instead write to the UD-mounted drive and then do the final move of the completed download to the array, so no incomplete downloads on the array, basically.

or

use a schedule to pause downloads during known parity check times; doable, as you are in control of both when parity checks occur and when pausing occurs on your torrent client.
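A sketch of the scheduling idea, assuming a monthly parity check and simply stopping the container for the duration (the times and container name are illustrative, not from binhex):

```shell
# example root crontab entries:
# stop the torrent container just before the monthly parity check window...
55 23 1 * * docker stop rtorrentvpn
# ...and bring it back the following evening, after the check normally finishes
0 20 2 * * docker start rtorrentvpn
```

Stopping the container outright is the bluntest form of "pausing", but it guarantees no torrent I/O during the check.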

Link to comment
22 minutes ago, binhex said:

 

the only reason this would happen is if rtorrent/deluge is writing directly to the array, as unraid is a bit crap at writing to a protected array doing lots of little writes (which is what torrents client do) is going to really slow down things like parity checks [...]

It wasn't writing to the array. /config is mapped to cache and /data/incomplete is mapped to a UD mount. In fact, it probably wasn't writing at all, since I had paused all the torrents in an effort to speed up the parity check. Pausing all torrents didn't work, so I had to stop the docker. As soon as I did, parity check speed jumped from 5 MB/s to 92 MB/s. No idea why. CPU load was <20% and didn't change much; network activity was near zero.

 

The only thing mapped to the array now is /data/completed (I didn't have an HDD big enough for all the data). I suppose if there was a file transfer in progress when the parity check started, it might have contributed to this. It seems unlikely, though, since the parity check had been running for more than 24 hours at these super slow speeds before I discovered the problem. I can't believe a file transfer of a single torrent could possibly take that long.

 

If you've got some ideas on how to troubleshoot this I'd be glad to give it a shot. Maybe in the Deluge thread, though; I didn't mean to sidetrack this one.

 

Question: I know you have used both. Which do you recommend?

Link to comment

I have no idea why stopping the container would affect parity check speeds if all torrents are paused; that is just plain weird! Hmm, I will have a think about it.

 

2 hours ago, wgstarks said:

Question: I know you have used both. Which do you recommend?

 

There are pros and cons to both:-

 

rtorrent pros

it's fast - in my experience I get about 1/3 faster downloads compared to Deluge.

it's pretty - subjective, I know, but ruTorrent with the Oblivion theme is quite nice (Flood is even nicer), though tbh I use Transdroid 99% of the time so I don't see the UI much.

it's got a ton of features - there are a lot of plugins for ruTorrent, including built-in RSS feed support (tricky to enable on Deluge).

it's lightweight - resource usage is typically low, even with the additional overhead of nginx for the web UI.

 

rtorrent cons

unstable - I have seen situations where adding a dozen or more torrents at one time seems to overload the system and can cause gridlock. I tend not to do this personally, but I know it can happen; Deluge seems a little better in this regard. I've also seen the odd random crash, but as I have built code around this you will rarely notice.

 

deluge pros

stable - yes, it is pretty stable. I've seen people complain that it doesn't scale well with a lot of torrents (1000+), leading to crashes and/or performance issues, but it NEVER crashed for me, and I was using it for well over a year; that's impressive.

development - there seems to be more activity around Deluge than rtorrent, thus possibly ensuring fewer bugs over time. Who knows if rtorrent will ever put out a new release, whereas Deluge is pretty much guaranteed to (around every 6 months to a year).

 

deluge cons

basically the opposite of the rtorrent pros.

 

Personally I use rtorrent/ruTorrent and it works for me, but it does depend on your usage.

 

Link to comment
9 hours ago, binhex said:

 

[...] this is odd, one of the benefits of running this container is that its extremely fast (faster than deluge in my experience) [...] if your seeing slow speeds then i would assume you dont have a active incoming port, do you have the green tick at the bottom of the screen? if it shows a red cross then you dont have an incoming port working.

 

I'd love to figure out what is causing it. Not excited to stand up a VM when there's a nice docker container sitting here available!

 

I don't have an active incoming port because my VPN provider (TorGuard) doesn't have a way to automate that. Every time I disconnect/reconnect I get a new IP and would of course have to go into their web page and manually reassign the requested port to my new VPN IP. I use both rtorrent (previously on CentOS 7) and uTorrent on Windows with the VPN, though, and have never had a problem with speeds; downloads go fast enough that they've been close to saturating my connection, such that I don't even bother manually updating the port forwards anymore, so I don't think that's the issue. Even in the docker container I have seen it jump up to very good speeds, with multiple torrents hitting 500k - 1Mbps or better, but it doesn't take long before they drop way down to 20 kbps if I'm lucky. 

 

After some more searching about running rtorrent in a container (linuxserver.io and Google), I've tried a couple of things. One was to delete the container image, remove anything left over in the session folder, and rebuild. I've also seen suggestions that this can be caused by DHT, so I disabled that. Neither really seemed to improve performance.
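For reference, the "delete and rebuild" attempt above looks roughly like this from the unRAID shell (the container/image names and appdata path are examples from my setup; the session path depends on where your rtorrent.rc points):

```shell
# remove the container and its image, clear leftover session state, re-create
docker stop rtorrentvpn
docker rm rtorrentvpn
docker rmi binhex/arch-rtorrentvpn
rm -rf /mnt/cache/appdata/rtorrentvpn/rtorrent/session/*
# finally re-add the container from the unRAID template / compose file
```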

 

Let me know if you'd like me to upload anything (rc file, logs, etc.) to help with troubleshooting. If you can help me figure it out, it would save me from having to create a separate VM, which would be great.

 

Thanks!

 

edit: I was poking around with iotop at the unRAID CLI:

 

[screenshot: iotop output showing the rtorrent process]

 

rtorrent is basically pegged at 99.99% the majority of the time. I'm not sure if that is just a percentage of all current activity, in which case it might make sense because not a lot else is going on, or a percentage of overall capacity. Either way, it seems to be chewing up quite a bit.

 

Also, in the rtorrent process I see a -p flag, which defines the port, I believe, correct? But I have a different port defined in my .rc file, so it's odd that it is also passed on the command line. Is that just done to provide a default value, which the port in rtorrent.rc then overrides? 

Edited by deusxanime
more details
Link to comment

I'm getting the following error, after this docker ran fine for the past 2 weeks:

 

[28.09.2017 13:45:58] WebUI started.
[28.09.2017 13:45:58] Bad response from server: (200 [parsererror,getplugins]) SyntaxError: Unexpected token ,

Seeding around 1000 torrents. Restarting does not help :(

 

When clicking the settings:

[28.09.2017 13:50:54] JS error: [http://192.168.0.200:9080/js/webui.js : 762] Uncaught TypeError: Cannot read property 'rTorrent' of undefined

 

Edited by bamtan
Link to comment
2 hours ago, bamtan said:

i'm getting the following error, after having this docker run fine for the past 2 weeks:

 


[28.09.2017 13:45:58] WebUI started.
[28.09.2017 13:45:58] Bad response from server: (200 [parsererror,getplugins]) SyntaxError: Unexpected token ,

Seeding around 1000 torrents. Restarting does not help :(

 

When clicking the settings:


[28.09.2017 13:50:54] JS error: [http://192.168.0.200:9080/js/webui.js : 762] Uncaught TypeError: Cannot read property 'rTorrent' of undefined

 

 

It looks like it might be corruption of one or more ruTorrent plugins. Try resetting your plugins:-

 

1. stop the container

2. delete the /config/rutorrent/plugins folder

3. restart the container
Link to comment
16 hours ago, deusxanime said:

rtorrent is basically pegging at 99.99% the majority of the time.

 

It's misleading, I know, but this does not indicate that rtorrent is using 99.99% of all available I/O; instead it means that the rtorrent process is spending 99.99% of its time waiting on I/O. So the question is why it is waiting. Things you might want to check:-

 

1. check available free memory; if this is low then caching will not take place, resulting in poor I/O performance.

2. disk issues could also result in low I/O; have you checked SMART attributes for all drives you are writing to? Check the syslog for any errors relating to disks.

 

Also keep in mind that rtorrent will use a lot of I/O resources when you add a torrent, because it pre-allocates the files on disk; if you add multiple torrents in one go this will compound the issue.
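A few quick host-side checks for points 1 and 2 (the syslog path is the unRAID default; the smartctl line assumes smartmontools is installed and is left commented out):

```shell
# 1. free memory - a low "available" figure leaves no room for the page cache
free -h

# headroom on the filesystems you download to
df -h

# 2. scan the syslog for disk/ata errors
grep -iE 'ata[0-9]+|i/o error' /var/log/syslog 2>/dev/null | tail -n 20

# SMART health summary for each drive you write to, e.g.:
# smartctl -H /dev/sdb
```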

Link to comment
4 hours ago, binhex said:

 

its misleading i know, but this does not indicate that rtorrent is using 99.99% of all available I/O, instead it means that the rtorrent process is spending 99.99% of the time waiting on I/O [...]

 

I haven't used iotop much, so I suspected it might not actually be using 99% of the I/O, but it's still interesting that it's spending that time waiting, as there isn't much else going on.

 

1. I have 64 GB of memory in the system and only a couple of GB used, so that shouldn't be a problem.

2. No issues on the disks that I can see. It is my cache pool, which is a btrfs raid1 of 1 TB Samsung SSDs. Both are brand new, bought a few weeks ago for this unRAID build, and none of the other dockers running on there are having issues. No SMART errors are being reported. I'll dig into the syslog a bit more, but so far there's no indication of problems on the drives.

 

I'm wondering if there is something conflicting, or something it doesn't like about writing to cache - either the btrfs filesystem, or the fact that it is mirrored, or something?

 

I found a port conflict: openvpn-as also uses port 9443, so I changed rtorrentvpn to use host port 9444 instead, but no help.

 

I thought it might be a network issue with bonded NICs (the server motherboard has dual NICs and bonding was the default when I installed unRAID), so I removed that and am now just using eth0 with no bonding. No joy.

 

Current experiment: I put in a plain drive that will not be part of the array. I'll mount it with Unassigned Devices and set that as my download location.

Link to comment
52 minutes ago, deusxanime said:

I'm wondering if there is something conflicting or that it doesn't like about writing to cache - either the btrfs filesystem or that it is mirrored or something??

 

It won't be the fact that it's using btrfs, as I am (as well as a lot of other users) with no I/O issues, but it could be mirror related. I have no idea how many people are using a RAID 1 cache setup (I'm not). I'm not saying that is the cause, but it's definitely something to look at; your suggestion of using Unassigned Devices is probably worth a go to rule this out.

Link to comment

I moved the download location to an unassigned drive and started it up again this morning. It ran great all day! I got good, consistent speeds with no drop-offs after an hour or so like before, and the files I was downloading (about 5, each 6-7 GB), which had reached less than 30-40% over the last couple of days, all completed in a few hours. 

 

My issue is definitely with downloading to my mirrored cache pool. Not sure if that is due to the fact that it is mirrored, that the docker container is also running on the cache pool, or a combination of the two. You run the container and the download location both off your (non-mirrored) cache drive at the same time as well, correct? Any ideas what the issue may be? If not, I can post in the general support section to see if I can catch the eye of an unRAID dev.

 

edit: Almost forgot to add: no more "request has timed out" errors in the ruTorrent web GUI, which I was getting constantly before.

Edited by deusxanime
more details
Link to comment
7 hours ago, deusxanime said:

You run the container and download location both off your (non-mirrored) cache drive at the same time as well, correct?

 

Yes I do, so the finger of blame is currently pointing at the mirror. This is of course software RAID 1, and thus is going to be slow and will consume CPU cycles too. I would advise posting this in the general section and seeing if anybody has any ideas, but my gut feeling is that there isn't going to be a quick or easy fix to this other than running off UD or breaking the mirror :-(

Link to comment
On 9/22/2017 at 5:00 AM, binhex said:

 

ok lets try with debug turned on, this might shed some more light on this, please just post the supervisord.log file here, i dont really need any other files (although i appreciate the extra info), steps in the link below:-

 

 

 

rtorrentvpn supervisord.log attached.

supervisord.log

Link to comment
  • binhex locked this topic
Guest
This topic is now closed to further replies.