Cat_Seeder

Posts posted by Cat_Seeder

  1. 3 minutes ago, DavidAUK said:

    No, that doesn't work. Shows as closed.

     

    Does it matter if I connect to the VPN in UDP or TCP protocol?

     

     

Depends on your VPN provider. PIA is fine with both (OpenVPN connects with whatever protocol you have selected, and then the container goes through a series of HTTP calls to obtain a port). Maybe contact your VPN and check whether they have changed anything on their side?

     

Since netstat is not showing the desired port as LISTEN, you may also try to set the port dynamically:

     

    rtxmlrpc network.port_range.set '' "60210-60210"
    rtxmlrpc dht.port.set '' "60210"

    And also IP settings:

    rtxmlrpc network.bind_address.set '' "the vpn assigned IP"
    rtxmlrpc network.local_address.set '' "the external IP"
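If the rtxmlrpc calls take, it's worth confirming that rtorrent is actually listening. A tiny sketch of the check (the function reads `netstat -lntu` output on stdin; netstat being available inside the container is an assumption):

```shell
#!/bin/bash
# Check whether a given TCP port appears as LISTEN in `netstat -lntu`
# output supplied on stdin. The pattern requires a ":" or "." before
# the port and whitespace after it, so 60210 won't match 160210.
port_is_listening() {
  local port="$1"
  grep -Eq "^tcp.*[:.]${port}[[:space:]].*LISTEN"
}

# Inside the container:
#   netstat -lntu | port_is_listening 60210 && echo open || echo closed
```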

     

  2. 14 minutes ago, DavidAUK said:

Yes. That seems to be set up correctly in rtorrent.rc and is shown in the GUI.

Cool, restart your container and, using the IP that your VPN has assigned to your box and port 60210, check if your box is connectable (https://www.yougetsignal.com/tools/open-ports/). If that's good and your trackers show your box as connectable, you're done (the port indicator is known to be a little flaky and give false readings).

  3. 14 hours ago, DavidAUK said:

    I'm having a problem with the listening port. It's correctly forwarded at the VPN provider and has worked in the past. The port I've set is 60210 and that's the port that the GUI says it's listening on. The status bar says it's closed.

     

    Here's the results of the netstat from the docker container.

    
    [root@50c27a10609f /]# netstat -lntu
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State
    tcp        0      0 127.0.0.11:44369        0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:9080            0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:7777          0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN
    udp        0      0 127.0.0.11:54700        0.0.0.0:*
    [root@50c27a10609f /]#

    I would expect 60210 to show up there, so maybe it's related to that? Help appreciated.
     

Have you manually set port 60210 as your rtorrent port (i.e., network.port_range.set = 60210-60210 in rtorrent.rc)? The container will only detect and set ports automatically for PIA and AirVPN.
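For reference, the manual port pin is just a couple of lines in rtorrent.rc (the port_random line is optional but avoids surprises):

```
network.port_range.set = 60210-60210
network.port_random.set = no
```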

  4. 8 hours ago, morreale said:

     

     

     

    Anyone?

     

I don't have any extra trackers, sorry. However, my configuration is at:

• /home/nobody/.irssi (which contains a symlink pointing to /usr/share/autodl-irssi/AutodlIrssi/) and
    • /home/nobody/.autodl (with a symlink ultimately pointing to /config/autodl/autodl.cfg)

     

    docker exec -it  nginx-proxy_rtorrentvpn_1 sh
    
    sh-5.0# ls -lah /home/nobody/.irssi/
    total 32K
    drwxrwxr-x 1 nobody users 4.0K Jan 11 20:37 .
    drwxrwxr-x 1 nobody users 4.0K Jan 11 20:18 ..
    -rw-r----- 1 nobody users 6.6K Jan 11 20:37 config
    drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 scripts
    
    sh-5.0# ls -lah /home/nobody/.irssi/scripts/
    total 24K
    drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 .
    drwxrwxr-x 1 nobody users 4.0K Jan 11 20:37 ..
    lrwxrwxrwx 1 nobody users   36 Jan  7 12:41 AutodlIrssi -> /usr/share/autodl-irssi/AutodlIrssi/
    drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 autorun
    
    sh-5.0# ls -lah /home/nobody/.irssi/scripts/autorun
    total 16K
    drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 .
    drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 ..
    lrwxrwxrwx 1 nobody users   39 Jan  7 12:41 autodl-irssi.pl -> /usr/share/autodl-irssi/autodl-irssi.pl
    
    sh-5.0# ls -lah /home/nobody/.autodl/
    total 32K
    drwxrwxr-x 1 nobody users 4.0K Jan 11 19:38 .
    drwxrwxr-x 1 nobody users 4.0K Jan 11 20:18 ..
    lrwxrwxrwx 1 nobody users   25 Jan 11 19:37 autodl.cfg -> /config/autodl/autodl.cfg
    -rwxrwxr-x 1 nobody users   69 Jan  7 12:41 autodl.cfg.bak
    -rw-r--r-- 1 nobody users  11K Jan 18 13:25 AutodlState.xml
    
    sh-5.0# ls -lah  /config/autodl/
    total 28K
    drwxrwx---+ 2 nobody users 4.0K Feb 28  2019 .
    drwxrwx---+ 9 nobody users 4.0K Jan 11 19:52 ..
    -rwxrwx---+ 1 nobody users 4.3K Jan 11 18:09 autodl.cfg

    Your guess is as good as mine really, but given the folder structure above I would try to mount a volume with custom trackers at /usr/share/autodl-irssi/AutodlIrssi/trackers (e.g., -v /path/to/host/trackers:/usr/share/autodl-irssi/AutodlIrssi/trackers).

  5. On 1/16/2020 at 10:17 AM, pyc said:

    I was informed that this "docker" installation of rtorrent/rutorrent is able to seed 35k torrents (mp3/flac albums) on a machine with Atom CPU and 4 GBs of RAM. Can anyone confirm such capability so I don't spend another few days trying to install all of that?

     

    I was also informed pyrocore is able to regulate number of seeded torrents at one time in rtorrent, but it seems to be that it only regulates number of torrents downloading, not seeding. Please help. Thanks!

Well, I can't say for sure. As I said before, seeding several thousand torrents is an extreme sport and some people take it to the limit; nevertheless, I would also be suspicious of that info.

     

I mean, I know that PHP 7 has come a long way, but ruTorrent is not exactly a lightweight client. To seed around 10% of the number of torrents that this Reddit user is claiming, I had to tweak rtorrent and kernel parameters. The load is also hitting my disks pretty hard (with a 1 Gbps connection even my SSDs have trouble keeping up with the IOPS; they get pretty warm and I've already had to replace one of them). ruTorrent actually still works, but it's very slow, and when I try to delete large files or move files around with File Manager the UI freezes.

    rtorrent-ps / pyrocore are indeed still flying and, although I haven't tried it myself, I doubt that I would have any problems seeding twice as many torrents as I currently do.

     

5k is certainly doable. I've read trustworthy reports (from sysadmins who run reputable seedbox hosting services) saying that 10k is achievable with rtorrent.

     

35k is certainly extreme, and on a low-spec machine it would certainly be impressive. Assuming his claims are true, maybe his usage pattern helps (I assume he is seeding lots of small, infrequently downloaded torrents, probably on a private tracker like RED, so the torrents will mostly just sit there and rtorrent will be largely idle). My usage pattern is just the opposite: there is always IO going on, the container is often downloading and seeding as fast as my VPN can handle, and rtorrent is working very hard.

     

     

  6. On 1/13/2020 at 12:09 PM, binhex said:

    after having a long hard think about it, the only change i would be prepared to make is to randomise the web ui and rpc2 password if not already set, everything else i want to keep as it currently is. i think this would be a good compromise, it improves security by not having a hard coded default password, whilst giving the users a decent out of the box experience with rpc2 enabled (most people will still require this for external access for apps).

     

    this will cause some disruption, im fairly certain of that as i suspect a lot of users are probably running with the password not defined (unraid templates do not auto update), but i can mitigate some of the support by links to docs, posts in this thread etc.

     

    let me know if you feel its worth going ahead with this, it shouldn't take me too long to code up, if not then the alternative is leave it as is, or of course create your own fork on github and do your own thing 🙂

Thanks @binhex. I understand, and I actually think this is a good solution: it will at least force users to think about passwords, and over time it will reduce the attack surface quite a bit (just make sure the first example a user finds in the documentation does not use admin/rutorrent as default credentials). I know it's going to be a little painful, but trust me on this one, you will be doing a great thing for your user base, even if they hate you for it.

     

I have to admit that I already maintain my own private fork of your repo, as well as a docker-compose build that handles the stuff I need: multi-user support (as in one rtorrent instance per user), rar support, ruTorrent plugins such as File Manager and File Upload for my wife and for the rare occasion when I use the web client (ruTorrent is no longer working that well for me), some tune-ups, tools to beef up security a little (fail2ban, etc.), as well as a reverse proxy + DDNS client + auto certificate manager combo :). I don't plan to make it public anytime soon, but if any of the rtorrent / ruTorrent / security features sound interesting I'll be happy to contribute upstream.

     

    Cheers and thanks again :)

  7. On 1/6/2020 at 5:32 PM, Cat_Seeder said:

@binhex, I can confirm the error message. We may want to get rid of the screenshots plugin (or install ffmpeg, although I doubt that many people will actually care for it).
     

    I've just updated to the latest version (after over 2 weeks of uptime, yay!). Everything is working great, including the screenshots plugin. As always, thanks @binhex.

    On 1/6/2020 at 4:46 PM, Cat_Seeder said:

So, here's my take on it: we can leverage the environment variables to change defaults without affecting old users (those that already have rpc2_auth and webui_auth files in place). I think that most of the changes would need to be made in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rutorrent.sh#L225-L328 and https://github.com/binhex/arch-rtorrentvpn/blob/master/build/root/install.sh#L479-L539

     

Here's my proposal, ordered from most painful change to least painful:

    1. The default for ENABLE_RPC2, when not specified, should be no. I know, this one may result in some support requests, but most of the users will probably follow your guide and have it set to yes already. For everyone else I can give a hand by asking them to set ENABLE_RPC2=yes. This change, by itself, will make your container way safer.
    2. There should be no default for RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS, instead one of 3 things should happen:
      a. If non-empty rpc2_auth and webui_auth files exist and the credentials match admin / rutorrent, we print a giant warning telling users not to use default credentials, but the container still boots and works as expected. This is the best we can do for older insecure installations without a flood of support requests.
      b. If non-empty rpc2_auth and webui_auth files exist and there isn't an admin / rutorrent user present, we do nothing. Security-conscious users who know what they are doing will not be bothered.
      c. If rpc2_auth and webui_auth aren't there or are empty (new users), then we fail the container startup with a clear explanation of how to use RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS. New users will set their own credentials as intended.
3. README.md examples should be changed so that admin / rutorrent are not there by default. Although not ideal, I'm OK with myusername / mypassword. I doubt we'll get support tickets from new users; although some will actually use myusername and mypassword, we may also want to check for those in 2.a. :D

IMO old users will basically be taken care of by 2.a and 2.b, and the support requests that we get about RPC2 can be solved by asking the user to set ENABLE_RPC2=yes. Maybe I'm being naive about the typical user's knowledge level (you know your user base better than anyone else), however it feels like this can be done without resulting in an endless ****storm of support requests. What do you think? If you agree, I again volunteer to help with the changes as well as to monitor the forums and tell users to set ENABLE_RPC2=yes when Sonarr / Radarr / **rr fails :D.
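To make 2.a-2.c concrete, a rough shell sketch of the startup check (the file path and the htpasswd-style user:hash format are my assumptions, not the container's actual layout):

```shell
#!/bin/bash
# Rough sketch of the proposed startup check. Paths and the
# htpasswd-style "user:hash" file layout are assumptions.
check_auth_file() {
  local file="$1"
  if [ ! -s "$file" ]; then
    # 2.c: no credentials at all -> refuse to start, explain the fix
    echo "FAIL: no credentials set; define RPC2_USER/RPC2_PASS and WEBUI_USER/WEBUI_PASS"
    return 1
  elif grep -q '^admin:' "$file"; then
    # 2.a: default user detected -> boot anyway, but warn loudly
    echo "WARN: default 'admin' user detected; please change it"
  else
    # 2.b: custom credentials -> leave the user alone
    echo "OK"
  fi
}
```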

     

     

binhex, just letting you know that I'm available and willing to write the above code if I have your blessing - just give me a shout, since those changes are a little more involved than the rtxmlrpc stuff. If you'd rather I not do it, or still think the cost of disruption is too high, please let me know (no hard feelings at all; I promise not to pester you with security stuff any more).

  8. 20 hours ago, Sturm said:

    I don't suppose you could share some of your automated workflows through the CLI that I could adapt for qBittorrent? I'm fairly comfortable with bash/zsh. Or at least just point me in some good directions for how to manage torrents manually without the WebUI?

    Sure, I don't know much about qBittorrent, but here's my setup for rtorrent.

     

    1. I have a watch folder, an incomplete folder and a complete folder, each with a bunch of subdirectories, e.g.:

        /watch
            /tv
            /movies
            /bonus
            /...
        /incomplete
            /tv
            /...
        /complete
            /tv
            /...

     

    2. /watch should be monitored by pyrotorque's Watch Job (see: https://pyrocore.readthedocs.io/en/latest/advanced.html#rtorrent-queue-manager). I'm still using old school rtorrent watched directories (https://github.com/rakshasa/rtorrent/wiki/Common-Tasks-in-rTorrent#watch-a-directory-for-torrents) because I'm too lazy to upgrade, but pyrotorque is certainly easier and more powerful to use.

    3. /incomplete is where rtorrent keeps data from downloads in progress (see 2).

    4. /complete is where downloads are moved after they are done.

    5. To automatically delete torrents I do two things.

     a) All of my important trackers have aliases configured (see https://pyrocore.readthedocs.io/en/latest/setup.html#setting-values-in-config-ini)

     b) I have a cron job that deletes properly seeded torrents. It uses a bunch of rtcontrol --cull commands - one rule per tracker, plus a general catch-all rule. It's a slight variation of: https://pyrocore.readthedocs.io/en/latest/usage.html#deleting-download-items-and-their-data

6. autodl-irssi is set up to download stuff from the trackers where I want to build ratio; it monitors IRC channels and adds a certain number of filtered torrent files to the watch folder every week (see: https://autodl-community.github.io/autodl-irssi/configuration/overview/ )

7. I can add a torrent manually just by dropping it in the watch folder. I can delete torrents manually with rtcontrol, and I can use rtorrent-ps (a CLI tool) instead of ruTorrent to check how things are going.

8. I use Downloads Router (https://chrome.google.com/webstore/detail/downloads-router/fgkboeogiiklpklnjgdiaghaiehcknjo) to save torrent files to the most appropriate watch subfolder according to the tracker I'm downloading from (e.g., if the private tracker specialises in movies, any torrent files I download will end up in watch/movies by default). I can always override where torrent files are saved or move them to a different folder later.

9. My Plex libraries are configured to read from subfolders of /complete.
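To make the cull job in 5.b concrete, a crontab sketch (the alias, thresholds and filter fields are illustrative - double-check the field names against the pyrocore docs before using --cull, since it deletes data):

```
# Hypothetical crontab fragment: every night at 03:00, cull torrents
# from one aliased tracker once they hit ratio 2 and have been
# complete for 2+ weeks. Verify filter syntax with `rtcontrol --help`.
0 3 * * * rtcontrol --cull --yes alias=SomeTracker ratio=+2 completed=+2w
```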

     

That's pretty much it for torrenting. I have Sonarr and Radarr, but I only use them with Usenet (they can quickly destroy your buffer on private trackers if you are not careful with what you are doing).

    20 hours ago, Sturm said:

    However, I have been told by others that it would start using up more resources and overhead. I have a feeling they overestimate how much resources the qBittorrentVPN container uses on my NAS. Currently, it's using less than 4% CPU and about 2.6GB out of 8GB total RAM. Spread across 2–3 containers, it ought to be fine, don't you think?

You will have no problems running a few torrent containers on your NAS. Yes, theoretically more processes means more things to manage, but from personal experience it's easier to split the load. If you hit an OS or hardware limit you will know it when it happens, but I know for a fact that quite a few seedbox providers run dozens of rtorrent clients per VM with very happy and satisfied paying customers.

     

    Cheers,

  9. 16 hours ago, Sturm said:

    @Cat_Seeder Thank you very much for your help, as well, but I'm afraid most of what you have said goes a bit over my head. I mean, I understand the idea of using SSDs to cache data, but making sure my "cgroups" aren't limiting resources? No idea how to determine that or even what cgroups are. Anyway, my issues with the WebUI being unresponsive are with qBittorrentVPN, thus the reason I'm attempting to use rTorrentVPN instead. (Because I've heard it can handle a larger number of torrents.) Still, I may check out that optimizing rTorrent link you provided if it turns out that rTorrent does, indeed, cope well with 1,600+ torrents.

Hi @Sturm, I understand; we've all been beginners at some point. To be honest, though, seeding several thousand torrents from a single docker container on a single machine requires some sysadmin skills. While you may get 2k torrents to work out of the box, it is important to understand that rtorrent, although very capable, is not the only piece required to make it work. It's also important to set realistic expectations: as you add more torrents, ruTorrent (the web interface) will begin to slow down and eventually become unstable. Most of us seeding over 3k torrents use rTorrent exclusively through the command line and have automated flows to import, organise and automatically remove torrents. Other than that, what I was trying to say is that you will eventually hit configuration / OS / kernel limits and ultimately hardware limits. Seeding 5k+ torrents from a single client is quite a challenging sport that a few of us nerds like to play :D.

     

I may be wrong, but it seems like you are more interested in the end result (being able to seed your torrents) than in the technical challenge of running everything in a single container, right? If so, an alternative that may prove easier to set up is to run multiple containers. Just seed 1k / 1.5k torrents per container and keep an eye on CPU, network and memory usage. As for storage, spread the load over multiple HDDs. Maybe finish setting up your first rtorrent box and then use docker-compose or an orchestration tool to run multiple instances on different http / https ports.
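That multi-container idea can be sketched in docker-compose (service names, host ports and paths here are invented; each instance would still need its own VPN settings, which I've omitted):

```
services:
  rtorrent-a:
    image: binhex/arch-rtorrentvpn
    cap_add: [NET_ADMIN]
    ports:
      - "9080:9080"
    volumes:
      - /srv/rt-a/config:/config
      - /srv/rt-a/data:/data
  rtorrent-b:
    image: binhex/arch-rtorrentvpn
    cap_add: [NET_ADMIN]
    ports:
      - "9081:9080"
    volumes:
      - /srv/rt-b/config:/config
      - /srv/rt-b/data:/data
```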

    16 hours ago, Sturm said:

     

    So, I edited my docker-compose.yml file to only have several volume mappings rather than just the one, overarching volume mapping of /volume2/media:/data. Each of the new volume mappings points to just those folders that I need rTorrent to have access to. Once I did another container re-creation with these new mappings, it seems rTorrent and ruTorrent start up fine. (Well, except for rTorrent complaining about ffmpeg with "screenshots: Plugin will not work. rTorrent user can't access external program (ffmpeg)." But I don't think that's necessary.)

That's OK. It's just a ruTorrent plugin failing to load; no need to worry unless you want to take screenshots of videos from ruTorrent. @binhex, I can confirm the error message. We may want to get rid of the screenshots plugin (or install ffmpeg, although I doubt that many people will actually care for it).

    16 hours ago, Sturm said:

    Now, I just need to figure out how to copy over my 1,600 .torrent files from qBittorrent to rTorrent while keeping their download locations intact and such. Then we'll see whether or not rTorrent really can handle that kind of load.

     

    Again, thank you guys so much for your help in this matter.

Autotools may be handy for adding torrents ( https://github.com/Novik/ruTorrent/wiki/PluginAutotools ). I would also temporarily disable hash checks (assuming that you are just pointing rtorrent at the same files that qBittorrent was serving) so that IO operations do not overload your server. Finally, add torrents in small batches (up to 100 at a time). I've migrated a few of my seedboxes to binhex containers and it's nowhere near as painful as it looks :).
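The small-batches part is easy to script; a throwaway sketch (source/destination paths, batch size and delay are all placeholders - adapt them to your watch layout):

```shell
#!/bin/bash
# Copy .torrent files into a watch folder in batches, pausing between
# batches so rtorrent's load stays manageable. All paths and sizes
# here are placeholders.
feed_batches() {
  local src="$1" dst="$2" batch="${3:-100}" delay="${4:-60}"
  local count=0 f
  for f in "$src"/*.torrent; do
    [ -e "$f" ] || continue             # glob matched nothing
    cp -- "$f" "$dst"/
    count=$((count + 1))
    if [ $((count % batch)) -eq 0 ]; then
      sleep "$delay"                    # let rtorrent catch up
    fi
  done
  echo "$count"
}

# Example: feed_batches /backup/qbt-torrents /data/watch/tv 100 60
```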

     

    Cheers,

  10. 18 hours ago, binhex said:

    Hi @Cat_Seeder  firstly thanks for the PR, its nicely done and im pretty sure i will accept it as it does mean no exposure of rpc2 in order to have just the enhancement of auto restart on port change, which is very welcome!. 

Thanks for merging my PR @binhex. Hopefully it turns out to be useful for everyone else. I've later realised that we can probably safely remove this line from the watchdog script, as well as the xmlrpc-c client itself if we want, but that can be done at a later stage.

    18 hours ago, binhex said:

    now onto the prickly subject of rpc2 and default credentials, so as much as i would love to say yes to remove all default credentials i have to keep in mind the hundreds or even thousands of active users out there who (i suspect) are mostly using this rightly or wrongly with default credentials (due to it being difficult to change - see end paragraph).

     

    if i remove any default password (or set a random password) then there is the potential for a massive amount of support posts as people flood in with stories of sonarr/radarr/<insert metadata downloader app> stopping working on the latest update due to rpc2 credentials being invalid, and whilst this will make you a very happy man it leaves me being an extremely overloaded and unhappy man. 

Hahaha, I understand, and I noticed the huge amount of support tickets when you added the extra options as well. I can't really ask you to sacrifice even more of your time handling support tickets.

To be honest, my main worry is that your containers may become a target for malware and endanger the exact type of inexperienced user who today struggles with docker environment variables. With 5M+ Docker pulls, and considering that your container is part of DockSTARTer, I expect tens of thousands of installations, maybe more. Since, IMO, you have the best and most complete rtorrent + ruTorrent container out there, I expect its popularity to grow. As you said, right or wrong, people will use default credentials. How many of those installations are exposing unprotected RPC2 mounts over the internet? How many of them are only protecting XMLRPC with default credentials? That's a juicy target for hackers, and unfortunately it's trivial to modify the scripts that already target unprotected RPC2 mounts to also attempt a few passwords like admin / rutorrent.


My take on it, though, is that maybe removing defaults can be done in a way that does not disrupt your existing user base too much but still provides better security out of the box for newcomers. See below.

     

    18 hours ago, binhex said:

    Just to be clear here, default credentials for the web ui and rpc2 has been in place since the inception of this docker image, ive just made it easier now for people to set them independently via environment variables, before it was a single set of credentials based off the web ui credentials for both, it was not exposed and not obvious on how to change them - in my opinion the change has improved security.

     

So, here's my take on it: we can leverage the environment variables to change defaults without affecting old users (those that already have rpc2_auth and webui_auth files in place). I think that most of the changes would need to be made in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rutorrent.sh#L225-L328 and https://github.com/binhex/arch-rtorrentvpn/blob/master/build/root/install.sh#L479-L539

     

Here's my proposal, ordered from most painful change to least painful:

    1. The default for ENABLE_RPC2, when not specified, should be no. I know, this one may result in some support requests, but most of the users will probably follow your guide and have it set to yes already. For everyone else I can give a hand by asking them to set ENABLE_RPC2=yes. This change, by itself, will make your container way safer.
    2. There should be no default for RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS, instead one of 3 things should happen:
      a. If non-empty rpc2_auth and webui_auth files exist and the credentials match admin / rutorrent, we print a giant warning telling users not to use default credentials, but the container still boots and works as expected. This is the best we can do for older insecure installations without a flood of support requests.
      b. If non-empty rpc2_auth and webui_auth files exist and there isn't an admin / rutorrent user present, we do nothing. Security-conscious users who know what they are doing will not be bothered.
      c. If rpc2_auth and webui_auth aren't there or are empty (new users), then we fail the container startup with a clear explanation of how to use RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS. New users will set their own credentials as intended.
3. README.md examples should be changed so that admin / rutorrent are not there by default. Although not ideal, I'm OK with myusername / mypassword. I doubt we'll get support tickets from new users; although some will actually use myusername and mypassword, we may also want to check for those in 2.a. :D

IMO old users will basically be taken care of by 2.a and 2.b, and the support requests that we get about RPC2 can be solved by asking the user to set ENABLE_RPC2=yes. Maybe I'm being naive about the typical user's knowledge level (you know your user base better than anyone else), however it feels like this can be done without resulting in an endless ****storm of support requests. What do you think? If you agree, I again volunteer to help with the changes as well as to monitor the forums and tell users to set ENABLE_RPC2=yes when Sonarr / Radarr / **rr fails :D.

     

     

  11. 11 hours ago, Sturm said:

    So, nothing, @binhex? I posted the requested file, yet I've not received a response. I apologize if I seem impatient. It's just that your qBittorrent-vpn image appears to have a great deal of trouble handling my 1,600+ torrents (at least, via its WebUI) and it's getting worse the more torrents I add to it, so I'd really like to move them all to one that—I'm told—should be able to handle up to 6,000 torrents. i.e., your rTorrent-vpn image. I'll re-attach it to this post.

     

    As a side note, because I am still relatively new to this, I don't know what `ENABLE_AUTODL_IRSSI` is for, as well as `ENABLE_RPC2`. Could someone please explain those to me so I know whether or not I should have them enabled?

    rtorrent.rc 5.14 kB · 0 downloads

I'm seeding 2k+ torrents at the moment (not on Unraid, though; a cheap NAS with 8GB of memory and two SSDs acting as cache that handle most of the IO). I didn't really do much with rtorrent.rc other than slightly increasing the number of files and sockets available, given the nature of the load my system is running, and slightly increasing buffers so that my disks don't get hit as hard. This isn't a silver bullet, though (see https://github.com/rakshasa/rtorrent/wiki/Performance-Tuning for more info). Maybe start over with the container's default rtorrent.rc file?
Also make sure that your OS and cgroups aren't limiting system resources. I don't really use ruTorrent all that much, but it still loads fine, although the UI is very slow.
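To give an idea of the flavour of those tweaks, an illustrative rtorrent.rc fragment (the values are a rough starting point, not my exact config; verify each option against the Performance-Tuning wiki page before copying anything):

```
# Illustrative rtorrent.rc fragment - raise descriptor limits and
# buffers for large torrent counts; tune the values to your hardware.
network.max_open_files.set = 8192
network.max_open_sockets.set = 1536
pieces.memory.max.set = 1024M
network.receive_buffer.size.set = 4M
network.send_buffer.size.set = 12M
```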

     

binhex explained the RPC2 mount above. autodl-irssi is used to automatically download torrents from IRC announce channels (it's mainly used by people on private trackers trying to build buffer).
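An autodl.cfg filter looks roughly like this (tracker name, category pattern and paths are invented for illustration; see the autodl-community configuration docs for the full option list):

```
# Hypothetical autodl.cfg filter: grab FLAC albums from one tracker
# and drop the .torrent into a watch subfolder. Names are made up.
[filter flac-albums]
enabled = true
match-sites = sometracker
match-releases = *FLAC*
upload-type = watchdir
upload-watch-dir = /data/watch/music
```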

  12. On 12/17/2019 at 10:42 AM, binhex said:

    @Cat_Seeder a thought i had whilst driving home, you havent by any chance set ENABLE_RPC2 to a value of 'no' have you?, if so this will be the cause of the issue, as i use xmlrpc in order to reconfigure the incoming port on port closed.

     

    On 12/23/2019 at 10:43 AM, binhex said:

    yep, xmlrpc needs a url specifying to connect to rpc2, thus it needs to be exposed.

     

     

    On 12/24/2019 at 1:26 AM, Cat_Seeder said:

I see; that's a limitation of the xmlrpc CLI tool. Libraries can talk to rtorrent directly over SCGI. This is how software like pyrocore talks to rtorrent.

     

Actually, pyrocore even exposes its own xmlrpc CLI tool that does not require an HTTP endpoint: https://pyrocore.readthedocs.io/en/latest/references.html#cli-usage-rtxmlrpc

     

    Would it be much of a hassle to replace xmlrpc with pyrocore's rtxmlrpc client?

     

As far as I understand, you already have pyrocore installed. Changes would probably be limited to the first if statement in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rtorrent.sh and then, AFAICT, the requirement for default credentials can go away.

     

    If you are busy and accept contributions I can even take a stab at it myself.

     

    What do you think?

     

     

@binhex, I've modified the code as mentioned above and it works on my machine :D. I have been running the modified version for a while with ENABLE_RPC2=no; things have been stable for over 10 days with PIA, and the watchdog process has triggered port changes a couple of times with no issues whatsoever. It also works with ENABLE_RPC2=yes.

     

PR: https://github.com/binhex/arch-rtorrentvpn/pull/134. If for whatever reason you decide not to merge the PR, I hereby grant you the right to use it in any way that suits you, no strings attached. Having said that, if you choose to merge my changes and then eliminate the defaults for RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS, I'll be a very happy man =).

     

    ----

     

    If anyone else is willing to try it, I've also uploaded a standalone version to a (temporary) Gist: https://gist.github.com/flatmapthatshit/6ff8d1ac441092b33339890b5145de3a

Testers with VPNs other than PIA are especially welcome.


    For a quick experiment you can use docker to mount the new version of rtorrent.sh over /home/nobody/rtorrent.sh, e.g.:

    docker run -d \
        --cap-add=NET_ADMIN \
        -p 9080:9080 \
        -p 9443:9443 \
        -p 8118:8118 \
        --name=rtorrentvpn \
        -v /root/docker/data:/data \
        -v /root/docker/config:/config \
        -v /etc/localtime:/etc/localtime:ro \
    -v "$(pwd)"/rtorrent.sh:/home/nobody/rtorrent.sh \
        -e ENABLE_RPC2=no `#followed by all other environment variables` \
        binhex/arch-rtorrentvpn

Once everything is running, other than waiting for the watchdog to do its job, you can also run the script manually to force changes, e.g.:

    docker exec -it -e rtorrent_running='true' \
         -e VPN_INCOMING_PORT='30163' \
         -e vpn_ip='10.16.11.6' \
     -e external_ip='31.168.172.14' \
         <my_container> /home/nobody/rtorrent.sh

    Hopefully this is helpful to the community.

  13. 14 hours ago, binhex said:

    yep, xmlrpc needs a url specifying to connect to rpc2, thus it needs to be exposed.

     

I see; that's a limitation of the xmlrpc CLI tool. Libraries can talk to rtorrent directly over SCGI. This is how software like pyrocore talks to rtorrent.

     

Actually, pyrocore even exposes its own xmlrpc CLI tool that does not require an HTTP endpoint: https://pyrocore.readthedocs.io/en/latest/references.html#cli-usage-rtxmlrpc

     

    Would it be much of a hassle to replace xmlrpc with pyrocore's rtxmlrpc client?

     

    As far as I understand you already have pyrocore installed. Changes would probably be limited to the first if statement in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rtorrent.sh and then, AFAICT, the requirement for default credentials can go away.
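    To illustrate the idea (a sketch only, not tested against this exact image): pyrocore's rtxmlrpc just needs rtorrent's SCGI endpoint, which is normally set once in its config; no nginx mount or HTTP credentials are involved. The port 30163 below is purely an example value.

```shell
# Sketch: point pyrocore at rtorrent's local SCGI port. This normally lives
# in ~/.pyroscope/config.ini as:
#     scgi_url = scgi://127.0.0.1:5000
# With that in place, the watchdog could rotate the incoming port without
# touching nginx or any HTTP credentials:
rtxmlrpc network.listen.port
rtxmlrpc network.port_range.set '' "30163-30163"
rtxmlrpc dht.port.set '' "30163"
```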

     

    If you are busy and accept contributions I can even take a stab at it myself.

     

    What do you think?

     

  14. On 12/17/2019 at 10:42 AM, binhex said:

    @Cat_Seeder a thought i had whilst driving home, you havent by any chance set ENABLE_RPC2 to a value of 'no' have you?, if so this will be the cause of the issue, as i use xmlrpc in order to reconfigure the incoming port on port closed.

    Sorry for the late response, that's a good guess, thanks! I do actually have RPC2 disabled. I did downgrade to a previous image a few days ago, but I'll upgrade to the latest image, enable RPC2, and see how it goes for the sake of troubleshooting (if it breaks again I'll try to extract something useful from the logs using your previous hints).

     

    I know that I'm being a little - well, more than a little :D - insistent about this, but since the port renew scripts are operating from inside the container, are you sure that exposing RPC2 over Nginx is necessary? Can't the xmlrpc client issue commands straight to localhost:5000 (rtorrent's SCGI port)?

    Or even better, can't you move to a socket setup? For users that really need to expose RPC2 to Sonarr / Radarr / etc Nginx can still act as a Proxy (see: https://www.reddit.com/r/seedboxes/comments/92oi6u/creating_a_scgi_socket_file_to_http_proxy_for/ for an example ).
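    For reference, the socket variant from that Reddit thread looks roughly like this (a sketch; the socket path is illustrative, and rtorrent.rc would need a matching `scgi_local` entry):

```nginx
# rtorrent.rc side:  scgi_local = /run/rtorrent/rpc.socket
location /RPC2 {
    include scgi_params;
    scgi_pass unix:/run/rtorrent/rpc.socket;
    auth_basic "Restricted Content";
    auth_basic_user_file /config/nginx/security/auth;
}
```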

     

    If this works you can revert all of the unnecessary security compromises. No necessity for default credentials, no requirement to expose RPC to the outside world for those of us that don't need it, etc. Don't take this the wrong way, auto-restart is a great feature and I don't want it to go away; however, as it is, the security compromises are really bothering me.

  15. Just to report, the containers are less stable than before. Despite the new auto-restart logic, after 3 to 6 days I always end up with no connectivity (red icon at the bottom of rutorrent) until I restart the container manually. I'm with PIA, on a very stable 1 Gbps connection. With older images I was managing several weeks of uptime.

     

    Unfortunately the logs are pure garbage after a few hours. Is there a way that I can set up the container to dump logs as soon as connectivity is lost, to help troubleshoot the issue?

  16. On 11/10/2019 at 3:43 AM, DazedAndConfused said:

    Im having constant issues with the docker bringing my server to a crawl. Im migrating from qbittorent and it seems like everything I do in rtorrent maxes out my CPU utilization on my server and locks up the server. I've had to force reboot twice already.

     

    Simple things like deleting dead torrents that have been removed from trackers or force rechecking files so that I can get everything migrated is causing the server to be at 100% CPU most of the time. Ive never had this problem before installing this docker. 

     

    In order to keep this from completely killing my server, I run operations on 5 torrents at a time and even still, I have issues with that.

     

    Does anyone know what may be causing this? I'm going to have to disable this docker until I find out why this is happening.

     

     

    53552108630b68b4f65c392f0262aede.png

    That's a weird one; maybe attach to the container and run `top` to understand which processes are using CPU time. You can also limit how many CPUs are available to the container with Docker's --cpus option (see https://docs.docker.com/config/containers/resource_constraints/ for further info). I've noticed that rutorrent may get stuck for a while when I delete large (as in 100GB+) torrents with data; other than that I normally seed a few thousand torrents with no issues (and nowhere near 100% CPU usage).
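    For example (the container name rtorrentvpn here is an assumption, substitute your own), the cap can even be applied without recreating the container:

```shell
# Cap the running container at two CPUs, no restart needed
docker update --cpus="2.0" rtorrentvpn

# Batch-mode snapshot of the busiest processes inside the container
docker exec rtorrentvpn top -b -n 1 | head -n 20
```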

  17. On 10/22/2019 at 6:35 PM, binhex said:

    Hi guys, so historically it has been difficult/impossible to change the listening port for rtorrent programmatically whilst its running, this lead to the only option which is to sigint rtorrent and change the port and then restart the process, this unfortunately can lead to situations where the process does not end nicely and thus rtorrent cannot be restarted.
     

    Wind forward in time and this is now possible, however in order to do this we need to supply the credentials the authenticate with rtorrent before we can issues commands. As i don't know what each user has set their rpc2 and web ui passwords to (and i obviously shouldn't know this either!) then the only way for me to do this going forward is to allow the user to define the credentials for rpc2 and web ui as environment variables.
     

    So from the next build if you have changed the username and password from the defaults (user admin, password rutorrent) then you will need to add in the following 'variables' :-
     

    
    Key                 Value               Default         Description
    RPC2_USER           <your username>     admin           sets the username for RPC2 auth
    RPC2_PASS           <your passs>        rutorrent       sets the password for RPC2 auth
    ENABLE_WEBUI_AUTH   yes/no              yes             sets web ui authentication for web ui (nginx) - if you are reverse proxying you probably want this set to no, otherwise yes.
    WEBUI_USER          <your username>     admin           sets the username for web ui auth
    WEBUI_PASS          <your passs>        rutorrent       sets the password for web ui auth
    

    The above will result in a more stable rtorrent/rutorrent experience, whilst allowing us to maintain a working incoming port during all situations, i hope you agree a worthy improvement! (testing was a bitch).

     

    Any questions please post them.

     

     

    Hi @binhex, sorry that I'm kinda late to the party. The upgrade path wasn't totally smooth for me. I hadn't noticed the new parameters, and since they have defaults (and also given that you've changed nginx.conf to read credentials from the new webui_auth file, bypassing my custom settings), my container's rutorrent has been exposed to the internet with default credentials for a while.

    Luckily I had

    ENABLE_RPC2=no

    Or it would have also exposed XML-RPC over the internet with easily guessable default credentials. I guess I've shared the following article already, but it demonstrates why exposing RPC2 with no password or default credentials is not a good idea: https://www.f5.com/labs/articles/threat-intelligence/rtorrent-client-exploited-in-the-wild-to-deploy-monero-crypto-miner

     

    I know that ultimately this is a trade-off between security and convenience, but given how many people are using your container I would suggest making it as secure as possible by default. I guess two good ways to handle RPC2_USER/RPC2_PASS/WEBUI_USER/WEBUI_PASS without making the container vulnerable by default are:

     

    1. Remove the mentioned variables. From inside the container you can probably issue XML-RPC commands straight to port 5000, or use a local SCGI socket (bypassing nginx authentication altogether), right? Otherwise, you can generate random credentials during container startup.

    2. Keep the variables but fail to start the container if no values are provided. IMO, default credentials are a really bad idea. Of course a simple docker inspect will reveal the passwords, but I guess that this is not a big issue given that VPN_PASS is already exposed (secrets are a better alternative, but they are limited to Docker Swarm and docker-compose).
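    To sketch what the two options could look like in an entrypoint (purely illustrative, variable names borrowed from the container docs; this is not the actual binhex script):

```shell
# Illustrative startup snippet: never fall back to 'admin'/'rutorrent'.
gen_pass() {
    # 24 random alphanumeric characters from the kernel RNG
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
}

WEBUI_USER="${WEBUI_USER:-admin}"
if [ -z "${WEBUI_PASS:-}" ]; then
    # option 1: generate a random password instead of using a default
    WEBUI_PASS="$(gen_pass)"
    echo "[info] WEBUI_PASS not supplied, generated a random password"
    # option 2 (stricter): log an error and 'exit 1' here instead
fi
```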

     

    Cheers

  18. On 6/12/2019 at 6:17 PM, binhex said:

    The above is now fixed in the latest image please pull down and try

    Sent from my EML-L29 using Tapatalk
     

    Just dropping by to say that the containers have been stable for 3 days already. Buttery smooth.
    I initially didn't get any disk space back, and then I remembered to manually delete the Flood folder created by previous versions :D.
    Great stuff getting rid of Google and OpenDNS by the way; you never know when a DNS leak will happen. Despite some Reddit hate for Cloudflare, I've never had a problem with 1.1.1.1.

     

    Will send you some extra beer your way soon :).

    • Like 1
  19. On 6/4/2019 at 1:02 PM, binhex said:

    A heads up, next release of this image will NOT include the optional 'Flood' web ui, its become too much of a headache to keep fixing, nearly every time i attempt to build it will fall over during the install of Flood which is annoying to say the least. If you require Flood then your options are:-

      

    1. switch to a named tagged image that includes flood (as in the current image)

    or

    2. fork my code on github and add in flood and create your own image.

     

    Sorry guys its just too much of a hassle, and IMHO brings very little to the table, other than eye candy.

     

    Hey @binhex. Thanks for the heads up.

    I've noticed that you have updated the docs and uploaded a test image. However, "latest" is still pointing to a 5-week-old image that contains Flood.

    Are you planning to release the new version soon?

  20. 5 minutes ago, binhex said:

    i think you have to draw a line and say the user is responsible for security to a certain degree.

    i think you maybe underestimating the popularity of android/apple apps such as transdroid and nzb360, both of these rely soley on rpc being accessible, im not really comfortable with the fallout of setting this to no by default (for existing users), this is bound to increase the support issues. 

    Fair enough. Both valid points.

    6 minutes ago, binhex said:

    How about i meet you mid way and set ENABLE_RPC to 'no' by default in the template (will take affect for new users) and put in a nice big warning in the template saying something like 'enable at your risk, make sure you have changed username and/or password'.

    That's a good compromise. New users will be safe by default, but it will not be a draconian imposition on old users. 
    Good idea about the warning as well. Maybe something like:

     

    ENABLE_RPC: This option exposes XMLRPC over SCGI (/RPC2 mount). Useful for mobile clients such as Transdroid and nzb360.
    WARNING: Once enabled, authenticated users will be able to execute shell commands over http / https (including commands that write to shared volumes). Known exploits target insecure XML-RPC deployments (e.g., the Monero crypto-miner exploit). Enable at your own risk, and make sure you have changed the username and password before doing so.

     

    And something similar for ENABLE_RPC_AUTH.

     

    WARNING: By disabling ENABLE_RPC_AUTH you are essentially allowing anyone with http / https access to run arbitrary shell commands against your container. Disabling this option is not recommended.

     

    In summary: what it does, why anyone would enable it, why it can be dangerous, and how to properly protect the setup. (Sorry for my bad English BTW, I'm not a native speaker.)

     

    Honestly, I wouldn't set ENABLE_RPC_AUTH=no even on my own LAN. Perimeter security is great and all; however, to me XML-RPC access is about the same as SSH access (none of my servers allow unauthenticated access). Keep it safe and let the reverse proxy pass through auth headers.

     

  21. 3 hours ago, binhex said:

    this is now done, you can control it via two new env vars (yes you will need to create these if you are an existing user):-

    
    key                  values
    ---                  ------
    ENABLE_RPC2          yes|no            
    if set to yes then rpc2 location will be added to nginx.conf (for http and https), if no then it wll be removed.
    
    ENABLE_RPC2_AUTH     yes|no          
    if set to yes then rpc2 location will be secured using basic auth (for http and https), if no  then it will be 
    insecure (useful for people using reverse proxy for authentication).

    if either of these env vars are not defined then the default is 'yes'.

     

    Tested. All working great! Thanks Binhex. More beer coming as soon as I receive my paycheck ;).

     

    I know that I'm being annoying / paranoid, but I think that ENABLE_RPC2 should be "no" by default. RPC2 + default admin / password means that containers are still easily exploitable out of the box. A valid counterpoint is that most people who are not tech-savvy enough to change the default credentials will probably not be exposing ports to the internet... However, I guess that most users will not really care about / need XML-RPC over SCGI out of the box. Honestly, I think it is safer to assume that most people will not need it and let everyone else enable it manually.

  22. 3 hours ago, binhex said:

    for most apps probably not, but as you can see some applications are coded in such a way to only use the /RPC2 mount (nzb360 being one) and in the scenario where sonarr/radarr maybe installed at a remote location connected over the internet, i think its probably safer to expose RPC2 than it would be to port forward port 5000, unless im missing something here?.

    Yes. Certainly proxying with auth is better than directly exposing port 5000 to the internet.

    I don't know much about nzb360. I have, however, enabled HTTPRPC and tested Sonarr with it (the plugins/httprpc/action.php endpoint). It has been working well (for about 5 hours) and is not really impacting my CPU usage very much (testing on a laptop with an i7 CPU).

    3 hours ago, binhex said:

    i could look at adding in an env var to control whether you allow access to /RPC2 or not, its a bit of work so it may take a while before its in place (lots going on).

     

    whilst i agree that having it disabled by default is the way to go, i do not want to shut the door on existing users who currently may rely on this, so i will be setting the default to disabled for all new users and leaving it enabled for all existing, they can then decide whether they want to disable it or not via the new env var.

    I do understand, and thank you for taking action. If possible, however, I would at least recommend pushing basic authentication on the /RPC2 mount to everyone ASAP (as per my understanding you are already planning to do that as soon as you get confirmation that it works). While it is not as effective as completely removing the attack vector by default, with basic authentication exposed containers are at least safer from scanning bots.

  23. 5 hours ago, binhex said:

    ok i was getting tripped up by my reverse proxy blocking access to /RPC2 externally, tried it internally and you are correct, i can indeed access /RPC2 with credentials. so had a look at the nginx config and this is the fix:-

     

    open file /config/nginx/config/nginx.conf and look for the following section (there will be two occurrences):-

    
            # include scgi for rtorent, specifying port 5000, important MUST use ip address
            location /RPC2 {
                include scgi_params;
                scgi_pass 127.0.0.1:5000;
            }
               

    so change this to be:-

    
            # include scgi for rtorent, specifying port 5000, important MUST use ip address
            location /RPC2 {
                include scgi_params;
                scgi_pass 127.0.0.1:5000;
                auth_basic "Restricted Content";
                auth_basic_user_file /config/nginx/security/auth;
            }

    once done save and restart the container, this for me then forces auth when connecting to /RPC2, give it a go if you have the same success as me then i will come up with some way of patching existing users config.

    Hi @binhex, it works. However... Is that /RPC2 mount, proxying 9080 and 9443 to 5000 over SCGI, really necessary for rutorrent or Flood to work over the internet?

     

    As far as I can tell from config.ini, rutorrent is actually going straight to 127.0.0.1:5000:

     

        $scgi_port = 5000;
        $scgi_host = "127.0.0.1";

     

    Flood can also connect directly to port 5000.

     

    There are even a couple of plugins meant to replace SCGI altogether (see https://github.com/Novik/ruTorrent/wiki/PluginHTTPRPC for instance); it even promises to reduce bandwidth usage in exchange for extra CPU load.

     

    Regardless of authentication, if WAN exposure of XML-RPC can be limited, I think that would be a great idea. There are several known exploits that target insecure RPC deployments. There are also bots looking for this kind of thing (you can thank years and years of insecure WordPress deployments for that).

     

    If XML-RPC exposure over WAN is not strictly necessary, I would vote to disable it by default. Maybe include an ENABLE_XMLRPC_OVER_SCGI flag for people that really need it. Basic authentication should certainly be enabled, and I would really advise that people thinking about enabling such a flag first take the time to tighten security (or better yet, do not expose ports to the WAN at all; use a VPN instead).

     

    Further reading

     

    * Discussion about rtorrent and XML-RPC exploits: https://github.com/rakshasa/rtorrent/issues/696

    * "Secure" setup with rtorrent + local / UNIX socket +  Nginx reverse proxy with basic authentication + rutorrent: 

     

    * rTorrent - ruTorrent communication: http://forum.cheapseedboxes.com/threads/rtorrent-rutorrent-guide.1417/ - HTTPRPC sounds like a great alternative to XML-RPC over SCGI (unless you are running a potato PC).

     

  24. 7 hours ago, BrttClne22 said:

    I was considering using a reverse proxy to make rutorrent accessible from the web (mainly for NZB360), however, I was doing some research before I did so and it seems that RPC2 doesn't require auth (I think it used to?). To test, I removed my username/password from Sonarr, Radarr and NZB360, all three seem to connect without any username/password. Are others seeing this?

     

    I don't think I've modified any configuration that would effect this. I've been using this container for a while but I think the deepest thing I've done configuration wise is add my own username/password and delete the default using the scripts.

     

    You don't need to expose port 5000. Just expose 9443 (or 9080 if you are offloading HTTPS to the proxy). As far as rutorrent is concerned, it is talking to XML-RPC on the container's localhost.

    If you are using other clients that require access to port 5000, you can use a similar strategy.

    I've personally created a docker-compose file for all applications that require access to rtorrent. Docker Compose creates a shared network where services are discoverable by name (https://docs.docker.com/compose/networking/). Applications like Sonarr can access port 5000 even though it is not directly exposed to the internet (e.g., use <container hostname>:5000).
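    As a sketch (service names and images here are illustrative, not my exact setup), a minimal docker-compose.yml along those lines:

```yaml
version: "3"
services:
  rtorrentvpn:
    image: binhex/arch-rtorrentvpn
    cap_add:
      - NET_ADMIN
    ports:
      - "9443:9443"   # only the web UI is published to the host
  sonarr:
    image: linuxserver/sonarr
    # both services share the default compose network, so Sonarr can be
    # pointed at rtorrentvpn:5000 (SCGI) or at the httprpc endpoint
    # without port 5000 ever being published to the internet
```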

     

    I would not recommend exposing RPC2 over the internet. However, if you want to do it for whatever reason (e.g., Sonarr is running on a separate box on the internet and for whatever reason you don't want to use a VPN), you will need to really beef up security. A username and password is just a first measure; monitoring your logs, setting up fail2ban, etc. are all good steps. Otherwise you will soon find a crypto miner installed on your server...
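    For instance, a fail2ban jail watching nginx auth failures could look roughly like this (the log path and ports are assumptions based on this container's defaults; verify against your own layout):

```ini
[nginx-http-auth]
enabled  = true
port     = 9080,9443
filter   = nginx-http-auth
logpath  = /config/nginx/logs/error.log
maxretry = 3
bantime  = 3600
```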

     

  25. Hi guys,

     

    Just sharing (no support needed). I misconfigured autodl-irssi and ended up with over 3k torrents in the container.

    Tonight I got several emails warning me that the OOM killer was running in a loop (and since the VPN IP changes every time, I was also blocked from a tracker for spamming... Again! :().

    Out of curiosity I had a look at the running processes tab. rtorrent-ps + rutorrent + nginx + PHP were sitting at a cool 1.2 GB. Flood, on the other hand, quickly goes from 500 MB to 4 GB to 6 GB to OOM; node.js processes are basically eating all available RAM (not sure if this is expected or a memory leak).

     

    What I've learned today:

     

    1) Always double check that you have setup autodl-irssi correctly. Do set sane daily limits on every filter.

    2) Disable Flood. Honestly, it's not worth it. I've stopped and removed 2700+ torrent files manually. Even with only ~300 torrents, Flood was still using ~550 MB of memory (a good 4x more than rtorrent + rutorrent combined). At this stage I would say that Flood is only for casual users.

    3) Be very careful with VPNs + container auto restart. 

     
