Cat_Seeder

Everything posted by Cat_Seeder

  1. Depends on your VPN provider; PIA is fine with both (OpenVPN connects with whatever protocol you have selected, and then the container goes through a series of HTTP calls to obtain a port). Maybe contact your VPN and check if they have changed anything on their side? Since netstat is not showing the desired port as LISTEN, you may also try to dynamically set the port:

     rtxmlrpc network.port_range.set '' "60210-60210"
     rtxmlrpc dht.port.set '' "60210"

     And also the IP settings:

     rtxmlrpc network.bind_address.set '' "the vpn assigned IP"
     rtxmlrpc network.local_address.set '' "the external IP"
  2. Cool, restart your container and, using the IP that your VPN has assigned to your box and port 60210, check whether your box is connectable (https://www.yougetsignal.com/tools/open-ports/). If that's good and your trackers show your box as connectable, you are done (the port indicator is known to be a little flaky and to give false positives).
  3. Have you manually set port 60210 as your rtorrent port (i.e., network.port_range.set = 60210-60210 in rtorrent.rc)? The container will only detect and set ports automatically for PIA and AirVPN.
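If you do want to pin the port manually, a minimal rtorrent.rc fragment would look something like this (a sketch assuming port 60210 as discussed above; adjust to your own port):

```
# Pin the listening port and disable port randomisation
network.port_range.set = 60210-60210
network.port_random.set = no
# Use the same port for DHT, if you have DHT enabled
dht.port.set = 60210
```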
  4. I don't have any extra trackers, sorry. However, my configuration is at /home/nobody/.irssi (which contains a symlink pointing to /usr/share/autodl-irssi/AutodlIrssi/) and /home/nobody/.autodl (with a symlink ultimately pointing to /config/autodl/autodl.cfg):

     docker exec -it nginx-proxy_rtorrentvpn_1 sh
     sh-5.0# ls -lah /home/nobody/.irssi/
     total 32K
     drwxrwxr-x 1 nobody users 4.0K Jan 11 20:37 .
     drwxrwxr-x 1 nobody users 4.0K Jan 11 20:18 ..
     -rw-r----- 1 nobody users 6.6K Jan 11 20:37 config
     drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 scripts
     sh-5.0# ls -lah /home/nobody/.irssi/scripts/
     total 24K
     drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 .
     drwxrwxr-x 1 nobody users 4.0K Jan 11 20:37 ..
     lrwxrwxrwx 1 nobody users   36 Jan  7 12:41 AutodlIrssi -> /usr/share/autodl-irssi/AutodlIrssi/
     drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 autorun
     sh-5.0# ls -lah /home/nobody/.irssi/scripts/autorun
     total 16K
     drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 .
     drwxrwxr-x 1 nobody users 4.0K Jan  7 12:41 ..
     lrwxrwxrwx 1 nobody users   39 Jan  7 12:41 autodl-irssi.pl -> /usr/share/autodl-irssi/autodl-irssi.pl
     sh-5.0# ls -lah /home/nobody/.autodl/
     total 32K
     drwxrwxr-x 1 nobody users 4.0K Jan 11 19:38 .
     drwxrwxr-x 1 nobody users 4.0K Jan 11 20:18 ..
     lrwxrwxrwx 1 nobody users   25 Jan 11 19:37 autodl.cfg -> /config/autodl/autodl.cfg
     -rwxrwxr-x 1 nobody users   69 Jan  7 12:41 autodl.cfg.bak
     -rw-r--r-- 1 nobody users  11K Jan 18 13:25 AutodlState.xml
     sh-5.0# ls -lah /config/autodl/
     total 28K
     drwxrwx---+ 2 nobody users 4.0K Feb 28  2019 .
     drwxrwx---+ 9 nobody users 4.0K Jan 11 19:52 ..
     -rwxrwx---+ 1 nobody users 4.3K Jan 11 18:09 autodl.cfg

     Your guess is as good as mine really, but given the folder structure above I would try to mount a volume with custom trackers at /usr/share/autodl-irssi/AutodlIrssi/trackers (e.g., -v /path/to/host/trackers:/usr/share/autodl-irssi/AutodlIrssi/trackers).
  5. Well, I can't say for sure. Like I said before, seeding several thousand torrents is an extreme sport and some people take it to the limit; nevertheless, I would also be suspicious about his info. I mean, I know that PHP 7 has come a long way, but ruTorrent is not exactly a lightweight client. In order to seed around 10% of the number of torrents that this Reddit user is claiming to, I had to tweak rtorrent and kernel parameters. The load is also hitting my disks pretty hard (as in, with a 1 Gbps connection even my SSDs are having trouble keeping up with the IOPS; they get pretty warm and I already had to replace one of them). ruTorrent actually still works, but it's very slow, and when I try to delete large files or move files around with File Manager the UI freezes. rtorrent-ps / pyrocore are indeed still flying and, although I haven't tried it myself, I doubt that I would have any problems seeding twice as many torrents as I currently do. 5k is certainly doable. I've read trustworthy reports (sysadmins who run reputable seedbox hosting services) saying that 10k is achievable with rtorrent. 35k is certainly extreme, and on a low-spec machine that would certainly be impressive. Assuming that his claims are true, maybe his usage pattern helps (I assume that he is seeding lots of small, infrequently downloaded torrents, probably on a private tracker like RED, so torrents will just be sitting there and rtorrent will be mostly idle). My usage pattern is just the opposite: there is always IO going on, the container is often downloading and seeding as fast as my VPN can handle, and rtorrent is working very hard.
  6. Thanks @binhex. I understand and actually think this is a good solution; it will at least force users to think about passwords, and over time it will reduce the attack surface quite a bit (just make sure that the first example a user finds in the documentation does not use admin/rutorrent as default credentials). I know that it's going to be a little painful, but trust me on this one, you will be doing a great thing for your userbase, even if they hate you for it. I have to admit that I already maintain my own private fork of your repo, as well as a docker-compose build that handles the stuff that I need: multi-user support (as in one rtorrent instance per user), rar support, ruTorrent plugins such as File Manager and File Upload for my wife and for the rare occasion when I use the web client (ruTorrent is no longer working that well for me), some tune-ups, tools to beef up security a little bit (fail2ban, etc.), as well as a reverse proxy + DDNS client + auto certificate manager combo :). I don't plan to make it public anytime soon, but if any of the rtorrent / ruTorrent / security features sound interesting I'll be happy to contribute upstream. Cheers and thanks again
  7. I've just updated to the latest version (after over 2 weeks of uptime, yay!). Everything is working great, including the screenshots plugin. As always, thanks @binhex. binhex, just letting you know that I'm available and willing to write the above code if I have your blessing - just give me a shout, since those changes are a little more involved than the rtxmlrpc stuff. If you'd rather I not do it / still think that the cost of disruption is too high, please let me know (no hard feelings at all, I promise not to pester you with security stuff anymore).
  8. Sure, I don't know much about qBittorrent, but here's my setup for rtorrent.

     1. I have a watch folder, an incomplete folder and a complete folder, each with a bunch of subdirectories, e.g.:

        /watch
          /tv
          /movies
          /bonus
          /...
        /incomplete
          /tv
          /...
        /complete
          /tv
          /...

     2. /watch should be monitored by pyrotorque's Watch Job (see: https://pyrocore.readthedocs.io/en/latest/advanced.html#rtorrent-queue-manager). I'm still using old-school rtorrent watched directories (https://github.com/rakshasa/rtorrent/wiki/Common-Tasks-in-rTorrent#watch-a-directory-for-torrents) because I'm too lazy to upgrade, but pyrotorque is certainly easier and more powerful to use.
     3. /incomplete is where rtorrent keeps data from downloads in progress (see 2).
     4. /complete is where downloads are moved after they are done.
     5. To automatically delete torrents I do two things. a) All of my important trackers have aliases configured (see https://pyrocore.readthedocs.io/en/latest/setup.html#setting-values-in-config-ini). b) I have a cron job that deletes properly seeded torrents. It uses a bunch of rtcontrol --cull commands - one rule per tracker, plus a general overall rule. It's a slight variation of https://pyrocore.readthedocs.io/en/latest/usage.html#deleting-download-items-and-their-data
     6. autodl-irssi is set to download stuff from the trackers where I want to build ratio; it monitors IRC channels and adds a certain number of filtered torrent files to the watched folder every week (see: https://autodl-community.github.io/autodl-irssi/configuration/overview/).
     7. I can add a torrent manually just by dropping it in the watch folder. I can delete torrents manually with rtcontrol, and I can use rtorrent-ps (CLI tool) instead of ruTorrent to check how things are going.
     8. I use Downloads Router (https://chrome.google.com/webstore/detail/downloads-router/fgkboeogiiklpklnjgdiaghaiehcknjo) to save torrent files to the most appropriate watched subfolder according to the tracker from where I'm downloading stuff (e.g., if the private tracker is specialized in movies, any torrent files that I download will end up in watch/movies by default). I can always override where torrent files are saved or move them to a different folder later.
     9. My Plex libraries are configured to read from subfolders of /complete.

     That's pretty much it for torrenting. I have Sonarr and Radarr, but I only use them with Usenet (they can quickly destroy your buffer on private trackers if you are not careful with what you are doing). You will have no problems running a few torrent containers on your NAS. Yes, theoretically more processes means more things to manage, but from personal experience it's easier to split the load. If you reach an OS or hardware limit you will know when it happens, but I know for a fact that quite a few seedbox providers are running dozens of rtorrent clients per VM with very happy and satisfied paying customers. Cheers,
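For reference, the per-tracker cleanup rules in step 5.b can be sketched as a crontab entry like the one below. The tracker alias and thresholds are made up for illustration; check the pyrocore docs linked above for the exact filter syntax supported by your version:

```
# Daily at 04:00: cull (stop, remove and delete data for) torrents from a
# hypothetical tracker alias "SomeTracker" that are complete, have seeded
# for 2+ weeks and have a ratio above 1.0
0 4 * * * rtcontrol --cull --yes is_complete=yes alias=SomeTracker ratio=+1.0 seedtime=+2w
```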
  9. Hi @Sturm, I understand. We've all been beginners at some point. To be honest, though, seeding several thousand torrents from a single docker container on a single machine requires some sysadmin skills. While you may get 2k torrents to work out of the box, it is important to understand that rtorrent, although very capable, is not the only piece required to make it work. It is also important to set realistic expectations: as you add more torrents, ruTorrent (the web interface) will begin to slow down and eventually become unstable. Most of us seeding over 3k torrents are using rtorrent exclusively through the command line and have automated flows to import, organise and automatically remove torrents. Other than that, what I was trying to say is that you will eventually hit configuration / OS / kernel limits, and ultimately hardware limits. Seeding 5k+ torrents from a single client is quite a challenging sport that a few of us nerds like to play. I may be wrong, but it seems like you are more interested in the end result (being able to seed your torrents) than in the technical challenge of running everything in a single container, right? If so, an alternative that may prove easier to set up is to run multiple containers. Just seed 1k / 1.5k torrents per container and keep an eye on CPU, network and memory usage. As for storage, spread the load over multiple HDs. Maybe finish setting up your first rtorrent box and then use docker-compose or an orchestration tool to run multiple instances on different http / https ports. That's OK. It's just a ruTorrent plugin failing to load; no need to worry unless you want to screenshot videos from ruTorrent. @binhex, I can confirm the error message. We may want to get rid of the screenshots plugin (or install ffmpeg, although I doubt that many people will actually care for it). Autotools may be handy (https://github.com/Novik/ruTorrent/wiki/PluginAutotools) to add torrents.
I would also temporarily disable hash checks (assuming that you are just pointing rtorrent to the same files that qBittorrent was serving) so that IO operations do not overload your server. Finally, add torrents in small batches (up to 100 at a time). I've migrated a few of my seedboxes to binhex containers and it's nowhere near as painful as it looks :). Cheers,
  10. Thanks for merging my PR @binhex. Hopefully it turns out to be useful for everyone else. I've later realised that we can probably safely remove this line from the watchdog script, as well as the xmlrpc-c client itself if we want, but this can be done at a later stage. Hahaha, I understand, and I noticed the huge amount of support tickets when you added the extra options as well. I can't really ask you to sacrifice even more of your time to handle support tickets. To be honest, my main worry is that your containers may become a target for malware and endanger the exact type of inexperienced user that today struggles with docker environment variables. With 5M+ pulls, and considering that your container is part of DockSTARTer, I expect tens of thousands of installations, maybe more. Since, IMO, you have the best and most complete rtorrent + ruTorrent container out there, I expect its popularity to grow. As you said, right or wrong, people will use default credentials. How many of those installations are exposing unprotected RPC2 mounts over the internet? How many of them are only protecting XMLRPC with default credentials? That's a juicy target for hackers, and unfortunately it's trivial to modify the scripts that already target unprotected RPC2 mounts and attempt a few passwords like admin / rutorrent. My take, though, is that removing the defaults can probably be done in a way that does not disrupt your existing user base too much but still provides better security out of the box for newcomers. We can leverage the environment variables to change defaults without affecting old users (who already have rpc2_auth and webui_auth files in place).
I think that most of the changes would need to be done in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rutorrent.sh#L225-L328 and https://github.com/binhex/arch-rtorrentvpn/blob/master/build/root/install.sh#L479-L539. Here's my proposal, ordered from most painful change to least painful:

      1. The default for ENABLE_RPC2, when not specified, should be "no". I know, this one may result in some support requests, but most users will probably follow your guide and have it set to "yes" already. For everyone else I can give a hand by asking them to set ENABLE_RPC2=yes. This change, by itself, will make your container way safer.
      2. There should be no default for RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS. Instead, one of three things should happen:
         a. If non-empty rpc2_auth and webui_auth files exist and the credentials match admin / rutorrent, we print a giant warning telling users not to use default credentials, but the container still boots and works as expected. This is the best that we can do for older insecure installations without a flood of support requests.
         b. If non-empty rpc2_auth and webui_auth files exist and there isn't an admin / rutorrent user present, we do nothing. Security-conscious users who know what they are doing will not be bothered.
         c. If rpc2_auth and webui_auth aren't there or are empty (new users), we fail the container startup with a clear explanation of how to use RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS. New users will set their own credentials, as they are supposed to.
      3. README.md examples should be changed so that admin / rutorrent isn't there by default. Although not ideal, I'm OK with myusername / mypassword. I doubt that we'll get support tickets from new users; some will actually use myusername and mypassword, so we may also want to check for those in 2.a.

      IMO, old users will basically be taken care of by 2.a and 2.b, and the support requests that we get about RPC2 can be solved by asking the user to set ENABLE_RPC2=yes. Maybe I'm being naive about the typical user knowledge level (you know your user base better than anyone else), however it feels like this can be done without resulting in an endless ****storm of support requests. What do you think? If you agree, I again volunteer to help with the changes, as well as to monitor the forums and tell users to set ENABLE_RPC2 when Sonarr / Radarr / *rr fails :D.
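To make the proposal concrete, here's a rough bash sketch of the check described in 2.a-2.c. The function name, file format assumptions (htpasswd-style, "user:hash" per line) and messages are all hypothetical; the real logic would live in the container's rutorrent.sh:

```shell
#!/bin/bash
# Hypothetical startup check for an htpasswd-style auth file.
# Returns 1 (refuse to start) if the file is missing/empty,
# prints a loud warning if the default 'admin' user is present.
check_auth_file() {
    local auth_file="$1"
    if [ ! -s "${auth_file}" ]; then
        # Case 2.c: no credentials at all -> fail startup with a clear message
        echo "[crit] ${auth_file} is missing or empty - set WEBUI_USER/WEBUI_PASS (or RPC2_USER/RPC2_PASS)" >&2
        return 1
    fi
    if grep -q '^admin:' "${auth_file}"; then
        # Case 2.a: default user detected -> boot anyway, but warn loudly
        echo "[warn] default credentials detected in ${auth_file} - please change them!" >&2
        return 0
    fi
    # Case 2.b: custom credentials -> nothing to do
    return 0
}
```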
  11. I'm seeding 2k+ torrents at the moment (not on Unraid though; a cheap NAS with 8GB of memory and two SSDs acting as cache that handle most of the IO). I didn't really do much with rtorrent.rc other than slightly increasing the number of files and sockets available, due to the nature of the load my system is running, as well as slightly increasing buffers so that my disks don't get hit as hard. This isn't a silver bullet though (see https://github.com/rakshasa/rtorrent/wiki/Performance-Tuning for more info). Maybe start over with the container's default rtorrent.rc file? Also make sure that your OS and cgroups aren't limiting system resources. I don't really use ruTorrent all that much, but it still loads fine, although the UI is very slow. binhex explained the RPC2 mount above. autodl-irssi is used to automatically download torrents from IRC announce channels (it's mainly used by people on private trackers trying to build buffer).
  12. @binhex, I've modified the code as I mentioned above, and it works on my machine :D. I have been running the modified version for a while with ENABLE_RPC2=no; things have been stable for over 10 days with PIA, and the watchdog process has triggered port changes a couple of times with no issues whatsoever. It also works with ENABLE_RPC2=yes. PR: https://github.com/binhex/arch-rtorrentvpn/pull/134. If for whatever reason you decide not to merge the PR, I hereby grant you the right to use it in any way that suits you, no strings attached. Having said that, if you choose to merge my changes and then eliminate defaults for RPC2_USER, RPC2_PASS, WEBUI_USER and WEBUI_PASS I'll be a very happy man =).
----
If anyone else is willing to try it, I've also uploaded a standalone version to a (temporary) Gist: https://gist.github.com/flatmapthatshit/6ff8d1ac441092b33339890b5145de3a - testers with VPNs other than PIA are especially welcome. For a quick experiment, you can use docker to mount the new version of rtorrent.sh over /home/nobody/rtorrent.sh (note that docker requires an absolute host path for bind mounts, hence the $(pwd)), e.g.:

    docker run -d \
      --cap-add=NET_ADMIN \
      -p 9080:9080 \
      -p 9443:9443 \
      -p 8118:8118 \
      --name=rtorrentvpn \
      -v /root/docker/data:/data \
      -v /root/docker/config:/config \
      -v /etc/localtime:/etc/localtime:ro \
      -v "$(pwd)/rtorrent.sh":/home/nobody/rtorrent.sh \
      -e ENABLE_RPC2=no `#followed by all other environment variables` \
      binhex/arch-rtorrentvpn

Once everything is running, other than waiting for the watchdog to do its job, you can also run the script manually to force changes, e.g.:

    docker exec -it -e rtorrent_running='true' \
      -e VPN_INCOMING_PORT='30163' \
      -e vpn_ip='10.16.11.6' \
      -e external_ip='31.168.172.14' \
      <my_container> /home/nobody/rtorrent.sh

Hopefully this is helpful to the community.
  13. I see; that's a limitation of the xmlrpc CLI tool. Libraries can talk to rtorrent straight over SCGI - this is how software like pyrocore talks to rtorrent. Actually, pyrocore even exposes its own XML-RPC CLI tool that does not require an HTTP endpoint: https://pyrocore.readthedocs.io/en/latest/references.html#cli-usage-rtxmlrpc Would it be much of a hassle to replace xmlrpc with pyrocore's rtxmlrpc client? As far as I understand, you already have pyrocore installed. Changes would probably be limited to the first if statement in https://github.com/binhex/arch-rtorrentvpn/blob/master/run/nobody/rtorrent.sh and then, AFAICT, the requirement for default credentials can go away. If you are busy and accept contributions, I can even take a stab at it myself. What do you think?
  14. Sorry for the late response; that's a good guess, thanks! I do actually have RPC2 disabled. I did downgrade to a previous image a few days ago, but I'll upgrade to the latest image, enable RPC2 and see how it goes for the sake of troubleshooting (if it breaks again I'll try to extract something useful from the logs using your previous hints). I know that I'm being a little - well, more than a little - insistent about this, but since the port renewal scripts are operating from inside the container, are you sure that exposing RPC2 over Nginx is necessary? Can't the xmlrpc client issue commands straight to localhost:5000 (rtorrent's SCGI port)? Or even better, can't you move to a socket setup? For users that really need to expose RPC2 to Sonarr / Radarr / etc., Nginx can still act as a proxy (see https://www.reddit.com/r/seedboxes/comments/92oi6u/creating_a_scgi_socket_file_to_http_proxy_for/ for an example). If this works, you can revert all of the unnecessary security compromises: no need for default credentials, no requirement to expose RPC to the outside world for those of us that don't need it, etc. Don't take this the wrong way - auto-restart is a great feature and I don't want it to go away - however, as it is, the security compromises are really bothering me.
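For completeness, the socket setup I'm suggesting is essentially a one-liner in rtorrent.rc (the path is hypothetical); anything inside the container that can speak SCGI can then talk to rtorrent without going through nginx at all:

```
# Serve XML-RPC over a local UNIX socket instead of TCP port 5000 (sketch)
network.scgi.open_local = /config/rtorrent/rpc.socket
```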
  15. Just to report: the containers are less stable than before. Despite the new auto-restart logic, after 3 to 6 days I always end up with no connectivity (red icon at the bottom of ruTorrent) until I restart the container manually. I'm with PIA, on a very stable 1 Gbps connection. With older images I was managing several weeks of uptime. Unfortunately, the logs are pure garbage after a few hours. Is there a way that I can set up the container to dump logs as soon as connectivity is lost to help troubleshoot the issue?
  16. That's a weird one. Maybe attach to the container and run `top` to see which processes are using CPU time. You can also limit how many CPUs are available to the container with Docker's --cpus option (see https://docs.docker.com/config/containers/resource_constraints/ for further info). I've noticed that ruTorrent may get stuck for a while when I delete large (as in 100GB+) torrents with data; other than that, I normally seed a few thousand torrents with no issues (and nowhere near 100% CPU usage).
  17. Hi @binhex, sorry that I'm kinda late to the party. The upgrade path wasn't totally smooth for me. I hadn't noticed the new parameters, and since they have defaults (and also given that you've changed nginx.conf to read credentials from the new webui_auth file, bypassing my custom settings), my container's ruTorrent has been exposed to the internet with default credentials for a while. Luckily I had ENABLE_RPC2=no, or it would have also exposed XML-RPC over the internet with easily guessable default credentials. I guess I've shared the following article already, but it demonstrates why exposing RPC2 with no password or default credentials is not a good idea: https://www.f5.com/labs/articles/threat-intelligence/rtorrent-client-exploited-in-the-wild-to-deploy-monero-crypto-miner I know that ultimately this is a trade-off between security and convenience, but given how many people are using your container I would suggest that you make it as secure as possible by default. I guess two good ways to handle RPC2_USER/RPC2_PASS/WEBUI_USER/WEBUI_PASS without making the container vulnerable by default are: 1. Remove the mentioned variables. From inside the container you can probably issue XMLRPC commands straight to port 5000 or use an scgi_local socket (bypassing nginx authentication altogether), right? Otherwise, during container startup you can generate random credentials. 2. Keep the variables but fail to start the container if no values are provided. IMO, default credentials are a really bad idea. Of course, a simple docker inspect will reveal the passwords, but I guess that this is not a big issue given that VPN_PASS is already exposed (secrets are a better alternative, but they are limited to docker swarm and docker-compose). Cheers
  18. Just dropping by to say that the containers have been stable for 3 days already. Buttery smooth. I initially didn't get any disk space back, and then I remembered to manually delete the Flood folder created by previous versions :D. Great stuff getting rid of Google and OpenDNS, by the way; you never know when DNS leaks will happen. Despite some Reddit hate for Cloudflare, I've never had a problem with 1.1.1.1. Will send some extra beer your way soon :).
  19. Hey @binhex, thanks for the heads up. I've noticed that you have updated the docs and uploaded a test image. However, "latest" is still pointing to a 5-week-old image that contains Flood. Are you planning to release the new version soon?
  20. Fair enough. Both valid points. That's a good compromise: new users will be safe by default, but it will not be a draconian imposition on old users. Good idea about the warning as well. Maybe something like:

      ENABLE_RPC: This option exposes XMLRPC over SCGI (the /RPC2 mount). Useful for mobile clients such as Transdroid and nzb360. WARNING: Once enabled, authenticated users will be able to execute shell commands over http / https (including commands that write to shared volumes). Known exploits target insecure XMLRPC deployments (e.g., the Monero miner exploit). Enable at your own risk, and make sure you have changed the username and password before doing so.

      And something similar for ENABLE_RPC_AUTH:

      WARNING: By disabling ENABLE_RPC_AUTH you are essentially allowing everyone with http / https access to run arbitrary shell commands against your container. Disabling this option is not recommended.

      In summary: what it does, why anyone would enable it, why it can be dangerous, and how to properly protect the setup. (Sorry for my bad English, BTW. I'm not a native speaker.) Honestly, I wouldn't set ENABLE_RPC_AUTH=no even on my own LAN. Perimeter security is great and all; however, to me XMLRPC is about the same as ssh access (none of my servers allow unauthenticated access). Keep it safe and let the reverse proxy pass through auth headers.
  21. Tested. All working great! Thanks binhex. More beer coming as soon as I receive my paycheck ;). I know that I'm being annoying / paranoid, but I think that ENABLE_RPC2 should be "no" by default. RPC2 + default admin / password means that containers are still easily exploitable out of the box. A valid counterpoint is that most people who are not tech-savvy enough to change the default credentials will probably not be exposing ports to the internet... However, I guess that most users will not really care about / need XMLRPC over SCGI out of the box. Honestly, I think that it is safer to assume that most people will not need it and let everyone else enable it manually.
  22. Yes. Certainly, proxying with auth is better than directly exposing port 5000 to the internet. I don't know much about nzb360. I have, however, enabled HTTPRPC and tested Sonarr with it (the plugins/httprpc/action.php endpoint). It has been working well (for about 5 hours now) and is not really impacting my CPU usage very much (testing on a laptop with an i7 CPU). I do understand, and thank you for taking action. If possible, however, I would at least recommend pushing basic authentication on the /RPC2 mount to everyone ASAP (as per my understanding, you are already planning to do that as soon as you get confirmation that it works). While it is not as effective as completely removing the attack vector by default, with basic authentication exposed containers are at least safer from scanning bots.
  23. Hi @binhex, it works. However... is that /RPC2 mount proxying 9080 and 9443 to 5000 over SCGI really necessary for ruTorrent or Flood to work over the internet? As far as I can tell from config.ini, ruTorrent is actually going straight to 127.0.0.1:5000:

      $scgi_port = 5000;
      $scgi_host = "127.0.0.1";

      Flood can also connect directly to port 5000. There are even a couple of plugins meant to replace SCGI altogether (see https://github.com/Novik/ruTorrent/wiki/PluginHTTPRPC for instance); HTTPRPC even promises to reduce bandwidth usage in exchange for extra CPU load. Regardless of authentication, if WAN exposure of XML-RPC can be limited, I think that would be a great idea. There are several known exploits that target insecure RPC deployments. There are also bots looking for this kind of thing (you can thank years and years of insecure WordPress deployments for that). If XML-RPC exposure over WAN is not strictly necessary, I would vote to disable it by default. Maybe include an ENABLE_XMLRPC_OVER_SCGI flag for people that really need it. Basic authentication should certainly be enabled, and I would really advise that people thinking about enabling such a flag first take the time to tighten security (or better yet, do not expose ports to the WAN; use a VPN instead). Further reading:
      * Discussion about rtorrent and XML-RPC exploits: https://github.com/rakshasa/rtorrent/issues/696
      * "Secure" setup with rtorrent + local / UNIX socket + Nginx reverse proxy with basic authentication + ruTorrent
      * rTorrent - ruTorrent communication: http://forum.cheapseedboxes.com/threads/rtorrent-rutorrent-guide.1417/ - HTTPRPC sounds like a great alternative to XML-RPC over SCGI (unless you are running a potato PC).
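For anyone who does need /RPC2 over the WAN, the nginx side of the basic-auth SCGI proxy discussed here is short. This is a sketch; the auth file path and realm name are illustrative, not the container's actual values:

```nginx
# Proxy /RPC2 to rtorrent's SCGI port, gated behind HTTP basic auth
location /RPC2 {
    include scgi_params;
    scgi_pass 127.0.0.1:5000;
    auth_basic "rtorrent XML-RPC";
    auth_basic_user_file /config/nginx/security/rpc2_auth;
}
```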
  24. You don't need to expose port 5000. Just expose 9443 (or 9080 if you are offloading HTTPS to the proxy). As far as ruTorrent is concerned, it is talking to XML-RPC on the container's localhost. If you are using other clients that require access to port 5000, you can use a similar strategy. I've personally created a docker-compose file for all applications that require access to rtorrent. Docker compose creates a shared network where services are discoverable by name (https://docs.docker.com/compose/networking/), so applications like Sonarr can access port 5000 even though it is not directly exposed to the internet (e.g., use container_hostname:5000). I would not recommend exposing RPC2 over the internet. However, if you want to do it for whatever reason (e.g., Sonarr is running on a separate box on the internet and for whatever reason you don't want to use a VPN), you will need to really beef up security. A username and password is just a first measure; monitoring your logs, setting up fail2ban, etc. are all good steps. Otherwise you will soon find a crypto miner installed on your server...
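As a sketch of that docker-compose layout (service names and the Sonarr image are examples, not a drop-in file):

```yaml
# Sonarr reaches rtorrent at rtorrentvpn:5000 over the compose network;
# port 5000 is deliberately NOT published to the host.
services:
  rtorrentvpn:
    image: binhex/arch-rtorrentvpn
    cap_add:
      - NET_ADMIN
    ports:
      - "9443:9443"   # web UI only
  sonarr:
    image: linuxserver/sonarr
    depends_on:
      - rtorrentvpn
```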
  25. Hi guys, just sharing (no support needed). I misconfigured autodl-irssi and ended up with over 3k torrents in the container. Tonight I got several emails warning me that the OOM killer was running in a loop (and since the VPN IP changes every time, I was also blocked from a tracker for spamming... again! :(). Out of curiosity I had a look at the running processes tab: rtorrent-ps + ruTorrent + nginx + PHP were sitting at a cool 1.2 GB. Flood, on the other hand, quickly goes from 500 MB to 4 GB to 6 GB to OOM. The node.js processes are basically eating all available RAM (not sure if this is expected or a memory leak). What I've learned today: 1) Always double-check that you have set up autodl-irssi correctly. Do set sane daily limits on every filter. 2) Disable Flood. Honestly, it's not worth it. I've stopped and removed 2700+ torrent files manually. Even with only ~300 torrents, Flood was still using ~550 MB of memory (a good 4x more than rtorrent + ruTorrent combined). At this stage I would say that Flood is only for casual users. 3) Be very careful with VPNs + container auto-restart.