
Cat_Seeder


  1. Fair enough. Both valid points. That's a good compromise: new users will be safe by default, but it will not be a draconian imposition on old users. Good idea about the warning as well. Maybe something like: ENABLE_RPC: This option exposes XMLRPC over SCGI (/RPC2 mount). Useful for mobile clients such as Transdroid and nzb360. WARNING: Once enabled, authenticated users will be able to execute shell commands over http / https (including commands that write to shared volumes). Known exploits target insecure XMLRPC deployments (e.g., the Monero exploit). Enable at your own risk, and make sure you have changed the username and password before doing so. And something similar for ENABLE_RPC_AUTH. WARNING: By disabling ENABLE_RPC_AUTH you are essentially allowing everyone with http / https access to run arbitrary shell commands against your container. Disabling this option is not recommended. In summary: what it does, why anyone would enable it, why it can be dangerous and how to properly protect the setup. (Sorry for my bad English BTW. I'm not a native speaker.) Honestly, I wouldn't set ENABLE_RPC_AUTH=no even in my own LAN. Perimeter security is great and all; however, to me XMLRPC is about the same as ssh access (none of my servers allow unauthenticated access). Keep it safe and let the reverse proxy pass through auth headers.
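If the options end up looking like the ones proposed above, an opt-in run might look roughly like this. This is only a sketch: ENABLE_RPC and ENABLE_RPC_AUTH are the names suggested in this thread, and WEBUI_USER / WEBUI_PASS are placeholders; check the image documentation for the actual variable names and defaults.

```
# Hypothetical opt-in: expose XMLRPC over SCGI but keep authentication on,
# with the default credentials changed first. Variable names are the ones
# proposed above, not confirmed against the image.
docker run -d \
  -e ENABLE_RPC=yes \
  -e ENABLE_RPC_AUTH=yes \
  -e WEBUI_USER=notadmin \
  -e WEBUI_PASS=a-long-random-password \
  ...
```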
  2. Tested. All working great! Thanks Binhex. More beer coming as soon as I receive my paycheck ;). I know that I'm being annoying / paranoid, but I think that ENABLE_RPC2 should be "no" by default. RPC2 + the default admin / password means that containers are still easily exploitable out of the box. A valid counterpoint is that most people who are not tech-savvy enough to change the default credentials will probably not be exposing ports to the internet... However, I guess that most users will not really care about / need XMLRPC over SCGI out of the box. Honestly, I think that it is safer to assume that most people will not need it and let everyone else enable it manually.
  3. Yes. Certainly proxying with auth is better than directly exposing port 5000 to the internet. I don't know much about nzb360. I have, however, enabled HTTPRPC and tested Sonarr with it (plugins/httprpc/action.php endpoint). It has been working well (for about 5 hours), and is not really impacting my CPU usage very much (testing on a laptop with an i7 CPU). I do understand, and thank you for taking action. If possible, however, I would at least recommend pushing basic authentication on the /RPC2 mount to everyone ASAP (as per my understanding you are already planning to do so as soon as you get confirmation that it works). While it is not as effective as completely removing the attack vector by default, with basic authentication exposed containers are at least safer from scanning bots.
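For anyone rolling their own proxy in the meantime, here is a minimal sketch of what basic auth on the /RPC2 mount could look like in nginx. The backend address assumes rtorrent's SCGI listener is on 127.0.0.1:5000 (as discussed in this thread), and the htpasswd path is an example:

```
# Sketch only: protect the XMLRPC mount with basic auth.
# Assumes SCGI on 127.0.0.1:5000 and a credentials file created with
# `htpasswd -c /etc/nginx/.htpasswd myuser`.
location /RPC2 {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    include              scgi_params;
    scgi_pass            127.0.0.1:5000;
}
```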
  4. Hi @binhex, it works. However... Is that /RPC2 mount proxying 9080 and 9443 to 5000 over SCGI really necessary for rutorrent or flood to work over the internet? As far as I can tell from config.ini, rutorrent is actually going straight to 127.0.0.1:5000: $scgi_port = 5000; $scgi_host = "127.0.0.1"; Flood can also connect directly to port 5000. There are even a couple of plugins meant to replace SCGI altogether (see https://github.com/Novik/ruTorrent/wiki/PluginHTTPRPC for instance); it even promises to reduce bandwidth usage in exchange for extra CPU load. Regardless of authentication, if WAN exposure of XML-RPC can be limited, I think that would be a great idea. There are several known exploits that target insecure RPC deployments, and there are bots looking for this kind of thing (you can thank years and years of insecure WordPress deployments for that). If XML-RPC exposure over WAN is not strictly necessary, I would vote to disable it by default. Maybe include an ENABLE_XMLRPC_OVER_SCGI flag for people who really need it. Basic authentication should certainly be enabled, and I would really advise that people thinking about enabling such a flag first take the time to tighten security (or better yet, do not expose ports to WAN. Use a VPN instead). Further reading:
     * Discussion about rtorrent and XML-RPC exploits: https://github.com/rakshasa/rtorrent/issues/696
     * "Secure" setup with rtorrent + local / UNIX socket + Nginx reverse proxy with basic authentication + rutorrent:
     * rTorrent - ruTorrent communication: http://forum.cheapseedboxes.com/threads/rtorrent-rutorrent-guide.1417/
     HTTPRPC sounds like a great alternative to XML-RPC over SCGI (unless you are running a potato PC).
  5. You don't need to expose port 5000. Just expose 9443 (or 9080 if you are offloading HTTPS to the proxy). As far as rutorrent is concerned it is talking to XML-RPC on the container's localhost. If you are using other clients that require access to port 5000, you can use a similar strategy. I've personally created a docker compose file for all applications that require access to rtorrent. Docker compose creates a shared network where services are discoverable by name (https://docs.docker.com/compose/networking/). Applications like Sonarr can access port 5000 even though it is not directly exposed to the internet (e.g., use container hostname:5000). I would not recommend exposing RPC2 over the internet. However, if you want to do it for whatever reason (e.g., Sonarr is running on a separate box on the internet and for whatever reason you don't want to use a VPN) you will need to really beef up security. Username and password is just a first measure; monitoring your logs, setting up fail2ban, etc. are all good steps. Otherwise you will soon find a crypto miner installed on your server...
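A minimal compose sketch of the setup described above. Service and image names are placeholders; the point is that Sonarr reaches rtorrent at rtorrent:5000 over the compose network without port 5000 ever being published to the host:

```
# Sketch: both services share the default compose network, so Sonarr can
# reach SCGI by service name (rtorrent:5000). Only the web UI ports are
# published; port 5000 stays internal.
version: "3"
services:
  rtorrent:
    image: binhex/arch-rtorrentvpn   # image name assumed; adjust to taste
    ports:
      - "9080:9080"
      - "9443:9443"
  sonarr:
    image: linuxserver/sonarr        # placeholder
    # inside this container, point the download client at rtorrent:5000
```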
  6. Hi guys, just sharing (no support needed). I had misconfigured autodl-irssi and ended up with over 3k torrents in the container. Tonight I got several emails warning me that the OOM Killer was running in a loop (and since the VPN IP changes on every restart, I was also blocked from a tracker for spamming... Again! :(). Out of curiosity I had a look at the running process tab. rtorrent-ps + rutorrent + nginx + PHP were sitting at a cool 1.2 GB. Flood, on the other hand, quickly goes from 500 MB to 4 GB to 6 GB to OOM. The node.js processes are basically eating all available RAM (not sure if this is expected or a memory leak). What I've learned today: 1) Always double check that you have set up autodl-irssi correctly. Do set sane daily limits on every filter. 2) Disable Flood. Honestly, it's not worth it. I've stopped and removed 2700+ torrent files manually. Even with only ~300 torrents Flood was still using ~550 MB of memory (a good 4x more than rtorrent + rutorrent combined). At this stage I would say that Flood is only for casual users. 3) Be very careful with VPNs + container auto restart.
  7. Hmm. I'm not familiar with Proton VPN, but I would check whether they support port forwarding. Without it you will have a half-baked experience at best. No incoming connections means passive mode: you will get in trouble on private trackers, and on public trackers you may not find many (or even any) peers... If you want to set up a VPN, always check that they allow port forwarding and that the port you have opened is reachable (https://www.yougetsignal.com/tools/open-ports/). Other than that, while it is not related to your issue, you may not be able to access the webui remotely (see Q2 at
  8. Sorry, it took a while trying to fix it by myself (and I've "succeeded"). Initially I was getting the same errors as before:

     2019/03/16 04:36:26 [error] 904#904: *3 connect() failed (113: Host is unreachable) while connecting to upstream, client: 192.168.X.Y, server: mydomain.local, request: "GET / HTTP/1.1", upstream: "http://192.168.X.Y:3000/", host: "mydomain.local"

     It turns out that Docker wouldn't let me access my host IP (be it public or internal) without --net=host. That is, of course, an undesirable workaround. A better solution is to create a user-defined bridge network so that containers can talk directly. After doing that I enabled IPv6 and modified the NGINX configuration:

     listen 8080;
     listen [::]:8080;

     And finally it worked as expected:

     $ curl -6 -g -v -H "Host: mydomain.local" http://[::1]:8080
     * Rebuilt URL to: http://[::1]:8080/
     *   Trying ::1...
     * TCP_NODELAY set
     * Connected to ::1 (::1) port 8080 (#0)
     > GET / HTTP/1.1
     > Host: mydomain.local
     > User-Agent: curl/7.60.0
     > Accept: */*
     >
     < HTTP/1.1 200 OK

     [16/Mar/2019:05:14:09 +0000] - 200 200 - GET http mydomain.local "/" [Client 192.168.X.Y] [Length 543] [Gzip -] [Sent-to 192.168.X.Y] "curl/7.60.0" "-"
     [16/Mar/2019:05:18:40 +0000] - 200 200 - GET http mydomain.local "/" [Client ::1] [Length 543] [Gzip -] [Sent-to 192.168.7.2] "curl/7.60.0" "-"

     However, I have to say that while I love the nice UI and have nothing but praise for the developers, the container is not really what I was expecting. It is not currently able to generate configuration on the fly when I run new containers (that's probably the most important feature of jwilder/nginx-proxy); plus, I quickly outgrew the UI and had to intervene manually to make the container work with IPv6, make it play well with syslog, etc. You will need to set a Proxy Host configuration for the HTTP port (9981) and a stream for the other port (9982). In the Stream configuration UI you can select a different port than 8080 (or whatever your HTTP port is). Don't forget to publish that second port (e.g., -p 9982:9982) and add a rule to allow incoming traffic to that port in your firewall.
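For reference, the user-defined bridge part of the fix above boils down to something like the following. The network and container names are illustrative, and the IPv6 subnet is an arbitrary ULA prefix:

```
# Create a user-defined bridge with IPv6 enabled (the nginx config above
# listens on [::]) and attach both containers to it.
docker network create --ipv6 --subnet fd00:dead:beef::/48 proxynet
docker network connect proxynet nginx-proxy-manager
docker network connect proxynet my-backend-app
# Containers on proxynet can now reach each other by container name,
# e.g. http://my-backend-app:3000 from the proxy.
```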
  9. I would probably drop Kitematic altogether and go native (e.g., Docker Desktop). I'm not familiar with Kitematic, but it seems to be binding ports to localhost only (e.g., -p 127.0.0.1:32862:9080 instead of -p 9080:9080). The CLI is not that hard to learn; plus, not having to deal with Docker Toolbox / VirtualBox will make your life easier.
  10. Hi Djoss, no luck with my local (192.168.x.y) or public IPs :(. My setup is:
      * Linux host
      * Your image running on Docker
      * Another image running on Docker, exposing port 3000 to the router
      Accessing my local IP directly works, and the nginx-proxy image works as expected. Any other ideas?
  11. Just sharing one of the links that I sent you in private yesterday, in case anyone else hits the same issue. Please ignore the HAProxy-specific tweaks: https://medium.com/@pawilon/tuning-your-linux-kernel-and-haproxy-instance-for-high-loads-1a2105ea553e The number of open files, max TCP connections and "reservation" times can all affect the end result when dealing with a large number of torrents. I'm on Linux (not Unraid) and had to fine-tune the host to get it all working with 1k+ torrents.
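The host-side knobs mentioned above look roughly like this. The values are examples only, not recommendations; measure your own workload before copying anything:

```
# Example host limits for a large torrent count. Tune to your workload;
# put the sysctl lines in /etc/sysctl.d/ to make them persistent.
ulimit -n 65536                            # per-process open file descriptors
sysctl -w fs.file-max=2097152              # system-wide open file limit
sysctl -w net.core.somaxconn=4096          # pending-connection backlog
sysctl -w net.ipv4.tcp_fin_timeout=15      # release TIME_WAIT sockets sooner
```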
  12. Flood listens on port 3000. Rutorrent listens on ports 9080 and 9443 (HTTPS). If you want both, you can set ENABLE_FLOOD to BOTH. Be warned that, while it looks great, Flood uses a lot of memory and is not as feature-complete as rutorrent. With 1k torrents, Flood's Node.js process uses quite a bit of memory, plus the UI lags so much that it's barely usable; rutorrent is still doing "reasonably" fine. As for rutorrent not starting, try starting from scratch: delete the container, pull the latest image and start with a fresh volume / host folder bound to the container's /config folder.
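Putting the ports and the flag together, a run command with both UIs enabled might look like this. It is a sketch: ENABLE_FLOOD and the port numbers come from this thread, everything else is elided:

```
# Sketch: enable both Flood and ruTorrent and publish all three UI ports
# (ENABLE_FLOOD=both per the option discussed above; verify against the
# image documentation).
docker run -d \
  -e ENABLE_FLOOD=both \
  -p 9080:9080 \
  -p 9443:9443 \
  -p 3000:3000 \
  ...
```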
  13. I think so. Have a look at the documentation below: enable_retry will turn off encryption in the second case. So basically the difference is that 1) tries plain text first and then retries with encryption; if the client can do both it will prefer plaintext. 2) tries encryption first and then retries with plain text; if the client can do both it will prefer encryption. Both strategies, in theory, will allow the user to connect to any kind of peer. The effects on speed are somewhat hard to predict. All things being equal, plaintext is probably faster. However, if the ISP is traffic shaping, encryption will probably boost speeds. Maybe go for 1 when the VPN is enabled and 2 otherwise?
  14. @binhex, I'm sorry to keep bothering you. I just want to check something. In rtorrent.rc we have: protocol.encryption.set = allow_incoming,enable_retry,prefer_plaintext As far as I understand, rTorrent will work in plain text mode by default, right? Is there a reason not to change it to something like: protocol.encryption.set = allow_incoming,try_outgoing,enable_retry So that it tries to use RC4 encryption when possible? As far as I understand this is safer and a good neighbour policy (it helps people who do not use a VPN).
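Spelled out side by side, the two rtorrent.rc variants under discussion are:

```
# Current default (as quoted above): accept encrypted incoming peers,
# but prefer plaintext for outgoing connections.
protocol.encryption.set = allow_incoming,enable_retry,prefer_plaintext

# Proposed alternative: attempt RC4 encryption on outgoing connections
# first, falling back only if the encrypted handshake fails.
protocol.encryption.set = allow_incoming,try_outgoing,enable_retry
```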
  15. I haven't tried it myself, but if rTorrent + Flood is all you need, maybe the following image fits the bill: https://hub.docker.com/r/wonderfall/rtorrent-flood You may, of course, use binhex's image with the correct flags to disable VPN, Privoxy and ruTorrent + autodl-irssi; however, given that you do not need 80% of its features, it might feel like driving your kids to school in a lorry :).