About 7thSon


  1. Could we sort this out in PM? I'm also using a reverse proxy, and I just need to see how you formatted your credentials etc., so I can match my input to the app and to the way I run rTorrent.
  2. Does anyone have a working setup of rTorrent in the nzb360 app currently? It used to work for me, and I'm unsure of what's causing me to not be able to log in, but I'm thinking it might be the username/password settings for the container.
  3. Setting VPN_ENABLED=no makes no difference. And how would the VPN even affect the ability to connect to the deluge daemon? I just renamed my core.conf to core.conf.old and restarted deluge so it recreated the file, and now I can connect again. I recall this happening before: somehow the core.conf file gets corrupted and I can't connect to the daemon.
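For anyone hitting the same corruption, the recreate-the-file fix described above can be scripted. A minimal sketch, assuming the config volume layout shown elsewhere in this thread; the `reset_conf` helper name is made up, not part of the container:

```shell
# reset_conf: move a config file aside so the daemon writes a fresh
# default on its next start (helper name is hypothetical).
reset_conf() {
  conf="$1"
  [ -f "$conf" ] || { echo "no such file: $conf" >&2; return 1; }
  mv "$conf" "$conf.old"
}
# Typical use against the container's config volume:
#   docker stop deluge-vpn
#   reset_conf /apps/docker/deluge/config/core.conf
#   docker start deluge-vpn
```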
  4. Have there been some breaking updates to the container in regards to connecting to the daemon? I can't connect using either the deluge-console ("deluge-console -d localhost -p 58846 -U user -P password"), or the desktop client (2.0.4.dev23). I've been able to successfully connect to the daemon up until about a week ago. The only new error I've seen is the "ngettext" issue from python, but that seems unlikely to be the source of this problem.
  5. Yeah, it works again with that fix applied. Thanks.
  6. Has something been changed with the rutorrent-script with regard to nginx? A few days ago my rutorrent UI stopped responding, and checking the logs I find this on startup:

     2020-01-22 17:25:38,433 DEBG 'rutorrent-script' stderr output: ln: failed to create hard link '/home/nobody/bin/nginx' => '/usr/bin/nginx'
     2020-01-22 17:25:38,433 DEBG 'rutorrent-script' stderr output: : Operation not permitted
     2020-01-22 17:25:38,434 DEBG 'rutorrent-script' stderr output: /home/nobody/rutorrent.sh: line 342: /home/nobody/bin/nginx: No such file or directory
     2020-01-22 17:25:38,434 DEBG fd 21 closed, stopped monitoring <POutputDispatcher at 140256739577568 for <Subprocess at 140256739577520 with name rutorrent-script in state RUNNING> (stdout)>
     2020-01-22 17:25:38,434 DEBG fd 25 closed, stopped monitoring <POutputDispatcher at 140256739636848 for <Subprocess at 140256739577520 with name rutorrent-script in state RUNNING> (stderr)>
     2020-01-22 17:25:38,434 INFO exited: rutorrent-script (exit status 127; not expected)
     2020-01-22 17:25:38,434 DEBG received SIGCHLD indicating a child quit
     2020-01-22 17:25:40,703 DEBG 'watchdog-script' stdout output: [info] rTorrent process started
     [info] Waiting for rTorrent process to start listening on port 5000...
     2020-01-22 17:25:40,713 DEBG 'watchdog-script' stdout output: [info] rTorrent process listening on port 5000
     2020-01-22 17:25:40,720 DEBG 'watchdog-script' stdout output: [info] Autodl-irssi not enabled, skipping startup
     2020-01-22 17:25:40,721 DEBG 'watchdog-script' stdout output: [info] Initialising ruTorrent plugins (checking rTorrent is running)...
     2020-01-22 17:25:40,738 DEBG 'watchdog-script' stdout output: [info] rTorrent running
     [info] Initialising ruTorrent plugins (checking nginx is running)...

     The process then hangs for a long time waiting for nginx, with nothing happening.
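The failed `ln` in that log is the interesting part: nginx never gets hard-linked into /home/nobody/bin, so every later step waits on a binary that isn't there. A defensive sketch of that startup step, assuming a plain copy is an acceptable fallback when the kernel or filesystem refuses the link (`link_or_copy` is a hypothetical name, not the container's actual code):

```shell
# link_or_copy: try a hard link first (as the startup script does),
# and fall back to copying when the link is refused.
link_or_copy() {
  src="$1"; dst="$2"
  mkdir -p "$(dirname "$dst")"
  ln "$src" "$dst" 2>/dev/null || cp "$src" "$dst"
}
# e.g. link_or_copy /usr/bin/nginx /home/nobody/bin/nginx
```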
  7. I'm not able to set the "Incoming Port" in deluge UI to 58846, it reverts to 0 when I click "apply". Is this intended, or is it some kind of restriction/bug?
  8. What is the correct way of removing the ruTorrent authentication? I set "ENABLE_RPC2_AUTH=no" and restarted, but I still get the auth dialog. I also tried removing rtorrent/config/nginx/security/auth, and still get the auth dialog. I even tried just removing the admin user with the deluser.sh script; same result.
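For context, auth dialogs of this kind are normally produced by nginx basic auth. A generic, hypothetical nginx fragment (the location and paths are illustrative, not the container's actual config) showing where such a dialog is usually switched off:

```nginx
# Illustrative only: basic auth is typically wired with these two
# directives; disabling it means setting auth_basic off (or removing
# both lines) in the active server/location block, then reloading nginx.
location / {
    auth_basic off;
    # auth_basic "Restricted";
    # auth_basic_user_file /config/nginx/security/auth;
}
```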
  9. I added the lines below to my docker-compose file, but I'm still getting the same error:

     devices:
       - /dev/net/tun:/dev/net/tun

     Also tried it like this, with the same outcome:

     devices:
       - /dev/net/tun

     EDIT: Ah, I had a docker update pending; after a reboot it works again.
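For reference, the device mapping described above, written out as a complete compose fragment (a sketch only; both forms shown in the post are valid syntax, and the host must have the tun module loaded for the device node to exist at all):

```yaml
# Minimal sketch: pass the host TUN device through to the container.
services:
  deluge-vpn:
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun   # or simply: - /dev/net/tun
```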
  10. I checked the issue on github but I'm not quite following what you are suggesting I do. Are you saying I should enter the container when I create it and run the tun parts of the start.sh script at line 148 from another container? It didn't work when I tried. Why is the tun check not added to the delugevpn container, if it's a known issue?
  11. I needed to create a new container for deluge-vpn after changing my VPN password, but now I keep getting errors when I check the container logs. I've been on an older image for quite some time (a couple of years, probably), just starting and stopping it as needed, in case that matters. The errors from the container logs:

      [warn] 'iptable_mangle' kernel module not available, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN
      2019-03-01T16:56:11.451257413Z [info] unRAID users: Please attempt to load the module by executing the following on your host:- '/sbin/modprobe iptable_mangle'
      2019-03-01T16:56:11.451265784Z [info] Ubuntu users: Please attempt to load the module by executing the following on your host:- '/sbin/modprobe iptable_mangle'
      2019-03-01T16:56:11.451394124Z 2019-03-01 17:56:11,451 DEBG 'start-script' stdout output:
      2019-03-01T16:56:11.451406522Z [info] Synology users: Please attempt to load the module by executing the following on your host:- 'insmod /lib/modules/iptable_mangle.ko'
      ...
      2019-03-01T16:56:18.965140888Z Fri Mar 1 17:56:18 2019 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
      2019-03-01T16:56:18.965149495Z Fri Mar 1 17:56:18 2019 Exiting due to fatal error

      I checked the tun device and iptable_mangle in the container; they're not found:

      [root@2dce11e396c5 /]# cat /dev/net/tun
      cat: /dev/net/tun: No such device
      [root@2dce11e396c5 /]# modprobe tun
      modprobe: FATAL: Module tun not found in directory /lib/modules/4.20.12-arch1-1-ARCH
      [root@2dce11e396c5 /]# /sbin/modprobe iptable_mangle
      modprobe: FATAL: Module iptable_mangle not found in directory /lib/modules/4.20.12-arch1-1-ARCH

      I'm running the container from a docker-compose file like this:

      version: '3.2'
      services:
        deluge-vpn:
          restart: unless-stopped
          image: binhex/arch-delugevpn
          container_name: deluge-vpn
          cap_add:
            - NET_ADMIN
          ports:
            - 8112:8112
            - 8118:8118
            - 58846:58846
            - 58946:58946
          environment:
            - PUID=1000
            - PGID=100
            - VPN_ENABLED=yes
            - VPN_USER=*username*
            - VPN_PASS=*password*
            - VPN_PROV=custom
            - STRICT_PORT_FORWARD=yes
            - ENABLE_PRIVOXY=yes
            - LAN_NETWORK=
            - NAME_SERVERS=*ip-numbers*
            - DELUGE_DAEMON_LOG_LEVEL=info
            - DELUGE_WEB_LOG_LEVEL=info
            - DEBUG=false
            - UMASK=000
          volumes:
            - /apps/docker/deluge/data:/data
            - /apps/docker/deluge/config:/config
            - /home/user/.config/deluge/state:/config/state
            - /mnt/Downloads:/mnt/Downloads
            - /etc/localtime:/etc/localtime:ro

      My host machine is running Arch; I'm not connected to OpenVPN on the host at the moment. What am I missing?
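The modprobe failures inside the container are expected: containers share the host kernel, so the modules have to be loaded on the Arch host itself, exactly as the log's advice says (`modprobe tun`, `modprobe iptable_mangle`). A sketch of the host-side fix, with a tiny helper (name hypothetical) for persisting the modules across reboots via a modules-load.d file:

```shell
# persist_modules: write a modules-load.d style file listing kernel
# modules to load at boot. Target path is a parameter so it can be
# tested; on a real host it would be /etc/modules-load.d/vpn.conf.
persist_modules() {
  out="$1"; shift
  printf '%s\n' "$@" > "$out"
}
# Host usage (as root), after loading the modules once by hand:
#   modprobe tun && modprobe iptable_mangle
#   persist_modules /etc/modules-load.d/vpn.conf tun iptable_mangle
```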
  12. I'm trying to use the transmission script-torrent-done functionality to start another docker container from inside the transmission container. To do this I've mounted /var/run/docker.sock, but there are permission issues after that. My script is located at "/config/postproc.sh", and is presumably run by the user "abc". I've tried setting the postproc script to print "whoami", but it mostly shows up blank; I think it once managed to print "abc", for some reason. These are the recurring errors I'm getting. The first one is obvious, since /root/.docker/config.json doesn't even exist; the question is, what user is trying to access it, and why doesn't it exist?

      WARNING: Error loading config file: /root/.docker/config.json: stat /root/.docker/config.json: permission denied

      The second one is a permission error for docker.sock, perhaps because "abc" isn't actually the user trying to access it?

      docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.38/containers/create?name=filebot: dial unix /var/run/docker.sock: connect: permission denied.

      Looking in /etc/passwd I can see that the abc user's home folder is /config. I installed bash, set the abc user's shell to /bin/bash in my Dockerfile (below), and created this .bashrc in /config for abc to source:

      if [ -e /var/run/docker.sock ]; then sudo chown abc:docker /var/run/docker.sock; fi
      if [ -e /root/.docker/config.json ]; then sudo chown abc:docker /root/.docker/config.json; fi

      Dockerfile:

      FROM linuxserver/transmission:latest
      RUN apk update
      RUN apk add docker bash
      RUN usermod -a -G docker abc
      RUN chsh --shell /bin/bash abc

      What more do I need for the docker socket to be available to the user running a script-torrent-done script?
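A common alternative to chown-ing the socket from .bashrc is to give the in-container user a supplementary group whose gid matches the socket's gid on the host. A sketch, using a hypothetical `file_gid` helper and assuming an Alpine-style `addgroup` with a free group name:

```shell
# file_gid: print the numeric group id owning a file or socket
# (helper name is hypothetical). Matching the user's groups to the
# socket's gid avoids chown-ing /var/run/docker.sock at all.
file_gid() {
  stat -c '%g' "$1"
}
# Sketch of an entrypoint step (names and group are assumptions):
#   DOCKER_GID=$(file_gid /var/run/docker.sock)
#   addgroup -g "$DOCKER_GID" dockersock 2>/dev/null || true
#   addgroup abc dockersock
```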
  13. My bad, it was very late when I posted last night. I'll go to the linuxserver.io forum instead.
  14. Starting today, I'm having issues with the transmission container running on my Synology NAS. Something happened in the past few days, and I can no longer connect with transmission-remote-gui to port 9091. When I check the container activity with "docker logs -ft transmission", I see that it keeps looping over the same cron jobs:

      2018-04-16T22:26:00.118912370Z crond[230]: wakeup dt=60
      2018-04-16T22:26:00.119820664Z crond[230]: file root:
      2018-04-16T22:26:00.120240118Z crond[230]: line run-parts /etc/periodic/15min
      2018-04-16T22:26:00.120578958Z crond[230]: line run-parts /etc/periodic/hourly
      2018-04-16T22:26:00.120908051Z crond[230]: line run-parts /etc/periodic/daily
      2018-04-16T22:26:00.121254684Z crond[230]: line run-parts /etc/periodic/weekly
      2018-04-16T22:26:00.121692976Z crond[230]: line run-parts /etc/periodic/monthly
      2018-04-16T22:26:00.122274140Z crond[230]: line /config/blocklist-update.sh 2>&1

      (the identical block repeats every minute; at 22:30 the 15min job additionally fires:)

      2018-04-16T22:30:00.125966011Z crond[230]: wakeup dt=60
      2018-04-16T22:30:00.127182959Z crond[230]: file root:
      2018-04-16T22:30:00.127579155Z crond[230]: line run-parts /etc/periodic/15min
      2018-04-16T22:30:00.127943840Z crond[230]: job: 0 run-parts /etc/periodic/15min
      2018-04-16T22:30:00.128167902Z crond[230]: line run-parts /etc/periodic/hourly
      2018-04-16T22:30:00.128470224Z crond[230]: line run-parts /etc/periodic/daily
      2018-04-16T22:30:00.128774032Z crond[230]: line run-parts /etc/periodic/weekly
      2018-04-16T22:30:00.129120691Z crond[230]: line run-parts /etc/periodic/monthly
      2018-04-16T22:30:00.129423172Z crond[230]: line /config/blocklist-update.sh 2>&1
      2018-04-16T22:30:00.129726678Z crond[1148]: child running /bin/sh
      2018-04-16T22:30:00.129956405Z crond[230]: USER root pid 1148 cmd run-parts /etc/periodic/15min
      2018-04-16T22:30:10.128887516Z crond[230]: wakeup dt=10

      ...and so on, every minute. Other than that, all I get from the very start of the container launch is the log below. The watch dir supposedly not having any space is peculiar; I have no idea why that is, the folder has the right permissions and there's definitely space left.

      2018-04-16T22:12:56.038035807Z -------------------------------------
      2018-04-16T22:12:56.038205625Z GID/UID
      2018-04-16T22:12:56.038315851Z -------------------------------------
      2018-04-16T22:12:56.040247858Z User uid: 1024
      2018-04-16T22:12:56.040461942Z User gid: 100
      2018-04-16T22:12:56.040669955Z -------------------------------------
      2018-04-16T22:12:56.046951854Z [cont-init.d] 10-adduser: exited 0.
      2018-04-16T22:12:56.047264525Z [cont-init.d] 20-config: executing...
      2018-04-16T22:12:56.060947838Z [cont-init.d] 20-config: exited 0.
      2018-04-16T22:12:56.061185440Z [cont-init.d] done.
      2018-04-16T22:12:56.064945582Z [services.d] starting services
      2018-04-16T22:12:56.082927904Z [services.d] done.
      2018-04-16T22:12:56.085941752Z crond[230]: crond (busybox 1.27.2) started, log level 0
      2018-04-16T22:12:56.086303692Z crond[230]: user:root entry:*/15 * * * * run-parts /etc/periodic/15min
      2018-04-16T22:12:56.087930269Z crond[230]: user:root entry:0 * * * * run-parts /etc/periodic/hourly
      2018-04-16T22:12:56.088277033Z crond[230]: user:root entry:0 2 * * * run-parts /etc/periodic/daily
      2018-04-16T22:12:56.088539751Z crond[230]: user:root entry:0 3 * * 6 run-parts /etc/periodic/weekly
      2018-04-16T22:12:56.088765925Z crond[230]: user:root entry:0 5 1 * * run-parts /etc/periodic/monthly
      2018-04-16T22:12:56.089080773Z crond[230]: user:root entry:0 3 * * * /config/blocklist-update.sh 2>&1
      2018-04-16T22:12:59.978902938Z [2018-04-17 00:12:59.976] Transmission 2.93 (3c5870d4f5) started (session.c:740)
      2018-04-16T22:12:59.979630357Z [2018-04-17 00:12:59.977] RPC Server Adding address to whitelist: (rpc-server.c:971)
      2018-04-16T22:12:59.979832088Z [2018-04-17 00:12:59.977] RPC Server Serving RPC and Web requests on port (rpc-server.c:1213)
      2018-04-16T22:12:59.980159973Z [2018-04-17 00:12:59.977] RPC Server Password required (rpc-server.c:1220)
      2018-04-16T22:12:59.980367358Z [2018-04-17 00:12:59.977] UDP Failed to set receive buffer: requested 4194304, got 425984 (tr-udp.c:84)
      2018-04-16T22:12:59.980512177Z [2018-04-17 00:12:59.977] UDP Please add the line "net.core.rmem_max = 4194304" to /etc/sysctl.conf (tr-udp.c:89)
      2018-04-16T22:12:59.980663913Z [2018-04-17 00:12:59.977] UDP Failed to set send buffer: requested 1048576, got 425984 (tr-udp.c:95)
      2018-04-16T22:12:59.980796676Z [2018-04-17 00:12:59.977] UDP Please add the line "net.core.wmem_max = 1048576" to /etc/sysctl.conf (tr-udp.c:100)
      2018-04-16T22:12:59.981037827Z [2018-04-17 00:12:59.977] DHT Reusing old id (tr-dht.c:307)
      2018-04-16T22:12:59.981287222Z [2018-04-17 00:12:59.977] DHT Bootstrapping from 148 IPv4 nodes (tr-dht.c:156)
      2018-04-16T22:12:59.981463145Z [2018-04-17 00:12:59.977] Using settings from "/config" (daemon.c:528)
      2018-04-16T22:12:59.981591487Z [2018-04-17 00:12:59.977] Saved "/config/settings.json" (variant.c:1266)
      2018-04-16T22:12:59.981725141Z [2018-04-17 00:12:59.977] Saved pidfile "/transmission.pid" (daemon.c:543)
      2018-04-16T22:12:59.981886441Z [2018-04-17 00:12:59.977] transmission-daemon requiring authentication (daemon.c:554)
      2018-04-16T22:12:59.982121182Z [2018-04-17 00:12:59.977] Watching "/watch" for new .torrent files (daemon.c:573)
      2018-04-16T22:12:59.982522411Z [2018-04-17 00:12:59.977] watchdir:inotify Failed to setup watchdir "/watch": No space left on device (28) (watchdir-inotify.c:176)
      2018-04-16T22:12:59.982740871Z [2018-04-17 00:12:59.977] Loaded __ torrents (session.c:2034)
      2018-04-16T22:12:59.982918527Z [2018-04-17 00:12:59.977] Port Forwarding (NAT-PMP) initnatpmp succeeded (0) (natpmp.c:70)
      2018-04-16T22:12:59.983168351Z [2018-04-17 00:12:59.977] Port Forwarding (NAT-PMP) sendpublicaddressrequest succeeded (2) (natpmp.c:70)

      My docker settings in JSON format from the NAS:

      {
        "cap_add" : [],
        "cap_drop" : [],
        "cmd" : "",
        "cpu_priority" : 50,
        "devices" : null,
        "enable_publish_all_ports" : false,
        "enable_restart_policy" : false,
        "enabled" : false,
        "entrypoint_default" : "/init",
        "env_variables" : [
          { "key" : "TRANSMISSION_WATCH_DIR_ENABLED", "value" : "false" },
          { "key" : "PATH", "value" : "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" },
          { "key" : "PS1", "value" : "$(whoami)@$(hostname):$(pwd)$ " },
          { "key" : "HOME", "value" : "/root" },
          { "key" : "TERM", "value" : "xterm" },
          { "key" : "TZ", "value" : "Europe/Berlin" },
          { "key" : "PGID", "value" : "100" },
          { "key" : "PUID", "value" : "1024" }
        ],
        "exporting" : false,
        "id" : "77b1d0b0c044071360b962d48ac8471f1c15c4c93ca0e97eb7cdeea43dd2ff37",
        "image" : "linuxserver/transmission:latest",
        "is_ddsm" : false,
        "is_package" : false,
        "links" : [],
        "memory_limit" : 0,
        "name" : "transmission",
        "network" : [ { "driver" : "bridge", "name" : "bridge" } ],
        "network_mode" : "bridge",
        "port_bindings" : [
          { "container_port" : 8888, "host_port" : 0, "type" : "tcp" },
          { "container_port" : 9091, "host_port" : 0, "type" : "tcp" }
        ],
        "privileged" : false,
        "shortcut" : { "enable_shortcut" : true, "enable_status_page" : false, "enable_web_page" : true, "web_page_url" : "" },
        "ulimits" : null,
        "use_host_network" : false,
        "volume_bindings" : [
          { "host_volume_file" : "/docker/watch", "mount_point" : "/watch", "type" : "rw" },
          { "host_volume_file" : "/docker/incomplete", "mount_point" : "/incomplete", "type" : "rw" },
          { "host_volume_file" : "/docker/downloads", "mount_point" : "/downloads", "type" : "rw" },
          { "host_volume_file" : "/homes/admin/.config/transmission", "mount_point" : "/config", "type" : "rw" },
          { "host_volume_file" : "/scripts/filebot", "mount_point" : "/volume1/scripts/filebot", "type" : "rw" }
        ],
        "volumes_from" : null
      }

      Any ideas what to try next? I can't seem to figure out why the cron job just loops and loops.
  15. Got the port forwarding working perfectly, thanks for that. The file permissions are still troubling me. I'm checking the files in /config in the container, and all files are read/write enabled except perms.txt and auth. Which file(s) contain the settings from the GUI/web UI (they should be the same, since if I enable a plugin in either one, it immediately shows up in the other)?