PeterB

Community Developer
Everything posted by PeterB

  1. For the last day or two, my VPN connection has been repeatedly falling to a zero transfer rate and then returning to maximum rate. There is nothing in the delugevpn container log. It's not just Deluge because, at the times the Deluge transfer rate is zero, other connections via privoxy also go down. Other connections which don't go through the VPN tunnel are unaffected. Is the PIA Netherlands server playing up, or is my ISP messing with my VPN connection? Any thoughts on how to identify the cause?
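One way to separate the two suspects is to probe the same destination through the tunnel (via privoxy) and directly, on a timer, and log which path fails. A minimal sketch — the proxy address 192.168.1.10:8118 and the probe URL are assumptions to substitute for your own:

```shell
#!/bin/sh
# Probe one URL through the VPN (via privoxy) and one directly, and classify
# the result. 192.168.1.10:8118 is a placeholder for your privoxy host:port.

classify() {  # classify <vpn_rc> <direct_rc>; 0 = reachable
  if [ "$1" -ne 0 ] && [ "$2" -ne 0 ]; then
    echo "both down: local network/ISP issue"
  elif [ "$1" -ne 0 ]; then
    echo "tunnel down only: suspect VPN server or ISP shaping the tunnel"
  elif [ "$2" -ne 0 ]; then
    echo "direct down only: check LAN routing"
  else
    echo "both up"
  fi
}

probe_loop() {  # run in the background; logs a timestamped diagnosis every 30s
  while :; do
    curl -sf -m 10 -x http://192.168.1.10:8118 https://example.com >/dev/null; v=$?
    curl -sf -m 10 https://example.com >/dev/null; d=$?
    echo "$(date '+%F %T') $(classify "$v" "$d")"
    sleep 30
  done
}
# probe_loop >> /tmp/vpn-drops.log &
```

If the log only ever shows "tunnel down only" at the times Deluge stalls, the PIA endpoint (or ISP interference with it) is the likelier culprit.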
  2. Hmmm ... just looked at the Docker page - it says that the binhex-couchpotato image was created one year ago! I know that it was working properly just days ago.
  3. My CouchPotato docker appears to have lost the ability to access IMDb. All the Box Office and Top 250 Movies displays are grey and say "Unknown movie n/a". "Search & add new media" never finds anything. The pop-up window opened by the CouchPotato bookmark never displays a movie in the drop-down menu box. I've tried restarting the container several times, to no avail. What could be causing this?
  4. See the FAQ (referenced in Binhex's .sig).
  5. Thanks, that seems to have resolved the problem. The .plg file was definitely not there, but the config folder still existed.
  6. I have no idea when this issue started, but I've just noted the following message being posted to my logfile once every minute:

     crond[1596]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null

     Now, I find that there is no dynamix.system.stats directory in /usr/local/emhttp/plugins, so it's not surprising that I'm getting errors. However, crontab -l (as root) does not show any entries which are run every minute, so I'm not sure what is happening here. I'm running v6.4 rc9f. Can anyone help me?
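The reason `crontab -l` shows nothing is that it only lists the user crontab; plugin-installed jobs on unRAID typically land in system cron files instead (commonly under /etc/cron.d — exact locations vary by release, so treat the paths below as assumptions). A small helper to locate which file carries the stray job:

```shell
#!/bin/sh
# Find which cron file schedules a given job, since `crontab -l` only shows
# the per-user crontab. Directories to search are passed in; /etc/cron.d and
# /boot/config/plugins are the usual unRAID suspects (an assumption).

find_cron_entry() {  # find_cron_entry <pattern> <dir>...
  pattern=$1
  shift
  grep -rl -- "$pattern" "$@" 2>/dev/null
}

# e.g. locate the file invoking the missing sa1 script:
# find_cron_entry sa1 /etc/cron.d /boot/config/plugins
```

Once found, removing (or commenting out) that entry should stop the per-minute exit-127 noise left behind by the uninstalled dynamix.system.stats plugin.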
  7. Just upgraded to 6.40rc9f and the initial reboot failed. After a restart (with parity check) delugevpn fails to start. The log contains:

     [binhex ASCII-art banner]
     https://hub.docker.com/u/binhex/
     2017-09-23 19:59:08.657658 [info] Host is running unRAID
     2017-09-23 19:59:08.877041 [info] System information Linux 2d6ab80a01be 4.12.14-unRAID #1 SMP PREEMPT Wed Sep 20 11:41:59 PDT 2017 x86_64 GNU/Linux
     2017-09-23 19:59:08.964549 [info] PUID defined as '99'
     2017-09-23 19:59:09.135634 [info] PGID defined as '100'
     2017-09-23 19:59:09.687772 [info] UMASK defined as '000'
     2017-09-23 19:59:09.757070 [info] Permissions already set for volume mappings
     2017-09-23 19:59:09.868686 [info] VPN_ENABLED defined as 'yes'
     2017-09-23 19:59:10.101475 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/Netherlands.ovpn
     dos2unix: converting file /config/openvpn/Netherlands.ovpn to Unix format...
     2017-09-23 19:59:10.225978 [crit] VPN configuration file /config/openvpn/Netherlands.ovpn does not contain 'remote' line, showing contents of file before exit...

     It turns out that the .ovpn file had become empty (zero length)! Replaced the file, and all appears to be okay.
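Since a truncated .ovpn only surfaces as a [crit] at container startup, a pre-flight check like the following can catch it earlier. A sketch only — it checks the two things the container log complained about (non-empty file, presence of a 'remote' line):

```shell
#!/bin/sh
# Sanity-check an OpenVPN config before (re)starting the container: a
# zero-length file, or one with no 'remote' line, will fail at startup.

check_ovpn() {  # check_ovpn <file>; returns 0 and prints "ok" if usable
  if [ ! -s "$1" ]; then
    echo "bad: $1 is missing or empty"
    return 1
  fi
  if ! grep -q '^remote ' "$1"; then
    echo "bad: $1 has no 'remote' line"
    return 1
  fi
  echo "ok: $1"
}

# e.g.  check_ovpn /mnt/docker/appdata/binhex-delugevpn/openvpn/Netherlands.ovpn
```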
  8. Do you mean uploading of public torrents, or are you referring specifically to private torrents? I'm on a 50Mb/s fibre connection and I regularly see 5+MB/s uploads through the Dutch PIA server - all public torrents. However, speeds are somewhat variable at the moment - the whole of Asia is suffering because two Pacific cables were recently damaged by storms off the coast of Hong Kong.
  9. What sorting/renaming do you want? For my purpose, simply moving the files to a different directory is sufficient.
  10. Can't the facility built into Deluge to move completed torrents be used to automate this? The only problem I've found with using Deluge to move a file is that all torrenting activity stops while the file is being moved.
  11. Well, they have a three-day 'money back' promise, so I may give it a try sometime. However, I chatted with a support person yesterday and learned two things: 1) he couldn't tell me whether their certificates are compatible with OpenSSL 1.1; 2) their 'Chameleon' doesn't foil the latest BBC iPlayer blocking ... but they are working on it. So, as far as I can tell, the only way to access iPlayer just now is to use 'Unlocator'. Since Unlocator works simply by changing DNS servers, I presume that they are routing traffic through a proxy of some sort, and I'm not comfortable about doing that until I know more about exactly what they are doing.
  12. Ooops, yes - I'd missed that. However, I'm not clear what Binhex has spotted in the log which indicates that it's a no-go. It all looks perfectly normal up to the UDP link remote. Then it just seems to hang, waiting 60 seconds for a timeout. Given that this is all standard OpenVPN, it seems strange. I'm sure that Vypr does work with OpenVPN - perhaps this is another change in the latest OpenSSL? What I've done with other providers is to connect to the docker with bash, ensure that the start script has stopped, then try starting OpenVPN manually and see what happens. I can compare that with the same procedure on my Linux Mint desktop (using OpenSSL 1.0).
  13. Further research suggests that Vypr, with their 'Chameleon' software, should do all I need. Chameleon, supposedly, hides the VPN proxy, and allows ports to be configured for forwarding. Their .ovpn files contain:

      auth SHA256
      cipher AES-256-CBC
      tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA

      so I hope that there will be no problem with the latest OpenSSL. However, their 'money back' offer only lasts for three days, so I need to make sure that I have sufficient time available to set up and test.
  14. Interesting that I recently set up with PIA, using the Netherlands server, and can achieve near to a full 50Mbps up and down. I didn't do anything to open/forward ports. Experimenting with other VPN providers, my experience is that bandwidth is severely restricted. However, running speed tests is a little difficult within SE Asia at the moment, with a storm having knocked out some undersea cables close to Hong Kong. My problem with PIA is that they don't mask the fact that I'm using a VPN, and this prevents me accessing BBC iPlayer. I'm trying to find a VPN service which supports DelugeVPN and allows access to iPlayer. ExpressVPN seems not to allow port forwarding, which restricts Deluge. PureVPN uses old MD5 certificates which are no longer accepted by the latest versions of OpenSSL in DelugeVPN. Does anyone have experience of VyprVPN, or can recommend other VPN providers? This list of VPN providers who support port forwarding may be of interest. According to that, NordVPN does not allow port forwarding, so speeds will always be restricted.
  15. I had that problem last week. Thing is, I'm not sure exactly what I did to get it working. Check your setting for LAN_NETWORK.
  16. Thanks for the response. I have entered my password in Key3 (VPN_PASS). Where else do I have to define it?

      Okay, it seems that using a '$' as the first character of the password was upsetting things. I've changed the password and I now get further but, for some reason, twisted is reporting a 'CannotListenError'. I'm guessing that this might be something to do with two lines earlier in the log:

      [info] Deluge listening interface currently defined as 0.0.0.0
      [info] Deluge listening interface will be changed to 10.24.10.6

      Deluge appears to be running - I can access it via the web GUI, but the Deluge UI, running on my desktop machine, is unable to connect to the daemon. This was working fine until I enabled the VPN.

      Later: Well, it seems to be working now, despite the messages still appearing in the log!
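A leading '$' is a plausible troublemaker whenever any script along the way re-expands the value instead of using it quoted. This is a demonstration of the failure mode, not the container's actual code — the `eval` templating step is a hypothetical stand-in:

```shell
#!/bin/sh
# Why a password starting with '$' can be mangled: it survives quoted use,
# but any step that re-expands the string treats "$ecret123" as a (probably
# unset) shell variable.

VPN_PASS='$ecret123'

printf '%s\n' "$VPN_PASS"   # quoted use: prints $ecret123 intact

# Hypothetical templating step using eval: the first expansion yields
# mangled=$ecret123, and eval then expands the unset $ecret123 to nothing.
eval "mangled=$VPN_PASS"
printf '[%s]\n' "$mangled"  # prints []
```

The practical workaround is the one found in the post: avoid passwords whose first character the shell treats specially ('$', '`', '\').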
  17. I originally used the non-VPN version of the deluge docker, but then moved to this delugevpn version, with VPN turned off. I have now created an account on PIA and have been attempting to configure the docker to use it. I have found nothing in this topic, nor on Docker Hub or GitHub, which describes the configuration page which the docker presents me with. I even found a YouTube video with instructions on how to configure it, but that also shows a different configuration screen to the one I have. My first attempt, after setting VPN_ENABLED = yes, reported a missing .ovpn file. After some research, I found that I have to download a .ovpn file from PIA and place it in /config/openvpn, along with the .crt and .pem files. Having done this, the logfile shows the following error:

      2017-08-30 00:59:30,968 DEBG 'start-script' stdout output:
      Wed Aug 30 00:59:30 2017 neither stdin nor stderr are a tty device and you have neither a controlling tty nor systemd - can't ask for 'Enter Auth Username:'. If you used --daemon, you need to use --askpass to make passphrase-protected keys work, and you can not use --auth-nocache.
      Wed Aug 30 00:59:30 2017 Exiting due to fatal error

      Here I am stuck. Having spent a couple of hours searching, I can find nothing which enlightens me. Full log attached. logfile
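That error means OpenVPN has no terminal to prompt on and no credentials supplied. The standard OpenVPN mechanism is a two-line credentials file referenced by an `auth-user-pass` directive; the delugevpn container normally generates this itself from the VPN_USER/VPN_PASS variables, so this sketch shows the underlying mechanism only (the filename is an assumption):

```shell
#!/bin/sh
# OpenVPN reads credentials from a file named by `auth-user-pass <file>`:
# first line username, second line password. Without it (and without a tty),
# it fails with "can't ask for 'Enter Auth Username:'".

make_credentials() {  # make_credentials <file> <user> <pass>
  printf '%s\n%s\n' "$2" "$3" > "$1"
  chmod 600 "$1"      # credentials in plain text - restrict to owner
}

# e.g.  make_credentials /config/openvpn/credentials.conf p1234567 'mypassword'
# and in the .ovpn:
#   auth-user-pass credentials.conf
```

With the binhex container, filling in VPN_USER and VPN_PASS in the template should achieve the same result without editing the .ovpn by hand.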
  18. Apart from ensuring that inotify is set to yes in /config/minidlna.conf, what else is necessary to make sure that the database is updated after adding a file to the media directory? I've always relied on using minidlnad -R, but would like to get inotify working. max_user_watches is already set to 524288:

      [root@Tower /]# cat /proc/sys/fs/inotify/max_user_watches
      524288
      [root@Tower /]#

      I have fewer than 524288 files in the /media directory:

      [root@Tower /]# find /media -type f | wc -l
      15411
      [root@Tower /]#

      Adding a new file to the /media directory doesn't appear to get it added to the database.
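One detail worth noting: inotify watches are placed per *directory*, not per file, so the directory count is the number to compare against max_user_watches (15,411 files is well within limits either way). A quick prerequisite check, with this thread's paths as assumptions:

```shell
#!/bin/sh
# Check the prerequisites minidlna's inotify support needs: the config flag,
# and watch headroom measured in directories (one watch per directory).
# Paths /config/minidlna.conf and /media match this thread - adjust to suit.

check_inotify() {  # check_inotify <conf> <mediadir>
  if ! grep -q '^inotify=yes' "$1"; then
    echo "inotify disabled in $1"
    return 1
  fi
  dirs=$(find "$2" -type d | wc -l)
  max=$(cat /proc/sys/fs/inotify/max_user_watches 2>/dev/null)
  echo "directories to watch: $dirs (limit $max)"
}

# e.g.  docker exec binhex-minidlna sh -c '...'  with the body above,
# since the watch limit that matters is the one the daemon's namespace sees.
```

Another thing to rule out: inotify events do not propagate through all mount types, so if /media is a network mount inside the container, events may simply never arrive and a periodic rescan remains the fallback.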
  19. Okay, that appears to have resolved the issue - many thanks.
  20. Well, I guess if docker is killing the process with a signal 9, then the pid file wouldn't get removed. My photo frame is the only client of minidlna, so I turned it off, 'pkill'ed the minidlnad process (which removed the pid file), and restarted the container. I waited more than ten minutes before doing a restart on the container. When it started up again, the old pid file was still present and, of course, the minidlnad process was not running. With no client, what could be keeping the process busy? I have not seen a situation where (p)kill has failed to stop the minidlnad process cleanly, with deletion of the pid file. This suggests to me that either:

      1) the system isn't waiting long enough for the tini-initiated shutdown to complete or, more likely in my view:
      2) tini isn't actually doing what it's meant to do

      I see that tini is running with pid 1:

      [root@Tower /]# ps -eaf
      root 1 0 0 00:12 ? 00:00:00 /usr/bin/tini -- /bin/bash /root/init.sh
      root 7 1 1 00:12 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor.conf -n
      root 57 7 0 00:12 ? 00:00:00 /bin/bash /root/crond.sh
      root 61 57 0 00:12 ? 00:00:00 crond -n
      nobody 62 1 0 00:12 ? 00:00:00 /usr/bin/minidlnad -f /config/minidlna.conf
      root 63 0 0 00:12 ? 00:00:00 bash
      root 69 63 0 00:12 ? 00:00:00 ps -eaf
      [root@Tower /]#

      How could we prove whether tini is working correctly, or not? I can see evidence, in supervisord.log, of crond being stopped, and waiting for it to die, but see no evidence of minidlnad being stopped - though perhaps that is not logged in the same way, as a child process, for instance:

      [binhex ASCII-art banner]
      https://hub.docker.com/u/binhex/
      2017-05-23 00:12:22.714187 [info] Host is running unRAID
      2017-05-23 00:12:22.741692 [info] System information Linux Tower 4.9.28-unRAID #1 SMP PREEMPT Sun May 14 11:03:53 PDT 2017 x86_64 GNU/Linux
      2017-05-23 00:12:22.800289 [info] PUID defined as '99'
      2017-05-23 00:12:22.882640 [info] PGID defined as '100'
      2017-05-23 00:12:23.168672 [info] UMASK defined as '000'
      2017-05-23 00:12:23.218772 [info] Permissions already set for volume mappings
      2017-05-23 00:12:23.256912 [info] SCAN_ON_BOOT defined as 'no'
      2017-05-23 00:12:23.297819 [info] SCHEDULE_SCAN_DAYS defined as '06'
      2017-05-23 00:12:23.327048 [info] SCHEDULE_SCAN_HOURS defined as '02'
      2017-05-23 00:12:24,359 CRIT Set uid to user 0
      2017-05-23 00:12:24,359 INFO Included extra file "/etc/supervisor/conf.d/minidlna.conf" during parsing
      2017-05-23 00:12:24,376 INFO supervisord started with pid 7
      2017-05-23 00:12:25,378 INFO spawned: 'start' with pid 56
      2017-05-23 00:12:25,379 INFO spawned: 'crond' with pid 57
      2017-05-23 00:12:26,003 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47810346303928 for <Subprocess at 47810346306160 with name start in state STARTING> (stdout)>
      2017-05-23 00:12:26,003 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47810346306088 for <Subprocess at 47810346306160 with name start in state STARTING> (stderr)>
      2017-05-23 00:12:26,003 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
      2017-05-23 00:12:26,003 INFO exited: start (exit status 0; expected)
      2017-05-23 00:12:26,003 DEBG received SIGCLD indicating a child quit
      2017-05-23 00:12:27,004 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2017-05-23 00:22:11,142 WARN received SIGTERM indicating exit request
      2017-05-23 00:22:11,155 DEBG killing crond (pid 57) with signal SIGTERM
      2017-05-23 00:22:11,166 INFO waiting for crond to die
      2017-05-23 00:22:12,167 DEBG fd 16 closed, stopped monitoring <POutputDispatcher at 47810345808600 for <Subprocess at 47810346306016 with name crond in state STOPPING> (stderr)>
      2017-05-23 00:22:12,167 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 47810346305008 for <Subprocess at 47810346306016 with name crond in state STOPPING> (stdout)>
      2017-05-23 00:22:12,167 INFO stopped: crond (terminated by SIGTERM)
      2017-05-23 00:22:12,167 DEBG received SIGCLD indicating a child quit

      I note that if I (p)kill minidlna, then the following line appears at the end of the minidlna.log file:

      [2017/05/23 00:34:56] minidlna.c:154: warn: received signal 15, good-bye

      However, that line never appears when the container is stopped or restarted.
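The "received signal 15, good-bye" log line is a usable litmus test: minidlnad only writes it on a clean SIGTERM, so if a container stop never produces it, the TERM never reached the daemon. Below is a local stand-in (not the container's code) of the shutdown behaviour being tested — a process that traps TERM, says good-bye, and removes its pid file — which is exactly what should happen inside the container when tini forwards the signal:

```shell
#!/bin/sh
# Stand-in for the behaviour under test: a daemon that, like minidlnad,
# traps SIGTERM, logs a good-bye, and removes its pid file on clean exit.

log=$(mktemp)
pidfile=$(mktemp)

fake_daemon() {
  trap 'echo "received signal 15, good-bye" >> "$log"; rm -f "$pidfile"; exit 0' TERM
  while :; do sleep 1; done
}

fake_daemon &
daemon=$!
echo "$daemon" > "$pidfile"

sleep 1                    # give it time to install the trap
kill -TERM "$daemon"       # analogous to tini forwarding docker's SIGTERM
wait "$daemon" 2>/dev/null
```

To run the equivalent check against the real container, one could `docker exec` in, send `kill -TERM` to the minidlnad pid directly, and confirm the good-bye line appears; if it does there but not on `docker stop`, the signal is being lost between docker, tini, and the daemon.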
  21. Err... but shouldn't the pid file be removed when the application quits? Perhaps minidlna isn't closing down cleanly? File permissions look okay:

      -rw-rw-rw- 1 nobody users 3 May 21 13:31 /run/minidlna/minidlna.pid

      ... and directory permissions:

      drwxrwxr-x 1 nobody users 24 May 21 13:31 minidlna

      If I kill the minidlnad process, then the pid file does get removed and the container will restart in a fully functional condition. If I restart the container without killing the minidlnad process then, on restart, the pid file date/time is unchanged from the earlier start, proving that the file has survived the restart. By what mechanism is/should the minidlnad process be terminated when the container is stopped or restarted?
  22. Ha, it seems that if I delete the minidlna.pid file before I do the restart:

      root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna rm /run/minidlna/minidlna.pid

      then the minidlna process starts up perfectly on the restart!

      root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
      nobody 62 1 0 13:29 ? 00:00:00 /usr/bin/minidlnad -f /config/mi
      root@Tower:/mnt/docker/appdata#

      The problem appears to be that, for me at least, the pid file persists over a restart and/or a system reboot. As I understand it, the last update you issued was dealing with pid creation. I'm not sure why rolling back to the previous release [1.1.6-1-04] didn't fix the problem. Is this the complete answer? Why is it a problem for me, but not a problem for you? Confused!
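The manual `rm` can be made a little safer as a workaround: only remove the pid file when it is genuinely stale, i.e. when no process with that pid is alive. A sketch (the pid-file path is this thread's; the function would need to run inside the container via `docker exec`):

```shell
#!/bin/sh
# Workaround sketch: clear a stale pid file before restarting the container.
# `kill -0` tests whether a pid is alive without sending it a real signal.

clean_stale_pid() {  # clean_stale_pid <pidfile>
  [ -f "$1" ] || return 0
  p=$(cat "$1")
  if kill -0 "$p" 2>/dev/null; then
    echo "pid $p still alive, leaving $1"
  else
    rm -f "$1" && echo "removed stale $1"
  fi
}

# usage before a restart (run the check inside the container, as with the
# docker exec rm above):  clean_stale_pid /run/minidlna/minidlna.pid
```

Note that inside the container the pid in the file belongs to the container's pid namespace, so the check must run via `docker exec`, not on the host.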
  23. I'm not sure whether I'd noted this before (and if not, why not):

      After Force Update:

      root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
      nobody 62 1 0 12:34 ? 00:00:00 /usr/bin/minidlnad -f /config/mi
      nobody 64 62 6 12:34 ? 00:00:01 /usr/bin/minidlnad -f /config/mi
      root@Tower:/mnt/docker/appdata#

      About three minutes later:

      root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
      nobody 62 1 0 12:34 ? 00:00:00 /usr/bin/minidlnad -f /config/mi
      root@Tower:/mnt/docker/appdata#

      After Restart:

      root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
      root@Tower:/mnt/docker/appdata#

      So, after a forced update, a minidlna scan is spawned as a child process, and it completes after about three minutes. This suggests that after a restart, either the minidlna process never launches or, if it does, it terminates very quickly. After a restart, one entry is added to the minidlna.log:

      [2017/05/21 13:12:10] minidlna.c:935: error: MiniDLNA is already running. EXITING.

      At the same time, supervisord.log has these lines added:

      [binhex ASCII-art banner]
      https://hub.docker.com/u/binhex/
      2017-05-21 13:12:07.891345 [info] Host is running unRAID
      2017-05-21 13:12:07.924476 [info] System information Linux Tower 4.9.28-unRAID #1 SMP PREEMPT Sun May 14 11:03:53 PDT 2017 x86_64 GNU/Linux
      2017-05-21 13:12:07.980942 [info] PUID defined as '99'
      2017-05-21 13:12:08.054813 [info] PGID defined as '100'
      2017-05-21 13:12:08.341885 [info] UMASK defined as '000'
      2017-05-21 13:12:08.371429 [info] Permissions already set for volume mappings
      2017-05-21 13:12:08.399995 [info] SCAN_ON_BOOT defined as 'no'
      2017-05-21 13:12:08.428914 [info] SCHEDULE_SCAN_DAYS defined as '06'
      2017-05-21 13:12:08.460774 [info] SCHEDULE_SCAN_HOURS defined as '02'
      2017-05-21 13:12:09,120 CRIT Set uid to user 0
      2017-05-21 13:12:09,120 INFO Included extra file "/etc/supervisor/conf.d/minidlna.conf" during parsing
      2017-05-21 13:12:09,131 INFO supervisord started with pid 7
      2017-05-21 13:12:10,134 INFO spawned: 'start' with pid 56
      2017-05-21 13:12:10,135 INFO spawned: 'crond' with pid 57
      2017-05-21 13:12:10,593 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47968944308664 for <Subprocess at 47968944310896 with name start in state STARTING> (stdout)>
      2017-05-21 13:12:10,593 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47968944310824 for <Subprocess at 47968944310896 with name start in state STARTING> (stderr)>
      2017-05-21 13:12:10,593 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
      2017-05-21 13:12:10,593 INFO exited: start (exit status 0; expected)
      2017-05-21 13:12:10,593 DEBG received SIGCLD indicating a child quit
      2017-05-21 13:12:11,594 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
  24. It seems to be nothing to do with the template, or my settings. I have completely removed the container, including deleting the image and the appdata directory. I then re-installed and didn't change any settings whatsoever. For a minute or so, it is possible to restart the container and the dlna service will still be visible on the network. After this time, however, restarting the container results in the service no longer being visible on the network. From this, I conclude that something that minidlna does at 'initial' start (perhaps the initial scan/creation of the files.db database?) prevents subsequent 'clean' restarts. After a forced update, I note that files.db starts small and grows, and also that the art_cache directory is created (but only if I don't restrict /media to point to photos only). Could it be some resource issue? However, if I restrict the number of photos pointed to by the /media mapping, thereby restricting the size of files.db, the problem still persists.
  25. That's a little difficult to determine, since it appears to be impossible to edit the parameters without the container restarting. However, I'm beginning to convince myself that the problem is caused by the way in which the container is started/restarted. When the container is 'updated', either by editing the parameters or by 'force update', the command used to launch the container is displayed in the web GUI, and I can use the same command to launch the container from a console window. This requires that the old container is removed, or else the name has to be changed. In this case, the application always starts up in a functional state. This is the command:

      If the container is already running and I issue "docker restart binhex-minidlna", then the application is non-functional even though the container appears to be active. Clearly, for me/my system, docker restart does not leave the container (or its app) in the same condition as when the container is first run. Now, I'm guessing that the autostart procedure doesn't use docker run - perhaps it does a docker load followed by docker start? The big question, of course, is: why does this problem only affect me and, perhaps, one other person? What is the difference between my system and yours?
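The observed difference is consistent with how docker handles the container's writable layer: `docker restart` (and `docker start` after a stop) reuses the existing container and its filesystem, so state such as /run/minidlna/minidlna.pid survives, while removing the container and running `docker run` again creates a fresh writable layer from the image. A toy illustration of just that distinction, with temp directories standing in for the writable layer (no docker required):

```shell
#!/bin/sh
# Toy model: `docker restart` keeps the container's writable layer, so stale
# state (e.g. a pid file) survives; `docker rm` + `docker run` starts clean.

layer=$(mktemp -d)             # stand-in for the container's writable layer
touch "$layer/minidlna.pid"    # state left behind by the previous run

# "docker restart": same container, same layer -> stale state survives
restart_sees_pid=$([ -f "$layer/minidlna.pid" ] && echo yes || echo no)

# "docker rm + docker run": new container, fresh layer -> no stale state
rm -rf "$layer"
layer=$(mktemp -d)
fresh_run_sees_pid=$([ -f "$layer/minidlna.pid" ] && echo yes || echo no)

echo "restart:   pid file present = $restart_sees_pid"
echo "fresh run: pid file present = $fresh_run_sees_pid"
rm -rf "$layer"
```

This would also explain why it bites some systems and not others: it only matters if the app was last stopped in a way that left the pid file behind, which depends on whether the TERM reached minidlnad before docker's kill timeout.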