mattekure

Members
  • Posts: 206
Everything posted by mattekure

  1. I'll throw in my vote for the strong option as well. The update instructions worked perfectly for me.
  2. I've been using this docker for a while with about 3.5TB of data to back up. Thank you to the developer who put this out there for everyone. CrashPlan started to really drag and, for the last month or so, was stuck "Analyzing" and not actually sending any data. After a lot of web searching and trying different things, I finally found an article on the CrashPlan website about increasing the RAM. Apparently a larger data set requires a much greater amount of RAM. Their article can be found here: https://support.code42.com/CrashPlan/4/Troubleshooting/Adjusting_CrashPlan_Settings_For_Memory_Usage_With_Large_Backups By following their instructions and bumping the RAM for CrashPlan up to 4GB, it immediately began uploading again and went from an estimated 2 months to finish down to 2 days. So if you have a large data set, make sure you give CrashPlan enough RAM to do its job or it will never finish.
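For reference, the change the Code42 article describes boils down to raising the Java heap limit (-Xmx) for the CrashPlan engine. A minimal sketch, using a temp file in place of the real run.conf; the path and the SRV_JAVA_OPTS line below are assumptions based on the Linux client's usual layout, so verify them on your own install:

```shell
# Sketch only: stand-in for /usr/local/crashplan/bin/run.conf (path assumed).
RUN_CONF="$(mktemp)"

# A sample of the line that sets the backup engine's Java heap options.
echo 'SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms20m -Xmx1024m"' > "$RUN_CONF"

# Raise the maximum heap (-Xmx) from 1GB to 4GB for large backup sets.
sed -i 's/-Xmx[0-9]*[mg]/-Xmx4096m/' "$RUN_CONF"

cat "$RUN_CONF"   # restart the CrashPlan engine for the change to take effect
```

Inside this docker the file lives wherever the container keeps its CrashPlan install, so adjust the path accordingly.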
  3. Has anyone worked on a Tor relay docker for Unraid?
  4. Just wanted to post an issue that I had and resolved, in case others run into the same thing. I am using this ownCloud docker along with the Linuxserver mariadb docker as the backend DB. The issue was that when trying to connect to the DB to create the initial user, I would get the following error: 'Cannot execute statement: impossible to write to binary log since statement is in row format and BINLOG_FORMAT = STATEMENT.' I found I could fix this by connecting to the DB and executing SET GLOBAL binlog_format = 'MIXED'; but this fix would not persist if the DB docker restarted. I ended up modifying the mariadb docker's config file, my.cnf: I found the [mysqld] section and added binlog_format = mixed. This change allowed the fix to persist when the DB docker gets restarted or updated.
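The persistent part of the fix above is just one extra line under [mysqld]. A small sketch, using a temp file in place of the container's real my.cnf (its actual location inside the Linuxserver mariadb appdata is an assumption here):

```shell
# Stand-in for the mariadb container's my.cnf.
MY_CNF="$(mktemp)"
printf '[mysqld]\nlog-bin=mysql-bin\n' > "$MY_CNF"

# Append binlog_format = mixed directly under the [mysqld] section header,
# mirroring the edit described in the post.
sed -i '/^\[mysqld\]/a binlog_format = mixed' "$MY_CNF"

cat "$MY_CNF"
```

The SET GLOBAL statement fixes a running server immediately; the my.cnf line is what survives a container restart.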
  5. Hmm. . . never noticed the temp directory and that behavior. I'll look into it. It is strange that Calibre doesn't do any cleanup of the tmp files until the import is completed. Here's a temporary workaround (since you only have to do this once, it should be fine). You can edit the container in the Unraid GUI and add a new volume mapping: put "/tmp" under container volume, and put whatever folder on Unraid you would like to use for the temporary temp location (double temp :-) under host path. That folder, which should be outside of the docker image and somewhere on your array, cache drive, or a disk outside of the array, will then be used for the temporary files during import. After the import, if you like, you can edit the container settings to remove that mapping, and then you can delete that folder, too. Keep in mind that every time you edit the container settings (if edge is set to 1), the container will download the latest version, which may take a few minutes depending on how Calibre's server is feeling. By the way, are you hosting the National Library Archives? 400GB of ebooks? How?

     Hmm, I tried mounting to /mnt/user/tmp and keep getting this error:

     Fatal server error: Can't read lock file /tmp/.X1-lock
     Openbox-Message: Failed to open the display from the DISPLAY environment variable.
     (repeated three times)

     So, instead of mounting to /mnt/user, I mounted it directly to one of the disks instead. After that it was able to start up.

     Did you first create a user share called tmp in unraid?

     Yes, the user share was created. With it specified on a single disk, it has been running without issue for about 12 hours. Now at 5% done.
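On the command line, the mapping the Unraid GUI edit produces is just an extra -v flag. A hypothetical sketch; the image name and host path below are placeholders, not taken from the posts:

```shell
# A disk share outside the docker image, so import temp files don't fill docker.img.
HOST_TMP="/mnt/disk1/calibre-tmp"

# Same "-v host:container" mapping the GUI edit adds; image name is a placeholder.
DOCKER_CMD="docker run -d --name calibre -v ${HOST_TMP}:/tmp rdp-calibre"
echo "$DOCKER_CMD"
```

As the thread found, pointing the mapping at a single disk (/mnt/diskX/...) rather than a /mnt/user share avoided the X lock-file error.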
  6. Hmm. . . never noticed the temp directory and that behavior. I'll look into it. It is strange that Calibre doesn't do any cleanup of the tmp files until the import is completed. Here's a temporary workaround (since you only have to do this once, it should be fine). You can edit the container in the Unraid GUI and add a new volume mapping: put "/tmp" under container volume, and put whatever folder on Unraid you would like to use for the temporary temp location (double temp :-) under host path. That folder, which should be outside of the docker image and somewhere on your array, cache drive, or a disk outside of the array, will then be used for the temporary files during import. After the import, if you like, you can edit the container settings to remove that mapping, and then you can delete that folder, too. Keep in mind that every time you edit the container settings (if edge is set to 1), the container will download the latest version, which may take a few minutes depending on how Calibre's server is feeling. By the way, are you hosting the National Library Archives? 400GB of ebooks? How?

     Hmm, I tried mounting to /mnt/user/tmp and keep getting this error:

     Fatal server error: Can't read lock file /tmp/.X1-lock
     Openbox-Message: Failed to open the display from the DISPLAY environment variable.
     (repeated three times)

     So, instead of mounting to /mnt/user, I mounted it directly to one of the disks instead. After that it was able to start up.
  7. I only notice it because I am importing a library well over 400GB, and my cache drive isn't that big.
  8. With your RDP-Calibre docker, is there a way that I could temporarily set the temp directory to something not in the docker? I am trying to import a large number of books, and it very quickly fills up the entire docker with files in the Calibre tmp directory. This would only need to be done once, on the initial import; after that it could go back to its normal setting. I originally had the docker.img at 20GB, but that filled up with about 1% of the import done. I increased the size of the docker image to 200GB but, watching the size grow, it is likely to fill up again before the import finishes.
  9. You were right, the port forwarding worked without issue when I connected to the Netherlands, but not to the US servers so far.

     The docker image is scripted to automatically do the port forwarding for you (PIA only). I can only assume you are connected to a gateway that doesn't support port forwarding, as not all gateways do. Which gateway are you connecting to?
  10. I think I found it. I removed it and reinstalled, and one of the environment variables changed from LAN_RANGE to LAN_NETWORK.
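For anyone hitting the same rename: the old LAN_RANGE took an address range, while LAN_NETWORK takes the equivalent network in CIDR form. A hypothetical docker run sketch (the image name is a placeholder):

```shell
# CIDR equivalent of the old LAN_RANGE=192.168.1.1-192.168.1.254 value.
LAN_NETWORK="192.168.1.0/24"

# Placeholder image name; only the -e flag is the point here.
echo "docker run -d -e VPN_ENABLED=yes -e LAN_NETWORK=${LAN_NETWORK} delugevpn"
```

In the Unraid GUI this means deleting the stale LAN_RANGE variable and adding LAN_NETWORK with the CIDR value.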
  11. I reconnected using New York, which is listed as supporting port forwarding. It seemed to redownload the image; now when I run it I get this error:

      2016-02-03 09:59:01,656 DEBG 'start' stdout output: [crit] LAN network not defined, please specify via env variable LAN_NETWORK
  12. I was connected to US East. I will try one of the others that support port forwarding and see what happens. Thanks
  13. In my logs, I see this frequently:

      2016-02-02 20:32:21,017 DEBG 'deluge' stdout output: [warn] PIA incoming port is not an integer, downloads will be slow, check if remote gateway supports port forwarding

      Is there a way that this can be configured to use PIA's port forwarding feature? https://www.privateinternetaccess.com/forum/discussion/3359/port-forwarding-without-application-pia-script-advanced-users
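The linked PIA thread is about scripting PIA's old port-forwarding assignment endpoint. A heavily hedged sketch from memory of that legacy (since-retired) API; the endpoint URL, field names, and response shape are unverified assumptions, and the credentials here are placeholders, so the request itself is only printed, not sent:

```shell
# Placeholder credentials -- never hard-code real ones.
PIA_USER="p1234567"
PIA_PASS="changeme"
# The legacy API expected a random client_id and the VPN-side (tun0) address.
CLIENT_ID="$(head -c 16 /dev/urandom | md5sum | cut -d' ' -f1)"
LOCAL_IP="10.0.0.2"

PAYLOAD="user=${PIA_USER}&pass=${PIA_PASS}&client_id=${CLIENT_ID}&local_ip=${LOCAL_IP}"

# Print the request rather than sending it; endpoint is an assumption.
echo "curl -s -d '${PAYLOAD}' https://www.privateinternetaccess.com/vpninfo/port_forward_assignment"
```

A successful response was a small JSON object containing the assigned port, which the deluge config then needed to use as its incoming port. Later posts in this thread confirm the docker image ended up doing this automatically on gateways that support forwarding.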
  14. OK, apparently it didn't like my password for some reason and would fail with it. I used a randomly generated 20-character password. I changed the password, and the new one works fine. Any idea why 4l%3OsZb0$sX$^K0GKPy would cause it to fail?
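One plausible explanation (an assumption, not confirmed in the thread): if that password ever passes through a shell unquoted or double-quoted, the $sX part is treated as a variable reference and expanded away. A quick demonstration:

```shell
# Single quotes keep every character of the password literal.
literal='4l%3OsZb0$sX$^K0GKPy'

# Double quotes let the shell expand $sX as a variable (empty when unset),
# silently mangling the password before it ever reaches OpenVPN.
expanded="4l%3OsZb0$sX$^K0GKPy"

echo "$literal"
echo "$expanded"
```

So passwords containing $ are safest single-quoted, or avoided entirely for values that get templated into config files.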
  15. Double and triple checked, no trailing spaces on username/password.
  16. The username & password is the same as the one used to log onto the website, correct?

      The same username & password combo works when using OpenVPN from my desktop to connect to PIA, so I am fairly sure that the password is correct. Do I need to overwrite the ca.crt or crl.pem files with the PIA ones?
  17. I am having trouble getting the VPN to connect. When I have VPN set to off, everything runs as expected. When I turn it on, it looks like it connects, but then I get an AUTH_FAILED. I can't tell if that's for Deluge or the VPN. I am using PIA as my VPN provider and have verified the username/password. I turned on debug, and this is the startup log I get.

      *EDIT: for clarification, when I start it with VPN enabled, it appears to run; however, I cannot connect to the web UI or the daemon. When I start it without VPN enabled, I can connect to both fine. I am connecting via a computer on the LAN. Also, it seems that Privoxy is being started despite having the setting set to no.

2016-02-01 17:00:10,026 CRIT Set uid to user 0
2016-02-01 17:00:10,026 WARN Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
2016-02-01 17:00:10,029 INFO supervisord started with pid 1
2016-02-01 17:00:11,032 INFO spawned: 'privoxy' with pid 7
2016-02-01 17:00:11,034 INFO spawned: 'start' with pid 8
2016-02-01 17:00:11,035 INFO spawned: 'webui' with pid 9
2016-02-01 17:00:11,037 INFO spawned: 'deluge' with pid 10
2016-02-01 17:00:11,044 DEBG 'privoxy' stdout output: [info] VPN is enabled, checking VPN tunnel local ip is valid
2016-02-01 17:00:11,044 INFO success: privoxy entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2016-02-01 17:00:11,044 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2016-02-01 17:00:11,045 DEBG 'privoxy' stdout output: [info] checking VPN tunnel local ip is valid...
2016-02-01 17:00:11,048 DEBG 'deluge' stdout output: [info] VPN enabled, configuring Deluge...
2016-02-01 17:00:11,049 DEBG 'deluge' stdout output: [info] checking VPN tunnel local ip is valid...
2016-02-01 17:00:11,054 DEBG 'start' stdout output: [info] VPN is enabled, beginning configuration of VPN
2016-02-01 17:00:11,067 DEBG 'start' stdout output: [info] VPN provider defined as pia
2016-02-01 17:00:11,067 DEBG 'start' stdout output: [info] VPN config file (ovpn extension) is located at /config/openvpn/openvpn.ovpn
2016-02-01 17:00:11,070 DEBG 'start' stdout output: [info] Env vars defined via docker -e flags for remote host, port and protocol, writing values to ovpn file...
2016-02-01 17:00:11,093 DEBG 'start' stdout output: [debug] Contents of ovpn file /config/openvpn/openvpn.ovpn as follows...
2016-02-01 17:00:11,094 DEBG 'start' stdout output: client dev tun remote us-east.privateinternetaccess.com 1194 udp resolv-retry infinite nobind persist-key ca ca.crt tls-client remote-cert-tls server auth-user-pass credentials.conf comp-lzo verb 1 reneg-sec 0 crl-verify crl.pem
2016-02-01 17:00:11,094 DEBG 'start' stdout output: [debug] Environment variables defined as follows...
2016-02-01 17:00:11,095 DEBG 'start' stdout output: LAN_RANGE=192.168.1.1-192.168.1.254 HOSTNAME=0bd56e14739c TERM=xterm VPN_PORT=1194 VPN_USER=*********** VPN_PASS=**************** VPN_ENABLED=yes PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin SUPERVISOR_GROUP_NAME=start PWD=/ LANG=en_GB.UTF-8 TZ=America/New_York VPN_PROTOCOL=udp SUPERVISOR_ENABLED=1 SHLVL=1 HOME=/home/nobody VPN_REMOTE=us-east.privateinternetaccess.com SUPERVISOR_PROCESS_NAME=start DEBUG=true VPN_PROV=pia ENABLE_PRIVOXY=no _=/usr/sbin/printenv
2016-02-01 17:00:11,095 DEBG 'start' stdout output: [info] VPN provider remote gateway defined as us-east.privateinternetaccess.com [info] VPN provider remote port defined as 1194 [info] VPN provider remote protocol defined as udp
2016-02-01 17:00:11,104 DEBG 'start' stdout output: [info] VPN provider username defined as **************
2016-02-01 17:00:11,108 DEBG 'start' stdout output: [info] VPN provider password defined as ************
2016-02-01 17:00:11,127 DEBG 'start' stdout output: [info] ip routing table
2016-02-01 17:00:11,127 DEBG 'start' stdout output: default via 172.17.42.1 dev eth0
2016-02-01 17:00:11,128 DEBG 'start' stdout output: 172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.27
--------------------
2016-02-01 17:00:11,159 DEBG 'start' stdout output: [info] iptables
2016-02-01 17:00:11,161 DEBG 'start' stdout output: -P INPUT DROP -P FORWARD ACCEPT -P OUTPUT DROP -A INPUT -i tun0 -j ACCEPT -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT -A INPUT -i eth0 -p udp -m udp --sport 1194 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --sport 8112 -j ACCEPT -A INPUT -i eth0 -m iprange --src-range 192.168.1.1-192.168.1.254 -j ACCEPT -A INPUT -i eth0 -m iprange --dst-range 192.168.1.1-192.168.1.254 -j ACCEPT -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A OUTPUT -o tun0 -j ACCEPT -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT -A OUTPUT -o eth0 -p udp -m udp --dport 1194 -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --dport 8112 -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT -A OUTPUT -o lo -j ACCEPT
2016-02-01 17:00:11,161 DEBG 'start' stdout output: --------------------
2016-02-01 17:00:11,161 DEBG 'start' stdout output: [info] nameservers
2016-02-01 17:00:11,161 DEBG 'start' stdout output: nameserver 8.8.8.8 nameserver 8.8.4.4
2016-02-01 17:00:11,162 DEBG 'start' stdout output: --------------------
2016-02-01 17:00:11,171 DEBG 'start' stdout output: [info] Starting OpenVPN...
2016-02-01 17:00:11,176 DEBG 'start' stdout output:
Mon Feb 1 17:00:11 2016 OpenVPN 2.3.9 x86_64-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Dec 24 2015
Mon Feb 1 17:00:11 2016 library versions: OpenSSL 1.0.2e 3 Dec 2015, LZO 2.09
Mon Feb 1 17:00:11 2016 WARNING: file 'credentials.conf' is group or others accessible
2016-02-01 17:00:11,194 DEBG 'start' stdout output:
Mon Feb 1 17:00:11 2016 UDPv4 link local: [undef]
Mon Feb 1 17:00:11 2016 UDPv4 link remote: [AF_INET]66.55.134.201:1194
2016-02-01 17:00:11,207 DEBG 'start' stdout output: Mon Feb 1 17:00:11 2016 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
2016-02-01 17:00:11,267 DEBG 'start' stdout output: Mon Feb 1 17:00:11 2016 [Private Internet Access] Peer Connection Initiated with [AF_INET]66.55.134.201:1194
2016-02-01 17:00:12,269 INFO success: webui entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-02-01 17:00:12,269 INFO success: deluge entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-02-01 17:00:13,640 DEBG 'start' stdout output: Mon Feb 1 17:00:13 2016 AUTH: Received control message: AUTH_FAILED
2016-02-01 17:00:13,640 DEBG 'start' stdout output: Mon Feb 1 17:00:13 2016 SIGTERM[soft,auth-failure] received, process exiting
2016-02-01 17:00:13,641 DEBG fd 9 closed, stopped monitoring (stdout)
2016-02-01 17:00:13,641 DEBG fd 14 closed, stopped monitoring (stderr)
2016-02-01 17:00:13,641 INFO exited: start (exit status 0; expected)
2016-02-01 17:00:13,641 DEBG received SIGCLD indicating a child quit
  18. *EDIT - I got this working using hernandito's excellent apache-php docker. Hey everyone, I am consolidating all of my current Ubuntu server functions into my new Unraid box, and the only thing I haven't been able to do is find a Glype docker. I have no idea how to create a docker, so I am hoping someone here can help.
  19. *EDIT: Never mind, it's an error on the PuTTY end, not Unraid or SSH. I can successfully ssh in via a separate Linux box. I am having trouble connecting to my server via SSH (running 6.1.7). I can connect to the server via telnet from my laptop using PuTTY, and I know the SSH server is running because, once connected via telnet, I can ssh to localhost. But when I attempt to ssh to it using PuTTY, the session opens and then just sits there: no prompt, no error, no login, nothing. Any ideas?