Everything posted by lovingHDTV

  1. Recently, after an upgrade, all I get is a window that says "Execution Error, server error". There are no log files, nothing. It used to work for me. I just updated again: same issue. Removed the docker and image and reinstalled: same issue. Any ideas on how to debug what is going on? thanks, david EDIT: I figured it out. The new version of unRaid uses port 443, so I had to change that to a different port.
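     For anyone hitting the same conflict, the shape of the fix is just remapping the container's HTTPS port to a free host port. A sketch, with "mycontainer" and "myimage" as placeholders for the real names:

         # unRaid's own web UI now claims host port 443, so publish the
         # container's internal 443 on an unused host port such as 8443
         docker run -d --name mycontainer -p 8443:443 myimage

     The container's UI is then reached at https://<server-ip>:8443 instead of on port 443.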
  2. Does this mean we no longer need to set VPN_USER and VPN_PASSWORD, etc?
  3. Ah, I guess I didn't see that as the answer because I didn't get a "missing config" message. I got a "missing remote line in config file" message and didn't think it was the same thing. thanks again for all the help, david
  4. I have had all that set up and working for years. I was just surprised when it stopped working with the last update. I just re-edited my .ovpn file and put all the stuff that is supposed to be set with environment variables back into the .ovpn file, and it works now. Not sure why the environment variables no longer work. thanks david
  5. I recently updated; I guess it's been a couple of months since my last update. Now it doesn't work. In the log file I see: Any ideas why the bold line appears? It was working before; did something change on the VPN side? thanks david EDIT: I guess you have to have the server in the ovpn file now; before, you could just set it up in the docker.
  6. Not sure if this is expected, but I see very similar slowness during verification. I've been trying to back up ~650GB to B2 storage. Periodically I get a message that a timeout has happened, and this causes the backup to stop; it then spends 24-30 hours "verifying backup" before it starts actually uploading data again. I'm a bit concerned that this will continue even after I have successfully completed my backup. If so, I'll have to go find something else to use. I've checked my bandwidth and CPU and cannot see why it takes so long to verify. david
  7. I asked and got an answer from the Duplicati forums, and verified that email works just fine in the docker: https://forum.duplicati.com/t/how-to-set-up-email-notification/233/12 Hopefully this helps someone, david
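     For anyone who wants the gist without following the link: Duplicati takes its mail settings as advanced options on the backup job. A sketch of the options involved (hosts and addresses below are placeholders; check the linked thread for your Duplicati version):

         --send-mail-url=smtp://smtp.example.com:587
         --send-mail-username=user@example.com
         --send-mail-password=secret
         --send-mail-from=duplicati@example.com
         --send-mail-to=me@example.com
         --send-mail-level=Warning,Error,Fatal

     The same option names can be entered under "Advanced options" on the backup job in the web UI.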
  8. That does look like a nice docker; it even supports users. I tried using Owncloud, but got errors with Duplicati, and I figured that sshd is the lightest-weight way to get this working. thanks david UPDATE: Got it working; now I have only a single volume mounted for this sshd. thanks again for the pointer.
  9. As I move away from Crashplan to Duplicati, I would like to have a docker that just runs sshd. This would let me limit the exposure of the entire unRaid server to just one partition. I have it all working with the system sshd using the helpful SSH config plugin, but I don't like that the ftp user has access to more stuff than they need. I guess I'm just paranoid, but even requiring certs to log in, I don't like having an external port into my entire server. I'm up for trying to create it myself, but I've not found a how-to guide. I see many guides on how to set up and use dockers, and I'm sure I've seen something over the years on how to create one, but my search kung fu is weak. thanks david
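     In case a pointer helps later searchers: the usual shape of such a container is an image whose only process is sshd, run with a single volume mapped in. A rough, untested sketch; the user name, paths, and ports are placeholders, and key-only auth is assumed:

         # Dockerfile
         FROM alpine:3.6
         RUN apk add --no-cache openssh \
          && ssh-keygen -A \
          && adduser -D backupuser \
          # Alpine locks new accounts; '*' in the shadow field permits key-only logins
          && sed -i 's/^backupuser:!/backupuser:*/' /etc/shadow \
          && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config \
          && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
         EXPOSE 22
         # Run sshd in the foreground, logging to stderr, as the container's sole process
         CMD ["/usr/sbin/sshd", "-D", "-e"]

     Build it, then run it with only the one share (and an authorized_keys file) exposed:

         docker build -t sshd-only .
         docker run -d --name sshd-only -p 2222:22 \
             -v /mnt/user/backups:/home/backupuser/backups \
             -v /path/to/authorized_keys:/home/backupuser/.ssh/authorized_keys:ro \
             sshd-only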
  10. I have a single user defined that I want to provide ssh access to. However, I have to set "allow root" to yes for it to work, even for the non-root user. How can I disallow root ssh access but still allow my chosen user? My ssh.cfg: If I set PERMITROOTLOGIN="yes" then I can log in as user wimp. thanks david
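     In plain sshd_config terms (which the plugin's ssh.cfg variables presumably map onto), the standard way to get "no root, but wimp allowed" is:

         # /etc/ssh/sshd_config
         PermitRootLogin no
         # Only listed accounts may log in at all; root is excluded by omission
         AllowUsers wimp

     With an AllowUsers line present, every user not listed (root included) is denied, so PermitRootLogin should be able to stay off.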
  11. Anyone have tips on getting this to send email notifications? thanks david
  12. A couple of options I found:

     Crashplan Business: $10 per device/month with unlimited online storage; no local backups.

     Duplicati: the software is open source, supports many cloud-based storage solutions, and supports local backups.

     Cloudberry: the software is ~$30 with a 5TB backup-size limit, or $150 for unlimited size. Supports many cloud-based storage options and local backups.

     My opinions and how I picked my solution:

     Crashplan is in the business of selling backup solutions. They feel that their solution is improving and that they can charge more for it, so they increase their price/reduce service. For me, I need to trust that the backup solution will continue to exist and not be phased out. The backup-solution market is pretty small and not very competitive; just try finding a good Linux backup product.

     Duplicati: the software is free; you buy storage. Cloud-based storage is a commodity and will only decrease in cost over time. There is no incentive for cloud-based storage providers to increase cost; it is a pretty competitive market.

     Cloudberry: sells you the software for a reasonably priced one-time cost; storage works the same as with Duplicati.

     Cloud storage: today I see that Amazon Glacier is the "cheapest" at $4/TB/month, but retrieval is expensive, and I don't know how you verify your backups without paying for downloads, etc. Backblaze is 20% more at $5/TB/month, but it has normal access, so verifying a backup is easy. There is a download cost as well; you get 1GB/day free. I'm not sure how backup verification works, but I would imagine you don't need to download the entire thing. I was concerned that, using Glacier, I might end up with corrupted backup files that wouldn't work when I needed them. All software has issues, and you could easily run into this one.

     So I went with Duplicati/Backblaze B2. I may go with Cloudberry and a local unmapped disk for local backups. As I want both cloud and local, I figure that increases my chances of having a valid backup, but at the same time it increases my chances of issues, as I'm using two solutions.

     Other considerations: I back up unRaid, 4 laptops, and two desktops for a total of 2TB. Staying with Crashplan would cost me $80/month for 8 devices and 2TB of data. Duplicati is free for each machine, and 2TB of data is $10 per month: much cheaper than Crashplan, and it provides local backups.

     Hopefully this helps someone, david
  13. Like many here, I've been using Crashplan and am now moving on to something else. Since I need to change my backup solution, I have a couple of questions about how everyone does their backups. Today I back up /mnt/user/<share>. This lets me choose things like My Documents, Videos, Music, etc., and I don't have to worry about disk numbers. However, I was thinking that if I had a multi-disk failure that required me to rebuild two or more disks, it would be nice to be able to just restore /mnt/disk?. So how do you do it? User-share based or disk based? thanks david
  14. I am backing up to the cloud and changed mine to 100MB without issue. It all has to do with how long it takes to manage a single file if it is huge. There were issues if the block size was > 4TB, but I think those have been fixed.
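     For reference, the setting in question is Duplicati's "Remote volume size", exposed as the --dblock-size advanced option. A sketch of the CLI form (duplicati-cli here, though some installs expose it as Duplicati.CommandLine.exe; the bucket and paths are placeholders and B2 credentials are omitted):

         duplicati-cli backup b2://mybucket/backups /mnt/user/Documents --dblock-size=100MB

     In the web UI, it is the "Remote volume size" field on the backup job's Options page.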
  15. I decided to give this a try now that I need to drop Crashplan before Oct 31. I went with Backblaze B2, as I can back up 4 devices for less than Crashplan Business at $10 per device. I followed SpaceInvader One's video and got it all set up, and did a couple of backup/restore tests to ensure it would work. I have paused/restarted the backup without issue. It will take a while to complete, as I only have 10Mb/s upload. I killed the docker, and upon restart the backup was not automatically resumed; it did report that it will start at the next scheduled time (I have the default of once per day). It did not clean up the unsent dup-* files found on the cache drive where I mapped /tmp. That is a bit disconcerting; hopefully they will get cleaned up at some point. I do see it making new files after I started it again. david
  16. I updated the server and now I cannot connect. I then updated my client: still the same issue. So I removed and re-installed: still no go. Anyone else have the new server working? UPDATE: I had a bit of time this AM and got it working. I had to remove mumble-server.sqlite, and when it restarted after that, it worked. thanks
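     For anyone else who hits this, the fix amounts to moving the old database aside while the container is stopped. A sketch, where the container name and the appdata path are placeholders for your own setup (note that registered users/channels are lost; a fresh database is created on restart):

         docker stop mumble
         mv /mnt/user/appdata/mumble/mumble-server.sqlite /mnt/user/appdata/mumble/mumble-server.sqlite.bak
         docker start mumble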
  17. I added VPN_PROTOCOL udp and that worked. I looked at the settings and didn't see VPN_PROTOCOL; I didn't realize I needed to add it manually to the setup. Up and running again. thanks! david

     Quoting the reply that solved it: "open your ovpn file: the vpn_protocol is the protocol defined via the "proto" line, or at the end of the remote line; it should be either tcp or udp. Also, whilst you're there, check the port, as it's most probably not 1194 (the old default PIA port). I know it hasn't changed, but you hadn't actually defined the settings before, as they weren't mandatory (fallback to reading the ovpn if env vars were not specified). They are now mandatory, so you need to specify the following: VPN_REMOTE, VPN_PROTOCOL, VPN_PORT. The values for you, looking at your ovpn file, are:

         VPN_REMOTE = dal-a01.wlvpn.com
         VPN_PROTOCOL = udp
         VPN_PORT = 1194"
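     Put together, a sketch of what the container command ends up looking like, assuming binhex's arch-delugevpn image (which these log lines appear to come from); all the other flags from the existing template are elided here:

         docker run -d --name delugevpn \
             -e VPN_ENABLED=yes \
             -e VPN_REMOTE=dal-a01.wlvpn.com \
             -e VPN_PROTOCOL=udp \
             -e VPN_PORT=1194 \
             ... \
             binhex/arch-delugevpn

     On unRaid, the equivalent is adding VPN_PROTOCOL (and the other two) as extra environment variables in the docker template.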
  18. Here is my .ovpn file. From usenetserver; it hasn't changed:

         client
         dev tun
         proto udp
         remote dal-a01.wlvpn.com 1194
         resolv-retry infinite
         nobind
         persist-key
         persist-remote-ip
         ca dal-a01.cert
         tls-client
         remote-cert-tls server
         auth-user-pass credentials.conf
         comp-lzo
         verb 3
         auth SHA256
         cipher AES-256-CBC

     EDIT: I checked file permissions. It is owned by root:root with 644 permissions, so that shouldn't be an issue. david

     In reply to: "open your ovpn file: the vpn_protocol is the protocol defined via the "proto" line, or at the end of the remote line; it should be either tcp or udp. Also, whilst you're there, check the port, as it's most probably not 1194 (the old default PIA port)."
  19. I just updated to 6.2 and noticed a new update for this docker after doing so. I updated, and now it won't run. I validated that my VPN settings haven't changed and match what they should be. Here is the log file:

         2016-09-16 14:29:47,729 DEBG 'deluge-script' stdout output: [info] deluge config file already exists, skipping copy
         2016-09-16 14:29:47,730 DEBG 'deluge-script' stdout output: [info] VPN is enabled, checking VPN tunnel local ip is valid
         2016-09-16 14:29:47,738 DEBG 'start-script' stdout output: [info] VPN provider defined as custom
         [info] VPN config file (ovpn extension) is located at /config/openvpn/dal-a01.ovpn
         2016-09-16 14:29:47,741 DEBG 'start-script' stdout output: [info] VPN provider remote gateway defined as dal-a01.wlvpn.com
         [info] VPN provider remote port defined as 1194
         [crit] VPN provider remote protocol not defined (via -e VPN_PROTOCOL), exiting...

     Anyone know what -e VPN_PROTOCOL is? This seems to be a new thing. thanks david
  20. My cache drive did not get assigned after the update. It is shown on the main page as a new device. I've attached my diagnostics.

     EDIT: this is even weirder. I just noticed that I didn't get updated. I hit update; it downloaded, extracted, and said to reboot. I stopped the array, powered down, powered up, and I'm still at 6.1.9 with no cache drive assigned. I can stop the array and assign the cache drive, and then docker started. I noticed that my cache is set for 2 slots, but only one is assigned; the other is unassigned. Maybe that is the issue? I guess I'll re-apply the update and try again.

     EDIT2: after my second upgrade attempt it worked. What I did differently: on the first update, after it said to reboot, I remembered that for 6.2 I don't need the powerdown plugin, so I removed it. I then powered down/powered up. The second time I didn't remove that plugin, because it was already gone. Can removing a plugin after updating, but before rebooting, revert the update? For the cache drive, I set it to 1 slot, and upon upgrading it came up without issue. Docker ran for a few minutes before the webpage came up, but things seem to be working now. david tower-diagnostics-20160916-0842.zip
  21. That is what I was afraid of. thanks, david
  22. here you go. thanks david tower-diagnostics-20160829-1107.zip
  23. I searched here on the forum to see why my cache drive goes read-only after a while. Most people ran a check on the drive to check for corruption, so I stopped the array and ran reiserfsck --check /dev/sdb1:

         Replaying journal: Done.
         Reiserfs journal '/dev/sdb1' in blocks [18..8211]: 662 transactions replayed
         Checking internal tree.. finished
         Comparing bitmaps..finished
         Checking Semantic tree: finished
         No corruptions found
         There are on the filesystem:
             Leaves 63511
             Internal nodes 417
             Directories 309746
             Other files 184359
             Data block pointers 31617135 (969150 of them are zero)
             Safe links 0
         ########### reiserfsck finished at Mon Aug 29 10:23:45 2016 ###########

     It reports no issues. Any other suggestions on why my cache drive started acting up all of a sudden? When I try to create a file I get:

         /mnt/cache# touch test
         touch: cannot touch 'test': Read-only file system

     And in the log file I get the following errors/warnings:

         Aug 29 10:32:22 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375255
         Aug 29 10:32:22 tower kernel: Buffer I/O error on dev sdb1, logical block 178296899, async page read
         Aug 29 10:32:25 tower shfs/user: shfs_read: read: (5) Input/output error
         Aug 29 10:32:25 tower kernel: mpt2sas0: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000)
         Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
         Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
         Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
         Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c2 5f 00 00 08 00
         Aug 29 10:32:25 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375263
         Aug 29 10:32:25 tower kernel: Buffer I/O error on dev sdb1, logical block 178296900, async page read
         Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
         Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
         Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
         Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
         Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
         Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
         Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
         Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
         Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c3 c7 00 00 08 00
         Aug 29 10:32:32 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375623
         Aug 29 10:32:32 tower kernel: Buffer I/O error on dev sdb1, logical block 178296945, async page read

     It seems to me as if the drive is failing? thanks david
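     A quick way to confirm it is the drive itself, rather than the filesystem, is to check SMART. A sketch, assuming the disk is still /dev/sdb:

         # Full SMART dump; look at Reallocated_Sector_Ct, Current_Pending_Sector,
         # and Offline_Uncorrectable, plus the overall health assessment
         smartctl -a /dev/sdb

     Non-zero pending/reallocated sector counts alongside "critical medium error" in the syslog usually mean the disk is on its way out.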
  24. Just wanted to drop by and say thanks! I was in Taiwan for business when I got an email saying that Crashplan wasn't backing up. I was aware that this was going to happen, but couldn't do anything about it. A few days later the power went out and unRaid shut down automatically. After power was restored it started back up, Crashplan automatically updated, and I got 4.7. I didn't know any of this had happened until I got another email saying the backups were working again; imagine my surprise. I don't think anything has ever worked this perfectly for me. thanks for the great docker, david
  25. I just do:

         docker ps

     then cut-n-paste the ID into:

         docker exec -it <paste> bash

     Then, if I want to run top:

         export TERM=xterm

     I find it faster to cut-n-paste the ID than to try to remember/figure out the names. david
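     If you'd rather skip the cut-n-paste step, a one-liner sketch using docker's name filter ("crashplan" is a stand-in for your container's actual name):

         docker exec -it $(docker ps -q --filter name=crashplan) bash

     docker ps -q prints just the ID, so the command substitution feeds it straight into docker exec.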