Everything posted by lovingHDTV

  1. I asked and got an answer on the Duplicati forums and verified that email notifications work just fine in the docker: https://forum.duplicati.com/t/how-to-set-up-email-notification/233/12 hopefully this helps someone, david
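     For reference, the working setup from that thread boils down to Duplicati's send-mail advanced options. A sketch (the server, addresses, and credentials below are placeholders, and exact option names may vary by Duplicati version):
     --send-mail-url=smtps://smtp.example.com:465
     --send-mail-username=me@example.com
     --send-mail-password=app-password
     --send-mail-from=backups@example.com
     --send-mail-to=me@example.com
     --send-mail-level=Warning,Error,Fatal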
  2. That does look like a nice docker, it even supports users. I tried using Owncloud, but got errors with Duplicati, and I figured that sshd is the lightest-weight way to get it working. thanks david UPDATE: Got it working; now I have only a single volume mounted for this sshd. thanks again for the pointer.
  3. As I move away from Crashplan to Duplicati, I would like to have a docker that just runs sshd. This allows me to limit the exposure of the entire unRaid server to just one partition. I have it all working with the system sshd using the helpful SSH Config plugin, but I don't like that the ftp user has access to more stuff than it needs. I guess I'm just paranoid, but even when requiring certs to log in, I don't like having an external port into my entire server. I'm up for trying to create it myself, but I've not found a how-to guide. I see many guides on how to set up and use dockers, and I'm sure I've seen something over the years on how to create one, but my search kung fu is weak. thanks david
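     For anyone attempting the same thing, a minimal sketch of an sshd-only image (the Alpine base, user name, and mount path are assumptions, not a tested recipe):
     FROM alpine:3.6
     # install sshd, generate host keys, add an unprivileged login user
     RUN apk add --no-cache openssh && ssh-keygen -A && adduser -D backup
     # a real image would still need authorized_keys installed and the account unlocked
     EXPOSE 22
     CMD ["/usr/sbin/sshd", "-D", "-e"]
     Then expose only a single volume when running it, e.g. docker run -d -p 2222:22 -v /mnt/user/backups:/home/backup sshd-only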
  4. I have a single user defined that I want to provide ssh access to. However, I have to have "allow root" set to yes for it to work, even for the non-root user. How can I disallow root ssh access but still allow my chosen user? My ssh.cfg If I set PERMITROOTLOGIN="yes" then I can log in as user wimp. thanks david
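     In plain sshd_config terms, the combination being asked for looks like this ("wimp" is the user from the post; whether the SSH Config plugin exposes these directives directly is an assumption):
     PermitRootLogin no
     AllowUsers wimp
     PasswordAuthentication no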
  5. Anyone have tips on getting this to send email notifications? thanks david
  6. A couple options I found:
     Crashplan Business: $10 per device/month with unlimited online storage, no local backups.
     Duplicati: software is open source, supports many cloud-based storage solutions, supports local backups.
     Cloudberry: software is ~$30 with a 5TB backup size limit, $150 for unlimited size. Supports many cloud-based storage providers and local backups.
     My opinions and how I picked my solution:
     Crashplan is in the business of selling backup solutions. They feel that their solution is improving and they can charge more for it, so they increase their price/reduce service. For me, I need to trust that the backup solution will continue to exist and not be phased out. The backup solution market is pretty small and not very competitive. Just try finding a good Linux backup product.
     Duplicati: software is free, you buy storage. Cloud-based storage is a commodity and will only decrease in cost over time. There is no incentive for cloud storage providers to raise prices; it is a pretty competitive market.
     Cloudberry: sells you the software for a reasonable one-time cost; storage works the same as with Duplicati.
     Cloud storage: Today I see that Amazon Glacier is the "cheapest" at $4/TB/month, but retrieval is expensive and I don't know how you verify your backups without paying for downloads, etc. Backblaze is 25% more at $5/TB/month, but it has normal access, so verifying a backup is easy. There is a download cost as well (you get 1GB/day free); not sure how backup verification works, but I would imagine you don't need to download the entire thing. I was concerned that using Glacier I might end up with corrupted backup files that wouldn't work when I needed them. All software has issues, and you could easily run into this one.
     So I went with Duplicati/Backblaze B2. I may go with Cloudberry/local unmapped disk for local backups. As I want both cloud and local, I figure that may increase my chances of having a valid backup, but at the same time it increases my chances of issues as I'm using two solutions.
     Other considerations: I back up unRaid, 4 laptops, and two desktops for a total of 2TB. Staying with Crashplan would cost me $80/month for 8 devices and 2TB of data. Duplicati is free for each machine, and 2TB of data is $10 per month: much cheaper than Crashplan, and it provides local backups.
     hopefully this helps someone, david
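     The arithmetic behind that comparison, spelled out with the prices above:
     Crashplan Business: 8 devices x $10/device/month = $80/month
     Duplicati + B2:     2TB x $5/TB/month            = $10/month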
  7. Like many here I've been using Crashplan and am now moving on to something else. Now that I need to change my backup solution, I have a couple questions on how everyone does their backups. Today I back up /mnt/user/<share>. This lets me choose things like My Documents, Videos, Music, etc., and I don't have to worry about disk numbers. However, I was thinking that if I had a multi-disk failure that required me to rebuild two or more disks, it would be nice to be able to just restore /mnt/disk?. So how do you do this? User-share based or disk based? thanks david
  8. I am backing up to the cloud and changed mine to 100MB without issue. It all has to do with how long it takes to manage a single file if it is huge. There were issues if the block size was > 4TB, but I think those have been fixed.
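     For context, that 100MB value is Duplicati's remote volume (dblock) size. From the command line it would look something like this (the bucket, credentials, and source path are placeholders, and exact option names may vary by version):
     duplicati-cli backup b2://my-bucket/unraid /mnt/user/Documents --b2-accountid=ACCOUNT --b2-applicationkey=KEY --dblock-size=100MB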
  9. I decided to give this a try now that I need to drop Crashplan before Oct 31. Went with Backblaze B2, as I can back up 4 devices for cheaper than Crashplan Business at $10 per device. I followed the SpaceInvader One video and got it all set up, and did a couple of backup/restore tests to ensure it would work. I have paused/restarted the backup without issue. It will take a while to complete as I only have 10Mb/s upload. I killed the docker, and upon restart the backup was not automatically resumed; it did report that it will start at the next scheduled time (I have the default of once per day). It did not clean up the unsent dup-* files found on the cache drive where I mapped /tmp. That is a bit disconcerting; hopefully they will get cleaned up at some point. I do see it making new files after I started it again. david
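     The /tmp mapping mentioned above is just an extra container volume; in docker run terms something like this (host paths and image name are examples, not my exact setup):
     docker run -d --name duplicati -p 8200:8200 -v /mnt/cache/appdata/duplicati:/config -v /mnt/cache/duplicati-tmp:/tmp -v /mnt/user:/source:ro duplicati/duplicati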
  10. I updated the server and now I cannot connect. I then updated my client and still the same issue. So I removed and re-installed; still no go. Anyone else get the new server working? UPDATE: I had a bit of time this AM and got it working. I had to remove mumble-server.sqlite, and when it restarted it worked. thanks
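     In shell terms the fix was roughly this (container name and appdata path are assumptions; adjust to your setup):
     docker stop mumble
     rm /mnt/user/appdata/mumble/mumble-server.sqlite   # the server recreates the database on start
     docker start mumble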
  11. I added VPN_PROTOCOL udp and that worked. I looked at the settings and didn't see VPN_PROTOCOL; I didn't realize I needed to add it manually to the setup. Up and running again. thanks! david
     Quoted reply: open your ovpn file; the vpn_protocol is the protocol defined via the "proto" line or at the end of the remote line, and should be either tcp or udp. Also, whilst you're there, check the port, as it's most probably not 1194 (the old default PIA port). I know it hasn't changed, but you haven't actually defined the settings before, as they weren't mandatory (fallback to reading the ovpn if env vars not specified). They are now mandatory, so you need to specify the following:
     VPN_REMOTE
     VPN_PROTOCOL
     VPN_PORT
     The values for you, looking at your ovpn file, are:
     VPN_REMOTE = dal-a01.wlvpn.com
     VPN_PROTOCOL = udp
     VPN_PORT = 1194
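     Those three values are just container environment variables; an illustrative docker run fragment (the image name is an assumption, and other required options are elided):
     docker run -d --name delugevpn -e VPN_ENABLED=yes -e VPN_REMOTE=dal-a01.wlvpn.com -e VPN_PROTOCOL=udp -e VPN_PORT=1194 ... binhex/arch-delugevpn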
  12. Here is my .ovpn file. From usenetserver, it hasn't changed:
     client
     dev tun
     proto udp
     remote dal-a01.wlvpn.com 1194
     resolv-retry infinite
     nobind
     persist-key
     persist-remote-ip
     ca dal-a01.cert
     tls-client
     remote-cert-tls server
     auth-user-pass credentials.conf
     comp-lzo
     verb 3
     auth SHA256
     cipher AES-256-CBC
     EDIT: I checked file permissions. It is owned by root:root with 644 permissions, so that shouldn't be an issue. david
     Quoted reply: open your ovpn file; the vpn_protocol is the protocol defined via the "proto" line or at the end of the remote line, and should be either tcp or udp. Also, whilst you're there, check the port, as it's most probably not 1194 (the old default PIA port).
  13. I just updated to 6.2 and noticed a new update for this docker after doing so. I updated and now it won't run. I validated that my VPN settings haven't changed and match what they should. Here is the log file:
     2016-09-16 14:29:47,729 DEBG 'deluge-script' stdout output: [info] deluge config file already exists, skipping copy
     2016-09-16 14:29:47,730 DEBG 'deluge-script' stdout output: [info] VPN is enabled, checking VPN tunnel local ip is valid
     2016-09-16 14:29:47,738 DEBG 'start-script' stdout output: [info] VPN provider defined as custom
     [info] VPN config file (ovpn extension) is located at /config/openvpn/dal-a01.ovpn
     2016-09-16 14:29:47,741 DEBG 'start-script' stdout output: [info] VPN provider remote gateway defined as dal-a01.wlvpn.com
     [info] VPN provider remote port defined as 1194
     [crit] VPN provider remote protocol not defined (via -e VPN_PROTOCOL), exiting...
     Anyone know what the -e VPN_PROTOCOL is? This seems to be a new thing. thanks david
  14. My cache drive did not get assigned after update. It is shown on the main page as a new device. I've attached my diagnostics. EDIT: this is even weirder. I just noticed that I didn't get updated. I hit the update; it downloaded, extracted, and said to reboot. I stopped the array, powered down, powered up, and am still at 6.1.9 with no cache drive assigned. I can stop the array and assign the cache drive, and then docker starts. I noticed that my cache is set for 2 slots, but only one is assigned; the other is unassigned. Maybe that is the issue? I guess I'll re-apply the update and try again. EDIT2: after my second upgrade attempt it worked. What I did differently: on the first update, after I updated and it said to reboot, I remembered that for 6.2 I don't need the powerdown plugin, so I removed it. I then powered down/powered up. The second time I didn't remove that plugin, because it was already gone. Can removing a plugin after updating, but before rebooting, revert the update? For the cache drive, I set it to 1 slot and upon upgrading it came up without issue. Docker ran for a few minutes before the webpage came up, but things seem to be working now. david tower-diagnostics-20160916-0842.zip
  15. That is what I was afraid of. thanks, david
  16. here you go. thanks david tower-diagnostics-20160829-1107.zip
  17. I searched here on the forum to see why my cache drive goes read-only after a while. Most people ran a check on the drive to look for corruption. So I stopped the array and ran reiserfsck --check /dev/sdb1:
     Replaying journal: Done.
     Reiserfs journal '/dev/sdb1' in blocks [18..8211]: 662 transactions replayed
     Checking internal tree.. finished
     Comparing bitmaps..finished
     Checking Semantic tree: finished
     No corruptions found
     There are on the filesystem:
     Leaves 63511
     Internal nodes 417
     Directories 309746
     Other files 184359
     Data block pointers 31617135 (969150 of them are zero)
     Safe links 0
     ########### reiserfsck finished at Mon Aug 29 10:23:45 2016 ###########
     It reports no issues. Any other suggestions on why my cache drive started acting up all of a sudden? When I try to create a file I get:
     /mnt/cache# touch test
     touch: cannot touch 'test': Read-only file system
     And in the log file I get the following errors/warnings:
     Aug 29 10:32:22 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375255
     Aug 29 10:32:22 tower kernel: Buffer I/O error on dev sdb1, logical block 178296899, async page read
     Aug 29 10:32:25 tower shfs/user: shfs_read: read: (5) Input/output error
     Aug 29 10:32:25 tower kernel: mpt2sas0: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000)
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
     Aug 29 10:32:25 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c2 5f 00 00 08 00
     Aug 29 10:32:25 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375263
     Aug 29 10:32:25 tower kernel: Buffer I/O error on dev sdb1, logical block 178296900, async page read
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:28 tower kernel: mpt2sas0: log_info(0x31110630): originator(PL), code(0x11), sub_code(0x0630)
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] Sense Key : 0x3 [current]
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] ASC=0x11 ASCQ=0x0
     Aug 29 10:32:32 tower kernel: sd 3:0:0:0: [sdb] CDB: opcode=0x28 28 00 55 04 c3 c7 00 00 08 00
     Aug 29 10:32:32 tower kernel: blk_update_request: critical medium error, dev sdb, sector 1426375623
     Aug 29 10:32:32 tower kernel: Buffer I/O error on dev sdb1, logical block 178296945, async page read
     It seems to me as if the drive is failing? thanks david
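     Those "critical medium error" lines with Sense Key 0x3 point at the disk media itself rather than the filesystem, so the drive's SMART data is a sensible next thing to check, e.g.:
     smartctl -a /dev/sdb
     and look at Reallocated_Sector_Ct, Current_Pending_Sector, and Offline_Uncorrectable.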
  18. Just wanted to drop by and say thanks! I was in Taiwan for business when I got an email saying that Crashplan wasn't backing up. I was aware that this was going to happen, but couldn't do anything about it. A few days later the power went out and unRaid shut down automatically. After power was restored it started back up, crashplan automatically updated, and I got 4.7. I didn't know this all happened until I got another email saying the backups were working again; imagine my surprise. I don't think anything has ever worked this perfectly for me. thanks for the great docker, david
  19. I just do:
     docker ps
     then cut-n-paste the ID into:
     docker exec -it <paste> bash
     then, if I want to run top:
     export TERM=xterm
     I find it faster to cut-n-paste the ID than trying to remember/figure out the names. david
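     If you do know the container name, the two steps collapse into one (the container name here is illustrative):
     docker exec -it $(docker ps -qf name=duplicati) bash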
  20. Today I monitor my home internet connection. It is running on a Windows machine, but I turn off my Windows box. I'd like to have it running on my unRaid server because it is always on. david
  21. Wondering if anyone would be interested in a Neubot docker; it is similar to smokeping, but does more than ICMP checking. http://neubot.org/ It does a pretty nice job measuring your network performance. At the moment I have it running on a PC, but would like to move it to a docker. thanks, david
  22. Would it be possible to get a bandwidth probe installed as well, or is there one already? thanks, david
  23. I set up OpenVPN on my pfSense router and it worked just fine. However, it got annoying: the bank requires additional authentication when coming from a VPN, many storefronts don't work because they block known VPN IP ranges, craigslist, etc. It also took a bit of work to get Plex to work because PIA didn't support a valid port-forwarding model. So I went with delugeVPN to avoid the full-VPN hassle. Not an OpenVPN or pfSense issue, just the fallout when people do stupid things via VPN. david
     Quoted exchange:
     "+1 This type of solution is very much needed. Having the ability to choose which application uses the VPN provides a great amount of control and flexibility. As an example, VPNing docker applications like Sickbeard, Sickrage, Sonarr, Couchpotato, Sabnzbd, to name a few, but not VPNing Plex Server, since it has an issue with losing its connection once a VPN connection is established. Hope to see this type of solution come to fruition in the future."
     "OR... You can use your external firewall / router. Load a DD-WRT or Tomato firmware on your router and, using the OpenVPN client & the active routing policies, you can "route" any number of specific internal source IPs out through the VPN. Keeps the networking on the edge, and it is actually quite easy to set up (If I can do it! ;-)) My 2c"
     "Did it. WAAAYYYY too slow. Routers don't have enough CPU power for real-time encryption; I slowed my speeds by about 75%. The VPN really needs to run on a modern CPU, so inside a separate docker is the better solution. I'm looking at setting up a pfSense install for this very purpose, but an openVPN docker would probably work just as well."
  24. Thanks for the pointer. I don't do any transcoding inside the house, this is all for external streams. This helps a lot. david
  25. I'm hoping someone has a suggestion. My current unRaid box is an AMD Phenom II X3 710 with 8GB of memory. It does fine with a single Plex transcode, but I'm finding that my family wants 3-4 now and it just can't handle that many. I'm looking to upgrade my CPU, memory, and motherboard to something that can handle it. Is there any advantage for unRaid to run a Xeon? I know that it has better/bigger L3 cache, but I'm not sure that really helps with this issue. I've looked at an A10, or maybe an i5. I'm "hoping" to keep it around the $200 mark. Suggestions? thanks david