Everything posted by FreeMan

  1. CHBMB also went the extra mile to spoon-feed me a link, so there's that. I did give you the kudos for writing/maintaining the docker. I have run into one minor snag - I don't seem to be getting redirected from HTTP to HTTPS for any of my dockers. My /nginx/site-confs/default contains this right at the top:

         # listening on port 80 disabled by default, remove the "#" signs to enable
         # redirect all traffic to https
         server {
             listen 80;
             server_name _;
             return 301 https://$host$request_uri;
         }

     yet entering photos.mydomain.ddns.us in the address bar does not... Whew! I thought to check one thing before posting that. It doesn't work when you're not port-forwarding 80 -> 81 at the router... Works a treat now. Thanks again, APTALCA!
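     For anyone tripping over the same thing, a rough, hedged way to sanity-check the redirect from outside the LAN (assuming, as above, that the router forwards external port 80 to the container's port 81 and that photos.mydomain.ddns.us resolves to your WAN IP):

         # a HEAD request should come back with "301 Moved Permanently" and a Location: https://... header
         curl -I http://photos.mydomain.ddns.us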
  2. Thanks, CHBMB! That's exactly what I'm looking for! Now, if there was some way to ensure a minimum password length...
  3. That would be assuming they would understand things like "VPN", "SSH", or "one way encrypted hash". Sigh... Guess my best bet is to make up good passwords for them, eh? I did it for the various containers, I guess I can do it for this, too. Thanks again!
  4. After rereading the first 5 pages about 20 times... I've got my LE/NGINX set up and running! Thanks for the container and all the tips and tricks. After reading the remainder of the 32 pages, my one remaining question is: how do I give friends/family access to actually be able to create/change their own NGINX passwords? I created mine by SSH'ing into my unRAID box and then running:

         docker exec -it letsencrypt htpasswd -c /config/nginx/.htpasswd <username>

     but I have people spread across the country, and they can't all do that themselves. To be fair, I could create accounts/passwords for them (which would likely be longer, ergo more secure), but I'd prefer to give them a way of doing it themselves. Once I get this last piece in place, I can email out instructions & shut down all the other port forwards in the router and feel much better about my server. Thanks again to the linuxserver.io gang, especially @aptalca for putting this all together!
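     For reference, a minimal sketch of how additional users can be added to the same password file from the server side (assuming the container is still named letsencrypt, as above). htpasswd's -c flag recreates the file, so it should only be used for the very first user:

         # add or update a user in the existing file - note: no -c, which would wipe existing entries
         docker exec -it letsencrypt htpasswd /config/nginx/.htpasswd <username>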
  5. Just curious which version you were using. Their "One" version looks to be the lowest cost ($1/month less than CrashPlan Small Business), but that does indeed look limited. For my 2.6TB, I don't think they've got a cost-effective version. Has anyone been able to figure out how long the 75% discount is valid for? With a couple of people acting as Guinea Pigs for us - thanks @pjrellum and @J.R.! - this seems to be a viable option, and, as folks have mentioned, backing up one machine isn't inordinately more expensive than the old version. I'm using SyncBackFree to push from each of my home Win10 machines to unRAID and then letting CP back that up to their cloud. I'll miss being able to have my kids back up their laptops from school to my server, though. Maybe I'll enable SFTP locally and have SyncBack use that as the destination from their lappies... A friend of mine just switched to LiveDrive, but I'm not totally convinced that's the right option, either.
  6. From the Code42 FAQ:
     "Yes. You can back up NAS devices with Carbonite Core, or any of Carbonite's other business plans." (emphasis mine) - i.e. it's gonna cost ya!
     "Yes. You can back up external hard drives with Carbonite for Home (Plus and Prime) and Carbonite Core." - i.e. not the basic, $5/month plan.
     "Carbonite for Home and Carbonite Core do not support Linux. Carbonite Evault supports Linux and offers many other advanced backup features." - i.e. it's gonna cost ya!
     Really looking forward to the hive mind options!
  7. I'll assume you did, but it's always worth asking... Did you try symbolic links? When I was running CrashPlan on my Win7 machine, that's how I was getting it to back up my unRAID box. I just put a symlink in My Documents and it merrily followed along. I couldn't get that to work from Win10, though, and I figured the CP docker on the server was a better bet anyway.
  8. I switched to CrashPlan after spending most of 2 years waiting for my initial backup (< 1 TB of data, probably 600-750GB) to Carbonite to complete. My initial backup to CrashPlan finished in about 45 days. So far I've found nothing about an automatic migration of data from CP to Carbonite - it would be great if they would do it on their pipes for us, but I doubt that's gonna happen. Amusingly /sarcasm, Carbonite recommended their "Core" plan with a limit of 250GB (less than 10% of the data I've currently got backed up to CP!) of storage for me at a mere $270/year. But wait, I get 50% off!! I'm pretty sure the Basic plan at $60/year with unlimited data would be more than sufficient. Wonder if I can get my 50% discount on that plan. I may make a dummy account, sign up for their free trial (I wonder if they still have that), and see if I can get their backup engine to run on my Win10 machine, follow a symlink to the server, and back up the server. If that works, I guess I can live with slower speed unless/until one of our local geniuses comes up with a better option... Maybe I'll invest in a couple of 4TB externals, back up to one of them with SyncBack, and swap them out at the office weekly-ish for my off-site backup. At least I have until April to find a solution. /rant I'll add a huge shout-out to @gfjardim for all his (?) efforts making the CP docker and keeping it functional for all of us.
  9. I'm going to say that if you post diagnostics for the failing machine, people will be able to stop asking questions and taking guesses and actually be able to provide useful input. Might be worthwhile to post diagnostics for the non-failing machine, as well (just make sure you label them very clearly), just for comparison purposes.
  10. I've noticed that one disk takes an extreme amount of time to be checked by FIP. By extreme, I mean 30+ hours. I've got a dozen data disks:

         root@NAS:~# df -h
         Filesystem  Size  Used  Avail Use% Mounted on
         /dev/md1    932G  914G   18G   99% /mnt/disk1
         /dev/md2    1.9T  1.8T   22G   99% /mnt/disk2
         /dev/md3    932G  929G  2.9G  100% /mnt/disk3
         /dev/md4    1.9T  1.9T  9.5G  100% /mnt/disk4
         /dev/md5    3.7T  3.7T   33G  100% /mnt/disk5
         /dev/md6    1.9T  1.9T   16G  100% /mnt/disk6
         /dev/md7    1.9T  1.3T  571G   70% /mnt/disk7
         /dev/md8    1.9T  1.9T  151M  100% /mnt/disk8
         /dev/md9    932G  932G   55M  100% /mnt/disk9
         /dev/md10   2.8T  2.8T   25G  100% /mnt/disk10
         /dev/md11   3.7T  2.1T  1.7T   56% /mnt/disk11
         /dev/md12   3.7T  1.9T  1.9T   51% /mnt/disk12

      Of the 4TB drives (md5, 11 & 12), disks 5 & 12 run to completion in 6-10 hours (don't recall off the top of my head), but disk 11 regularly takes 30+ hours. Using jbartlett's excellent drive performance test, all 3 of those disks are performing at right about the same speeds. (FIP is currently processing disks 11 & 12, so there were some files open that probably slowed them down a bit.)

         Disk 5:  HGST HDN724040ALE640 PK1338P4GT2D9B  4 TB  133 MB/sec avg
         Disk 11: HGST HMS5C4040ALE640 PL2331LAG6W5WJ  4 TB  100 MB/sec avg
         Disk 12: HGST HMS5C4040ALE640 PL1331LAHE2R0H  4 TB  102 MB/sec avg

      So I chalk this up to the very large number of files on disk11:

         root@NAS:/mnt# ls disk5 -ARl|egrep -c '^-'
         74328
         root@NAS:/mnt# ls disk11 -ARl|egrep -c '^-'
         2311532
         root@NAS:/mnt# ls disk12 -ARl|egrep -c '^-'
         380921

      Which brings me back to the question I asked here - how do I properly exclude directories that I don't want to have checked? Since asking that question, I changed my directory exclude settings to have full paths from root, then executed 'Clear', 'Remove', 'Build', 'Export' for each and every disk in turn in an effort to update FIP's understanding of what it's supposed to do, but I'm still getting bunker reports of hash key mismatches on directories that should be excluded. I've set the "Exclude" paths from /mnt/user; do I need to exclude /mnt/diskx instead? I would think doing this would be a major pain since I'm writing to user shares that can easily span multiple drives - to begin with I'd need to exclude the paths from every existing disk, then I would need to remember to update my FIP settings every time I add a new disk. (Granted, I don't do it that often, but that's still a royal pain.) I've confirmed that disk11 does contain a large portion of the files I'd like to exclude from FIP scanning. Is this an issue with how FIP is skipping the paths in the "exclude" setting or how I'm defining them, or is there something else I'm missing completely?
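      As a side note on the /mnt/user vs. /mnt/diskx question, here's a rough, hedged sketch (not FIP-specific) of how a user share fans out across the individual disks - unRAID merges /mnt/diskN/<share> into /mnt/user/<share>, so per-disk excludes would need to cover each of these paths. "Backups" here is just an example share name:

         # count files per disk for an example share named "Backups"
         for d in /mnt/disk*; do
             if [ -d "$d/Backups" ]; then
                 printf '%s: %s files\n' "$d/Backups" "$(find "$d/Backups" -type f | wc -l)"
             fi
         done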
  11. That is exactly what I did. When I got my first disk converted from RFS to XFS, I started the FIP running against that one XFS drive. Each time I got a drive converted, I built & exported that disk and added it to the check schedule. Eventually I got the whole server converted to XFS and now all drives are being tested on a regular basis.
  12. Then you should be able to tell it to trust parity, unless you have dual parity configured.
  13. And... that's what I get for not reading. (Did I mention that I hadn't had my first cup of coffee yet and, therefore, really shouldn't have been messing about with server settings to begin with?) Also, thanks, @dlandon for another great tool to supplement unRAID! As you were.
  14. I installed this a while ago but only just really looked through it. I saw the "Disable FTP & Telnet" option and turned that on. As I clicked "Apply" I realized, hey, I do need Telnet, that's how I remote in to the console, and sure enough PuTTY wouldn't connect. I turned off "Disable", and it still won't connect. Now, as I've read through this thread, I've realized that I should be using SSH. I've reconfigured my saved PuTTY session to use SSH and it works fine. For my server's security, I'll be leaving this setting enabled. However, I'm reporting this because setting "Disable FTP Server & Telnet" back to "No" doesn't seem to re-enable telnet (haven't tested FTP). While I get that it should really remain that way for security, most users would expect that setting it back to "No" would re-establish telnet services.
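      A hedged way to check from the console whether the telnet/FTP daemons actually came back after flipping the setting (assuming the standard netstat that ships with unRAID):

         # list listening TCP sockets; FTP is port 21, telnet is port 23
         netstat -lnt | grep -E ':(21|23) '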
  15. I just hit an odd situation that may or may not be related to CrashPlan, so I'm asking to see if there's anything here that might be worth pursuing. As detailed in my post here, I discovered an odd directory in /mnt/user that I've not created manually. I have a ".Trash-99" directory there that was created on 4 Jun 2017. There was a suggestion in my post that it might be something created by Ubuntu as a recycle bin, and that if something had a mapping to /mnt/user, it might be the culprit for creating this. As CrashPlan is one of two dockers that have a mapping to /mnt/user, I'm asking here.
      A) Has anyone else seen this directory created?
      B) Was there a release on or about 4th June that may have unintentionally included creating a Recycle Bin?
      C) Does anyone have any thoughts at all as to any other cause for this directory to have appeared?
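      In case it helps anyone comparing notes, a purely diagnostic, hedged one-liner to see what actually ended up inside that directory and when it was written:

         # show contents and timestamps of the mystery trash directory
         ls -laR /mnt/user/.Trash-99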
  16. See Saarg's message that was just above yours (emphasis added). If you're not comfortable editing the file in Linux, you can power down the server, stick the flash drive in a Win/Mac machine, and make the edit there. Just be sure that if you're using a Win PC, you use an editor that will preserve the Linux line breaks. (Notepad++ is free and will do just that for you. Not affiliated, just a happy user.)
  17. I've been rebuilding all my hashes one disk at a time to attempt to eliminate the constant warning I was getting because I think I improperly excluded some paths. I came home this evening to a very unusual display. It's telling me that my Disk11 is 100% completed with an ETA of 00:00:00, however it's reporting that it's working on file 664,413 of 2,248,611, and the current file number is incrementing. Is there possibly an issue with the size of the counter variable that's overflowing and causing the percentage to think it's done?
  18. Thanks for the hint! I can ignore a spurious error message. I appreciate all your work in making these available and in supporting them!
  19. Got the error. Applied this fix. SAB starts and runs. I'm getting this error also: However, SAB is running and remains running. I'm not set up to use HTTPS, so that may be the reason.
      General Docker FAQ: I don't see anything relevant, you'll have to point it out for us slower folk.
      unRAID Docker FAQ: "Sorry, there is a problem. You do not have permission to view this content. Error code: 2F173/H"
      I get that the FAQs are there to save you time, I fully understand and accept that. But for me, at least, you'll have to provide a bigger hint.
  20. I am getting notifications of hash key mismatches from directories that are in my exclude custom folder list. I want to include my /mnt/user/Backups share because all my machines at home are backing up to it, but I want to exclude a subset of directories under that share because my server is a backup location for several family members' machines. My structure is:

         /mnt/user/Backups/...
         + Local machine 1 backup
         + Local machine 2 backup
         + Local machine 3 backup
         + CommunityApplicationAppdataBackup
         + Crashplan machine 1 backup
         + Crashplan machine 2 backup
         + Crashplan machine n backup

      In the excluded files and folders Custom Folders box, I have this:

         619375375455617289,619380445513515330,619452463559020549,622829716619395332,622831926866608387,682140704451330318,712875340537760924,CommunityApplicationsAppdataBackup

      The numbered directories are the ones CrashPlan creates when an incoming backup is created. I recall that when I first excluded the folders, there was some sort of drop-down picker and that the format above was created by that, but now I'm not so certain of my recollection and I may have just made it up. Two-part question, then:
      1) Why am I still getting notifications of file mismatches in these folders?
      2) If the answer to 1) is "because the exclude folder format is wrong", what is the correct format for this box? Do I need the full path, and if so, from root (/)?
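      While waiting on an answer, a hedged diagnostic that may help narrow things down - it just lists where those numbered CrashPlan destination folders physically live, in case the excludes end up needing per-disk paths rather than /mnt/user ones:

         # list the numbered CrashPlan destination folders per physical disk (purely diagnostic)
         find /mnt/disk*/Backups -maxdepth 1 -type d -regex '.*/[0-9]+' 2>/dev/null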
  21. That's so disgusting I'm tempted to report the post. shudder...
  22. 16GB as noted in .sig Seeing how much others have, I'm a tad jealous, but really, I don't know that I need it. CrashPlan backing up nearly 1 million files at 2.7TB ran me out of room when I had only 8GB. Everything's been smooth since the upgrade.
  23. @bonienl first of all, thank you for all your great plugins and the fantastic job you've done with the WebGUI (even back in the 5.x days)! After installing my first SSD cache drive, I got a warning from Squid's excellent FCP that I didn't have Trim installed. After installing it, I had 2 observations:
      1) I had NO idea where to go to find the settings for it. Some of your plugins very conveniently have a clickable button that takes you to the config page, some don't. Not all of them set up their config pages in the same section of the menus, so it's not particularly obvious (at least to me) where to go for each individual batch of settings. Would it be possible to make them all have the "button click takes you to the config page" functionality? I searched this thread for "trim", and, fortunately, saw that I'm not the only one with this issue. I was able to follow the directions given in response to others' requests to find it.
      2) Once I found the config, I had no idea what kind of settings to provide. Some more searching in this thread led me to these three messages: (sorry, couldn't seem to get the last one to embed with the quotes from the previous two ) This is great info for those of us who aren't yet that familiar with SSDs and Trim operations. I think it would be a great service to all of us less knowledgeable types if you were to include a link to John_M's comment (it includes quotes of all the rest), and/or include some info along these lines in your description of the plug-in in your OP. It would also probably save you and/or Squid (or others) having to answer these questions time after time. Again, thanks for all your excellent work! (You, too, Squid!)
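      For anyone else landing here while hunting for the settings, a hedged example of what a one-off manual trim looks like from the console (assuming the SSD cache is mounted at /mnt/cache; as I understand it, the plugin just schedules something along these lines):

         # trim unused blocks on the cache SSD and report how much was trimmed
         fstrim -v /mnt/cache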