Everything posted by ljm42

  1. Hey gfjardim, can you explain the difference between this: https://github.com/gfjardim/docker-crashplan and this: https://github.com/gfjardim/docker-containers/tree/master/crashplan ? They are no longer in sync. Thanks!
  2. I just went through the same thing. Adobe Lightroom and Google Picasa are good choices for single-user, single-computer needs (I have not tried some of the other options listed, and the multi-user Picasa trick didn't seem to address multiple computers). But I discovered Daminion about a month ago: http://daminion.net/ and it is awesome. It has a true client-server architecture; I installed the server on our home desktop computer and clients on our laptops. Now my wife and I can both organize photos while the kids are playing games on the desktop. The photos themselves are stored on unRAID, of course. We have over 65,000 photos and it handles them without problems. If you are evaluating products, be sure to look into how powerful their search is. I found Picasa to be particularly disappointing; you put all that effort into tagging and then can't really do anything with it, which is ironic considering who owns them. Lightroom was decent, but Daminion is very powerful, with the ability to search on tags/people/locations/ratings/and more, all at the same time. For this particular audience, the main downside is that it is a Windows product. It would be so much nicer if you could run the server in a Docker.
  3. I went ahead and rebooted. Not sure if it took 2 seconds or 20 minutes to fix, but I'm not getting the error any more. Thanks!
  4. Thanks. Hmm, I'm not sure how long to wait before rebooting. I had assumed btrfs-freespace would show up in htop, but it doesn't. It doesn't really look like it is doing much:
        root@Tower:~# ps -ef | grep "btrfs-freespace"
        root     17662     2  0 18:25 ?        00:00:00 [btrfs-freespace]
        root     17752 17740  0 18:30 pts/0    00:00:00 grep btrfs-freespace
     Here's what the wiki says:
     Any idea how to tell if it is "actively doing some IO"?
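     My best guess so far (not from the wiki, just an idea) is to watch the kernel thread's IO counters in /proc and see whether they keep climbing:
        # 17662 is the PID of [btrfs-freespace] from the ps output above
        cat /proc/17662/io
        sleep 10
        cat /proc/17662/io
     If read_bytes/write_bytes don't change between the two reads, I assume it is idle.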
  5. I am getting this error in my syslog related to the BTRFS cache drive:
        kernel: BTRFS info (device sdd1): The free space cache file (26872905728) is invalid. skip it
     I'm currently on 6b8 but I've had the error since 6b6. In 6b6 it led to the "(28) No space left on device" error, but in betas 7 and 8 it hasn't caused any problems that I know of. I have already run the btrfs scrub option that was added to 6b7 but it didn't find or fix anything. From what I can tell, the BTRFS clear_cache mount option should take care of the problem: https://btrfs.wiki.kernel.org/index.php/Mount_options Would this be the right process to use that option?
       • Stop the array
       • unmount /mnt/cache
       • mount -t btrfs -o noatime,nodiratime,clear_cache /dev/sdd1 /mnt/cache
       • run htop, watch for btrfs-freespace to drop off the list
       • reboot
     Thanks for any advice! syslog-20140901.txt
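     For the record, my plan for confirming it worked after the remount is to watch the syslog for that message (assuming /dev/sdd1 is still the cache device):
        # remount with clear_cache so btrfs rebuilds the free space cache
        umount /mnt/cache
        mount -t btrfs -o noatime,nodiratime,clear_cache /dev/sdd1 /mnt/cache
        # the "free space cache file ... is invalid" line should stop appearing
        tail -f /var/log/syslog | grep -i "free space cache"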
  6. hmm, not for those of us who run headless. Prefer it to be a menu option. My assumption is that it would be fully automated, with no need to "press Y to continue" or anything like that. So running headless shouldn't be an issue? I do think it needs to be run outside of the unRAID environment (either from a boot menu option or automated during bootup) to ensure the flash is ready to be mounted when unRAID boots. It has been suggested in other threads that a plugin (powerdown) could be the reason so many people are having to run chkdsk these days, so I don't think it would help a whole lot to run it prior to shutdown. Even once that particular plugin is fixed, I think unRAID would benefit from the ability to fix the flash drive without having to use Windows. I just set up a remote, headless (with IPMI) system for my sister and I need to be able to manage the system without having physical access to it.
  7. The forums are filled with instructions to put the flash drive in a Windows computer and run chkdsk. But then I saw this one: http://lime-technology.com/forum/index.php?topic=34266.msg318661#msg318661 that explains how to run dosfsck from within unRAID. Which makes me wonder... 1. Can we add a "checkdisk" option to the boot menu that loads a minimal environment, runs dosfsck on the USB drive, and then reboots? 2. Or even better... when unRAID detects a problem with the USB drive during bootup, why not have it automatically unmount the drive, run dosfsck and then remount the drive (or reboot if needed)?
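     For anyone who wants to try option 1 by hand in the meantime, I believe the manual version looks roughly like this (the device name is only an example; double-check which device is your flash drive before running anything):
        # unmount the flash, repair the FAT filesystem, then mount it again
        umount /boot
        dosfsck -a -v /dev/sda1
        mount /dev/sda1 /boot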
  8. Great! I'm glad it is connecting now. Once you have it configured you can close the Crashplan app on your Windows box and exit putty too. It is pretty much a "set it and forget it" type of thing, unless you want to check up on how much progress it has made on the backup. (Depending on how much data you are backing up, it could take a few months to complete the initial backup.) Also, I see now that gfjardim is recommending an alternate way to connect in the first post. It looks like his method bypasses putty, which should be easier. I'm not sure why Crashplan doesn't recommend that method.
  9. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel to the Crashplan service running on unRAID.
     Okay, I tried the above and am still having issues. I removed the Docker container and the config folder and reinstalled the image. I uninstalled on my Windows PC, rebooted, did a clean install and followed the guide you posted. When I connect with Putty it asks for a username/password. I am assuming it wants my unRAID root account, since if I put in my Crashplan credentials I get access denied. When I launch the app on my Windows machine it only lists my Windows machine. I do see an entry for '42' in the main GUI page, which I thought might be my unRAID machine, but it says no files were selected for backup, and I can't browse it. When I try to browse I only see my Windows PC. When I installed my docker, for data I selected /mnt/user/Pictures as this is all I want to back up. Any idea on what I am missing still? Thanks
     OK, once you connect via putty it should just be a matter of setting the "service port" in your ui.properties file to 4200. That step in Crashplan's docs is kind of buried. Then when you start the Crashplan client on Windows it should see the folders on unRAID and NOT the folders on Windows. You won't be able to see both at the same time.
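     To spell out the ui.properties step (the file lives in the conf folder of the CrashPlan install on the Windows side; the exact path may differ on your machine):
        # conf/ui.properties on the Windows box running the CrashPlan GUI
        servicePort=4200
     With the putty tunnel up, the GUI then connects to localhost:4200 and the tunnel carries that to the engine running in the Docker on unRAID.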
  10. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel to the Crashplan service running on unRAID.
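     If you'd rather skip putty and use a command-line ssh client, the tunnel boils down to something like this ("tower" is just my server's name, use yours):
        # forward local port 4200 to the CrashPlan engine (port 4243) on unRAID
        ssh -L 4200:localhost:4243 root@tower
     Leave that session open while the GUI is running.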
  11. That's a good point, I've got mine set to "minimal": http://support.code42.com/CrashPlan/Latest/Backup/Backup_Settings#Advanced_Settings
  12. Does it mean that if I have 30TB of data, it will take 30GB of RAM just for Crashplan?
     According to the Crashplan help pages: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes
     What I do to try and get around this is create multiple backup sets. My theory is that if I can keep each backup set to around 1 TB I'll be OK. It seems to work; I've backed up 3.5 TB so far and haven't had any memory problems. But 30 TB is a different story. That would probably take a few years to back up. I'd use Crashplan for your critical data and then find some other kind of local backup for the less critical data.
  13. Here's a suggestion for a best practice... always add a .gitattributes file to the root of your project that contains these lines:
        # Auto detect text files and force unix-style line endings
        * text eol=lf
     That forces unix-style line endings on text files, which solves problems for people who use Git for Windows to check out a repository to an unRAID share. Without that line, Git for Windows uses Windows-style line endings, which causes errors when running shell scripts in Docker.
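     One caveat: I believe that if the repository was already checked out with Windows line endings before .gitattributes was added, git won't rewrite the existing files until you force a fresh checkout, something like:
        # re-checkout the working tree so the new eol setting is applied
        git rm --cached -r .
        git reset --hard
     (Or just delete the working copy and clone it again.)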
  14. Oh, sorry about that. I don't see any service.log files yet, but maybe they will appear once it gets past the "synchronizing" stage. I haven't had any memory problems yet, but that help page says you need 1GB of RAM for every 1TB of data being backed up, so I assume I'll need more. I don't know if it helps, but I'm using multiple backup sets in the hope that it will need less RAM.
  15. Thanks! I upgraded to the current Docker and it is synchronizing now. I see the .identity file outside of the Docker, thanks! Oh cool, this is using phusion now too. I don't mean to be a pain... but I wanted to ask about this article: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes It references some log files here: /usr/local/crashplan/log/service.log.# and run.conf: /usr/local/crashplan/bin/run.conf How would you feel about doing the copy/link thing with the /config directory for those? Thanks again!
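     For context, the memory tweak in that article boils down to editing the -Xmx value in run.conf (exact file contents vary by Crashplan version; the 3072m below is only an example based on the roughly 1GB-per-TB guideline), which is why it would be handy to reach run.conf from /config:
        # raise the backup engine's max Java heap, then restart the engine
        sed -i '/SRV_JAVA_OPTS/s/-Xmx[0-9]*m/-Xmx3072m/' /usr/local/crashplan/bin/run.conf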
  16. Thank you for this, gfjardim! This is a fairly complex Docker and I really appreciate the work you put into it! BTW, I passed in "-v /mnt:/mnt" and then when I adopted my unRAID 5 Crashplan account all of the paths stayed the same, which simplified the process. At least I assume it will work, it is still synchronizing. One thing to think about... could this be tweaked to store the identity information outside of the Docker so we don't have to re-adopt it if we rebuild the Docker? Along those lines... it would also be nice if we had access to the logs and to run.conf so we could change the amount of memory available to Crashplan, as described here: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes Thanks again!
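     For anyone else adopting an unRAID 5 account, my run command was along these lines (the image name and appdata path are from my own setup and memory, so go by the first post for the full command and port mappings):
        # map /mnt straight through so the adopted backup paths match unRAID 5,
        # and keep the container's /config directory on the array
        docker run -d --name crashplan \
          -v /mnt:/mnt \
          -v /mnt/cache/appdata/crashplan:/config \
          gfjardim/crashplan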
  17. Yeah I don't think we'll need all those extra containers either. But I think the key thing they are recommending is that you store all your data and config in volumes, which are simply directories on your unRAID array. Then you can access the config/logs/data directly from unRAID and don't need to SSH into the container to make changes.
  18. Here's my tunables report. I am trying to understand these results, and I had a few ideas for the script if you don't mind. First, would it make sense for automatic mode to have a test "zero" based on unRAID's default values, so it is easier to see what the improvement will be? Second, would you perhaps consider adding a sixth column to the report to show how much RAM each option takes? I think that would help me choose between test 2 (the "best bang") and test 5 (which looks like a pretty good improvement over the best bang). It might also be nice to have an option to limit the amount of RAM that automatic mode uses. For instance, if I know I don't want to use over a certain amount (200? 300?) then I don't really need to find unthrottled values, although it is kind of interesting. Thanks for writing this!
     Tunables Report from unRAID Tunables Tester v2.2 by Pauven
     NOTE: Use the smallest set of values that produce good results. Larger values increase server memory use, and may cause stability issues with unRAID, especially if you have any add-ons or plug-ins installed.
     Test | num_stripes | write_limit | sync_window | Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
        1 | 1408 | 768 | 512 | 147.0 MB/s
        2 | 1536 | 768 | 640 | 148.2 MB/s
        3 | 1664 | 768 | 768 | 148.6 MB/s
        4 | 1920 | 896 | 896 | 149.2 MB/s
        5 | 2176 | 1024 | 1024 | 150.4 MB/s
        6 | 2560 | 1152 | 1152 | 150.8 MB/s
        7 | 2816 | 1280 | 1280 | 151.0 MB/s
        8 | 3072 | 1408 | 1408 | 151.2 MB/s
        9 | 3328 | 1536 | 1536 | 151.0 MB/s
       10 | 3584 | 1664 | 1664 | 151.3 MB/s
       11 | 3968 | 1792 | 1792 | 151.4 MB/s
       12 | 4224 | 1920 | 1920 | 151.4 MB/s
       13 | 4480 | 2048 | 2048 | 151.1 MB/s
       14 | 4736 | 2176 | 2176 | 151.1 MB/s
       15 | 5120 | 2304 | 2304 | 151.2 MB/s
       16 | 5376 | 2432 | 2432 | 151.4 MB/s
       17 | 5632 | 2560 | 2560 | 151.1 MB/s
       18 | 5888 | 2688 | 2688 | 151.3 MB/s
       19 | 6144 | 2816 | 2816 | 151.4 MB/s
       20 | 6528 | 2944 | 2944 | 151.5 MB/s
     --- Targeting Fastest Result of md_sync_window 2944 bytes for Final Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
       21 | 6272 | 2824 | 2824 | 153.7 MB/s
       22 | 6288 | 2832 | 2832 | 154.0 MB/s
       23 | 6304 | 2840 | 2840 | 154.0 MB/s
       24 | 6328 | 2848 | 2848 | 153.9 MB/s
       25 | 6344 | 2856 | 2856 | 153.9 MB/s
       26 | 6360 | 2864 | 2864 | 153.7 MB/s
       27 | 6376 | 2872 | 2872 | 153.8 MB/s
       28 | 6400 | 2880 | 2880 | 153.9 MB/s
       29 | 6416 | 2888 | 2888 | 154.0 MB/s
       30 | 6432 | 2896 | 2896 | 154.0 MB/s
       31 | 6448 | 2904 | 2904 | 154.0 MB/s
       32 | 6464 | 2912 | 2912 | 154.0 MB/s
       33 | 6488 | 2920 | 2920 | 153.9 MB/s
       34 | 6504 | 2928 | 2928 | 153.6 MB/s
       35 | 6520 | 2936 | 2936 | 153.8 MB/s
       36 | 6536 | 2944 | 2944 | 153.9 MB/s
     Completed: 2 Hrs 7 Min 50 Sec.
     Best Bang for the Buck: Test 2 with a speed of 148.2 MB/s
        Tunable (md_num_stripes): 1536
        Tunable (md_write_limit): 768
        Tunable (md_sync_window): 640
     These settings will consume 138MB of RAM on your hardware.
     Unthrottled values for your server came from Test 22 with a speed of 154.0 MB/s
        Tunable (md_num_stripes): 6288
        Tunable (md_write_limit): 2832
        Tunable (md_sync_window): 2832
     These settings will consume 564MB of RAM on your hardware. This is 449MB more than your current utilization of 115MB.
     NOTE: Adding additional drives will increase memory consumption.
     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
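     For anyone else reading this later: the permanent way to apply a result is Settings > Disk Settings as the report says, but I believe the values can also be tried temporarily from the console (which, as I understand it, is what the tester does between runs). For the Test 2 values that would be roughly:
        # temporarily apply the "Best Bang" values (Test 2); they revert on reboot
        mdcmd set md_num_stripes 1536
        mdcmd set md_write_limit 768
        mdcmd set md_sync_window 640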