Everything posted by ljm42

  1. I have not tried using USB devices on unRAID yet; can you explain how this is different from SNAP? One thing I'd like to do is run a script that detects when I plug in a particular microSD card (from my camera), moves the files to a particular location on unRAID, then unmounts the card and beeps so I know it is safe to remove it (see the sketch below). Would this or SNAP be a better starting point for that sort of thing? Thanks!
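     To make the idea concrete, here is a rough sketch of the kind of script I have in mind. It is entirely hypothetical: the volume label, mount point, destination share, and the idea of triggering it from a udev rule are all assumptions, not anything unRAID or SNAP provides today.

        #!/bin/bash
        # Hypothetical auto-import script, e.g. run by a udev rule when a
        # partition labeled CAMERA_SD appears. All names are examples.
        DEV=/dev/disk/by-label/CAMERA_SD
        MNT=/mnt/camera_sd
        DEST=/mnt/user/Pictures/incoming

        mkdir -p "$MNT" "$DEST"
        mount "$DEV" "$MNT" || exit 1
        mv "$MNT"/DCIM/*/* "$DEST"/   # move the photos off the card
        umount "$MNT"
        echo -e '\a'                  # beep: safe to remove the card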
  2. Having a plugin to install these and keep them current would be great! There are a lot of people who are moving an entire disk's worth of data from one disk to another so they can switch filesystems. These utilities seem to operate on shares; could there be an option (or a third script) to move entire disks (see the rsync sketch below)? It would be great to tell people to add a plugin and run a simple command to move an entire disk's worth of data around.
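     For what it's worth, the manual equivalent of a whole-disk move is roughly the following. This is only a sketch: the disk numbers are examples, and you would want to verify the copy before deleting anything.

        # Copy everything from disk1 to disk2, preserving permissions,
        # times, and extended attributes (disk numbers are examples):
        rsync -avPX /mnt/disk1/ /mnt/disk2/
        # Only after verifying the copy:
        # rm -rf /mnt/disk1/*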
  3. Oh dang, you're right. Thanks, it is working now.
  4. Hmm, I was just prompted to update to the 2015.02.03 version of Dynamix System Temperature, and the /Settings/TempSettings page no longer works. The "available drivers" list is empty by default, although if I click "Detect" it does display the correct ones (coretemp nct6775). Load / Add to Startup don't seem to do anything, but my go script already contains them, so nothing additional should be needed:

        modprobe coretemp
        modprobe nct6775
        /usr/bin/sensors -s

     All of the dropdowns on the page are disabled, and nothing I do allows me to select anything:

        <select name="cpu_temp" disabled>

     I made the mistake of hitting "Apply" and all of the values changed to "not used" when the page reloaded, and the temperatures disappeared from the footer. The dropdowns are still disabled, although when I view the source of the page the correct values are listed, I just can't choose them:

        <select name="cpu_temp" disabled>
        <option>Not used</option>
        <option value='coretemp-isa-0000|temp1|CPU Temp' >coretemp - Physical id 0 - 37.0 °C</option>
        <option value='coretemp-isa-0000|temp2|CPU Temp' >coretemp - Core 0 - 37.0 °C</option>
        <option value='coretemp-isa-0000|temp3|CPU Temp' >coretemp - Core 1 - 34.0 °C</option>
        <option value='coretemp-isa-0000|temp4|CPU Temp' >coretemp - Core 2 - 33.0 °C</option>
        <option value='coretemp-isa-0000|temp5|CPU Temp' >coretemp - Core 3 - 36.0 °C</option>
        <option value='nct6776-isa-0290|temp1|CPU Temp' >nct6776 - SYSTIN - 35.0 °C</option>
        <option value='nct6776-isa-0290|temp2|CPU Temp' >nct6776 - CPUTIN - 37.0 °C</option>
        <option value='nct6776-isa-0290|temp3|CPU Temp' >nct6776 - AUXTIN - 26.0 °C</option>
        </select>

     Hopefully that is enough information to help figure out what happened?
  5. Thanks! I was easily able to remove the fan speed from the footer using the new checkboxes. Works perfectly. Thank you very much, bonienl!
  6. Thanks! I've also got a request for the stats page. I have 8 GB of RAM in my system, but the graph goes from 0 to 10 GB. Would it be possible for the Y-axis of the graph to max out at 8 GB (or whatever the actual amount of RAM is in the system)?
  7. Hi bonienl, Unfortunately my motherboard doesn't report fan speed; 'sensors' just reports:

        fan1: 0 RPM  (min = 0 RPM)
        fan2: 0 RPM  (min = 0 RPM)

     So I added this to my sensors.conf file:

        ignore fan1
        ignore fan2

     This suppressed the fan readout from 'sensors' and removed it from the dashboard, but with the 1.31c version of dynamix.system.temp it shows "## rpm" in the footer. Could you hide the fan icon and text when there is no fan speed information? Thanks!
  8. Hey gfjardim, Can you explain the difference between this: https://github.com/gfjardim/docker-crashplan and this: https://github.com/gfjardim/docker-containers/tree/master/crashplan ? They are no longer in sync. Thanks!
  9. I just went through the same thing. Adobe Lightroom and Google Picasa are good choices for single-user, single-computer needs (I have not tried some of the other options listed, and the multi-user Picasa trick didn't seem to address multiple computers). But I discovered Daminion about a month ago: http://daminion.net/ and it is awesome. It has a true client-server architecture; I installed the server on our home desktop computer and clients on our laptops. Now my wife and I can both organize photos while the kids are playing games on the desktop. The photos themselves are stored on unRAID, of course. We have over 65,000 photos and it handles them without problems. If you are evaluating products, be sure to look into how powerful their search is. I found Picasa to be particularly disappointing; you put all that effort into tagging and then can't really do anything with it, which is ironic considering who owns it. Lightroom was decent, but Daminion is very powerful, with the ability to search on tags, people, locations, ratings, and more, all at the same time. For this particular audience, the main downside is that it is a Windows product. It would be so much nicer if you could run the server in a Docker container.
  10. I went ahead and rebooted. Not sure if it took 2 seconds or 20 minutes to fix, but I'm not getting the error any more. Thanks!
  11. Thanks. Hmm, I'm not sure how long to wait before rebooting. I had assumed btrfs-freespace would show up in htop, but it doesn't, and it doesn't really look like it is doing much:

        root@Tower:~# ps -ef | grep "btrfs-freespace"
        root     17662     2  0 18:25 ?      00:00:00 [btrfs-freespace]
        root     17752 17740  0 18:30 pts/0  00:00:00 grep btrfs-freespace

     The wiki says to wait until it is no longer "actively doing some IO". Any idea how to tell if that's the case? (One possible check is sketched below.)
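     One way to check, assuming the kernel exposes per-process IO counters under /proc, is to sample them twice and see whether they are climbing:

        # Sample the IO counters for btrfs-freespace (PID 17662 from the
        # ps output above) a few seconds apart; if read_bytes/write_bytes
        # are increasing, it is still actively doing IO.
        cat /proc/17662/io
        sleep 5
        cat /proc/17662/io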
  12. I am getting this error in my syslog related to the BTRFS cache drive:

        kernel: BTRFS info (device sdd1): The free space cache file (26872905728) is invalid. skip it

     I'm currently on 6b8, but I've had the error since 6b6. In 6b6 it led to the "(28) No space left on device" error, but in betas 7 and 8 it hasn't caused any problems that I know of. I have already run the btrfs scrub option that was added to 6b7, but it didn't find or fix anything. From what I can tell, the BTRFS clear_cache mount option should take care of the problem: https://btrfs.wiki.kernel.org/index.php/Mount_options Would this be the right process to use that option (sketched below)?
       • Stop the array
       • unmount /mnt/cache
       • mount -t btrfs -o noatime,nodiratime,clear_cache /dev/sdd1 /mnt/cache
       • run htop, watch for btrfs-freespace to drop off the list
       • reboot
     Thanks for any advice! syslog-20140901.txt
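     As a concrete sketch of those steps (unverified, and assuming the cache device really is /dev/sdd1 and the array is already stopped):

        # Remount the cache with clear_cache to rebuild the free space cache
        umount /mnt/cache
        mount -t btrfs -o noatime,nodiratime,clear_cache /dev/sdd1 /mnt/cache
        # Wait for the rebuild to finish; the [b]trfs pattern keeps grep
        # from matching its own process:
        ps -ef | grep '[b]trfs-freespace'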
  13. hmm, not for those of us who run headless. Prefer it to be a menu option. My assumption is that it would be fully automated, with no need to "press Y to continue" or anything like that, so running headless shouldn't be an issue? I do think it needs to be run outside of the unRAID environment (either from a boot menu option or automated during bootup) to ensure the flash is ready to be mounted when unRAID boots. It has been suggested in other threads that a plugin (powerdown) could be the reason so many people are having to run chkdsk these days, so I don't think it would help a whole lot to run it prior to shutdown. Even once that particular plugin is fixed, I think unRAID would benefit from the ability to fix the flash drive without having to use Windows. I just set up a remote, headless (with IPMI) system for my sister, and I need to be able to manage the system without having physical access to it.
  14. The forums are filled with instructions to put the flash drive in a Windows computer and run chkdsk. But then I saw this one: http://lime-technology.com/forum/index.php?topic=34266.msg318661#msg318661 that explains how to run dosfsck from within unRAID. Which makes me wonder...
       1. Can we add a "checkdisk" option to the boot menu that loads a minimal environment, runs dosfsck on the USB drive, and then reboots?
       2. Or even better: when unRAID detects a problem with the USB drive during bootup, why not have it automatically unmount the drive, run dosfsck, and then remount the drive (or reboot if needed)? A sketch of what that might look like is below.
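     Roughly, the automated repair in option 2 might boil down to something like this (a sketch only; it assumes the flash drive is /dev/sda1 and is mounted at /boot):

        # Unmount the flash, repair it, and remount it
        umount /boot
        dosfsck -a /dev/sda1    # -a: automatically repair the filesystem
        mount /dev/sda1 /boot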
  15. Great! I'm glad it is connecting now. Once you have it configured, you can close the Crashplan app on your Windows box and exit putty too. It is pretty much a "set it and forget it" type of thing, unless you want to check up on how much progress it has made on the backup. (Depending on how much data you are backing up, it could take a few months to complete the initial backup.) Also, I see now that gfjardim is recommending an alternate way to connect in the first post. It looks like his method bypasses putty, which should be easier. I'm not sure why Crashplan doesn't recommend that method.
  16. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel with the Crashplan service running on unRAID.

     Okay, I tried the above and am still having issues. I removed the Docker container and the config folder and reinstalled the image. I uninstalled on my Windows PC, rebooted, did a clean install, and followed the guide you posted. When I connect with Putty it asks for a username/password. I am assuming it wants my unRAID root account, since if I put in my Crashplan credentials I get access denied. When I launch the app on my Windows machine it only lists my Windows machine. I do see an entry for '42' in the main GUI page, which I thought might be my unRAID machine, but it says no files were selected for backup, and I can't browse it. When I try to browse I only see my Windows PC. When I installed my Docker for data I selected /mnt/user/Pictures, as this is all I want to back up. Any idea on what I am missing still? Thanks

     OK, once you connect via putty it should just be a matter of setting the "service port" in your ui.properties file to 4200; that step in Crashplan's docs is kind of buried (see the sketch below). Then when you start the Crashplan client on Windows it should see the folders on unRAID and NOT the folders on Windows. You won't be able to see both at the same time.
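     For anyone following along, the tunnel setup boils down to something like this (a sketch based on the Code42 doc above; the hostname is an example, and 4243 is the port the Crashplan engine listens on):

        # From the client machine (PuTTY can do the same via its tunnel
        # settings): forward local port 4200 to the engine on unRAID
        ssh -L 4200:localhost:4243 root@tower

        # Then in the client's conf/ui.properties, point the GUI at the
        # local end of the tunnel:
        servicePort=4200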
  17. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel with the Crashplan service running on unRAID.
  18. That's a good point; I've got mine set to "minimal": http://support.code42.com/CrashPlan/Latest/Backup/Backup_Settings#Advanced_Settings
  19. Does it mean that if I have 30TB of data, it will take 30GB of RAM just for Crashplan? According to the Crashplan help pages: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes What I do to try to get around this is create multiple backup sets. My theory is that if I can keep each backup set to around 1 TB I'll be OK. It seems to work; I've backed up 3.5 TB so far and haven't had any memory problems. But 30 TB is a different story. That would probably take a few years to back up. I'd use Crashplan for your critical data and then find some other kind of local backup for the less critical data.
  20. Here's a suggestion for a best practice: always add a .gitattributes file to the root of your project that contains these lines:

        # Auto detect text files and force unix-style line endings
        * text eol=lf

     That forces unix-style line endings on text files, which solves problems for people who use Git for Windows to check out a repository to an unRAID share. Without it, Git for Windows uses Windows-style line endings, which causes errors when running shell scripts in Docker.
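     If the repository already has files committed with Windows-style endings, newer versions of Git can re-apply the rules to them (a sketch; --renormalize requires Git 2.16 or later):

        # Re-apply the .gitattributes line-ending rules to files that are
        # already tracked, then commit the result:
        git add --renormalize .
        git commit -m "Normalize line endings to LF"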
  21. Oh, sorry about that. I don't see any service.logs yet, but maybe they will appear once it gets past the "synchronizing" stage. I haven't had any memory problems yet, but that help page says you need 1 GB of RAM for every 1 TB of data being backed up, so I assume I'll need more. I don't know if it helps, but I'm using multiple backup sets in the hope that it will need less RAM.
  22. Thanks! I upgraded to the current Docker and it is synchronizing now. I see the .identity file outside of the Docker, thanks! Oh cool, this is using phusion now too. I don't mean to be a pain... but I wanted to ask about this article: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes It references some log files here: /usr/local/crashplan/log/service.log.# and run.conf: /usr/local/crashplan/bin/run.conf How would you feel about doing the copy/link thing with the /config directory for those? Thanks again!
  23. Thank you for this, gfjardim! This is a fairly complex Docker and I really appreciate the work you put into it! BTW, I passed in "-v /mnt:/mnt" and then when I adopted my unRAID 5 Crashplan account all of the paths stayed the same, which simplified the process. At least I assume it will work; it is still synchronizing. One thing to think about... could this be tweaked to store the identity information outside of the Docker so we don't have to re-adopt it if we rebuild the Docker? Along those lines... it would also be nice if we had access to the logs and to run.conf so we could change the amount of memory available to Crashplan, as described here (and sketched below): http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes Thanks again!
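     For reference, the memory change in that article comes down to raising the -Xmx value in run.conf (a sketch; it assumes the stock heap setting is -Xmx1024m, and 3072m is just an example):

        # Bump the CrashPlan engine's Java heap from 1 GB to 3 GB, then
        # restart the engine for the change to take effect:
        sed -i 's/-Xmx1024m/-Xmx3072m/' /usr/local/crashplan/bin/run.conf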