Location: Cedar Park TX

snowboardjoe's Achievements

  1. I upgraded unRAID 6 days ago, but that log shows it running solid ever since. I was wondering if the uptime from the Docker view accurately represents the true time it was up. Will keep monitoring it. Alternatively, I'm starting to look at AWS Glacier and using an s3sync container for the really large media files that are static, and getting that data out of CP, but then I'm paying for both services. One project at a time; I want to keep this current config stable for now. I think CP changed some retention options over time. I found some ridiculous settings in there keeping extra versions way too long. Right when I did that, they announced they're doing the same thing globally. Odd. I think some configuration slipped in there generating extra versions for a lot of clients.
  2. Backups restored. I think it was a frequency setting I had that caused the everlasting file synchronizations. Support pretty much told me to add 10GB of RAM to the container (they don't know this is a container) to have reliable backups, because that was their recommendation, and ended it there even though I'm only backing up 81K files. When I explained I'm getting good backups, they told me that was likely a false status, as it's not possible to back up 11TB of data with only 2GB of RAM available to the container. Really? Wow. You can't verify the file scan completes successfully, and this is one of your core functions. But if I add gobs of RAM, they'll support me and the file scans will be successful. How will they know it's successful if it's already reporting a false positive? Brilliant. I'll be looking at backup alternatives.
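For anyone who does decide to follow support's advice and raise the engine's memory ceiling, in a containerized setup that is typically done through the image's environment settings rather than inside the app. A minimal sketch, assuming the jlesage/crashplan-pro image and its CRASHPLAN_SRV_MAX_MEM variable (both are assumptions; check your image's docs):

```shell
# Sketch only: raise the CrashPlan engine's Java heap limit to 4G.
# Image name and the CRASHPLAN_SRV_MAX_MEM variable are assumptions
# based on the jlesage/crashplan-pro image; verify against its README.
docker run -d --name CrashPlanPRO \
  -e CRASHPLAN_SRV_MAX_MEM=4G \
  -v /mnt/user/appdata/CrashPlanPRO:/config \
  jlesage/crashplan-pro
```

On unRAID the same variable can be added as an extra environment variable in the container template instead of running docker by hand.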
  3. CPU is likely not the issue at all. It's been some time, but years ago with large backups the client would run a data deduplication job to find and consolidate redundant data. This was painstakingly slow. There was a setting I added long ago that told the client NOT to do this, and backups were off and flying at record pace. I don't know if this is still a thing; I've not customized that setting in ages and it may be gone for all I know.
  4. Day 42 and still no complete backup. I've been working with support and they claim my issue has been escalated, but that does not seem to have changed anything about the urgency to resolve it. They're currently throwing out some ideas, and I've rejected most of them because they don't make sense. For example, they said I needed to increase the Java memory allocation from 2GB to 9GB. Uh, no. I'm only using 700MB. They seem to think this thing is crashing over and over again when it's not. Having to wait 3-5 days for a synchronization to complete is a big problem. Having it repeat this endlessly is a bigger problem. Not sure what to tell them, and I don't want to share that this is a container for fear they'll just hang up on me.
  5. For the past two weeks CrashPlan is telling me maintenance is still in progress and not to authorize my client until I hear from them. I'm now at 29 days of no backups for this host. I don't know how to escalate this issue with them.
  6. I also updated the container to the latest version, and the counter has reset back to 0%. Based on my estimates and the current rate, it will be another 36 hours before this completes. This is unacceptable.
  7. Wondering if I need to do the same thing. No backups for over a week now because it's "Synchronizing block information". I don't recall it ever taking this long. Currently at 57%. Not happy.
  8. SMART settings were altered after upgrading 6.8.3 > 6.9.1. This includes temperature settings and the monitoring of attribute 197 on my SSDs. Is that expected when upgrading the unRAID OS?
  9. Did you find a solution that worked for you? I, too, want to set a preset that automatically selects passthru for all tracks. I don't know how to set this from the GUI. Guessing possibly from the config itself?
  10. I'm getting much faster rates than that and have been using the service for many years now. Is your rate dropping over time? Did it ever complete the initial full backup?
  11. Fixed the issue. Not sure if this is already documented somewhere. The /storage mount point is strictly configured to be read-only (a safe thing to do for security). In order to restore files, you need to create a new mount point in the container configuration. In my case, I just added /restore and mapped it to /mnt/user/scratch/restore. Provided destination /restore to the restore job and it worked just fine.
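The mount-point fix above would look roughly like this at the docker level (image name and host paths here are illustrative assumptions, not the exact template):

```shell
# Keep /storage read-only for safety, and add a separate writable
# /restore mount to use as the destination for restore jobs.
# Image name and host paths are assumptions for illustration.
docker run -d --name CrashPlanPRO \
  -v /mnt/user/appdata/CrashPlanPRO:/config \
  -v /mnt/user:/storage:ro \
  -v /mnt/user/scratch/restore:/restore \
  jlesage/crashplan-pro
```

On unRAID this is the same as adding a second path mapping in the container template, with the container path set to /restore and access mode read/write.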
  12. Restores are still failing here:
      root@laffy:/mnt/user/appdata/CrashPlanPRO/log# more restore_files.log.0
      I 12/23/20 10:25AM 41 Starting restore from CrashPlan PRO Online: 2 files (53.10GB)
      I 12/23/20 10:25AM 41 Restoring files to original location
      I 12/23/20 10:42AM 41 Restore from CrashPlan PRO Online completed: 0 files restored @ 445.2Mbps
      W 12/23/20 10:42AM 41 2 files had a problem
      W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986488868677447596 (Read-only file system)
      W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986489628182015916 (Read-only file system)
      Someone said this might be fixed in the latest version, but I was not sure if I needed to set UID/GID to 0, and whether there were any security concerns with that. UPDATE: Set UID/GID to 0 and restores are now in progress. UPDATE2: Still failed due to read-only status. I have no idea how to restore files. This is pretty serious now.
  13. I'm seeing the same thing. The Dashboard reports I'm using 83% of my 16GB docker image. A report on container size shows I'm only using 4GB. The numbers are not adding up. The main thing that caught my attention is that the directory /var/lib/docker/btrfs is using 42GB? Confused.
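To reconcile numbers like these, it can help to compare Docker's own accounting against what the filesystem reports. A diagnostic sketch (the /var/lib/docker/btrfs path assumes a btrfs-backed docker image file, as unRAID uses by default):

```shell
# What Docker itself thinks is in use (images, containers, volumes,
# build cache), with per-item detail:
docker system df -v

# What the filesystem reports for the btrfs storage driver's data:
du -sh /var/lib/docker/btrfs

# Per-container writable-layer sizes (the "size" column):
docker ps -s
```

A large gap between `docker system df` and `du` output often points at orphaned layers or snapshots rather than live container data.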