[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)


Recommended Posts

1 hour ago, wickedathletes said:

Popping in to see if CrashPlan is worth trying out again? I haven't used them in about 6 years, and would have 80TB to slowly move to the cloud. Do they still have major RAM issues? Any other options?

I struggle with the 'deep maintenance' that they run monthly on my backups (see my complaints scattered throughout this thread). On my 3.7TB backup it takes days, and backups can't run during that time (I even have to shut down the Docker sometimes to let their maintenance finish running on their server because, for reasons unknown, the app causes it to loop and never finish). I would not recommend trying to back up 80TB to their servers - I think 'deep maintenance' would never finish.

I am in a fairly regular "f&*k this service" mood every month or so, then whatever the problem was fixes itself, I run the maths, and I can't disagree with the value. But this is not a set-and-forget service, so I now have a habit of checking the Docker every second day or so to make sure it's still working, still backing up. You get what you pay for, I guess.

I also think they may get shirty at your 80TB backup ('unlimited' is never unlimited), but I have no experience or even an anecdote to back that up - it's just me thinking out loud. They also recommend 1GB of RAM per TB backed up. I doubt you'll actually need 80GB of RAM for the Docker, but that's what they recommend. Note that the service isn't meant for archival - everything is geared towards looking for changes in files, then uploading and versioning them, so there is a lot of overhead. Thus 80TB of archive will cause the whole backup to run inefficiently (deep maintenance is just one aspect of this).
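On the RAM point: the Docker image this thread covers exposes a variable for the CrashPlan engine's maximum Java heap, so you don't have to live with the default. A hedged sketch - verify the variable name against the image docs for your version; the 8G value and host paths below are purely illustrative:

```shell
# Illustrative only: raise the CrashPlan engine's max heap via the
# image's CRASHPLAN_SRV_MAX_MEM variable (check the image docs for
# your version). The 8G value and host paths are example placeholders.
docker run -d --name=crashplan-pro \
    -e CRASHPLAN_SRV_MAX_MEM=8G \
    -v /mnt/user/appdata/CrashPlanPRO:/config:rw \
    -v /mnt/user:/storage:ro \
    -p 5800:5800 \
    jlesage/crashplan-pro
```

On unRAID the same value can be set as a container variable in the Docker template instead of a raw `docker run`.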

I have been considering buying a second server and storing it at a friend's house for off-site backup (running a weekly script to connect, sync, disconnect). I don't know if that's feasible for an 80TB backup, but I imagine your cloud fees will be huuuge with any provider other than CrashPlan, so it may not take too long to pay off the cost of a second-hand server with some Black Friday/Cyber Monday shucked drives (I am also assuming you have such a friend, with a good internet connection, a spare port on their UPS, room in their cupboard/basement, and who trusts you to place hardware on their network... hrmmmm, lots of maybes here...). Just a thought.

Also: do you really need all 80TB backed up in the cloud? At a guess, most of it will be media that is already 'backed up' somewhere on the internet, and will just (ok, that word is doing some heavy lifting) need to be downloaded again (and may not even really be missed if it was lost). Again, I'm just speaking from my choice in what I back up, so YMMV.

It may even be a good idea to split the backup into different types: irreplaceable and/or frequently updated data backed up using CrashPlan; replaceable/seldom-updated data (i.e. media) in cold/archive storage somewhere (even a series of external hard drives kept offsite).

Just my £0.02

3 hours ago, wickedathletes said:

 would have 80TB to slowly move to the cloud.

I think it's worth mentioning the anecdotal, but credible, reports that once you get above 10TB, they have been known to contact you and require that you reduce your backup size.

 

Also, given their speed limits, you may find that you practically never get a complete initial backup.

 

Despite the increased cost, you may want to consider using Backblaze B2.

 

If you can reduce your 80TB backup, I recommend you do. For example, if that's literally everything you have, perhaps back up only irreplaceable data such as photos and documents.

1 hour ago, jademonkee said:

Just my £0.02

 

Thank you for the full response, it is appreciated. When I stopped using CrashPlan I was uploading about 25TB. For a while, I maintained the original CrashPlan plugin here (long before Docker) haha. They never cared or complained, but the RAM usage was stupid and the service was trash. Sounds like that hasn't changed much.

 

Anything of "value" to me is backed up offsite to my parents' house on portable drives (photography/music). Everything else is just parity at this point. So although I don't need a backup, it would be nice to have if catastrophe struck. Probably not nice enough to pay Backblaze $400 a month, or to spend about $3K on a backup system, either. I might look at Google Drive as well, as they are unlimited for a small enough business fee.

38 minutes ago, hpka said:

I think it's worth mentioning the anecdotal, but credible, reports that once you get above 10TB, they have been known to contact you and require that you reduce your backup size.

I currently have 13TB backed up with CrashPlan (no contact from them), but it is a last resort, as I also back up to another server and external drives.

 

Speed seems to be OK as it just runs in the background and I do not really notice it much.  My initial backup of ~5TB when I started did take 3-4 months.

  • 3 weeks later...

Recently upgraded to unRAID v6.9.2, and my CrashPlan reports are coming back as failed to back up. So I finally got a chance to try logging on to see what the problem might be, and I keep getting 'unable to sign in' through the GUI. So I figured I'd try resetting the password... but there doesn't seem to be any way to do that. The text is there on the GUI, but there doesn't seem to be an actual link attached to it.

 

I'd appreciate some help here with resetting the password and hopefully figuring out why my backup isn't working. Thanks!!

8 hours ago, J.R. said:

Recently upgraded to unRAID v6.9.2, and my CrashPlan reports are coming back as failed to back up. So I finally got a chance to try logging on to see what the problem might be, and I keep getting 'unable to sign in' through the GUI. So I figured I'd try resetting the password... but there doesn't seem to be any way to do that. The text is there on the GUI, but there doesn't seem to be an actual link attached to it.

 

I'd appreciate some help here with resetting the password and hopefully figuring out why my backup isn't working. Thanks!!

You can reset your password using the CrashPlan website (https://www.crashplan.com).


 

So I've been using this Docker since May 2020 without any problems; I have about 5 TB in different backup sets.

 

Since the last update, 8.8.1.36 (it could also be the previous update, since the last 2 were pretty close to each other), some of my backup sets can't be backed up; it throws an exception in the CrashPlan service.log due to invalid file paths, probably with Swedish characters causing the problem.

 

Since the docker is not officially supported I just wanted to rule out that it's some problem with the docker before raising this ticket to Code42 as an application bug.

 

The error in question, which gets repeated all day long for the 2 backup sets that only get to 50% and 75% respectively, states:

 

Caused by: java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: /storage/backup_archive/PhotosArchive/Photos Vol 1/Backed up 030126/Backed up 020919/Äldre med digitalkamera/Storsjöyran2000/DSCF0024.JPG

 

Anyone have any experience with this problem or input on the matter?

On 12/22/2021 at 5:27 AM, bunkermagnus said:

 

So I've been using this Docker since May 2020 without any problems; I have about 5 TB in different backup sets.

 

Since the last update, 8.8.1.36 (it could also be the previous update, since the last 2 were pretty close to each other), some of my backup sets can't be backed up; it throws an exception in the CrashPlan service.log due to invalid file paths, probably with Swedish characters causing the problem.

 

Since the docker is not officially supported I just wanted to rule out that it's some problem with the docker before raising this ticket to Code42 as an application bug.

 

The error in question, which gets repeated all day long for the 2 backup sets that only get to 50% and 75% respectively, states:

 

Caused by: java.nio.file.InvalidPathException: Malformed input or input contains unmappable characters: /storage/backup_archive/PhotosArchive/Photos Vol 1/Backed up 030126/Backed up 020919/Äldre med digitalkamera/Storsjöyran2000/DSCF0024.JPG

 

Anyone have any experience with this problem or input on the matter?

 

This should be fixed with the latest Docker image version.
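For anyone hitting this elsewhere: the exception means the file name's bytes can't be mapped in the JVM's encoding (typically Latin-1 names like 'ä' on a system expecting UTF-8). A small diagnostic sketch - not part of CrashPlan, and it assumes iconv is available - to list the offending names under a directory:

```shell
#!/bin/sh
# find_non_utf8 DIR: print every path under DIR whose name is not
# valid UTF-8. A UTF-8 JVM cannot map such names, which is one way to
# end up with java.nio.file.InvalidPathException. Diagnostic only;
# assumes iconv is available (it is on most Linux systems).
find_non_utf8() {
    find "$1" -print | while IFS= read -r p; do
        if ! printf '%s' "$p" | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1; then
            printf '%s\n' "$p"
        fi
    done
}

# e.g.: find_non_utf8 /storage/backup_archive
```

Renaming (or re-encoding) the flagged files is then a separate decision; this only shows where the problem is.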

  • 2 weeks later...

I logged in to the WebUI today and noticed that it was working on backing up the whole share, and not only the intended folders.

 

In the container settings, Storage is set to "/mnt/user/Lagring", and from what I remember I expected to see this folder with its subfolders when I log in to Manage Files in the WebUI, and there select the folders I want to back up. But I just get to the root and can't find my share; the mnt folder is empty.

 

Can anyone help? Btw, thanks for the Docker, I have been using it for many years now.

On 1/7/2022 at 11:50 AM, Lidde said:

I logged in to the WebUI today and noticed that it was working on backing up the whole share, and not only the intended folders.

 

In the container settings, Storage is set to "/mnt/user/Lagring", and from what I remember I expected to see this folder with its subfolders when I log in to Manage Files in the WebUI, and there select the folders I want to back up. But I just get to the root and can't find my share; the mnt folder is empty.

 

Can anyone help? Btw, thanks for the Docker, I have been using it for many years now.

Inside the container, you should find your data under "/storage".  So "/mnt/user/Lagring" in unRAID is mapped to "/storage" in the container.
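A quick way to confirm the mapping is to list /storage from inside the running container. The container name below is an assumption; substitute whatever yours is called in unRAID:

```shell
# List what the container sees at /storage; it should match the host's
# /mnt/user/Lagring. "CrashPlanPRO" is a placeholder container name.
docker exec CrashPlanPRO ls /storage
```

If that listing is empty, the problem is the volume mapping in the container settings rather than anything inside CrashPlan.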

  • 2 weeks later...
5 hours ago, Gnomuz said:

Hello,

An upgrade of CrashPlan to a new version, 15252000006882, is attempting to install. Thanks in advance for upgrading the container if the upgrade ever becomes mandatory later.

 

Yes, I'm working on this.  A new Docker image should be available today.


I have a question about what keeps happening whenever a new version of the Docker is published.  Every few weeks I'll check in on the Docker and notice that it is eating up 50-60GB of space!  For what it's worth, the space is taken up by files in this folder: (\appdata\CrashPlanPRO\conf\tmp)

Whenever this happens, I check to see if there are updates to the CrashPlan Docker, and 100% of the time there are.  And if I pull up the CrashPlan WebUI, I see some sort of error saying the app couldn't be updated.  So I update the Docker and it resolves the issue (the tmp files are deleted).  But sometimes (before I fix this) it causes problems, because it uses up so much space on my cache drive that Plex and other Dockers start failing.  I'm wondering if this is expected behavior, or whether I have something in CrashPlan misconfigured.  Or, as a last resort, can I sign up to be notified when new CrashPlan updates are pushed?  I realize that the issue may actually reside in CrashPlan's servers demanding use of the new version, and isn't directly tied to the actual Docker image being updated.

Edited by zero_koop
Link to comment
32 minutes ago, zero_koop said:

I have a question about what keeps happening whenever a new version of the Docker is published.  Every few weeks I'll check in on the Docker and notice that it is eating up 50-60GB of space!  For what it's worth, the space is taken up by files in this folder: (\appdata\CrashPlanPRO\conf\tmp)

Whenever this happens, I check to see if there are updates to the CrashPlan Docker, and 100% of the time there are.  And if I pull up the CrashPlan WebUI, I see some sort of error saying the app couldn't be updated.  So I update the Docker and it resolves the issue (the tmp files are deleted).  But sometimes (before I fix this) it causes problems, because it uses up so much space on my cache drive that Plex and other Dockers start failing.  I'm wondering if this is expected behavior, or whether I have something in CrashPlan misconfigured.  Or, as a last resort, can I sign up to be notified when new CrashPlan updates are pushed?  I realize that the issue may actually reside in CrashPlan's servers demanding use of the new version, and isn't directly tied to the actual Docker image being updated.

 

There is a bug in CrashPlan where it doesn't clean up a downloaded update that fails to be applied.  However, the latest version of the image includes a change that should prevent that.
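For anyone still on an older image, the leftover downloads can be cleared by hand. A sketch - the default path assumes the usual unRAID appdata location for the folder mentioned above, and you should stop the container before deleting:

```shell
#!/bin/sh
# clear_update_tmp [DIR]: report, then delete, leftover CrashPlan
# update downloads. The default path assumes the usual unRAID appdata
# location; stop the container before running this against it.
clear_update_tmp() {
    dir="${1:-/mnt/user/appdata/CrashPlanPRO/conf/tmp}"
    [ -d "$dir" ] || { echo "no such directory: $dir"; return 0; }
    du -sh "$dir"                      # show how much space is in use
    find "$dir" -mindepth 1 -delete    # remove contents, keep the dir
}
```

Run it as `clear_update_tmp` (or pass an explicit path); CrashPlan will re-download the update on its next attempt.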

 

1 minute ago, Djoss said:

 

There is a bug in CrashPlan where it doesn't clean up a downloaded update that fails to be applied.  However, the latest version of the image includes a change that should prevent that.

 

Leave it to me to finally ask about an issue just after it has been resolved.  That's great!  But this has been an issue for a while for me.  So either this is just a coincidence or I'm referring to another issue...

On 1/22/2022 at 10:03 PM, zero_koop said:

Leave it to me to finally ask about an issue just after it has been resolved.  That's great!  But this has been an issue for a while for me.  So either this is just a coincidence or I'm referring to another issue...

Probably a coincidence.  The issue was not new, but was visible only when *not* running the latest Docker image.


Not sure if this is the right forum but I felt compelled to post this somewhere. Move it or delete it if you must.

 

I just wanted to say how much I appreciate this docker. After many years of struggle, I finally got rid of CrashPlan so this might come off as something everyone else already knew and I just discovered.

What a joy to get a backup solution that works. No mysterious stopping half-way through a backup. No hang-ups, no wondering what will get backed up next ... everything is clear as day and, as I said, it just works.

Restores are easy too (which, I suppose, is the most important part of a backup plan).

I liked it so much I turned off "Time Machine" on my Macs and got the CloudBerry solution. Turns out it works too. Time Machine was too ... magical ... for my liking. Plus it was a lot slower getting the job done. I guess more processing power is required for time travel than for backing up some files.

 

Thanks to the developers, the contributors, to everyone who had a hand in this. I like it because it works.


I've been having a problem recently: my set didn't change at all, but CrashPlan is finding 1TB of data to upload each day.

Support said that they don't help with container environments, and suggested that, for my size (17TB), maybe I should look into other solutions. I guess I can make a VM just for CrashPlan, but I'd rather keep it in the Docker. Anyone know how to troubleshoot this?

