[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)



On 8/16/2021 at 4:01 PM, Crunklydunks said:

Has the container been updated to the latest version? I've checked for updates and it says the Docker image is up to date, but I'm still getting errors when updating, and I've been getting emails saying that I haven't backed up in a while, even though I do seem to be plugging away at my initial backup (of several terabytes) still. Thanks!

Do you still get the errors with the latest version?

Link to comment
On 8/17/2021 at 5:43 AM, jademonkee said:

Well, not anymore. I'm having the same problem of 'synchronizing file information' endlessly looping.

I'm about ready to chuck in the towel with crashplan and just install another Unraid box at a friend's house and run an rsync once a week.

 

CrashPlan has been updated recently.  This triggers a synchronization.  Is that what you are seeing?

Link to comment
12 minutes ago, Djoss said:

 

CrashPlan has been updated recently.  This triggers a synchronization.  Is that what you are seeing?

I'm seeing a synchronization, yes. However, it's been however many days now and it's never hit 100%. It keeps climbing then dropping back to 0%, just as it did in my earlier posts. I've reached out to CrashPlan support to see if they can shed any light on it.

Link to comment
2 hours ago, jademonkee said:

I'm seeing a synchronization, yes. However, it's been however many days now and it's never hit 100%. It keeps climbing then dropping back to 0%, just as it did in my earlier posts. I've reached out to CrashPlan support to see if they can shed any light on it.

You can look at /mnt/user/appdata/CrashPlanPRO/log/service.log.0 and see if there is anything useful.  Also, maybe try increasing the memory limit to 5GB?
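For example, something like this (the log path is the default appdata location; setting the engine memory limit through the CRASHPLAN_SRV_MAX_MEM environment variable is a rough sketch, so double-check the variable name against the container docs):

# Check the tail of the engine log for errors:
tail -n 100 /mnt/user/appdata/CrashPlanPRO/log/service.log.0

# In the container's Unraid template, set the engine memory limit,
# then restart the container:
CRASHPLAN_SRV_MAX_MEM=5G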

Link to comment

I've just received the following from CrashPlan support:

Quote

This behavior is due to the sync and maintenance conflicting on priority.

Archive maintenance is a regularly scheduled task that runs on each backup destination. Its purpose is to maintain archive integrity and optimize the size of the archives. Maintenance may take several days to run. Typically, maintenance is able to complete and backups resume correctly afterward; however, if the two activities are disrupting each other, we won't see good progress.

Please deauthorize the CrashPlan application. With the CrashPlan desktop application, double click the Code42 icon in the top left of the application (above your device name) to open the Code42/CrashPlan Command Line Interface.

Alternatively: Press Shift + Control + C (Option + Command + C on Mac)

You may then type the following command:

deauthorize

and press enter to run it. This command will cause the CrashPlan application to close.

 

Please don't sign back in yet. I will monitor the progress of maintenance and let you know when to sign back in.

I'll report back on any progress.

Link to comment
13 hours ago, snowboardjoe said:

This is unacceptable.

Yeah, I'm sick of these periods of syncing and no backups.

I've been considering buying an old HP MicroServer and installing it in a closet at a friend's place to run a weekly rsync backup to. However, with CrashPlan at only $12/month, it'll take a little too long to pay off... so I keep giving up on the idea.
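If I ever go through with it, the plan is nothing fancier than a weekly cron job along these lines (hostname and paths are made up for illustration, and it assumes SSH access to the friend's box):

# Mirror the shares to the remote box:
rsync -avz --delete /mnt/user/data/ friend-box:/mnt/user/backups/myserver/
# ...scheduled weekly from cron, e.g. Sunday at 3am:
# 0 3 * * 0  rsync -avz --delete /mnt/user/data/ friend-box:/mnt/user/backups/myserver/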

Link to comment
14 hours ago, snowboardjoe said:

Just tried that and logged back in. Monitoring.

The customer support agent who suggested I deauthorize told me not to log back in until they told me to. It's because there is some maintenance occurring on the data stored on their servers, and the file sync can sometimes interfere with that. So the deauthorization stops the file sync so that the maintenance can complete without interruption. By logging back in immediately, the maintenance doesn't get a chance to complete.

So I'm waiting for the customer support agent to tell me when it's ok to log back in.
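Side note, not something support asked for: since this runs in a container, stopping the container outright is another way to make sure the client stays offline while their maintenance runs. A rough sketch, assuming the default container name:

# Keep the client offline while maintenance runs on their end:
docker stop CrashPlanPRO
# Then, once support gives the all-clear:
docker start CrashPlanPRO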

Link to comment

FYI, customer support have said that my maintenance has now completed and it's now ok for me to log back in again, so I have done so. Will see if the sync finishes and report back either way.

Here's the email as it contains some good info:

Quote

Your device is now out of maintenance so you may feel free to sign back into the CrashPlan app.

Give CrashPlan some time to rebuild your cache and re-scan your hard drive(s), then see if it is able to connect and resume backing up. It will spend a fair amount of time analyzing block data and resynchronizing with the server, but should resume backing up when it is done.

It will appear your backup is starting over, but as it gets to files already backed up, they will be skipped. Since CrashPlan backs up newer and smaller files first, it often needs to run for some time before it is clearly doing this, so just keep letting it run.

More info:
* Sign out of the Code42 app: https://support.code42.com/CrashPlan/6/Configuring/Sign_out_of_the_Code42_app

If, after ~24hrs, behavior does not seem to have improved, please reply and we will be happy to assist further.

Please let me know if you have any further questions.

 

Link to comment

Been using CP for a while now and I'm curious how many CPUs you are allocating to it. I'm aware that CP is extremely slow at uploading, as it took me a year to upload my media files (10TB and growing), even on 500/500 fiber. But does adding more cores make a difference? Any insight would be appreciated. Thank you.
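For comparison's sake: on Unraid, giving a container specific CPUs is normally done by pinning cores, either on the CPU Pinning page or by adding a flag to the template's Extra Parameters. The core numbers below are just an example:

# Restrict the container to cores 0-3 (Extra Parameters field):
--cpuset-cpus=0-3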

Link to comment
  • 2 weeks later...
On 9/8/2021 at 2:48 PM, snowboardjoe said:

For the past two weeks CrashPlan has been telling me maintenance is still in progress and not to authorize my client until I hear from them. I'm now at 29 days with no backups for this host. I don't know how to escalate this issue with them.

 

Day 42 and still no complete backup. I've been working with support and they claim my issue has been escalated, but that does not seem to have changed anything about the urgency to resolve it. They're currently throwing out some ideas, and I've rejected most of them because they don't make sense.

 

For example, they said I needed to increase the Java memory allocation from 2GB to 9GB. Uh, no. I'm only using 700MB. They seem to think this thing is crashing over and over again when it's not. Having to wait 3-5 days for a synchronization to complete is a big problem. Having it repeat this endlessly is a bigger problem.
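That 700MB figure is easy to verify from the host, by the way. Something along these lines, assuming the container is named CrashPlanPRO:

# One-shot snapshot of actual memory use versus the configured limit:
docker stats --no-stream CrashPlanPRO
# List the processes running inside the container:
docker top CrashPlanPRO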

 

Not sure what to tell them, and I don't want to reveal that this is running in a container for fear they'll just hang up on me.

Link to comment
On 9/10/2021 at 11:45 AM, CodeEngie said:

Been using CP for a while now and I'm curious how many CPUs you are allocating to it. I'm aware that CP is extremely slow at uploading, as it took me a year to upload my media files (10TB and growing), even on 500/500 fiber. But does adding more cores make a difference? Any insight would be appreciated. Thank you.

 

CPU is likely not the issue at all. It's been some time, but years ago, with large backups, the client would run a data de-duplication job to find and consolidate redundant data. This was painstakingly slow. There was a setting I added long ago that told the client NOT to do this, and backups were off and flying at record pace. I don't know if this is still a thing; I've not customized that setting in ages, and it may be gone for all I know.
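If anyone wants to go looking for it, the tweak lived in my.service.xml as I recall. This is from memory, so treat the conf path and element names as assumptions; the setting may not exist in current clients at all:

# See whether the de-duplication settings are still present:
grep -i dedup /mnt/user/appdata/CrashPlanPRO/conf/my.service.xml
# Back then, changing values like these from 0 (automatic) to 1 effectively
# disabled de-duplication:
#   <dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
#   <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>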

Link to comment
  • 2 weeks later...

Backups restored. I think it was the backup frequency setting I had that caused the everlasting file synchronizations. Support pretty much told me to add 10GB of RAM to the container (they don't know this is a container) to get reliable backups, because that was their recommendation, and ended it there, even though I'm only backing up 81K files. When I explained that I'm getting good backups, they told me that was likely a false status, as it's not possible to back up 11TB of data with only 2GB of RAM available to the container. Really? Wow. So they can't verify that the file scan completes successfully, and this is one of their core functions. But if I add gobs of RAM, they'll support me and the file scans will be successful. How will they know it's successful if it's already reporting a false positive? Brilliant.

 

I'll be looking at backup alternatives.

Link to comment

I upgraded Unraid 6 days ago, but that log shows it running solid ever since. I was wondering if the uptime from the Docker view accurately represents the true time it was up. Will keep monitoring it.
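To cross-check what the Docker view reports, Docker itself can be asked directly (assuming the container name CrashPlanPRO):

# The status string includes how long the container has been up:
docker ps --format '{{.Names}}: {{.Status}}' | grep -i crashplan
# The exact time the container last started:
docker inspect -f '{{.State.StartedAt}}' CrashPlanPRO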

 

Alternatively, I'm starting to look at AWS Glacier with an s3sync container for the really large media files that are static, to get those out of CP. But then I'm paying for both services. One project at a time; I want to keep the current config stable for now.

 

I think CP changed some retention options over time. I found some ridiculous settings in there keeping extra versions way too long. Right when I fixed that, they announced they're doing the same thing globally. Odd. I think some configuration slipped in there that was generating extra versions for a lot of clients.

Link to comment