[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)


Recommended Posts

46 minutes ago, tcharron said:

If you log in to your account at https://www.crashplanpro.com/app/#/console/ , what does it say?

I wonder if you have somehow ended up with recent backups being identified as a new device.

Thanks for replying, tcharron.

 

My dashboard says 2.4TB used.

If I click on Devices > my server (the only device under 'Active'), it shows as 2.4TB stored. But if I click on the entry for my server, it says under 'Selected' that there's only 212GB. I don't know what 'selected' means, but it's clearly got something to do with this.

If I click on the 'Backup' tab on that same page, it appears that all my folders are indeed present (at least the top level ones - I can't drill down in this UI).

In Devices > 'Deactivated' there is one other device, but I'm pretty sure it's just my old backup from before they made everyone switch to a 'pro' account. And it's 0MB anyhow. 

Link to comment

tl;dr: My Docker container keeps crashing - like every 30 seconds. Occasionally I see the message about running out of memory, but not every time.

 

This container keeps crashing around every 30 seconds or so, sometimes with the "CRASHPLAN_SRV_MAX_MEM" error, but sometimes not. Sometimes there is no error at all, it's like the container just reboots. I currently have 16G allocated in the container settings and it still keeps crashing. I have bumped this as high as 20G and the issue persists, as if changing that number doesn't affect anything. 

 

The full size of the backup is about 4TB, and there is only 44GB remaining to do. It's basically 99% done but can't seem to finish that last 1%. Does it make sense that I would have to allocate all of my server memory to this container in order to complete a 4TB backup?

 

Any ideas why it could be crashing so much or how to fix?

 

Unraid container logs aren't showing anything unless the MAX_MEM error pops up. And the Tools>History window inside the app just shows it stopping and starting over and over.
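
For anyone following along: the error names the container's CRASHPLAN_SRV_MAX_MEM environment variable, which can also be set directly on the container. A minimal sketch of doing that with `docker run` - the image name matches this container's Docker Hub repo, but the volume paths and the 8G value are just example placeholders for a typical Unraid setup:

```shell
# Example only: host paths and the memory value are placeholders.
docker run -d \
    --name=crashplan-pro \
    -p 5800:5800 \
    -v /mnt/user/appdata/crashplan-pro:/config:rw \
    -v /mnt/user:/storage:ro \
    -e CRASHPLAN_SRV_MAX_MEM=8G \
    jlesage/crashplan-pro
```

On Unraid this is equivalent to adding the variable in the container template's settings page.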

Edited by scud133b
Link to comment
On 2/8/2020 at 5:35 PM, scud133b said:

tl;dr: My Docker container keeps crashing - like every 30 seconds. Occasionally I see the message about running out of memory, but not every time.

 

This container keeps crashing around every 30 seconds or so, sometimes with the "CRASHPLAN_SRV_MAX_MEM" error, but sometimes not. Sometimes there is no error at all, it's like the container just reboots. I currently have 16G allocated in the container settings and it still keeps crashing. I have bumped this as high as 20G and the issue persists, as if changing that number doesn't affect anything. 

 

The full size of the backup is about 4TB, and there is only 44GB remaining to do. It's basically 99% done but can't seem to finish that last 1%. Does it make sense that I would have to allocate all of my server memory to this container in order to complete a 4TB backup?

 

Any ideas why it could be crashing so much or how to fix?

 

Unraid container logs aren't showing anything unless the MAX_MEM error pops up. And the Tools>History window inside the app just shows it stopping and starting over and over.

https://support.code42.com/CrashPlan/6/Troubleshooting/Adjust_Code42_app_settings_for_memory_usage_with_large_backups

Have you tried this?

Link to comment
8 hours ago, jademonkee said:

Yes, the main topic in that article is increasing the memory allocation. It says a 4TB backup should need 4GB of memory on a Linux system. I've actually adjusted this in the Docker settings to *20GB* and it still keeps crashing.

 

Note: I'm assuming the article's CLI command does the same thing as the memory allocation option in the Docker container settings:

java mx 1536

If this is actually something different then maybe I haven't tried it yet.
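
The article's rule of thumb (roughly 1 GB of JVM heap per 1 TB of backup selection, or per million files) can be turned into a quick sanity check. A sketch - the function name is mine, and the guideline is only a starting point, not a guarantee:

```shell
# Rule of thumb from the Code42 article: ~1 GB (1024 MB) of JVM heap
# per 1 TB of backup selection (or per 1 million files).
recommended_heap_mb() {
    tb=$1
    mb=$((tb * 1024))
    # Never go below the app's ~1 GB default allocation.
    [ "$mb" -lt 1024 ] && mb=1024
    echo "$mb"
}

recommended_heap_mb 4    # 4 TB selection -> prints 4096
```

By that guideline a 4TB selection should be comfortable well below 16-20GB, which suggests the crashes here aren't a simple heap-sizing problem.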

Link to comment
1 minute ago, scud133b said:

Yes, the main topic in that article is increasing the memory allocation. It says a 4TB backup should need 4GB of memory on a Linux system. I've actually adjusted this in the Docker settings to *20GB* and it still keeps crashing.

 

Note: I'm assuming the article's CLI command does the same thing as the memory allocation option in the Docker container settings:


java mx 1536

If this is actually something different then maybe I haven't tried it yet.

My bad: I was thinking it was different to the option you'd changed, but it's actually the same thing. Sorry 'bout that.

Link to comment
On 2/5/2020 at 6:23 AM, jademonkee said:

I rebooted my network equipment and server this morning, and thought I'd just check that everything came back up happy, and upon entering CrashPlan, found that it thinks I only have 228GB of backups - but I actually have about 3TB.

You can see what I mean in the attached screenshots: 

1 showing the main screen with only 228.4GB backed up

and the other showing the Preferences > Destinations size of 3TB.

 

I fairly frequently open the CrashPlan UI to check on it, so this has only happened either in the last few days, or since my server reboot. 

I restarted the Docker to see if it fixed anything, but it remains the same. 

As you can see from the main screenshot, no maintenance is currently being performed.

I went through the file list (under the manage files button), and all major folders appear in there as backed up.

 

Does anybody know what's happening? Or is this something I should email CrashPlan about?

Thanks for your help.

 

CrashPlan_backupsize.png

CrashPlan_mainscreen.png

 

I do not see this as a problem.  The size of all files you back up can differ from the space used on CrashPlan's server.

 

CrashPlan keeps multiple versions of the same file, so files that change frequently can accumulate a lot of versions.  Also, files you delete are kept for a certain amount of time before being removed from the backup.
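
As a toy illustration of why the stored size can exceed the selected size (all numbers below are made up, not from this account):

```shell
# Toy numbers: a 1 GB file revised 5 times, plus 2 GB of deleted files
# still inside the retention window.
selected_gb=1
versions=5
deleted_retained_gb=2
stored_gb=$(( selected_gb * versions + deleted_retained_gb ))
echo "$stored_gb"    # prints 7: 7 GB stored vs 1 GB currently selected
```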

Link to comment
On 2/8/2020 at 12:35 PM, scud133b said:

tl;dr: My Docker container keeps crashing - like every 30 seconds. Occasionally I see the message about running out of memory, but not every time.

 

This container keeps crashing around every 30 seconds or so, sometimes with the "CRASHPLAN_SRV_MAX_MEM" error, but sometimes not. Sometimes there is no error at all, it's like the container just reboots. I currently have 16G allocated in the container settings and it still keeps crashing. I have bumped this as high as 20G and the issue persists, as if changing that number doesn't affect anything. 

 

The full size of the backup is about 4TB, and there is only 44GB remaining to do. It's basically 99% done but can't seem to finish that last 1%. Does it make sense that I would have to allocate all of my server memory to this container in order to complete a 4TB backup?

 

Any ideas why it could be crashing so much or how to fix?

 

Unraid container logs aren't showing anything unless the MAX_MEM error pops up. And the Tools>History window inside the app just shows it stopping and starting over and over.

Not sure what you mean by "CRASHPLAN_SRV_MAX_MEM error".  Can you provide the exact error(s) ?

Link to comment
6 hours ago, Djoss said:

Not sure what you mean by "CRASHPLAN_SRV_MAX_MEM error".  Can you provide the exact error(s) ?

Here is the error:

**CrashPlan for Small Business's memory consumption comes close to its limit.** More than 75% of allocated memory is used. Consider increasing memory allocation to avoid unexpected crashes. This can be done via the CRASHPLAN_SRV_MAX_MEM environment variable.

(And it provides successively worse errors; e.g., 90% of memory is used, then eventually all memory is used and it crashes.)

 

Space used for the backup is 6TB, and I've allocated much more than the GB equivalent of RAM (12GB, then 16GB, then 20GB), and the errors are still appearing with crashes.

 

I just rebooted the server for the first time in a while just to do a gut check here.

Edited by scud133b
Link to comment
9 hours ago, SNDS said:

Semi-related: has anyone had trouble with CrashPlan Pro backing up large volumes of files? I have approx 50TB that I want to back up.

I'd recommend a different service for that much data - they're not really unlimited, and may end up booting you off the service for a backup that large.

https://www.reddit.com/r/Crashplan/comments/ezuztk/warning_unlimited_not_really_unlimited/

 

Edited by jademonkee
Link to comment
On 2/5/2020 at 11:23 AM, jademonkee said:

I rebooted my network equipment and server this morning, and thought I'd just check that everything came back up happy, and upon entering CrashPlan, found that it thinks I only have 228GB of backups - but I actually have about 3TB.

You can see what I mean in the attached screenshots: 

1 showing the main screen with only 228.4GB backed up

and the other showing the Preferences > Destinations size of 3TB.

 

I fairly frequently open the CrashPlan UI to check on it, so this has only happened either in the last few days, or since my server reboot. 

I restarted the Docker to see if it fixed anything, but it remains the same. 

As you can see from the main screenshot, no maintenance is currently being performed.

I went through the file list (under the manage files button), and all major folders appear in there as backed up.

 

Does anybody know what's happening? Or is this something I should email CrashPlan about?

Thanks for your help.

 

Don't know how, don't know why, but as of this morning, it's fixed itself.

Maybe this has something to do with me raising a ticket with CrashPlan support yesterday? I don't know.

Link to comment
2 minutes ago, xman111 said:

I am surprised people still use CrashPlan. Months to upload, deleting files, saying it's unlimited when it's not... I am done with them.

I think it's coz they're cheap. 

Serious question, though: what's a good alternative? I've been thinking of chucking in the towel, but haven't found anything even close in price.

  • Like 2
Link to comment
2 minutes ago, jademonkee said:

I think it's coz they're cheap. 

Serious question, though: what's a good alternative? I've been thinking of chucking in the towel, but haven't found anything even close in price.

I'd like to know this as well. My alternative, currently, is working with family members to set up replicated servers offsite and using an rclone Docker service to automatically mirror my drives to the other server(s).
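
A sketch of what that mirroring job might look like with rclone - the remote name `offsite` and the paths are hypothetical, standing in for a remote configured via `rclone config` (e.g. SFTP to a family member's server):

```shell
# Mirror a share to a hypothetical "offsite" remote; sync makes the
# destination match the source, deleting remote files removed locally.
rclone sync /mnt/user/photos offsite:backups/photos \
    --transfers 4 \
    --checksum \
    --log-file /var/log/rclone-photos.log
```

Scheduled from cron (or Unraid's User Scripts plugin), this gives a basic push-style replica, though unlike CrashPlan it keeps no file version history on its own.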

Edited by SNDS
  • Like 1
Link to comment

I personally built a FreeNAS server and keep it at my parents' house with enough storage to hold all my movies and important stuff. It was built with leftover computer parts and all the hard drives I used to have in Unraid that I upgraded. I know that isn't for everyone. I also use Duplicacy with OneDrive and Google Drive. I just found CrashPlan slow and buggy, and their business practices are a little shady.

Edited by xman111
  • Like 1
Link to comment
7 minutes ago, SNDS said:

I have almost 50TB, and that is growing with my personal work files (digital design/photography/videography assets and project files) plus DVD and Blu-ray backups. It's difficult to set up that much storage offsite affordably.

For sure, that is a lot of storage. I am surprised you didn't get an email from them saying you have too much on their system; you still may. When you get into that much storage, cloud storage really doesn't make sense. Or you pay the $10 a month for Google Drive for Business, and if they ever enforce the 5-user minimum, you might have to just pay the $50 a month. You have a better chance with them than with CrashPlan.

Edited by xman111
Link to comment
28 minutes ago, xman111 said:

For sure, that is a lot of storage. I am surprised you didn't get an email from them saying you have too much on their system; you still may. When you get into that much storage, cloud storage really doesn't make sense. Or you pay the $10 a month for Google Drive for Business, and if they ever enforce the 5-user minimum, you might have to just pay the $50 a month. You have a better chance with them than with CrashPlan.

I'm not actually a customer. I'm looking into getting a backup service or solution. Crashplan was an initial recommendation, but I'd like to avoid service cut off as my storage needs grow.

Link to comment
On 2/11/2020 at 12:13 PM, xman111 said:

I think the one guy on Reddit only had like 5TB, and they were cutting him off.

I’m at 13TB uploaded for my unRAID server and another couple TB for my Mac. I still have 7TB to go on the server. Luckily I found a way to speed up the backups or it would have taken years to get this far. Even still it certainly isn’t terribly efficient. 
 

I do wish there were more options in backup services that can run native Docker though. 

Link to comment
7 hours ago, acurcione said:

I’m at 13TB uploaded for my unRAID server and another couple TB for my Mac. I still have 7TB to go on the server. Luckily I found a way to speed up the backups or it would have taken years to get this far. Even still it certainly isn’t terribly efficient. 
 

I do wish there were more options in backup services that can run native Docker though. 

I would expect a letter, and have a backup plan. Not sure if they will, but I wouldn't trust them.

Link to comment
4 minutes ago, acurcione said:

Saw there was an update this morning, so I updated and waited for CP to do its scanning thing, but I wanted to verify settings in the my.services.xml file and... there is no my.services.xml file in the conf directory now! Where the heck did it go??

I pulled a copy from my automated backup on the array and restarted CP. Hope like hell it still reads the file!
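
Keeping that kind of automated copy around is easy to script. A sketch of a pre-update snapshot helper - the function name and the example paths are mine, not from the container:

```shell
# Hypothetical helper: snapshot CrashPlan's conf file before an update,
# so a missing or mangled my.services.xml can be restored afterwards.
backup_conf() {
    conf_dir=$1
    backup_dir=$2
    mkdir -p "$backup_dir"
    # Timestamped copy so multiple snapshots can coexist.
    cp "$conf_dir/my.services.xml" \
       "$backup_dir/my.services.xml.$(date +%Y%m%d-%H%M%S)"
}

# Example (paths are placeholders for your appdata mapping):
# backup_conf /mnt/user/appdata/CrashPlanPRO/conf /mnt/user/backups/crashplan
```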

Link to comment
