[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)



I just wanted to give an update on my situation: I was never able to get CrashPlan to fully back up my 4M+ files (~3.5 TB). Not only did the upload end up capped at ~10 Mbps vs. my 30 Mbps full upload, but the VM latency issues extended beyond the single main VM to all 4 gaming VMs. I ended up spending $30 on a license for CloudBerry Backup for Linux (in the app store, free 30-day trial) and setting up cloud backups to Google Cloud Storage's archive class for ~$5/month in fees (vs. $10 for CrashPlan). I also really like CloudBerry's retention and versioning features, and its encryption and data compression seem very robust. I use it to back up locally to a fireproof USB drive as well. The Docker container uses less RAM and CPU than CrashPlan's.

On 4/19/2020 at 4:22 PM, 0x00000111 said:

I just wanted to give an update on my situation: I was never able to get CrashPlan to fully back up my 4M+ files (~3.5 TB). Not only did the upload end up capped at ~10 Mbps vs. my 30 Mbps full upload, but the VM latency issues extended beyond the single main VM to all 4 gaming VMs. I ended up spending $30 on a license for CloudBerry Backup for Linux (in the app store, free 30-day trial) and setting up cloud backups to Google Cloud Storage's archive class for ~$5/month in fees (vs. $10 for CrashPlan). I also really like CloudBerry's retention and versioning features, and its encryption and data compression seem very robust. I use it to back up locally to a fireproof USB drive as well. The Docker container uses less RAM and CPU than CrashPlan's.

What pricing did you get on Google archival storage? I see they list $0.0012-$0.005 per GB; that's actually a big difference when you're talking thousands of GBs.

 

If I could get my 3 TB backed up for $3.60 @ $0.0012/GB instead, I may want to move over to CloudBerry (or MSP360, as it's now called, I guess).

 

Thanks in advance.


I'm backing up over 10 TB to CrashPlan, so the $10 price for "unlimited" is still a better deal for those backing up 8+ TB, even with the $0.0012/GB pricing (at that rate, 8 TB ≈ 8,192 GB × $0.0012 ≈ $9.83/month, which is roughly the break-even point).

 

I often wish there were a better option than CrashPlan, but every time I look into it, I end up back at CrashPlan.

 

Fortunately, I have no speed or upload issues with CrashPlan and this container.

On 4/20/2020 at 4:43 PM, Hoopster said:

I'm backing up over 10 TB to CrashPlan, so the $10 price for "unlimited" is still a better deal for those backing up 8+ TB, even with the $0.0012/GB pricing.

 

I often wish there were a better option than CrashPlan, but every time I look into it, I end up back at CrashPlan.

 

Fortunately, I have no speed or upload issues with CrashPlan and this container.

I *still* have INOTIFY issues constantly, but I've increased the limit so much that I'm afraid to keep taking it higher...

Other than that, CrashPlan's container works for me. I don't like their restore interface, but I use it so infrequently that I'm okay with it.

On 10/30/2019 at 12:13 PM, xman111 said:

Hey guys, just started using this. Tried deleting a file in my keep directory on my server. I then went to restore it from CrashPlan, but it said it was a read-only file system. Probably a permissions error or something; any ideas how to fix this?

 

Sorry, found the fix: editing the container's storage permissions.

How exactly does one do that? And what did you set it to? I need to do a restore, and of course it's not working, and I can't find documentation about how to make it work... hmmm. Sounds like you hit the nail on the head, but changing the UMASK to 777 seemed to make the Docker container never start up properly.

2 hours ago, RodgMahal said:

How exactly does one do that? And what did you set it to? I need to do a restore, and of course it's not working, and I can't find documentation about how to make it work... hmmm. Sounds like you hit the nail on the head, but changing the UMASK to 777 seemed to make the Docker container never start up properly.

Open the container's settings, then activate the Advanced View (toggle at the top right), edit the Storage item and change the "Access mode" to "Read/Write".
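For context, that "Access mode" setting controls how Unraid bind-mounts the host path into the container; in plain Docker terms it's the :ro/:rw suffix on the volume mapping. A minimal sketch of the equivalent docker run flags (the image and paths follow the usual setup for this container, but treat them as an example):

# Read-only mapping: restores into /storage fail with "read-only file system"
docker run -d --name=crashplan-pro -v /mnt/user:/storage:ro jlesage/crashplan-pro

# Read/write mapping: lets CrashPlan write restored files back to the array
docker run -d --name=crashplan-pro -v /mnt/user:/storage:rw jlesage/crashplan-pro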

On 4/21/2020 at 7:27 PM, CorneliousJD said:

I *still* have INOTIFY issues constantly, but I've increased the limit so much that I'm afraid to keep taking it higher...

I now hit the INOTIFY limit only every 4-6 weeks. I've bumped it up to 8,192,000 on a machine with two always-on Windows VMs and 32 GB of ECC RAM. I'm actually contemplating doubling the memory just to increase INOTIFY further.
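For anyone wondering how that's done: the value being raised is the kernel's fs.inotify.max_user_watches sysctl. A minimal sketch for Unraid (the /boot/config/go script is the usual place to persist it; pick whatever number you settle on):

# Raise the limit immediately (lost on reboot):
sysctl -w fs.inotify.max_user_watches=8192000

# Persist it across reboots via Unraid's boot script:
echo "sysctl -w fs.inotify.max_user_watches=8192000" >> /boot/config/go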

On 4/28/2020 at 7:33 AM, landS said:

I now hit the INOTIFY limit only every 4-6 weeks. I've bumped it up to 8,192,000 on a machine with two always-on Windows VMs and 32 GB of ECC RAM. I'm actually contemplating doubling the memory just to increase INOTIFY further.

Oh wow, I only had mine set at 1.5 million; I've upped it to 3 million for now. I have 128 GB of RAM, so plenty to spare. I guess I could just let it go really, really high then?

 

I haven't noticed it NOT backing anything up, though, even when it hits the INOTIFY issue?
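Before cranking the limit up blindly, it can help to see how many watches are actually in use. A sketch that sums inotify watches across all processes, relying on the standard /proc fdinfo interface (run as root on the host):

# Every line starting with "inotify" in a process's fdinfo is one watch;
# count them across all processes:
cat /proc/[0-9]*/fdinfo/* 2>/dev/null | grep -c '^inotify'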

On 4/20/2020 at 1:35 PM, CorneliousJD said:

What pricing did you get on Google archival storage? I see they list $0.0012-$0.005 per GB; that's actually a big difference when you're talking thousands of GBs.

 

If I could get my 3 TB backed up for $3.60 @ $0.0012/GB instead, I may want to move over to CloudBerry (or MSP360, as it's now called, I guess).

 

Thanks in advance.

It doesn't actually give me a breakdown because I still have $170-ish in credit from my $300 free trial credits for signing up. The estimator says $4.50/month, though. You do have to change the storage class over to archive, and if you have many small files and don't do it in one big upload (i.e., many restarts), then you can easily pay $60-100 in API calls (CloudBerry makes a date/size comparison call per file to decide whether it needs to be re-uploaded or not). But once you get it uploaded, it looks like it should be under $5/month moving forward. And with that $300 free credit, you basically get 4 years' worth of backup, though I think the credits might expire after 1 year, so maybe just 1 year free? I'll let you know next year :P
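That ~$4.50/month figure is consistent with the $0.0012/GB archive-class rate quoted earlier in the thread; a quick back-of-the-envelope check (assuming the ~3.5 TB from the original post):

# 3.5 TB ≈ 3,584 GB of archive-class storage:
echo "3584 * 0.0012" | bc
# => 4.3008, i.e. roughly $4.30/month before API-call charges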

On 4/29/2020 at 2:18 PM, CorneliousJD said:

Oh wow, I only had mine set at 1.5 million; I've upped it to 3 million for now. I have 128 GB of RAM, so plenty to spare. I guess I could just let it go really, really high then?

 

I haven't noticed it NOT backing anything up, though, even when it hits the INOTIFY issue?

I haven't hit the limit yet, but I believe this means I now have 8 GB allocated to CrashPlan and 2 Docker containers, leaving 8 GB for Unraid.

So I am at the highest limit I'm comfortable offering CrashPlan without adding more RAM.

 

It could be unrelated, but when I found the limit had been hit, added new files, and checked the online backup, the files were not present. Restart CrashPlan and within a day they were uploaded.
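For what it's worth, the 8 GB estimate above lines up with the commonly cited approximation of roughly 1 KB of kernel memory per inotify watch on 64-bit systems (an approximation, not an exact figure):

# 8,192,000 watches at ~1 KB each:
echo "scale=1; 8192000 / 1024 / 1024" | bc
# => 7.8, i.e. about 8 GB of kernel memory if every watch were consumed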

On 5/2/2020 at 12:50 AM, 0x00000111 said:

It doesn't actually give me a breakdown because I still have $170-ish in credit from my $300 free trial credits for signing up. The estimator says $4.50/month, though. You do have to change the storage class over to archive, and if you have many small files and don't do it in one big upload (i.e., many restarts), then you can easily pay $60-100 in API calls (CloudBerry makes a date/size comparison call per file to decide whether it needs to be re-uploaded or not). But once you get it uploaded, it looks like it should be under $5/month moving forward. And with that $300 free credit, you basically get 4 years' worth of backup, though I think the credits might expire after 1 year, so maybe just 1 year free? I'll let you know next year :P

Well, thankfully I set a budget on this: backing up less than 50 GB of data cost me $18 over 2 days.

Cancelled that idea right away. I think I'm just going to go with Backblaze B2 instead; at least pricing is extremely predictable that way. They don't charge for Class A API calls, where Google, Azure, Amazon Glacier, etc. all do, which is what kills these ideas for us.


Hey all

 

I started getting this error today:

 

rfbProcessClientProtocolVersion: read: I/O error

 

I can't log in; when I open the Docker web GUI, the VNC client says "disconnected" and it can't connect.

 

Anyone else getting the same thing?

 


16/05/2020 18:26:49 client_count: 0
16/05/2020 18:26:49 Restored X server key autorepeat to: 1
16/05/2020 18:26:49 Client 127.0.0.1 gone
16/05/2020 18:26:49 Statistics events Transmit/ RawEquiv ( saved)
16/05/2020 18:26:49 TOTALS : 0 | 0/ 0 ( 0.0%)
16/05/2020 18:26:49 Statistics events Received/ RawEquiv ( saved)
16/05/2020 18:26:49 TOTALS : 0 | 0/ 0 ( 0.0%)
16/05/2020 18:27:14 Got connection from client 127.0.0.1
16/05/2020 18:27:14 other clients:
16/05/2020 18:27:14 Got 'ws' WebSockets handshake
16/05/2020 18:27:14 - webSocketsHandshake: using base64 encoding
16/05/2020 18:27:14 - WebSockets client version hybi-13
16/05/2020 18:27:14 Disabled X server key autorepeat.
16/05/2020 18:27:14 to force back on run: 'xset r on' (3 times)
16/05/2020 18:27:14 incr accepted_client=3 for 127.0.0.1:40138 sock=10
16/05/2020 18:27:14 webSocketsDecodeHybi: got frame without mask
16/05/2020 18:27:14 rfbProcessClientProtocolVersion: read: I/O error

 

 

EDIT: Randomly started working again... I didn't make any changes.

 

I see now there is a banner saying "Code42 wasn't able to upgrade, but will try again in 1 hour". I'm guessing this has something to do with it. Any way to stop/prevent it from trying to do that?

 

Thanks

On 5/16/2020 at 4:37 AM, Mooks said:

I can't log in; when I open the Docker web GUI, the VNC client says "disconnected" and it can't connect.

This is usually fixed by clearing the browser's cache.

 

On 5/16/2020 at 4:37 AM, Mooks said:

I see now there is a banner saying "Code42 wasn't able to upgrade, but will try again in 1 hour". I'm guessing this has something to do with it. Any way to stop/prevent it from trying to do that?

Are you using the latest version of the Docker image?


I am unable to restore files from my CrashPlan backup. I have the Access Mode set to Read/Write. When I restore, CrashPlan goes through the motions of downloading (it takes several minutes to pull 4 GB down, etc.), but then no files are ever restored.

restore_files.log lists all the files being restored and ends with "Restore from CrashPlan PRO Online completed: 1,463 files restored @ 49.3Mbps", but no files were restored.

I've tried restoring to the original location and to a different location. I've tried setting it to "overwrite" and to "rename", but still nothing.

 

Any ideas?

4 hours ago, khager said:

I am unable to restore files from my CrashPlan backup. I have the Access Mode set to Read/Write. When I restore, CrashPlan goes through the motions of downloading (it takes several minutes to pull 4 GB down, etc.), but then no files are ever restored.

restore_files.log lists all the files being restored and ends with "Restore from CrashPlan PRO Online completed: 1,463 files restored @ 49.3Mbps", but no files were restored.

I've tried restoring to the original location and to a different location. I've tried setting it to "overwrite" and to "rename", but still nothing.

 

Any ideas?

To which location are you restoring? If you log in to the container, can you see the restored files?
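For anyone following along, checking from inside the container is a one-liner with docker exec (the container name below is the usual one for this Unraid template, and the destination path is just an example):

# Open a shell inside the running container:
docker exec -it CrashPlanPRO sh

# Then list the restore destination to see whether the files landed, e.g.:
ls -la /storage/some-share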

21 hours ago, Djoss said:

This is usually fixed by clearing the browser's cache.

 

Are you using the latest version of the Docker image?

 

Thanks for your reply. At the time, my container was definitely up to date, yeah, but I have since checked again and there was an update. I applied it and we're all good now.

 

Thanks!

On 5/19/2020 at 12:34 PM, Djoss said:

To which location are you restoring? If you log in to the container, can you see the restored files?

I've tried restoring to just about anywhere the interface will let me. Example: the container path /storage/... maps to the host path /mnt/user. This allows me to back up from anywhere on the Unraid array. I've tried restoring to several different shares in that path. I've also tried restoring to /config in the container; I logged into the container, but there are no restored files in that path. I've also tried restoring to the "original location", but that yielded the same results.

 

restore_tool_app.log contains these lines for the most recent attempt:

INFO : 2020/05/22 06:28:07.715174 restore_tool.go:99: RestoreTool: Start
INFO : 2020/05/22 06:28:07.763725 restore_tool.go:100: Runtime directory: /tmp/com.code42.restore/app
INFO : 2020/05/22 06:29:47.520276 restore_tool.go:472: Received terminate gracefully message
INFO : 2020/05/22 06:29:47.520737 restore_tool.go:474: Received keep-alive message
INFO : 2020/05/22 06:29:47.526648 restore_tool.go:197: Error received reading size (possibly end-of-file). err=EOF
INFO : 2020/05/22 06:29:48.525365 restore_tool.go:171: RestoreTool: Graceful Exit

 

Tail of history.log.0 contains:

I 05/22/20 06:28AM Starting restore from CrashPlan PRO Online: 3,445 files (135.70MB)
I 05/22/20 06:28AM Restoring files to /config
I 05/22/20 06:29AM Restore from CrashPlan PRO Online completed: 3,445 files restored  @ 52.4Mbps

 

The last line in restore_files.log.0 contains:

 05/22/20 06:29AM 41 Restore from CrashPlan PRO Online completed: 3,445 files restored  @ 52.4Mbps

 

Tail of service.log.0 contains:

[05.22.20 06:29:48.318 INFO  ub-BackupMgr om.backup42.service.AppLogWriter] WRITE app.log in 499ms
[05.22.20 06:29:48.350 INFO  ub-BackupMgr 42.service.history.HistoryLogger] HISTORY:: Restore from CrashPlan PRO Online completed: 3,445 files restored  @ 52.4Mbps

 

Tail of ui.log contains (note: times in this log are UTC and I'm in CDT, so UTC-5 for local time):

2020-05-22T11:28:07.392Z - info Restore: Successfully created restore job
2020-05-22T11:28:07.688Z - info: Launching process: (20016) /usr/local/crashplan/bin/restore-tool -userName=app -logDir=/config/.code42/log /tmp/restore-pipe-955293482861554839-request /tmp/restore-pipe-955293482861554839-response
2020-05-22T11:29:48.532Z - info: Process exited cleanly with code 0 and signal null

 

A restore takes as long as you would expect, counting up the amount of data it's restoring, etc. The log files even show the throughput figures.

It's just not putting the restored files anywhere I can find.

 


Well... now restores are working. I do not know what changed. Maybe some folder permissions; maybe I'm crazy. In any case, restores are working like they should, and I no longer care why they didn't before. I was able to restore a photograph library from 2016 that got corrupted some time between January and May this year. I'm happier about that than I am curious about why the previous restore attempts didn't work.

 

So, in the words of the late, great Roseanne Roseannadanna....

....

...never mind...

On 5/24/2020 at 12:49 PM, tknx said:

I just tried to open the WebUI and am getting a "Server disconnected (code 1006)" error. Nothing in the log. Any ideas?

Try with another browser, or try clearing the browser cache, to see if this helps.


When /usr/local/sbin/mover runs, it exits with status 1. Because mover redirects its logging to /dev/null, I executed it manually from the command line, and it complained that this link is invalid:

root@Truesource:/mnt/user/appdata/CrashPlanPRO/log# ls -ld log
lrwxrwxrwx 1 root root 11 Jun 12 10:20 log -> /config/log

The link is valid within the container's context, just not outside the container.
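A quick way to confirm which links are dangling from the host's point of view is find's -xtype l test, which matches symlinks whose target doesn't resolve (the path follows the one shown above):

# List symlinks under the CrashPlan appdata share that are broken on the host:
find /mnt/user/appdata/CrashPlanPRO -xtype l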

 

Is anyone else seeing this? If so, is there any way to fix it? I'd prefer to see mover exit 0 in my syslog!

 

Thanks.
