[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)

I was excited to see this Docker, so I re-activated my CrashPlan account and updated to the Pro subscription, then installed and ran the Docker, entered my login credentials, and replaced my previously used workstation with my unRAID server as the single machine to back up. I had started to look around a bit when the WebUI became unresponsive.

I went to my user account online, changed the computer name from a cryptic number to "unRAID", then restarted the Docker and got back in to see that the machine name had also changed to "unRAID" in the Docker (so the connection is sound). But while I was trying to configure the backup plan, the WebUI became unresponsive again.

Before I keep going through this cycle of restarting the Docker, I thought I'd post my log file and ask the community what may be causing the WebUI to keep becoming unresponsive.

service.log.0


But on my next try, I was able to configure and start a first backup of one of my smaller sub-folders, just to get my feet wet again, and it's been happily backing up for almost an hour now while I've been able to snoop around the UI and get to know it a little.

So after a couple of initial glitches, all appears to be running smoothly now.



OK, so it took 15.5 hours to back up 15GB to CrashPlan Pro... this is from a machine connected to the internet over a symmetrical 1Gb connection, so upload bandwidth on the sending end can't be the issue.

Is this dog-slow upload performance something that others here have also experienced, or what could be going on?
Needless to say, uploading almost 20TB of data at about 1GB/hr to CrashPlan Pro is a ludicrous task... it would take about 833 days, or roughly 2 years and 3 months... that can't be right.
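As a sanity check on those numbers, here's the same back-of-the-envelope arithmetic, using the observed rate (15GB in 15.5 hours) rather than the rounded 1GB/hr figure:

```python
# Estimate the total upload time for ~20TB at the observed CrashPlan rate.
gb_per_hr = 15 / 15.5      # observed: 15GB took 15.5 hours (~0.97 GB/hr)
total_gb = 20_000          # ~20TB expressed in GB
hours = total_gb / gb_per_hr
days = hours / 24
print(f"{hours:.0f} hours = {days:.0f} days = {days / 365:.1f} years")
# -> 20667 hours = 861 days = 2.4 years
```

Slightly worse than the rounded numbers in the post, but the conclusion is the same: at roughly 1GB/hr, a 20TB initial backup is a multi-year proposition.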

I know this is very unlikely to be the Docker's fault, but I'd be interested to find out what other users' experience with uploading to CrashPlan Pro has been.

2 hours ago, tillkrueger said:


Yes, it's a known fact that uploading can be slow, deduplication being part of the problem.

 

But feel free to contact CrashPlan support to get their view on this.  Just don't mention that you are running a Docker container, because they are quick to say that it's not supported.


I'm having a major issue with the CrashPlan PRO Docker at the moment. Backups just stay on "Synchronizing - Comparing the files on your device and the files on the destination". Sometimes it will briefly show as preparing to upload, and it appears to show the correct number of files / total file size to upload, but it does not progress any further and simply goes back to "Synchronizing" once again.

 

Things I have tried to resolve this:

 

1) Stop the Docker, clear the cache folder, then restart the Docker.

2) Uninstall the Docker, delete the entire appdata/CrashPlanPRO folder, then reinstall the Docker and set it up again - I have tried this both taking over the existing backup and replacing it, and the issue persists.

 

I have set up a temporary local backup location and this does appear to be working, but only if I manually run the backup; for some reason it doesn't appear to be backing up automatically. **Update:** it looks like the local backup may actually be working OK.

 

This has been happening for about 3 weeks now and I'm at a loss on how to fix it. I should also note that, as a test, I set up CrashPlan PRO on my Windows PC and did not have any issues at all backing up to CrashPlan.

 

Any ideas on how I might be able to fix this would be great.

 

I've also had a look at '/mnt/user/appdata/CrashPlanPRO/log/service.log.0' but there is nothing in there that suggests what the issue might be to me.


On 8/4/2018 at 8:07 PM, jdkiwi said:


Did you check the history (Tools->History)? If you want, you can send me your service.log.0 in a private message and I can take a second look at it.

6 hours ago, Djoss said:


 

Thanks for the offer of assistance. I'm unable to see anything in the history that gives me any idea what might be happening, so I've private-messaged you a screenshot of it along with a copy of the service.log.0. I greatly appreciate your help.

 


Hey, I've been having an issue with this: backup sets are always waiting for connection.

I tried the rm cache command but it did not work; I also tried deleting everything in the cache folder manually, but that did not work either.

service.log.0


 

says offline for 22 mins 


 


 

tools > history 

12 hours ago, AKR said:


The problem may be related to the fact that CP wants the new version. I'm currently working on updating the container image to this new version.


When you get this working, Djoss, would you mind telling me/us what upload performance you're getting?

I cancelled my subscription for a refund after seeing that it took 15 hours for a 14GB folder, on a symmetrical 500Mbit connection. I wish it weren't so, because I don't see another unlimited solution out there, and certainly none that is supported with a nice unRAID container such as this one.

18 minutes ago, tillkrueger said:


 

For sure, you should not expect high-speed uploads with CrashPlan. However, with deduplication, a lot of data is not uploaded. So CP could tell you that the backup will take 6 months to complete, but in reality it can finish in a couple of days.


Well, since *all* of my 20TB of data is unique and original content that was created as part of over 20 years worth of productions, there is nothing to deduplicate (I have done my own deduplication of photos and other assets over the years), so the equation is pretty easy: 20TB = 20,000GB @ 1GB/hr = 20,000hrs = 833.33 days = 2.28 years

7 minutes ago, tillkrueger said:


 

You could look at the following article: https://support.code42.com/CrashPlan/4/Configuring/Unsupported_changes_to_CrashPlan_de-duplication_settings

Maybe in your case disabling deduplication would make sense. Did you chat with the CrashPlan support team to see what they have to say about your situation?

 


No, I have not yet talked to CP support about that possibility. Frankly, I have wasted so much of my life these past few years uploading 18TB of data into the cloud, only to find that unlimited plans turn into 5TB plans overnight (I'm looking at you, Bitcasa), or that companies simply go broke and disappear, that I am taking a break from banging my head against the wall. When upload speeds of at least 5MB/sec become the norm, I might take another look, but until then it's an exercise in futility.

On 8/22/2018 at 9:45 AM, tillkrueger said:


 

Deduplication works at the block level, not the file level.

So, for example (very simplified): if you have a file A with 3 blocks (01, A1, H7) and a bigger file B with 4 blocks (33, B3, A1, K7), then after uploading file A, only 3 blocks of file B will be uploaded, since A1 is already there. Both files will then take about the same time to upload.
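That block-level scheme can be sketched in a few lines (a toy illustration only, not CrashPlan's actual algorithm; the two-character block IDs stand in for content hashes):

```python
def blocks_to_upload(file_blocks, seen):
    """Return only the blocks not already at the destination (toy
    block-level deduplication); 'seen' tracks previously uploaded blocks."""
    new = []
    for block in file_blocks:
        if block not in seen:
            seen.add(block)
            new.append(block)
    return new

seen = set()
print(blocks_to_upload(["01", "A1", "H7"], seen))        # ['01', 'A1', 'H7']
print(blocks_to_upload(["33", "B3", "A1", "K7"], seen))  # ['33', 'B3', 'K7']
```

File B's A1 block is skipped because file A already uploaded it, so both files end up transferring three blocks each.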


Yeah, after reading up on it, I do understand that it’s not deduplicating files.

 

One way or another, though, I have no doubt that trying to upload all of that data to the cloud (*any* cloud) again will cost me months of my precious life... again.


Hi, I've just tried to restore some files to /storage and have found, by reading the Github docs, that /storage is read-only by default and I need to make it read-write.

 

But how do I actually do this in unRAID? The 'storage' path config item does not seem to have an Edit button, although Flash and Host Path 3 do.

 

Am I missing something?


1 hour ago, uk100 said:


Toggle the advanced settings/mode and you will then be able to edit the storage setting.


If I pause a restore operation, stop the Docker/server, and then restart them, will the restore operation resume?

9 minutes ago, Gico said:


 

I never tried this personally, but according to https://support.code42.com/CrashPlan/6/Restoring/Download_files_from_the_Code42_app, it should work:

 

Quote

Alternatively, if you don't want to download all of the files at once, you can shut down or put your device to sleep and the download will resume where it left off when it is powered on again.

 

On 8/25/2018 at 5:08 PM, Djoss said:


 

Ah yes of course - Doh!

 

Thanks.


Hello,

CrashPlan PRO on my unRAID server had been humming along for months, but recently it crashed and I haven't been able to get it going again.

I restarted the server, ran a parity check, and restarted the CrashPlan Docker app multiple times, but it only runs for seconds.

I've done a force update of the CrashPlan Docker app, and have also removed and reinstalled it, but still no luck.

I'm really confused by a lot of this and not sure what kind of support files I should attach here... but if anyone can suggest anything to try, let me know what type of additional info I can supply.

Thanks

On 9/2/2018 at 12:25 PM, timeforanewmac said:


 

Can you provide the container's log?  The log is accessible from the Docker page by clicking the icon under the "Log" column.


Thanks for the reply. I looked at the container's log prior to sending it and saw what the error was. Like some others have mentioned, I was also missing the "M" after my memory allocation of 2048. So now it's 2048M, and it's running smoothly with no crashes.

 

I have no idea why it was fine for months and months and then suddenly one day the M was gone. Oh well.

 

I greatly appreciate the very helpful response!


I'm so lost on this. I did the migration to CP for SB months ago and it said it had successfully uploaded 300+GB to the cloud (and had the internet usage to prove it, it would seem). I thought everything was running fine, as I kept getting green check-mark backup reports, until I noticed the fine print showing 0MB being backed up... >:[ (it's been a VERY busy year).

 

CrashPlan can't seem to actually see some unRAID folders for some backup sets, and I can only 'see' ~40GB of the supposed 300+ that were uploaded to the cloud. Where's the rest of my stuff? How come I can see some unRAID server folders in some backup sets but not others? And how come, although I can see some folders in some backup sets, CrashPlan can't actually seem to see them or back them up?

 

I tried contacting Crashplan and got this:

Quote

Thank you for contacting Code42 support!

Can you make sure CrashPlan has read/write access to that /data/user folder and its sub-folders? It looks like CrashPlan is unable to read your file selection at all.

Then, run a file verification scan. To trigger a scan, follow the instructions below:

Open the CrashPlan app.
Press Ctrl + Shift + C
The CrashPlan command-line area opens.
Enter this command: backup.scan
Press Enter.

This should trigger a scan.

Let me know how that goes.

I tried adding a container path named 'User' ( /mnt/user/ ) under 'Add another path, port, variable or device' with full read/write access, but all that accomplished was making CP unable to load. So I deleted that, obviously.

 

Please help :/ Let me know if there are any other screen captures, logs, or other info that would be useful.

 

BU1 public and music photos.jpg

BU2 media only musicphotos.jpg

cloud media only movies.jpg

CP on UR.jpg

Main Page.jpg

