[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)



  • 1 month later...

Restores are still failing here:

root@laffy:/mnt/user/appdata/CrashPlanPRO/log# more restore_files.log.0

I 12/23/20 10:25AM 41 Starting restore from CrashPlan PRO Online: 2 files (53.10GB)
I 12/23/20 10:25AM 41 Restoring files to original location
I 12/23/20 10:42AM 41 Restore from CrashPlan PRO Online completed: 0 files restored  @ 445.2Mbps
W 12/23/20 10:42AM 41 2 files had a problem
W 12/23/20 10:42AM 41  - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986488868677447596 (Read-only file system)
W 12/23/20 10:42AM 41  - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986489628182015916 (Read-only file system)

 

Someone said this might be fixed in the latest version, but I was not sure whether I needed to set UID/GID to 0, or whether there are any security concerns with doing that.

 

UPDATE: Set UID/GID to 0 and restores are now in progress.

UPDATE 2: The restore still failed due to the read-only file system. I have no idea how to restore files. This is pretty serious now.

Edited by snowboardjoe

I know this question isn't really about the app itself so much as the CrashPlan service, but since you guys are all using it I thought I'd ask here. How slow are the backups? I have 12TB of data that I'd like to back up, and I add around 500GB per month. From what I've been reading, it seems like my server would be in a constant state of backing up because the service is apparently pretty slow. My upload speed is between 20 and 25Mbps, but from what I gather the CrashPlan service is substantially slower than that, and one can only expect it to back up around 10GB per day. That means a single 4K Blu-ray movie would take nearly a week to back up, and with 12TB of data the initial backup would take somewhere around 3.5 years... and that's if I didn't add any more data.

 

Is this what you guys are experiencing? How are you able to use it if it's really that slow?
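For a rough sanity check on those numbers, the raw line rate alone gives an upper bound (a sketch that ignores compression, throttling and protocol overhead):

# 12 TB = 12,000,000 MB = 96,000,000 Mb; at a sustained 20 Mbps:
echo "scale=1; 96000000 / 20 / 86400" | bc    # ≈ 55.5 days
# at the ~10 GB/day figure instead: 12,000 GB / 10 GB/day = 1,200 days ≈ 3.3 years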

13 hours ago, SPOautos said:

I know this question isn't really about the app itself so much as the CrashPlan service, but since you guys are all using it I thought I'd ask here. How slow are the backups? [...] How are you able to use it if it's really that slow?

I've just installed the container and activated the 30-day trial.

 

First, not the slightest issue setting it up, very easy to install. I just set "Maximum memory" to 4096M to avoid crashes due to low memory.
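For anyone running the image outside the unRAID template, a minimal sketch of the equivalent setting (assuming Djoss's jlesage/crashplan-pro image and its CRASHPLAN_SRV_MAX_MEM variable; paths and ports here are illustrative):

# Sketch: start the container with a 4 GB memory ceiling for the CrashPlan engine
docker run -d --name=crashplan-pro \
    -e CRASHPLAN_SRV_MAX_MEM=4096M \
    -v /mnt/user/appdata/CrashPlanPRO:/config:rw \
    -v /mnt/user:/storage:ro \
    -p 5800:5800 \
    jlesage/crashplan-pro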

 

As for the upload bandwidth, my feelings are mixed so far. I have 2.5 TB to back up and started the process on Sunday. Until Sunday 11pm (all times CET), the throughput was 16 Mbps (2 MB/s). Then it stayed between 32 and 40 Mbps all day on Dec 21st, which is the practical limit for my 4G internet connection. Great!

I then added other shares to the backup set, and since Dec 22nd the average is back to 15-16 Mbps.

 

So we are clearly throttled when backing up. They admit it between the lines in their FAQ, stating that users are not individually throttled, but that since the server-side bandwidth is shared, there are limitations. If that's true, they don't have enough bandwidth to provide a decent service. But I have my doubts, as my upload is obviously capped at 16 Mbps most of the time, which shouldn't be the case all day long unless the server-side bandwidth is ridiculously undersized.

 

Personally, I'll let the initial backup finish (4-5 days, less if I get decent speeds again), and then I'll see whether the service is viable on a day-to-day basis. But I must admit I share your doubts...

3 hours ago, Gnomuz said:

I've just installed the container and activated the 30-day trial. [...] But I must admit I share your doubts...

 

Well, if you're able to upload 2.5TB in around 5 days, that's not nearly as bad as what I was reading elsewhere (which was older info). That sounds fast enough to do mine in about a month.

 

How is it on your resources? You mentioned the RAM limit you gave it... how about your CPU and HDDs? Is it using enough resources that you can tell it's there working?

19 hours ago, snowboardjoe said:

Restores are still failing here: [...] UPDATE 2: The restore still failed due to the read-only file system. I have no idea how to restore files. This is pretty serious now.

Fixed the issue. Not sure if this is already documented somewhere. The /storage mount point is configured read-only (a safe thing to do for security), which is why CrashPlan's .cprestoretmp files could not be written there. To restore files, you need to add a new, writable mount point in the container configuration. In my case I added /restore, mapped it to /mnt/user/scratch/restore, gave /restore as the destination for the restore job, and it worked just fine.
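For reference, a sketch of what that extra mapping looks like on a plain docker run (the /mnt/user/scratch/restore path is just this post's example, not a required layout):

# /storage stays read-only for safety; /restore is a separate writable target
docker run -d --name=crashplan-pro \
    -v /mnt/user/appdata/CrashPlanPRO:/config:rw \
    -v /mnt/user:/storage:ro \
    -v /mnt/user/scratch/restore:/restore:rw \
    -p 5800:5800 \
    jlesage/crashplan-pro
# then select /restore as the destination in the restore dialog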

17 hours ago, SPOautos said:

I know this question isn't really about the app itself so much as the CrashPlan service... How are you able to use it if it's really that slow?

I'm getting much faster rates than that and have been using the service for many years now. Is your rate dropping over time? Did it ever complete the initial full backup?

On 12/24/2020 at 3:02 PM, SPOautos said:

 

Well, if you're able to upload 2.5TB in around 5 days, that's not nearly as bad as what I was reading elsewhere. [...] Is it using enough resources that you can tell it's there working?

Hi,

 

Well, the backup process has been running continuously for four and a half days now, so I can step back a bit more.

The data to be backed up is 952 GB locally; 891 GB have been completed, and the remaining 61 GB should be uploaded within 11 hours from now. So the overall "performance" comes to 952 GB in 127 hours, an average of 180 GB per day.

Roughly, that is 2.1 MB/s or 17 Mbps, which is consistent with the obvious throttling I can see in Grafana for the CrashPlan container's upload speed. Data is compressed before being uploaded, so translating the size of the data to back up into network upload volume is not totally accurate, and the level of compression will depend heavily on the data you back up. For me, the 893 GB backed up so far translated into 787 GB uploaded, i.e. a compression ratio of 88%.

To sum up, if you get the same upload speed and compression ratio as me, your initial 12TB backup should generate about 10.8 TB (10,830 GB) to upload at an average speed of 180 GB per day, i.e. circa 60 days... By the way, as the upload speed of your internet connection seems to be 20-25 Mbps, the best you could expect for this amount of data is circa 40-50 days. So you wouldn't be that throttled.

As for the 10GB per day you heard of, I suppose that's what you found in the CrashPlan FAQ. Let's call it their commitment of sorts; even if it's not legally binding, they take very, very little risk in not fulfilling it, since it's circa 1 Mbps...
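Those conversions are easy to sanity-check from the figures above:

# 180 GB/day expressed as MB/s and Mbps
echo "scale=2; 180000 / 86400" | bc        # ≈ 2.08 MB/s
echo "scale=1; 180000 * 8 / 86400" | bc    # ≈ 16.7 Mbps
# observed compression ratio
echo "scale=2; 787 / 893" | bc             # ≈ 0.88
# the FAQ's ~10 GB/day as a line rate
echo "scale=2; 10000 * 8 / 86400" | bc     # ≈ 0.93 Mbps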

 

As for system resources, the container shows an average CPU load of 9% (on a Ryzen 7 3700X), an average memory load of 1.2 GB (out of 32 GB), and a constant array read I/O of 2 MB/s. So you can see it's running, but it has a low footprint on the overall server load.

 

I hope that helps you decide on your backup strategy.

Edited by Gnomuz
On 12/25/2020 at 2:54 AM, Gnomuz said:

Well, the backup process has been running continuously for four and a half days now, so I can step back a bit more. [...] I hope that helps you decide on your backup strategy.

 

Thank you! Even at 60 days, that's better than where I'll be in 60 days if I don't do it. I appreciate the very detailed feedback!


Hi

 

I am having trouble using a reverse proxy to access CrashPlan (I am using Nginx Proxy Manager to manage this).

 

When accessing CrashPlan via the reverse proxy I get a red cross and error 1006, a server disconnect (1006 is the WebSocket abnormal-closure code).

 

I got around this on another container (CloudBerry) by using HTTPS and setting a login on the GUI. I have tried enabling "Secure Connection" in the docker template, but this did not work.

 

Any suggestions appreciated.

 

The conf file is below.

server {
  set $forward_scheme http;
  set $server         "[local ip of server]";
  set $port           7810;

  listen 8080;
  listen [::]:8080;

  listen 4443 ssl http2;
  listen [::]:4443;

  server_name crashplan.mydomain.com;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-14/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-14/privkey.pem;

  # Block Exploits
  include conf.d/include/block-exploits.conf;

  access_log /config/log/proxy_host-10.log proxy;

  location / {
    # Force SSL
    include conf.d/include/force-ssl.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
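The Upgrade/Connection headers in the location block are the critical part, since the container's web UI runs over a WebSocket. A common nginx pattern worth trying (a hypothetical variant, not something Nginx Proxy Manager requires) derives the Connection header from the client's Upgrade header via a map:

# Hypothetical variant, placed in the http context:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
# then, inside location /:
#   proxy_set_header Upgrade    $http_upgrade;
#   proxy_set_header Connection $connection_upgrade;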

 

 

7 hours ago, ezzys said:

I am having trouble using a reverse proxy to access CrashPlan (I am using Nginx Proxy Manager to manage this). When accessing CrashPlan via the reverse proxy I get a red cross and error 1006. [...] Any suggestions appreciated. [nginx conf quoted above]

 

Can you try the same steps proposed earlier:

  • Access it in a private/incognito window.
  • Look at the browser's console (in developer tools) for any errors. This should indicate the reason for the failure.

 

17 hours ago, Djoss said:

 

Can you try the same steps proposed earlier: access it in a private/incognito window, and look at the browser's console (in developer tools) for any errors.

 

Just tried a private window and it works.

 

Looking at the browser dev tools, something was blocking the connection to the WebSocket.

 

Looks like uBlock Origin was the cause. Disabled it for CrashPlan and it now works fine.

43 minutes ago, ezzys said:

Just tried a private window and it works. [...] Looks like uBlock Origin was the cause. Disabled it for CrashPlan and it now works fine.

I'm not sure I understand what the issue/fix was. Could you elaborate?

Edited by Djoss
1 minute ago, Djoss said:

Not sure to understand what was the issue/fix.  Could you elaborate ?

It was not connecting to the VNC; that was the error in the console. I just had to disable the uBlock Origin add-on in Firefox for the CrashPlan web page, and the VNC connection error disappeared.

  • 3 weeks later...

@Djoss

 

Got this in email today. Unsure whether it requires container changes, but I thought I'd pass it along.

"Beginning March 4, 2021, CrashPlan will require two-factor authentication when logging in to your CrashPlan for Small Business account from the administration console. Use of two-factor authentication helps prevent unauthorized account access."

41 minutes ago, repomanz said:

@Djoss Got this in email today: "Beginning March 4, 2021, CrashPlan will require two-factor authentication when logging in to your CrashPlan for Small Business account from the administration console. [...]"

I got the same email. I've enabled two-factor authentication on my account and it did not affect signing in from the app.

On 2/27/2021 at 6:15 PM, RealActorRob said:

Got it set up and running but with default users 99/100. 

 

"To find the right IDs to use, issue the following command on the host, with the user owning the data volume on the host:"

 

...who 'owns' the data volumes on unRAID?

 

Ultimate question:

Is there a more in depth 'For Dummies' guide to setup?

The unRAID template already sets up the container with the proper values, so if you kept the default ones, you should be all good!
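For anyone who does want to check ownership manually, a quick sketch (assuming a share at /mnt/user/data, a hypothetical path; on unRAID, array files typically belong to nobody:users, i.e. 99/100):

# numeric UID/GID of the files in the share
ls -ln /mnt/user/data | head
# look up a user's IDs by name
id nobody    # typically: uid=99(nobody) gid=100(users)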

  • 2 weeks later...

Noticed that my CPU was getting hammered this evening, and it looks like CrashPlan is the culprit. It's currently backing up my ~26GB CA Backup appdata.tar.gz file (which it reckons will take 3 days...). This isn't the first time that file has been backed up, but I've never noticed such a spike in CPU usage. Any ideas what could be up?

I heard that too little RAM allocation can spike the CPU, so I added the env var for 4096M (and set it via the CrashPlan command line as well), but it still seems to be hitting my CPU harder than I've seen before (while only using 1.5GB of RAM).
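One way to confirm what the container is actually consuming is a plain Docker command (assuming the container is named CrashPlanPRO, matching the appdata path above):

# one-shot snapshot of the container's CPU and memory usage
docker stats CrashPlanPRO --no-stream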

Any idea why? Is it common for large single files to cause this and I've just never noticed, or is something problematic going on?

Thanks for your help and experience.

