[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)



Hey All,

 

Is there any way to see a list of changed files? I have CP PRO set to back up in the wee morning hours, between 2 AM and 6 AM, every day. I have been checking on it lately in the evening and seeing that it has 30+ GB "To Do" almost every day. I am trying to find what/where the changed files are so I can make sure I am not wasting a crazy amount of bandwidth. I have already excluded the Plex transcode directory thinking that was it, but no change.

 

The other thing to note is that my total backup set size is not increasing. It has been sitting at about the same number since the beginning (which is expected). I'm just trying to find out which files are changing so I can determine whether they need to be included in the backup or not.

 

Thanks

Geoff

8 hours ago, gzibell said:


Maybe you can look at Tools->History to at least get an idea of what's happening?
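Another way to narrow down which files CrashPlan keeps picking up is a quick pass over the array from the unRAID console. This is only a sketch: it assumes your backup selection lives under /mnt/user (the path used elsewhere in this thread) and uses GNU find, which unRAID ships:

```shell
# List files modified in the last 24 hours under the backed-up share,
# largest first, so the big bandwidth offenders show up at the top.
# Adjust /mnt/user to match your actual backup selection.
find /mnt/user -type f -mtime -1 -printf '%s\t%p\n' 2>/dev/null \
    | sort -rn \
    | head -n 20
```

Running this just before the backup window starts should show roughly what will land in the "To Do" list.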

On 12/26/2017 at 1:04 AM, Djoss said:

 

First, appdata from your old container is not fully compatible with this one, so the copy you did is useless. Normally, you would have to start with an empty appdata.

 

Then, you can skip the file transfer without issue. The wizard assumes that the current device doesn't have a local copy of the data in the cloud, which is obviously not the case.

 

Since the file paths between the old and the new container are different (your files are under /storage in the new one), you will need to re-select your files under the correct path, then perform a backup (without removing files marked as missing, which are under the old /mnt/user path). Because of deduplication, nothing will be re-uploaded.

 

All these instructions can be found at https://github.com/jlesage/docker-crashplan-pro#taking-over-existing-backup

 

 

@Djoss Since I have my monthly config backups in a different folder, I deleted all previous CrashPlan dockers and images, re-downloaded a fresh docker, and let the docker do its usual stuff. I'm now re-following the "Taking Over Existing Backup" guidance on GitHub from the link you sent. Hopefully this will help. Appreciate your guidance! Cheers, Julian


I'm not a Linux expert, so I'm hoping for some guidance on what to type into the CrashPlan/Code42 for Small Business docker to increase the RAM to 4GB. The wording doesn't tell me (as a newbie) what I need to type into the field to increase the RAM to 4GB. Any guidance appreciated, please?

 

CRASHPLAN_SRV_MAX_MEM: Maximum amount of memory the CrashPlan Engine is allowed to use. One of the following memory units (case insensitive) should be added as a suffix to the size: G, M or K. By default, when this variable is not set, a maximum of 1024MB (1024M) of memory is allowed.


9 minutes ago, huntjules said:


If you want to increase it to 4GB, just put 4G.
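For anyone running the container with the docker CLI instead of the unRAID template, the same variable can be passed on the command line. This is only a sketch under assumed names: the image matches the jlesage/docker-crashplan-pro project linked earlier in this thread, but check your own template for the exact mappings you already use.

```shell
# Hypothetical docker run fragment; names and paths are examples.
docker run -d \
    --name=crashplan-pro \
    -e CRASHPLAN_SRV_MAX_MEM=4G \
    -v /mnt/user/appdata/CrashPlanPRO:/config \
    -v /mnt/user:/storage:ro \
    jlesage/crashplan-pro
```

In unRAID itself, the same effect comes from putting `4G` in the CRASHPLAN_SRV_MAX_MEM field of the container template.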

19 hours ago, SimonG said:

Also having a problem connecting to the server. Has been working for many weeks but now says "waiting for connection". Tried reboot, increasing memory ... anything else I can try?

For how long has it been showing "waiting for connection"?

On 02/01/2018 at 4:17 PM, SimonG said:


 

The solution to this issue was:

  • Stop the container.
  • Remove the cache: rm -rf /mnt/user/appdata/CrashPlanPRO/cache/*
  • Start the container.

Credit to Djoss.
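From the unRAID console, those steps can be done in one short sequence. This is a sketch: the container name "CrashPlanPRO" and the appdata path are assumptions taken from this thread, so check `docker ps` and your own appdata share first.

```shell
# Assumed container name; verify with: docker ps
docker stop CrashPlanPRO

# Clear the engine cache (path as used in this thread).
rm -rf /mnt/user/appdata/CrashPlanPRO/cache/*

docker start CrashPlanPRO
```

The cache is rebuilt automatically on the next start, so nothing in the backup itself is lost.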


I did a quick search of the forum for this; has anyone had this occur? I don't see an advanced parameter that can be updated like the memory allocation one.

 

For searching purposes, "CrashPlan for Small Business is exceeding inotify's max watch limit." and "Real-time file watching cannot work properly. The inotify watch limit needs to be increased on the host."


25 minutes ago, geonerdist said:


This can't be set from the container's settings because the issue is on the host (not the container).

 

In other words, you need to increase the limit on unRAID itself. You can use the "Tips and Tweaks" plugin to change the value.
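For reference, the limit lives in the host kernel, not the container, which is why no container parameter exists for it. A minimal sketch of checking and raising it from the unRAID console (the 1048576 value is just a common example, not a figure from this thread):

```shell
# Current limit (readable by any user):
cat /proc/sys/fs/inotify/max_user_watches

# To raise it on the unRAID host (needs root; value is an example):
#   sysctl -w fs.inotify.max_user_watches=1048576
```

Note that a plain sysctl change does not survive a reboot, which is another reason the "Tips and Tweaks" plugin is the more convenient route on unRAID.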

12 minutes ago, Djoss said:

By default, the /storage mapping is read-only.  Edit the container settings and change the permission to read/write.

 

I'm sorry, I'm not very fluent with container work. I don't see such a parameter; in the readme there is "umask" and there is also information about data volumes.

Do I need to put in an extra parameter or change the umask value?


In unRAID, go into container settings (click the container name from the Docker tab).  Then switch to Advanced View (toggle at the upper right).  You will now have an Edit button beside the "Storage" setting.  From there you can change the Access Mode to Read/Write.
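For context, the unRAID "Access Mode" toggle maps to the `:ro` / `:rw` suffix on the underlying Docker volume mapping. A sketch of the equivalent CLI form (paths as used elsewhere in this thread):

```shell
# Read-only mapping (the container default in this thread):
#   -v /mnt/user:/storage:ro
# Read/write mapping, equivalent to Access Mode = Read/Write:
#   -v /mnt/user:/storage:rw
```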

45 minutes ago, Trylo said:

 


 

Neither; in the container settings page, click "Edit" next to the storage path mapping, and a modal pops up. It has a few options, but you'll just change "Access Mode" off of read-only.

