[CONTAINER] CrashPlan & CrashPlan-Desktop



Any thoughts on how long the initial analyzing step should take? I added 15 TB of data to my backup set and it has been slowly going through all my files for two weeks now... not a single KB actually backed up... constantly analyzing...

 

I run my CrashPlan in a VM. Yesterday I added an additional 25 TB of data to my backup set and wondered how the resources in the VM were doing, so I brought up the resource monitor utility. The VM barely used any memory and ran very smoothly, as did the unRAID host itself. I noticed I had too many cores dedicated to the VM, so I removed 5 of them and now have just 2. Everything still runs smoothly in the VM, and the host runs well too. Scanning the extra files took about 10 minutes.

 

(attached screenshot: crashmonitor.png)


Damn... then something is seriously wrong here... Any thoughts? I have mailed CrashPlan support...

 

How much memory are you giving the VM?


I have been giving the VM 16 GB, but seeing how it only uses 9% of that during peak time, I'm lowering it to 2 GB (10% of 16 GB = 1.6 GB). I think 2 GB will be fine, and I couldn't care less how long it takes; kind of set it and forget it. I left the resource monitor open the entire time it scanned all the new files, and memory usage never went over 9%. I was under the impression CrashPlan used a lot more resources, but when I finally checked for myself, it did not.

 

What happens when you select just one folder with 1 GB or less in it? Does your CrashPlan go to work after that?


You're right, that is a good way to check...

 

I just removed the 14 TB share I added from the backup set, so now we are back at the level that worked before. I will let it recalculate until it tells me it's done; then I'll slowly add more.


It's now scanning the changes in my other folders. It finds them quickly enough, but scanning them still takes a few seconds per file... let's say five.

 

The 14 TB folder (TV shows) contains approx. 250,000 files; at 5 seconds each, that would mean 1,250,000 seconds, which comes down to about 14 days of scanning for 250,000 files. That is approximately what I was experiencing.
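A quick back-of-envelope check of that estimate (the file count and per-file time are the rough figures above, not exact measurements):

```shell
# Rough scan-time estimate: ~250,000 files at ~5 seconds per file.
FILES=250000
SECS_PER_FILE=5
TOTAL=$((FILES * SECS_PER_FILE))   # total scan time in seconds
DAYS=$((TOTAL / 86400))            # whole days (86400 s per day)
echo "$TOTAL seconds is roughly $DAYS days"
```

That works out to 1,250,000 seconds, i.e. about 14 days, which matches what was observed.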

 

I am wondering if some tinkering with the Docker settings for CPU and memory could make a difference. I read somewhere that CrashPlan advises 1 GB of memory per terabyte of backup; I don't even have that amount of memory in the system...

 

If all else fails, I will move to rsync...


And now it has started "analyzing" my iTunes folders... I forgot that took more than a week last time... so I removed that from the backup set as well.

 

I am at the point where I just want to get this to work; finding out what is going on is more important than actually having a backup.


One thing I notice... it is still "analyzing" and nothing is getting uploaded, but the "remaining files" number decreases. It's doing all the little files, so the GB counter doesn't change.

 

Could it be that it is actually backing up but not telling me it is? It should show "uploading" and a speed counter, right?


The only thing I can think of is that you have already made a backup of these files previously. The behavior sounds a bit like the process when you move CrashPlan to a new computer and need to adopt your old backup into the new setup.

 

Check the Network graph in the unRAID GUI to see how much bandwidth you are using.


I agree... I removed the complete folder from the backup, however... Maybe it still recognizes the blocks as being there... That still doesn't explain how they got there in the first place, though... Ever since I added that 14 TB folder I have only seen "analyzing"...


If you already have a lot of data stored at CrashPlan, all new data will be compared with what you already have stored. This is the data deduplication process: it compares every block you back up against all the "old" data.

 

I have nothing stored with CrashPlan; I am backing up to my own secondary server... The data was never backed up to that other server (unless "analyzing" means it is actually backing up). The folder it was "analyzing" for two weeks was removed from the backup set, and CrashPlan makes a point of telling me it is really gone then. So there should be nothing it can dedupe, as far as I understand the process.


Strange! Could it be running out of memory? Have you changed the memory allocation in CrashPlan from the default 1024 MB?

 

I also found a link about permission problems: http://econdataresearch.blogspot.se/2015/01/resolving-issue-with-crashplan-keep.html

 

Edit: One more thing to check is the latest service.log in the log folder; look for ERROR or OutOfMemoryError.
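A check along these lines should do it. Note the sample log lines and the path below are made up for illustration; point the grep at the real service.log in your container's log folder:

```shell
# Illustrative only: write a fake service.log, then grep it the same
# way you would grep the real one for ERROR / OutOfMemoryError lines.
LOG=/tmp/service.log.sample   # stand-in for the real service.log path
printf '%s\n' \
  '[05.21.16 09:12:01 INFO  ScanWrkr] scanning files' \
  '[05.21.16 09:14:37 ERROR ScanWrkr] java.lang.OutOfMemoryError: Java heap space' \
  > "$LOG"
grep -E 'ERROR|OutOfMemoryError' "$LOG"
```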

 

service.log shows OutOfMemoryError entries for the Java heap... I have seen solutions for that and will try right away, thanks!!

 

 

 


Cool! Keeping fingers crossed…  ;)

 

And for others who might be interested: the memory setting is in the file run.conf in the bin folder.

Use a Linux-friendly editor and change the first line. 1024m is the default 1 GByte; increase as needed (2048m is 2 GByte):

SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m
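If you would rather script the change than open an editor, a sed one-liner like the one below works. It is demonstrated here against a throwaway copy; substitute your real bin/run.conf path (it varies per install) and restart the CrashPlan engine afterwards:

```shell
# Demo on a scratch copy: raise the heap ceiling from 1024m to 4096m.
CONF=/tmp/run.conf.demo   # substitute your real bin/run.conf path
echo 'SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -Xms20m -Xmx1024m"' > "$CONF"
sed -i 's/-Xmx1024m/-Xmx4096m/' "$CONF"
grep -o 'Xmx[0-9]*m' "$CONF"   # prints Xmx4096m
```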


Or, using the desktop interface:

 

  • Open the CrashPlan app
  • Double-click the CrashPlan logo in the upper-right corner
  • Enter the following command, using a value appropriate for the size of your backup selection (for example, 1536 for a 1.5 TB selection):

java mx 1536, restart

 

CrashPlan advises changing this parameter if you back up more than 1 terabyte (which is probably what we all do over here). They advise 1 GB of memory for every TB (or for every 1 million files) of backed-up data. They also state that CrashPlan actually needs 600 MB of memory per TB of data; they advise 1 GB to allow for future growth.
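Plugging a 14 TB selection into those two rules of thumb (simple arithmetic on the figures quoted above):

```shell
# Heap sizing per the quoted rules of thumb, for a 14 TB selection:
# advised 1 GB (1024 MB) per TB; stated actual need ~600 MB per TB.
TB=14
ADVISED_MB=$((TB * 1024))   # advised heap in MB
MINIMUM_MB=$((TB * 600))    # stated minimum heap in MB
echo "advised: ${ADVISED_MB} MB, stated minimum: ${MINIMUM_MB} MB"
```

That gives 14336 MB advised and 8400 MB as the stated minimum.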

 

I just entered 8192, which should be enough for 14 TB according to their calculation.

 

I actually already backed up 12 TB of data with the initial 1 GB of Java heap, so their calculation is not quite on the mark...

 

I am thinking that average media storage is much more relaxed about this requirement than average file storage... so we'll see.


For CrashPlan to quote such impossible memory requirements is ridiculous. Who has 600 MB? Who has 1 TB of memory? No one. Maybe a supercomputer that costs as much as a car.


600 MB of memory? Everyone has that...

 

1 TB of memory would be for approx. 1,500 TB of storage...

 

I think you misread my post... Their requirements are about the same as what ZFS asks for, so probably not that over the top.


Hi guys,

 

I'm new to Docker and just installed this package on my Synology.

The system seems to install, but I'm encountering two problems.

 

When I want to log in over VNC (the web interface works), I cannot connect (I tried using the password PASSWD).

When I restart the container I see this in the terminal: Did not find /usr/local/crashplan/bin/run.conf

This is in the log file: 172.17.42.1: ignoring socket not ready

 

Any help would be greatly appreciated!

 

Regards,

Riestad

 
