[Support] Djoss - CrashPlan PRO (aka CrashPlan for Small Business)


Recommended Posts

On 2/13/2020 at 5:08 PM, acurcione said:

Saw there was an update this morning, so I updated and waited for CP to do its scanning thing, but I wanted to verify settings in the my.services.xml file and... there is no my.services.xml file in the conf directory now! Where the heck did it go??

Editing this file has never been supported by CrashPlan, so they've probably just taken it a step further...

Link to comment

My main gaming VM experiences latency issues after about 15 minutes of the crashplanpro docker running and backing up my ~12TB array.  The docker itself only eats up about 3GB of RAM (I gave it 16G in the config), but it spawns a bunch of threads (/usr/local/crashplan/bin/CrashPlanService) that are asking for 29GB virtual.  I tried installing a dedicated XFS-formatted swap drive using the swapdisk plugin, but it's not even being used.  The host has 64GB of RAM, and the main VM experiencing the issue has 20GB allocated.  All Dockers and VMs combined use 58GB (including crashplanpro).

 

As soon as I stop the docker, the threads go away and the problem resolves.  When I start it back up, it again takes about 5-25 minutes for the problems to return.  The other 3 VMs, which also have GPU passthrough, don't experience any issues; only the main one does.

 

Also, I do have my datasets broken down into 4 groups that run in order:

  1. 49GB - system, CrashPlan config, and ISOs - backs up fine
  2. 3.6TB - data file shares - 450GB finished, still running; this is where the problems arise
  3. 300GB - Dockers & VMs (qcow2 files on btrfs) - never backed up
  4. 8.5TB - media files - never backed up

 

Diag attached, thanks in advance for any help!

 

 

(screenshots attached)

 

 

unraid-diagnostics-20200219-1901.zip

Edited by 0x00000111
Link to comment

I don't think this is a docker issue, but there seem to be plenty of folks here who understand CP reasonably well, so I'd like to ask here. (Maybe it is a docker issue, I dunno...)

 

I get a weekly email from CrashPlan with a backup summary. This is what it tells me:

 

Quote

Last backup activity: 29 mins ago

Last completed backup: 50.3 days ago

Selected for backup: 3TB

What does it mean by "completed backup" and why wouldn't it have completed for nearly 2 months? Does that mean that CP is kicking off its backups, but it never finishes pushing data to their servers?

 

OK, I'm more than a bit of a n00b for that question. I just watched this video and it shows where to go in the CP client to check. Now to see if I can track down the 1 file that it's not backing up...

 

EDIT: After digging around some, I could not find a way to get it to tell me which files are and are not backed up. So, I clicked "backup now" and sat here and watched it. I've got a 10GB log file that it's not wanting to back up (dunno why...). I'm removing that from the backup set, and I'm going to investigate why the heck my log file is so massive and whether there's something I can do to trim it to a reasonable size.
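For anyone else hunting down oversized files in a backup set, a rough shell sketch (the share path is illustrative, not from this thread):

```shell
# List files over 1 GiB under a share (path is illustrative; adjust to your
# backup selection)
[ -d /mnt/user ] && find /mnt/user -type f -size +1G -exec ls -lh {} \;

# Truncating a runaway log frees the space without deleting the file;
# demonstrated here on a throwaway dummy file:
tmp=$(mktemp)
head -c 1048576 /dev/zero > "$tmp"   # create a 1 MiB dummy "log"
truncate -s 0 "$tmp"                 # now 0 bytes
rm -f "$tmp"
```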

Edited by FreeMan
Link to comment
On 2/20/2020 at 6:51 PM, 0x00000111 said:

My main gaming VM experiences latency issues after about 15 minutes of the crashplanpro docker running and backing up my ~12TB array.  The docker itself only eats up about 3GB of RAM (I gave it 16G in the config), but it spawns a bunch of threads (/usr/local/crashplan/bin/CrashPlanService) that are asking for 29GB virtual.  I tried installing a dedicated XFS-formatted swap drive using the swapdisk plugin, but it's not even being used.  The host has 64GB of RAM, and the main VM experiencing the issue has 20GB allocated.  All Dockers and VMs combined use 58GB (including crashplanpro).

 

As soon as I stop the docker, the threads go away and the problem resolves.  When I start it back up, it again takes about 5-25 minutes for the problems to return.  The other 3 VMs, which also have GPU passthrough, don't experience any issues; only the main one does.

 

Also, I do have my datasets broken down into 4 groups that run in order:

  1. 49GB - system, CrashPlan config, and ISOs - backs up fine
  2. 3.6TB - data file shares - 450GB finished, still running; this is where the problems arise
  3. 300GB - Dockers & VMs (qcow2 files on btrfs) - never backed up
  4. 8.5TB - media files - never backed up

 

Diag attached, thanks in advance for any help!

 

 

(screenshots attached)

 

 

unraid-diagnostics-20200219-1901.zip

My guess is that until your backup finishes, CrashPlan will probably have a non-negligible impact on the system.

Is the latency caused by CPU usage, or by all the I/O being performed?

Link to comment
On 2/22/2020 at 6:41 PM, Djoss said:

My guess is that until your backup finishes, CrashPlan will probably have a non-negligible impact on the system.

Is the latency caused by CPU usage, or by all the I/O being performed?

I'm not 100% sure it's a CPU latency issue, but the mouse gets all jerky, gaming FPS drops from 100+ to under 15 with major breaks between bursts of good performance, and audio starts clipping really badly, despite the MSI interrupt values already being set (HDMI audio from the GPU).  It's almost like the docker is trying to use some of the VM's cores, despite them being isolated.  Again, it's only happening to one of the 4 GPU-passthrough gaming VMs, which is what's really confusing me.  If it were all of them, that would make more sense, but it's just the one, and it doesn't start to get bad until a solid 10-25 minutes after backups have been running.

Link to comment

While I still haven't figured out the root of the issue, I'm seeing that performance now lasts a little longer, 25-45 minutes, before it starts to get choppy.  My current workaround is a user script that restarts the docker every 30 minutes.  At least stuff is still getting backed up, and I can work/game for the most part without issue.
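For reference, that periodic-restart workaround can be a one-line entry in the User Scripts plugin, scheduled every 30 minutes; the container name here is an assumption (check `docker ps` for yours):

```shell
#!/bin/bash
# Periodic-restart workaround for the CrashPlan PRO container.
# "CrashPlanPRO" is a guess at the container name; adjust to your setup.
docker restart CrashPlanPRO
```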

Link to comment
On 2/5/2020 at 6:15 AM, jademonkee said:

Install the Unraid Tips & Tweaks plugin from Community Applications, then go to Settings (in Unraid, not the CrashPlan docker) > Tips and Tweaks. There's an option in there to increase inotify. The right setting depends on your system, but I have 16GB RAM and have it set to 1048576, which seems fine.

(although I have had a weird thing happen this morning, see my upcoming post below).

I ended up having to up mine to 1,500,000, and now it finally seems to be working properly. :)

 

I am backing up about 850GB give or take, and have 128GB of RAM.
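For reference, the inotify limit discussed above can also be checked and raised by hand on the host (Tips & Tweaks essentially does this for you); the value shown is the one from the earlier post, and the right number for your system is a judgment call:

```shell
# Show the current inotify watch limit
sysctl fs.inotify.max_user_watches

# Raise it for the current boot (requires root); pick a value suited to
# your file count and RAM
sysctl -w fs.inotify.max_user_watches=1048576
```

On unRAID the plugin re-applies the value at boot; on a stock Linux box you would persist it in /etc/sysctl.conf instead.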

Link to comment
20 hours ago, CorneliousJD said:

I ended up having to up mine to 1,500,000, and now it finally seems to be working properly. :)

 

I am backing up about 850GB give or take, and have 128GB of RAM.

Odd. I'm backing up almost 3TB. Maybe you have more files than me, though (>2TB of my backup is FLAC audio from my CD rips).

Still, with 128GB RAM, I guess it's no problem to have it higher than mine (only 16GB RAM).

Link to comment

Possibly a dumb question... but am I able to back up the contents of the flash drive through this docker?  When I go into the flash folder under root in the "manage files" part, nothing comes up.  My guess is this changed some time ago and my flash drive is not really being backed up.

 

And I don't see it under the storage tab...

 

Thanks,

 

Jim

 

Link to comment
1 hour ago, jbuszkie said:

Possibly a dumb question... but am I able to back up the contents of the flash drive through this docker?  When I go into the flash folder under root in the "manage files" part, nothing comes up.  My guess is this changed some time ago and my flash drive is not really being backed up.

 

And I don't see it under the storage tab...

 

Hey Jim, I think the best thing to do here would be to use CA Appdata Backup/Restore to also back up your flash, and then use CrashPlan to back up the backed-up file.

 

I have mine set up to back up to /mnt/user/backups/unRAID/flash/

I then keep that whole "backups" folder backed up by CrashPlan so it's always accessible and uploaded whenever the auto-backups get generated.

Link to comment
3 hours ago, CorneliousJD said:

Hey Jim, I think the best thing to do here would be to use CA Appdata Backup/Restore to also back up your flash, and then use CrashPlan to back up the backed-up file.

 

I have mine set up to back up to /mnt/user/backups/unRAID/flash/

I then keep that whole "backups" folder backed up by CrashPlan so it's always accessible and uploaded whenever the auto-backups get generated.

 

Indeed, this is a better way of backing up the flash drive.  All files on the flash drive are owned by root:root, and since CrashPlan doesn't run as root, it can't see/read them.
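A minimal sketch of that copy-then-back-up approach, assuming the flash drive is mounted at /boot (where unRAID mounts it) and reusing the share path from the post above; adjust both to your setup:

```shell
#!/bin/bash
# Copy the flash drive to a share the non-root CrashPlan process can read.
# /boot is the unRAID flash mount; the destination follows the example above.
rsync -a --delete /boot/ /mnt/user/backups/unRAID/flash/

# Make the copy readable by non-root users (the originals are root-owned)
chmod -R u+rwX,go+rX /mnt/user/backups/unRAID/flash/
```

Run it on a schedule (e.g. via the User Scripts plugin) and let CrashPlan pick up the copy.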

Link to comment

Howdy folks

 

An odd behaviour has cropped up recently.  After some period of time when I go to the WebUI, CP requires me to log in.  

 

Is this just a change by CP to how the application is accessed from the (docker'ed) desktop, or should I be concerned that backups are being impacted?

 

Thanks!   

 

Link to comment
1 minute ago, landS said:

Howdy folks

 

An odd behaviour has cropped up recently.  After some period of time when I go to the WebUI, CP requires me to log in.  

 

Is this just a change by CP to how the application is accessed from the (docker'ed) desktop, or should I be concerned that backups are being impacted?

 

Thanks!   

 

CP actually sent out an email a few weeks or a month ago stating that this will be a new thing moving forward, so this is normal as of now. :)

Link to comment
1 minute ago, landS said:

Howdy folks

 

An odd behaviour has cropped up recently.  After some period of time when I go to the WebUI, CP requires me to log in.  

 

Is this just a change by CP to how the application is accessed from the (docker'ed) desktop, or should I be concerned that backups are being impacted?

 

Thanks!   

 

I'm seeing this too.  I remember something being sent to me about them requiring more logins.

 

This is what I found in my e-mail:

Quote

Security of your organization’s data is paramount at Code42. With the recent increases in ransomware attacks on the internet, we have implemented product changes to help you better protect your data.

Effective March 3, 2020, Code42 will require all CrashPlan® for Small Business customers to enter their password to access the CrashPlan for Small Business desktop app. This change will help to further ensure the security of your CrashPlan data.
 

 

Link to comment

I seem to be having a problem with CrashPlan PRO maxing out and crashing due to memory. I have adjusted the setting within the docker to 8G, as I have about 6.5GB of data backed up, but it's still crashing with out-of-memory errors. When I start the docker, memory usage slowly creeps up to around 1.2GB of RAM and then it crashes, which leads me to think the CRASHPLAN_SRV_MAX_MEM setting is not taking effect.

 

Any ideas?

Link to comment
1 hour ago, jasonmav said:

I seem to be having a problem with CrashPlan PRO maxing out and crashing due to memory. I have adjusted the setting within the docker to 8G, as I have about 6.5GB of data backed up, but it's still crashing with out-of-memory errors. When I start the docker, memory usage slowly creeps up to around 1.2GB of RAM and then it crashes, which leads me to think the CRASHPLAN_SRV_MAX_MEM setting is not taking effect.

 

Any ideas?

Reviewing my logs, it seems that the default of 1024M is being preserved. The only way I was able to get my backup working again was to quickly start the CP docker, open the GUI, log in, enter the CP command line, and run the command "java mx 7168". This prompted a restart, and everything is working correctly now.

 

Strange that the variable in the docker setup is not being used...
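For anyone recreating the container outside the unRAID template, the memory variable is passed as a plain Docker environment option; a sketch only, where the container name, volume paths, and port mapping are illustrative assumptions rather than anything from this thread:

```shell
# Sketch: recreate the container with the CrashPlan engine memory cap raised.
# Name, paths, and port are assumptions; adjust to your existing setup.
docker run -d --name=crashplan-pro \
  -p 5800:5800 \
  -v /mnt/user/appdata/crashplan-pro:/config:rw \
  -v /mnt/user:/storage:ro \
  -e CRASHPLAN_SRV_MAX_MEM=8G \
  jlesage/crashplan-pro
```

If the variable still doesn't stick, the in-app "java mx" command described above remains the fallback.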

Link to comment
8 hours ago, jasonmav said:

Reviewing my logs, it seems that the default of 1024M is being preserved. The only way I was able to get my backup working again was to quickly start the CP docker, open the GUI, log in, enter the CP command line, and run the command "java mx 7168". This prompted a restart, and everything is working correctly now.

 

Strange that the variable in the docker setup is not being used...

The setting should work... Which value did you use?

Link to comment
10 hours ago, Djoss said:

The setting should work... Which value did you use?

In the docker config, I tried all multiples of 1024M up to my maximum of 13312M, and also tried 6G, 8G, and 12G. But every one of them would still crash when RAM usage went above a gig. :(

Edited by jasonmav
Link to comment
  • 3 weeks later...

I'm having similar issues to jasonmav and scud133b above with CRASHPLAN_SRV_MAX_MEM; see the images below of the pop-ups.  I tried editing the docker memory setting to 2048M and 4096M to no avail (I have 16GB RAM, FWIW).  I now only get the red X in the WebUI header (the second image is butted up under the first one with the three memory warning boxes).

 

In doing the above I also broke the memory setting, and I hope one of you can post a screenshot/fix.  The last time I edited the docker memory settings, I selected 'Remove', thinking it would remove the value, not the actual variable.  I see how to add the variable back in; I just need to see what the memory settings should look like.

Capture2.JPG

Capture3.JPG

Edited by Homerr
Link to comment
