CrashPlan


agw


I believe it should automatically update itself. Mine didn't auto-restart, so I started getting warnings; I restarted it manually and things seem to be working fine. I'm running off the cache drive so no further work was needed, but I think that if you are running in RAM, you'll need to do what darias said and tar up the new version so you aren't constantly re-upgrading every time you reboot.
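For anyone running from RAM, the general idea is just a sketch like this - the paths here (the install dir and a folder on the flash drive) are assumptions on my part, so check darias's post for the actual procedure:

 

# archive the upgraded install to the flash drive (paths are examples only)
tar -czf /boot/custom/crashplan.tar.gz -C /usr/local crashplan

# and at boot (e.g. from the go script), restore it and start the engine
tar -xzf /boot/custom/crashplan.tar.gz -C /usr/local
/usr/local/crashplan/bin/CrashPlanEngine start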


It should work on both, but my original docs were for 4 and I've never installed under 5, so there may be some subtle differences. It does work under 5 (and I believe there are plugins elsewhere to help you install), as mine runs under 5 - though I just copied my original install over rather than installing again from scratch.

 

Mine is running under 5b13, and I use the Crashplan GUI plugin for monitoring. But the installation of Crashplan itself came from following Boof's procedure.

 


Keep an eye on the new version of CrashPlan (3.2); there are a number of issues (some resolved by now) listed here :

 

http://support.crashplan.com/doku.php/3.2_known_issues

 

The one not listed is the new version running out of memory. It primarily affects OSX (fixed by an updated installer for that platform, as listed on the page above), but also Linux, as I found out.

 

You can fix this by bumping up :

 

-Xmx512m

 

(or whatever its current value is) to :

 

-Xmx1024m

 

Or even higher as appropriate, in the SRV_JAVA_OPTS variable in $crashplaninstalldir/bin/run.conf
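For reference, the line in run.conf you're after looks something like this - everything here apart from the -Xmx value is illustrative and may differ between versions/installs :

 

SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m"

 

Stop and start the engine afterwards ($crashplaninstalldir/bin/CrashPlanEngine stop, then start) so the new limit takes effect.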

 

I think whether this problem rears its head, and how much memory you need allocated in 3.2, depends largely on the size of the backup set you have running. I have a 1.7 TB backup set and needed to increase from the default of 512M. I leapt all the way to doubling it to 1024M, which has resolved things, but with some testing I may have gotten away with a smaller increment.

 

Symptoms of this issue are :

 

- The CrashPlan engine constantly starting, failing due to lack of memory, then auto-restarting. service.log.0 in logs/ will have something similar to : com.code42.exception.DebugException: OutOfMemoryError occurred...RESTARTING!

 

- You may notice the above happening because the CrashPlan client you're using to connect to your headless server keeps losing its connection - only for everything to be fine for a minute or two when you tell it to reconnect... before it disconnects again.

 

- Something more sinister: in my particular case this eventually stopped happening and the CrashPlan engine became stable. CrashPlan then appeared to catch up backing up new files as required and as normal. It displayed the files needing to be backed up in the analysing stage, the 'files to be backed up' counter decremented to 0 alongside the amount of data left to back up, and the amount of data backed up went up. Up to this point, without looking too carefully, things looked normal; the one anomaly, easily missed by casual observation, was that it never showed any transfer speed for the files while doing this, and the analysis of each file took a long time. Once finished, CrashPlan reported my backup was complete and 100% up to date on each of my two remote targets. However, it had not actually backed up these new files! Attempting a restore from a backup location that was allegedly 100% up to date showed those new files were not there and not available for restore. There was no indication in the client that this had happened; it reported 100% backup completion. The only way I could see otherwise was to attempt a restore of a file that should have been freshly backed up. This was resolved after restarting the backup engine with the higher memory limits, which forced CrashPlan to back up the new files properly (I also had the OOM errors for a while beforehand too). So if you're running this version (3.2 - and because of the auto update you more or less have to be) and are only paying vague attention to what's happening day to day, CHECK YOUR BACKUPS CAN BE RESTORED! The client may be telling you a white lie when it says it's running and at 100%.

 

Do a test backup of a specific new file, then immediately do a restore to ensure the file is actually available on your targets. Also keep an eye out to make sure the file analysis happens quickly and that you see a throughput rate whilst the file is backed up.

 

I hope they release a new update soon to fix this, as if nothing else bumping up the memory requirement means... the application takes more memory... and 1 gig just for a backup client is starting to get a bit daft. Given this is their first update in about a year (I think?), it's a fairly poor state of affairs in terms of time taken to test etc., and it has knocked my confidence in this product hugely. It reporting my backups as 100% complete to two separate locations despite in actuality being at least 4-5 gigs behind is very concerning.

 

I only sorted all this out yesterday and plan, today, to kick off (if I can force it manually) a full verification of each backup set to make sure nothing else has gone silently wrong.


...Given this is their first update in about a year (I think?), it's a fairly poor state of affairs in terms of time taken to test etc., and it has knocked my confidence in this product hugely...

 

I agree with this, and hope it is not an indication of an undisciplined development process. For a recurring pay-for-use product like this I have higher expectations than I do for a one-time purchase like unRAID. Thank you for the thorough explanation, Boof, as usual.


 

 

I have a Synology DS212+ running the latest Crashplan Headless client.

 

When I run the following command:

netstat -an | grep ':424.'

 

I only see the NAS listening on 4243 and not on 4242.

 

I had this running successfully until a couple of weeks ago when CrashPlan decided to do an auto-update. I just finished updating everything manually and can't connect due to the NAS not listening on 4242.

 

Does anybody have any ideas what I might be missing?

 

Thanks...

 

 


CrashPlan has been really annoying me: after an automatic update, it doesn't restart. I didn't notice for days that the client was offline :-)

 

The fact that it updated twice in a two-week period annoyed me.

 

Anyway, I just wanted to vent, now I feel much better lol


Well, oddly enough it started listening on both 4242 and 4243 the next afternoon. I did not change anything during that time so I can't explain why it started working all of a sudden.

 

I've moved on to being aggravated with the performance of Surveillance Station.

 

;)

 

  • 2 weeks later...

I have CrashPlan running on my unRAID 4.7 server and the client running on my Ubuntu laptop. Everything works peachy except my unRAID drives never spin down until I kill the CrashPlanEngine on my laptop. Fine, I thought - the CrashPlan client can see the server on my LAN (I'm still in the 30-day CrashPlan trial period where, as I understand it, auto-backups are more frequent than in the free version).

 

What caught me off guard was when I went to work and discovered the same laptop was still able to connect to the CrashPlanEngine running on my unRAID despite the CrashPlanEngine being behind a NAT with NO port forwarding rules configured for unRAID.  My expectation was that my CrashPlan client would not be able to connect and therefore my unRAID disks would spin down.  What I discovered was my CrashPlanEngine (running on unRAID) maintains a persistent connection OUT to 173.225.132.33:443 (netstat).  I've determined that IP to in fact be a CrashPlan IP in Minneapolis MN.  My guess is the CrashPlan client on my laptop uses this persistent connection to facilitate backups when the client and server are not on the same LAN.
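If you want to check for the same thing on your own server, something along these lines should show it (standard Linux netstat flags; filtering on 'java' is just one way of narrowing it down) - look for an ESTABLISHED connection to a remote address on port 443:

 

netstat -tnp | grep ESTABLISHED | grep java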

 

It explains how p2p can work without special FW rules.  It wasn't obvious to me at first so I thought I'd share it here.


CrashPlan constantly keeps a heartbeat going with CrashPlan Central, which is what you're seeing there. It's how CrashPlan knows whether your machines are up, and it also allows config changes to propagate from CrashPlan clients to your web control panel (and vice versa).

 

I don't *think* it uses this for pushing traffic, as I've had many occasions where there were specifically no firewall holes and it still refused to work. This is also not the otherwise-standard port CrashPlan uses for its data transfer. It would also mean a lot of traffic being pushed through CrashPlan for no reason other than them being the middle man. Whilst they no doubt have the bandwidth to cope with this (they handle data going to their own servers, for instance), it does cost them money, and I'm not sure they'd want to carry that just to middle-man backups they're not getting any money for.

 

Having said that - have you looked to see what traffic volume is going over which ports? If you're seeing all your throughput going through that connection then...

 

I'd be more inclined to suggest CrashPlan has opened the normal data ports without you explicitly doing anything. It supports UPnP and various other mechanisms to do this in conjunction with most routers - you can check its log file at startup to see it going through the motions. Telnet to your external IP on the normal data port (4242) from outside your network and see if you connect...
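i.e. from a machine outside your network (substituting your actual external IP for the placeholder):

 

telnet your.external.ip 4242

 

If it connects, the data port has been opened one way or another; if it times out or is refused, it hasn't.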

  • 3 weeks later...

Just a follow-up to boof's suggestion...

 

I have confirmed that CrashPlan is establishing a p2p connection between my computer at work and my unRAID box.  Both computers are maintaining a persistent tcp connection to a CrashPlan:443 server as well.  I am unable to connect to my external IP:4242 from outside my network though.  Not sure how CrashPlan is doing it but my guess is that persistent connection to CrashPlan:443 from each computer helps facilitate it in some way.

 


Did you dump per tcp port traffic to see where the data was going?

 

I'd still be very surprised if it went over 443 via CrashPlan for p2p backups (as above, I've had situations where my machines couldn't talk to each other - this new scenario would mean that should never happen, as the fallback could always be via 443 and CrashPlan), but this could be something in the new client released the other month?


Is there a way to create the tunnel using an account other than root? I've only been able to successfully log in as root to configure it via the GUI. I tried editing my sshd_config to deny root logins and allow only certain users and IPs, but it's not working. It accepts the other users' passwords when trying to SSH, but immediately closes the connection.
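The sshd_config changes I mean are along these lines (the user name and address pattern here are placeholders for illustration rather than my exact values):

 

# /etc/ssh/sshd_config
PermitRootLogin no
# only allow this user, and only from the local subnet (both placeholders)
AllowUsers someuser@192.168.1.*

 

(sshd needs restarting after editing for the changes to take effect.)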

 

[email protected]'s password: 
debug2: we sent a password packet, wait for reply
debug1: Authentication succeeded (password).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Linux 3.1.0-unRAID.
debug2: channel 0: rcvd eof
debug2: channel 0: output open -> drain
debug2: channel 0: obuf empty
debug2: channel 0: close_write
debug2: channel 0: output drain -> closed
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
debug2: channel 0: rcvd eow
debug2: channel 0: close_read
debug2: channel 0: input open -> closed
debug2: channel 0: rcvd close
debug2: channel 0: almost dead
debug2: channel 0: gc: notify user
debug2: channel 0: gc: user detached
debug2: channel 0: send close
debug2: channel 0: is dead
debug2: channel 0: garbage collecting
debug1: channel 0: free: client-session, nchannels 1
Connection to 128.32.166.86 closed.
Transferred: sent 1728, received 2104 bytes, in 0.0 seconds
Bytes per second: sent 203534.7, received 247822.4
debug1: Exit status 1

 

Any guidance appreciated.  Thanks!


The tunnel is account agnostic - yes you can create it using a normal user.

 

What you're seeing is just a general issue with how you've configured a user in unraid.

 

What did you do to set up the test user account? My initial hunch would be the default shell is wrong but that's a bit of a guess!


Hi Boof,

 

I just created the user account using the unRAID GUI. Is there another way to create a user, or to modify the test user's credentials/settings? Thank you.

 



Ah ok. If you log in (as root) and run :

 

chsh test

 

And give it /bin/bash as the new shell. It will be set to /bin/false by default.

 

Unraid 5 beta12a threw some lib / symbol errors whilst doing this for me but the change did get pushed through.

 

Once done, try to log in again as test and you should get a 'proper' prompt - with tunnel.
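If you'd rather not go through the interactive prompts, chsh can usually do it in one go, and you can check it took - 'test' here is just the example user name from above:

 

chsh -s /bin/bash test
grep '^test:' /etc/passwd    # the last field should now read /bin/bash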


That worked!  Thank you!  ;D

 



Just saw this thread (after creating my own thread on how to back up my laptops to the unraid server).

 

The following was suggested on my thread:

 

http://www.genie9.com/Free_products/free_timeline.aspx

 

Sounds like you can easily back up to your server with that, whereas I can see here that it looks to be quite difficult to get CrashPlan working.

 

If all you want to do is dump data from a machine on your LAN onto unRAID then yes, CrashPlan isn't the easiest way to go about it.

 

CrashPlan does, however, offer many more features and a much more complex 'infrastructure' for backups, if you want to use it.


This thread is 32 pages long.

 

Can you tell me where to find the step-by-step instructions to install CrashPlan?

 

Thanks.

 

Being rather blunt, it would take less than 30 minutes to skim through this thread and find what you wanted. If you don't have the patience for that, you're not going to enjoy CrashPlan.

 

I've done your legwork for you :

 

http://lime-technology.com/wiki/index.php/CrashPlan

 

http://lime-technology.com/forum/index.php?topic=4008.msg35436#msg35436

 

http://lime-technology.com/forum/index.php?topic=4008.msg36929#msg36929

 

Jump in and if you get stuck post back and we'll see what we can do to help.

 
