[Support] Djoss - CrashPlan


Recommended Posts

Here is a quick update:

  • When you migrate your account from Home to Small Business, CrashPlan will try to update itself to the PRO version.
  • Automatic upgrades are currently not supported by the container, the main reason being that it is based on Alpine Linux.
  • However, even if the update fails, new data seems to continue to be successfully backed up to the cloud.

Now, there are 3 possible choices for the next step:

  1. Create a new Docker container for the PRO version.  It seems to be the cleanest way to go.  People migrating will just have to remove the old container and install the new one, keeping the same appdata (a rough command sketch follows this list).  CrashPlan will continue to work as before and the adoption process will not be needed.
  2. Add support for the PRO version in the current container.  In other words, have a single container containing both versions (Home and PRO).  This would require users to configure which version to use.  But since the Home version will eventually die, the work needed to support the 2 versions at the same time is temporary.
  3. Add support for the automatic upgrade.  This would behave like a "real" Windows/Linux installation.  However, since the container initially ships with the Home version, it won't be usable for someone who needs to re-install from scratch (without existing appdata): credentials for a "home" account no longer work once the migration is done.
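
For option 1, the swap on unRAID would look roughly like this from the command line (the PRO image and container names are hypothetical, since that container doesn't exist yet; adjust the mappings to your setup):

# Rough sketch for option 1 (PRO image/container names are hypothetical).
# Remove the old container, keeping /mnt/user/appdata/CrashPlan in place:
docker stop CrashPlan
docker rm CrashPlan
# Install the new container against the same appdata:
docker run -d --name=CrashPlan-PRO \
    -v /mnt/user/appdata/CrashPlan:/config:rw \
    -v /mnt/user:/storage:ro \
    -p 5800:5800 \
    jlesage/crashplan-pro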

 

 

Link to comment
  • 2 weeks later...

Hello all.

 

I just switched from the other docker because of problems.  Unfortunately, it seems I'm having similar problems with this docker.

 

I adopted my previous backup.  Now, however, it is showing each folder as only having 1 file and 0 MB.

 

Looking through the logs, I believe this is the pertinent information:

 

STACKTRACE:: org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: 
	Can't load library: /tmp/.cpswt/libswt-gnome-gtk-4427.so
	Can't load library: /tmp/.cpswt/libswt-gnome-gtk.so
	no swt-gnome-gtk-4427 in java.library.path
	no swt-gnome-gtk in java.library.path
	/tmp/.cpswt/libswt-gnome-gtk-4427.so: libgnomevfs-2.so.0: cannot open shared object file: No such file or directory

 

Any ideas on how to proceed?

 

Thanks in advance.

Link to comment

You probably have a problem not related to the container...

Did you keep the default container settings?

What are the file permissions?  You can run these commands from your unRAID server:

docker exec CrashPlan ls -l /storage
docker exec CrashPlan ls -l /storage/Data

 

Link to comment

Thanks again.  

 

Yes, there appears to be something wrong with the folder structure under /user/appdata/CrashPlan, left over from the other docker.  When I uninstalled that docker, I was able to remove /user0/appdata/CrashPlan but could not remove /user/appdata/CrashPlan.

 

When installing this docker, I did keep all default container settings.

 

I just checked the service log and this looks related:

[09.06.17 10:53:08.249 WARN  8287_BckpSel .backup.manifest.ManifestManager] Exception initializing ManifestManager com.code42.backup.manifest.BlockManifestRuntimeException: BMF-ERROR: Failed to make block archive directory! /usr/local/crashplan/cache/42/cpbf0000000000000000000, exists=false, isDir=false, ManifestSiloManager@2091880427[ manifestPath = /usr/local/crashplan/cache/42, open = false]; MM[BT 794431381083289193>42: openCount=1, initialized = false, dataFiles.open = false, /usr/local/crashplan/cache/42], com.code42.exception.DebugException: Exception initializing ManifestManager com.code42.backup.manifest.BlockManifestRuntimeException: BMF-ERROR: Failed to make block archive directory! /usr/local/crashplan/cache/42/cpbf0000000000000000000, exists=false, isDir=false, ManifestSiloManager@2091880427[ manifestPath = /usr/local/crashplan/cache/42, open = false]; MM[BT 794431381083289193>42: openCount=1, initialized = false, dataFiles.open = false, /usr/local/crashplan/cache/42]
STACKTRACE:: com.code42.exception.DebugException: Exception initializing ManifestManager com.code42.backup.manifest.BlockManifestRuntimeException: BMF-ERROR: Failed to make block archive directory! /usr/local/crashplan/cache/42/cpbf0000000000000000000, exists=false, isDir=false, ManifestSiloManager@2091880427[ manifestPath = /usr/local/crashplan/cache/42, open = false]; MM[BT 794431381083289193>42: openCount=1, initialized = false, dataFiles.open = false, /usr/local/crashplan/cache/42]

 

Running the commands you suggested shows full permissions:

[screenshot: ls -l output showing full permissions]

Link to comment
16 minutes ago, Djoss said:

I would definitely try to start with a clean, empty appdata folder.  Try to see if you have the appdata folder under anything in /mnt: ls /mnt/*/appdata/CrashPlan.

 

Worst case, edit the container settings and change the appdata mapping.

I couldn't remove /mnt/user/appdata/CrashPlan/, but I did change the settings to map to /CrashPlan2.  Also, I bit the bullet and did not adopt this time, starting fresh.  Painful to upload everything from scratch, but at least it's working.
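
For anyone else hitting this, the check Djoss suggested amounts to something like this (mount points assume a stock unRAID setup):

ls /mnt/*/appdata/CrashPlan          # look for leftovers on every disk/cache
ls -la /mnt/user/appdata/CrashPlan   # check ownership of the stubborn folder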

 

Thanks again, highly appreciated.

Link to comment
  • 1 month later...

With the latest update to this docker image that came out over the weekend (10/7, 10/8), the container won't stay running.  It keeps shutting down after you try to restart it.

 

This is what the log says:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 10-certs.sh: executing...
[cont-init.d] 10-certs.sh: exited 0.
[cont-init.d] 10-nginx.sh: executing...
ERROR: No modification applied to /etc/nginx/default_site.conf.
[cont-init.d] 10-nginx.sh: exited 1.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-finish] sending all processes the KILL signal and exiting.
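
The failing step is 10-nginx.sh (it exits with code 1). To confirm the container keeps exiting, I've been watching it with the standard Docker commands (assuming the container is named CrashPlan):

docker ps -a --filter name=CrashPlan   # status column shows "Exited"
docker logs CrashPlan                  # prints the s6 output above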

 

 

Any thoughts?

Link to comment

Completely forgot about documenting my experiences with the migration.

 

I came over from the gfjardim version and followed the instructions. All of my settings came over smoothly, and I had no issues getting things up and running. I did have to re-point the backup directory, which ended up causing a re-upload. Not an issue though, as it was only 2TB worth anyways and I keep nightly backups using Duplicati as well.

 

Since the migration I have to say that I am loving this image. In fact, the files with Japanese characters in their names can now be displayed properly! The only hiccup is that sometimes when I load the web UI, the CrashPlan PRO splash screen says something about not being able to find the backend, but I just click "Try Again" and it loads up with no issues.

 

Just wanted to say thank you very much for your hard work, Djoss!!

Link to comment
  • 1 month later...

Just would like to thank Djoss for this amazing container.

 

I do have a slight issue, though. Upon setting up the container, everything works fine; it's only when the container is restarted that things go a little sideways.

 

When I restart, I get the "engine not found" message, and I also notice the ui_info file changes. I've tried giving the ui_info file read-only permissions, but I still get the same results when restarted.

 

4243,aa19c4d7-36c9-4dfc-af04-01f51fc1cb99,127.0.0.1 (works fine)

then when restarted

4243,81f08a16-6c87-4efb-91ad-ee53df01c8cb,192.168.1.200 (doesn't work, and the IP changes to my NAS IP)
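
For reference, this is how I'm reading the file from inside the container (the in-container path is my assumption, based on where standard CrashPlan Linux installs keep it):

# ui_info fields: <service port>,<auth token>,<listen address>
docker exec CrashPlan cat /var/lib/crashplan/.ui_info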

 

Can you help?

I'm using a QNAP NAS, btw.

Link to comment

Damn, I just read my e-mail about CrashPlan for Home going bye-bye!!  This sucks!  From what I read here, it makes more sense to migrate to the PRO version than to go to Carbonite?

It seems like that's what most folks are doing.  So what are the downsides for me of going to PRO?  It seems like it's just the cost?  Right now I'm only backing up one computer (unRAID), but unRAID has Acronis backups of all my other computers! :-)

I don't back up to another computer, and I only have like 600GB backed up.

 

So am I correct in assuming the only change for me will be the cost (eventually)?

 

Last question...  Should I transition now or later?  I'm good till Oct of next year.  Is the transition stable enough?  Or should I wait a little longer for everyone else to work out the kinks? :-)

 

And thanks to Djoss for the quick update to the container to support the PRO transition!  You rock!

 

 

Link to comment
  • 1 month later...

Djoss, 

 

I'm trying to get CrashPlan to work on unRAID, but I'm running into two issues.

1) I can't log in with my CrashPlan account using the web GUI or the VNC viewer. I just get "login failed".

2) When I enable the secure connection, I can no longer connect via the VNC viewer; it just stalls out trying to establish a secure connection (see the run sketch below).
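
By "secure connection" I mean the container variable I enabled; I'm going from memory on the exact name, paths, and ports, so treat this run sketch as an assumption:

# How I started the container with the secure connection enabled
# (appdata path is a placeholder for my setup):
docker run -d --name=CrashPlan \
    -e SECURE_CONNECTION=1 \
    -v /mnt/user/appdata/CrashPlan:/config:rw \
    -p 5800:5800 -p 5900:5900 \
    jlesage/crashplan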

 

I've deleted the docker and the appdata folder and reinstalled to no avail.

I've attached my docker logs.

 

I'm moving from another computer that is running CrashPlan. I was going to run CrashPlan on unRAID, then migrate to the PRO version.

Do you think I should just move straight to the PRO version on unRAID, or upgrade to PRO on the existing computer first?

 

Thanks for your help and for this docker.

 

Djoss-Crashplan.txt

Link to comment
