Djoss posted August 24, 2017 (author):

Here is a quick update: when you migrate your account from Home to Small Business, CrashPlan will try to update itself to the PRO version. Automatic upgrades are currently not supported by the container, mainly because it is based on Alpine Linux. However, even if the update fails, new data seems to continue to be backed up to the cloud successfully.

There are now three possible options for the next step:

1. Create a new Docker container for the PRO version. This seems to be the cleanest way to go. People migrating would just have to remove the old container and install the new one, keeping the same appdata. CrashPlan would continue to work as before and the adoption process would not be needed.
2. Add support for the PRO version in the current container. In other words, have a single container providing both versions (Home and PRO). This would require users to configure which version to use. But since the Home version will eventually die, the work needed to support the two versions at the same time is temporary.
3. Add support for the automatic upgrade. This would behave like a "real" Windows/Linux installation. However, since the container initially ships the Home version, it wouldn't be usable for someone who needs to re-install from scratch (without existing appdata): credentials no longer work for a "Home" account once the migration is done.
Djoss posted August 24, 2017 (author):

The new CrashPlan PRO Docker container is ready! For people wishing to migrate, the transition is simple and there is no need to go through the adoption process. Make sure to look at the CrashPlan PRO support thread for instructions.
ajgriglak posted September 6, 2017:

Hello all. I just switched from the other Docker container because of problems. Unfortunately, it seems I'm having similar problems with this one. I adopted my previous backup. Now, however, it is showing each folder as having only 1 file and 0 MB. Looking through the logs, I believe this is the pertinent information:

STACKTRACE:: org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
  Can't load library: /tmp/.cpswt/libswt-gnome-gtk-4427.so
  Can't load library: /tmp/.cpswt/libswt-gnome-gtk.so
  no swt-gnome-gtk-4427 in java.library.path
  no swt-gnome-gtk in java.library.path
  /tmp/.cpswt/libswt-gnome-gtk-4427.so: libgnomevfs-2.so.0: cannot open shared object file: No such file or directory

Any ideas on how to proceed? Thanks in advance.
Djoss posted September 6, 2017 (author):

Quoting ajgriglak (3 minutes ago): "Now, however, it is showing each folder as only having 1 file and 0 MB."

I'm not sure I understand. Can you provide a screenshot showing the issue?
ajgriglak posted September 6, 2017:

Quoting Djoss (6 minutes ago): "Not sure to understand. Can you provide a screenshot showing the issue?"

Thank you for the quick reply! Here it is:

[screenshot]

Note there are at least 3 TB combined in those three folders. Also, when browsing via the "Change" button, the files and folders are visible.
Djoss posted September 6, 2017 (author):

You probably have a problem that is not related to the container... Did you keep the default container settings? What are the file permissions? You can run these commands from your unRAID server:

docker exec CrashPlan ls -l /storage
docker exec CrashPlan ls -l /storage/Data
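As a related sanity check (a sketch, not part of the container; the `check_readable` helper and the default `DIR` path are examples only), you can list entries that are not world-readable, since a backup engine running as a different user typically cannot scan them:

```shell
# Sketch: list entries directly under a directory that are NOT
# world-readable, a common cause of "0 files" backup selections.
# DIR is an example path; point it at your backup selection.
check_readable() {
  find "$1" -maxdepth 1 ! -perm -o+r 2>/dev/null || true
}

check_readable "${DIR:-/mnt/user/Data}"
```

Anything printed by this sketch is a candidate for a `chmod`/`chown` fix before blaming the container.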
ajgriglak posted September 6, 2017:

Thanks again. Yes, it appears to be something wrong with the folder structure under /user/appdata/CrashPlan - left over from the other container. When I uninstalled that container, I was able to remove /user0/appdata/CrashPlan but could not remove /user/appdata/CrashPlan. When installing this container, I did keep all the default settings. I just checked the service log and this looks related:

[09.06.17 10:53:08.249 WARN 8287_BckpSel .backup.manifest.ManifestManager] Exception initializing ManifestManager
com.code42.backup.manifest.BlockManifestRuntimeException: BMF-ERROR: Failed to make block archive directory! /usr/local/crashplan/cache/42/cpbf0000000000000000000, exists=false, isDir=false, ManifestSiloManager@2091880427[manifestPath = /usr/local/crashplan/cache/42, open = false]; MM[BT 794431381083289193>42: openCount=1, initialized = false, dataFiles.open = false, /usr/local/crashplan/cache/42]
STACKTRACE:: com.code42.exception.DebugException: Exception initializing ManifestManager
com.code42.backup.manifest.BlockManifestRuntimeException: BMF-ERROR: Failed to make block archive directory! /usr/local/crashplan/cache/42/cpbf0000000000000000000, exists=false, isDir=false, ManifestSiloManager@2091880427[manifestPath = /usr/local/crashplan/cache/42, open = false]; MM[BT 794431381083289193>42: openCount=1, initialized = false, dataFiles.open = false, /usr/local/crashplan/cache/42]

Running the commands you suggested shows full permissions:
Djoss posted September 6, 2017 (author):

I would definitely try to start with a clean, empty appdata folder. First check whether the appdata folder exists under anything in /mnt:

ls /mnt/*/appdata/CrashPlan

Worst case, edit the container settings and change the appdata mapping.
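The same check can be done recursively with `find`, which also catches copies under /mnt/user0, /mnt/cache, etc. (a sketch; the `find_appdata` helper is hypothetical and the `/mnt` default is just the layout discussed in this thread):

```shell
# Sketch: search for leftover CrashPlan appdata folders under a root
# (example root: /mnt, as in the ls command above).
find_appdata() {
  find "$1" -maxdepth 3 -type d -name CrashPlan 2>/dev/null || true
}

find_appdata "${ROOT:-/mnt}"
```

Any path it prints is a candidate for removal (or for re-mapping the container away from it) before starting clean.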
ajgriglak posted September 6, 2017:

Quoting Djoss (16 minutes ago): "I would definitely try to start with a clean, empty appdata folder. [...]"

I couldn't remove /mnt/user/appdata/CrashPlan/, but I did change the settings to map to /CrashPlan2. Also, I bit the bullet and did not adopt this time, starting fresh. It's painful to upload everything from scratch, but at least it's working. Thanks again, highly appreciated.
Monteroman posted October 9, 2017:

With the latest update to this Docker image that came out over the weekend (10/7, 10/8), the container won't stay running. It keeps stopping after you try to restart it. This is what the log says:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 10-certs.sh: executing...
[cont-init.d] 10-certs.sh: exited 0.
[cont-init.d] 10-nginx.sh: executing...
ERROR: No modification applied to /etc/nginx/default_site.conf.
[cont-init.d] 10-nginx.sh: exited 1.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Any thoughts?
Djoss posted October 9, 2017 (author):

Yeah, sorry about that; a new image is coming!
Djoss posted October 9, 2017 (author):

The new image is ready.
ryoko227 posted October 10, 2017:

I completely forgot about documenting my experience with the migration. I came over from the gfjardim version and followed the instructions. All of my settings came over smoothly, and I had no issues getting things up and running. I did have to re-point the backup directory, which ended up causing a re-upload. Not an issue though, as it was only 2 TB worth anyway and I keep nightly backups using Duplicati as well. Since the migration, I have to say that I am loving this image. In fact, files with Japanese characters in their names can now be displayed properly! The only hiccup is that sometimes when I load the web UI, the CrashPlan PRO splash screen says something about not being able to find the backend, but I just click "try again" and it loads up with no issues. Just wanted to say thank you very much for your hard work, Djoss!!
ZeusBfd posted November 14, 2017:

I'd just like to thank Djoss for this amazing container. I do have a slight issue though. Upon setting up the container, everything works fine; it's only when the container is restarted that things go a little sideways. When I restart, I get the "engine not found" message, and I also notice the ui_info file changes. I've tried giving the ui_info file read-only permission but still get the same results when restarted.

4243,aa19c4d7-36c9-4dfc-af04-01f51fc1cb99,127.0.0.1 (works fine)

then when restarted:

4243,81f08a16-6c87-4efb-91ad-ee53df01c8cb,192.168.1.200 (doesn't work; the IP changes to my NAS IP)

Can you help? I'm using a QNAP NAS, by the way.
Djoss posted November 14, 2017 (author):

I assume that Docker's network is configured in bridge mode? Did you add a mapping for port 4243?
ZeusBfd posted November 14, 2017:

My settings show as follows (Network Mode: NAT):

Host     Container  Protocol
(blank)  4243       TCP
5901     5900       TCP
5800     5800       TCP
Djoss posted November 14, 2017 (author):

Try mapping container port 4243 to host port 4243.
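On the command line, the equivalent mapping would look something like this (a sketch only; the image name `jlesage/crashplan-pro` and the host paths are assumptions based on this thread, and QNAP's Container Station UI exposes the same settings graphically):

```shell
# Map the web UI (5800), VNC (5900) and the service port (4243) that
# the ui_info file advertises. Image name and host paths are examples.
docker run -d --name CrashPlan \
    -p 5800:5800 -p 5900:5900 -p 4243:4243 \
    -v /share/appdata/crashplan:/config \
    -v /share:/storage:ro \
    jlesage/crashplan-pro
```

The point of the 4243:4243 mapping is that the port written into ui_info stays reachable at the same number from outside the container.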
ZeusBfd posted November 14, 2017:

Same results :-(
Djoss posted November 14, 2017 (author):

And it doesn't work even after you click "Yes" to retry?
ZeusBfd posted November 14, 2017:

Yeah, still no joy; it says "connecting..." for about 30 seconds, then it's unable to connect.

4243,550b1e37-a1e4-4acc-9472-553f128af5c0,127.0.0.1

then when restarted:

4243,677087fb-bbf3-4e28-843c-7dd6963cbc3a,192.168.1.200
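As the two lines above show, ui_info is a single comma-separated record: port, auth token, listen address. The changing third field can be checked with a one-liner (a sketch; the `ui_host` helper and the default file path are examples, not part of the container):

```shell
# Sketch: extract the host field (3rd comma-separated value) from a
# ui_info file. The default path below is an example only.
ui_host() {
  cut -d, -f3 "$1" 2>/dev/null || true
}

ui_host "${UI_INFO:-/share/appdata/crashplan/id/.ui_info}"
```

If this prints the NAS IP instead of 127.0.0.1 after a restart, you are hitting the same symptom described in this post.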
Djoss posted November 14, 2017 (author):

Did you try removing the container and its appdata and starting over? If that doesn't work:

1. Look at the container's logs to see if there is any issue.
2. Look at /conf/my.service.xml, under the <serviceUIConfig> config block: verify the port and host IP.
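That my.service.xml inspection can be done from the command line without opening an editor (a sketch; the `show_ui_config` helper and the default file path are examples, and the number of lines printed is arbitrary):

```shell
# Sketch: print the <serviceUIConfig> block (plus the next few lines,
# where the port and host IP live) from my.service.xml.
show_ui_config() {
  grep -A4 '<serviceUIConfig>' "$1" 2>/dev/null || true
}

show_ui_config "${CONF:-/share/appdata/crashplan/conf/my.service.xml}"
```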
ZeusBfd posted November 14, 2017:

That seems to have fixed it. For some reason, on first boot it put my NAS IP (192.168.1.200) in the my.service.xml file. I've edited it to 127.0.0.1 and all is well. Thank you so much for your help. Much appreciated.
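For anyone hitting the same symptom, the edit can be scripted (a sketch under the assumption that the host IP sits in a `<serviceHost>` element inside `<serviceUIConfig>`; the `fix_ui_host` helper and the default path are hypothetical, and you should stop the container and back up the file before editing):

```shell
# Sketch: rewrite the advertised UI host to the loopback address.
# Assumes GNU sed and a <serviceHost> element; path is an example.
fix_ui_host() {
  sed -i 's#<serviceHost>[^<]*</serviceHost>#<serviceHost>127.0.0.1</serviceHost>#' "$1" 2>/dev/null || true
}

fix_ui_host "${FILE:-/share/appdata/crashplan/conf/my.service.xml}"
```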
jbuszkie posted November 15, 2017:

Damn, I just read my e-mail about CrashPlan for Home going bye-bye!! This sucks! From what I read here, it makes more sense to migrate to the PRO version than to go to Carbonite? It seems like that's what most folks are doing. So what is the downside for me of going to PRO? It seems like it's just the cost? Right now I'm only backing up one computer (unRAID), but unRAID has Acronis backups of all my other computers! :-) I don't back up to another computer, and I only have about 600 GB backed up. So am I correct in assuming the only change for me will be the cost (eventually)?

Last question... should I transition now or later? I'm good until October of next year. Is the transition stable enough? Or should I wait a little longer for everyone else to work out the kinks! :-)

And thanks to Djoss for the quick update to the container to support the PRO transition! You rock!
Djoss posted November 15, 2017 (author):

Yes, if you are not using the pc-to-pc backup feature, then the only difference with PRO is the price. As for the migration, I think it's up to you. A couple of people have already done the transition without issue...
ppunraid posted December 29, 2017:

Djoss, I'm trying to get CrashPlan to work on unRAID, but I'm running into two issues:

1. I can't log in with my CrashPlan account using the web GUI or the VNC viewer. I just get "login failed".
2. When I enable the secure connection, I can no longer connect via the VNC viewer. VNC just stalls out trying to establish a secure connection.

I've deleted the container and the appdata folder and reinstalled, to no avail. I've attached my docker logs (Djoss-Crashplan.txt). I'm moving from another computer that is running CrashPlan. I was going to run CrashPlan on unRAID and then migrate to the PRO version. Do you think I should instead move straight to the PRO version on unRAID, or upgrade to PRO on the existing computer first? Thanks for your help and for this container.