Doug Estey Posted November 3, 2020

5 hours ago, Djoss said: Is your Docker container image up-to-date?

Oh boy. So, I wasn't seeing "Update" on the dashboard menu for CrashPlan... but alas, when I went to "Docker", there it was. 4 hours of synchronizing later, all is well. Thank you @Djoss, and sorry for wasting your time.
snowboardjoe Posted December 23, 2020 (edited)

Restores are still failing here:

root@laffy:/mnt/user/appdata/CrashPlanPRO/log# more restore_files.log.0
I 12/23/20 10:25AM 41 Starting restore from CrashPlan PRO Online: 2 files (53.10GB)
I 12/23/20 10:25AM 41 Restoring files to original location
I 12/23/20 10:42AM 41 Restore from CrashPlan PRO Online completed: 0 files restored @ 445.2Mbps
W 12/23/20 10:42AM 41 2 files had a problem
W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986488868677447596 (Read-only file system)
W 12/23/20 10:42AM 41 - Restore failed for /storage/movies/[redacted].mkv: /storage/movies/.cprestoretmp986489628182015916 (Read-only file system)

Someone said this might be fixed in the latest version, but I wasn't sure whether I needed to set UID/GID to 0, or whether there are any security concerns with that.

UPDATE: Set UID/GID to 0 and restores are now in progress.

UPDATE2: Still failed due to read-only status. I have no idea how to restore files. This is pretty serious now.

Edited December 23, 2020 by snowboardjoe
SPOautos Posted December 23, 2020

I know this question isn't really about the app itself so much as the CrashPlan service, but since all you guys are using it I thought I'd ask here. How slow are the backups? I have 12TB of data that I'd like to back up, and I add probably around 500GB per month. From what I've been reading, it 'seems' like my server would be in a constant state of backing up because the service is apparently pretty slow. My upload speed is between 20-25Mbps, and from what I gather the CrashPlan service is substantially slower than that, and one can only expect it to back up around 10GB per day. That means if I add a single 4K Blu-ray movie, it will take nearly a week to back up. With 12TB of data, it appears it would take somewhere around 3.5 years to get the initial backup completed... of course, that's if I didn't add any more data. Is this what you guys are experiencing? How are you able to use it if it's really that slow?
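As a quick sanity check on that arithmetic (the 10GB/day rate is the figure quoted above, not a measured value), the estimate can be reproduced on any host shell:

```shell
# 12 TB of existing data at the quoted ~10 GB/day,
# ignoring the ~500 GB/month of new data.
days=$(( 12000 / 10 ))   # 1200 days
awk -v d="$days" 'BEGIN { printf "%d days (~%.1f years)\n", d, d / 365 }'
# 1200 days (~3.3 years)
```

So the "around 3.5 years" figure is in the right ballpark if the 10GB/day rate were accurate.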
Gnomuz Posted December 24, 2020

13 hours ago, SPOautos said: […]

I've just installed the container and activated the 30-day trial. First, not the faintest issue setting it up, very easy to install; I just set "Maximum memory" to 4096M to avoid crashes due to low memory. As for the upload bandwidth, my feelings are mixed so far. I have 2.5 TB to back up, and started the process on Sunday. Until Sunday 11pm (all times CET), the throughput was 16 Mbps (or 2MB/s). Then it was between 32 and 40 Mbps all of Dec. 21st, which is the practical limit for my 4G internet connection. Great! I then added other shares to the backup set, and since Dec 22nd the average is back to 15/16 Mbps. So, somehow, we are throttled when backing up, that's obvious. They admit it between the lines in their FAQ, stating that we are not individually throttled, but that since the server-side bandwidth is shared, there are limitations. If that's true, they don't have enough bandwidth to supply a decent service.

But I have my doubts, as the upload is obviously capped at 16Mbps most of the time for me, which should not be the case all day long unless the server-side bandwidth is ridiculously undersized. Personally, I'll let the initial backup finish (4-5 days, less if I get decent speeds again), and then I'll see if the service is viable on a day-to-day basis. But I must admit I share your doubts...
SPOautos Posted December 24, 2020

3 hours ago, Gnomuz said: […]

Well, if you're able to upload 2.5TB in around 5 days, then that's not nearly as bad as what I was reading in other places (which was older info). That sounds fast enough to do mine in about a month. How is it on your resources? You mentioned the RAM limit you gave it... how about your CPU and HDD? Is it using enough resources that you can tell it's there working?
snowboardjoe Posted December 24, 2020

19 hours ago, snowboardjoe said: […]

Fixed the issue. Not sure if this is already documented somewhere. The /storage mount point is strictly configured to be read-only (a safe thing to do for security). In order to restore files, you need to create a new mount point in the container configuration. In my case, I just added /restore and mapped it to /mnt/user/scratch/restore. I then provided /restore as the destination to the restore job, and it worked just fine.
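For anyone setting this up outside the unRAID template, the idea above translates to something like the following sketch (the image name and host paths are assumptions based on this thread; adjust them to your own setup):

```shell
# Keep the backup sources read-only, but add a separate writable
# volume that restore jobs can write into.
docker run -d --name=crashplan-pro \
  -p 5800:5800 \
  -v /mnt/user/appdata/CrashPlanPRO:/config \
  -v /mnt/user:/storage:ro \
  -v /mnt/user/scratch/restore:/restore \
  jlesage/crashplan-pro
```

Then, in the CrashPlan restore dialog, pick /restore as the destination folder instead of "original location", since the original lives under the read-only /storage mount.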
snowboardjoe Posted December 24, 2020

17 hours ago, SPOautos said: […]

I'm getting much faster rates than that and have been using the service for many years now. Is your rate dropping over time? Did it ever complete the initial full backup?
Gnomuz Posted December 25, 2020 (edited)

On 12/24/2020 at 3:02 PM, SPOautos said: […]

Hi, well, the backup process has been running continuously for four and a half days, so I can step back a bit more now. The data to be backed up is 952 GB locally, 891 GB have been completed, and the remaining 61 GB should be uploaded in 11 hours from now. So the overall "performance" should be 952 GB in 127 hours, an average of 180 GB per day. Roughly, that is 2.1 MB/s or 17 Mbps, which is consistent with the obvious throttling I can see in Grafana for the CrashPlan container's upload speed. Data is compressed before being uploaded, so translating the size of the data to back up into network upload size is not totally accurate, and the level of compression will depend highly on the data you back up. For me, the 893 GB backed up so far translated into 787 GB uploaded, i.e. a compression ratio of 88%. To sum up, if you get the same upload speed and compression ratio as me, your initial 12TB backup should generate 10.8 TB (10,830 GB) to upload at an average speed of 180 GB per day, i.e. circa 60 days... Btw, as the upload speed of your internet connection seems to be 20/25 Mbps, the best you could expect for uploading this amount of data is circa 40/50 days, so you wouldn't be that throttled. As for the 10GB per day you heard of, I suppose that's what you found in the CrashPlan FAQ. Let's say it's somehow their commitment; even if it's not legally binding, they take very, very little risk in not fulfilling it, since it's circa 1 Mbps...

As for the system resources, the container has an avg CPU load of 9% (Ryzen 7 3700X CPU), avg 1.2 GB memory load (out of 32 GB), and a constant array I/O read of 2 MB/s. So you can see it's running, but it has a low footprint on the overall server load. I hope that helps you decide on your backup strategy.

Edited December 26, 2020 by Gnomuz
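Gnomuz's estimate above can be reproduced from the observed numbers (the ~88% compression ratio and ~180 GB/day were measured on his particular data, so treat them as examples, not guarantees):

```shell
# Estimate initial upload days from the observed figures in the post:
#   787 GB uploaded for 893 GB backed up (~88% compression)
#   180 GB/day average upload rate
awk 'BEGIN {
  data_gb  = 12000          # 12 TB to back up
  comp     = 787 / 893      # observed compression ratio
  gbperday = 180            # observed average upload per day
  printf "%.0f days\n", data_gb * comp / gbperday
}'
# 59 days
```

Which lands right on the "circa 60 days" figure; a slower real-world rate or less compressible data pushes it up accordingly.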
SPOautos Posted December 27, 2020

On 12/25/2020 at 2:54 AM, Gnomuz said: […]

Thank you! Even at 60 days, that's better than where I'll be in 60 days if I don't do it. I appreciate the very detailed feedback!
ezzys Posted December 30, 2020

Hi, I am having trouble using a reverse proxy to access CrashPlan (note I am using Nginx Proxy Manager to manage this). When accessing CrashPlan via the reverse proxy, I get a red cross and error 1006 server disconnect. I got around this on another docker (CloudBerry) by using HTTPS and setting a login on the GUI. I have tried enabling "Secure Connection:" in the docker template, but this did not work. Any suggestions appreciated. The conf file is below.

server {
    set $forward_scheme http;
    set $server "[local ip of server]";
    set $port 7810;

    listen 8080;
    listen [::]:8080;
    listen 4443 ssl http2;
    listen [::]:4443;

    server_name crashplan.mydomain.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-14/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-14/privkey.pem;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    access_log /config/log/proxy_host-10.log proxy;

    location / {
        # Force SSL
        include conf.d/include/force-ssl.conf;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_http_version 1.1;

        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
Djoss Posted December 31, 2020

7 hours ago, ezzys said: […]

Can you try the same steps proposed earlier? Access in a private/incognito window, and look at the browser's console (in developer tools) for any errors. This should indicate the reason for the failure.
ezzys Posted December 31, 2020

17 hours ago, Djoss said: […]

Just tried a private window and it works. Looking at the browser tools, it was blocking / causing issues with the connection to the websocket. Looks like uBlock Origin was the cause. Disabled it for CrashPlan and it now works fine.
Djoss Posted December 31, 2020 (edited)

43 minutes ago, ezzys said: […]

I'm not sure I understand what the issue/fix was. Could you elaborate?

Edited December 31, 2020 by Djoss
ezzys Posted December 31, 2020

1 minute ago, Djoss said: […]

It was not connecting to the VNC; that was the error in the console. I just had to disable the uBlock Origin add-on in Firefox for the CrashPlan webpage, and the error with the connection to the VNC disappeared.
Solverz Posted January 16, 2021

Is it recommended to back up your flash drive, docker appdata, libvirt & domains folder directly with CrashPlan? Or is there a more recommended method to do this while still backing up to CrashPlan?
Flemming Posted February 2, 2021

Anyone have a solution for this?
Djoss Posted February 2, 2021

6 hours ago, Flemming said: Anyone have a solution for this?

Is your Docker image up-to-date?
repomanz Posted February 18, 2021

@Djoss Got this in an email today. Unsure if it requires container changes, but thought I'd pass it along.

"Beginning March 4, 2021, CrashPlan will require two-factor authentication when logging in to your CrashPlan for Small Business account from the administration console. Use of two-factor authentication helps prevent unauthorized account access."
Djoss Posted February 18, 2021

41 minutes ago, repomanz said: […]

I got the same email. I've enabled two-factor authentication on my account, and it did not affect signing in from the app.
hpka Posted February 18, 2021

This link was in the same email, with practical details.
repomanz Posted February 20, 2021

On 2/18/2021 at 9:52 AM, Djoss said: […]

Thanks Djoss
RealActorRob Posted February 27, 2021

Got it set up and running, but with the default users 99/100. "To find the right IDs to use, issue the following command on the host, with the user owning the data volume on the host:" ... who 'owns' the data volumes on unRAID?

Ultimate question: is there a more in-depth 'For Dummies' guide to setup?
Djoss Posted March 1, 2021

On 2/27/2021 at 6:15 PM, RealActorRob said: […]

The template on unRAID already sets up the container with proper values. So if you kept the default ones, you should be all good!
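For anyone who still wants to verify ownership by hand, something along these lines works on the host (the path here is a placeholder; on stock unRAID, array files are typically owned by nobody:users, i.e. 99:100, which matches the template defaults):

```shell
# Show the numeric UID:GID that own a data directory on the host.
# DATA_PATH is a placeholder; point it at /mnt/user/<share> on unRAID.
DATA_PATH="${DATA_PATH:-/tmp}"
stat -c '%u:%g' "$DATA_PATH"

# Map a user name to its numeric IDs:
id -u nobody
id -g nobody
```

If those numbers match the USER_ID/GROUP_ID in the container template, the defaults are fine as-is.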
RealActorRob Posted March 1, 2021

15 minutes ago, Djoss said: […]

Kthx! @Djoss
jademonkee Posted March 11, 2021

Noticed that my CPU was getting hammered this evening, and it looks like CrashPlan is the culprit. It's currently backing up my ~26GB CA Backups appdata.tar.gz file (which it reckons will take 3 days...). This isn't the first time that file has been backed up, but I've never noticed such a spike in CPU usage. Any ideas what could be up? I heard that too little RAM allocation can spike the CPU, so I added the env var for 4096M (as well as setting it via the CrashPlan command line), but it still seems to be hitting my CPU harder than I've seen before (while also only using 1.5GB RAM). Any idea why? Is it common for large single files to cause this and I've just never noticed before, or is something problematic going on? Thanks for your help and experience.