lordbob75 Posted October 22, 2017
Well it's working now. Didn't change anything. Maybe I just needed to let it sit and load for a while...
Djoss (Author) Posted October 22, 2017
Yeah, it can take a few seconds before being ready, but it should not be that long. Unless you just enabled the secure connection, in which case it can take 1-2 minutes (but only for the first start).
JustinAiken Posted October 22, 2017
After working ever since day 1 of the Pro container, it started failing to connect to the backup engine consistently yesterday. Bumped the max mem up to `2048M`, and now it works great again.
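For anyone wanting to do the same, here is a rough sketch of how the engine memory can be raised when creating the container. The variable name, image name, and port are taken from the container's documentation as I remember them, and the paths are just examples, so double-check everything against your own setup:

```
# Recreate the container with a larger CrashPlan engine heap (2048M here).
# CRASHPLAN_SRV_MAX_MEM, the image name, and port 5800 are assumptions based
# on the container's docs; the host paths are examples - adjust all of them.
docker run -d \
  --name=crashplan-pro \
  -e CRASHPLAN_SRV_MAX_MEM=2048M \
  -v /mnt/user/appdata/crashplan-pro:/config:rw \
  -v /mnt/user:/storage:ro \
  -p 5800:5800 \
  jlesage/crashplan-pro
```

On unRAID the same thing is usually done by editing the container template and adding/changing the environment variable there rather than running docker by hand.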
evan Posted October 23, 2017
On 10/19/2017 at 8:57 AM, meoge said:
What did you increase it to?
I increased it to 32G (overkill, I know) just to make sure it doesn't crash for now. I will decrease it eventually, but I just wanted to make sure my backup of about 3.5M files finishes.
JustinAiken Posted October 30, 2017
Updated to the new 1.20 container today without issue.
sauvagii Posted November 3, 2017
I'm having problems where the CrashPlan app cannot locate my existing backup sources; they all show as Missing. I've tried adding paths to each, but they don't seem to have any effect. Have I missed something completely? (It's highly likely.)
Djoss (Author) Posted November 3, 2017
Are you replacing a previous installation?
sauvagii Posted November 3, 2017
Yes, from CrashPlan Home. I did the adoption process and it said "adoption successful" at the time, but it still says all of my folders are missing. I assumed all was OK when the message about successful adoption came up, so I logged out and didn't check again. I was away from home for a few days, so I'm struggling to remember if I missed a step!
Djoss (Author) Posted November 3, 2017
If you were not using my container for CrashPlan Home, then the paths to your files inside the container are not the same. You need to re-select them; they are under /storage. You will then have two sets of selections, one missing and the other not.
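To make the path change concrete: whatever host folders are mapped into the container appear under /storage, so a folder that CrashPlan Home backed up as /mnt/user/Photos would now be re-selected as /storage/Photos. The container name and host path below are only examples; a quick way to sanity-check your own mapping:

```
# Check that the old host path is visible under /storage inside the container
# (container name and folder are examples - substitute your own).
docker exec crashplan-pro ls /storage/Photos
# If this lists the same files as /mnt/user/Photos on the host, then
# /storage/Photos is the path to re-select in the CrashPlan UI.
```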
sauvagii Posted November 3, 2017
So once I have let this scan with the new selections, is it safe to delete the ones marked as missing?
Djoss (Author) Posted November 3, 2017
If you don't care about the file versions provided by the old backup, yes, you can remove the missing items.
Djoss (Author) Posted November 3, 2017
You are welcome!
gzibell Posted November 5, 2017
Anyone else seeing CPU usage spike with this docker running? When my server is sitting idle, it is normally at 0-5% CPU usage. When I start this docker, it sits around 30%+ all the time. It is running as expected and doing everything it should, but what is it doing all the time? Backups are scheduled to run 2am-6am only. Changing CrashPlan's CPU settings ("% when user is away" / "% when present") doesn't seem to change anything. I also tried the niceness setting in the advanced setup but didn't see a change there either. Am I missing something? Is this normal for this docker? This was a fresh install/backup set, so no conversion or other stuff like that.
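Until the cause is found, one interim option is to cap the container itself at the Docker level rather than relying on CrashPlan's own settings. This is only a sketch, not a fix; the container name is an example and the --cpus flag needs a reasonably recent Docker version:

```
# Cap the running container to roughly one CPU core (name and value are examples).
docker update --cpus="1.0" crashplan-pro
# On older Docker versions, lower its scheduling weight instead (default is 1024):
docker update --cpu-shares=512 crashplan-pro
```

On unRAID, the equivalent is usually done with CPU pinning or extra parameters in the container template.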
Shadowrunner Posted November 6, 2017
I'm getting the same, although only since the latest update to the docker. Wasn't a problem before that.
SR
Djoss (Author) Posted November 6, 2017
Thanks for reporting, I'm looking into this.
Lebowski Posted November 7, 2017
Same issue with me on the CPU usage.
Djoss (Author) Posted November 7, 2017
New update available. It should fix the CPU usage issue!
Shadowrunner Posted November 7, 2017
Looks like that's sorted it. Thanks for the quick response.
SR
gzibell Posted November 7, 2017
3 hours ago, Djoss said:
New update available. It should fix the CPU usage issue!
Thanks!!! Updated to the latest and all looks great!
mustaqiim Posted November 18, 2017
Hey guys, I was wondering if this upload speed is normal? Can't seem to get higher than 4Mbps :\ It's not saturating my current upload speed though.
Fatal_Flaw Posted November 21, 2017
On 11/18/2017 at 4:18 AM, mustaqiim said:
Hey guys, I was wondering if this upload speed is normal? Can't seem to get higher than 4Mbps :\ It's not saturating my current upload speed though.
I've gotten much lower speeds since migrating to Small Business from Home. I doubt it's the container, because I've been getting terrible speeds since the migration on both gfjardim's container and this one. It rarely goes above 2Mbps. Before the Small Business migration I'd get 4.5-5.5Mbps. I don't know if they're throttling me since the migration or if Small Business is inherently slower, but it's a real problem.
denishay Posted November 21, 2017
Hi guys,
It took me a while to figure out why CrashPlan wasn't maxing (or at least near-maxing) my upload capacity. The thing is that CrashPlan seems to have been developed with much lower connections in mind than the ones we have today.
• Stop the CrashPlan docker
• Go to your appdata folder and navigate to your CrashPlan folder
• Edit the "my.service.xml" file in the "conf" folder and change the following value to 1:
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
• This effectively disables data de-duplication for any file larger than 1 byte for any backup done over the internet
• Start your CrashPlan docker again
This always had me maxing my connection again. Otherwise, I rarely went beyond 2 MB/s. (Currently happily going over 25 MB/s with that.)
Kind regards,
Denis
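If you prefer doing the edit from the unRAID shell instead of a text editor, something along these lines works; the container name and appdata path are assumptions, so adjust both, and keep the backup copy in case anything goes wrong:

```
# Stop the container, raise the WAN de-dup threshold to 1 byte, restart it.
# Container name and appdata path are assumptions - adjust to your setup.
docker stop crashplan-pro
cp /mnt/user/appdata/crashplan-pro/conf/my.service.xml \
   /mnt/user/appdata/crashplan-pro/conf/my.service.xml.bak
sed -i 's|<dataDeDupAutoMaxFileSizeForWan>[0-9]*</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|' \
    /mnt/user/appdata/crashplan-pro/conf/my.service.xml
docker start crashplan-pro
```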
denishay Posted November 21, 2017
45 minutes ago, Fatal_Flaw said:
I've gotten much lower speeds since migrating to Small Business from Home. [...]
It could have been the Pro version overwriting your my.service.xml file during an update (see my post above).
Fatal_Flaw Posted November 21, 2017
19 minutes ago, denishay said:
It could have been the Pro version overwriting your my.service.xml file during an update (see my post above).
I have double-checked my config file and it does still have dedupe disabled, yet it is uploading at the speed I would expect if dedupe were enabled. With the config file being correct, I don't know where to go from here.
<compression>OFF</compression>
<dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
<dataDeDuplication>MINIMAL</dataDeDuplication>
I went and looked at some old logs:
I 10/01/17 12:16AM [XXXXXX Backup] Stopped backup to CrashPlan Central in 21.2 hours: 116 files (44.40GB) backed up, 44GB encrypted and sent @ 6.8Mbps
and compared them to the new logs:
I 11/21/17 07:43AM [XXXXXX Backup] Stopped backup to CrashPlan Central in 4.7 hours: 25 files (4.70GB) backed up, 4.60GB encrypted and sent @ 1.6Mbps (Effective rate: 1.9Mbps)
So I don't know if they're throttling now, or if something is screwed up with my client despite what the config file says. I'm tempted to just delete the container, reinstall, and instead of attempting to transfer all the settings from gfjardim's container, just adopt the backup. I'm concerned that with dedupe being disabled, I'll end up having to re-upload everything.
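In case it helps anyone else sanity-check the same thing from the host without opening the file, a one-liner like this shows the relevant values (the appdata path is an example, same caveat as the posts above):

```
# Show the de-dup and compression settings currently in the engine config.
# The appdata path is an example - point it at your own CrashPlan conf folder.
grep -E 'dataDeDup|compression' /mnt/user/appdata/crashplan-pro/conf/my.service.xml
```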