Djoss Posted September 9, 2018 (Author)

2 hours ago, J.R. said:
I'm so lost on this. I did the migration to CP for SB months ago and it said it successfully uploaded 300+ GB to the cloud (and had the internet usage to prove it, it would seem). I thought everything was running fine, as I kept getting green check mark backup reports, until I noticed the fine print of 0 MB being backed up... >:[ (it's been a VERY busy year). CrashPlan can't seem to actually see some unRAID folders for some backup sets, and I can only 'see' ~40 GB of the supposed 300+ that were uploaded to the cloud. Where's the rest of my stuff? How come I can see some unRAID server folders in some backup sets but not others? How come, although I can see some folders in some backup sets, CrashPlan can't actually seem to see them or back them up? I tried contacting CrashPlan and got this: I tried adding a container path named 'User' (/mnt/user/) under 'Add another path, port, variable or device' with full read/write access, but all that accomplished was making CP unable to load, so I deleted it. Please help. Let me know if there are any other screen captures, logs or other info that would be useful.

This is a permission issue. Where are the files you want to back up located? Under /mnt/user? Also, the paths in CrashPlan don't seem to match what you have in your container config.
J.R. Posted September 10, 2018 (edited)

On 9/8/2018 at 6:03 PM, Djoss said:
This is a permission issue. Where are the files you want to back up located? Under /mnt/user? Also, the paths in CrashPlan don't seem to match what you have in your container config.

Yeah, under /mnt/user etc. Am I to gather that 'Storage' is not the path for the backup storage locations, but the data I want backed up? (That's not very intuitive, if so...) How do I go about adding my external drives as alternate backup paths, then?

Edited September 10, 2018 by J.R.
Djoss Posted September 10, 2018 (Author)

46 minutes ago, J.R. said:
Yeah, under /mnt/user etc. Am I to gather that 'Storage' is not the path for the backup storage locations, but the data I want backed up? (That's not very intuitive, if so...) How do I go about adding my external drives as alternate backup paths, then?

You need to add an additional "Path". Click on "Add another Path, Port, Variable, Label or Device" in the container settings. You can then map your external drive to a path inside the container.
J.R. Posted September 10, 2018

7 minutes ago, Djoss said:
You need to add an additional "Path". Click on "Add another Path, Port, Variable, Label or Device" in the container settings. You can then map your external drive to a path inside the container.

Thanks, so 'Storage' I need to switch to /mnt/user, and then add a 'Path' for my external drives... Will attempt when I get home.
J.R. Posted September 11, 2018

11 hours ago, Djoss said:
You need to add an additional "Path". Click on "Add another Path, Port, Variable, Label or Device" in the container settings. You can then map your external drive to a path inside the container.

Well, on a good note, the cloud backup appears to be doing things! However, I've tried adding the root location of the external drives (/mnt/disks), the individual drives (/mnt/disks/2TB Backup 01) and the folders on the drives (/mnt/disks/2TB Backup 01/Backup 01/) as new 'Paths', and CrashPlan can't seem to see any of them as an available backup location. Not sure what I'm missing there?
Djoss Posted September 11, 2018 (Author)

5 hours ago, J.R. said:
Well, on a good note, the cloud backup appears to be doing things! However, I've tried adding the root location of the external drives (/mnt/disks), the individual drives (/mnt/disks/2TB Backup 01) and the folders on the drives (/mnt/disks/2TB Backup 01/Backup 01/) as new 'Paths', and CrashPlan can't seem to see any of them as an available backup location. Not sure what I'm missing there?

By "backup location", do you mean a backup destination? Note that when mapping external disks to a container, the access mode should be set to "RO/Slave" or "RW/Slave".
J.R. Posted September 14, 2018

On 9/11/2018 at 2:35 AM, Djoss said:
By "backup location", do you mean a backup destination? Note that when mapping external disks to a container, the access mode should be set to "RO/Slave" or "RW/Slave".

Thanks, I've done that now. What do I put in 'Container Path'? I still can't seem to see the drives in CP.
Djoss Posted September 14, 2018 (Author)

7 hours ago, J.R. said:
Thanks, I've done that now. What do I put in 'Container Path'? I still can't seem to see the drives in CP.

The container path is the path of the mapped folder inside the container. It can be anything. For example, in your case, if the host path is "/mnt/disks/2TB Backup 01", your container path could be "/2TB Backup 01".
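For anyone setting this up outside the unRAID template, the same idea can be sketched as plain `docker run` flags. This is an assumption-laden example, not the template's exact output: the paths are the ones discussed above, and "rw,slave" corresponds to the "RW/Slave" access mode mentioned earlier in the thread.

```shell
# Example only: map the array and an Unassigned Devices disk into the container.
# The paths contain spaces, so they must be quoted. The container-side path
# ("/2TB Backup 01") is arbitrary; it is what CrashPlan will see.
docker run -d --name CrashPlanPRO \
  -v "/mnt/user:/storage:rw" \
  -v "/mnt/disks/2TB Backup 01:/2TB Backup 01:rw,slave" \
  jlesage/crashplan-pro
```

The "slave" propagation flag matters because Unassigned Devices mounts disks after the container may already be running; without it, a disk mounted later on the host would not show up inside the container.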
J.R. Posted September 15, 2018

On 9/14/2018 at 2:41 AM, Djoss said:
The container path is the path of the mapped folder inside the container. It can be anything. For example, in your case, if the host path is "/mnt/disks/2TB Backup 01", your container path could be "/2TB Backup 01".

Thanks, that got it working!
jbuszkie Posted September 20, 2018

So I have to migrate by the end of the month. (Yeah, I've been procrastinating...) Are the instructions in the very first post still valid? My CrashPlan docker says there is an update available. I think I was holding off because I didn't know what it would have done. Do I update before I move to the PRO docker?
Djoss Posted September 20, 2018 (Author)

31 minutes ago, jbuszkie said:
So I have to migrate by the end of the month. (Yeah, I've been procrastinating...) Are the instructions in the very first post still valid? My CrashPlan docker says there is an update available. I think I was holding off because I didn't know what it would have done. Do I update before I move to the PRO docker?

Yes, the instructions are still valid. And there's no need to upgrade the container before moving.
jbuszkie Posted September 21, 2018

Thanks!! Seems to have worked... I'll see later if it's all really there. It looks like it's synchronizing block information now. Jim
DZMM Posted September 24, 2018

I've just started using this docker today. I have a couple of questions, please:
1. Where does the docker store temporary files while it is uploading, or does it do everything in RAM?
2. Any tips on increasing upload speeds? For example, does assigning more cores help, or does the docker not need a lot of resources? I seem to be averaging only around 3-4 Mbps, peaking at 12 Mbps if I'm lucky.
Djoss Posted September 25, 2018 (Author)

20 hours ago, DZMM said:
1. Where does the docker store temporary files while it is uploading, or does it do everything in RAM?

I don't know if CrashPlan uses temporary files or internal buffers, but if it uses temporary files, they will end up on your cache drive.

20 hours ago, DZMM said:
2. Any tips on increasing upload speeds? For example, does assigning more cores help, or does the docker not need a lot of resources? I seem to be averaging only around 3-4 Mbps, peaking at 12 Mbps if I'm lucky.

Upload speed to CrashPlan servers is slow (this is a known fact), but a lot of deduplication is done. You can look at Tools->History to get a better idea of the effective upload speed. Since deduplication requires a lot of calculations, you can try to see if allocating more cores helps.
DZMM Posted September 25, 2018

1 hour ago, Djoss said:
I don't know if CrashPlan uses temporary files or internal buffers, but if it uses temporary files, they will end up on your cache drive.

It must be internal buffers, as it doesn't seem to be in the docker image, and it can't be on the cache drive as there's no mapping for it.

1 hour ago, Djoss said:
Upload speed to CrashPlan servers is slow (this is a known fact), but a lot of deduplication is done. You can look at Tools->History to get a better idea of the effective upload speed. Since deduplication requires a lot of calculations, you can try to see if allocating more cores helps.

Thanks - I've checked History and the 'effective' rates are much higher than the rates I'm seeing on my network, so I'll have to trust that the dedupe is working... I don't think throwing any more cores at it will help in my scenario. I've pinned it to three cores shared with other dockers and they are only running at about 50%. I tried temporarily giving it another 3 cores, but it didn't make a difference. I'll just have to be patient - my valuable files should take around 10 days, and then I'll start adding my media a bit at a time; that will take months to back up.
RAINMAN Posted September 26, 2018

Anyone have CrashPlan complaining about running out of inotify watches? On my unRAID system I should have 2 million available, and I can confirm I have free watches through SSH on unRAID. Any idea why CrashPlan is complaining? I assume it's inside the docker that it ran out. Is there a way to increase this inside the docker?
Djoss Posted September 26, 2018 (Author)

47 minutes ago, RAINMAN said:
Anyone have CrashPlan complaining about running out of inotify watches? On my unRAID system I should have 2 million available, and I can confirm I have free watches through SSH on unRAID. Any idea why CrashPlan is complaining? I assume it's inside the docker that it ran out. Is there a way to increase this inside the docker?

The inotify watches are shared with the host, so both unRAID and the container have the same limit. You can verify with the following commands:
cat /proc/sys/fs/inotify/max_user_watches
docker exec CrashPlanPRO cat /proc/sys/fs/inotify/max_user_watches
Maybe it was a temporary issue? If you restart the container, do you get the same message?
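To make the point concrete: the watch limit is a kernel-wide sysctl, so it is checked and raised on the host, never inside the container. The value below is only an example; on unRAID, a line in the `go` file is one common way to make it persist across reboots.

```shell
# The inotify watch limit lives in the shared kernel, so host and
# container always see the same number.
cat /proc/sys/fs/inotify/max_user_watches

# Same check from inside the container (should print the same value):
# docker exec CrashPlanPRO cat /proc/sys/fs/inotify/max_user_watches

# Raise the limit (example value; requires root, resets on reboot):
# sysctl -w fs.inotify.max_user_watches=2097152
```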
RAINMAN Posted September 26, 2018

7 minutes ago, Djoss said:
The inotify watches are shared with the host, so both unRAID and the container have the same limit. Maybe it was a temporary issue? If you restart the container, do you get the same message?

Could have been a one-time issue. I restarted the docker and the error didn't come back. I was just thinking it may be a one-time error that doesn't appear again because it says it is unable to monitor in real time, so maybe it disabled that function. I'll monitor it some more and see. I bumped it up to 3.5 million, as I have the RAM to spare, and I did verify that within the docker it is the same as the host, as you described. Thanks for the quick reply.
plupien79 Posted October 2, 2018 (edited)

I installed this today and all my shares have disappeared. The data and file structure are still there; however, they are not listed, nor can I access them from other machines. Any advice?

Edited October 2, 2018 by plupien79 (spelling)
Djoss Posted October 2, 2018 (Author)

9 hours ago, plupien79 said:
I installed this today and all my shares have disappeared. The data and file structure are still there; however, they are not listed, nor can I access them from other machines. Any advice?

Did you install the container using the Community Applications plugin? With default settings?
Rebel Posted October 2, 2018 (edited)

Hi, I installed the container using the app install (having migrated from the old Home plug-in that stopped working last week). It starts up, but whenever I try to click on the "replace existing" option to adopt the backup, it just goes back to the first screen after login. Running UR 6.6.0.

Edit: I've tried the built-in console on Firefox and Chrome, and VNC direct.

Edited October 2, 2018 by Rebel
Djoss Posted October 3, 2018 (Author)

7 minutes ago, Rebel said:
Hi, I installed the container using the app install (having migrated from the old Home plug-in that stopped working last week). It starts up, but whenever I try to click on the "replace existing" option to adopt the backup, it just goes back to the first screen after login. Running UR 6.6.0.

This is a known issue with the latest version. See https://github.com/jlesage/docker-crashplan-pro/issues/134#issuecomment-425216067 for the workaround.
Rebel Posted October 3, 2018

20 hours ago, Djoss said:
This is a known issue with the latest version. See https://github.com/jlesage/docker-crashplan-pro/issues/134#issuecomment-425216067 for the workaround.

Thank you sir, you are a god amongst us mere mortals. Disappointing that Code42 just throws that out and doesn't fix it; we can't be the only people in the world using custom keys.
plupien79 Posted October 8, 2018

On 10/2/2018 at 7:38 PM, Djoss said:
Did you install the container using the Community Applications plugin? With default settings?

Yes, that's exactly how I did it... I did get all my shares back after a reboot, though.
Seanraz Posted October 9, 2018

Hi there, for some reason, as of 2 days ago, CrashPlan's connection went into "waiting for connection" status. I've done everything I could think of to kick this back to connected status, even up to reinstalling the docker application. Code42 was less than helpful once they realized I was using a docker application. I'm not sure what else to do; has anyone run into this issue before?