[Support] Djoss - CloudBerry Backup



Support for CloudBerry Backup docker container


Application Name: CloudBerry Backup
Application Site: https://www.cloudberrylab.com/backup/linux.aspx
Docker Hub: https://hub.docker.com/r/jlesage/cloudberry-backup/
Github: https://github.com/jlesage/docker-cloudberry-backup

 

Make sure to look at the complete documentation, available on GitHub!


Post any questions or issues relating to this docker in this thread.


  • 2 weeks later...

Does this work with Google Drive? The website says it does, but maybe that's a different version, because I don't see it in the app; only Google Cloud.

 

Edit: I guess the Linux version has fewer supported cloud providers.


  • Author

You are right, the Linux version has fewer features than its Windows counterpart.

 

I will open a ticket with CloudBerry support to ask if there is any plan to add support for Google Drive.  I know the Windows version has it.

  • Author

So the answer I got from CloudBerry is that they do plan to add support for Google Drive, but there is no ETA.

Cool, thanks for checking.

  • 2 months later...

Can this Linux version connect to Amazon Glacier? I don't see it in the setup.

  • Author

If you cannot create a 'Glacier' account, I think you need to use Amazon S3. Then, in your AWS console, you create a lifecycle policy that puts everything into Glacier.
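
If you prefer to script it rather than click through the AWS console, a boto3 sketch along these lines should work (the bucket name is just a placeholder, and boto3 is my assumption here, not something CloudBerry ships):

```python
import boto3

s3 = boto3.client("s3")

# Transition every object in the bucket to Glacier right away.
# "my-backup-bucket" is a placeholder; use your own bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "everything-to-glacier",
                "Filter": {"Prefix": ""},  # empty prefix = match all objects
                "Status": "Enabled",
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```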

1 minute ago, Djoss said:

If you cannot create a 'Glacier' account, I think you need to use Amazon S3. Then, in your AWS console, you create a lifecycle policy that puts everything into Glacier.

 

Interesting. I'll go look for that.

Trying to use this with Backblaze, but every time the Docker container restarts it wipes all configuration and presents me with the activation window. Thoughts?

1 minute ago, Djoss said:

Is your /config folder properly mapped?

I left it at default. Should it be manually mapped?

  • Author

No, but just double-check that it is mapped to something like /mnt/user/appdata/CloudBerryBackup.
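
If you'd rather check from the command line than the unRAID UI, a quick sketch like this prints the container's mappings (it assumes the container is named CloudBerryBackup, which is the usual template name; adjust if yours differs):

```python
import json
import subprocess

# Ask Docker for the container's details and list its volume mappings.
# "CloudBerryBackup" is an assumed container name; change it to match yours.
out = subprocess.run(
    ["docker", "inspect", "CloudBerryBackup"],
    capture_output=True, text=True, check=True,
)
for mount in json.loads(out.stdout)[0]["Mounts"]:
    print(f"{mount['Source']} -> {mount['Destination']} ({mount['Mode']})")
```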

 

Also, are you using the trial, or do you have a registration key?

  • Author
42 minutes ago, ksignorini said:

 

Interesting. I'll go look for that.

I just saw that lifecycle policies are supported directly in CloudBerry Backup.  Just select Edit->Lifecycle Policy to choose which folders go to Glacier.

3 minutes ago, Djoss said:

No, but just double-check that it is mapped to something like /mnt/user/appdata/CloudBerryBackup.

 

Also, are you using the trial, or do you have a registration key?

I'm using the Free version, although with the demise of CrashPlan I may as well go ahead and buy a license. Just not sure if I can get away with Pro or if I need Ultimate -- their site references storage limits but doesn't say what they are.

  • Author

It's working well for me, but I will try to reproduce with a free version, just in case it is saving stuff outside /config...

  • Author

And if it's possible for you, can you try removing the container and its folder under appdata, then re-installing using the Community Apps plugin, keeping all default settings?

55 minutes ago, Djoss said:

I just saw that lifecycle policies are supported directly in CloudBerry Backup.  Just select Edit->Lifecycle Policy to choose which folders go to Glacier.

 

Odd that the how-to videos on their website show Glacier as a separate service.

 

I'll probably go with B2 if I go the CBB route, anyway.

So I've tested CBB (your container, @Djoss) and I can back up from my unRAID box and restore to my Mac (with the Mac client) from the same backup set (making sure that the "prefix" is set the same). I'm testing with a free B2 account.

 

However, I can't restore back to my unRAID box. Ever. In the same location or another location. The restore always fails.

 

Any ideas?

 

 

[Attached screenshot: Fail.jpg]


  • Author

By default, the /storage volume is read-only. This can be changed in the container's settings.
You can do a quick test and restore a file to /tmp to see if it's working.
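
If you want to check writability directly instead of doing a test restore, a tiny sketch like this, run inside the container (e.g. via docker exec), tells you right away:

```python
import tempfile

def is_writable(path: str) -> bool:
    """Return True if a file can be created under the given directory."""
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False

# With the default read-only /storage mapping, the first line prints False.
print("/storage writable:", is_writable("/storage"))
print("/tmp writable:", is_writable("/tmp"))  # should print True
```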

Ahhhhhh. I never thought of that.

2 hours ago, Djoss said:

By default, the /storage volume is read-only. This can be changed in the container's settings.
You can do a quick test and restore a file to /tmp to see if it's working.

 

I've tested and it restores nicely to /tmp (inside the container). 

 

But how do I make /storage r/w for the container? This would be necessary for a larger restore. (I can't seem to find a setting in the docker's Edit screen. I'm pretty new to Docker.)

  • Author

In the container's settings, click "Show advanced settings ...", near the bottom of the page.  You will see the mapping for the storage.  To be able to edit it, you need to enable the page's Advanced View: at the top-right of the page, click the "Basic View" toggle.

 

If you don't want to change the defaults, another solution is to add a new path mapping that is R/W.  For example, the container path could be /restore, as in the sketch below.
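
For reference, outside the unRAID UI the same extra mapping could be expressed with the docker-py library. This is only a sketch; the host paths and container name are examples, not your actual setup:

```python
import docker

client = docker.from_env()

# Re-create the container with an extra read-write mapping for restores.
# Host paths and the container name are examples; match them to your system.
client.containers.run(
    "jlesage/cloudberry-backup",
    name="CloudBerryBackup",
    detach=True,
    volumes={
        "/mnt/user/appdata/CloudBerryBackup": {"bind": "/config", "mode": "rw"},
        "/mnt/user": {"bind": "/storage", "mode": "ro"},          # default read-only share
        "/mnt/user/restore": {"bind": "/restore", "mode": "rw"},  # new R/W restore target
    },
)
```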

Does this app support 'always on' type backups like CrashPlan? Like, as soon as a new item appears in the folder, it will be pushed to the cloud?

Or does it only back up on a schedule?

If it makes any difference, I think I would use it with Backblaze B2.

Thanks.
