[Support] Djoss - CloudBerry Backup



  • 1 month later...

I have installed the CloudBerry Backup Docker container and it is working well.  Thanks very much for setting this up for us.

I have configured the system to store to IDrive Cloud using CloudBerry-specific configuration details provided by IDrive.  Good job IDrive.

 

I first configured CloudBerry to use the S3-compatible API, but I could not get a list of my buckets in the Bucket drop-down of the Storage > Add Account dialog. I tried several times with the S3 keys and endpoint specified by IDrive. Switching to the OpenStack API worked fine.

 

Have others been successful using the S3 compatible storage API?  Have others worked with IDrive Cloud?

 

Thanks in advance for any feedback.

 

P.S. IDrive Cloud offers 2TB of cloud storage for US$69.50, discounted to $6.95 for the first year. They offer a 5TB account for under $100.

Link to comment
On 3/29/2020 at 10:56 AM, Larz said:

I have installed the CloudBerry Backup Docker container and it is working well.  Thanks very much for setting this up for us.

I have configured the system to store to IDrive Cloud using CloudBerry-specific configuration details provided by IDrive.  Good job IDrive.

 

I first configured CloudBerry to use the S3-compatible API, but I could not get a list of my buckets in the Bucket drop-down of the Storage > Add Account dialog. I tried several times with the S3 keys and endpoint specified by IDrive. Switching to the OpenStack API worked fine.

 

Have others been successful using the S3 compatible storage API?  Have others worked with IDrive Cloud?

 

Thanks in advance for any feedback.

 

P.S. IDrive Cloud offers 2TB of cloud storage for US$69.50, discounted to $6.95 for the first year. They offer a 5TB account for under $100.

I never tried IDrive Cloud, but if they offer an S3-compatible API and it doesn't work, you might have better luck checking with them.
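
If you want to rule out the keys and the endpoint themselves, a quick check outside CloudBerry is to list your buckets with a small script. Here is a rough sketch using boto3; the endpoint URL and region are only placeholders, so substitute the values IDrive gave you:

# Sketch: verify that an S3-compatible endpoint and key pair can list buckets,
# independently of CloudBerry. Endpoint URL and region are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.idrive.example.com",  # placeholder endpoint
    region_name="us-east-1",                       # placeholder; many S3-compatible services ignore it
    aws_access_key_id="YOUR_ACCESS_KEY",           # key provided by IDrive
    aws_secret_access_key="YOUR_SECRET_KEY",
)

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

If this lists your buckets but CloudBerry still shows nothing in the drop-down, the problem is likely on CloudBerry's side; if it fails the same way, the keys or endpoint are the issue.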

Link to comment
  • 1 month later...

Hello, hoping to get a little direction...

 

I've been using this container for a few months to back up to Backblaze B2, and it has been terrific! Recently a couple of files have been giving me trouble, and I'm not sure where to start as far as troubleshooting. When I try to back up files generated by CA Appdata Backup / Restore v2, I get the following message in CBB:

 

SSL_write() returned SYSCALL, errno = 32

 

and the backup job fails. When I remove these files from the backup job it runs successfully, but adding them back into the job causes the error again. Any idea where the problem may lie, or who I should start the conversation with?

 

Thanks in advance!

Link to comment
3 hours ago, quinnjudge said:

Hello, hoping to get a little direction...

 

I've been using this container for a few months to back up to Backblaze B2, and it has been terrific! Recently a couple of files have been giving me trouble, and I'm not sure where to start as far as troubleshooting. When I try to back up files generated by CA Appdata Backup / Restore v2, I get the following message in CBB:

 

SSL_write() returned SYSCALL, errno = 32

 

and the backup job fails. When I remove these files from the backup job it runs successfully, but adding them back into the job causes the error again. Any idea where the problem may lie, or who I should start the conversation with?

 

Thanks in advance!

Did you set upload bandwidth limits?

Link to comment
17 hours ago, Djoss said:

Did you set upload bandwidth limits?

Yes; my thought was to ensure a backup does not interfere with web conferencing software (I'm still working full-time from my home office). I have the limit for cloud storage set to approx. 80% of available upload bandwidth.

Link to comment
On 5/5/2020 at 1:42 PM, quinnjudge said:

Yes; my thought was to ensure a backup does not interfere with web conferencing software (I'm still working full-time from my home office). I have the limit for cloud storage set to approx. 80% of available upload bandwidth.

It seems that setting a bandwidth limit can cause SSL timeouts. Did you try removing it to see if that fixes the issue?
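
For what it's worth, errno 32 is EPIPE ("broken pipe"), meaning the remote end closed the connection while CloudBerry was still writing, which would fit a heavily throttled upload stalling long enough for the server to give up. If you're curious, you can confirm the mapping with a tiny snippet:

# Confirm that errno 32 maps to EPIPE ("Broken pipe") on Linux.
import errno, os
print(errno.errorcode[32], "-", os.strerror(32))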

Link to comment
1 hour ago, Djoss said:

It seems that setting a bandwidth limit can cause SSL timeouts. Did you try removing it to see if that fixes the issue?

I removed the bandwidth limit in CBB, and the backup job was successful; no errors on the files like before - thanks!!!

 

So, is this an issue with CBB or with Backblaze B2?

Link to comment
On 5/6/2020 at 9:17 PM, quinnjudge said:

I removed the bandwidth limit in CBB, and the backup job was successful; no errors on the files like before - thanks!!!

 

So, is this an issue with CBB or with Backblaze B2?

It seems to be an issue on CBB's side... you can always complain to their support team ;)

Link to comment

I'm going to try using this to get away from CrashPlan (I hit inotify limits and high resource usage with CrashPlan Pro daily, and my pricing finally went up to $10/mo a while back).

 

That being said, I'm trying to use it with Google Archival Storage. I think I'd also like to supplement it with a local backup disk someday so I don't have to rely on Google Cloud for restores unless disaster strikes.

 

My understanding is that Google Archival Storage pricing is only $0.0012/GB/month, which is very cheap.

But any file that touches the service incurs a minimum storage charge of a full 365 days.

 

I have my retention policy set up like this (screenshot below). Would this be the best "bang for my buck" with their storage, since I'm paying for 365 days of storage per file anyway?

 

I also have daily block-level backups with monthly forced full backups.

PS - the /flash mapping to /boot no longer seems to work; the /flash folder is empty inside the container.

 

PPS - If Google Archival Storage is a waste of time and I should just go for B2, that's fine; I'm just trying to keep costs as low as possible.

With my ~800GB of data to back up, that comes to just under $1/mo, whereas B2 would be about $4/mo.
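
For reference, here is the quick math behind those numbers (the B2 rate of $0.005/GB/month is my assumption based on their published pricing):

# Rough monthly storage cost for ~800 GB.
# Google Archival rate is from their pricing page; the B2 rate is assumed.
data_gb = 800
archival_rate = 0.0012   # $/GB/month
b2_rate = 0.005          # $/GB/month (assumed)

print(f"Archival: ${data_gb * archival_rate:.2f}/month")   # ~$0.96/month
print(f"B2:       ${data_gb * b2_rate:.2f}/month")         # ~$4.00/month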

 

Thanks in advance!

 

[Screenshot: retention policy settings]

Link to comment
On 5/13/2020 at 3:34 PM, CorneliousJD said:

I'm going to try using this to get away from CrashPlan (I hit inotify limits and high resource usage with CrashPlan Pro daily, and my pricing finally went up to $10/mo a while back).

 

That being said, I'm trying to use it with Google Archival Storage. I think I'd also like to supplement it with a local backup disk someday so I don't have to rely on Google Cloud for restores unless disaster strikes.

 

My understanding is that Google Archival Storage pricing is only $0.0012/GB/month, which is very cheap.

But any file that touches the service incurs a minimum storage charge of a full 365 days.

 

I have my retention policy set up like this (screenshot below). Would this be the best "bang for my buck" with their storage, since I'm paying for 365 days of storage per file anyway?

 

I also have daily block-level backups with monthly forced full backups.

PS - the /flash mapping to /boot no longer seems to work; the /flash folder is empty inside the container.

 

PPS - If Google Archival Storage is a waste of time and I should just go for B2, that's fine; I'm just trying to keep costs as low as possible.

With my ~800GB of data to back up, that comes to just under $1/mo, whereas B2 would be about $4/mo.

 

Thanks in advance!

 

[Screenshot: retention policy settings]

Well, this ended up being $20 for 2 days' worth of backups... Looks like I'll migrate over to B2.

What are everyone's retention and delete policies for B2?

 

My initial thought would be to keep versions for 30 days from the modification date, always keep the last version, keep 3 versions, and also delete files from B2 30 days after they're removed locally, so we aren't dealing with too much data retention.

 

Thanks all.

Link to comment
  • 2 weeks later...

Alright, I'm still jumping through some hoops to get this set up properly so that my B2 usage stays minimal while still being efficient enough to give me what I need.

 

I have it set up like this currently (screenshot below), and I think it's right. The goals here are to:

Back up CA Appdata backup files (generated weekly) and keep them for 30 days.

Back up appdata/Nextcloud/some other files directly as well.

 

Only keep X days of versions. 

Keep latest 3 versions at minimum (even if older than X days)

If anything is explicitly deleted or unchecked from backup jobs, delete it from B2 after 30 days.

 

Does this look correct to you guys?

If there's a better retention scheme you think I should be using, please let me know.

 

I'm currently debating keeping 7 days, 14 days, or 30 days of versions.

[Screenshot: backup plan retention settings]

Link to comment

I keep running into an issue while trying to test this container too...

 

I just set up a fresh install of it and a fresh B2 bucket, but now I'm seeing that purges from B2 are failing with "File not present".

 

I saw this last week too, and honestly CloudBerry support isn't being much help here; I'm not sure what's going on.

I just want to back up my entire /appdata/ folder at midnight every night. CrashPlan currently does this without complaining (and I'm also never purging anything there), but I'm really trying to get away from CrashPlan and thought CloudBerry would be leaps and bounds better. Now I'm not sure what's going on; this hasn't even been up and running for a few hours yet and purges are already failing.

 

[Screenshot: purge errors reporting "File not present"]

Link to comment
19 hours ago, CorneliousJD said:

I keep running into an issue while trying to test this container too...

 

I just set up a fresh install of it and a fresh B2 bucket, but now I'm seeing that purges from B2 are failing with "File not present".

 

I saw this last week too, and honestly CloudBerry support isn't being much help here; I'm not sure what's going on.

I just want to back up my entire /appdata/ folder at midnight every night. CrashPlan currently does this without complaining (and I'm also never purging anything there), but I'm really trying to get away from CrashPlan and thought CloudBerry would be leaps and bounds better. Now I'm not sure what's going on; this hasn't even been up and running for a few hours yet and purges are already failing.

 

[Screenshot: purge errors reporting "File not present"]

I reported this issue a few years ago... so it looks like no fix has been made yet.

I suspect that this happens to files that were present during the scan, but got deleted from the host before being uploaded to the cloud.

Link to comment
  • 1 month later...
  • 2 weeks later...
On 7/11/2020 at 4:59 PM, Smackover said:

With Cloudberry Labs now MSP360, is there any concern that this software will go away? I'm finally getting around to setting up cloud backup for my server, and am debating going this way or just using rclone. Endpoint will be Backblaze B2 regardless.

I'm personally not too concerned.  The company has been renamed, but they still sell and develop the software.

Link to comment
  • 2 weeks later...

I am sure I am missing something very simple. I just cannot get this to work.

 

I downloaded CloudBerry Backup from the Apps section.

I used the defaults in the settings.

It shows as running, but it says "Server Disconnected (code: 1006)" when I go to [ip_address:7802].

 

Thank you!

 

Edit:

 

This is solved. I tried deleting my cache in Chrome, but that didn't work. After quite a bit of trial and error, I deleted everything in Chrome, and now it works.

Edited by jasonculp
Problem Solved
Link to comment
  • 2 weeks later...

I have configured both CloudBerry and Duplicati for backups to the same server via SFTP. Of course I ran and tested them one at a time, and here is my puzzle to solve:

 

With CloudBerry my upload speed tops out at 4-6 MB/s, whereas with Duplicati it goes all the way up to 16-18 MB/s. They both use the same server and the same SFTP protocol.

 

I have a 1 Gbps ISP connection, but my dilemma is why CloudBerry can't at least match Duplicati's upload speed. Am I doing something wrong?

 

CloudBerry speed below. Any custom settings I should change?

 

 

[Screenshot: CloudBerry upload speed]

 

 

Link to comment
14 minutes ago, johnwhicker said:

I have configured both CloudBerry and Duplicati for backups to the same server via SFTP. Of course I ran and tested them one at a time, and here is my puzzle to solve:

 

With CloudBerry my upload speed tops out at 4-6 MB/s, whereas with Duplicati it goes all the way up to 16-18 MB/s. They both use the same server and the same SFTP protocol.

 

I have a 1 Gbps ISP connection, but my dilemma is why CloudBerry can't at least match Duplicati's upload speed. Am I doing something wrong?

 

CloudBerry speed below. Any custom settings I should change?

 

 

[Screenshot: CloudBerry upload speed]

 

 

I assume you did not enable compression/encryption?

It seems that there are a few things you can do to improve upload speed. See:

https://kb.msp360.com/standalone-backup/general/how-to-increase-upload-speed
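
It might also help to measure raw SFTP throughput to the same server outside of both tools, so you know what the link itself can do. Here is a rough sketch using paramiko; host, credentials, and paths are placeholders:

# Sketch: time a raw SFTP upload to establish a baseline throughput.
# Host, credentials, and paths are placeholders.
import os
import time
import paramiko

HOST, USER, PASSWORD = "backup.example.com", "user", "password"
LOCAL, REMOTE = "/tmp/testfile.bin", "/upload/testfile.bin"

transport = paramiko.Transport((HOST, 22))
transport.connect(username=USER, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)

start = time.time()
sftp.put(LOCAL, REMOTE)
elapsed = time.time() - start

size_mb = os.path.getsize(LOCAL) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")

sftp.close()
transport.close()

If the raw transfer also tops out around 4-6 MB/s, the limit is the link or the server; if it reaches Duplicati-like speeds, that points back at CloudBerry's settings.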

Link to comment
47 minutes ago, Djoss said:

I assume you did not enable compression/encryption?

It seems that there are a few things you can do to improve upload speed. See:

https://kb.msp360.com/standalone-backup/general/how-to-increase-upload-speed

 

Thanks. No, encryption is not enabled at all. Actually, with Duplicati encryption IS enabled and I still get 16 MB/s :)

 

I did try all these various settings and still saw no improvement. There's gotta be something I'm missing.

Link to comment
  • 1 month later...
On 10/8/2020 at 10:00 PM, uhf said:

Can CBB be updated from within the GUI, or do I need to wait for the docker to be updated? And do I need to worry about paying for annual maintenance? 

Yes, you need to wait for the Docker image to be updated.

 

For the annual maintenance, it's up to you. You won't get support if you don't have it, but that has not been a problem for me. They also have forums where you can seek help.

Link to comment
  • 1 month later...

Hello! I was wondering if anyone else is having issues with the current version of the Docker image. I'm having time zone issues using MinIO, which in my case is hosted via the TrueNAS CORE S3 service feature. It accepts the SSL connection, but then gives a time zone error. I searched earlier in this topic and it seems to be something that has to be fixed on the build's end. Thanks for your time!
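
In case it helps anyone narrow this down: if the error is really about a time difference between the container and the server (a common reason S3-compatible services reject requests), one quick check is to compare the container's UTC time with the Date header returned by the MinIO endpoint. This is only a rough sketch; the endpoint URL is a placeholder, and certificate verification is disabled only in case the endpoint uses a self-signed certificate:

# Sketch: compare local UTC time against the server's reported time to spot clock skew.
# The endpoint URL is a placeholder for the TrueNAS/MinIO S3 endpoint.
import ssl
import urllib.error
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

URL = "https://minio.example.lan:9000"  # placeholder endpoint

ctx = ssl.create_default_context()
ctx.check_hostname = False              # only needed for a self-signed certificate
ctx.verify_mode = ssl.CERT_NONE

try:
    resp = urllib.request.urlopen(urllib.request.Request(URL, method="HEAD"), context=ctx)
    headers = resp.headers
except urllib.error.HTTPError as err:   # even an error response carries a Date header
    headers = err.headers

server_time = parsedate_to_datetime(headers["Date"])
local_time = datetime.now(timezone.utc)
print("clock skew:", abs((local_time - server_time).total_seconds()), "seconds")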

[Screenshot: MinIO connection time zone error]

Edited by domingothegamer
wording
Link to comment
