[Support] Djoss - CloudBerry Backup



2 hours ago, PlanetDyna said:

Is there a possibility to add my second Unraid server via FTPS under "Backup Source"? I want the Unraid server running CloudBerry to have read-only access.

 

To my knowledge, CloudBerry Backup does not support FTP as a backup source; it only backs up files from the local filesystem.
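
That said, a common workaround is to mount the second server's data on the Unraid host so it appears as a local path, and then map that path into the container. A rough sketch using an SMB export (the server name, share, mount point and user below are placeholders, not something from this thread):

mount -t cifs //TOWER2/data /mnt/remotes/tower2_data -o ro,username=backupuser

Mounting with "-o ro" would also give you the read-only access you are after.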

Link to comment
On 11/27/2022 at 1:27 PM, michaelben said:

I've got an issue configuring CloudBerry on Unraid. When I run CloudBerry, the "storage" folder - the main folder to backup is missing from the GUI.

 

[Screenshot: CloudBerry Backup "Backup Source" tree with the "storage" folder missing]

 

One peculiarity is that I previously ran Docker on a BTRFS image (about a year ago), and CloudBerry was working fine. I switched to XFS because I was having some other issues with the image, and CloudBerry stopped working - that is, there is nothing to back up.

 

I tested today and temporarily switched to a new BTRFS Docker image, and everything started showing up again! However, I am reluctant to switch back to BTRFS permanently unless it is absolutely necessary to get CloudBerry to work.

 

Why would it work fine with a BTRFS image but not with an XFS one? Is this a permissions-related issue? (I don't change any file permissions; I just switch the Docker image over from my docker.img to docker-xfs.img.) If so, how can I fix it?

 

 

I'm having the same issue since I switched to XFS. Have you managed to find a solution?

Link to comment
On 12/6/2022 at 1:54 AM, michaelben said:

Is there anything I can do to adjust mounts permissions etc besides changing the docker image type??

 

33 minutes ago, wicked_qa said:

I'm having the same issue since I switched to XFS. Have you managed to find a solution?

 

You should probably try to contact CloudBerry support about this.  There's no need to indicate that you are using a Docker container; just tell them that the root folder "/" is missing from the Backup source.

Link to comment
On 12/10/2022 at 6:52 PM, Djoss said:

If you are using the free license of CloudBerry Backup, could you also run it on your second server?


I got the commercial version, so I can run it on both of my Unraid servers.
Why I'm asking about mounting an FTPS share: on the second Unraid server I have a problem where it crashes when adding more than one SMB share. But I guess the solution will be a root share.

 

What is the advantage of CloudBerry vs LuckyBackup? 

Edited by PlanetDyna
Link to comment
On 12/13/2022 at 12:20 AM, Djoss said:

 

 

You should probably try to contact CloudBerry support about this.  There's no need to indicate that you are using a Docker container; just tell them that the root folder "/" is missing from the Backup source.

I did that 6 months ago and it didn't go anywhere... What about the fact that it works fine under BTRFS but not under XFS? Isn't that the underlying issue? How would CloudBerry support address this if they don't know I'm running in Docker?

Link to comment
2 hours ago, michaelben said:

I did that 6 months ago and it didn't go anywhere... What about the fact that it works fine under BTRFS but not under XFS? Isn't that the underlying issue?

 

The mount type for "/" is different in the two cases.  Since CloudBerry Backup seems to work by mount point, it may not recognize the overlay mount used by the XFS-backed image.

 

Under XFS, can you try running "docker exec CloudBerryBackup mount"?
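
For reference, here's a quick way to check just the root mount (container name assumed to be "CloudBerryBackup", the template default):

docker exec CloudBerryBackup mount | grep ' on / '

With an XFS-backed docker.img this usually reports an overlay mount ("overlay on / type overlay"), while with a BTRFS-backed image "/" usually shows up as type btrfs.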

 

2 hours ago, michaelben said:

How would CloudBerry support address this if they don't know I'm running in Docker?

 

From the app's point of view, it doesn't really matter whether it is running in a container or not.  However, support might be quick to close your case by answering that running in a container is not officially supported.

Edited by Djoss
Link to comment
9 hours ago, Djoss said:

 

The mount type for "/" is different in the two cases.  Since CloudBerry Backup seems to work by mount point, it may not recognize the overlay mount used by the XFS-backed image.

 

Under XFS, can you try running "docker exec CloudBerryBackup mount"?

 

/storage is definitely there when I'm in the container console, but not in the GUI. The "mount" command doesn't change anything; everything already looks mounted:

 

/tmp # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  50.0G     29.9G     20.0G  60% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    31.4G         0     31.4G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
shfs                     50.9T     38.2T     12.8T  75% /storage
shfs                    931.1G    350.2G    580.9G  38% /config
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/resolv.conf
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/hostname
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/hosts

 

Link to comment

Also, is there a reason the /storage folder has a different owner and permissions from every other folder? Does this matter?

 

/ # ls -al
total 12
drwxr-xr-x    1 root     root            91 Dec 13 02:02 .
drwxr-xr-x    1 root     root            91 Dec 13 02:02 ..
-rwxr-xr-x    1 root     root             0 Dec 13 02:02 .dockerenv
drwxr-xr-x    1 root     root            17 Dec 12 10:43 bin
drwxrwxrwx    1 app      app            101 Dec 11 02:03 config
drwxr-xr-x    1 root     root            27 Dec 12 10:29 defaults
drwxr-xr-x   16 root     root          4080 Dec 15 09:39 dev
drwxr-xr-x    1 root     root            47 Dec 15 09:39 etc
drwxr-xr-x    2 root     root             6 Nov 12 05:03 home
-rwxr-xr-x    1 root     root          6145 Dec 11 07:54 init
drwxr-xr-x    1 root     root            17 Dec 11 07:55 lib
drwxr-xr-x    5 root     root            44 Nov 12 05:03 media
drwxr-xr-x    2 root     root             6 Nov 12 05:03 mnt
drwxr-xr-x    1 root     root            19 Dec 12 10:29 opt
dr-xr-xr-x  861 root     root             0 Dec 15 09:39 proc
drwxr-xr-x    1 root     root            26 Dec 15 09:38 root
drwxr-xr-x    1 root     root            30 Dec 13 02:02 run
drwxr-xr-x    1 root     root           163 Dec 11 07:55 sbin
drwxr-xr-x    2 root     root             6 Nov 12 05:03 srv
-rwxr-xr-x    1 root     root           231 Dec 12 10:42 startapp.sh
drwxrwxrwx    1 app      app            117 Dec 11 04:30 storage
dr-xr-xr-x   13 root     root             0 Dec 15 09:39 sys
drwxrwxrwt    1 root     root           136 Dec 15 09:39 tmp
drwxr-xr-x    1 root     root            19 Dec 12 10:42 usr
drwxr-xr-x    1 root     root            17 Nov 12 05:03 var

 

Link to comment
6 hours ago, michaelben said:

Also, is there a reason the /storage folder has a different owner and permissions from every other folder? Does this matter?

 

This is expected.  /storage is a different mount, mapped to /mnt/user in unRAID.  The other files and folders come from the Docker image itself.
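
For reference, expressed as a plain docker run command the mapping looks roughly like this (the paths are the usual unRAID template defaults; adjust them to your setup):

docker run -d --name CloudBerryBackup \
    -v /mnt/user/appdata/CloudBerryBackup:/config:rw \
    -v /mnt/user:/storage:ro \
    jlesage/cloudberry-backup

Everything outside /storage and /config comes from the image's own filesystem, which is why the ownership differs.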

Link to comment
On 12/10/2022 at 8:55 AM, Djoss said:

 

Can you try to run the following command to verify if you see the same problem:

 

docker run --rm jlesage/cloudberry-backup

 

I'm seeing an identical issue on Unraid 6.11.5. I tried re-installing the container and restarting the server, but neither resolved it.

Running the above command works; however, it's not an ideal long-term fix to have to enter the terminal manually whenever it fails to start.

Any ideas?

Link to comment
On 12/17/2022 at 5:37 PM, Steve Croft said:

I'm seeing an identical issue on Unraid 6.11.5. I tried re-installing the container and restarting the server, but neither resolved it.

Running the above command works; however, it's not an ideal long-term fix to have to enter the terminal manually whenever it fails to start.

Any ideas?

Which issue exactly are you referring to?

Link to comment

I've been in touch with CloudBerry support for the last two weeks about the issue of local storage backups failing with a cloud limit error.

 

Although they didn't say why or how, they did reproduce the error and have apparently fixed it.

 

Quote

While we will be investigating your current issue, you can update your backup agent to the latest version. We released it (v. 4.0.2.402) yesterday, and the initial problem that you were experiencing with CBF plans to local destinations should be fixed. So you should be able to run your old backup plans without issues after the update.

 

Quote

Since updating to 4.0.1.310, my local storage backup is no longer working. I get the below error:

Cloud object size limit exceeded (code: 1618)
1 file was not backed up due to a cloud object size limit excess - 5.37 GB

I'm running the personal licence and have never seen this error before, and I also cannot find error 1618 in any of the CloudBerry documentation.

Additionally, I have the same backup set running to a Backblaze bucket and it completes fine. The issue only appears to affect my local storage.

 

Link to comment
  • 5 weeks later...

Hi @Djoss,

 

Thanks for your time on this so far! 

 

I'm just struggling with permissions in CloudBerry when restoring.

 

The container is able to read from my storage and back up those files without issue, but when I do a restore it fails with a permissions error.

 

 

I tried to give all permissions with the following, but no luck:

 

chmod a+rwx /mnt/disk2/UNRAID_Backups/
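
For completeness: without -R that only changes the top-level directory itself. The recursive variant, which also touches the files inside, would be:

chmod -R a+rwx /mnt/disk2/UNRAID_Backups/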

 

 

Running the container with the "Privileged" switch on did not help either.

 

This is the location where I store my appdata files using the "CA Backup / Restore Appdata" app.

 

So "CA Backup / Restore Appdata" writes to that directory, CloudBerry then backs it up to Backblaze, but Cloudberry is unable to then restore those files due to a permissions error. 

 

Any ideas? 

 

 

Link to comment

By default the container has only read access to your data.  When you want to restore, you need to change that.

 

Edit the container's settings, toggle the "Advanced View" (upper right corner), then edit the "Storage" setting and change the "Access Mode" to "Read/Write".
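
For anyone running the container outside the unRAID template, the equivalent is the access-mode suffix on the /storage volume mapping (host path assumed):

-v /mnt/user:/storage:ro    # default: read-only, enough for backups
-v /mnt/user:/storage:rw    # read/write, needed for restores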

Link to comment
25 minutes ago, Djoss said:

By default the container has only read access to your data.  When you want to restore, you need to change that.

 

Edit the container's settings, toggle the "Advanced View" (upper right corner), then edit the "Storage" setting and change the "Access Mode" to "Read/Write".

Ahh, amazing, thanks mate!

Link to comment
  • 3 weeks later...
On 2/13/2023 at 4:00 AM, FCA Administrator said:

I must be missing something. My backups are incredibly slow, even just to a local external HDD (yes, it's USB 3.0): 5 hours to back up about 500 GB of data... and that's an incremental backup done shortly after a 12+ hour full backup completed, on the weekend when no one is using the server.

 

Any help would be very much appreciated!

 

Does your backup have encryption, compression, etc. enabled?

You could try performing a test backup to a non-USB HDD to see whether you get the same speed.
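
For example, a rough sequential-write comparison between the USB disk and an array disk (the mount points below are placeholders; delete the test files afterwards):

dd if=/dev/zero of=/mnt/disks/usb_backup/ddtest bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=1024 oflag=direct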

Link to comment
