Posts posted by michaelben
-
Also, is there a reason that the /storage folder has a different owner and permissions from every other folder? Does this matter?
/ # ls -al
total 12
drwxr-xr-x    1 root root   91 Dec 13 02:02 .
drwxr-xr-x    1 root root   91 Dec 13 02:02 ..
-rwxr-xr-x    1 root root    0 Dec 13 02:02 .dockerenv
drwxr-xr-x    1 root root   17 Dec 12 10:43 bin
drwxrwxrwx    1 app  app   101 Dec 11 02:03 config
drwxr-xr-x    1 root root   27 Dec 12 10:29 defaults
drwxr-xr-x   16 root root 4080 Dec 15 09:39 dev
drwxr-xr-x    1 root root   47 Dec 15 09:39 etc
drwxr-xr-x    2 root root    6 Nov 12 05:03 home
-rwxr-xr-x    1 root root 6145 Dec 11 07:54 init
drwxr-xr-x    1 root root   17 Dec 11 07:55 lib
drwxr-xr-x    5 root root   44 Nov 12 05:03 media
drwxr-xr-x    2 root root    6 Nov 12 05:03 mnt
drwxr-xr-x    1 root root   19 Dec 12 10:29 opt
dr-xr-xr-x  861 root root    0 Dec 15 09:39 proc
drwxr-xr-x    1 root root   26 Dec 15 09:38 root
drwxr-xr-x    1 root root   30 Dec 13 02:02 run
drwxr-xr-x    1 root root  163 Dec 11 07:55 sbin
drwxr-xr-x    2 root root    6 Nov 12 05:03 srv
-rwxr-xr-x    1 root root  231 Dec 12 10:42 startapp.sh
drwxrwxrwx    1 app  app   117 Dec 11 04:30 storage
dr-xr-xr-x   13 root root    0 Dec 15 09:39 sys
drwxrwxrwt    1 root root  136 Dec 15 09:39 tmp
drwxr-xr-x    1 root root   19 Dec 12 10:42 usr
drwxr-xr-x    1 root root   17 Nov 12 05:03 var
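To compare owner/permissions the way that listing shows them, `stat` gives the same information in a scriptable form. A minimal sketch — the temp directory below is a stand-in so the commands run anywhere; inside the container the paths of interest would be /storage and /config:

```shell
# Create a stand-in directory and give it the same drwxrwxrwx mode
# the listing shows on /storage.
dir=$(mktemp -d)
chmod 777 "$dir"

# Print numeric mode, owner:group, and path -- for /storage this would
# read something like "777 app:app /storage".
stat -c '%a %U:%G %n' "$dir"
```

Ownership alone usually doesn't hide a folder from an application, so a 777 app:app /storage is unlikely to be what makes it vanish from the GUI, but the check above at least rules permissions in or out quickly.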
-
7 hours ago, Djoss said:
Yes sorry, I wanted to see the output of the command.
/ # mount
/ #
-
9 hours ago, Djoss said:
The mount type for "/" is different in both cases. Since CloudBerry Backup seems to work by mount point, it is possible that it doesn't recognize it.
Under XFS, can you try to run "docker exec CloudBerryBackup mount" ?
/storage is definitely there when I'm in the container console, but not in the GUI. The "mount" command doesn't change anything; it looks like it's already mounted.
/tmp # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  50.0G     29.9G     20.0G  60% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    31.4G         0     31.4G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
shfs                     50.9T     38.2T     12.8T  75% /storage
shfs                    931.1G    350.2G    580.9G  38% /config
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/resolv.conf
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/hostname
/dev/loop2               50.0G     29.9G     20.0G  60% /etc/hosts
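Since the suggestion is that CloudBerry keys on mount types, a quick way to see the exact filesystem type behind each mount point is to read /proc/mounts directly (field 2 is the mount point, field 3 the type). A minimal sketch — run inside the container, "/" should report overlay here, matching the df output above:

```shell
# Print "mountpoint -> fstype" for every mount visible in this namespace.
awk '{print $2 " -> " $3}' /proc/mounts
```

Comparing this output between the BTRFS and XFS docker.img setups would show exactly which mount type changed for "/" and /storage.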
-
On 12/13/2022 at 12:20 AM, Djoss said:
You should probably try to contact CloudBerry support about this. No need to indicate that you are using a Docker container. Just tell that the root folder "/" is missing from the Backup source.
I did that 6 months ago and it didn't go anywhere... What about the fact that it works fine under BTRFS but not under XFS? Isn't that the underlying issue? How would CloudBerry support address this if they don't know I'm running in Docker?
-
14 hours ago, Djoss said:
CloudBerry Backup was always picky on how it shows the directory tree. It seems to be based on mounts.
Using different Docker image type changes the mount type used inside the container. Some mount types don't seem to be recognized by CloudBerry Backup.
Is there anything I can do to adjust mount permissions, etc., besides changing the Docker image type?
-
I've got an issue configuring CloudBerry on Unraid. When I run CloudBerry, the "storage" folder - the main folder to back up - is missing from the GUI.
One peculiarity is that I previously ran Docker on a BTRFS image (about a year ago), and CloudBerry was working fine. I switched to XFS because I was having some other issues with the image, and CloudBerry stopped working - that is, it shows nothing to back up.
I tested today and temporarily switched to a new BTRFS docker image, and everything started showing up again! However, I am reluctant to switch to BTRFS permanently unless absolutely necessary to get CloudBerry to work.
Why would it work fine with a btrfs image but not with an xfs one? Is this a permissions-related issue? (I don't change any file permissions; I just switch the Docker image over from my docker.img to docker-xfs.img.) If so, how can I fix it?
-
I am currently running a HP Microserver Gen8 with an LSI 9207-8e card, with a cable connecting to an external enclosure running 8 drives. Everything is running fine.
I am testing migrating this setup to a new server, a Dell R620, using a DIFFERENT spare LSI 9207-8e card connected to a DIFFERENT enclosure. I thought I would build a test Unraid setup before migrating the real HDDs across.
The problem is that the LSI card is not seeing any drives behind it. I've tested yet another LSI 9207-8e card (another spare) and still nothing. Both cards are flashed with the latest IT-mode firmware and BIOS...
Any ideas / suggestions? I can see the card under devices, but lsscsi shows nothing, and no drives are detected.
Thanks In Advance!!
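One diagnostic worth trying in this situation: sysfs shows whether the kernel registered any SCSI hosts at all, with no extra tools needed. A sketch, assuming a Linux host - an empty scsi_host listing suggests the SAS driver (mpt2sas/mpt3sas) never bound to the card, while registered hosts with no disks points more at cabling or the enclosure:

```shell
# Enumerate SCSI host adapters the kernel has registered.
ls /sys/class/scsi_host/ 2>/dev/null || echo "(no SCSI hosts registered)"

# Enumerate block devices the kernel currently sees.
ls /sys/block/
```

`dmesg | grep -i mpt` on the host would also show whether the driver loaded and what it found during the bus scan.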
[Support] Djoss - CloudBerry Backup
in Docker Containers
Posted
Yep...
root@Unraid:/# docker exec CloudBerryBackup mount
root@Unraid:/#
root@Unraid:/#