[Support] borgmatic


sdub

Recommended Posts

On 2/14/2022 at 10:20 PM, sdub said:

I opened an issue on the maintainer's github project.  We'll see what they say.

 

https://github.com/borgmatic-collective/docker-borgmatic/issues/98

 

The package maintainer declined to add an SSH server to the borgmatic Docker container... see the discussion in the link.  You'll either need to install the simple borgserver Docker container, or you'll need the borg binary installed on the machine you're trying to back up to remotely.  If the target is also running Unraid, this is easy enough with NerdPack.

 

 

  • Thanks 1
Link to comment
On 2/19/2022 at 11:23 AM, -Mathieu- said:

I'm mostly an Unraid newbie. I'm trying to automate borg backups from my laptop to my Unraid box. From what I understand, this is better done with a Docker container than by running a borg server directly on the main Unraid OS. Am I correct that I could use the borgmatic container to achieve this, since it already has borgbackup and some SSH infrastructure?
 

I'd be grateful for any pointers on how to set up the container to do this. Should I use my Unraid box's local IP address with a specific port to reach the container? Where do I define this port? Should I generate SSH keys from inside the container and save the public key to my laptop, or the other way around? The answers to these questions might be obvious, but not to me, and Google has failed me so far. Thanks in advance for any advice.

 

- Mathieu

 

In general, I like to keep Unraid as "stock" as possible and install any applications as containers; this is generally viewed as a best practice.  Note, however, that the borgmatic container is intended to be a backup client... it does not have an SSH server built in.  If you're running borgmatic or Vorta on your laptop (via Docker or otherwise), your easiest path is to install the borg binary from NerdPack so your Unraid box can act as a borg server. (Note that borg doesn't run as a server daemon; it's just a binary that gets invoked over SSH in "server mode" when necessary.)

 

If you go that route, you'll need to set up an SSH keypair between your laptop and Unraid box; this has nothing to do with borg.  Once you can SSH into the Unraid box without a password, you simply add "1.2.3.4:/mnt/disks/backup_disk/laptop_repo" as a destination in your borgmatic config file, where 1.2.3.4 is your Unraid IP and /mnt/disks/backup_disk/laptop_repo is the path on the Unraid box where you want your backup to reside.
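As a concrete sketch (the user, IP, and paths are just the placeholders from above, not a tested setup), the relevant part of the laptop's borgmatic config.yaml would point at the Unraid box like this:

```yaml
location:
  # borg on the Unraid box is invoked over SSH; no daemon needed
  repositories:
    - root@1.2.3.4:/mnt/disks/backup_disk/laptop_repo

  # Path to the borg binary on the Unraid side (e.g. from NerdPack)
  remote_path: /usr/bin/borg

  # What to back up from the laptop
  source_directories:
    - /home
```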

 

You can run a borgserver Docker container as an alternative, but you'll have to manage SSH keys within it, pick an alternate SSH port, and expose that port to your LAN.  Doable, but a bit more complicated.

Edited by sdub
  • Upvote 1
Link to comment
  • 2 weeks later...
On 12/27/2020 at 12:41 PM, Greygoose said:

FIXED: CRON schedule not running

 

Ok sorted it.

 

I recreated the crontab.txt file in the docker console using vi.

 

Now the cron is working. It must have been a permissions issue, I normally edit the txt file in notepad+ in my Windows 10 machine via the shared network drive.   I will play around and see how I caused it and report back

Thanks!  I had a similar issue after copying the crontab posted above into Windows Notepad and saving.  The actual problem was the EOL character (at least it was in my case).  For those who aren't familiar with this: vi (and Linux text editors in general) mark the end of a line (EOL) with a newline character; Windows marks it with a carriage return plus a newline.

 

To correct it, I opened crontab.txt in the container console with vi, deleted the ^M (which is how the extra carriage-return character manifests), then saved and restarted.  Many Windows text editors (e.g. Notepad++) can be configured to use only a newline character at EOL if you don't want to mess with vi.
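For reference, the same CRLF cleanup can be done non-interactively from the container console; this is a minimal sketch using sed (dos2unix also works, if installed — the file path is just an example):

```shell
# Create a sample crontab with Windows (CRLF) line endings
printf '0 2 * * * /usr/local/bin/borgmatic\r\n' > /tmp/crontab.txt

# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' /tmp/crontab.txt

# Verify no carriage returns remain; prints "clean"
grep -q "$(printf '\r')" /tmp/crontab.txt && echo "CR still present" || echo "clean"
```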

Link to comment
  • 3 weeks later...

Hello,

 

I got this error:

 

Local Exception
Traceback (most recent call last):
  File "/usr/lib/python3.9/borg/archiver.py", line 5089, in main
Command 'borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prefix NAS2- --info --list /mnt/borg-repository/NAS2' returned non-zero exit status 2.

 


#borgmatic config.yaml
location:
  # Repository path
  repositories:
    - /mnt/borg-repository/NAS2

  remote_path: /usr/bin/borg

  # Backup sources
  source_directories:
    - /boot
    - /mnt/user/appdata

  exclude_caches: true
  one_file_system: true
  files_cache: mtime,size
  patterns:
      - '- [Tt]rash'
      - '- [Cc]ache'
  exclude_if_present:
      - .nobackup
      - .NOBACKUP

# Repository options
storage:
  compression: lz4 # zstd if possible
  relocated_repo_access_is_ok: true
  archive_name_format: "NAS2-{now:%Y-%m-%d_%H-%M}"
  encryption_passphrase: "xxxx"

# Retention period
retention:
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6
  prefix: "NAS2-"

# Backup validation
consistency:
  checks:
    - repository
    - archives
  check_last: 1
  prefix: "NAS2-"

# Colored terminal output
output:
  color: true

 

Where is my mistake?

Edited by donbruno
Link to comment

Hi Guys,

 

CA-Backup just killed the container before doing its backups, which results in the following error after restart:

 

summary:
/etc/borgmatic.d/config.yaml: Error running configuration file
ssh://<$SERVERNAME>/./backup/unraid: Error running actions for repository
Failed to create/acquire the lock /root/.cache/borg/c78474a458bfb7cf00e996ef92a3db312fc845792a33306672041ae578876656/lock.exclusive (timeout).

 

I could resolve it manually with running the following command inside the container:

 

borg break-lock ssh://<$SERVERNAME>/./backup/unraid

 

My problem is that this will occur every time CA-Backup stops the Docker containers.

 

Is there a proper way to AUTOMATICALLY mitigate this from inside borg?

 

 

best regards

 

Christoph

 

Link to comment
56 minutes ago, donbruno said:

Hello,

 

I got this error:

 

Local Exception
Traceback (most recent call last):
  File "/usr/lib/python3.9/borg/archiver.py", line 5089, in main
Command 'borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prefix NAS2- --info --list /mnt/borg-repository/NAS2' returned non-zero exit status 2.

 



 

Where is my mistake?

The error comes from inside the application itself; see line 5089 of the traceback.

 

As this is a dockerized app, I would try deleting the image and pulling it again from the repository before investigating any deeper.

Link to comment
56 minutes ago, chrissi5120 said:

Hi Guys,

 

CA-Backup just killed the container before doing its backups, which results in the following error after restart:

 

summary:
/etc/borgmatic.d/config.yaml: Error running configuration file
ssh://<$SERVERNAME>/./backup/unraid: Error running actions for repository
Failed to create/acquire the lock /root/.cache/borg/c78474a458bfb7cf00e996ef92a3db312fc845792a33306672041ae578876656/lock.exclusive (timeout).

 

I could resolve it manually with running the following command inside the container:

 

borg break-lock ssh://<$SERVERNAME>/./backup/unraid

 

My problem is that this will occur every time CA-Backup stops the Docker containers.

 

Is there a proper way to AUTOMATICALLY mitigate this from inside borg?

 

 

best regards

 

Christoph

 

Stop using CA-Backup and use borg exclusively.

  • Like 1
Link to comment
5 hours ago, Solverz said:

Stop using CA-Backup and use borg exclusively.

Thank you for your on-point suggestion :)

 

I get stressed out thinking about a solution to control the Docker container states from inside the borg container (a chicken-and-egg problem).

For now, I will try the advanced option in CA-Backup:

 

"Select which applications to NOT stop during a backup"

 

I just selected borg there; in the file settings I also exclude the contents of the borg cache folders.

 

Let's see how that goes...

Link to comment
1 minute ago, chrissi5120 said:

Thank you for your on-point suggestion :)

 

I get stressed out thinking about a solution to control the Docker container states from inside the borg container (a chicken-and-egg problem).

For now, I will try the advanced option in CA-Backup:

 

"Select which applications to NOT stop during a backup"

 

I just selected borg there; in the file settings I also exclude the contents of the borg cache folders.

 

Let's see how that goes...

Personally, I use borg by itself to back up everything, including Docker containers. As long as you use borgmatic's database backup hooks for the containers that have a supported database, it should be fine. ☺
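For reference, the database dumps mentioned above live in the hooks section of borgmatic's config.yaml; this is a minimal sketch, where the database names, host, and credentials are all placeholders:

```yaml
hooks:
  # Dump this PostgreSQL database before each backup and
  # include the dump in the archive automatically
  postgresql_databases:
    - name: mydb              # placeholder database name
      hostname: 192.168.1.50  # placeholder container IP
      username: myuser
      password: secret

  # MariaDB/MySQL containers work the same way
  mysql_databases:
    - name: anotherdb
```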

Link to comment
6 hours ago, Solverz said:

Stop using CA-Backup and use borg exclusively.

Agree… no reason you can't back up /boot and /mnt/user/appdata with borg. I don't like CA appdata backup because the backup files are huge and not very deduplication-friendly. I've used my borg archive to restore Plex multiple times without issue.

 

With daily backups, the risk of corruption on successive backups is limited, especially if you're using the built-in DB backup capabilities of the various Docker containers.

Edited by sdub
  • Like 1
Link to comment
5 hours ago, donbruno said:

@chrissi5120 sorry, I pulled the repo again, but the error is the same 😞

Are you sure you deleted the old image?

 

Try docker image ls and then manually delete it to be sure.

If that doesn't help, make sure to stick as closely as possible to the original example configuration, without your own comments; YAML is very picky about spaces and other invisible characters.

 

As soon as you get a different error, I would count that as a success.

Link to comment
3 minutes ago, sdub said:

Agree… no reason you can't back up /boot and /mnt/user/appdata

 

With daily backups, the risk of corruption on successive backups is limited, especially if you're using the built-in DB backup capabilities of the various Docker containers.

A really general question about your container design and borg overall: if I add the appdata path to the config and restart, will it pick up the newly added path just like that, or do I need to do something else first?

Link to comment
7 hours ago, Solverz said:

Stop using CA-Backup and use borg exclusively.

If you're mapping in /mnt/user, it should already be there… you just need to add it to the config.yaml.

 

Otherwise, yes: just add a new path mapping to the container and add that container path to borgmatic's config.yaml.

Link to comment
3 hours ago, chrissi5120 said:

Are you sure you deleted the old image?

 

Try docker image ls and then manually delete it to be sure.

If that doesn't help, make sure to stick as closely as possible to the original example configuration, without your own comments; YAML is very picky about spaces and other invisible characters.

 

As soon as you get a different error, I would count that as a success.

OK, I deleted all images... but when I pull the borgmatic repo, I see that the layer with ID df9b9388f04a

was not pulled; it still exists... 😞 !?!?!

 

How can I delete it?

Link to comment
12 minutes ago, donbruno said:

OK, I deleted all images... but when I pull the borgmatic repo, I see that the layer with ID df9b9388f04a

was not pulled; it still exists... 😞 !?!?!

 

How can I delete it?

 

From the command line:

docker image remove df9b9388f04a

 

but I seriously doubt the image is corrupted.  A little more context on the error would be helpful.  It's probably an error in the config.yaml, the repo can't be contacted (especially if it's a remote repo), or there's a stale lock.

 

Can you execute the following command from within the container?

borg list /path/to/repo

 

Link to comment
17 minutes ago, sdub said:

 

From the command line:

docker image remove df9b9388f04a

 

but I seriously doubt the image is corrupted.  A little more context on the error would be helpful.  It's probably an error in the config.yaml, the repo can't be contacted (especially if it's a remote repo), or there's a stale lock.

 

Can you execute the following command from within the container?

borg list /path/to/repo

 

 

 

 

OK, I can't delete the image 😞

 

IMAGE ID [552791213]: Pulling from b3vis/borgmatic.
IMAGE ID [df9b9388f04a]: Already exists.
IMAGE ID [c3072aa2c468]: Pulling fs layer. Downloading 100% of 200 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [aa1b65f07332]: Pulling fs layer. Downloading 100% of 57 MB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [b149f4edaf62]: Pulling fs layer. Downloading 100% of 15 MB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [9c4b43301842]: Pulling fs layer. Downloading 100% of 311 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [2b36c3e70dc7]: Pulling fs layer. Downloading 100% of 313 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [7eee898c9a22]: Pulling fs layer. Downloading 100% of 315 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [956b6be7964a]: Pulling fs layer. Downloading 100% of 331 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [9cbaddbea532]: Pulling fs layer. Downloading 100% of 333 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [f4a66899375e]: Pulling fs layer. Downloading 100% of 333 B. Verifying Checksum. Download complete. Extracting. Pull complete.
Status: Downloaded newer image for b3vis/borgmatic:latest

DOWNLOADED: 72 MB

 

 

/mnt/borg-repository/NAS2: Error running actions for repository
Command 'borg prune --keep-hourly 2 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 10 --prefix backup- /mnt/borg-repository/NAS2' returned non-zero exit status 2.
Error during prune/create/check.
/etc/borgmatic.d/test.yaml: Error running configuration file

summary:
/etc/borgmatic.d/test.yaml: Error running configuration file
/mnt/borg-repository/NAS2: Error running actions for repository
Local Exception
Traceback (most recent call last):
  File "/usr/lib/python3.9/borg/archiver.py", line 5089, in main
Command 'borg prune --keep-hourly 2 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 10 --prefix backup- /mnt/borg-repository/NAS2' returned non-zero exit status 2.

Need some help? https://torsion.org/borgmatic/#issues

 

 

 

and a list:

NAS2-2022-04-18_22-16                Mon, 2022-04-18 22:16:06 [2097a847ec9313d21e3398de3c6d9ef221f8f3867a195795fa13852d86b8e6cd]

 

And borg check says everything is OK.

 

And the repo /mnt/borg-repository/NAS2 is a remote NFS mount point from another NAS.

 

/mnt/user/appdata/borgmatic/config # validate-borgmatic-config
All given configuration files are valid: /etc/borgmatic.d/test.yaml

Edited by donbruno
Link to comment
48 minutes ago, donbruno said:

And the repo /mnt/borg-repository/NAS2 is a remote NFS mount point from another NAS.

 

 

I wonder if there is some lower-level error happening with the NFS mount.  I would start by running:

borg prune --verbose --keep-hourly 2 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 10 --prefix backup- /mnt/borg-repository/NAS2

 

to see whether borg gives any more detailed error messages (note the --verbose), bypassing the borgmatic script altogether.

 

This might be a better question for the borg GitHub issues page, to see if they can help interpret the specific borg error.

Edited by sdub
Link to comment
  • 2 weeks later...
On 4/20/2022 at 4:10 AM, donbruno said:

another question, where I can find the log files for borgmatic?

 

The documentation on that is here! https://torsion.org/borgmatic/docs/how-to/inspect-your-backups/#logging

 

In general, if anyone has an issue that appears to be borgmatic-specific rather than Unraid-specific, you are welcome to file it in borgmatic's issue tracker: https://torsion.org/borgmatic/#issues
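As a quick sketch of what that logging documentation covers: borgmatic's console verbosity can be raised on the command line, and output can also be sent to a log file (exact flag availability depends on your borgmatic version, so treat these as examples to check against the docs):

```shell
# More verbose console output for a one-off run
borgmatic --verbosity 1

# Even more detail, plus a persistent log file (path is an example)
borgmatic --verbosity 2 --log-file /var/log/borgmatic.log
```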

Link to comment

Hi Guys,

 

my backup just finished for the first time :)

 

Next, I wanted to restore some files with the help of the /mnt/borg:/mnt/fuse mountpoint.

 

To have a GUI from which I can "fire and forget" a copy command, I tried to use Double Commander.

Sadly, Double Commander is unable to even see the mount point while it is mounted, due to mismatched permissions.

Both Double Commander and borg are started with UID/GID 99/100, but the borg mount command results in file permissions of root:root.

 

I could probably circumvent this by running Double Commander in privileged mode with root UID/GID, but I would rather not.

Do you have this issue at all, and/or how do you work around it?

 

best regards

 

Christoph

 

 

 

 

 

Link to comment

I've never successfully exposed a FUSE mount in one Docker container to another. I usually perform browse and restore operations from the command line within the borgmatic container.

 

A couple of options that may work:

 

1) Install the Vorta container, which gives you a borg GUI, so you can at least visually browse the archive even if it's not really a file manager.

 

2) Install borg and llfuse into the base Unraid image using NerdPack. I would think this should work, but I've never gotten the FUSE mount to appear via Samba or a file-manager Docker container; not sure why, because I think it should work.

 

3) Install an Ubuntu VM with borg and llfuse and try the mount there, browsing the archive visually through VNC. I haven't actually tried this one yet.

 

Let us know if you figure out a working solution!
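One more avenue worth trying for the permissions problem above: borg mount accepts FUSE options via -o, including uid/gid overrides and allow_other. A sketch, using the repo path and archive name from earlier in the thread as placeholders (allow_other additionally requires user_allow_other to be enabled in /etc/fuse.conf):

```shell
# Mount an archive so its files appear owned by UID 99 / GID 100
# (nobody:users on Unraid) and are readable by other users
borg mount -o uid=99,gid=100,allow_other \
    /mnt/borg-repository/NAS2::NAS2-2022-04-18_22-16 /mnt/fuse

# When finished browsing/restoring:
borg umount /mnt/fuse
```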

Link to comment
