[Support] borgmatic


sdub

Recommended Posts

4 hours ago, SynIQ said:

But there are a few things I have not yet understood, like why Docker containers are not allowed to run during a backup of the appdata folder. What can happen in the worst case if they keep running, and why?

Say that a backup of Container X's appdata is running while the container is still up.

 

During the backup, both files A and B get changed. The new version of B makes it into the backup, but A was copied before it changed, so the backup holds the old version of A.

 

Now A & B are out of sync with each other on the backup.  May or may not cause problems if you have to restore.  In reality, the likelyhood of this happening is minimal (but non zero), hence by default the backup plugin stops everything to guarantee the appdata is in a constant state.  You can however tell the plugin to not stop individual containers via advanced options.

  • Thanks 1
Link to comment

At the moment I'm using CA Backup to back up my appdata folder, which is located on my cache drive. But for some schedules it is really annoying that all containers have to be stopped during the backup. I already know that there is a way not to stop some containers, but I'm searching for an easier solution. Therefore I would like to back up my appdata folder with borgmatic.
 
But there are a few things I have not yet understood, like why Docker containers are not allowed to run during a backup of the appdata folder. What can happen in the worst case if they keep running, and why?
 
I need this information to weigh the potential risk against the advantages of backing up the appdata folder with borgmatic without stopping the containers.



As others have said, the issue is with file-based databases like SQLite that many dockers employ. There's a small chance that the backup will not be valid if it catches a database mid-write.

However, I don't like stopping Docker to do backups and prefer to have appdata be part of my continuous Borg backup, so my approach is:

1. Wherever possible, opt for Postgres or some other database that can be properly backed up in Borg (see the sketch after this list).

2. The backup is probably going to be ok unless it happens to occur mid-write on the database.

3. If this does happen and I need to restore, the odds of successive backups both being corrupt are infinitesimal. That's the advantage of rolling differential backups.

4. Most Docker applications that use SQLite databases have built-in scheduled database backup functionality that's going to be valid. I make sure to turn this on wherever I can (Plex, etc.) and make sure that backup path is included in the Borg backup.
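
As a concrete example of point 1, borgmatic itself can dump databases as part of every backup via its hooks section. A minimal sketch of the relevant config, assuming a hypothetical Postgres container reachable at hostname "postgres" with a database named "myapp_db":

hooks:
    postgresql_databases:
        - name: myapp_db        # hypothetical database name
          hostname: postgres    # hypothetical container hostname
          username: backup_user

borgmatic runs pg_dump behind the scenes and includes the dump in the archive, so the database copy inside the backup is consistent even while the container keeps running.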

I think with these multiple lines of defense, my databases are sufficiently protected using Borg backup, not using CA backup, and not stopping the Docker containers.

If anyone sees a flaw in this logic I’d really like to know!


Sent from my iPhone using Tapatalk
  • Thanks 1
Link to comment
On 2/16/2021 at 2:13 PM, ZappyZap said:

What did you put in "Borg Repo (Backup Destination):", since you don't use a local repo?

I have the same question. I got SSH working, but I don't know what to put in "Borg Repo (Backup Destination):" to initialize the repo. Did you finally get it working? If so, what was the solution you found?

 

Thanks!

Link to comment
I have the same question. I got SSH working, but I don't know what to put in "Borg Repo (Backup Destination):" to initialize the repo. Did you finally get it working? If so, what was the solution you found?
 
Thanks!

That’s the local path in Unraid where you want the borg repo to live. It can be inside your array or on a dedicated volume. It gets mapped into the /mnt/borg-repository folder INSIDE the Docker container.

Once you open an SSH shell inside the Docker container, you'll initialize with a command like

borg init --encryption=repokey-blake2 /mnt/borg-repository

regardless of the location you chose for the repo outside of Docker.

Once you initialize with this command, you should see the repo files in the location you put in the "Borg Repo (Backup Destination):" variable.
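
If you want to sanity-check it from the same shell, a standard borg command (not from the original post) will summarize the new repo:

borg info /mnt/borg-repository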

Sent from my iPhone using Tapatalk
Link to comment
35 minutes ago, sdub said:


That’s the local path in Unraid where you want the borg repo to live. It can be inside your array or on a dedicated volume. It gets mapped into the /mnt/borg-repository folder INSIDE the Docker container.

Once you open an SSH shell inside the Docker container, you'll initialize with a command like

borg init --encryption=repokey-blake2 /mnt/borg-repository

regardless of the location you chose for the repo outside of Docker.

Once you initialize with this command, you should see the repo files in the location you put in the "Borg Repo (Backup Destination):" variable.

Sent from my iPhone using Tapatalk

Thank you for your answer. I understand that's how it works if I were backing up to an unassigned disk or inside the array; however, I'm trying to back up to another machine on my LAN over SSH. My understanding is that there's no local mount to be set in Unraid for that, so I don't really understand which value to put for that path (correct me if I'm wrong).

 

Thanks again 😀

Link to comment
Thank you for your answer. I understand that's how it works if I were backing up to an unassigned disk or inside the array; however, I'm trying to back up to another machine on my LAN over SSH. My understanding is that there's no local mount to be set in Unraid for that, so I don't really understand which value to put for that path (correct me if I'm wrong).
 
Thanks again

You’ll need to be able to SSH from the Docker container into the remote server with SSH keys (no password). Then you can init the remote repo with:

borg init --encryption=repokey-blake2 user@hostname:/path/to/repo

Where /path/to/repo is on the remote machine.

Note that the remote server needs to have borg installed since the commands get executed remotely.
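
For the passwordless SSH part, a minimal sketch from a shell inside the container (user and hostname are placeholders):

ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519    # create a key with no passphrase
ssh-copy-id user@hostname                           # install the public key on the remote server
ssh user@hostname borg --version                    # confirms the key works and borg is installed remotely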

In the case where you have no local repo, just this remote one, you can leave that path variable blank or delete it from the Docker template.

I’d encourage you to have both a local and remote backup though… as they say, one is none!


Sent from my iPhone using Tapatalk
Link to comment
On 7/7/2021 at 11:33 PM, sdub said:


You’ll need to be able to SSH from the Docker container into the remote server with SSH keys (no password). Then you can init the remote repo with:

borg init --encryption=repokey-blake2 user@hostname:/path/to/repo

Where /path/to/repo is on the remote machine.

Note that the remote server needs to have borg installed since the commands get executed remotely.

In the case where you have no local repo, just this remote one, you can leave that path variable blank or delete it from the Docker template.

I’d encourage you to have both a local and remote backup though… as they say, one is none!


Sent from my iPhone using Tapatalk

 

Thanks for your input. What I needed to do was to skip /mnt/borg-repository entirely and put the SSH path in the config file:

location:
    source_directories:
        - /mnt/user/FOLDER
    repositories:
        - [email protected]:/media/USER/barracuda/borg/
    one_file_system: true
    .....
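
For anyone who wants both the local and remote repos sdub recommends, the repositories list can simply hold both entries. A sketch, where the local path assumes the container's /mnt/borg-repository mapping:

location:
    source_directories:
        - /mnt/user/FOLDER
    repositories:
        - /mnt/borg-repository
        - user@hostname:/path/to/repo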

 

Link to comment

This is probably a question for the Borg community rather than here, but I'm giving it a try. I'm experiencing slow speeds backing up to a LAN computer over SSH. I'm getting around 35MB/sec when I should be getting around 110MB/sec. When doing a simple cp between the two machines I get around 110MB/sec, so it's definitely not related to the network equipment.

 

Could it be that SSH transfers slowly, or that borgmatic's encryption slows down the process significantly? Both machines indicate low CPU consumption.

 

Thanks!

 

 

Link to comment
1 hour ago, touz said:

 

Could it be that SSH transfers slowly, or that borgmatic's encryption slows down the process significantly? Both machines indicate low CPU consumption.

 

 


Probably a question for the borg forum like you said, but it could be several things when encryption is on. Some encryption modes are faster than others, and it could be Docker throttling CPU or memory usage that is the root cause of the bottleneck, even if the host isn't pegged at 100%. Also, the encryption is probably single-threaded, so your overall CPU utilization might not be representative if a single core is at 100%.

 

Have you tried creating a test repo with no encryption for comparison?
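
If it helps, a throwaway unencrypted repo for that comparison could look like this (paths and archive name are placeholders):

borg init --encryption=none /mnt/borg-repository-test
borg create --stats /mnt/borg-repository-test::speed-test /mnt/user/FOLDER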

Link to comment
7 hours ago, sdub said:


Probably a question for the borg forum like you said, but it could be several things when encryption is on. Some encryption modes are faster than others, and it could be Docker throttling CPU or memory usage that is the root cause of the bottleneck, even if the host isn't pegged at 100%. Also, the encryption is probably single-threaded, so your overall CPU utilization might not be representative if a single core is at 100%.

 

Have you tried creating a test repo with no encryption for comparison?

 

You're right, I should have tested that first before posting. I just tested, and there's no speed difference between an encrypted and an unencrypted repo.

 

I've tested transferring a file via SSH directly from the Unraid terminal to the same destination and I'm getting full speed.

 

I also tried switching the container network type from bridge to host without success.

 

So I guess it's either something like a Docker limitation on Unraid or Borgmatic slowing down somehow.

 

May I ask what your transfer speed is when doing a local backup to an external HDD? I think that's your setup?

 

BTW, thank you for the great container, it's really awesome. If only I could get decent speeds.

 

Link to comment
On 7/6/2021 at 10:42 AM, SynIQ said:

Hi all.

[...]

This topic has already been addressed, but until now I couldn't find a solution to stop containers with before_backup and after_backup hooks. I wonder if there is an easy way to control the Docker daemon on unRAID from inside a Docker container.

 

Thanks in advance and sorry for the possibly stupid questions.

Sorry for being late to the party. I think the question about why it is a good idea to stop containers when running a backup has been addressed adequately. Check out the following post for how you can achieve it:
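
In short, one approach (a sketch, assuming /var/run/docker.sock is mapped into the borgmatic container and the docker CLI is installed inside it) is to stop and restart containers from borgmatic's command hooks:

hooks:
    before_backup:
        - docker stop my_container    # my_container is a placeholder name
    after_backup:
        - docker start my_container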

 

  • Thanks 1
Link to comment
  • 1 month later...
3 hours ago, hrv231 said:

Just letting you all know that there is a fork from b3vis that seems more up-to-date.

https://hub.docker.com/r/modem7/borgmatic-docker
https://www.modem7.com/books/docker-backup/page/backup-docker-using-borgmatic

 

 

Thanks for the heads up... I'll take a look. It's too bad that b3vis has fallen behind on versions. Is there a specific feature in 1.5.13+ that you're looking for, or do you just not want to fall too far behind the latest version?

 

This modem7 fork is brand new, so I might not jump ship just yet... definitely something to keep an eye on.

Edited by sdub
Link to comment
  • 1 month later...
On 11/21/2020 at 6:14 PM, sdub said:

It is recommended that your Borg repo and cache be located on a drive outside of your array

 

Why is the repo recommended to be on an unassigned drive? What are the downsides to using a disk in the array that is only used for backups? You then also get the parity protection. Genuinely curious about this one. Thanks.

Link to comment
2 hours ago, manHands said:

 

Why is the repo recommended to be on an unassigned drive? What are the downsides to using a disk in the array that is only used for backups? You then also get the parity protection. Genuinely curious about this one. Thanks.

 

The point was independence... you don't want a problem with your array corrupting both the data and the backup. As long as you're also backing up to a third, offsite location, the risk is realistically pretty low.

 

You could specifically exclude the disk from all of your user shares and disable cache writes to minimize exposure to an Unraid bug, but the parity writes will slow down the backups, which can take several hours for a multi-terabyte backup.

Edited by sdub
  • Thanks 1
Link to comment
12 minutes ago, sdub said:

 

... the parity writes will slow down the backups, which can take several hours for a multi-terabyte backup.

 

Ah yes, totally overlooked the write speed. Thanks for pointing that out.

 

On 11/21/2020 at 9:23 PM, sdub said:
  • Backup share acts like a funnel for other data to be backed up
    • Other machines on my network back themselves up to an unRAID "backup" share (Windows 10 backup, time machine, etc.)
    • Docker images that use SQLite are configured to place their DB backups in the "backup" share

 

Curious about your thinking on this approach as well. Why not back up everything listed above directly to the unassigned drive that serves as your borg backup? Because essentially, don't you end up having four copies of everything that is backed up to your unRAID "backup" share?

  1. Original copy
  2. unRAID "backup" share
  3. borg funnel backup of "backup" share
  4. remote backup

Having an additional backup definitely doesn't harm anything (other than space, I suppose), but again, just curious about your approach as I have a similar setup.

 

I need to back up an iMac, MacBook, and PC in addition to the various shares on unRAID. I was going to try out Vorta (a borg client for macOS) and, with that, set up a backup schedule similar to the one I'll have with your borg container on unRAID. However, rather than back up my Mac devices to a backup share on the unRAID array, I'd create a separate borg repo for each "device" (unRAID, MacBook, iMac, etc.) on the unassigned disk, and then of course sync each repo to a remote backup location. Still trying to figure out my PC solution.

 

Does that make sense? Anything I might be missing with this approach or any other thoughts you may have? I appreciate the help.

Link to comment
39 minutes ago, manHands said:

Does that make sense? Anything I might be missing with this approach or any other thoughts you may have? I appreciate the help.

 

 

Funny you mention that, because I was just thinking about this "issue" earlier this week... of course an extra backup isn't really an "issue", just wasted space.

 

I've slowly moved almost all of  the content I care about off the computers on the network... they almost exclusively access data stored on the Unraid NAS or via Nextcloud, which makes this issue largely go away.  The one exception is Adobe Lightroom...  that catalog really wants to be on a local disk.

 

I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy, and it works great). If I include that share in my Unraid backup, I'll have way more copies of the data than I really need. I don't need an incremental backup of an incremental backup.

 

I think I'm moving away from the "funnel" mentality. Instead I'll have Unraid/Borg create local and remote repos for its important data, and Windows/Borg create local and remote repos for its important data. The Windows "local" repo will be kept in an Unraid user share that isn't backed up, or on the unassigned drive I use for backup.

 

Going forward, I'll probably do the same on any and all other PC's that have data I care about.  The reality is that all of my other family members work almost exclusively out of the cloud (or my Unraid cloud), so there's very little risk of data loss.  

 

Edited by sdub
  • Thanks 1
Link to comment
16 hours ago, sdub said:

I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy, and it works great).

 

Genius! Sounds like I have my Windows backup solution now.

 

One other quick question on your unRAID borg backup. How do you have the unassigned drive formatted? Did you just stick with XFS, or use something else for greater compatibility just in case you had to put the disk in another machine as part of a restore?

Link to comment
34 minutes ago, manHands said:

 

Genius! Sounds like I have my Windows backup solution now.

 

One other quick question on your unRAID borg backup. How do you have the unassigned drive formatted? Did you just stick with XFS, or use something else for greater compatibility just in case you had to put the disk in another machine as part of a restore?

Yeah, I just used XFS… it's not as universal as FAT, but it's still widely supported, and there are ways to mount an XFS partition under Windows if you had to.

  • Thanks 1
Link to comment
