rsync Incremental Backup



1 hour ago, Archonw said:

Which attributes do I need for rsync?

 

I would:

 

1. Stop the container.

 

2. Backup the old files:

 

mv /mnt/user/appdata/mariadb /mnt/user/appdata/mariadb-old

 

3. Restore the backup:

 

cp -aT /mnt/user/unraidbackup/appdata/20220717_043012/mariadb /mnt/user/appdata/mariadb

 

4. Start the container 

 

 

So it's simply copying the files, except that "-a" copies the required file owner and permissions, too.
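For illustration, the effect of "-a" can be verified with a small experiment (the temporary paths below are just examples; ownership is only preserved when running as root, which is the usual case on Unraid):

```shell
# Show that cp -a preserves the file mode on the copy
mkdir -p /tmp/cp_demo_src
echo "data" > /tmp/cp_demo_src/config.cnf
chmod 640 /tmp/cp_demo_src/config.cnf
rm -rf /tmp/cp_demo_dst
cp -aT /tmp/cp_demo_src /tmp/cp_demo_dst
stat -c '%a' /tmp/cp_demo_dst/config.cnf   # prints 640
```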

Link to comment
  • 2 weeks later...

Would it work to use $MOUNTPOINT as the destination?

e.g.

 

backup_jobs=(
  # source                          # destination
  "/mnt/user"                 "$MOUNTPOINT/Backups"
)

 

Any reason why it's not used by default? Thanks

Link to comment
8 hours ago, mgutt said:

What should this variable contain?

 

 

 

MOUNTPOINT : where the partition is mounted

 

In my case, for my backup USB disk, it would be /mnt/disks/WCJ65BZT, and I wouldn't have to bother with mount points for the backup at all.

 

 

Link to comment
On 8/2/2022 at 9:00 PM, Thomas K said:

 

backup_jobs=(
  # source                          # destination
  "/mnt/user"                 "$MOUNTPOINT/Backups"
)

 

 

Did some test runs. It works fine, and you don't have to worry about "where" the disk is mounted in the script.

Edited by Thomas K
typo
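For context: Unassigned Devices passes the mount path of the triggering disk to its device scripts as the MOUNTPOINT environment variable, so a job list along these lines (a sketch of the idea) needs no hard-coded mount path:

```shell
# Inside a UD device script, $MOUNTPOINT is set by Unassigned Devices
backup_jobs=(
  # source        # destination
  "/mnt/user"     "$MOUNTPOINT/Backups"
)
```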
Link to comment

Hello friends,

first of all a big thank you to mgutt for the script.

Since I am still very inexperienced, I hope I'm not asking too many stupid questions.

I copied the script out completely and adapted the following:

# backup source to destination
backup_jobs=(
  # source # destination
  #"/mnt/user/Music" # "/mnt/user/Backups/Shares/Music"
  #"user@server:/home/Maria/Photos" #"/mnt/user/Backups/server/Maria/Photos"
  #"/mnt/user/Documents" #"user@server:/home/Backups/Documents"
   "/mnt/user/appdata/Debian #mnt/disks/Volume/backup_vm

 

I commented out the example entries and added this one for testing:

 

"/mnt/user/appdata/Debian" "/mnt/disks/Volume/backup_vm"

 

So now Unraid has been copying continuously to my external USB HDD (which I formatted as NTFS) since about 22:00 yesterday evening, and it is still going. It has copied about 1.20 TB so far! Something is wrong, right? That's already over 12 hours!

I don't want to cancel the process now in case everything is actually correct.

Can anyone help me?

I would like to back up my Debian VM completely. That is my goal.

Sorry for my bad English.
If you want to answer in German, you are welcome to do so.

:)

Greetings

Screenshot 2022-08-05 115329.jpg

Edited by geromme
Link to comment

When you use NTFS, some features will not work, such as hardlinks and the preservation of user permissions, and without those the backup is barely usable afterwards. I think it is better to use a Linux filesystem, e.g. ext4 or btrfs or something similar.

And you can always cancel these jobs without hesitation, because rsync can handle it: it picks up again where it stopped instead of starting all over. That is one of rsync's many advantages.

Edited by Archonw
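A quick manual way to check whether a destination filesystem supports hardlinks at all (a sketch; supports_hardlinks is a hypothetical helper, not part of the backup script):

```shell
# Succeeds if the given directory's filesystem allows creating hardlinks
supports_hardlinks() {
  local d="$1" rc
  touch "$d/.hl_test" || return 1
  ln "$d/.hl_test" "$d/.hl_test2" 2>/dev/null
  rc=$?
  rm -f "$d/.hl_test" "$d/.hl_test2"
  return $rc
}

supports_hardlinks /tmp && echo "hardlinks OK"   # an NTFS mount would fail here
```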
Link to comment
  • 2 weeks later...
On 8/5/2022 at 11:16 AM, Thomas K said:

 

Did some test runs. It works fine, and you don't have to worry about "where" the disk is mounted in the script.

Your idea sounds interesting. But I guess you run the script via a UD device script and not on a schedule?

I'm currently using 2 USB drives, rotating every month, and the script is run by a cron job via User Scripts. But currently this means I need to run two jobs (one for each USB drive), and one of them will fail because that drive is missing. So the possibility of just one job using "whatever UD disk is available" would be nice.
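One way to get a single scheduled job working with whichever drive happens to be present: pick the first candidate path that is actually mounted (a sketch; the helper name and disk paths are made up):

```shell
# Print the first argument that is an active mountpoint; fail if none is
pick_mounted() {
  local d
  for d in "$@"; do
    if mountpoint -q "$d"; then
      printf '%s\n' "$d"
      return 0
    fi
  done
  return 1
}

# e.g. in the user script:
# dst=$(pick_mounted /mnt/disks/backup01 /mnt/disks/backup02) || exit 1
```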

Link to comment
14 minutes ago, mgutt said:

Shouldn't this default setting avoid those errors?

 

skip_error_no_such_file_or_directory=1

Maybe - I'm still on version 0.3 or 0.4 😅

But I'll update to the newest version soon - that's why I'm back in this thread after a looooong time.

I'm currently looking at what has changed since then and what needs to be reconfigured for my use case.

 

Edit: Even if this works, I would still need two scripts (one for each drive). Or can I use asterisks in the destination path? My USB drives are mounted as "backup01" and "backup02" - if I could use "/mnt/disks/backup*" I could use a single user script for my rotating backups. Does that work?

Edited by jj1987
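Regarding the asterisk question above: bash can expand such a glob before the job list is built, as long as exactly one disk matches at a time (a sketch; resolve_single is a hypothetical helper):

```shell
# Expand a glob pattern and print the match only if it is unambiguous
resolve_single() {
  shopt -s nullglob
  local matches=( $1 )   # the pattern in $1 must stay unquoted so it expands
  shopt -u nullglob
  (( ${#matches[@]} == 1 )) || return 1
  printf '%s\n' "${matches[0]}"
}

# e.g.: dst=$(resolve_single '/mnt/disks/backup*') || exit 1
```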
Link to comment
  • 3 weeks later...

I just set up the newest version of the script (v1.3) and wanted to run it. For that I also prepared a freshly formatted (xfs) external HDD, but unfortunately I get the following "error message" right after starting:

 

 

Script is already running!
Script Finished Sep 05, 2022 13:11.15

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Backup_Script_HDD_v1.3/log.txt

 

 

The "old" script, which ran without problems for a long time, now shows the same behavior. What am I doing wrong, or what have I overlooked or forgotten? Does anyone have an idea?

 

Many thanks in advance.

 

Best regards,

Marc
 

Link to comment

Thanks for your quick reply. You really have to know that one. After deleting the tmp file and the tmp folder under /tmp/, the script started successfully, and from the looks of it, it also ran through cleanly.

 

Thanks again for your work on the script 👍

Link to comment

Hello,

first of all thanks to mgutt for the script.
Unfortunately my experience is rather limited in the topic.
I have the problem that the script throws an error: "line 223: (1662569620 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")".


The only thing I changed in the script (v1.3) was:
Line 68 "#..."
Line 69 "#..."
Line 70 ""/mnt/user/source" "root@server.duckdns.org:/mnt/user/def/destination""
Line 129 "alias rsync='rsync -e "ssh -p 5533"'"

 

Started manually. The folders exist, and the script works after the modification (see workaround below).

Full Script feedback message:


# #####################################
last_backup: '_'
date: invalid date '_'
/tmp/user.scripts/tmpScripts/AAA Backup/script: line 223: (1662569620 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")

 

For some reason it does not pick up a value for "last_backup", it seems.

Does anyone have an idea where my error is?

 
My Workaround for now - line 223 changed:
Line 223 "last_backup_days_old=$(( ($(date +%s) - $(date +%s -d "${last_backup:0:4}${last_backup:4:2}${last_backup:6:2}") + 1 ) / 86400 / 19242 ))"

Then I get:


# #####################################
last_backup: '_'
date: invalid date '_'
Create incremental backup from /mnt/user/source to root@server.duckdns.org:/mnt/user/def/destination/20220907_185606 by using last backup root@server.duckdns.org:/mnt/user/def/destination/_


--link-dest arg does not exist: /mnt/user/def/destination/_
cd+++++++++ ./

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 0 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 190
Total bytes received: 103

sent 190 bytes received 103 bytes 586.00 bytes/sec
total size is 0 speedup is 0.00
File count of rsync is 2
Make backup visible ...

... through rsync (slow)


# #####################################
Clean up outdated backups


Keep daily backup: 20220907_185606
Keep multiple backups per day: 20220907_185428
Keep multiple backups per day: 20220907_184632
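The root cause above is that last_backup ends up empty, so the arithmetic on line 223 has a missing operand. A defensive variant could validate the backup name before doing any date math (a sketch; backup_age_days is a hypothetical helper, not the script's actual code):

```shell
# Print the age in days of a backup named YYYYMMDD_HHMMSS; fail on malformed names
backup_age_days() {
  local name="$1" ts
  [[ "$name" =~ ^[0-9]{8}_[0-9]{6}$ ]] || return 1
  ts=$(date +%s -d "${name:0:4}-${name:4:2}-${name:6:2}") || return 1
  echo $(( ( $(date +%s) - ts ) / 86400 ))
}

# backup_age_days "_" now fails cleanly instead of raising a syntax error
```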


 

Link to comment
  • 1 month later...

Hi, first of all, thank you for the great script!

 

Unfortunately the script seems to have some trouble with the ssh alias option.

 

I have a remote ssh source I want to back up. Since I don't use passwords but ssh keys instead, I must specify the identity file for ssh, which is why I added the ssh alias:

alias ssh='ssh -i /root/.ssh/id_ed25519_userX'

Unfortunately this alias doesn't seem to be picked up (even with the -vvv option added, there are no details on why "permission denied" pops up).

 

As a workaround I created a .ssh/config file on Unraid and added my host with its related identity file there. Now it works!
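That workaround could look roughly like this (the host name is an example; the IdentityFile line is what selects the key without needing an alias):

```
# /root/.ssh/config on the Unraid host
Host backupserver.example.com
    User userX
    IdentityFile /root/.ssh/id_ed25519_userX
    Port 22
```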

 

Is this a bug or should it behave like this?

Link to comment

I have another question regarding hardlinks.

 

Let's assume I have "/mnt/user/appdata/folder1/folder2/file.jpg" and I create the first backup. In this case, the file will be transferred and created on my backup drive.

Now, i "mv" the file on unraid from "/mnt/user/appdata/folder1/folder2/file.jpg" to "/mnt/user/appdata/folder1/file.jpg". After, i create a second Backup.

 

On the Unraid host, I just "moved" the hardlink. What happens in the backups? Will the hardlink be moved as well, or will the file be copied a second time, since the backup drive is a separate filesystem with its own inodes and rsync just copies the current state?

 

 

Link to comment
11 hours ago, afl said:

On the Unraid host, I just "moved" the hardlink. What happens in the backups? Will the hardlink be moved as well, or will the file be copied a second time, since the backup drive is a separate filesystem with its own inodes and rsync just copies the current state?

I don't know for sure, but since rsync compares files between the last backup path and the new backup path, and the last backup's subdir "/appdata/folder1" does not contain a file called "file.jpg", I think it will copy the file instead of creating a hardlink. And yes, I think this is because the inodes of the two filesystems are completely different, and rsync does not "build a database" to remember that the same inode is used by the file in "/appdata/folder1/folder2".

 

But this should be different if source and destination are on the same filesystem, as the backup files would then already use the same inodes as the source files.

 

But I never tested this. 🤷‍♂️

 

Link to comment

Released v1.4 with the following new features:

 

1.) The script can now be called with source and destination paths as arguments

 

Example:

/usr/local/bin/incbackup /src_path /dst_path

 

2.) If the source path is set to /mnt/*/appdata, the script will automatically stop all running docker containers and restart them after the backup has been created (resulting in consistent container backups).

 

3.) If the source path is set to /mnt/cache/appdata or /mnt/diskX/appdata, the script will create a snapshot at /mnt/*/.appdata_snapshot before creating the backup. This reduces docker container downtime to a few seconds (!).

 

4.) The script tries to test whether the destination path supports hardlinks (needs feedback).

 

 

The following new warnings have been added to the first post of this thread:

 

Quote

Do not use NTFS or other filesystems that do not support hardlinks and/or Linux permissions. Format external USB drives with BTRFS and install WinBTRFS if you want to access your backups through Windows.

 

Do NOT use the docker safe perms tool if you back up the appdata share to the array. It changes all file permissions, so the files can no longer be used by your docker containers. Docker safe perms skips only the /mnt/*/appdata share and not, for example, /mnt/disk5/Backups/appdata!

 

 

 

 

Link to comment
On 10/30/2022 at 11:53 PM, mgutt said:

4.) The script tries to test whether the destination path supports hardlinks (needs feedback).

The hardlink check failed on my side. The error is: Your destination does not support hardlinks!

I upgraded from version 0.6 to 1.4. That's why I started with an empty backup. My setup is as follows:

  • Empty BTRFS-formatted hard disk at /mnt/disks/W301C86A (mounted by UD)
  • backup_jobs=(
      # source               # destination
      "/mnt/user/archive"    "/mnt/disks/W301C86A"
      "/mnt/user/appdata"    "/mnt/disks/W301C86A"
    )

  • Other settings have default values

Do I have to add a subfolder for each destination? Any other suggestions?

Link to comment
7 hours ago, enabler said:

The hardlink check failed on my side.

Thank you for your feedback. I will check this. Until then, you can remove this part from the script:

    # hardlink test
    touch "$empty_dir/empty.file"
    rsync "${dryrun[@]}" --itemize-changes "$empty_dir/empty.file" "$dst_path/link_dest/"
    # remove ssh login if part of path
    link_dest_path="${dst_path/$(echo "$dst_path" | grep -oP "^.*:")/}"
    transfers=$(rsync --dry-run --itemize-changes --link-dest="$link_dest_path/link_dest/" "$empty_dir/empty.file" "$dst_path/hardlink/")
    [[ "$transfers" ]] && notify "No hardlink support!" "Error: Your destination does not support hardlinks!"
    rm -v "$empty_dir/empty.file"
    rsync "${dryrun[@]}" --recursive --itemize-changes --delete --include="/link_dest**" --exclude="*" "$empty_dir/" "$dst_path"
    [[ "$transfers" ]] && exit 1

 

7 hours ago, enabler said:

# source # destination

"/mnt/user/archive" "/mnt/disks/W301C86A"

"/mnt/user/appdata" "/mnt/disks/W301C86A"

Please add different subdirs to the destinations, otherwise /archive and /appdata will be copied into the same dir "/W301C86A/<timestamp>". Several versions ago I changed the script so it does not create additional subdirs in the destination; it is now up to the user to define those subdirs.
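Applied to the job list above, that would give something like this (the subdir names are just examples):

```shell
backup_jobs=(
  # source               # destination
  "/mnt/user/archive"    "/mnt/disks/W301C86A/archive"
  "/mnt/user/appdata"    "/mnt/disks/W301C86A/appdata"
)
```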

Link to comment
21 minutes ago, mgutt said:

Released v1.5

- fixed hardlink test

 

FYI - I just upgraded from 1.4, which was fine, and am now getting the error that enabler reported, and the script aborts.

 

Script Starting Nov 01, 2022 17:57.30

Full logs for this script are available at /tmp/user.scripts/tmpScripts/backup/log.txt

# #####################################
created directory /dst/link_dest
>f+++++++++ empty.file
--link-dest arg does not exist: /dst/link_dest
removed '/tmp/_tmp_user.scripts_tmpScripts_backup_script/empty.file'
Error: Your destination /dst does not support hardlinks!
Script Finished Nov 01, 2022 17:57.33

Full logs for this script are available at /tmp/user.scripts/tmpScripts/backup/log.txt

 

 

Link to comment
6 minutes ago, bclinton said:

I just upgraded from 1.4, which was fine, and am now getting the error that enabler reported, and the script aborts.

You did not change the paths 😉 (/dst is the default path of the script)

 

EDIT: Changed the default paths to avoid RAM flooding if someone forgets to change the paths.

Link to comment
7 minutes ago, mgutt said:

You did not change the paths 😉 (/dst is the default path of the script)

 

EDIT: Changed the default paths to avoid RAM flooding if someone forgets to change the paths.

 

You are correct. I copied my changes below it instead of replacing it. All is fine now, thanks! :)

Link to comment
