rsync Incremental Backup



The following script creates incremental backups of a share to a target path using rsync. As an example, the default setting creates backups of the shares "Music" and "Photos" to the path "/mnt/

Good point. I will upgrade the script so it supports multiple and complete paths (not only a share name).


Next question :)

Is there a way to exclude a folder?

I want to back up a whole share to a UD. Inside that share is the ".Recycle.Bin" folder, which I don't want to back up, but everything else should be included.

  • 3 weeks later...

Thanks for this script and your effort :)

 

My first idea was to use this script to create a backup in a folder that is synchronized with Google Drive (via the rclone script for Plex), so I would have an online backup in case anything happens to my server. On second thought this was probably a silly idea, as I can't use the hardlinks with Google Drive this way and a recovery would be difficult.

 

But I think I will use this script to create a monthly backup, which should always be a full backup, and synchronize that with Google Drive. So in the worst case I would only lose one month of data.

 

If I understand correctly, I only need to replace this part of the script:

 

    if [[ -n "${last_backup}" ]]; then
        echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
        rsync -av --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
    else
        echo "Create full backup ${new_backup}"
        # create very first backup
        rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"
    fi

 

with:

        echo "Create full backup ${new_backup}"
        # create very first backup
        rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"

 

And create a monthly cron job that executes on the first day of each month.

Do you see any issues with this idea?
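For the monthly schedule itself, a cron entry along these lines would do it (the script path below is a made-up example, not the script's actual location; on Unraid, the User Scripts plugin accepts the same schedule syntax for a custom schedule):

```
# Run the backup script at 03:00 on the first day of every month.
# /boot/scripts/backup.sh is a hypothetical path - adjust as needed.
# minute hour day-of-month month day-of-week
0 3 1 * * /boot/scripts/backup.sh >> /var/log/backup.log 2>&1
```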

 

The script I use to synchronize with Google Drive:

 

On 5/29/2021 at 6:51 AM, sonic6 said:

Next question :)

Is there a way to exclude a folder?

I want to back up a whole share to a UD. Inside that share is the ".Recycle.Bin" folder, which I don't want to back up, but everything else should be included.

It would be a good thing ...

 

--exclude='.Recycle.Bin'

 

.... but better ask @mgutt
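To sketch how that exclude would behave in practice — this is a generic rsync example with made-up /tmp paths, not the script's actual variables:

```shell
#!/bin/bash
# Demonstrate rsync's --exclude with a throwaway directory tree.
# All paths here are examples for illustration only.
rm -rf /tmp/demo_src /tmp/demo_dst
mkdir -p /tmp/demo_src/.Recycle.Bin /tmp/demo_src/Music
echo "song"  > /tmp/demo_src/Music/a.mp3
echo "trash" > /tmp/demo_src/.Recycle.Bin/old.txt

# The pattern is matched relative to the transfer root,
# so '.Recycle.Bin' skips that folder at any depth.
rsync -a --exclude='.Recycle.Bin' /tmp/demo_src/ /tmp/demo_dst/
```

In the script, the flag would simply be appended to the existing `rsync -av ...` calls.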


Hey @mgutt, great script, thanks a lot. One question: I am using it heavily, every 2 hours, with notifications enabled. Is it possible to get notifications only when an error occurs? I don't need the "Everything OK" notifications ;)

On 6/22/2021 at 2:43 AM, nicx said:

Hey @mgutt, great script, thanks a lot. One question: I am using it heavily, every 2 hours, with notifications enabled. Is it possible to get notifications only when an error occurs? I don't need the "Everything OK" notifications ;)

The "all OK" pings are important to verify that you would actually see an error ping. Otherwise silence could mean either a major failure or everything working as designed.

  • 2 weeks later...

First of all, thanks for the work!

 

I'm trying to implement the script between my Unraid server and an offsite Synology NAS. The Synology has disks in RAID 5 and a volume formatted with Btrfs.

 

The Synology is mounted as an SMB share (this is maybe where the mistake is), and every time I start the script (from my Unraid server, backing up to the Synology) it does a full backup. I assume hardlinks are not working in this setup.

 

What am I doing wrong? Do I need to mount the Synology in another way?

37 minutes ago, chocorem said:

The Synology is mounted as an SMB share

a) enable Unix extensions in the SMB settings of the Synology NAS

or

b) mount Unraid on your Synology NAS and execute the script on the Synology NAS
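For reference, on a plain Samba server option (a) corresponds to the following smb.conf setting. Whether and where Synology DSM exposes this toggle depends on the DSM version, so treat this as a generic Samba sketch rather than Synology-specific instructions:

```ini
; Generic Samba example - DSM manages its own smb.conf
[global]
    ; Lets SMB clients use Unix features such as permissions,
    ; symlinks and hardlinks over the share
    unix extensions = yes
```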

13 hours ago, mgutt said:

a) enable Unix extensions in the SMB settings of the Synology NAS

or

b) mount Unraid on your Synology NAS and execute the script on the Synology NAS

I will try it that way. One additional question: is the script transferring all the data, comparing, and then deleting/creating hardlinks?

Or is it checking existing files and only transferring the changed data?

The data to back up is very big on my side, with a lot of files, and the backup takes more than 10 hours (when running locally). So remotely it would be worse if the script transferred everything.

1 hour ago, chocorem said:

or is it checking existing files and only transferring the changed data

It compares file size and timestamps before transferring files.
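On top of that quick check, `--link-dest` means an unchanged file is not copied at all — it is hardlinked to the copy in the previous backup, so it occupies disk space only once. A minimal illustration of that effect using plain `ln` (made-up /tmp paths, no rsync involved):

```shell
#!/bin/bash
# Illustrate the hardlink effect behind rsync's --link-dest:
# an unchanged file shared by two backups uses a single inode.
rm -rf /tmp/hl_demo
mkdir -p /tmp/hl_demo/backup1 /tmp/hl_demo/backup2
echo "unchanged data" > /tmp/hl_demo/backup1/file.txt

# What --link-dest effectively does for an unchanged file:
ln /tmp/hl_demo/backup1/file.txt /tmp/hl_demo/backup2/file.txt

# Same inode, link count 2 - the data exists on disk only once.
stat -c '%i links=%h' /tmp/hl_demo/backup1/file.txt
stat -c '%i links=%h' /tmp/hl_demo/backup2/file.txt
```

This is also why hardlinks must survive the transport: over SMB without Unix extensions, the target filesystem never sees them.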


Hey Marc, 

 

Have you got this script on GitHub? That way ...

 

(delete as applicable for your own Use Case) 

 

- The (extra) lazy amongst us could just pull changed versions rather than copy/paste

- The more anal amongst us can more easily validate that we're content with the changes before implementing

- The more helpful amongst us can do PRs and "help" with your script development

 

Cheers

 

Badger

7 hours ago, Meles Meles said:

Have you got this script on GitHub

My plan is to turn it into a plugin, so it's not a good idea to publish it on GitHub at this time. That will happen once it reaches the plugin state.

  • 2 weeks later...

@mgutt Found an issue when backing up to an NTFS-formatted unassigned disk:

Since NTFS is not aware of Linux permissions/users/groups, rsync thinks the files are different even when they are identical, because the permissions on the backed-up files don't match the Linux permissions.

 

In order to fix this and avoid duplicating all files, I had to add these arguments to rsync:

 

# create incremental backup
if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --no-perms --no-owner --no-group --modify-window=5 --progress --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --no-perms --no-owner --no-group --stats "${source_path}" "${backup_path}/.${new_backup}"
fi

 

Perhaps consider adding these arguments to your script? I don't think ownership/permission checks would ever be needed for Unraid backups, since everything should be owned by nobody/users anyway.

7 hours ago, CoolTNT said:

I don't think ownership/permissions checks would ever be needed for UnRAID backups, since everything should be owned by nobody/users anyways.

They are important for the appdata share, so don't use NTFS as a target if you want to create backups of it.

