rsync Incremental Backup


Thanks for this script and your effort :)


My first idea was to use this script to create a backup in a folder that is synchronized with gdrive (via the rclone script for Plex), so I would have an online backup in case anything happens to my server. On second thought this was probably a silly idea, as the hardlinks wouldn't survive the sync to gdrive and a recovery would be difficult.


But I think I will use this script to create a monthly backup, which should always be a full backup, and synchronize that with gdrive. So in a worst-case scenario I would only lose one month of data.
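The sync to gdrive could then be a single rclone call; just a rough sketch, where the remote name gdrive: and the local backup path are placeholders rather than my actual setup:

# Rough sketch: push the monthly full backups to Google Drive.
# The "gdrive:" remote and the local path are placeholders - adjust to your setup.
rclone sync /mnt/user/backups/monthly gdrive:unraid-monthly-backup \
    --transfers 4 --checkers 8 \
    --log-file /var/log/rclone-monthly.log --log-level INFO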


If I understand correctly, I only need to replace this part of the script:


if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"
fi

with this, so that every run creates a full backup:

echo "Create full backup ${new_backup}"
# create very first backup
rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"


And create a monthly cron job which executes on the first day of each month.
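On Unraid that could be handled with the User Scripts plugin's custom schedule or a plain cron entry; just a sketch, with the script path as a placeholder:

# Run the backup script at 03:00 on the 1st of every month
# (the path is a placeholder - adjust it to wherever the script lives)
0 3 1 * * /bin/bash /boot/config/scripts/backup.sh >> /var/log/monthly-backup.log 2>&1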

Do you see any issues with this idea?


Script I use to Synchronize with GDrive:


On 5/29/2021 at 6:51 AM, sonic6 said:

Next question :)

Is there a way to exclude a folder?


I want to back up a whole share to a UD. Inside that share is the ".Recycle.Bin" folder, which I don't want to back up, but everything else should be included.

A good option for that would be ...


--exclude='.Recycle.Bin'


... but better ask @mgutt
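For context, it would go into the rsync calls of the script, roughly like this (just a sketch based on the incremental command quoted earlier; the variables are the ones used by the script):

# Sketch: the incremental rsync call with the exclude added
rsync -av --stats --delete \
    --exclude='.Recycle.Bin' \
    --link-dest="${backup_path}/${last_backup}" \
    "${source_path}" "${backup_path}/.${new_backup}"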

On 6/22/2021 at 2:43 AM, nicx said:

Hey @mgutt, great script, thanks a lot. One question: I am using it "heavily", every 2 hours, with notifications enabled. Is it possible to get notifications only when an error occurs? I don't need the "Everything OK" notifications ;)

The "all OK" pings are important to verify that you would actually see an error ping. Without them, silence could mean either a major failure or everything working as designed.
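If you nevertheless want error-only notifications, a rough sketch of the idea would be to check rsync's exit code and call Unraid's notify helper only on failure; the wiring below is an assumption for illustration, not part of the script:

# Sketch: notify only if rsync failed (non-zero exit code).
# /usr/local/emhttp/webGui/scripts/notify is Unraid's built-in notification helper.
rsync -av --stats --delete "${source_path}" "${backup_path}/.${new_backup}"
if [[ $? -ne 0 ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i alert \
        -s "Backup failed" -d "rsync returned a non-zero exit code"
fi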


First, thanks for the work!


I'm trying to implement the script between my Unraid server and an offsite Synology NAS. The Synology has its disks in RAID 5 and the volume formatted as BTRFS.


The Synology is mounted as an SMB share (this is maybe where the mistake is), and every time I start the script (from my Unraid server, backing up to the Synology) it does a full backup. I assume the hardlinks are not working on this setup.


What am I doing wrong? Do I need to mount the Synology in another way?

13 hours ago, mgutt said:

a) Enable Unix extensions in the SMB settings of the Synology NAS


b) Mount Unraid on your Synology NAS and execute the script on the Synology NAS

I will try it this way. One additional question: is the script transferring all the data, comparing, and then deleting/creating hardlinks?

Or is it checking the existing backup and only transferring the changed information?


The data to back up is very big on my side, with a lot of files, and the backup takes more than 10 hours (when running locally). So remotely it would be much worse if the script transferred everything.
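A quick way to check this is an rsync dry run, which only lists what would be transferred without copying anything; a sketch using the script's variables:

# Dry run (-n): list what rsync would transfer without actually copying
rsync -avn --itemize-changes --stats --delete \
    --link-dest="${backup_path}/${last_backup}" \
    "${source_path}" "${backup_path}/.${new_backup}"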


Hey Marc, 


Have you got this script on GitHub? That way


(delete as applicable for your own use case)


- The (extra) lazy amongst us could just pull changed versions rather than copy/paste

- The more anal amongst us can more easily validate that we're content with the changes before implementing

- The more helpful amongst us can do PRs and "help" with your script development






@mgutt Found an issue when backing up to an NTFS-formatted unassigned disk:

Since NTFS is not aware of Linux permissions/users/groups, rsync thinks the files are different even when they are identical, because the permissions on the backed-up files don't match the Linux permissions of the source.


In order to fix this and avoid duplicating all files, I had to add these arguments to rsync:


# create incremental backup
if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --no-perms --no-owner --no-group --modify-window=5 --progress --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --no-perms --no-owner --no-group --stats "${source_path}" "${backup_path}/.${new_backup}"
fi


Perhaps consider adding these arguments to your script? I don't think ownership/permission checks would ever be needed for Unraid backups, since everything should be owned by nobody/users anyway.

7 hours ago, CoolTNT said:

I don't think ownership/permission checks would ever be needed for Unraid backups, since everything should be owned by nobody/users anyway.

They are important for the appdata share, so don't use NTFS as a target if you want to create backups of it.


Hello all,

and many thanks to @mgutt for the excellent backup script.

Before I switched to Unraid, I used hardlinkbackup (Windows software that uses rsync for backups). Since I don't always want to run a virtual machine just for backups, I was happy to find this script here.

I have now split the data I back up. The data that no one will miss if I'm gone goes to an external hard drive using BTRFS. The data that is important to everyone goes to NTFS, so anyone can plug the disk into a different system.

The backups worked fine in the beginning. But now the backup of the data to the NTFS partition keeps hanging. 

Afterwards, the external disk appears in Unraid like this (0 B used and free):



When I try to restart unraid, I get the following error messages.




Sometimes I can end the process with kill, but often not even that works. If I then force the system to reboot with "reboot", the parity check starts every time.


After the reboot the disk is mounted normally again.




For the backup to the NTFS partition I have adapted the script like this:


rsync -v -rltD --modify-window=10 --stats --progress --itemize-changes --compress --delete --exclude=".rsync" --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"


Does anyone have any idea what is causing the script to crash?


many greetings

4 minutes ago, bonbonJaeger said:

I get the following error messages.



This means UD has a timeout which stops reading the NTFS disk size if it takes longer than 2 seconds. This is not really a bug, but it shows how slow your disk has become. Maybe too many hardlinks for NTFS?!


Maybe you should try formatting it as BTRFS. With WinBTRFS you would still be able to access your files from a Windows client if needed. Or of course XFS / ext4.


In the end, a Unix-based file system should be the better option, as permissions and filename restrictions differ from NTFS as well.
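To see whether the number of hardlinks is really the problem, you could count the hardlinked files on the backup disk; a sketch, with /mnt/disks/backup as a placeholder for the UD mount point:

# Count files that have more than one hardlink on the backup disk
# (/mnt/disks/backup is a placeholder for your UD mount point)
find /mnt/disks/backup -type f -links +1 | wc -l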
