rsync Incremental Backup



Hi, thank you for this awesome script! I use it against ransomware. One question: can I run chattr +i on the backups on the backup server after each incremental backup, and will the script still be able to add more hardlinks with the next backup? Or are no more hardlinks possible after chattr +i? ('i': A file with the 'i' attribute cannot be modified: it cannot be deleted or renamed, no link can be created to this file, most of the file's metadata cannot be modified, and the file cannot be opened in write mode. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.)
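
For reference, the quoted attribute description already hints at the answer: once a file is immutable, new hardlinks to it are refused, so the next incremental run could not link against it. A quick test sketch (paths are hypothetical):

# make a file immutable, then try to hardlink it
touch /mnt/backup/testfile
chattr +i /mnt/backup/testfile
ln /mnt/backup/testfile /mnt/backup/testfile.link   # fails: Operation not permitted
chattr -i /mnt/backup/testfile                      # the flag would have to be lifted before the next backup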

 


Edited by Naitor
Link to comment
  • 2 weeks later...

Hi!

Great script, I am planning on using this for main daily filebackup.

 

If I check the folder sizes via the console like this:

du -d1 -h /mnt/disks/backup_hdd | sort -k2
13M     /mnt/disks/backup_hdd/#testbackup
11M     /mnt/disks/backup_hdd/#testbackup/20240228_171307
4.0K    /mnt/disks/backup_hdd/#testbackup/20240228_171556
4.0K    /mnt/disks/backup_hdd/#testbackup/20240228_171932
4.0K    /mnt/disks/backup_hdd/#testbackup/20240228_175035
1.8M    /mnt/disks/backup_hdd/#testbackup/20240228_175310

 

Looking at the folder sizes, I can see that only changed/new files are copied, so hardlinks seem to work.

 

But when calculating the folder sizes with the Unraid file manager, it adds all folders up (as if the backups were all full backups).

Same behaviour within the web UI of my Synology NAS.

 

How can I prevent Unraid from showing wrong values for used disk space? Or is it only shown incorrectly within Unraid's file manager?
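
For what it's worth, the discrepancy can be reproduced on the console. A single du invocation deduplicates hardlinks across everything it scans, while separate per-folder size calculations (which is effectively what a file manager does) count every link again. A sketch using the paths from above:

# one invocation: hardlinked files are counted only once
du -sh /mnt/disks/backup_hdd/#testbackup

# separate invocations: every backup folder reports its full apparent size
for d in /mnt/disks/backup_hdd/#testbackup/*/; do du -sh "$d"; done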

 

Thanks for your help!

 

 

edit:

Since I'm quite new to the behaviour of hardlinks, I was a bit hesitant and wanted to make sure my backup storage won't be eaten up because of a wrong config on my side.

But I tested it, and the used disk space on the Unraid 'Main' page doesn't change when doing an incremental backup without any new data. Only the backup folder size gets bigger, which is irritating at first :)

 

 

Thanks again for sharing this script :)

[Screenshot]

Edited by Jaytie
Link to comment
  • 2 weeks later...

Please help 🙂 Can somebody please explain how to change the path where the log file is saved after the backup? I want one folder outside the destination path with all the log files from all my backups, so I can go through them and check whether all the backups were made. Would be very kind, I'm a newbie when it comes to bash!


 

# move log file to destination

log_path=$(rsync --dry-run --itemize-changes --include=".$new_backup/" --include="$new_backup/" --exclude="*" --recursive "$dst_path/" "$empty_dir" | cut -d " " -f 2)

[[ $log_path ]] && rsync "${dryrun[@]}" --remove-source-files "$log_file" "$dst_path/$log_path/$new_backup.log"

[[ -f "$log_file" ]] && rm "$log_file" 
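
One way this could be adapted, as a rough sketch: before the script moves the log into the backup itself, keep a copy in a central directory. The directory path and its creation are assumptions, not part of the original script; $log_file and $new_backup come from the excerpt above:

# hypothetical central log folder outside the destination path
central_log_dir="/mnt/user/backup_logs"
mkdir -p "$central_log_dir"

# keep a copy of every run's log in one place
cp "$log_file" "$central_log_dir/$new_backup.log"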

 

Link to comment

Hi friends. What setting would I change to have the script keep only the latest backup? In other words, after it backs up, it would delete the previous backup folder, and my backup drive would simply have one (the latest) backup folder. I changed the days retention to 1; will that do it? I back up once a week on Mondays.

 

Would this do the trick?

 

# keep backups of the last X days
keep_days=1

# keep multiple backups of one day for X days
keep_days_multiple=0

# keep backups of the last X months
keep_months=0

# keep backups of the last X years
keep_years=0

Link to comment

Hey, 

 

Thank you for the wonderful script! Saved me a lot of time.

 

I tried to figure out how I can ensure that the backed-up data is not accessible anymore once the backup is done. My understanding is that spinning the drive down and detaching it will do the trick. This can be achieved by using the Unassigned Devices plugin, as mentioned by OP.

 

Here is a short manual which works for me. Any feedback is appreciated.

 

Updating Unassigned Devices Default Script

1. First, you want to adapt the Unassigned Devices script by uncommenting sections of the default script. Click on the settings button in the Unassigned Devices section (Main tab).

 

[Screenshot]

 

2. Copy the Disk Serial and Disk Name to a notepad; you will need them later. My Disk Serial in this example ends in "74". Do not copy blank spaces or the brackets.

 

Note: @dlandon pointed out: Do not use "devX" device names. Assign a unique name to the 'Disk Name' field as an alias and use that in your script. Take the time to update it here and save the change before proceeding to the next steps.

[Screenshot]

 

3. Scroll down to find the script section. Click on "Default". It will load the default script into the Device Script content. At this stage, the script does nothing.

 

[Screenshot]

 

4. Uncomment the "remove" section activities by removing the hash signs. Effect: once unmounted, the disk spins down and detaches. Click on Save.

 

[Screenshot]

 

5. Scroll up and select the device script. Yours will have a different name, but there is probably only one to select.

[Screenshot]

 

--------------------------------------------------

Switch to the User Scripts:

 

6. Open the script which you created for the backup based on OPs script.

 

7. Add lines at the beginning and the end of the script to attach, mount and unmount the USB device. You will need the saved information from the first step: Disk Serial and Disk Name. Note: I am using different devices here than at the beginning; do not let this confuse you. Just copy and paste the information you saved before.

[Screenshot]

 

[Screenshot]

 

### mount external USB drive

/usr/local/sbin/rc.unassigned attach 'device serial number'
/usr/local/sbin/rc.unassigned mount name='device name'

 

# unmount the usb device
/usr/local/sbin/rc.unassigned umount name='device name'
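
Put together, the beginning and end of the backup script could look like this. A sketch only: the serial, alias and wait time are placeholders, and the commands are the same rc.unassigned calls shown above:

# attach and mount the USB drive before the backup
/usr/local/sbin/rc.unassigned attach 'WD-XXXXXXXX74'     # placeholder serial from step 2
/usr/local/sbin/rc.unassigned mount name='backup_usb'    # placeholder alias from step 2
sleep 15                                                 # attaching/mounting takes some seconds

# ... the rsync incremental backup runs here ...

# unmount afterwards; the edited UD device script then spins down and detaches the disk
/usr/local/sbin/rc.unassigned umount name='backup_usb'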

 

You can test the commands without running the full script by copying and pasting them into the command line of your Unraid system. Note that attaching and mounting take some time (~10 seconds for me). Just wait and see whether it throws an error or, hopefully, just works.

 

Source: 

 

 

Edited by cloudyhome
added note regarding proper choice of disk name.
Link to comment

@cloudyhome Nice writeup!  One suggestion.  Don't use the 'devX' designation from the 'Disk Name' field.  UD defaults this to the 'devX' designation Unraid assigns to unassigned disks and it can change.  Assign a unique name to the 'Disk Name' field as an alias and use that in your script.

Link to comment
  • 2 weeks later...
On 10/18/2020 at 9:14 AM, mgutt said:

The following script creates incremental backups by using rsync. Check the settings to define your own paths.

 

Explanations

  • All created backups are full backups with hardlinks to already existing files (~ incremental backup)
  • Each backup uses the most recent backup to create hardlinks or new files. Deleted files are not copied (1:1 backup)
  • There are no dependencies between the most recent backup and the previous backups. You can delete as many backups as you like; all backups that are left are still full backups. This can be confusing, as most incremental backup software needs the previous backups for restoring the data, but this is not the case for rsync and hardlinks. Read here if you need more information about links, inodes and files.
  • After a backup has been created, the script purges the backup dir and keeps only the backups of the last 14 days, 12 months and 3 years, which can be defined through the settings
  • Logs can be found inside each backup folder
  • Sends notifications after job execution
  • Unraid exclusive: stops Docker containers if the source path is the appdata path, to create consistent backups
  • Unraid exclusive: creates a snapshot of the Docker container source path before creating a backup of it. This allows an extremely short downtime of the containers (usually only seconds).

How to execute this script?

  • Use the User Scripts Plugin (Unraid Apps) to execute it by schedule
  • Use the Unassigned Devices Plugin (Unraid Apps) to execute it after mounting a USB drive
  • Call the script manually (Example: /usr/local/bin/incbackup /mnt/cache/appdata /mnt/disk6/Backups/Shares/appdata)
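
For reference, the hardlink mechanism described in the quoted post boils down to rsync's --link-dest option. A simplified sketch, not the actual script; paths and timestamps are placeholders borrowed from the manual-call example above:

# unchanged files become hardlinks into the most recent backup,
# so every dated folder remains a browsable full backup
rsync -a \
  --link-dest="/mnt/disk6/Backups/Shares/appdata/20240101_000000" \
  "/mnt/cache/appdata/" \
  "/mnt/disk6/Backups/Shares/appdata/20240102_000000/"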

Hello @mgutt! I was looking at your script and really like it. I think this would be great for backups from my Unraid to a NAS or some other external storage. I have a question, and I might be missing some information: can this script be modified to not make incremental backups, but to back up one Unraid system to a second off-site Unraid that I have running? My home Unraid is my primary (now), but I still have files on my old Unraid off site. I would like to back up all appdata, containers and plugins from my off-site server to home first and have it merge and keep newer files (assuming my home Unraid has more recent files). The problem I ran into is that some files are too large and are timing out and erroring (my Nextcloud data). I have also manually tried backing up my containers and then restoring them on my home one. I have the appdata folder and data, but the container is not showing up in my home Unraid to start. I tried installing the container and then running it, but it still isn't working right and is throwing errors (I think something with my database isn't syncing correctly, leading to problems).

 

Sorry for the long post and any confusion; if I need to, I can try to clarify, but the gist is:

Off-site Unraid full backup to home Unraid (including Docker containers and configs, etc.)

Need the home Unraid to be able to run all containers from the sync (e.g. MariaDB for Nextcloud user data, both synced from off site)

Need to keep newer and existing data on the home Unraid.

After the above is working, have my home Unraid as my primary that then syncs back to off site.

Keep off site as a failover if the home Unraid dies (I have an idea on how to do this with a load balancer in the cloud run through Cloudflare). This means the failover needs to have all my Docker containers, auto-start settings and appdata backup.

 

 

Edit:

On your script, for the custom command, would I be able to change the custom rsync to something like this?

rsync -Pav -e "ssh -i $HOME/.ssh/key.pubkey" /from/dir/ username@hostname:/to/dir/

I don't understand your script well enough to know if this will break something, since your script's custom command is:

alias rsync='sshpass -p "<password>" rsync -e "ssh -o StrictHostKeyChecking=no"'
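
For comparison, a key-based transport would avoid sshpass and a plaintext password entirely. A sketch in the same alias style (the key path is a placeholder; note that ssh -i expects the private key file, not the .pub file):

# hypothetical: authenticate with an ssh key instead of sshpass
alias rsync='rsync -e "ssh -i /root/.ssh/backup_key -o StrictHostKeyChecking=no"'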

 

Edited by myxxmikeyxx
Link to comment
On 7/30/2023 at 1:00 PM, mgutt said:

The target filesystem has to be BTRFS or XFS or ZFS. The source filesystem does not matter, except if it contains the appdata directory. Then it should be BTRFS or XFS (at the moment ZFS does not support reflink files, which is a feature used by my script).

Is ZFS still not supported?

My pool where my appdata folder is, is a zfs pool.

The script runs fine (I guess), but after completing the backup the containers do not start up automatically.

 

EDIT:

I guess I found a bug in the script. When appdata is on a ZFS pool, there is the following error:

rm: cannot remove '/mnt/cache/.appdata_snapshot': No such file or directory

 

As far as I understand this is because $snapshot_path is always set:

       else
          # set snapshot path
          snapshot_dirname=$(dirname "$src_path")
          snapshot_basename=$(basename "$src_path")
          snapshot_path="$snapshot_dirname/.${snapshot_basename}_snapshot"

and because of this it always tries to delete the snapshot instead of starting the containers:

    # remove snapshot (containers are already running)
    if [[ $snapshot_path ]]; then
      rm -r "$snapshot_path"
    # start containers
    else
      echo "Start containers (slow method):"
      docker container start "${container_ids[@]}"
    fi

 

Am I right?

 

EDIT 2:

It seems I am right. I added an unset snapshot_path here, and it works now:

else
   notify "Backup causes long docker downtime!" "The source path $src_path is located on a filesystem which does not support reflink copies!"
   unset snapshot_path

This should be fixed in the next version.

OK, the script is running now, but only in "slow mode" because of ZFS. Would it be better to switch to BTRFS? I was hoping I could use automatic snapshots for my pool and array.

Edited by UnKwicks
possible reason/bug/fix added
Link to comment
On 4/7/2024 at 10:09 PM, UnKwicks said:

This should be fixed in the next version.

I realized that my fix above is not a clean solution, because even with ZFS the script creates a snapshot that is not getting deleted if I unset the variable.

So my fix for now is to just remove the else and start the containers right after the snapshot is removed:

# final container steps (unraid exclusive)
if [[ ${#container_ids[@]} -gt 0 ]]; then

  # remove snapshot, then start containers
  if [[ $snapshot_path ]]; then
    rm -r "$snapshot_path"
    unset snapshot_path
    # start containers
    echo "Start containers (slow method):"
    docker container start "${container_ids[@]}"
  fi

fi

 

So if containers were stopped and a snapshot_path is set (meaning we are in the loop for appdata right now), the snapshot is deleted and the containers get started. For the loops that back up other locations, containers do not get stopped and no snapshot is created, so there is no need to start the containers.

 

Not sure if this is how @mgutt meant the script to work, but maybe you can bring light into the dark?

It would also be good to know whether it is even possible/recommended to run the script when appdata is on a ZFS share.

 

Thanks!

Edited by UnKwicks
Link to comment

I am just finding this solution as I am trying to do remote backups. I have a Raspberry Pi with a USB hard drive, connected over WireGuard to my Unraid server. I want to back up folders on my Unraid drive to the remote hard drive, which is shared via SMB. Right now it is NTFS-formatted, but I can reformat it to another format if necessary. On the first page it was said that the destination and link-dest have to be on the same volume, but I don't quite understand that. I found a link to this page from serverfault.com, and that answer said you can do remote backups. Will this meet my needs? Are there any particular requirements I need to be aware of? Thanks in advance.
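
For context, the "same volume" requirement comes from how hardlinks work: they are extra directory entries for the same inode, and inodes only exist within one filesystem. A tiny demonstration (paths are hypothetical):

# a hardlink within one filesystem works:
ln /mnt/backup/file /mnt/backup/link

# a hardlink across filesystems is impossible by design:
ln /mnt/backup/file /mnt/other_disk/link    # fails: Invalid cross-device link

So the dated backup folders and the --link-dest reference must live on the same filesystem; only the source may be remote.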

Link to comment
  • 3 weeks later...

How did you notice that it doesn't work? Did you look at the file size of the backups or something else?

With every backup cycle you will find all filenames in it, so it looks like another "full backup". Did you try the bash command from the first post to check the actual folder size of your backups?

Link to comment
Posted (edited)

Yeah, sorry, I deleted the post because I thought it was working, but it is not.

 

So I see what it's doing, but I'm not sure why it's doing it. I have not touched the script at all, outside of source and destination.

 

I'm rsyncing a test folder "ftp" from the main XFS array to the destination "ftp_backup", which is in a ZFS pool.

backup_jobs=(
  # source                          # destination
  "/mnt/user/ftp"                 "/mnt/z8fs/ftp_backup"
)

 

 

For some reason, it is copying the dated folder to both spots: to the "ftp" source directory on the array and also to the destination "ftp_backup" on the ZFS pool. It is then including and nesting the new dated folders from the source into subsequent backups at the destination, as seen below.

root@Mars:/mnt/user/ftp# ls -la
total 88
drwxrwxrwx 1 nobody users    79 May  3 12:03 ./
drwxrwxrwx 1 nobody users   218 May  3 12:04 ../
-rw-rw-rw- 1 nobody users 46183 May  3 12:00 testfile.txt
drwxrwxrwx 1 nobody users     5 May  3 12:03 20240503_120313/
drwxrwxrwx 1 nobody users     6 May  3 12:03 20240503_120323/
-rw-rw-rw- 1 nobody users  9068 May  3 12:00 testfile2.txt

 

 

 

root@Mars:/mnt/z8fs/ftp_backup# ls -la
total 57
drwxrwxrwx  5 nobody users  5 May  3 12:12 ./
drwxrwxrwx 12 nobody users 12 May  3 12:04 ../
drwxrwxrwx  4 nobody users  7 May  3 12:04 20240503_120414/
drwxrwxrwx  4 nobody users  7 May  3 12:04 20240503_120427/
drwxrwxrwx  4 nobody users  7 May  3 12:12 20240503_121227/
root@Mars:/mnt/z8fs/ftp_backup# cd 20240503_121227/
root@Mars:/mnt/z8fs/ftp_backup/20240503_121227# ls -la
total 117
drwxrwxrwx 4 nobody users     7 May  3 12:12 ./
drwxrwxrwx 5 nobody users     5 May  3 12:12 ../
-rw-rw-rw- 3 nobody users 46183 May  3 12:00 testfile.txt
drwxrwxrwx 2 nobody users     5 May  3 12:03 20240503_120313/
drwxrwxrwx 3 nobody users     6 May  3 12:03 20240503_120323/
-rw------- 1 root   root   1125 May  3 12:12 20240503_121227.log
-rw-rw-rw- 3 nobody users  9068 May  3 12:00 testfile2.txt
root@Mars:/mnt/z8fs/ftp_backup/20240503_121227# 

 

 

root@Mars:/mnt/z8fs# du -d1 -h ftp_backup/ | sort -k2
478K	ftp_backup/
64K	ftp_backup/20240503_120414
350K	ftp_backup/20240503_120427
64K	ftp_backup/20240503_121227
root@Mars:/mnt/z8fs# 

Hope that makes sense

 

 

EDIT: OK, it seems like even though I specified the ZFS pool in the destination of the script, Unraid still put that directory on the main array and not the ZFS pool. When I clicked into the new "share" and changed it from array to pool, it now seems to work. I have no idea why Unraid would do that.

Edited by ffhelllskjdje
Link to comment
  • 2 weeks later...

Hi, I want to use this with an OMV 7.0 NAS (which is more convenient in my case) and always get "hardlinks not supported". Does anyone know what to configure on the OMV side (Debian Linux) to enable hardlinks there?

I did not find anything in the OMV forums or elsewhere.
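
One way to narrow this down is to test hardlink support directly on the target filesystem on the OMV box (a sketch; the mount path is a placeholder):

# try to create a hardlink where the backups should go
touch /srv/backup/.linktest
ln /srv/backup/.linktest /srv/backup/.linktest2 && echo "hardlinks OK" || echo "hardlinks NOT supported"
rm -f /srv/backup/.linktest /srv/backup/.linktest2

If this works locally but the script still complains, the transport (e.g. an SMB mount) is usually what blocks hardlink creation, not the filesystem itself.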

Link to comment
Posted (edited)
11 hours ago, mgutt said:

Double colon means you are using the rsync protocol. I never tested my script with the rsync daemon as it does not support encryption (= insecure transfers).

Yes, indeed: I wanted to use rsync via SSH, as this works perfectly over the local LAN. So are you using SMB mounts, or what am I missing? The clever improvement I was expecting from your script was versioned backups via rsync. The "normal" rsync copy-and-forget job is just one line and works fine here.

I removed the double colons. That changed nothing:

 

created directory media@nas02/link_dest
--link-dest arg does not exist: /backup/media@nas02/hard_link/backup/media@nas02/link_dest
removed '/tmp/_tmp_user.scripts_tmpScripts_ rsync-Daily_script/empty.file'
Error: Your destination [email protected]::backup/media@nas02 does not support hardlinks!

 

Edited by azche24
Link to comment
8 hours ago, azche24 said:

: I wanted to use rsync via ssh

Which you aren't using, as you have "::" in your destination path

 

: = ssh, port 22, encrypted

:: = rsyncd, port 873, unencrypted 
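
For illustration, the two destination syntaxes side by side (host and paths are placeholders):

rsync -a /src/ user@nas:/backup/dst/     # single colon: runs over ssh (encrypted)
rsync -a /src/ user@nas::backup/dst      # double colon: rsync daemon module (unencrypted)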

 

According to my research, the module paths of the rsyncd configuration must be set up in a specific way to use --link-dest. But as I said, I never tested this scenario and have no experience with it. I'm using only local paths and SSH.

 

Link to comment
  • 4 weeks later...

Hey Guys,

 

first of all, wonderful script.

 

I use this script to back up my entire Unraid server, in my case appdata and disks 1, 2 and 3. Locally on my backup NAS it worked without an issue.

Since I moved the NAS to my sister's house, I have some problems.

 

Our two houses are connected with a LAN-to-LAN connection over WireGuard (FRITZ!Box). I have tested the speed from my Unraid server to the NAS with iperf3, as shown.

 

My problem is that when the script runs, the performance drops significantly. Can someone explain to me where I can see the actual transfer speed of the script, or some code I could put into it for the log?

 

Regards, Flo

 

 

[Screenshot: FRITZ!Box network speed measurement]
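
For reference, rsync can report throughput itself; a sketch of standard rsync options that could be added to the script's rsync call (how they interact with the script is untested here):

# --stats prints a summary including "sent ... bytes/sec" (the average speed);
# --info=progress2 shows the live overall transfer rate while running
rsync -a --stats --info=progress2 /mnt/user/share/ user@nas:/backup/share/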

Link to comment
On 6/14/2024 at 7:15 AM, boooch said:

My problem is that when the script runs, the performance drops significantly. Can someone explain to me where I can see the actual transfer speed of the script, or some code I could put into it for the log?

 

A week of back and forth after the post, I finally got it to work, sort of. I now start the script on my remote NAS, with only read permission and the rsync container, as suggested on the first page.

 

Now I can back up my data at full speed. But one error is left. Can someone explain this error to me?

The size of Bilder is around 27G and the files are on the remote NAS, but it is marked as a failed backup:

 

sent 247.84K bytes  received 27.59G bytes  4.99M bytes/sec
total size is 27.59G  speedup is 1.00
grep: : No such file or directory
File count of rsync is 0
Error: rsync transferred less than 2 files! (Success: Backup of [email protected]:/mnt/user/Bilder was successfully created in /share/Backup_Disk2/20240620_105600 (0)!)!

 

Regards, Flo

Edited by boooch
Link to comment
