rsync Incremental Backup



  • 2 weeks later...
On 8/20/2023 at 9:32 PM, Marcel40625 said:

Hey there,

 

I've been using your script for a while and I'm very happy with it, thanks again,

 

but now, for a new backup, I'm having a bit of trouble:

 



# #####################################
last_backup: '_'
date: invalid date '_'
/tmp/user.scripts/tmpScripts/backup-****_daily/script: line 231: (1692559676 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")
Script Finished Aug 20, 2023 21:27.56

 

I'm syncing from local to a remote host with a key file:

 


# backup source to destination

backup_jobs=(
  # source                          # destination
"/mnt/directory/.....host"        "[email protected]:/mnt/backups/....cloud/cloud/"
"/mnt/directory/.....host"        "[email protected]:/mnt/backups/....cloud/config/"
)

 

.....

 

# user-defined rsync command
alias rsync='rsync -e "ssh -p 2****2 -i /root/.ssh/id_rsa.ve********p"'

 

Any idea what's wrong?

 

rsync is installed on the destination.
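(A quick way to double-check the remote side from the Unraid console, with placeholder port, key path and host rather than the masked values above: the key-based login has to work non-interactively and rsync has to be executable on the destination, otherwise the script cannot determine the last backup.)

# Placeholders only; substitute your own port, key file and host.
ssh -p 22222 -i /root/.ssh/id_rsa -o BatchMode=yes root@REMOTE_HOST 'rsync --version | head -n1'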

 

 

Edit: It seems to me you didn't account for the case where there is no last_backup, and it returns "_".

@mgutt, do you have an idea what's wrong?
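For context, a minimal sketch of the kind of guard that would avoid the empty-value arithmetic error (variable names follow the log output above and are assumptions, not the actual script code):

# Hypothetical guard: skip the age calculation when no previous backup exists,
# instead of feeding an empty value into "( now - ) / 86400".
if [[ -z "$last_backup" || "$last_backup" == "_" ]]; then
    echo "No previous backup found, forcing a full backup"
else
    last_backup_seconds=$(date -d "${last_backup:0:8}" +%s)
    days_since=$(( ( $(date +%s) - last_backup_seconds ) / 86400 ))
    echo "Last backup was $days_since day(s) ago"
fi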

Link to comment

Hello

 

I found this script recently as I want to use a UD drive to back up my shares to. I have a question though... if this has already been answered and I have missed it or misunderstood the answer, I apologise.

 

What happens when the drive I am backing up to gets full?

 

My initial backup contains around 1.5 TB of data that I will undoubtedly be deleting from my array in the next few weeks, and I will no longer need a backup of this data, but there is no real harm in keeping it on the backup drive while there is space. But what happens when the backup drive gets full? Does this script have checks in place so that when the destination is full, it begins deleting old backup data?

 

Again, apologies if this has been asked and/or already answered...and sorry if what I am asking is not clear.


Thank you and thank you for the script!

Link to comment
  • 5 weeks later...
On 9/19/2023 at 8:54 PM, Stupot said:

What happens when the drive I am backing up to gets full? Does this script have checks in place so that when the destination is full, it begins deleting old backup data?

 

Bumping in hope of a response!  Thank you!

Link to comment
On 9/19/2023 at 9:54 PM, Stupot said:

Does this script have checks in place so that when the destination is full, it begins deleting old backup data?

 

Hello,

 

I faced this situation a few weeks ago.

I can't find the error email, but the short answer is: no.

You will receive a notification saying "can't write on destination, disk is full".

 

The script is pretty dumb (it decides which backups to keep based only on date, not on file uniqueness), but still, I love it and use it every day :)

 

K.

Link to comment
On 10/19/2023 at 7:28 AM, Keichi said:

 

I can't find the error email, but the short answer is: no. You will receive a notification saying "can't write on destination, disk is full".

 

Hello!

 

Thank you for your response. It's good to know it will just fail with a notification when the disk is full.

 

I guess my next question then is: what steps would one take to free up space on the disk? Can you just start deleting older folders?

 

Thank you!

Link to comment
10 hours ago, Stupot said:

I guess my next question then is: what steps would one take to free up space on the disk? Can you just start deleting older folders?

 

I configured the script like this:

backup_jobs=(
  # source                                  # destination
  (...)
  "[email protected]:/mnt/user/Medias"   "/mnt/user/Yggdrasil/Medias"
  (...)
)

 

In a file explorer (Krusader here), it looks like this:

[screenshot: the list of dated backup folders as shown in Krusader]

 

Every folder seems to have the same size, but the script uses hardlinks, so if you check with du, you can see it is incremental:

root@Helheim:~# du -d1 -h /mnt/user/Yggdrasil/Medias | sort -k2
13T     /mnt/user/Yggdrasil/Medias
13T     /mnt/user/Yggdrasil/Medias/20231009_185050
18G     /mnt/user/Yggdrasil/Medias/20231011_121936
2.4G    /mnt/user/Yggdrasil/Medias/20231012_122623
1.1G    /mnt/user/Yggdrasil/Medias/20231013_122558
12G     /mnt/user/Yggdrasil/Medias/20231014_122649
1.7M    /mnt/user/Yggdrasil/Medias/20231015_122126
18G     /mnt/user/Yggdrasil/Medias/20231016_122350
1.7M    /mnt/user/Yggdrasil/Medias/20231017_122821
1.7M    /mnt/user/Yggdrasil/Medias/20231018_122338
25G     /mnt/user/Yggdrasil/Medias/20231019_122321
115G    /mnt/user/Yggdrasil/Medias/20231020_122923
19G     /mnt/user/Yggdrasil/Medias/20231021_122356

 

Just delete the old folder, and any file that was only present in that one will be deleted.
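A minimal pruning sketch along those lines (the destination path is taken from the example above; the number of backups to keep is an assumption, and you should dry-run any deletion like this first):

# Keep only the newest $keep backup folders under $backup_dir and list the rest
# for deletion. Folder names like 20231009_185050 sort chronologically.
backup_dir="/mnt/user/Yggdrasil/Medias"
keep=5
ls -1d "$backup_dir"/2*_* | sort | head -n -"$keep" | while read -r old; do
    echo "Would delete: $old"    # replace echo with: rm -r "$old"
done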

 

K.

Link to comment
  • 3 weeks later...
On 10/22/2023 at 11:28 AM, Keichi said:

 

Every folder seems to have the same size, but the script uses hardlinks, so if you check with du, you can see it is incremental. Just delete the old folder, and any file that was only present in that one will be deleted.

 

Awesome, thank you!

Link to comment
  • 1 month later...
19 minutes ago, mgutt said:

Yes

 

Interesting... still trying to wrap my little brain around hardlinks. On my backup drive (below), can I delete the 254G and the 154G folders, run the backup again, and have it back up all the files against the original 6TB backup from 11-27? My drive is 8TB, and what I have been doing is formatting it when it gets close to 8TB, running a fresh backup, and starting over. Being able to delete only the incrementals would save a little time.

 

6.4T    /mnt/disks/backup
6.0T    /mnt/disks/backup/20231127_181243
254G    /mnt/disks/backup/20231204_033001
154G    /mnt/disks/backup/20231211_033001

Link to comment
6 minutes ago, bclinton said:

On my backup drive (below), can I delete the 254G and the 154G folders, run the backup again, and have it back up all the files against the original 6TB backup from 11-27?

Yes.

 

It can be confusing, too: if you delete the 254G backup and repeat the command, the 154G backup becomes "bigger".

 

And of course, if you delete the 6.0T backup, the 154G backup will "become" 6.x T.
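A hedged way to see this on disk (paths follow the du output above; the file name is hypothetical): every backup folder only holds hardlinks to data that already exists elsewhere, and du charges a hardlinked file to the first folder it scans.

# Link count > 1 means the file also exists as a hardlink in other backup folders;
# the data is only freed once the last link is deleted.
stat -c '%h hardlinks -> %n' /mnt/disks/backup/20231127_181243/Movies/example.mkv

# du counts each hardlinked file only once, which is why deleting one backup
# folder makes another one appear "bigger".
du -d1 -h /mnt/disks/backup | sort -k2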

 

Link to comment
5 hours ago, mgutt said:

Yes

 

Thanks for your answer.


Some time ago, I made a mistake and began to move files from the GUI (not with rsync), and it just filled my disks with full copies.
I've deleted the unimportant backups, but looking at the overall size of the backups, and considering my backup source is a folder where I only add files and never delete, it is way bigger than it should be if I hadn't messed it up with a bad move command.

Since then, I've followed your commands in this topic so that all my backups are on the share/disks I want, moved with the rsync command. Is it possible to run a "check" so that if a file is present multiple times (from the same source path) as full copies in different folders, all these copies are "converted" to hardlinks to a single copy, shrinking my backup size?

 

Edit: oh, and one more question, to better understand the "system": if a backup share spans multiple disks, when we move with rsync from one disk to another, can hardlinks on one drive point to data on another disk? Because "rsync --archive --hard-links --remove-source-files" only deals with disks, not shares. How would it know?
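For reference, a hedged sketch of the disk-to-disk move mentioned above (paths are assumptions): hardlinks can only exist within a single filesystem, so a link on one disk can never point at data on another disk, and the whole backup tree has to be moved in one rsync call for the links to be recreated on the destination.

# Move the complete backup tree from disk1 to disk2 in one call so rsync can
# recreate the hardlinks between the backup folders on the destination disk.
rsync --archive --hard-links --remove-source-files /mnt/disk1/Backups/ /mnt/disk2/Backups/
# --remove-source-files leaves empty directories behind; clean them up afterwards:
find /mnt/disk1/Backups/ -type d -empty -delete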

Edited by xoC
Link to comment
34 minutes ago, xoC said:

Is it possible to run a "check" so that if a file is present multiple times (from the same source path),

Not with rsync, but with different apps:

https://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks
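As a hedged illustration of that approach (rdfind is just one of the tools discussed at that link, and the path is an assumption):

# Dry run first: report which files under the backup share are duplicates.
rdfind -dryrun true -makehardlinks true /mnt/user/Yggdrasil/Medias
# If the report looks right, replace the duplicates with hardlinks to a single copy.
rdfind -makehardlinks true /mnt/user/Yggdrasil/Medias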

 

Someone asked if it could be added to the nerd tools:

 

 

Feel free to check that or participate in the thread and ask for it.

Link to comment
  • 2 weeks later...
On 12/11/2023 at 12:36 PM, mgutt said:

Yes.

 

It can be confusing, too: if you delete the 254G backup and repeat the command, the 154G backup becomes "bigger".

And of course, if you delete the 6.0T backup, the 154G backup will "become" 6.x T.

 

Sorry for the ?'s but I have one more. :)

 

In this example, can I simply keep the original backup and the last backup and be fine? In other words, delete the 1223 backup to save space and keep only the 1225 and the 1127 backups. The next weekly backup will be 010124, so I can delete the 1225 backup once the 0101 backup is finished?

 

[screenshot: the backup folder listing referenced above]

Link to comment
  • 2 weeks later...

I fixed it for now. rsync was disabled on the remote host 🙄

I get another error now, but I'll check this first.

 

---- FIXED

 

Hi, thanks again for your awesome script. 

 

I am struggling to run the script for a backup to a remote server via SSH.

I configured authentication with an SSH key file, and I can successfully log in to my remote server via SSH without a password from a terminal session.

Now I added my remote location and configured the user-defined ssh and rsync commands. But I get this error:

Error: ()!

 

I have no idea what I am doing wrong. I attached my script below. I changed nothing besides the remote destination and the user-defined aliases.

Thanks for any advice!

incremental_remote_Backup.sh

Edited by UnKwicks
Solved
Link to comment

EDIT

 

I figured it out again... documenting it here for anyone else having this problem.

The issue is that when I SSH to my remote server, I get the following error messages on the console:

hostfile_replace_entries: link /root/.ssh/known_hosts to /root/.ssh/known_hosts.old: Operation not permitted
update_known_hosts: hostfile_replace_entries failed for /root/.ssh/known_hosts: Operation not permitted

Because of these errors, the rsync command in the backup script fails.

Maybe it is possible to catch this issue in a future script version?

 

This issue and its solution are covered here:

 

So doing a

ssh-keyscan -H TARGET_HOST >> ~/.ssh/known_hosts

solves the errors, and the script then runs fine since the last_backup date can be determined via rsync.
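A hedged sanity check after adding the host key (port, key path, host and destination path are placeholders): listing the remote backup folders should now work without any interactive prompt, which is roughly what the script needs in order to determine last_backup.

# Placeholders only; use the port, key and destination path from your own config.
rsync -e "ssh -p 22 -i /root/.ssh/id_rsa -o BatchMode=yes" \
    --list-only root@TARGET_HOST:/mnt/backups/share/ | awk '{print $NF}' | sort | tail -n 1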

 

----- FORMER Problem // SOLVED ----

 

Ok, I need help.

For the script, please see the post above.

I am able to successfully log in to my remote server via SSH using an SSH key. rsync also works from the console.

But when I run the script, I get the following error:

# #####################################
last_backup: '_'
date: invalid date ‘_’
/tmp/user.scripts/tmpScripts/incremental_remote_backup/script: line 232: (1704284573 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")

It seems like the script is not able to get the last backup date via the rsync command.

Is there anything else I have to configure? I set the aliases, and in the meantime I also added an SSH config file, because I read that the ssh alias set in the script is never actually used.

I'd appreciate any advice on what else I have to set in the script to do a remote backup via SSH/rsync.

Edited by UnKwicks
Link to comment

Hello,

 

For some folders, I have lines like:

cd..t...... StableDiffusion/Models/
cd..t...... StableDiffusion/Models/codeformer/
cd..t...... StableDiffusion/Models/controlnet/
cd..t...... StableDiffusion/Models/embeddings/

 

Is it possible to know what this means?

I can't find it anywhere.
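For reference, those lines look like rsync's --itemize-changes output; the full legend is in man rsync under --itemize-changes. A hedged reading of the example above:

# Itemize format: YXcstpoguax
#   Y = 'c'  the item is being created/changed locally (directories in a new
#            backup folder are always created fresh)
#   X = 'd'  the item is a directory
#   't'      the modification time differs and will be set to match the source
# So "cd..t......" means: the directory is created in the new backup folder and
# its timestamp is updated; no file data is transferred for it.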

 

Thanks!

Link to comment

Hello again,

 

The backup script seems to be working great for me, thank you. The only issue I've found is that I am unable to open the log files created by the script in each backup folder. I can open them through the Unraid GUI but not from my Windows computer.

 

The log files are owned by "root" with permissions "-rw-------".

 

Is there a way to make it so I can open the log files from my computer?

 

Thank you!

 

Link to comment
6 hours ago, Stupot said:

 

Is there a way to make it so I can open the log files from my computer?

I tend to say no. I mean, why would anyone provide access to backups through a network share? That sounds like a very bad idea.

Link to comment
8 hours ago, Stupot said:

Is there a way to make it so I can open the log files from my computer?

You could run the New Permissions tool against that file ('newperms' from the command line). The permissions shown will not let it be readable across the network.
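If you only want to fix a single log file by hand, something along these lines would do it (the path and file name are hypothetical; nobody:users is the usual Unraid default owner, as mentioned below):

# Give one backup log the usual Unraid share ownership and world-readable
# permissions so it can be opened over SMB.
chown nobody:users /mnt/disks/backup/20231127_181243/backup.log
chmod 644 /mnt/disks/backup/20231127_181243/backup.log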

Link to comment
16 hours ago, mgutt said:

I tend to say no. I mean why should anyone provide access to backups through a network share. This sounds like a very bad idea.

 

OK, that would make sense, except the log file is the *only* file in the backups that I cannot open. All the files it backs up I can open from a Windows PC using SMB and my normal login credentials.

Link to comment
On 1/7/2024 at 12:55 AM, Stupot said:

 

OK, that would make sense, except the log file is the *only* file in the backups that I cannot open. All the files it backs up I can open from a Windows PC using SMB and my normal login credentials.

I also have remote access to my backups (read-only) from my home LAN. I saw the same problem some time ago: I think it's because the owner of the log file is set to "root" rather than the default "nobody"!

 

Happy New Year.

Fred.

Link to comment
