rsync Incremental Backup



Hello, I'm using the script to back up my Docker appdata with great success. It's very nice to have everything in one place. But I'm still having trouble: some of my containers don't get started when the snapshot is taken.

What does ("echo "Start containers (fast method):") fast method mean? How can i try a other method of starting my containers? Maybe this could solve my problem. I looked up the docker command documentation but i only found "docker start container". And this method is used by the script.

Greetings

Marc

Link to comment
14 minutes ago, Marc Heyer said:

Hello, I'm using the script to back up my Docker appdata with great success. It's very nice to have everything in one place. But I'm still having trouble: some of my containers don't get started when the snapshot is taken.

What does ("echo "Start containers (fast method):") fast method mean? How can i try a other method of starting my containers? Maybe this could solve my problem. I looked up the docker command documentation but i only found "docker start container". And this method is used by the script.

Greetings

Marc

Is autostart set for all containers on the Docker tab in Unraid?

Link to comment
2 hours ago, Marc Heyer said:

some of my containers don't get started when the snapshot is taken.

"Slow" means that the script was not able to create a snapshot. This happens if the user uses /mnt/user as the source path.

 

It does not influence how containers are stopped / started. In fact, it's only docker start containername. So if this fails, something else must be the cause. Check Tools > Syslog to find error messages from the Docker service.
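
For example, a quick way to pull recent Docker-related entries out of the syslog from a terminal (assuming the default Unraid log location):

# show the 20 most recent docker-related syslog entries
grep -i docker /var/log/syslog | tail -n 20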

 

Quote

Is autostart set for all containers

This is not relevant. My script stops and starts only the currently running containers.
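
For illustration, stopping and restarting only the currently running containers could look roughly like this (a sketch, not the script's exact code; the running variable is hypothetical):

# remember which containers are currently running
running=$(docker ps --format '{{.Names}}')

# stop them before the backup
for name in $running; do docker stop "$name"; done

# ... create the snapshot / run the backup here ...

# start only the containers that were running before
for name in $running; do docker start "$name"; done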

Link to comment
2 hours ago, mgutt said:

"Slow" means that the script was not able to create a snapshot. This happens if the user uses /mnt/user as the source path.

 

It does not influence how containers are stopped / started. In fact, it's only docker start containername. So if this fails, something else must be the cause. Check Tools > Syslog to find error messages from the Docker service.

 

This is not relevant. My script stops and starts only the currently running containers.

Oh okay, thank you. I assumed "fast method" referred to the docker start command. That makes more sense now. I will check the syslog.

Link to comment
  • 1 month later...

Hi, I get this warning:

 

Quote

Backup probably inconsistent! (/mnt/cache/appdata)
Docker appdata files were found in too many locations (cache: prefer, paths: /mnt/cache/appdata /mnt/disk1/appdata /mnt/user/appdata /mnt/user0/appdata)!

 

Is there maybe some problem in my Unraid setup?

 

Link to comment
4 hours ago, Denis77 said:

Is there maybe some problem in my Unraid setup?

Yes. If you use "prefer", the share should be located completely on the cache, but your appdata is spread across the cache and disk1. You should set Docker to "no" in the settings and start the Mover (enable Docker again once it has finished). This should solve your issue.
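
To check whether the Mover got everything, you could compare how much appdata remains in each location (paths taken from the warning above):

# after the Mover finishes, the disk1 copy should be empty or gone
du -sh /mnt/cache/appdata /mnt/disk1/appdata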

Link to comment

This script is so much faster than the CA auto update plugin, and I love that it does incrementals as well... Thanks for your work on this!

 

I'm struggling to figure out how to properly set this up for a remote SSH destination, if anyone could point me in the right direction. I do have my 2 unRAID servers set up with passwordless authentication.

 

I see in your example you have root@tower in your source, but when I try to use that as a destination for mine, I just get error()!

 

I also have my two unRAID servers' shares mounted to each other via SMB with the Unassigned Devices plugin, but when I tried to use that path it wouldn't work, saying it doesn't support hard links, even though it should?

Edited by jmagar
Link to comment
17 minutes ago, jmagar said:

I see in your example you have root@tower in your source, but when I try to use that as a destination for mine, I just get error()!

My script does not return "error()". All my error messages start with "Error:"?!

 

What happens if you try a simple command like:

 

ssh root@tower hostname

 

And no, SMB does not support hardlinks.
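
You can verify this yourself with a quick test on the SMB mount (the mount path below is only an example; Unassigned Devices mounts usually live under /mnt/remotes):

# creating a second hard link to a file fails on SMB mounts
touch /mnt/remotes/tower_share/test.txt
ln /mnt/remotes/tower_share/test.txt /mnt/remotes/tower_share/test-link.txt

The ln command should fail with an "Operation not supported" style error, which is why an SMB mount cannot be used as the destination for hardlink-based backups.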

Link to comment
28 minutes ago, mgutt said:

My script does not return "error()". All my error messages start with "Error:"?!

 

What happens if you try a simple command like:

 

ssh root@tower hostname

 

And no, SMB does not support hardlinks.

Apologies, I was just going off my memory, which isn't that great, haha. If I do ssh root@tower myip, it gives me

 

ssh: connect to host tower port 22: Connection timed out

Link to comment

Hello,
I set up the script quite a long time ago with 2 dedicated disks, and they became full. At that time I allowed the share to use other disks as well.
I've since extended my backup disk (a new, bigger one) and naively began copying from those other disks (inside the Unraid GUI with the Dynamix plugin). I just stopped the copy, because it seems to copy each file as a full new version of all the data and was filling my new drive quite a bit: I had 80 GB to transfer, it had already filled 550 GB on my new disk, and the copy wasn't finished.

 

Keep in mind I'm a total newbie with file transfers, rsync and all that, so how could I:

1) migrate the share to the new disk I want, and

2) "delete" all copies which are just taking space multiples times for the same file.

Thanks a lot in advance!

Edited by xoC
Link to comment
34 minutes ago, xoC said:

and naively began copying (inside the Unraid GUI with the Dynamix plugin)

This probably does not respect hardlinks. Ask the plugin dev for this as a new feature if that's the case.

 

35 minutes ago, xoC said:

"delete" all copies which are just taking space multiples times for the same file.

If you want to move all backups including the storage-friendly hardlinks, you need to use a command which supports them. There are two:

 

Copy

cp --archive /mnt/disk3/sharename/Backups /mnt/disk5/sharename

 

Move

rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename

find /mnt/disk3/sharename/Backups -depth -type d -empty -delete

 

Both create the "Backups" subdir in the destination, but rsync moves the files (the additional find command removes the empty dirs from the source, as rsync removes only transferred files, not directories).

 

Note: If you append " & disown" to the command, it keeps running in the background even if you close the terminal. This can be useful if the transfer takes a long time and you don't want to keep the window open the whole time. Example:

 

rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename & disown
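
To confirm afterwards that the hardlinks survived the transfer, you could compare the size of the whole destination tree with the size of a single backup: du counts each hard-linked file only once, so the totals should be close (the backup folder name below is just an example):

# total of the whole tree: hard-linked files are counted only once
du -sh /mnt/disk5/sharename/Backups
# compare with a single backup folder
du -sh /mnt/disk5/sharename/Backups/20230312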

 

 

Link to comment
On 3/26/2023 at 9:38 PM, mgutt said:

This probably does not respect hardlinks. Ask the plugin dev for this as a new feature if that's the case.

 

If you want to move all backups including the storage-friendly hardlinks, you need to use a command which supports them. There are two:

 

Copy

cp --archive /mnt/disk3/sharename/Backups /mnt/disk5/sharename

 

Move

rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename

find /mnt/disk3/sharename/Backups -depth -type d -empty -delete

 

Both create the "Backups" subdir in the destination, but rsync moves the files (the additional find command removes the empty dirs from the source, as rsync removes only transferred files, not directories).

 

Note: If you append " & disown" to the command, it keeps running in the background even if you close the terminal. This can be useful if the transfer takes a long time and you don't want to keep the window open the whole time. Example:

 

rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename & disown

 

 

 

Awesome, thanks a lot for your answer!


And for the actual files that got duplicated (instead of hardlinked) by my naive copy, is there a "search function" or something like that which could take care of them?

Link to comment

Hello everyone,

First, thanks for the awesome script. It took me some time to get it running smoothly because I'm new to Unraid and Linux, but now it runs like a well-oiled machine :)

But to learn a little more, I would like to know what happens if I accidentally change a file in a backup.

 

Is then only this file corrupted, or even more? For example:

Backup dates

20230312 - 10 files - 0 new

20230313 - 10 files - 0 new - 1 changed afterwards in the backup folder on 20230315

20230314 - ??? files - 1 new

20230315 - ??? files - 1 new

 

Thanks for the information

 

Link to comment
1 hour ago, Tidus1307 said:

Is then only this file corrupted, or even more?

A hardlink is a link to an already existing file. If the file becomes corrupt, it's corrupt in all backups.

 

Depending on the type of corruption, further backups may even still create only hardlinks. This is because rsync's default behavior is to check only the size and timestamp of a file.

 

Conclusion: It could be useful to occasionally create a backup with the "checksum" option. I have already thought about adding this by default every 30 backups. Not sure what others think about that.
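
As a rough sketch of that idea (the counter file and paths below are hypothetical, not part of the script):

# hypothetical counter stored on the flash drive
count_file=/boot/config/incbackup_count
count=$(( $(cat "$count_file" 2>/dev/null || echo 0) + 1 ))
echo "$count" > "$count_file"

# every 30th run, let rsync compare file contents instead of size/mtime
extra=""
if (( count % 30 == 0 )); then
  extra="--checksum"
fi

rsync --archive --hard-links $extra /mnt/cache/appdata/ /mnt/disk3/Backups/current/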

Link to comment

Hey, thanks for the explanation.

2 hours ago, mgutt said:

A hardlink is a link to an already existing file. If the file becomes corrupt, it's corrupt in all backups.

Does it mean that even the existing files in older backups could be corrupted, or only the later ones?

 

2 hours ago, mgutt said:

Conclusion: It could be useful to occasionally create a backup with the "checksum" option. I have already thought about adding this by default every 30 backups. Not sure what others think about that.

That would be really nice to have as an option.

Link to comment
1 hour ago, Tidus1307 said:

Does it mean that even the existing files in older backups could be corrupted, or only the later ones?

There is no "afterwards" file. Any following backup creates only a link to the file that was created by the first backup.

 

Example: If you open file.txt of backup1 and replace a single character while leaving the modification time untouched, you overwrite file.txt in backup2, backup3, backup4..., too, and when backup5 is created... it still creates only a link to file.txt, as rsync checks only the size and modification time of the file.

 

The same happens if you change file.txt of backup3. It will change the file in backup1, backup2, backup4... This is because it is the same file. There is no additional copy available.
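
This shared-inode behavior is easy to demonstrate in a terminal (file names are just examples):

echo "hello" > file.txt
ln file.txt hardlink.txt                # second name for the same inode
stat -c '%h %i' file.txt hardlink.txt   # both show link count 2 and the same inode
echo "changed" > hardlink.txt           # writing through one name...
cat file.txt                            # ...changes the content seen through the other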

 

You can't have both: Using less storage AND having multiple copies of a file.

 

So what can you do against file corruption? Create backups on multiple media, or let the script back up the same source to multiple subdirectories. A friend of mine uses my script and swaps the USB drive every now and then. That way he has two physical copies of all his data.

 

Link to comment
1 hour ago, mgutt said:

There is no "afterwards" file. Any following backup creates only a link to the file that was created by the first backup.

 

Example: If you open file.txt of backup1 and replace a single character while leaving the modification time untouched, you overwrite file.txt in backup2, backup3, backup4..., too, and when backup5 is created... it still creates only a link to file.txt, as rsync checks only the size and modification time of the file.

 

The same happens if you change file.txt of backup3. It will change the file in backup1, backup2, backup4... This is because it is the same file. There is no additional copy available.

Ah okay... now I get it. Thanks for the long explanation.

1 hour ago, mgutt said:

You can't have both: Using less storage AND having multiple copies of a file.

 

So what can you do against file corruption? Create backups on multiple media, or let the script back up the same source to multiple subdirectories. A friend of mine uses my script and swaps the USB drive every now and then. That way he has two physical copies of all his data.

That is totally clear, and not a bad thing. The most important part is not to change files in the backup. I don't want to, but you never know, it could happen accidentally.

 

 

 

Link to comment
