mgutt Posted May 25, 2021

@sonic6 Never tested, but it should return an error notification (rsync cannot check in advance whether there is enough free space, if that is your question).
sonic6 Posted May 29, 2021

Next question: is there a way to exclude a folder? I want to back up a whole share to an UD. Inside that share is the ".Recycle.Bin" folder, which I don't want to back up, but everything else.
Symon Posted June 13, 2021 (edited)

Thanks for this script and your effort! My first idea was to use this script to create a backup in a folder which is synchronized with gdrive (with the rclone script for Plex), so I would have an online backup in case anything happens to my server. On second thought this was probably a silly idea, as I can't use the hardlinks with gdrive this way and a recovery would be difficult.

But I think I will use this script to create a monthly backup, which should always be a full backup, and synchronize this with gdrive. So in a worst-case scenario I would only lose 1 month of data. If I understand correctly, I only need to replace this part of the script:

```shell
if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"
fi
```

with:

```shell
echo "Create full backup ${new_backup}"
# create very first backup
rsync -av --stats "${source_path}" "${backup_path}/.${new_backup}"
```

and create a monthly cron job which executes on the first day of each month. Do you see any issues with this idea?

Script I use to synchronize with GDrive:

Edited June 13, 2021 by Symon
schoppehermann Posted June 15, 2021 (edited)

On 5/29/2021 at 6:51 AM, sonic6 said: is there a way to exclude a folder? I want to back up a whole share to an UD. Inside that share is the ".Recycle.Bin" I don't want to back up, but everything else.

Adding --exclude='.Recycle.Bin' to the rsync line should do it, but better ask @mgutt.

Edited June 15, 2021 by schoppehermann
luk Posted June 21, 2021

@mgutt very good work! I would like to request the implementation of AGE in your script. @ich777 has one which is working with AGE: his backup script: https://forums.unraid.net/applications/core/interface/file/attachment.php?id=117956

The combination of both scripts would be really awesome :-) Thank you and regards, Luk
nicx Posted June 22, 2021

Hey @mgutt, great script, thanks a lot. One question: I am using it "heavily" every 2 hours with notifications enabled. Is it possible to get notifications only when an error occurs? I don't need the "Everything OK" notifications.
nicx Posted June 27, 2021

@mgutt or any other experienced user: any hints on my question, or is it just not possible (yet)?
sonic6 Posted June 28, 2021

@nicx maybe commenting out the notification will be enough?
nicx Posted June 28, 2021

@sonic6 wow, what a great and simple idea... I could have thought of that myself 🙄 Thanks a lot for your hint, that is really enough 👍
JonathanM Posted June 28, 2021

On 6/22/2021 at 2:43 AM, nicx said: hey @mgutt, great script, thanks a lot. One question: I am using it "heavily" every 2 hours with notifications enabled. Is it possible to get only notifications when an Error occurs? I don't need the "Everything OK" notifications

All ok pings are important to verify that you will see an error ping. Otherwise there is no difference between major failure and everything working as designed.
chocorem Posted July 7, 2021

First, thanks for the work! I'm trying to implement the script between my Unraid server and an offsite Synology NAS. The Synology has disks in RAID 5 and the volume is formatted as BTRFS. The Synology is mounted as an SMB share (this is maybe where the mistake is), and every time I start the script (from my Unraid server, to back up to the Synology) it does a full backup. I assume the hardlinks are not working on this setup. What am I doing wrong? Do I need to mount the Synology in another way?
mgutt Posted July 7, 2021

37 minutes ago, chocorem said: The synology is mounted as an SMB share

a) enable unix extensions in the SMB settings of the Synology NAS, or
b) mount Unraid on your Synology NAS and execute the script on the Synology NAS
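Whichever way the share ends up mounted, hardlink support can be verified directly before trusting a backup run to it. A quick sketch of the check; point `dir` at the mounted backup path to test the real target, the temp directory here only demonstrates the idea (assumes GNU `stat`):

```shell
#!/bin/sh
set -eu

# replace this with the mounted backup target to test it for real
dir=$(mktemp -d)

echo probe > "$dir/a"
ln "$dir/a" "$dir/b"    # fails outright if the mount rejects hardlinks

# both names must resolve to the same inode, with a link count of 2
[ "$(stat -c %i "$dir/a")" = "$(stat -c %i "$dir/b")" ] && echo "same inode"
[ "$(stat -c %h "$dir/a")" = "2" ] && echo "hardlinks supported"
```

If the `ln` fails or the inodes differ, every run will degrade into a full copy, which matches the symptom described above.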
chocorem Posted July 8, 2021

13 hours ago, mgutt said: a) enable unix extensions in the SMB settings of the Synology NAS, or b) mount Unraid on your Synology NAS and execute the script on the Synology NAS

I will try it this way. One additional question: is the script transferring all the data, comparing, and then deleting/creating hardlinks? Or is it checking what already exists and only transferring the changed information? The data to back up is very big on my side, with a lot of files, and the backup takes more than 10 hours (when running locally). So remotely it would be much worse if the script transfers everything.
mgutt Posted July 8, 2021

1 hour ago, chocorem said: is it checking what already exists and only transferring the changed information

It compares file size and timestamps before transferring anything.
Meles Meles Posted July 13, 2021

Hey Marc, have you got this script on GitHub? That way (delete as applicable for your own use case):
- The (extra) lazy amongst us could just pull changed versions rather than copy/paste
- The more anal amongst us can more easily validate that we're content with the changes before implementing
- The more helpful amongst us can do PRs and "help" with your script development

Cheers
Badger
mgutt Posted July 13, 2021

7 hours ago, Meles Meles said: Have you got this script on GitHub

My plan is to turn it into a plugin, so it's not a good idea to publish it on GitHub at this time. That will be done once it reaches the plugin state.
CoolTNT Posted July 28, 2021

@mgutt Found an issue when backing up to an NTFS-formatted unassigned disk: since NTFS is not aware of Linux permissions/users/groups, rsync thinks the files are different even when they are the same, because the NTFS permissions on the backed-up files don't match the Linux permissions. To fix this and avoid duplicating all files, I had to add these arguments to rsync:

```shell
# create incremental backup
if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --no-perms --no-owner --no-group --modify-window=5 --progress --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --no-perms --no-owner --no-group --stats "${source_path}" "${backup_path}/.${new_backup}"
fi
```

Perhaps consider adding these arguments to your script? I don't think ownership/permission checks would ever be needed for Unraid backups, since everything should be owned by nobody/users anyway.
mgutt Posted July 28, 2021

7 hours ago, CoolTNT said: I don't think ownership/permissions checks would ever be needed for UnRAID backups, since everything should be owned by nobody/users anyways.

They are important for the appdata share, so don't use NTFS as a target if you like to create backups of this.
NasKaya Posted August 3, 2021

Hey guys, I want to run the backup every Sunday at 03:00. I found this cron "0 3 * * SUN", but it's not working. Am I missing something? Maybe someone can help.
Chris Mot Posted August 4, 2021

Hello, the site https://crontab.guru/ is very useful to verify your parameters. In your case you need to set it to "0 3 * * 0".
mgutt Posted August 4, 2021

3 hours ago, Chris Mot said: The site https://crontab.guru/ is very useful to verify your parameters.

Ironically it accepts "SUN" as a valid parameter as well: https://crontab.guru/#0_3_*_*_SUN ("MON" works, too). But maybe it does not work in Unraid and passing numbers is a requirement here. @NasKaya so please test "0" instead of "SUN".
Squid Posted August 4, 2021

4 hours ago, mgutt said: But maybe it does not work in Unraid

It doesn't
NasKaya Posted August 4, 2021

Indeed, with a number it works. Thanks guys.
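For later readers: the scheduler in Unraid apparently wants the day-of-week field as a number, where 0 means Sunday in standard cron notation. "Every Sunday at 03:00" as a five-field schedule is therefore:

```
# min  hour  day-of-month  month  day-of-week (0 = Sunday)
  0    3     *             *      0
```

crontab.guru accepts name abbreviations like SUN as well, which is why it validated the original attempt even though Unraid rejected it.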
bonbonJaeger Posted August 6, 2021

Hello all, and many thanks to @mgutt for the excellent backup script. Before I switched to Unraid, I used HardlinkBackup (Windows software that uses rsync for backups). Since I don't always want to run a virtual machine for backups, I was happy to find this script here.

I have now split the data to be backed up. The data that no one will miss if I'm gone I back up to an external hard drive using BTRFS. The data that is important to everyone I back up to NTFS, so everyone can plug the disk into a different system.

The backups worked fine in the beginning, but now the backup to the NTFS partition keeps hanging, and afterwards the external disk appears in Unraid with 0 B used and free. When I try to restart Unraid, I get error messages; sometimes I can end the process with kill, but often not even that works. If I then force the system to reboot with "reboot", the parity check is started every time. After the reboot the disk is mounted normally again.

For the backup to the NTFS partition I adapted the script like this:

```shell
rsync -v -rltD --modify-window=10 --stats --progress --itemize-changes --compress --delete --exclude=".rsync" --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/.${new_backup}"
```

Does anyone have any idea what is causing the script to crash?

Many greetings
bonbonJaeger
mgutt Posted August 6, 2021

4 minutes ago, bonbonJaeger said: I get error messages

This means UD has a timeout which stops reading the NTFS disk size if it takes longer than 2 seconds. This is not really a bug, but it shows how slow your disk has become. Maybe too many hardlinks for NTFS?! You should try formatting it as BTRFS; with WinBTRFS you would still be able to access your files through a Windows client if needed. Or of course XFS / ext4. A Unix-based file system should be the better option anyway, as permissions and filename restrictions differ from NTFS.