enabler Posted November 5, 2022
On 11/1/2022 at 11:33 PM, mgutt said: v1.5 released - fixed hardlink test
Thank you @mgutt. The script is running fine so far.
mgutt Posted November 7, 2022
I found a strange bug, which seems to be caused by Unraid's virtual filesystem. It shows up if you use /mnt/user/... as the destination path and as long as the last backup is still located on the cache. It does not happen if the last backup has been fully moved to the array, or if /mnt/user/... is not used as the destination path.

The effect of this bug: usually the timestamps of directories are correctly taken from the source. In this example, the directories were created in 2016. But if the last backup is on the cache and I pass it through /mnt/user as the --link-dest parameter of rsync, the timestamps are overwritten with the current time. AND: strangely, it also creates all dirs on the array with the correct timestamps. So they were created twice by a single rsync command?!

These are the commands I used to induce the bug:

rsync --archive /mnt/remotes/MARC-PC/Users/Marc/Downloads/ /mnt/cache/test
rsync --archive --link-dest=/mnt/user/test /mnt/remotes/MARC-PC/Users/Marc/Downloads/ /mnt/user/test10

It does not happen if I use /mnt/cache as the link-dest:

rsync --archive --link-dest=/mnt/cache/test /mnt/remotes/MARC-PC/Users/Marc/Downloads/ /mnt/user/test17

Or if I use /mnt/cache as the destination:

rsync --archive --link-dest=/mnt/user/test /mnt/remotes/MARC-PC/Users/Marc/Downloads/backup-2.6.2020_04-07-17_bindner/homedir/mail/iontophoresis-device.com/versand/ /mnt/cache/test19

So it only happens if both paths are set to /mnt/user/..., which is the default behaviour if the destination path has been set to /mnt/user/... At the moment I'm not sure if the script should disallow the usage of /mnt/user/... as the destination path 🤔
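For anyone wanting to check whether an existing backup was hit by this, directory timestamps of source and backup can be compared with stat. A minimal sketch (the helper name is made up for illustration; adjust the paths to your own source and backup):

```bash
#!/bin/bash
# Print every directory whose modification time differs between a
# source tree and its backup (a symptom of the /mnt/user link-dest bug).
compare_dir_mtimes() {
  local src=$1 dst=$2 dir rel
  while IFS= read -r -d '' dir; do
    rel=${dir#"$src"}
    [ -d "$dst$rel" ] || continue
    if [ "$(stat -c %Y "$dir")" != "$(stat -c %Y "$dst$rel")" ]; then
      echo "mtime mismatch: ${rel:-/}"
    fi
  done < <(find "$src" -type d -print0)
}

# Example (paths from the commands above):
# compare_dir_mtimes /mnt/remotes/MARC-PC/Users/Marc/Downloads /mnt/user/test10
```

An empty output means source and backup directory timestamps agree.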
zer0.de Posted November 9, 2022
Hi, I'll just write in German, that shouldn't be a problem? So I grabbed your script and wanted to extend it with a few personal things: backing up over LAN / externally to a NAS, with notifications via a Telegram bot. Unfortunately, in --dry-run I keep getting the following errors, which don't help me any further:

Waiting for it to boot... Online.
Error: ()!
Error: ()!
Error: ()!
Error: ()!
Error: ()!

That should be somewhere between lines 242 and 264. The folders do exist, but are of course empty. The script itself didn't create the folders, but that's probably due to the dry-run - I created them by hand and still get the errors above. Script attached, maybe you can incorporate it? Or tell me where I made a mistake?

rsync Incremental Backup.sh
mgutt Posted November 9, 2022
On 10/18/2022 at 3:01 PM, afl said: i must specify the specific identity file to ssh which is why i added the ssh alias:
The script uses the commands rsync (over ssh) and ssh. You need the identity file for both.
mgutt Posted November 9, 2022
2 hours ago, zer0.de said: Unfortunately, in --dry-run I keep getting the following errors, which don't help me any further ...
My guess is that you are passing the identity file incorrectly:

alias rsync='rsync -e ssh -i /root/.ssh/remote-rsync.key'

This way "-i" becomes a parameter of rsync. You need to put the ssh command in quotes:

alias rsync='rsync -e "ssh -i /root/.ssh/remote-rsync.key"'

And instead of wrapping the entire code in an if and using the dreaded eval, you should simply do this:

echo -n "Waiting for it to boot..."
sleep 1m
if ! ping -t 3 -c 1 $BACKUP_IP >/dev/null 2>&1; then
  echo "Error: Offline!"
  exit 1
fi
echo "Online."

You could also put the Telegram part directly into the notify function, for example:

notify() {
  echo "$2"
  if [[ -f /usr/local/emhttp/webGui/scripts/notify ]]; then
    /usr/local/emhttp/webGui/scripts/notify -i "$([[ $2 == Error* ]] && echo alert || echo normal)" -s "$1 ($src_path)" -d "$2" -m "$2"
  fi
  if [ "$TELEGRAM" == true ]; then
    curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_ID}/sendMessage?chat_id=${TELEGRAM_CHAT_ID}&text=$1"
  fi
}

That said, it's bad style to write variable names in upper case.
zer0.de Posted November 10, 2022
OK, many thanks for the quick and helpful answer.
HansOtto Posted November 10, 2022
Since backing up dockers works perfectly now, is there also a way to back up the Unraid flash drive? Thanks for the great script, it works very well 👍
mgutt Posted November 10, 2022
9 minutes ago, HansOtto said: Since backing up dockers works perfectly now, is there also a way to back up the unraid flash drive?
Simply use /boot as the source path.
HansOtto Posted November 10, 2022
1 hour ago, mgutt said: Simply use /boot as the source path.
Thanks for the quick answer 👍
DivideBy0 Posted November 29, 2022
Any file integrity checks during transfer? Making sure nothing gets corrupted?
Marc Heyer Posted December 7, 2022
How does v1.5 behave when the appdata folder is not named appdata? My appdata path is "/mnt/gamma/docker". I ran the script with this as my appdata path, and I think it ran through despite giving the error that "/mnt/*/appdata" could not be accessed. From the log:

ls: cannot access '/mnt/*/appdata': No such file or directory
Created snapshot of /mnt/gamma/docker to /mnt/gamma/.docker_snapshot
Start containers (fast method):

I think it worked anyway; the files are there and the containers started. Is it safe this way or should I edit something?
mgutt Posted December 11, 2022
On 12/7/2022 at 2:26 PM, Marc Heyer said: I think it worked anyway, the files are there and the containers started. Is it safe this way or should I edit something?
Yes, it is safe. I simply forgot to re-use the already obtained appdata path for an additional check. I will fix this in the next version. Ignore it until then.
mgutt Posted December 11, 2022
On 11/29/2022 at 10:52 PM, johnwhicker said: Any file integrity checks during transfer? Making sure nothing gets corrupted?
I would suggest:
- create the script twice with the same paths / settings, except that the 2nd one additionally uses the rsync_options "--checksum"
- once per month, execute the 2nd script directly after the 1st and check the logs. Every logged file update in the 2nd script's logs is a possible bit flip.
This should work for media collections, but it won't work for appdata files, as those change directly after a docker container has been restarted.
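As a sketch of that monthly check (the function name and paths below are illustrative, not part of the script): an rsync --dry-run with --checksum and --itemize-changes lists every file whose content differs from the backup even when size and mtime match, and each such line is a candidate bit flip:

```bash
#!/bin/bash
# Dry-run verification pass: nothing is written, but rsync re-reads and
# checksums every file, so silently changed content shows up in the output.
verify_backup() {
  local src=$1 dst=$2
  rsync --archive --checksum --dry-run --itemize-changes "$src/" "$dst/"
}

# Example (illustrative paths):
# verify_backup /mnt/user/Music /mnt/disk3/Backup/Music/20221211_040000
```

Files listed with a `c` in the itemize column changed in content only, which is exactly the case a normal (size/mtime-based) rsync run would miss.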
Marc Heyer Posted December 16, 2022
Top, thank you very much. Another small thing I noticed is that the script is not able to start some of my containers when they are in the network of another container (I have a container with built-in VPN, and 4 other containers are in the VPN network and route their traffic through it). Is this something that I can fix or is this a limitation of the script? The CA Backup and Restore plugin starts these containers fine.

4 times this message:
Error response from daemon: cannot join network of a non running container: 0a846d99759136a6fa326f96a37f84c3d02d76eda7a1b5eb2bba8d0e1e0baef4
and the last line:
Error: failed to start containers: 19ad2717be4d, 3d6775704304, 9288c9002b3e, edd9b4ad4544
UnKwicks Posted January 11, 2023
On 10/30/2022 at 11:53 PM, mgutt said: 3.) If the source path is set to /mnt/cache/appdata or /mnt/diskX/appdata, the script will create a snapshot to /mnt/*/.appdata_snapshot before creating the backup. This reduces docker container downtime to several seconds (!).
Thanks for your awesome script @mgutt, which I am currently testing. Just running my first backup. Since I added "/mnt/user/appdata" as the source path, I thought my docker downtime would be only a few seconds, but the containers were down for as long as it took to copy all appdata files. Do I have to use "/mnt/cache/appdata" instead of "/mnt/user/appdata" to make the snapshot feature work? What happens when I run the script again on the same day after it finished the first run? Thank you very much!!
mgutt Posted January 12, 2023
4 hours ago, UnKwicks said: Do I have to use "/mnt/cache/appdata" instead of "/mnt/user/appdata" to make the snapshot feature work?
Yes. Check the script logs.
4 hours ago, UnKwicks said: What happens when I run the script again on the same day after it finished the first run?
It creates an additional backup, but already existing files aren't copied again. Instead, it creates hardlinks to those files, so the second backup finishes much faster than the first one.
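The hardlink behaviour can be illustrated outside the script. A tiny demo (temporary paths only): after `ln`, both names refer to the same inode, so the second "backup" of an unchanged file costs no additional space — essentially what rsync's --link-dest does per unchanged file:

```bash
#!/bin/bash
# Create a file, hardlink it as a second "backup", and show that both
# names share one inode with a link count of 2.
tmp=$(mktemp -d)
echo "unchanged file" > "$tmp/backup1_file"
ln "$tmp/backup1_file" "$tmp/backup2_file"
stat -c 'name=%n links=%h inode=%i' "$tmp/backup1_file" "$tmp/backup2_file"
rm -rf "$tmp"
```

Deleting one of the two names only decrements the link count; the data is freed when the last name is removed, which is why old incremental backups can be deleted independently of each other.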
UnKwicks Posted January 12, 2023
21 hours ago, mgutt said: Yes. Check the script logs.
Thanks, it's working fine now. Donation incoming 🥳 Would it be possible to copy the appdata folder to a second target in the same run? This would avoid a second downtime from a second script.
mgutt Posted January 12, 2023
3 minutes ago, UnKwicks said: Would it be possible to copy the appdata folder to a second target in the same run?
Sounds like a very special scenario to me. Isn't it possible to create a second script and use the first destination as the source for the second destination?
UnKwicks Posted January 12, 2023
5 minutes ago, mgutt said: Isn't it possible to create a second script and use the first destination as the source for the second destination?
The target of the main script is an external drive which gets unmounted when the script is done. I do a Duplicati backup to a cloud storage as well, so in the past I used CA Appdata Backup to back up to an Unraid share and upload from there to the cloud. But this means a second (even longer) downtime as well. This is why it would be great to have a second copy on my share for the upload via Duplicati.
mgutt Posted January 12, 2023
40 minutes ago, UnKwicks said: This is why it would be great to have a second copy on my share for the upload via Duplicati.
Sounds like you should do your first backup to a local share and then use this as the source for both your cloud and USB destinations?!
UnKwicks Posted January 13, 2023
17 hours ago, mgutt said: first backup to a local share and then use this as your source
I guess I have to think a bit about the best process. If I do a backup to a local share first and use that as the source, I might end up with doubled incremental backups, because the script copying to the external disk does incrementals as well. Also, the folder name (date) changes every time, so Duplicati might do a full backup every time. Not that ideal... Not sure how to get around this. If it were possible to copy the appdata to a second destination while the main script is running, that might be the cleanest solution.
mgutt Posted January 13, 2023
I would say:
- create the backup on a local share like /mnt/disk3/Backup/appdata, and at the end of the script:
- obtain the most recent backup path with

last_backup=$(ls -t /mnt/disk3/Backup/appdata/ | head -n 1)

and create/update a symlink:

ln -sfn /mnt/disk3/Backup/appdata/$last_backup /mnt/disk3/Duplicati/appdata

Then Duplicati uses /mnt/disk3/Duplicati as its source path. And as the appdata subfolder automatically points to the most recent backup, it should not do full copies.
UnKwicks Posted January 13, 2023
1 hour ago, mgutt said: - create backup to local share like /mnt/disk3/Backup/appdata and at the end of the script: - obtain most recent backup path with last_backup=$(ls -t /mnt/disk3/Backup/appdata/ | head -n 1) and create/update symlink ln -sfn /mnt/disk3/Backup/appdata/$last_backup /mnt/disk3/Duplicati/appdata
Thanks, this helps me a lot. I am running a test now with a new script:

backup_jobs=(
  # source              # destination
  "/mnt/cache/appdata"  "/mnt/user/Backups/Unraid/appdata"
  "/boot"               "/mnt/user/Backups/Unraid/flash"
)

I set all the "keep backup" settings in the script to 1 and create symlinks at the end of the script:

# Get last appdata backup
lastAppdataBackup=$(ls -t /mnt/user/Backups/Unraid/appdata/ | head -n 1)
echo "Last appdata backup: $lastAppdataBackup"
# Get last flash backup
lastFlashBackup=$(ls -t /mnt/user/Backups/Unraid/flash/ | head -n 1)
echo "Last flash backup: $lastFlashBackup"
# create symlink to last appdata backup
ln -sfn /mnt/user/Backups/Unraid/appdata/$lastAppdataBackup /mnt/user/Backups/Unraid/appdata/last
echo "Symlink to last appdata backup created"
# create symlink to last flash backup
ln -sfn /mnt/user/Backups/Unraid/flash/$lastFlashBackup /mnt/user/Backups/Unraid/flash/last
echo "Symlink to last flash backup created"

/mnt/user/Backups/Unraid/appdata/last and /mnt/user/Backups/Unraid/flash/last are then the sources I will use for my main backup script to the external drive as well as for the Duplicati backup. If I've thought this through correctly, it should work this way without any redundancy. Let's see.
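One caveat with this kind of setup (a hedged sketch; the helper name is made up): since the `last` symlink lives inside the directory being listed, `ls -t | head -n 1` can return `last` itself once the symlink becomes the newest entry, producing a self-referencing link. Filtering it out avoids that:

```bash
#!/bin/bash
# Point a "last" symlink at the newest backup directory, explicitly
# excluding the symlink itself from the candidates.
update_last_symlink() {
  local backup_dir=$1 newest
  newest=$(ls -t "$backup_dir" | grep -v '^last$' | head -n 1)
  [ -n "$newest" ] || return 1
  ln -sfn "$backup_dir/$newest" "$backup_dir/last"
}

# Example (illustrative path):
# update_last_symlink /mnt/user/Backups/Unraid/appdata
```

Running it repeatedly is then safe: the link keeps pointing at the newest real backup folder.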
Death_Monkey Posted January 21, 2023
I've split up my file system a bit and have an SSD holding all Docker / VM / private (backup-worthy) files. Everything else on Unraid remains protected only by 2 parities; it's "only" media. So I'd like to back up more or less the complete disk1 to a Synology DS on my home network. Does that work with the script, including waking the DS via WOL? I have the DS118 purely for this purpose, and because of the parities I'd probably only back up every few days, since things don't change that often. Or should I run a script to wake it up, wait a bit until it has booted, and then run this script afterwards?
mgutt Posted January 21, 2023
4 hours ago, Death_Monkey said: Does that work with the script, including waking the DS via WOL?
The German discussion continues here.