mgutt (Author) Posted January 25, 2022

On 1/18/2022 at 11:51 PM, schwarzzrawhcs said:
"I did that by restricting the maximum SMB protocol to SMB1. But now I can't connect it as a remote share in Unraid."

Try mounting it manually with a fixed protocol version. Example:

mkdir /mnt/remotes/syno
mount -t cifs -o "username=MyUserName,password=myPassword,vers=1.0" //DISKSTATION/Sharename /mnt/remotes/syno
MX-Hero Posted January 26, 2022

On 12/30/2021 at 8:27 PM, mgutt said:
"Ok, I solved it as follows:

# stop docker containers that use the source path
host_path="$src_path"
if [[ $( uname -r ) == *"Unraid"* ]]; then
  host_path=${src_path/\/mnt\/$( echo "$src_path" | grep -oP "(?<=/mnt/).*(?=/)")//mnt/.*}
fi
for container_id in $(echo "$docker_mounts" | grep -oP ".*?(?=:$host_path)" | uniq); do
  container_name=$(docker container inspect -f '{{ .Name }}' "$container_id")
  container_name=${container_name//\/}
  echo "Stop container $container_name as it uses $host_path"
  #docker stop "$container_id"
done

Now I need to expand this to support SSH sources as well."

I want to try this because I get failed backups when I back up my appdata folder. Is it possible to start the stopped containers after the backup has finished?
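Until autostart is built into the script, the stop logic above can be wrapped so that exactly the containers it stopped are started again once the backup finishes. A minimal sketch, not mgutt's implementation: the function names are made up, `run_backup` is a hypothetical placeholder for the actual rsync job, and `DOCKER` is overridable only so the logic can be exercised without a Docker daemon.

```shell
#!/bin/bash
# Sketch: remember which containers we stop, then restart only those.
# DOCKER is a hypothetical override hook; it defaults to the real CLI.
DOCKER="${DOCKER:-docker}"

stop_running_containers() {
    # list only running containers (no -a), so we never later start
    # a container the user had stopped on purpose
    stopped_containers=$($DOCKER container list -q)
    for id in $stopped_containers; do
        $DOCKER stop "$id"
    done
}

start_stopped_containers() {
    # restart exactly the containers recorded above
    for id in $stopped_containers; do
        $DOCKER start "$id"
    done
}

# intended usage:
# stop_running_containers
# run_backup              # hypothetical placeholder for the backup job
# start_stopped_containers
```

The key design point is recording the running set before stopping anything; blindly starting all containers afterwards (for example via `docker container list -qa`) would also start containers that were deliberately stopped beforehand.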
mgutt (Author) Posted January 26, 2022

11 minutes ago, MX-Hero said:
"Is it possible to start the stopped containers after the backup has finished?"

It will autostart in the next version, which will be released in a few days.
bclinton Posted January 26, 2022

18 minutes ago, MX-Hero said:
"I want to try this because I get failed backups when I back up my appdata folder. Is it possible to start the stopped containers after the backup has finished?"

The way I handle my appdata folder, along with Plex, is to back it up once a week to a backup share containing the appdata backup folders. I do the same with other folders that are not really live-backup friendly; that way the script backs up a nice clean tar file.
Archonw Posted February 6, 2022

I tried your script, but I am getting this error:

Could not obtain last backup! (/mnt/user/appdata) Error: ()!

My paths:

# backup source to destination
backup_jobs=(
  # source               # destination
  "/mnt/user/appdata"    "/mnt/user/backup/unraid/appdata"
)

I changed the destination to /mnt/disk/backup/ and this worked. Now it also works with the user share path.
toasti Posted February 16, 2022

Thanks for the latest version, but as I understand it, the existing "backup chain" should still be used by the new version. Is that correct? So far I used version 0.6; as a test I updated one backup job to the new script, and now it looks like the backup chain is being re-created (like a new initial backup), which ate a lot of additional space. If I change every job to the new script, my backup disk will be full before all the jobs finish. Am I doing something wrong? Please let me know. Many thanks!
Archonw Posted February 16, 2022

Hello, maybe someone else could use this. I want to stop all my Docker containers while the script is running and restart them afterwards. I used these commands:

# at the top of the script (Settings section):
docker container stop $(docker container list -qa)

# at the end of the script:
docker container start $(docker container list -qa)

Now the data from my database container is also consistent in the backup.
mgutt (Author) Posted February 17, 2022

On 1/2/2022 at 5:36 PM, mgutt said:
"The old backups will be preserved, but on the next backup hardlinking (space saving) does not work. If you want that, you need to move the files of your most recent backup into the upper dir as follows:

mv /mnt/diskX/backup/daten/20220102_044001/daten/* /mnt/diskX/backup/daten/20220102_044001"

@toasti See above. You need to move the last backup one directory up so that the new version finds it.
toasti Posted February 17, 2022

10 hours ago, mgutt said:
"@toasti See above. You need to move the last backup one directory up so that the new version finds it."

OK, thanks, but isn't it possible to change the script instead? I don't understand why it is not 100% compatible with the old version. Was that not possible?
mgutt (Author) Posted February 17, 2022

1 hour ago, toasti said:
"OK, thanks, but isn't it possible to change the script instead?"

Maybe users were confused by the additional subdirectory with the same name as the target dir. I'll check whether the script could rename the last backup automatically to improve backwards compatibility.
toasti Posted February 17, 2022

Yes, that's the case, but it wasn't something that bothered me; I just accepted it. What is more confusing, and not ideal, is that we have to tweak the backups when upgrading to the new version. I will try the "fix" now and hope it will work. But what about the old backup states in the chain? Will the cleanup still work? I mean, the old backups before the last one are not moved to the new place.

Edit: I moved the latest backup as suggested and then wanted to check whether the script does the cleanup correctly, but the log is not complete. I back up two folders in this job, and I can only see the details of the second folder's backup, for example which backup states were deleted and which ones were preserved.
LyDjane Posted March 1, 2022

Hello all, thank you for the script @mgutt! I have a question: during the backup, a folder error keeps coming up. It is a Docker container in appdata. The following error message appears:

Error: >f+++++++++ speedtest/www/vendor/symfony/http-kernel/Exception/PreconditionFailedHttpException.php
>f+++++++++ speedtest/www/vendor/symfony/process/Exception/ProcessFailedException.php
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1330) [sender 3.2.3] (23)!

Is it possible to exclude certain folders from the backup? I could do without this Docker container in the backup.
epilot5280 Posted March 2, 2022

Hello, I've been using the script for a while now and am finally getting around to setting up the excludes. What would be the best way to delete the directories I want to exclude from all of the previous backups? Would an rm -rf with a wildcard at the date-directory level do it?
mgutt (Author) Posted March 3, 2022

16 hours ago, epilot5280 said:
"Would an rm -rf with a wildcard at the date-directory level do it?"

Yes, if the subdir is on the same level in each backup. But if you want to delete every dir named /temp at different levels, this won't work. In that case I would use find, as mentioned here: https://stackoverflow.com/a/13032747/318765 Instead of -f, use -i to test the command (-i means "interactive": you need to confirm each deletion).
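For illustration, here is a throwaway-directory sketch of that find approach. The backup folder names below are made up; on real backups, replace rm -r with rm -ri (or run with -exec echo first) to confirm each match, as mgutt suggests.

```shell
# Build a fake backup tree, then delete every dir named "temp" at any depth.
backup_root=$(mktemp -d)
mkdir -p "$backup_root/20220101_044001/appdata/temp" \
         "$backup_root/20220201_044001/nested/deeper/temp"

# -prune stops find from descending into a directory it is about to delete
find "$backup_root" -type d -name "temp" -prune -exec rm -r {} +

find "$backup_root" -type d -name "temp"   # prints nothing: all gone
```

Unlike a wildcard at one fixed level, this catches the target folder wherever it appears in each backup snapshot.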
LyDjane Posted March 3, 2022

On 3/1/2022 at 11:31 AM, LyDjane said:
"Is it possible to exclude certain folders from the backup? I could do without this Docker container in the backup."

Can nobody help here?
bclinton Posted March 3, 2022

47 minutes ago, LyDjane said:
"Can nobody help here?"

The way I back up the appdata folder is to use the backup/restore appdata plugin and store the result in a backup share, along with the Plex backup and various other backup files. I then back up the backup share, instead of the appdata folder, with the rsync script.
LyDjane Posted March 4, 2022

On 3/3/2022 at 1:38 PM, bclinton said:
"I then back up the backup share, instead of the appdata folder, with the rsync script."

That's what I do as well. I would also like to have full and incremental backups running.
mgutt (Author) Posted March 5, 2022

On 3/3/2022 at 1:38 PM, bclinton said:
"I then back up the backup share, instead of the appdata folder, with the rsync script."

The appdata backup plugin creates tar files which differ on every backup, so you don't benefit from hardlinks?!
bclinton Posted March 5, 2022

28 minutes ago, mgutt said:
"The appdata backup plugin creates tar files which differ on every backup, so you don't benefit from hardlinks?!"

Yep, but mine are only about 5 GB and I have room to spare on my backup drive.
bclinton Posted March 5, 2022

16 hours ago, LyDjane said:
"That's what I do as well. I would also like to have full and incremental backups running."

Do you stop the Docker containers before backing up the folders?
LyDjane Posted March 6, 2022

23 hours ago, bclinton said:
"Yep, but mine are only about 5 GB and I have room to spare on my backup drive."

With PhotoPrism and other Docker containers, my appdata is around 400 GB. So it would be nice if rsync could exclude folders.
mgutt (Author) Posted March 6, 2022

1 hour ago, LyDjane said:
"So it would be nice if rsync could exclude folders."

Simply add an additional exclude to the options.
LyDjane Posted March 6, 2022

7 hours ago, mgutt said:
"Simply add an additional exclude to the options."

Could you give me an example?
mgutt (Author) Posted March 6, 2022

Copy this line:

--exclude="Cache/"

and, as an example, modify it as follows:

--exclude="dir/this-subdir-will-be-ignored/"
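To answer the multiple-folders case: the excludes stay in the script's rsync options (not in the source path), one --exclude per folder, with each pattern relative to the backup source. A hypothetical sketch, assuming the options are collected in a shell array before being passed to rsync; the folder names are examples only:

```shell
# Hypothetical options array: one --exclude entry per folder to skip.
# Patterns are relative to the backup source (/mnt/user/appdata here),
# so "plex/" means /mnt/user/appdata/plex, not an absolute path.
rsync_options=(
    --exclude="plex/"
    --exclude="photoprism/storage/cache/"
    --exclude="Cache/"
)

# These would then be passed through to rsync roughly as:
# rsync -a "${rsync_options[@]}" /mnt/user/appdata/ /path/to/backup/
echo "${rsync_options[@]}"
```

Note that an absolute pattern such as --exclude=/mnt/user/appdata/plex would not match, because rsync resolves exclude patterns against the transfer root, not the filesystem root.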
LyDjane Posted March 7, 2022

On 3/6/2022 at 6:54 PM, mgutt said:
"Copy this line: --exclude="Cache/" and, as an example, modify it as follows: --exclude="dir/this-subdir-will-be-ignored/""

Like this?

"/mnt/user/appdata --exclude=/mnt/user/appdata/plex"

And what if I want to exclude multiple folders? Thanks!!!