KluthR Posted April 24, 2023 (Author)
On 4/23/2023 at 12:35 PM, Roddi said: Don't understand the problem as no settings have been changed...
5 hours ago, Anym001 said: but I get other warnings in the log.
Warnings are just warnings, no issue here. But I can change that to info. I need some more time to adjust the beta again; there is another issue.
Anym001 Posted April 24, 2023
1 minute ago, KluthR said: But I can change that to info.
That would be fine. Thank you.
KluthR Posted April 24, 2023 (Author)
On 4/22/2023 at 11:44 PM, MothyTim said: Ok, yes of course, happy to test
Let's go.
10 hours ago, Anym001 said: I have now tried my backup with the current beta version.
Could you update as well and upload a new debug log? I saw an issue in your log and want to confirm it's fixed.
Anym001 Posted April 25, 2023
10 hours ago, KluthR said: 21 hours ago, Anym001 said: I have now tried my backup with the current beta version. Could you update as well and upload a new debug log? I saw an issue in your log and want to confirm it's fixed.
Here is the debug.log from the updated version. Does it make sense to output the following information as info instead of warning?
[25.04.2023 06:54:10][swag][warning] Exclusion "/mnt/cache/appdata/Jellyfin/config/log" matches a container volume - ignoring volume/exclusion pair
[25.04.2023 06:54:11][swag][warning] Exclusion "/mnt/cache/appdata/nextcloud/log" matches a container volume - ignoring volume/exclusion pair
[25.04.2023 06:54:12][swag][warning] Exclusion "/mnt/cache/appdata/vaultwarden/vaultwarden.log" matches a container volume - ignoring volume/exclusion pair
Also, the excludes of the "debian-bullseye" and "PhotoPrism" containers are not considered:
[25.04.2023 06:54:22][Debian-Bullseye][debug] Container got excludes! /mnt/cache/appdata/other/debian-bullseye/
[25.04.2023 06:54:22][Debian-Bullseye][info] Calculated volumes to back up: /mnt/cache/appdata/debian-bullseye, /mnt/cache/appdata/other/debian-bullseye/debian.sh
[25.04.2023 07:07:10][PhotoPrism][debug] Container got excludes! /mnt/cache/appdata/other/PhotoPrism/
[25.04.2023 07:07:10][PhotoPrism][info] Calculated volumes to back up: /mnt/cache/appdata/photoprism/config, /mnt/cache/appdata/other/PhotoPrism/.ppignore
ab.debug.log
KluthR Posted April 25, 2023 (Author)
1 hour ago, Anym001 said: Does it make sense to output the following information as info instead of warning?
IMO that's worth a warning.
1 hour ago, Anym001 said: The excludes of the "debian-bullseye" and "PhotoPrism" containers are not considered
They are not exact matches, so this is working as expected. I don't resolve any path-matching syntax; that's tar's job. Only if a mapping is listed exactly 1:1 in the exclusions do I not even pass it to tar.
So, I assume all current issues are now fixed. @MothyTim Your NC should be backing up now as well.
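A toy sketch (not the plugin's actual code; the paths and variable names are made up) of the rule described above: an exclusion that matches a container volume exactly 1:1 drops that volume/exclusion pair before tar is ever called, while any non-exact path or pattern is simply handed to tar as an `--exclude` argument and resolved by tar itself:

```shell
#!/bin/bash
# Illustration only: exact-match exclusions remove the volume outright;
# everything else becomes a tar --exclude pattern.
volumes=("/mnt/cache/appdata/jellyfin/config" "/mnt/cache/appdata/nextcloud")
exclusions=("/mnt/cache/appdata/jellyfin/config" "/mnt/cache/appdata/nextcloud/log")

to_backup=()
tar_excludes=()

# A volume listed 1:1 in the exclusions is dropped before tar runs.
for v in "${volumes[@]}"; do
    skip=0
    for e in "${exclusions[@]}"; do
        [ "$v" = "$e" ] && skip=1   # exact match: ignore volume/exclusion pair
    done
    [ "$skip" -eq 0 ] && to_backup+=("$v")
done

# Any exclusion that is NOT an exact volume match is passed through to tar.
for e in "${exclusions[@]}"; do
    match=0
    for v in "${volumes[@]}"; do
        [ "$v" = "$e" ] && match=1
    done
    [ "$match" -eq 0 ] && tar_excludes+=(--exclude "$e")
done

echo "Backing up: ${to_backup[*]}"
echo "Passing to tar: ${tar_excludes[*]}"
```

So in this sketch the Jellyfin config volume never reaches tar at all, while the Nextcloud log path is left for tar to match.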
Anym001 Posted April 25, 2023
1 hour ago, KluthR said: They are not exact matches, so this is working as expected. I don't resolve any path-matching syntax; that's tar's job. Only if a mapping is listed exactly 1:1 in the exclusions do I not even pass it to tar.
Many thanks for the information and the prompt adjustments.
MothyTim Posted April 25, 2023
4 hours ago, KluthR said: So, I assume all current issues are now fixed. @MothyTim Your NC should be backing up now as well.
Hi @KluthR, yes, it looks like it's now working well and backing up NC! Log attached for your info. Thanks again for your work on this plugin! Cheers, Tim
ab.debug.log
KluthR Posted November 3, 2023 (Author)
New beta will be out in the next 30 minutes, containing:
- Global pattern exclusion list
- Setting to notify the user if containers were updated during backup
- Perform container updates between the backup's stop and restart of a container
https://forums.unraid.net/topic/137710-plugin-appdatabackup/?do=findComment&comment=1287369
Marty56 Posted December 12, 2023
Hi, how can I specify more than one backup destination?
KluthR Posted December 12, 2023 (Author)
That's not supported, but you could copy the backup with a custom script.
Marty56 Posted December 12, 2023
Thanks for the feedback. Do you already know a potential source for such a custom script?
And another question: if I want to keep a backup version forever, is it OK to just rename the folder to some name which does not match the date format of the normal backup folders, so it is protected against automatic deletion?
KluthR Posted December 15, 2023 (Author)
On 12/12/2023 at 2:28 PM, Marty56 said: Thanks for the feedback. Do you already know a potential source for such a custom script?
Just build yourself a bash script with a cp command and make use of the plugin's custom scripts feature. It calls your script with arguments; one of them is the current backup path.
And yes: renaming a backup folder will make it permanent.
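As a sketch of that suggestion: the destination path below and the assumption that the current backup path arrives as the first argument (`$1`) are mine, not the plugin's documented interface, so check the plugin's custom-scripts help for the real argument order.

```shell
#!/bin/bash
# Hypothetical post-backup script for the plugin's custom-scripts feature.
# Assumption: the plugin passes the current backup path as $1.

copy_backup() {
    local backup_path="$1"   # e.g. /mnt/user/backups/ab_20231215_040000
    local dest="$2"          # second destination, e.g. a remote SMB mount
    [ -d "$backup_path" ] || return 1   # nothing to do if the path is missing
    mkdir -p "$dest"
    cp -a "$backup_path" "$dest/"       # preserve attributes and timestamps
}

if [ -n "$1" ]; then
    copy_backup "$1" "/mnt/remotes/nas/appdata-mirror"
fi
```

For large backups, swapping `cp -a` for `rsync -a` would avoid recopying unchanged archives on repeated runs.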
BlueBull Posted February 13
I have been using the beta for a while with no issues at all. It seems quite stable. I've even done a few restores already, and that worked too. It took a while to figure out, but I was even able to migrate my config file from stable -> beta.
I have a feature request, but it's more of a nice-to-have than a must-have. I hope you don't mind me posting it here. Would it be possible to integrate a faster way to do a manual backup of one or a few containers? At the moment, to do a manual backup of, for example, a single container, we have to take a screenshot of the current "Skip?" settings, put all containers except the one we want to back up on "Skip? - Yes", do the manual backup, and then use the screenshot (or recreate the settings by heart) to put the skip settings back to how they were before.
It would be very cool if there was, for instance, a checkbox next to the container names somewhere, coupled to the manual backup button, to quickly tell it which containers you would like to manually back up. At a later stage, such a checkbox could even be expanded into something like a batch-edit functionality, so you could batch-change backup settings for multiple containers at once, including the skip settings. But just the manual backup thing would already be very cool.
KluthR Posted February 13 (Author)
The feature was already requested (https://github.com/Commifreak/unraid-appdata.backup/issues/12), but I didn't see a quick way of implementing it. A quick hack of the "Skip?" settings seems a great way! I would prefer a new internal flag to adjust the logging to that single-backup wish, but yes, this should be "easily" implementable.
BlueBull Posted February 13
3 hours ago, KluthR said: The feature was already requested (https://github.com/Commifreak/unraid-appdata.backup/issues/12), but I didn't see a quick way of implementing it.
Oh, I wasn't aware there was a GitHub repository for this plugin. I should have known, though, since I now realize there's a raw.githubusercontent.com link to the PLG in the original post, and I know that can be used to find the underlying repository. Apologies for the double request.
Lol, yes, I know that in development, things that seem easy to implement on the surface and/or to a layman rarely actually are. If it ever materializes, great; if not, I'm still just as grateful for the enormous amount of effort you put in to develop this and share it with us. If there's any way I could help with this request or with something else (providing logs, testing, feedback, ...), don't hesitate to let me know; I'll gladly do so.
Artiom97es Posted April 25
No idea why, or what I'm doing wrong.
KluthR Posted April 25 (Author)
What about posting some more details? At least the entered source?
Artiom97es Posted April 25
32 minutes ago, KluthR said: What about posting some more details? At least the entered source?
No idea what that is; I'm new to this.
Kilrah Posted April 25
1) You don't want to put your backup destination inside the appdata share itself.
2) Is that folder actually on the cache drive? What's your share configuration for the appdata share?
Artiom97es Posted April 25
14 minutes ago, Kilrah said: 1) You don't want to put your backup destination inside the appdata share itself. 2) Is that folder actually on the cache drive? What's your share configuration for the appdata share?
I made a backup, and now I've created a "cache" as main storage and want to restore everything. Is it because the restore folder is in appdata? Do I need to move it out of there?
SlingerAJ Posted May 25
Using CA Backup, modified to minimise Plex downtime
I've been using the CA Backup beta plugin for some time now and am very happy with the results. Like many people, I have a large ../appdata/binhex-plex folder (200G / 240,000 files, including lots of small metadata ones), and the backup process for it was taking Plex offline for an hour a day (either with "stop all" or stopping individually, as the other containers are fairly trivial), which I wanted to minimise. The solution I've implemented works well and takes Plex offline for only 14 minutes now. Here's how it works.
Settings / process (total runtime 42 mins, docker downtime 16 mins):
1. Stop all containers, back up, start all (feels safer, as the dockers all cross-talk).
2. Use Compression "Yes, multicore" with all cores.
3. binhex-plexpass "Skip backup" = Yes.
4. Post-backup script "/mnt/user/admin/copyplex.sh" (runs with the dockers stopped), runtime 10 mins. This creates an rsync mirror of the "appdata/binhex-plexpass" folder (on NVMe) to "../admin/binhex-plexpass-copy" (on an SSD share "admin"). It takes 10 minutes (after the initial run), as it's just updating changed files.
5. Pre-run script "/mnt/user/admin/tarplex.sh" (runs with the dockers running), runtime 25 mins. This actually runs first, while Plex is running, and creates a "zstdmt" backup of binhex-plexpass-copy in the day's backup folder (/mnt/user/backups/20240522_0400, say), which is a cache-plus-array share managed by mover tuner to send the older backups to the array.
It would be more logical to run 5 before 4, but the "Post-run script" seems to execute before the dockers start, so it would add to the downtime for Plex. Hence I run it first, and it backs up yesterday's copy of Plex created in #4. Plex can also be recovered from the /mnt/user/admin/binhex-plexpass-copy folder.
How can I make this even faster?
- Would it be faster to create the rsync copy of Plex (step 4) on the same NVMe drive (there's plenty of space), or would the increased disk IO on that drive make it slower? I'm also concerned about shortening the life of the drive with all this reading/writing.
- Anyone know why the "Post-run script" doesn't run after all the dockers are restarted?
- It runs at 4 am, so 14 mins downtime for Plex is OK, but my users are friends/family all over the globe. As the database grows, I'd like to minimise the downtime, make restoring Plex as fast as possible and, given I have the space, avoid needing to restore all the metadata from the internet. Anyone have a similar (or better) solution?
Aidan
Naidu Posted Friday at 07:49 PM
Hello, are there any plans to add snapshot-based backups instead of the traditional "zip everything" approach? Thanks