
[Plugin][BETA] Appdata.Backup



On 4/23/2023 at 12:35 PM, Roddi said:

Don't understand the problem as no settings have been changed....

 

5 hours ago, Anym001 said:

but I get other warnings in the log.

Warnings are just warnings, no issue here. But I can change that to info.

 

I need some more time to adjust the beta again, there is another issue.

Link to comment
On 4/22/2023 at 11:44 PM, MothyTim said:

Ok, yes of course, happy to test

Let's go :)

10 hours ago, Anym001 said:

I have now tried my backup with the current beta version.

Could you update as well and upload a new debug log? I saw an issue inside your log and want to confirm it's fixed.

Link to comment
10 hours ago, KluthR said:
21 hours ago, Anym001 said:

I have now tried my backup with the current beta version.

Could you update as well and upload a new debug log? I saw an issue inside your log and want to confirm it's fixed.

 

Here is the debug.log with the updated version.

 

Does it make sense to output the following information as info instead of warning?

[25.04.2023 06:54:10][swag][warning] Exclusion "/mnt/cache/appdata/Jellyfin/config/log" matches a container volume - ignoring volume/exclusion pair
[25.04.2023 06:54:11][swag][warning] Exclusion "/mnt/cache/appdata/nextcloud/log" matches a container volume - ignoring volume/exclusion pair
[25.04.2023 06:54:12][swag][warning] Exclusion "/mnt/cache/appdata/vaultwarden/vaultwarden.log" matches a container volume - ignoring volume/exclusion pair

 

The excludes of "debian-bullseye" and "PhotoPrism" Container are not considered. 

[25.04.2023 06:54:22][Debian-Bullseye][debug] Container got excludes! 
/mnt/cache/appdata/other/debian-bullseye/
[25.04.2023 06:54:22][Debian-Bullseye][info] Calculated volumes to back up: /mnt/cache/appdata/debian-bullseye, /mnt/cache/appdata/other/debian-bullseye/debian.sh
[25.04.2023 07:07:10][PhotoPrism][debug] Container got excludes! 
/mnt/cache/appdata/other/PhotoPrism/
[25.04.2023 07:07:10][PhotoPrism][info] Calculated volumes to back up: /mnt/cache/appdata/photoprism/config, /mnt/cache/appdata/other/PhotoPrism/.ppignore

 

ab.debug.log

Link to comment
1 hour ago, Anym001 said:

Does it make sense to output the following information as info instead of warning?

IMO that's worth a warning.

 

1 hour ago, Anym001 said:

The excludes of "debian-bullseye" and "PhotoPrism" Container are not considered

They are not exact matches, so this is working as expected. I don't resolve any path-matching syntax; that's tar's task. Only if a mapping is listed exactly 1:1 inside the exclusions do I not pass it to tar at all.
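
For illustration, a non-exact exclusion simply ends up as a normal tar exclude option, roughly like this (archive name and the remaining flags are placeholders, not the plugin's actual command line):

# Non-exact exclusion: handed straight to tar, which does the path matching itself.
tar -czf /mnt/user/backups/nextcloud.tar.gz \
  --exclude='/mnt/cache/appdata/nextcloud/log' \
  /mnt/cache/appdata/nextcloud
# An exclusion that matches a container volume mapping exactly 1:1 is different:
# that whole volume is dropped beforehand and never reaches tar at all.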

 

So, I assume all current issues are now fixed. @MothyTim Your Nextcloud should be backing up now as well.

Link to comment
1 hour ago, KluthR said:

They are not exact matches, so this is working as expected. I don't resolve any path-matching syntax; that's tar's task. Only if a mapping is listed exactly 1:1 inside the exclusions do I not pass it to tar at all.

 

Many thanks for the information and the prompt adjustments.

Link to comment
  • 6 months later...

Thanks for the feedback. Do you already know a potential source for such a custom script?

 

And another question.

If I want to keep a backup version forever, is it OK to just rename the folder to a name that does not match the date format of the normal backup folders, so it is protected against automatic deletion?

Link to comment
On 12/12/2023 at 2:28 PM, Marty56 said:

Thanks for the feedback. Do you already know a potential source for such a custom script?

Just build yourself a bash script with a cp command and make use of the plugin's custom-scripts feature. The plugin calls your script with arguments; one of them is the current backup path.
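
A minimal sketch of such a script, assuming the current backup path is passed as the first argument (check the plugin's help text for the exact arguments and their order):

#!/bin/bash
# Hypothetical custom script for the plugin's custom-scripts feature.
# Assumption: the current backup destination arrives as the first argument.
backupPath="$1"

# Copy the finished backup to a second location, e.g. another share.
cp -a "$backupPath" /mnt/user/backup-archive/

Make it executable (chmod +x) and point the plugin's custom-script setting at it.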

 

And yes: renaming a backup folder will make it permanent.
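
For example (the folder names are only illustrative; the point is that the new name no longer matches the date-stamped naming scheme the cleanup looks for):

# Take a date-stamped backup folder out of the automatic retention/deletion:
mv /mnt/user/backups/20231212_1400 /mnt/user/backups/keep_20231212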

Edited by KluthR
Link to comment
  • 1 month later...

I have been using the beta for a while with no issues at all. It seems quite stable. I've even done a few restores already and those worked, too. It took a while to figure out, but I was even able to migrate my config file from stable -> beta.

I have a feature request, but it's more of a nice-to-have than a must-have. I hope you don't mind me posting it here. Would it be possible to integrate a faster way to do a manual backup of one or a few containers? At the moment, to do a manual backup of, say, a single container, we have to take a screenshot of the current "Skip?" settings, set every container except the one we want to back up to "Skip? - Yes", do the manual backup, and then use the screenshot (or recreate the settings from memory) to put the skip settings back to how they were before. It would be very cool if there were, for instance, a checkbox next to the container names somewhere, coupled to the manual backup button, to quickly tell it which containers you would like to manually back up.

At a later stage, such a checkbox could even be expanded into something like batch-edit functionality, so you could change backup settings for multiple containers at once, including the skip settings. But just the manual backup part would already be very cool.

Edited by BlueBull
Link to comment
3 hours ago, KluthR said:

The feature has already been requested (https://github.com/Commifreak/unraid-appdata.backup/issues/12), but I didn't see a quick way to implement it.

 

A quick hack of the settings regarding "Skip?" seems like a great way to do it! I would prefer a new internal flag so the logging can be adapted to that single-backup case, but yes, this should be "easily" implementable.

Oh, I wasn't aware there was a GitHub repository for this plugin. I should have known, though, since I now realize there's a raw.githubusercontent.com link to the PLG in the original post and I know that can be used to find the underlying repository. Apologies for the duplicate request.

Lol, yes, I know that in development, things that seem easy to implement on the surface and/or to a layman rarely actually are. If it ever materializes, great; if not, I'm still just as grateful for the enormous amount of effort you put in to develop this and share it with us. If there's any way I can help with this request or with anything else (providing logs, testing, feedback, ...), don't hesitate to let me know. I'll gladly do so.

Edited by BlueBull
Link to comment
  • 2 months later...
14 minutes ago, Kilrah said:

1) You don't want to put your backup destination into the appdata share itself

2) Is that folder actually on the cache drive? What's your share configuration for the appdata share?

I made a backup, and now I have created a "cache" pool as the main storage and want to restore everything. Is the problem that the restore folder is inside appdata? Do I need to move it out of there?

Link to comment
  • 5 weeks later...

Using CA Backup modified to minimise Plex Downtime

 

I've been using the CA Backup beta plugin for some time now and am very happy with the results. Like many people, I have a large ../appdata/binhex-plex folder (200G / 240,000 files, including lots of small metadata ones), and backing it up was taking Plex offline for an hour a day (whether stopping all containers or stopping them individually, as the other containers are fairly trivial), which I want to minimise.

 

The solution I've implemented works well and takes Plex offline for only 14 minutes now. Here's how it works.

  1. Settings / process - total runtime 42 mins, docker downtime 16 mins
    1. Stop all containers, backup, start all (feels safer as the dockers all cross-talk)
    2. Use Compression "Yes, multicore" using all cores
    3. binhex-plexpass "Skip backup" = Yes
    4. Post-backup script "/mnt/user/admin/copyplex.sh" (this runs with the dockers stopped) - runtime 10 mins
      1. This creates an rsync mirror of the "appdata/binhex-plexpass" folder (on NVMe) to "../admin/binhex-plexpass-copy" (on an SSD share, "admin") - it takes 10 minutes (after the initial run) as it only updates changed files (roughly sketched after this list).
    5. Pre-run script "/mnt/user/admin/tarplex.sh" (this runs with the dockers running) - runtime 25 mins
      1. This actually runs first, while Plex is running, and creates a "zstdmt" backup of binhex-plexpass-copy in the day's backup folder (/mnt/user/backups/20240522_0400, say), which is a cache-plus-array share managed by the mover tuner to move older backups to the array (also sketched after the list).
    6. It would be more logical to run step 5 after step 4, but the "Post-run script" seems to execute before the dockers start, so it would add to Plex's downtime; hence I run it first, and it backs up yesterday's copy of Plex created in #4. Plex can also be recovered from the /mnt/user/admin/binhex-plexpass-copy folder.
  2. How can I make this even faster?
    1. Would it be faster to create the rsync copy of Plex (step 4) on the same NVMe drive (there's plenty of space), or would the increased disk I/O on that drive make it slower? I'm also concerned about shortening the life of the drive with all this reading/writing.
    2. Anyone know why the "Post-run script" doesn't run after all the dockers are restarted?
    3. It runs at 4am, so 14 mins of downtime for Plex is OK, but my users are friends/family all over the globe. As the database grows I'd like to minimise the downtime, make restoring Plex as fast as possible and, given I have the space, avoid having to re-fetch all the metadata from the internet. Anyone have a similar (or better) solution?
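
For reference, the two scripts boil down to roughly this (paths as in the list above; the exact rsync/tar options shown are illustrative rather than a copy of my actual scripts):

# copyplex.sh - post-backup script, runs while the containers are stopped.
# Mirror the live Plex appdata to the "admin" share; after the initial run,
# rsync only transfers changed files.
rsync -a --delete /mnt/user/appdata/binhex-plexpass/ /mnt/user/admin/binhex-plexpass-copy/

# tarplex.sh - pre-run script, runs while Plex is still up.
# Archive yesterday's copy with multi-threaded zstd into the day's backup folder.
dest="/mnt/user/backups/$(date +%Y%m%d_%H%M)"
mkdir -p "$dest"
tar -I zstdmt -cf "$dest/binhex-plexpass-copy.tar.zst" \
  -C /mnt/user/admin binhex-plexpass-copy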

Aidan

 

Link to comment
