[Plugin] Appdata.Backup


Recommended Posts

One issue I am running into is exclusions. I have the exclusions for each Docker container figured out, but one big problem for me is excluding the /.Recycle Bin/ folder from the Recycle Bin plugin. I find that plugin essential for recovering from accidental deletion of files/folders, and I had the folder excluded in the old CA Backup. I see there is an exclusion list under each Docker container's configuration and I can select basically any folder on the system from there, but will that actually propagate to the main backup? It seems a bit confusing as is. Any advice would be appreciated, thanks!

Edited by TheOgre
Link to comment

What do you mean by "main backup"?

 

Where is this Recycle Bin folder located?

 

If it works alongside Docker and is just a folder that is not used by any container, then it is not part of the backup. The plugin does not simply back up the appdata folder; it backs up the folders that are in use by containers.

Link to comment
29 minutes ago, jcofer555 said:

How do I back up the actual appdata like the old plugin did?

Not at all anymore. All needed data is collected from each container's volume mappings. Therefore you need to check the appdata source setting and read the help block inside it.
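In case a sketch helps to picture what "collected from each container's volume mappings" means: the host-side source paths of a container's mounts can be listed with `docker inspect`. This is only a rough Python illustration of the idea, not the plugin's actual code, and the helper names are made up.

```python
import json
import subprocess

def container_sources(container: str) -> list[str]:
    """Return the host-side source paths of a container's volume mappings."""
    # `docker inspect` prints a JSON array with one object per container;
    # bind mounts and volumes are listed under "Mounts".
    raw = subprocess.run(
        ["docker", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    return [m["Source"] for m in json.loads(raw)[0].get("Mounts", [])]

if __name__ == "__main__":
    # List every container known to Docker and the host paths it maps in.
    names = subprocess.run(
        ["docker", "ps", "-a", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for name in names:
        print(name, container_sources(name))
```

Those source paths, minus whatever you exclude per container, are the kind of data the backup works from.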

Link to comment
34 minutes ago, KluthR said:

Not at all anymore. All needed data is collected from each container's volume mappings. Therefore you need to check the appdata source setting and read the help block inside it.

I see, so much more effort than before, but thank you for the response.

Link to comment

I upgraded from 6.12.0 to 6.12.1, switched my cache pools from btrfs to ZFS, and switched Docker from docker.img to a directory.

 

Are there any known issues with ZFS and this plugin? I have now had two kernel panics that killed my entire Unraid system (it needed a hard reset).

 

I don't shut down any Docker containers before the backup; could that be related (although an error shouldn't kill the Unraid host)? I don't have log files because I wasn't able to get them after it happened, but it was something like "unraid kernel panic not syncing", and ZFS was mentioned elsewhere in the console tab (it's a VM inside a Proxmox host).

 

I never had such massive problems before, and now I'm afraid there is a major issue causing this.

 

I have disabled the plugin for now to observe the behaviour.

Link to comment
6 hours ago, enJOyIT said:

Are there any known issues with ZFS and this plugin? I have now had two kernel panics that killed my entire Unraid system (it needed a hard reset).

I've had ZFS-related kernel panics on 6.12 as well during heavy filesystem operation. The frequency went down drastically when I changed Docker from macvlan to ipvlan mode. I did get a lock-up just recently, but I haven't had a chance to debug it yet, as my ISP is trying to fix their internet and I just got back from out of town.

 

I'll be posting updates here when I get to them: https://forums.unraid.net/bug-reports/stable-releases/zfs-related-kernel-panic-r2458/

  • Like 1
Link to comment

I would like to make a suggestion. While I like the "container by container" approach to the backups, which reduces the time a container is unavailable, in some instances it can also be a problem. For example, stopping a service's database will potentially break the service accessing that database.

 

A more specific example: I have a Tautulli container that can monitor the Plex logs, which are accessible through a volume mapping into the Plex container's config. However, the backup of Tautulli currently fails because the Plex logs are changing. For now, I had to remove that feature and the volume mapping so that Tautulli gets backed up correctly.

 

I also see that a to-do item is to "Use Dockerman default order when no order was set before", which is good, but it doesn't necessarily fix the issue I highlighted above.

 

What I would like to see is the ability to "group" containers so that they are stopped as a whole and started again after they are backed up. So, for example, Plex and Tautulli are in such a group and both get stopped; Tautulli is backed up and started, then Plex is backed up and started.
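To make the idea concrete, here is a rough Python sketch of how such a group could be processed, following the order described above. The container names, the group definition, and the backup() placeholder are purely illustrative and not part of the plugin.

```python
import subprocess

# Hypothetical group: stop all members first, then back up and restart
# them one by one in the listed order.
GROUP = ["Tautulli", "Plex"]

def docker(*args: str) -> None:
    subprocess.run(["docker", *args], check=True)

def backup(container: str) -> None:
    # Placeholder for "archive this container's volume mappings".
    print(f"backing up {container} ...")

def backup_group(group: list[str]) -> None:
    # 1) Stop the whole group so no member keeps writing into data
    #    (e.g. logs or a database) that another member maps in.
    for name in group:
        docker("stop", name)
    # 2) Back up and restart each member individually, so every container
    #    is only down for as long as strictly necessary.
    for name in group:
        backup(name)
        docker("start", name)

if __name__ == "__main__":
    backup_group(GROUP)
```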

  • Upvote 1
Link to comment
On 6/25/2023 at 9:23 AM, Fribb said:

the backup of Tautulli fails because the Plex logs are changing

Exclude this volume path from the Tautulli container, so the Plex logs are only backed up within Plex's context.

 

19 hours ago, Revan335 said:

Can I select backup retention by revisions/points as well as by days?

 

With rotated drives, for example, 90 days is worse than revisions.

Sorry, I don't understand :(

 

16 hours ago, iXNyNe said:

Does the "Update containers after backup" feature require CA Auto Update Applications to be installed?

No. The new plugin does it the native Unraid way.

  • Like 1
Link to comment
35 minutes ago, KluthR said:

Sorry, I don't understand :(

The retention I can select is X days, not X points.

Both options would be very cool.

Then I could choose between days and points.

 

With days, a job can delete all backups on an old USB disk. For example: the disk was last used 120 days ago and holds 90 points/backups, and the retention is set to 80 days/points. The backup job's retention then deletes every backup outside that window.

 

The backup runs twice daily.

 

With points, the retention only deletes the points beyond the retention count.

 

With days, the retention removes more backups.
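To illustrate the difference between the two retention styles, here is a small Python sketch of both pruning policies. The folder naming (one ab_YYYYMMDD_HHMMSS directory per run) is only an assumption for the example, not necessarily the plugin's real layout.

```python
from datetime import datetime, timedelta
from pathlib import Path

def backup_date(folder: Path) -> datetime:
    # Assumed naming scheme: ab_20230625_120000
    return datetime.strptime(folder.name.split("_", 1)[1], "%Y%m%d_%H%M%S")

def prune_by_days(dest: Path, keep_days: int) -> list[Path]:
    """Age-based retention: select everything older than keep_days.
    On a rotated disk that was unplugged for months, this can select *all* backups."""
    cutoff = datetime.now() - timedelta(days=keep_days)
    return [p for p in dest.glob("ab_*") if backup_date(p) < cutoff]

def prune_by_points(dest: Path, keep_points: int) -> list[Path]:
    """Count-based retention: keep only the newest keep_points backups,
    regardless of how old they are."""
    backups = sorted(dest.glob("ab_*"), key=backup_date, reverse=True)
    return backups[keep_points:]
```

With 90 backups on a disk that was last connected 120 days ago and a limit of 80, prune_by_days() would select all 90 for deletion, while prune_by_points() would only select the 10 oldest.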

Link to comment
1 hour ago, Kilrah said:

Just set more days for retention?

That's an option when the USB disk has enough storage.

 

But unfortunately, this does not solve the problem if the other disk was not connected for so long that all of its backups fall outside the retention window, so they would all be deleted when it is connected and a backup runs. With retention by points, only the backups above the set number of points would be removed.

Therefore a choice between the two would be great. Then one person can use days and another can use points.

Link to comment
20 minutes ago, Revan335 said:

That's an option when the USB disk has enough storage.

 

But unfortunately, this does not solve the problem if the other disk was not connected for so long that all of its backups fall outside the retention window, so they would all be deleted when it is connected and a backup runs. With retention by points, only the backups above the set number of points would be removed.

Therefore a choice between the two would be great. Then one person can use days and another can use points.

Did you try this option?

[attached screenshot of the setting]

  • Thanks 1
Link to comment
1 hour ago, Revan335 said:

But unfortunately, this does not solve the problem if the other disk was not connected for so long that all of its backups fall outside the retention window, so they would all be deleted when it is connected and a backup runs. With retention by points, only the backups above the set number of points would be removed.

If you're always doing two backups per day, then it doesn't matter whether you keep X backups or keep backups for Y days; you can always get the same result. Keeping 30 days of backups equals keeping 60 backups.

Edited by Kilrah
Link to comment

Awesome work @KluthR! I used to be able to configure notifications when containers were updated upon restart (I think it fired via CA Auto Update Notifications). That seems to be gone; I don't see how to send a webhook notification that an update was completed anymore. Am I missing something?

Link to comment
2 hours ago, NegZero said:

I used to be able to configure notifications when containers were updated

Yeah, that's not possible currently. I never used the CA Auto Update plugin.

 

If it helps, I could add a log entry with Warning level for completed updates. If you set the notification level accordingly, that would produce an email.

Link to comment
3 hours ago, KluthR said:

Yeah, that's not possible currently. I never used the CA Auto Update plugin.

 

If it helps, I could add a log entry with Warning level for completed updates. If you set the notification level accordingly, that would produce an email.

I'd personally like that; it's nice to have a reminder that something was updated without digging through logs. That way, if you see something behaving oddly, you immediately have the option of investigating the container update.

Link to comment
On 6/24/2023 at 4:47 PM, Renegade605 said:

I've had ZFS-related kernel panics on 6.12 as well during heavy filesystem operation. The frequency went down drastically when I changed Docker from macvlan to ipvlan mode. I did get a lock-up just recently, but I haven't had a chance to debug it yet, as my ISP is trying to fix their internet and I just got back from out of town.

 

I'll be posting updates here when I get to them: https://forums.unraid.net/bug-reports/stable-releases/zfs-related-kernel-panic-r2458/

 

I thought it was related to that, so I went a similar way... I set up a dedicated Docker network (on a separate eth) since my router needs separate MAC addresses to identify my devices. It has worked without issues since then...

Edited by enJOyIT
Link to comment

Excellent work. Much cleaner interface, more options, inline help added. I like being able to run a script before or after the backup. I think the old plugin could already do part of that, but the options were a bit confusing. I have a script that reboots the server if Unraid was updated and is awaiting a reboot. I previously had it in User Scripts, but it's nicer to include it with this process, as it makes more sense and is easier to manage.
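For anyone curious, the idea behind such a post-backup script is roughly the following. This is only a hedged Python sketch: the version-file paths and formats (/etc/unraid-version for the running version, /boot/changes.txt for the release staged on the flash drive) are assumptions based on common community scripts, so verify them on your own system before relying on anything like this.

```python
import re
import subprocess

# Sketch of a "run after backup" script: reboot only if a newer Unraid
# release appears to be staged on the flash drive but not booted yet.
# All paths and formats below are assumptions - check them on your system.

def running_version() -> str:
    # /etc/unraid-version typically contains: version="6.12.1"
    with open("/etc/unraid-version") as f:
        m = re.search(r'version="([^"]+)"', f.read())
    return m.group(1) if m else ""

def staged_version() -> str:
    # The first line of /boot/changes.txt usually names the staged release,
    # e.g. "# Version 6.12.2 2023-06-29".
    try:
        with open("/boot/changes.txt") as f:
            m = re.search(r"Version\s+([0-9][^\s]*)", f.read(2048))
        return m.group(1) if m else ""
    except FileNotFoundError:
        return ""

if __name__ == "__main__":
    running, staged = running_version(), staged_version()
    if staged and running and staged != running:
        print(f"Unraid update {running} -> {staged} pending, rebooting...")
        subprocess.run(["/sbin/reboot"], check=False)
    else:
        print("No pending Unraid update detected; not rebooting.")
```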

  • Thanks 1
Link to comment
