[Plugin] CA Appdata Backup / Restore v2.5


KluthR

Recommended Posts

18 hours ago, KluthR said:

Anything which could access it during backup?

 

Nothing should be accessing it during backup, because it's a file that's only useful to PMS, and we can see that PMS wasn't running at the time. Weirdly, when the scheduled backup ran this morning, it completed without any errors.

So it's quite confusing why it sometimes throws an error and sometimes doesn't.

Link to comment
On 1/3/2023 at 1:37 AM, KluthR said:

Yep. Because your USB flash backup destination points to your appdata. The USB flash backup uses rsync, and rsync is told to make a 1:1 copy, so it deletes any additional file which was not at the source. This mechanism is about to change in the future!

 

I hope to make further progress soon to catch everything :)

I am rebuilding my appdata from scratch at this point. Big lesson learned!

Thanks for your hard work!

Link to comment

I'm a few months into using Unraid. I like it. A lot. :) However, between learning Unraid and how/where it stores its appdata, I've found that my appdata folders are scattered all over the place, depending on when I made which change in global settings or in each container, let alone whatever location the container author has predefined.

 

Which brings me to my point: I installed this plugin, played with it for a bit, and I like it. It did back up the folders I expected it to, but I had to run the backup three times, manually changing the source path each time to the various docker appdata locations.

 

Is there a way to scriptomagically or chronomagically run one backup that includes all the various source paths?
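Until the plugin supports multiple sources, a user script that loops over the scattered locations could cover this; a sketch (all paths are assumptions for illustration):

```shell
# Hypothetical user script: back up several scattered appdata locations in
# one scheduled run. All paths below are illustrative only.
SOURCES="/mnt/cache/appdata /mnt/user/appdata /mnt/disk1/dockerdata"
DEST="/mnt/user/backup/appdata"
stamp=$(date +%Y%m%d)

backup_all_sources() {
  for s in $SOURCES; do
    # turn the source path into a filename-safe suffix
    name=$(echo "$s" | tr '/' '_')
    tar -czf "$DEST/${stamp}${name}.tar.gz" -C "$s" .
  done
}
```

Scheduled via the User Scripts plugin, this would archive every listed source path in a single run.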

 

I'm learning that before I start up a docker container for the first time, I need to make sure the container paths are where I want them to be rather than their default values!

Link to comment

Mine's been erroring too since the upgrade. I've turned off verification, but it appears to error when files change while it's backing up.

I can't stop my dockers while it backs up, as I need some of them running almost 24/7, and I don't mind if files are changing. If it's erroring because files change as they're backed up, could you implement a way to ignore this? Because of the error it's not deleting old backups, which uses up a lot of storage.

 

Quote

[06.01.2023 01:01:02] Backing Up appData from /mnt/cache/appdata/ to /mnt/user/Backup/Unraid/Appdata/[email protected]
[06.01.2023 01:01:02] Separate archives disabled! Saving into one file.
[06.01.2023 01:01:02] Backing Up
/usr/bin/tar: ./PlexMediaServer/Library/Application Support/Plex Media Server/Cache: file changed as we read it
/usr/bin/tar: ./wordpress-site1/wp-admin: file changed as we read it
[06.01.2023 01:42:09] tar creation/extraction failed!
[06.01.2023 01:42:09] done
[06.01.2023 01:42:09] A error occurred somewhere. Not deleting old backup sets of appdata
[06.01.2023 01:42:09] Backup / Restore Completed
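For context on the log above: GNU tar exits with status 1 when its only problem was "file changed as we read it", and reserves status 2 for fatal errors. A plugin-independent sketch of treating status 1 as a warning rather than a failure (function name and paths are hypothetical):

```shell
# Hypothetical wrapper: GNU tar exits 1 when files merely changed while
# being read, and 2 on fatal errors. Treating 1 as a warning keeps the
# archive and would still allow old backup sets to be pruned.
backup_appdata() {
  src="$1"
  archive="$2"
  tar -czf "$archive" -C "$src" . 2>"$archive.log"
  rc=$?
  case "$rc" in
    0) return 0 ;;
    1) echo "warning: some files changed during backup (see $archive.log)"
       return 0 ;;
    *) echo "error: tar failed with status $rc"
       return "$rc" ;;
  esac
}
```

Whether silently accepting changed files is safe is exactly the debate below; this only shows that the two error classes are distinguishable.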

 

Link to comment
On 1/6/2023 at 5:36 PM, jmmrly said:

I can't stop my dockers while it backs up, as I need some of them running almost 24/7, and I don't mind if files are changing. If it's erroring because files change as they're backed up, could you implement a way to ignore this? Because of the error it's not deleting old backups, which uses up a lot of storage.

 

This is my scenario as well. I have been forced to turn off my dockers for the duration of the backup job, which takes my Internet down for at least 30 minutes.

 

I would love to see a fix for this.

Link to comment
39 minutes ago, Babar said:

 

This is my scenario as well. I have been forced to turn off my dockers for the duration of the backup job, which takes my Internet down for at least 30 minutes.

 

I would love to see a fix for this.

There is no fix. You can't back up files that are in use, as they can change at any second. Sure, you can ignore it, but then you might as well not do the backup at all. What is the point of a backup where the backed-up files are not complete? There is no use in having a half-complete backup. It would do more harm than good.

 

Just schedule the backup when no one is using the internet (at night when everyone is sleeping, maybe). If you can't live without internet for 30 minutes, you have to look at other solutions, like getting yourself a Pi that can run 24/7 and running Pi-hole (or whatever is taking down your internet) on that.

 

A backup is much like an update. Windows can't update without shutting down, because the files needing updates are in use. And you can't back up files that are in use/being written to. Or you can, but the backup would not be a backup anymore; it would just be different files on the source/destination.

 

Edited by strike
Link to comment
12 minutes ago, strike said:

There is no fix. You can't back up files that are in use, as they can change at any second. Sure, you can ignore it, but then you might as well not do the backup at all. What is the point of a backup where the backed-up files are not complete? There is no use in having a half-complete backup. It would do more harm than good.

 

Just schedule the backup when no one is using the internet (at night when everyone is sleeping, maybe). If you can't live without internet for 30 minutes, you have to look at other solutions, like getting yourself a Pi that can run 24/7 and running Pi-hole (or whatever is taking down your internet) on that.

 

A backup is much like an update. Windows can't update without shutting down, because the files needing updates are in use. And you can't back up files that are in use/being written to. Or you can, but the backup would not be a backup anymore; it would just be different files on the source/destination.

 

 

Seems very harsh.

 

One would imagine all Internet services were being shut down at "night" for their backup jobs to run. 😉

Link to comment
6 minutes ago, Babar said:

 

Seems very harsh.

 

One would imagine all Internet services were being shut down at "night" for their backup jobs to run. 😉

Those who need to back up/update things that must run 24/7 have redundancy in place so they can run their backups/updates. So if you can't live without internet, you might want to look into that. I guess you have never seen a service go down due to updates/moving servers/restoring backups or whatever. Like I said, you can run the backup while the docker is running, but there is no guarantee that the backed-up files are 100% the same between the source/destination when the backup is finished. For static files that are not being written to every second, there are of course no problems.

 

You could do very frequent backups, then the chance of having a "half complete" backup would decrease.  

Link to comment
3 hours ago, strike said:

There is no fix. You can't back up files that are in use, as they can change at any second. Sure you can ignore it, but then you might as well not do the back up at all. What is the point of having a backup where the files backed up are not complete?

In many cases those services are configured and running; what changes is nothing important, just logfiles that people don't care about. Their changing is irrelevant and definitely doesn't make the backup unusable.

It's up to the user to be smart about what they stop and what they don't, but given the number of common things that behave that way, it's important to have an option to ignore changed files.

Edited by Kilrah
  • Upvote 1
Link to comment

Hi,

 

Thanks for the work keeping this updated.

 

Since the update to V3, however, I've noticed that it takes a _significantly_ longer amount of time to do the backups. It's a total of about 200GB (mostly b/c of plex), but it used to only take about 3 hours to perform the backup + verifications. Now it takes 5 hours to perform the backup and another 3+ or so to verify it.

 

Is there a way of speeding this up?

 

Thanks.

Link to comment
7 hours ago, Kilrah said:

It's up to the user to be smart about what they stop and what they don't but given the number of common things that behave that way it's important to have an option to ignore changed files.

 

Agreed, an option to suppress errors about changing files would be awesome.

 

Link to comment
On 12/31/2022 at 5:12 AM, MJFox said:

wouldn't it be better to change the process like this in such a case:

 

-) stop the first container

-) backup the first container

-) start the first container

-) stop the second container

-) backup the second container

-) start the second container 

etc.

 

this would greatly reduce the downtime for each container 

 

On 1/2/2023 at 1:30 PM, KluthR said:

That's planned: a per-container config. But for this, we also need dependency chaining for containers which need each other.

 

The simpler and IMHO more robust solution would be to imitate unraid's start process:

  • stop all containers
  • backup the first container
  • start the first container
  • backup the second container
  • start the second container
  • etc.

Avoid the headache of debugging user- or container-specific dependencies - if boot works, backup works.

 

And shorter downtime for primary containers (relative to the current version) should reduce the need to exclude, reducing errors.
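The boot-imitating sequence proposed above could be sketched as a shell loop (container names and paths are assumptions for illustration, not the plugin's implementation):

```shell
# Sketch of the boot-like sequence: stop everything up front, then back up
# and start containers one by one in order. Names and paths are illustrative.
CONTAINERS="pihole plex wordpress"        # in the desired start order
APPDATA="/mnt/cache/appdata"
BACKUPS="/mnt/user/backup/appdata"

sequential_backup() {
  for c in $CONTAINERS; do
    docker stop "$c"                      # stop all containers first
  done
  for c in $CONTAINERS; do
    # back up each container's appdata while everything later in the
    # order is still stopped, then start it before moving on
    tar -czf "$BACKUPS/$c.tar.gz" -C "$APPDATA/$c" .
    docker start "$c"
  done
}
```

Containers early in the order come back almost immediately, so a critical container like Pi-hole would be down only for its own archive, not the whole job.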

Edited by CS01-HS
  • Like 2
Link to comment
3 hours ago, KluthR said:

The mechanism wasn't changed. Are you sure your exclusion list etc. is set correctly?

I didn't have an exclusion list before, so I'm not sure what could be causing the increase in time. Very useful feedback, I know ^_^;

Edited by MrTyton
Link to comment
7 hours ago, Kilrah said:

In many cases those services are configured and running; what changes is nothing important, just logfiles that people don't care about. Their changing is irrelevant and definitely doesn't make the backup unusable.

It's up to the user to be smart about what they stop and what they don't, but given the number of common things that behave that way, it's important to have an option to ignore changed files.

 

6 hours ago, jmmrly said:

Yeah, I don't care if files change, especially the ones I've seen it error on. All I need is an option to ignore errors for files that are in use/changed, so it runs properly and deletes older appdata backups. I've had to revert to v2, which runs fine without errors.

 

7 hours ago, Kilrah said:

In many cases those services are configured and running; what changes is nothing important, just logfiles

In many cases yes, but what about those other cases and the important files which can change?

 

I think it's a bad idea. I don't mean to be condescending, but most users are not smart. How do you suggest the developer determine which files are OK to change and which are not? As I said earlier, if you don't care about files being changed, why do the backup at all? Isn't that what the backup is for, an exact copy of your files?

 

If you really want it that way, no one is stopping you from just running the rsync command against your appdata yourself; just create a user script. You don't really need this plugin if all you want is to copy the files and nothing else. You clearly don't care about the files, and you clearly don't want the containers to stop, so what do you need this plugin for?

 

Those smart users you were talking about can just figure out which files they don't care about and exclude them if they really want to. That will stop the errors.
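Excluding the noisy paths is something tar supports directly; a minimal sketch using a cache directory like the one in the log earlier in the thread (function name and paths are assumptions):

```shell
# Hypothetical helper: exclude a constantly-rewritten directory (e.g. a
# Plex-style Cache folder) so tar never sees it change mid-read.
# Members are archived as ./path because we archive from inside $src,
# so the exclude pattern is anchored the same way.
backup_with_excludes() {
  src="$1"
  archive="$2"
  exclude="$3"
  tar -czf "$archive" --exclude="./$exclude" -C "$src" .
}
```

For example, `backup_with_excludes /mnt/cache/appdata /mnt/user/backup/appdata.tar.gz "plex/Cache"` would skip the cache directory entirely while archiving everything else.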

Link to comment
2 minutes ago, KluthR said:

@MrTyton Do you have enabled separate archives?

I don't. I don't think that I had that enabled before. Would that speed things up?

 

I can try enabling it for the next run.

 

Also, +1 on treating the dockers the same way that Unraid does. It's definitely not as efficient, since you lose some parallelization that might otherwise be possible, but it seems like the easiest way to enable this feature and minimize downtime.

Link to comment

I can understand why some people don't want to stop critical dockers for backup, but it occurs to me that these dockers may already have built-in backup functionality. I know pfSense and Pi-hole are already able to create and store a backup file that can be imported to restore all configuration.

 

Just an idea that would allow those dockers to be excluded while still allowing a successful backup of the non-critical dockers.
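For Pi-hole specifically, the built-in export mentioned here is the Teleporter; a sketch of pulling it from a running container (the container name, the destination path, and the exact `pihole -a -t` flags are assumptions that may vary between Pi-hole versions):

```shell
# Hypothetical: export Pi-hole's Teleporter settings archive from a running
# container instead of stopping it for a file-level backup. Container name
# "pihole", flags, and paths are assumptions, not verified against any version.
backup_pihole() {
  docker exec pihole pihole -a -t /tmp/pihole-teleporter.tar.gz
  docker cp pihole:/tmp/pihole-teleporter.tar.gz /mnt/user/backup/
}
```

The container's appdata could then be excluded from the plugin's job while its configuration still gets backed up.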

Link to comment
4 hours ago, strike said:

In many cases yes, but what about those other cases and the important files which can change?

You don't disable the check, and you do stop them.

 

4 hours ago, strike said:

I think it's a bad idea. I don't mean to be condescending but most users are not smart. How do you suggest the developer determine which files is ok to be changed and which are not?

Not the developer's job. Default is already to stop and to check.

If a "not smart" user is going to turn both off even after seeing warnings, then it's on them, and they'd probably already have loads of issues using Unraid itself, since there are countless things that are just as important if not more so, less obvious, and probably less documented.

 

If the user decides a particular container is OK to keep up, and after checking the logs determines that the only things that changed are unimportant enough to disable checking, then they should have the option to do so.

 

Trying to idiot-proof everything only leads to unusable "race to the bottom" products that can't do anything anymore because every feature gets dropped "because someone who shouldn't might use it", no thanks. Unraid's been doing well at simplifying common tasks without nerfing advanced capabilities so far and that's what's great about it.

Edited by Kilrah
Link to comment
6 hours ago, CS01-HS said:

 

 

The simpler and IMHO more robust solution would be to imitate unraid's start process:

  • stop all containers
  • backup the first container
  • start the first container
  • backup the second container
  • start the second container
  • etc.

Avoid the headache of debugging user- or container-specific dependencies - if boot works, backup works.

 

And shorter downtime for primary containers (relative to the current version) should reduce the need to exclude, reducing errors.


I also vote for this. 

Link to comment
  • KluthR changed the title to [Plugin] CA Appdata Backup / Restore v2.5
