[Plugin] Appdata.Backup



12 hours ago, KluthR said:

The plugin does NOT touch the Docker service itself in any way! Please share a debug log for further inspection.

 

Hi, I removed the Appdata Backup plugin, formatted the cache SSD, copied the appdata folder back, and reinstalled all apps. Yesterday at 23:40 I got the issue once again :/ the Docker service failed. I made a new thread here, if you would be so kind to take a look:

Regards

Edited by MarianKoniuszko
Link to comment

Hello!

I was testing out the extra files option and noticed that extra files are backed up after the container backups are finished, even when the type is set to 'Stop all containers, backup, start all'. Could this be changed so the extra files are backed up before the containers are restarted? I'm worried those files might be touched by the now-restarted containers before the backup completes.

Link to comment
22 hours ago, KluthR said:

You mapped the binhex-plex /config to "/mnt/cache/appdata/", that's why. Correct me if I'm wrong, but that doesn't seem right. /mnt/cache/appdata is also listed as a source directory, so it's being skipped. Any other mappings are treated as external volume mappings and discarded, resulting in no folder being backed up.

 

 

Wow, I never noticed that! Thanks, I updated the mapping and will run a manual backup to see if that resolves it.
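
For anyone following along, the change boils down to this mapping fix. Shown below as a docker run sketch purely for illustration - on Unraid the path lives in the container template, and the image name here is just an assumption on my part:

```bash
# Before (wrong): /config pointed at the whole appdata share, which is itself
# a configured backup source, so the plugin skipped the container entirely.
docker run -d --name binhex-plex \
  -v /mnt/cache/appdata:/config \
  binhex/arch-plexpass

# After (fixed): /config points at a per-container folder inside appdata.
docker run -d --name binhex-plex \
  -v /mnt/cache/appdata/binhex-plex:/config \
  binhex/arch-plexpass
```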

 

EDIT: Resolved, thanks @KluthR

Edited by helvete
Resolved
  • Thanks 1
Link to comment

Hi,

 

First of all, thank you for all the work and effort you put into this.

I have a question that may seem weird.

Is there a way to disable the appdata backup itself? I want to use this plugin solely to back up my USB flash drive.

If not, is there another way to back up my USB flash drive to a specified location on a schedule?

 

Greetings,

 

Tune

Link to comment
1 hour ago, KluthR said:

Not directly. You could set every container to be ignored. Then you end up with it only backing up the USB flash drive.

Thank you for taking the time to look at my question and answering so quickly!

I've taken your advice and set it up to skip all containers.

I get that it is probably a very niche use case, but perhaps a global toggle could be considered as a feature request?

 

Thanks again!

 

Greetings,

 

Tune

  • Upvote 1
Link to comment
On 8/29/2023 at 7:49 AM, MarianKoniuszko said:

if you would be so kind to take a look:

As said in the other forum, the RAM is dead.

 

On 8/29/2023 at 9:38 AM, Malachi said:

I was testing out the extra files option and noticed that extra files are backed up after the container backups are finished

Yes, because this feature was meant to back up files that are not Docker related. I will see how I can change that behaviour. Maybe a separate option for Docker-related extra files.

 

  • Thanks 1
Link to comment

Hello Guys,

 

New to this plugin, but I need to start using it; I lost my Plex data when a drive failed and I had to do a rebuild.

 

But I noticed that whenever I use it, it maxes out my server's utilization to the point that it becomes unusable. I've tried using the scheduler, but it doesn't seem to finish any of the backups and appears to lock up the server.

 

Is there any way to get around this? Maybe force it to use only 2 of the 6 cores on my server?

 

Any suggestions?

Link to comment

Just happened upon this plugin. Didn't realize the old CA Backup plugin was deprecated until I actually went to its plugin UI. I do like seeing individual configs per Docker container. Nice addition! Which brings me to a question: is it possible to also allow custom scheduling per container? Some I would prefer to back up daily, while others don't need it nearly that often. Especially if it is capable of only stopping the containers that are actively being backed up (if memory serves, that was not possible in the old plugin and it stopped everything when a backup ran, but my memory is fuzzy on that, and I'm not sure of the behaviour in this plugin - still going over it).

 

One problematic container for me that this would help with is the Unifi controller. I've been able to reproduce issues with my APs dropping some clients (mainly Kindle Fire tablets used as 24/7 wall-mounted displays) when it is stopped/started, requiring manual intervention on each device. With the Unifi container kept off unless needed, or the backup process temporarily disabled, the issue has not come up. Being able to isolate this container and keep it on something like a monthly backup would be really nice.

Link to comment

Update 2: After adding directories to the excluded folders/files list, the backup completed much faster. I overlooked the 'per container' settings option when I configured the plugin. But unless I'm misunderstanding the external volume options, I shouldn't have to do this if `save external volumes` is set to `no`.

 

Update: I just found the "per container" settings, and Plex did have /mnt/user/ listed, but external volumes are NOT selected for backup. I'm going to add /mnt/user/ to the exclude list and start the backup process. I'll update this post and share my findings.

 

Hello,

I've noticed that my backups are now exponentially bigger. For example, the folder for Plex is roughly 350GB and its backup is over 3TB. I didn't have compression enabled at the time... I have since enabled it, and the current backup is still bigger than the folder itself (and still growing after running for 2+ hours). I'm on unRAID 6.12.3.

The Docker container location is /mnt/user/appdata.

Edited by ShadowLeague
new info
Link to comment

Potential bug - During my first scheduled run, no log was generated. While the backup was running, the status/log tab indicated that the backup was not running, and the abort button was inactive.

 

On attempting to share a debug log, it returned `Your debug log ID: Logfile does not exist!`

 

Also, is it possible to exclude all files with a certain extension? Is it as simple as adding `*.ext` to the excluded folders/files list?
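
For context, this is the kind of thing I mean if I were rolling the backup by hand with plain tar - just an illustration, the paths are made up and I'm not saying the plugin works this way:

```bash
# Exclude every *.log file from a manual tar backup of one container's appdata.
tar -czf /mnt/user/backups/binhex-plex.tar.gz \
    --exclude='*.log' \
    -C /mnt/user/appdata binhex-plex
```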

 

Thanks! :D

Link to comment

Great plugin!

 

I see there is already a request / roadmap item for custom per-container scripts. That does sound useful, but I was hoping for enhancements to the scripting in general for the monitoring use case.

  • It would be nice if I could paste a script "curl -fsS -m 10 --retry 5 -o /dev/null https://some-ping-site-here" into the custom scripts box, instead of just script files.
  • Since a common use case will likely be using the scripts to monitor whether a job succeeded (and report if not), it would be good to have separate "success" and "errored" scripts rather than just a "completed" script. Or, be able to pass metadata (like the backup folder location, job success/failure, etc.) into the scripts as parameters - a rough sketch follows below.
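
To make that concrete, here is a rough sketch of the kind of post-run script I have in mind. The ping URL is a placeholder, and the idea that the plugin passes a result as the first argument is purely hypothetical at this point:

```bash
#!/bin/bash
# Hypothetical post-run script: ping a monitoring endpoint depending on the
# result the plugin would pass as the first argument ("success" or "error").
STATUS="${1:-unknown}"
PING_URL="https://hc-ping.com/your-uuid-here"   # placeholder URL

if [ "$STATUS" = "success" ]; then
    curl -fsS -m 10 --retry 5 -o /dev/null "$PING_URL"
else
    curl -fsS -m 10 --retry 5 -o /dev/null "$PING_URL/fail"
fi
```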

Thanks again.

Link to comment
On 8/24/2023 at 6:25 PM, Ancient Wizard said:

I have the same problem. Only 2 dockers are installed. Bin Hex Krusader and Bin Hex Plex Pass. The failed files are not dropping off and I cannot manually clear them. Older successful files may be dropping off but I cannot say for certain. A solution would be appreciated.

Sorry for the delayed response. I went down the good old Linux command route: cd into the directory and then removed the directory that way.
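
Roughly what I ran, in case it helps anyone else. The destination path and folder name below are only examples - point them at your own backup destination and double-check before running rm -rf:

```bash
# Check the permissions on the failed backup first:
ls -la /mnt/user/Backups/appdata/

# Then remove the failed folder:
cd /mnt/user/Backups/appdata/
rm -rf ./ab_20230801_120000-failed/   # example folder name
```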
 

Link to comment

Thanks for pointing out that the problem can be resolved via Linux commands. I'm an old guy who is more comfortable with a GUI; the terminal is not my friend. It kind of confirms I should use Krusader to navigate to the location and delete the failed backups. I used to do Microsoft support starting back in the mid-1980s, and this Linux stuff is an effort to move away from Microsoft and keep my mind active, as I am still learning new things.

Link to comment
On 8/2/2023 at 12:47 AM, LoyalScotsman said:

So when I first set this up it was failing, so I now have a selection of failed backups that I cannot delete. I have tried to delete them via Krusader, but it refuses. Any advice on how to delete the failed ones? They are taking up unnecessary space.

 


I had the same problem and could not delete the failed directories; no way found. Krusader would not delete them, and although Linux commands have been mentioned, I did not know how to use them, and it is more than I wanted to learn today.

I hope I never need to restore from Appdata Backup, but I appreciate that I have it, and praise the maintainer.

Praise to the community. It helps to have the right tools in your toolbox when doing maintenance.

For some reason I added the Dynamix File Manager plugin today - praise to its developer. I visited the directory where I keep the appdata backups to see whether today's run had failed (it succeeded), and I now have the ability to select the failed directories and delete them one at a time. I could not make multiple selections, but I'm happy to free the space occupied by failed backups.

Hope this helps others with same or similar problems.

Link to comment
5 hours ago, ShadowLeague said:

I just shared my logs: 455bd7c3-90a9-4842-a521-de24a1645f7d

Looks like the backup acknowledges the exclusion of /mnt, but a few lines later it's calculating /mnt's backup. I ended up cancelling the scheduled backup since it was taking several hours and its backup had grown to hundreds of GB in size.

 

I can second this. 

 

ISSUE 1:

The problem occurs for Dockers like "binhex-Krusader", "Plex Media Server" or "Crashplan Pro", which have a volume mapped to /mnt/user while the Docker data that should be backed up is in /mnt/user/appdata.

 

In the "per docker settings" I can't exclude /mnt/user because then it will skip the appdata to as its in the same folder.

What I had to do is exclude all folders except /appdata/ in my exclude list, but also exclude every other Docker's data folder IN the /appdata folder.

 

In the backup log it clearly states that the setting is to NOT include external folders, but it still goes for the external folders unless I put them in the exclude list.

This makes it very prone to user error and micromanagement, and I don't think it's the expected behaviour.

 

ISSUE 2:

While doing trial and error with this, I had to abort the script during the backup of the "Crashplan Pro" data, since it had started working on my whole file system in /mnt. So of course the Crashplan Pro docker didn't get restarted. This made me think that when the script evaluates whether a docker should be restarted or not after the backup, it should be based on the docker's "autostart" on/off setting in UnRAID, instead of whether the docker happened to be running when the backup script started.

 

I have enclosed the backup log and config that produced the desired behaviour below, showing the tedious exclusions I had to do for some containers.

 

Thank you for this essential and great plugin.

 

config.json backup.log

Link to comment
On 8/30/2023 at 7:23 PM, Nanuk_ said:

Is there any way to get around this? Maybe force it to use only 2 of the 6 cores on my server?

There could be a tunable setting - but for now, you could set Compression to "Yes" and NOT to "Yes, multicore".
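
To illustrate the difference with generic tar commands (these are not the plugin's actual invocations, and the paths are only examples): single-threaded gzip keeps roughly one core busy, while a parallel compressor will happily grab every core it can get:

```bash
# Compression "Yes": single-threaded gzip, roughly one core busy.
tar -czf backup.tar.gz -C /mnt/user/appdata binhex-plexpass

# Compression "Yes, multicore": a parallel compressor (zstd shown here)
# that can peg all cores while it runs.
tar -I 'zstd -T0' -cf backup.tar.zst -C /mnt/user/appdata binhex-plexpass
```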

 

On 8/31/2023 at 5:58 AM, cr08 said:

Is it possible to also allow custom scheduling per container?

No, currently not.

 

On 9/1/2023 at 10:37 PM, ShadowLeague said:

I've noticed that my backups are now exponentially bigger.

Please share a debug log ID from an affected run

 

On 9/2/2023 at 4:31 AM, Dalarielus said:

During my first scheduled run, no log was generated.

If all indicators say that the backup was not running, it probably wasn't running. The script writes down a PID, and as long as that PID exists as an active process, the backup is running. As soon as the script exits, the PID is no longer an active process. Are you sure the backup was actually running?
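
If you want to verify it yourself, the check is essentially this - the PID file location below is only an example, not necessarily the plugin's real path:

```bash
PIDFILE="/tmp/appdata_backup.pid"   # example location only

if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "Backup still running (PID $(cat "$PIDFILE"))"
else
    echo "No backup currently running"
fi
```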

On 9/2/2023 at 4:31 AM, Dalarielus said:

Also, is it possible to exclude all files with a certain extension?

That's already on the list.

 

On 9/2/2023 at 10:13 PM, caesay said:

It would be nice if I could paste a script "curl -fsS -m 10 --retry 5 -o /dev/null https://some-ping-site-here" into the custom scripts box, instead of just script files.

I agree - some simple one-liners don't need an extra file. On the other hand, doesn't a file simplify script maintenance, since it is reusable and kept in a central place?

On 9/2/2023 at 10:13 PM, caesay said:

Since a common use case will likely be using the scripts to monitor whether a job succeeded

Nice one. The plugin could call the post-run scripts with an argument to tell them the result.

 

On 9/2/2023 at 10:58 PM, LoyalScotsman said:

Sorry for the delayed response. I went down the good old Linux command route: cd into the directory and then removed the directory that way.

Don't know if it was you, but I wanted an `ls -la` to get the permissions on those files.

 

On 9/4/2023 at 8:36 AM, casperse said:

When doing the appdata backup to separate archive files, I can't find the USB flash backup anymore?

Is this only included if I create one big archive file?

No, it's always separate. What does the log say? If unsure, share a debug log ID with me and I'll check what's going on.

 

On 9/4/2023 at 11:57 PM, Ancient Wizard said:

Hope this helps others with same or similar problems.

Please post an `ls -la` from the failed backups. I want to see the file permissions, as I haven't found the bug yet that makes them undeletable :/

 

7 hours ago, ShadowLeague said:

Looks like the backup acknowledges the exclusion of /mnt

Found a small yet powerful bug... I'll fix that later today! There is an empty line in the allowed sources textbox, which confuses the calculations! The script should check for this, but it seems I missed something.
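
In other words, the source list needs to be cleaned before it is used - something along these lines, as an illustration only (not the plugin's actual code):

```bash
# Drop trailing whitespace and empty lines from the allowed-sources textarea
# so a blank entry can never match everything.
raw_sources=$'/mnt/cache/appdata\n\n/mnt/user/appdata\n'

clean_sources=$(printf '%s\n' "$raw_sources" | sed 's/[[:space:]]*$//' | awk 'NF')
echo "$clean_sources"
```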

 

1 hour ago, bunkermagnus said:

ISSUE 1:

Please share a debug log ID.

1 hour ago, bunkermagnus said:

This made me think that when the script evaluates whether a docker should be restarted or not after the backup, it should be based on the docker's "autostart" on/off setting in UnRAID

Yes - for initial setups - but not if the script found it stopped. Any manual abort just stops the current action. If an error occurs within a normal run, the container is brought back up if it was up before the backup started.
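
Put differently, the intended behaviour is roughly this - a sketch with plain docker commands, not the plugin's actual code:

```bash
# Remember which containers are running right now.
running=$(docker ps --format '{{.Names}}')

# Stop them for the backup.
for c in $running; do docker stop "$c"; done

# ... backup runs here ...

# Afterwards, start only the containers that were running beforehand,
# regardless of their autostart setting.
for c in $running; do docker start "$c"; done
```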

Link to comment
