[Plugin] Appdata.Backup



17 hours ago, KluthR said:

This setting does exactly what it's called: it updates the container(s) after backup, if an update is available. This version works independently (previous Appdata Backup plugin versions needed a separate plugin).

Thanks one more time! Would it be possible to provide more information in the notification? For example, which containers were updated?

 

Now I got something like this:

Quote

Appdata Backup
Backup done [0h, 8m]!
normal
The backup was successful and took 0h, 8m!

It would be great to have something similar to Watchtower if any containers were updated:

Quote

Appdata Backup
Backup done [0h, 8m]!
normal
The backup was successful and took 0h, 8m!

 

Docker Auto Update
heimdall Automatically Updated
normal

pi-hole Automatically Updated
normal

Or at least: 

Quote

Containers were Automatically Updated:

[heimdall, pi-hole, ...]

 

53 minutes ago, fatty-insecurity5877 said:

I just tried the built in flash backup utility again and it also makes the drive read only after a backup.

Hmm, did you have a look at the device log? It feels like I/O errors are making the filesystem ro (read-only).
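As a generic Linux sanity check (not plugin-specific, and the `/boot` mount point is an assumption for the Unraid flash drive), you can ask `/proc/mounts` whether the kernel has remounted a filesystem read-only:

```shell
#!/bin/bash
# Check whether a mount point is currently mounted read-only by parsing
# /proc/mounts. Generic Linux; nothing here comes from the plugin itself.
# If it reports read-only, `dmesg | grep -i error` usually shows why.

is_readonly() {  # usage: is_readonly <mountpoint> [mounts-file]
    # Field 2 is the mount point, field 4 the comma-separated options.
    awk -v mp="$1" '$2 == mp && $4 ~ /(^|,)ro(,|$)/ { found=1 } END { exit !found }' "${2:-/proc/mounts}"
}

if is_readonly /boot; then
    echo "/boot is read-only -- check the device log (dmesg) for the cause"
fi
```

On Unraid the flash drive is normally mounted at `/boot`; adjust the mount point for other systems.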

12 minutes ago, KluthR said:

See the backup log tab, it's all documented.

I can see this information there:

Quote

[25.10.2023 03:07:16][ℹ️][Main] Auto-Update for 'jellyfin' is enabled and update is available!! Installing...

Would it be possible to add this to the notification? Maybe a simple checkbox in the UI, plus a regex filter on the log output for "Installing..." to build one notification?
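A minimal sketch of that idea (this is not a plugin feature; the log line format is assumed from the sample quoted above, and the demo log is fabricated) could scrape the log and build one summary line:

```shell
#!/bin/bash
# Hypothetical sketch: collect the container names from "Installing..."
# log lines and join them into a single notification-style summary.
# The log format is assumed from the sample quoted above.

summarize_updates() {
    # Pull the name between quotes on each auto-update line, then join.
    sed -n "s/.*Auto-Update for '\([^']*\)'.*Installing\.\.\..*/\1/p" "$1" \
        | paste -sd, - | sed 's/,/, /g'
}

# Demo against a fabricated log excerpt:
log=$(mktemp)
cat > "$log" <<'EOF'
[25.10.2023 03:07:16][i][Main] Auto-Update for 'jellyfin' is enabled and update is available!! Installing...
[25.10.2023 03:08:02][i][Main] Auto-Update for 'pi-hole' is enabled and update is available!! Installing...
EOF

names=$(summarize_updates "$log")
[ -n "$names" ] && echo "Containers were automatically updated: [$names]"
rm -f "$log"
```

The plugin would of course want to hand that string to Unraid's notification system rather than echo it; this only shows the filtering step.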

1 hour ago, KluthR said:

Hmm, did you have a look at the device log? It feels like I/O errors are making the filesystem ro (read-only).

I was able to find the device log but can't seem to locate it now...

 

It was complaining about possible corruption and told me to run fsck. I took the drive out and tried doing a scan and repair on a Windows machine, but that failed. I have since replaced the drive with a smaller one (the one I was having issues with was a brand-new Samsung 128GB; I'm now using a 32GB Kingston) and, as of now, no issues. There were a lot of what appeared to be junk files on the other drive, with very odd timestamps and names. I'm not entirely sure how corruption could have occurred so soon; this build has only been running for 4 days.


Need some assistance, please.

 

Getting this error: 
 

[⚠️][Main] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.

 

I saw in a previous post that mappings were to blame. I checked for mappings, but I can't tell which one is the culprit here.

I attached the debug log. Also, if I try to restore via the plugin, I do not see any backup.

 


 

 

If I'm not mistaken, even though I got that warning, the backups completed successfully and I could restore manually, correct?

Nothing is corrupt inside each archive. I checked a few of them and they appear to be fine.

 

I ran one backup with the verify-backup option set to no. It succeeded, and it is now also visible in "Select Backup" under restore.

Some containers have unique mappings, like Plex and others, so I'm not sure how to proceed here.

 

So should I just keep verify set to "no" to ignore the possible mapping issues, or is there something I'm doing wrong?

 

TIA

backup.debug.log


Would it be possible to include a list of excluded folders?

 

My main culprit is Plex: the media folder is 58GB and I don't need to back it up, as that data will regenerate in a DR scenario.

 

Full path to exclude:

"/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media"

 

I could work around this using your pre-run script to stop the Plex container, mv the Media folder outside of the appdata path, allow the backup to happen, then move it back in the post-run script, but I feel dirty just writing this paragraph 🙂

 

I'm using the compressed archive option, so I end up with a series of tar files; maybe this could be done with tar's built-in exclude options:

 

tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .
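To sanity-check that approach with the Plex path above (the directory names come from the post; the throwaway tree, file names, and temp locations are made up for illustration), note that the exclude pattern must be quoted because of the space in "Application Support":

```shell
#!/bin/bash
# Demo of tar's --exclude against a throwaway tree mimicking the Plex
# appdata layout above. Only the directory names come from the post.
src=$(mktemp -d)
mkdir -p "$src/Library/Application Support/Plex Media Server/Media"
mkdir -p "$src/Library/Application Support/Plex Media Server/Plug-ins"
touch "$src/Library/Application Support/Plex Media Server/Media/thumb.jpg"
touch "$src/Library/Application Support/Plex Media Server/Plug-ins/keep.me"

archive="$src.tgz"
# Quote the pattern: the path contains a space. Pattern is relative to
# the -C directory, matching the "." that tar archives.
tar -C "$src" \
    --exclude='./Library/Application Support/Plex Media Server/Media' \
    -zcf "$archive" .

kept=$(tar -tzf "$archive")
echo "$kept"
```

The listing should show `Plug-ins/keep.me` but nothing under `Media/`, since GNU tar skips an excluded directory and everything inside it.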

 

On a separate note, thanks for maintaining this plugin, it's brilliant.


Hi, quick bug report (I think) on the execution scripts. From what I understood, the macro steps in the backup function are:

  1. Initialize
  2. Stop Dockers
  3. Backup Docker
  4. Restore Docker State
  5. Update (if checked) 
  6. Other Backups
  7. End


We can place the script in: 

  1. "Pre-run" aka step 1
  2. "Pre-backup" supposedly end of step 2 (or just before 3) 
  3. "Post-backup" supposedly end of step 3 (or just before 4)
  4. "Post-run" aka step 7


If we choose pre-backup, the script is run just before the containers are stopped. In fact, it runs almost as if it were configured as pre-run.

[31.10.2023 00:39:52][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[31.10.2023 00:39:52][ℹ️][Main] Backing up from: /mnt/user/appdata, /mnt/cache/appdata
[31.10.2023 00:39:52][ℹ️][Main] Backing up to: /mnt/user/*******/*******/*******/*******
[31.10.2023 00:39:52][ℹ️][Main] Selected containers: Jellyfin, code-server, flaresolverr, homepage, jellyseerr, mc_server-minecraft-server-1, prowlarr, qbittorrent-vue, radarr, red-discordbot, sonarr
[31.10.2023 00:39:52][ℹ️][Main] Saving container XML files...
[31.10.2023 00:39:52][⚠️][Main] XML file for mc_server-minecraft-server-1 was not found!
[31.10.2023 00:39:52][ℹ️][Main] Executing script '/mnt/user/appdata/scripts/runmover.sh' 'pre-backup' '/mnt/user/*******/*******/*******/*******'...
[31.10.2023 00:39:56][ℹ️][Main] Script executed!
[31.10.2023 00:39:56][ℹ️][Main] Method: Stop all container before continuing.
[31.10.2023 00:39:56][ℹ️][sonarr] Stopping sonarr... done! (took 5 seconds)
[31.10.2023 00:40:01][ℹ️][red-discordbot] Stopping red-discordbot... done! (took 4 seconds)
[31.10.2023 00:40:05][⚠️][Main] Backup cancelled! Executing final things. You will be left behind with the current state!
[31.10.2023 00:40:05][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[31.10.2023 00:40:05][ℹ️][Main] ❤️
 


If we choose post-backup, the script is run after restoring the container state, so it runs with the containers online (even though the help text says "(before containers would start)").

...
[31.10.2023 00:55:50][ℹ️][sonarr] Backing up sonarr...
[31.10.2023 00:55:52][ℹ️][sonarr] Backup created without issues
[31.10.2023 00:55:52][ℹ️][sonarr] Verifying backup...
[31.10.2023 00:55:53][ℹ️][Main] Set containers to previous state
[31.10.2023 00:55:53][ℹ️][code-server] Starting code-server... (try #1) done!
[31.10.2023 00:55:57][ℹ️][flaresolverr] Starting flaresolverr... (try #1) done!
[31.10.2023 00:56:00][ℹ️][homepage] Starting homepage... (try #1) done!
...
[31.10.2023 00:56:21][ℹ️][qbittorrent-vue] Starting qbittorrent-vue... (try #1) done!
[31.10.2023 00:56:27][ℹ️][radarr] Starting radarr... (try #1) done!
[31.10.2023 00:56:32][ℹ️][red-discordbot] red-discordbot is being ignored, because it was not started before (or should not be started).
[31.10.2023 00:56:32][ℹ️][sonarr] sonarr is being ignored, because it was not started before (or should not be started).
[31.10.2023 00:56:32][ℹ️][Main] Executing script '/mnt/user/appdata/scripts/runmover.sh' 'post-backup' '/mnt/user/*******/*******/*******/ab_*******'...
[31.10.2023 00:56:47][ℹ️][Main] Script executed!


I think the naming is a little confusing if this behaviour is expected, and it would be nice to have a script slot that is guaranteed to run while the containers are offline.
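For reference, judging by the log lines above ("Executing script '...runmover.sh' 'pre-backup' '...'"), the plugin appears to invoke one script with the hook name as the first argument and the destination path as the second; treating that as an assumption rather than documented plugin API, a single script can branch on the hook:

```shell
#!/bin/bash
# Hypothetical single hook script. The calling convention ($1 = hook
# name, $2 = destination path) is inferred from the log excerpts above,
# not from official plugin documentation.

handle_hook() {
    local hook="$1" dest="$2"
    case "$hook" in
        pre-run)     echo "before anything else (step 1)" ;;
        pre-backup)  echo "expected to run with containers stopped" ;;
        post-backup) echo "archives written to $dest" ;;
        post-run)    echo "everything finished (step 7)" ;;
        *)           echo "unknown hook: $hook" >&2; return 1 ;;
    esac
}

handle_hook "${1:-pre-run}" "${2:-/mnt/user/backups}"
```

Branching on the hook name at least makes it obvious in your own script which slot actually fired, which helps when the timing differs from the label.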

On 4/10/2023 at 10:23 AM, KluthR said:

ToDo / Roadmap

 

Any word on recreating the previous functionality that allowed specifying a different path for the flash drive and the VM data? It's super handy that all the containers back up somewhere, but users may need or want to store their flash drive backup somewhere else. If Unraid is down, it would be handy to have the flash backup stored elsewhere; if you can't boot Unraid, you can't get to your flash backup if it's on the server with everything else...


So if I'm reading some of the posts here correctly, this feature is now disabled, correct?

 

Create Separate Archives?

 

I really liked this feature because, if I ever needed to look back at a backup and grab something specific (a container's config, etc.), I could untar that single container's archive really quickly, whereas now I have to untar the entire backup.

 

If this is accurate, would you consider adding it back, @KluthR?

 

Thanks for all your hard work on keeping this up to date. It has saved my a** several times.

 


On 10/28/2023 at 4:42 PM, MoldavianRO said:

Saw on a previous post that mappings were to blame.

tdarr's mapped volume timestamps differ, so it seems the data inside it gets modified during backup. Such things are not handled yet, but container grouping (not yet available) seems to fix those issues here.

 

On 10/29/2023 at 12:40 PM, Anubis-X said:

Did you find some issue? Can I help you with more information?

Do you have HTTPS enabled for your Unraid UI? That would prevent the browser from loading mixed content, because the PNG storage is HTTP-only.

 

On 10/31/2023 at 1:00 AM, NoOne_Ale said:

Hi, quick bug report (i think) on the execution script.

Will check this

 

On 11/1/2023 at 5:22 PM, House Of Cards said:

Any word on recreating the previous functionality to allow specifying a different path for the flash drive as well as the VM data?

Which different paths? To back up to? To separate the backup?

 

48 minutes ago, srfnmnk said:

So if I'm reading some of the posts here correctly, this feature is now disabled, correct?

 

Create Separate Archives?

No, why should it be disabled? The plugin is able to create separate archives, one per container.

1 hour ago, KluthR said:

No, why should it be disabled? The plugin is able to create separate archives, one per container.

 

In the new version of Appdata Backup, there is no option to "create separate archives". That's why I assumed the feature was disabled.

3 hours ago, KluthR said:

Which different paths? To backup to? Separate the backup?

 

The plugin always creates separate archives for each Docker container, yes. That's not what I'm referring to. I am referring specifically to the two options for backing up the VMs and the flash drive.

 


 

Basically, if you select yes, enable an option to choose an alternate path, so you could specify that those items are stored somewhere different from the container backups. The important use case is that the flash backup is "special": most of us store everything on Unraid because that's where our main storage is. But what if Unraid won't boot? You can't get to the backups this plugin creates until you get a copy of your flash backup off the server. This would let you keep the relatively small flash backup somewhere separate from the rest.

 

That's what I was previously doing, but that functionality didn't port over to the updated version.  Of course there are workarounds, but this would simplify restoration in a worst-case situation.

 

As always, I appreciate the work you're doing to keep this plugin alive. I'm by no means complaining; I just think this is pretty important functionality for backups, and it would be helpful.

 

Have a great weekend. 
