
[Plugin] Appdata.Backup



58 minutes ago, KluthR said:

Yep, that's ok.

 

Awesome, thanks! It would be really nice to have a return code I could use for my alerting and healthchecks.

 

Would it be too much to ask to include something like this at the end:

 

if ($errorOccurred) {
    exit(1);
} else {
    exit(0);
}

 

Sorry, I don't know much PHP, but you catch my drift I hope. 😅

 

I guess all the "goto end" stuff would need to be pointed there as well.
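
For context, this is roughly how I'd consume such an exit code for my healthchecks (just a sketch: the command and the ping URL are placeholders, not anything the plugin actually ships):

<?php
// Rough idea only: wrap whatever triggers the backup, then ping a healthcheck
// on success. The command path and URL below are placeholders.
exec('/path/to/trigger-appdata-backup', $output, $exitCode);

if ($exitCode === 0) {
    file_get_contents('https://hc-ping.com/REPLACE-WITH-YOUR-UUID'); // ping only on success
} else {
    fwrite(STDERR, "Backup failed with exit code $exitCode\n");
    // alerting hook would go here
}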

On 1/18/2024 at 12:47 PM, SirCadian said:

Essentially, I'm trying to get a working remote backup of appdata to Backblaze, using Duplicacy to de-duplicate. At the moment, Appdata Backup wraps everything up into per-container tar files. This is fine (preferable, even) for local backup, but because each tar file probably contains at least one changed file, every tar file differs from the previous backup and the entire contents of appdata (~70 GB for me) get uploaded every time I run the backup.

 

Shouldn't deduplicating backup tools handle this situation just fine using rolling hashes (e.g. Rabin-Karp)? As long as the file is not compressed I would not expect issues. Did you actually test with compression=off?
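
To illustrate why I'd expect this to work (just a toy sketch, not how Duplicacy actually chunks): content-defined chunking picks boundaries from a rolling hash over the data itself, so unchanged regions of an uncompressed tar produce the same chunks even if data earlier in the archive shifted. Roughly:

<?php
// Toy content-defined chunking with a Rabin-Karp style rolling hash.
// Not Duplicacy's real algorithm - just to show why boundaries re-sync:
// a boundary depends only on the last $window bytes, not on byte offsets.
function chunkFingerprints(string $data, int $window = 48, int $maskBits = 13): array {
    $base = 257;
    $mod  = 1000000007;
    $mask = (1 << $maskBits) - 1;      // average chunk size around 2^13 bytes
    $pow  = 1;
    for ($k = 0; $k < $window - 1; $k++) {
        $pow = ($pow * $base) % $mod;  // base^(window-1), used to drop the oldest byte
    }

    $chunks = [];
    $start  = 0;
    $hash   = 0;
    $len    = strlen($data);

    for ($i = 0; $i < $len; $i++) {
        if ($i >= $window) {
            // slide the window: remove the contribution of the byte leaving it
            $hash = ($hash - (ord($data[$i - $window]) * $pow) % $mod + $mod) % $mod;
        }
        $hash = ($hash * $base + ord($data[$i])) % $mod;

        if ($i + 1 - $start >= $window && ($hash & $mask) === 0) {
            // boundary hit: fingerprint the chunk; identical data => identical fingerprint
            $chunks[] = hash('sha256', substr($data, $start, $i + 1 - $start));
            $start = $i + 1;
        }
    }
    if ($start < $len) {
        $chunks[] = hash('sha256', substr($data, $start));
    }
    return $chunks;
}

With compression turned on, a change near the start of the tar changes all the compressed bytes after it, so the chunks no longer line up - hence my question about compression=off.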

 

On 1/18/2024 at 12:47 PM, SirCadian said:

I plan to use a pre-backup script (assuming it runs after each container has been stopped) to copy the container appdata directory to a local backup location, and then use a scheduled Duplicacy backup to send the files to Backblaze. That should give me local tar backups from Appdata Backup and a nice versioned, de-duplicated backup on Backblaze that makes efficient use of my cloud storage and network connection.

 

Maybe not the most elegant solution, but why not just use the post-run script to extract all the tars (into the AB backup dir, or into your backup drop dir, doesn't matter), then delete the original tars?
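
Something like this as a rough sketch (PHP only because that's what is already floating around in this thread; a small bash script would do the same job, and both paths are made up - adjust them to your setup):

<?php
// Rough sketch of a post-run script: extract every per-container tar into a
// plain-files drop dir (which Duplicacy then backs up) and delete the tar.
// Both paths below are assumptions - adjust to your own setup.
$backupDir  = '/mnt/user/backups/appdata/latest';
$extractDir = '/mnt/user/backups/appdata-extracted';

foreach (glob($backupDir . '/*.tar') as $tarFile) {
    $target = $extractDir . '/' . basename($tarFile, '.tar');
    if (!is_dir($target) && !mkdir($target, 0777, true)) {
        fwrite(STDERR, "Could not create $target\n");
        continue;
    }
    $out = [];
    exec('tar -xf ' . escapeshellarg($tarFile) . ' -C ' . escapeshellarg($target), $out, $rc);
    if ($rc === 0) {
        unlink($tarFile);   // only remove the tar once extraction succeeded
    } else {
        fwrite(STDERR, "Extraction failed for $tarFile (tar exit code $rc)\n");
    }
}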

13 hours ago, sir_storealot said:

 

Shouldn't deduplicating backup tools handle this situation just fine using rolling hashes (e.g. Rabin-Karp)? As long as the file is not compressed I would not expect issues. Did you actually test with compression=off?

I didn't, I had just assumed Duplicacy wouldn't look in the tar files.  After your comment I dug around in the Duplicacy forums and came across this post.  Thanks for the pointer, I'll give things a whirl without compression and see how it goes.

  • Like 1

Hi I have been getting this Plex error.

 

[05.02.2024 03:15:07][][plex] tar creation failed! Tar said: tar: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata/TV Shows/0/f9b0450807cb79fd02ccc1c93207c3d006c5ad6.bundle/Contents/com.plexapp.agents.thetvdb/posters/: Cannot savedir: Structure needs cleaning; tar: Exiting with failure status due to previous errors

 

I am wondering if this is a Plex issue? 

 

Debug log ID 029ade43-0fb1-4185-b399-574cfc0cd42e

6 hours ago, SirCadian said:

I didn't, I had just assumed Duplicacy wouldn't look in the tar files.  After your comment I dug around in the Duplicacy forums and came across this post.  Thanks for the pointer, I'll give things a whirl without compression and see how it goes.

 

Interesting link, thanks for sharing! I always assumed the best deduplication would be plain files, then uncompressed tarball, but from the thread it seems it should not matter much. Good to know, as I am lazy, and just use uncompressed tar, and let my backup tool handle dedup/compression. Please share your insight if you get around to doing some actual first-hand tests of untarred vs. tarred!

5 hours ago, nas_nerd said:

Hi I have been getting this Plex error.

 

[05.02.2024 03:15:07][][plex] tar creation failed! Tar said: tar: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata/TV Shows/0/f9b0450807cb79fd02ccc1c93207c3d006c5ad6.bundle/Contents/com.plexapp.agents.thetvdb/posters/: Cannot savedir: Structure needs cleaning; tar: Exiting with failure status due to previous errors

 

I am wondering if this is a Plex issue? 

 

Debug log ID 029ade43-0fb1-4185-b399-574cfc0cd42e

 

Not a Plex issue; it sounds like filesystem corruption. Check here for more info:

https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/

(legacy docs link, but there does not seem to be a current version)

Edited by sir_storealot
On 1/18/2024 at 7:02 PM, JUST-Plex said:

Is the tar done locally and then sent, or is it done directly on the target?

No, nothing is prepared locally first. I issue the tar command with the destination paths, and tar collects and writes directly to the target.

 

On 1/18/2024 at 7:47 PM, ms4sman said:

Any suggestions regarding when it tries to restart the container and detects it is already started, but then in the morning it is clearly not started?

Maybe a hung stop process which succeeded later anyway? Don't ask me how that could happen, but if Docker says it was running, it was running (somehow).

 

On 1/18/2024 at 9:07 PM, SirCadian said:

do PreBackup or PostBackup run in the window while the containers are stopped?  I'm assuming not

postBackup fires right after the last backup, right before the containers are started up again.

 

On 1/19/2024 at 2:02 AM, Revan335 said:

But yes, the container is actually not running, and backing up the not-running container put this warning/message in the log.

Found the issue. It will be fixed in the next update.

 

On 1/21/2024 at 8:15 PM, schreibman said:

Any ideas on what causes this error?

TBH, no. Please try it again with the upcoming release.

 

On 1/22/2024 at 2:05 PM, Bjur said:

1. Is incremental backup available?

2. What is the general consensus on Plex cache etc. folders. Should they be excluded or not?

1. No, not (yet).

2. Cache folders and the like can be excluded. The plugin will display a hint if it detects a Plex container. But there is no general guidance on how to perfectly handle Plex backups :/

 

 

  • Like 2
On 1/23/2024 at 10:43 PM, vw-kombi said:

Is this simply a verification failed message?

Yes, and it also tells you what the issue was: some contents differ (between the backup and the verification against the just-backed-up files). That tells me something was accessing the file(s) during the backup.

 

On 1/25/2024 at 5:14 AM, Avsynthe said:

I could set "save external volumes" to yes and add the other external volumes to the exclusion list?

Yes. Add any unwanted paths to the exclusion list. I never tried backing up things like sockets, but I believe a .sock file just gets backed up as a 0-byte file.

 

On 1/28/2024 at 5:03 PM, InReasonNotFish said:

When are dockers stopped?  I assumed that it was before the backup started so that verifications would be successful.

Exactly. But you could adapt your config to skip that. What does the backup log say?

 

On 2/1/2024 at 9:13 PM, Nodiaque said:

I was wondering if it's possible to have a pre-run backup script per container?

Per-container custom scripts are on the to-do list.

 

 

On 2/4/2024 at 12:01 PM, sir_storealot said:

Awesome, thanks! It would be really nice to have a return code I could use for my alerting and healthchecks.

Should be returning 0 or 1 in the next update.

 

20 hours ago, sir_storealot said:

I always assumed the best deduplication would be plain files, then uncompressed tarball, but from the thread it seems it should not matter much. Good to know, as I am lazy, and just use uncompressed tar, and let my backup tool handle dedup/compression

I saw many users asking if the plugin could create unpacked backups for exactly that reason. So all Duplicacy users could use the uncompressed setting (= .tar) and Duplicacy would work with it?

  • Like 1

The new update is coming

It has been a while since the last stable update. There were some betas (though I never got feedback on them), but I had other work to do over the last few weeks. I tested the major changes again and I can see that they are working.

I will release the update today and I hope that it does not break anything.

 

In parallel, I have started implementing new fixes and features. These will be available in a new beta soon.

  • Like 1
  • Thanks 5

Not introduced with the new update, but it persists from the previous version: [06.02.2024 09:24:03][⚠️][Main] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.

 

Any ideas on how to fix this?

 

3b3a06ad-77e4-499b-b583-673f10c3913b

 

Edited by lehmand001
3 hours ago, KluthR said:

I hope that it does not break anything.

Argh! Just found two small bugs and fixed them. One of them would mess with the grouped container order. 2024.02.06a is on its way...

 

2 hours ago, lehmand001 said:

Any ideas on how to fix this?

/mnt/cache/appdata/storj/identity/storagenode_optiplex1/logs/node.log: file changed as we read it

It seems this file was being modified during the backup. But the mapping exists in only one container. Is any other app accessing that file from outside?

On 12/11/2023 at 4:27 PM, Healzangels said:

 

I kept looking through the thread and saw this, so I gave it a whirl. It seems to have fixed my problem as well. I can now save the settings changes! I'm going to continue testing, but it seems promising!

Greetings once again!

I've found that after each update I need to re-add this line to have my appdata backup function as intended and not have issues saving.


 

<input type="hidden" name="csrf_token" value="<?=_var($var,'csrf_token')?>">

 

I know I'm in the minority, as this issue doesn't seem to happen to everyone, but I'm wondering if that change could be baked in so it doesn't have to be re-applied every time the plugin is updated.

Cheers!

Edited by Healzangels
1 hour ago, Healzangels said:

I've found that after each update I need to re-add this line to have my appdata backup function as intended and not have issues saving.

Yes, I remember! I just forgot to include it. I don't know if I already mentioned it here: I discussed this within the dev team. The webgui indeed hot-patches every POST request and injects the CSRF value via JavaScript. However, if that fails, we get the result you got. For some reason, exactly this on-the-fly CSRF token injection simply does not work, and I don't know why. I remember you said you don't have any blocking plugins enabled?

 

Anyway, I just published a third update which includes this line and makes the CSRF token a permanent form member. That "fixes" any injection issues, more as a workaround.

Edited by KluthR
  • Thanks 1
2 hours ago, KluthR said:

It seems this file was being modified during the backup. But the mapping exists in only one container. Is any other app accessing that file from outside?

 

I ran a pre-run script to stop the running containers that might have been touching those logs. That seemed to work. Many thanks!
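
In case it helps anyone else, the pre script is nothing fancy; roughly this (the container names below are just examples, not my real ones):

<?php
// Pre-run script sketch: stop containers that might still be writing to files
// the backup is about to read. Container names here are placeholders.
$containers = ['storj-exporter', 'log-shipper'];

foreach ($containers as $name) {
    exec('docker stop ' . escapeshellarg($name), $out, $rc);
    echo $rc === 0 ? "Stopped $name\n" : "Failed to stop $name (exit code $rc)\n";
}
// The matching post-run script does 'docker start' for the same names.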

6 hours ago, KluthR said:

Should be returning 0 or 1 in the next update.

 

I saw many users asking if the plugin could create unpacked backups for exactly that reason. So all Duplicacy users could use the uncompressed setting (= .tar) and Duplicacy would work with it?

I'm in the process of testing that now and will let you know. Local uncompressed backups are about double the size of the compressed backups. Duplicacy has built-in compression, which reduces the remote backup size down to around half the size of the local backups. It seems to have de-duplicated properly, as it only transferred around 10% of the appdata backup chunks during the backup process. I want to run another backup overnight to double-check this though.

On 2/5/2024 at 4:10 PM, sir_storealot said:

 

Interesting link, thanks for sharing! I always assumed the best deduplication would be plain files, then uncompressed tarball, but from the thread it seems it should not matter much. Good to know, as I am lazy, and just use uncompressed tar, and let my backup tool handle dedup/compression. Please share your insight if you get around to doing some actual first-hand tests of untarred vs. tarred!

I'm not testing untarred vs. tarred, but I am testing that tar files get deduplicated properly (see the post above). It looks to be working, but I want to double-check.

  • Upvote 1

OK.  Last night's backup went through as expected.

2024-02-07 06:45:58.617 INFO BACKUP_STATS Files: 65 total, 81,341M bytes; 65 new, 81,341M bytes
2024-02-07 06:45:58.617 INFO BACKUP_STATS File chunks: 12739 total, 81,341M bytes; 792 new, 4,772M bytes, 3,009M bytes uploaded
2024-02-07 06:45:58.617 INFO BACKUP_STATS Metadata chunks: 4 total, 943K bytes; 4 new, 943K bytes, 723K bytes uploaded
2024-02-07 06:45:58.617 INFO BACKUP_STATS All chunks: 12743 total, 81,342M bytes; 796 new, 4,773M bytes, 3,010M bytes uploaded
2024-02-07 06:45:58.617 INFO BACKUP_STATS Total running time: 00:15:55

Duplicacy is definitely de-duplicating and only uploading changed chunks.  Upload last night was around 4% of the total backup size.  Duplicacy is also compressing the uncompressed data on upload so it takes ~50% less space in the remote bucket.

 

I'll likely still only run this once a week, as I have copies held both locally on the array and on my local PC (once daily, a user script waits until it sees Unraid mount a remote share on my PC and then copies the local backups across). Now I just need to have a think about VM backup, particularly my Home Assistant instance... I really don't fancy having to rebuild that from scratch in the event of a catastrophic failure.

  • Upvote 2

@SirCadian glad it is working fine for you! You actually inspired me to do a little experiment regarding deduplication efficiency of the Appdata Backup output.

 

Test scenario:
Appdata Backup Full flash + VM meta (1 VM) + docker config backup (13 containers), AB compression disabled
This is then backed up to an external repository using (a) restic and (b) kopia to compare the two, including best-speed deflate compression.

 

AB Backup 1 taken at t0 (source dir size 1552 MB)
AB Backup 2 taken at t1 = t0 + 24h (source dir size 1548 MB)

 

Normal system/container usage during the 24h period, nothing special. Note the 2nd backup is slightly smaller; maybe some logs inside containers got deleted, a database got trimmed, or whatever.

 

Results:
Repo size after 1st backup:
kopia:  1038 MB
restic:  988 MB

Repo size after 2nd backup:
kopia:  1291 MB (+253 MB)
restic: 1086 MB (+98 MB)

 

Results - Variant "untarred":

I also did a test (with fresh repositories) untarring the appdata backup tars (and then deleting them) to see if backing up the raw files improves deduplication, and it did:

kopia:  untarred size increase t0 -> t1: 154 MB (99 MB less)
restic: untarred size increase t0 -> t1:  75 MB (23 MB less)

 

My takeaway:
As long as Appdata Backup compression is turned off, deduplication works OK. For the best deduplication, the files need to be untarred. It could be an interesting addition for the AB plugin to offer an option to just copy files instead of tarring them.

 

Notes:
The flash backup was still compressed the whole time; I did not want to mess with that.

21 hours ago, KluthR said:

Yes, I remember! I just forgot to include it. I don't know if I already mentioned it here: I discussed this within the dev team. The webgui indeed hot-patches every POST request and injects the CSRF value via JavaScript. However, if that fails, we get the result you got. For some reason, exactly this on-the-fly CSRF token injection simply does not work, and I don't know why. I remember you said you don't have any blocking plugins enabled?

 

Anyway, I just published a third update which includes this line and makes the CSRF token a permanent form member. That "fixes" any injection issues, more as a workaround.

Thanks so much! I double-checked again yesterday for possible conflicts with browser extensions, but to no avail.
Tested with the new release and the "fix" is working great, so thanks again! Cheers!

21 hours ago, Revan335 said:

Containers like PlexRipper are falsely detected as Plex Media Server.

Yes, because the name contains "Plex". To be honest, I don't have the Plex container myself, but it seems I should check the image, not the custom name. I will look into this.

 

 

Edited by KluthR

Hi all,

I use this nice plugin to back up all my containers :) Don't know if I'm totally off course here :)

I do the backup to an external SSD.

Everything good.

 

But I also use Duplicacy to back up the external drive to Google Drive.

But when I restore I get

“ERROR RESTORE_CHOWN Failed to change uid or gid: lchown”

Something about permissions.

Restoring with duplicacy fails becurse of permissions? - General Support - Unraid

 

Don't know if I'm overthinking this; it can probably be handled with -ignore-owner in Duplicacy.

But could it be that the Appdata Backup plugin makes the backup as one user, and when Duplicacy restores it tries to set the owner that Appdata Backup used?

 

I can't find a good answer to this.

Regards, Daniel

