[Plugin] CA Appdata Backup / Restore v2


Squid

Recommended Posts

18 hours ago, KluthR said:

Please post the line number, which is missing from your error report :)

 

Could you also please open a terminal and run:

 

ls -lh /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log

?

 

EDIT2: Ahh, I see - the previous version logged all tar output, which could grow very large, and it only gave back the last x lines of output. I changed that. The new version displays the whole log. So, if you still have an "old" logfile, it could be very large - too large to load.
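For what it's worth, if an oversized old logfile is the problem, standard shell tools can check and clear it - this is not a plugin feature, and it assumes the plugin simply recreates the file on its next run:

# Check how big the leftover logfile is
ls -lh /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log

# Empty it in place (keeps the file, resets it to 0 bytes)
truncate -s 0 /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log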

 

Could you please confirm:

 

  • The error message only appears on the Backup status page?
  • A manual (or automatic) run sorts the issue out?

 

Basically this error message disappears after the first run after the update. Not all users are affected.

 

 

 

The line number was 196 - sorry, I just updated my post to include it.

 

I ran that command in terminal and got back the following:

-rw-r--r-- 1 root root 81M Dec  4 06:34 /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log

 

It does appear on the Backup status page, but the "Backup now" button is greyed out and it is not running scheduled backups either, so as far as I can tell I have no way to run a backup at the moment. I copied one of my successful backup TARs from the older version to another location and then deleted them all from the appdata backup folder to see if that made any difference, and it did not.

Link to comment
1 hour ago, KluthR said:

Are you sure that it's not just a folder with the same name in your source folder?

You are correct. Not sure where those came from but looks like it’s time to do a little housekeeping.😁

 

1 hour ago, KluthR said:

Additionally, you could also take a look at the Backup status tab. Could you do that and post the result?

This tab only shows results from the most recent backup. I’ve done several without the error repeating so any info that may have been there is long gone. Sorry. If the error pops up again I’ll check there.

Link to comment
3 hours ago, KluthR said:

Just published an update.

I tested all cases multiple times - I would say, it should fix all open issues :)

Excellent work. My backups are much, much faster now that Plex's and Radarr's metadata are excluded:

root@Rack:/mnt/user/backups/Community_Applications/Backups/Appdata_Backup# du -h --max-depth=1 &&
ls -lh 2022-12-05\@09.56/CA_backup_plex.tar.gz &&
ls -lh 2022-12-05\@11.16/CA_backup_plex.tar.gz
13G     ./<YYYY-MM-DD@HH.MM>
94G     ./<YYYY-MM-DD@HH.MM>
105G    ./<YYYY-MM-DD@HH.MM>
105G    ./<YYYY-MM-DD@HH.MM>
94G     ./<YYYY-MM-DD@HH.MM>
409G    .
-rw-r--r-- 1 root root 77G Dec  5 10:44 2022-12-05\@09.56/CA_backup_plex.tar.gz
-rw-r--r-- 1 root root 3.8G Dec  5 11:25 2022-12-05\@11.16/CA_backup_plex.tar.gz
root@Rack:/mnt/user/backups/Community_Applications/Backups/Appdata_Backup#

The 94G and 105G backups are from me testing the (at the time) broken exclude feature, both with (94G) and without (105G) compression. The archive for Plex dropped from 77G compressed to just 3.8G compressed! For anyone interested, I exclude:

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Cache/,
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media/,
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata/,
/mnt/user/appdata/radarr/MediaCover/
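In case anyone wonders what those comma-separated entries actually do: they presumably end up as tar exclude patterns. A rough stand-alone equivalent from a terminal would be something like the following (destination path shortened to an example; the plugin's real tar invocation may differ):

# Example only - the plugin builds its own tar command
cd /mnt/user/appdata
tar -czf /mnt/user/backups/CA_backup_plex.tar.gz \
    --exclude='plex/Library/Application Support/Plex Media Server/Cache' \
    --exclude='plex/Library/Application Support/Plex Media Server/Media' \
    --exclude='plex/Library/Application Support/Plex Media Server/Metadata' \
    plex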

 

  • Like 2
Link to comment

@KluthR Thanks for continuing this very important plugin, and also for the new "separated archives" option - it's very useful when you need to open an archive. In my case, because of the very large size of a single *.tar.gz, that used to take a lot of time ...

 

Maybe you could also look into minimizing the downtime of stopped containers, as CS01-HS posted above, if you have time.

 

  • Like 1
Link to comment
1 hour ago, CS01-HS said:

One big advantage (maybe for a future version :) is that with separate archives there's no need to wait until all backups are complete before restarting individual containers, which will minimize downtime

Nice idea - it would require a link between a Docker container and its volumes. That link is already on my list, so your idea would work then.
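For reference, plain Docker already exposes that container-to-volume mapping; a hypothetical way to pull it from a terminal (using plex as the example container name):

# List the host paths mounted into a container - the container-to-volume
# link mentioned above
docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' plex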

 

I extended my notes with that 😉

  • Like 4
Link to comment

Also, as Hawkins is doing, is there an easy way to exclude subfolders from the interface? This would be a nice touch - maybe something along the lines of an edit button that shows the full list of what is going to be excluded and lets it be edited right there in the interface? (All these improvements asked for and done so far are things I wanted years ago and recommended several times, so I'm glad someone has had the time to update and continue this project.)

 

Also, one more suggestion for backups: it would be nice if there was a way to say back up these folders daily, these folders weekly, and these folders monthly, and then keep x copies of each backup per frequency, with more options.

 

So I have apps I would like backed up daily, but I also want a few weekly and monthly backups.

Example: keep daily backups for 7 days (one rolling week's worth), weekly backups for 4 rolling weeks, and monthly backups for 3 months. I would have apps listed in all 3 backup types, so I would have a nice history of backups should they ever be needed. Maybe 3 different tabs for backups, with each getting a subfolder in the destination folder:

 

/$destination_folder/Daily/

/$destination_folder/Weekly/

/$destination_folder/Monthly/
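Purely as an illustration of that layout (made-up destination path and retention numbers - nothing the plugin does today), the pruning side could be as simple as:

# Hypothetical pruning for the Daily/Weekly/Monthly layout above;
# $destination_folder and the retention periods are examples only
destination_folder="/mnt/user/backups/appdata"
find "$destination_folder/Daily"   -mindepth 1 -maxdepth 1 -type d -mtime +7  -exec rm -rf {} +
find "$destination_folder/Weekly"  -mindepth 1 -maxdepth 1 -type d -mtime +28 -exec rm -rf {} +
find "$destination_folder/Monthly" -mindepth 1 -maxdepth 1 -type d -mtime +90 -exec rm -rf {} +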

 

 

Also, I have 2 different appdata drives, but I can only select one as the Source. It would be great if we could select multiple sources and have the restore process remember where each backup came from, or allow restoring to any of the appdata locations should it be needed (select the restore location when restoring).

 

Thanks a million!

Edited by almulder
  • Like 2
Link to comment
3 hours ago, Hawkins said:

 

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Cache/,
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media/,
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata/,
/mnt/user/appdata/radarr/MediaCover/

 

Where / how do you edit the excludes? I have been running a script for Plex for about a year now so it would not grab all the metadata and such, and I would love to go back to using this plugin and also be able to exclude that for the *arrs, as it's a lot of space.

Link to comment
5 hours ago, almulder said:

Where / How do you edit the excludes

Just type them in here, each separated by a comma:

[screenshot of the plugin's exclusion settings field]

As for what to type in that field, I use QDirStat running in a Docker container to figure out where the large (in file size and in file count) directories are. I think it would be *stellar* to see that kind of visibility within the backup plugin itself. Something like this:

[screenshot of QDirStat showing directory sizes inside the Plex appdata folder]

Now you can see why I chose to exclude these directories.
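A quick terminal alternative to QDirStat, if you just want the biggest first-level directories (plain du and sort, using the Plex path from the excludes above):

# Largest first-level directories under the Plex appdata, biggest first
du -h --max-depth=1 "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server" | sort -hr | head -n 15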

5 hours ago, almulder said:

keep x copies of each backup

I second this! I've been bitten by the 'days old' setting before. If you configure it to back up every n days and you set the 'days old' setting to a number less than n, I believe it will delete your previous backups (all of them!). I think it would be great to have a setting like "keep 3 backups minimum, regardless of age, and 6 backups maximum".
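To sketch the kind of rule that would be (a hypothetical script, not how the plugin currently behaves - the backup directory, the one-dated-folder-per-run layout and the numbers are all assumptions):

#!/bin/bash
# Hypothetical "keep at least 3 / at most 6 backups" pruning - NOT the
# plugin's current logic; paths, layout and numbers are assumptions
backup_dir="/mnt/user/backups/appdata"
keep_min=3
keep_max=6
max_age_days=14

# Backup folders, newest first (one dated folder per backup run assumed)
mapfile -t backups < <(ls -1dt "$backup_dir"/*/ 2>/dev/null)

for i in "${!backups[@]}"; do
    dir="${backups[$i]}"
    # Never touch the newest $keep_min backups, whatever their age
    (( i < keep_min )) && continue
    # Beyond that, drop anything over $keep_max or older than $max_age_days
    if (( i >= keep_max )) || [[ -n $(find "$dir" -maxdepth 0 -mtime +"$max_age_days") ]]; then
        rm -rf "$dir"
    fi
done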

  • Like 4
Link to comment

I don't think this warning is as clear as it could be. Do the excluded folders really need to be in the dialog?

 

[screenshot of the backup destination warning dialog]

 

Shouldn't it just say: 

Warning: All files within /mnt/user/backups/appdata backup/ will be overwritten / deleted during a backup process!

  • Upvote 1
Link to comment
9 minutes ago, flyize said:

Shouldn't it just say: 

Warning: All files within /mnt/user/backups/appdata backup/ will be overwritten / deleted during a backup process!

I had the same thought. Also, it's not really true that all files within that location will be deleted. I have existing backups as well as custom folders and files there, and they all stay fine. The only things ever getting overwritten there are the contents of the caBackupUSBStick subfolder and caBackupVM/libvirt.img when running a backup.

Edited by dedi
Link to comment
9 minutes ago, dedi said:

Also it's not really true that all files within that location will be deleted

That's currently true. Those folders MUST be named with the generated date/time format; ALL OTHER names are ignored (such as those with the -error suffix, for instance).
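To make the naming point concrete: the generated folders follow the YYYY-MM-DD@HH.MM pattern seen earlier in this thread, so only names matching a glob roughly like the one below would ever be touched (the destination path is an example, and the exact matching the plugin does is a guess):

# Folders in the generated date/time format (e.g. 2022-12-05@09.56) are the
# ones the plugin manages; anything else, like a "-error" suffixed folder or
# your own subfolders, is left alone
ls -d /mnt/user/backups/appdata/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]@[0-9][0-9].[0-9][0-9] 2>/dev/null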

Link to comment
13 hours ago, Hawkins said:

Just type them in here, each separated by a comma:

[screenshot of the plugin's exclusion settings field]


By excluding /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Cache/, /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media/, /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata/ and /mnt/user/appdata/radarr/MediaCover/ - wouldn't you lose your watch history if you ever had to restore from such a backup? Would Plex have to re-pull all the metadata for all of your media?

I didn't know you could exclude subdirectories like this - it's awesome, thanks!

Any idea of an easy way to copy the backups to a second location?

@KluthR how do I buy you a beer for all your hard work?

Edited by gaming09
Link to comment
  • Squid locked this topic
This topic is now closed to further replies.