[Plugin] CA Appdata Backup / Restore v2



Hi guys,

 

This is either a feature request or a help request, I guess.

 

I back up my appdata once a week, then use Duplicati to send that off to Backblaze. It works pretty well.

The main issue is that every week I'm uploading a very large new tar.gz file; I'd rather only upload the changes.

 

Therefore I would like either an option to back up appdata without putting it in a tar file at all, or help writing a script to extract the latest tar to a directory that can then be sent to Backblaze.

 

I have written a script that finds the most recent tar.gz, which I've called find_latest.sh; it contains:

find "$(pwd)" -name '*.tar.gz' | sort -rn | head -n 1

This gives output like this:

/mnt/user/backups/unraid-appdata/[email protected]/CA_backup.tar.gz

I cannot seem to pipe this file to tar though.

Trying this (to list the contents):

 ./find_latest.sh | tar -tf -

Gives:

tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

 

Any ideas of what I'm doing wrong?

Failing that, any thoughts on an option to just copy (or rclone) the appdata sources to a folder somewhere?
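
A likely culprit: the pipe passes the file's name as text on tar's stdin, so tar tries to parse the path string itself as an archive. Passing the name as an argument instead should work (a sketch, assuming find_latest.sh is the script above):

tar -tzf "$(./find_latest.sh)"
# or, keeping the pipe:
./find_latest.sh | xargs tar -tzf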

 

Cheers,

Pacman

 

Link to comment
7 minutes ago, Squid said:

Ah, I see, thanks Squid.

 

Presumably extracting the archive could produce a similar lockup then?

 

Edit:

Perhaps I should just live with uploading an 18 GB file each week.

I might look and see if anything else can be trimmed from it I guess.

Edited by SudoPacman
Link to comment

Ah, I have an updated plan.

 

I think Duplicati will cope fine if the files are in the same directory, so I'm going to try linking a fixed 'latest' directory to the latest backup using the following script, which I can tag onto the end of the backup job.

rm latest/*
find "$(pwd)" -name '*.tar.gz' | sort -rn | head -n 1 | xargs ln -s -t latest

 

I'll feed back with the results when I trigger it overnight.

 

Cheers

 

 

EDIT:

Nope, Duplicati doesn't like the symlink, even when set to follow symlinks.

Trying to rename the directory instead, using:

cd full/path/to/backup/folder

rm -R latest
find "$(pwd)" -name '*@*' | sort -rn | head -n 1 | xargs -I '{}' mv {} latest

EDIT2:

Had to add in the cd to the full path, since the pre-start script obviously gets run from a different location.
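
For reference, a consolidated sketch of that post-run script (the destination path is an assumption, and the '*@*' pattern just matches the dated folder names the plugin creates):

#!/bin/bash
# Promote the newest dated backup folder to a fixed "latest" name for Duplicati.
backup_dir="/mnt/user/backups/unraid-appdata"   # assumed destination share
cd "$backup_dir" || exit 1
rm -rf latest
# The dated folder names sort lexicographically, so the last entry is the newest.
newest="$(find . -maxdepth 1 -type d -name '*@*' | sort | tail -n 1)"
[ -n "$newest" ] && mv "$newest" latest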

Edited by SudoPacman
Link to comment

@Squid I hope all is well with you! Just wanted to drop by and point out something I recently noticed, but which seems to have been there for a while. I have the Bitwarden_RS docker installed, and it works fine. Starting/stopping from the GUI works without issue as well.

 

But what I am seeing is that, however the CA Backup/Restore V2 plugin handles shutting down dockers, it seems not to like the Bitwarden_RS docker. When I watch the backup status, I can see that when it gets to Bitwarden it just perpetually waits for shutdown, but it seems like it doesn't issue the shutdown command, so maybe it has some issue trying to detect the container or the container's status? So it will just wait for the "Time to wait when stopping app before killing" value to time out, then kill the docker.

 

If I run a manual backup (did one today) and it's sitting in this waiting period, I can then go to the Docker tab in the GUI and shut down Bitwarden fine. Once my shutdown from the GUI is complete, I check back on the backup status and it recognizes that the container was shut down, proceeds with the rest of the docker shutdowns, and then does its thing.

 

Just wanted to bring that up; not sure if there is anything to worry about if it's force-killing Bitwarden every time it shuts it down to back it up, but I assume the container doesn't like that...

 

EDIT: Looking at it more, it seems to be just how docker stop is handling the container, not specific to this plugin. "docker stop" just doesn't seem to know when the container has shut down.
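
One way to narrow that down from the command line (the container name here is a guess; substitute whatever your Docker tab shows):

# Time how long a plain docker stop takes, then confirm the final state.
time docker stop -t 30 bitwardenrs
docker inspect -f '{{.State.Status}} (exit code {{.State.ExitCode}})' bitwardenrs

If the plain docker stop also hangs for the full timeout, the container isn't responding to SIGTERM and the plugin is just seeing the same behaviour.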

Edited by cybrnook
Link to comment

What does the following scenario signify?

 

Had a disaster, whole cache drive was deleted.

No USB backup so recreated it from scratch.

Restored a backup of appdata; I can see all the folders for the dockers in appdata, however nothing shows up under the Docker tab or under Apps/Previous Apps. Do I have to manually reinstall all apps?

Link to comment
46 minutes ago, bobo89 said:

No USB backup so recreated it from scratch.

 

This means that you lost all settings (including container settings). It is advisable to make regular backups of the USB drive, either by clicking on it on the Main tab or by using the CA Backup plugin.

 

46 minutes ago, bobo89 said:

Restored a backup of appdata; I can see all the folders for the dockers in appdata, however nothing shows up under the Docker tab or under Apps/Previous Apps. Do I have to manually reinstall all apps?

The Previous Apps feature relies on templates that are stored on the USB drive. You wiped this, thus losing the templates, which means the containers need setting up again. Since you still have the appdata folders intact, the apps will find their working files if you use the same settings as you used previously.
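
For anyone in the same position, the templates in question live under /boot/config/plugins/dockerMan/templates-user on the flash drive. If an old flash backup does exist, copying them back is enough to repopulate Previous Apps (a sketch; the backup path is a placeholder):

# Copy saved container templates from a flash backup onto the live flash drive.
cp /path/to/flash-backup/config/plugins/dockerMan/templates-user/*.xml \
   /boot/config/plugins/dockerMan/templates-user/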

Link to comment
  • 2 weeks later...

I just want to say thank you so much for making this plugin. I unexpectedly lost all the data on my cache drive earlier today but luckily I had this installed, so while I lost some very critical data I didn't have to waste a whole lot of time manually setting up all my containers again. It was just a matter of pressing restore and everything was back minus today's changes. 

 

Thank you!!!

Link to comment

For some reason that I don't understand, every time Backup Appdata is run (I have it set to run weekly on Wednesday at 3 AM) the entire server restarts. I can post logs if needed, but I'm just curious if I'm overlooking something or have enabled something in the settings for Backup / Restore Appdata to cause this. Thank you for giving this a read.

backupAppdata.PNG

Link to comment

Well, I set up the syslog server and was planning to try to replicate the error when I got an alert that there was an update for the Backup / Restore Appdata plugin. I updated it and then manually ran the backup, and now there is no error anymore. Guess that fixed it, haha. Thank you for your help!

Link to comment

The last couple of days I have woken up to a frozen Unraid server. I was using it at 4:00 AM and woke up at 9:00 AM and it was frozen (I don't get a lot of sleep, lol). My app backups run daily at 6:00 AM, so I figured it may have something to do with that.

 

Ran app backups manually today and I noticed the server crashed when it was restarting the Plex docker after all the backups and verification were done.

 

I am able to start/stop Plex in the GUI without crashing the server. How does CA Backup handle stopping and starting dockers differently in a way that might cause crashing?

 

I use the Nvidia Unraid plugin and have a Quadro K2200 passed through to Plex. I can provide diagnostics, etc. if requested.

 

Thanks,

Link to comment

The crash also happened when I started a VM yesterday which I had started several times previously without issue. I disabled unsafe interrupts, and the crash did not happen while running CA backup last night, so it seems like it could have been unsafe interrupts causing my issue.
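
For anyone chasing the same thing, the current state of that setting can be read from the loaded module (assuming the VMs use vfio_iommu_type1, which is the usual case for passthrough):

# Prints Y or N depending on whether unsafe interrupt remapping is enabled.
cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts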

Link to comment

Hello guys, I run a couple of VMs with medical data (on the cache) and I use this plugin to back up the cache (domains) so that I don't lose the patient files in a failure. My question is: my 2 VMs each have a 400G vdisk, with 80G allocated on one and 290G on the other. The cache is about 430G utilized (RAID 0). The resulting tar file is 861G. Could anyone explain whether that's normal and OK? Thank you in advance!! Great plugin!
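
One likely explanation (a guess, not verified against what the plugin actually does): raw vdisk files are usually sparse, and tar stores their full apparent size unless it is told to skip the holes, so two 400G vdisks come out near 800G in the archive. Comparing allocated versus apparent size shows the difference (paths are illustrative):

du -h /mnt/cache/domains/VM1/vdisk1.img                   # space actually allocated
du -h --apparent-size /mnt/cache/domains/VM1/vdisk1.img   # size tar will store
# GNU tar only skips the unallocated holes when asked to:
tar -cSzf domains_backup.tar.gz -C /mnt/cache domains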

Link to comment

Short version: if the server has been off for longer than the number of days set in "Delete backups if they are this many days old" and CA Appdata Backup / Restore runs a backup on a schedule, it will delete all the backups.
 

Suggested solution: it would probably be a good idea to have a setting for a minimum number of backups to keep.

 

How I came across this issue:

My cache got corrupted and my dockers stopped working. Since I didn't have time to fix it and didn't really need my server running when the containers weren't working, I shut it off until I had time to work on it. While working on it I've left it on overnight. I came back to it today and my backups have been deleted. My setting is to delete after 60 days, and, well, my server was off for over 60 days.
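
In the meantime, something like this could act as a user-side guard run instead of the plugin's auto-delete (the path, folder pattern, and numbers are all assumptions to adjust):

#!/bin/bash
# Prune dated backup folders older than 60 days, but never drop below 2 remaining.
backup_dir="/mnt/user/backups/appdata"
old="$(find "$backup_dir" -maxdepth 1 -type d -name '*@*' -mtime +60 | sort)"
total=$(find "$backup_dir" -maxdepth 1 -type d -name '*@*' | wc -l)
removable=$(( total - 2 ))
if [ -n "$old" ] && [ "$removable" -gt 0 ]; then
    echo "$old" | head -n "$removable" | xargs -r -d '\n' rm -rf
fi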

Link to comment

Amazing tool, thank you so much!

 

I have a slightly irregular situation. I changed flash drives when building a new unRAID system. I wanted to start from scratch and have all of my data moved over, but I did want to keep my older docker setup. I was trying to use this app, though, and it doesn't recognize any backups even if I point it to the folder where the backup was just taken.

I already have all of my appdata and the docker image on the new system, since I moved the old cache drive over, but the dockers don't show because they are not configured on the flash drive.

Does anyone know which files I need to move over from my old flash drive's backup to the new one? These files (\flash\config\plugins\dockerMan\templates-user), or is it something more substantial?

 

I'd prefer not to manually reinstall these dockers and have to reconfigure them if possible.

Link to comment
Except that it doesn't delete any backups until a successful backup is completed. But yeah, retaining x number of backups would partially alleviate this if you catch it prior to the threshold being reached.

True, but the successful backup that it completed is unfortunately not a backup of a full working system, and in this case basically blank. I was lucky I did a manual copy at the start, so it wasn't a complete loss. I also usually keep (by renaming the folder) a permanent copy of a backup that CA Backup/Restore creates once in a while, in case I don't catch something that's gone awry within the deletion period, so I can go further back if necessary.

 

But yes, with "delete backups after 60 days" and, say, a minimum of 2 backups kept, it would definitely keep a good backup long enough.

 

In the same vein, keeping every nth backup indefinitely would be nice too, but I can understand not wanting to put in too many options.

 

Thanks Squid for the great app!

 

Link to comment
23 hours ago, Vaggeto said:

Does anyone know which files I need to move over from my old flash drive's backup to the new one? These files (\flash\config\plugins\dockerMan\templates-user), or is it something more substantial?

As I mentioned here, I copied these over to dockerMan on the flash drive, and then finally, when I went to "Previous Apps" in CA, they were there to reinstall and had both their settings and actual data. So this is solved. I still can't figure out why Backup/Restore doesn't see any backups, though.

Link to comment
  • Squid locked this topic