[Plugin] CA Appdata Backup / Restore v2


Squid


Situation: I lost my cache array due to an issue running RAM out of spec.

 

I managed to restore all docker containers using a backup.

 

I then lost the USB drive without a backup, but managed to recreate parts of it using the files in a diagnostics archive from the day before. I lost some customizations, including plugins, but the drive assignments and shares were there.

 

I reinstalled the plugin and pointed it at the old backup, and... well, now it claims the backups are corrupted with:


gzip: stdin: unexpected end of file
Zoneminder/data/events/2/2020-08-19/2267/00122-capture.jpg
/usr/bin/tar: Unexpected EOF in archive
/usr/bin/tar: Unexpected EOF in archive
/usr/bin/tar: Error is not recoverable: exiting now
Backup/Restore Complete. tar Return Value:
Restore finished. Ideally you should now restart your server
Backup / Restore Completed

 

I used those backups, which are hosted off the Unraid server, to restore just the previous day. Nothing should have been accessing them, yet now all three previous days' backups are corrupted.

 

When I pull down the tar and try to open it, it does seem to be corrupted.
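To double-check that it isn't just a transfer problem, the gzip stream can be tested in place without extracting anything (the path below is illustrative):

```shell
# Test the gzip stream of a suspect backup without extracting it.
# The path is illustrative.
backup="/mnt/user/backups/unraid-appdata/2020-08-19@03.00/CA_backup.tar.gz"
if gzip -t "$backup" 2>/dev/null; then
    echo "gzip stream OK"
else
    echo "gzip stream truncated or corrupt"
fi
```

A truncated archive fails this test with the same "unexpected end of file" the restore reported.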

 

Any idea what happened?

 

 


Hi guys,

 

This is either a feature or a help request I guess.

 

I back up my appdata once a week, then use duplicati to send that off to Backblaze. It works pretty well.

The main issue is that every week I'm uploading a very large new tar.gz file; I'd rather upload only the changes.

 

Therefore I would like either an option to back up appdata without putting it in a tar file at all, or help writing a script to extract the latest tar to a directory that can then be sent to Backblaze.

 

I have written a script that finds the most recent tar.gz, which I've called find_latest.sh; it contains:

find "$(pwd)" -name '*.tar.gz' | sort -rn | head -n 1

This gives output like this:

/mnt/user/backups/unraid-appdata/2020-10-01@05.30/CA_backup.tar.gz

I cannot seem to pipe this file to tar, though.

Trying this (to list the contents):

 ./find_latest.sh | tar -tf -

Gives:

tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

 

Any ideas of what I'm doing wrong?
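My current guess (untested assumption on my part): the pipe sends the file *path* as a line of text to tar's stdin, not the archive contents, so tar tries to parse the path string itself as an archive. Passing the path as an argument instead would look like:

```shell
# The pipe feeds the path string to tar's stdin, not the archive bytes.
# Pass the path as an argument instead (backup location as above):
latest=$(find /mnt/user/backups/unraid-appdata -name '*.tar.gz' 2>/dev/null | sort -r | head -n 1)
if [ -n "$latest" ]; then
    tar -tzf "$latest"
else
    echo "no backups found"
fi
```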

Failing that, any thoughts on an option to just copy (or rclone) the appdata sources to a folder somewhere?

 

Cheers,

Pacman

 

7 minutes ago, Squid said:

Ah, I see, thanks Squid.

 

Presumably extracting the archive could produce a similar lockup, then?

 

Edit:

Perhaps I should just live with uploading an 18 GB file each week.

I might look and see if anything else can be trimmed from it I guess.

Edited by SudoPacman

Ah, I have an updated plan.

 

I think duplicati will cope fine if the files are in the same directory, so I'm going to try linking a fixed 'latest' directory to the latest backup using the following script, which I can tack onto the end of the backup job.

rm latest/*
find "$(pwd)" -name '*.tar.gz' | sort -rn | head -n 1 | xargs ln -s -t latest

 

I'll feed back with the results when I trigger it overnight.

 

Cheers

 

 

EDIT:

Nope, Duplicati doesn't like the symlink, even when set to follow symlinks.

Trying to rename the directory instead, using:

cd full/path/to/backup/folder

rm -R latest
find "$(pwd)" -name '*@*' | sort -rn | head -n 1 | xargs -I '{}' mv {} latest

EDIT2:

Had to add the cd to the full path, since the pre-start script evidently runs from another location.

Edited by SudoPacman

@Squid I hope all is well with you! Just wanted to drop by and point out something I recently noticed, but seems to have been there for a while. I have the Bitwarden_RS docker installed, and it works fine. Starting/Stopping from the GUI works without issue as well.

 

But what I am seeing is that however the CA Backup/Restore V2 plugin handles shutting down dockers, it seems not to like the Bitwarden_RS docker. When I watch the backup status, I can see that when it gets to Bitwarden it just waits perpetually for shutdown; it seems the shutdown command never takes effect, so maybe it has some issue detecting the container or the container's status. It simply waits for the "Time to wait when stopping app before killing" value to time out, then kills the docker.

 

If I run a manual backup (did today) and it's sitting in this waiting period, I can then go to the Docker tab in the GUI and shut down Bitwarden fine. Once my shutdown from the GUI is complete, I check back on the backup status and it recognizes that the container was shut down, proceeds with the rest of the docker shutdowns, and then does its thing.

 

Just wanted to bring that up. Not sure if there is anything to worry about, but if it's force-killing Bitwarden every time it shuts it down to back it up, I can assume it doesn't like that...

 

EDIT: Looking at it more, it seems to be just how docker stop handles the container, not anything specific to this plugin. "docker stop" just doesn't seem to know when the container has shut down.

Edited by cybrnook

What does the following scenario signify?

 

Had a disaster; the whole cache drive was deleted.

No USB backup, so I recreated it from scratch.

Restored a backup of appdata; I can see all the folders for the dockers in appdata, but nothing shows up under the Docker tab or under Apps/Previous Apps. Do I have to manually reinstall all apps?

46 minutes ago, bobo89 said:

No USB backup so recreated it from scratch.

 

This means that you lost all settings (including container settings). It is advisable to make regular backups of the USB drive, either by clicking on it on the Main tab or by using the CA Backup plugin.

 

46 minutes ago, bobo89 said:

Restored a backup of appdata, I can see all the folders for the dockers in appdata, however nothing shows up under the docker tab, or under Apps/previous apps. Do I manually have to reinstall all apps?

The Previous Apps feature relies on templates that are stored on the USB drive. You wiped these, thus losing the templates, which means the containers need setting up again. Since you still have the appdata folders intact, the apps will find their working files if you use the same settings as you used previously.

  • 2 weeks later...

I just want to say thank you so much for making this plugin. I unexpectedly lost all the data on my cache drive earlier today, but luckily I had this installed, so while I lost some very critical data, I didn't have to waste a whole lot of time manually setting up all my containers again. It was just a matter of pressing restore, and everything was back minus today's changes.

 

Thank you!!!


For some reason that I don't understand, every time Backup Appdata runs (I have it set to run weekly on Wednesdays at 3 AM) the entire server restarts. I can post logs if needed, but I'm just curious whether I'm overlooking something or have enabled something in the settings for Backup / Restore Appdata that causes this. Thank you for giving this a read.

backupAppdata.PNG


Well, I set up the syslog server and was planning to try to replicate the error when I got an alert that there was an update for the Backup / Restore Appdata plugin. I updated it, then manually ran the backup, and now there is no error anymore. Guess that fixed it, haha. Thank you for your help!


The last couple of days I have woken up to a frozen Unraid server. I was using it at 4:00 am and woke up at 9:00 am to find it frozen (I don't get a lot of sleep, lol). My app backups run daily at 6:00 am, so I figured it may have something to do with that.

 

Ran app backups manually today and noticed the server crashed when it was restarting the Plex docker, after all the backups and verification were done.

 

I am able to start/stop Plex in the GUI without crashing the server. How does CA Backup handle stopping and starting dockers differently in a way that may cause crashing?

 

I use the Nvidia Unraid plugin and have a Quadro K2200 passed through to Plex. I can provide diagnostics, etc. if requested.

 

Thanks,


Hello guys, I run a couple of VMs with medical data (on the cache) and I use this plugin to back up the cache (domains) so that I don't lose the patient files in a failure. My question: my two VMs each have a 400 GB vdisk, with 80 GB allocated on one and 290 GB on the other, and the cache is about 430 GB utilized (RAID 0). The resulting tar file is 861 GB. Can anyone explain whether that's normal and OK? Thank you in advance!! Great plugin!
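One plausible explanation (an assumption, since I don't know which flags the plugin passes to tar): vdisk images are sparse files, and tar stores the holes as literal zeros unless it is invoked with `--sparse`, so a 400 GB vdisk contributes its full 400 GB to the archive even if only 80 GB is allocated. A minimal demonstration:

```shell
cd "$(mktemp -d)"
# A sparse file: 100 MiB apparent size, almost nothing allocated on disk.
truncate -s 100M vdisk.img
# Plain tar writes the holes out as zeros, so the archive is ~100 MiB.
tar -cf plain.tar vdisk.img
# With --sparse (-S), GNU tar detects the holes and skips them.
tar -cSf sparse.tar vdisk.img
ls -l plain.tar sparse.tar
```

If that is what's happening here, 400 GB + 400 GB of apparent vdisk size plus the rest of the cache roughly lines up with the 861 GB archive.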


Short version: if the server has been off for longer than the number of days set in "Delete backups if they are this many days old" and CA Appdata Backup / Restore runs a backup on a schedule, it will delete all the backups.
 

Solution: Probably a good idea to have a setting for minimum number of backups.

 

How I came about this issue.

My cache got corrupted and my dockers stopped working. Since I didn't have time to fix it, and didn't really need the server running while the containers weren't working, I shut it off until I had time to work on it. While working on it, I left it on overnight. Came back to it today and my backups had been deleted. My settings are set to delete after 60 days, and, well, my server was off for over 60 days.
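Until a minimum-count setting exists, one workaround (just a sketch; the backup path and dated-folder naming are assumptions based on this plugin's defaults) is to prune by count instead of age from a user script:

```shell
# Keep only the newest $KEEP backup folders, however old they are.
# BACKUP_DIR is illustrative; dated folders like 2020-10-01@05.30 sort
# lexicographically, so `sort -r` puts the newest first.
BACKUP_DIR="/mnt/user/backups/unraid-appdata"
KEEP=5
ls -1d "$BACKUP_DIR"/*@* 2>/dev/null | sort -r | tail -n +$((KEEP + 1)) | xargs -r rm -rf
```

Run after each backup (with the plugin's own age-based deletion disabled), this can never delete everything, no matter how long the server was off.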

