[Plugin] CA Appdata Backup / Restore v2


Squid

Recommended Posts

2 hours ago, Romany said:

I have the same question: can you restore a specific docker and leave the others alone? I just had an issue with my unifi docker, which I only run when I need it. I couldn't reach the web GUI, and the console showed the DB trying to start and failing. I finally went to my oldest backup created by this script, un-tarred it to a temp directory, and ran `cp -a -r` to copy the unifi directory over to the appdata directory; that fixed my issue. If I had to do a "global" restore with this script, I would have lost a lot of recent changes in my other dockers. I can work around this limitation, if indeed there is no way in the script to do that...

 

..Romany

 

When you restore there's an option to select what appdata to restore. Simply select the appdata for the Dockers you wish to restore, and skip the rest.
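If you'd rather do it by hand the way Romany described, the same steps can be scripted. A minimal sketch, where the archive path and the `unifi` member name are examples only; list the archive with `tar -tzf` first, since the stored member paths depend on how the backup was created:

```bash
# Pull a single app's folder out of a CA backup archive, then copy it
# back over the live appdata. Stop the affected container first.
mkdir -p /tmp/restore
tar -xzf /mnt/user/backups/appdata/2021-09-01/CA_backup.tar.gz -C /tmp/restore unifi
cp -a /tmp/restore/unifi /mnt/user/appdata/
```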

Link to comment
4 minutes ago, apraetor said:

 

When you restore there's an option to select what appdata to restore. Simply select the appdata for the Dockers you wish to restore, and skip the rest.

Not sure that's correct. The only place I see where to exclude specific dockers is on the first page (BACKUP/SETTINGS), and I think that only applies to backup. But I went through and excluded everything but one docker, then went to the RESTORE page and started the restore process. During a normal BACKUP the script shuts down the dockers it is going to back up, and during this restore test the script shut down all of my dockers. To me that implies it also restored all of the dockers in the archive. Hoping that Squid sees this question and imposes his imprimatur to remove all doubt.

 

...REK

Link to comment
5 hours ago, Nexius2 said:

Hello, it would be nice to be able to keep old backups.

I need to restore an app, but my backup is 3 days old; I wish I had last month's backup.

Maybe a future version could keep 1 or 2 old backups.

Thanks

You can. Just set “delete backups if they are this many days old”. If you were to set this to 14, it would keep 14 days' worth of backups.

Link to comment
1 hour ago, dianasta said:

Hello,

 

Is it possible to add password protection on the compressed files?

 

Thanks.

No, there is no option for encrypting backups that I've ever found. I think the rationale is that it's pointless, since the source (appdata) isn't encrypted. Most folks store the tarball on their array. If you're pushing the tarball somewhere else, such as a cloud storage server, then you'll have to encrypt it yourself or use an automated tool.
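If you do want the tarball protected before it leaves the server, you can encrypt it yourself after the backup finishes. A minimal sketch using gpg symmetric encryption; the path is an example, and it's meant to be run interactively since gpg prompts for a passphrase:

```bash
# Encrypt the finished backup with a passphrase before uploading.
# Produces CA_backup.tar.gz.gpg alongside the original.
gpg --symmetric --cipher-algo AES256 /mnt/user/backups/appdata/CA_backup.tar.gz

# To restore later:
# gpg --output CA_backup.tar.gz --decrypt CA_backup.tar.gz.gpg
```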

Link to comment

Is it possible to run a script at the end of the backup, before the dockers restart, passing the name/path of the file just created to the script?

 

I need to copy the created archive, e.g. "/path/to/<dated-backup-folder>/CA_backup.tar.gz", to a different location once it's been created (e.g. "/path/on/server/backup-latest/CA_backup.tar.gz"), so my script will empty the destination folder, then copy the most recent .gz there.

 

The reason for this is that I keep 21 days of backups; these are then sent to a different server (3-2-1 backup topology).

I need to send only the latest backup to Backblaze (offsite backup). Keeping 21 copies on Backblaze is costing me too much money!

 

 

Link to comment
4 hours ago, jj_uk said:

Is it possible to run a script at the end of the backup, before the dockers restart, passing the name/path of the file just created to the script?

The path is not passed to the script you supply. (The dropdown to select your script doesn't seem to work on 6.10, but you can enter the path manually.)
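Since the archive path isn't passed in, a post-run script can locate the newest archive on its own. A minimal sketch, assuming backups land in dated folders under a single root; both paths are examples, so adjust them to your shares:

```bash
#!/bin/bash
# Hypothetical paths -- adjust to your own backup and destination shares.
BACKUP_ROOT="/mnt/user/backups/appdata"
DEST="/path/on/server/backup-latest"

# Find the most recently modified CA_backup.tar.gz under the backup root.
latest=$(find "$BACKUP_ROOT" -name 'CA_backup.tar.gz' -printf '%T@ %p\n' \
          | sort -n | tail -1 | cut -d' ' -f2-)

if [ -n "$latest" ]; then
    # Empty the destination, then copy only the newest archive there.
    rm -f "$DEST"/*.tar.gz
    cp "$latest" "$DEST/"
fi
```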

Link to comment
On 9/21/2021 at 8:08 AM, KnifeFed said:

Was there ever a fix for this?

I had some garbage containers created due to partially failed `docker build` commands. Performing a complete `docker system prune --force --all --volumes` got rid of them everywhere except from CA Appdata Backup/Restore and CA Application Auto Update.

The fix was to reboot the server. Who would have thunk it? :P

Link to comment
On 9/20/2021 at 11:08 PM, KnifeFed said:

Was there ever a fix for this?

I had some garbage containers created due to partially failed `docker build` commands. Performing a complete `docker system prune --force --all --volumes` got rid of them everywhere except from CA Appdata Backup/Restore and CA Application Auto Update.

 

I gave up trying to remove it a long time ago and didn't see your reply. I just checked, and it's not showing up in CA Appdata Backup/Restore anymore. I don't know what got rid of it. Maybe a restart. I know I tried that before and it didn't work.

Link to comment
7 hours ago, AwesomeAustn said:

 

I gave up trying to remove it a long time ago and didn't see your reply. I just checked, and it's not showing up in CA Appdata Backup/Restore anymore. I don't know what got rid of it. Maybe a restart. I know I tried that before and it didn't work.

The post right above yours is actually me saying a reboot did fix it :)

Link to comment

@Squid I am new to Unraid and have the Basic version, but I want to make sure I can recover my installed apps should anything go wrong. I have 2 pool SSDs and want to ensure I have a backup of everything I need to restore all my docker apps. Does this app enable me to do this, and how? Also, how do I back up to the network or to another server/PC?

 

Thank you very much for all your contributions @Squid

Edited by shanebekker
Clarification
Link to comment

This backs up everything within the directory you specify, and works perfectly for easy restoration. It does not, however, back up any custom network you've manually created via a `docker network create` command, so in a disaster recovery you would have to redo that.
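One way to soften that gap is to record your custom network definitions at backup time so they're easy to recreate with `docker network create` later. A minimal sketch using the standard docker CLI; the output path is an example:

```bash
#!/bin/bash
# Save the name and driver of every non-default network, plus the full
# inspect output in case a network used custom subnets.
OUT="/mnt/user/backups/docker-networks.txt"

docker network ls --format '{{.Name}} {{.Driver}}' \
  | grep -vE '^(bridge|host|none) ' > "$OUT"

while read -r name _; do
    docker network inspect "$name" >> "${OUT%.txt}.json"
done < "$OUT"
```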

 

To get it onto a cloud you would install an appropriate app to sync the backup folder to the cloud.
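The Rclone approach mentioned further down the thread works well for this. A one-line sketch, where `remote:` is whichever cloud remote you've set up with `rclone config` (the remote and bucket names here are examples):

```bash
# Mirror the local backup share to a configured cloud remote.
rclone sync /mnt/user/backups/appdata remote:unraid-appdata-backups
```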

Link to comment
  • 2 weeks later...

I noticed a problem with a script in User Scripts triggering an Rclone backup where the upload was taking 12+ hours when it should take minutes.

 

I believe I've stumbled onto a problem with a file that's too big, and it's in my Appdata Backup folder. I set up appdata backups per Space Invader One's guide two years ago and haven't touched them since. I keep 30 days of backups, and they are synced with Rclone nightly.

 

My backups average ~700MB. My 11-1-21 backup file is 19GB, the next two days are also 19GB, and the 11-4-21 backup from this morning is 56GB. This large file is what is throwing Rclone off (I think the provider is giving an error, but Rclone tries to keep going).

 

Any ideas?  Server has been up for 45 days and I haven't touched a thing.
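A quick way to see what's ballooning the archive is to list its largest entries; common culprits are container logs or databases that have grown inside appdata. A minimal sketch against one of the dated backups (the path is an example; in GNU tar's verbose listing, the size is column 3):

```bash
# List the 20 largest files inside a backup archive.
tar -tzvf "/mnt/user/backups/appdata/2021-11-04/CA_backup.tar.gz" \
  | sort -k3 -n | tail -20
```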

Link to comment

I was playing around with stacks in Portainer and ran into an issue where it was recreating a container every 10 minutes (until I realized it was happening). I checked Unraid and it had 16 randomly named stopped containers. Rather than slowly go through each one in Unraid's GUI, I went through Portainer to delete the garbage containers.

 

However, ever since doing that, those 16 containers all still appear under the advanced settings section. I can't find them using any `docker` command on the server itself, and I'm not sure where it's pulling the list from. There has been no backup of appdata since this occurred, so it's not like it's pulling the 16 extra containers from a backup file or something.

 

Edit: it's a runtime thing. Unraid seems to store information about Docker and its containers in a JSON file rather than querying Docker directly. Anything that then uses the DockerClient PHP class pulls from this JSON file, which will return containers that no longer exist. Once I restarted the server, everything reset back to the containers that actually exist rather than the missing ones.
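If you hit the same thing, a quick sanity check is to ask the Docker daemon directly which containers exist; anything the plugin lists beyond this output is coming from Unraid's cached state rather than from Docker, and per the posts above a reboot clears it:

```bash
# Ask Docker itself -- not Unraid's cached JSON -- for every
# container it knows about, running or stopped.
docker ps -a --format '{{.Names}}\t{{.Status}}'
```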

Edited by iarp
Link to comment
On 11/4/2021 at 10:21 PM, ur6969 said:

I noticed a problem with a script in User Scripts triggering an Rclone backup where the upload was taking 12+ hours when it should take minutes.

 

I believe I've stumbled onto a problem with a file that's too big, and it's in my Appdata Backup folder. I set up appdata backups per Space Invader One's guide two years ago and haven't touched them since. I keep 30 days of backups, and they are synced with Rclone nightly.

 

My backups average ~700MB. My 11-1-21 backup file is 19GB, the next two days are also 19GB, and the 11-4-21 backup from this morning is 56GB. This large file is what is throwing Rclone off (I think the provider is giving an error, but Rclone tries to keep going).

 

Any ideas?  Server has been up for 45 days and I haven't touched a thing.

 

Any ideas here?  My backups are now 79GB and growing every few days.

Link to comment

As part of my backup strategy, I use CrashPlan (specifically, this Docker: https://forums.unraid.net/topic/59647-support-djoss-crashplan-pro-aka-crashplan-for-small-business/) to back up the tar.gz output of this plugin to the cloud.

For whatever reason, this takes days to upload to CrashPlan, and, because I run the backup once a month, it means that every month CrashPlan stops backing up my other files for a couple of days while this big boy makes its way to the CrashPlan servers.

 

I read recently about the --rsyncable option* in gzip. Is it possible/simple/valuable to add an option to this plugin to support that flag, in the hope** that CrashPlan doesn't have to upload the full 36 GB every month?

Many thanks!

 

*https://beeznest.wordpress.com/2005/02/03/rsyncable-gzip/

**I have no idea if CrashPlan will incrementally upload the same way that rsync would with this flag enabled.
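For anyone who wants to test the idea by hand: if your gzip build supports --rsyncable (it ships as a distro patch in many cases, so check first), you can produce the archive yourself by piping tar through gzip. A minimal sketch with example paths:

```bash
# Check whether this gzip build accepts the flag.
gzip --help 2>&1 | grep -q rsyncable || echo "no --rsyncable support"

# A gzip stream that changes less between runs, so block-based
# incremental uploaders have less new data to send.
tar -cf - /mnt/user/appdata | gzip --rsyncable > /mnt/user/backups/appdata-rsyncable.tar.gz
```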

Edited by jademonkee
Link to comment
  • 2 weeks later...

So my monthly backups run on the 25th, and like clockwork I've received a Pushover notification that the backup completed.

 

I've suffered data loss, and I have no idea what happened. My influxdb appdata folder is suddenly completely empty (no settings, no database). I figured I'd just restore the latest backup, but backups apparently haven't been stored for the past couple of months...

 

Anyone got any idea what could have happened, both with influxdb and with this plugin? My settings:

[Screenshots of the poster's plugin backup settings were attached here.]

Link to comment

The logs would have stated that an error of some sort happened during the backups. In an error situation, the old backup sets don't get deleted; this is why your earliest set from 4/25 still exists. Pretty sure a notification also goes out in the error situation.

Link to comment
  • Squid locked this topic
This topic is now closed to further replies.