[Plugin] CA Appdata Backup / Restore v2


Squid


5 minutes ago, ptr727 said:

there is no history of any of my docker hub only containers

That "config" as you are calling it is stored on flash as a template. The template is used to fill in the form on the Add Container page. Apps on the Community Apps page know about templates that have already been created by the docker authors, and that is how it is able to help you install a new docker.

 

But anytime you use the Add Container page, whether for one of the Unraid-supported dockers from Community Apps or for something on Docker Hub, the settings you make on the Add Container page are stored as a template on flash, and they can be reused to get those same settings back on the Add Container page.

 

So even those Docker Hub containers can be set up with the same settings as before.
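
For reference, those user templates live on the flash drive under /boot/config/plugins/dockerMan/templates-user/, one XML file per container you have configured. A quick way to see what Previous Apps has to work with (the file names below are just examples):

> ls /boot/config/plugins/dockerMan/templates-user/
my-Plex.xml  my-Sonarr.xml  ...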

4 minutes ago, trurl said:

That "config" as you are calling it is stored on flash as a template. The template is used to fill in the form on the Add Container page. Apps on the Community Apps page know about templates that have already been created by the docker authors, and that is how it is able to help you install a new docker.

 

But anytime you use the Add Container page, whether for one of the Unraid-supported dockers from Community Apps or for something on Docker Hub, the settings you make on the Add Container page are stored as a template on flash, and they can be reused to get those same settings back on the Add Container page.

 

So even those Docker Hub containers can be set up with the same settings as before.

I hear you, but that is not what I see: at least one of my manually created containers, and one from Docker Hub via the Apps search, are not listed on the Previous Apps page (these containers do not have Unraid templates).

Anyway, restoring to the last known config is not restoring to a versioned config; e.g., if I restore container data to date X, I may want to restore the container config to date X or date Y.

But, I'll leave it at that.


 

On 1/8/2020 at 11:47 AM, ptr727 said:

I hear you, but that is not what I see: at least one of my manually created containers, and one from Docker Hub via the Apps search, are not listed on the Previous Apps page (these containers do not have Unraid templates).

Just tested it, and Docker Hub templates (if you did the search-and-add via CA, or manually filled out a template completely) will show up in Previous Apps.

 

The only thing is that if you had one named AppX and then installed something else also named AppX, the first one would be gone. That's a limitation of the underlying system.

 

Anything created via the command line, though, would never be saved, as there was no template to save in the first place.


Thanks for all the work you've put in to this, @Squid

 

I think a number of people are having issues with how long backup/verify takes (particularly users of the Plex docker with hundreds of thousands or even millions of files). I had an idea that might resolve this:

 

If we think of the current backups as an "offline backup" (it takes place while the dockers are offline), it would be nice if we could specify a list of folders for an "online backup" (i.e. paths that don't contain databases and are safe to copy while the dockers are online).

 

So in the settings you could specify something like:

 

online backup folders:

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata,

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media

 

These paths would automatically be excluded from the normal "offline backup". Once the offline backup completed, the dockers would be restarted, and the script would then copy the folders above into a second .tar.gz file.

 

For purposes of restore / trimming old backups, the pair of .tar.gz files could be treated as a single backup.

 

edit: or just make the "online backup" append to the same .tar.gz file? That would mean far fewer changes elsewhere in the script.
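
For anyone who wants to experiment with the idea by hand in the meantime, here is a rough bash sketch of the two passes. The container name, paths, and destination are examples, not anything the plugin does today; also note that tar can only append (-r) to an uncompressed archive, so the append-to-the-same-.tar.gz variant would mean compressing only after both passes finish.

#!/bin/bash
# Rough sketch only - container name, paths, and destination are examples.
BACKUP="/mnt/user/Backups/appdata"
ONLINE=(
  "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata"
  "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media"
)

# Build --exclude arguments so the offline pass skips the online folders
EXCLUDES=()
for p in "${ONLINE[@]}"; do EXCLUDES+=(--exclude="$p"); done

# Offline pass: container stopped, nothing is writing to appdata
docker stop plex
tar -czf "$BACKUP/offline-$(date +%F).tar.gz" "${EXCLUDES[@]}" /mnt/user/appdata
docker start plex

# Online pass: the container is running again; these paths hold no databases
tar -czf "$BACKUP/online-$(date +%F).tar.gz" "${ONLINE[@]}"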

6 hours ago, ConnectivIT said:

Thanks for all the work you've put in to this, @Squid

 

I think a number of people are having issues with how long backup/verify takes (particularly users of the Plex docker with hundreds of thousands or even millions of files). I had an idea that might resolve this:

 

If we think of the current backups as an "offline backup" (it takes place while the dockers are offline), it would be nice if we could specify a list of folders for an "online backup" (i.e. paths that don't contain databases and are safe to copy while the dockers are online).

 

So in the settings you could specify something like:

 

online backup folders:

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata,

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media

 

These paths would automatically be excluded from the normal "offline backup". Once the offline backup completed, the dockers would be restarted, and the script would then copy the folders above into a second .tar.gz file.

 

For purposes of restore / trimming old backups, the pair of .tar.gz files could be treated as a single backup.

 

edit: or just make the "online backup" append to the same .tar.gz file? That would mean far fewer changes elsewhere in the script.

While nice, it's way too complicated and won't get implemented.

 

The plugin does, however, offer the option to not stop any containers (or to not stop selected containers).


What is considered best practice in terms of where to store backups to ensure that you don't lose them? Presumably the backups should also be stored on a drive somewhere other than your Unraid server, in case it totally dies. Do people save their backups somewhere in the cloud? To other PCs on their LAN? External hard drives?

On 1/11/2020 at 11:33 PM, Squid said:

While nice, it's way too complicated and won't get implemented.

 

The plugin does, however, offer the option to not stop any containers (or to not stop selected containers).

Thanks for letting me know.

 

For anyone else facing this issue, I ended up excluding these in CA Appdata Backup:

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata,

/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media

 

And then backing up those paths separately using this Borg script:

https://www.reddit.com/r/unRAID/comments/e6l4x6/tutorial_borg_rclone_v2_the_best_method_to/
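
For context, the heart of that approach is a borg create against a repository, with the two Plex paths as sources. A minimal sketch (the repo location is a made-up example; see the linked post for the full script, including the rclone sync offsite):

> borg init --encryption=repokey /mnt/disks/backup/borg-repo
> borg create --stats --compression zstd \
      /mnt/disks/backup/borg-repo::plex-{now:%Y-%m-%d} \
      "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata" \
      "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Media"
> borg prune --keep-daily=7 --keep-weekly=4 /mnt/disks/backup/borg-repo

Because Borg deduplicates, subsequent runs only store changed chunks, which is what makes it practical for hundreds of thousands of Plex metadata files.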

 

14 hours ago, wayner said:

What is considered best practice in terms of where to store backups to ensure that you don't lose them?

There are many options for this. The above script is one possible solution, and the linked post discusses some of the others too.

17 hours ago, wayner said:

What is considered best practice in terms of where to store backups to ensure that you don't lose them? Presumably the backups should also be stored on a drive somewhere other than your Unraid server, in case it totally dies. Do people save their backups somewhere in the cloud? To other PCs on their LAN? External hard drives?

The big thing when backing up is to not send the backups to a cache-enabled share. As to offsite: while that's ideal, my opinion is that if the server ever has a major meltdown and I've actually lost data because of it, then a backup of the appdata will probably be useless anyway (not to mention that I'll have more important things to worry about than having Plex remember that such-and-such video has already been watched).


So I have to restore my USB, but I'm having some issues. I've formatted my new USB, copied the files over, then run the make_bootable.bat file from the USB as an admin. When I add the USB back to my server, it is not recognized as an OS. Is there something I'm missing? Thanks for any help!

3 minutes ago, noja said:

So I have to restore my USB, but I'm having some issues. I've formatted my new USB, copied the files over, then run the make_bootable.bat file from the USB as an admin. When I add the USB back to my server, it is not recognized as an OS. Is there something I'm missing? Thanks for any help!

Are you sure your BIOS is set to boot from the new flash?

2 minutes ago, trurl said:

Are you sure your BIOS is set to boot from the new flash?

Well, now I'm questioning my sanity. Will my Supermicro BIOS not keep its settings from a few hours ago and boot from a USB in the same port as the old USB it's used for the last few years?

I'm only asking because I'm off-site and having someone do the physical USB swapping on my behalf. They may be able to manipulate the BIOS, but I'd rather not ask.

1 minute ago, noja said:

Will my Supermicro BIOS not keep its settings from a few hours ago and boot from a USB in the same port as the old USB it's used for the last few years?

Many BIOSes don't rely on which port is used; instead they expect to see something that identifies the specific drive, and if they don't find it, they will try another boot device, likely a hard drive that doesn't have an OS on it.

On 1/13/2020 at 3:08 PM, trurl said:

Many BIOSes don't rely on which port is used; instead they expect to see something that identifies the specific drive, and if they don't find it, they will try another boot device, likely a hard drive that doesn't have an OS on it.

So I can confirm that the BIOS is seeing the new, correct USB, and that is the only media it is attempting to boot from.

 

Again, here was my process to restore the USB; can someone tell me if I made a boneheaded mistake?

 

1. Insert the new Kingston Datatraveller 32GB into a Win 10 computer
2. Format the USB as FAT32
3. Drag and drop all files from the backup folder to the USB
4. Right-click make_bootable.bat on the USB and run as admin (for the record, when the popup comes up I just hit space and it immediately disappears; I'm not sure it has actually done anything)
5. Put the USB into the server
6. No profit :(


 

6 minutes ago, noja said:

So I can confirm that the BIOS is seeing the new, correct USB, and that is the only media it is attempting to boot from.

 

Again, here was my process to restore the USB; can someone tell me if I made a boneheaded mistake?

 

1. Insert the new Kingston Datatraveller 32GB into a Win 10 computer
2. Format the USB as FAT32
3. Drag and drop all files from the backup folder to the USB
4. Right-click make_bootable.bat on the USB and run as admin (for the record, when the popup comes up I just hit space and it immediately disappears; I'm not sure it has actually done anything)
5. Put the USB into the server
6. No profit :(

You need to name the drive UNRAID.
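
(For anyone else hitting this: you don't have to re-format to fix it. The volume label can be set from an admin command prompt on Windows; the drive letter below is just an example.)

> label E: UNRAID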

56 minutes ago, wgstarks said:

 

You need to name the drive UNRAID.

Sweet jeebus. THANK YOU! Naming it UNRAID was exactly the issue. PS: successful disaster recovery from 6,500 km away is enormously difficult and that much more rewarding.

3 hours ago, noja said:

So I can confirm that the BIOS is seeing the new, correct USB, and that is the only media it is attempting to boot from.

 

Again, here was my process to restore the USB; can someone tell me if I made a boneheaded mistake?

 

1. Insert the new Kingston Datatraveller 32GB into a Win 10 computer
2. Format the USB as FAT32
3. Drag and drop all files from the backup folder to the USB
4. Right-click make_bootable.bat on the USB and run as admin (for the record, when the popup comes up I just hit space and it immediately disappears; I'm not sure it has actually done anything)
5. Put the USB into the server
6. No profit :(

For future reference, here is a simpler and more reliable way to do this.

  1. Prepare the flash as a new install (this would have taken care of your problem with the volume name of the flash)
  2. Copy config folder from backup to flash.
  3. Boot
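
On Windows, that works out to something like the following from an admin command prompt. The drive letter and backup location are examples, and the official USB Creator tool can handle step 1 for you instead:

> format E: /FS:FAT32 /V:UNRAID /Q
(extract the Unraid release zip onto E:\, then)
> E:\make_bootable.bat
> xcopy C:\unraid-backup\config E:\config /E /I /Y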

@Squid - Sorry up front if this has already been asked, but any thoughts on an option to use zstd compression instead of gzip?

 

Here are some quick tests I did on two of my systems that show much improved speed and slightly smaller file sizes:

 

System 1:

> cd /mnt/user/appdata
> du -d 0 -h .
1.6G    .

> time tar -czf /mnt/user/UnRaidBackups/AppData.tar.gz *
real    1m17.710s
user    1m6.245s
sys     0m6.219s

> time tar --zstd -cf /mnt/user/UnRaidBackups/AppData.tar.zst *
real    0m24.039s
user    0m10.248s
sys     0m5.330s

> ls -lsah /mnt/user/UnRaidBackups/AppData.tar.*
814M -rw-rw-rw- 1 root root 814M Jan 16 14:28 /mnt/user/UnRaidBackups/AppData.tar.gz
783M -rw-rw-rw- 1 root root 783M Jan 16 14:20 /mnt/user/UnRaidBackups/AppData.tar.zst

System 2:

> cd /mnt/user/appdata
> du -d 0 -h .
8.9G    .

> time tar -czf /mnt/user/UnRaidBackups/AppData.tar.gz *
real    4m55.831s
user    4m19.009s
sys     0m27.770s

> time tar --zstd -cf /mnt/user/UnRaidBackups/AppData.tar.zst *
real    2m1.380s
user    0m35.069s
sys     0m23.054s

> ls -lsah /mnt/user/UnRaidBackups/AppData.tar.*
4.6G -rw-rw-rw- 1 root root 4.6G Jan 16 14:39 /mnt/user/UnRaidBackups/AppData.tar.gz
4.4G -rw-rw-rw- 1 root root 4.4G Jan 16 14:34 /mnt/user/UnRaidBackups/AppData.tar.zst
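
Worth noting: the zstd runs above are single-threaded. GNU tar just pipes through an external compressor, so you can hand it a multi-threaded zstd via --use-compress-program (both flags are standard, though I haven't benchmarked this on Unraid specifically):

> time tar --use-compress-program="zstd -T0" -cf /mnt/user/UnRaidBackups/AppData.tar.zst *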

 

23 minutes ago, EmilionDK said:

 

Hey :)
I've been thinking about whether it would be possible to have each folder in appdata packed separately, instead of the entire folder?

So you don't have to unpack the entire appdata archive, but only the subfolders that you need.

That is on a long-standing todo list.

2 hours ago, EmilionDK said:

 

Hey :)
I've been thinking about whether it would be possible to have each folder in appdata packed separately, instead of the entire folder?

So you don't have to unpack the entire appdata archive, but only the subfolders that you need.

If you want, you can try my script, which I made exactly for this purpose:
https://github.com/ICDeadPpl/UnraidScripts/blob/master/backup-appdata-usb-vm.sh
It backs up the folders in your appdata location separately. It also backs up the USB flash drive and the libvirt.img file.
I have it running on a schedule on my Unraid.
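
For the curious, the per-folder part boils down to a loop like this. This is a stripped-down sketch, not the linked script itself, and the destination path is an example:

for d in /mnt/user/appdata/*/; do
    name=$(basename "$d")
    tar -czf "/mnt/user/Backups/appdata/$name.tar.gz" -C /mnt/user/appdata "$name"
done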


Would this plugin in any way affect the Unraid Nvidia plugin?

 

I run a backup every Tuesday at 3 AM, and for whatever reason, every Tuesday morning the two dockers that use my Nvidia GPU via the Nvidia plugin have failed to restart. Attempting to start the dockers in the Unraid GUI yields an "Execution error", and I typically need to reboot for everything to work again.

 

I'd like to think there's another cause but it happens every backup, without fail. I can't figure out why and it's driving me crazy!
