[Plugin] CA Appdata Backup / Restore v2

I don't mean to be pissy on an internet forum, but the switch to tar is a real dealbreaker for me.

 

I have around 130 GB of Plex Preview Thumbnails. Because of the sheer number of files Plex creates for metadata, it takes a silly amount of time to copy these files anywhere. Appdata Backup v1 took 5 minutes daily to update this incremental backup through rsync.

 

Tarring my appdata (which includes Plex) now takes 45 minutes a day (also factoring in the disk spinup/write process), and that is without compression/validation.

Excluding these files isn't an option either. Recreating those Preview Thumbnails would take a few weeks of full CPU load, which is my reason for backing them up in the first place.

 

Pleeeease, include an optional switch (I don't care how big the warning sign is!) to use rsync with incremental backups again. This just does not work for me.

 

I just re-read that mess of a comment. I know it's insulting - and thanks for all the hard work. I'm just a desperate guy who can't write plugins himself :P

 

(I know, I can still use the old plugin, but that is not being updated anymore)

Edited by rix

16 hours ago, Flick said:

 

Backup ran for 7 hours and 11 minutes, with verify and compression both turned off. The TAR is 617GB in size. So, for doing a full backup, I am indeed impressed. I don't mind not having compression turned on, but not having verify makes me a wee bit nervous. That would double the time, though, so I guess it's a trade-off.

 

Now if I could just figure out why it insists on creating a new folder called "Backup" on my disk10 when it completes the backup job. Sigh. (My backups go to disk1\Backups).

 

Thank you again for all your work on this.

To be clear, though, I'd still like the option for incrementals. Seven hours of downtime isn't ideal for a single backup. Thank you.


Going to test this v2 on my main unRAID server with Plex data on it and see how long it will take. I've been wishing for a way to store Plex data in a container or an image file so that only one file needs to be backed up instead of numerous small files.

 

This v2 worked very well on my secondary unraid server. Thanks for this awesome plugin.

Edited by SCSI


I have my backups set to run at 3 am every day, but it doesn't automatically run. It only works when I do a manual backup. Any ideas?


Yeah.  Somehow a regression slipped in where if you reset your server the schedule wasn't getting run.  :S

 

I'll pump out an update tomorrow for this.

19 hours ago, Squid said:

Yeah.  Somehow a regression slipped in where if you reset your server the schedule wasn't getting run.  :S

 

I'll pump out an update tomorrow for this.

 

That explains it. Thank you for all of your hard work.


Is there any way to get a specific appdata folder out of the tarred file? I tried opening the file with both 7-Zip and WinRAR, but it throws an error once it reads 500k files or so. I have ~1 million files due to video thumbnail generation in Plex.


You can work out the command for tar itself to do it.  Google the tar man page.
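For anyone landing here later, GNU tar can pull a single folder straight out of an archive. The demo below builds a tiny stand-in archive first; the folder names are invented, but the same idea applies to the real CA_backup.tar:

```shell
# Build a small stand-in archive with two "appdata" folders
mkdir -p /tmp/tar-demo/plex /tmp/tar-demo/sonarr /tmp/tar-demo/out
touch /tmp/tar-demo/plex/db.sqlite /tmp/tar-demo/sonarr/config.xml
tar -cf /tmp/tar-demo/CA_backup.tar -C /tmp/tar-demo plex sonarr

# List the contents without extracting (useful with a million-file archive)
tar -tf /tmp/tar-demo/CA_backup.tar

# Extract only the 'plex' folder, leaving everything else in the archive
tar -xf /tmp/tar-demo/CA_backup.tar -C /tmp/tar-demo/out plex
```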


On my install, when I manually typed something into the excluded folders field, the Apply button stayed greyed out. I had to change another setting to get it to unlock so it would save the setting.


ok.  fixed

19 hours ago, Squid said:

ok.  fixed

 

Thank you! The app works great, really appreciate it.


So, I found a little problem with this, but truly it's a PEBKAC error.

 

I recently did some reconfiguring of my server and reassigned some disks, the long and the short of it being that my backup location was no longer present.  The script ran regardless, and I woke up this morning to find my webui displaying no disks.

 

I was unable to grab logs either via the webui or the terminal due to a load of errors; after a restart everything was back to normal.

 

Merry Christmas!


Are options "Path To Custom Stop Script" and "Path To Custom Start Script" mixed up or am I understanding them wrong?

Under "Path To Custom Stop Script" I linked the script that I want executed after Backup/Restore has run, and under "Path To Custom Start Script" the script that I want executed before Backup/Restore runs. When I switch them, all scripts/actions are performed as expected.

2 minutes ago, Vaseer said:

am I understanding them wrong?

This.  The custom stop script is executed prior to stopping the containers.  Similarly, the custom start script is executed after restarting the containers.

3 minutes ago, Squid said:

This.  The custom stop script is executed prior to stopping the containers.  Similarly, the custom start script is executed after restarting the containers.

I was missing information regarding stopping/starting containers. Thanks.

 

I know this is off topic, but I am confident you can help me :)

I made a script that creates a tar.gz file. When I look at the created tar.gz file via the terminal I see the "normal" file name - as I configured it in the script: "tree_2017-12-30_22:53:19.tar.gz" - but when I look at it via a file manager (Dolphin, Krusader, Nautilus...) I see a file named "TS76AZ~8.GZ".

 

Part of the script that makes tar.gz file:

tar -czvf $DESTINATION/tree"$(date "+_%Y-%m-%d_%H:%M:%S").tar.gz" tree

I can access and extract tar.gz via terminal with no problem, but via file manager it is not working.

 

Any advice what I am doing wrong?


OTOH, most likely it is the colons in the file name; if you're accessing it over SMB, strange things will result since it's an invalid filename.
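Assuming the colons are indeed the culprit, swapping them for dashes in the date format keeps the long name visible over SMB. A self-contained version of the command (the destination path here is a stand-in for the real share):

```shell
DESTINATION=/tmp/tree-demo          # stand-in for the real destination share
mkdir -p "$DESTINATION/tree"
touch "$DESTINATION/tree/leaf.txt"
cd "$DESTINATION"

# %H-%M-%S instead of %H:%M:%S -- colons are illegal in Windows/SMB
# file names, so Samba falls back to a mangled 8.3 name like TS76AZ~8.GZ
ARCHIVE="$DESTINATION/tree$(date '+_%Y-%m-%d_%H-%M-%S').tar.gz"
tar -czf "$ARCHIVE" tree
```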


Is it normal for this to lock up the web GUI while the backup is running?  I started it about an hour ago and since then I can't access my server via the GUI, but I can SSH into it, and running htop I see that it is pinning CPU usage at 100% (though only on a single core).  I can still access my shares from other systems, and my dockers are still working (my MineOS server and OpenVPN are still running, anyway), but I can't access the GUI.  Was just wondering if anyone else has experienced this issue?

 

I'm going to let the backup run overnight and check back on the server tomorrow.  Thanks for any input you guys can give.

 

EDIT:

This has now been running for 14 hours and continues to peg the CPU at 100% load, and I am still unable to access the unRAID GUI.  Still wondering if this is normal?  Should I let it continue, or should I hard-reboot the server (I've tried killing the process and can't seem to kill it)?

Edited by FraxTech


Feature request: instead of one giant tarball, could this app use separate tarballs for each folder in appdata? That way it would be much easier to restore a specific app's data (manually) or even pull a specific file, since most of them could be opened with untar GUIs.

 

Plex is the major culprit with its gargantuan folder.
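A rough sketch of what that per-folder scheme could look like; paths and folder names here are invented for the demo, not taken from the plugin:

```shell
SRC=/tmp/appdata-split
DEST=/tmp/appdata-split-out
mkdir -p "$SRC/plex" "$SRC/radarr" "$DEST"
touch "$SRC/plex/db" "$SRC/radarr/config.xml"

# One tarball per top-level appdata folder instead of a single giant archive
for dir in "$SRC"/*/; do
  name=$(basename "$dir")
  # -C keeps the archive paths relative to the appdata root
  tar -cf "$DEST/$name.tar" -C "$SRC" "$name"
done
```

Restoring one app then means extracting only its own small tarball.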



Hi @Squid, I don't want this to come across as being ungrateful, as I am not, but I am struggling with 30GB-plus files on each backup with v2.

 

Is it at all possible to allow V1 to be used as a template for us to enter a backup command where we used to add a bespoke rsync command?

 

I use Borg backup, and it could back up every day and dedupe, so I would only ever have one 30GB archive that I could restore from manually.

 

The reason I ask is because your v1 app would shut down dockers etc. to allow the backup to complete, and that is beyond my knowledge.

 

Thanks for your thoughts...

On 06/01/2018 at 10:47 PM, local.bin said:

Is it at all possible to allow V1 to be used as a template for us to enter a backup command where we used to add a bespoke rsync command?
In the settings you would use a stop script and tell it to also use the script instead of rsync IIRC

On 07/01/2018 at 4:52 AM, Squid said:
On 06/01/2018 at 10:47 PM, local.bin said:

The reason I ask, is because you v1 app would shutdown dockers etc etc to allow the backup to complete and that is beyond my knowledge.
 
Thanks for your thoughts...

In the settings you would use a stop script and tell it to also use the script instead of rsync IIRC

 

Do you mean unRAID general settings / the v2 Backup/Restore settings, or a User Script to stop the dockers and then back up?


Hello Squid, 

 

For some reason, after this runs every night, it's not starting any of my dockers back up.  They all remain off, and I'm not sure why.  I checked the settings in the backup plugin, and none are set to not turn back on.

 

Any ideas? I don't care that it stops them; I just wish they would start back up again.

Here is the syslog as well, from when the backup started:

Jan  8 04:00:01 Tower CA Backup/Restore: #######################################
Jan  8 04:00:01 Tower CA Backup/Restore: Community Applications appData Backup
Jan  8 04:00:01 Tower CA Backup/Restore: Applications will be unavailable during
Jan  8 04:00:01 Tower CA Backup/Restore: this process.  They will automatically
Jan  8 04:00:01 Tower CA Backup/Restore: be restarted upon completion.
Jan  8 04:00:01 Tower CA Backup/Restore: #######################################
Jan  8 04:00:01 Tower CA Backup/Restore: Stopping binhex-delugevpn
Jan  8 04:00:03 Tower kernel: vethc027cfb: renamed from eth0
Jan  8 04:00:03 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:04 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:04 Tower kernel: device veth003e466 left promiscuous mode
Jan  8 04:00:04 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:06 Tower CA Backup/Restore: docker stop -t 60 binhex-delugevpn
Jan  8 04:00:06 Tower CA Backup/Restore: Stopping PlexMediaServer
Jan  8 04:00:21 Tower CA Backup/Restore: docker stop -t 60 PlexMediaServer
Jan  8 04:00:21 Tower CA Backup/Restore: Stopping radarr
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:46 Tower kernel: veth48f5e3b: renamed from eth0
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:46 Tower kernel: device veth6c99a38 left promiscuous mode
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:47 Tower CA Backup/Restore: docker stop -t 60 radarr
Jan  8 04:00:47 Tower CA Backup/Restore: Stopping sonarr
Jan  8 04:00:53 Tower kernel: mdcmd (735): spindown 9
Jan  8 04:01:13 Tower kernel: veth081a146: renamed from eth0
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:13 Tower kernel: device veth7d77e50 left promiscuous mode
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:15 Tower CA Backup/Restore: docker stop -t 60 sonarr
Jan  8 04:01:15 Tower CA Backup/Restore: Backing Up appData from /mnt/cache/ to /mnt/user/App Backup/2018-01-08@04.00
Jan  8 04:01:15 Tower CA Backup/Restore: Using command: cd '/mnt/cache/' && /usr/bin/tar -cvaf '/mnt/user/App Backup/2018-01-08@04.00/CA_backup.tar' --exclude 'docker.img'  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
Jan  8 04:01:16 Tower ntpd[1530]: Deleting interface #34 docker0, 172.17.0.1#123, interface stats: received=0, sent=0, dropped=0, active_time=69549 secs
Jan  8 04:01:42 Tower kernel: mdcmd (736): spindown 10


 

Thanks!

 

57 minutes ago, coltonc18 said:

Here is sys log as well from when backup started

Need to see the syslog from when the backup finished

Share this post


Link to post

Oh wow... I had my backup happening at 4 am, and I am usually up at my computer at 7 am. I just looked at the system log again to get you the rest, and I saw the backup was still going at 8:30. That is why the dockers were still turned off - the backup hadn't finished by the time I woke up.

 

Sorry for wasting your time. I'll move the backup back a few more hours in the night so it's done by the time I wake up in the AM.


Thanks!!

