[Plugin] CA Appdata Backup / Restore v2



I don't mean to be pissy on an internet forum, but the switch to tar is a real dealbreaker for me.

 

I have around 130 GB of Plex Preview Thumbnails. Because of the sheer, unbelievable number of files Plex creates for metadata, it takes a silly amount of time to copy them anywhere. Appdata Backup v1 took 5 minutes daily to update this incremental backup through rsync.

 

Tarring my appdata (which includes Plex) now takes 45 minutes a day (also factor in the disk spin-up/write process), and that is without compression or verification.

Excluding these files isn't an option either. Recreating those Preview Thumbnails would take a few weeks of full CPU load, which is the reason I back them up in the first place.

 

Pleeeease, include an optional switch (I don't care how big the warning sign!) to use rsync with incremental backup again. This just does not work for me.

 

I just re-read that mess of a comment. I know it's insulting - and thanks for all the hard work. I'm just a desperate guy who can't write plugins himself :P

 

(I know, I can still use the old plugin, but that is not being updated anymore)
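
(For anyone curious, the incremental behaviour v1 relied on boils down to a single rsync invocation along these lines; the exact flags and paths here are illustrative, not the plugin's actual settings:)

# Rough sketch of an incremental rsync mirror. Unchanged files are skipped,
# which is why daily runs over huge Plex metadata trees finish in minutes.
# --delete keeps the destination in sync with the source.
rsync -a --delete /mnt/cache/appdata/ /mnt/user/Backups/appdata/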

16 hours ago, Flick said:

 

Backup ran for 7 hours and 11 minutes. Verify and compression both turned off. TAR is 617GB in size. So, for doing a full backup, I am indeed impressed. I don't mind not having compression turned on but not having verify makes me a wee bit nervous. That would double the time, though, so I guess it's a trade-off.

 

Now if I could just figure out why it insists on creating a new folder called "Backup" on my disk10 when it completes the backup job. Sigh. (My backups go to disk1\Backups).

 

Thank you again for all your work on this.

To be clear, though, I'd still like the option for incrementals. Seven hours of downtime isn't ideal for a single backup. Thank you.
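
(If it helps with the verify nerves: a finished archive can also be spot-checked afterwards with GNU tar's compare mode; a sketch, with the paths as placeholders, and bearing in mind that files changed since the backup, e.g. by restarted containers, will show up as differences:)

# Compare the archive contents against what is currently on disk and list any differences.
# Run from the directory the backup was taken from so the relative paths line up.
cd /mnt/cache/appdata && tar --diff -f '/mnt/user/Backups/CA_backup.tar'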


Going to test this v2 on my main unraid server with Plex data on it and see how long it will take. Been wishing for a way to store Plex data in a container or an image file so that only one file needs to be backed up instead of numerous small files.

 

This v2 worked very well on my secondary unraid server. Thanks for this awesome plugin.
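
(One way to approximate that today, outside the plugin, is to keep the Plex appdata inside a loop-mounted image file so a backup only has to copy one big file; a rough sketch, with the size, filesystem and paths purely as placeholders:)

# Rough sketch only: a single image file holding Plex's appdata.
truncate -s 100G /mnt/cache/plex-appdata.img          # create a sparse 100 GB image file
mkfs.xfs /mnt/cache/plex-appdata.img                  # put a filesystem inside it
mkdir -p /mnt/cache/appdata/plex
mount -o loop /mnt/cache/plex-appdata.img /mnt/cache/appdata/plex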


So, found a little problem with this, but truly it's a PEBKAC error.

 

I recently did some reconfiguring of my server and reassigned some disks, the long and the short of it being that my backup location was no longer present.  The script ran regardless, and I woke up this morning to find my webUI was displaying no disks.

 

I was unable to grab logs via either the webUI or the terminal as they threw a load of errors; after a restart everything was back to normal.

 

Merry Christmas!


Are the options "Path To Custom Stop Script" and "Path To Custom Start Script" mixed up, or am I understanding them wrong?

Under "Path To Custom Stop Script" I linked script that I want to execute after Backup/Restore has run and under "Path To Custom Start Script" I linked script that I want to execute before Backup/Restore will run. When I switch them all scripts/actions are performed as expected.

3 minutes ago, Squid said:

This.  Custom stop script is executed prior to stopping the containers.  Similarly, the custom start script is executed after restarting the containers.

I was missing information regarding stopping/starting containers. Thanks.
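
(For reference, those hooks are just ordinary shell scripts; a trivial, made-up example of a custom stop script, which per the above runs before any containers are stopped:)

#!/bin/bash
# Hypothetical custom stop script: the plugin calls this before it stops the containers.
# Put any pre-backup housekeeping here; this one only drops a note into the syslog.
logger -t ca.backup2 "Appdata backup starting - pre-stop hook running"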

 

I know this is off topic, but I am confident you can help me :)

I made a script that creates a tar.gz file. When I look at the created tar.gz file via the terminal I see the "normal" file name, as I configured it in the script: "tree_2017-12-30_22:53:19.tar.gz", but when I look at it via a file manager (Dolphin, Krusader, Nautilus...) I see a file named "TS76AZ~8.GZ".

 

The part of the script that makes the tar.gz file:

tar -czvf $DESTINATION/tree"$(date "+_%Y-%m-%d_%H:%M:%S").tar.gz" tree

I can access and extract the tar.gz via the terminal with no problem, but via a file manager it is not working.

 

Any advice what I am doing wrong?
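
(In case it saves someone a search: that mangled name looks like classic 8.3 name mangling, which Samba applies when a filename contains characters such as ':' that Windows/SMB clients cannot handle, so it tends to show up when the share is browsed over SMB. A colon-free timestamp avoids it; a minimal sketch, assuming $DESTINATION is set as in the original script:)

# Same idea, but with '-' instead of ':' in the time part so the name stays
# legal on SMB/Windows and no 8.3-style mangled name is generated.
tar -czvf "$DESTINATION/tree_$(date +%Y-%m-%d_%H-%M-%S).tar.gz" tree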


Is it normal for this to lock up the web GUI when the backup is running?  I started it about an hour ago, and since then I can't access my server via the GUI, but I can SSH into it, and when running htop I see that it is pinning CPU usage at 100% (though only on a single core).  I can still access my shares from other systems, and my dockers are still working (my MineOS server and OpenVPN are still running anyway), but I can't access the GUI.  Was just wondering if anyone else has experienced this issue?

 

I'm going to let the backup run overnight and check back on the server tomorrow.  Thanks for any input you guys can give.

 

EDIT:

This has now been running for 14 hours and continues to peg the CPU at 100% load and I am still unable to access the unRAID GUI.  Still wondering if this is normal?  Should I let it continue or should I hard reboot the server (as I've tried killing the process and can't seem to kill it)?
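
(One way to tell over SSH whether it is still doing real work rather than hung: the plugin writes the PID of the running tar to a temp file, as the syslog excerpt further down this page shows, and the archive itself should keep growing; a sketch, with the destination path as a placeholder:)

# PID of the in-progress backup, as recorded by the plugin
ps -fp "$(cat /tmp/ca.backup2/tempFiles/backupInProgress)"
# If the archive keeps growing, the backup is still making progress
watch -n 60 ls -lh '/mnt/user/Backups/CA_backup.tar'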


Feature request: Instead of one giant tarball, could this app use separate tarballs for each folder in appdata? That way it would be much easier to restore a specific app's data (manually), or even pull a specific file, since most of them could be opened with untar GUIs.

 

Plex is the major culprit with its gargantuan folder.
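
(Outside the plugin, the per-folder variant is essentially just a loop; a sketch, with the paths as placeholders:)

# One archive per appdata subfolder instead of a single giant tarball,
# so an individual app can be restored without unpacking everything.
cd /mnt/cache/appdata || exit 1
for dir in */ ; do
    tar -caf "/mnt/user/Backups/appdata/${dir%/}.tar.gz" "$dir"
done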


Hi @Squid, I don't want this to come across as ungrateful, as I am not, but I am struggling with 30-gig-plus files on each backup with v2.

 

Is it at all possible to allow v1 to be used as a template, so we can enter our own backup command where we used to add a bespoke rsync command?

 

I use Borg backup, which can back up every day and deduplicate, so I would only ever have one 30 GB archive that I could restore from manually.
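
(For anyone unfamiliar with it, the deduplicated workflow described here looks roughly like this with borg; the repo location and retention flags are illustrative:)

# Illustrative borg workflow: daily deduplicated archives of appdata.
borg init --encryption=none /mnt/user/Backups/borg-repo            # one-time repository setup
borg create --stats /mnt/user/Backups/borg-repo::appdata-{now} /mnt/cache/appdata
borg prune --keep-daily=7 --keep-weekly=4 /mnt/user/Backups/borg-repo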

 

The reason I ask is that your v1 app would shut down the dockers etc. to allow the backup to complete, and that part is beyond my knowledge.

 

Thanks for your thoughts...

On 07/01/2018 at 4:52 AM, Squid said:

In the settings you would use a stop script and tell it to also use the script instead of rsync IIRC

 

Do you mean the unraid general settings / v2 Backup/Restore settings, or a User Script to stop the dockers and then back up?


Hello Squid, 

 

For some reason, after this runs every night, it's not starting any of my dockers back up.  They all remain off, and I'm not sure why.  I checked the settings in the backup plugin, and none are set to not turn back on.

 

Any ideas? I don't mind that it stops them; I just wish they would start back up again.

Here is the syslog as well, from when the backup started:

Jan  8 04:00:01 Tower CA Backup/Restore: #######################################
Jan  8 04:00:01 Tower CA Backup/Restore: Community Applications appData Backup
Jan  8 04:00:01 Tower CA Backup/Restore: Applications will be unavailable during
Jan  8 04:00:01 Tower CA Backup/Restore: this process.  They will automatically
Jan  8 04:00:01 Tower CA Backup/Restore: be restarted upon completion.
Jan  8 04:00:01 Tower CA Backup/Restore: #######################################
Jan  8 04:00:01 Tower CA Backup/Restore: Stopping binhex-delugevpn
Jan  8 04:00:03 Tower kernel: vethc027cfb: renamed from eth0
Jan  8 04:00:03 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:04 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:04 Tower kernel: device veth003e466 left promiscuous mode
Jan  8 04:00:04 Tower kernel: docker0: port 1(veth003e466) entered disabled state
Jan  8 04:00:06 Tower CA Backup/Restore: docker stop -t 60 binhex-delugevpn
Jan  8 04:00:06 Tower CA Backup/Restore: Stopping PlexMediaServer
Jan  8 04:00:21 Tower CA Backup/Restore: docker stop -t 60 PlexMediaServer
Jan  8 04:00:21 Tower CA Backup/Restore: Stopping radarr
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:46 Tower kernel: veth48f5e3b: renamed from eth0
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:46 Tower kernel: device veth6c99a38 left promiscuous mode
Jan  8 04:00:46 Tower kernel: docker0: port 2(veth6c99a38) entered disabled state
Jan  8 04:00:47 Tower CA Backup/Restore: docker stop -t 60 radarr
Jan  8 04:00:47 Tower CA Backup/Restore: Stopping sonarr
Jan  8 04:00:53 Tower kernel: mdcmd (735): spindown 9
Jan  8 04:01:13 Tower kernel: veth081a146: renamed from eth0
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:13 Tower kernel: device veth7d77e50 left promiscuous mode
Jan  8 04:01:13 Tower kernel: docker0: port 3(veth7d77e50) entered disabled state
Jan  8 04:01:15 Tower CA Backup/Restore: docker stop -t 60 sonarr
Jan  8 04:01:15 Tower CA Backup/Restore: Backing Up appData from /mnt/cache/ to /mnt/user/App Backup/[email protected]
Jan  8 04:01:15 Tower CA Backup/Restore: Using command: cd '/mnt/cache/' && /usr/bin/tar -cvaf '/mnt/user/App Backup/[email protected]/CA_backup.tar' --exclude 'docker.img'  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
Jan  8 04:01:16 Tower ntpd[1530]: Deleting interface #34 docker0, 172.17.0.1#123, interface stats: received=0, sent=0, dropped=0, active_time=69549 secs
Jan  8 04:01:42 Tower kernel: mdcmd (736): spindown 10


 

Thanks!

 


Oh wow... I had my backup happening at 4 am, and I am usually up at my computer at 7 am. I just looked at the system log again to get you the rest, and I see the backup was still going at 8:30. That is why the dockers were still turned off: the backup hadn't finished by the time I woke up.

 

Sorry for wasting your time; I'll move the backup back a few more hours in the night so it's done by the time I wake up in the morning.


Thanks!!
