[Plugin] CA Appdata Backup / Restore v2



Strange problem with my first appdata backup on 6.11.2 - some containers (which were running and set to autostart) weren't restarted post-backup.

 

Maybe it's a coincidence, but all the affected containers use a custom network (none of the non-custom-network containers failed to start).

 

Maybe a fluke or maybe an issue with my setup, although I haven't changed anything and this plugin's been rock solid.

 

Anyone else see this?

10 minutes ago, CS01-HS said:

some containers (which were running and set to autostart) weren't restarted post-backup

Same issue, but it's not 6.11.2-related, as I'm on 6.11.1. Sometimes two containers are not started, sometimes only one.

 

When I have more time, I'll check what exactly is going on and report back :)

7 hours ago, CS01-HS said:

Strange problem with my first appdata backup on 6.11.2 - some containers (which were running and set to autostart) weren't restarted post-backup.

Same here.

After I ran the backup for the first time, it broke Docker.

I can't start any of my containers. I've restarted Docker and the server, but the problem is still present.

 

Unraid v.6.11.1


 

I haven't tracked down the "issue" of Docker containers sometimes not starting, but I have been working on other features:

On 10/25/2022 at 9:42 PM, ichz said:

I don't know if this feature was asked for before, but it would be nice if every docker appdata folder had its own tar backup file instead of one big tar file.

This is now working:

 

[08.11.2022 18:21:02] Backup of appData starting. This may take awhile
[08.11.2022 18:21:02] Stopping deconz... done (took 1 seconds)
[08.11.2022 18:21:03] Stopping jdownloader... done (took 0 seconds)
[08.11.2022 18:21:03] Backing Up appData from /mnt/cache/appdata/ to /mnt/disks/92E6P00LT/appdata/[email protected]
[08.11.2022 18:21:03] Separate archives enabled!
[08.11.2022 18:21:03] Backing up: deconz (cd '/mnt/cache/appdata/deconz' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_deconz.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:03] Verifying Backup deconz - cd '/mnt/cache/appdata/deconz' && /usr/bin/tar --diff -C '/mnt/cache/appdata/deconz' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_deconz.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:04] Backing up: discourse (cd '/mnt/cache/appdata/discourse' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_discourse.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:04] Verifying Backup discourse - cd '/mnt/cache/appdata/discourse' && /usr/bin/tar --diff -C '/mnt/cache/appdata/discourse' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_discourse.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:04] Backing up: discourse_data (cd '/mnt/cache/appdata/discourse_data' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_discourse_data.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:04] Verifying Backup discourse_data - cd '/mnt/cache/appdata/discourse_data' && /usr/bin/tar --diff -C '/mnt/cache/appdata/discourse_data' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_discourse_data.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:05] Backing up: jdownloader (cd '/mnt/cache/appdata/jdownloader' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_jdownloader.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:05] Verifying Backup jdownloader - cd '/mnt/cache/appdata/jdownloader' && /usr/bin/tar --diff -C '/mnt/cache/appdata/jdownloader' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_jdownloader.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:05] Backing up: mysql (cd '/mnt/cache/appdata/mysql' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_mysql.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:05] Verifying Backup mysql - cd '/mnt/cache/appdata/mysql' && /usr/bin/tar --diff -C '/mnt/cache/appdata/mysql' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_mysql.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:06] Backing up: npm (cd '/mnt/cache/appdata/npm' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_npm.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:06] Verifying Backup npm - cd '/mnt/cache/appdata/npm' && /usr/bin/tar --diff -C '/mnt/cache/appdata/npm' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_npm.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:06] Backing up: urbackup (cd '/mnt/cache/appdata/urbackup' && /usr/bin/tar -caf '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_urbackup.tar' . >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress && wait $!)
[08.11.2022 18:21:12] Verifying Backup urbackup - cd '/mnt/cache/appdata/urbackup' && /usr/bin/tar --diff -C '/mnt/cache/appdata/urbackup' -af '/mnt/disks/92E6P00LT/appdata/[email protected]/CA_backup_urbackup.tar' >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress && wait $!
[08.11.2022 18:21:15] done
[08.11.2022 18:21:15] Starting deconz
[08.11.2022 18:21:16] Starting jdownloader
[08.11.2022 18:21:17] Backup/Restore Complete. tar Return Value: 0
[08.11.2022 18:21:17] Backup / Restore Completed

 

and results in:

 

(screenshot of the resulting separate archive files)

 

I still need to adapt the restore part accordingly.
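
For anyone curious, the per-folder loop boils down to something like the following. This is only a minimal sketch of the idea - the paths, variable names and the missing exclusion/verification handling are my own assumptions, not the plugin's actual code:

<?php
// Minimal sketch: one tar archive per appdata folder (assumed paths/names).
$source      = '/mnt/cache/appdata';
$destination = '/mnt/disks/backup/appdata/2022-11-08@18.21';

foreach (glob($source . '/*', GLOB_ONLYDIR) as $folder) {
    $name    = basename($folder);                         // e.g. "deconz"
    $archive = $destination . '/CA_backup_' . $name . '.tar';

    // Mirror the logged command: cd into the folder and tar its contents
    $cmd = 'cd ' . escapeshellarg($folder) .
           ' && /usr/bin/tar -caf ' . escapeshellarg($archive) . ' . 2>&1';
    $output = [];
    exec($cmd, $output, $exitCode);

    echo ($exitCode === 0)
        ? "Backed up $name\n"
        : "Backup of $name FAILED (tar exit code $exitCode)\n";
}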

 

 

I also added an option to disable the docker warning (https://github.com/Squidly271/ca.backup2/issues/9).

On 11/7/2022 at 2:09 PM, CS01-HS said:

some containers (which were running and set to autostart) weren't restarted post-backup

Small update: I had this issue tonight with NPM. It was stopped successfully but not started again, although ca.backup did correctly issue the start. I'll add some more checks to ca.backup and test whether I can get more context when it happens.

 

In the meantime, I wonder why Andrew doesn't respond to any of my questions/issues. I guess it's due to lack of time? :)

If so, at least the tar exit code check would be important, since that issue breaks reliability.
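
For reference, such a check could be as simple as reading tar's exit code instead of discarding it. A rough sketch of my own (assumed paths, not the plugin's code), based on GNU tar's documented exit values:

<?php
// Rough sketch of a tar exit code check (assumed paths; not the plugin's code).
$cmd = 'cd ' . escapeshellarg('/mnt/cache/appdata') .
       ' && /usr/bin/tar -caf ' . escapeshellarg('/mnt/backup/CA_backup.tar') . ' . 2>&1';

exec($cmd, $output, $exitCode);

// GNU tar: 0 = success, 1 = some files differ/changed while reading, 2 = fatal error
switch ($exitCode) {
    case 0:  echo "Backup OK\n"; break;
    case 1:  echo "Backup finished with warnings (files changed while reading?)\n"; break;
    default: echo "Backup FAILED, tar exit code $exitCode\n" . implode("\n", $output) . "\n";
}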

6 minutes ago, KluthR said:

Had this issue tonight with NPM

Check what other dockers NPM relies on (specifically, any more complex sites you have configured). When I was running a site (via the Jitsi docker, I think), if it wasn't running then NPM would never start. If Jitsi was running, NPM would start fine. My guess is that CA is correctly starting NPM, but NPM is shutting itself down because something it relies on is not present.

 

A few notes: I had this a few years ago, but no longer needed Jitsi (if it was that), so I didn't pursue a solution. I also don't know whether CA Backup/Restore adheres to the Docker system's ordering and delay options when restarting containers.

 

It's easy to test - shut down the dockers NPM is linked to one at a time and restart NPM. If it doesn't restart with a specific docker stopped, it's likely that that docker is not (fully) running when NPM is restarted following a backup.
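
If you'd rather not click through the GUI, a quick throwaway script along these lines could run the same test. The container names are examples only - adjust them to whatever NPM is actually linked to on your system:

<?php
// Throwaway test sketch: stop each dependency, restart NPM, see if it stays up.
$npm          = 'npm';
$dependencies = ['jitsi', 'mariadb'];   // example names - replace with your own

foreach ($dependencies as $dep) {
    shell_exec('docker stop ' . escapeshellarg($dep));
    shell_exec('docker restart ' . escapeshellarg($npm));
    sleep(15);   // give NPM time to come up - or to shut itself down

    $running = trim((string) shell_exec(
        'docker inspect -f ' . escapeshellarg('{{.State.Running}}') . ' ' . escapeshellarg($npm)
    ));
    echo "$npm running with $dep stopped: $running\n";

    shell_exec('docker start ' . escapeshellarg($dep));   // bring the dependency back
}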

On 11/7/2022 at 8:09 AM, CS01-HS said:

some containers (which were running and set to autostart) weren't restarted post-backup

 

4 hours ago, KluthR said:

Had this issue tonight with NPM

 

4 hours ago, Cessquill said:

Check what other dockers NPM relies on

 

Interesting - the containers that didn't autostart use my VPN container's custom network.

 

I have the order right (VPN before the others) and a 60-second wait set on the VPN container, but I never considered whether ca.backup observes the wait time, since it always "just worked" – maybe it used to and now it doesn't.

 

If the plan is to move to container-specific tar files, that per-container delay is probably sufficient to make observance irrelevant.


Got a bit further (regarding: some containers sometimes don't start).

 

This time, it happened to mysql:

 

[09.11.2022 17:46:38] Starting mysql
[09.11.2022 17:46:39] Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: time="2022-11-09T17:46:39+01:00" level=fatal msg="failed to set IPv6 gateway while updating gateway: file exists": unknown
Error: failed to start containers: mysql

(the important error seems to be: "failed to set IPv6 gateway while updating gateway: file exists")

 

And I have no clue what that means. Maybe a 1-second wait between docker starts would do the trick?

 

This issue only occurs occasionally!

 

If anyone affected by this wants to get some more output while ca.backup is starting containers (maybe you get the same or other errors?):

 

  • Open the file: /usr/local/emhttp/plugins/ca.backup2/scripts/backup.php
  • Scroll to line 246
  • Replace "shell_exec("docker start ".$docker);" with:
    backupLog("Starting docker container info: ".shell_exec("docker start ".$docker." 2>&1"));

 

You should now see a line starting with "Starting docker container info: " for each starting container in the WebUI log - and maybe some more output if it silently fails.

 

I will add a check for whether the docker start returned any error and let the user know. One more improvement :)


I packed everything into an experimental/unofficial plg, which is simply an update. I don't know if it's the best option to publish it here - so: if anyone wants to test ALL my changes, please PM me.

 

So far:

 

  • Fixed tar error detection during backup
  • Improved the backup log format
  • Option to create separate backup archives
  • Option to hide the Docker warning message box
  • Check the docker start result and wait 2 seconds between each start/stop operation (because of the "failed to set IPv6 gateway" issue - which still needs interpretation by a Docker guru; see the sketch below)
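
A rough sketch of that last point, just to illustrate the idea - the helper name and container list are made up, and this is not the published plugin code:

<?php
// Illustration only: check the "docker start" output and pause between starts.
function startContainerChecked(string $name): void
{
    $output = trim((string) shell_exec('docker start ' . escapeshellarg($name) . ' 2>&1'));

    // On success, docker echoes the container name; anything else gets logged
    if ($output !== $name) {
        echo "Container $name may not have started: $output\n";
    }

    sleep(2); // small pause, hopefully avoiding the "failed to set IPv6 gateway" race
}

foreach (['deconz', 'jdownloader', 'mysql'] as $container) {
    startContainerChecked($container);
}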

 

On 11/7/2022 at 4:29 PM, CS01-HS said:

 

That's probably a different problem - my containers were fine aside from having to start them manually.

Found the problem. I had extra arguments in my docker config that failed to execute. I removed them and it worked again.

10 hours ago, KluthR said:

I packed everything into an experimental/unofficial plg, which is simply an update. If anyone wants to test ALL my changes, please PM me.

 

Amazing work mate!

Are you working on separate restore options as well?

Can we expect you to merge into the original master branch, or will you create your own to be added to Community Applications?

I have been anticipating these changes and am happy to do some testing if need be.


I might have run into another issue as well. It seems that when running on a schedule it will split the backup into two or sometimes three files, causing the space taken to be duplicated.

I have not been able to figure out what is causing this, but my hunch is that it does this when it fills up one disk and has to move to the next, or when the mover runs on the cache.

I have tried backing up directly to the array and also to the cache first, but I always get duplicate .tar files.

 

5 hours ago, IronBeardKnight said:

Are you working on separate restore options as well?

Please clarify - do you want to select which of the single archives to restore and which not?

 

5 hours ago, IronBeardKnight said:

Can we expect you to merge into the original master branch, or will you create your own to be added to Community Applications?

I want this merged, of course! As soon as Andrew has time to review everything (https://github.com/Squidly271/ca.backup2/pulls). I have already made changes on top of my changes, which currently prevents me from creating more PRs, but I'll sort that out later.

 

5 hours ago, IronBeardKnight said:

split the backup into two or sometimes three files

Could you show a screenshot?

Any last log entries?

On 11/11/2022 at 3:38 PM, KluthR said:

Please clarify - do you want to select which of the single archives to restore and which not?

Could you show a screenshot? Any last log entries?

I'll have to wait for a backup to run so I can get a screenshot of the duplicate issue for you.

Yes, restoring of single archives. For example, I only want to restore duplicate or nextcloud, etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop the container and replace things manually.
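
For what it's worth, a per-archive restore would conceptually just be the reverse of the per-folder backup - something like this sketch. The paths and the container name are examples of my own, and this is not an existing plugin feature:

<?php
// Sketch of a single-archive restore: stop one container, unpack its tar, start it again.
$container = 'nextcloud';
$appdata   = '/mnt/cache/appdata/' . $container;
$archive   = '/mnt/disks/backup/appdata/2022-11-08@18.21/CA_backup_' . $container . '.tar';

shell_exec('docker stop ' . escapeshellarg($container));

// Extract only this container's archive back into its appdata folder
exec('/usr/bin/tar -xf ' . escapeshellarg($archive) . ' -C ' . escapeshellarg($appdata) . ' 2>&1', $out, $rc);

if ($rc === 0) {
    shell_exec('docker start ' . escapeshellarg($container));
} else {
    echo "Restore of $container failed (tar exit code $rc):\n" . implode("\n", $out) . "\n";
}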

 

Looking forward to the merge and release. I would class this plugin as one of Unraid's core utilities, and its importance to the community is high. I have set up multiple Unraid servers for friends and family, and it's always first on my list of must-have things to be installed and configured.

Currently I'm using it in conjunction with duplicate for encrypted remote backups between a few different Unraid servers (a prerequisite of my time helping them set up Unraid etc. :)) - and it's way cheaper than Google Drive etc.

Being able to break backups apart into single archives, without having to rely on post-script extraction, saves SSD and HDD I/O and lifespan. Per docker container/folder would make uploading and restoring much easier and faster - for example, when doing a restore from remote, having to pull a single 80+ GB backup file can take a very long time and is very susceptible to interruption over only a 2 Mb/s line.

On 11/13/2022 at 9:29 PM, IronBeardKnight said:

I'll have to wait for a backup to run so I can get a screenshot of the duplicate issue for you.

 

Please see below a couple of screenshots of the duplicate issue that has me a bit stumped.
(screenshots showing the duplicated backup archives)

 

I have briefly been through the code that lives within the plugin, but nothing stood out as incorrect with regard to this issue.

"\flash\config\plugins\ca.backup2\ca.backup2-2022.07.23-x86_64-1.txz\ca.backup2-2022.07.23-x86_64-1.tar\usr\local\emhttp\plugins\ca.backup2\"

 

I found an interesting file with paths in it that led me to a backup log file, "/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log", but as it gets rewritten with each backup it only shows data for the 4:48am backup and not the 4:00am one, so that is not useful. I did not find any errors within this log either.

The fact that it's getting rewritten most likely means, in my opinion, that the plugin is having issues interpreting either the settings entered or the cron schedule that I have set.


 
