[Plugin] CA Appdata Backup / Restore v2


Squid

Recommended Posts

12 hours ago, Abe677 said:

Do I need to fix this appdata location problem? If so, is the only solution re-installing each Docker container while paying more attention to where the appdata folder ends up?

You should definitely fix that. Where are your appdata folders "spread around"?

 

Usually one would want all appdata under `/mnt/user/appdata` (or `/mnt/cache/appdata`, only if you have a cache pool). When you create a new container, you need to make sure `/config` (or whatever the mount point is called in the particular image) points to `/path/to/appdata/<name of container>`. This usually happens by default when installing from CA (Community Applications), i.e. not directly from e.g. Docker Hub.
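
To make the mapping concrete, here's a rough `docker run` equivalent of what a CA template sets up for you (the image name and port are just placeholders, not taken from any specific template):

```bash
# Illustrative only: the key part is the -v line binding the container's
# /config to its own folder under appdata on the host.
docker run -d \
  --name=nextcloud \
  -v /mnt/user/appdata/nextcloud:/config \
  -p 443:443 \
  lscr.io/linuxserver/nextcloud
```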

 

Go to `/Settings/DockerSettings` in the web UI and check that `Default appdata storage location` is what you expect it to be. If not, choose `Enable Docker` -> `No` to shut down the Docker service and then you'll be able to change that path. After that, enable Docker again.
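
If you'd rather check from a terminal, the same setting is kept in `docker.cfg` on the flash drive (the path and key name below are what I've seen on my own install, so treat them as assumptions):

```bash
# Print the stored default appdata location (key name may vary by Unraid version)
grep -i APP_CONFIG /boot/config/docker.cfg
# Example output: DOCKER_APP_CONFIG_PATH="/mnt/user/appdata/"
```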

Link to comment

My Docker default is set to "/mnt/user/docker/appdata/". The following containers are using this path:

 

mariadb (/mnt/user/docker/appdata/)

PlexMediaServer (/mnt/user/docker/appdata/)
QDirStat (/mnt/user/docker/appdata/)
nextcloud (/mnt/user/docker/appdata/)
binhex-krusader (/mnt/user/docker/appdata/)
Handbrake (/mnt/user/docker/appdata/)
 

The following have no appdata path, as far as I can tell:

 

Redis (none)
OpenSpeedTest (none)
db-backup (none)

 

The following have an appdata path different from the default:

 

bookstack (/mnt/user/bookstack/)
paperless-ng (/mnt/user/appdata/paperless-ng/data)
Nginx (/mnt/user/NGINX/)
Influxdb (/mnt/user/appdata/influxdb)

 

I suspect I was following various tutorials to set these up and wasn't aware all the container "appdata" data needed to be in one place.

 

The good news is that the 4 containers that are not set up correctly could be nuked and set up properly. I suppose if I was clever I could shut those containers down, move that data to the proper location, change the container setting, and they should start. I'll have to think about this.

Link to comment
16 hours ago, Abe677 said:

I suppose if I was clever I could shut those containers down, move that data to the proper location, change the container setting, and they should start.

That's exactly what you should do :) 

Never create directories directly in `/mnt/user`, because Unraid will treat them as top-level user shares, which causes all sorts of problems.
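
A rough sketch of that move for one of them, using bookstack as the example (container name and paths taken from your list; verify everything before deleting the old data):

```bash
docker stop bookstack                                          # stop the container first
mkdir -p /mnt/user/appdata/bookstack                           # new per-container folder
rsync -avh /mnt/user/bookstack/ /mnt/user/appdata/bookstack/   # copy the data over
# Then edit the container on the Docker tab so its config host path points to
# /mnt/user/appdata/bookstack, apply, and confirm it starts cleanly before
# removing the old /mnt/user/bookstack share.
```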

Link to comment
17 hours ago, Abe677 said:

bookstack (/mnt/user/bookstack/)
paperless-ng (/mnt/user/appdata/paperless-ng/data)
Nginx (/mnt/user/NGINX/)
Influxdb (/mnt/user/appdata/influxdb)

 

Most of the apps have a /config path entry defined.  The host path is automatically created using whatever you've defined in Settings - Docker for the default appdata path. 

 

It's when there are additional path mappings that the system can't automatically fill them out, because it has no idea what those paths are supposed to be (i.e. are they additional config entries that should be in appdata, or are they paths to media, etc.).

 

Also, if you change the settings at some point (or, say, you've had to delete docker.cfg on the flash drive for some reason), the system will not adjust any paths for anything already installed (or in previously installed templates).

Link to comment

Hello guys!

 

I am very green with Unraid, and I've recently completed the setup of my server with all the containers and VMs I need.

 

I have installed this plugin, but it seems like I don't fully understand how it works, so I would like to describe my experience.

 

1. I have made a new share called "backup1".

2. I have set my Appdata Share (Source) to /mnt/user/appdata/  (where the actual appdata is)

3. I have set my destination folder to "/mnt/disk1/backup1/" (where I want the backup to be generated).

4. I have set all Docker containers to stop except Tailscale.

5. Then I pressed Apply and successfully generated a backup (I didn't touch anything while the backup was running).

6. Then I deleted one Docker container and changed several settings on two other containers.

7. Then I went to the Restore Appdata tab of the plugin and restored the previously generated backup (again, I didn't touch anything until the process was finished).

8. After the restore, the previously deleted Docker container didn't reappear, and the settings I had changed in the other containers didn't change back.

 

So I don't fully understand what this plugin does if it doesn't restore lost or changed appdata. Am I doing something wrong?

Link to comment

I think it is possible you are confusing what is in appdata (the containers' working data) with the container settings that you set on a container's page on the Docker tab.

 

The latter are NOT part of the appdata backup; they are stored as templates on the flash drive. For instance, the way to get back the deleted container would have been via Apps -> Previous Apps.
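
For context, the two live in different places (the template path below is what I've seen on my own installs, so treat it as an assumption):

```bash
# Container working data - this is what the appdata backup plugin archives:
ls /mnt/user/appdata/
# Container settings/templates - stored on the flash drive and restored via
# Apps -> Previous Apps (path as seen on my installs):
ls /boot/config/plugins/dockerMan/templates-user/
```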

Link to comment
2 hours ago, Rocka374 said:

3. I have set my destination folder to "/mnt/disk1/backup1/" (where I want the backup to be generated).

 

@itimpi mentioned some important things, but I would also add that you shouldn't back up data to a specific disk; use the /mnt/user/ path instead. (Don't use /mnt/user0.)

As always, @SpaceInvaderOne has a good video. It's for an older version of UNRAID, but is still accurate. 

 

Link to comment
On 11/12/2020 at 1:00 PM, Squid said:

If either /tmp/ca.backup2/tempFiles/backupInProgress or /tmp/ca.backup2/tempFiles/restoreInProgress exists then a backup / restore is running

I realize this is an old post, but I have a backup that has been going for 6 hours now, and it normally does not take anywhere near that. I see the backup-in-progress and verify-in-progress files in that temp dir, but is there any way to see whether it hung or there are any other issues? The UI is still saying it's verifying, and the logs align with that as well.
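
In case it helps anyone else reading, here's what I've been checking from a terminal so far (flag-file locations taken from the quoted post above):

```bash
# If these flag files are gone, the backup/verify run has finished or aborted:
ls -l /tmp/ca.backup2/tempFiles/
# Check whether a tar process is still doing work:
pgrep -a tar
```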

Link to comment
1 hour ago, itimpi said:

I think it is possible you are confusing what is in appdata (the containers' working data) with the container settings that you set on a container's page on the Docker tab.

 

The latter are NOT part of the appdata backup; they are stored as templates on the flash drive. For instance, the way to get back the deleted container would have been via Apps -> Previous Apps.

 

So, if I have My Servers (which has generated a backup of my flash drive) and this plugin (which has generated a backup of my appdata), will I be able to restore all my Docker containers if my only disk fails (I have one 256 GB NVMe cache drive and one 1 TB SSD)?

 

The other thing I care about is my Ubuntu VM, which I have backed up with the VM Backup plugin.

 

So for my use case I have to use three plugins to back up my important things :)

Link to comment
22 hours ago, Rocka374 said:

So, if I have My Servers (which has generated a backup of my flash drive) and this plugin (which has generated a backup of my appdata), will I be able to restore all my Docker containers if my only disk fails (I have one 256 GB NVMe cache drive and one 1 TB SSD)?

Correct

Link to comment
23 hours ago, jbrown705 said:

I have a backup that has been going for 6 hours now, and it normally does not take anywhere near that. I see the backup-in-progress and verify-in-progress files in that temp dir, but is there any way to see whether it hung or there are any other issues? The UI is still saying it's verifying, and the logs align with that as well.

The Progress tab should tell you where and what it's doing.

Link to comment
9 hours ago, Squid said:

The Progress tab should tell you where and what it's doing.

I checked there first, and the Progress tab and the temp file dir match statuses. However, I was trying to check whether the verifying step had hung. It did finish, but for some reason it took nearly 10 hours this time. I was just looking for a way to see what was happening during that verification stage. Not sure if it's possible based on how the code works, but do you think it's realistic to ask for a progress bar to be added to the status tab for each step? Then people would be able to see whether the bar is not progressing or just moving slowly.
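
In the meantime, a hedged workaround I've been considering (not a plugin feature) is to watch how far the verifying tar process has read through the archive via /proc:

```bash
# Find the tar process doing the verification:
TARPID=$(pgrep -o tar)
# See which file descriptor points at the backup archive:
ls -l /proc/$TARPID/fd
# Watch the read offset grow (replace 3 with the fd number from the listing above):
watch -n 5 "grep pos /proc/$TARPID/fdinfo/3"
```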

Link to comment
4 hours ago, rcrh said:

Is it possible to use this utility to restore part of my appdata?  I'm getting errors on one of my dockers while the others are running fine.  I'd like to restore the appdata for just that docker.

Thanks in advance.

You need to manually use an applicable tar command. This long-standing feature request is bubbling upwards.
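
A hedged sketch of that manual approach (the archive name, path, and compression are assumptions, and `myapp` is a placeholder for the affected container; list the archive first and adjust to what you actually see):

```bash
docker stop myapp                                      # stop the broken container first
# Confirm how that app's folder is named inside the backup archive:
tar -tzf /path/to/backups/CA_backup.tar.gz | grep -i myapp | head
# Extract only that folder back into appdata (pattern based on the listing above):
cd /mnt/user/appdata
tar -xzf /path/to/backups/CA_backup.tar.gz --wildcards '*myapp/*'
```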

Link to comment
  • 2 weeks later...

Hi all, I'm new here.
I've been using Unraid for about three quarters of a year, so I'm still relatively new to the topic.

 

I have used the CA Appdata Backup / Restore v2 plugin since the beginning.
Today I noticed that no backup has been created for about three weeks.
When I checked what was going on, I could no longer find the plugin.
Does anyone have an idea why the plugin suddenly disappeared, or where and in which log I could look up what happened?

I have now simply reinstalled the plugin, and it now shows the following error:

Warning: syntax error, unexpected '^' in Unknown on line 1 in /usr/local/emhttp/plugins/dynamix/include/Helpers.php on line 251


How should I proceed to find out the cause of the disappearance and the current error? 

Thanks

Link to comment

I am seeing a strange set of IPv6 errors logged while the backups are running. The strange thing is that I do not have IPv6 enabled on any of my interfaces. The problem only comes up while Appdata Backup is running, and it repeats itself several times during the run. Has anybody seen anything like this before?

 

Feb 23 05:09:50 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface veth98d5e7d.IPv6 with address fe80::34d5:1dff:feb4:1d7f.
Feb 23 05:09:50 WrightHome avahi-daemon[9801]: New relevant interface veth98d5e7d.IPv6 for mDNS.
Feb 23 05:09:50 WrightHome avahi-daemon[9801]: Registering new address record for fe80::34d5:1dff:feb4:1d7f on veth98d5e7d.*.
Feb 23 05:09:52 WrightHome kernel: eth0: renamed from veth70d7098
Feb 23 05:09:52 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8e17035: link becomes ready
Feb 23 05:09:52 WrightHome kernel: br-ae9578e3b8f7: port 12(veth8e17035) entered blocking state
Feb 23 05:09:52 WrightHome kernel: br-ae9578e3b8f7: port 12(veth8e17035) entered forwarding state
Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered disabled state
Feb 23 05:09:54 WrightHome kernel: device vethd6b7cf1 entered promiscuous mode
Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered forwarding state
Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered disabled state
Feb 23 05:09:54 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface veth8e17035.IPv6 with address fe80::803f:8cff:fe7c:d91c.
Feb 23 05:09:54 WrightHome avahi-daemon[9801]: New relevant interface veth8e17035.IPv6 for mDNS.
Feb 23 05:09:54 WrightHome avahi-daemon[9801]: Registering new address record for fe80::803f:8cff:fe7c:d91c on veth8e17035.*.
Feb 23 05:09:57 WrightHome kernel: eth0: renamed from veth904246f
Feb 23 05:09:57 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd6b7cf1: link becomes ready
Feb 23 05:09:57 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
Feb 23 05:09:57 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered forwarding state
Feb 23 05:09:59 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface vethd6b7cf1.IPv6 with address fe80::385b:94ff:fecf:1878.
Feb 23 05:09:59 WrightHome avahi-daemon[9801]: New relevant interface vethd6b7cf1.IPv6 for mDNS.
Feb 23 05:09:59 WrightHome avahi-daemon[9801]: Registering new address record for fe80::385b:94ff:fecf:1878 on vethd6b7cf1.*.
Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered disabled state
Feb 23 05:09:59 WrightHome kernel: device vethfc32397 entered promiscuous mode
Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered forwarding state
Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered disabled state
Feb 23 05:10:02 WrightHome kernel: eth0: renamed from veth02cb1ad
Feb 23 05:10:02 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfc32397: link becomes ready
Feb 23 05:10:02 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
Feb 23 05:10:02 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered forwarding state
Feb 23 05:10:04 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface vethfc32397.IPv6 with address fe80::c867:96ff:fe77:e50c.
Feb 23 05:10:04 WrightHome avahi-daemon[9801]: New relevant interface vethfc32397.IPv6 for mDNS.
Feb 23 05:10:04 WrightHome avahi-daemon[9801]: Registering new address record for fe80::c867:96ff:fe77:e50c on vethfc32397.*.
Feb 23 05:10:04 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered blocking state
Feb 23 05:10:04 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered disabled state
Feb 23 05:10:04 WrightHome kernel: device vetha38f422 entered promiscuous mode
Feb 23 05:10:08 WrightHome kernel: eth0: renamed from veth8e2e7e3
Feb 23 05:10:08 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha38f422: link becomes ready
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered blocking state
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered forwarding state
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered blocking state
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered disabled state
Feb 23 05:10:08 WrightHome kernel: device veth03df597 entered promiscuous mode
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered blocking state
Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered forwarding state
Feb 23 05:10:09 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered disabled state

 

wrighthome-diagnostics-20220223-1033.zip

Link to comment

Hello, I've been using this plugin without issue for several months, but last night I ran into a problem. My weekly backup happened to start at the same time that a parity check was running; this caused the backup to run so slowly that it was still going 5 hours later when I woke up (so all my containers were still shut down). I paused the parity check, and the backup and auto-update completed within 20 minutes.

 

I suggest adding an option that lets us select what to do if the scheduled backup time arrives while a parity check is running. Likely options would be: pause the parity check and resume it afterwards, skip the backup, or delay the backup (perhaps more). I looked for an option like this but couldn't find one.

 

Cheers!

Link to comment

@Squid Hey Squid, I'm currently still running the v1 Backup and Restore app and I really like it: when I need to recover a config file or an individual file, I can simply hop into the backup folder and grab the file without having to restore the whole backup, which I believe is what the v2 app required when I originally moved over to it. You mentioned in the first post:

 

"Due to some fundamental problems with XFS / BTRFS, the original version of Appdata Backup / Restore became unworkable and caused lockups for many users.  Development has ceased on the original version, and is now switched over to this replacement."

 

Can you explain exactly what the issues with the v1 app were and how they caused problems? Did it ever crash Unraid for any users? I'm currently troubleshooting my array, and my only warning from the community Fix Common Problems plugin is that I still run the v1 plugin... I'd love to know more.

Link to comment
2 hours ago, Renegade605 said:

Hello, I've been using this plugin without issue for several months, but last night I ran into a problem. My weekly backup happened to start at the same time that a parity check was running; this caused the backup to run so slowly that it was still going 5 hours later when I woke up (so all my containers were still shut down). I paused the parity check, and the backup and auto-update completed within 20 minutes.

 

I suggest adding an option that lets us select what to do if the scheduled backup time arrives while a parity check is running. Likely options would be: pause the parity check and resume it afterwards, skip the backup, or delay the backup (perhaps more). I looked for an option like this but couldn't find one.

 

Cheers!

I might look into what would be needed in the Parity Check Tuning plugin to add such a feature there. Since that plugin already handles intelligent pause/resume of parity checks, it feels like the best place to add it. I recently added an option to that plugin to pause a parity check while mover is running, so this seems like a similar option. I would just need to work out the best way to detect whether CA Backup is installed (and if not, suppress offering this as an option) and whether a backup is currently running (see the sketch below).
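
Something along these lines, where the flag-file paths come from earlier in this thread and the plugin directory name is an assumption on my part:

```bash
# Hedged sketch: is CA Backup v2 installed, and is a backup/restore running?
if [ -d /usr/local/emhttp/plugins/ca.backup2 ]; then
    if [ -f /tmp/ca.backup2/tempFiles/backupInProgress ] || \
       [ -f /tmp/ca.backup2/tempFiles/restoreInProgress ]; then
        echo "CA Backup is busy - hold off on resuming the parity check"
    else
        echo "CA Backup installed but idle"
    fi
else
    echo "CA Backup not installed - do not offer the option"
fi
```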

 

Any thoughts on whether this would be a sensible way to handle this?

Link to comment
  • Squid locked this topic