[Plugin] Appdata.Backup



Hey @KluthR

 

Thanks for your work on this and the new app!

 

I updated to the latest Unraid (6.12.6) and installed the new app as requested.

 

However, my backups are no longer working, and there are three issues I'm unsure about.

 

1: The app is complaining about backing up external volumes - I'm not sure exactly what this means.

2: I think it's related to number 1: it's also complaining about removing container mappings as they are "source paths" - but again, I'm unsure what this means exactly.

3: Perhaps related to 1 and 2, it also says the container doesn't have any volume to back up.

But if it removed a mapping due to the "source path" issue, then I guess that's the problem to fix?

 

I've "shared my debug log" with you, the ID is: 939bd04b-535a-47e5-85b0-4eb58aeef6df

 

 

 

 

[12.02.2024 10:21:19][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[12.02.2024 10:21:19][ℹ️][Main] Backing up from: /mnt/user/appdata/CloudBerryBackup, /mnt/user/appdata/pihole, /mnt/user/appdata/qbittorrent
[12.02.2024 10:21:19][ℹ️][Main] Backing up to: /mnt/user/Unraid_App_Data_Backups/ab_20240212_102119
[12.02.2024 10:21:19][ℹ️][Main] Selected containers: CloudBerryBackup, pihole, qbittorrent
[12.02.2024 10:21:19][ℹ️][Main] Saving container XML files...
[12.02.2024 10:21:19][ℹ️][Main] Method: Stop all container before continuing.
[12.02.2024 10:21:19][ℹ️][qbittorrent] Stopping qbittorrent... done! (took 6 seconds)
[12.02.2024 10:21:25][ℹ️][pihole] Stopping pihole... done! (took 5 seconds)
[12.02.2024 10:21:30][ℹ️][CloudBerryBackup] Stopping CloudBerryBackup... done! (took 1 seconds)
[12.02.2024 10:21:31][ℹ️][Main] Starting backup for containers
[12.02.2024 10:21:31][ℹ️][qbittorrent] Removing container mapping "/mnt/user/appdata/qbittorrent" because it is a source path (exact match)!
[12.02.2024 10:21:31][ℹ️][qbittorrent] Should NOT backup external volumes, sanitizing them...
[12.02.2024 10:21:31][⚠️][qbittorrent] qbittorrent does not have any volume to back up! Skipping. Please consider ignoring this container.
[12.02.2024 10:21:31][ℹ️][pihole] Should NOT backup external volumes, sanitizing them...
[12.02.2024 10:21:31][ℹ️][pihole] Calculated volumes to back up: /mnt/user/appdata/pihole/pihole, /mnt/user/appdata/pihole/dnsmasq.d
[12.02.2024 10:21:31][ℹ️][pihole] Backing up pihole...
[12.02.2024 10:22:32][ℹ️][pihole] Backup created without issues
[12.02.2024 10:22:32][ℹ️][pihole] Verifying backup...
[12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Removing container mapping "/mnt/user/appdata/CloudBerryBackup" because it is a source path (exact match)!
[12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Should NOT backup external volumes, sanitizing them...
[12.02.2024 10:22:42][⚠️][CloudBerryBackup] CloudBerryBackup does not have any volume to back up! Skipping. Please consider ignoring this container.
[12.02.2024 10:22:42][ℹ️][Main] Set containers to previous state
[12.02.2024 10:22:42][ℹ️][CloudBerryBackup] Starting CloudBerryBackup... (try #1) done!
[12.02.2024 10:22:44][ℹ️][pihole] Starting pihole... (try #1) done!
[12.02.2024 10:22:46][ℹ️][qbittorrent] Starting qbittorrent... (try #1) done!
[12.02.2024 10:22:49][ℹ️][Main] Checking retention...
[12.02.2024 10:22:49][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[12.02.2024 10:22:49][ℹ️][Main] ❤️

 

17 minutes ago, Deadringers said:

2: I think it's related to number 1: it's also complaining about removing container mappings as they are "source paths" - but again, I'm unsure what this means exactly.

For some reason your source paths are set to individual container folders...

 

Backing up from: /mnt/user/appdata/CloudBerryBackup, /mnt/user/appdata/pihole, /mnt/user/appdata/qbittorrent

 

See my previous post for what the default should be, i.e. the top-level appdata directory/ies.


 

Re: Warnings for non-existent volumes:

 

On 2/10/2024 at 1:36 PM, Kilrah said:

There is no internal volume, and external volumes are not backed up, so there's nothing to back up for that container.

 

There's the xml template, or is that backed up even if the container's skipped?

 

On 2/10/2024 at 3:50 AM, Kilrah said:

Warning's a bit annoying though; it's not a problem that there isn't anything to back up, and ignoring means unnecessary manual intervention times 20 for me, plus more any time something new that doesn't use volumes is installed.

 

That and consider the case of a container that doesn't use volumes:

  • We set it to skip
  • A subsequent version does use volumes
  • We won't know not to skip it until it's too late

 

Backup Order

 

In the new version, with Backup type set to "stop, backup, start for each container":

For grouped containers:

  • They're stopped in reverse of Start order (good)
  • They're backed up in reverse of Start order (fine either way)
  • They're started in Start order (good)

For un-grouped containers:

  • They're stopped, backed up, started in reverse of Start order (bug?)
3 hours ago, Kilrah said:

For some reason your source paths are set to individual container folders...

 

Backing up from: /mnt/user/appdata/CloudBerryBackup, /mnt/user/appdata/pihole, /mnt/user/appdata/qbittorrent

 

See my previous post for what the default should be, i.e. the top-level appdata directory/ies.

Ahh, this was it - thanks!

4 hours ago, CS01-HS said:

 

Re: Warnings for non-existent volumes:

 

 

There's the xml template, or is that backed up even if the container's skipped?

 

 

That and consider the case of a container that doesn't use volumes:

  • We set it to skip
  • A subsequent version does use volumes
  • We won't know not to skip it until it's too late

 

Backup Order

 

In the new version, with Backup type set to "stop, backup, start for each container":

For grouped containers:

  • They're stopped in reverse of Start order (good)
  • They're backed up in reverse of Start order (fine either way)
  • They're started in Start order (good)

For un-grouped containers:

  • They're stopped, backed up, started in reverse of Start order (bug?)

I'm also noticing this behavior where my containers are not started in the proper order. For example, I have binhex-delugevpn set to start before binhex-sabnzbd (which uses Deluge's VPN network), but Sabnzbd attempts to start first and is missing the proper network (Deluge does not start until after Sabnzbd).

(Screenshots attached.)

16 minutes ago, richsm said:

I'm also noticing this behavior where my containers are not started in the proper order. For example, I have binhex-delugevpn set to start before binhex-sabnzbd

 

The ordering bug (?) only affects containers with no group assignments.

 

Regardless, you should group these. Where container B depends on container A:

  • without groups: A is stopped for backup, which may cause errors with B, before B is stopped for backup
  • with groups: B is stopped, then A is stopped, then both are backed up, then A is started, then B is started
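Sketched in code, the grouped sequence described above would look roughly like this (a hypothetical illustration with made-up function names, not the plugin's actual implementation):

```python
# Hypothetical sketch of the grouped stop/backup/start sequence; the
# function, parameter, and container names are illustrative, not plugin code.
def run_group_backup(group, stop, backup, start):
    """`group` is listed in start order: dependencies first, dependents last."""
    for name in reversed(group):   # stop dependents before their dependencies
        stop(name)
    for name in reversed(group):   # back up (order doesn't matter here)
        backup(name)
    for name in group:             # start dependencies before dependents
        start(name)

calls = []
run_group_backup(
    ["delugevpn", "sabnzbd"],      # sabnzbd depends on delugevpn's network
    stop=lambda n: calls.append(("stop", n)),
    backup=lambda n: calls.append(("backup", n)),
    start=lambda n: calls.append(("start", n)),
)
```

With that input, `calls` records sabnzbd stopped before delugevpn, but delugevpn started before sabnzbd - the behavior the bullets above describe for grouped containers.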
1 minute ago, CS01-HS said:

 

The ordering bug (?) only affects containers with no group assignments.

 

Regardless, you should group these. Where container B depends on container A:

  • without groups: A is stopped for backup, which may cause errors with B, before B is stopped for backup
  • with groups: B is stopped, then A is stopped, then both are backed up, then A is started, then B is started

I'm a bit of a noob here, but I don't see any way to group containers in the Unraid interface. I just assumed that since one is assigned to use the network of the other, they would be "grouped". Any tips appreciated.

41 minutes ago, richsm said:

I'm a bit of a noob here, but I don't see any way to group containers in the Unraid interface.

When you click a container's name in the Appdata Backup settings, the options for that container expand, including grouping.


Hello. I'm getting a few of these warnings: 

[⚠️][postgresql14] postgresql14 does not have any volume to back up! Skipping. Please consider ignoring this container.

[⚠️][Redis] Redis does not have any volume to back up! Skipping. Please consider ignoring this container.

[⚠️][duckdns] duckdns does not have any volume to back up! Skipping. Please consider ignoring this container.

 

While I do not have an appdata directory for Redis or DuckDNS (not sure why not), I do have one for Postgresql14.

 

(Screenshot attached.)

 

1. Why is this happening with Postgresql14? 

2. Without including them somehow, would I be unable to restore Redis and DuckDNS with their settings?

3. In the appdata backup directory I see files for old containers that I removed years ago. Why are these being backed up? 

4. I assume the log note "Should NOT backup external volumes, sanitizing them..." that I get for every container is unimportant and can be ignored?

 

Debug ID: 2a4e4db7-239d-4f7d-a94f-00fc8562fb69

 

Thanks.

 

 

 

6 hours ago, CS01-HS said:

There's the xml template, or is that backed up even if the container's skipped?

Exactly.

 

6 hours ago, CS01-HS said:

Backup Order

 

In the new version, with Backup type set to "stop, backup, start for each container":

The reason is the method: "Stop, backup and start" does not have any preferred order, to be honest. It's also reversed here because it does not really matter. If you all find it clearer to use the exact order as displayed for that method, I could re-reverse it for that method and add a small notice.

1 hour ago, CS01-HS said:

The ordering bug (?) only affects containers with no group assignments.

Same thing as above.

 

27 minutes ago, adminmat said:

Debug ID: 2a4e4db7-239d-4f7d-a94f-00fc8562fb69

You are missing "/mnt/cache/appdata" as a 2nd value for "Allowed source paths", which is currently only "/mnt/user/appdata". As it is now, all container volume mappings are considered "external".
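A rough sketch of how that classification presumably works, based on the log messages in this thread (assumed logic and names, not the plugin's source):

```python
# Illustrative sketch (assumed logic, not the plugin's actual source) of how
# a container volume mapping might be checked against "Allowed source paths".
def classify_mapping(mapping, allowed_sources):
    for src in allowed_sources:
        if mapping == src:
            # Exactly equal to a source path: backing up that source already
            # covers it, so the mapping is dropped ("exact match" in the log).
            return "removed"
        if mapping.startswith(src.rstrip("/") + "/"):
            return "internal"   # inside an allowed source: gets backed up
    return "external"           # skipped unless external backup is enabled

allowed = ["/mnt/user/appdata"]
classify_mapping("/mnt/user/appdata/pihole", allowed)   # "internal"
classify_mapping("/mnt/cache/appdata/redis", allowed)   # "external"
classify_mapping("/mnt/user/appdata", allowed)          # "removed"
```

This mirrors both symptoms above: a source path set to a container folder triggers the "exact match" removal, and a /mnt/cache mapping with only /mnt/user allowed is treated as external.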

57 minutes ago, KluthR said:

You are missing "/mnt/cache/appdata" as a 2nd value for "Allowed source paths", which is currently only "/mnt/user/appdata". As it is now, all container volume mappings are considered "external".

 

Ok, thanks. I added /mnt/cache/appdata in the template. I still get the note "Should NOT backup external volumes, sanitizing them..." for every container. I assume this can be ignored.

 

updated debug log: 7b56de88-4ccd-4c1d-83a8-48bd77ffdf47

 

Do we know why old, deleted containers are still being backed up?

32 minutes ago, Kilrah said:

Well, if it's old containers and you don't care about the data, why don't you just delete it?

 

Maybe I'm confused. The containers are long since deleted and there are no orphan images. There is no appdata for these deleted containers. They only show up in the backup directory created by the Appdata Backup application. 

 

In that directory these files are created upon running the backup:
my-chia.xml.bak

my-chia.xml

my-pihole.xml.bak
my-pihole.xml

 

All of which have been deleted. Do I need to delete and reinstall my Docker image?

8 minutes ago, adminmat said:

In that directory these files are created upon running the backup:
my-chia.xml.bak

my-chia.xml

my-pihole.xml.bak
my-pihole.xml

Your post wasn't clear that it was XMLs you were talking about; it sounded like you had appdata archives for those old containers.

 

They should be in Previous apps and you can delete them from there.

 

The .xml.bak ones will have been backup copies made by... who knows what, and they will not appear in Previous Apps; you can clean them up yourself from the flash drive in /boot/config/plugins/dockerMan/templates-user

 

But yeah, since someone else had the same question on Discord earlier, it seems the plugin switched to copying the whole directory contents in the recent update.

 


Hello,

 

First, thanks for the continued work on this plugin!

 

I had an error throw today:

 

[12.02.2024 07:01:28][ℹ️][FoundryVTT] No stopping needed for FoundryVTT: Not started!
[12.02.2024 07:01:28][][FoundryVTT] '/tmp/fvtt' does NOT exist! Please check your mappings! Skipping it for now.
[12.02.2024 07:01:30][ℹ️][FoundryVTT] Should NOT backup external volumes, sanitizing them...
[12.02.2024 07:01:30][ℹ️][FoundryVTT] Calculated volumes to back up: /mnt/user/appdata/FoundryVTT
[12.02.2024 07:01:30][ℹ️][FoundryVTT] Backing up FoundryVTT...
[12.02.2024 07:01:54][ℹ️][FoundryVTT] Backup created without issues
[12.02.2024 07:01:54][ℹ️][FoundryVTT] Verifying backup...
[12.02.2024 07:02:18][ℹ️][FoundryVTT] Starting FoundryVTT is being ignored, because it was not started before (or should not be started).

 

The rest of the data backed up fine. Just wanted to double-check that this isn't cause for greater concern. I'm OK with the bit about '/tmp/fvtt'.

 

Thanks for reading!

 


Great job on the new grouped containers feature. It was much needed for containers with dependencies or external databases. It works flawlessly.

 

However, I did run into a minor issue with NFS Docker volumes that were not properly detected by the plugin and resulted in log errors. It didn't cause an actual problem beyond the log errors, so it's not a big deal in my case, but I'll detail it below.

 

My plex container has a few Docker volumes (named volumes) set up using the NFS volume driver (because my media resides on a different server): https://docs.docker.com/storage/volumes/#create-a-service-which-creates-an-nfs-volume

Such a volume automatically mounts a remote NFS path inside the container.

 

The plugin settings don't detect them at all, and they are not listed under "Configured Volumes" for plex. However, when I run the backup, I get the following errors, where they do get detected but the plugin can't figure out the host path (there is no host path, as it's not a bind mount).

 

[06.02.2024 22:25:52][][plex] '3dmovies' does NOT exist! Please check your mappings! Skipping it for now.
[06.02.2024 22:25:55][][plex] '4kmovies' does NOT exist! Please check your mappings! Skipping it for now.
[06.02.2024 22:25:58][][plex] 'movies' does NOT exist! Please check your mappings! Skipping it for now.
[06.02.2024 22:26:00][][plex] 'tvshows' does NOT exist! Please check your mappings! Skipping it for now.
[06.02.2024 22:26:03][][plex] 'music' does NOT exist! Please check your mappings! Skipping it for now.

 

As I mentioned earlier, it didn't end up causing an actual issue for me because the paths were skipped, and that would be my preference anyway since those media paths are remote.

 

For reference, the inspect for one of those volumes looks like this:

# docker inspect movies
[
    {
        "CreatedAt": "2024-02-12T03:40:01-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/movies/_data",
        "Name": "movies",
        "Options": {
            "device": ":/mnt/user/Movies",
            "o": "addr=192.168.1.10,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
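Given inspect output like the above, the detection the plugin would need might look roughly like this (the helper name and the list of remote driver types are my assumptions, not plugin code):

```python
import json

# Sketch of spotting a remote named volume from `docker volume inspect`
# output, so it could be skipped silently rather than warned about.
inspect_json = """
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/movies/_data",
        "Name": "movies",
        "Options": {
            "device": ":/mnt/user/Movies",
            "o": "addr=192.168.1.10,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
"""

def is_remote_volume(volume):
    # The "local" driver with a type=nfs/cifs option mounts a remote share;
    # such a volume has no host path worth backing up.
    opts = volume.get("Options") or {}
    return opts.get("type") in ("nfs", "nfs4", "cifs")

volume = json.loads(inspect_json)[0]
is_remote_volume(volume)  # True -> skip instead of "does NOT exist" warning
```

Note the subtlety this illustrates: the Driver is still "local", so only the mount options reveal that the data is remote.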

 


That's interesting. So those are volumes with an optional driver (NFS in your case). I bet those were not set up via Unraid's UI? Unraid creates bind mounts, and that is what the plugin reads (through Unraid's dockerMan, not from Docker directly).

 

The reason they are not being shown in the UI: the determination function is the same, but if it is not driven from a backup process, the "xxx does NOT exist" message is discarded silently.

 

I'll have to play with this. Unraid's dockerMan gives only the name/path of the used volume, not any info about the type.


Right, the volumes were created on the command line, but Unraid's GUI accepts them as the host path. Same with regular named volumes: if the host path does not begin with a forward slash, Docker will use a named volume instead of a bind mount. Unraid's GUI simply passes that value to docker run and does not interfere with the outcome.
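That forward-slash rule can be expressed as a one-liner (illustrative only):

```python
# The rule described above: Docker treats a host-path value beginning with
# "/" as a bind mount; anything else is interpreted as a named volume.
def mount_kind(host_path: str) -> str:
    return "bind" if host_path.startswith("/") else "volume"

mount_kind("/mnt/user/appdata/plex")  # "bind"
mount_kind("movies")                  # "volume"
```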

 

Here's what the volume setting looks like:

(Screenshot attached.)

 


An exception option for Docker containers such as ClamAV would be great, because it is expected to be running. If it is still running while the backup runs, that is intended and does not require a warning.

 

Event: Appdata Backup
Subject: [AppdataBackup] Warning!
Description: Please check the backup log!
Importance: warning

NOT stopping ClamAV because it should be backed up WITHOUT stopping!

 


You probably want to set your notification level to error instead - because yes, most things that should probably be "info" (since they result from a user's own settings) are currently at "warning" level.

