[Plugin] Appdata.Backup



1 minute ago, KluthR said:

No, that can't be the case, since it's a completely new feature ;)

 

Open the per-container settings by clicking on the container's name. You will find it there.

If it wasn't explicitly enabled, it should be off. I'm curious what its current state is.

 

Ah, maybe it was just a backup to an external drive.

 

Regardless, the setting is set to "No". It's kind of hard to read with the coloring issues.

 

[screenshot attached]

38 minutes ago, KluthR said:

There will be a new update within the next 10 minutes. Could you update and run a manual backup and share the debug log again?

 

That's definitely much better. Exclusions appear to be working as expected, as is the "Save external volumes" setting. The coloring is better as well. 

 

I had a different issue when it tried to back up Grafana. The debug log is attached. I'm not sure if it's app related or something with my configuration.

ab.debug.log

4 hours ago, KluthR said:

You should exclude these for Grafana as well:

 

/etc
/proc
/var/run/docker.sock
/var/run/utmp
/sys

 

since those paths are mapped into the container and would be backed up too (with external volume backup enabled).
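When external volume backup is disabled, the plugin's debug logs (seen later in this thread) show it "sanitizing" the volume list down to host paths under the appdata root. A minimal shell sketch of that idea — the mappings and appdata path here are made up for illustration, not taken from the plugin's code:

```shell
# Hypothetical volume list in "hostPath:containerPath" form.
appdata="/mnt/cache/appdata"
mappings="/mnt/cache/appdata/grafana:/var/lib/grafana
/etc:/etc:ro
/proc:/proc:ro
/var/run/docker.sock:/var/run/docker.sock"

# Keep only host paths under the appdata root; system paths such as
# /etc and /proc are "external" and get dropped.
internal=$(printf '%s\n' "$mappings" | cut -d: -f1 | grep "^$appdata")
echo "$internal"
```

With external volume backup enabled, no such filter applies, which is why host system paths like /etc would otherwise land in the tar and need per-container excludes.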

 

When I try to exclude my Grafana data directory, it slows to a crawl. The .tar was growing slowly as time went on, and there was I/O activity on the disks. The Grafana tar was only 500 MB after 20 minutes of running; Plex, which is 32 GB and has many more files, takes about 90 seconds. I just turned off the external part on Grafana and it's working well now, taking about a minute.

 

*EDIT*

I wasn't paying attention with Grafana; I didn't realize how the Docker path mappings were configured. I was excluding my DBs, but there were other container paths pointing to the root Unraid file system. I didn't catch that even though you pointed it out, but now I get it. The plugin is working fine for me, nothing out of the ordinary.

Edited by darcon
10 hours ago, darcon said:

pointing to the root Unraid file system.

I saw those mappings (/:/rootfs). They end up empty in the final tar command. With the latest plugin update, they will be excluded from the mappings. It shouldn't have worked before anyway.


I have an error in the backup log because of the volume mappings too (SWAG container).


My volume mappings:

/config -> /mnt/cache/appdata/swag
/var/log/nextcloud/ -> /mnt/cache/appdata/nextcloud/log/
/var/log/jellyfin/ -> /mnt/cache/appdata/Jellyfin/config/log/
/var/log/vaultwarden/vaultwarden.log -> /mnt/cache/appdata/vaultwarden/vaultwarden.log

 

Is there a way to do these mappings without getting an error message?

8 hours ago, Anym001 said:

I have an error in the backup log because of the volume mapping too

Please show the debug log.

 

1 hour ago, MothyTim said:

Hi, I'm not sure what the problem is, but since installing the plugin, all 3 backups done so far have failed!

code-server has a volume mapping to appdata, which will be backed up. Is that mapping correct?

Tautulli's backup verification fails because shfs is accessing the logs (/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Logs).

 

Since Tautulli has a mapping to Plex files that get backed up, and Plex is already running again, the files are being changed.
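This failure mode can be reproduced with plain GNU tar outside the plugin: archive a file, let it change afterwards (as a still-running Plex would keep writing its logs), then run the same `--diff` verification. The paths below are made up for the demo:

```shell
work=$(mktemp -d)
mkdir -p "$work/logs"
echo "line1" > "$work/logs/app.log"

# Archive the log directory, then simulate the still-running app
# appending to the log afterwards.
tar -c -P -f "$work/backup.tar" "$work/logs"
echo "line2" >> "$work/logs/app.log"

# --diff now reports "Size differs"/"Mod time differs" and exits non-zero.
diff_status=0
tar --diff -P -f "$work/backup.tar" "$work/logs" 2>/dev/null || diff_status=$?
echo "tar --diff exit status: $diff_status"
```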

41 minutes ago, KluthR said:

@Anym001 Duplicacy and Jellyfin both have volumes within volumes ("cache" on each).

 

I solved these problems for the Jellyfin and Duplicacy containers, but not for the SWAG one.

Is there any solution for this, or should I wait for a future version?

Edited by Anym001
4 hours ago, KluthR said:

Please show the debug log.

 

code-server has a volume mapping to appdata, which will be backed up. Is that mapping correct?

Tautulli's backup verification fails because shfs is accessing the logs (/mnt/cache/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Logs).

 

Since Tautulli has a mapping to Plex files that get backed up, and Plex is already running again, the files are being changed.

Thanks! Yes, code-server needs to map appdata to access config files. Can I stop backing up that bit? I don't think the old plugin did that. But this version has a lot more options, which is great! Thanks for your work! :) Same with Tautulli, I guess. It'd be nice to be able to group apps so they stop and start together, and also to check for updates while each app/container is stopped for backup, rather than at the end. Thanks again, this is a great upgrade!


I updated the plugin to 2023.04.15a, and now when I run a manual backup I get the message:

 

[16.04.2023 18:45:45][warning] Backing up EXTERNAL volumes, because its enabled!

 

So I aborted the backup. But from what I can tell, it isn't enabled on any of my containers when I check them?

 

[screenshot attached]

 

My config.json for the plugin is:

{
    "destination": "/mnt/user/Backups/Appdata Backup",
    "allowedSources": "/mnt/user/appdata\r\n/mnt/cache/appdata",
    "compression": "no",
    "defaults": {
        "verifyBackup": "yes",
        "updateContainer": "no"
    },
    "flashBackup": "yes",
    "backupVMMeta": "yes",
    "deleteBackupsOlderThan": "7",
    "backupFrequency": "daily",
    "backupFrequencyWeekday": "4",
    "backupFrequencyDayOfMonth": "2",
    "backupFrequencyHour": "5",
    "backupFrequencyMinute": "0",
    "backupFrequencyCustom": ""
}

 

11 hours ago, KluthR said:

@Anym001 Duplicacy and Jellyfin both have volumes within volumes ("cache" on each). This works for Docker but messes with the tar verification. The plugin script does not handle this currently. It will, in a future version.

 

Sorry, I posted the wrong debug log above.

Here is the current debug log, where the problems with Duplicacy and Jellyfin have been fixed.
The error for SWAG remains.

ab.debug.log

11 hours ago, Anym001 said:

but not for the swag one

I don't see any errors for SWAG in the log.

 

8 hours ago, MothyTim said:

Can I stop backing up that bit?

I don't think so. Adding the appdata path to the exclusions would mean no other appdata folder for that container gets backed up either. I think I have to extend the plugin's capabilities for that :)

 

7 hours ago, halorrr said:

So I aborted the backup, but from what I can tell it isn't enabled on any of my containers when I check them?

That's the whole config.json? Then I guess you haven't saved the settings in a long time. There are container parts missing, like their config and the start order. That's not a problem because you are working with defaults, but it's worth mentioning.

Please confirm that this posted config IS the complete file content. If so, we have a bug here which prevents applying the default values ("yes") to older config versions. I will look into that.
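For reference, the "old config without defaults" situation is easy to detect mechanically. This is only an illustration of the idea (the plugin itself is PHP; the file below is a made-up, trimmed config):

```shell
# A made-up old-style config.json without the "defaults" block.
cat > /tmp/ab-old-config.json <<'EOF'
{
    "destination": "/mnt/user/Backups/Appdata Backup",
    "compression": "no"
}
EOF

# If the key is absent, the plugin has to fall back to its built-in
# defaults; saving the settings once writes the full structure.
if grep -q '"defaults"' /tmp/ab-old-config.json; then
    has_defaults=yes
else
    has_defaults=no
fi
echo "defaults block present: $has_defaults"
```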

 

59 minutes ago, Anym001 said:

Sorry i posted the wrong debug log above.

Ah, I see.

SWAG backs up contents of Jellyfin; maybe you want to exclude those (/mnt/cache/appdata/Jellyfin/log) as well as the other container mappings (vaultwarden?).

Quote

That's the whole config.json? Then I guess you haven't saved the settings in a long time. There are container parts missing, like their config and the start order. That's not a problem because you are working with defaults, but it's worth mentioning.

Please confirm that this posted config IS the complete file content. If so, we have a bug here which prevents applying the default values ("yes") to older config versions. I will look into that.

 

Yup, that is the whole config.json.


OK, thanks. I will fix all current issues within this week, including your config issue. Please leave your snippet here. You can try saving the settings once; that should do it for you.

It will take some days, as I am a bit busy currently.

 

Stay tuned!

2 hours ago, KluthR said:

OK, thanks. I will fix all current issues within this week, including your config issue. Please leave your snippet here. You can try saving the settings once; that should do it for you.

It will take some days, as I am a bit busy currently.

 

Stay tuned!

Saving definitely added a lot more to my config, and I was able to successfully run a backup.

 

One issue I did notice is that the verification of Tautulli failed. Looking at the debug log for it:

[17.04.2023 16:43:23][info] Backing up tautulli...
[17.04.2023 16:43:37][debug] Tar out: 
[17.04.2023 16:43:37][info] Backup created without issues
[17.04.2023 16:43:37][info] Verifying backup...
[17.04.2023 16:43:37][debug] Final verify command: --diff -f '/mnt/user/Backups/Appdata Backup/ab_20230417_155805/tautulli.tar' '/mnt/user/appdata/tautulli' '/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs' '/mnt/user/appdata/tautulli'
[17.04.2023 16:43:43][debug] Tar out: tar: Removing leading `/' from member names; mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log: Mod time differs; mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log: Size differs; tar: Removing leading `/' from hard link targets; tar: /mnt/user/appdata/tautulli: Not found in archive; tar: Exiting with failure status due to previous errors
[17.04.2023 16:43:43][error] tar verification failed! More output available inside debuglog, maybe.
[17.04.2023 16:43:44][debug] lsof(/mnt/user/appdata/tautulli)
Array
(
)

[17.04.2023 16:43:44][debug] lsof(/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs)
Array
(
    [0] => COMMAND     PID     USER   FD   TYPE DEVICE SIZE/OFF              NODE NAME
    [1] => Plex\x20M 14924       99    8w   REG   0,45  1959589 12103424043577516 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log
    [2] => Plex\x20S 15000       99    3w   REG   0,45   178445 12103424043577525 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log
    [3] => Plex\x20T 15148       99    9w   REG   0,45     2930 12103424043577533 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/Plex Tuner Service.log
    [4] => Plex\x20S 15168       99    3w   REG   0,45     9539 12103424043577538 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/org.musicbrainz.agents.music.log
    [5] => Plex\x20S 15341       99    3w   REG   0,45     7148 12103424043577553 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/tv.plex.agents.movie.log
    [6] => Plex\x20S 15392       99    3w   REG   0,45     9629 12103424043577557 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/tv.plex.agents.music.log
    [7] => Plex\x20S 15394       99    3w   REG   0,45     7157 12103424043577556 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/tv.plex.agents.series.log
    [8] => Plex\x20S 15395       99    3w   REG   0,45     7312 12103424043577558 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.thetvdb.log
    [9] => Plex\x20S 18926       99    3w   REG   0,45     8689 12103424043577693 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.plugins.WebTools.log
    [10] => Plex\x20S 19015       99    3w   REG   0,45     8321 12103424043577698 /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.agents.imdb.log
)

[17.04.2023 16:43:44][debug] lsof(/mnt/user/appdata/tautulli)
Array
(
)

[17.04.2023 16:43:44][debug] AFTER verify: Array
(
    [Image] => lscr.io/linuxserver/tautulli:latest
    [ImageId] => 4778de7538ac
    [Name] => tautulli
    [Status] => Exited (0) 21 seconds ago
    [Running] => 
    [Paused] => 
    [Cmd] => /init
    [Id] => f86573f21691
    [Volumes] => Array
        (
            [0] => /mnt/user/appdata/tautulli/:/logs:ro
            [1] => /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs/:/plexlogs:rw
            [2] => /mnt/user/appdata/tautulli:/config:rw
        )

    [Created] => 5 hours ago
    [NetworkMode] => proxynet
    [CPUset] => 
    [BaseImage] => 
    [Icon] => https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/tautulli-icon.png
    [Url] => http://[IP]:[PORT:8181]/
    [Shell] => 
    [Ports] => Array
        (
            [0] => Array
                (
                    [IP] => 
                    [PrivatePort] => 8181
                    [PublicPort] => 8181
                    [NAT] => 1
                    [Type] => tcp
                )

        )

)

[17.04.2023 16:43:44][info] Starting tautulli... (try #1)

 

I was able to pick out that it was backing up a Plex folder as part of the Tautulli backup. This seems to be because I have a /plexlogs mapping, so that it's easy to tell Tautulli where my Plex logs live. The plugin considers this an internal volume (I'm guessing the current logic considers anything in /appdata internal, even if it belongs to a different app?):
[screenshot attached]

 

Also worth noting that the plugin doesn't deduplicate volumes from the list if multiple container folders have been mapped to the same path.
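The duplicate in Tautulli's volume list above (/mnt/user/appdata/tautulli mapped to both /logs and /config) could in principle be collapsed before the tar command is built. A sketch of that deduplication, using the host paths from the log; this is not what the plugin does today:

```shell
# Host paths as they appear in the container's volume list; the
# tautulli path occurs twice because it is mapped to two targets.
volumes="/mnt/user/appdata/tautulli
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs
/mnt/user/appdata/tautulli"

# sort -u drops the duplicate before the paths are handed to tar.
deduped=$(printf '%s\n' "$volumes" | sort -u)
echo "$deduped"
```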

22 hours ago, KluthR said:
23 hours ago, Anym001 said:

Sorry, I posted the wrong debug log above.

Ah, I see.

SWAG backs up contents of Jellyfin; maybe you want to exclude those (/mnt/cache/appdata/Jellyfin/log) as well as the other container mappings (vaultwarden?).

 

I have now tried to exclude the affected mappings.
Unfortunately, I still get an error message.
The tar verification failed again.

Debug log attached

 

[screenshot attached]

 

 

[17.04.2023 20:57:45][debug] Backup swag - Container Volumeinfo: Array
(
    [0] => /mnt/cache/appdata/swag:/config:rw
    [1] => /mnt/cache/appdata/Jellyfin/config/log/:/var/log/jellyfin/:ro
    [2] => /mnt/cache/appdata/nextcloud/log/:/var/log/nextcloud/:ro
    [3] => /mnt/cache/appdata/vaultwarden/vaultwarden.log:/var/log/vaultwarden/vaultwarden.log:ro
)

[17.04.2023 20:57:45][debug] Should NOT backup ext volumes, sanitizing them...
[17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/swag' IS within AppdataPath '/mnt/cache/appdata'!
[17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/Jellyfin/config/log' IS within AppdataPath '/mnt/cache/appdata'!
[17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/nextcloud/log' IS within AppdataPath '/mnt/cache/appdata'!
[17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/vaultwarden/vaultwarden.log' IS within AppdataPath '/mnt/cache/appdata'!
[17.04.2023 20:57:45][debug] Final volumes: /mnt/cache/appdata/swag, /mnt/cache/appdata/Jellyfin/config/log, /mnt/cache/appdata/nextcloud/log, /mnt/cache/appdata/vaultwarden/vaultwarden.log
[17.04.2023 20:57:45][debug] Target archive: /mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz
[17.04.2023 20:57:45][debug] Container got excludes! 
/mnt/cache/appdata/Jellyfin/config/log
/mnt/cache/appdata/nextcloud/log
/mnt/cache/appdata/vaultwarden/vaultwarden.log
[17.04.2023 20:57:45][debug] Generated tar command: --exclude '/mnt/cache/appdata/vaultwarden/vaultwarden.log' --exclude '/mnt/cache/appdata/nextcloud/log' --exclude '/mnt/cache/appdata/Jellyfin/config/log' -c -P -z -f '/mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz' '/mnt/cache/appdata/swag' '/mnt/cache/appdata/Jellyfin/config/log' '/mnt/cache/appdata/nextcloud/log' '/mnt/cache/appdata/vaultwarden/vaultwarden.log'
[17.04.2023 20:57:45][info] Backing up swag...
[17.04.2023 20:57:51][debug] Tar out: 
[17.04.2023 20:57:51][info] Backup created without issues
[17.04.2023 20:57:51][info] Verifying backup...
[17.04.2023 20:57:51][debug] Final verify command: --exclude '/mnt/cache/appdata/vaultwarden/vaultwarden.log' --exclude '/mnt/cache/appdata/nextcloud/log' --exclude '/mnt/cache/appdata/Jellyfin/config/log' --diff -f '/mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz' '/mnt/cache/appdata/swag' '/mnt/cache/appdata/Jellyfin/config/log' '/mnt/cache/appdata/nextcloud/log' '/mnt/cache/appdata/vaultwarden/vaultwarden.log'
[17.04.2023 20:57:52][debug] Tar out: tar: Removing leading `/' from member names; tar: /mnt/cache/appdata/Jellyfin/config/log: Not found in archive; tar: /mnt/cache/appdata/nextcloud/log: Not found in archive; tar: /mnt/cache/appdata/vaultwarden/vaultwarden.log: Not found in archive; tar: Exiting with failure status due to previous errors
[17.04.2023 20:57:52][error] tar verification failed! More output available inside debuglog, maybe.
[17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/swag)
Array
(
)

[17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/Jellyfin/config/log)
Array
(
)

[17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/nextcloud/log)
Array
(
)

[17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/vaultwarden/vaultwarden.log)
Array
(
)

[17.04.2023 20:57:52][debug] AFTER verify: Array
(
    [Image] => linuxserver/swag:latest
    [ImageId] => a51e6e60de2c
    [Name] => swag
    [Status] => Exited (0) 3 minutes ago
    [Running] => 
    [Paused] => 
    [Cmd] => /init
    [Id] => ff1513d2b608
    [Volumes] => Array
        (
            [0] => /mnt/cache/appdata/swag:/config:rw
            [1] => /mnt/cache/appdata/Jellyfin/config/log/:/var/log/jellyfin/:ro
            [2] => /mnt/cache/appdata/nextcloud/log/:/var/log/nextcloud/:ro
            [3] => /mnt/cache/appdata/vaultwarden/vaultwarden.log:/var/log/vaultwarden/vaultwarden.log:ro
        )

    [Created] => 5 hours ago
    [NetworkMode] => proxynet
    [CPUset] => 
    [BaseImage] => 
    [Icon] => https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png
    [Url] => https://[IP]:[PORT:443]
    [Shell] => 
    [Ports] => Array
        (
            [0] => Array
                (
                    [IP] => 
                    [PrivatePort] => 443
                    [PublicPort] => 4443
                    [NAT] => 1
                    [Type] => tcp
                )

            [1] => Array
                (
                    [IP] => 
                    [PrivatePort] => 80
                    [PublicPort] => 4480
                    [NAT] => 1
                    [Type] => tcp
                )

        )

)

 

 

ab.debug.log

9 hours ago, halorrr said:

even if it belongs to a different app?

Yes, because I can't tell which mapping originates from which container.

 

9 hours ago, halorrr said:

Also worth noting that the plugin doesn't deduplicate volumes from the list if multiple container folders have been mapped to the same path.

Exact same reason :)

 

I could remove a mapping once it has appeared, but then it could be backed up within the wrong container context, which would cause confusion.

 

2 hours ago, Anym001 said:

I have now tried to exclude the affected mappings. 
Unfortunately I still get an error message. 

Hmm. I have to check tar's behavior here. Maybe I have to adjust some details.
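The tar behavior in question is reproducible outside the plugin: when the same path is both excluded and passed as a positional target (as in the swag debug log above), the create step silently skips it, but the verify step then fails with "Not found in archive". A sketch with made-up paths, assuming GNU tar:

```shell
work=$(mktemp -d)
mkdir -p "$work/appdata/swag" "$work/appdata/Jellyfin/config/log"
echo conf > "$work/appdata/swag/nginx.conf"
echo log > "$work/appdata/Jellyfin/config/log/jellyfin.log"

# Create: --exclude wins over the positional argument, so the Jellyfin
# log path never enters the archive (tar exits 0 with no output).
tar --exclude "$work/appdata/Jellyfin/config/log" -c -P \
    -f "$work/swag.tar" \
    "$work/appdata/swag" "$work/appdata/Jellyfin/config/log"

# Verify: the excluded path is requested again, but it is not in the
# archive, so tar reports "Not found in archive" and exits non-zero.
verify_status=0
tar --exclude "$work/appdata/Jellyfin/config/log" --diff -P \
    -f "$work/swag.tar" \
    "$work/appdata/swag" "$work/appdata/Jellyfin/config/log" \
    2>/dev/null || verify_status=$?
echo "verify exit status: $verify_status"
```

One way out would be to drop excluded paths from the verify command's positional arguments as well, so only what was actually archived gets compared.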

