[Plugin] Appdata.Backup



[14.02.2024 17:56:20][ℹ️][radarr-nime] Starting radarr-nime is being ignored, because it was not started before (or should not be started).
[14.02.2024 17:56:20][ℹ️][radarr] No stopping needed for radarr: Not started!
[14.02.2024 17:56:20][ℹ️][radarr] Should NOT backup external volumes, sanitizing them...
[14.02.2024 17:56:20][ℹ️][radarr] Calculated volumes to back up: /mnt/user/appdata/radarr
[14.02.2024 17:56:20][ℹ️][radarr] Backing up radarr...
[14.02.2024 17:56:37][❌][radarr] tar creation failed! Tar said:
tar: /mnt/user/appdata/radarr/logs/radarr.24.txt: File shrank by 995287 bytes; padding with zeros
tar: /mnt/user/appdata/radarr/logs/radarr.21.txt: File shrank by 1011456 bytes; padding with zeros
tar: /mnt/user/appdata/radarr/MediaCover/128/fanart.jpg: Read error at byte 20480, while reading 10240 bytes: Input/output error
tar: /mnt/user/appdata/radarr/MediaCover/154/fanart-180.jpg: Read error at byte 0, while reading 6656 bytes: Input/output error
tar: /mnt/user/appdata/radarr/MediaCover/166/fanart-360.jpg: File shrank by 62211 bytes; padding with zeros
tar: /mnt/user/appdata/radarr/MediaCover/169/fanart.jpg: File shrank by 803138 bytes; padding with zeros
tar: /mnt/user/appdata/radarr/MediaCover/172/poster.jpg: File shrank by 1513292 bytes; padding with zeros
tar: /mnt/user/appdata/radarr/MediaCover/321/poster-500.jpg: File shrank by 66810 bytes; padding with zeros
tar: Exiting with failure status due to previous errors
[14.02.2024 17:56:38][ℹ️][radarr] Starting radarr is being ignored, because it was not started before (or should not be started).

 

I keep getting this error and want to restore from this failed appdata backup. Will any problems occur later if I do?

 


Link to comment

You need to group all containers that share directories so that one won't change contents while another is being backed up.

 

Files that changed during the backup will be corrupted; whether those are important or not, you'll only find out when restoring...

Edited by Kilrah
Link to comment

Is it normal that a script attached as a post-run script is not listed in the log output, even though it runs in the background?

I have a scheduled backup with an .sh script that packs and uploads my backup to a different server (using rclone's copy mode).

It's scheduled for the 1st and 15th of each month.

The run on the 1st of February worked perfectly, and the whole process was written to the plugin's log, but today it seemed to run in the background without any log output?
Any changes in the newest version of the plugin? (I'm up to date.)

Link to comment

Question on the post-run script here too.

 

From the help:

"Runs the selected script AFTER everything is done. Sent arguments: post-run, destination path, true|false (true on backup success, false otherwise)"

Are these positional arguments being sent?

 

So:

$1="post-run"

$2="/path/to/where/appdata_backups/puts/the/backups/" # Is there a trailing slash or no?

$3="true OR false"

Link to comment
On 2/12/2024 at 7:24 PM, adminmat said:

I assume this can be ignored.

Yep

 

On 2/13/2024 at 12:40 AM, spall said:

I had an error throw today:

The mentioned folder does not exist on the Unraid side; maybe (as it's a tmp folder) you want to exclude it?

 

 

Link to comment
22 hours ago, rogales said:

but today it seemed to run in the background without any log output?
Any changes in the newest version of the plugin? (I'm up to date.)

No changes here. The plugin waits until the executed script is done and checks its exit code.

 

20 hours ago, miloian said:

Are these positional arguments being sent?

Yes

20 hours ago, miloian said:

Is there a trailing slash or no?

Erm... *reads the source code* No, no trailing slash!
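
So a minimal post-run upload script could look like this (untested sketch; "myremote" is a placeholder rclone remote name):

#!/bin/bash
# $1 = "post-run", $2 = destination path (no trailing slash), $3 = "true" on backup success
if [ "$1" = "post-run" ] && [ "$3" = "true" ]; then
    # "myremote" is a placeholder; point it at your own rclone remote
    rclone copy "$2" "myremote:backups/$(basename "$2")"
    exit $?  # the plugin checks the exit code, so pass rclone's result through
fi
exit 0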

Link to comment

Thank you for the rework of the appdata plugin. I seem to have problems with named bind volumes from the Docker Compose plugin.

First of all, this is the error from the debug log:

 

[17.02.2024 03:06:28][❌][paperless_ngx_private-paperless-ngx-broker-1] 'paperless_ngx_private_redisdata' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:06:39][ℹ️][paperless_ngx_private-paperless-ngx-broker-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:39][⚠️][paperless_ngx_private-paperless-ngx-broker-1] paperless_ngx_private-paperless-ngx-broker-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:39][❌][paperless_ngx_private-paperless-ngx-db-1] 'paperless_ngx_private_pgdata' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-db-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-db-1] paperless_ngx_private-paperless-ngx-db-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-gotenberg-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-gotenberg-1] paperless_ngx_private-paperless-ngx-gotenberg-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][ℹ️][paperless_ngx_private-paperless-ngx-tika-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:06:50][⚠️][paperless_ngx_private-paperless-ngx-tika-1] paperless_ngx_private-paperless-ngx-tika-1 does not have any volume to back up! Skipping. Please consider ignoring this container.
[17.02.2024 03:06:50][❌][paperless_ngx_private-paperless-ngx-webserver-1] 'paperless_ngx_private_data' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:07:00][❌][paperless_ngx_private-paperless-ngx-webserver-1] 'paperless_ngx_private_media' does NOT exist! Please check your mappings! Skipping it for now.
[17.02.2024 03:07:11][ℹ️][paperless_ngx_private-paperless-ngx-webserver-1] Should NOT backup external volumes, sanitizing them...
[17.02.2024 03:07:11][⚠️][paperless_ngx_private-paperless-ngx-webserver-1] paperless_ngx_private-paperless-ngx-webserver-1 does not have any volume to back up! Skipping. Please consider ignoring this container.

 

The volumes mentioned in the error log are named volumes in Docker Compose template files. This is necessary because they are used by different containers in the template:

 

volumes:
  data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/data/
  media:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/share/userdata/paperlessdata/media/
  pgdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/pgdata/
  redisdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /mnt/user/appdata/paperless-ngx-private/redisdata/

 

These are referenced by the containers in the template like so (only posting one example, the rest is the same):

 

  paperless-ngx-broker:
    image: docker.io/library/redis:7
    restart: unless-stopped
    volumes:
      - redisdata:/data
    networks:
      - backend

 

When running the docker compose template, the Unraid docker overview shows them like this:

 

[Screenshot: Unraid Docker overview showing the named volume mappings]

 

I therefore assume the appdata plugin tries to pick up the mappings that Unraid shows in the overview instead of the mappings defined in the Docker Compose file.

To avoid the errors, I will exclude these containers from backups for now.

 

I was wondering: is there a stable, better way for the plugin to deal with Docker Compose named volumes?

 

Thank you in advance!

Link to comment

You should simply not use named volumes. Nothing on Unraid is designed to work with them.

4 minutes ago, HumanTechDesign said:

This is necessary, because they are used in different containers in the template

Not necessary at all; you just use bind mounts and point the various containers to the same host path.
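
For example, here is the broker service from the post above with the named volume swapped for a plain bind mount (a sketch using the same paths):

  paperless-ngx-broker:
    image: docker.io/library/redis:7
    restart: unless-stopped
    volumes:
      # plain bind mount: host path on the left, container path on the right
      - /mnt/user/appdata/paperless-ngx-private/redisdata:/data
    networks:
      - backend

Every other container that needs the same data simply repeats the same host path.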

Edited by Kilrah
Link to comment
On 2/17/2024 at 1:35 PM, HumanTechDesign said:

The volumes mentioned in the error log are named volumes in Docker Compose template files.

 

Ah, interesting, I'm facing the same issue. It would be nice if volume mounts (mappings with no leading slash) were ignored.

 

On 2/17/2024 at 1:40 PM, Kilrah said:

You should simply not use named volumes. Nothing on Unraid is designed to work with them.

 

Not true at all. Volume mappings work fine with the Docker Compose plugin and, in fact, once created (with the plugin or on the command line), also with the vanilla Unraid interface. Volumes are Docker's preferred storage mechanism, and they are very useful on Unraid in certain use cases.

Link to comment
1 hour ago, sir_storealot said:

Volume mappings work fine with the Docker Compose plugin

Of course they do; Unraid runs Docker and it's a Docker feature, so it'll work.

But within the Unraid ecosystem, things are set up to use bind mounts so that your data lives in a known, determined location, with Unraid's shfs layer deciding which storage device it goes on. That way it can be accessed and backed up conveniently instead of sitting in a cryptic folder deep inside Docker's own file structure, and unsurprisingly, the tools designed to work on it follow the same design philosophy.

On Unraid, the entire Docker filesystem is considered disposable; all your valuable data is supposed to live outside of it for safety and convenience.

 

1 hour ago, sir_storealot said:

and they are very useful on Unraid in certain use cases

Heard that a few times but have yet to see a convincing example.

Edited by Kilrah
Link to comment
31 minutes ago, Kilrah said:

On Unraid, the entire Docker filesystem is considered disposable; all your valuable data is supposed to live outside of it for safety and convenience.

 

You are missing the point; this is not about backing up "valuable data". It's about getting rid of the error message (i.e., ignoring the volumes) and letting the backup finish without throwing an error. The volumes ARE disposable (at least in my case).

 

33 minutes ago, Kilrah said:

Heard that a few times but have yet to see a convincing example.

 

Well... just because you cannot think of a use case doesn't mean one doesn't exist for others. You can read up on the general advantages here:

https://docs.docker.com/storage/volumes/

Link to comment

Hey guys, I am still using the old ca.backup2. I want to upgrade to the new version, but I cannot find it under "Apps".

Under Settings > Community Applications > Hide outdated applications, I changed the setting from yes to no. After that I can find the old ca.backup2 from Squid in "Apps", but I cannot find the new version by Robin Kluth.

Your support is greatly appreciated!

Link to comment
11 minutes ago, Venari said:

Hey guys, I am still using the old ca.backup2. I want to upgrade to the new version, but I cannot find it under "Apps".

Under Settings > Community Applications > Hide outdated applications, I changed the setting from yes to no. After that I can find the old ca.backup2 from Squid in "Apps", but I cannot find the new version by Robin Kluth.

Your support is greatly appreciated!

I honestly don't know what the problem was... I changed the "Hide outdated applications" setting back from no to yes, and then I found the new appdata backup plugin... But I don't really think that was the problem...

Nevertheless, everything is fine now!

Link to comment
1 hour ago, Venari said:

I honestly don't know what the problem was... I changed the "Hide outdated applications" setting back from no to yes, and then I found the new appdata backup plugin... But I don't really think that was the problem...

Nevertheless, everything is fine now!

Not just you; this was delisted for some time ;) It seems to be resolved now, though.

Link to comment
4 hours ago, Kilrah said:

But within the Unraid ecosystem, things are set up to use bind mounts so that your data lives in a known, determined location, with Unraid's shfs layer deciding which storage device it goes on. That way it can be accessed and backed up conveniently instead of sitting in a cryptic folder deep inside Docker's own file structure, and unsurprisingly, the tools designed to work on it follow the same design philosophy.

 

Thank you for your input. As you can see in the Docker Compose mappings, these volumes do in fact live on the exact same /mnt/user paths, just like any "vanilla" Unraid Docker template. The only difference is that Docker Compose allows reusing a mapping by declaring it as a volume. You are technically correct that it is not strictly necessary to do it this way (I could mount the paths in each container separately), but this way is just cleaner and, apart from the appdata backup plugin, has never caused any issues (as @sir_storealot described).

Link to comment

I'm running Appdata Backup with "stop all containers, backup, start" and getting this error in the log:

 


[Main] tar creation failed! Tar said: tar: /mnt/user/system/docker/docker.img: file changed as we read it

 

I'm not sure what to do here since everything is stopped.

 

Shared Debug Log as 9e627b20-3451-48f4-982d-c6e39fc17a20 

 

Full log:

 

[20.02.2024 09:12:41][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[20.02.2024 09:12:41][ℹ️][Main] Backing up from: /mnt/user/appdata
[20.02.2024 09:12:41][ℹ️][Main] Backing up to: /mnt/user/Appdata.Backup/ab_20240220_091241
[20.02.2024 09:12:41][ℹ️][Main] Selected containers: Channels-DVR, Plex-Media-Server, calibre-web
[20.02.2024 09:12:41][ℹ️][Main] Saving container XML files...
[20.02.2024 09:12:41][ℹ️][Main] Method: Stop all container before continuing.
[20.02.2024 09:12:41][ℹ️][Plex-Media-Server] Stopping Plex-Media-Server... done! (took 8 seconds)
[20.02.2024 09:12:49][ℹ️][Channels-DVR] Stopping Channels-DVR... done! (took 5 seconds)
[20.02.2024 09:12:54][ℹ️][calibre-web] Stopping calibre-web... done! (took 4 seconds)
[20.02.2024 09:12:58][ℹ️][Main] Starting backup for containers
[20.02.2024 09:12:58][ℹ️][Plex-Media-Server] Should NOT backup external volumes, sanitizing them...
[20.02.2024 09:12:58][ℹ️][Plex-Media-Server] Calculated volumes to back up: /mnt/user/appdata/Plex-Media-Server
[20.02.2024 09:12:58][ℹ️][Plex-Media-Server] Backing up Plex-Media-Server...
[20.02.2024 09:13:32][ℹ️][Plex-Media-Server] Backup created without issues
[20.02.2024 09:13:32][ℹ️][Plex-Media-Server] Verifying backup...
[20.02.2024 09:13:51][ℹ️][Channels-DVR] Should NOT backup external volumes, sanitizing them...
[20.02.2024 09:13:51][ℹ️][Channels-DVR] Calculated volumes to back up: /mnt/user/appdata/channels-dvr
[20.02.2024 09:13:51][ℹ️][Channels-DVR] Backing up Channels-DVR...
[20.02.2024 09:15:42][ℹ️][Channels-DVR] Backup created without issues
[20.02.2024 09:15:42][ℹ️][Channels-DVR] Verifying backup...
[20.02.2024 09:16:08][ℹ️][calibre-web] Should NOT backup external volumes, sanitizing them...
[20.02.2024 09:16:08][ℹ️][calibre-web] Calculated volumes to back up: /mnt/user/appdata/calibre-web
[20.02.2024 09:16:08][ℹ️][calibre-web] Backing up calibre-web...
[20.02.2024 09:16:08][ℹ️][calibre-web] Backup created without issues
[20.02.2024 09:16:08][ℹ️][calibre-web] Verifying backup...
[20.02.2024 09:16:08][ℹ️][Main] Set containers to previous state
[20.02.2024 09:16:08][ℹ️][calibre-web] Starting calibre-web... (try #1) done!
[20.02.2024 09:16:10][ℹ️][Channels-DVR] Starting Channels-DVR... (try #1) done!
[20.02.2024 09:16:15][ℹ️][Plex-Media-Server] Starting Plex-Media-Server... (try #1) done!
[20.02.2024 09:16:19][ℹ️][Main] Backing up the flash drive.
[20.02.2024 09:16:57][ℹ️][Main] Flash backup created!
[20.02.2024 09:16:58][ℹ️][Main] VM meta backup enabled! Backing up...
[20.02.2024 09:16:58][ℹ️][Main] Done!
[20.02.2024 09:16:58][ℹ️][Main] Backing up extra files...
[20.02.2024 09:28:14][❌][Main] tar creation failed! Tar said: tar: /mnt/user/system/docker/docker.img: file changed as we read it
[20.02.2024 09:28:15][ℹ️][Main] Checking retention...
[20.02.2024 09:28:15][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[20.02.2024 09:28:15][ℹ️][Main] ❤️

 

Edited by joey911
Link to comment

Pardon me if I missed it; recently this has started throwing an alert. It has not always done this. I guess I am too stupid to understand it, but is this actually an issue, and what does it mean? Did something change, and has this always been an "issue" that just wasn't alerted on?

[Screenshot of the alert]

Link to comment
On 2/20/2024 at 1:06 PM, JonathanM said:

Normally you wouldn't back that file up. It's not supposed to contain any of your customized data.

 

Thanks! It looks like it came along because I specified /mnt/user/system/ as part of the backup. I've added it to the global exclusion section.

Link to comment
39 minutes ago, joey911 said:

I've added it to the global exclusion section. 

 

It looks like that didn't work as I expected. I was hoping it would just exclude that file.

What I'm attempting to do is also back up the system share, since I only have one cache drive.
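
For what it's worth, the plugin builds on tar, and tar itself can skip individual files. Assuming exclusions end up as tar --exclude patterns (an assumption, not confirmed here), the equivalent manual command would be something like:

# Back up the system share but skip the ever-changing docker.img (paths assumed)
tar -czf /mnt/user/Appdata.Backup/system.tar.gz \
    --exclude='/mnt/user/system/docker/docker.img' \
    /mnt/user/system/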

 

 


Link to comment

@KluthR First of all, thanks for the plugin. I've been using it for quite some time now and have never had any issues with it.

 

Yesterday I switched from the old 2.5 version to the new one during the latest Unraid update and tested a bit. So far so good: compression works fine, copying the flash backup to a different location works, and the grouping feature and container auto-updates are working too.

 

Now to my issue. I have a MariaDB and a Nextcloud container grouped together; stop-backup-start works, but the available update for the MariaDB container isn't applied when they are grouped. Is this a known bug, or is it not yet implemented for the grouping feature?

 

The logs show no errors and no hint that the plugin tries to update the container, as it did for the netdata container, which is in no group.

 

--
[21.02.2024 22:20:37][ℹ️][nextcloud] Method: Stop all container before continuing.
[21.02.2024 22:20:37][ℹ️][nextcloud][Nextcloud] Stopping Nextcloud... done! (took 1 seconds)
[21.02.2024 22:20:38][ℹ️][nextcloud][MariaDB-Official] Stopping MariaDB-Official... done! (took 2 seconds)
[21.02.2024 22:20:40][ℹ️][Main] Starting backup for containers
[21.02.2024 22:20:40][ℹ️][Nextcloud] Should NOT backup external volumes, sanitizing them...
[21.02.2024 22:20:40][ℹ️][Nextcloud] Calculated volumes to back up: /mnt/user/appdata/nextcloud/apps, /mnt/user/appdata/nextcloud/config, /mnt/user/appdata/nextcloud/nextcloud
[21.02.2024 22:20:40][ℹ️][Nextcloud] Backing up Nextcloud...
[21.02.2024 22:21:37][ℹ️][Nextcloud] Backup created without issues
[21.02.2024 22:21:37][ℹ️][Nextcloud] Verifying backup...
[21.02.2024 22:22:20][ℹ️][MariaDB-Official] Should NOT backup external volumes, sanitizing them...
[21.02.2024 22:22:20][ℹ️][MariaDB-Official] Calculated volumes to back up: /mnt/user/appdata/mariadb-official/data, /mnt/user/appdata/mariadb-official/config
[21.02.2024 22:22:20][ℹ️][MariaDB-Official] Backing up MariaDB-Official...
[21.02.2024 22:22:30][ℹ️][MariaDB-Official] Backup created without issues
[21.02.2024 22:22:30][ℹ️][MariaDB-Official] Verifying backup...
[21.02.2024 22:22:33][ℹ️][Main] Set containers to previous state
[21.02.2024 22:22:33][ℹ️][MariaDB-Official] Starting MariaDB-Official... (try #1) done!
[21.02.2024 22:22:35][ℹ️][Nextcloud] Starting Nextcloud... (try #1) done!
[21.02.2024 22:22:38][ℹ️][netdata] Stopping netdata... done! (took 2 seconds)
[21.02.2024 22:22:40][ℹ️][netdata] Should NOT backup external volumes, sanitizing them...
[21.02.2024 22:22:40][ℹ️][netdata] Calculated volumes to back up: /mnt/user/appdata/netdata/lib, /mnt/user/appdata/netdata/cache, /mnt/user/appdata/netdata/config
[21.02.2024 22:22:40][ℹ️][netdata] Backing up netdata...
[21.02.2024 22:22:59][ℹ️][netdata] Backup created without issues
[21.02.2024 22:22:59][ℹ️][netdata] Verifying backup...
[21.02.2024 22:23:02][ℹ️][netdata] Installing planned update for netdata...
[21.02.2024 22:23:28][ℹ️][netdata] Starting netdata... (try #1) done!
--

 

Link to comment
