[Plugin] Appdata.Backup



Getting “Should backup external volumes, sanitizing them…”. The only thing being backed up is the templates and the flash drive. I believe the plugin is broken. I have tried drives in the array, different share folders, verified permissions, etc…

Link to comment

I seem to be having an issue with this plugin vs. the original appdata backup: 'file has changed', as experienced by several other people in this thread:

 

 [13.01.2024 09:25:55][][FoundryVTT10] tar creation failed! Tar said:
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules/JB2A_DnD5e/Library/Generic/Impact/GroundCrackFrostImpact_01_Regular_White_600x600.webm: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules/JB2A_DnD5e/Library/Generic/Impact: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules/JB2A_DnD5e/Library/Generic: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules/JB2A_DnD5e/Library: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules/JB2A_DnD5e: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data/modules: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data/Data: file changed as we read it;
tar: /mnt/user/AppData/FoundryVTT10/data: file changed as we read it

 

id: 89a9402e-820c-47c8-b066-a6a8baae4ee8

 

While it isn't reliably that file, it is reliably that folder that gets a "changed as we read it". Any recommendations for finding what is touching those files while the Docker container is stopped? That container doesn't share mappings with anything else, and it doesn't matter whether the container is mapped via FUSE or not (I haven't gotten around to setting up an exclusive share yet).
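For anyone else hunting this down, a minimal sketch of how to look for open handles on that folder while the container is stopped (assuming lsof and fuser are available on the host; the path is taken from the log above):

# list any processes holding files open under the module folder
lsof +D /mnt/user/AppData/FoundryVTT10/data/Data/modules
# or show the PIDs using the parent directory
fuser -v /mnt/user/AppData/FoundryVTT10/data/Data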

 

 

Editing to add:

I changed the backup type from "stop, backup, start container" to "stop all containers, backup, start all containers", and that seems to have given whatever process enough time to let go of those files before the plugin tries to touch them.

 

I have noticed in the past that that particular container isn't receptive to being restarted from the GUI. It needs to be stopped, left for a moment, then started again. I don't know how to explain that behavior from a Docker point of view, but it certainly seems to track with this. Either way, the issue seems to be resolved for me.

 

 

  

3 hours ago, Braus said:

Getting “Should backup external volumes, sanitizing them…”. The only thing being backed up is the templates and the flash drive. I believe the plugin is broken. I have tried drives in the array, different share folders, verified permissions, etc…

When this happened to me, it was because the appdata source paths are case-sensitive. By default the plugin looks for the lowercase paths:

[screenshot: default appdata source(s) setting showing /mnt/user/appdata and /mnt/cache/appdata]

 

So it didn't automatically recognize my paths (/mnt/user/AppData and /mnt/cache/AppData) as internal volumes. I just added my paths to the appdata source(s) setting:

[screenshot: appdata source(s) setting with /mnt/user/AppData and /mnt/cache/AppData added]
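If you want to confirm how the share is actually cased on your system, a quick check along these lines should do it (a sketch; adjust the pool name if your cache pool isn't called "cache"):

ls -d /mnt/user/[Aa]ppdata /mnt/cache/[Aa]ppdata 2>/dev/null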

Edited by d3fc0n0wltraps
Link to comment

Hello,


I have a backup scheduled every week and last night it encountered an error.
I also have the same error during a manual backup.
Here are the logs:

 

[14.01.2024 14:30:16][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[14.01.2024 14:30:16][ℹ️][Main] Backing up from: /mnt/user/appdata, /mnt/cache/appdata
[14.01.2024 14:30:16][ℹ️][Main] Backing up to: /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016
[14.01.2024 14:30:17][ℹ️][Main] Selected containers: Plex-Media-Server, binhex-hexchat, jDownloader2, speedtest-tracker, tautulli, transmission
[14.01.2024 14:30:17][ℹ️][Main] Saving container XML files...
[14.01.2024 14:30:17][ℹ️][Main] Method: Stop/Backup/Start
[14.01.2024 14:30:17][ℹ️][binhex-hexchat] Stopping binhex-hexchat... done! (took 1 seconds)
[14.01.2024 14:30:18][ℹ️][binhex-hexchat] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:30:18][ℹ️][binhex-hexchat] Calculated volumes to back up: /mnt/user/appdata/binhex-hexchat
[14.01.2024 14:30:18][ℹ️][binhex-hexchat] Backing up binhex-hexchat...
[14.01.2024 14:30:23][ℹ️][binhex-hexchat] Backup created without issues
[14.01.2024 14:30:23][ℹ️][binhex-hexchat] Verifying backup...
[14.01.2024 14:30:36][ℹ️][binhex-hexchat] Starting binhex-hexchat... (try #1) done!
[14.01.2024 14:30:38][ℹ️][jDownloader2] Stopping jDownloader2... done! (took 1 seconds)
[14.01.2024 14:30:39][ℹ️][jDownloader2] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:30:39][ℹ️][jDownloader2] Calculated volumes to back up: /mnt/cache/appdata/jdownloader2
[14.01.2024 14:30:39][ℹ️][jDownloader2] Backing up jDownloader2...
[14.01.2024 14:31:10][][jDownloader2] tar creation failed! Tar said: zstd: error 70 : Write error : cannot write block : Input/output error; tar: /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016/jDownloader2.tar.zst: Wrote only 2048 of 10240 bytes; tar: Child returned status 70; tar: Error is not recoverable: exiting now
[14.01.2024 14:31:12][ℹ️][jDownloader2] Starting jDownloader2... (try #1) done!
[14.01.2024 14:31:15][ℹ️][Plex-Media-Server] Stopping Plex-Media-Server... done! (took 9 seconds)
[14.01.2024 14:31:24][ℹ️][Plex-Media-Server] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:31:24][ℹ️][Plex-Media-Server] Calculated volumes to back up: /mnt/user/appdata/Plex-Media-Server
[14.01.2024 14:31:24][ℹ️][Plex-Media-Server] Backing up Plex-Media-Server...
[14.01.2024 14:31:24][][Plex-Media-Server] tar creation failed! Tar said: zstd: error 70 : Write error : cannot write block : Input/output error; tar: /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016/Plex-Media-Server.tar.zst: Wrote only 4096 of 10240 bytes; tar: Child returned status 70; tar: Error is not recoverable: exiting now
[14.01.2024 14:31:26][ℹ️][Plex-Media-Server] Starting Plex-Media-Server... (try #1) done!
[14.01.2024 14:31:28][ℹ️][tautulli] Stopping tautulli... done! (took 1 seconds)
[14.01.2024 14:31:29][ℹ️][tautulli] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:31:29][ℹ️][tautulli] Calculated volumes to back up: /mnt/user/appdata/tautulli
[14.01.2024 14:31:29][ℹ️][tautulli] Backing up tautulli...
[14.01.2024 14:31:29][][tautulli] tar creation failed! Tar said: zstd: error 70 : Write error : cannot write block : Input/output error; tar: /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016/tautulli.tar.zst: Wrote only 4096 of 10240 bytes; tar: Child returned status 70; tar: Error is not recoverable: exiting now
[14.01.2024 14:31:37][ℹ️][tautulli] Starting tautulli... (try #1) done!
[14.01.2024 14:31:40][ℹ️][transmission] No stopping needed for transmission: Not started!
[14.01.2024 14:31:40][ℹ️][transmission] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:31:40][ℹ️][transmission] Calculated volumes to back up: /mnt/user/appdata/transmission
[14.01.2024 14:31:40][ℹ️][transmission] Backing up transmission...
[14.01.2024 14:31:40][ℹ️][transmission] Backup created without issues
[14.01.2024 14:31:40][ℹ️][transmission] Verifying backup...
[14.01.2024 14:31:40][ℹ️][transmission] transmission is being ignored, because it was not started before (or should not be started).
[14.01.2024 14:31:40][ℹ️][speedtest-tracker] No stopping needed for speedtest-tracker: Not started!
[14.01.2024 14:31:40][ℹ️][speedtest-tracker] Should NOT backup external volumes, sanitizing them...
[14.01.2024 14:31:40][ℹ️][speedtest-tracker] Calculated volumes to back up: /mnt/user/appdata/speedtest-tracker
[14.01.2024 14:31:40][ℹ️][speedtest-tracker] Backing up speedtest-tracker...
[14.01.2024 14:31:40][][speedtest-tracker] tar creation failed! Tar said: tar (child): /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016/speedtest-tracker.tar.zst: Cannot open: Input/output error; tar (child): Error is not recoverable: exiting now; tar: /mnt/disks/Backups_Plex/BackupPlex/ab_20240114_143016/speedtest-tracker.tar.zst: Cannot write: Broken pipe; tar: Child returned status 2; tar: Error is not recoverable: exiting now
[14.01.2024 14:31:42][ℹ️][speedtest-tracker] speedtest-tracker is being ignored, because it was not started before (or should not be started).
[14.01.2024 14:31:42][ℹ️][Main] Backing up the flash drive.
[14.01.2024 14:33:03][][Main] Copying flash backup to destination failed!
[14.01.2024 14:33:05][ℹ️][Main] VM meta backup enabled! Backing up...
[14.01.2024 14:33:05][][Main] Error while backing up VM XMLs. Please see debug log!
[14.01.2024 14:33:13][ℹ️][Main] Starting Docker auto-update check...
[14.01.2024 14:33:13][ℹ️][Main] Docker update check finished!
[14.01.2024 14:33:13][⚠️][Main] An error occurred during backup! RETENTION WILL NOT BE CHECKED! Please review the log. If you need further assistance, ask in the support forum.
[14.01.2024 14:33:15][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
[14.01.2024 14:33:15][ℹ️][Main] ❤️

 

I have more than 1 TB of free space on the destination, which is a share created through rclone on pCloud.
There was no change between last week's backup and yesterday's; I can't explain the errors.
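To rule out the rclone/pCloud mount itself, a minimal sketch of a manual write test against the same destination (paths taken from the log above; the test file name and the sample appdata folder are just examples):

# try writing a reasonably sized file straight to the mount
dd if=/dev/zero of=/mnt/disks/Backups_Plex/BackupPlex/write_test.bin bs=1M count=100
rm /mnt/disks/Backups_Plex/BackupPlex/write_test.bin
# or run the same kind of tar the plugin uses against a small appdata folder
tar -c -P -I zstd -f /mnt/disks/Backups_Plex/BackupPlex/test.tar.zst /mnt/user/appdata/tautulli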

 

Debug Log ID : 4802ee8b-3259-4b00-aa6c-c2a5797f8d93

 

Can you help me?

 

Thanks

 

 

Link to comment

I am having similar issues to the post above re: tar creation failed.

 

Here is the specific debug log error:


[14.01.2024 11:40:13][ℹ️][binhex-plexpass] Stopping binhex-plexpass... done! (took 12 seconds)
[14.01.2024 11:40:25][ℹ️][binhex-plexpass] Should NOT backup external volumes, sanitizing them...
[14.01.2024 11:40:25][ℹ️][binhex-plexpass] Calculated volumes to back up: /mnt/user/appdata/binhex-plexpass
[14.01.2024 11:40:25][ℹ️][binhex-plexpass] Backing up binhex-plexpass...
[14.01.2024 11:40:46][][binhex-plexpass] tar creation failed! Tar said: tar: /mnt/user/appdata/binhex-plexpass/Plex Media Server/Plug-in Support/com.plexapp.plugins.library.db: File shrank by 9927680 bytes; padding with zeros
[14.01.2024 11:40:47][ℹ️][binhex-plexpass] Starting binhex-plexpass... (try #1) done!

 

I have already excluded the suggested paths for plex.

 

Any suggestions are greatly appreciated!

 

Thanks!

Link to comment

[14.01.2024 09:23:09][ℹ️][Stash] Backing up Stash...
[14.01.2024 10:17:55][ℹ️][Stash] Backup created without issues
[14.01.2024 10:17:55][ℹ️][Stash] Verifying backup...
[14.01.2024 10:34:57][][Stash] tar verification failed! Tar said: tar: Removing leading `/' from member names; tar: Removing leading `/' from hard link targets; tar: /mnt/user/stash/stashapp/cache: Not found in archive; tar: /mnt/user/stash/stashapp/blobs: Not found in archive; tar: /mnt/user/stash/stashapp/generated: Not found in archive; tar: Exiting with failure status due to previous errors

 

How can I fix this, please?
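If you want to dig in before a reply, a hedged way to check whether those paths actually made it into the archive, and whether they are currently empty on the source (the archive path is just an example; use your real backup destination):

# list the archive and look for the missing paths
tar -I zstd -tf /path/to/ab_backup/Stash.tar.zst | grep -E 'stashapp/(cache|blobs|generated)'
# check whether the source folders are empty right now
ls -A /mnt/user/stash/stashapp/cache /mnt/user/stash/stashapp/blobs /mnt/user/stash/stashapp/generated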

Link to comment

I have an issue with the backup of Duplicacy. I think it's because I use an Unassigned Device that is only connected when I want to run the backup to it. I've shared the debug log with you (nice feature, btw): 47d57179-09a3-460f-a435-2146ad89d7ba

 

Thank you for this tool, it's much better than the previous one (except for this little issue :) )

Edited by Derek_
Link to comment

Can you add an option to ignore this message, or to only show it when the container is running?

[15.01.2024 00:00:39][⚠️][ClamAV] NOT stopping ClamAV because it should be backed up WITHOUT stopping!

When a scan runs for more than a day, it used to be interrupted because the container was stopped for the backup, so "do not stop for the backup" is the correct option for this container.

However, the message also appears when the container is not running at all.

 

The log: 350093f9-298e-4019-8c0f-88e4534489f3

Edited by Revan335
Link to comment

A few quick questions about scripts:

Do pre-run and post-run execute before/after appdatabackup does anything at all, so right at the start and end of the whole backup process?

Are pre-backup and post-backup run per docker container, so if I have 10 containers they will run 10 times?

Are pre-backup/post-backup run after a container has been stopped and before it is restarted?

Are any parameters passed to the scripts?  It would be useful to have the current backup directory name and the current container directory name passed via parameters.  Currently I work out the current backup directory name by just grabbing the most recently created directory like this "BACKUPDIR=$(ls -td /mnt/user/Backups/appdatabackup/* | head -1)" but that feels like a bit of a kludge.

 

Essentially, I'm trying to get a working remote backup of appdata to Backblaze using Duplicacy to de-duplicate. At the moment, appdatabackup wraps everything up into per-container tar files. This is fine/preferable for local backup, but because each tar file probably contains at least one changed file, every tar file is different from the previous backup and the entire contents of appdata (~70 GB for me) gets uploaded every time I run the backup.

 

I plan to use a pre-backup script (assuming it runs after each container has been stopped) to copy the container appdata directory to a local backup location, and then use a scheduled Duplicacy backup to send the files to Backblaze. That should give me local tar backups from appdata backup and a nice versioned, de-duplicated backup on Backblaze that makes efficient use of my cloud storage and network connection.
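A minimal sketch of the kind of staging script I have in mind (the source and staging paths are my own assumptions, not plugin defaults):

#!/bin/bash
# Stage a plain copy of appdata so Duplicacy can de-duplicate unchanged files.
SRC=/mnt/user/appdata
STAGE=/mnt/user/Backups/appdata-staging

# rsync only transfers changed files and mirrors deletions
rsync -a --delete "$SRC/" "$STAGE/"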

Link to comment

I'm constantly getting a 'tar creation failed' on every backup for Plex and other containers. The errors I get for Plex and tdarr, for example:


 

[17.01.2024 04:13:23][ℹ️][tdarr] Should NOT backup external volumes, sanitizing them...
[17.01.2024 04:13:23][ℹ️][tdarr] Calculated volumes to back up: /mnt/cache/appdata/tdarr/logs, /mnt/cache/appdata/tdarr/server, /mnt/cache/appdata/tdarr/configs, /mnt/cache/appdata/tdarr/transcode
[17.01.2024 04:13:23][ℹ️][tdarr] Backing up tdarr...
[17.01.2024 04:16:07][][tdarr] tar creation failed! Tar said: tar: /mnt/cache/appdata/tdarr/server/Tdarr/Backups/Backup-version-2.17.01-date-17-January-2024-00-00-01-ts-1705467601057.zip: File shrank by 52828677 bytes; padding with zeros
[17.01.2024 04:16:09][ℹ️][plex] Should NOT backup external volumes, sanitizing them...
[17.01.2024 04:16:09][ℹ️][plex] Calculated volumes to back up: /mnt/cache/appdata/plex
[17.01.2024 04:16:09][ℹ️][plex] Backing up plex...
[17.01.2024 06:14:11][ℹ️][plex] tar creation failed! Tar said: tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/3/55b946d9ba0e353f5c12c3ee2911e38d686bad5.bundle/Contents/Indexes/index-sd.bif: File shrank by 5892895 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/8/bce816be2aa4181d0bcc1dd4f5b3e4fa07adda6.bundle/Contents/Indexes/index-sd.bif: File shrank by 1206301 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/a/889476413724d73a052dac429bd3da5040d5201.bundle/Contents/Indexes/index-sd.bif: File shrank by 5933323 bytes; padding with zeros; tar: /mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media/localhost/e/e081ca77978af00b9f7bed630525f53c728535d.bundle/Contents/Indexes/index-sd.bif: File shrank by 4318187 bytes; padding with zeros

 

Not sure if this is the correct place to post this issue but please let me know if it is not.

Does anyone know what settings could be causing these issues? Debug ID: e0a37b9a-e599-408a-942d-e6855e3d6ed9

 

Thanks!

Link to comment
1 hour ago, shewishewi said:

I'm constantly getting a 'tar creation failed' on every backup for Plex and other containers. [...]

Same here for about a week, with no changes on my side.

Link to comment

Hi there. I used to use the old AppData Backup and Restore way back, and got rid of it a while ago, and then just recently started using this new AppData Backup plugin again. I'm trying to use it in combination with Duplicacy in the way I understand to be correct. Basically AppData Backup runs first, and backs up my AppData to another share called Backups. Then Duplicacy backs up that share later to Backblaze B2 storage. 

 

I have AppData Backup configured to stop, back up, and restart each container one at a time. I'm having some issues with the starting and stopping of containers that I never seemed to have with the old plugin. Looking in the logs, I see that many of the containers have to be force-stopped with 'docker stop' because they don't seem to respond to the regular stop. I'm not sure if this just means they take too long to stop the normal way, so the plugin gets tired of waiting, or what. But I notice that many of the containers that have trouble stopping also have trouble starting when it's time to start them, and end up with a message about already being started. This backup happens while I'm asleep, so I don't know for sure whether they are or not. All I know is that when I wake up in the morning, I usually have down notifications from UptimeKuma and, sure enough, a handful of containers are stopped. But according to the logs, those containers were already started when the plugin went to start them.

 

I don't have AppData Backup configured to update any containers, so that shouldn't be it. It's usually the same few containers every time: always Ghost and Agent DVR, plus a couple of others. This morning Glances was also a problem child, though it hasn't been in the past. Oddly, not ALL of the containers that had issues in the logs end up stopped: Jellyfin, for example, had issues stopping AND starting, but it was running just fine this morning. However, all the ones that are left stopped ARE ones that were logged as having issues.

 

I'll include the logs below. If anyone can help out here, I'd sure appreciate it. I don't like having my containers down for hours until I wake up and then having to manually start them first thing in the morning.

 

Debug log ID is 3ec2e7eb-9fab-42bc-8198-712b82790c04

 

Thanks in advance for any help!

 

 

ab.log

Link to comment
On 1/9/2024 at 5:03 AM, DaveHavok said:

1.) In the event you need to restore a backup - I'm guessing it's a manual process with the compressed files created with this plugin?

No, there's a "wizard" - which still needs to be extended - that does the job.

 

On 1/9/2024 at 11:54 PM, Kol said:

I am replacing my cache drive. I backed up appdata using the plugin, with no issues. I replaced the drive and formatted it and am in the process of restoring it. When I run the restore, the log shows the restore was completed with no errors, yet when I look for the files on the new cache drive, they are not there.

Please show the restore log. The restore uses the same paths as were used during the backup. Did you change any names?

 

On 1/10/2024 at 3:53 PM, JaviPas said:

but it seems to be forever working to restore the krusader template

The tar command is probably just still running. Please verify this with "ps aux | grep krusader". If that shows a running tar command, it's still running. How many files are inside the source container path?

 

On 1/12/2024 at 7:25 PM, Wolfhunter1043 said:

I am trying to restore to a new cache drive. I can see my various backup versions, each weekly, and I can restore the templates; however, it says no containers are available. I've tried to search through here and I don't see anything specific.

Please open up one of the backups and post a picture of the contents. It feels like the templates are there but no container volumes got backed up?

Link to comment
On 1/13/2024 at 3:46 PM, Braus said:

Getting “Should backup external volumes, sanitizing them…”. The only thing being backed up is the templates and the flash drive. I believe the plugin is broken.

No, it's not. Did you read the "Source path" info box?

 

On 1/13/2024 at 3:48 PM, d3fc0n0wltraps said:

Any recommendations for finding what is touching that file with the docker stopped?

Is the volume mapping in question used by another container?

 

On 1/14/2024 at 4:56 PM, JUST-Plex said:

I have a backup scheduled every week and last night it encountered an error.
I also have the same error during a manual backup.

Never saw this one before. Tar is saying it could only write x of y blocks. How is the destination connected to Unraid? Is it a network path or an internal drive?

 

On 1/14/2024 at 9:00 PM, starlight said:

I am having similar issues with the above post re: tar creation failed.

Feels like the file was being modified by some other process; I don't know exactly. Is any other container using the same mapping?
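A quick way to check that, as a sketch (the grep pattern is just an example; adjust it to the path in question):

# list each container with its host-side mount sources
docker ps -aq | xargs docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep -i plex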

 

On 1/14/2024 at 11:25 PM, sunwind said:

Not found in archive; tar: Exiting with failure status due to previous errors

 

How can I fix this, please?

Are those mentioned paths empty on the source?

 

On 1/15/2024 at 8:59 AM, Derek_ said:

I have an issue with the backup of Duplicacy. I think it's because I use an Unassigned Device that is only connected when I want to run the backup to it. I've shared the debug log with you (nice feature, btw): 

You set /mnt/user as an exclusion, but everything to back up lives under it, so it all gets excluded. Please check your Duplicacy exclusions.

 

On 1/15/2024 at 10:05 AM, Revan335 said:

Can you add an Option to ignore this or only when the Container is running?

You're saying this message also appears even when the container is not started?

 

5 hours ago, SirCadian said:

A few quick questions about scripts:

There are small outstanding fixes coming in the next release, but currently:

 

  1. PreRun is executed directly after checking the existence of the destination.
  2. PreBackup is executed ONE TIME after the Docker XMLs have been backed up, right before the first container backup starts.
  3. postBackup is executed right after the last container backup - also one time.
  4. postRun fires last.

But your per-container scripts idea sounds useful; I'll note that.

 

2 hours ago, shewishewi said:

I'm constantly getting a 'tar creation failed' on every backup for Plex and other containers.

Also a "file shrank" issue. Is there any other container using this mapping?

 

1 hour ago, ms4sman said:

I'm not sure if this just means they are taking too long to stop the normal way so the plugin gets tired of waiting or what

The stop method actually waits DOCKER_TIMEOUT seconds for the container to stop - this variable is set via the Docker settings page (Stop timeout). So yes, it seems some containers take too long. I have to check whether the timeout means "wait x seconds and then kill the container" or whether it just stops waiting - I believe it's the first. I believe that if the timeout hits, I get a non-success message back and therefore try the docker-stop method directly - which succeeds (the container is already stopped by then). No issue there - but please check how long the container really needs to stop. Maybe you should increase the timeout.
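If you want to measure that, something like this shows how long a given container really takes to stop cleanly (the container name is just a placeholder):

# time a clean stop of one container, then start it again
time docker stop my-container
docker start my-container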

 

 

 

That being said: since Christmas I have had nearly zero time to work on this. There will be some movement in the next two weeks. The current beta should go live (770 downloads and NO feedback - I guess that's good?). Directly after that, the plugin will be able to detect multi-use of volumes and report it to the user. Or it will create auto-grouping; I don't know yet.

 

After that, there is also work needed on the restore wizard.

Link to comment
57 minutes ago, KluthR said:
On 1/14/2024 at 4:56 PM, JUST-Plex said:

I have a backup scheduled every week and last night it encountered an error.
I also have the same error during a manual backup.

Never saw this one before. Tar is saying it could only write x of y blocks. How is the destination connected to Unraid? Is it a network path or an internal drive?

 

It is a network path.

I've just tried it locally and it works perfectly, so it could well be the network path.
Is the tar created locally and then sent, or is it written directly to the target?

 

Thanks

Link to comment
1 hour ago, KluthR said:

The stop method actually waits DOCKER_TIMEOUT seconds for the container to stop - this variable is set via the Docker settings page (Stop timeout). [...] Please check how long the container really needs to stop. Maybe you should increase the timeout.

I will check on the timeout and try increasing it.

 

Any suggestions regarding when it tries to restart the container and detects it is already started, but then in the morning it is clearly not started? It seems like for some reason it's being told the container is running when it isn't. I couldn't find anything about this in my searching online; the only things I saw about "already running" were in regard to the old version of the plugin and had to do with the auto-update.

Link to comment
3 hours ago, KluthR said:
  1. PreRun is executed directly after checking the existence of the destination.
  2. PreBackup is executed ONE TIME after the Docker XMLs have been backed up, right before the first container backup starts.
  3. postBackup is executed right after the last container backup - also one time.
  4. postRun fires last.

But your per-container scripts idea sounds useful; I'll note that.

Thanks, I'll keep an eye out for updates. In the meantime, if I change to stopping all containers, backing up, and then restarting all containers... do PreBackup or PostBackup run in the window while the containers are stopped? I'm assuming not.

Link to comment
7 hours ago, KluthR said:

You're saying this message also appears even when the container is not started?

Yes. I set the "don't stop container during backup" option so that a scan isn't interrupted when the container is running for more than a day, for example.

But yes, the container is actually not running, and the backup of the stopped container still puts this warning/message in the log.

Link to comment
10 hours ago, KluthR said:

You set /mnt/user as an exclusion, but everything to back up lives under it, so it all gets excluded. Please check your Duplicacy exclusions.

 

Oh man. I was fixated on the 'warnings' re /mnt/disks - your solution simply didn't occur to me. Sorry about that 🙃 I've just bought you a coffee :)

Link to comment
On 1/18/2024 at 12:02 PM, KluthR said:

Also a "file shrank" issue. Is there any other container using this mapping?

Yes, there is another container using the mapping, but I set it to stop ALL containers before the backup starts!

Link to comment

Any ideas on what causes this error?

"[radarr] tar verification failed!"

 

[21.01.2024 09:48:56][][radarr] tar verification failed! Tar said: tar: from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now

 

 

[21.01.2024 09:45:35][debug][radarr] Container got excludes!
/mnt/user/appdata/radarr/MediaCovers/
/mnt/user/appdata/radarr/Backups/
/mnt/user/appdata/radarr/logs/
[21.01.2024 09:45:35][ℹ️][radarr] Calculated volumes to back up: /mnt/user/appdata/radarr
[21.01.2024 09:45:35][debug][radarr] Target archive: /mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst
[21.01.2024 09:45:35][debug][radarr] Generated tar command: --exclude '/mnt/user/appdata/radarr/MediaCovers' --exclude '/mnt/user/appdata/radarr/Backups' --exclude '/mnt/user/appdata/radarr/logs' -c -P -I zstdmt -f '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst' '/mnt/user/appdata/radarr'
[21.01.2024 09:45:35][ℹ️][radarr] Backing up radarr...
[21.01.2024 09:48:27][debug][radarr] Tar out: 
[21.01.2024 09:48:27][ℹ️][radarr] Backup created without issues
[21.01.2024 09:48:27][ℹ️][radarr] Verifying backup...
[21.01.2024 09:48:27][debug][radarr] Final verify command: --exclude '/mnt/user/appdata/radarr/MediaCovers' --exclude '/mnt/user/appdata/radarr/Backups' --exclude '/mnt/user/appdata/radarr/logs' --diff -f '/mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst' '/mnt/user/appdata/radarr'
[21.01.2024 09:48:56][debug][radarr] Tar out: tar: Removing leading `/' from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now
[21.01.2024 09:48:56][][radarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; /*stdin*\ : Read error (39) : premature end; tar: Unexpected EOF in archive; tar: Child returned status 1; tar: Error is not recoverable: exiting now
[21.01.2024 09:49:04][debug][radarr] lsof(/mnt/user/appdata/radarr)
[...]
[21.01.2024 09:49:04][debug][radarr] AFTER verify: Array
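Given the "premature end" during verification, a quick follow-up check is to test the archive on the network share directly, as a sketch (the path is taken from the debug output above):

# test the integrity of the zstd stream that was written
zstd -t /mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst
# or try listing the tar contents end to end
tar -I zstd -tf /mnt/remotes/w/backup/ab_20240121_094432/radarr.tar.zst > /dev/null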

 

debug.7z

Link to comment

Hello.

The SSD where my Docker containers lived failed, and I've installed a new one. I then used 'Restore Appdata' from my latest backup and forced the restore onto the new SSD.


If I view the SSD in MAIN, I can see these:
 

appdata/binhex-krusader
appdata/pihole
etc... a folder for each of my dockers

system/docker/docker.img
system/libvirt/libvirt.img
system/vdisk1.img

 

The VM works after I pointed it to /mnt/ssd/system/vdisk1.img.

 

In SETTINGS > Docker I have these settings:
Docker vDisk location:

/mnt/user/system/docker/docker.img


Default appdata storage location:

/mnt/user/appdata/

 

When I go to the Docker tab, all my containers are gone and it says: No Docker containers installed
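As a sanity check (a sketch, assuming the new pool is mounted at /mnt/ssd as described above), it may help to confirm where the configured docker.img path actually resolves:

# does the configured vDisk exist, and is the same file visible on the new SSD?
ls -lh /mnt/user/system/docker/docker.img /mnt/ssd/system/docker/docker.img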

 

Can anyone help please?

Link to comment
