[Plugin] Appdata.Backup



On 4/18/2023 at 2:45 AM, KluthR said:

Exact same reason :)

 

I could remove a mapping once it has appeared, but then it could be backed up in the wrong container context, which causes confusion.


I think at the very least the duplicates should be cleared from the generated command, because listing the volume twice in the tar command like this is redundant:
 

[19.04.2023 12:10:01][debug] Generated tar command: --exclude '/mnt/user/appdata/plex/*' -c -P -f '/mnt/user/Backups/Appdata Backup/ab_20230419_120957/tautulli.tar' '/mnt/user/appdata/tautulli' '/mnt/user/appdata/tautulli' '/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs'
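For illustration only (this is not the plugin's actual code): deduplicating the detected volume paths before the tar command is assembled could look roughly like this in shell, assuming the volumes have been collected into an array. The paths are taken from the log line above.

# Hypothetical sketch: collect detected volumes, drop exact duplicates, then build the tar call
volumes=(
  '/mnt/user/appdata/tautulli'
  '/mnt/user/appdata/tautulli'
  '/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Logs'
)
declare -A seen
unique_volumes=()
for v in "${volumes[@]}"; do
  if [[ -z "${seen[$v]}" ]]; then
    seen["$v"]=1
    unique_volumes+=("$v")
  fi
done
tar --exclude '/mnt/user/appdata/plex/*' -c -P \
  -f '/mnt/user/Backups/Appdata Backup/ab_20230419_120957/tautulli.tar' \
  "${unique_volumes[@]}"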


 

On 4/19/2023 at 5:32 PM, DimaS70 said:

I faced the issue during backup image verification

This is because of a bug (volume mapping nesting) which is fixed in the latest BETA version. You have also mapped the whole appdata share - that will be ignored as well in the next update.

 

On 4/19/2023 at 6:12 PM, halorrr said:

Yes, exactly - because right now exclusions don't seem to work on internal volumes

They are working - but not if you exclude a whole volume, only sub-contents of it. This seems to be a limitation of tar's verify command. The next update removes matching volume/exclusion pairs.
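A rough sketch of that idea (not the plugin's code): if an exclusion covers an entire volume, drop the volume from the tar argument list instead of passing it alongside an --exclude, so the verify step never has to deal with it. Paths and names below are made up for illustration.

# Hypothetical volume and exclusion lists
volumes=('/mnt/user/appdata/tautulli' '/mnt/user/appdata/plex')
excludes=('/mnt/user/appdata/plex')   # the user excluded the whole plex volume

filtered=()
for v in "${volumes[@]}"; do
  skip=0
  for e in "${excludes[@]}"; do
    [[ "$v" == "$e" ]] && skip=1    # exclusion matches the whole volume: drop the pair
  done
  [[ $skip -eq 0 ]] && filtered+=("$v")
done

# Only /mnt/user/appdata/tautulli ends up in the archive; no --exclude for plex is needed
tar -c -P -f '/mnt/user/Backups/tautulli.tar' "${filtered[@]}"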

 

On 4/19/2023 at 6:16 PM, halorrr said:

because listing the volume twice in the tar command like this is redundant

That should then solve itself as well.

 

On 4/19/2023 at 11:15 PM, Kazino43 said:

I'm getting some warning notifications, even though the log seems kind of fine.

Warnings are not errors - just warnings. They tell you there might be an issue. Some functions change their behavior if certain conditions are met; those produce warnings so you know about it.

 

 

That being said: there is a new beta version with all current issues fixed. Before releasing it publicly, I want you (all quoted ones, and @MothyTim - did I forget someone?) to test it. Note: this requires configuring the beta version from scratch (or you simply copy over the config.json from appdata.backup to appdata.backup.beta - yes, automating that would be nice, noted).
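If you go the copy route, something like this should do it - the paths are an assumption based on the usual Unraid plugin config location under /boot/config/plugins, so verify them on your system first:

# Assumed locations - check before running
cp /boot/config/plugins/appdata.backup/config.json \
   /boot/config/plugins/appdata.backup.beta/config.json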


@schreibman: Regarding

Not a real bug here - I didn't expect the same mapping to show up twice, at least from the host's point of view. Deduplicating detected volumes is a nice-to-have - noting it.

 

But how was your Plex working before? Did you just change the mapping paths, as it seems, or did you reorganize the files as well? All files were in the same place at first.


Hi, thanks for the great work maintaining this - I've had to use restore recently and you saved my a$$. 

 

Is this the right spot for feature requests? If not, please feel free to move or delete.

 

Ideally, I'm looking for an option (or even default behaviour?) to shut down a docker only when its backup is about to start, then immediately restart it once that backup is complete (this assumes the user has selected separate .tar backups). My use case is that I use Agent-DVR and Zoneminder, which I'd love to have backed up, but I don't want to wait for the massive and time-consuming Plex backup to complete before the DVR dockers restart. For now they are excluded and set not to stop.
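A minimal sketch of the requested behaviour, using plain docker and tar commands (container names and paths are made up for illustration; this is not how the plugin actually schedules its work):

# Hypothetical per-container loop: each container is only down for the
# duration of its own backup instead of waiting for the whole run.
for name in agent-dvr zoneminder plex; do
  docker stop "$name"
  tar -c -P -f "/mnt/user/Backups/Appdata Backup/${name}.tar" "/mnt/user/appdata/${name}"
  docker start "$name"
done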

 

thank you!!


Just used this to back up my appdata - I can see this being a life saver, so thank you for that.
My question is: when I come to do a restore, do I have to have, say, Sonarr already installed, or do I just restore the backup and that takes care of it too?

Once again thank you 

26 minutes ago, vipermo said:

My question is: when I come to do a restore, do I have to have, say, Sonarr already installed, or do I just restore the backup and that takes care of it too?

It's described on the restore page: it only restores the data; any container action (like stopping or creating) must be done by you.

 

Restoring isn't that powerful currently. Maybe I'll add some logic later to do a fully automated container restore.
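In practice a manual restore therefore looks roughly like this (container name and archive path are examples only; the generated tar commands shown earlier use -P with absolute paths, so extracting with -P puts the files back where they came from):

# Hypothetical manual restore for Sonarr: the plugin restores the data,
# stopping/starting (or recreating) the container is up to you.
docker stop sonarr        # if the container already exists
tar -x -P -f '/mnt/user/Backups/Appdata Backup/ab_20230419_120957/sonarr.tar'
docker start sonarr       # or reinstall the template via Apps > Previous Apps first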


Tdarr and tdarr node share (some of) the same volumes. The tdarr backup failed because tdarr node wrote to the log files during the backup, so verification failed.

 

Exclude the shared path in one of the two.

 

You should also check your npm mappings - none of these seem to exist?
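For example, if both containers map the same log directory, adding that path to the exclusion list of one of them (shown here at the tar level for illustration; the directory names are assumptions) keeps it out of that container's archive:

# Hypothetical: keep the shared tdarr log path out of the tdarr node backup
tar --exclude '/mnt/user/appdata/tdarr/logs/*' -c -P \
  -f '/mnt/user/Backups/Appdata Backup/tdarr_node.tar' \
  '/mnt/user/appdata/tdarr_node'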

4 hours ago, KluthR said:

Tdarr and tdarr node share (some of) the same volumes. The tdarr backup failed because tdarr node wrote to the log files during the backup, so verification failed.

Exclude the shared path in one of the two.

You should also check your npm mappings - none of these seem to exist?

Thank you so much! Re: NPM - this is just an old docker; I've added it to the excludes also.

Thanks again 

