Tinqer Posted August 18
I'm trying to work out why Duplicacy sometimes refuses to restart automatically after backups. This has been happening regularly (but not consistently) for the past couple months. Any ideas?
Kilrah Posted August 18
What's its backup destination? It may be offline/unavailable.
Tinqer Posted August 18
4 hours ago, Kilrah said: What's its backup destination? It may be offline/unavailable.
The backup destination for Appdata Backup is just a local Unraid share. I did add my desktop PC as a destination in Duplicacy, though, and it's off a lot of the time. I didn't think that could cause this, but it might explain why only Duplicacy is having this issue...
Kilrah Posted August 18
19 minutes ago, Tinqer said: I did add my desktop PC as a destination in Duplicacy, and it's off a lot of the time. Didn't think that could cause this to happen, but it might explain why only Duplicacy is having this issue...
Yeah, if it's an SMB mount, the container will fail to start if the share isn't online.
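One way to guard against that is a User Script that only starts the container once the mount is actually present. A sketch, assuming an Unassigned Devices SMB mount; the mount path and container name below are examples, not taken from this thread:

```shell
#!/bin/bash
# Sketch: only start the backup container when its SMB destination is
# reachable. Adjust MOUNT_POINT and CONTAINER to your own setup.
MOUNT_POINT="/mnt/remotes/DESKTOP_backup"   # hypothetical SMB mount path
CONTAINER="duplicacy"                       # hypothetical container name

# mountpoint returns 0 only if the path is an active mount
if mountpoint -q "$MOUNT_POINT"; then
    docker start "$CONTAINER"
else
    echo "SMB share not mounted; skipping $CONTAINER start" >&2
fi
```

Scheduled via the User Scripts plugin, this avoids the failed auto-start when the desktop PC is powered off.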
wgstarks Posted August 18
I just replaced my old appdata backup plugin with this one. During the initial test run, about 90% of my appdata folders give a warning similar to this one:
[18.08.2024 13:15:39][ℹ️][binhex-radarr] Stopping binhex-radarr... done! (took 1 seconds)
[18.08.2024 13:15:40][ℹ️][binhex-radarr] Should NOT backup external volumes, sanitizing them...
[18.08.2024 13:15:40][⚠️][binhex-radarr] binhex-radarr does not have any volume to back up! Skipping. Please consider ignoring this container.
I know these dockers do have an appdata directory, and they back up to an Unraid share rather than an external folder, so I'm confused about why they aren't being backed up. Shared the full log, ID: ed3b455b-6596-41c0-a23b-ee45819b9536
KluthR Posted August 18 Author
Without checking the debug log: are the source paths set correctly? Are the container settings correct? Did you check the "external volume" logic?
wgstarks Posted August 18
8 minutes ago, KluthR said: Without checking the debug log: are the source paths set correctly? Are the container settings correct? Did you check the "external volume" logic?
My bad. The default has two paths that both resolve to the same directory on my system, so I deleted one. 😁 I'm guessing the plugin checks the docker config, gets the path to the appdata folder, and backs up any containers whose path matches the one set in the plugin settings. Since I restored the proper source paths, it's working properly. Is there any way to get the plugin to back up the docker appdata using the start order for the dockers? I really need the dockers to restart in the proper order.
KluthR Posted August 18 Author
16 minutes ago, wgstarks said: My bad. The default has two paths that both result in the same directory on my system so I deleted one. 😁
It's important to use the path which is used for your container's main appdata. Then decide whether external paths should be included.
wgstarks Posted August 18
39 minutes ago, KluthR said: Then decide if external paths should be included.
I didn't see a setting for external paths, but I doubt any of my dockers have any anyway. What about the backup order? Is there any way to get it to match the start order?
wgstarks Posted August 18
1 hour ago, wgstarks said: What about the backup order? Is there any way to get it to match the start order?
Nevermind. Found all of this in the hidden settings for each docker.
gabeduartem Posted August 18
Is it possible to avoid creating a tar archive and just create a folder instead? I can disable the compression, but it still creates the tar file... I have an external backup service with file deduplication enabled; it doesn't work on .tar files, but I think it would if the backups were just normal folders.
KluthR Posted August 19 Author
6 hours ago, wgstarks said: What about the backup order? Is there any way to get it to match the start order?
No.
4 hours ago, gabeduartem said: Is it possible to avoid creating a tar archive and just create a folder instead?
No, not yet.
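Until folder output exists, one possible workaround is to unpack each archive after the run so the dedup tool sees plain files. A sketch only; the paths are examples, and the ab_* folder naming is taken from the log excerpts in this thread:

```shell
#!/bin/bash
# Sketch: expand Appdata.Backup's *.tar.zst archives into plain folders
# for a file-level deduplicating backup tool. Paths are examples.
SRC="/mnt/user/backups/appdata"        # where the plugin writes its archives
DST="/mnt/user/backups/appdata-flat"   # folder tree for the dedup tool

for archive in "$SRC"/ab_*/*.tar.zst; do
    [ -e "$archive" ] || continue      # glob matched nothing; skip
    name=$(basename "$archive" .tar.zst)
    mkdir -p "$DST/$name"
    # GNU tar handles zstd via --zstd (requires the zstd binary)
    tar --zstd -xf "$archive" -C "$DST/$name"
done
```

The trade-off is doubled disk usage locally, in exchange for dedup-friendly files on the external service.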
jwiener3 Posted August 19
I am having an issue where I get an error about the destination being unavailable or not writeable when running the backup to an NFS share. This was working before and then stopped. If I go to the CLI, I can change to the destination directory and create folders and files there. I have tried re-entering both the source and destination directories, and it hasn't helped. Does this process run under different privileges? I have submitted the debug log with the following ID, 05996226-dd22-42e0-b7e1-5ff3453cd482, but it looks empty. How can I troubleshoot this?
[19.08.2024 08:59:19][ℹ️][Main] 👋 WELCOME TO APPDATA.BACKUP!! :D
[19.08.2024 08:59:19][❌][Main] Destination is unavailable or not writeable!
[19.08.2024 08:59:19][ℹ️][Main] Checking retention...
[19.08.2024 08:59:19][ℹ️][Main] DONE! Thanks for using this plugin and have a safe day ;)
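One thing worth ruling out here: the plugin presumably runs as root, and with NFS "root squash" enabled on the server, root gets remapped to nobody and can't write even when a normal user can. A quick check from the Unraid console might look like this (the destination path is an example):

```shell
#!/bin/bash
# Sketch: verify the backup destination is writeable *as root*, which is
# closer to how the plugin checks it than an interactive CLI test.
DEST="/mnt/remotes/nfs_backup"         # example NFS mount path

if touch "$DEST/.ab_write_test" 2>/dev/null; then
    rm -f "$DEST/.ab_write_test"
    echo "destination is writeable"
else
    # With NFS root squash, root is remapped to nobody on the server,
    # so this can fail even when your shell user can create files.
    echo "destination is NOT writeable as root" >&2
fi
```

If the root-write test fails while a user-write test succeeds, the fix would be on the NFS export side (e.g. no_root_squash, where acceptable) rather than in the plugin.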
TrivariAlthea Posted August 19
Hi @KluthR, is there somewhere I could message you for support? I was using appdata backup for months, but now it seems to be having issues with 2 specific containers (AMP dockerized and AdGuard Home). I tried recreating them and even changing the permissions, but none of that worked. Error in question:
[19.08.2024 17:02:39][❌][AMP] tar creation failed! Tar said: zstd: error 70 : Write error : cannot write block : Input/output error; tar: /mnt/cloud1/appdata/ab_20240819_165101/AMP.tar.zst: Wrote only 4096 of 10240 bytes; tar: Child returned status 70; tar: Error is not recoverable: exiting now
The weirdest thing is that once it reaches those 2 containers, the whole Unraid webgui becomes unresponsive, which was not happening before (they are also the 2 last containers in the backup order, i.e. first and second in the start order). A few seconds before the whole Unraid crash I also get this notification 2 times:
I have an rclone mount on /mnt/cloud1 that mounts when the array is started for the first time, using a custom user script. My docker.img is in /mnt/cache and my appdata is in /mnt/user/appdata. From my previous discussions with Unraid support, this mount path should be fine, as I do not have any shares defined there. Could you maybe DM me asking for what you need, including the custom user scripts, if that would help? I am no longer sure what to do, as a week ago it was all still working. Thank you for any help.
JonathanM Posted August 19
49 minutes ago, TrivariAlthea said: is there somewhere where I could message you for support?
This is the correct place.
49 minutes ago, TrivariAlthea said: Could you maybe DM me asking for what you need
Support should be handled publicly, so that if someone else has the same issue the answer is available without having to be rehashed over and over. Private support should be an absolute last resort, and since it only benefits one individual, it's fairer if it's paid support. This is just a general statement; individual authors are free to handle it how they wish.
gabeduartem Posted August 19
11 hours ago, KluthR said: No, not yet
Ah, I see... Looking forward to when it gets implemented then; it would be very useful for deduplication purposes. Also, I assume there's no way yet to run post-backup scripts per container? I'd love to tap into the plugin's workflow and, while the containers are stopped, also manually do DB backups. Without that, I'd have to stop the containers again (and manually handle that logic), causing more downtime...
Kilrah Posted August 19
1 hour ago, gabeduartem said: I'd love to tap into the plugin's workflow and while the containers are stopped also manually do DB backups.
Presumably the DB container would already be backed up by AB (and you'd have grouped it with the containers that use it), so you wouldn't need separate DB backups.
Blasman Posted August 19
1 hour ago, gabeduartem said: Also, I assume there's no way yet to do `post-backup` scripts per container?
It's in the BETA version of the plugin from the Unraid Community Applications. It's working great for me.
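For anyone trying this, a per-container post-backup script for a database dump might look roughly like the sketch below. I don't know the beta's exact script interface, so the container name, credentials handling, and paths are all placeholder assumptions:

```shell
#!/bin/bash
# Hypothetical sketch of a post-backup database dump. Everything named
# here (container, env var, dump path) is an example, not the plugin's API.
DB_CONTAINER="mariadb"                 # hypothetical DB container name
DUMP_DIR="/tmp/db-dumps"               # example path; use an array share in practice
STAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$DUMP_DIR"
if command -v docker >/dev/null; then
    # Run mysqldump inside the DB container; the root password is read
    # from the container's own environment, not hard-coded here.
    docker exec "$DB_CONTAINER" \
        sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
        > "$DUMP_DIR/all-databases_$STAMP.sql" \
        || echo "dump failed; is $DB_CONTAINER running?" >&2
else
    echo "docker not available" >&2
fi
```

As Kilrah notes above, grouping the DB container with its dependents may make a separate dump unnecessary; a logical dump like this is mainly useful as an extra, file-format-independent copy.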
TrivariAlthea Posted August 19
1 hour ago, JonathanM said: This is the correct place. Support should be handled publicly, so that if someone else has the same issue the answer is available without having to be rehashed over and over.
Thanks, Jonathan, for the clarification. I'll wait for KluthR to mention what information he needs. In case there is sensitive information, like in the user scripts, I will try to replace it beforehand.
KluthR Posted August 19 Author
3 hours ago, TrivariAlthea said: I was using appdata backup for months, but now it seems to be having issues at 2 specific containers
That looks weird. Which file system is used on the destination side?
TrivariAlthea Posted August 19
31 minutes ago, KluthR said: Which file system is used on destination side?
By destination side, do you mean the docker image or the rclone/pcloud mount? The array itself is encrypted XFS and the cache is encrypted btrfs. The rclone/pcloud mount reports "fuse.rclone" as the FSTYPE value in findmnt. Docker.img itself is btrfs, placed at /mnt/cache/docker/docker.img.
KluthR Posted August 19 Author
The one which is set as destination: /mnt/cloud1/appdata. However, are you sure the filesystem is OK? The error message is very clear. What does Unraid show in its log screen? Please try without compression (or use the single-core option, not multicore).
Kilrah Posted August 19
The filesystem is "virtual" since it's an rclone mount. Maybe it chokes on files that take too long to be created or are too big, a lost connection, or something that changed on the cloud provider's side... I'd rather back up to a local destination, then sync that to the rclone mount.
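That two-step approach could be scripted roughly like this; the remote name and paths are examples, not taken from this thread:

```shell
#!/bin/bash
# Sketch of "local first, then sync": point the plugin at a local share,
# then push the finished backups to the rclone remote from a scheduled
# User Script. Remote name and paths are examples.
LOCAL="/mnt/user/backups/appdata"
REMOTE="pcloud:appdata-backups"        # hypothetical rclone remote

if command -v rclone >/dev/null; then
    # sync makes the remote mirror the local folder; a failure here
    # doesn't touch the local copies, which is the point of this setup
    rclone sync "$LOCAL" "$REMOTE" --transfers 4 --log-level INFO \
        || echo "rclone sync failed; local backups are still intact" >&2
else
    echo "rclone not found in PATH" >&2
fi
```

This way tar writes to a real local filesystem (no zstd I/O errors from the fuse mount), and flaky cloud connectivity only affects the sync step.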
Vitek Posted August 21
How do I set "Delete backups if older than x days" for "Backup the flash drive"? Is that even possible, or do I have to manually delete old flash drive backup files?
KluthR Posted August 21 Author
14 minutes ago, Vitek said: How to set "Delete backups if older than x days:" but for "Backup the flash drive?"
Not possible
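Until that's supported, a scheduled User Script can prune old flash backups instead. A sketch, assuming a flash destination path and a .zip filename pattern; check what your setup actually produces before using this:

```shell
#!/bin/bash
# Sketch workaround: the plugin has no retention setting for flash
# backups, so prune them on a schedule. Path, pattern, and age are
# examples -- match them to your real flash backup destination.
FLASH_BACKUPS="/mnt/user/backups/flash"
KEEP_DAYS=30

if [ -d "$FLASH_BACKUPS" ]; then
    # print and delete flash backup archives older than KEEP_DAYS days
    find "$FLASH_BACKUPS" -maxdepth 1 -name '*.zip' \
        -mtime +"$KEEP_DAYS" -print -delete
else
    echo "flash backup folder not found: $FLASH_BACKUPS" >&2
fi
```

The -print before -delete logs what gets removed, which is worth keeping the first few times the schedule runs.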