KluthR

Community Developer | Posts: 699 | Days Won: 5

Everything posted by KluthR

  1. The plugin is actually using the delay from the docker page: https://github.com/Commifreak/unraid-appdata.backup/blob/f5bfd73ed69afe0a7fb22707e11881d91708baf1/src/include/ABHelper.php#L263
  2. In the current .10 release I also see permanent DHCPv6 renew messages. I didn't remember the exact term, hold on… dhcpcd[1128]: eth0: requesting DHCPv6 information every ten secs. But maybe it's because I have only a Stateful-RA-enabled router?
  3. Same issue here. Unraid does not pick up the latest prefix from the RA. This dynamic prefix shit for private customers is a pain...
  4. The "Group" field inside the container-specific settings.
  5. It seems that something is messing with the files. Is some other "thing" doing that? The Mover? The best way is to share your debug log.
  6. @Revan335 it's all described on the restore page, except for "one file": it's either all or nothing. Restoring only a single file is not selectable.
  7. The start order is used for stop, backup, and start. If you put both in one group, both get stopped, then backed up, and then started in the group order. So: yes, that would be your solution.
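To illustrate (container names are made up; this is not the plugin's actual code), the grouped mode boils down to three passes over the group in its configured order:

```shell
# Sketch of the grouped stop -> backup -> start sequence. Each phase walks
# the whole group in its configured order before the next phase begins.
group="containerA containerB"
order=""

for c in $group; do order="$order stop:$c"; done     # docker stop phase
for c in $group; do order="$order backup:$c"; done   # archive appdata phase
for c in $group; do order="$order start:$c"; done    # docker start phase

echo "$order"
```

So the second container is only restarted after the backups of both have finished, which matches the behaviour described above.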
  8. New observation: it's not happening with a bridge network interface. See https://forums.docker.com/t/occasional-ping-issues-over-ipv6/140296/2?u=rkluth @ich777 You also have IPv6 experience with Unraid's docker. Did you ever come across such an issue?
  9. The order is effective for both modes, as the container list gets "sorted" once when the backup starts.
  10. But the log above that field is visible? No recent reboot or anything?
  11. Yea. Just checked. The plugin's chmod runs after post-run. Will change that in the next release.
  12. Check the settings. What's inside "Allowed appdata source paths"? Is the volume in question maybe "external" and therefore excluded? Are some of your containers excluding /mnt, which leads to empty zips? Point to the backup and restore all XMLs and appdata; use "Previous Apps" to install all containers. Currently you need some manual work. Another check: I bet this container is inside a group?
  13. Could you please upload a debug log and share its ID?
  14. Oops. Yes, not CA related. Could someone move it for me?
  15. Interesting. Is there something special about /mnt/user/unraid_backup? What happens if you run "mkdir /mnt/user/unraid_backup/appdata_backup/test"?
  16. Feels more and more like a general v6 docker issue. Docker and IPv6 are not the best of friends; there are several issues regarding v6.
  17. Hey, I migrated from a dedicated cmk appliance to the docker version to get rid of one physical system. However, I experience recurring v6 Smart Ping issues with the container. I can observe this with ping6 from inside the cmk container: every few minutes I get 100% packet loss, then it's back to normal, and so on. Unraid 6.12.8. Anyone with similar issues? The container is set up this way:

      docker run -d --name='checkmkk_empty' --net='eth0' --ip='192.168.178.29' \
        -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="NAS" \
        -e HOST_CONTAINERNAME="checkmkk_empty" -e 'CMK_SITE_ID'='l22' \
        -l net.unraid.docker.managed=dockerman \
        -v '/mnt/ssd/appdata/checkmk_empty/sites':'/omd/sites':'rw' \
        -v '/etc/localtime':'/etc/localtime':'ro' \
        -v '/mnt/ssd/appdata/checkmk/backup':'/backup':'rw' \
        --mac-address 02:00:00:00:00:29 \
        --tmpfs /opt/omd/sites/l22/tmp:uid=1000,gid=1000 \
        'checkmk/check-mk-cloud:2.2.0p23'

      Some ping screenshots show the v6 issue only within the cmk container. The HA container does not show such issues..? HA is set up the same way. Docker is using a macvtap bridge. Thanks in advance!
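If anyone wants to log this over time, a tiny shell helper (purely illustrative, not part of any image) can pull the loss percentage out of ping's summary line so the periodic 100% drops stand out in a long log:

```shell
# Hypothetical helper: extract the packet-loss figure from a ping summary
# line. Feed it the last line of each ping6 run inside a loop.
loss_of() {
  printf '%s\n' "$1" | grep -oE '[0-9]+% packet loss'
}

loss_of '5 packets transmitted, 0 received, 100% packet loss, time 4094ms'
# -> 100% packet loss
```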
  18. I'll try to release an update within the next day.
  19. There is none. The restore tab should be self-explanatory.
  20. Did anyone test the fix?
  21. Yes. You can try the hotfix if wanted.
  22. Not an issue either, because there were no changes. The main purpose of this plugin is docker backup; using it just for Flash backup actually works, but it's not intended. I will see what I can do. You added "/mnt/user" as an exclusion in many containers. Since I work with tar's exclusion syntax, I'm not always able to 100% check things for logical correctness. However, you should fix the exclusions so they do NOT exclude the whole "/mnt/user" if you want the backup to include "/mnt/user/appdata/*".
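A quick demo of the effect (the tree and paths are made up for the example, run with GNU tar): because tar skips excluded directories entirely, excluding the parent also drops everything below it, including appdata.

```shell
# Build a tiny tree, then archive it with a too-broad and a harmless exclude.
tmp=$(mktemp -d)
mkdir -p "$tmp/mnt/user/appdata"
echo hi > "$tmp/mnt/user/appdata/file.txt"

cd "$tmp"
tar -cf bad.tar  --exclude='mnt/user' mnt        # excludes the whole subtree
tar -cf good.tar --exclude='mnt/user/other' mnt  # appdata survives

tar -tf bad.tar    # lists only the top-level mnt/ entry
tar -tf good.tar   # also lists mnt/user/appdata/file.txt
```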
  23. It should print a red, clear error then. But yes, it could create the dir. Noted. Damn, my fault: the global exclusion list gets ignored during container mapping determination. It will be fixed in the next update. But note: the global exclusion list also supports tar's exclusion syntax, but only exact path matches are supported!