Squid Posted June 4 (Author)

IDK. But I can tell you that the OS does NOT support local storage of icons. Never has. It is designed around cloud storage for them and then saves a local copy to serve to your browser. It is detecting a change in the "URL" and attempting a re-download. I'd guess it is displaying a question mark for you.
xyzeratul Posted June 4

1 hour ago, Squid said: "IDK. But I can tell you that the OS does NOT support local storage of icons. ..."

Thanks. I think they need to add this function; I'd rather have local storage for this.
cherrybullet Posted June 10

I'm not able to save the "delay in days" option for plugins. If I enter a number there, hit the Apply button, and then refresh, it goes back to a blank value. Has anyone had this happen before and know the fix? I'm also not certain that auto updates for plugins are working, but they are working for docker.
Revan335 Posted June 23 (edited)

Auto Update not working. Diagnostics are on the way via PM. No changes on my end, but it hasn't worked since yesterday.

Edited June 23 by Revan335
iXNyNe Posted July 4

On 6/10/2023 at 6:36 AM, cherrybullet said: "I'm not able to save the "delay in days" option for plugins. ..."

I can confirm I'm having this issue as well.
unRAID 6.12.2
CA Auto Update Applications 2023.07.03
Squid Posted July 4 (Author)

Thanks. Try the update just made available.
Dominik.W Posted July 5

Hey. I am not able to change "Send Notifications on update". After a refresh it is always set back to "Yes".
Squid Posted July 5 (Author)

7 hours ago, Dominik.W said: "Hey. I am not able to change "Send Notifications on update". ..."

Try today's update.
Dominik.W Posted July 5

12 minutes ago, Squid said: "Try today's update"

Works. Thank you!
Masterwishx Posted July 24 Share Posted July 24 using latest version of ca Update on 6.12.3 , i found i have Delay in days: 3 , if i set "" or "0" it shows right in "AutoUpdateSettings.json" but in web GUI , if im entering again to CA Update it shows again "3" Quote Link to comment
randalotto Posted July 31

I've got a bit of a weird one: I've been having issues with Zigbee2MQTT not restarting after an update. In some cases it will restart properly, but certain devices won't rejoin the network. To avoid the issue, I excluded Zigbee2MQTT and related apps (Home Assistant and MQTT) from auto updates and auto backups (and they are also set not to turn off during auto backups). My situation has improved, but I still sometimes find that it has stopped unexpectedly. For the latest crash, I realized it coincided not with the auto app update, but with the auto plugin update. (Starting at 4:15:01, there's no data.) Here's the log:

Jul 30 04:12:37 Tower CA Backup/Restore: Backup / Restore Completed
Jul 30 04:15:01 Tower Plugin Auto Update: Checking for available plugin updates
Jul 30 04:15:02 Tower kernel: docker0: port 18(vethccf4ece) entered disabled state
Jul 30 04:15:02 Tower kernel: veth24ef50b: renamed from eth0
Jul 30 04:15:02 Tower avahi-daemon[8703]: Interface vethccf4ece.IPv6 no longer relevant for mDNS.
Jul 30 04:15:02 Tower avahi-daemon[8703]: Leaving mDNS multicast group on interface vethccf4ece.IPv6 with address fe80::9c9f:54ff:fe0f:5de.
Jul 30 04:15:02 Tower kernel: docker0: port 18(vethccf4ece) entered disabled state
Jul 30 04:15:02 Tower kernel: device vethccf4ece left promiscuous mode
Jul 30 04:15:02 Tower kernel: docker0: port 18(vethccf4ece) entered disabled state
Jul 30 04:15:02 Tower avahi-daemon[8703]: Withdrawing address record for fe80::9c9f:54ff:fe0f:5de on vethccf4ece.
Jul 30 04:15:02 Tower kernel: docker0: port 18(veth7d3c02b) entered blocking state
Jul 30 04:15:02 Tower kernel: docker0: port 18(veth7d3c02b) entered disabled state
Jul 30 04:15:02 Tower kernel: device veth7d3c02b entered promiscuous mode
Jul 30 04:15:02 Tower kernel: docker0: port 18(veth7d3c02b) entered blocking state
Jul 30 04:15:02 Tower kernel: docker0: port 18(veth7d3c02b) entered forwarding state
Jul 30 04:15:02 Tower kernel: eth0: renamed from vethabbc255
Jul 30 04:15:02 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7d3c02b: link becomes ready
Jul 30 04:15:04 Tower avahi-daemon[8703]: Joining mDNS multicast group on interface veth7d3c02b.IPv6 with address fe80::583c:b9ff:fe32:c699.
Jul 30 04:15:04 Tower avahi-daemon[8703]: New relevant interface veth7d3c02b.IPv6 for mDNS.
Jul 30 04:15:04 Tower avahi-daemon[8703]: Registering new address record for fe80::583c:b9ff:fe32:c699 on veth7d3c02b.*.
Jul 30 04:15:04 Tower Plugin Auto Update: fix.common.problems.plg version 2023.07.29 does not meet age requirements to update - 1 days old
Jul 30 04:15:04 Tower Plugin Auto Update: unassigned.devices.plg version 2023.07.28 does not meet age requirements to update - 2 days old
Jul 30 04:15:04 Tower Plugin Auto Update: Checking for language updates
Jul 30 04:15:04 Tower Plugin Auto Update: Community Applications Plugin Auto Update finished
Jul 30 04:15:09 Tower kernel: docker0: port 18(veth7d3c02b) entered disabled state
Jul 30 04:15:09 Tower kernel: vethabbc255: renamed from eth0
Jul 30 04:15:09 Tower avahi-daemon[8703]: Interface veth7d3c02b.IPv6 no longer relevant for mDNS.
Jul 30 04:15:09 Tower avahi-daemon[8703]: Leaving mDNS multicast group on interface veth7d3c02b.IPv6 with address fe80::583c:b9ff:fe32:c699.
Jul 30 04:15:09 Tower kernel: docker0: port 18(veth7d3c02b) entered disabled state
Jul 30 04:15:09 Tower kernel: device veth7d3c02b left promiscuous mode
Jul 30 04:15:09 Tower kernel: docker0: port 18(veth7d3c02b) entered disabled state
Jul 30 04:15:09 Tower avahi-daemon[8703]: Withdrawing address record for fe80::583c:b9ff:fe32:c699 on veth7d3c02b.
Jul 30 04:30:01 Tower root: Fix Common Problems Version 2023.07.16

Any idea what could cause that?
greenflash24 Posted August 27 (edited)

On 7/5/2023 at 9:37 PM, Squid said: "Try today's update"

For me the Send Notifications On Update? option is still not respected in the latest version. I have set this option to No, as I don't want to receive notifications on each container update, but I still get these notifications.

Edited August 28 by greenflash24
MylesM Posted August 27

Hello! I'm having an issue with what seem to be leftovers from my gitlab-runner. These show up in the auto updater settings, but I'm a little confused because they don't show up on my docker menu, and even running docker system info, docker ps -a, and docker image ls doesn't show anything left over. The only trace of these I can see is in the auto updater settings. Any way I can clean these up, or even better, make sure gitlab-runner is cleaning up after itself properly?
Squid Posted August 27 (Author)

On the docker tab, if you switch to advanced view, are there any orphaned images? If so, delete them.
MylesM Posted August 27

9 minutes ago, Squid said: "On the docker tab, if you switch to advanced view, are there any orphaned images? If so, delete them."

I did this. There were a few orphaned images, but they weren't the ones shown in the auto updater, and deleting them didn't seem to help.
NLS Posted August 27

Please implement an "implicit no" for auto updates (i.e. default auto update to yes, and set one or a few containers specifically to no). Right now you only have "yes" (which cannot set one or some to no) or "no" (which lets you manually set a few to yes). It should be "default yes" or "default no", and in both cases allow changing individual entries to the other option. Thanks.
bj___ Posted September 19

@Squid I have a problem with how a docker container is restarted by the auto update plugin. If I read your code correctly, you are using the docker update feature of the official web UI to update a container, which also stops/starts the container as required. However, after that is done, you are starting the container again with `docker start $containerScript`, or a custom script if available. No problems with the custom script, but is the `docker start` line really needed? I have some post-exec arguments in the container definition which must be applied, and they are not respected by just manually running `docker start`. I know that I can supply my own script, though my question is whether this restart is really needed at that point. Cheers!
rama3124 Posted September 23

Hi, in the docker update section of this plugin, I wanted the auto update to run every 4 hours, so I used the following cron expression: 0 0/4 * * *. I also have send notifications on update turned on. However, when I woke up this morning and checked my notification history, no updates had run for my docker containers. When I checked for updates manually, there were several available. Am I doing something wrong? Perhaps this plugin can't check more than once a day?
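One thing worth checking: `0/4` as the hour field is a Quartz-style extension; classic Vixie-style cron expects a step on `*` or a range, i.e. `*/4` or `0-23/4`. Some cron implementations accept the bare-number form, others don't. A small Python sketch (a hypothetical parser, not the plugin's scheduler) shows how a strict reading of `0/4` collapses to a single hour:

```python
def expand_hour_field(field: str) -> list[int]:
    """Expand a cron hour field supporting '*', 'a-b' ranges, and '/step'."""
    base, _, step = field.partition("/")
    step_n = int(step) if step else 1
    if base == "*":
        lo, hi = 0, 23
    elif "-" in base:
        lo, hi = (int(x) for x in base.split("-"))
    else:
        # A bare number before '/' is a Quartz-ism; under a strict Vixie
        # reading the "range" is just that single value, so 0/4 -> [0]
        lo = hi = int(base)
    return list(range(lo, hi + 1, step_n))
```

Here expand_hour_field("*/4") gives [0, 4, 8, 12, 16, 20] (every 4 hours), while a strict reading of "0/4" yields only [0], i.e. once a day — which could explain the behavior described. If so, rewriting the schedule as 0 */4 * * * may be worth trying.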