deadnote Posted January 28, 2023

Hi, my logs are full of these messages:

Jan 28 16:55:01 Tower avahi-daemon[24594]: New relevant interface veth4df0c00.IPv6 for mDNS.
Jan 28 16:55:01 Tower avahi-daemon[24594]: Registering new address record for fe80::4c8d:85ff:fe22:ba26 on veth4df0c00.*.
Jan 28 16:59:52 Tower kernel: docker0: port 8(veth4df0c00) entered disabled state
Jan 28 16:59:52 Tower kernel: veth389904f: renamed from eth0
Jan 28 16:59:52 Tower avahi-daemon[24594]: Interface veth4df0c00.IPv6 no longer relevant for mDNS.
Jan 28 16:59:52 Tower avahi-daemon[24594]: Leaving mDNS multicast group on interface veth4df0c00.IPv6 with address fe80::4c8d:85ff:fe22:ba26.
Jan 28 16:59:52 Tower kernel: docker0: port 8(veth4df0c00) entered disabled state
Jan 28 16:59:52 Tower kernel: device veth4df0c00 left promiscuous mode
Jan 28 16:59:52 Tower kernel: docker0: port 8(veth4df0c00) entered disabled state
Jan 28 16:59:52 Tower avahi-daemon[24594]: Withdrawing address record for fe80::4c8d:85ff:fe22:ba26 on veth4df0c00.
Jan 28 17:00:09 Tower kernel: docker0: port 8(vethc56628d) entered blocking state
Jan 28 17:00:09 Tower kernel: docker0: port 8(vethc56628d) entered disabled state
Jan 28 17:00:09 Tower kernel: device vethc56628d entered promiscuous mode
Jan 28 17:00:09 Tower kernel: eth0: renamed from veth67bdfe2
Jan 28 17:00:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc56628d: link becomes ready
Jan 28 17:00:09 Tower kernel: docker0: port 8(vethc56628d) entered blocking state
Jan 28 17:00:09 Tower kernel: docker0: port 8(vethc56628d) entered forwarding state
Jan 28 17:00:11 Tower avahi-daemon[24594]: Joining mDNS multicast group on interface vethc56628d.IPv6 with address fe80::78f4:80ff:fe8c:fe48.
Jan 28 17:00:11 Tower avahi-daemon[24594]: New relevant interface vethc56628d.IPv6 for mDNS.
Jan 28 17:00:11 Tower avahi-daemon[24594]: Registering new address record for fe80::78f4:80ff:fe8c:fe48 on vethc56628d.*.

I can't find the source of the problem. Can someone help me please? Diagnostics are attached: tower-diagnostics-20230128-1711.zip
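A note for anyone trying to trace messages like these: each veth interface named in the logs is the host-side end of a container's network pair, and it can be matched back to a container by interface index. This is a generic Docker technique, not Unraid-specific; a minimal sketch, assuming each container ships a shell with cat available:

# Inside a container, /sys/class/net/eth0/iflink holds the interface
# index of the host-side veth peer.
for c in $(docker ps -q); do
  idx=$(docker exec "$c" cat /sys/class/net/eth0/iflink 2>/dev/null)
  name=$(docker inspect --format '{{.Name}}' "$c")
  echo "${name#/} -> host ifindex ${idx:-unknown}"
done
# Compare the printed indexes against the host's veth list:
ip -o link | grep veth

Matching a printed index to the number at the start of an ip -o link line tells you which container owns the veth that keeps appearing and disappearing.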
JorgeB Posted January 29, 2023

Stop all Docker containers, then start enabling them one by one to see if you can find the culprit.
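In shell terms, the suggested bisection looks roughly like this; a sketch, assuming the standard docker CLI on the Unraid console, where <container-name> is a placeholder for each of your containers:

docker stop $(docker ps -q)                           # stop every running container
tail -f /var/log/syslog | grep -E 'veth|docker0' &    # watch for the interface churn
docker start <container-name>                         # re-enable one container at a time

If the veth messages resume right after a particular container starts, that container is the culprit.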
deadnote (Author) Posted January 29, 2023

Thanks, I had read this thread on the forum before your reply. I disabled all my dockers without really finding the culprit. I then reactivated everything, and since then, no more messages in the logs!
deadnote (Author) Posted January 29, 2023

The messages have reappeared. I will continue to investigate.
Lee B Posted February 4, 2023

I'm having the same issue. Have you tracked it down?
Kilrah Posted February 4, 2023

Check the advanced view on the Docker page and look at container uptimes; this is typically a container that keeps crashing/restarting due to a bad config or similar. (The same check can be done from the CLI, as shown below.)
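The equivalent uptime check from the shell, as a sketch using standard docker CLI commands:

# A crash-looping container shows "Restarting" in its status or a very
# short "running for" value:
docker ps -a --format 'table {{.Names}}\t{{.Status}}\t{{.RunningFor}}'
# Restart counts per container (Docker increments this on each crash
# when a restart policy is set):
docker inspect --format '{{.Name}}: {{.RestartCount}}' $(docker ps -aq)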
UnKwicks Posted February 6, 2023 (Solution; edited)

I have the same issue. I added "Homer" as a Docker container from CA. Using "bridge" as the network, the container does not start and gives me:

Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
Feb 6 16:49:53 SERVER kernel: device veth9adb836 entered promiscuous mode
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered forwarding state
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
Feb 6 16:49:53 SERVER kernel: device veth9adb836 left promiscuous mode
Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state

When I give the container an IPv4 address from my custom network on interface br0, it starts up without errors. So I guess it's somehow related to the bridging.

Edit: I should add that this is the only container that does not start using bridge as the interface. I have several other containers running fine with bridge.

Edit 2: OK, I think I found it. For me it was port related: I had another Docker container already using port 8080. Unraid did not warn me that the port was in use (I thought it did in the past; maybe a bug?). I used another port and now the container starts.
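For others hitting this, two quick ways to spot a host-port collision before (or after) a container fails to start; a sketch, assuming shell access on the host and 8080 as the suspect port:

# Which container already publishes the port:
docker ps --format '{{.Names}} -> {{.Ports}}' | grep 8080
# Which process currently owns the port on the host:
ss -tulpn | grep ':8080'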
deadnote (Author) Posted February 11, 2023

Hi, back from holiday. All my dockers have been up for more than 2 days, so no problems with accidental restarts. A port of unifi-controller was using 8080; I changed it to 8082. I'm watching the logs now.
deadnote (Author) Posted February 12, 2023

No more of these logs today, so I think I can say it's solved. Thanks!
Mattaton Posted September 18, 2023

On 2/11/2023 at 2:36 PM, deadnote said:
Hi, back from holiday. All my dockers have been up for more than 2 days, so no problems with accidental restarts. A port of unifi-controller was using 8080; I changed it to 8082. I'm watching the logs now.

I'm having the same issue, and Unifi is using 8080 on mine as well. Can you tell me why its using 8080 is bad? Like, why did you even think that was the problem? 😄 Obviously you fixed it; I'm just curious why in the world that made a difference! 😄 Thanks!
deadnote (Author) Posted September 19, 2023

Because 8080 was already in use by another Docker service.
Mattaton Posted September 19, 2023

5 hours ago, deadnote said:
Because 8080 was already in use by another Docker service.

Ah, okay. Makes sense. I thought this was some sort of fix for anyone with Unifi. 😄 I've read several threads on this and seen several comments that these lines in the log are normal. Guess I'll just let it be. 🙂 Thanks!