WebUI down for everything, including containers...


aglyons


I can't pull up the admin UI or the web UI of any of my Docker containers. I can still ping the server's IP, and the Pi-hole container is still responding to DNS requests!?!?

 

I have syslog messages going to my Synology, and I saw this in the log at the end. Lots of network renaming going on here. Not sure if this is contributing to the problem; I've never seen it before.

 

Just to be clear, I did nothing to the server before this happened. I was logged into the admin UI to check things over, closed it out, and did some work for a couple of hours. When I went back to the admin UI, nothing was responding.

 

2022-09-06,09:41:30,Info,KNOXX,kern,kernel,veth16a34ee: renamed from eth0
2022-09-06,09:39:14,Info,KNOXX,kern,kernel,eth0: renamed from veth16a34ee
2022-09-06,09:39:10,Info,KNOXX,kern,kernel,veth41e373e: renamed from eth0
2022-09-06,09:37:42,Info,KNOXX,kern,kernel,eth0: renamed from veth9606116
2022-09-06,09:37:35,Info,KNOXX,kern,kernel,veth7d00851: renamed from eth0
2022-09-06,09:37:07,Info,KNOXX,daemon,avahi-daemon,Registering new address record for fe80::40ca:2fff:fe7a:1dd6 on veth3e9feaf.*.
2022-09-06,09:37:07,Info,KNOXX,daemon,avahi-daemon,New relevant interface veth3e9feaf.IPv6 for mDNS.
2022-09-06,09:37:07,Info,KNOXX,daemon,avahi-daemon,Joining mDNS multicast group on interface veth3e9feaf.IPv6 with address fe80::40ca:2fff:fe7a:1dd6.
2022-09-06,09:37:05,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3e9feaf) entered forwarding state
2022-09-06,09:37:05,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3e9feaf) entered blocking state
2022-09-06,09:37:05,Info,KNOXX,kern,kernel,IPv6: ADDRCONF(NETDEV_CHANGE): veth3e9feaf: link becomes ready
2022-09-06,09:37:05,Info,KNOXX,kern,kernel,eth0: renamed from veth04df8e5
2022-09-06,09:37:04,Info,KNOXX,kern,kernel,device veth3e9feaf entered promiscuous mode
2022-09-06,09:37:04,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3e9feaf) entered disabled state
2022-09-06,09:37:04,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3e9feaf) entered blocking state
2022-09-06,09:37:01,Info,KNOXX,daemon,avahi-daemon,Withdrawing address record for fe80::88d7:b5ff:fe84:e71b on veth3edcb58.
2022-09-06,09:37:01,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3edcb58) entered disabled state
2022-09-06,09:37:01,Info,KNOXX,kern,kernel,device veth3edcb58 left promiscuous mode
2022-09-06,09:37:01,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3edcb58) entered disabled state
2022-09-06,09:37:01,Info,KNOXX,daemon,avahi-daemon,Leaving mDNS multicast group on interface veth3edcb58.IPv6 with address fe80::88d7:b5ff:fe84:e71b.
2022-09-06,09:37:01,Info,KNOXX,daemon,avahi-daemon,Interface veth3edcb58.IPv6 no longer relevant for mDNS.
2022-09-06,09:37:01,Info,KNOXX,kern,kernel,vethe59ebd6: renamed from eth0
2022-09-06,09:37:01,Info,KNOXX,kern,kernel,br-8efd84f1e081: port 5(veth3edcb58) entered disabled state
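For what it's worth, pairs like `vethXXXXXXX: renamed from eth0` / `eth0: renamed from vethXXXXXXX` show up whenever a bridged Docker container stops or starts: Docker moves the container's `eth0` into or out of its network namespace and renames it, so by themselves these lines just indicate container restarts. A quick way to pull only those events out of a syslog export (a sketch; the sample file below is a stand-in built from the lines above, so point grep at your real export instead):

```shell
# Filter container-interface rename events out of a syslog CSV export.
# /tmp/sample_syslog.csv is a stand-in assembled from the log lines above.
cat > /tmp/sample_syslog.csv <<'EOF'
2022-09-06,09:41:30,Info,KNOXX,kern,kernel,veth16a34ee: renamed from eth0
2022-09-06,09:39:14,Info,KNOXX,kern,kernel,eth0: renamed from veth16a34ee
2022-09-06,09:37:07,Info,KNOXX,daemon,avahi-daemon,New relevant interface veth3e9feaf.IPv6 for mDNS.
EOF
# Keep only the rename events; everything else (avahi, bridge state) is noise here.
grep 'renamed from' /tmp/sample_syslog.csv
```

If the renames line up with a container that should not have been restarting, that container is worth a closer look.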


Here are the network details from the CLI. Is it just me, or is there a lot of IPv6 in here? I thought I had all of that disabled in the UI.

 

root@KNOXX:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 78:2b:cb:47:8f:86 brd ff:ff:ff:ff:ff:ff
12: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 78:2b:cb:47:8f:87 brd ff:ff:ff:ff:ff:ff
13: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc mq master br0 state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
18: br-8efd84f1e081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d5:9c:d6:00 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-8efd84f1e081
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d5ff:fe9c:d600/64 scope link
       valid_lft forever preferred_lft forever
80: eth0.2@eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc noqueue master br0.2 state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
81: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 scope global br0
       valid_lft forever preferred_lft forever
82: br0.2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.202.2/24 scope global br0.2
       valid_lft forever preferred_lft forever
83: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
    link/ether 02:42:5f:04:cb:c3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5fff:fe04:cbc3/64 scope link
       valid_lft forever preferred_lft forever
129: vethafee979@if128: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8efd84f1e081 state UP group default
    link/ether c2:2a:88:7f:90:eb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::c02a:88ff:fe7f:90eb/64 scope link
       valid_lft forever preferred_lft forever
131: veth5397ea1@if130: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8efd84f1e081 state UP group default
    link/ether de:be:88:6a:51:81 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::dcbe:88ff:fe6a:5181/64 scope link
       valid_lft forever preferred_lft forever
133: veth70709e0@if132: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master docker0 state UP group default
    link/ether da:1e:bf:55:72:32 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::d81e:bfff:fe55:7232/64 scope link
       valid_lft forever preferred_lft forever
135: veth520e30b@if134: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8efd84f1e081 state UP group default
    link/ether 26:54:d0:c5:b8:9a brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::2454:d0ff:fec5:b89a/64 scope link
       valid_lft forever preferred_lft forever
137: vethc7c592a@if136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8efd84f1e081 state UP group default
    link/ether 92:4e:9a:41:87:88 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::904e:9aff:fe41:8788/64 scope link
       valid_lft forever preferred_lft forever
142: veth3e9feaf@if141: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-8efd84f1e081 state UP group default
    link/ether 42:ca:2f:7a:1d:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::40ca:2fff:fe7a:1dd6/64 scope link
       valid_lft forever preferred_lft forever
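If it helps: all of those `fe80::` entries are link-local addresses the kernel auto-assigns to interfaces Docker creates (`docker0`, `br-…`, `veth…`). As far as I can tell, disabling IPv6 in the Unraid network settings only applies to the host interfaces like br0, not to Docker-created ones, so seeing them is normal. You can confirm the kernel-wide setting directly (a diagnostic sketch; interface names will differ per system):

```shell
# Check whether IPv6 is disabled kernel-wide (1 = disabled, 0 = enabled).
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
# List any remaining link-local IPv6 addresses (guarded in case iproute2 is absent).
command -v ip >/dev/null 2>&1 && ip -6 addr show scope link || true
```

If `disable_ipv6` reads 0, the kernel is still assigning link-local addresses regardless of what the UI setting says.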


On a side note, maybe this will help diagnose the cause.

 

I have Home Assistant installed, and its port mapping has always been blank.

 

I have tried to run HA on different network configs, but it will only respond when set to use 'host'. With all the other settings I get an odd blank URL. The only way I've been able to access the UI is by opening the WebUI from the container menu.
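For what it's worth, a blank port mapping is expected with host networking: a container running with `--network host` shares the host's network stack, so Docker records no port mappings at all, and a WebUI URL template that relies on a mapped port comes out blank. A quick way to confirm (a sketch; it assumes the container is named `homeassistant`, so adjust to your actual container name):

```shell
# Show the network mode and recorded port mappings for the container.
# With host networking, Ports comes back as an empty map.
# Guarded so it no-ops on machines without the docker CLI.
if command -v docker >/dev/null 2>&1; then
  docker inspect --format '{{.HostConfig.NetworkMode}} {{json .NetworkSettings.Ports}}' homeassistant
else
  echo "docker CLI not available"
fi
```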

 

PS. I stopped the HA container via SSH when the UI problem kicked up. That's why it shows as stopped.
