doesntaffect Posted January 27, 2022

I removed Pi-hole (container, image, and template) since it seemed to be related to DNS issues in my network. I then created an AdGuard container, which cannot start now because port 53 is still occupied. Question: by what / which service? The Docker engine throws this error when I create the container:

docker: Error response from daemon: driver failed programming external connectivity on endpoint AdGuard-Home (b2fa2aa7baac63aa19aa0c4fbd4b9555fd2f034bac110b646683a171bb6bea29): Error starting userland proxy: listen tcp4 0.0.0.0:53: bind: address already in use.

Any advice on how this can be fixed? A netstat on the host shows the following, and I don't see any container listening on port 53:

netstat -pna | grep 53
tcp   0  0  0.0.0.0:5355         0.0.0.0:*              LISTEN       27171/wsdd2
tcp   0  0  0.0.0.0:3443         0.0.0.0:*              LISTEN       30538/docker-proxy
tcp   0  0  0.0.0.0:4533         0.0.0.0:*              LISTEN       30022/docker-proxy
tcp   0  0  192.168.122.1:53     0.0.0.0:*              LISTEN       10590/dnsmasq
tcp   0  0  192.168.178.249:80   192.168.178.54:53460   TIME_WAIT    -
tcp   0  0  192.168.178.249:80   192.168.178.54:53550   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53389   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53497   TIME_WAIT    -
tcp   0  0  192.168.178.249:80   192.168.178.54:53555   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53556   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53390   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53557   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53553   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53554   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53388   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53387   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53499   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53558   ESTABLISHED  7399/nginx: worker
tcp   0 14  192.168.178.249:80   192.168.178.54:53561   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53552   ESTABLISHED  7399/nginx: worker
tcp   0  0  192.168.178.249:80   192.168.178.54:53551   ESTABLISHED  7399/nginx: worker
tcp6  0  0  :::4533              :::*                   LISTEN       30029/docker-proxy
udp   0  0  0.0.0.0:5353         0.0.0.0:*                           27191/avahi-daemon:
udp   0  0  0.0.0.0:5355         0.0.0.0:*                           27171/wsdd2
udp   0  0  192.168.122.1:53     0.0.0.0:*                           10590/dnsmasq
udp6  0  0  :::5353              :::*                                27191/avahi-daemon:
unix  2  [ ACC ]  STREAM  LISTENING  153014  30047/containerd-sh  /run/containerd/s/28ec7daac2528544a3768842fc90f52ffe93983dd533109f5b0b26ef918f62c5
unix  2  [ ACC ]  STREAM  LISTENING  148264  30559/containerd-sh  /run/containerd/s/b7888a0669bb979537068e146779ff32858ff0a3e20c60dd4bda6eea39620139
unix  2  [ ACC ]  STREAM  LISTENING  149187  31565/containerd-sh  /run/containerd/s/68f150ffeb953dca02559017754040e3b6b3bfe77f7cee2f65636bc58a86e4da
unix  3  [ ]      STREAM  CONNECTED  159931  30559/containerd-sh  /run/containerd/s/b7888a0669bb979537068e146779ff32858ff0a3e20c60dd4bda6eea39620139
unix  3  [ ]      STREAM  CONNECTED  153184  27298/containerd
unix  3  [ ]      STREAM  CONNECTED  159753  28168/containerd-sh  /run/containerd/s/5ec104dec6088fbefa7b61744247a6dd13c9bf6d90a40e3e7555bd447c1b5588
unix  3  [ ]      STREAM  CONNECTED  159911  30047/containerd-sh  /run/containerd/s/28ec7daac2528544a3768842fc90f52ffe93983dd533109f5b0b26ef918f62c5
unix  3  [ ]      STREAM  CONNECTED  148368  31565/containerd-sh  /run/containerd/s/68f150ffeb953dca02559017754040e3b6b3bfe77f7cee2f65636bc58a86e4da
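One pitfall in the output above: `grep 53` also matches ephemeral client ports such as 53460 or 53550, which is why the nginx connections clutter the result. The actual port 53 listener (dnsmasq on 192.168.122.1) is easier to spot by filtering for LISTEN sockets whose local address ends in `:53`. A minimal sketch of that filter, run here against a small embedded sample of the netstat output (on a live host you would pipe `netstat -plnt` in instead):

```shell
#!/bin/sh
# Sample lines mimicking the netstat dump in the post above.
netstat_sample='tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 27171/wsdd2
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 10590/dnsmasq
tcp 0 0 192.168.178.249:80 192.168.178.54:53460 TIME_WAIT -'

# Keep only sockets in LISTEN state whose local address (field 4)
# ends in ":53"; print the address and the owning PID/program.
printf '%s\n' "$netstat_sample" |
  awk '$6 == "LISTEN" && $4 ~ /:53$/ {print $4, $7}'
# → 192.168.122.1:53 10590/dnsmasq
```

This immediately shows that the only process holding TCP port 53 on this host is the dnsmasq instance bound to 192.168.122.1.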
Squid Posted January 27, 2022

Generally, you would run Pi-hole / AdGuard and the like on their own dedicated IP address so that there is no conflict. The error from the docker run command reflects every port already in use on the host, not just ports held by other containers. If you've switched the container to bridge mode, then other ports may already be in use by the OS itself for one reason or another, and every conflicting port (e.g. 53) would need to be resolved.
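Giving the container its own IP is typically done with Docker's macvlan driver, which puts the container directly on the LAN so its port 53 never touches the host's sockets. A hedged sketch; the network name `br0`, the parent interface `eth0`, and the subnet/IP values are assumptions you would replace with your own LAN settings:

```
# Create a macvlan network bridged onto the physical NIC
# (subnet/gateway must match your LAN; eth0 is an assumption).
docker network create -d macvlan \
  --subnet=192.168.178.0/24 \
  --gateway=192.168.178.1 \
  -o parent=eth0 br0

# Run AdGuard Home with its own LAN IP; no host ports are published,
# so the host's port 53 usage is irrelevant.
docker run -d --name AdGuard-Home \
  --network br0 --ip 192.168.178.250 \
  adguard/adguardhome
```

Note that with macvlan the host itself usually cannot reach the container's IP directly; clients elsewhere on the LAN can.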
theruck Posted June 10, 2022 (edited)

The problem is the conflict with the virbr0 interface, 192.168.122.1:53, which shouldn't exist in the first place, but by default something listens on port 53 there as soon as the libvirt service is started. There is nothing in the libvirt configs mentioning port 53, so it looks to be caused by the dnsmasq instance that libvirt spawns for its default network.

Edited June 10, 2022 by theruck
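If you don't need libvirt's built-in DNS for your VMs, that dnsmasq listener can be silenced by disabling DNS on libvirt's default network. A sketch of one way to do it, assuming the network is named `default` (the usual name); `<dns enable='no'/>` is a documented libvirt network XML option:

```
# Edit the default network definition and add <dns enable='no'/>
# inside the <network> element:
virsh net-edit default

#   <network>
#     <name>default</name>
#     ...
#     <dns enable='no'/>
#     ...
#   </network>

# Restart the network so libvirt relaunches dnsmasq without the
# port 53 listener (DHCP on the virtual network keeps working):
virsh net-destroy default
virsh net-start default
```

Afterwards, nothing should be bound to 192.168.122.1:53 and AdGuard can take port 53 on the host.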