DieFalse Posted May 5, 2018

So I have spent over an hour searching and can find lots of posts about Docker IPs being used, and I have 3 containers set up this way. Two of them I do not need to access directly from the unRAID machine itself; however, one of them I do. I have noticed that by default any Docker container with its own IP on br0 is isolated and cannot communicate with the unRAID machine directly (i.e., you cannot ping its IP from the unRAID machine itself). I use the unRAID built-in GUI a lot for personal reasons and now need access to one of the br0-addressed containers from that machine. Has anyone here accomplished this?
ken-ji Posted May 6, 2018

You are going to need to be clearer. What's the network relationship of all four? I.e., can container 1 reach containers 2, 3, and unRAID? That said, maybe you can keep container 1 on plain bridged networking?
DieFalse Posted May 7, 2018 Author

The container is Pi-hole. It cannot be bridged because it requires port 80, and another container has full rights to port 80. I need to access Pi-hole's admin interface from the unRAID local desktop GUI (built-in GUI). I cannot, because I gave Pi-hole a br0 IP of 192.168.1.4 and cannot reach it due to macvlan isolation.
ken-ji Posted May 7, 2018

I don't use Pi-hole myself, but I'm fairly sure you can map the admin port to something else like 81. Otherwise, you will need VLAN support or another network interface.
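For reference, a minimal sketch of what ken-ji is suggesting: keep the container on plain bridge networking and remap the web UI to host port 81. The image name, tag, and port list are assumptions for illustration, not taken from the thread.

```shell
# Sketch: bridged Pi-hole with the web admin remapped to host port 81.
# Image name (pihole/pihole) and the DNS port mappings are assumed here.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 81:80/tcp \
  pihole/pihole
```

As the later posts explain, this only helps the admin interface; it does not let the ad-blocking pixel server answer on port 80 of the host.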
DieFalse Posted May 7, 2018 Author

Port 80 is used to serve blank pixels instead of ads; it is required to be port 80 and cannot be changed. I have VLAN support; I just need some direction to get the container to be reachable from the unRAID local machine. Anyone with experience doing this?
ken-ji Posted May 7, 2018

You can refer to this, I think. The important point is that the VLAN sub-interfaces must not have an IP address assigned; otherwise, unRAID will try to use the sub-interface to talk to the containers and run afoul of the macvlan security feature.
DieFalse Posted May 7, 2018 Author

This would work if I wanted to further isolate items, but I need to expose the br0 IP of this container to the unRAID local machine.
bonienl Posted May 7, 2018

Possible solutions:

1. The simplest solution would be to manage your containers from a PC on your network; this doesn't require any changes to your current setup.
2. Change the network type of Pi-hole to "host" and set the Pi-hole management port (the host port) to something different, e.g. 8080, so it doesn't conflict with unRAID itself.
3. Add a VLAN or physical interface, configured without IP addresses. In the Docker settings include this VLAN or physical interface by assigning a network to it. Let Pi-hole use this interface, and its management port can stay on 80. Your router and switch must be set up properly to support the new VLAN or physical interface with the network you have assigned.
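Option 3 could be sketched roughly as follows, assuming a VLAN 2 sub-interface br0.2 already exists (without an IP, per the earlier advice) and a hypothetical 192.168.2.0/24 subnet routed by the router. All names and addresses here are illustrative, not from the thread.

```shell
# Create a macvlan Docker network on the IP-less VLAN sub-interface.
# Subnet, gateway, and interface name are assumptions for this sketch.
docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  -o parent=br0.2 \
  vlan2

# Pi-hole becomes a first-class host on the VLAN; its admin stays on port 80.
docker run -d --name pihole --network vlan2 --ip 192.168.2.10 pihole/pihole
```

Because unRAID's own IP lives on br0 rather than br0.2, traffic between the host and the container crosses the router instead of hitting the macvlan isolation filter.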
DieFalse Posted May 7, 2018 Author

7 minutes ago, bonienl said: "Possible solutions: 1. ...manage your containers from a PC... 2. Change the network type of pihole to 'host'... 3. Add a VLAN or physical interface, configured without IP addresses..."

1. This is not possible for my setup. I agree it would be easier, but it is not optimal or feasible in my scenario.
2. This cannot be done because of how Pi-hole serves blank pixels instead of ads: DNS (port 53) answers point clients at port 80 for blocked content, so the web port cannot be moved. If it were solely an admin management interface it could be, but it is not.
3. This may be the only way, and I will have to try it. However, shouldn't there be a way to map and expose it using macvlan, i.e. -p port publishing? https://docs.docker.com/config/containers/container-networking/
bonienl Posted May 7, 2018

Communication with the Docker host over macvlan: when using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host's eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security. A macvlan sub-interface can be added to the Docker host to allow traffic between the Docker host and containers. The IP address needs to be set on this sub-interface and removed from the parent interface.
ken-ji Posted May 7, 2018

5 hours ago, fmp4m said: "3. This may be the only way and I will have to try it. However, shouldn't there be a way to map it and expose it utilizing macvlan, IE. -p port publishing."

Just to clarify, since you are not getting this part right: when a container is connected to a macvlan network, the container becomes a first-class member of that network, with its own dedicated IP. At that point, there is no more port-mapping trickery involved. So if unRAID is on the main VLAN (br0, e.g. 10.0.0.2/24, gateway 10.0.0.1) and the macvlan network (br0.2) is a VLAN (e.g. 10.0.1.0/24, gateway 10.0.1.1), any container on that macvlan network is now a first-class member of the subnet and gets an IP assigned statically or dynamically by Docker (e.g. 10.0.1.2). However, if the macvlan network is br0 itself (10.0.0.0/24), the container has an IP on that network, but the kernel modules will consume any packets from containers trying to connect to the host IP directly.

There are other workarounds, which @bonienl mentioned: manually adding a macvlan sub-interface to unRAID (e.g. mac0) and moving the unRAID IP address to the sub-interface:

ip link add link br0 mac0 type macvlan
ip addr flush dev br0
ip addr add 10.0.0.2/24 dev mac0

Or have every single host use the gateway as a way to reach the container, which is a very ugly and hard-to-maintain hack:

ip route add 10.0.0.3/32 via 10.0.0.1
DieFalse Posted May 8, 2018 Author

43 minutes ago, ken-ji said: "When a container is connected to a macvlan network, the container becomes a first class member of that network, with its own dedicated IP. At that point, there are no more port mapping trickery involved..."

Thank you both. I do get it and understand it. I was simply looking for the best way to expose it to the unRAID interface. The above helps and gives me the food I need to make my meal. Thanks.