I hope this isn't considered a necropost, but I seem to be having an issue with Unraid connecting to my containers over the br0.5 interface I created. Is this supposed to be blocked? From the Unraid console:
root@Nexus:~# ping 10.0.1.5
PING 10.0.1.5 (10.0.1.5) 56(84) bytes of data.
From 10.0.1.6 icmp_seq=1 Destination Host Unreachable
From 10.0.1.6 icmp_seq=2 Destination Host Unreachable
From 10.0.1.6 icmp_seq=3 Destination Host Unreachable
From 10.0.1.6 icmp_seq=4 Destination Host Unreachable
From 10.0.1.6 icmp_seq=5 Destination Host Unreachable
From 10.0.1.6 icmp_seq=6 Destination Host Unreachable
^C
--- 10.0.1.5 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6136ms
pipe 4
And this is what my routing table looks like:
root@Nexus:~# ip route
default via 192.168.1.1 dev br0 proto dhcp src 192.168.1.44 metric 217
default via 10.0.1.1 dev br0.5 proto dhcp src 10.0.1.6 metric 219
10.0.1.0/24 dev br0.5 proto dhcp scope link src 10.0.1.6 metric 219
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-17bf4a1665ee proto kernel scope link src 172.18.0.1 linkdown
192.168.1.0/24 dev br0 proto dhcp scope link src 192.168.1.44 metric 217
Unraid can ping itself (10.0.1.6) and the gateway (10.0.1.1), but not any of the Docker containers on the same br0.5 tagged network. I don't think this is by design; it seems like a configuration issue on my end. I followed all the instructions in the guide, and I'm running a UniFi managed switch, a USG, and a Cloud Key. Everything else seems to be working: I can access the br0.5 containers from any other device on the network.
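For reference, this is roughly how I've been checking things on the Docker side. The network and container names here are placeholders for my setup (I'm assuming the custom network is literally named br0.5 — adjust to match yours):

```shell
# Confirm what driver the custom network uses (macvlan vs. bridge) and its
# subnet/gateway settings -- macvlan is known to isolate the host from its
# own containers by design.
docker network inspect br0.5

# Confirm the container actually has the address I'm trying to ping.
# (Replace <container> with the actual container name.)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container>

# Check whether container-to-container traffic on br0.5 works even though
# host-to-container doesn't -- if so, that points at host isolation rather
# than a switch/VLAN problem.
docker exec <other-container> ping -c 3 10.0.1.5
```

If the network turns out to be macvlan, a failing host-side ping with working container-to-container pings would match the usual macvlan host-isolation behavior rather than a misconfiguration of the VLAN itself.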