Going by this thread here, I believe this issue originated with 6.10, when ipvlan was introduced.
I've just recently needed to switch from macvlan to ipvlan, as my server started kernel panicking after adding some new hardware (NVMe + RAM). If 'Host access to custom networks' is enabled while using ipvlan, the server functions normally in terms of routing, reaching the internet, etc., and the routing table is all good:
root@vault13:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.81.0.6       0.0.0.0         UG    0      0        0 br0
10.81.0.0       0.0.0.0         255.255.255.0   U     0      0        0 br0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-f237a61f1b1e
After starting the docker service, the shim interfaces get added to the routing table and everything seems fine:
root@vault13:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.81.0.6       0.0.0.0         UG    0      0        0 br0
10.81.0.0       0.0.0.0         255.255.255.128 U     0      0        0 shim-br0
10.81.0.0       0.0.0.0         255.255.255.0   U     0      0        0 br0
10.81.0.128     0.0.0.0         255.255.255.128 U     0      0        0 shim-br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-f237a61f1b1e
Then, after a minute or two of the docker service running, all internet routing from the server comes to a halt, except for any containers on a br interface with their own IPs. Another capture of the routing table at this point doesn't show any changes from above.
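One thing worth noting about the table above: the two /25 shim routes together cover the same address space as the original 10.81.0.0/24, but because they are more specific, longest-prefix matching means the kernel will prefer shim-br0 for everything on that subnet, including the gateway 10.81.0.6 that the default route depends on. A minimal sketch of that lookup logic (the route data is taken from the table above; the `lookup` helper is hypothetical, just to illustrate the matching rule, not anything Unraid or Docker actually runs):

```python
import ipaddress

# Routes copied from the post-docker-start table: the original /24 via br0
# and the two /25 shim routes added for 'Host access to custom networks'.
routes = [
    (ipaddress.ip_network("10.81.0.0/24"), "br0"),
    (ipaddress.ip_network("10.81.0.0/25"), "shim-br0"),
    (ipaddress.ip_network("10.81.0.128/25"), "shim-br0"),
]

def lookup(dst: str) -> str:
    """Pick the matching route with the longest prefix, as the kernel
    does when metrics are equal."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, dev) for net, dev in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# The default gateway 10.81.0.6 now resolves via shim-br0, not br0:
print(lookup("10.81.0.6"))   # -> shim-br0
```

If that's the mechanism, it would explain why host internet traffic dies while containers with their own IPs on the br interfaces keep working.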
The only workaround is setting 'Host access to custom networks' to Disabled, but that's not ideal.