
Docker > Networks > Multi networks same NIC > Is this even possible?



So I've searched around like mad trying to find some tutorials that would happen to show my 'possible' use case. So far, no luck.

 

My UR has 3 NICs in it. I am using only two of them currently.

 

NIC1: 10gbe is primary - 192.168.200.0/24

NIC2: 1gbe - 192.168.202.0/24 (tagged vlan2 on Unifi)

 

I currently have bridging turned off on both. Docker network mode is set to MACVLAN.

 

The default bridge network in UrDocker is hooked into NIC1 and I do have some containers on there that I want to keep there for the higher bandwidth.

 

Other containers I have on NIC2 to keep them somewhat separate from the primary network and route traffic through NPM (also on NIC2).

 

But there are containers that are on the bridge network that I would rather be on the 202.0/24 network.

 

I've tried pulling the IP assigned to NIC2, setting up a VLAN-ID 2 with the 202.0/24 network, and assigning the IP manually there. I also added another Unifi network as VLAN-ID 201 (201.0/24) and assigned an IP on that network in the event I want to put my HA VM on there (that's another puzzle, VM networking in UR).

 

But here's the thing. Once I add VLAN201 in the networking settings, the gateway for VLAN2 disappears in the Docker settings, any container assigned to br1.2 can't get out, and nobody can access the services.

 

[screenshot attached]

 

My thought was to have the default bridge (200.0/24) running as a bridge, 202.0/24 running as MACVLAN, and also a 202.0/24 bridge that is accessed via NIC2's assigned IP.

 

So I would have 

 

  1. Bridge (200.0 > 172.1)
  2. eth1-bridge (202.2 > 172.2)
  3. eth1-MACVLAN-2 (202.0/24)
  4. eth1-MACVLAN-201 (201.0/24)

 

Is this setup even possible using manual custom Docker networks?
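
For reference, this is roughly how I picture the manual "docker network create" commands (a sketch only - the parent interface names like eth1.2/eth1.201 and the 172.x subnets are my guesses, not something I've confirmed on Unraid):

    # 1. default bridge stays as Docker's built-in bridge (172.x)
    # 2. custom bridge I'd reach via NIC2's assigned IP
    docker network create -d bridge --subnet=172.18.2.0/24 eth1-bridge
    # 3. macvlan on the VLAN2 side of NIC2
    docker network create -d macvlan -o parent=eth1.2 \
        --subnet=192.168.202.0/24 --gateway=192.168.202.1 eth1-macvlan-2
    # 4. macvlan on VLAN 201
    docker network create -d macvlan -o parent=eth1.201 \
        --subnet=192.168.201.0/24 --gateway=192.168.201.1 eth1-macvlan-201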

 

Also, if anyone knows of any video series that plainly lays out Docker networks, please share! I've seen a bunch so far but when I try to implement what they've shown I don't get the same results.

Edited by aglyons
Link to comment
4 hours ago, aglyons said:

to keep them somewhat separate from the primary network and route traffic

The proper way of doing this is with VLANs.

Since my primary 10G NIC is sufficient for all my needs, I disabled all other NICs on the unraid host, allowing pass-through of these NICs to VMs.

But you could use unraid NICs as trunk ports (multiple VLAN-IDs, one NIC) or access ports (one NIC, one VLAN-ID) just like you would do with a switch.

Also using LACP/bonding is possible with the right Switch on the other side.

 

This is only a matter of (V)LAN settings, not Docker networking as such.

I am only using custom networks for Dockers (static IPs) - and ipvlans, as macvlan caused some problems in the past. Since Dockers don't do DHCP, there is no real use for macvlans...maybe besides some 802.1X auth use cases, I think.
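
As a rough sketch only (parent interface, subnet and names are placeholders, not your values), an ipvlan custom network with a statically addressed container looks like this on the Docker CLI:

    # ipvlan network on a VLAN sub-interface/bridge
    docker network create -d ipvlan -o parent=br0.10 \
        --subnet=192.168.10.0/24 --gateway=192.168.10.1 vlan10
    # container pinned to a static IP on that network
    docker run -d --name web --network vlan10 --ip 192.168.10.50 nginx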

For each LAN port (or set of ports, when bonding is used) that is used individually - either as trunk or access port - I recommend creating an individual bridge.

 

Note, as a side effect when using VLANs, you need a VLAN-capable router in order to allow inter-VLAN traffic, even between Dockers on the same host (but on different networks).

Edited by Ford Prefect
Link to comment

Hey Ford!

 

So my networking gear is Unifi, so VLANs are not a problem there. By default Unifi allows inter-VLAN traffic; you have to block it if you don't want it.

 

But the majority of what you were talking about flew right past me lol.

 

I went back to using MACVLAN because, being a geek, I like to see all the servers and PCs on the network. IPVLAN plays havoc with Unifi as clients pop up and drop off randomly. The MACVLAN issue was when the primary NIC was used for bridging (creating br0) while using MACVLAN. Using a second NIC alleviates that problem.

 

Thanks for jumping in and trying to help out. If you could dumb it down a bit for a lunkhead, I'd appreciate the translation!

Link to comment
9 hours ago, aglyons said:

So networking gear is Unifi so VLANs are not a problem there.

Yes, that's why I suggested going that route, as you obviously have the gear in place.

 

9 hours ago, aglyons said:

By default Unifi allows inter-vlan traffic. You have to block it if you don't want it.

That is just one way of thinking about what a default firewall config should look like. In a more conservative/risk-aware setting you typically block everything first, then open up "the holes". That makes it easier to lock everything down when things go haywire.

9 hours ago, aglyons said:

But the majority of what you were talking about flew right past me lol.

Hmmm...OK...what I was trying to say was that you do not need to mix "normal" networking settings and Docker networking settings. Think IP networking only and use VLANs in the "normal" networking settings. Do not confuse MACVLANs with VLANs...MACVLAN (or IPVLAN) is just a concept of the Docker daemon for how Docker networking is managed internally. On the outside - how running Docker containers get presented to the world, networking-wise - it is "just" IP networking (the only difference is that with MACVLANs each container presents itself with a distinct MAC, while with IPVLANs all containers are seen as coming/going through the same MAC - but with different IPs).
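
As a sketch (parent interface and subnet are placeholders), the difference is literally just the driver you pick for a given network - what changes is the behaviour on the wire:

    # macvlan: every container shows up with its own MAC address
    docker network create -d macvlan -o parent=br0.10 --subnet=192.168.10.0/24 net10
    # ipvlan: all containers share the parent's MAC, only the IPs differ
    # (you would create one or the other for a given parent/subnet, not both)
    docker network create -d ipvlan -o parent=br0.10 --subnet=192.168.10.0/24 net10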

9 hours ago, aglyons said:

IPVLAN use plays havoc with Unifi as clients pop up and drop off randomly.

This is in fact the case with most SoHo routers/gateways...and actually confirms to me that dropping Unifi equipment was the right decision ;-).

It is a faulty assumption that there needs to be a 1:1 relationship between MAC(L2) and IP(L3) in a client table.

To me, using MACVLANs does not make much sense, as Dockers don't use real DHCP to acquire an IP (the dhcp-pool setting in unraid is a workaround and I don't use it). Maybe a use case around IEEE 802.1X authentication in a VLAN context might change that view, but I am not using that either, so I did not test it.

9 hours ago, aglyons said:

The MACVLAN issue was when the primary NIC was used for bridging creating br0 while using MACVLAN. Using a second NIC alleviates that problem. 

Maybe you are right and yes, I did have lots of stability problems with using MACVLANs...moving to IPVLANs made these go away for me.

As you lose the 1:1 relationship between MAC and IP, there is no simple way to automagically create a central client table (with (DNS-)name, MAC, IP).

The real use of such a table is to have the (DNS-)name and IP "linked" in order to address the Docker service either by IP or name. The MAC itself doesn't really count (assuming you do not use it in a firewall rule) but obviously is used in Unifi gear to create that table automatically from ARP requests going through the switch/router.

So this is an inconvenience, as I see it. All it takes is to create a static DNS entry for name and IP in your Router and/or DNS-Server once you create a Docker and assign a static IP in unraid.
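
For example, with a dnsmasq-based DNS server (hostname, IP and file path are made-up examples):

    # one static name->IP mapping per Docker with a fixed IP
    echo 'host-record=npm.lan,192.168.202.10' >> /etc/dnsmasq.d/static-dockers.conf
    # reload dnsmasq afterwards so the entry becomes active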

So for me, moving over to IPVLANs was the easier path, rather than trying to stick with MACVLANs by using other quirks in the unraid settings, like you did.

 

16 hours ago, aglyons said:

Once I add VLAN201 in the networking settings, the gateway for VLAN2 disappears in the Docker settings and any container assigned to br1.2, can't get out and nobody can access the services.

That's "normal"...the gateway in each VLAN is specified in the standard network settings and is acknowledged/used just fine, regardless of it disapearing in the Docker settings view.

When enabling VLANs, the gateway in the routing table for each VLAN is reported as the interface (like br0.10), not the gateway IP (like 192.168.10.1 for VLAN-ID 10)...maybe this is causing the "glitch" in the Docker settings UI.
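
You can verify this on the unraid console with ip route; the commented line is only an illustration of what a connected VLAN route typically looks like:

    ip route show
    # 192.168.202.0/24 dev br1.2 proto kernel scope link src 192.168.202.5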

If this does not work for you, maybe you did something wrong with your VLAN setting (either in unraid or in the Router).

 

16 hours ago, aglyons said:

I want to put my HA VM on there (that's another puzzle, VM networking in UR).

That's why I use the bridged networking model in the standard networking settings in unraid.

Just assign a (virtio) NIC to the VM on the desired bridge (which may cause problems when there are MACVLAN Dockers on the same bridge, hence I switched to IPVLANs).

This enables the fastest way of communication for network services running on the same unraid host (virtio in a VM gives you a "NIC" with the complete CPU bandwidth - on my i8100 box, this gives approx 45Gbps - and the same is true for a Docker using custom bridge networking).

Another option, of course, should you have a spare, unused NIC in your host, is to use IOMMU passthrough for the VM and connect it directly to the physical switch. That is what I do for most of my VMs (the ones I cannot find or make a suitable Docker setup for)...I have a spare i350-T4 card in my unraid host for this.

 

10 hours ago, aglyons said:

Thanks for jumping in and trying to help out. If you could dumb it down a bit for a lunkhead, I'd appreciate the translation!

Simply put: As you have the gear, move completely to a VLAN based setup.

This will enable you to separate traffic for different services on your unraid host (Dockers and VMs)...which is what you want to achieve.

VLAN setup works fine if correctly configured (on both sides, unraid and Router/Switches)...if it doesn't work for you, the problem is not a bug in unraid.

...maybe draw a diagram of what you want and how each physical NIC/LAN port and virtual LAN port comes into play and needs to be deployed, in order to get your head around it.

When it comes to thinking in VLANs, the first concept that needs to be understood is that of trunk and access ports for physical ports in the multi-homed host and/or a router/switch.
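
Under the hood a trunk port is just one NIC carrying several tagged VLAN sub-interfaces (unraid creates these for you when you add VLAN-IDs in the network settings); in plain iproute2 terms, using the VLAN-IDs from this thread as an example, it is roughly:

    # eth1 as a trunk carrying tagged VLANs 2 and 201
    ip link add link eth1 name eth1.2 type vlan id 2
    ip link add link eth1 name eth1.201 type vlan id 201
    # an access port is simply a NIC used untagged for a single network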

Link to comment

Hey Ford!

 

Thanks for the deep dive. I think I can follow this. I've been swamped with the other stuff that pays the bills. I'll go through this with a fine tooth magnifying glass and see if I can put 2 and 2 together.

 

Thx

 

A.

 

PS ......and always carry a towel.

Link to comment
7 hours ago, aglyons said:

Thanks for the deep dive. I think I can follow this.

You're welcome.

The best way to test is using other clients on the network or VMs (with dedicated NICs or virtio NICs on the individual unraid bridge, like br0.10 for VLAN10).

Test if the client-OS will receive an IP from the dedicated DHCP-pool, assigned to that VLAN from your Router/DHCP-Server. For all - clients, dockers and VMs - use tools like ping, traceroute, iperf3 to test performance and the paths a packet takes when traversing through your network.
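
A quick iperf3 example (the IP is a placeholder for whatever host, container or VM you are testing against):

    # on the server side (unraid host, container or VM)
    iperf3 -s
    # from a client in another VLAN, to measure throughput across the inter-VLAN path
    iperf3 -c 192.168.202.5 -t 10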

 

I am using MikroTik gear. A benefit is that you can use MikroTik RouterOS in a VM as well, with very low resources (a RouterOS VM only needs 200MB RAM, 1 vCPU and 1-2GB vdisk)...with dedicated, passed-through NICs and a spare unraid test box, one can set up a sandbox to test...keeps the WAF up high ;-)

 

7 hours ago, aglyons said:

PS ......and always carry a towel.

Ah, someone who knows the good old ways of how the world works...I appreciate that 🤩

Let me know how things progress on your side and ask questions if need be...I'll try to help and promise not to turn into a BOFH ;-)

 

Link to comment
