[Guide] How to fix macvlan and ipvlan problems for containers on custom networks



I have been using this setup for, I don't know, two months? I have not had any issues. I am running 6.11.5, using a UniFi UDMP pointing to Pi-hole just for DNS. Docker DHCP allows for labels.

The NICs on Unraid and the UDMP are in promiscuous mode. I add the flag --mac-address <mac address> to every container under Extra Parameters, and I also add --dns. I let Docker DHCP assign the IP address, then I add that to the Pi-hole DNS and make it a fixed address in the UDMP. I also set the DHCP pool that the UDMP hands out really low, around 10 IPs (192.168.x.20 to .30).
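For reference, this is roughly what those flags look like as a plain docker run command; the network name, MAC address, DNS server address, and image below are examples, not my exact values (in the Unraid UI, the --mac-address and --dns flags go in the Extra Parameters field):

# example values only: custom network "br1", Pi-hole at 192.168.1.5
docker run -d --name=myapp \
  --network=br1 \
  --mac-address=02:42:ac:11:00:99 \
  --dns=192.168.1.5 \
  myimage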

[Attached screenshots: upper unraid network.png, lower unraid network.png, unraid docker.png]

Link to comment

I am running JUST the br1 network on a UDMP router; all other networks are behind a different router. I remember reading that Unraid's Docker uses the DNS server configured on the host network (br0) to resolve names (I confirmed this by removing the br1 DNS entry from br0, which left br1 in the dark), so I started adding --dns.
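A quick way to confirm which DNS server a container actually ended up with (assuming the image ships cat):

docker exec <container-name> cat /etc/resolv.conf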

 

Labels are a way Docker containers can reach each other using host names (vs IP addresses) when they are on the same network; in my case, running the Traefik and Authentik containers.

Docker DHCP lets Docker know which container is on the network, so hostnames (labels) can be used instead of IP addresses.

It's just an easier way to configure containers that need to talk to each other.
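A rough sketch of the idea (network and image names here are made up; strictly speaking, it is Docker's embedded DNS on a user-defined network that resolves container names, and Traefik's labels sit on top of that):

# containers on the same user-defined network can resolve each other by name
docker network create appnet
docker run -d --name authentik --network appnet <authentik-image>
docker run -d --name traefik --network appnet <traefik-image>
# traefik can now reach the other container as http://authentik:9000
# (9000 being Authentik's usual HTTP port, if unchanged)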

Link to comment
Quote

until someone provides suggestions on how to make br1 show up.

I had that same problem; I just remembered it reading your post. IIRC it has to do with how br0 is configured: you have to enable bridging. It depends on how your networks are set up, and I also think I had to reboot the server after configuring it. It seems most of the "Unraid Docker" underpinning is tied to br0.

 

It has been a frustrating ordeal to figure it out and make Docker play nice with the UDMP on something other than br0, and then the UDMP changes OS, but that's another story :) PS: the UDMP changing OS is a good thing!

Link to comment
Quote

In my case, DNS is working fine without br0 (eth bridging set to no).

Ya, I am trying to remember; I had to bridge something in order to get it to show up as a Docker custom network, and then it was definitely tied to br0.

We are talking about a separate network port controlled by a separate router.

Link to comment
4 hours ago, thorzeen said:

I had that same problem; I just remembered it reading your post. IIRC it has to do with how br0 is configured: you have to enable bridging. It depends on how your networks are set up, and I also think I had to reboot the server after configuring it. It seems most of the "Unraid Docker" underpinning is tied to br0.

 

It has been a frustrating ordeal to figure it out and make Docker play nice with the UDMP on something other than br0, and then the UDMP changes OS, but that's another story :) PS: the UDMP changing OS is a good thing!

Thanks, but at this point I'm at a loss for what to do. This is above my pay grade! I just don't understand how setting up another Ethernet port (I have 4 NICs on my R710) can still use macvlan and still work. It seems to have something to do with MAC addresses.
I haven't had a warning/crash since the last reboot last night using macvlan, and when I do get syslog entries about a possible macvlan issue, my server continues to operate.
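(For context, the MAC-address angle: macvlan creates sub-interfaces that each get their own MAC on the parent NIC, so the router sees every container as a separate physical device. A minimal illustration, with a made-up interface name:)

# create a throwaway macvlan sub-interface on eth1 and inspect it
ip link add demo0 link eth1 type macvlan mode bridge
ip -br link show demo0   # demo0 has its own, distinct MAC address
ip link del demo0        # clean up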

I'll see how it goes.

 

Does anyone know what the overall plan is for this issue? Is this separate-NIC thing a workaround or a permanent fix? I thought I saw somewhere that the macvlan issue was deemed a bug of some sort.

Link to comment

First up, I tried looking at the Unraid docs to figure it out myself, but there is nothing there!

 

I followed along, like everyone else, but I ran into something that confuses the heck out of me.

 

I have a second NIC (always have), and on that NIC I had bridging turned off. Each container assigned to eth1 would be on the 168.202.x network, defined as vlan2 on my UDM Pro SE, with an IP assigned manually to each container. This is the situation for any container that I want to expose to the internet via NPM. But some containers are not exposed and don't need a dedicated IP, so those I stuck on the bridge for local and VPN access only (Radarr, Sonarr, etc.).
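For anyone wanting to reproduce a setup like that from the command line, a sketch (I'm assuming the eth1 subnet is really 192.168.202.0/24 with the gateway at .1; the network name, container name, and sample IP are made up):

# macvlan network on the second NIC, with manually assigned container IPs
docker network create -d macvlan \
  --subnet=192.168.202.0/24 \
  --gateway=192.168.202.1 \
  -o parent=eth1 \
  vlan2net
docker run -d --name npm --network vlan2net --ip=192.168.202.10 <image>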

 

But once I turned on bridging for eth1, br1 showed up and eth1 disappeared! All the containers I had set up on eth1 were offline.

 

On a side note: 'Bridge' is still listed in the networks, but choosing it uses the eth0 NIC's 168.200.x network even though bridging is disabled for eth0. Why isn't that bridge using the eth1 NIC, since bridging IS enabled there? Same for 'Host'.
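(The likely explanation: Docker's built-in 'Bridge' is the docker0 NAT bridge and 'Host' is the host's own network stack, so both follow the host's default route, which sits on eth0; Unraid's per-NIC bridging toggle doesn't move them. Two commands make this visible:)

ip -br addr show docker0   # Docker's default bridge lives on its own docker0 interface
ip route show default      # NAT egress follows the default route (eth0 here)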

 

 

Link to comment
Quote

I ran into something that confuses the heck out of me

One thing I have run into with Unraid networking is that it has a memory. I have not researched this; it might be a bug or it might be a safeguard. I have deleted networks that still call out for DHCP until I literally shut down (not reboot, but shut down).

 

Clearing ARP might help:

ip -s -s neigh flush all

 

Edited by thorzeen
Link to comment

There's an interesting discussion going on over here about this:

6.12.0-rc4 "macvlan call traces found", but not on <=6.11.x - Prereleases - Unraid

 

Quote

There is a suspicion that a conflict arises between the bridge function and the macvlan function.

 

Basically, if you're not using bridging (for VMs), turn it off on br0 and br1, and eth1 shows up in Docker as a custom network on v6.12.0-rc5.
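A quick sanity check after changing the bridging settings and rebooting (assuming stock interface names):

ip -br link show | grep -E '^(eth1|br1)'   # eth1 should be listed, br1 gone
docker network ls                          # eth1 should now appear as a custom network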

Edited by thorzeen
Link to comment

Thanks for the tutorial.

 

After reading it, I finally bit the bullet and tried this: I activated the onboard 2.5Gb NIC and assigned it to Docker only.

 

The problem is that this does not solve any of the issues; with this solution, Host access to custom networks does not work at all, with either macvlan or ipvlan.

 

Even if this is stable, it does not solve the issue, and I can achieve the exact same "stable" system using ipvlan on one NIC shared with Unraid's "normal" system.

The problem still stands. Either you have macvlan with Host access to custom networks, which lets you assign containers like Guacamole or Unifi a dedicated IP while they can still talk to other containers, and which is what lets you set up and use an NGINX reverse proxy, BUT, and it is a big freaking but, it crashes all the time.


OR you have ipvlan or this dedicated-NIC solution, which causes no crashes, but you have no Host access to custom networks.

 

As such, I don't see how this "solution" helps versus just using ipvlan.
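For completeness, the workaround people usually cite for that host-access gap (not something from this thread, and every name and address below is an assumption to adapt) is to give the host its own macvlan shim on the same parent interface:

# let the host reach containers on a macvlan network hanging off eth1
ip link add shim0 link eth1 type macvlan mode bridge
ip addr add 192.168.202.2/32 dev shim0
ip link set shim0 up
ip route add 192.168.202.10/32 dev shim0   # route to one container via the shim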

Link to comment
5 hours ago, nik82 said:

It's for the containers to be able to communicate when one container is using a custom IP.

I am just curious: are you using a custom Docker network on a VLAN off br0, or on a separate Ethernet port?
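(The distinction matters for the macvlan parent interface; a minimal sketch of the two variants, with made-up subnets and VLAN ID:)

# variant 1: VLAN off the main NIC, using an 802.1q sub-interface as parent
docker network create -d macvlan --subnet=192.168.2.0/24 -o parent=eth0.2 vlan2
# variant 2: a separate physical port as parent
docker network create -d macvlan --subnet=192.168.202.0/24 -o parent=eth1 port2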

Edited by thorzeen
Link to comment
