[Guide] How to solve macvlan and ipvlan issues for containers on custom networks


Recommended Posts

On 3/28/2023 at 1:05 PM, bonienl said:

1. Configure the dedicated interface in network settings (array must be stopped).

    - Enable bridging for this interface

    - Use IPv4 only or IPv4 and IPv6 as per your case

    - No IP addresses are assigned to this interface

 


If these Docker containers will be using a VLAN network, isn't it also necessary to enable VLANs?

Link to comment
2 hours ago, insomnia417 said:

Following your method, I recently upgraded to 6.12 rc5 and rc6. macvlan no longer causes errors or crashes, but the bridge network I created on br1 cannot reach the macvlan network, even though host access to custom networks is enabled.

I think host access isn't possible with this "solution", so in my eyes this isn't a real "solution" for the whole "macvlan issue". Maybe it also shouldn't be named as such.

Link to comment

So every once in a while I get OCD about something, and this has turned into just that.

I had this running in the post I made on page 2, but I just kept messing with it... some of you know how that goes.

Anyway

Let me preface this: this is a Docker network on its own network port, separate from the host system.

Separate meaning: the Unraid host and every network BUT one are behind one router, and THIS ONE Docker network is behind a separate switch and router. This is currently running on Unraid 6.12-rc6 and a UniFi UDMP on 3.0.20. If you are running VMs, this is probably not for you.

 

The only thing I really did differently is how I created this network vs. on page 2, and turning off bridging on both eth0 AND eth1.

So first, turn off bridging.

Then I created the network using docker network create:

docker network create -d macvlan \
  --subnet=192.168.50.0/26 \
  --ip-range=192.168.50.0/27 \
  --gateway=192.168.50.1 \
  -o parent=eth1 proxy

Under the Docker settings you see the eth1 network, but I did NO configuring and left it blank.
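If you want to sanity-check what Docker actually created before attaching containers, you can inspect the network (the name proxy matches the create command above):

docker network inspect proxy

The output should show the 192.168.50.0/26 subnet, the .0/27 IP range, the .1 gateway, and parent=eth1.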

As I created containers I added

--mac-address

under "Extra Parameters" and gave them all a static IP

 

UDMP: I let the router handle DHCP and configured it to use the same gateway as above, BUT to hand out leases from .33 to .62 (the Docker --ip-range above, 192.168.50.0/27, covers .0-.31, so the router's DHCP pool and Docker's pool never overlap within the /26).

I also directed the router to use a Pi-hole I have set up on the network to handle DNS.

As I created each container's MAC address, I configured DHCP reservations and DNS records in the router (even though the Pi-hole does the DNS).

Reason: the UDMP currently floats whatever it sees, and it has had DNS issues since day one, so this just locks things in; you may not have to do this. I was also messing with this in the middle of its upgrade from 1.0.1 to 3.0.20.

And of course DNS records in the pi-hole.

That's it. It works great, no errors.

Unraid Network

[screenshot]

Docker-Network

[screenshot]

Unraid routes

[screenshot]

Docker container network

[screenshot]

Docker Dash

[screenshot]

 

Have fun!!

Link to comment
On 5/18/2023 at 10:11 PM, sonic6 said:

I think host access isn't possible with this "solution", so in my eyes this isn't a real "solution" for the whole "macvlan issue". Maybe it also shouldn't be named as such.

Sorry, my English is not good.
I very much agree with your point of view. I have read your thread "6.12.0-RC4 "MACVLAN CALL TRACES FOUND", BUT NOT ON <=6.11.X", and over the past two months I have likewise been repeatedly upgrading, downgrading, and upgrading again; in the end I went back to 6.11.5 (even though 6.11.5 may still be unstable).

At the same time, I also tend to agree that the macvlan errors that crash the whole Unraid server are probably caused by bridging.

From what most people are saying, we have to wait for the Linux kernel to fix this macvlan problem, yet the Unraid RC releases keep updating the kernel, so I suspect this could be an endless, open-ended wait, which is really torturous!!!
I probably won't upgrade to the stable 6.12 any time soon; maybe in half a year, maybe a year.

  • Like 1
Link to comment
On 4/15/2023 at 2:25 AM, FredrikJL said:

Make sure to have "IPv4 address assignment:" set to "None" for your eth1 (as well as for IPv6).

 

I'm having the same issue; my server keeps crashing.
I manually set up a macvlan network, but there is still no br1.

I set my Docker containers to use dockerlan, but DHCP is assigning addresses starting at 172.20.30.1, not .100.
Please help.
docker network create -d macvlan \
    --subnet=172.20.30.0/23 \
    --gateway=172.20.30.254 \
    --ip-range=172.20.30.100/27 \
    -o parent=br2 dockerlan

unraidserver-diagnostics-20230521-1454.zip

Link to comment
  • 3 weeks later...

After a few weeks of using the eth0 network for Docker with bridging disabled, with no sign of macvlan traces, I had a lockup, and after a hard reset of the server the macvlan traces are back in the system log (after a few hours of uptime).

I need macvlan for setting the IPv6 address based on the MAC address, so switching to ipvlan is not a good solution. I tried using a second NIC, which didn't help.
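(For context: the dependency on macvlan here is that each container gets its own MAC address, which SLAAC/EUI-64 can turn into a stable IPv6 address; with ipvlan all containers share the parent interface's MAC.) A hand-made dual-stack macvlan network created from the CLI rather than the Unraid GUI would look roughly like this, with purely hypothetical prefixes:

docker network create -d macvlan --ipv6 \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  --subnet=2001:db8:1::/64 --gateway=2001:db8:1::1 \
  -o parent=eth0 macvlan6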

Anything else to try?

Link to comment

While this solution works, it does not allow host access, so it's not really a solution.

 

1) macvlan + host access = bridged containers are able to talk to containers that have static IPs, but it crashes the server

 

2) ipvlan + host access = bridged containers are able to talk to containers that have static IPs, but the server loses its connection to the outside after a while

 

3) ipvlan + no host access = no crashes or connection issues, but bridged containers are unable to talk to containers that have static IPs

 

4) this solution = works exactly the same as option 3

 

So, as an example, to get Nginx Proxy Manager to work with bridged containers, or to make bridged containers use my AdGuard Home instance which has a static IP, I would have to use option 1 or 2, which results in either crashes or losing the connection to the outside.

 

This solution is no solution at all, but a band-aid for people using macvlan without host access.

 

It's so frustrating :(

Edited by mikl
  • Thanks 1
  • Upvote 3
Link to comment
3 hours ago, JonathanM said:

Especially since it doesn't affect all hardware or software combinations, so it's not been possible to figure out a cause, let alone a fix.

At least for me, it started happening after switching from an Intel platform to AMD Ryzen.

Link to comment
On 6/11/2023 at 3:58 PM, mikl said:

3) ipvlan + no host access = no crashes or connection issues, but bridged containers are unable to talk to containers that have static IPs

 

4) this solution = works exactly the same as option 3

 

@bonienl @mikl

 

I'm a little confused:

ipvlan + no host access = this solution (macvlan + no host access on br1), but if host access is enabled then it doesn't work?

I had issues on 6.11.5 with "macvlan call traces found" only in the syslog; on 6.12 they appear in the syslog too, and the server also crashed yesterday,

using macvlan + host access...

 


 

So is the problem with macvlan itself, or only when macvlan + host access is enabled, or does it need to be checked in every user's case?

Link to comment
On 3/28/2023 at 6:05 PM, bonienl said:

In such a case the general advice is to switch the connection to a Docker ipvlan network type, which usually solves the issue, but for some users it may introduce a network connectivity issue, depending on the network equipment (router) in use and whether it can handle the specifics of ipvlan.

 

Could you elaborate on this at all? What specifics of ipvlan does the router need to support?

 

Thanks.

Link to comment

@bonienl Hate asking dumb questions, but I have to in this case. Does a multi-port NIC count as separate adapters for this solution? I'd assume so, but I've moved all my containers off the br0 interface (on an Intel quad-port NIC, port 0) and I'm still seeing reports of macvlan call traces. I prefer macvlan to ipvlan because some of my containers require it, and it seems like it would potentially offer better security, as all 4 NIC ports are separated by subnet and VLANs into my managed switch and then into similarly separate NIC ports on my firewall.

 

Just trying to figure out how to get rid of these call trace errors and improve stability.

  • Like 1
Link to comment

I'm trying to learn about this problem as well. I've been using macvlan without problems for a couple of years at this point, and then yesterday morning (3 days after updating to 6.12.0) I had my first system instability/crash, requiring a hard reboot. After I came back up, I had warnings about "macvlan traces found".

 

I've already ordered a second NIC, as I do a ton of VLAN stuff with both Docker and VMs on different VLANs and really want to keep using macvlan. But now I'm concerned that this won't actually end up helping. Is this an Unraid thing? A Docker thing? Can I investigate anything to help solve it if I crash again?

Link to comment
4 minutes ago, Chunks said:

Is this an Unraid thing? A docker thing?

It's more of a "works fine with some hardware and not with other hardware" thing. This one is very hard to pin down. For many, moving to ipvlan solves their call trace problem.

 

Personally, the macvlan call trace problem went away completely for me when I moved my Docker containers to a VLAN, with their own IP addresses on that VLAN, instead of using br0. It does not matter whether I use macvlan or ipvlan for Docker since creating the VLAN.
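If anyone wants to reproduce that from the CLI instead of through the Unraid network settings, a macvlan network on a VLAN sub-interface looks roughly like this (VLAN ID 20 and the addressing are placeholders; Docker will create the eth0.20 sub-interface if it doesn't already exist):

docker network create -d macvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth0.20 vlan20

In Unraid itself the rough equivalent is enabling the VLAN on the interface in Network Settings and then selecting that custom network per container.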

Link to comment

In my case, on the same hardware, there were no problems at all for about 3 years, but last month on 6.11.5 I got kernel traps in the syslog with

macvlan + host access.

On 6.12 I got a server freeze and macvlan traps, so I changed to ipvlan and then to this fix (macvlan on br1 + no host access).

I also got the System_cache_ssd XFS filesystem corrupted along with a docker.img btrfs error; maybe not related...

So I fixed the XFS filesystem and deleted the Docker image... :(

So now I see strange br0 behavior in the syslog every few minutes:

 

[syslog screenshot]

 

Also strange: Docker's allocation for proxynet instead of 172.18.0.x; maybe this is just a bug in 6.12.1?

 

[screenshots]

 

Link to comment

I also encountered the macvlan problem. I moved all containers from br0 to br2 as described (as ipvlan), but containers connected to the default bridge network cannot communicate with containers connected to br2.


br0/eth0 has 10.0.0.2/16
br2 has no IP, but the subnet is 10.100.0.0/24

The routing between the two networks is done by OPNsense and works with all other network devices; e.g. my PC (10.0.1.87/16) connects to the Traefik container (10.100.0.3) without any problems.

Any ideas?

Link to comment

As I mentioned in my previous post, I have a quad-port Intel NIC and each port has its own /24. Each goes via a managed switch to the firewall where, again, each has a specific VLAN assigned (so not in the Unraid interface settings). Almost all of my containers have an IP assigned, and host access is disabled in Docker settings. NO containers are assigned to br0. While I ran pre-6.11 with no issues, unfortunately I swapped the NIC and drives from an AMD 5900X/X570 build to an 11th gen Intel CPU and motherboard combo around the time 6.11 was out, and at some point FCP started reporting macvlan call trace warnings and I also started having these server hangs.

 

By my understanding I've followed all the steps, but I continue to have this problem unless I switch to ipvlan. Frustrating.

Link to comment

The server hangs finally drove me to abandon macvlan for now, and I flipped the switch back to ipvlan again. I've rebooted a couple of times since for various reasons, yet in reviewing my FCP scan this morning it is still reporting DOZENS of macvlan call trace warnings - and I'm not running macvlan. Meanwhile my very similar Unraid server with the same model quad NIC and a pretty similar network setup (on a 10th gen Intel vs the 11th gen on this one) hums along on macvlan very nicely. Frustration peaking.

Link to comment
32 minutes ago, BurntOC said:

The server hangs finally drove me to abandon macvlan for now, and I flipped the switch back to ipvlan again. I've rebooted a couple of times since for various reasons, yet in reviewing my FCP scan this morning it is still reporting DOZENS of macvlan call trace warnings - and I'm not running macvlan. Meanwhile my very similar Unraid server with the same model quad NIC and a pretty similar network setup (on a 10th gen Intel vs the 11th gen on this one) hums along on macvlan very nicely. Frustration peaking.

It's the 11th gen.  I am getting the same thing after updating my i9 11th gen.  Something happened between 6.12 rc8 and 6.12.1 that fucked it up

Link to comment
  • anpple changed the title to [Guide] How to solve macvlan and ipvlan issues for containers on custom networks
  • JorgeB unpinned this topic
