[Guide] How to fix macvlan and ipvlan issues for containers on custom networks


Recommended Posts

On 7/31/2023 at 6:57 PM, bonienl said:

Bridging is only needed when VMs are used, otherwise it can be disabled.

Thanks for explaining, but I'm a little confused: the bridge help text says it's there so Docker and VMs can communicate over the physical port eth1. If we disable bridge br1 for eth1, do we then only have eth1 for Docker containers?

 

Next day:

After running for a day with no br1 on eth1, there were no macvlan call traces.

Need to check more, but it seems to be working.

 

I also have another problem, maybe it's a bad cable:

The br0 (eth0) port's link goes down and comes back up at 100 Mb instead of 1 Gb, plus ntpd errors.

Then if I run manually:

ethtool -s eth0 speed 1000 duplex full

after some time the server disappears from the local network; I can only use the keyboard, console and monitor connected to the server.

It may be the same thing people reported because of ipvlan, but I'm on macvlan...
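
Before forcing the speed it can help to confirm what the link actually negotiated and whether it is flapping (a minimal sketch of standard checks, eth0 as above):

ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'   # current negotiated speed/duplex
dmesg | grep -i 'eth0.*link'                             # repeated link up/down messages usually point to a cable or port fault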

Link to comment
On 6/20/2023 at 5:38 PM, Hoopster said:

Personally, the problem for me with macvlan call traces went away completely when I moved docker containers to a VLAN with their own IP addresses on that VLAN instead of using br0.  It does not matter if I use macvlan or ipvlan for Docker since I created the VLAN.

 

It seems that disabling bridging really does solve the macvlan problem, but I ordered a cheap TP-Link managed switch (TL-SG108E) so I can forget about it and also enable host access.

So I just need to set Docker on the br1 (eth1) port to br1.10 (192.168.10.199) if br0 is on 192.168.0.199? But can the host access the VLAN if I enable it?

Link to comment
2 hours ago, Masterwishx said:

so I just need to set Docker on the br1 (eth1) port to br1.10 (192.168.10.199) if br0 is on 192.168.0.199

That depends on how you have the VLAN settings defined in Network Settings.

 

In my case, I have the Dockers VLAN assigned to VLAN number 3 on br0 so the custom network that appears in the docker container is Custom: br0.3 --Dockers

 

[Screenshot: Dockers VLAN (VLAN number 3 on br0) configured in Unraid Network Settings]

 

I followed this guide to get the VLAN set up in Unraid, the switch and the router. I have a Ubiquiti UniFi switch and router, but the principles are the same for other switches and routers that support VLANs.
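
For reference, Unraid builds that Custom: br0.3 network automatically from the VLAN settings, but it is roughly equivalent to creating a macvlan network on the VLAN sub-interface by hand (a minimal sketch; the subnet and gateway values here are examples, not anyone's actual config):

docker network create -d macvlan \
  --subnet=192.168.3.0/24 --gateway=192.168.3.1 \
  -o parent=br0.3 br0.3

Containers attached to that network then get their own address in 192.168.3.0/24 instead of sharing the host's IP.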

  • Like 1
Link to comment
On 5/20/2023 at 3:41 AM, insomnia417 said:

Sorry, my English is not good.
I strongly agree with you. I read your post "6.12.0-RC4 "MACVLAN CALL TRACES FOUND", BUT NOT ON <=6.11.X", and over the last two months I have likewise kept upgrading, downgrading, and upgrading again, and in the end I'm back on 6.11.5 (even though 6.11.5 may still have its own sources of instability).

I also tend to agree that the cause of the macvlan errors that crash the whole Unraid server is probably the bridging.

Judging by most accounts, we have to wait for the Linux kernel to fix the macvlan issue; however, the Unraid RC releases keep updating the kernel, so I suspect this may be an endless, open-ended wait, which is really exhausting.
I probably won't upgrade to the stable 6.12 release any time soon; maybe in six months, maybe in a year.

Is it true that, right now, the only choices for using the latest release reliably are to not use macvlan at all, or to add a physical NIC just for macvlan?

Link to comment

Chasing another crash issue, so I'm giving this tutorial a shot. I just have one question. After all these configs are complete, I want all my containers to run on the same IP, on different ports obviously. On my first container it works great: custom IP taken, with eth2 (my dedicated Docker interface) selected. But when I select eth2 and type in the same IP on the second container, I get 0.0.0.0 because that IP is in use?

 

How do I get around this? I really don't want a different IP for every container.  Just want to use a dedicated NIC for all containers.

 

Thanks in advance!

Link to comment
2 hours ago, SmallwoodDR82 said:

I really don't want a different IP for every container.  Just want to use a dedicated NIC for all containers.

 

If you assign an IP to a container, it is just for that container, so there is no risk of port overlap. Any containers that don't have a custom IP will share the host's IP, and you need to make sure there is no port overlap.
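
A minimal sketch of the two cases from the command line (the eth2 network name, the address and the image are only examples):

docker run -d --network eth2 --ip 192.168.1.210 nginx   # dedicated IP on the custom network; ports belong to that IP only
docker run -d -p 8080:80 nginx                          # default bridge; shares the host IP, so host ports must not overlap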

Link to comment
On 8/22/2023 at 2:08 AM, ljm42 said:

Hey everyone! We have a 6.12.4 rc release available that resolves the macvlan issue without needing to use two nics. We'd appreciate your help confirming the fixes before releasing 6.12.4 stable:

 

Nope. I wanted to finally spin up my lancache again (I also use swag) and had the crash some hours later. No response, only a hard reset was possible. It doesn't seem fixed :/

Link to comment
17 minutes ago, TheBroTMv2 said:

Nope. I wanted to finally spin up my lancache again (I also use swag) and had the crash some hours later. No response, only a hard reset was possible. It doesn't seem fixed :/

Without any diagnostics there is no way to prove whether this is the macvlan problem or a different crash.

  • Like 1
Link to comment
2 hours ago, TheBroTMv2 said:

Nope. I wanted to finally spin up my lancache again (I also use swag) and had the crash some hours later. No response, only a hard reset was possible. It doesn't seem fixed :/

 

There isn't a lot to go on here, maybe run through the release notes again and see if you missed any steps? 

 

After making these changes, only eth0 should be in use; any other NICs should be unplugged and nothing should be referencing them.

 

If you think everything is correct please show screenshots of Settings -> Networking -> eth0 and Settings -> Docker, as well as upload your diagnostics.zip (from Tools -> Diagnostics)

 

If you expect a crash, you'll also want to go to Settings -> Syslog Server and enable "Mirror syslog to flash". This will save a copy of your logs on the flash drive so we can see the last thing written if there is a crash. Note this is not something you want to keep enabled long term as it puts extra wear and tear on the flash drive.
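
While the server is still reachable you can also watch the live log for the telltale traces (standard Linux commands; /var/log/syslog is where Unraid writes the system log):

grep -iE 'macvlan|call trace' /var/log/syslog   # check whether the macvlan call traces are already present
tail -f /var/log/syslog                         # watch live while reproducing the issue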

Link to comment

In my current setup I use bonding (mode 4, 802.3ad). The main reason was that with heavy upload/download traffic I got issues during Plex playback because my NIC was fully saturated.

With these changes I have the feeling I will go back to the same issues I had before I decided to bond them.

Link to comment

When moving Dockers from a single interface (eth0) to a new dedicated one (eth1).... I had no problems when the dockers were swapped from br0 to eth1 networks. But I have a whole bunch of dockers using just "bridge". These get the IP address of the host system, the main unraid IP (eth0).

 

For example:

 

[Screenshot: example of a container set to the "bridge" network type]

 

Is the fix to make all of these eth1 as well? If so, what IP do I use? Can I create a single new address that they all can share, or is this the whole point of removing bridging? Or did I miss something/mess something up? 

 

Link to comment
1 hour ago, Chunks said:

When moving Dockers from a single interface (eth0) to a new dedicated one (eth1).... I had no problems when the dockers were swapped from br0 to eth1 networks. But I have a whole bunch of dockers using just "bridge". These get the IP address of the host system, the main unraid IP (eth0).

 

For example:

 

[Screenshot: example of a container set to the "bridge" network type]

 

Is the fix to make all of these eth1 as well? If so, what IP do I use? Can I create a single new address that they all can share, or is this the whole point of removing bridging? Or did I miss something/mess something up? 

 

 

Docker containers in bridge or host mode will always use Unraid's main IP. This guide is specifically to deal with containers that have a dedicated IP on a custom network.
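
If you want to see at a glance which containers share the host IP (bridge/host) and which have their own address on a custom network, a quick check from the console (a minimal sketch, not Unraid-specific):

docker inspect -f '{{.Name}} -> {{range $net,$cfg := .NetworkSettings.Networks}}{{$net}} {{$cfg.IPAddress}} {{end}}' $(docker ps -q)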

  • Like 1
Link to comment
4 hours ago, TRaSH said:

In my current setup I use bonding (mode 4, 802.3ad). The main reason was that with heavy upload/download traffic I got issues during Plex playback because my NIC was fully saturated.

With these changes I have the feeling I will go back to the same issues I had before I decided to bond them.

 

I have updated the 6.12.4 release notes to make it clear that the new solution does work with bonding:

https://docs.unraid.net/unraid-os/release-notes/6.12.4/
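
If you're unsure whether the bond survived the change intact, the kernel reports its state directly (a standard Linux check; bond0 as in the posts above):

cat /proc/net/bonding/bond0   # should show 'Bonding Mode: IEEE 802.3ad Dynamic link aggregation' and each member NIC as up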

Link to comment
2 hours ago, Masterwishx said:

Updated to 6.12.4, moved back to bond0 from the two-NIC setup (eth1 for Docker) of this solution, and followed the steps for the Docker fix.

In the VM Manager the default network source is virbr0, but in every VM it is vhost0.

So should we set vhost0 as the default for VMs only in case of problems, as mentioned in the release notes, or is it preferred by default?

vhost0 replaces br0.

virbr0 is provided by libvirt and is used to create a NAT network for VMs.

You should only update the interface in a VM to vhost0 (if it doesn't update automatically) if it was br0. There is no preferred default; it depends on whether you want the VM to run on your network (non-isolated) or in a separate, isolated network.
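
If you want to double-check what a given VM ended up attached to, its XML shows it (a minimal sketch; "MyVM" is a placeholder name):

virsh dumpxml MyVM | grep -A3 '<interface'   # <source bridge='vhost0'/> means your LAN, <source network='default'/> means the virbr0 NAT network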

Edited by thecode
  • Thanks 1
Link to comment
  • anpple changed the title to [Guide] How to fix macvlan and ipvlan issues for containers on custom networks
  • JorgeB unpinned this topic
