bonienl Posted July 31, 2023 (Author)

43 minutes ago, JorgeB said:
@bonienl can you confirm if bridging has any advantages for this?

Without bridging the call traces should not appear, or at least they should be much less likely to appear. Bridging is only needed when VMs are used; otherwise it can be disabled.
JorgeB Posted July 31, 2023

29 minutes ago, bonienl said:
Bridging is only needed when VMs are used, otherwise it can be disabled.

Please add that to the original post, i.e., that anyone using a NIC for dockers only should leave bridging disabled for that interface.
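For readers wondering what "a NIC for dockers only, with bridging disabled" looks like at the Docker level, here is a minimal sketch. On Unraid this is normally handled by the Docker settings page rather than by hand, and the subnet, gateway, network name, and container IP below are placeholders, not values from this thread:

```shell
# With bridging disabled on eth1, Docker can attach containers directly
# to the physical interface via a macvlan network (no br1 in between).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth1 \
  docker-lan

# Give a container its own LAN IP on that network:
docker run -d --network docker-lan --ip 192.168.1.199 --name web nginx
```

Because the parent is the raw eth1 interface rather than a bridge, the macvlan sub-interfaces hang off the NIC directly, which is the configuration being recommended here.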
dlandon Posted August 1, 2023

If your ISP does not support IPv6, just select 'IPv4 Only' as your protocol and you won't have any issues.
Masterwishx Posted August 2, 2023

On 7/31/2023 at 6:57 PM, bonienl said:
Bridging is only needed when VMs are used, otherwise it can be disabled.

Thanks for explaining, but I'm a little confused: the bridge help text says it's for Docker and VMs to communicate with the physical port eth1. If we disable bridge br1 for eth1, then we have only eth1 for Docker?

Next day: after running for a day with no br1 on eth1, there were no macvlan traces. Need to check more, but it seems to be working.

I also have another problem, maybe it's a bad cable: the br0 (eth0) port link goes down and comes back up at 100Mb instead of 1Gb + ntpd. Then if I run manually:

ethtool -s eth0 speed 1000 duplex full

after some time the server is gone from the local net; I can only use a keyboard + console + monitor connected to the server. It may be the same thing people said happens because of ipvlan, but I'm on macvlan...
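As a sketch, before forcing the speed as in the post above, it can help to inspect what the link actually negotiated; a flapping 1Gb-to-100Mb link is a classic bad-cable symptom. The interface name is taken from the post; run as root:

```shell
# Show the negotiated speed, duplex, and link state for eth0.
ethtool eth0 | grep -Ei 'speed|duplex|link detected'

# Rather than pinning 1000/full (which, as described above, can drop the
# link entirely on a marginal cable), re-enable autonegotiation:
ethtool -s eth0 autoneg on
```

If autonegotiation keeps settling on 100Mb, the cable or switch port is the usual suspect, since gigabit needs all four pairs working.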
Masterwishx Posted August 5, 2023

On 7/5/2023 at 8:57 PM, bonienl said:
Best to use the interface as a dedicated interface for docker only.

I'm using eth1 (no bridge for now) for Docker. For VMs, is br0 (eth1) OK, or should virbr0 be used to avoid macvlan traces?
JorgeB Posted August 5, 2023

18 minutes ago, Masterwishx said:
for vm br0 (eth1) is OK or virbr0 should be used for avoid macvlan traces?

Do you mean br0 (eth0)? If yes, that should be fine for the VMs.
baunegaard Posted August 6, 2023

I think I can safely say that my macvlan issues are completely resolved by disabling bridging on the dedicated NIC 🎉
Masterwishx Posted August 6, 2023

On 6/20/2023 at 5:38 PM, Hoopster said:
Personally, the problem for me with macvlan call traces went away completely when I moved docker containers to a VLAN with their own IP addresses on that VLAN instead of using br0. It does not matter if I use macvlan or ipvlan for Docker since I created the VLAN.

It seems that disabling the bridge really solves the macvlan problem, but I ordered a cheap TP-Link managed switch (TL-SG108E) to forget about it and to enable host access as well. So do I just need to set Docker on the br1 (eth1) port to br1.10 (192.168.10.199), if br0 is on 192.168.0.199? And can the host access the VLAN if I enable it?
Hoopster Posted August 6, 2023

2 hours ago, Masterwishx said:
so i just need to set docker on br1 (eth1) port to br1.10 (192.168.10.199) if br0 on 192.168.0.199

That depends on how you have the VLAN settings defined in Network Settings. In my case, I have the Dockers VLAN assigned to VLAN number 3 on br0, so the custom network that appears in the docker container is Custom: br0.3 --Dockers.

I followed this guide to get the VLAN set up in Unraid, the switch and the router. I have a Ubiquiti UniFi switch and router, but the principles are the same for other switches and routers that support VLANs.
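For a rough picture of what Hoopster's "VLAN 3 on br0" setting does under the hood, here is a hedged sketch. On Unraid you would configure this in the Network Settings GUI, not by hand, and the subnet, gateway, and network name are placeholders:

```shell
# Create a VLAN 3 sub-interface on br0 (this is what "Custom: br0.3"
# refers to) and bring it up.
ip link add link br0 name br0.3 type vlan id 3
ip link set br0.3 up

# Docker can then attach containers to the VLAN sub-interface, giving
# each one its own IP on the Dockers VLAN:
docker network create -d macvlan \
  --subnet=192.168.3.0/24 --gateway=192.168.3.1 \
  -o parent=br0.3 vlan3-dockers
```

The switch port the server connects to must be configured to carry VLAN 3 tagged, which is why a managed switch is needed for this approach.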
high-fiber-hall3404 Posted August 9, 2023

On 5/20/2023 at 3:41 AM, insomnia417 said:
sorry my english is not good. I very much agree with your view. I've read your thread "6.12.0-RC4 "MACVLAN CALL TRACES FOUND", BUT NOT ON <=6.11.X", and over the last two months I have likewise kept upgrading, then downgrading, then upgrading again, and in the end I'm back on 6.11.5 (even though 6.11.5 may still be unstable). I also tend to agree that the macvlan errors that crash and freeze the whole Unraid server are probably caused by the bridge. From what most people say, we have to wait for the Linux kernel to fix this macvlan issue; yet the Unraid RC releases keep updating the kernel, so this could be an endless, unknown wait, which is really torturous!!! I probably won't upgrade to the stable 6.12 soon, maybe in half a year, maybe a year.

So right now, if you want the latest version to run reliably, are the only options to not use macvlan, or to add a physical NIC and run macvlan on it?
SmallwoodDR82 Posted August 14, 2023

Chasing another crash issue, so I'm giving this tutorial a shot. I just have one question. After all these configs are complete, I want all my containers to run on the same IP, different ports obviously. On my first container it works great: custom IP taken with eth2 (my dedicated Docker interface) selected. But when I select eth2 and type in the same IP on the second container, I get 0.0.0.0 because that IP is in use? How do I get around this? I really don't want a different IP for every container, just a dedicated NIC for all containers. Thanks in advance!
ljm42 Posted August 14, 2023

2 hours ago, SmallwoodDR82 said:
I really don't want a different IP for every container. Just want to use a dedicated NIC for all containers.

If you assign an IP to a container, it is just for that container, so there is no risk of port overlap. Any containers that don't have a custom IP will share the host's IP, and you need to make sure there is no port overlap.
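The two models ljm42 describes can be sketched with plain Docker CLI commands. This is an illustration only; the network name, IPs, ports, and container names are placeholders, and on Unraid you would set the same things in each container's template:

```shell
# Model 1: each container gets its own IP on the custom (macvlan) network.
# Ports never collide because the IPs differ, so both can serve port 80.
docker run -d --network docker-lan --ip 192.168.1.201 --name app1 nginx
docker run -d --network docker-lan --ip 192.168.1.202 --name app2 nginx

# Model 2: containers without a custom IP share the host's IP, so each
# must be mapped to a different host port to avoid overlap.
docker run -d -p 8081:80 --name app3 nginx
docker run -d -p 8082:80 --name app4 nginx
```

What is not supported is the middle ground asked about above: two containers sharing one custom IP, since a custom IP on a macvlan network is bound to a single container's virtual interface.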
lcadmedia Posted August 20, 2023 (edited)

On 4/15/2023 at 2:28 AM, FredrikJL said:
How to set to none? I can't set it, it only has 2 options: "Static" and "Automatic"

Edited August 20, 2023 by kbnomad
ljm42 Posted August 22, 2023

On 8/20/2023 at 12:24 PM, kbnomad said:
How to set to none? I can't set it, the only has 2 options: "Static" and "Automatic"

Are you sure you are looking at eth1? eth1 should have a "none" option; eth0 will not.
ljm42 Posted August 22, 2023

Hey everyone! We have a 6.12.4 rc release available that resolves the macvlan issue without needing to use two NICs. We'd appreciate your help confirming the fixes before releasing 6.12.4 stable.
TheBroTMv2 Posted August 25, 2023

On 8/22/2023 at 2:08 AM, ljm42 said:
Hey everyone! We have a 6.12.4 rc release available that resolves the macvlan issue without needing to use two nics. We'd appreciate your help confirming the fixes before releasing 6.12.4 stable:

Nope. I finally wanted to spin up my lancache again (I also use swag) and had the crash some hours later. No response, only a hard reset possible. Seems not fixed.
sonic6 Posted August 25, 2023

17 minutes ago, TheBroTMv2 said:
Nope wanted to spin finally up again my lancache as i also use swag and hat the crash some hours later. No response only hard reset possible seems not fixed

Without any diagnostics, there's no way to prove whether this is a macvlan problem or a different crash.
ljm42 Posted August 25, 2023

2 hours ago, TheBroTMv2 said:
Nope wanted to spin finally up again my lancache as i also use swag and hat the crash some hours later. No response only hard reset possible seems not fixed

There isn't a lot to go on here; maybe run through the release notes again and see if you missed any steps? After making these changes, only eth0 should be in use; any other NICs should be unplugged and nothing should be referencing them.

If you think everything is correct, please show screenshots of Settings -> Networking -> eth0 and Settings -> Docker, as well as upload your diagnostics.zip (from Tools -> Diagnostics).

If you expect a crash, you'll also want to go to Settings -> Syslog Server and enable "Mirror syslog to flash". This will save a copy of your logs on the flash drive so we can see the last thing written if there is a crash. Note this is not something you want to keep enabled long term, as it puts extra wear and tear on the flash drive.
TRaSH Posted August 31, 2023

In my current setup I use bonding (Mode 4 (802.3ad)); the main reason was that during heavy up/download traffic I got issues with Plex playback because my NIC was fully saturated. With these changes I have the feeling I will go back to the same issues I had before I decided to bond them.
Chunks Posted August 31, 2023

When moving Dockers from a single interface (eth0) to a new dedicated one (eth1)... I had no problems when the dockers were swapped from br0 to eth1 networks. But I have a whole bunch of dockers using just "bridge". These get the IP address of the host system, the main Unraid IP (eth0).

Is the fix to make all of these eth1 as well? If so, what IP do I use? Can I create a single new address that they all can share, or is this the whole point of removing bridging? Or did I miss something/mess something up?
ljm42 Posted August 31, 2023

1 hour ago, Chunks said:
When moving Dockers from a single interface (eth0) to a new dedicated one (eth1).... I had no problems when the dockers were swapped from br0 to eth1 networks. But I have a whole bunch of dockers using just "bridge". These get the IP address of the host system, the main unraid IP (eth0). Is the fix to make all of these eth1 as well? If so, what IP do I use? Can I create a single new address that they all can share, or is this the whole point of removing bridging? Or did I miss something/mess something up?

Docker containers in bridge or host mode will always use Unraid's main IP. This guide is specifically to deal with containers that have a dedicated IP on a custom network.
ljm42 Posted August 31, 2023

Unraid 6.12.4 is now available! The release notes detail a different solution to this macvlan problem; the method detailed in this guide is no longer needed.
ljm42 Posted August 31, 2023

4 hours ago, TRaSH said:
In my current setup i use bonding (Mode 4 (802.3ad)), main reason was because when I'm doing heavy traffic up/download, i got issues during plex playback because my nic was fully saturated. With these changes i have the feeling i will go back to the same issues before i decided to bond them.

I have updated the 6.12.4 release notes to make it clear that the new solution does work with bonding: https://docs.unraid.net/unraid-os/release-notes/6.12.4/
Masterwishx Posted September 1, 2023

Updated to 6.12.4, moved back to bond0 from 2 NICs (eth1 for Docker) per this solution, and made the steps for the Docker fix. In VM Manager the default network source is virbr0, but in every VM it's vhost0. So should we set vhost0 as the default for VMs only in case of problems like the release notes said, or is it preferred by default?
thecode Posted September 1, 2023 (edited)

2 hours ago, Masterwishx said:
Updated to 6.12.4 , moved back to bond0 from 2nics (eth1 for docker) this solution , made steps for docker fix. in VM Manager default netsource : virbr0 , but in every VM :vhost0 , so should we set vhost0 as default for VMS only in case of problems like said in release note, or preffered by default ?

vhost0 replaces br0. virbr0 is provided by libvirt and is used to create a NAT network for VMs. You should only update the interface in a VM (if it doesn't update automatically) to vhost0 if it was br0. There is no preferred default; it depends on whether you want the VM to run on your network (non-isolated) or in a separate, isolated network.

Edited September 1, 2023 by thecode
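For reference, the two choices thecode describes correspond to two different interface stanzas in a VM's libvirt domain XML. This is a sketch (MAC address and other attributes omitted); on Unraid you would normally pick the source in the VM form view rather than editing XML:

```xml
<!-- Non-isolated: the VM sits on your LAN via vhost0 (which replaces br0) -->
<interface type='bridge'>
  <source bridge='vhost0'/>
  <model type='virtio'/>
</interface>

<!-- Isolated/NAT: the VM uses libvirt's "default" network, backed by virbr0 -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```

With the first form the VM gets an IP from your LAN's DHCP server; with the second, libvirt's dnsmasq hands out a private address and NATs the traffic out through the host.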