eth0 -> macvtap0 - Promiscuous mode flip-flop



I just happened to open the UR log window today, no particular reason, just taking a peek, and noticed that eth0 and macvtap0 were trading promiscuous mode back and forth.

 

This seemingly goes on all the time, forever.

 

Apr 10 10:40:05 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:06 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:06 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:40:20 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:40:20 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:21 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:21 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:40:35 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:40:35 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:36 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:36 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:40:50 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:40:50 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:51 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:51 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:41:05 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:41:05 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:41:06 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:41:06 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:41:20 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:41:20 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:41:21 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:41:21 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:41:35 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:41:35 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:41:36 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:41:36 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:41:50 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:41:50 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:41:51 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:41:51 KNOXX kernel: device eth0 left promiscuous mode

 

I also still have the duplicated IP warnings in my Unifi. I have gone through every post outlining fixes for the MACVLAN networking issues, all of which claim to solve both the freezes and the duplicate IP warnings, but to no avail.

 

I should add that after following the post below, I am unable to contact the host system from my Home Assistant VM. I can't ping it or map network drives to any content on the host system from within the VM.

 

I'm happy to provide screenshots of network and docker configs and any log files that might help diagnose this once and for all.

 

 

Any thoughts here?

knoxx-diagnostics-20240410-1051.zip

Edited by aglyons
Link to comment

You are saying that you still see duplicate IP warnings even though you followed my post? Then I have no better idea, sorry.

 

Regarding the HA VM, you could try enabling only one of the two settings below (just try both alternatives):

- IPv4 custom network on interface eth0 (optional) (default is ON)

- Host access to custom networks (default is OFF)

 

Link to comment

Thinking about this some more, I'm wondering why I have 2 vhost adapters listed in the UR CLI.

 

If bridging is disabled on the primary NIC and macvtap is taking care of getting the Docker containers and VMs out, why are there vhosts being created for the primary LAN (vhost0@eth0) and the added VLAN ([email protected]), which is also on the primary NIC? Why would the VLAN (eth0.2) need a vhost created for it at all?

 

It's these vhost NICs (both have the same MAC addr) that are showing up in the Unifi logs as sharing the same IP as the Unraid primary NIC.

 

root@KNOXX:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 metric 1 scope global eth0
       valid_lft forever preferred_lft forever
4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
5: vhost0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
6: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d6:d7:15:3b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
30: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe94:7776/64 scope link
       valid_lft forever preferred_lft forever

 

Link to comment
5 hours ago, murkus said:

If bridging is on, ipvlan is used.

If bridging is off, vhost is used. I guess simply each active hardware interface gets its own vhost interface.

 

 

 

If bridging is on, the Docker custom network setting needs to be set to ipvlan. The network settings are then set up for use with ipvlan...

^ this has been the macvlan call-trace issue case, as it doesn't drop down to vhost / macvtap to bridge the network correctly.

The NIC you use may not have true promiscuous mode, in which case you should run your network with ipvlan to fix these logged network issues.

 

Confirm your NIC is set with netstat and manually turn it on.

 

https://askubuntu.com/questions/430355/configure-a-network-interface-into-promiscuous-mode

Unraid terminal:

 

ip a

netstat -i

ip link set eth0 promisc on
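For what it's worth, a small sketch of checking for the flag before toggling it. The sample line here just mimics `ip link show eth0` output like the dumps later in this thread; on a live box you would capture it with `line=$(ip link show eth0)` instead:

```shell
# Check for the PROMISC flag before toggling it. The sample line is in the
# same shape as `ip link show eth0` output; on a live system replace it
# with: line=$(ip link show eth0)
line='3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP'
case "$line" in
  *PROMISC*) echo "eth0 is already promiscuous" ;;
  *)         echo "eth0 is not promiscuous" ;;   # then run: ip link set eth0 promisc on
esac
```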

 

 

Link to comment

First off, thanks for those that took some time to reply. I appreciate the help!

 

So instead of "netstat -i" I used "ip -d link", which showed all the links and the promiscuity value.

 

I can't say I would know what is right or wrong here, but trying to wrap my head around this, I think eth0 should NOT have promiscuous mode turned on and the VLAN eth0.2@eth0 SHOULD have it turned on. It's only eth0.2 that has multiple MAC addresses on it; eth0 does not.

 

Just to clarify so we are all on the same page, it is 02:65:43:f8:ad:c4 (vhost0@eth0 & [email protected]) that is being reported on Unifi as having the same IP as 3c:8c:f8:ee:59:84 (eth0)

 

A couple of other questions if anyone knows:

  1. Why does eth0 have "promiscuity 2" whereas all the others are "promiscuity 1" or "promiscuity 0"?
  2. Why is maxmtu different across interfaces? It's "65535" on eth0.2, virbr0 and vhost0.2 and "16334" on eth0, vhost0 and macvtap0.

 

root@KNOXX:~# ip -d link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0  allmulti 0 minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 


2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0  allmulti 0 minmtu 0 maxmtu 0 
    ipip any remote any local any ttl inherit nopmtudisc addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff promiscuity 2  allmulti 0 minmtu 68 maxmtu 16334 addrgenmode eui64 numtxqueues 32 numrxqueues 32 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:03:00.0 


4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff promiscuity 1  allmulti 0 minmtu 0 maxmtu 65535 
    vlan protocol 802.1Q id 2 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


5: vhost0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 16334 
    macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


6: [email protected]: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff promiscuity 1  allmulti 0 minmtu 68 maxmtu 65535 
    macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:d6:d7:15:3b brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:d6:d7:15:3b designated_root 8000.2:42:d6:d7:15:3b root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   47.60 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 no_linklocal_learn 0 mcast_vlan_snooping 0 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 65535 
    bridge forward_delay 200 hello_time 200 max_age 2000 ageing_time 30000 stp_state 1 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.52:54:0:ca:33:2d designated_root 8000.52:54:0:ca:33:2d root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    1.28 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   80.37 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 no_linklocal_learn 0 mcast_vlan_snooping 0 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 


30: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 500
    link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 16334 
    macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
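If I understand correctly, the promiscuity field is a reference count rather than an on/off flag: each holder (a macvtap device, a capture tool, a manual `promisc on`) adds one, which might be why eth0 shows 2 while the rest show 0 or 1. A quick parse of the eth0 detail line above, as a sanity check:

```shell
# Pull the promiscuity counter out of a captured `ip -d link` line.
# The sample is the eth0 detail line from the output above; on a live
# system use: detail=$(ip -d link show eth0)
detail='link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff promiscuity 2  allmulti 0 minmtu 68 maxmtu 16334'
count=$(echo "$detail" | grep -o 'promiscuity [0-9]*' | awk '{print $2}')
echo "promiscuity refcount: $count"   # prints: promiscuity refcount: 2
```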

 

Link to comment

We need two pictures of your settings in Unraid.

 

1. Advanced Docker settings:

 

[screenshot: Advanced Docker settings]

--We need to know the Docker custom network type / Host access to custom networks settings...

 

2. Network tab:

[screenshot: Network tab]

--We need to know if you have bridging on or off. I need to know if you are using bonding.

 

Especially with the VLAN 2 that you have set up on the eth0 interface...

It appears that your VLAN 2 is getting tapped for the Docker macvlan instead of eth0 directly...

 

To clarify, eth0 would need promiscuous mode on, not just VLAN 2.

 

The reason for netstat -i was to see which flags were enabled...
https://askubuntu.com/questions/430355/configure-a-network-interface-into-promiscuous-mode
https://support.citrix.com/article/CTX116493/how-to-enable-promiscuous-mode-on-a-physical-network-card

Unraid at boot attempts to enable them...

 

ip link set eth0 promisc on

You should be able to use the command to toggle it on/off:


*Promiscuous mode needs to be on to use macvlan...


As murkus said earlier...

If bridging is on, you're telling Unraid to use ipvlan;

if bridging is off, you're telling Unraid to use macvlan.

 

You may have a conflict depending on your docker network configuration...

If I were to guess, I believe you have bridging and bonding off. Since you are using a VLAN and want to use VLANs, I recommend enabling bonding.

 

 

Edited by bmartino1
Link to comment
3 hours ago, bmartino1 said:

--We need to know the Docker custom network type / Host access to custom networks settings...

 

[screenshot: Docker settings]

 

 

3 hours ago, bmartino1 said:

--We need to know if you have bridging on or off. I need to know if you are using bonding.

 

[screenshot: Network settings]

 

3 hours ago, bmartino1 said:

The reason for netstat -i was to see which flags were enabled...

 

root@KNOXX:~# netstat -i
Kernel Interface table
Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
docker0   1500        0      0      0 0             0      0      0      0 BMU
eth0      1500 994505887      0 107185 0      6225434828      0      0      0 BMPRU
eth0.2    1500 22788216      0      0 0      10709997      0      0      0 BMRU
lo       65536   914688      0      0 0        914688      0      0      0 LRU
macvtap0  1500 36174618    129    129 0      41388622      0      0      0 BMRU
vhost0    1500 42786669      0  72288 0         62893      0      0      0 BMRU
vhost0.2  1500  1481570      0      0 0             3      0      0      0 BMPRU
virbr0    1500        0      0      0 0             0      0      0      0 BMU

 

 

3 hours ago, bmartino1 said:

Since you are using vlan and want to use vlans I recommend enabling bonding.

 

I'm confused, as AFAIK bonding is for combining multiple physical NICs, and in this system there is only one NIC.

Link to comment

Ty for this information. I believe I have a solution to your issue. You are correct, normally bonding is for combining two NICs, but you can have bonding with one NIC. We do this to work around an Unraid networking issue.

You're experiencing macvlan tap issues due to promiscuous mode and trunking... You may need to enable promiscuous mode on the VLAN:

ip link set eth0.2 promisc on


Any VLAN on eth0 where you are using macvlan as the Docker network will require this enabled.

This fixes macvlan over 802.1Q, which is what you are trying to do on eth0.2 for your Docker network setup, since you have the macvlan Docker setting and added a VLAN.

Next we need to check that trunking is enabled for the VLAN to actually work on your system. This can sometimes require specific network-capable cards.

https://www.techtarget.com/searchsecurity/definition/promiscuous-mode
Please review the YouTube video going over every Docker network type...

You may need to use the terminal to delete and recreate your macvlan network so it taps eth0 and not eth0.2,

or enable bonding, which would take priority over eth0 and can potentially fix your network settings.
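If it helps, a hypothetical sketch of that delete/recreate. The network name, subnet and gateway here are placeholders, not values read from your system; substitute your own before running:

```shell
# Placeholder values throughout - adjust the name, subnet and gateway to
# match your LAN. Removes the old macvlan network, then recreates it with
# the physical eth0 (not eth0.2) as the parent.
docker network rm my_macvlan || true
docker network create -d macvlan \
  --subnet=192.168.200.0/24 \
  --gateway=192.168.200.1 \
  -o parent=eth0 \
  my_macvlan
```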

Edited by bmartino1
can't type
Link to comment

I enabled bonding "active-backup (1)". netstat -i reports the following.

 

root@KNOXX:~# netstat -i
Kernel Interface table
Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0     1500   221481      0     63 0         17634      0      0      0 BMmRU
bond0.2   1500    30618      0      0 0          9139      0      0      0 BMRU
docker0   1500        0      0      0 0             0      0      0      0 BMU
eth0      1500   221481      0 109304 0         17634      0      0      0 BMPsRU
lo       65536  1039555      0      0 0       1039555      0      0      0 LRU
vhost0    1500     5021      0     53 0           674      0      0      0 BMRU
vhost0.2  1500       60      0      0 0             6      0      0      0 BMRU
root@KNOXX:~# 

 

I'll have to wait and see if the duped IP warning pops up again. I'll post back if I see anything.

 

Thx for helping out!!!

Link to comment

Hmmmmm........Nope. No dice!

 

One thing I will say has changed is that it is much more stable in showing up in Unifi now. It used to pop up now and then and then disappear for a bit. Now it's there full time and never goes away.

 

(but would a reboot be needed to clear anything out?)

 

[screenshot: Unifi duplicate IP warning]

 

root@KNOXX:~# netstat -i
Kernel Interface table
Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0     1500 191492028      0  31613 0      1517176420      0      0      0 BMmRU
bond0.2   1500  1672302      0      0 0       1477775      0      0      0 BMRU
docker0   1500        0      0      0 0             0      0      0      0 BMU
eth0      1500 191492028      0 109419 0      1517176420      0      0      0 BMPsRU
lo       65536  3104925      0      0 0       3104925      0      0      0 LRU
macvtap0  1500  4186327      0      0 0       6011325      0      0      0 BMRU
vhost0    1500 178933022      0  26852 0      67473052      0      0      0 BMRU
vhost0.2  1500    16069      0      0 0             6      0      0      0 BMRU
virbr0    1500        0      0      0 0             0      0      0      0 BMU

 

All that seems to have happened is two new interfaces were added, bond0 and bond0.2. The vhost0 and vhost0.2 are still there and, from what I can see, vhost0@bond0 is coming up with the same IP as eth0.

 

root@KNOXX:~# ip addr
256: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:14:ad:6e:85 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
278: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
279: macvtap0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe94:7776/64 scope link 
       valid_lft forever preferred_lft forever
252: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 metric 1 scope global bond0
       valid_lft forever preferred_lft forever
253: bond0.2@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
254: vhost0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 scope global vhost0
       valid_lft forever preferred_lft forever
255: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
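Just to show the duplicate straight from the CLI, here's a quick parse that lists every interface claiming the server's IP. The sample is a trimmed, one-line-per-address version of the dump above; on the server itself you would pipe the live `ip -o addr show` output instead:

```shell
# List every interface that claims 192.168.200.88. The sample mimics the
# one-line-per-address format of `ip -o addr show`; live usage would be:
#   ip -o addr show | awk '$3 == "inet" && $4 ~ /^192\.168\.200\.88\// {print $2}'
sample='252: bond0    inet 192.168.200.88/24 metric 1 scope global bond0
254: vhost0    inet 192.168.200.88/24 scope global vhost0
256: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0'
echo "$sample" | awk '$3 == "inet" && $4 ~ /^192\.168\.200\.88\// {print $2}'
# prints: bond0
#         vhost0
```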

 

 

 

EDIT:

Maybe someone can explain the purpose of the vhost interfaces? eth0 is defined and so is the VLAN eth0.2. What is the point or purpose of the vhost additions, one for the physical interface eth0 and one for the VLAN eth0.2? At a stretch I could understand vhost0.2, as eth0.2 is a virtual NIC in a way. But why would the primary NIC eth0 need a vhost as well in vhost0?

Edited by aglyons
questions about vhost purpose
Link to comment

Vhost explained: https://stackoverflow.com/questions/68334207/what-is-linux-vhost-and-how-it-helps-virtio-offloading
vhost acts like a bridge that taps into the active interface for L3 routes...
macvtap is the Docker driver similar to vhost, but for the macvlan network driver.
Not seeing LAN traffic is an issue with how networking is done and how ipvlan is implemented on the VLAN 2 set up for access.

To see Unifi LAN network traffic you:

1 must have a unifi/ubnt switch.

2 Macvlan docker network driver must be used...

3 have a MAC address on the Docker container that is different from Unraid's.

 

4 review this entire video going over Docker networks...

It goes over everything from the commands to explaining the differences and the whys....
*****You may need to create a Docker network to fix this...******

UNIFI is a network tool! It requires network access to things...


What Unifi are you running:

Linux IO:


Pete Asking Unifi reborn:


LXC:


Otherwise, in your case I would recommend the LXC route, as you can configure it to use a given network interface and sit on bond0.2 / vhost0.2 with its own IP and separate MAC, connecting to the local network as a network tool.

If VLAN 2 is only there to separate things for Unifi, your entire network setup is wrong, and I ask you to review the forum post I shared above to the end to attempt to understand the what, how and why...

Edited by bmartino1
the video becoming the player in the post deletes the line above and below; post lost content...
Link to comment

OK, so I am not running the controller in a container or VM; I have a UDM Pro SE. Hardware.

 

IT (Unifi hardware is on its own "default" VLAN)

Docker (for docker containers or other external services)

Family (for regular client devices)

Guest (standard unifi guest network)

 

I've watched NetworkChuck's video a while back and just watched it again, just in case something jumped out at me. The odd thing is that the networks he was creating don't result in the same behavior that Unraid produces. His MACVLAN network setup does not attach the same IP as the host; it doesn't assign an IP at all. Here on UR, it does, and that is what is causing the problem IMO.

 

To dissect the three points you mentioned;

 

To see Unifi LAN network traffic you:

1 must have a unifi/ubnt switch.

 

YUP, all my network equipment is Unifi

 

2 Macvlan docker network driver must be used...

 

YUP, but that's what UNRaid is supposed to be doing in the background, right?

 

3 have a MAC address on the Docker container that is different from Unraid's.

 

Not totally clear what you mean here, but the point of the MACVLAN is that each container on that network gets its own MAC. So that would mean yes, Unifi should, and does, see each container as a separate client device and tracks the traffic. That is why I want to use MACVLAN. Using IPVLAN, Unifi doesn't see the client and I can't set up traffic rules against them. Bridge is even worse, as it all goes through one IP address and one MAC. That also kills anything possible with PiHole (local DNS, etc).

 

So I'm still at a loss here, as the way I see it, which is also how it was shown in NetworkChuck's video, the vhost should not be getting the same IP as the parent interface. It shouldn't be getting an IP address at all as far as I can tell.

 

 

Link to comment

Thank you for that information...

Please review the above stuff again... and potentially re-watch the NetworkChuck video...

On your UDM Pro the options are probably not set up correctly for VLAN 2, as that needs trunking... and another connection, probably back to Family, for LAN connections...

Look for spanning tree / STP and whether it is enabled on the UDM Pro. <- This may explain the constant back and forth at the top...

 

...Yes, Unraid does the same tap in your case, based on the previously posted data above:

 

Quote

root@KNOXX:~# ip addr
256: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:14:ad:6e:85 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
278: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
279: macvtap0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe94:7776/64 scope link 
       valid_lft forever preferred_lft forever
252: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 metric 1 scope global bond0
       valid_lft forever preferred_lft forever
253: bond0.2@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
254: vhost0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 scope global vhost0
       valid_lft forever preferred_lft forever
255: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff

 

vhost0 and bond0 are where the Unraid default network is. You can see the same IP address on both; this is the driver tap that Unraid did for the default network. Since you made a VLAN, which became bond0.2, you need to make a Docker network to tap and use that VLAN.

Since you are using a VLAN and Unraid defaulted that VLAN to ipvlan, you will need to use the docker network create command to use Docker containers on Unraid with your UDM Pro VLAN.

This requires more setup and configuration on the Unraid host / UDM Pro to get right, as you are trying to segment data to Unraid on both the UDM router network and the Unraid network for services.
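A hypothetical example of what that docker network create could look like for the VLAN. The name, subnet and gateway are placeholders; VLAN 2's real addressing lives on your UDM Pro, so substitute those values:

```shell
# Placeholder values - substitute VLAN 2's actual subnet and gateway.
# Uses the VLAN sub-interface bond0.2 as the macvlan parent so containers
# land on VLAN 2 with their own MACs.
docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  -o parent=bond0.2 \
  vlan2_macvlan
```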

 

My friend uses a UDM and I will try to help, but we will also be missing things, as this is support for Unraid and not Ubiquiti.

Please review:

 

Link to comment

Essentially, I need to know more about what's connected to what and how...

modem > UDM > ??? ?? ??? > unraid...

The one NIC for Unraid needs to connect to what traffic?

ATM I can only assume the default network / whatever VLAN 2 is... there are thousands of ways to configure and misconfigure the UDM...

-My biggest complaint against Ubiquiti is issues with NAT... they still haven't fixed the NAT issues...

For me to further assist...
In my head, how this should be is: VLAN 2 is its own DHCP IP network, and Unraid and Docker talk to the DHCP server for VLAN 2...
VLAN 2 is able to talk to the default network at the UDM level. Depending on the switch port and what's connected to what, that may be different.

 

In which case, the UDM switch port that Unraid connects to needs xyz settings.
You may be misunderstanding the network settings in Unraid:
[screenshot: Unraid network settings]

Not enough info.

So ATM I would say: in Unraid, delete VLAN 2, as the UDM should be giving the Unraid port VLAN 2 only...
Keep the Docker settings the way they are, and keep bonding with the one eth0 as active-backup for the time being.


As such I don't see the need to make VLAN 2 on Unraid in this case... and your Docker network needs to use macvlan.
*Unless you segment services on Unraid, i.e. SMB is on VLAN 1 (default) and VLAN 2 is something else, which is then made via the UDM...

**If segmenting, you're on your own: you would need to trunk "all" from the direct connection to Unraid, recreate an Unraid Docker network, and possibly other configurations.
^ not worth the headache / trouble.


 

Edited by bmartino1
Link to comment

I don't mean to be confrontational bmartino1, I appreciate your trying to help. But I'm not sure what the concern is with how the VLANs are set up on the UDM. All the services on that VLAN that are served by Unraid work just fine across the network. All of those containers on that VLAN show up as individual clients in the Unifi Network UI, with unique MAC addresses, just as I wanted them to. Also Spanning Tree is for network loops across switches. It would have no bearing in this situation.

 

The settings on the UDM would have nothing to do with the Unraid server assigning the primary NIC's IP to the vhost0 interface. That is what is causing the 'duplicate IP/different MAC addr' problem. This is NOT just shown in the Unifi Controller; it is clear in the CLI output on the Unraid server itself. NetworkChuck's video shows the MACVLAN network he created does not have an IP assigned to it at all. So why is UR assigning that IP?

 

Perhaps this is a side effect of the workaround for the MACVLAN freeze issue in the Linux kernel that came out a little while ago. That is when I started seeing this issue pop up. Once I went through the workaround setup, this behavior started.

 

Link to comment

For the vhost-gets-the-same-IP problem: this is likely due to host access being enabled, combined with your network environment. Generally speaking, once you enable that, it's always trouble.

 

I use macvtap throughout (ipvlan is also fine), and just disable bridging and host access.

 

For the same MAC appearing on both source and destination, this is usually routing-related, and it happens easily when using multiple NICs.

 

To simplify my question: do all of the problems go away if you disable bridging and host access and never touch the UR routing table? (Excluding the HA-unreachable and HA-can't-access-share issues.)

Edited by Vr2Io
Link to comment
13 hours ago, aglyons said:

I don't mean to be confrontational bmartino1, I appreciate your trying to help. But I'm not sure what the concern is with how the VLANs are set up on the UDM. All the services on that VLAN that are served by Unraid work just fine across the network. All of those containers on that VLAN show up as individual clients in the Unifi Network UI, with unique MAC addresses, just as I wanted them to. Also Spanning Tree is for network loops across switches. It would have no bearing in this situation.

 

The settings on the UDM would have nothing to do with the UNRaid server assigning the primary NIC's IP to the vhost0 interface. That is what is causing the 'duplicate IP/different MAC addr' problem. This is NOT just shown in the Unifi Controller, it is clear in the CLI output on the UNRaid server itself. NetworkChuck's video shows the MACVLan network he created does not have an IP assigned to it at all. So why is UR assigning that IP?

 

Perhaps this is a side effect of the workaround for the MACVLan freeze issue in the linux kernel that came out a little while ago. That is when I started seeing this issue pop up. Once I went through the workaround setup, this behavior started.

 

That's right, I'm an experienced hobbyist and trying to help. I can't help how I come off or appear to others. I'm not being confrontational. 

ATM -- I don't really see or know the problem, nor the question you actually have at this point, given how the interactions have gone.

So, summary as I understand it:
You first went from a bad networking configuration on Unraid due to how promiscuous mode was enabled, hence this post about the "flip flop", as you put it...

Then to a problematic bond setup with a VLAN 2 and not seeing Docker network traffic in Unifi... It was not clear, and I assumed, since I help a lot on this forum with the Unifi Dockers. So I assumed and posted about the Unifi Dockers...

Now we know that when you said Unifi you meant a UDM Pro running the Unifi Network application. This changes a few things, which I outlined above...


So I recommend spanning tree... This would be required because what you have now described to me is "segmenting" on Unraid, having Unraid run and create VLANs. Which is a thing... What I also outlined above is to not have Unraid segment that traffic. Since I don't have enough info on your setup, that is what is breaking things... On top of new info that changes how you would handle basic networking, compounded by differing experience and knowledge of networking...

So let's review:
How Unraid handles networking is weird to begin with, especially since the devs decided to try to move to ipvlan.
If you take a look at the other Unifi threads above that I sent you to, you would have come across the networking info I posted and the bridging info, along with how Linux interfaces work. For now, I recommend installing the plugin "Tips and Tweaks".
Then go over every item in the network settings tab.

To answer your question again...

vhost0@bond0 duplicates the IP due to the static IP assigned to bond0.
bond0 is tapped.
This means that whatever bond0 has is now what vhost0 has.
It is normal to see that in ip a...

root@BMM-Unraid:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether f0:2f:74:1c:1d:2c brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f0:2f:74:1c:1d:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.254/24 metric 1 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::f22f:74ff:fe1c:1d2c/64 scope link 
       valid_lft forever preferred_lft forever
5: vhost0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
    link/ether 02:56:98:cd:46:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.254/24 scope global vhost0
       valid_lft forever preferred_lft forever
    inet6 fe80::56:98ff:fecd:4672/64 scope link 
       valid_lft forever preferred_lft forever
6: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.253.0.1/32 scope global wg0
       valid_lft forever preferred_lft forever
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:cf:39:95:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
8: br-b03de0ef91d0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:67:4e:61:38 brd ff:ff:ff:ff:ff:ff
    inet 172.31.200.1/24 brd 172.31.200.255 scope global br-b03de0ef91d0
       valid_lft forever preferred_lft forever
9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:2b:22:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

^-Normal...

vhost is not the issue here, and I don't know how better to explain or show that in Linux or in context.
This is literally how Unraid made the layer 3 route so the VM network can talk to the local network, and it has nothing to do with your "flip flop" issue.
The IP is the same because of the IP routing that Unraid / virtio did to maintain and fix routes.

This is normal and should be happening!...
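If you want to confirm that nothing beyond the bond0/vhost0 mirroring is going on, here is a small sketch (a hypothetical helper, not part of Unraid or iproute2) that parses the one-line-per-address output of "ip -o -4 addr show" and reports any IPv4 address carried by more than one interface:

```python
import re
from collections import defaultdict

def find_duplicate_ips(ip_o_output):
    """Map each IPv4 address to the interfaces carrying it.

    Expects the one-line-per-address format of `ip -o -4 addr show`.
    Returns only addresses present on more than one interface.
    """
    by_ip = defaultdict(list)
    for line in ip_o_output.strip().splitlines():
        # e.g. "4: bond0    inet 192.168.1.254/24 metric 1 scope global bond0"
        m = re.match(r"\d+:\s+(\S+)\s+inet\s+([\d.]+)/", line)
        if m:
            iface, addr = m.groups()
            by_ip[addr].append(iface)
    return {ip: ifs for ip, ifs in by_ip.items() if len(ifs) > 1}

# Trimmed-down sample mirroring the `ip a` output above:
sample = """\
1: lo    inet 127.0.0.1/8 scope host lo
4: bond0    inet 192.168.1.254/24 metric 1 scope global bond0
5: vhost0    inet 192.168.1.254/24 scope global vhost0
"""
print(find_duplicate_ips(sample))
# {'192.168.1.254': ['bond0', 'vhost0']}
```

Seeing exactly one duplicate, shared by bond0 and vhost0, matches the "normal" picture above; any other pairing would be worth investigating.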

The workaround link is from the release notes of 6.12.4. I have been fighting the devs since then over their decision to remove and move off macvlan, compounded by them bringing it back, since it was the default and is still needed. The removed option within the Docker and network settings was pre-programmed (for point-and-click end users) as "these settings = ipvlan" and "these settings = macvlan". These are ongoing, per-client issues going back to Unraid 6.9 and 6.11... and 6.12 broke how Docker taps and how networking is done.

This is more about how some settings in Unraid are set once and are not touched or changed again at reboot unless you delete the config files.
This was outlined in my other post to the forum. So at this time all I can really say is: search the forum. It's clear that you're not understanding or taking my advice and reviewing the posts I outlined. So I see no other recourse/reason to continue helping, beyond saying that more research needs to be done on your part.

I honestly would recommend making a new topic. You can even call it a continuation of this topic; statistically, due to the number of replies, I don't see another community dev / moderator coming in unless it's a different post. Sorry I couldn't help you.

Please research more into networking with Linux and the options for tun/tap networking, VLANs, bridging vs bonding, and bonding and bridging for other Linux interfaces...

Good luck

Edited by bmartino1
spelling
Link to comment
11 hours ago, Vr2Io said:

For the vhost-gets-the-same-IP problem: this is likely due to host access being enabled, combined with your network environment. Generally speaking, once you enable that, it's always trouble.

 

I use macvtap throughout (ipvlan is also fine), and just disable bridging and host access.

 

For the same MAC appearing on both source and destination, this is usually routing-related, and it happens easily when using multiple NICs.

 

To simplify my question: do all of the problems go away if you disable bridging and host access and never touch the UR routing table? (Excluding the HA-unreachable and HA-can't-access-share issues.)


The problem with Unifi and ipvlan is that everything from Dockers to services uses the same MAC address and is not logged correctly in the Unifi application. macvlan must be used for Unifi...
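A quick way to see that symptom in data: below is a hypothetical sketch (the names and addresses are illustrative, not from the OP's network) that takes (IP, MAC) pairs, as you might export from "ip neigh" or a controller's client list, and flags any MAC answering for multiple IPs, which is exactly what ipvlan produces:

```python
from collections import defaultdict

def ips_per_mac(clients):
    """clients: iterable of (ip, mac) pairs.

    Returns MACs that answer for more than one IP -- the ipvlan
    signature, since every container inherits the parent NIC's MAC.
    """
    table = defaultdict(set)
    for ip, mac in clients:
        table[mac.lower()].add(ip)  # normalize case before grouping
    return {mac: sorted(ips) for mac, ips in table.items() if len(ips) > 1}

# Illustrative client table: two ipvlan containers share the host MAC,
# while a macvlan container gets its own.
clients = [
    ("192.168.1.50", "F0:2F:74:1C:1D:2C"),  # ipvlan container A
    ("192.168.1.51", "f0:2f:74:1c:1d:2c"),  # ipvlan container B, same MAC
    ("192.168.1.60", "02:42:ac:11:00:02"),  # macvlan container, unique MAC
]
print(ips_per_mac(clients))
# {'f0:2f:74:1c:1d:2c': ['192.168.1.50', '192.168.1.51']}
```

With macvlan, the duplicates disappear, so each container shows up as its own client in the Unifi Network application.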

Link to comment
5 hours ago, bmartino1 said:


The problem with Unifi and ipvlan is that everything from Dockers to services uses the same MAC address and is not logged correctly in the Unifi application. macvlan must be used for Unifi...

That's known, but why does the OP have the said problems (flipping, and the same MAC on source and destination)?

Edited by Vr2Io
Link to comment
11 hours ago, Vr2Io said:

That's known, but why does the OP have the said problems (flipping, and source/destination sharing the same MAC)?


Correct. Because the client used ipvlan on the Unraid-created VLAN 2 that is assigned to the xyz Docker they are using, which shared the same MAC address. Then blowback from the Unifi Network application over how it handled the duplicate MAC, and from how Unraid handled the syslog network traffic... Caused by misconfiguration in both the UDM Pro (Unifi) and Unraid, via promisc mode and settings...

Link to comment

Let me clear up the confusions.

 

First, I have and always have had the system running in MACVLAN mode.

 

The actual problem is the primary NIC VHOST (vhost0) is being assigned the same IP as the primary NIC. The primary NIC and vhost0 each have unique MAC addresses. 

 

As such, the network has two systems with different MAC addresses using the same IP address. The cherry on top being that both systems are plugged into the same physical network port!

 

This behavior has only cropped up since implementing the workaround to allow MACVLAN without the call-trace lock ups.

 

What's really confusing to me is if you search online for "Linux, MACVLAN, call trace" all that comes up are UnRaid reports. Even the bug report for the Linux kernel was made by limetech. There are no other reports by anyone else about this behavior. So it seems to me that this is an UNRaid specific issue.

 

I have other Linux systems running Docker with MACVLAN networks that have never had a call trace lock up. So why are we seeing this happen on UNRaid systems?

Link to comment

To summarize,

 

Pls try

- disable host access

- disable bridge

- disable bonding

- stop VM

 

Then check whether the different-MAC-same-IP (assigned / static) and the promiscuous mode flapping are gone. (You may need to wait long enough for the switch's aging timer to expire so it forgets the MAC.)

 

If positive, we can assume the problem comes from the VM, and then focus on the VM side / settings.
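To check whether the flapping is actually gone after those changes, one option is to count the promiscuous-mode transitions in the syslog. A rough sketch (a hypothetical helper, assuming the kernel log lines look like the ones in the original post):

```python
import re
from collections import Counter

# Matches lines like:
#   Apr 10 10:40:05 KNOXX kernel: device eth0 entered promiscuous mode
PROMISC_RE = re.compile(r"device (\S+) (entered|left) promiscuous mode")

def count_transitions(log_text):
    """Count promiscuous-mode enter/leave events per device.

    Returns a Counter keyed by (device, action); a healthy system
    should show at most a handful of transitions, not a steady stream.
    """
    counts = Counter()
    for line in log_text.splitlines():
        m = PROMISC_RE.search(line)
        if m:
            counts[m.groups()] += 1
    return counts

sample = """\
Apr 10 10:40:05 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:06 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:06 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:40:20 KNOXX kernel: device macvtap0 entered promiscuous mode
"""
c = count_transitions(sample)
print(c[("eth0", "entered")])  # 1
```

Run it over a longer window of /var/log/syslog before and after the changes; if the per-device counts stop growing, the flapping has stopped.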

Edited by Vr2Io
Link to comment
