Posts posted by aglyons

  1. Let me clear up the confusion.

     

    First, I have always had the system running in MACVLAN mode.

     

    The actual problem is the primary NIC VHOST (vhost0) is being assigned the same IP as the primary NIC. The primary NIC and vhost0 each have unique MAC addresses. 

     

    As such, the network has two systems with different MAC addresses using the same IP address. The cherry on top: both systems are plugged into the same physical network port!

     

    This behavior has only cropped up since implementing the workaround to allow MACVLAN without the call-trace lock-ups.

     

    What's really confusing to me is that if you search online for "Linux, MACVLAN, call trace", all that comes up are Unraid reports. Even the bug report for the Linux kernel was filed by limetech. There are no reports from anyone else about this behavior, so it seems to me that this is an Unraid-specific issue.

     

    I have other Linux systems running Docker with MACVLAN networks that have never had a call-trace lock-up. So why are we seeing this happen on Unraid systems?
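    For anyone else trying to confirm the duplicate from the CLI, here's a rough sketch of how I'd check for it. The here-doc stands in for live `ip -br -4 addr show` output (abbreviated from the dumps later in this thread) so the pipeline can be run anywhere; on the server itself you'd pipe in the real command instead.

```shell
# Sample output in the shape of `ip -br -4 addr show`; replace the
# here-doc with the real command on a live system.
sample=$(cat <<'EOF'
eth0             UP      192.168.200.88/24
vhost0@eth0      UP      192.168.200.88/24
docker0          DOWN    172.17.0.1/16
virbr0           DOWN    192.168.122.1/24
EOF
)

# Flag any IPv4 address that appears on more than one interface.
dupes=$(echo "$sample" | awk '
  { split($3, a, "/"); seen[a[1]] = seen[a[1]] " " $1 }
  END { for (ip in seen) if (split(seen[ip], ifs, " ") > 1) print "DUPLICATE " ip ":" seen[ip] }')
echo "$dupes"
```

    On my box that prints `DUPLICATE 192.168.200.88: eth0 vhost0@eth0`, which is exactly what Unifi is complaining about.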

  2. I don't mean to be confrontational, bmartino1, I appreciate your trying to help. But I'm not sure what the concern is with how the VLANs are set up on the UDM. All the services on that VLAN that are served by Unraid work just fine across the network. All of the containers on that VLAN show up as individual clients in the Unifi Network UI, with unique MAC addresses, just as I wanted them to. Also, Spanning Tree is for preventing network loops across switches; it would have no bearing on this situation.

     

    The settings on the UDM would have nothing to do with the Unraid server assigning the primary NIC's IP to the vhost0 interface. That is what is causing the 'duplicate IP/different MAC addr' problem. This is NOT just shown in the Unifi Controller; it is clear in the CLI output on the Unraid server itself. NetworkChuck's video shows the MACVLAN network he created does not have an IP assigned to it at all. So why is UR assigning that IP?

     

    Perhaps this is a side effect of the workaround for the MACVLAN freeze issue in the Linux kernel that came out a little while ago. That is when I started seeing this issue pop up: once I went through the workaround setup, this behavior started.

     

  3. OK, so I am not running the controller in a container or VM; I have a UDM Pro SE. Hardware.

     

    IT (unifi hardware is on its own "default" VLAN)

    Docker (for docker containers or other external services)

    Family (for regular client devices)

    Guest (standard unifi guest network)

     

    I watched NetworkChuck's video a while back and just watched it again, in case something jumped out at me. The odd thing is that the networks he creates don't result in the same behavior Unraid produces. His MACVLAN network setup does not attach the same IP as the host; it doesn't assign an IP at all. Here on UR it does, and that is what is causing the problem IMO.

     

    To dissect the three points you mentioned:

     

    To see unifi lan network traffic you:

    1 must have a unifi/ubnt switch.

     

    YUP, all my network equipment is Unifi

     

    2 Macvlan docker network driver must be used...

     

    YUP, but that's what UNRaid is supposed to be doing in the background, right?

     

    3 have a MAC address on the docker that is different from Unraid's.

     

    Not totally clear what you mean here, but the point of MACVLAN is that each container on that network gets its own MAC. So yes, Unifi should, and does, see each container as a separate client device and track its traffic. That is why I want to use MACVLAN. With IPVLAN, Unifi doesn't see the client and I can't set up traffic rules against them. Bridge is even worse, as everything goes through one IP address and one MAC. That also kills anything possible with PiHole (local DNS, etc.).

     

    So I'm still at a loss here, as the way I see it, which is also how it was shown in NetworkChuck's video, the vhost should not be getting the same IP as the parent interface. It shouldn't be getting an IP address at all as far as I can tell.

     

     

  4. Hmmmmm........Nope. No dice!

     

    One thing I will say has changed is it is much more stable in showing up in Unifi now. It used to pop up now and then and then disappear for a bit. Now it's there full time and never goes away.

     

    (but would a reboot be needed to clear anything out?)

     

    (screenshot)

     

    root@KNOXX:~# netstat -i
    Kernel Interface table
    Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
    bond0     1500 191492028      0  31613 0      1517176420      0      0      0 BMmRU
    bond0.2   1500  1672302      0      0 0       1477775      0      0      0 BMRU
    docker0   1500        0      0      0 0             0      0      0      0 BMU
    eth0      1500 191492028      0 109419 0      1517176420      0      0      0 BMPsRU
    lo       65536  3104925      0      0 0       3104925      0      0      0 LRU
    macvtap0  1500  4186327      0      0 0       6011325      0      0      0 BMRU
    vhost0    1500 178933022      0  26852 0      67473052      0      0      0 BMRU
    vhost0.2  1500    16069      0      0 0             6      0      0      0 BMRU
    virbr0    1500        0      0      0 0             0      0      0      0 BMU

     

    All that seems to have happened is two new interfaces were added, bond0 and bond0.2. The vhost0 and vhost0.2 are still there and, from what I can see, vhost0@bond0 is coming up with the same IP as eth0.

     

    root@KNOXX:~# ip addr
    256: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:14:ad:6e:85 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: eth0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    278: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    279: macvtap0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
        link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fe94:7776/64 scope link 
           valid_lft forever preferred_lft forever
    252: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.200.88/24 metric 1 scope global bond0
           valid_lft forever preferred_lft forever
    253: bond0.2@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    254: vhost0@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.200.88/24 scope global vhost0
           valid_lft forever preferred_lft forever
    255: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff

     

     

     

    EDIT:

    Maybe someone can explain the purpose of the vhost interfaces? eth0 is defined and so is the VLAN eth0.2. What is the purpose of the vhost additions, one for the physical interface eth0 and one for the VLAN eth0.2? At a stretch I could understand vhost0.2, as eth0.2 is a virtual NIC in a way. But why would the primary NIC eth0 need a vhost as well in vhost0?

  5. I enabled bonding "active-backup (1)". netstat -i reports the following.

     

    root@KNOXX:~# netstat -i
    Kernel Interface table
    Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
    bond0     1500   221481      0     63 0         17634      0      0      0 BMmRU
    bond0.2   1500    30618      0      0 0          9139      0      0      0 BMRU
    docker0   1500        0      0      0 0             0      0      0      0 BMU
    eth0      1500   221481      0 109304 0         17634      0      0      0 BMPsRU
    lo       65536  1039555      0      0 0       1039555      0      0      0 LRU
    vhost0    1500     5021      0     53 0           674      0      0      0 BMRU
    vhost0.2  1500       60      0      0 0             6      0      0      0 BMRU
    root@KNOXX:~# 

     

    I'll have to wait and see if the duped IP warning pops up again. I'll post back if I see anything.

     

    Thx for helping out!!!

  6. 3 hours ago, bmartino1 said:

    --We need to know Docker custom network type / Host access to custom settings...

     

    (screenshot)

     

     

    3 hours ago, bmartino1 said:

    --We need to know if you have bridging on or off. I need to know if you are using bonding.

     

    (screenshot)

     

    3 hours ago, bmartino1 said:

    The reason for netstat -i was to see which flags was enabled...

     

    root@KNOXX:~# netstat -i
    Kernel Interface table
    Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
    docker0   1500        0      0      0 0             0      0      0      0 BMU
    eth0      1500 994505887      0 107185 0      6225434828      0      0      0 BMPRU
    eth0.2    1500 22788216      0      0 0      10709997      0      0      0 BMRU
    lo       65536   914688      0      0 0        914688      0      0      0 LRU
    macvtap0  1500 36174618    129    129 0      41388622      0      0      0 BMRU
    vhost0    1500 42786669      0  72288 0         62893      0      0      0 BMRU
    vhost0.2  1500  1481570      0      0 0             3      0      0      0 BMPRU
    virbr0    1500        0      0      0 0             0      0      0      0 BMU
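    For reference, here is my reading of the one-letter codes in that Flg column. Hedged: this is how I decode net-tools' flag letters, and it lines up with the outputs in this thread (bond0 shows m for master, eth0 shows P while it is in promiscuous mode, eth0 gains s for slave once enslaved to bond0), but treat it as my interpretation rather than gospel.

```shell
# Decode netstat -i "Flg" letters (my mapping, cross-checked against
# the interfaces in this thread): B broadcast, L loopback, M multicast,
# P promiscuous, R running, U up, m bond master, s bond slave.
decode_flags() {
  flags=$1
  out=""
  while [ -n "$flags" ]; do
    c=${flags%"${flags#?}"}    # first character
    flags=${flags#?}           # remainder
    case "$c" in
      B) out="$out broadcast" ;;
      L) out="$out loopback" ;;
      M) out="$out multicast" ;;
      P) out="$out promiscuous" ;;
      R) out="$out running" ;;
      U) out="$out up" ;;
      m) out="$out bond-master" ;;
      s) out="$out bond-slave" ;;
      *) out="$out unknown($c)" ;;
    esac
  done
  echo $out   # unquoted on purpose: trims the leading space
}

decode_flags BMPsRU   # eth0 above
decode_flags BMmRU    # bond0 above
```

    So eth0's BMPsRU reads as broadcast, multicast, promiscuous, bond-slave, running, up.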

     

     

    3 hours ago, bmartino1 said:

    Since you are using vlan and want to use vlans I recommend enabling bonding.

     

    I'm confused because, AFAIK, bonding is for combining multiple physical NICs, and this system has only one NIC.

  7. First off, thanks for those that took some time to reply. I appreciate the help!

     

    So instead of "netstat -i" I used "ip -d link", which showed all the links and the promiscuity value.

     

    I can't say I know what is right or wrong here. But trying to wrap my head around this, I think eth0 should NOT have promiscuous mode turned on, and the VLAN eth0.2@eth0 SHOULD have it turned on. It's only eth0.2 that has multiple MAC addresses on it; eth0 does not.

     

    Just to clarify so we are all on the same page: it is 02:65:43:f8:ad:c4 (vhost0@eth0 & vhost0.2@eth0.2) that is being reported in Unifi as having the same IP as 3c:8c:f8:ee:59:84 (eth0).

     

    A couple of other questions if anyone knows:

    1. Why does eth0 have "promiscuity 2" whereas all the others are "promiscuity 1" or "promiscuity 0"?
    2. Why does maxmtu differ across interfaces? It's "65535" on eth0.2, virbr0 and vhost0.2, and "16334" on eth0, vhost0 and macvtap0.

     

    root@KNOXX:~# ip -d link
    
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0  allmulti 0 minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 
    
    
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0  allmulti 0 minmtu 0 maxmtu 0 
        ipip any remote any local any ttl inherit nopmtudisc addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff promiscuity 2  allmulti 0 minmtu 68 maxmtu 16334 addrgenmode eui64 numtxqueues 32 numrxqueues 32 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:03:00.0 
    
    
    4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff promiscuity 1  allmulti 0 minmtu 0 maxmtu 65535 
        vlan protocol 802.1Q id 2 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    5: vhost0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 16334 
        macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    6: [email protected]: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff promiscuity 1  allmulti 0 minmtu 68 maxmtu 65535 
        macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
        link/ether 02:42:d6:d7:15:3b brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 65535 
        bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:d6:d7:15:3b designated_root 8000.2:42:d6:d7:15:3b root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   47.60 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 no_linklocal_learn 0 mcast_vlan_snooping 0 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
        link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 65535 
        bridge forward_delay 200 hello_time 200 max_age 2000 ageing_time 30000 stp_state 1 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.52:54:0:ca:33:2d designated_root 8000.52:54:0:ca:33:2d root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    1.28 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   80.37 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 no_linklocal_learn 0 mcast_vlan_snooping 0 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 
    
    
    30: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 500
        link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 16334 
        macvtap mode bridge bcqueuelen 1000 usedbcqueuelen 1000 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 

     

  8. Thinking about this some more, I'm wondering why I have 2 vhost adapters listed in the UR CLI.

     

    If bridging is disabled on the primary NIC and MACVTAP is taking care of the Docker containers and VMs getting out, why are VHOSTs being created for the primary LAN (vhost0@eth0) and the added VLAN (vhost0.2@eth0.2), which is also on the primary NIC? Why would the VLAN (eth0.2) need a vhost created for it at all?

     

    It's these VHOST NICs (both have the same MAC address) that are showing up in the Unifi logs as sharing the same IP as the Unraid primary NIC.

     

    root@KNOXX:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.200.88/24 metric 1 scope global eth0
           valid_lft forever preferred_lft forever
    4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    5: vhost0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    6: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:d6:d7:15:3b brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    30: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
        link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fe94:7776/64 scope link
           valid_lft forever preferred_lft forever

     

  9. I just happened to open the UR log window today, no particular reason, just taking a peek, and noticed that eth0 and macvtap0 were trading promiscuous mode back and forth.

     

    This seemingly goes on all the time, forever.

     

    Apr 10 10:40:05 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:40:06 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:40:06 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:40:20 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:40:20 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:40:21 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:40:21 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:40:35 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:40:35 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:40:36 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:40:36 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:40:50 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:40:50 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:40:51 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:40:51 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:41:05 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:41:05 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:41:06 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:41:06 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:41:20 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:41:20 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:41:21 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:41:21 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:41:35 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:41:35 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:41:36 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:41:36 KNOXX kernel: device eth0 left promiscuous mode
    Apr 10 10:41:50 KNOXX kernel: device macvtap0 entered promiscuous mode
    Apr 10 10:41:50 KNOXX kernel: device eth0 entered promiscuous mode
    Apr 10 10:41:51 KNOXX kernel: device macvtap0 left promiscuous mode
    Apr 10 10:41:51 KNOXX kernel: device eth0 left promiscuous mode
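    The timestamps above repeat on a roughly 15-second cycle (10:40:20, 10:40:35, 10:40:50, ...), so something is toggling promiscuous mode on a timer, and given that promiscuity is refcounted, that toggling would also bump eth0's count up and down. A quick tally over an excerpt (here-doc trimmed from the log above) confirms every "entered" gets a matching "left":

```shell
# Count enter/leave promiscuous transitions per device in a syslog
# excerpt (trimmed from the log above).
log=$(cat <<'EOF'
Apr 10 10:40:20 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:40:20 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:21 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:21 KNOXX kernel: device eth0 left promiscuous mode
Apr 10 10:40:35 KNOXX kernel: device macvtap0 entered promiscuous mode
Apr 10 10:40:35 KNOXX kernel: device eth0 entered promiscuous mode
Apr 10 10:40:36 KNOXX kernel: device macvtap0 left promiscuous mode
Apr 10 10:40:36 KNOXX kernel: device eth0 left promiscuous mode
EOF
)

# Field 7 is the device, field 8 is "entered"/"left".
counts=$(echo "$log" | awk '/promiscuous/ { n[$7 " " $8]++ }
  END { for (k in n) print k, n[k] }' | sort)
echo "$counts"
```

    On a live box the same awk can be pointed at /var/log/syslog instead of the here-doc.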

     

    I also still have the duplicate IP warnings in my Unifi. I have gone through every post outlining fixes for the MACVLAN networking issues, all of which claim to solve both the freezes and the duplicate IP warnings, but to no avail.

     

    I should add that after following the post below, I am unable to contact the host system from my Home Assistant VM. I can't ping it or map network drives to any content on the host system from within the VM.

     

    I'm happy to provide screenshots of network and docker configs and any log files that might help diagnose this once and for all.

     

     

    Any thoughts here?

    knoxx-diagnostics-20240410-1051.zip

  10. On 3/7/2024 at 12:03 PM, ljm42 said:

     

    Most likely, you disabled bridging in order to enable macvlan. If you want to switch to ipvlan you need to re-enable bridging first. For more info see:

    https://docs.unraid.net/unraid-os/release-notes/6.12.4/#fix-for-macvlan-call-traces

     

    Being able to enable macvlan without getting call traces/crashes was a big enhancement of 6.12.4. A side effect of the network changes is that your router may complain about duplicate IPs on different interfaces. Aside from the complaint from your router it should not cause any actual problems.

     

    I think this may be causing problems when it comes to VMs going through the MACVTAP. I posted a new thread as it's similar but not the same.

     

     

    What do you think @JorgeB ?

     

  11. BUMP

     

    This has to be an issue with UR as I can map SMB drives in HAOS on other systems on the network. It's just connecting to the host IP address or hostname that does not work.

     

    Also, the duplicate HOST IP is on the network again with a different MAC. I suspect that this is the main culprit. I believe the HAOS VM, because it goes through the MACVTAP, thinks that eth0.2 IS the system I am trying to connect to. Since there are no shares there (it's a bridge after all), it fails.

     

    I've cleaned up a bunch of things that were not being used, mainly extra NICs in the system that I chose not to use. There is only one physical NIC in the system now, and it has no bridging or bonding set. I do have a VLAN configured on it, and that is being used for most of my Docker containers.

     

     

     

    (screenshot)

     

     

    Docker config

    (screenshot)

     

     

    Now, from what I gather, because I have a VLAN set, the system is creating a VHOST for it (eth0.2) as well as for eth0. It is the MAC for eth0.2 that is causing the duplicated HOST IP on the network.

    (screenshot)

     

    root@KNOXX:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.200.88/24 metric 1 scope global eth0
           valid_lft forever preferred_lft forever
    5: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.202.2/24 metric 1 scope global eth0.2
           valid_lft forever preferred_lft forever
    6: vhost0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    7: [email protected]: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 500
        link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.202.2/24 scope global vhost0.2
           valid_lft forever preferred_lft forever
    8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:b4:d2:d6:aa brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    10: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:ca:33:2d brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    11: macvtap0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
        link/ether 52:54:00:94:77:76 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5054:ff:fe94:7776/64 scope link 
           valid_lft forever preferred_lft forever
    root@KNOXX:~# 

     

    But I am lost as to why the MAC address for both eth0.2@eth0 and vhost0.2@eth0.2, which are reported in UR as having the IP 192.168.202.2, shows up on the network as having the IP 192.168.200.88?!?
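    To make the overlap concrete, pulling interface/MAC/IP triples out of the dump above shows both collisions at once: the two vhost interfaces share one MAC, and eth0.2 and vhost0.2 share one IP. The here-doc is a trimmed copy of the dump; on the server you'd pipe in `ip addr` itself:

```shell
# Trimmed copy of the `ip addr` dump above; replace the here-doc
# with the real command on a live system.
sample=$(cat <<'EOF'
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.88/24 metric 1 scope global eth0
5: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 3c:8c:f8:ee:59:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.202.2/24 metric 1 scope global eth0.2
7: vhost0.2@eth0.2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP
    link/ether 02:65:43:f8:ad:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.202.2/24 scope global vhost0.2
EOF
)

# One "<interface> <mac> <ipv4>" line per address.
triples=$(echo "$sample" | awk '
  /^[0-9]+:/    { iface = $2; sub(/:$/, "", iface) }
  /link\/ether/ { mac[iface] = $2 }
  /inet /       { split($2, a, "/"); print iface, mac[iface], a[1] }')
echo "$triples"
```

    From there, the open question stands: locally, 02:65:43:f8:ad:c4 only carries 192.168.202.2, yet the network sees it answering for 192.168.200.88.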

     

     

     

    Also, I am curious as to why the MACVTAP is IPv6 only? I have IPv6 disabled in UR and across my network. It's also disabled in HAOS. Does this matter in this instance, or is it strictly for internal purposes and one of the reasons the MACVTAP approach works as it does?

     

  12. 17 minutes ago, Taddeusz said:

    If you’re using an external instance of MariaDB or MySQL you will need to manually create the database and tables using the provided schema sql scripts. Automatic database creation and updating are currently only supported for the container that includes MariaDB.

     

    AH HA! Was that mentioned somewhere that I totally missed?

     

    While I eventually managed to get those scripts to run using phpMyAdmin, I initially tried running them in DBeaver Desktop v24.0.0.202403091004, which always failed with an SQL syntax error for some reason.

     

    Thanks!

  13. Hey all!

     

    Brand new install of this (latest-nomariadb).

     

    Upon startup I get an error:

     

    "An error has occurred and this action cannot be completed. If the problem persists, please notify your system administrator or check your system logs."

     

    I've looked at the Guacamole log and there was nothing. But looking at the other logs (catalina.out and the MariaDB error log), I see that while Guacamole (192.168.202.253) is configured to use the username and password in the configuration file, it doesn't use them to try and connect to the DB server (192.168.200.88).

     

    config

    ### http://guacamole.apache.org/doc/gug/jdbc-auth.html#jdbc-auth-mysql
    ### MySQL properties
    mysql-hostname: 192.168.200.88
    mysql-port: 3306
    mysql-database: guacamole
    mysql-username: guacamole
    mysql-password: *********************************
    mysql-server-timezone: America/New_York

     

     

    catalina.out

    14:28:54.765 [http-nio-8080-exec-7] WARN  o.a.g.e.AuthenticationProviderFacade - The "mysql" authentication provider has encountered an internal error which will halt the authentication process.

     

     

    MariaDB error log

    2024-03-13 14:28:32 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: 'localhost' (This connection closed normally without authentication)
    2024-03-13 14:28:32 6 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
    2024-03-13 14:28:54 7 [Warning] IP address '192.168.202.253' could not be resolved: Name does not resolve
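    A side note on that last warning: as I understand it, "could not be resolved" just means MariaDB failed a reverse-DNS lookup on the client's IP, which is harmless unless you use hostname-based grants. If you don't, it can be silenced with a server config fragment like this (a sketch, assuming you can edit the server's my.cnf; `skip-name-resolve` is a standard MariaDB/MySQL option):

```ini
[mysqld]
# Skip reverse-DNS lookups on connecting clients; grants must then
# use IP addresses (or %) rather than hostnames.
skip-name-resolve
```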

     

    It is trying to connect to the server as root@localhost

     

    This is the initial startup, so there are no tables even created yet on the DB server.

     

     

     

    UPDATE*

    The MariaDB error log just updated with a new line

    2024-03-13 14:38:54 7 [Warning] Aborted connection 7 to db: 'guacamole' user: 'guacamole' host: '192.168.202.253' (Got timeout reading communication packets)

     

    So user 'guacamole' did connect but the connection timed out for some reason?!

  14. I had the issue of duplicate IPs on my network and happened across another post that seemed to sort things out. But it introduced a new issue that I haven't been able to figure out.

     

    I figured out my error after re-reading the post @murkus wrote. I had the custom network for the eth0 still enabled in the docker config.

     

    That solved the duplicate IP, different MAC issue!

     

    BUT! I seem to have now run into another problem. The HAOS VM is using the vhost0 network and is accessible on the network. But there was an error in HAOS about a failed network storage mount that points at the host system.

     

    [853.7504591] CIFS: VFS: Error connecting to socket. Aborting operation.
    [853.7518201] CIFS: VFS: cifs mount failed w/return code = -115

     

    After pulling my hair out, I determined that while every other device on the network can access the HAOS web UI, and HAOS can ping and mount shares on other systems, HAOS can't ping or mount anything on the host system and the host can't ping the VM.

     

    HOST (UnRaid): 192.168.200.88

    HAOS VM: 192.168.200.87

     

    Can't ping VM from HOST

    root@KNOXX:~# ping 192.168.200.87
    PING 192.168.200.87 (192.168.200.87) 56(84) bytes of data.
    From 192.168.200.88 icmp_seq=1 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=2 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=3 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=4 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=5 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=6 Destination Host Unreachable
    From 192.168.200.88 icmp_seq=7 Destination Host Unreachable
    ^C
    --- 192.168.200.87 ping statistics ---
    8 packets transmitted, 0 received, +7 errors, 100% packet loss, time 7195ms
    pipe 4
    root@KNOXX:~# 

     

    Can't ping HOST from VM

    image.png.124a5b4b551d1a91563ab4b726d27c7d.png

     

    Host UnRaid routing table

    root@KNOXX:~# ip route
    default via 192.168.200.1 dev eth0 metric 1 
    default via 192.168.202.1 dev eth0.2 metric 2 
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
    192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown 
    192.168.200.0/24 dev eth0 proto kernel scope link src 192.168.200.88 metric 1 
    192.168.200.0/24 dev br1 proto kernel scope link src 192.168.200.253 metric 1 linkdown 
    192.168.202.0/24 dev vhost0.2 proto kernel scope link src 192.168.202.2 
    192.168.202.0/24 dev eth0.2 proto kernel scope link src 192.168.202.2 metric 1 
    root@KNOXX:~# 

     

    HAOS VM routing table

    # ip route
    default via 192.168.200.1 dev enp1s0 proto static metric 100
    172.30.32.0/23 dev hassio proto kernel scope link src 172.30.32.1
    172.30.232.0/23 dev docker0 proto kernel scope link src 172.30.232.1
    192.168.200.0/24 dev enp1s0 proto kernel scope link src 192.168.200.87 metric 100
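    The symptom pattern above (the rest of the LAN reaches both machines, but host and VM can't reach each other) is the classic macvlan parent isolation: the kernel drops traffic between a parent interface and macvlan children hanging off it. On stock Linux the usual workaround is a macvlan "shim" interface on the host; Unraid's vhost0 is supposed to play that role, so treat the following only as a diagnostic sketch (the .250 address is an arbitrary spare, not from the post):

    ```
    # Generic Linux macvlan workaround, NOT an Unraid-specific fix
    ip link add macvlan-shim link eth0 type macvlan mode bridge
    ip addr add 192.168.200.250/32 dev macvlan-shim   # example spare address
    ip link set macvlan-shim up
    ip route add 192.168.200.87/32 dev macvlan-shim   # reach the HAOS VM via the shim
    ```

    If pinging 192.168.200.87 only works after adding such a route, the problem is the parent/child isolation rather than anything inside HAOS.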

     

     

    Any thoughts??

  15. On 3/7/2024 at 6:40 PM, murkus said:

    to avoid vhost0@eth0 to use the same IP as eth0 (will be alarmed by arpwatch, pfSense, TrueNAS, etc.): do NOT enable BOTH of IPv4 custom network on interface eth0 (optional) (default is ON) and Host access to custom networks (default is OFF)

     

    Does this step not interfere with the new workaround to solve the MACVLAN call trace issues? In the config docs it states "Settings > Docker > Host access to custom networks = Enabled". If I recall correctly, this was required to allow the workaround to function properly as a custom network is created when docker is started.

     

  16. I haven't run into any issues myself. I'm not really a scripter. This was built with the help of ChatGPT; I just kept adding revisions until it did everything I was looking for.

     

    I've run into some docker containers that don't provide UID/GID (PUID/PGID) settings, so it's impossible to set those. That's why I created this script. It has saved me the headache of not being able to access files created by those apps across the LAN/SMB.
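    For anyone wanting a starting point, a minimal sketch of the idea (this is not the actual script from the post; the modes and the nobody:users ownership are the common Unraid conventions, and the paths are examples):

    ```shell
    #!/bin/sh
    # Sketch: normalize permissions under an appdata path so files created by a
    # container without PUID/PGID support stay readable over LAN/SMB.
    fix_perms() {
        find "$1" -type d -exec chmod 775 {} +   # directories: rwxrwxr-x
        find "$1" -type f -exec chmod 664 {} +   # files: rw-rw-r--
        # On Unraid you would typically also run, as root:
        #   chown -R 99:100 "$1"                 # nobody:users
    }

    # demo on a throwaway directory
    demo=$(mktemp -d)
    touch "$demo/example.conf"
    chmod 600 "$demo/example.conf"
    fix_perms "$demo"
    stat -c '%a' "$demo/example.conf"   # prints 664
    ```

    On a real server this would be pointed at something like /mnt/user/appdata and run from cron or the User Scripts plugin.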

  17. 13 hours ago, GlassPup said:

    I used “Unassigned Devices” and “Unassigned Devices Plus” plugins to mount the Synology DS920+ directly within Unraid. From there I just rsync from one /mnt/ to another. Works pretty well so far and hovers around 70-80Mbs.

    Not everyone, myself included, is super comfortable with the CLI or writing scripts. And for all the things Synology does wrong, one thing they did right IMO is the UI for Hyper Backup. It's so easy to pick and choose a bunch of folders across the NAS and schedule a backup. I used to use Hyper Backup pointed at a QNAP NAS that had an rsync server running. Took all of 2 minutes to get set up.

     

    But UR can't be used as a target rsync server, or at least neither I nor anyone else has figured out how. So all the backups across the network need to be done from within the UR CLI, mapping each remote folder one by one.

     

    AFAIK there also aren't any built-in failure notifications with that approach, so that would mean scripting out error detection and notification functions, etc. I think you can see where this is going.

     

    It would be so much simpler if there were either a built-in rsync server that could handle network-wide backups as a target, or an addon/userscript/docker template that offered these functions.

     

    Alas, I've never been able to find one. I've resorted to getting another Synology and backing them up to each other.
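    For what it's worth, Unraid does ship the plain rsync binary, so in principle a stock rsync daemon can be pointed at a share and used as a Hyper Backup/rsync target. A minimal rsyncd.conf sketch (module name and path are examples, and this is a manual setup, not a supported Unraid feature):

    ```ini
    # /etc/rsyncd.conf (example location) -- expose one share as an rsync module
    uid = nobody
    gid = users
    use chroot = yes

    [backups]
        path = /mnt/user/backups
        comment = rsync target for remote backup jobs
        read only = no
    ```

    Starting it with `rsync --daemon` would let clients push to rsync://<server>/backups, though it won't survive a reboot without a go-file or userscript entry, and it adds none of the notification plumbing mentioned above.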

  18. 4 hours ago, Caleb Bassham said:

    You may need to use a name without the `.` in it. So like eth02 or something. Then specify the name with the name key.

     

    So:

     

    networks:
      eth02:
        name: "eth0.2"
        external: true

     

     

    Unraid itself named the VLAN eth0.2 when I created it in the UR network settings. If that produces an invalid Docker network name, perhaps the UR devs should be notified that this needs fixing @JorgeB.

     

    I did manage to get this working with some help from a Reddit post I put up a few days after my post here.

     

    While it is working, it is presented differently in the UR Docker list. The network shown is the hex ID of what appears to be a new network. But it does work.

     

    image.thumb.png.6ec704c00f98d204c86eed0f89246624.png
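    For anyone landing here later, the working shape based on Caleb's suggestion looks roughly like this (image and address copied from my original compose attempt; the dot-free key vlan2 is an arbitrary name):

    ```yaml
    services:
      app:
        image: jessewebdotcom/homepage-for-tesla:latest
        networks:
          vlan2:
            ipv4_address: 192.168.202.70

    networks:
      vlan2:               # dot-free key used inside this file
        name: "eth0.2"     # the actual Docker network Unraid created
        external: true
    ```

    Note that with a macvlan network the container is reached directly at its ipv4_address, so a ports: mapping is unnecessary.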

  19. I really do like this app and am very thankful for the team that put it together. 

     

    That being said, I feel this image should be transitioned into a single container with Elasticsearch and Redis included. Having those dependencies as external containers is confusing to new adopters and has clearly introduced issues, judging by the number of threads.

     

    I don't have any idea HOW to create an image with all the dependencies included.......yet. 

     

    But I'm gonna start looking into how that's done. If someone else beats me to it, that'd be great!

  20. Haven't been working with Docker for very long and even less with compose.

     

    I have a project that I was trying to get working in compose as there's no template for it in the UR app store.

     

    https://github.com/JesseWebDotCom/homepage-for-tesla

     

    I have a MACVLAN docker network already in place (eth0.2) for some other external services, so I was trying to interpret what I found on other sites about adding those settings to the compose YAML. I must have something wrong as it's not working at all. It doesn't crash, but I can't access anything from the IP I assigned it.

     

    If it's easier to create an UR template rather than crack the compose mystery then I'm all for that. Mind you, I don't have a clue how to do that yet either lol.

     

    version: '3'
    services:
      app:
        image: jessewebdotcom/homepage-for-tesla:latest
        volumes:
          - /mnt/user/appdata/homepage-for-tesla/bookmarks/bookmarks.json:/app/public/bookmarks.json
          - /mnt/user/appdata/homepage-for-tesla/images:/app/public/images
        ports:
          - "80:80"
        networks:
          eth0.2:
            ipv4_address: 192.168.202.70
    networks:
      eth0.2:
        external: true

     

  21. 1 hour ago, JorgeB said:

    Flashing to IT mode would be better, note that it may change the disks ID and require a new config, but it shouldn't based on the screenshot.

     

    As for performance, the controller won't usually bottleneck 8 disks, but it will with SSDs; since you have 10 disks I assume there's an expander backplane? The speeds you are showing are certainly not being limited by the controller; check if the disks are linking at SATA3 speed.

     

    All disks are negotiating at either SATA 3.1 or SATA 3.3. All 6Gbps.

     

    I checked the disk specs and they are all capable of 240MB/s read/write. I guess that all depends on the files being read/written. Smaller files would definitely hinder the drives from hitting top speed before the operation finishes.

     

    That being said, I ran unbalance on the drive I pulled out first to get as much off as I could. That move was running at about 48MB/s read. Reconstruct write was enabled, which is supposed to speed things up. And I understand that the FUSE/parity calculation is intensive, but my dual Xeon CPUs and 96GB of RAM were practically sleeping while all of this was happening. So I don't get where the slowdown comes from.

  22. Hey all!

     

    I've been running UR on a Dell R510 with a PERC H310 HBA for a while now. I understand that this card is not considered the go-to choice, but at the time I had no clue and went with the board that was included. I was told that it would work, and it does; just not 100%. More like 99.99%.

     

    That being said, I recently started a parity swap with a new drive. I've been watching the process and the read/writes are about 140-200MB/s between the two drives. It got me thinking that the HBA might be holding the server's performance back from what it could be.

     

    UR doesn't recognize the PERC H310 as a PERC H310. Instead it loads the Broadcom / LSI MegaRAID SAS 2008 [Falcon] (rev 03) driver, according to the hardware report.

     

    So my questions are;

     

    1. Can I/should I flash the HBA with alternate FW to allow for better integration with UR?
    2. Does the PERC H310 have hardware limitations that keep the system from running as it could?
    3. If flashing is not an option and the PERC H310 is a bottleneck, any suggestions for an alternate controller?
    4. I assume swapping the controller might cause issues with the UR license on the flash. What's the process to correct that?
    5. I also assume that swapping the controller, as long as all the drives are connected afterward, should retain the parity.

     

    If I am off base anywhere by all means steer me in the right direction.

     

    Thanks all!

     

    image.thumb.png.d94e121184443ce518207ba768510634.png

     

     

    PS. The more I am learning about UR, Linux, VM's and docker the more fun I am having!!

  23. On 8/25/2023 at 9:51 AM, kri kri said:

    Getting this error now on RedisJSON 

     

    8:M 25 Aug 2023 13:50:52.551 * <redisgears_2> Failed loading RedisAI API.
    8:M 25 Aug 2023 13:50:52.551 * <redisgears_2> RedisGears v2.0.11, sha='0aa55951836750ceabd9733decb200f8a5e7bac3', build_type='release', built_for='Linux-ubuntu22.04.x86_64'.
    8:M 25 Aug 2023 13:50:52.558 * <redisgears_2> Registered backend: js.
    8:M 25 Aug 2023 13:50:52.559 * Module 'redisgears_2' loaded from /opt/redis-stack/lib/redisgears.so
    8:M 25 Aug 2023 13:50:52.559 * Server initialized
    8:M 25 Aug 2023 13:50:52.560 * <search> Loading event starts
    8:M 25 Aug 2023 13:50:52.560 * <redisgears_2> Got a loading start event, clear the entire functions data.
    8:M 25 Aug 2023 13:50:52.560 * Loading RDB produced by version 6.2.13
    8:M 25 Aug 2023 13:50:52.560 * RDB age 794829 seconds
    8:M 25 Aug 2023 13:50:52.560 * RDB memory usage when created 1.10 Mb
    8:M 25 Aug 2023 13:50:52.560 # The RDB file contains AUX module data I can't load: no matching module 'graphdata'

     

    Me too. And then TA shuts down after a series of connection attempts.

     

    I really am not a fan of multiple container operations for a single service to function.
