• Mellanox MT26448 not working on 6.9 (but working great on 6.8.3)


    Helmonder
    • Solved Minor

    After a couple of days of mucking about I was able to update 6.8.3 to 6.9. Pretty proud of myself :-)

     

I am now running 6.9, although without my 10Gb card functioning. I am running on a bond with the 10Gb card in it together with two 1Gb connections.

     

My network speeds are at 1Gb level, so I can safely assume that the 10Gb port is not active. That is also logical, because with -only- the 10Gb port configured as connection there was no connection at all.

     

    For details on this issue see:

     

     

At the moment, as stated, the upgrade has worked and I am now stuck without my 10Gb as a separate issue.

     

    It concerns a Mellanox SFP+ card with two uplinks:

     

    IOMMU group 12:[15b3:6750] 03:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
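
The bracketed [15b3:6750] in that line is the PCI vendor:device ID (15b3 is Mellanox); this ConnectX EN generation should be handled by the in-tree mlx4_core/mlx4_en modules, and on a live box `lspci -nnk -s 03:00.0` shows which driver actually claimed the card. As a small sketch, pulling the ID out of a line like the one above:

```shell
# Extract the vendor:device ID from the lspci-style line quoted above.
line='IOMMU group 12:[15b3:6750] 03:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)'
# Anchor on the ":[" after the IOMMU group number so the second bracket
# pair ("[ConnectX EN ...]") is not matched.
id=$(printf '%s\n' "$line" | sed -n 's/.*:\[\([0-9a-f:]*\)\].*/\1/p')
echo "$id"   # -> 15b3:6750
```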

     

My current diagnostics are also attached.

    tower-diagnostics-20210308-2114.zip

     

Output of "ip l show":

     

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/gre 0.0.0.0 brd 0.0.0.0
    4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
        link/sit 0.0.0.0 brd 0.0.0.0
    12: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:94:71:62
    13: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:94:71:63
    14: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:60 brd ff:ff:ff:ff:ff:ff
    15: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff
    16: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff
    17: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff
    18: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
        link/ether 52:54:00:64:42:af brd ff:ff:ff:ff:ff:ff
    19: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq master virbr0 state DOWN mode DEFAULT group default qlen 1000
        link/ether 52:54:00:64:42:af brd ff:ff:ff:ff:ff:ff
    20: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq master br0 state UNKNOWN mode DEFAULT group default qlen 1000
        link/ether fe:54:00:aa:46:a4 brd ff:ff:ff:ff:ff:ff
    21: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
        link/ether 02:42:06:45:db:64 brd ff:ff:ff:ff:ff:ff
    23: veth3911429@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether ce:61:ee:ba:fa:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    25: veth8a6fc6e@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
        link/ether 96:2b:5c:d4:58:04 brd ff:ff:ff:ff:ff:ff link-netnsid 1

     

The 10Gb interface is eth0, so:

     

    15: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff

     

For the life of me, I do not see anything wrong with it?
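
One thing the dump does show is that eth0 is properly enslaved at the link layer: every bond member reports the bond's MAC (00:02:c9:52:e9:61) on its link/ether line, with the original hardware address moved to permaddr. eth0 shows no permaddr because its hardware MAC is the one the bond adopted (00:02:c9 is a Mellanox OUI). A sketch that lists slaves with their permanent MACs from such output; the here-document holds two lines copied from the dump above, and on the server you would pipe `ip link show` in instead:

```shell
# List bond slaves and their permanent (hardware) MACs from 'ip link' output.
# Falls back to the link/ether MAC when no permaddr is shown, i.e. the slave
# whose hardware MAC the bond adopted.
awk '
  /SLAVE/ && /master bond0/ { sub(":", "", $2); dev = $2 }
  dev != "" && /link\/ether/ {
      if (match($0, /permaddr [0-9a-f:]+/))
          mac = substr($0, RSTART + 9, RLENGTH - 9)   # strip "permaddr "
      else
          mac = $2
      print dev, mac
      dev = ""
  }' <<'EOF'
12: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:94:71:62
15: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:02:c9:52:e9:61 brd ff:ff:ff:ff:ff:ff
EOF
```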

     

    Note: same behaviour on 6.9.1

     

    Any help is appreciated.

     

     





    Recommended Comments

What do you want me to do with it? It is installed and shows info, but I don't see anything I can actually do with it?

     

(screenshot attached: mft.JPG)


I wasn't sure if the package also includes the drivers for the cards. @ich777, any suggestions for Helmonder on possible solutions to get the card working?


    I'm running a ConnectX2 in my DevServer and a ConnectX3 on my Main server and have no problem whatsoever.

     

The plugin also shows information about the card itself, so the modules should be loaded and I think nothing is preventing it from running.

Maybe try resetting the network settings and starting over again with only the 10Gbit/s card.


That is what I had... just the 10Gb... working perfectly in 6.8.3...

     

    But when I then update to 6.9 there is no network, see 

     

I only made this bond to be able to get 6.9 running... The situation is basically the same, only the other NICs in the bond do work, which is why my server is reachable. If I now remove the bond, that will only result in the server not being reachable anymore.


Ok... did something else now.

     

The non-working 10Gb card was eth0, which was now in a bond with eth1 and eth3; eth1 and eth3 are regular LAN ports on my motherboard.

     

- I now swapped eth0 and eth1, so the 10Gb card is now eth1.

- Then I removed the 10Gb eth1 from the bond (connectivity still ok).

- Then I tried to configure the 10Gb interface... AND THE THING GOT AN IP ADDRESS !!

     

    That means that the card itself -is- working on an OS level... This is getting more and more curious...

     

- Now I moved the eth1 10Gb card back into the bond.

- Tested connectivity: back at 10Gb speeds !

     

- Did a reboot, half expecting it to not work again, but... it still works !
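
Worth keeping for next time: the bonding status file shows at a glance which slave is active and at what speed, so a regression like this would show up immediately. A sketch filtering the interesting fields; the sample text is what an active-backup bond with the 10Gb slave active typically reports, not a capture from this server:

```shell
# On the live server the real check is:
#   grep -E 'Active Slave|^Slave Interface|^Speed' /proc/net/bonding/bond0
# The sample below is illustrative (assumed active-backup mode, 10Gb on eth1).
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up
Slave Interface: eth1
Speed: 10000 Mbps
Slave Interface: eth0
Speed: 1000 Mbps
Slave Interface: eth3
Speed: 1000 Mbps'

printf '%s\n' "$status" | grep -E 'Active Slave|^Slave Interface|^Speed'
```

If the active slave ever falls back to one of the 1Gb ports, the first matched line shows it directly.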

     

     

     

    Edited by Helmonder



