Helmonder

Mellanox interface not showing



I just installed my Mellanox card in my server (and another one in my backup server).

 

The card is a HP 592520-B21 4X QDR CX-2 Dual Port Adapter Mellanox ConnectX-2 MHQH29B-XTR

 
Interface - System devices shows:
 
IOMMU group 1:	[8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 05)
[15b3:673c] 01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)

 

I am not fully Linux-savvy when it comes to this stuff, but I think the driver is loading:

 

01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)
        Subsystem: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
        Kernel driver in use: mlx4_core
        Kernel modules: mlx4_core

 

I am expecting to see an extra NIC in my Settings - Network, but nothing is visible.

 

There are no errors in the log (diagnostics attached). The only lines specific to the Mellanox card are the following, which do not look like errors:

 

May 10 14:12:48 Tower kernel: mlx4_core: Mellanox ConnectX core driver v4.0-0
May 10 14:12:48 Tower kernel: mlx4_core: Initializing 0000:01:00.0

And a bit further down:

 

May 10 14:12:48 Tower kernel: mlx4_core 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
What am I looking at here? Am I correct in thinking that a second interface should "appear" in the Unraid settings? Or does it not work that way?
 
ip link show shows:
 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
13: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
14: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:dc:26:eb:4b brd ff:ff:ff:ff:ff:ff

 

netstat -i shows:

Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0     1500  6333525      0    181 0       4445360      0      0      0 BMPmRU
br0       1500  6304380      0     39 0       4293148      0      0      0 BMRU
docker0   1500        0      0      0 0             0      0      0      0 BMU
eth0      1500  6304562      0      0 0       4445358      0      0      0 BMsRU
eth1      1500    28962      0      0 0             0      0      0      0 BMsRU
lo       65536      112      0      0 0           112      0      0      0 LRU

 

Any help is appreciated .. 

 

tower-diagnostics-20190510-1306.zip

15 minutes ago, Helmonder said:

I am expecting to see an extra NIC in my Settings - Network, but nothing is visible.

It should, try deleting /boot/config/network-rules.cfg and then reboot.
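For anyone following along, the suggested fix can be sketched as a small shell function. This is a hedged sketch, not from the thread: the /boot/config path comes from the post above, while the function wrapper and the backup filename are my own additions (parameterized so the snippet can be tried against a scratch directory rather than a live server).

```shell
# Back up and remove the persistent NIC-naming rules so Unraid
# regenerates them (including the new card) on the next boot.
reset_network_rules() {
    local cfg="$1/network-rules.cfg"
    if [ -f "$cfg" ]; then
        cp "$cfg" "$cfg.bak"   # keep a copy next to the original
        rm "$cfg"
    fi
}
# On the server itself:
#   reset_network_rules /boot/config && reboot
```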


Thanks !

 

- Stopped the array

 

The file now shows the following:

 

# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:62", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:63", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

- Removed the file

- Rebooting..

- System back up. In settings still only eth0 and eth1

 


Your Mellanox adapter uses an "InfiniBand" interface and not an "ethernet" interface.

InfiniBand is not supported by Unraid in its current release, but it is under investigation.

 


Arrghh.... I was afraid of something like that.. That's what I get for getting enthusiastic too soon..

Just to be sure: is there nothing I can do about it?

What would your best guess be: wait it out and see if Unraid will support it, or go out and eBay new cards?

10 minutes ago, bonienl said:

Your Mellanox adapter uses an "Infiniband"

Yep, I didn't notice that. I believe most Mellanox NICs can be switched to Ethernet mode; you'll need to google it though.

11 minutes ago, Helmonder said:

What would your best guess be: wait it out and see if Unraid will support it, or go out and eBay new cards?

No guarantees if/when Unraid might support InfiniBand (in my view it is a niche product aimed at data centers)


Checking this out:

 

# lspci | grep Mellanox

01:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX IB QDR, PCIe 2.0 5GT/s] (rev b0)

# echo ib > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1

# echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2

From:

 

https://community.mellanox.com/s/article/howto-change-port-type-in-mellanox-connectx-3-adapter

 

I actually found these files in the file system:

 

 

Capture.JPG

 

I was able to change the port type and now have them active!

 

The way I did it now, the change is not persistent.. There is an option to make it persistent, but I think that in the case of Unraid I cannot do that myself.
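For anyone scripting the same switch, it can be wrapped in a small function. A hedged sketch under assumptions: the PCI address 0000:01:00.0 is taken from the lspci output earlier in the thread, and the sysfs root is parameterized only so the function can be exercised against a fake directory tree instead of real hardware.

```shell
# Switch both ports of an mlx4 card at the given PCI address to ethernet
# mode by writing to the mlx4_portN sysfs nodes (non-persistent).
set_mlx4_ports_eth() {
    local sysroot="${2:-/sys/bus/pci/devices}"
    local dev="$sysroot/$1"
    local port
    for port in mlx4_port1 mlx4_port2; do
        [ -e "$dev/$port" ] && echo eth > "$dev/$port"
    done
}
# On the server: set_mlx4_ports_eth 0000:01:00.0
```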


You can plug the card into a Windows machine and adjust the config there, if it isn't persistent. That's how I handled mine.


Did that... but in the options dialogue there is no option to change the port..


I have made an addition at the very beginning of my go file to apply the needed setting:

 

# Set Mellanox cards to ethernet
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2
#

This works. There is however one major disadvantage... The interfaces come up as being down in Unraid. Putting them up is not an issue in itself, but Docker will only recognize the interfaces that are up when Docker is started, which means that after every reboot I need to:

 

- enable interfaces for the Mellanox card

- stop docker

- start docker
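A hypothetical way to fold those manual steps into the go file itself. This is a sketch, not a tested recipe: the interface names eth2/eth3 are my assumption (check `ip link` on your own system), and it relies on Docker starting later in boot than the go file so that it already sees the interfaces as up.

```shell
# Hypothetical go-file fragment (not from the thread): switch the ports
# to ethernet and bring the resulting interfaces up early in boot.
echo eth > /sys/bus/pci/devices/0000:01:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000:01:00.0/mlx4_port2
ip link set eth2 up 2>/dev/null || true   # eth2/eth3 names assumed
ip link set eth3 up 2>/dev/null || true
```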

 

There appears to be an option to make the setting persistent, but I cannot find the necessary folders in the Unraid filesystem (and also, since the Unraid root filesystem is rebuilt at every reboot, I think making these settings there would not survive anyway):

 

Option 2 (in case RDMA is in use):

Edit the file /etc/rdma/mlx4.conf:

Note: This file is read when the mlx4_core module is loaded and used to set the port types for any hardware found.

Format:

<pci_device_of_card> <port1_type> [port2_type]

port1 and port2: One of "auto", "ib", or "eth". port1 is required at all times, port2 is required for dual port cards.

For example:

0000:05:00.0 eth eth
Perform reboot to reload the modules

#reboot
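For reference, applied to the card in this thread (PCI address 0000:01:00.0 from the earlier lspci output), that mlx4.conf entry would read:

```
# /etc/rdma/mlx4.conf: both ports of the ConnectX-2 at 01:00.0 as ethernet
0000:01:00.0 eth eth
```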

I cannot find the mlx4.conf file in the Unraid file system, but it -does- appear in the filesystem of the dockers:

 

/etc/modprobe.d/mlx4.conf

 

Anyone have any idea?


Would you try flashing the firmware, so that the card changes from VPI mode to Eth-only mode?

Put simply, Unraid doesn't support those VPI-firmware cards well.

I had a similar case: an Emulex NIC couldn't be used, but after changing the firmware it worked with Unraid.

 

14 hours ago, Benson said:

Would you try flashing the firmware, so that the card changes from VPI mode to Eth-only mode?

Put simply, Unraid doesn't support those VPI-firmware cards well.

I had a similar case: an Emulex NIC couldn't be used, but after changing the firmware it worked with Unraid.

 

Would love to do that, but how ?

 

At the moment my log is flooded with the following error messages:

 

May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2

 

On 5/10/2019 at 9:16 PM, Helmonder said:

The card is a HP 592520-B21 4X QDR CX-2 Dual Port Adapter Mellanox ConnectX-2 MHQH29B-XTR

http://www.mellanox.com/page/firmware_table_ConnectX2IB

 

1.PNG

So you have a ConnectX-2 card; this card has version A1 or A2.

 

Then you need to decide, at your own risk, which ConnectX-2_EN firmware to flash by changing the PSID. There are several methods under different OS platforms; please google it thoroughly.

 

 

I believe the 29 models are dual-port and the 19 models are single-port.

 

2.PNG

 

 

 

 


I took the easy way out and bought myself a new Mellanox card that -does- do eth out of the box 🙂 These things are really cheap on eBay..

 

I now have a MELLANOX MNPH29D-XTR DUAL PORT 10G ADAPTER CARD - MNPH29D-XTR

 

This one does eth out of the box; now waiting for the DAC cable...

 

Btw: I ordered through SF.COM... These guys are really extremely cheap..


06:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)

I have 2 pieces of Mellanox MNPA19-XTR (single port), and they all work out of the box on unRAID 6.6.7 (I didn't try any earlier versions, but I read on the forum that it should work as well).

Definitely the cheapest way to go 10GbE.

 

Cheers


For those reading this now who might be new to the forum (such as myself): if you are using Linux, you can set the port type on Mellanox InfiniBand cards using the following procedure:

 

1. su to root (if you're running Ubuntu, etc., where the root account is disabled by default, you can either enable the root account or use sudo -s)

 

2. Download and install the Mellanox Firmware Tools (MFT).

 

3. Find out the PCI device name:

# mlxfwmanager --query

That will query all devices and output the PCI device name, which you will need in order to set the port link types.

 

4. Set the port link types:

# mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

Replace the argument after the '-d' flag with your PCI device name obtained in step 3. The example I have provided above matches my card: it is dual-port, and therefore I can set the link type for both ports.

 

Alternatively, if you have a dual-port card and you actually USE InfiniBand (because you aren't only doing a NIC-to-NIC direct-attached link but are plugged into a switch), then you might set one port to run IB and the other to run ETH.

 

Perhaps this might be useful for other people in the future, who might be using something like this.

 

(P.S. The Mellanox 100 GbE switches are more expensive (per port) than their IB switches.)

