Helmonder Posted May 10, 2019 (edited)

I just installed my Mellanox card in my server (and another one in my backup server). The card is an HP 592520-B21 4X QDR CX-2 Dual Port Adapter (Mellanox ConnectX-2 MHQH29B-XTR).

Tools - System Devices shows:

IOMMU group 1:
[8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 05)
[15b3:673c] 01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)

I am not fully Linux savvy when it comes to this stuff, but I think the driver is loading:

01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)
    Subsystem: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
    Kernel driver in use: mlx4_core
    Kernel modules: mlx4_core

I am expecting to see an extra NIC in Settings - Network, but nothing is visible.

There are no errors in the log (diagnostics attached). Specific to the Mellanox card I see the following, which does not look like an error:

May 10 14:12:48 Tower kernel: mlx4_core: Mellanox ConnectX core driver v4.0-0
May 10 14:12:48 Tower kernel: mlx4_core: Initializing 0000:01:00.0

And a bit further down:

May 10 14:12:48 Tower kernel: mlx4_core 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)

What am I looking at here? Am I correct in thinking that a second interface should "appear" in Unraid settings? Or does it not work that way?

ip link show shows:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
13: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
14: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:dc:26:eb:4b brd ff:ff:ff:ff:ff:ff

netstat -i shows:

Iface    MTU    RX-OK    RX-ERR  RX-DRP  RX-OVR  TX-OK    TX-ERR  TX-DRP  TX-OVR  Flg
bond0    1500   6333525  0       181     0       4445360  0       0       0       BMPmRU
br0      1500   6304380  0       39      0       4293148  0       0       0       BMRU
docker0  1500   0        0       0       0       0        0       0       0       BMU
eth0     1500   6304562  0       0       0       4445358  0       0       0       BMsRU
eth1     1500   28962    0       0       0       0        0       0       0       BMsRU
lo       65536  112      0       0       0       112      0       0       0       LRU

Any help is appreciated.

tower-diagnostics-20190510-1306.zip
JorgeB Posted May 10, 2019

15 minutes ago, Helmonder said: I am expecting to see an extra NIC in my Settings - Network, but nothing is visible.

It should; try deleting /boot/config/network-rules.cfg and then reboot.
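For reference, from the Unraid console that comes down to something like this (a sketch; the path is the one above, and the rules file is regenerated on boot):

# Remove the persisted NIC naming rules, then reboot so they are
# regenerated with the new card included
rm /boot/config/network-rules.cfg
reboot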
Helmonder Posted May 10, 2019 Author

Thanks!

- Stopped the array

The file shows the following:

# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:62", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x1533 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:63", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

- Removed the file
- Rebooting...
- System back up. In Settings there are still only eth0 and eth1.
bonienl Posted May 10, 2019

Your Mellanox adapter uses an "InfiniBand" interface and not an "Ethernet" interface. InfiniBand is not supported by Unraid in its current release, but it is under investigation.
Helmonder Posted May 10, 2019 Author

Arrghh... I was afraid of something like that. That's what I get for getting enthusiastic too soon.

Just to be sure: nothing I can do about it?

What would your best guess be: wait it out to see if Unraid will support it, or go out and eBay new cards?
JorgeB Posted May 10, 2019

10 minutes ago, bonienl said: Your Mellanox adapter uses an "Infiniband"

Yep, didn't notice that. I believe most Mellanox NICs can be switched to Ethernet mode; you'll need to google it though.
bonienl Posted May 10, 2019

11 minutes ago, Helmonder said: What would your best guess be: wait it out to see if unraid will support it or go out and ebay new cards ?

No guarantees if/when Unraid might support InfiniBand (in my view it is a niche product aimed at data centers).
Helmonder Posted May 10, 2019 Author (edited)

Checking this out:

# lspci | grep Mellanox
01:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX IB QDR, PCIe 2.0 5GT/s] (rev b0)
# echo ib > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
# echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2

From: https://community.mellanox.com/s/article/howto-change-port-type-in-mellanox-connectx-3-adapter

I actually found these files in the file system. I was able to change the port type and now have both ports active!

The whole thing is not persistent the way I did it now. There is an option to do it with persistence, but I think that in the case of Unraid I cannot do that myself.
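In case it helps anyone following along: the same sysfs files can be read back to check which mode each port is in before and after the echo (a sketch; the PCI address is my card's and will differ on other systems):

# Print the current type of each port ("ib", "eth" or "auto")
cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2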
Dr. Ew Posted May 10, 2019

You can plug the card into a Windows machine and adjust the config there if it isn't persistent. That's how I handled mine.
Helmonder Posted May 10, 2019 Author

Did that... but in the options dialogue there is no option to change the port type.
Helmonder Posted May 11, 2019 Author (edited)

I have made an addition at the very beginning of my go file to make the needed setting:

# Set Mellanox cards to ethernet
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2

This works. There is however one major disadvantage... The interfaces come up as being down in Unraid. Putting them up is not an issue in itself, but Docker will only recognize the interfaces that are up when Docker is started, which means that after every reboot I need to:

- enable the interfaces for the Mellanox card
- stop Docker
- start Docker

There appears to be an option to make the setting persistent, but I do not seem to find the necessary folders in the Unraid filesystem (and also: since the Unraid filesystem is rebuilt during reboot, I think making these settings there will not work anyhow):

Option 2 (in case RDMA is in use): Edit the file /etc/rdma/mlx4.conf.
Note: This file is read when the mlx4_core module is loaded and used to set the port types for any hardware found.
Format: <pci_device_of_card> <port1_type> [port2_type]
port1 and port2: one of "auto", "ib", or "eth". port1 is required at all times; port2 is required for dual port cards.
For example: 0000:05:00.0 eth eth
Perform a reboot to reload the modules:
# reboot

I cannot find the mlx4.conf file in the Unraid file system, but it -does- appear in the filesystem of the Dockers: /etc/modprobe.d/mlx4.conf

Anyone have any idea?
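In the meantime the manual steps could probably also be scripted in the go file itself, so the ports are already up before Docker starts (a sketch, untested; eth2/eth3 are hypothetical names, use whatever the Mellanox ports are called on your system):

# /boot/config/go (fragment)
# Switch the Mellanox ports to Ethernet mode...
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2
# ...and bring the resulting interfaces up early, so Docker already
# sees them when it starts (interface names are hypothetical)
ip link set eth2 up
ip link set eth3 up

Since the go file runs before Docker is started, this should avoid the stop/start dance after every reboot.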
Vr2Io Posted May 11, 2019 (edited)

You could try flashing the firmware so the card changes from VPI mode to Eth-only mode. Put simply, Unraid doesn't support cards with the VPI firmware well. I had a similar case: an Emulex NIC that couldn't be used, but after a firmware change it worked with Unraid.
Helmonder Posted May 12, 2019 Author

14 hours ago, Benson said: You could try flashing the firmware so the card changes from VPI mode to Eth-only mode.

I would love to do that, but how?

At the moment my log is flooded with the following error messages:

May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
Vr2Io Posted May 12, 2019 (edited)

On 5/10/2019 at 9:16 PM, Helmonder said: The card is a HP 592520-B21 4X QDR CX-2 Dual Port Adapter Mellanox ConnectX-2 MHQH29B-XTR

http://www.mellanox.com/page/firmware_table_ConnectX2IB

So you have a ConnectX-2 card; it comes in hardware revision A1 or A2.

You then need to decide, at your own risk, which ConnectX-2 EN firmware to flash to by changing the PSID. There are several methods of doing this under different OS platforms; please google it in depth.

I believe the "29" in the model number means 2 ports and "19" means 1 port.
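The cross-flash itself is normally done with flint from the Mellanox Firmware Tools, or the open-source mstflint package (a sketch, untested and entirely at your own risk; the firmware file name is hypothetical and must match the card's A1/A2 hardware revision):

# Burn a ConnectX-2 EN firmware image onto the VPI card; allowing the
# PSID to change is what makes this a cross-flash to the EN firmware
mstflint -d 01:00.0 -i fw-ConnectX2-EN.bin -allow_psid_change burn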
Helmonder Posted May 21, 2019 Author

I took the easy way out and bought myself a new Mellanox card that -does- do eth out of the box 🙂 These things are really cheap on eBay.

I now have a MELLANOX MNPH29D-XTR DUAL PORT 10G ADAPTER CARD - MNPH29D-XTR

This one does eth out of the box; now waiting for the DAC cable...

Btw: I ordered through SF.COM... These guys are really extremely cheap.
Helmonder Posted May 24, 2019 Author

And it works!

For anyone interested, the following eBay card works out of the box immediately:

https://www.ebay.nl/itm/MELLANOX-MNPH29D-XTR-DUAL-PORT-10G-ADAPTER-CARD-MNPH29D-XTR-NO-GBICS/264238911491?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m2749.l2649
flipphos Posted May 30, 2019

06:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)

I have two Mellanox MNPA19-XTR cards (single port), and they both work out of the box on unRAID 6.6.7 (I didn't try any earlier versions, but I read on the forum that it should work there as well). Definitely the cheapest way to get to 10GbE.

Cheers
alpha754293 Posted July 15, 2019

For those reading this now who might be new to the forum (such as myself): if you are using Linux, you can set the port type on Mellanox InfiniBand cards using the following procedure:

1. su to root (if you're running Ubuntu, etc. and the root account is disabled by default, you can either enable the root account or use sudo -s).

2. Download and install the Mellanox Firmware Tools (MFT).

3. Find out the PCI device name:
# mlxfwmanager --query
That will query all devices and output the PCI device name, which you are going to need to set the port link types.

4. Set the port link types:
# mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
Replace the value after the '-d' flag with your PCI device name obtained in step 3. The example above is what I have: a dual-port card, so I can set the link type for both ports.

Alternatively, if you have a dual-port card and you actually USE InfiniBand (because you aren't only doing a NIC-to-NIC direct-attached link, but are plugged into a switch), then you might set one port to run IB and the other to run ETH.

Perhaps this will be useful for other people in the future who are using something like this.

(P.S. The Mellanox 100 GbE switches are more expensive per port than their IB switches.)
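To double-check the result, the same tool can read the configuration back before you reboot (a sketch; the device name is the one from step 3, and on these cards 1 = IB, 2 = ETH):

# Query the configured link types; the change only takes effect
# after a reboot or power cycle
mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE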