Siren

Members · Content count: 5 · Community reputation: 2 (Neutral) · Rank: Newbie

  1. Utilizing the Ethernet protocol for the Mellanox cards/RDMA (in this case, RoCE) makes it much easier for most OSes to recognize fully working connectivity without requiring a subnet manager (SM), unlike IB, partly because of the way the IB protocols work (I don't want to explain that here). Unfortunately, you would have to do a lot of kernel modding to get IB drivers to work properly in Unraid at the time of writing, as IB mode will not be seen on the network page in the web GUI. Obviously, using lspci in the shell will show that it's there, as I learned the hard way. Also, they only included base RDMA in Unraid, but I haven't gotten it to work at all; trying to use the rdma command gives me an error (I don't have it on hand). But if anyone has a way to get the actual Mellanox drivers/RDMA to work in Unraid along with the InfiniBand requirements (i.e. "InfiniBand Support" package groups like in CentOS 7), I'd be more than willing to test it out, as I'm not that good at kernel modding in Linux. If base RDMA won't work (without RoCE/iWARP), you probably won't have any luck with NVMe-oF either. However, you are correct that once you get an SM up and running, with IPs allocated to each card, it would work out. The devs just need to add the required support; I've already requested this under the feature request page, but if more people ask for it, they may add it in the next major release.
     Out of curiosity, have you tested the ConnectX-4 cards with standard SSDs (e.g. 860 EVOs)? I ask because the only OS where I've gotten full speed is Windows, where I see around 32 Gb/s out of the 40 in an ATTO benchmark (not complaining about that speed at all, just complaining that Linux isn't treating me as well, at around 10-15 Gb/s). I tried it in VMware as well by passing the card through to the VM, but read speeds were poor there. I'm pretty sure I'd get there with NVMe drives, but 1TB M.2 drives are relatively pricey and I would need a LOT of them for my host.
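     For reference, a minimal sanity check I'd run to see whether the RDMA/RoCE stack is visible at all on a Linux host (this assumes the mlx4/mlx5 kernel module and the rdma-core userspace tools are present, and the device name mlx4_0 is just an example, so adjust for your system):
     rdma link show
     ## lists RDMA-capable links and whether they run InfiniBand or RoCE
     ibv_devices
     ## lists the verbs devices the userspace stack can see
     ibv_devinfo -d mlx4_0
     ## port state, link layer (Ethernet = RoCE) and supported rates for one device
     If none of these return anything useful, base RDMA isn't functional, and NVMe-oF won't be either.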
  2. It's been a while since I got back to this thread. I was really busy and trying out some other ways to get 40G working elsewhere. My goal isn't to turn this thread into a Mellanox tutorial, but I'll help out. It seems like you're just trying to run the exe via double click. The exe doesn't work like that and is only run from the command prompt/PowerShell (best to use command prompt). The steps below show how to find the necessary card info and get the card into Ethernet mode.
     A: I'm assuming you've already downloaded and installed BOTH WinOF and WinMFT for your card, and that the card is already installed in your system. If not, head over to the Mellanox site and download and install them. WinOF should automatically update the firmware of the card.
     B: In this example I'm using my card, which I've already listed above. Again, YMMV if you have a different model.
     1. Run command prompt as administrator and navigate to where WinMFT is installed in Windows by default:
     cd C:\Program Files\Mellanox\WinMFT
     2. Run the following command and save the info for later:
     mst status
     Your output should look something like this:
     MST devices:
     ------------
     <device identifier>_pci_cr0
     <device identifier>_pciconf<port number>
     ##In my case:
     MST devices:
     ------------
     mt4099_pci_cr0
     mt4099_pciconf0
     Note that any additional ports will also be shown here.
     3. Query the card and port to check which mode it is using:
     mlxconfig -d <device identifier>_pciconf<port number> query
     ##In my case
     mlxconfig -d mt4099_pciconf0 query
     The output should be similar to the screenshot below; what is in green is the port type for the card. Note that just because two ports are listed doesn't mean the card has two physical ports. As I mentioned above in the thread, mine is a single-port card. The port types are as follows:
     (1) = InfiniBand
     (2) = Ethernet
     (3) = Auto sensing
     4. If your card is already in Ethernet, then you're good. If not, use the following command to change it:
     mlxconfig -d <device identifier>_pciconf<port number> set LINK_TYPE_P1=2 LINK_TYPE_P2=2
     ##In my case
     mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
     Select Y and hit enter and the port type will change (note that the change will show under the New column). It will then ask you to reboot the system for the change to take effect. Do so, then repeat step 3 to verify it has changed. You should get nearly the same output as mine above.
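     Since the screenshot doesn't carry over here, the relevant part of the query output looks roughly like this on my card (the exact fields and column headings vary by model and firmware version, so treat this as an approximation):
     Device #1:
     ----------
     Device type:    ConnectX3
     PCI device:     mt4099_pciconf0
     Configurations:                 Next Boot
              LINK_TYPE_P1           ETH(2)
              LINK_TYPE_P2           ETH(2)
     The LINK_TYPE_P1/LINK_TYPE_P2 lines are what the screenshot highlighted in green; (2) means the port will come up in Ethernet mode on the next boot.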
  3. I definitely have some things I can post on it. I've just been way too busy since my last post, but I've worked on it a good amount and it works really well. My notes below assume that you have the same card; YMMV if it's a different model:
     Configuring notes:
     Additional config notes:
     Performance notes:
     Warnings:
     Issues I faced:
     I'm not necessarily writing a guide for noobs or anything, just sharing my own experience. If you know what you're doing, I would go for the 40Gb speeds, since it's entirely possible, the data rates are much faster, and it can even be more affordable than 10Gb.
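     As a quick way to confirm the link actually negotiates at 40Gb on the Unraid side (assuming the card shows up as eth0, so adjust the interface name, and iperf3 needs to be available on both ends):
     ethtool eth0
     ## look for "Speed: 40000Mb/s" and "Link detected: yes"
     iperf3 -s
     ## run on one end
     iperf3 -c <server ip> -P 4
     ## run on the other end; parallel streams help get anywhere near line rate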
  4. Thanks for the reply jonnie.black. I'm aware that Unraid doesn't support InfiniBand as of 6.6.7, and I figured out that there was an issue with my card not saving the Ethernet config after a reboot: it kept switching back to auto sensing even though I had set it to Ethernet before. Eventually I re-flashed the firmware manually to the same version, and it has been working just fine across 10 reboots; Unraid now recognizes it in the network settings. I'm going to be out of the country for a week, so I can't test it further until I come back. Out of curiosity: have there been any posts/reports of 40GbE connections working with Unraid? If not, I guess I might be the first 😁 Thanks.
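     For anyone hitting the same "setting won't stick after a reboot" problem, the manual re-flash I did was along these lines with the MFT tools (the device name is from my card, and the firmware image placeholder has to be the file from Mellanox's firmware table that matches your card's exact PSID):
     flint -d mt4099_pci_cr0 query
     ## shows the current firmware version and PSID so you can pick the matching image
     flint -d mt4099_pci_cr0 -i <firmware image>.bin burn
     ## burns the image; reboot afterwards
     mlxconfig -d mt4099_pciconf0 query
     ## re-check that LINK_TYPE_P1/P2 stay on ETH(2) after the reboot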
  5. Hi all, I am new to the Unraid community and would like some advice/opinions/guidance on how I can get ConnectX-3 cards to work with Unraid. I recently acquired two of the MCX353A cards: one for my Unraid machine (a Dell R610 server with 2x X5650 and 128GB of RAM that I bought for a good price off a client who was decommissioning it), and another for my "hybrid" file/backup server running Windows 10 Pro (a custom-built ITX system: 8700K, 16GB of RAM, with more to be added in the future). I understand that it isn't ideal to use a consumer desktop OS with InfiniBand, and it might just be outright stupid, but I have seen it done and have gotten it working on some other clients' systems. Windows recognizes the cards just fine, and I was able to update both of them to the newest firmware. I can also see the card in Unraid using "lspci | grep Mellanox", and its output matches what is listed under Tools -> System Devices in Unraid. My assumption is that there aren't any recognizable drivers for it (which doesn't surprise me, since not many people request/use a 40Gb InfiniBand card), as it also doesn't show up in the network settings section. Mellanox's site does have documentation/guides AND the actual firmware for the cards. Provided link (MCX353A-FCBT): http://www.mellanox.com/page/firmware_table_ConnectX3IB If anyone has any suggestions/solutions/guidance, I'd greatly appreciate it. Thanks in advance.
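     For what it's worth, a quick way to check whether any kernel driver has actually bound to the card (the mlx4_core name is an assumption based on how the ConnectX-3 driver is normally packaged):
     lspci -k | grep -iA3 mellanox
     ## the "Kernel driver in use:" line shows whether mlx4_core (or anything at all) has claimed the card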