Mellanox ConnectX-3 support



Hi all,

          I am new to the Unraid community and would like some advice/opinions/guidance on how I can get my ConnectX-3 cards working with Unraid.

 

I recently acquired 2 of the MCX353A cards to use. One is for my Unraid machine (a Dell R610 server w/ 2x X5650 and 128GB of RAM that I bought for a good price off of a client who was decommissioning it) and the other is for my "hybrid" file/backup server running Windows 10 Pro (custom-built ITX system: 8700K, 16GB of RAM, with more to be added in the future). I understand that it isn't ideal to use a consumer desktop OS with InfiniBand and might just be outright stupid, but I have seen it done and have gotten it to work on some other clients' systems.

 

Windows recognizes the cards just fine, and I was able to update both of them to the newest firmware. In Unraid, I can see the card using "lspci | grep Mellanox", which outputs:

Quote

05:00.0 Network Controller: Mellanox Technologies MT27500 Family [ConnectX-3]

Under Tools -> System Devices in Unraid it matches up as well.

 

My assumption is that there aren't any drivers that recognize it (which doesn't surprise me, since not many people request/use a 40Gb InfiniBand card), as it also doesn't show up in the network settings section. Mellanox's site does have documentation/guides AND the actual firmware for the cards. Link (MCX353A-FCBT):

 

http://www.mellanox.com/page/firmware_table_ConnectX3IB

 

If anyone has any suggestions/solutions/guidance to this, I'd greatly appreciate it. Thanks in advance.

On 3/3/2019 at 2:24 AM, johnnie.black said:

Unraid doesn't currently support InfiniBand; most/all Mellanox InfiniBand NICs can be set to Ethernet mode, and Unraid does support the ConnectX 10GbE.

Thanks for the reply johnnie.black. I'm aware that Unraid doesn't support InfiniBand as of 6.6.7, and I figured out that there was an issue with my card not saving the config after a reboot: it kept switching back to auto sensing even though I had set it to Ethernet before.

 

Eventually, I re-flashed the firmware manually to the same version, and it seems to be working just fine after 10 reboots; Unraid now recognizes it in the network settings.

 

I'm going to be out of the country for a week, so I can't test it out until I come back.

 

Out of curiosity: have there been any posts/reports of 40GbE connections working with Unraid? If not, guess I might be the 1st 😁 

 

Thanks.

  • 2 months later...
On 3/4/2019 at 6:15 PM, Siren said:

 

 

Out of curiosity: have there been any posts/reports of 40GbE connections working with Unraid? If not, guess I might be the 1st 😁 

 

Thanks.

Keep me posted, just dropped 40GbE in all my servers, haven't had a chance to mess with them just yet. Upgraded from 10GbE.

4 hours ago, CyrixDX4 said:

Keep me posted, just dropped 40GbE in all my servers, haven't had a chance to mess with them just yet. Upgraded from 10GbE.

I definitely have some things I can post on it. I've just been way too busy since my last post, but I worked on it a good amount and it works really well. My notes below assume that you have the same card; YMMV if it's a different model:

 

 

Configuring notes:

Quote

- Assuming that you have the same Mellanox card above (single port), it's relatively easy to set it to Ethernet mode. Unraid won't see the card if it's in IB mode (learned that the hard and dumb way) or if it's auto sensing. There are a few guides online on how to change the mode and have it stick to Ethernet on boot, and even on how to update to the latest firmware automatically on Windows via a CMD command. I might make a guide on this here in the future, since it took me a bit of work and referencing two official Mellanox guides. I'll post them here during the weekend.

 

- Some guides may tell you that you need a subnet manager so that the two ends can communicate with each other. You don't. These specific cards have their own VPI, so installing and configuring a subnet manager would be a waste of time and a pain on Unraid specifically, since it doesn't like to install packages on my end (I tried to install some Mellanox drivers, let alone OFED).

 

- Once you've configured static IP info on both ends, and if Windows is your other end, make sure the Windows firewall allows ping on whatever network type you selected when the new-network prompt came up (private, public, or work). Then run a ping against the Unraid server's IP from the Windows device. Linux should be straightforward: just configure a static IP within the config, but I haven't tested it.
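As a minimal sketch of the Linux end of that last step (the interface name `eth1` and the 10.10.10.0/24 addresses are assumptions, not from the post; run as root):

```shell
# Give the 40GbE interface a static IP and bring it up.
ip addr add 10.10.10.2/24 dev eth1
ip link set eth1 up

# Then verify the link from either end:
ping -c 4 10.10.10.1
```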

 

Additional Config Notes:

 

Quote

- Even if you have all-NVMe drives on both ends, you won't achieve 40Gb speeds instantly, due either to SATA limitations or, even with consumer NVMe drives (i.e. a 970 EVO), to the drives being unable to saturate the entire link. So you'll need to configure a RAM cache on a directory of your choice. I personally created a directory called ramcache and allocated it 64GB of RAM, having added another 64GB to my server for a total of 192GB. For consumer systems, I recommend at least 64GB of RAM with no more than 50% of total capacity allocated to the directory, since you have to factor in memory utilization and swap. Plus, Linux likes to use free RAM as cache as it is. Also try to eliminate as many other bottlenecks as possible (disk speed and CPU are the other two).

 

- Edit your Samba shares config and add the new directory and share name. I used a share that was completely public and copied its settings, making it easy to access. You can adjust it if you want it secured, but for testing purposes I made it open. I also added my cache drives to the share for testing; they're older 840 EVOs in RAID 1, but they still work fine.

 

- Configure the other device with a ramcache as well. If you are using Windows, there are some free utilities you can check out, but they limit how much RAM you can use on a free trial. Hence why I recommend doing this on Linux.
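The steps above can be sketched as follows. The share name, mount point, and the 50%-of-RAM sizing are assumptions pulled from these notes, and the privileged/Unraid-specific parts are shown as comments:

```shell
# Size the RAM cache at no more than 50% of total RAM, per the note above.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
cache_mb=$(( total_kb / 2 / 1024 ))
echo "ramcache size: ${cache_mb}M"

# Create and mount the tmpfs (as root; contents are LOST at shutdown):
#   mkdir -p /mnt/ramcache
#   mount -t tmpfs -o size=${cache_mb}M tmpfs /mnt/ramcache

# Export it by appending a stanza to Samba's extra config
# (/boot/config/smb-extra.conf on stock Unraid), e.g.:
#   [ramcache]
#       path = /mnt/ramcache
#       public = yes
#       writeable = yes
```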

 

 

Performance Notes:

 

Quote

Will add all screenshots this weekend. However, when going from ramcache to ramcache through the NICs, I got 40gb speeds :)

 

Other results may surprise you as well ;)

 

 

Warnings

Quote

- It should be self-explanatory, but I will make a warning of it: RAM IS VOLATILE!!! YOU WILL LOSE YOUR DATA IN THE RAMCACHE DIRECTORY WHEN THE SERVER IS SHUT DOWN!!!! 

 

 

Issues I faced:

Quote

- I attempted to create a script that automatically mounts the ramcache on the directory you want at boot, but it isn't sticking. That way, you wouldn't have to manually input the command and recreate the directory every time.

 

- The Samba share config doesn't survive a reboot either, so the new directory and cache drives stop being accessible.
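One possible fix for the persistence issues, as a sketch: on stock Unraid, /boot/config/go runs at every boot (and smb-extra.conf on the flash drive persists Samba shares), so lines like these appended to the go file would recreate the mount; the path and size are assumptions from the notes above:

```shell
# Append to /boot/config/go so the ramcache comes back after a reboot
# (its CONTENTS do not come back -- tmpfs is volatile):
mkdir -p /mnt/ramcache
mount -t tmpfs -o size=64g tmpfs /mnt/ramcache
```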

 

Any resolutions to my issues are greatly appreciated.

 

I'm not necessarily making a guide for noobs or anything, just writing from my own experiences. If you know what you're doing, then I would go for the 40Gb speeds, since it's entirely possible, data rates are much faster, and it may even be more affordable than 10Gb.

 

  • 2 months later...

In reading this thread, this is such a pity.

 

I'm using Mellanox ConnectX-4 cards (100 Gbps 4x EDR InfiniBand) and I was hoping to be able to run NVMe RAID so that it can be presented to my InfiniBand network in my basement at home, running NVMeoF.

 

But it looks like that is going to be a no-go with unRAID. :(

 

I am not really sure why unRAID can't/doesn't do it because once you install MLNX_Linux_OFED drivers, and you assign an IP address (e.g. IPv4) to the IB devs, and you're running IPoIB, I don't really see why this couldn't work (or why you would specifically need 100 Gb *ethernet*).

 

As long as you have an IP address assigned to the device, wouldn't that be all that you need?

 

(Assuming that you meet all of the other IB requirements, i.e. you have something that's running a subnet manager (OpenSM).)

 

Once my SM is up and running, and then an IPv4 address has been assigned, then I'm able to start moving data over IB without any issues.

 

Is there a way for me to inject this into the unRAID installation?

  • 2 weeks later...
On 5/6/2019 at 11:15 PM, Siren said:

I definitely have some things I can post on it. I've just been way too busy since my last post, but I worked on it a good amount and it works really well. My notes below assume that you have the same card; YMMV if it's a different model:

 

 

Configuring notes:

 

Additional Config Notes:

 

 

 

Performance Notes:

 

 

 

Warnings

 

 

Issues I faced:

 

I'm not necessarily making a guide for noobs or anything, just writing from my own experiences. If you know what you're doing, then I would go for the 40Gb speeds, since it's entirely possible, data rates are much faster, and it may even be more affordable than 10Gb.

 

Hi, could you provide a detailed guide on how to set the NIC to Ethernet mode? I tried opening “mlxconfig.exe” but it automatically closes. I am using Windows 10 and updated the firmware using the WinOF.exe installation wizard. Then I installed the MFT tools, but now I cannot get “mlxconfig.exe” to open. I’d really appreciate your help.


Been a while since I got back to this thread. Was really busy and trying out some other possibilities to get 40g to work anywhere else. My goal isn't to turn this thread into a Mellanox tutorial, but I'll help out.

 

Seems like you're just trying to run the exe via double-click. The exe doesn't work like that and is ONLY run from the command prompt/PowerShell (best to use the command prompt). The steps below show how to find the necessary card info and get it into Ethernet mode:

 

A: I'm assuming you've already downloaded and installed BOTH WinOF & WinMFT for your card, and that the card is already installed in your system. If not, head over to the Mellanox site and download + install them. WinOF should automatically update the firmware of the card.

 

B: In this example, I'm using my card which I've already listed above. Again, YMMV if you have a different model.

 

1. Run Command prompt as Administrator. Navigate to where WinMFT is installed in Windows by default:

 

cd "C:\Program Files\Mellanox\WinMFT"

 

2.  Run the following command & save the info for later:

 

mst status

 

Your output should look something like this:

MST devices: 
------------ 
     <device identifier>_pci_cr0 
     <device identifier>_pciconf<port number> 


##In my case: 

MST devices: 
------------ 
     mt4099_pci_cr0 
     mt4099_pciconf0

 

Note that any additional ports will also be shown here as well.

 

3. Query the card and port to check which mode it is using:

 

mlxconfig -d <device identifier>_pciconf<port number> query

## in my case
  
mlxconfig -d mt4099_pciconf0 query

 

The output should look something like this:

 

Quote

Device #1:
----------

Device type:    ConnectX3
Device:         mt4099_pciconf0

Configurations:                              Next Boot
         SRIOV_EN                            False(0)
         NUM_OF_VFS                          8
         LINK_TYPE_P1                        ETH(2)
         LINK_TYPE_P2                        ETH(2)

         LOG_BAR_SIZE                        3
         BOOT_PKEY_P1                        0
         BOOT_PKEY_P2                        0
         BOOT_OPTION_ROM_EN_P1               True(1)
         BOOT_VLAN_EN_P1                     False(0)
         BOOT_RETRY_CNT_P1                   0
         LEGACY_BOOT_PROTOCOL_P1             PXE(1)
         BOOT_VLAN_P1                        1
         BOOT_OPTION_ROM_EN_P2               True(1)
         BOOT_VLAN_EN_P2                     False(0)
         BOOT_RETRY_CNT_P2                   0
         LEGACY_BOOT_PROTOCOL_P2             PXE(1)
         BOOT_VLAN_P2                        1
         IP_VER_P1                           IPv4(0)
         IP_VER_P2                           IPv4(0)
         CQ_TIMESTAMP                        True(1)

 

The LINK_TYPE_P1/LINK_TYPE_P2 values (highlighted in green in the original post) are the port types for the card. Note that just because 2 ports are listed doesn't mean the card has 2 physical ports; as I mentioned above in the thread, mine is a single-port card. The port types are as follows:

(1) = Infiniband

(2) = Ethernet

(3) = Auto sensing

 

4. If your card is already in Ethernet, then you're good. If not, use the following command to change it:
 

mlxconfig -d <device identifier>_pciconf<device port> set LINK_TYPE_P1=2 LINK_TYPE_P2=2

##In my case

mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

 

Output:

 

Quote

Device #1:
----------

Device type:    ConnectX3
Device:         mt4099_pciconf0

Configurations:                              Next Boot       New
         LINK_TYPE_P1                        ETH(2)          ETH(2)
         LINK_TYPE_P2                        ETH(2)          ETH(2)

 Apply new Configuration? (y/n)

 

Select Y and hit Enter and the port type will change (note that the change will appear under the New column).

It'll then ask you to reboot the system for the change to take effect.

 

Do so, then repeat step 3 to verify it changed. You should get nearly the same output as mine above.

 

On 7/15/2019 at 12:49 PM, alpha754293 said:

I am not really sure why unRAID can't/doesn't do it because once you install MLNX_Linux_OFED drivers, and you assign an IP address (e.g. IPv4) to the IB devs, and you're running IPoIB, I don't really see why this couldn't work (or why you would specifically need 100 Gb *ethernet*).

 

As long as you have an IP address assigned to the device, wouldn't that be all that you need?

 

(Assuming that you meet all of the other IB requirements, i.e. you have something that's running a subnet manager (OpenSM).)
 

Once my SM is up and running, and then an IPv4 address has been assigned, then I'm able to start moving data over IB without any issues.

 

Is there a way for me to inject this into the unRAID installation?

Using the Ethernet protocol with the Mellanox cards/RDMA (in this case, RoCE) makes it much easier for most OSes to recognize the card and get fully compatible connectivity without requiring an SM, unlike IB. It's also because of the way the IB protocols work (don't want to explain that here). 

 

Unfortunately, at the time of writing you would have to do a lot of kernel modding to get IB drivers working properly in UnRAID, as a card in IB mode will not be seen in the network page of the web GUI. Obviously, using lspci in the shell will show that it's there, as I experienced the hard way.

 

Also, they only included base rdma in UnRAID, and I haven't gotten it to work at all. Trying to use the rdma command gives me an error (don't have it on me). But if anyone has a way to get the actual Mellanox drivers/RDMA working in UnRAID along with the InfiniBand requirements (i.e. the "InfiniBand Support" packages like in CentOS 7), I'd be more than willing to test it out, as I'm not that good at kernel modding in Linux. If base RDMA won't work (without RoCE/iWARP), you probably won't have any luck with NVMeoF.

 

However, you are correct that once you get an SM up and running, with IPs allocated to each card, it would work out. The devs just need to add the required pieces. I've already filed this under the feature request page; if more people ask for it, they may add it in a future major release.

 

Quote

I'm using Mellanox ConnectX-4 (100 Gbps 4x EDR Infiniband cards) and I was hoping to be able to run NVMe RAID so that it will be presented to my Infiniband network in my basement, at home, and running NVMeoF.

 

But it looks like that is going to be a no-go with unRAID. :(

 

Out of curiosity, have you tested the ConnectX-4 cards with standard SSDs (i.e. 860 EVOs)? Just wondering, as the only OS where I've gotten full speed is Windows, where I'm getting around 32Gb/s out of the 40 in the ATTO benchmark (not complaining about that speed at all, just that Linux isn't treating me as well, at around 10-15). I tried it in VMware as well by attaching the card to the VM, but got poor read speeds there.

 

Pretty sure I'd get there with NVMe drives, but 1TB M.2 drives are relatively pricey and I would need a LOT of them for my host.

 

  • 3 weeks later...

I would love to see InfiniBand support in Unraid... Obviously bridging to a VM isn't possible, because there is no virtio driver for this.... But support for file sharing via IPoIB would rock...

 

Also, SR-IOV can be used to pass the IB card through to VMs.  My ConnectX-3 card supports 16 virtual connections, plus the host connection. I just set up a server with Proxmox/ZFS because I want 56Gb InfiniBand.  Took me 3 weeks to figure out how to get it all running, but it's finally running.

 

ConnectX-3 IB/EN cards can be had for less than $75... for dual-port cards.... Single-port cards are even cheaper..  IB switches can be had for less than $200...  I got my 12-port Mellanox unmanaged switch for less than $100... You can't touch a 40Gb QSFP Ethernet switch for that price.....  I've seen 36-port managed switches for less than $200... Sure, they sound like a jet engine, but they can be modded with quiet fans...  and QSFP cables can be had cheap as well... picked up a 25-meter optical cable to run down to my data center in the garage for less than $100...

 

I will look up your feature request and second it...

 

 

  • 5 months later...

I tried my Mellanox ConnectX-3 649281-B21 (a dual-QSFP+ 40-gig card) in Unraid 6.8.2, and it will not boot past the Unraid load screen. I have no problems with Windows 10, BSD, or Fedora running this card in 40- and 10-gig Ethernet mode.

I see others boot just fine; it's kinda hard to see the problem when you don't see anything past the Unraid load screen.

5 hours ago, Ronclark said:

it's kinda hard to see the problem when you don't see anything past the Unraid load screen

You likely need to remove the offending card first and get it to boot normally, and make the changes.

Enable "mirror syslog to flash", see Settings -> Syslog server

 

A copy of the syslog information is stored on your USB device in the /logs folder. Post it here.

6 hours ago, ArtVandelay said:

@Siren just wanted to thank you for the quick tutorial to get these cards working - took all of 20 mins to update and install both cards, and I now have a flawless 40G connection between my main PC and Unraid. 

No problem. Glad that you got your cards working :)

 

Again, I've been busy and am still trying to figure out where my bottleneck is, but haven't had time yet. Not intentionally trying to resurrect this thread, but it's been a crazy ride with other OSes. I've added my servers to my signature for anyone who wants to see exactly what I have now.

Now that you have a 40G link, go ahead and test the raw data transfer and see what you get (assuming you can get RoCE working). If you get close to 40Gbps speeds (i.e. ~5GB/s), let me know what you have so that I can evaluate my stuff here and eventually make a guide for 40Gb speeds.

Thanks in advance

39 minutes ago, Siren said:

No Problem. Glad that you got your cards working :)

 

Again, still have been busy and still trying to figure out where my bottleneck is, but havent had time yet to figure it out. Not intentionally trying to resurrect this thread but its been a crazy ride with other OS's. Added my servers to my signature, for anyone who wants to accurately see what I have now.

Now that you have a 40G link, go ahead and test the raw data transfer and see what you get (assuming you can get RoCE working). If you get close to 40Gbps speeds (i.e. ~5GB/s), let me know what you have so that I can evaluate my stuff here and eventually make a guide for 40Gb speeds.

Thanks in advance

I’ve run an iperf3 test and gotten up to 27Gbps, but it seems to be a single-thread limitation, since one core is pinned at 100% on the Unraid side. I tried running multiple servers on different ports, but it seems to stick to one core. Maybe I can run multiple instances and pin some cores.  

 

I’ll also set up some RAM disks tomorrow and report back. 

On 2/2/2020 at 2:24 AM, bonienl said:

You likely need to remove the offending card first and get it to boot normally, and make the changes.

Enable "mirror syslog to flash", see Settings -> Syslog server

 

A copy of the syslog information is stored on your USB device in the /logs folder. Post it here.

I guess the Intel board with the 3770 just did not like the setup. I moved the card over to a different system, a Lenovo with a 3470, and it works great.

On 2/5/2020 at 9:03 PM, ArtVandelay said:

I’ve run an iperf3 test and gotten up to 27Gbps, but it seems to be a single-thread limitation, since one core is pinned at 100% on the Unraid side. I tried running multiple servers on different ports, but it seems to stick to one core. Maybe I can run multiple instances and pin some cores.  

 

I’ll also set up some ram disks tmr and report back. 

I've been playing with these 40G ConnectX-3 cards for a while; they've been fun for learning and finding where the bottlenecks are. Here is a helpful link on iperf3 at 40G and above: https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/
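A sketch of the multi-stream approach from that link (the 10.10.10.1 address, port numbers, and core numbers are assumptions; `-A` pins an iperf3 instance to a CPU core on Linux builds, and since each iperf3 instance is single-threaded, running several instances spreads the load across cores):

```shell
# Server side (Unraid): one iperf3 instance per port, each on its own core.
iperf3 -s -p 5201 -A 0 &
iperf3 -s -p 5202 -A 1 &

# Client side: parallel instances with several streams each.
iperf3 -c 10.10.10.1 -p 5201 -P 4 -A 0 -t 30 &
iperf3 -c 10.10.10.1 -p 5202 -P 4 -A 1 -t 30 &
wait
# Sum the per-instance bitrates for the aggregate throughput.
```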

8 hours ago, Ronclark said:

I've been playing with these 40G ConnectX-3 cards for a while; they've been fun for learning and finding where the bottlenecks are. Here is a helpful link on iperf3 at 40G and above: https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/multi-stream-iperf3/

Ya, I’ve tried to run multiple servers and end up with the same overall result.  

 

@Siren Turns out I’m a big dumb-dumb though, and my Unraid mobo only supports PCIe 2.0, so the card is running at x8 🤦‍♂️🤦‍♂️🤦‍♂️🤦‍♂️😭😭😭😭

 


I'm just curious what kind of performance everyone is getting.  I recently got my 40Gb Mellanox ConnectX-3 cards working with Unraid, but my SMB copy speeds to a RAM drive are maxing out at about 1.5-1.7 GB/s, which feels a bit slow. The only SMB options I've added are below. On the client side I'm copying from a PCIe 4.0 NVMe drive with read speeds over 4GB/s, so that shouldn't be the issue. Any thoughts/suggestions?  Thanks!

 

server multi channel support = yes
aio read size = 1
aio write size = 1
strict locking = No
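For comparison, a fuller multichannel stanza for Unraid's Samba extra configuration might look like the sketch below; the IP, the 40Gb speed value, and the RSS capability hint are assumptions for a single 40GbE link (Samba uses these hints to decide how multichannel clients spread their connections):

```
server multi channel support = yes
aio read size = 1
aio write size = 1
# advertise the fast NIC to multichannel clients (values are examples)
interfaces = "10.10.10.1;capability=RSS,speed=40000000000"
```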

 

  • 1 month later...
On 7/28/2019 at 12:31 PM, Siren said:

Been a while since I got back to this thread. Was really busy and trying out some other possibilities to get 40g to work anywhere else. My goal isn't to turn this thread into a Mellanox tutorial, but I'll help out.

 

Seems like you're just trying to run the exe via double-click. The exe doesn't work like that and is ONLY run from the command prompt/PowerShell (best to use the command prompt). The steps below show how to find the necessary card info and get it into Ethernet mode:

 

A: I'm assuming you've already downloaded and installed BOTH WinOF &  WinMFT for your card as well as you've already installed the card in your system. If not, head over to the Mellanox site and download  + install it. WinOF should automatically update the firmware of the card.

 

B: In this example, I'm using my card which I've already listed above. Again, YMMV if you have a different model.

 

1. Run Command prompt as Administrator. Navigate to where WinMFT is installed in Windows by default:

 


cd C:\Program Files\Mellanox\WinMFT

 

2.  Run the following command & save the info for later:

 


mst status

 

Your output should look something like this:


MST devices: 
------------ 
     <device identifier>_pci_cr0 
     <device identifier>_pciconf<port number> 


##In my case: 

MST devices: 
------------ 
     mt4099_pci_cr0 
     mt4099_pciconf0

 

Note that any additional ports will also be shown here as well.

 

3. Query the card and port to check on the mode it is using

 


mlxconfig -d <device identifier>_pciconf<port number> query

## in my case
  
mlxconfig -d mt4099_pciconf0 query

 

the output should be something similar below:

 

 

The LINK_TYPE_P1/LINK_TYPE_P2 values (highlighted in green in the original post) are the port types for the card. Note that just because 2 ports are listed doesn't mean the card has 2 physical ports; as I mentioned above in the thread, mine is a single-port card. The port types are as follows:

(1) = Infiniband

(2) = Ethernet

(3) = Auto sensing

 

4. If your card is already in Ethernet, then you're good. If not, use the following command to change it:
 


mlxconfig -d <device identifier>_pciconf<device port> set LINK_TYPE_P1=2 LINK_TYPE_P2=2

##In my case

mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

 

Output:

 

 

Select Y and hit enter and the port type will change (note that the change will be under the New column).

It'll then ask you to reboot the system to take effect.

 

Do so and do step 3 again to verify it's changed. You should get nearly the same output as mine above.

 

 

Is it possible to complete this process in Unraid or some other Linux distro (one that can be run from a LiveCD)?  I have no way of running Windows on my server currently and would love to get my ConnectX-3 working in Unraid.

 

EDIT:  I managed to install Windows onto a USB3 HDD and got the above steps working perfectly.  Thanks!

  • 1 month later...

Been a while since I updated this thread, but some slightly good news: I have some improvements by only changing a few pieces of hardware.

TL;DR:

As of now, I'm getting around 2.0GB/s on a raw transfer to RAID 10 SSDs from a 120GB VM after swapping to a really good RAID controller. Halfway there! 😄 🍺

 

What I realized is that on these Dell servers, the onboard RAID controller (H710p Mini) is PCIe 2.0 x8, which is UTTERLY slow, even for RAID 10. So I took a risk and bought a single H740p for the server. Normally this is NOT supported by Dell, and they even claim that it might not be possible for it to work in its entirety.

 

But that's not the case

 

All I did was insert the controller in the server, and it showed up after some initialization (the Dell servers perform an initialization after ANY change in internal components like CPU and PCIe devices). I cleared the controller's old cache and re-imported the config after swapping over the SAS cables. Lastly, I updated the H740p's firmware to the latest (some of the original firmwares gate the cache to 4GB rather than giving the full 8GB) and it was done. 

 

I didn't have to re-import or redo any configurations, as it automatically imported everything from the old controller, which was perfect as I didn't want to reinstall my copy of Windows :)

 

 

As to why I bought this RAID controller? Four simple reasons:

- 8GB of cache, which is an INSANE amount of cache

- Ability to toggle between RAID and HBA mode, along with NVMe support (used in the newer PowerEdge x40 models)

- Price was reasonable for the controller

- Potential compatibility with Dell servers and online management

 

In terms of my speeds, I went from about:

 

1.75GB/s seq. read and 1GB/s seq. write RAID 10 on the old H710p

to:

7GB/s seq. read and 6.75GB/s seq. write RAID 10 on the new H740p

 

Measured on the disks using CrystalDiskMark.

 

I then did a raw one-way 120GB VM transfer via SMB using the 40GbE cards and got speeds of 2.0GB/s, peaking at around 2.2GB/s.

 

All on 4x Samsung 850 EVO 1TB disks. Yes, I am still using RAID on the card, but at some point I'll test them in HBA mode, as I know unRAID does NOT like/prefer RAID (I guess... hence the name 🤣). But the results are still some of the better ones I've seen on my server.

 

Lastly, when I updated to the newest version of UnRAID (6.8.3 at the time of writing), I ran into SMB issues. It turned out I needed to enable anonymous login AND enable SMB 1.0 support in Windows via the Group Policy Editor to get it to work. There is a thread I found that worked for me, but I can't link it because I can't find it again; if I do, I'll repost it here.

 

Cheers!

 

  • 6 months later...

Thanks for the tips and info, this guide has been very helpful.

 

Are there any updates with newer Unraid versions and possible InfiniBand use? I know the card still doesn't show in network settings until it's set to Ethernet mode. I'd really like to try 40GbE instead of 10GbE.

 

For anyone who doesn't have easy access to a Windows machine for the updates/configuration changes: you can pass the card through to a VM and do it from there without using a separate computer. You won't see it in unRAID network settings again until you "unstub" the card, removing it from the VM. I also rebooted the entire unRAID server, rather than just the VM, when Windows said a reboot was needed for the card changes to take effect.
