alpha754293

  1. My system isn't up and running yet, but my current installed raw capacity at home totals somewhere between 120 and 150 TB; I've lost count, and it's been a while since I last audited all of my systems and servers. The two new builds I'm targeting now will have somewhere between 8 and 12 TB of pure NVMe SSD "cache" (of sorts), using two or three Asus Hyper M.2 x16 cards and 8-12 Samsung 960 Pro 1 TB NVMe SSDs, sitting "in front" of 12-16 drives of 10-14 TB each. (So the lowest raw capacity would be 12 x 10 = 120 TB, and the highest 16 x 14 = 224 TB.) It'll depend on what's available, what the management software can do versus what the hardware itself can do, and the cost of all of it. (My 80 TB (raw) server is already 85% full.)
  2. Will this work? https://community.mellanox.com/s/article/how-to-extract-mlnx-ofed-source-files Or what about this? https://github.com/Mellanox (I haven't gone through all of their repositories, but maybe one of these might help?)
  3. For those reading this who might be new to the forum (such as myself): if you are using Linux, you can set the port type on Mellanox InfiniBand cards with the following procedure (a consolidated sketch of these commands appears after this list):
       1. su to root. (If you're running Ubuntu or another distro where the root account is disabled by default, you can either enable the root account or use sudo -s.)
       2. Download and install the Mellanox Firmware Tools (MFT).
       3. Find the PCI device name:
            # mlxfwmanager --query
          This queries all devices and outputs the PCI Device Name, which you will need in order to set the port link types.
       4. Set the port link types:
            # mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
          Replace the argument after the '-d' flag with the PCI device name obtained in step 3. The example above matches my hardware: I have a dual-port card, so I can set the link type for both ports. Alternatively, if you have a dual-port card and you actually USE InfiniBand (because you aren't only doing a NIC-to-NIC direct-attached link, but are plugged into a switch), you might set one port to run IB and the other to run ETH.
     Perhaps this will be useful for other people in the future who are using something like this. (P.S. Mellanox's 100 GbE switches are more expensive per port than their IB switches.)
  4. Is the source absolutely required for it to be integrated into unRAID?
  5. In reading this thread, this is such a pity. I'm using Mellanox ConnectX-4 cards (100 Gbps 4x EDR InfiniBand), and I was hoping to run NVMe RAID so that it could be presented to the InfiniBand network in my basement at home, running NVMeoF. But it looks like that is going to be a no-go with unRAID. I'm not really sure why unRAID can't or doesn't support this, because once you install the MLNX_OFED Linux drivers, assign an IP address (e.g. IPv4) to the IB devices, and run IPoIB, I don't see why this couldn't work, or why you would specifically need 100 Gb *ethernet*. As long as you have an IP address assigned to the device, wouldn't that be all you need? (Assuming you meet all of the other IB requirements, i.e. something on the fabric is running a subnet manager such as OpenSM.) Once my SM is up and an IPv4 address has been assigned, I'm able to start moving data over IB without any issues; a minimal bring-up sketch appears after this list. Is there a way for me to inject this into the unRAID installation?
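
A consolidated sketch of the port-type procedure from post 3, run as root per step 1 there. The device name /dev/mst/mt4115_pciconf0 is just the example from that post; substitute whatever mlxfwmanager reports on your machine:

    # Query all Mellanox devices to find the PCI device name
    mlxfwmanager --query

    # Set both ports of a dual-port card to Ethernet
    # (LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet)
    mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

    # Or mixed mode: port 1 on InfiniBand, port 2 on Ethernet
    mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2

    # Reboot (or reset the adapter) for the new link type to take effect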
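
And a minimal sketch of the IPoIB bring-up described in post 5, also run as root. The interface name ib0 and the addresses 10.0.0.1/24 and 10.0.0.2 are assumptions for illustration; substitute whatever interface your OFED install exposes and whatever subnet your fabric uses:

    # Start a subnet manager somewhere on the fabric (one per fabric is enough);
    # -B daemonizes opensm into the background
    opensm -B

    # Assign an IPv4 address to the IPoIB interface and bring it up
    ip addr add 10.0.0.1/24 dev ib0
    ip link set ib0 up

    # Check the port state, then test reachability to another host on the fabric
    ibstat
    ping -c 3 10.0.0.2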