Will the LSI SAS31601E 16 Port PCI-E SAS/SATA controller HBA work with UNRAID?



Will this controller card work with unraid? 

 

LSI SAS31601E L3-01143-03D 16-port PCI-e SAS / SATA controller card 3G HBA 3Gb/s

 

 


 

 

The motherboard I purchased lists these specs:

 

Expansion Slots

PCI Express 3.0 x16: 1 x PCI Express 3.0 x16
PCI Express x1: 2 x PCIe 3.0 x1
 
https://www.newegg.com/Product/Product.aspx?Item=N82E16813130894
 
So how many (if any) of these cards can my motherboard support? 
 
I'm confused by the specs. I guess it's saying PCI Express 3.0 x16 means 16-lane speed, not 16 ports, and PCI Express x1 must be 1-lane speed (not 1 port?). 
 
So does this mean there are 3 PCI Express slots? Is PCIe the same as PCI Express? 
 
Since Unraid is the OS, are there drivers for the LSI SAS controller? 
 
Thanks. 
 

 

Edited by miogpsrocks
Link to comment

I'm not 100% sure on Unraid, but as far as your board goes, it'll take one.  I have one of those cards on order, so hopefully I can tell you one way or another in a few days as far as Unraid is concerned.  LSI has Linux drivers for most of their cards, so I'm fairly certain this one will work.

 

Link to comment

That card needs an x8 PCI-E slot. Your motherboard has only one that will work - the x16 slot (usually used for graphics cards).
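
Bandwidth-wise the x16 slot is fine for it, give or take. Here's the back-of-the-envelope math, just a sketch - I'm assuming the card links at PCIe 1.x x8 (typical for 1068E-era boards, check the spec sheet) and typical spinner speeds:

# Rough PCIe bandwidth math - the x8/PCIe 1.x figures and drive speeds are assumptions.
lanes = 8
mb_per_lane = 250                 # approx. usable payload per PCIe 1.x lane, MB/s
drives = 16
mb_per_drive = 150                # rough sequential rate for drives of that era, MB/s

slot_bandwidth = lanes * mb_per_lane     # ~2000 MB/s into the slot
drive_demand = drives * mb_per_drive     # ~2400 MB/s if all 16 drives stream at once
print(slot_bandwidth, drive_demand)

So the slot only becomes the limit if all 16 drives stream flat out at the same time (like a parity check); day to day it's plenty.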

It's a relatively old controller (circa 2007), so it's likely to be well supported by the Linux kernel - probably under the mptsas driver.

Yes, most (if not all) of LSI's HBAs are supported under Linux.
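
If you want to double-check which driver ends up bound to the card once it's in the box, something like this will tell you - just a sketch, and it assumes you have Python available on the box. It only reads sysfs: 0x1000 is the LSI/Broadcom PCI vendor ID and class 0x01xxxx is mass storage.

#!/usr/bin/env python3
# List PCI mass-storage controllers and the kernel driver bound to each one.
import os

PCI_ROOT = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI_ROOT)):
    path = os.path.join(PCI_ROOT, dev)
    with open(os.path.join(path, "class")) as f:
        dev_class = f.read().strip()
    if not dev_class.startswith("0x01"):      # 0x01xxxx = mass storage controller
        continue
    with open(os.path.join(path, "vendor")) as f:
        vendor = f.read().strip()             # 0x1000 = LSI/Broadcom
    driver_link = os.path.join(path, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.exists(driver_link) else "none"
    print(dev, vendor, driver)

If the SAS31601E shows up with mptsas next to it you're good; if the driver column says "none", the kernel saw the card but nothing claimed it.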

 

Link to comment

 

Just noticed myself that it's basically a single card with two 1068E's built in.  Guess it won't work for my current application (I need it to run several 8TB drives in an MD1000 chassis).  Good thing they aren't expensive, lol.

 

Oh, and the 1068E controllers ARE well supported with Unraid. I'm using one in a Cisco C200 M2 at the moment for testing, and it works perfectly fine (aside from the size limitations).
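
If you ever want to see whether a drive behind the 1068E is getting truncated, the capacity the kernel actually sees is in sysfs. A quick sketch, assuming Python is on the box (/sys/block reports sizes in 512-byte sectors):

# Print the capacity each disk reports to the kernel.
import glob

for size_file in sorted(glob.glob("/sys/block/sd*/size")):
    dev = size_file.split("/")[3]             # e.g. "sdb"
    with open(size_file) as f:
        sectors = int(f.read().strip())       # 512-byte sectors
    print(f"{dev}: {sectors * 512 / 1e12:.2f} TB")

An 8TB drive showing up as roughly 2 TB there is hitting the controller's limit.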

 

Edited by heffe2001
Link to comment
6 minutes ago, heffe2001 said:

 

Just noticed myself that it's basically a single card with two 1068E's built in.  Guess it won't work for my current application (I need it to run several 8TB drives in an MD1000 chassis).  Good thing they aren't expensive, lol.

 

Best options for you are the 9200-8e or the 9216-4i4e.

Link to comment

Yeah, I'm looking at the 9200-8e at the moment, still relatively cheap too.  I've got an Areca ARC-1231ML in my current server that works great with this setup: 2 4TB WD Red drives in a stripe set for parity, several 8TB Seagate drives plus a couple of 4TBs in the array, a 300GB Raptor for cache, and a 512GB SSD for all my docker stuff, with everything but the SSD connected to the Areca card.  I'm contemplating using 4 2TB Reds for parity on the new setup (on the onboard 1068E controller), with the external box containing all the 8TBs (plus a couple of new ones; I'm running low on space at the moment, lol).

 

Link to comment
3 minutes ago, johnnie.black said:

Best options for you are the 9200-8e or the 9216-4i4e.

Yep, that's what I need for my situation, not sure about the OP though.  These Cisco C200 M2 boxes are pretty nice for what they are: 1U chassis with two 5650's, capable of up to 192GB RAM, with 4 hot-swap bays.  If you use the onboard 1068E controller you're limited to 2TB drives per slot up front, 8TB max (6TB with parity), but you can always put a different controller in the x8 PCIe slot and plug the front drives into that, and use a controller with external ports in the x16 slot, going to something like an MD1000/MD3000 external chassis (that'd give you an additional 15 slots for drives and, depending on what controller you use, the ability to run the larger 4TB+ drives and push upwards of 150TB depending on the drives used).
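
Rough capacity math behind those numbers - the 10TB drive size for the external shelf is just an example, not something I've tested:

# Back-of-the-envelope capacity for the two setups described above.
def usable_tb(slots, drive_tb, parity_drives=1):
    # Usable space comes from the data drives only; parity drives don't add capacity.
    return (slots - parity_drives) * drive_tb

print(usable_tb(4, 2))     # onboard 1068E, 4 bays capped at 2TB: 6TB usable (8TB raw)
print(usable_tb(15, 10))   # MD1000 with fifteen 10TB drives: 140TB usable (~150TB raw)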

Link to comment

Yeah, it's just a storage box without any sort of CPU.  It has redundant power supplies and, most times, redundant interfaces on the back (or you can split the array in it into 2 halves, one with 8 drives, one with 7, each set controlled by one of the rear controllers - probably how I will use it, with each controller on the back connected to a different port on the 8e card).  If I remember correctly, you can chain 3 of them together (that may be an MD3000 + 2x MD1000's, can't exactly remember).  I just wish they offered it in a tower version instead of just a rack-mount version.  I had to 3D print a set of feet for it to sit vertically at our office to use with a T610 Hyper-V box that needed more drives.

 

 


Link to comment
On 5/8/2017 at 0:02 AM, heffe2001 said:

I'm not 100% sure on Unraid, but as far as your board goes, it'll take one.  I have one of those cards on order, so hopefully I can tell you one way or another in a few days as far as Unraid is concerned.  LSI has Linux drivers for most of their cards, so I'm fairly certain this one will work.

 

 

Did you get your card in? When do you think it will come in?  Please let me know if it works.

 

Thanks. 

Link to comment
23 hours ago, CyberSkulls said:

The LSI 9201-16e was mentioned earlier and I happen to run two of them in unRAID and they work perfectly fine.

I only bring that card back up since they are fairly cheap on eBay these days.



 

 

Are you using these cards? Do you know if they have a 2TB limit? 

 

Thanks. 

Link to comment
On 5/8/2017 at 10:06 AM, heffe2001 said:

going to something like an MD1000/MD3000 external chassis (that'd give you an additional 15 slots for drives and, depending on what controller you use...

 

I use an MD1000 with an HP H220 HBA. There are 2 issues. The first is that either unRaid or my HBA absolutely hates having all 15 disks from the enclosure presented to it in unified mode, and it locks up unRaid. So I use it in split mode, with half as my primary array and the other half as a backup from another server. It works for my needs. The second issue is that it connects to the HBA via an x4 cable, which should support over 1000MB/s. But it doesn't; I only see half the throughput. I believe (and this is not corroborated anywhere) that the MD1000 is wired so that it splits the total bandwidth between the two sides of the enclosure, so you'll only ever get about 500MB/s out of each half, max. If you get it working in unified mode I'm sure it's less of an issue getting the full bandwidth, but 6 disks splitting 500MB/s of bandwidth for parity checks creates a bottleneck, one that is still present with the bandwidth split.
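
For what it's worth, the numbers fit that theory. A rough sketch, assuming 3Gb/s SAS links with 8b/10b encoding - the per-half split is only my guess, not anything Dell documents:

# x4 wide SAS cable at 3Gb/s per lane, 8b/10b encoding -> 10 bits per byte on the wire.
lanes = 4
gbps_per_lane = 3.0
cable_mb_s = lanes * gbps_per_lane * 1000 / 10    # ~1200 MB/s theoretical for the whole cable

per_half_mb_s = 500                               # the ceiling I actually see per enclosure half
disks_per_half = 6                                # the disks sharing my half of the shelf
print(cable_mb_s, per_half_mb_s / disks_per_half) # ~83 MB/s per disk during a parity check

~80MB/s per disk is well under what an 8TB drive can stream sequentially, which is why the parity check drags.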

Link to comment
