Will the LSI SAS31601E 16-port PCI-E SAS/SATA controller HBA work with UNRAID?



11 minutes ago, 1812 said:

 

You would lose that bet.

 

 

Lol, you're probably right.  I do remember reading an article a while back, when we first got the MD1000, about the interposers, and remember reading something about them being the cause of speed issues, but what I didn't remember was the post that challenged that assumption.  It's something to do with the SAS drives being multi-ported, where SATA drives are single-ported, and with SAS drives being full-duplex, where SATA drives aren't.  At least that's what I had in my notes from the research we did at the time.  All that being said, I got my 9201-16e, and it appears to work fine in my Cisco C200 M2.  Now to score another cheap MD1000 and I'll be ready to move from my old box to my new one.

 


Do I need a PCIe 3.0 x16 slot for a 16-port LSI SAS card, or can I use a lower-spec PCIe slot and just get slower performance?

 

I am trying to build a 30-drive system; however, most motherboards only have one PCIe 3.0 x16 slot.

 

Can I use one of those lower PCIe slots, like the x1, and just accept lower performance, or is the x16 required for the card to work at all?

 

Thanks. 


There are two things at play here - one is the PCIe version number (1.0/1.1, 2.0, 3.0) and the other is the number of "lanes".

 

PCIe 1.x is 250 MB/sec per lane

PCIe 2.0 is 500 MB/sec per lane

PCIe 3.0 is 1000 MB/sec per lane

 

Each card's maximum number of lanes is determined by the card's physical design (i.e., literally the length of the PCIe bus connector.)

 

A 1 lane card is the shortest, and a 16 lane card is the longest.

 

The motherboard and the card will negotiate a specification based on the highest spec both the slot and the card support. So a PCIe 2.0 card in a PCIe 3.0 slot will run at PCIe 2.0 speed.

 

Similarly, they will agree on the number of lanes based on the "shorter" of the two - the card or the slot. Most disk controller cards are either 1-lane, 4-lane, or 8-lane, often referred to as x1, x4, x8. If you put an x4 card in an x8 slot, you will only have 4 usable lanes. And if you put an x8 card in an x4 slot, you will also have 4 usable lanes. Putting an x8 card into an x4 slot is not always physically possible, because the x4 slot is too short. But some people have literally melted away the back end of the slot to accommodate the longer card, which is reported to work just fine. Making things just a little more confusing, some motherboards have an x8 physical slot that is actually only wired for x4. So you can put a longer card in there with no melting, but it only uses 4 of the lanes.
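
 

If it helps to see that negotiation rule spelled out, here is a back-of-the-envelope sketch in Python (the per-lane numbers are the approximate figures above; nothing here is unRAID- or driver-specific):

```python
# Approximate usable bandwidth per lane, in MB/sec, by PCIe generation
PER_LANE_MB_S = {1: 250, 2: 500, 3: 1000}

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    """The link runs at the lowest generation and the smallest lane count
    shared by the card and the slot."""
    gen = min(card_gen, slot_gen)        # PCIe 2.0 card in a 3.0 slot -> runs at 2.0
    lanes = min(card_lanes, slot_lanes)  # x8 card in an x4 slot -> 4 usable lanes
    return gen, lanes, PER_LANE_MB_S[gen] * lanes

print(negotiated_link(2, 8, 3, 16))  # (2, 8, 4000) -> PCIe 2.0 x8, ~4 GB/sec total
```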

 

If you have, say, a PCIe 1.1 card with 1 lane, and it supports 4 drives, then your performance per drive would be determined by dividing the 250 MB/sec bandwidth by 4 = ~62.5 MB/sec max speed if all four drives are running in parallel. Since many drives are capable of 2-3x that speed, you would be limited by the card. If the link were PCIe 2.0, you'd have 500 MB/sec for the 4 drives, meaning 125 MB/sec per drive. While drives can run faster on their outer cylinders, this would likely be acceptable speed, with only minor impact on parity check speeds. With a PCIe 3.0 link, you'd have 250 MB/sec per drive for each of the 4 drives. More than fast enough for any spinner, but maybe not quite fast enough for 4 fast SSDs all running at full speed at the same time.
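
 

To make that arithmetic concrete, here's the same division written as a tiny, purely illustrative helper:

```python
def per_drive_mb_s(gen, lanes, drives):
    """Approximate per-drive throughput, in MB/sec, with all drives busy at once."""
    per_lane = {1: 250, 2: 500, 3: 1000}[gen]  # approximate MB/sec per lane
    return per_lane * lanes / drives

print(per_drive_mb_s(1, 1, 4))  # 62.5  - PCIe 1.1 x1, 4 drives: limited by the card
print(per_drive_mb_s(2, 1, 4))  # 125.0 - PCIe 2.0 x1, 4 drives: acceptable for spinners
print(per_drive_mb_s(3, 1, 4))  # 250.0 - PCIe 3.0 x1, 4 drives: plenty for spinners
```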

 

You might think of each step in PCIe spec as equivalent to doubling the number of lanes from a performance perspective. So a PCIe 1.1 x8 card would be roughly the same speed as a PCIe 2.0 x4 card.


Hope that background allows you to answer most any questions about controller speed.

 

I should note that PCIe 1.x and 2.0 controller cards are the most popular. And as I said, x1, x4, and x8 are the most common widths.

 

If you are looking at a 16 port card, and looking at the speed necessary to support 16 drives in a single controller ...

 

PCIe 1.1 at x4 = 1 GB/sec / 16 = 62.5 MB/sec per drive - significant performance impact with all drives driven

PCIe 1.1 at x8 / PCIe 2.0 at x4 = 2 GB/sec / 16 = 125 MB/sec per drive - some performance impact with all drives driven

PCIe 2.0 at x8 / PCIe 3.0 at x4 = 4 GB/sec / 16 = 250 MB/sec per drive - no performance limitations for spinning disks (at least today)

PCIe 3.0 at x8 = 8 GB/sec / 16 = 500 MB/sec per drive - no performance limitations even for 16 SSDs.
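
 

If you want to sanity-check those rows yourself, the same little back-of-the-envelope helper from above reproduces them:

```python
def per_drive_mb_s(gen, lanes, drives):
    return {1: 250, 2: 500, 3: 1000}[gen] * lanes / drives  # approximate MB/sec

# The four rows above, all with 16 drives on one controller
for gen, lanes in [(1, 4), (1, 8), (2, 8), (3, 8)]:
    print(f"PCIe {gen}.x at x{lanes}: {per_drive_mb_s(gen, lanes, 16):.1f} MB/sec per drive")
# 62.5, 125.0, 250.0, 500.0
```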

 

The speeds listed are approximate, but close enough for government work. Keep in mind, it is very uncommon to drive all drives at max speed simultaneously. But the unRAID parity check does exactly that, and parity check speed is a common measure here. If you are willing to sacrifice parity check speed, a slower net controller speed will likely not hold you back for most non-parity check operations.

 

For 16 drives on one controller, I'd recommend a PCIe 2.0 slot at x8. For example, an LSI SAS 9201-16i.

 

Here is a pretty decent article on PCIe if you need more info:

 

http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/

 

 

 

 


NewEgg 1151 CPU 2x+ PCI-e 16x slots

 

All these boards have 2+ PCI-E x16 slots on them; there are plenty of boards that do.  My first unRAID box was (and currently still is) running on an 'enthusiast' board as opposed to a 'server'-class board.  My next system (that's in process) is running on server-grade hardware, with one x8 slot and one x16 slot, plus an onboard 4-port SAS controller (plus an additional 4-port connector going to the onboard SATA controller).

 

A good, server-grade motherboard that will do what you need (and includes IPMI, which is certainly handy in an unRAID setup):

 

SUPERMICRO MBD-X11SSL-F-O

This board has two x8 slots and one x16, which should give you the 30+ drives you're wanting (with two of the cards suggested above), and STILL leave you an x16 slot you can use for video, or additional drives down the road.

 

 

 

 


miogpsrocks -

 

You created two very similar threads which I have merged.

 

The card you are looking at is a PCIe 1.0 x8 controller card. Although you may be able to attach 16 drives to it, performance will suffer moderately.

 

If you look at my post 6 posts up it will help you understand what this means. Also, you will see this is the performance you would get ...

 

PCIe 1.1 at x8 = 2 GB/sec / 16 drives = 125 MB/sec per drive - some performance impact with all drives driven

 

So if you put it in an x8 or x16 slot (both will result in 8 lanes), you will experience a mild controller bottleneck at the beginning of parity checks.

 

These are designed to hook up drives EXTERNAL to the case, not internal. It will take special cables to utilize them, but they are available. You will need 4 of those cables, which will likely cost in the $35-$45 per cable range. (With an internal card, you'd need a different but similar cable, for similar cost.)

 

BUT, I would DEFINITELY recommend looking at this card instead:

 

LSI SAS9201-16e

 

It is a PCIe 2.0 card that looks darn near equivalent to the one you are looking at. You should experience no controller bottlenecks. It is 2x as fast and priced only a little more.

 

There is also an internal version, the LSI SAS9201-16i. It costs more, but will make for a nicer install if the goal is to mount the drives internally. I have one of these and it works nicely.

 

If you used two of these PCIe 2.0 cards, and had to put one of them into an x4 slot, you would have the moderate performance penalty with all drives driven mentioned above. But if you only used 12 of the 16 ports, it would likely be fast enough to eliminate any bottleneck. Personally, I don't think the mild bottleneck of having all 16 drives driven would be enough to be objectionable. 125 MB/sec sustained speed is quite respectable, and the speed would likely drop below that very soon after a parity check started.
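
 

Running the same quick (purely illustrative) math for that x4 scenario:

```python
def per_drive_mb_s(gen, lanes, drives):
    return {1: 250, 2: 500, 3: 1000}[gen] * lanes / drives  # approximate MB/sec

print(per_drive_mb_s(2, 4, 16))  # 125.0  - PCIe 2.0 x4 with all 16 ports in use
print(per_drive_mb_s(2, 4, 12))  # ~166.7 - same slot with only 12 drives attached
```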

 


The LSI SAS9201-16e card would definitely be a better purchase, and won't cost you much more.  It supports drives bigger than 2TB as well.  If you're wanting to do a 30-drive unRAID system, you really should replace that motherboard with something more suited to it.  Me personally, I'd not void my warranty by cutting part of a slot off just to fit a card.

 

 

