
SAS Controller to Maximize Expander



Good Morning,

 

Having just gotten in on the E5-2670 deals, I'm looking to clean up and optimize my storage server... and, I suppose, to simplify it to a degree.

 

I will be re-installing Unraid on the box, which means any card needs to support IT mode (AFAIK). Anyway, what I currently have is an IBM M1015 connected to an HP SAS Expander running 20 (soon to be 24) drives. Now, AFAIK the M1015 is a PCIe 2.0 card. My question is: will I see a performance jump if I purchase a PCIe 3.0 controller card?

 

If so, any suggestions on cards? I was looking at the HP H200... thoughts?

 

Thanks in advance!

 

~Spritz

Link to comment

Your performance will depend on the bandwidth and latency of the HDD > backplane > expander > HBA > PCIe bus chain.  Determine the specs of each link to identify potential bottlenecks before changing anything.

 

The M1015 will easily handle 24 HDDs without coming near its throughput limits.  If you add SSDs in the future, they would be limited to 3Gbps for sure.  If you plan on adding SSD cache drives, you could use one port on the HBA to connect to the expander for the HDDs, and the second port directly to the SSDs.  Unless you are using more than 4 enterprise SSDs on the same controller, I would not worry about PCIe 3.0 or a 12Gbps HBA - the M1015 is proven, reliable and fast enough.
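
As a rough illustration of that "check each link" approach, here is a minimal Python sketch; the throughput figures are assumed theoretical values for this kind of M1015 + expander chain, not measurements:

def per_drive_ceiling(link_limits_mb_s, n_drives):
    # The slowest shared link in the chain sets the total ceiling;
    # divide it by the number of drives active at the same time.
    return min(link_limits_mb_s.values()) / n_drives

# Assumed theoretical limits (MB/s) for each shared link in the chain:
chain = {
    "expander uplink (1x SFF-8087, 4 x 6Gbps)": 3000,
    "HBA PCIe 2.0 x8 slot": 4000,
}

print(per_drive_ceiling(chain, 24))  # -> 125.0 MB/s per drive with a single uplink and all 24 drives busy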

 

 

Link to comment


 

I'm afraid that I must disagree with you.  The math shows that the M1015 tops out at about 130MB/s per drive when it has 24 drives connected, and that is roughly halved if you're only using a single connection to the expander.  This has the potential to start bottlenecking the drives (if/when they're all in use).  That being said, the question was more whether the HP expander/controller will be able to use the additional bandwidth available to it.

 

Thanks!

 

~Spritz

Link to comment

OK, it's possible I'm calculating incorrectly; here are the numbers I used:

 

A single SFF-8087 connection has 4 SAS lanes @ 6Gbps each, for 24Gbps total.

1Gbps = 125MB/s, so a single SFF-8087 link provides 3000MB/s of throughput.

Divided across 24 drives, each drive gets 125MB/s.

A dual connection allows 250MB/s per drive, way beyond current HDD speeds.
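
As a quick sanity check, that arithmetic in Python (theoretical numbers only, as above):

lanes_per_link = 4           # one SFF-8087 connector carries 4 SAS lanes
gbps_per_lane = 6            # 6Gbps per lane
mb_s_per_gbps = 125          # 1Gbps = 125MB/s
drives = 24

single_link = lanes_per_link * gbps_per_lane * mb_s_per_gbps  # 3000 MB/s
print(single_link / drives)       # 125.0 MB/s per drive on a single link
print(2 * single_link / drives)   # 250.0 MB/s per drive on a dual link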

 

With Unraid, the only time all 24 drives would be spinning for an extended time is during a parity check, and then it is all about latency, which is influenced by disk platter density and rotation speed.

 

The point I was trying to make is that to optimize and simplify your setup you have to know where the bottleneck is before you start changing parts.  For example, is your HP expander 3Gbps or 6Gbps?  How many 5400rpm drives do you have?  Any 3Gbps drives?  Odds are that your drives are the bottleneck, but without specifics it is difficult to be sure.

Link to comment

 

 

With Unraid, the only time all 24 drives would be spinning for an extended time is during a parity check, and then it is all about latency, which is influenced by disk platter density and rotation speed.

I'm not going to talk about the speed of the controller, etc., but I felt I must correct you before the community starts thinking what you said is correct, because it isn't.

 

No, that's not the only time; what's missing are the most critical situations, when there is a failed drive. One obviously knows that drives do fail, otherwise one wouldn't be using a parity-protected system in the first place.

 

All drives are used in the following situations:

Parity builds

Parity checks

Turbo writes if that mode is enabled

Failed drive simulated for reads

Failed drive simulated for writes

Failed drive rebuilds.

Link to comment

OK, it's possible I'm calculating incorrectly; here are the numbers I used:

 

A single SFF-8087 connection has 4 SAS lanes @ 6Gbps each, for 24Gbps total.

1Gbps = 125MB/s, so a single SFF-8087 link provides 3000MB/s of throughput.

Divided across 24 drives, each drive gets 125MB/s.

 

True, but this is the maximum theoretical bandwidth. I've never tested with an expander, but in my experience real-world max speed is usually considerably less; I'd guess at best 80 to 90%, so ~100MB/s per disk.

 

 

A dual connection allows 250MB/s per drive, way beyond current HDD speeds.

 

Using both connections, the limit is going to be the PCIe 2.0 x8 slot. In this case the maximum theoretical bandwidth is 4000MB/s; real-world, and this one I did test, it is <3000MB/s, so at best 125MB/s per disk.

 

So while still a decent speed, it will be a bottleneck even for modern 5400rpm disks.
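
The same per-disk arithmetic with those real-world figures, as a quick Python check (the 3000MB/s value is the tested figure mentioned above):

pcie2_x8_theoretical = 4000   # MB/s, PCIe 2.0 x8 slot, theoretical
pcie2_x8_measured = 3000      # MB/s, real-world figure from the test above
drives = 24
print(pcie2_x8_measured / drives)   # 125.0 MB/s per disk at best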

 

Link to comment

FYI: I ran a test with 16 drives (Seagate ST3000DM001), first connected to 2 M1015 controllers, then connected to my SAS expander (RES2SV240) with a single cable connecting it to the M1015 (so the slowest speed from the expander).  I ran a parity check to completion on both setups.  When I used my expander I got a 25% reduction in speed on the parity check: 90MB/s on the outer tracks and less than that on the inner, though I don't remember now what it was.  It may not have been much below the 90MB/s of the outer tracks.

Link to comment

OK, it's possible I'm calculating incorrectly; here are the numbers I used:

 

A single SFF-8087 connection has 4 SAS lanes @ 6Gbps each, for 24Gbps total.

1Gbps = 125MB/s, so a single SFF-8087 link provides 3000MB/s of throughput.

Divided across 24 drives, each drive gets 125MB/s.

A dual connection allows 250MB/s per drive, way beyond current HDD speeds.

 

With Unraid, the only time all 24 drives would be spinning for an extended time is during a parity check, and then it is all about latency, which is influenced by disk platter density and rotation speed.

 

The point I was trying to make is that to optimize and simplify your setup you have to know where the bottleneck is before you start changing parts.  For example, is your HP expander 3Gbps or 6Gbps?  How many 5400rpm drives do you have?  Any 3Gbps drives?  Odds are that your drives are the bottleneck, but without specifics it is difficult to be sure.

 

Your math forgot the overhead associated with PCIe encoding (~20% hit), so the numbers are about 2500MB/s... also, the PCIe 2.0 bus limits you to about 3200MB/s, so that introduces a bottleneck.  So napkin math says that each drive will have about 133MB/s available to it... which is brutal for SSDs (not that I'd connect an SSD to the expander), and does slow down the drives.
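
That napkin math as a short Python sketch (all figures are this post's estimates, not measurements):

sff8087_theoretical = 3000    # MB/s per link, before encoding overhead
encoding_overhead = 0.20      # the ~20% encoding hit mentioned above
usable_per_link = sff8087_theoretical * (1 - encoding_overhead)   # ~2400 MB/s
dual_link = 2 * usable_per_link                                    # ~4800 MB/s
pcie2_x8_practical = 3200     # MB/s, practical PCIe 2.0 x8 limit
drives = 24

print(min(dual_link, pcie2_x8_practical) / drives)   # ~133 MB/s per drive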

 

To answer your questions, though: all drives are 7200rpm, and the SAS expander is SATA2/SAS3.

 

~Spritz

Link to comment

Sorry to steer you wrong; I will go back to listening and learning.  If you do add the 12Gbps HBA, please post a comparison with 6Gbps - now I'm intrigued by the possibilities.

 

No apology is necessary; I appreciate the time and effort you put in.  Your post made me dig further, as it made me remember something I read long ago... so thank you.

 

I'll try to post a comparison if I ever get around to finding a reasonably priced card :)

 

~Spritz

Link to comment
