
PCIe SATA Controller Speed Impact


Recommended Posts

Hi, I'm rethinking my new Unraid setup. I'm looking at an H200 6Gb PCIe SAS/SATA 8-port RAID controller in IT mode for an Unraid VM hosted on ESXi. The card is PCIe 2.0 x8, but the only free slot on my motherboard is physically x8 and runs at only PCIe 2.0 x4 speed. Will this slower slot affect transfer speeds if I also have SSD cache drives? I only have a 1 Gb/s network but plan on upgrading to 2.5 Gb/s or 5 Gb/s in the near future. Thanks!

 

David


Thanks go to @SSD for this info:

 

There are two things at play here - one is the PCIe version number (1.0/1.1, 2.0, 3.0) and the other is the number of "lanes".

 

PCIe 1.x is 250 MB/sec per lane

PCIe 2.0 is 500 MB/sec per lane

PCIe 3.0 is 1000 MB/sec per lane

 

Each card's maximum number of lanes is determined by the card's physical design (i.e., literally the length of the PCIe edge connector).

 

A 1-lane card is the shortest, and a 16-lane card is the longest.

 

The motherboard and the card will negotiate a specification based on the highest spec both the slot and the card support. So a PCIe 2.0 card in a PCIe 3.0 slot will run at PCIe 2.0 speed.

 

Similarly, they will agree on the number of lanes based on the "shorter" of the two - the card or the slot. Most disk controller cards are either 1-lane, 4-lane, or 8-lane, often referred to as x1, x4, or x8. If you put an x4 card in an x8 slot, you will only have 4 usable lanes. And if you put an x8 card in an x4 slot, you will also have 4 usable lanes.

Putting an x8 card into an x4 slot is not always physically possible, because the x4 slot is too short. But some people have literally melted away the back end of the slot to accommodate the longer card, which is reported to work just fine. Making things just a little more confusing, some motherboards have an x8 physical slot that is actually just wired for x4. So you can put a longer card in there with no melting, but it only uses 4 of the lanes.
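
If it helps to see that negotiation as code, here is a tiny Python sketch (just my own illustration, using the approximate per-lane figures above):

# Sketch of PCIe link negotiation: both sides settle on the lower
# generation and the narrower width. Figures are the approximate
# per-lane speeds quoted above.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 1000}  # PCIe gen -> MB/sec per lane

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)          # lower PCIe generation wins
    lanes = min(card_lanes, slot_lanes)    # shorter width wins
    return gen, lanes, PER_LANE_MB_S[gen] * lanes  # total MB/sec

# Example: a PCIe 2.0 x8 card (like the H200) in an x8 slot wired for x4:
print(negotiated_link(card_gen=2, card_lanes=8, slot_gen=2, slot_lanes=4))
# -> (2, 4, 2000): the link runs at PCIe 2.0 x4, roughly 2 GB/sec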

 

If you have, say, a PCIe 1.1 card with 1 lane, and it supports 4 drives, then your performance per drive would be determined by dividing the 250 MB/sec bandwidth by 4 = ~62.5 MB/sec max speed if all four drives are running in parallel. Since many drives are capable of 2-3x that speed, you would be limited by the card. If the slot were a PCIe 2.0 slot, you'd have 500 MB/sec for the 4 drives, meaning 125 MB/sec per drive. While drives can run faster on their outer cylinders, this would likely be acceptable speed, with only minor impact on parity check speeds. With a PCIe 3.0 slot, you'd have 250 MB/sec per drive for each of the 4 drives. More than fast enough for any spinner, but maybe not quite fast enough for 4 fast SSDs all running at full speed at the same time.
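
The same division as a quick script (again just a sketch with the approximate numbers):

# Per-drive bandwidth when a controller's link is shared by drives
# all running in parallel.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 1000}  # approximate MB/sec per lane

def per_drive_mb_s(gen, lanes, drives):
    return PER_LANE_MB_S[gen] * lanes / drives

for gen in (1, 2, 3):
    print(f"PCIe gen {gen}, x1 link, 4 drives: {per_drive_mb_s(gen, 1, 4):.1f} MB/sec each")
# PCIe gen 1, x1 link, 4 drives: 62.5 MB/sec each
# PCIe gen 2, x1 link, 4 drives: 125.0 MB/sec each
# PCIe gen 3, x1 link, 4 drives: 250.0 MB/sec each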

 

You might think of each step in PCIe spec as equivalent to doubling the number of lanes from a performance perspective. So a PCIe 1.1 x8 card would be roughly the same speed as a PCIe 2.0 x4 card.


Hope that background allows you to answer most any questions about controller speed.

 

I should note that PCIe 1.x and 2.0 controller cards are the most popular. And as I said, x1, x4, and x8 are the most common widths.

 

If you are looking at a 16-port card, and at the speed necessary to support 16 drives on a single controller (the short script after the list reruns these numbers) ...

 

PCIe 1.1 at x4 = 1 GB/sec / 16 = 62.5 MB/sec per drive - significant performance impact with all drives driven

PCIe 1.1 at x8 / PCIe 2.0 at x4 = 2 GB/sec / 16 = 125 MB/sec per drive - some performance impact with all drives driven

PCIe 2.0 at x8 / PCIe 3.0 at x4 = 4 GB/sec / 16 = 250 MB/sec per drive - no performance limitations for spinning disks (at least today)

PCIe 3.0 at x8 = 8 GB/sec / 16 = 500 MB/sec per drive - no performance limitations even for 16 SSDs.
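
And in script form:

# Rerun of the four rows above with the same approximate numbers.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 1000}  # MB/sec per lane
links = [("PCIe 1.1 x4", 1, 4),
         ("PCIe 1.1 x8 / 2.0 x4", 1, 8),
         ("PCIe 2.0 x8 / 3.0 x4", 2, 8),
         ("PCIe 3.0 x8", 3, 8)]
for name, gen, lanes in links:
    total = PER_LANE_MB_S[gen] * lanes
    print(f"{name}: {total} MB/sec total -> {total / 16:.1f} MB/sec per drive")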

 

The speeds listed are approximate, but close enough for government work. Keep in mind, it is very uncommon to drive all drives at max speed simultaneously. But the unRAID parity check does exactly that, and parity check speed is a common measure here. If you are willing to sacrifice parity check speed, a slower net controller speed will likely not hold you back for most non-parity check operations.

 

For 16 drives on one controller, I'd recommend a PCIe 2.0 slot at x8. For example, an LSI SAS 9201-16i.

 

Here is a pretty decent article on PCIe if you need more info:

 

http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/

17 hours ago, jpowell8672 said:

Hi, thanks for the detailed reply. It's really informative. This is all new to me, so it's taking me a while to take it all in. I'm basically going to have 4 WD Red 5400 RPM SATA drives and 2 SATA SSDs or 2 NVMe drives. Will either PCIe 2.0 x4 or x8 cause a bottleneck in my scenario? If so, is there a PCIe 3.0 version of an 8-port SATA/SAS controller like the LSI 9211-8i? Thanks again for all the info!

 

David

 

43 minutes ago, davidst95 said:

 


 

Yes, the 9207-8i, which is PCIe 3.0, as long as your motherboard supports PCIe 3.0. It works out of the box with Unraid; no need to flash. I am using the 9207-8i with Unraid, and so are a lot of others. This is where I got mine: https://www.ebay.com/itm/LSI-SAS-9207-8i-Storage-controller-8-Channel-SATA-6Gb-s-OEM/383089458335?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649

Watch out for counterfeit cards, make sure you have or get the correct forward SAS SFF-8087 36-pin to 4x SATA 7-pin HDD cables, and note that all these HBA/RAID cards are meant for server cases with high airflow, so you will want to add a fan to keep the card cool so it doesn't die an early death.
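
For your 4 HDD + 2 SSD mix specifically, here's a rough back-of-the-envelope check (the per-drive speeds below are my assumptions, not measured figures):

# Worst case: every drive on the HBA streaming at full speed at once.
hdd_mb_s = 180   # assumed sustained speed of a 5400 RPM Red
ssd_mb_s = 550   # assumed sustained speed of a SATA SSD
demand = 4 * hdd_mb_s + 2 * ssd_mb_s    # -> 1820 MB/sec
pcie2_x4 = 4 * 500                      # ~2000 MB/sec link
pcie3_x4 = 4 * 1000                     # ~4000 MB/sec link
print(f"worst-case demand: {demand} MB/sec")
print(f"PCIe 2.0 x4 headroom: {pcie2_x4 - demand} MB/sec")  # tight
print(f"PCIe 3.0 x4 headroom: {pcie3_x4 - demand} MB/sec")  # comfortable

So PCIe 2.0 x4 only gets tight if every drive runs flat out at the same time; PCIe 3.0 x4 leaves plenty of headroom.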

9 minutes ago, jpowell8672 said:


 

Ok, thanks for the recommendation on which card to get. I only have a slot with PCIe 3.0 x4, but I think I'll get that one to plan for the future.

 

