Any PCIe 3.0 x16 controllers?



I want to use just one controller for 28 drives in my system. With an LSI 9300-8i and an Intel RES3TV360 I can get just over 200MB/s on all of the drives during a parity check. While this is fine, I would prefer to get max speed if there are PCIe 3.0 x16 SAS controllers out there. I can't seem to find much when I do a Google search.

Link to comment
On 5/31/2021 at 10:40 PM, ChatNoir said:

There are 16i and even some 24i variants (only saw MegaRAID though in my quick search). Never heard of more than 24.

I'm not talking about the number of ports on the controller. I am wondering if there are any controllers that are PCIe 3.0 x16, for more bandwidth on a single controller.

Link to comment

It looks to me like you are not limited by PCIe bandwidth. PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec. If you are getting a little over 200 MB/sec each for 28 drives, that's ~6000 MB/sec. (You are obviously using a dual-link connection HBA <==> expander, which is good for >> 7000 [9000].) Either your 9300 does not have the muscle to exceed 6000, or you have one (or more) drives that are stragglers, handicapping the (parallel) parity operation. (I'm assuming you are not CPU-limited; I don't use Unraid.)
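
If you want to rule out a straggler, a quick sanity check is to time a raw sequential read on each drive by itself while the array is idle, something along these lines (device names are only examples; adjust for your system):

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    printf '%s: ' "$d"
    hdparm -t "$d" | grep 'Timing buffered'
done

Any drive that reads well below the others is what's pacing the (parallel) parity check.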

 

Link to comment
5 hours ago, xxredxpandaxx said:

"Supports 8 inputs and 28 outputs configuration" from the data sheet of the RES3TV360

OK, thanks. I remember reading it only supported 24, and the Adaptec that uses the same PMC chip only mentions 24 devices.

 

1 hour ago, UhClem said:

PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec.

 

In my experience it's closer to 6000 MB/s; even LSI mentions 6400 MB/s as the maximum usable:

[LSI table of typical usable PCIe bandwidth]

 

Assuming the OP is using SATA3 devices (not SAS3), it also means the PMC expander chip has something equivalent to LSI's Databolt, or max bandwidth would be limited to 4400MB/s with dual link. In my tests I could get around 5600MB/s with a PCIe x8 HBA and a Databolt-enabled LSI expander, so while that kind of technology certainly helps, there is always some extra overhead and it's not exactly the same as true SAS3 performance.

 

 

As for the original question, the only x16 HBA I know of is the LSI 9405W-16i/e, though it's probably not worth the investment for this, and there's a risk that performance won't improve by much (if at all) unless you're using SAS3 devices.

 

 

Link to comment
7 hours ago, JorgeB said:

In my experience it's closer to 6000 MB/s; even LSI mentions 6400 MB/s as the maximum usable:

[LSI table of typical usable PCIe bandwidth]

 

In my direct, first-hand experience, it is 7100+ MB/sec. (I also measured 14,200+ MB/sec on PCIe3 x16.) I used a PCIe3 x16 card supporting multiple (NVMe) devices. [In an x8 slot for the first measurement.]

[Consider: a decent PCIe3 x4 NVMe SSD can attain 3400-3500 MB/sec.]

 

That table's "Typical" numbers factor in an excessive amount of transport-layer overhead.

Quote

Assuming the OP is using SATA3 devices (not SAS3) it also means the PMC expander chip has something equivalent to LSI's Databolt

I'm pretty certain that the spec for SAS3 expanders eliminates the (SAS2) "binding" of link speed to device speed. I.e., Databolt is just Marketing.

Quote

In my tests I could get around 5600MB/s with a PCIe x8 HBA and a Databolt-enabled LSI expander,

Well, that's two tests of the 9300, with different dual-link SAS3 expanders and different device mixes, that are both capped at ~5600 ... prognosis: muscle deficiency [in the 9300].

 

Link to comment
In my direct, first-hand experience, it is 7100+ MB/sec. (I also measured 14,200+ MB/sec on PCIe3 x16.) I used a PCIe3 x16 card supporting multiple (NVMe) devices. [In an x8 slot for the first measurement.]
I'm referring to HBAs; I've never gotten, or seen anyone get, more than about 3000MB/s with an x8 PCIe 2.0 HBA or 6000MB/s with an x8 PCIe 3.0 HBA. I won't say it's not possible, just not typical, and we are talking about an HBA here.
 
I'm pretty certain that the spec for SAS3 expanders eliminates the (SAS2) "binding" of link speed to device speed.
AFAIK it's not part of the SAS3 spec. LSI has Databolt, and PMC mentions "SAS and SATA edge-buffering preserves customers' investment by improving performance with existing 3G and 6G drives", but in any case the wide link is never 12G native, so some performance degradation would be normal. I expect that if I used SAS3 devices with the 9300-8i and the expander I would get closer to the 6400MB/s typical speed announced.
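
If anyone wants to check what the drives and expander phys actually negotiated, the Linux SAS transport class exposes it in sysfs; a rough sketch (phy names and output vary by system):

for p in /sys/class/sas_phy/*; do
    printf '%s: ' "${p##*/}"
    cat "$p/negotiated_linkrate"
done

With SATA3 or SAS2 devices behind a SAS3 expander the device phys should report 6.0 Gbit even when the HBA-to-expander phys run at 12.0 Gbit.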



Link to comment
12 hours ago, JorgeB said:

I'm referring to HBAs; I've never gotten, or seen anyone get, more than about 3000MB/s with an x8 PCIe 2.0 HBA or 6000MB/s with an x8 PCIe 3.0 HBA. I won't say it's not possible, just not typical, and we are talking about an HBA here.

Please keep things in context.

OP wrote:

Quote

I want to use just one controller for 28 drives in my system. With an LSI 9300-8i and an Intel RES3TV360 I can get just over 200MB/s on all of the drives during a parity check. While this is fine, I would prefer to get max speed if there are PCIe 3.0 x16 SAS controllers out there.

Since the OP seemed to think that an x16 card was necessary, I replied:

Quote

It looks to me like you are not limited by PCIe bandwidth. PCIe gen3 @ x8 is good for (real-world) ~7000 MB/sec.

And then you conflated the limitations of  particular/"typical" PCIe3 SAS/SATA HBAs with the limits of the PCIe3 bus itself.

 

In order to design/configure an optimal storage subsystem, one needs to understand, and differentiate, the limitations of the PCIe bus from the designs, and shortcomings, of the various HBA (& expander) options.

 

If I had a single PCIe3 x8 slot and 32 (fast enough) SATA HDDs, I could get 210-220 MB/sec on each drive concurrently. For only 28 drives, 240-250. (Of course, you are completely free to doubt me on this ...) And, two months ago, before prices of all things storage got crazy, HBA + expansion would have cost < $100.

=====

Specific problems warrant specific solutions. Eschew mediocrity.

 

Edited by UhClem
HDDs ==> SATA HDDs
Link to comment
13 hours ago, UhClem said:

In order to design/configure an optimal storage subsystem, one needs to understand, and differentiate, the limitations of the PCIe bus, from the designs, and shortcomings, of the various HBA (& expander) options.

 

OK, granted, but also keep in mind that, again assuming the OP is using SATA devices, he will suffer some performance loss due to the SAS wide link not really being 12G, and that is probably what's limiting his current bandwidth, not the PCIe x8 slot.

 

An x4 SAS link has a max bandwidth of 1200MB/s (1100MB/s usable) and an x4 SAS2 link maxes out at 2400MB/s (2200MB/s usable); both of these I confirmed myself in the tests linked above, and the results are very consistent across a number of different devices. An x4 SAS3 link maxes out at 4800MB/s, and I'm going to assume 4400MB/s usable; unfortunately I have no way of testing this since I'd need some SAS3 SSDs, but LSI points to the same values:

 

[LSI table: x4 SAS link bandwidth, raw vs. usable]

 

Now, if we accept those, I can show the performance degradation from not using a real SAS3 link:

 

[Screenshots: per-device read speeds for 8 SATA3 SSDs connected directly to the 9300-8i (left) and for the same SSDs behind the expander (right)]

 

Left side: 8 SATA3 SSDs directly connected to an LSI 9300-8i; right side: the same, but now with an expander in the middle. So while Databolt does a very good job (or the max combined speed would only be 2.2GB/s), there's still about 10% degradation from what it should be with real SAS3 devices (4.4GB/s), and IMHO that's likely the limit the OP is hitting, in which case getting a PCIe x16 HBA won't really help.

 

14 hours ago, UhClem said:

If I had a single PCIe3 x8 slot and 32 (fast enough) SATA HDDs, I could get 210-220 MB/sec on each drive concurrently. For only 28 drives, 240-250. (Of course, you are completely free to doubt me on this ...)

I would say that's not possible with SATA devices because of the above, and I very much doubt it's possible even with SAS devices. I believe that in optimal conditions around 6500MB/s would be possible; I don't think 7000MB/s+ can be done, but I'm happy to be proved wrong ;) if anyone can point to someone achieving those speeds in the real world with an HBA/expander. I would like to get my hands on some SAS3 SSDs to test myself, maybe some day if I can find some cheap ones. :)

 

 

 

 

Link to comment
22 hours ago, JorgeB said:

I believe that in optimal conditions around 6500MB/s would be possible, don't think 7000MB/s+ can be done

I remembered that while I don't have the hardware to test the real max for an x8 PCIe 3.0 slot, I can test with the HBA in an x4 slot, so: same 9300-8i with 8 directly connected SSDs as above, but this time I enabled bifurcation on the slot, effectively turning it into an x4 slot:

 

02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
                        LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)
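
For reference, that status line comes from lspci; something like the following shows the negotiated link for the HBA (the 02:00.0 address is from this system, adjust as needed):

lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'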

 

[Screenshot: per-device read speeds for the 8 SSDs with the 9300-8i running at PCIe 3.0 x4]

 

So an x8 slot should be able to do around 6800MB/s, possibly a little more with different hardware. I still think 7000MB/s+ will be very difficult, but I won't say it's impossible. Again, though, I don't think PCIe bandwidth is what's limiting the OP's speed; IMHO it's more likely the expander being used with non-12G devices.

 

Link to comment
On 6/5/2021 at 5:44 AM, JorgeB said:

 

[Screenshots: per-device read speeds for 8 SATA3 SSDs connected directly to the 9300-8i (left) and for the same SSDs behind the expander (right)]

 

Left side: 8 SATA3 SSDs directly connected to an LSI 9300-8i; right side: the same, but now with an expander in the middle,

Excellent evidence! But, to me, it's very disappointing that the implementations (both LSI & PMC, apparently) of this feature are this sub-optimal. Probably a result of cost/benefit analysis with regard to SATA users (the peasant class: "Let them eat cake."). Also surprising that this hadn't come to light previously.

 

Speaking of the LSI/PMC thing ... Intel's SAS3 expanders (such as the OP's) are documented, by Intel, to use PMC expander chips. How did you verify that your SM backplane actually uses an LSI expander chip (I could not find anything from Supermicro themselves, and I'm not confident relying on a "distributor" website)? Do any of the sg_ utils expose that detail? The reason for my "concern" is that the coincidence of both the OP's & your results, with the same 9300-8i and same test (Unraid parity check) [your 12*460 =~ OP's 28*200] but different??? expander chips, is curious.

 

Link to comment
22 hours ago, JorgeB said:

I can test with the HBA in an x4 slot ...


02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
                        LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)

[Screenshot: per-device read speeds for the 8 SSDs with the 9300-8i running at PCIe 3.0 x4]

 

So an x8 slot should be able to do around 6800MB/s, possibly a little more with different hardware. I still think 7000MB/s+ will be very difficult, but I won't say it's impossible,

You're getting there ... 😀 Maybe try a different testing procedure: see the attached script. I use variations of it for SAS/SATA testing.

 

Usage:

~/bin [ 1298 ] # ndk a b c d e
  /dev/sda: 225.06 MB/s
  /dev/sdb: 219.35 MB/s
  /dev/sdc: 219.68 MB/s
  /dev/sdd: 194.17 MB/s
  /dev/sde: 402.01 MB/s
Total =    1260.27 MB/s

ndk_sh.txt
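
For anyone who can't grab the attachment: the idea is simply to kick off hdparm -t on all the listed drives at once and sum the per-drive results. A minimal sketch of that approach (not the actual ndk script) could look like this:

#!/bin/bash
# Minimal parallel-read sketch (not the actual ndk script): read all the
# given drives concurrently with hdparm -t and sum the per-drive results.
# Usage: ./readtest a b c d   -> tests /dev/sda /dev/sdb /dev/sdc /dev/sdd
total=0
for suf in "$@"; do
    hdparm -t "/dev/sd$suf" > "/tmp/rt_$suf" &
done
wait
for suf in "$@"; do
    mbs=$(awk '/Timing buffered/ {print $(NF-1)}' "/tmp/rt_$suf")
    printf '  /dev/sd%s: %s MB/s\n' "$suf" "$mbs"
    total=$(echo "$total + $mbs" | bc)
done
printf 'Total = %10.2f MB/s\n' "$total"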

Speaking of testing (different script though) ...

~/bin [ 1269 ] # nvmx 0 1 2 3 4
/dev/nvme0n1: 2909.2 MB/sec
/dev/nvme1n1: 2907.0 MB/sec
/dev/nvme2n1: 2751.0 MB/sec
/dev/nvme3n1: 2738.8 MB/sec
/dev/nvme4n1: 2898.5 MB/sec
Total = 14204.5 MB/sec
~/bin [ 1270 ] # for i in {1..10}; do nvmx 0 1 2 3 4 | grep Total; done
Total = 14205.8 MB/sec
Total = 14205.0 MB/sec
Total = 14207.5 MB/sec
Total = 14205.8 MB/sec
Total = 14203.3 MB/sec
Total = 14210.6 MB/sec
Total = 14207.0 MB/sec
Total = 14208.0 MB/sec
Total = 14203.4 MB/sec
Total = 14201.9 MB/sec
~/bin [ 1271 ] #

PCIe3 x16 slot [on HP ML30 Gen10, E-2234 CPU], nothing exotic

 

 

Link to comment
2 hours ago, UhClem said:

But, to me, it's very disappointing that the implementations (both LSI & PMC, apparently) of this feature are this sub-optimal.

Interesting, I feel the other way, i.e. that they did a good job with it. I have an older HP expander that is SAS2 but SATA2 only, so with SATA devices it can only do 1100MB/s per wide link; without Databolt and the PMC equivalent we'd only be able to get 2200MB/s per wide link with a SAS3 HBA + expander. So 4000MB/s (and around 5500MB/s with dual link) seems good to me; I think it's difficult to expect that a 6G link would have the exact same performance as a native 12G link.

 

2 hours ago, UhClem said:

Probably a result of cost/benefit analysis with regard to SATA users

It's not exclusive to SATA, it's link-speed related: using SAS2 devices with a SAS3 HBA + expander will be the same, since they also link at 6G. And because there's no 12Gb/s SATA, nothing much can really be done about it; SAS2 users can upgrade to SAS3 devices if they really want max performance. Some more interesting info I found on this:

 

[LSI Databolt benchmark graph: read/write throughput vs. request size, with and without Databolt]

 

Note that read speed aligns well with my results with dual link and the expander:
 

Quote

 

10 x 510MB/s

12 x 460MB/s

 

 

Write speeds are even better, basically the same as native 12G; that is something I can never replicate, since the SSDs I have for testing are fast at reading but much slower at writing.

 

2 hours ago, UhClem said:

How did you verify that your SM backplane actually uses an LSI expander chip (I could not find anything from Supermicro themselves, and I'm not confident relying on a "distributor" website)? Do any of the sg_ utils expose that detail?

It can be seen for example with lsscsi -v:

 

[1:0:12:0]   enclosu LSI      SAS3x28          0601  -        
  dir: /sys/bus/scsi/devices/1:0:12:0  [/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host1/port-1:0/expander-1:0/port-1:0:12/end_device-1:0:12/target1:0:12/1:0:12:0]
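
The sg_ utils can show it too; a plain INQUIRY against the enclosure's generic device prints the same vendor/product strings, e.g. something like this (the sg number is just an example, use whichever one maps to the enclosure on your system):

sg_inq /dev/sg12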

 

 

Link to comment
2 hours ago, UhClem said:

Maybe try a different testing procedure: see the attached script. I use variations of it for SAS/SATA testing.

 

This was the result (PCIe 3.0 x4):

 

ndk_sh t u v w x y z aa
  /dev/sdt: 409.95 MB/s
  /dev/sdu: 409.91 MB/s
  /dev/sdv: 409.88 MB/s
  /dev/sdw: 410.22 MB/s
  /dev/sdx: 410.31 MB/s
  /dev/sdy: 410.54 MB/s
  /dev/sdz: 412.00 MB/s
  /dev/sdaa: 410.20 MB/s
Total =    3283.01 MB/s

 

I ran it 3 times and this was the best of the 3, so strangely it's a little slower than an Unraid read check.

 

2 hours ago, UhClem said:

Speaking of testing (different script though) ...

Do you mind sharing that one also? I have 4 NVMe devices in a bifurcated x16 slot but no good way of testing them, since an Unraid read check produces much slower than expected speeds with those.

 

2 hours ago, UhClem said:

Total = 14204.5 MB/sec

That's a good result, but I expect NVMe devices will be a little more efficient. Consider that you have the NVMe controller on the device(s) going directly to the PCIe bus, while with SAS/SATA you have the SAS/SATA controller on each device, then the HBA, and only then the PCIe bus, so I believe it can never be as fast as NVMe. I have no problem now acknowledging that the PCIe 3.0 bus itself can reach around 7GB/s with an x8 link, but IMHO those speeds will only be possible with NVMe devices; I still believe that with a SAS/SATA HBA it will always be a little slower, around 6.6-6.8GB/s.

 

P.S. What adapter are you using for 5 NVMe devices in one slot? I mostly find 4-device adapters for use on bifurcated slots; I guess yours is more expensive due to the PCIe bridge, but it still might be worth it to get one more NVMe device in a single slot while maintaining good performance.

 

Link to comment
20 hours ago, JorgeB said:

Interesting, I feel the other way, i.e. that they did a good job with it, ...

Certainly ... but as an old-school hardcore hacker, I wonder if it could have been (at least a few %) better. I have to wonder if any very large, and very competent, potential customer (e.g., GOOG, AMZN, MSFT) did a head-to-head comparison between LSI & PMC before placing their 1000+ unit chip order.

Quote

some more interesting info I found on this:

That lays the whole story out--with good quantitative details. I commend LSI. And extra credit for "underplaying" their hand. Note how they used "jumps from 4100 MB/s to 5200 MB/s" when their own graph plot clearly shows ~5600. (and that is =~ your own 5520)

 

I suspect that the reduction in read speed, but not write speed, is due to the fact that writing can take advantage of "write-behind" (similar to HDDs and OSes), but reading cannot do "read-ahead" (whereas HDDs and OSes can).

Quote

It can be seen for example with lsscsi -v:

Thanks for the verification.

 

Link to comment
22 hours ago, JorgeB said:

 

This was the result (PCIe 3.0 x4):

 


ndk_sh t u v w x y z aa
  /dev/sdt: 409.95 MB/s
  /dev/sdu: 409.91 MB/s
  /dev/sdv: 409.88 MB/s
  /dev/sdw: 410.22 MB/s
  /dev/sdx: 410.31 MB/s
  /dev/sdy: 410.54 MB/s
  /dev/sdz: 412.00 MB/s
  /dev/sdaa: 410.20 MB/s
Total =    3283.01 MB/s

 

I ran it 3 times and this was the best of the 3, so strangely it's a little slower than an Unraid read check.

Try the B option. It might help, or not ... Devices (and buses) can act strange when you push their limits.

Quote

Do you mind sharing that one also? I have 4 NVMe devices in a bifurcated x16 slot but no good way of testing them, since an Unraid read check produces much slower than expected speeds with those.

The nvmx script uses a home-brew prog instead of hdparm. Though I haven't used it myself, you can check out fio for doing all kinds of testing of storage.
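
For example, something along these lines should read all four NVMe devices concurrently for 30 seconds and report each one separately (untested by me; device names are examples):

fio --direct=1 --rw=read --bs=1M --ioengine=libaio --iodepth=8 \
    --runtime=30 --time_based --readonly \
    --name=nvme0 --filename=/dev/nvme0n1 \
    --name=nvme1 --filename=/dev/nvme1n1 \
    --name=nvme2 --filename=/dev/nvme2n1 \
    --name=nvme3 --filename=/dev/nvme3n1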

Quote

That's a good result, but I expect NVMe devices will be a little more efficient. Consider that you have the NVMe controller on the device(s) going directly to the PCIe bus, while with SAS/SATA you have the SAS/SATA controller on each device, then the HBA, and only then the PCIe bus, so I believe it can never be as fast as NVMe. I have no problem now acknowledging that the PCIe 3.0 bus itself can reach around 7GB/s with an x8 link,

I completely agree with you.

Quote

I still believe that with a SAS/SATA HBA it will always be a little slower, around 6.6-6.8GB/s.

I do not completely agree with this.

Quote

PS: ...

I'll send you a PM.

 

Link to comment
3 hours ago, UhClem said:

Try the B option. It might help, or not ...

Yep, that does it:

 

ndk_sh B t u v w x y z aa
  /dev/sdt: 434.79 MB/s
  /dev/sdu: 434.79 MB/s
  /dev/sdv: 433.82 MB/s
  /dev/sdw: 435.06 MB/s
  /dev/sdx: 434.84 MB/s
  /dev/sdy: 434.54 MB/s
  /dev/sdz: 434.61 MB/s
  /dev/sdaa: 434.80 MB/s
Total =    3477.25 MB/s

 

And the results were more consistent across multiple runs.

Link to comment
  • 1 month later...

It turns out that, contrary to what I understood at first, the OP didn't have this HBA + expander combo yet; he was just thinking of getting it and assumed the performance. I got one recently and tested it with an LSI 9300-8i and SSDs. Note that these results are after updating the expander to the latest firmware; it performed much worse without it, similar to what would be expected without Databolt or the PMC equivalent.

 

Unraid read test (no parity):

 

RES3TV360 single link with LSI 9300-8i

8 x 490MB/s

12 x 330MB/s

16 x 245MB/s

20 x 170MB/s

24 x 130MB/s

28 x 105MB/s

 

RES3TV360 dual link with LSI 9300-8i

12 x 505MB/s

16 x 380MB/s

20 x 300MB/s

24 x 230MB/s

28 x 195MB/s

 

So up to roughly 4GB/s with single link and 6GB/s with dual link, not that different from the LSI SAS3 expander. Unfortunately, since my LSI expander is on a backplane, I can only test that one with up to 12 devices. This combo with SAS3 devices should be capable of around 4.4GB/s with single link and 7GB/s with dual link.

 

Curiously, total bandwidth decreases with more devices, possibly due to the hardware used or to Unraid, but I don't think so: using @UhClem's script I only got around 4500MB/s max with dual link (with or without the B option) using 12 or more devices, so there shouldn't be a CPU/Unraid bottleneck. I suspect the technology used by PMC to emulate a 12G link with 6G devices loses some efficiency as you add more devices.

 

 

 

 

 

Link to comment
17 minutes ago, JorgeB said:

these results are after updating the expander to latest firmware

How do you update the firmware? Do you need any special hardware to do that?

I think I got the same expander as you from the link you posted on great deals, but I still haven't had a chance to test it.

Link to comment
9 minutes ago, uldise said:

How do you update the firmware? Do you need any special hardware to do that?

I used Windows; it was the easiest for me since it didn't require any extra software. I used the 9300-8i for updating, but it should work with any LSI HBA in IT mode, and possibly other brands.

Link to comment
On 7/8/2021 at 12:46 PM, JorgeB said:

...  this combo with SAS3 devices should be capable of around 4.4GB/s with single link and 7GB/s with dual link.

As for the 7GB/s "estimate", note that it assumes that the 9300-8i itself actually has the muscle. It probably does, but it is likely that it will be PCIe limited to slightly less than 7.0 GB/s (ref: your x4 test which got 3477; [i.e. < 7000/2])

On 7/8/2021 at 12:46 PM, JorgeB said:

Curiously, total bandwidth decreases with more devices, possibly due to the hardware used or to Unraid, but I don't think so: using @UhClem's script I only got around 4500MB/s max with dual link (with or without the B option) using 12 or more devices, so there shouldn't be a CPU/Unraid bottleneck. I suspect the technology used by PMC to emulate a 12G link with 6G devices loses some efficiency as you add more devices.

Interesting. The under-performance of my script suggests another deficiency of the PMC implementation (vs LSI):

My script uses hdparm -t, which does read()s of 2MB (which the kernel deconstructs into multiple requests of 512KB max to the device/controller). Recall the LSI graphic you included, which quantified the Databolt R/W throughput for different request sizes (the x-axis). There was a slight decrease of read throughput at larger request sizes (64KB-512KB). I suspect that an analogous graph for the PMC edge-buffering expander would show a more pronounced tail-off.
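
You can see the per-request cap the kernel is applying with something like the following (sda is just an example device; the values are in KB, so 512 means 512KB per request):

cat /sys/block/sda/queue/max_sectors_kb /sys/block/sda/queue/max_hw_sectors_kb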

 

 

 

Link to comment
