SATA/SAS controllers tested (real-world max throughput during parity check)




Hi,
I planned on using my B550 AORUS PRO V2 for an Unraid build. I intend to use 6 HDDs (~260 MB/s max) for the array (possibly adding more in the future) and 2 mirrored NVMe drives for cache/appdata, possibly adding 2 regular SSDs for whatever.
I thought about getting an LSI 9305-16i instead of an LSI SAS2008 - mainly because the 2008 is rather old by now and I read a lot about trouble with TRIM, as well as problems reaching lower C-states.
1. Question: Would the 9305-16i improve on that?

Besides that, when I looked into my mainboard's specs, its 2 additional x16 slots (the main one will hold a GPU) have some rather wild asterisks attached to them:
- 1 x PCI Express x16 slot (PCIEX4), integrated in the Chipset: - Supporting PCIe 3.0 x4 mode

(* The M2B_SB connector shares bandwidth with the PCIEX4 slot. The PCIEX4 slot will become unavailable when an SSD is installed in the M2B_SB connectors.)
- 1 x PCI Express x16 slot (PCIEX2), integrated in the Chipset: - Supporting PCIe 3.0 x2 mode
(* The PCIEX2 slot shares bandwidth with the SATA3 4, 5 connectors. The PCIEX2 slot will become unavailable when a device is installed in the SATA3 4 or SATA3 5 connector.)

It also has 2 NVMe slots. In my research, I learned that it routes all storage except the primary NVMe slot through the chipset, which itself is connected at PCIe 3.0 x4.
Since I want to use both NVMe slots, I would be forced to use PCIEX2. Now I'm at a loss as to how that would work/what would happen if I put the 9305-16i in there. It's a PCIe 3.0 x8 card - and as far as I understand, it would only get the PCIe 3.0 x2 link mentioned above. That would be about 2000 MB/s - enough for 8 HDDs (rough math below). But does it actually work that way? Or would, since the card is x8, only certain connectors function?
I figure it's all rather suboptimal, considering that everything, even SSDs on the onboard SATA ports, would share that bandwidth, as well as one of the NVMe drives.
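
To sanity-check those numbers, here is a back-of-envelope Python sketch (my own rough figures, assuming the commonly quoted ~985 MB/s of usable bandwidth per PCIe 3.0 lane) of what the card would leave per drive behind an x2 link during a parity check:

# Rough per-drive bandwidth estimate behind a PCIe 3.0 x2 chipset slot.
# Assumption: ~985 MB/s usable per PCIe 3.0 lane (8 GT/s, 128b/130b, minus overhead).
PCIE3_MB_S_PER_LANE = 985
LANES = 2                 # the PCIEX2 slot would negotiate the x8 card down to x2
HDD_MAX_MB_S = 260        # fastest sequential speed of the planned HDDs

slot_bw = PCIE3_MB_S_PER_LANE * LANES   # ~1970 MB/s total through the slot
for drives in (6, 8, 12, 16):
    share = slot_bw / drives
    print(f"{drives} drives: {share:.0f} MB/s slot share each, "
          f"effective {min(share, HDD_MAX_MB_S):.0f} MB/s per drive")

As far as I can tell, an x8 card in an electrically x2 slot simply negotiates the link down to x2: all of its connectors keep working, they just share the narrower link.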

Link to comment
18 hours ago, SunSh4dow said:

1. Question: Would the 9305-16i improve on that?

If you mean lower C-states I cannot answer; I have some of these controllers but never checked that.

 

18 hours ago, SunSh4dow said:

- 1 x PCI Express x16 slot (PCIEX4), integrated in the Chipset: - Supporting PCIe 3.0 x4 mode

This is not ideal, since it's only x4 and shares the chipset uplink with the SATA controller, etc., but if it's the only option it should still give decent performance.

Link to comment
Quote

LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset

8 x 565MB/s (425MB/s*, 380MB/s**)

How was the 8x425MB/s achieved on the 9300-8i? According to the spec the card only has 12Gb/s throughput, meaning 1.5GB/s.

 

The 1.5GB/s limit I also experience on my system:

[screenshot of drive speeds showing the ~1.5GB/s combined limit]

 

Where is my bottleneck, or am I misunderstanding something?

Edited by AlphaXL
Link to comment
46 minutes ago, AlphaXL said:

According to the spec the card only has 12Gb/s throughput, meaning 1.5GB/s.

The card is capable of 12Gbps per port, which is 1200MB/s (with SAS3 devices; with SATA devices it is 6Gbps per port max). Also, 1.5GB/s is not the same as 1.5Gbps.
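
For what it's worth, a quick unit-conversion sketch in Python (my own illustration, assuming the usual 8b/10b line coding on SAS3/SATA3 links, i.e. 10 line bits per payload byte):

def usable_mb_s(line_rate_gbps):
    # 8b/10b coding: 10 bits on the wire carry 1 payload byte
    return line_rate_gbps * 1000 / 10

print(usable_mb_s(12))   # ~1200 MB/s per SAS3 port
print(usable_mb_s(6))    # ~600 MB/s per SATA3 port
# The 12Gb/s in the spec sheet is a per-port line rate, not the whole card's
# throughput, and dividing it by 8 to get 1.5GB/s ignores the line coding.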

 

47 minutes ago, AlphaXL said:

The 1.5GB/s limit I also experience on my system:

Likely the HBA is not linking at full speed/width; post the output of:

lspci -d 1000: -vv
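
If it helps, here is a small Python sketch (just an illustration, not an Unraid tool) that runs that command and compares the card's capability (LnkCap) against the negotiated link (LnkSta); it only looks at the first LSI/Broadcom device found and normally needs to run as root:

import re
import subprocess

# Vendor ID 1000 = LSI/Broadcom; -vv includes the LnkCap/LnkSta lines.
out = subprocess.run(["lspci", "-d", "1000:", "-vv"],
                     capture_output=True, text=True, check=True).stdout

cap = re.search(r"LnkCap:.*Speed ([\d.]+GT/s), Width x(\d+)", out)
sta = re.search(r"LnkSta:\s*Speed ([\d.]+GT/s).*Width x(\d+)", out)
if cap and sta:
    print(f"capable : {cap.group(1)} x{cap.group(2)}")
    print(f"running : {sta.group(1)} x{sta.group(2)}")
    if cap.groups() != sta.groups():
        print("link is downgraded -> check slot wiring and the BIOS link speed setting")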

 

Link to comment
Quote

 

03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
        Subsystem: Broadcom / LSI SAS9300-8i
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: I/O ports at e000
        Region 1: Memory at df140000 (64-bit, non-prefetchable)
        Region 3: Memory at df100000 (64-bit, non-prefetchable)
        Expansion ROM at df000000 [disabled]
        Capabilities: [50] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0W
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s (downgraded), Width x4 (downgraded)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range BC, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
                Vector table: BAR=1 offset=0000e000
                PBA: BAR=1 offset=0000f000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [1e0 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [1c0 v1] Power Budgeting <?>
        Capabilities: [190 v1] Dynamic Power Allocation <?>
        Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas

 

 

"LnkSta: Speed 2.5GT/s (downgraded), Width x4 (downgraded)"

 

Does that mean my motherboard doesn't have enough lanes?

 

I have "ASRock B365M Phantom Gaming 4" motherboard.

https://pg.asrock.com/mb/Intel/B365M Phantom Gaming 4/index.asp#Specification

 

The HBA card is in a PCIe x4 slot as it does not work in the x16 slot. Both M.2 slots also have NVMe drives installed.

Edited by AlphaXL
Link to comment
5 minutes ago, JorgeB said:

It's only linking at PCIe 1.0 speed, and also only x4; what is the board model and which slot is it installed in?

Sorry, edited the previous post.

 

I have "ASRock B365M Phantom Gaming 4" motherboard.

https://pg.asrock.com/mb/Intel/B365M Phantom Gaming 4/index.asp#Specification

 

The HBA card is in the PCIe x4 slot (named PCIE3, the very bottom one) as it does not work in the x16 slot. Both M.2 slots also have NVMe drives installed.

Edited by AlphaXL
Link to comment
33 minutes ago, AlphaXL said:

The HBA card is in the PCIe x4 slot (named PCIE3, the very bottom one)

OK, the x4 is normal, but only linking at PCIe 1.0 is not; check the board BIOS to see if the PCIe slot link speed is limited.

Link to comment
6 minutes ago, JorgeB said:

OK, the x4 is normal, but only linking at PCIe 1.0 is not; check the board BIOS to see if the PCIe slot link speed is limited.

I tried Auto and also forcing it to Gen3; the lspci command still gives identical output.

The motherboard and HBA card both have the latest BIOS. I have also tried resetting both.

 

[screenshot of the BIOS PCIe link speed setting]

Link to comment

So... if the lspci command says that the HBA card's link speed is "downgraded", does that indicate that the card itself is capable of higher link speeds and the issue is not with the HBA card but with the motherboard?

 

Would there be any point in getting a Z390 board (as I want to keep the CPU and memory modules the same) to get rid of the PCIe 1.0 x4 bottleneck? I have 7 drives connected to the HBA card currently. If I add an 8th drive, would the speed of each drive drop even more, as the 1.5GB/s throughput gets divided between all drives?

Edited by AlphaXL
Link to comment
5 minutes ago, AlphaXL said:

itself is capable of higher link speeds and the issue is not with the HBA card but with the motherboard?

Correct.

 

6 minutes ago, AlphaXL said:

Would there be any point in getting a Z390 board (as I want to keep the CPU and memory modules the same) to get rid of the PCIe 1.0 x4 bottleneck?

It should help, as long as it's compatible; using a different brand of board where the HBA would work in the GPU slot would also solve the issue.

 

7 minutes ago, AlphaXL said:

If I add an 8th drive, would the speed of each drive drop even more?

Yep.
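
A back-of-envelope Python sketch of that split, using the ~1.5GB/s ceiling reported above (the naive PCIe 1.0 x4 math comes out closer to 1000 MB/s, so treat the exact numbers as illustrative):

observed_ceiling_mb_s = 1500   # total throughput limit seen on this link

for drives in (7, 8):
    print(f"{drives} drives -> at most {observed_ceiling_mb_s / drives:.0f} MB/s each")
# 7 drives: ~214 MB/s each; 8 drives: ~188 MB/s each, so every extra drive
# behind the same downgraded link slows the parity check a little further.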

 

 

 

Link to comment
