tapodufeu Posted October 2, 2020 (edited)

I have just upgraded my setup with an LSI SAS 2008 and I can't figure out why read performance is worse with the LSI than with SATA. I have an LSI 9220-8i, H3-25113-03A.

Write performance is better: for example in a dd test, writes perform 30 to 40% faster (tests done on the same drive). On my old 2.5" 1TB drive I can reach more than 130 MB/s write speed at the beginning of the disk and around 70 MB/s at the end. It was not possible to reach this level of performance with my SATA card under regular load; with the SATA card I could only get these numbers if I turned off all dockers and VMs.

But read performance is stuck at 50 MB/s max... on ALL drives. This is the first time I have seen disks that are faster at writing than reading. For example, a parity check is stuck around 50 MB/s because that is the read speed. If I use my onboard SATA (for the parity disks) and a 6-port PCIe SATA card (for the data disks), the parity check runs at 65 MB/s. Please help.

BTW, I flashed the latest firmware (P20) yesterday; nothing changed. I bought this card on eBay from a seller with thousands of positive feedbacks; it was delivered with firmware 10.x.x.x.

Any idea where I can investigate?
tapodufeu Posted October 2, 2020

Another piece of information, maybe relevant: during dd tests, write speed varies depending on where on the disk the data is written (130 down to 70 MB/s). Read speed doesn't; it is always 50 MB/s, even when reading back the file that was just written at 130 MB/s. Of course I run the write and read tests on /mnt/disk[1-6] so my cache is not used. I don't use a SATA SSD cache drive but a PCIe card with an NVMe SSD, which is far more efficient than the SATA version. I would never go back to a SATA SSD cache drive!!!!
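For anyone wanting to reproduce this, here is a sketch of the kind of dd test described above. The disk path and file size are assumptions; adjust them to your own array.

```shell
# Sequential write test directly to an array disk, bypassing the cache pool.
# /mnt/disk1 and the 4 GiB size are examples; pick an array disk with space.
dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct status=progress

# Drop the page cache so the read-back actually hits the disk, then read.
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/disk1/ddtest.bin of=/dev/null bs=1M iflag=direct status=progress

# Clean up the test file afterwards.
rm /mnt/disk1/ddtest.bin
```

The oflag=direct/iflag=direct flags keep the page cache out of the measurement, so the MB/s figures dd prints reflect the disk and controller path.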
JorgeB Posted October 2, 2020

Use the diskspeed docker to run a benchmark on the controller, and also check its link speed.
tapodufeu Posted October 2, 2020

How can I check the controller's link speed? I have checked my motherboard BIOS and forced PCIe 2.0 for the slot, just in case; nothing changed compared to AUTO mode. I have also disabled ASPM. Benchmarks are running.
JorgeB Posted October 2, 2020

The diskspeed docker should show it; if not, run lspci -vv.
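A minimal way to do this check, for anyone following along (the PCI address 02:00.0 is just the one from this thread; the first command tells you yours):

```shell
# Locate the HBA on the PCI bus (the SAS2008 string matches this card).
lspci | grep -i sas2008

# Dump its PCIe capabilities and pull out the link lines.
# LnkCap is what the card supports; LnkSta is what it actually negotiated.
lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'
```

If LnkSta reports a lower width or speed than LnkCap (e.g. "Width x1 (downgraded)" against a "Width x8" capability), the slot or board is the bottleneck, not the card.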
tapodufeu Posted October 2, 2020 (edited)

Controller link looks good... diskspeed docker output:

SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
IBM (Broadcom / LSI) Serial Attached SCSI controller
Type: Add-on Card in PCIe Slot 2 (x16 PCI Express)
Current & Maximum Link Speed: 5GT/s width x1 (4 GB/s max throughput)
Capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom
Port 1: sdb 1TB Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17AP02T (Disk 5)
Port 2: sdc 1TB Western Digital WD10JPVX Rev 08.01A08 Serial: WXX1A178VD2Y (Disk 4)
Port 3: sdd 1TB Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17PTTNY (Disk 2)
Port 4: sde 1TB Western Digital WD10JPVX Rev 01.01A01 Serial: WX81E73KEER4 (Disk 3)
Port 5: sdf 1TB Seagate ST1000LM035 Rev LCM2 Serial: WL1FVK5S (Parity 2)
Port 6: sdg 1TB Unknown HTS541010A9E662 Rev JA0AB5D0 Serial: JA8000ET03SZSS (Disk 1)
Port 7: sdh 1TB Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17APEXK (Disk 6)
Port 8: sdi 1TB Seagate ST1000LM035 Rev LCM2 Serial: WL12V84C (Parity)

lspci -vv results:

02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
	Subsystem: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 23
	Region 0: I/O ports at e000
	Region 1: Memory at 805c0000 (64-bit, non-prefetchable)
	Region 3: Memory at 80180000 (64-bit, non-prefetchable)
	Expansion ROM at 80100000 [disabled]
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [68] Express (v2) Endpoint, MSI 00
		DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+ RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
		LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta: Speed 5GT/s (ok), Width x1 (downgraded)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
			AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
			AtomicOpsCtl: ReqEn-
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [d0] Vital Product Data
		Not readable
	Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
		Vector table: BAR=1 offset=00002000
		PBA: BAR=1 offset=00003800
	Capabilities: [100 v1] Advanced Error Reporting
		UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [138 v1] Power Budgeting <?>
	Capabilities: [150 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap: Migration-, Interrupt Message Number: 000
		IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
		IOVSta: Migration-
		Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
		VF offset: 1, stride: 1, Device ID: 0072
		Supported Page Size: 00000553, System Page Size: 00000001
		Region 0: Memory at 00000000805c4000 (64-bit, non-prefetchable)
		Region 2: Memory at 00000000801c0000 (64-bit, non-prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap: MFVC- ACS-, Next Function: 0
		ARICtl: MFVC- ACS-, Function Group: 0
	Kernel driver in use: mpt3sas
	Kernel modules: mpt3sas

I have attached a jpg of the benchmark results. I see 8 disks, with speeds from 130 down to 60 MB/s, so I would expect the average to be around 85-90. I still don't understand why the parity check caps at 55 MB/s. I have also included the parity check history: 63 MB/s was before the LSI card installation (2 parity drives on onboard SATA + 6 data disks on a PCIe x1 SATA card); 53 MB/s is with all disks on the LSI SAS 2008, one parity drive on port 1 and one on port 0.
JorgeB Posted October 2, 2020

5 minutes ago, tapodufeu said:
"controller link looks good..."

No:

LnkSta: Speed 5GT/s (ok), Width x1 (downgraded)

It's only running at x1; it should be in an x8 slot (x4 minimum).
tapodufeu Posted October 2, 2020

..... seriously.... lol.... What can I do? Is there a setting somewhere? I have never seen that before.
tapodufeu Posted October 2, 2020

Hum... I think I found the issue... my motherboard, an ASRock J4105M, has a PCIe 2.0 x16 slot that only runs at x1 speed!!!! This situation is just crazy!!!

https://www.asrock.com/MB/Intel/J4105M/index.asp

Again, another return to Amazon, pff.
JorgeB Posted October 2, 2020

10 minutes ago, tapodufeu said:
"my motherboard, an ASRock J4105M, has a PCIe 2.0 x16 slot that only runs at x1 speed!!!!"

Yep, those CPUs only have 6 PCIe lanes in total coming from the CPU.
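A rough sanity check of why an x1 link caps a parity check near 50 MB/s (back-of-the-envelope arithmetic, not an official figure): PCIe 2.0 uses 8b/10b encoding, so one lane at 5 GT/s carries about 500 MB/s raw, call it roughly 400 MB/s after protocol overhead, and a parity check reads all eight disks through that one lane simultaneously.

```shell
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding = 10 bits on the wire per byte.
echo "$(( 5000 / 10 )) MB/s raw per lane"   # prints: 500 MB/s raw per lane
# Assume ~20% protocol overhead -> ~400 MB/s usable, shared by 8 disks:
echo "$(( 400 / 8 )) MB/s per disk"          # prints: 50 MB/s per disk
```

The 20% overhead figure is an assumption, but the result lines up with the ~50-55 MB/s parity check numbers reported earlier in the thread.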
tapodufeu Posted October 2, 2020 (edited)

I am switching to an i5-6400T with a GA-H110M-S2H motherboard. I jump to 30/50W, but at least it will work the way I want; right now I am between 20 and 35W but it's too slow. Do you think I made a better choice this time? I think I can finally use the $20 I invested in my legit (or not, hahaha) LSI card to really multiplex my slow 2.5" hard disks.

Edit: I have returned my $23 PCIe x1 SATA card to Amazon.
JorgeB Posted October 2, 2020

12 minutes ago, tapodufeu said:
"Do you think I made a better choice this time?"

Yes.
Stupoo Posted June 12, 2022

Hello, maybe this is of no help now, or it may just be coincidence that my symptoms were similar, but I also had an LSI 2008 SAS HBA stuck at 50 MB/s. I have an external RAID box connected via a four-lane SAS cable. I disabled NCQ on the HBA and my speed went up to what I was expecting (many hundreds of MB/s).

Stu
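For anyone else who wants to try this: one common way to disable NCQ per disk from Linux (not necessarily how Stupoo did it on the HBA itself) is to set the queue depth to 1 in sysfs. The device names below are the ones from earlier in this thread, and the setting resets on reboot.

```shell
# Force queue depth to 1 on each disk behind the HBA, which disables NCQ.
# Device names are examples from this thread; check yours with lsblk.
for d in sdb sdc sdd sde sdf sdg sdh sdi; do
    echo 1 > "/sys/block/$d/device/queue_depth"
done

# Verify the new setting on one of the disks:
cat /sys/block/sdb/device/queue_depth
```

To make it survive a reboot you would need to re-run it from a startup script (e.g. the Unraid "go" file) or a udev rule.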