
Where to plug my parity disks?



Posted

Hi everyone,

I have a question about my setup. You can see my full setup here:


My question relates to my motherboard (Asus P12R-E/10G-2T) and my HBA (LSI SAS 9305-24i).

 

For the moment, all my disks are connected to my HBA, which sits in a PCIe x8 slot.

Do you think I could improve performance (data rebuild / parity check) if I connected my 2 parity disks directly to the motherboard instead?

Or, on the contrary, would my performance decrease?

I would appreciate any advice here.

thx

Posted

My HBA is PCIe 3.0 but my motherboard supports PCIe 4.0. If I swap my HBA for the eHBA 9600-24i, for example, would I see a significant performance improvement?

Posted

If you are using an HBA card, actual drive speed should be the bottleneck. You can verify performance with the DiskSpeed application if you're curious. Rather than asking vague theory-type questions: what is your current parity check time, drive size, and speed result? Are you asking because you think something is wrong?
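
If you want a quick command-line spot check alongside DiskSpeed, something like this works from the Unraid terminal. A rough sketch only: the /dev/sdX names are examples that need adjusting to your array, and hdparm -t only measures buffered sequential reads.

# quick sequential-read check per disk; hdparm -t reads for a few seconds and reports MB/s
for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    hdparm -t "$d"
done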

Posted

Hi @_cjd_,

You will probably say that my parity check time is fine and about average.
[screenshot: parity check history]

But the reality is that I have to stop all my VMs and all my Docker containers during a parity check or rebuild to reach that average speed. And even with everything stopped, my Unraid server is unusable for two days per month.

I'm just checking whether I can optimize something.

Posted

I missed before that it's a 24-port HBA, and I see you have 20 disks connected. Still, if it's in an x8 PCIe 3.0 slot it should have just enough bandwidth. Run the DiskSpeed docker controller test and post the results.
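
For a rough sense of the slot numbers (back-of-the-envelope assumptions, not measurements from this system): PCIe 3.0 provides roughly 985 MB/s of usable bandwidth per lane after encoding overhead, so an x8 link is around 7.9 GB/s, while 20 spinning disks at ~270 MB/s each would need about 5.4 GB/s at their fastest.

# back-of-the-envelope check, all figures are rough assumptions
echo "x8 PCIe 3.0 slot  : $(( 8 * 985 )) MB/s"    # ~7880 MB/s usable
echo "20 disks @ 270MB/s: $(( 20 * 270 )) MB/s"   # ~5400 MB/s peak demand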

Posted

Oof, yeah, that's a lot, and certainly all over the place on time. I usually see 22 hours for a 16TB array where daytime use slows things down, but also only 5 drives total in the array (motherboard or ASM1166 card connected). Time Machine is the killer. No need for me to stop Docker (no VMs any more).

It's probably not even the parity drives specifically if you're saturating the card/slot. Are your VMs/Docker containers running off the array? Curious what you have there with enough I/O to slow a parity check.
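
One way to see what else is hitting the disks while a check runs is to watch per-device utilization. A minimal sketch, assuming iostat from sysstat is available (it isn't part of stock Unraid, so it may need to be installed via a plugin):

# extended per-device stats every 5 seconds; watch %util and MB/s on the array members
iostat -xm 5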

Posted

Those are the disk results, and they already show some interesting info: multiple disks are under-performing. But I'd also like to see the controller test, which looks like this:

 

[screenshot: example DiskSpeed controller benchmark graph]

 

 

Posted

That indicates a big controller bottleneck, though the previous individual test also didn't look good for several disks. Post the output of:

 

lspci -d 1000: -vv
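
If you only want the link lines rather than the full dump, the same command can be filtered as below, though the full output is still more useful here:

# advertised (LnkCap) vs negotiated (LnkSta) PCIe speed/width for LSI devices
lspci -d 1000: -vv | grep -E 'LnkCap:|LnkSta:'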

 

Posted
root@NAS01:~# lspci -d 1000: -vv
01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3224 PCI-Express Fusion-MPT SAS-3 (rev 01)
        Subsystem: Broadcom / LSI SAS9305-24i
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        IOMMU group: 15
        Region 0: I/O ports at 4000 [size=256]
        Region 1: Memory at a9100000 (64-bit, non-prefetchable) [size=64K]
        Expansion ROM at a9000000 [disabled] [size=1M]
        Capabilities: [50] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x8
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range BC, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
                         EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
                Vector table: BAR=1 offset=0000e000
                PBA: BAR=1 offset=0000f000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 04000001 0000000c 01080008 c61e2ad2
        Capabilities: [1e0 v1] Secondary PCI Express
                LnkCtl3: LnkEquIntrruptEn- PerformEqu-
                LaneErrStat: 0
        Capabilities: [1c0 v1] Power Budgeting <?>
        Capabilities: [190 v1] Dynamic Power Allocation <?>
        Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas

 

Posted
54 minutes ago, N47H4N said:
LnkSta: Speed 8GT/s, Width x8

This is OK, but either the HBA or the disks are not performing as they should. I would start by calling/emailing LSI with the serial number to make sure it's genuine.
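
If it helps when contacting them: the mpt3sas driver logs the controller model and firmware version at boot, which support will usually ask for along with the serial number printed on the card. A minimal way to pull it from a running system, assuming the boot messages are still in the kernel log:

# the mpt3sas driver prints the SAS3224 model and firmware version during boot
dmesg | grep -i mpt3sas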
