Expander Backplane vs Direct Attach SFF-8087 - Rebuild Times



Hello,

I am looking to move my current Unraid server to a larger Supermicro 4U case with more drive bays, specifically something in the CSE 846 lineup, which sports 24 front drive bays and supports full-height PCIe cards. I've been using an H310 flashed to IT mode with the eight-drive limit imposed by my CSE 745 case, and the move to dual parity pushed my need for more bays, even with 6+2 8TB drives (48TB accessible).

 

Now, with the SAS-2-era versions of the CSE 846, you can settle on either an expander backplane (BPN-SAS2-846EL1), limited to a maximum of 48 Gbps across its two SFF-8087 uplinks, or a direct-attach "TQ" or "A" backplane, with 24 SATA ports (TQ) or 6 SFF-8087 ports (A); the direct-attach boards are said to support even SAS-3 speeds, since there is no controller sitting between your drives and your HBAs. I have a feeling the 48 Gbps cap imposed by the expander would not be noticeable in my typical use, but I can see it being a severe detriment to rebuild and parity-check speeds when the entire 24-bay server is populated and all drives are being read simultaneously. Then again, theoretically, 48 Gbps gives each of 24 disks 250 MB/s raw (more like 200 MB/s after SAS-2's 8b/10b encoding overhead), which is at or above what the 7200 RPM HGST drives I use can sustain. I know theory is only good for just that, theorizing, so I remain concerned.
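To sanity-check that per-drive figure, here is my own back-of-envelope (the lane count, line rate, and encoding overhead are standard SAS-2 numbers; the rest is just my arithmetic):

# Back-of-envelope ceiling for a dual-linked SAS-2 expander backplane.
# Assumptions: 4 lanes per SFF-8087 uplink, 6 Gbps per SAS-2 lane, and
# 8b/10b encoding leaving roughly 600 MB/s of payload per lane.
LANES_PER_UPLINK = 4
UPLINKS = 2                 # dual-linked BPN-SAS2-846EL1
USABLE_MB_PER_LANE = 600    # 6 Gbps * 8/10 encoding / 8 bits per byte
DRIVES = 24

aggregate = LANES_PER_UPLINK * UPLINKS * USABLE_MB_PER_LANE
print(f"Aggregate uplink ceiling: {aggregate} MB/s")      # 4800 MB/s
print(f"Per-drive share: {aggregate / DRIVES:.0f} MB/s")  # 200 MB/s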

 

Here is a great breakdown of SMCI backplane types by nephri at ServeTheHome: LINK

What I am asking is whether anyone with experience can tell me if using an expander with 24 8TB 7200 RPM drives would significantly affect rebuild and parity-check times. I am currently sitting at 16-18 hours for an 8TB 7200 RPM rebuild/parity check using an 8-port TQ-style backplane and a single LSI 9211-8i equivalent, which is already not exactly timely (rough arithmetic sketched below). I am guessing that moving to an expander with more drives would push rebuild/parity-check times over 24 hours, which would be my personal limit. Driving all 6 SFF-8087 connections on the "A" backplane would require another 4-port LSI card like a 9305-16i or 9201-16i, but that should not be a big deal if it saves time on rebuilds and parity checks.
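Here is the estimate behind those numbers; the 130 MB/s whole-disk average is my own assumption for a 7200 RPM 8TB drive, not a measured figure:

# A rebuild or parity check reads every drive end to end, so the time
# is roughly capacity divided by average sustained speed. 130 MB/s is
# my assumed whole-disk average (fast outer tracks, slower inner ones).
DRIVE_TB = 8
AVG_MB_S = 130

hours = DRIVE_TB * 1_000_000 / AVG_MB_S / 3600
print(f"Estimated rebuild time: {hours:.1f} hours")  # ~17.1 hours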

Thank you for your input!


 

Wow, thank you for all of your research; that is an amazing amount of information you have come up with, and quite useful! I have saved that link for future reference. I'm honestly surprised expanders did so well in your tests; there is very little overhead with them.

 

Looks like my rebuild times WOULD go up using an H310 and a dual-link expander: doing some math, to something like 22-23 hours if all 24 drive bays are populated. Limited to roughly 95 MB/s per drive, that would be 4-7 hours slower than normal, which is just barely bearable, 24 hours being my cut-off.
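For anyone checking the math, that is the same capacity-over-throughput estimate as before, with the per-drive figure capped by the expander (the ~95 MB/s comes from the test numbers in your link, not my own measurements):

# Same estimate, with per-drive throughput capped by the dual-linked
# expander. The ~95 MB/s figure is taken from the linked test results.
DRIVE_TB = 8
CAPPED_MB_S = 95

hours = DRIVE_TB * 1_000_000 / CAPPED_MB_S / 3600
print(f"Estimated rebuild time: {hours:.1f} hours")  # ~23.4 hours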

I'm almost willing to just say SAS-2 with an expander is the fastest I would need until the physical SATA interface is replaced altogether.

Anyway, I like the rest of your points in that link. Fortunately, the rest of the system I built (mostly eBay deals) is still up to speed, even at almost three(!) years old:
Supermicro X10SRL-F, Xeon E5-1660 v3, 64GB Samsung M393A2G40DB0-CPB0, 1.4TB Micron P420m PCIe cache (works on newer versions), Intel X520-DA2

I've loved the simplicity of Unraid, and the community support from users like you has been awesome. Thank you again!

