Stokkes

30-drive setup, which low-profile HBAs?


Hey all,

 

I'm building a new server on 10th-gen Intel (chassis: Supermicro CSE-847, mobo: Supermicro X12SCA-F), and due to the chassis/motherboard I have the following constraints:

 

  • Must be low-profile
  • Mobo only has 2x PCIe x8 slots and 1x PCIe x4

 

Which HBA would people recommend for this build that would work in Unraid?

 

I currently have two older M1015s (9240-8i), but I'd be maxed out at 16 drives with no PCIe slots left on the motherboard. So I'd like to buy one new HBA (or two, if speeds would improve) to support up to 30 drives in this new build.

 

Thanks!

Or keep your M1015 and add an expander, like a RES2CV240/RES2SV240... I have one in my rig as well, but I think these are not available new anymore.


Actually, since the chassis has a built-in expander (it's the BPN-SAS2-846EL1), I think the two cards I have now should be able to sustain 30 drives on 4 ports.
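
Back-of-envelope on what one of those uplinks can feed - a rough sketch with my own assumptions (6Gb/s per SAS2 lane, 4 lanes per SFF-8087 port, ~80% usable after 8b/10b encoding and protocol overhead):

```python
# Rough throughput of a single SAS2 expander uplink (assumed numbers,
# not vendor specs): 4 lanes x 6 Gb/s, ~80% usable after overhead.

LANES_PER_PORT = 4      # one SFF-8087 connector carries 4 SAS lanes
GBPS_PER_LANE = 6       # SAS2 line rate per lane
EFFICIENCY = 0.8        # rough allowance for 8b/10b encoding + protocol

uplink_mb_s = LANES_PER_PORT * GBPS_PER_LANE * EFFICIENCY * 1000 / 8

for drives in (12, 24, 30):
    print(f"{drives} drives sharing one uplink: ~{uplink_mb_s / drives:.0f} MB/s each")
```

So on paper one SAS2 uplink has plenty of room for light use, but per-drive throughput drops fast once every disk is active at the same time.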

 

 


Hmm, it may be worth investing in one of those HP 12Gbps HBAs then? I worry about putting 24 drives on 1 HBA.

 

On the Supermicro 847, the back drives are on a different backplane, so I could use the second HBA for the back.

 

27 minutes ago, Stokkes said:

On the Supermicro 847, the back drives are on a different backplane, so I could use the second HBA for the back.

 

...still one HBA, as each backplane takes/needs only one SFF connector.


I actually haven't bought the case yet, but there are two I could buy: an older chassis (now discontinued) with the SAS2 backplane, and a newer one with SAS3. The SAS3 one is obviously significantly more expensive.

 

I guess I'm worried about 1 HBA @ 6Gbps being a bottleneck for 24-30 drives.


...this will depend on the use case and your infrastructure.

Where will the bottleneck be: at a 6Gbps backplane link (how many drives will be in active read/write at the same time? A good performance drive can push around 1.25Gbps) or at the client link(s) (the X12SCA-F only has one 1Gbps + one 2.5Gbps NIC, which sums to a max of 3.5Gbps)?
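
A quick sketch of that comparison, with all figures being rough assumptions (a fast HDD streaming ~1.25Gbps sequential, one 4-lane SAS2 uplink, both onboard NICs saturated):

```python
# Which saturates first: the SAS2 backplane uplink or the client NICs?
# All figures are rough assumptions for a back-of-envelope comparison.

uplink_gbps = 4 * 6        # one SAS2 x4 uplink
hdd_gbps = 1.25            # a good performance HDD, sequential
nic_gbps = 1.0 + 2.5       # X12SCA-F onboard: 1 GbE + 2.5 GbE

print(f"Drives to saturate the SAS2 uplink: ~{uplink_gbps / hdd_gbps:.0f}")  # ~19
print(f"Drives to saturate both NICs:       ~{nic_gbps / hdd_gbps:.1f}")     # ~2.8
```

In other words, for client traffic the NICs give out long before the backplane does; only array-internal work (parity checks, rebuilds) can really stress the uplink.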

 

With 2 backplanes and 2 HBAs you could split the data and parity disk(s), also across shares.

Or go for the SAS3 version of the case/backplane and a 12Gbps HBA (like an LSI/Broadcom SAS3008... you just need one 8i version, one SFF link per backplane).

 

Since the X12SCA-F supports 2x NVMe @ PCIe x4, this is where the real data in/out should occur, and the mover can take its time later?

With 1TB NVMe drives going so cheap (like the Patriot Viper VPN100), maybe this is another option?

But it depends on your use case...

 

Edit: also check that the NICs on this 10th-gen MB are supported at all, see:

 

Edited by Ford Prefect


First, thank you so much for your time and replies today, very much appreciated! I haven't built a server in 6-7 years.

 

You're right, I would be using 2x NVMe (1TB) where the writes would occur, and the mover would just move the data 1-2 times a day.

 

The drives I plan to put in are all Seagate Exos X16 (16TB), so about 384-576TB of space, with 2 drives dedicated to parity.
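
For reference, my rough capacity math - assuming the 847's 24 front + 12 rear bays, and that in Unraid the parity drives add no usable space:

```python
# Raw vs usable capacity with 16 TB drives and 2 parity drives
# (Unraid-style parity: parity drives contribute no usable space).

drive_tb = 16
for bays in (24, 30, 36):    # front bays only / planned build / full chassis
    raw = bays * drive_tb
    usable = (bays - 2) * drive_tb
    print(f"{bays} drives: {raw} TB raw, {usable} TB usable")
```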

 

I guess I'm concerned about parity checks/rebuilds/etc. As long as I can hit 100MB/s across all drives during a parity check, we're looking at about 48 hours to do a parity rebuild.
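
The rebuild-time math, for what it's worth (assuming a steady average rate over the whole drive; real disks slow down toward the inner tracks):

```python
# Time to read one full 16 TB drive end-to-end at a steady average rate.

drive_tb = 16
for rate_mb_s in (95, 100, 150, 250):   # SAS2-limited vs drive-limited rates
    hours = drive_tb * 1_000_000 / rate_mb_s / 3600
    print(f"{rate_mb_s} MB/s -> ~{hours:.0f} hours")
```

At 95-100MB/s that comes out to roughly 44-47 hours, so the ~48-hour estimate looks about right.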

 

My use cases are pretty simple - nothing high throughput, about 1-2TB of transfer per day being moved to the array from the NVMe, plus regular scheduled parity checks.

 

I was also looking at this thread, specifically the image that has 1 LSI 2008 connected to 24 drives, which shows a max of 95MB/s, I think (if that's correct).
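
If I have the numbers right, that ~95MB/s figure lines up with the uplink math from earlier in the thread (same assumptions: 4x 6Gb/s lanes, ~80% usable, 24 drives sharing one uplink):

```python
# Why ~95-100 MB/s per drive is about what one SAS2 uplink allows
# when 24 drives are active at once (assumed numbers as before).

uplink_mb_s = 4 * 6 * 0.8 * 1000 / 8   # one SAS2 x4 uplink, ~80% usable
print(f"~{uplink_mb_s / 24:.0f} MB/s per drive with 24 drives active")
# SAS3 doubles the per-lane rate, so the same layout would allow ~200 MB/s.
```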

 

Cheers,

 

 


Yes, you are right... I actually have not had to do a parity rebuild for years (wish me luck) and totally missed that use case ;-)

 

Interesting find... according to the other link:

 

According to the data in the link, connecting both ports of the same HBA does help (I ran a test with mine when I built it and did not see a difference, but I did not test parity rebuilds at that time).

 

So with 16TB disks, this introduces a severe risk of further drive failures during those 48 hours. SAS3 is the only way to go in order to reduce that risk then, isn't it?

 

Edit: the SAS3 backplanes can be found on "the bay" for around 100 bucks each, and most sellers offer a replacement/swap of SAS2 for SAS3 backplanes as well... definitely not a bargain.

Edited by Ford Prefect


I'm looking on eBay for these cases, and there is a price difference between the SAS2 and SAS3 versions, at least from reputable sellers: about 450 USD.

 

SAS3 seems like it would be the ideal choice for "long term" use, and it would definitely be quicker for parity checks, etc.

 

