Best 24i controller for the future?



Hi All

 

I currently have the following setup:

  • MB: Gigabyte C246-WU4 (PCIe Gen3) -> 10 onboard SATA 3 ports -> 10 drives
  • LSI Logic SAS 9201-16i PCIe 2.0 x8 SAS 6Gb/s HBA card -> 16 drives
  • Total of 26 drives + NVMe
  • Case: Servercase UK SC-4324S - Norco RPC-4224 4U (backplane with Mini-SAS 36-pin SFF-8087 to Mini-SAS 36-pin SFF-8087)

 

So (like most people) I have limited PCIe slots available, and I have a lot of older enterprise drives that I might want to put in this bad boy of a case in the future:

Chenbro-NR40700-4U-Storinator-48-bay - maybe Unraid will support dual arrays in the future :-)


I have googled, and it looks like the SAS 9305-24i Host Bus Adapter (x8, PCIe 3.0, 8000 MB/s) is usable with the backplane in the above case.

 


 

Questions

  • Can anyone confirm this controller will work with this case (Chenbro-NR40700-4U-Storinator-48-bay + expansion module backplane)?
  • Would I see any speed increase using this new controller in my existing system (e.g. parity check time)? All the drives are spinners, but they are all 7200 rpm and newer, faster models. The old LSI Logic SAS 9201-16i says "Up to 6Gb/s" and the new one is 12Gb/s, but the case says "24x hot-swappable SATA/SAS 6G drive bays", so I am not really sure this will provide any gain? (See the rough numbers sketched below.)
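
A back-of-envelope sanity check (a sketch only; the per-lane throughput figures are approximate usable rates, not datasheet numbers, and real-world speeds will be lower):

```python
# Rough per-drive bandwidth comparison for the two HBAs during a
# parity check with all drives reading at once (approximate numbers).

PCIE2_LANE_MBPS = 500  # ~usable MB/s per PCIe 2.0 lane
PCIE3_LANE_MBPS = 985  # ~usable MB/s per PCIe 3.0 lane

old_hba = 8 * PCIE2_LANE_MBPS  # 9201-16i, PCIe 2.0 x8 -> ~4000 MB/s total
new_hba = 8 * PCIE3_LANE_MBPS  # 9305-24i, PCIe 3.0 x8 -> ~7880 MB/s total

print(f"9201-16i with 16 drives: ~{old_hba / 16:.0f} MB/s per drive")
print(f"9305-24i with 24 drives: ~{new_hba / 24:.0f} MB/s per drive")
# A modern 7200 rpm drive peaks around 250 MB/s on its outer tracks, so
# the PCIe 2.0 x8 card sits right at the edge of being a bottleneck.
```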

 

Then again, it might be better to spend the money on newer, bigger drives to replace the older, smaller ones, rather than adding a bunch of smaller drives (cost: case + controller card), which would also increase power consumption.

 

As always, your input is much appreciated.

  

 

18 minutes ago, casperse said:

Can anyone confirm this controller will work with this case (Chenbro-NR40700-4U-Storinator-48-bay + expansion module backplane)?

Should work fine; you need the appropriate cables. Also, you only need a 16-port HBA for dual link (if supported by the backplanes), since you can't use all 24 ports with that chassis.
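
For context, here is what dual link works out to in round numbers (a sketch; assumes SAS2 6Gb/s links, approximates the 8b/10b and protocol overhead, and assumes the backplane expander supports a wide/dual-link connection):

```python
# Approximate shared bandwidth of a dual-link (2x SFF-8087, 8 lanes)
# connection to a SAS2 expander backplane.

LINK_GBPS = 6        # SAS2 line rate per lane
LANES_PER_PORT = 4   # one SFF-8087 connector = 4 lanes
EFFICIENCY = 0.8     # rough allowance for 8b/10b encoding + overhead

dual_link = 2 * LANES_PER_PORT * LINK_GBPS * 1000 / 8 * EFFICIENCY
print(f"Dual link to the expander: ~{dual_link:.0f} MB/s shared")
print(f"Spread over 24 drives: ~{dual_link / 24:.0f} MB/s per drive")
```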

 

20 minutes ago, casperse said:

The old LSI Logic SAS 9201-16i says "Up to 6Gb/s" and the new one is 12Gb/s, but the case says "24x hot-swappable SATA/SAS 6G drive bays", so I am not really sure this will provide any gain?

No gains in that part unless used with a DataBolt-capable LSI expander, though it might still be a little faster because the HBA is PCIe 3.0.

 

23 hours ago, JorgeB said:

Should work fine; you need the appropriate cables. Also, you only need a 16-port HBA for dual link (if supported by the backplanes), since you can't use all 24 ports with that chassis.

 

No gains in that part unless used with a DataBolt-capable LSI expander, though it might still be a little faster because the HBA is PCIe 3.0.

 

This doesn't sound promising, but thanks for the info. My hope was that a newer card with PCIe 3.0, given the old card's "Up to 6Gb/s" rating, would give me a speed increase (+20-30 MB/s).

On 10/17/2020 at 11:32 AM, JorgeB said:

The "Up to 6Gb/s" just means SAS2/SATA3 link for devices, that's never the bottleneck with disks, even 3Gb/s (SATA2) would be enough for most disks, see here for some benchmarks to give you a better idea of the possible performance increase going with a PCIe 3.0 HBA.

OK, thanks! So I am back to upgrading my cache instead. That would give some speed increase by keeping files on the cache longer before moving them to the array! (But I have to wait for 4TB models to be available at lower pricing.)

And of course upgrade to a 10Gbit network (after talking with @debit lagos, I think the money is better spent here, thanks!).
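
Quick numbers on why the network is the better target (a sketch; usable throughput assumed at ~95% of line rate, and real results depend on protocol and tuning):

```python
# Usable throughput of 1GbE vs 10GbE, and why the NVMe cache then matters.

def usable_mbps(gbit_line_rate, efficiency=0.95):
    """Convert a line rate in Gbit/s to rough usable MB/s."""
    return gbit_line_rate * 1000 / 8 * efficiency

print(f"1GbE:  ~{usable_mbps(1):.0f} MB/s (caps every transfer)")
print(f"10GbE: ~{usable_mbps(10):.0f} MB/s (fast enough that the NVMe "
      "cache pool, not the wire, sets the write speed)")
```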

 

But since I am using a Xeon E-2176G CPU, I have limited PCIe lanes! 😩

So do you guys think I can run my Quadro P2000 in a x4 slot and swap my cards around like this:

 

Slot 1 PCIe x8 - LSI Logic SAS 9201-16i PCIe 2.0 x8 SAS 6Gb/s HBA card

            M2A x2: - N/A (PCIe Gen3 x2 / SATA mode! - x2 = half speed? 1750 MB/s) - shares BW, so SATA port 3_1 not available? (not in use)

Slot 2 PCIe x4 - Nvidia Quadro P2000 (VCQP2000-PB) (will it manage with only x4 lanes?)

Slot 3 PCIe x8 - Intel Ethernet Converged Network Adapter X710-T4 (RJ45 4x 10GbE LAN) - need two ports for the pfSense router and two for the local LAN

            M2M x4: - Samsung MZ-V7S2T0BW 970 Evo Plus SSD [2TB] (PCIe Gen3 x4) - SHARES BW with PCIEX_x4 below (should be OK, not a big impact)

Slot 4 PCIEX_x4 - Intel Pro/1000 VT quad-port NIC (EXPI9404VTG1P20) - SHARES BW with M2M (4x LAN for XPenology)

 

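To sanity-check the lane budget (a sketch only; the Xeon E-2176G exposes 16 PCIe 3.0 lanes from the CPU and the rest come from the C246 chipset over DMI, but the slot-to-CPU/chipset wiring is an assumption here, so check the C246-WU4 manual):

```python
# Rough PCIe lane tally for the proposed layout. Which slots hang off the
# CPU vs. the chipset is ASSUMED here -- verify against the board manual.

cards = {
    "Slot 1: LSI 9201-16i HBA": 8,
    "Slot 2: Quadro P2000": 4,
    "Slot 3: Intel X710-T4": 8,
    "Slot 4: Intel Pro/1000 VT": 4,
    "M2M: 970 Evo Plus NVMe": 4,
}

total = sum(cards.values())
print(f"Lanes consumed in total: {total} (the CPU provides only 16,")
print("so several of these must run from the chipset and share DMI bandwidth)")
for name, lanes in cards.items():
    print(f"  {name}: x{lanes}")
```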

 


I've read about folks running those video cards down at x8, but nothing about x4. At best, the card will register, but you won't be getting anything close to its potential out of it.

 

Just from the card specs alone, you really need another x8 slot at minimum. That said, everything will probably work; the Nvidia card just won't deliver its full potential due to the limited lanes it's riding on. Motherboard selection was the first thing I spent weeks on, to ensure that if I wanted to expand capabilities in the future, I'd have the bus speeds and lanes to support them. A 10G NIC is a good example. I'm using an Asus M.2 Gen 2 NVMe adapter that holds four NVMe drives; it needs a x4x4x4x4 PCIe slot (bifurcation), which my motherboard supports.

 

Just a few things to consider. We can chat here or in PMs if you want. Hopefully my thoughts are helpful.

On 10/20/2020 at 2:49 AM, debit lagos said:

I've read about folks running those video cards down at x8, but nothing about x4. At best, the card will register, but you won't be getting anything close to its potential out of it.

 

Just from the card specs alone, you really need another x8 slot at minimum. That said, everything will probably work; the Nvidia card just won't deliver its full potential due to the limited lanes it's riding on. Motherboard selection was the first thing I spent weeks on, to ensure that if I wanted to expand capabilities in the future, I'd have the bus speeds and lanes to support them. A 10G NIC is a good example. I'm using an Asus M.2 Gen 2 NVMe adapter that holds four NVMe drives; it needs a x4x4x4x4 PCIe slot (bifurcation), which my motherboard supports.

 

Just a few things to consider. We can chat here or in PMs if you want. Hopefully my thoughts are helpful.

Thanks for all your input!

Seems I can go one of two ways now:

 

1) Remove the P2000 and use the built-in iGPU for encoding instead (speed/quality?). That would free up a PCIe x8 slot for one 2x 10GbE x8 card.

2) Go back and look at what it would cost to upgrade the server (more PCIe slots, 64 lanes in total instead of 32, and more than 64GB of RAM).

 

It always comes down to "Time" = $$$ 😫

