New Inexpensive 24-Port SATA PCIe Card



NOTE: I own this card but have not yet tested it in my fully populated 24-drive array to see how it performs for speed to each drive, under Unraid on XFS, using reconstruct write (i.e. all drives active at the same time). I appreciate the discussion I had below and will report back on this card as soon as I get it tested.

 

IMPORTANT: CURRENTLY UNTESTED on Unraid. Do not purchase expecting it to work until I finish my testing, unless you want to take that risk. See the replies to this post below.

 

I recently had my LSI 9305-24i fail, and I purchased the following card and cables to replace it. I believe it is fully supported, and I also believe it is a really inexpensive alternative to LSI HBAs for 24 internal ports.

 

 

1x Amazon.com: IO CREST 24 Port SATA III to PCIe 3.0 x4 Non-RAID Expansion Card JMB575 JMB582 Low Profile Bracket, SI-PEX40169 @ ~$150

3x Amazon.com: ChenYang CY SFF-8654 8i 74Pin PCI-E Ultraport Slimline SAS Slim 4.0 to Dual SFF-8087 Mini SAS Cable PCI-Express @ ~$36 each

 

Note: you may need different cables depending on what you are connecting to.

Note: the ~$150 card includes 3x SFF-8654 (the large connector on the board) to 8x SATA breakout cables in the price.

 

Total: ~$250 for 24 internal JBOD SATA ports, with cables.

 

[product photos from the Amazon listing]

 

I will report back after I actually install it, but I thought it would be of interest to others.

 

 

Edited by bing281
Added a note that I need to test this card before actually recommending it, both for speed to all drives and for stability of the Unraid XFS array itself.

I agree those are generally not recommended, and I don't even use them behind my HBAs, but I have also been reading lately that these specific port multipliers are supported, and that they are fast enough to max out all the ports at once with spinning drives, though not with SSDs.

 

Please correct me if I am wrong, because that was my understanding of these specific chips from this forum and from storage review sites.

 

Here is the basic layout of this card:

 

[product image showing the card's layout]

 

It goes JMB585 -> 5x JMB575 -> 25 SATA ports (24 exposed).

 

In the thread you referenced, JMB585-based cards are called out as recommended. The difference here is the secondary breakdown through the JMB575, which to my knowledge should only be a speed issue for SSDs.

 

Perhaps I am missing something; I look forward to understanding this better. Thanks for the help.

Edited by bing281

Shoot, I think I see the issue now.

 

Basically it goes: a JMB585, which bridges PCIe to 5x SATA ports (6Gbps each),

then each of those 5 SATA links feeds a JMB575, a SATA port multiplier that splits 1x SATA into 5x SATA (6Gbps), for a total of 25 SATA ports, of which 24 are used and 1 is left unconnected.

 

So you are saying that Unraid doesn't like the JMB575 multiplier in this case. What is the reason Unraid doesn't like port multipliers: is it a speed thing, or some other inherent issue?

 

Let me do some math real quick here.  

 

PCIe 3.0 x4 = 3.938 GB/s, or 31.504 Gbps

PCIe x4: 31.504 Gbps / 5 SATA links = 6.30 Gbps to each multiplier over its SATA link

6.30 Gbps / 5 ports per multiplier = 1.26 Gbps to each of the final 24 SATA ports

 

1.26 Gbps on each of the 24 ports is approximately 157.5 MB/s max speed per port.
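To double-check myself, here is the same arithmetic as a quick Python sketch. It assumes the card really negotiates a PCIe 3.0 x4 link and that bandwidth is shared perfectly evenly across stages, both of which are optimistic assumptions:

```python
# Back-of-the-envelope estimate for the JMB585 + 5x JMB575 card, assuming a
# PCIe 3.0 x4 host link and perfectly even sharing (optimistic).

PCIE3_LANE_GBPS = 8 * 128 / 130          # 8 GT/s per lane minus 128b/130b encoding ~= 7.877 Gbps
host_gbps = 4 * PCIE3_LANE_GBPS          # x4 link ~= 31.5 Gbps

per_multiplier_gbps = host_gbps / 5      # split across the 5 SATA links to the JMB575s
per_port_gbps = per_multiplier_gbps / 5  # each JMB575 splits its uplink 5 ways
per_port_mbs = per_port_gbps * 1000 / 8  # Gbps -> MB/s

print(f"per multiplier: {per_multiplier_gbps:.2f} Gbps")  # ~6.30 Gbps
print(f"per port:       {per_port_mbs:.1f} MB/s")         # ~157.5 MB/s
```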

 

During a parity check, my spinners run around 160 MB/s max at the beginning of the check, on the fast outer tracks of the platters.

 

Now, continuing to think out loud: there is clearly some loss in all of this, but theoretically the card can push all 24 of my spinners at max speed, which they can only sustain for roughly the first 0-25% of a parity check. So while I would see some decrease in max sustainable speed, I don't think it would drop below 140 MB/s, around a 10% loss, and I think it could maintain that speed easily. For a 10% loss in max sustainable speed, I can stop purchasing $1000 cards. However, this only holds true if there is not some other issue, unknown to me, for why Unraid can't use port multipliers.

 

Thoughts?  

 

I am currently running my older -8i IT-mode cards on my drives, and I think they bring the max sustainable speed down to around 120 MB/s.

Edited by bing281
4 hours ago, bing281 said:

So you are saying that Unraid doesn't like the JMB575 multiplier in this case.

It's not Unraid, it's a general issue, though with Unraid it can be worse since all the devices are accessed simultaneously (if they are in the array).

 

4 hours ago, bing281 said:

PCIe x4: 31.504 Gbps / 5 SATA links = 6.30 Gbps to each multiplier over its SATA link

6.30 Gbps / 5 ports per multiplier = 1.26 Gbps to each of the final 24 SATA ports

Each SATA port's 600MB/s is split across 5 ports, so 600/5 = 120MB/s max per port if all are used at the same time on the same multiplier. Also, the JMB585 is PCIe x2 (the connector is only x4 physically), so max usable bandwidth is around 1750MB/s; if all drives are accessed at the same time, that's around 73MB/s max per device.
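The same sketch with these corrected constraints (the ~1750MB/s figure is the practical estimate above, not a spec-sheet number):

```python
# Corrected estimate: each JMB575's upstream link is capped at SATA's
# ~600 MB/s, and the JMB585 host link is PCIe 3.0 x2 (~1750 MB/s usable).

SATA_MBS = 600          # usable bandwidth of one SATA 6 Gbps link
HOST_MBS = 1750         # practical PCIe 3.0 x2 throughput (estimate)
DRIVES = 24

per_port_sata_limit = SATA_MBS / 5       # 120 MB/s if all 5 ports on one multiplier are busy
per_port_host_limit = HOST_MBS / DRIVES  # ~73 MB/s if all 24 drives are busy

# The binding constraint is whichever limit is lower.
print(f"SATA-link limit: {per_port_sata_limit:.0f} MB/s per port")
print(f"host-link limit: {per_port_host_limit:.0f} MB/s per drive")
print(f"worst case:      {min(per_port_sata_limit, per_port_host_limit):.0f} MB/s")
```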

7 hours ago, JorgeB said:

It's not Unraid, it's a general issue, though with Unraid it can be worse since all the devices are accessed simultaneously (if they are in the array).

 

Each SATA port's 600MB/s is split across 5 ports, so 600/5 = 120MB/s max per port if all are used at the same time on the same multiplier. Also, the JMB585 is PCIe x2 (the connector is only x4 physically), so max usable bandwidth is around 1750MB/s; if all drives are accessed at the same time, that's around 73MB/s max per device.

Yeah, I think you are right about the 73MB/s. The card's listing says it requires four lanes, but who knows whether it really uses them.

 

I think I am going to install it on one of my test Unraid servers and see how it does. I am interested in how it operates with 24 hard drives and what speed it really delivers, because if it did give 160MB/s per drive and was stable, it would be an excellent inexpensive Unraid solution. Since I have one, I might as well find out for others. That said, I went on eBay yesterday and purchased a -24i HBA, which was surprisingly inexpensive at only $200, though without any cables.

 

So my current advice is that no one should run out and buy this for Unraid until I put it in my system, see how it runs, and report back.

 

I appreciate your help and comments, thank you.

  • 2 months later...
  • 4 weeks later...

For all those concerned about bandwidth (and saying the solution is bad because it's limited), also think about the PCIe bus connection. One PCIe 3.0 lane is around 1 GB/s (more like 750 MB/s in practice). So this card at 4 lanes gives a maximum of ~3000 MB/s, which is ~125 MB/s per drive across 24 drives. That's just about golden for 5400 rpm disks.
If you have an x8 Adaptec/LSI card, it's not going to push much faster when using all drives at the same time. I think that last point is key: when are you likely to use 24 drives _at the same time_? Quite unlikely, unless you have a 24-drive RAID-5/6 set (and that is nuts from a reliability point of view).

@bing281, did you come to a conclusion in your tests? (I ask because I have one lying on my desk, about to be used with 8TB QVO SATA SSDs as long-term storage, fronted by 8x Intel P4150 drives, which will be connected through this extra cool/special card: https://www.amazon.nl/-/en/PCIe-ports-Switch-chipset-Profile/dp/B097HRQJZ8 (PCIe 3.0 x16 card for 8 U.2 NVMe SSDs (U2 NGFF), or 8 ports of PCIe x4, multi-host switch card, PLX PEX 8749 chipset, high and low profile).) The idea is to use the NVMe disks as a caching layer in front of the SATA disks. Basically the PCIe switch card allows me to run 8 drives on an x16 PCIe bus, limited to ~1500MB/s per drive, and around 12GB/s if all are used at the same time. So the SATA backing store is a factor of ~4x slower than the cache. The idea is to have the cards in the server and add disks as my requirements grow.
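A quick Python sketch of that oversubscription math, using the ~750 MB/s-per-lane practical figure from above (an estimate, not a measured value):

```python
# Oversubscription estimate for a PLX PEX 8749 switch card: 8 NVMe drives on
# x4 downstream ports sharing one x16 upstream link.

LANE_MBS = 750                 # practical per-lane PCIe 3.0 throughput (rough estimate)
upstream_mbs = 16 * LANE_MBS   # ~12000 MB/s total through the x16 slot

drives = 8
per_drive_all_busy = upstream_mbs / drives  # ~1500 MB/s each under full load
per_drive_alone = 4 * LANE_MBS              # a lone drive tops out at its x4 link
                                            # (assuming the switch doesn't cap ports lower)

print(f"all 8 busy: {per_drive_all_busy:.0f} MB/s per drive")
print(f"one drive:  {per_drive_alone:.0f} MB/s")
```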

Edited by camprr
Added more information about the complete setup for Unraid
14 minutes ago, camprr said:

When are you likely to use 24 drives _at the same time_?

Most don't, but there are certainly some who do with Unraid. 

 

15 minutes ago, camprr said:

a 24-drive RAID-5/6 set (and that is nuts from a reliability point of view)

Unraid doesn't do it like that, but dual parity is definitely recommended with that many disks.


But it's not like you are sitting next to it, waiting. Are you? I am talking about performance bottlenecks that cause a loss in 'exterior' performance. I mean, after all, you have to have something that processes data at that speed. So if we manage to reach 10Gbit, basically all is good. Or not? And 10Gbit can go through a PCIe x2 slot. My internet connection is 1Gbit, I do a bit of unrarring (but that's all read from memory, as they are recent writes), and then the media is watched at a 'mellow' pace (10-50Mbit).

My question is more this: "for a hobby environment, is a PCIe x4 to 24x SATA card plenty good?" Yes, I think, is the answer. For enterprise-grade server use it is not, but would you be running Unraid for that? More likely Ceph or something else that provides multi-node redundancy and higher performance.

47 minutes ago, camprr said:

But it's not like you are sitting next to it, waiting. Are you? I am talking about performance bottlenecks that cause a loss in 'exterior' performance.

When you run in reconstruct write mode, any write to the array hits all drives at the same time, i.e. a full-load scenario.

Edited by Kilrah
  • 9 months later...

I'm currently only planning, and I won't use this card or that many disks. I'm planning on using 2x of this Delock card: https://www.delock.com/produkt/90010/merkmale.html?setLanguage=en

 

I did the math on the I/O and calculate that I get roughly 200MB/s per device, which is close to maxing out HDD speeds, so I think that will be fine for me. However, I don't know how Unraid handles this. Can anyone update me on it? Should I scrap the idea?

 

Thanks.

7 hours ago, Cerberus9 said:

I'm currently only planning, and I won't use this card or that many disks. I'm planning on using 2x of this Delock card: https://www.delock.com/produkt/90010/merkmale.html?setLanguage=en

 

I did the math on the I/O and calculate that I get roughly 200MB/s per device, which is close to maxing out HDD speeds, so I think that will be fine for me. However, I don't know how Unraid handles this. Can anyone update me on it? Should I scrap the idea?

 

Thanks.

The ASM1064 generally works well with Unraid, and PCIe 3.0 x1 is also fine for 4 spinning disks.
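A quick sketch of the math behind that; the ~750MB/s practical-per-lane figure and the 160MB/s spinner speed are rough assumptions borrowed from earlier in the thread:

```python
# Sanity check for a 4-port ASM1064 card on PCIe 3.0 x1 (e.g. the Delock card
# mentioned above), using the thread's rough practical estimates.

LANE_MBS = 750       # practical PCIe 3.0 x1 throughput (estimate)
PORTS = 4
HDD_MBS = 160        # typical fast spinner at the outer tracks (assumption)

per_port = LANE_MBS / PORTS   # ~188 MB/s with all four ports busy
print(f"per port under full load: {per_port:.0f} MB/s")
print(f"bottlenecked? {per_port < HDD_MBS}")   # False: the drives remain the limit
```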

  • 4 months later...

I have recently been testing JMB585-based controller cards paired with multiplier boards using the JMB575. So far all of my tests have shown very positive results, falling within the speeds I calculated and expected. I have also been testing the ASMedia controller, with no issues yet.

 

I just noticed this card on Amazon about 30 minutes ago and realized it combines everything into one card, instead of juggling one controller card and 5 separate multiplier boards. Plus, one of my concerns was heat on the controller and multiplier chips; this board has a nice heatsink to take care of that too!

 

Do I recommend this chipset solution? I am going to upgrade 1 or 2 of my older Unraid systems this way, as it will drastically speed up parity checks and drive rebuilds if needed. But, as not many people are running this combination, I would say proceed with caution.

Link to comment
9 minutes ago, electron286 said:

I have recently been testing JMB585-based controller cards paired with multiplier boards using the JMB575. So far all of my tests have shown very positive results, falling within the speeds I calculated and expected. I have also been testing the ASMedia controller, with no issues yet.

 

I just noticed this card on Amazon about 30 minutes ago and realized it combines everything into one card, instead of juggling one controller card and 5 separate multiplier boards. Plus, one of my concerns was heat on the controller and multiplier chips; this board has a nice heatsink to take care of that too!

 

Do I recommend this chipset solution? I am going to upgrade 1 or 2 of my older Unraid systems this way, as it will drastically speed up parity checks and drive rebuilds if needed. But, as not many people are running this combination, I would say proceed with caution.

 

But there is still no test report from @bing281, and JorgeB estimates throughput can drop to around 73MB/s per disk.


So far, with my testing, I would NOT recommend using more than 3 of the SATA ports on the JMB585 chip. That leaves about 200MB/s of unused bandwidth on the PCIe 3.0 x2 connection; I like having NO bottlenecks when possible. Using 4 or 5 ports has worked well in my tests, but it definitely slows things down due to having only two lanes on the PCIe bus.

 

Similarly with the JMB575 multipliers, I would not recommend using more than 3 of the 5 ports. That way the 600MB/s upstream port is shared by only 3 SATA ports, providing an average of 200MB/s each. This would make this 24-port board usable, in my mind, for only 9 drives at what I would describe as nice quick speeds for parity build/check and drive rebuild (about 7 hours in my tests for 4TB drives, actually averaging about 150MB/s per device, with mixed HDD and SSD devices).
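Here is the derating logic as a quick Python sketch, using the same rough bandwidth estimates discussed earlier in the thread:

```python
# How many active ports per chip avoid saturating each stage?
# JMB585 uplink: PCIe 3.0 x2; JMB575 uplink: one SATA 6 Gbps link.

SATA_MBS = 600        # usable bandwidth of one SATA 6 Gbps link
PCIE_X2_MBS = 1969    # theoretical PCIe 3.0 x2 (practical is lower, ~1750)

for active in range(1, 6):
    jmb585_demand = active * SATA_MBS   # active SATA links feeding the JMB585
    jmb575_share = SATA_MBS / active    # per-port share of one multiplier's uplink
    ok = jmb585_demand <= PCIE_X2_MBS
    print(f"{active} ports: JMB585 needs {jmb585_demand} MB/s "
          f"({'fits' if ok else 'oversubscribed'}), "
          f"JMB575 gives {jmb575_share:.0f} MB/s/port")

# With 3 ports active per chip: 1800 <= 1969 on the JMB585 and 200 MB/s per
# port on each JMB575 -> 3 multipliers x 3 ports = 9 drives at ~200 MB/s each.
```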

 

But used (and even some new) LSI/Broadcom 9207-8i cards (or 8e, for that matter) can be found rather cheaply. And if you have a motherboard with a couple of open x16-size PCIe 3.0 slots, each with 8 or more lanes available, two 9207-8i boards could serve 16 SATA drives at a MASSIVE available speed of about 1000MB/s per drive, which would never be used since each drive is limited to 600MB/s by its interface! Of course, a SAS expander could be used with one controller instead, but the total price is about the same.
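And the same arithmetic for the dual-HBA option, assuming each 9207-8i gets a full PCIe 3.0 x8 link (theoretical figures):

```python
# Dual LSI 9207-8i alternative: each card is PCIe 3.0 x8 serving 8 SATA drives.
X8_MBS = 8 * 985        # ~7880 MB/s theoretical per x8 slot (985 MB/s per lane)
per_drive = X8_MBS / 8  # ~985 MB/s available per drive
print(f"{per_drive:.0f} MB/s per drive, capped at 600 MB/s by SATA anyway")
```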

 

The testing is fun, and I have learned from it so far, but ultimately realistic limits apply at every stage: the PCIe generation's bandwidth, the number of available lanes, any limits on SATA or SAS links shared through the various controllers, expanders, and multipliers, and the total number of drives desired. There are of course big differences between SSDs and HDDs, and even between various models and brands of each. Surprisingly, many of the SATA SSDs I have tested so far have actually been slower than spinning hard drives in many of my tests.

 

Even at 73MB/s per disk, it would be a big speed increase over my oldest server, which is currently running old PCI-mode controllers. But when looking at an upgrade, that is not where I want to target a hardware update.

