bsim

Areca ARC-1280ML 24 Port vs SAS2008's...Upgrade?


I currently have an Areca ARC-1280ML 24 Port with 2GB ECC onboard (Intel IOP341 processor) running only JBOD (got 2 of them REALLY cheap). 23/24 are spinners (mostly 7200 RPM, some 5400), and I'm trying to decide whether replacing my controller card with 2 or 3 separate controller cards (most likely SAS2008s) for the 24 drives would increase my read speeds.
Currently, my parity checks (all XFS drives) take about 1 day 12 hr at about 38.9 MB/s. From what I've researched, with spinners, the SATA2 interface of the Areca shouldn't be a limiting issue. From the documentation on the card, the best transfer rate figure I can find is 885 MB/s sustained read in RAID 0. The card has 6 SAS 8087 ports, but the documentation doesn't really tell me the transfer rate of each SAS port (beyond that overall 885 MB/s figure).
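
As a quick sanity check on those numbers (a rough sketch in Python; it assumes the check reads the full parity disk end to end at the reported average speed):

# Relationship between parity-check duration, average speed, and parity disk size.
# Assumes the check scans the entire parity disk at the reported average rate.
avg_speed_mb_s = 38.9                  # reported average, MB/s
check_hours = 36                       # "about 1 day 12 hr"

parity_tb = avg_speed_mb_s * check_hours * 3600 / 1e6
print(f"Implied parity disk size: ~{parity_tb:.1f} TB")                  # ~5.0 TB

# Average speed needed to finish the same check in 12 hours instead:
target_hours = 12
needed_mb_s = parity_tb * 1e6 / (target_hours * 3600)
print(f"Needed for a {target_hours} hr check: ~{needed_mb_s:.0f} MB/s")  # ~117 MB/s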

Do I have a bottleneck with my controller card?

Will splitting the drives over multiple SAS2008's increase my read speeds/lower my parity check times?

2 hours ago, bsim said:

Do I have a bottleneck with my controller card?

Yes, not because it's SATA2 but mostly because you have 24 disks on a PCIe 1.0 x8 controller.
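
Back-of-the-envelope on why (a sketch in Python; the ~75% usable figure is the PCIe overhead estimate discussed further down):

# Rough per-drive ceiling when 24 disks share one PCIe 1.0 x8 link.
link_theoretical_mb_s = 2000        # PCIe 1.0 x8 is roughly 2 GB/s each direction
usable_fraction = 0.75              # roughly 25% lost to PCIe/protocol overhead
disks = 24

per_disk = link_theoretical_mb_s * usable_fraction / disks
print(f"~{per_disk:.0f} MB/s available per disk")    # ~62 MB/s

# A modern 7200rpm spinner can read 150-200+ MB/s on its outer tracks,
# so during a parallel parity check the link, not the drives, is the cap.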

 

You can check this thread for some performance numbers on the LSI controllers; also check the SAS expander tests. One HBA plus one expander would also be a good option:

 



Thank you for the help! It really got me digging, and now I think I understand it.

Is there anything that doesn't look right below?

 

------------------------------------------------------------------------------------

 

Thinking out loud below and working through my questions for others' reference...

 

It looks like the PCIe limitation of the card puts me only minimally above the necessary bus rate. So it sounds like just offloading half of the drives onto a second Areca 1280ML (I have one as a backup in storage) would eliminate any loss for my collection of spinners.

 

------------------------------------------------------------------------------------

My motherboard is a Supermicro H8DGi (https://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8DGi.cfm) (full virtualization, 44GB RAM, 32 cores), and it has 3 PCIe 2.0 x16 slots and one PCIe 2.0 x8 slot that I can use.

 

I don't really have a problem using 3 of my 4 available slots for controllers, or with buying up to PCIe 3.0 cards even though I only have PCIe 2.0 slots right now.

I considered upgrading to an Adaptec 71605 (16 Drives) and a Dell PERC H200 (8 drives) rather than just running 3 Dell PERC H200's (8 drives x 3) or 3 SAS 9207-8i's (8 drives x 3).

 

The bigger question is, if I do save myself a slot and use the 16 drive Adaptec, will the 16 drives cause another bottleneck?

How much bus bandwidth does each spinner typically use?

 

How much room does the bus bandwidth math leave for adding SSDs in the future if I just go to 2x Adaptec 71605's ((16+8)+8)?

 

How much bus bandwidth does an SSD typically use?

 

------------------------------------------------------------------------------------

 

From what I've read (assuming theoretical numbers):

 

7200 RPM desktop (non-Black/Red) spinner ~ 150 MB/s (a bit high, I think, for parity-type reads)

SSD ~ 500 MB/second

 

PCIe 1.0 x8 ~ 2,000 MB/s (4 SSD's OR 13 x 7200s)

PCIe 2.0 x8 ~ 4,000 MB/s (8 SSD's OR 26 x 7200s)

PCIe 3.0 x8 ~ 8,000 MB/s (16 SSD's OR 53 x 7200's)

PCIe 4.0 x8 ~ 16,000 MB/s (32 SSD's OR 106 x 7200's)

In my case with an Areca 1280ML, I am currently running:

24 x 7200 Spinners ~ 24x150=3,600 MB/s (Areca would max out at less than half that)

 

Using 12 spinners on 2 separate Areca 1280ML's would only use 1,800 per card (may be close to maxed out with REAL reads)

 

Assuming ~75% of theoretical bandwidth is actually usable... this may make it close!
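
Putting numbers on the two-card idea (a sketch using the same figures and the 75% assumption above):

# Does splitting the 24 spinners across two PCIe 1.0 x8 Areca cards clear the bottleneck?
spinner_mb_s = 150                    # per-drive estimate used above
link_theoretical = 2000               # PCIe 1.0 x8, MB/s
usable = 0.75 * link_theoretical      # ~1500 MB/s after ~25% overhead

for disks_per_card in (24, 12):
    demand = disks_per_card * spinner_mb_s
    status = "within budget" if demand <= usable else "over budget"
    print(f"{disks_per_card} disks/card: need ~{demand} MB/s, have ~{usable:.0f} MB/s usable ({status})")

# 24 disks/card: 3600 vs ~1500 -> well over budget
# 12 disks/card: 1800 vs ~1500 -> still slightly over, which is why it's "close"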

 

------------------------------------------------------------------------------------

Reason SATA interface isn't an issue for spinners:

SATA 2 = 300 MB/s

SATA 3 = 600 MB/s

 

 


1 hour ago, bsim said:

7200 Desktop spinner ~ 100 MB/second

Fastest current high capacity disks max out at around 275MB/s, smaller capacity 7200rpm disks at 200/225MB/s.

 

1 hour ago, bsim said:

SSD ~ 400 MB/second

Good SSDs max out at around 550MB/s.

 

Also don't forget PCIe overhead, as shown in the tests; it averages around 25%, so only around 75% of the max theoretical bandwidth can be used.

 

2 hours ago, bsim said:

PCIe 3.0 x8 ~ 16,000 MB/s

This one is wrong, theoretical max is 8000MB/s, usable around 6000MB/s.
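
Redoing the budget with those corrected figures (a sketch; the drive speeds and the 25% overhead are the numbers above, the x8 slot widths are the cards being discussed):

# Usable bandwidth per x8 link after ~25% PCIe overhead, vs drives running flat out.
pcie_x8_theoretical = {"1.0": 2000, "2.0": 4000, "3.0": 8000}   # MB/s
spinner = 225    # fast 7200rpm disk on the outer sectors, MB/s
ssd = 550        # good SATA SSD, MB/s

for gen, raw in pcie_x8_theoretical.items():
    usable = raw * 0.75
    print(f"PCIe {gen} x8: ~{usable:.0f} MB/s usable -> "
          f"{int(usable // spinner)} fast spinners or {int(usable // ssd)} SSDs flat out")

# A PCIe 2.0 x8 HBA like the SAS2008 gets ~3000 MB/s usable, comfortably
# enough for its 8 drives even if every one of them is a fast spinner.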

 


Thanks for helping out.

 

I would agree that SSD's do tend to come in the 500 MB/s range (not sure why I included SSD's anyway).

 

I did oops on the PCIe 3.0 x8 max; 16,000 MB/s is PCIe 4.0 (not sure why I included it to begin with)!

 

Regarding spinners, I have never seen a common consumer desktop drive get beyond the 170/180 range, even for the higher-end Black and Red versions. Here are several CrystalDiskMark results for typical 4TB drives. Normal drives are MUCH lower. I've seen maximums or burst rates advertised as huge, worthless numbers before.

 

I think that parity checks are closer to the 512k reads/writes than to purely sequential.

 

OVERALL.png
OVERALL2.png
entry_332_01_4k.png
seagatecdm.png
wd-cdm.jpg
Western Digital Red 4TB.png
WD Black 4TB.jpg


I think the blue and red bars on the last yellow graphic are swapped. Not that serious.

 


Those "hgst hdn721010ale604"'s are NAS rated drives...should be comparable to the Ironwolf models that are slightly cheaper. I noticed all the rest on the list are only 150MB/s. It would be awesome if the higher speeds would come down in price, but I can get 15TB (5TB drives at 99$ a piece) to get me to that price with still money left over.

19 minutes ago, bsim said:

Those "hgst hdn721010ale604"'s are NAS rated drives

Nothing to do with that; it's just about platter density and RPM. Like I mentioned, any modern 1, 2, 3, or 4TB 7200rpm SATA drive by Toshiba, WD, or Seagate does 200MB/s on the outer sectors.


I'm thinking that upgrading the cards as my next stage will be moot... here is what I've found. I expected more from spreading the drives across separate controllers.

 

I took 12 of the 24 spinning drives off of the Areca 1280ML 2GB (PCIe1 x8) and put them on the backup Areca 1280ML 2GB card... I had to see if there was any movement in the parity check times/transfer rates. The transfer of the drives to the second card worked flawlessly/automatically, to my surprise. I had a screenshot of the array just in case I had to remap things!

 

Next step of research...

I have two new LSI IT mode cards that will run in two of my true PCIe2 x16 slots...

LSI SAS 9201-16i 6Gbps PCIe2 x8

LSI SAS 9207-8i PCIe3 x8

 

The nice part of this next migration is that it will allow me to actually pull SMART data (dashboard temps, preclear SMART reports...) from the array (Unraid's support for Areca sucks and has hard-coded issues that have never been and will never be addressed).

 

 

Unraid Parity Check Diff Between 1 and 2 Areca PCIe1 x8 controllers (marked).png


Final Verdict:

****24 spinning drives do not max out one PCIe1x8 Areca enough to slow down a parity check.

 

I did run a speed test on all of my drives before switching from the dual areca's to the LSI controllers just to eliminate any possible variables. The results were consistent between the two setups.
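
For reference, a minimal per-drive sequential read sketch along those lines (not the script used for the screenshot below; the device names are placeholders and it reads through the page cache, so treat the results as rough):

#!/usr/bin/env python3
# Rough per-drive sequential read check. Run as root.
import os
import time

DEVICES = ["/dev/sdb", "/dev/sdc"]      # placeholder list, adjust to the actual array members
READ_BYTES = 512 * 1024 * 1024          # read 512 MiB from the start of each drive
CHUNK = 1024 * 1024                     # 1 MiB per read() call

for dev in DEVICES:
    fd = os.open(dev, os.O_RDONLY)
    try:
        start = time.time()
        remaining = READ_BYTES
        while remaining > 0:
            data = os.read(fd, min(CHUNK, remaining))
            if not data:                # hit end of device
                break
            remaining -= len(data)
        elapsed = time.time() - start
        read_mb = (READ_BYTES - remaining) / (1024 * 1024)
        print(f"{dev}: {read_mb / elapsed:.0f} MB/s")
    finally:
        os.close(fd)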

 

My next step will be to slowly phase out the slower drives (it is interesting to see how the speed of a drive directly correlates with its size), but that will happen only as I need additional space.

 

 

Screenshot_20190320-194818_Gallery.jpg

OVERALL DISK SPEED TEST.png


Have you tried changing the md_* tunables in the disk settings? That made a huge improvement for me, as I too was stuck at 40MB/sec regardless of what cards I used until I increased those settings.

 

Here's the thread with more details.

 


I've considered running the tunables tester, but it hasn't been updated in a long while and doesn't even test the same tunables that are available in the most recent versions of Unraid. It would be great if it were updated, however.

 

What was the difference between the 40 you were getting and what you got afterwards?

 

Did you run the tunables script? What Unraid version did you run it on at the time?


What CPU do you have? That can also easily be a bottleneck. As for the tunables, these are good for most configs, though I've never had an Areca:

 

Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000

 


Arecas are out of the equation completely at this point...now 2 LSI controllers (9207-8i (SAS2308) and 9201-16i (SAS2116))

(with a SAS2008 running my SSD cache drives and some extra odd drives)

 

I attempted those tunables and I'm still not breaking past the 40s.

 

I have dual AMD Opteron 6272 @ 2.1GHz processors (32 cores total) and 44GB of RAM (Supermicro H8DGi motherboard).

 

The only other item of interest in the configuration might be the backplane I use in my Supermicro case... but I doubt it would cause a bottleneck, given that it supports the large drives I have without any issues. Could it cause a bottleneck?

 

I attached load statistics while not doing a parity check...

System Stats.png

 

...and load statistics while doing a parity check...

 

 

System Stats on Parity.png

3 minutes ago, bsim said:

I have Dual AMD Opteron 6272

That's part of the problem, and likely the main part: the parity check is single-threaded, and that CPU has a very low single-thread rating. To see significant speed improvements with the number of disks you have, you'd need a faster CPU.
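
To get a feel for why single-thread speed matters (a rough illustration only; the real parity code runs in the kernel and is far faster than Python/numpy, but XOR-ing the stripes from every data disk is still one serial stream of work):

# Single-core XOR throughput: fold 23 "data disk" buffers into one parity buffer,
# the same shape of work a parity check does for each stripe.
import time
import numpy as np

DATA_DISKS = 23                          # roughly the number of drives in an array this size
BUF_BYTES = 16 * 1024 * 1024             # 16 MiB per simulated disk
words = BUF_BYTES // 8

rng = np.random.default_rng(0)
disks = [rng.integers(0, 2**63, words, dtype=np.uint64) for _ in range(DATA_DISKS)]
parity = np.zeros(words, dtype=np.uint64)

start = time.time()
for d in disks:
    np.bitwise_xor(parity, d, out=parity)   # runs on a single core
elapsed = time.time() - start

processed_mb = DATA_DISKS * BUF_BYTES / 1e6
print(f"XOR'd {processed_mb:.0f} MB in {elapsed:.3f}s -> ~{processed_mb / elapsed:.0f} MB/s on one core")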


Would it really be killing my speed that drastically?

 

If I were to swap out the processors for, say, Opteron 6386 SEs (which have almost double the single-thread rating, ~1200 vs the 6272's ~600, and are the highest Opteron 6000-series CPUs supported), what kind of improvement would I see?

 

Does the parity check always only run on a single thread? Is it possible to change it to multi?

 

Would my supermicro backplane be a possible culprit?

1 hour ago, bsim said:

Would it really be killing my speed that drastically? 

Yep, IMHO it's the most likely reason.

 

1 hour ago, bsim said:

If I were to swap out the processors with say Opteron 6386 SE's (which have almost double the single thread rating (~1200 vs 6272 ~600) - highest opteron 6000 series supported) what kind of improvement would I see?

Difficult to say for sure, but I would expect a noticeable improvement, though still far from full speed.

 

1 hour ago, bsim said:

Would my supermicro backplane be a possible culprit?

What model is the backplane? From your description I assume it's a direct-connection model, without an expander; if that's the case, there are no bottlenecks there.


From here: https://www.cpubenchmark.net/singleThread.html

It shows that if I swap in my spare Opteron 6328s, I should double my single-thread performance. As my motherboard only supports Opteron 6000-series CPUs at most, I'm thinking this is the best I can do. I'll see how it affects the parity check speed; otherwise I may try swapping in a different motherboard.

 

The bigger question is why server-level software only uses a single thread to do parity! I would have figured that with as many cores as I have, I wouldn't have had any processor bottlenecks. This seems like a HUGE holdover from the older Unraid days... is there any mention of Unraid ever migrating away from this?
