Areca ARC-1280ML 24 Port vs SAS2008's...Upgrade?


bsim

Recommended Posts

I currently have an Areca ARC-1280ML 24 Port with 2GB ECC onboard (Intel IOP341 processor) running only JBOD (got 2 of them for REALLY cheap). 23 of the 24 ports are populated with spinners (mostly 7200 RPM, some 5400), and I'm trying to decide whether replacing my controller card with 2 or 3 separate controller cards (most likely SAS2008's) for the 24 drives would increase my read speeds.
Currently, my parity checks (all XFS drives) take about 1 day 12 hours at about 38.9 MB/s. From what I've researched, with spinners, the SATA2 interface of the Areca shouldn't be a limiting issue. The best transfer-rate figure I can find in the card's documentation is 885 MB/s sustained read in RAID 0. The card has 6 SAS 8087 ports, but the documentation doesn't really tell me what the transfer rate is for each SAS port (beyond the overall 885 MB/s generic figure).

Do I have a bottleneck with my controller card?

Will splitting the drives over multiple SAS2008's increase my read speeds/lower my parity check times?

Link to comment
2 hours ago, bsim said:

Do I have a bottleneck with my controller card?

Yes, not because it's SATA2 but mostly because you have 24 disks on a PCIe 1.0 x8 controller.
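
Quick math to illustrate (theoretical numbers, before the ~25% PCIe overhead; the per-disk figure is just the bus budget split evenly):

# PCIe 1.0 is ~250 MB/s per lane, so an x8 card tops out around 2000 MB/s
echo $((2000 / 24))    # ~83 MB/s per disk, and ~25% less after overhead

So even in the best case, each of the 24 disks gets well under what a modern spinner can stream.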

 

You can check this thread for some performance numbers on the LSI controllers; also check the SAS expander tests, since one HBA plus one expander would also be a good option:

 

Edited by johnnie.black
Link to comment

Thank you for the help! It got me digging, and now I really understand.

Is there anything that doesn't look right below?

 

------------------------------------------------------------------------------------

 

Thinking out loud below and working through my questions for others' reference...

 

It looks like the PCIe limitation of the card puts my drives' combined demand somewhat above the available bus rate. So it sounds like just offloading half of the drives onto a second Areca 1280ML (I had one as a backup in storage) would eliminate any loss for my collection of spinners.

 

------------------------------------------------------------------------------------

My motherboard is a Supermicro H8DGi (https://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8DGi.cfm) (full virtualization, 44GB RAM, 32 cores), and it has 3 PCIe 2.0 x16 slots and one PCIe 2.0 x8 slot that I can use.

 

I don't really have a problem using 3 of my 4 available slots for controllers, or buying up to PCIe 3.0 cards even though I only have PCIe 2.0 slots right now.

I considered upgrading to an Adaptec 71605 (16 drives) plus a Dell PERC H200 (8 drives) rather than just running 3 Dell PERC H200's (8 drives x 3) or 3 SAS 9207-8i's (8 drives x 3).

 

The bigger question is: if I do save myself a slot and use the 16-drive Adaptec, will the 16 drives cause another bottleneck?

How much bus bandwidth does each spinner typically use?

 

How much room does the bus bandwidth math leave for adding SSD's in the future if I just go to 2x Adaptec 71605's ((16+8)+8)?

 

How much bus bandwidth does an SSD typically use?

 

------------------------------------------------------------------------------------

 

From what I've read (assuming theoretical numbers):

 

7200 RPM desktop (non-Black/Red) spinner ~ 150 MB/s (a bit high, I think, for parity-type reads)

SSD ~ 500 MB/s

 

PCIe 1.0 x8 ~ 2,000 MB/s (4 SSD's OR 13 x 7200s)

PCIe 2.0 x8 ~ 4,000 MB/s (8 SSD's OR 26 x 7200s)

PCIe 3.0 x8 ~ 8,000 MB/s (16 SSD's OR 53 x 7200's)

PCIe 4.0 x8 ~ 16,000 MB/s (32 SSD's OR 106 x 7200's)

In my case with an Areca 1280ML, I am currently running:

24 x 7200 spinners ~ 24x150=3,600 MB/s (the Areca's ~2,000 MB/s theoretical / ~1,500 MB/s usable is less than half that)

 

Using 12 spinners on each of 2 separate Areca 1280ML's would only need 1,800 MB/s per card (which may still be close to maxed out with REAL reads)

 

Assuming actual bandwidth is 75% of theoretical... this may make it close!
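
A quick way to sanity-check the budget math above (a rough sketch; theoretical per-lane rates with the ~75% usable factor applied, and 150 MB/s assumed per spinner):

# per-card bandwidth budget: theoretical, usable (~75%), and spinner count
for bus in "PCIe1_x8 2000" "PCIe2_x8 4000" "PCIe3_x8 8000"; do
  set -- $bus
  usable=$(( $2 * 75 / 100 ))
  echo "$1: $2 MB/s theoretical, ~$usable MB/s usable, ~$(( usable / 150 )) spinners"
done

By that math, one PCIe 1.0 x8 card comfortably feeds only ~10 spinners at full speed, which lines up with splitting 24 drives across two or three cards.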

 

------------------------------------------------------------------------------------

Reason the SATA interface isn't an issue for spinners:

SATA 2 = 300 MB/s

SATA 3 = 600 MB/s

 

 

Edited by bsim
Oops on some figures
Link to comment
1 hour ago, bsim said:

7200 Desktop spinner ~ 100 MB/second

The fastest current high-capacity disks max out at around 275MB/s; smaller-capacity 7200rpm disks at 200-225MB/s.

 

1 hour ago, bsim said:

SSD ~ 400 MB/second

Good SSDs max out at around 550MB/s.

 

Also don't forget PCIe overhead: as shown in the tests it averages around 25%, so only around 75% of the max theoretical bandwidth can be used.

 

2 hours ago, bsim said:

PCIe 3.0 x8 ~ 16,000 MB/s

This one is wrong, theoretical max is 8000MB/s, usable around 6000MB/s.

 

Link to comment

Thanks for helping out.

 

I would agree that SSD's do tend to come in the 500 MB/s range (not sure why I included SSD's anyway).

 

I did goof on the PCIe 3.0 x8 max; 16,000 MB/s is PCIe 4.0 (not sure why I included it to begin with)!

 

Regarding spinners, I have never seen a common consumer desktop drive get beyond the 170/180 MB/s range, even for the higher-end Black and Red versions. Here are several CrystalDiskMark results for typical 4TB drives. Normal drives are MUCH lower. I've seen maximums or burst rates advertised as huge, worthless numbers before.

 

I think that parity checks are closer to 512K reads/writes than to pure sequential transfers.
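
(If I wanted to verify that against a single drive, something like this fio run should approximate parity-style 512K sequential reads; sdX is a placeholder and the numbers are untested:)

# read-only 512K sequential pass over one raw drive (no writes issued)
fio --name=parity-sim --filename=/dev/sdX --readonly --rw=read \
    --bs=512k --direct=1 --ioengine=libaio --runtime=30 --time_based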

 

[Attached: CrystalDiskMark screenshots: overall results, 4K results, Seagate, WD, WD Red 4TB, and WD Black 4TB]

Link to comment

Those "hgst hdn721010ale604"'s are NAS rated drives...should be comparable to the Ironwolf models that are slightly cheaper. I noticed all the rest on the list are only 150MB/s. It would be awesome if the higher speeds would come down in price, but I can get 15TB (5TB drives at 99$ a piece) to get me to that price with still money left over.

Link to comment
  • 4 weeks later...

I'm thinking the card upgrade for my next stage will be moot... here is what I've found... I expected more from spreading the drives across separate controllers.

 

I took 12 of the 24 spinning drives off of the Areca 1280ML 2GB (PCIe 1.0 x8) and put them on the backup Areca 1280ML 2GB card... I had to see if there was any movement in the parity check times/transfer rates. The transfer of the drives to the second card worked flawlessly/automatically, to my surprise. I had a screenshot of the array just in case I had to remap things!

 

Next step of research...

I have two new LSI IT mode cards that will run in two of my true PCIe2 x16 slots...

LSI SAS 9201-16i 6Gbps PCIe2 x8

LSI SAS 9207-8i PCIe3 x8

 

The nice part of this next migration is that it will allow me to actually pull SMART data (dashboard temps, preclear SMART reports...) from the array (Unraid's support for Areca is poor and has hard-coded issues that have never been/will never be addressed).
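
(With the LSI cards in IT mode the disks show up as plain SCSI devices, so standard smartctl calls should just work; sdX is a placeholder:)

# full SMART report from a drive behind the LSI HBA
smartctl -a /dev/sdX
# quick pass/fail health check
smartctl -H /dev/sdX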

 

 

Unraid Parity Check Diff Between 1 and 2 Areca PCIe1 x8 controllers (marked).png

Link to comment

Final Verdict:

24 spinning drives do not max out one PCIe 1.0 x8 Areca enough to slow down a parity check.

 

I did run a speed test on all of my drives before switching from the dual Arecas to the LSI controllers, just to eliminate any possible variables. The results were consistent between the two setups.

 

My next step will be to slowly phase out the slower drives (it is interesting to see how a drive's speed correlates directly with its size), but that will happen only as I need additional space.

 

 

Screenshot_20190320-194818_Gallery.jpg

OVERALL DISK SPEED TEST.png

Link to comment

Have you tried changing the md_* tunables in the disk settings? That made a huge improvement for me; I too was stuck at 40MB/sec regardless of what cards I used, until I increased those settings.

 

Here's the thread with more details.
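
For reference, they're under Settings -> Disk Settings and (if I remember right) end up in /boot/config/disk.cfg; the values below are just examples of the kind of increase people try, not recommendations:

# example md_* entries in /boot/config/disk.cfg (illustrative values only)
md_num_stripes="4096"
md_sync_window="2048"
md_sync_thresh="2000"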

 

Link to comment

I've considered running the tunables tester, but it hasn't been updated in a long while and doesn't even test the same tunables available in the most recent versions of Unraid. It would be great if it were updated, however.

 

What was the difference between the 40 you were getting and what you got afterwards?

 

Did you run the tunables script? What Unraid version did you run it on at the time?

Link to comment

Arecas are out of the equation completely at this point...now 2 LSI controllers (9207-8i (SAS2308) and 9201-16i (SAS2116))

(with a SAS2008 running my SSD cache drives and some extra odd drives)

 

I attempted those tunables and am still not breaking past the 40s.

 

I have dual AMD Opteron 6272 @ 2.1 GHz processors (32 cores total) and 44GB of RAM (Supermicro H8DGi motherboard)

 

The only other item of interest in the configuration might be the backplane I use in my Supermicro case... but I doubt it's a bottleneck, since it supports my large drives without any issues. Could it cause one?

 

I attached load statistics while not doing a parity check...

System Stats.png

 

...and load statistics while doing a parity check...

 

 

System Stats on Parity.png

Edited by bsim
Link to comment
3 minutes ago, bsim said:

I have dual AMD Opteron 6272

That's part of the problem, and likely the main part: the parity check is single-threaded, and that CPU has a very low single-thread rating. To see significant speed improvements with the number of disks you have, you'd need a faster CPU.
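
You can see this for yourself during a check: one thread sits near 100% of a single core while the rest idle (the exact thread name varies by Unraid version):

# per-thread view while the parity check runs; look for one thread pegging a core
top -H
# or a one-shot list of the busiest threads
ps -eLo pid,comm,pcpu --sort=-pcpu | head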

Link to comment

Would it really be killing my speed that drastically?

 

If I were to swap out the processors for, say, Opteron 6386 SE's (almost double the single-thread rating, ~1200 vs the 6272's ~600, and the highest Opteron 6000-series supported), what kind of improvement would I see?

 

Does the parity check always run on only a single thread? Is it possible to change it to multithreaded?

 

Would my Supermicro backplane be a possible culprit?

Link to comment
1 hour ago, bsim said:

Would it really be killing my speed that drastically? 

Yep, IMHO it's the most likely reason.

 

1 hour ago, bsim said:

If I were to swap out the processors for, say, Opteron 6386 SE's (almost double the single-thread rating, ~1200 vs the 6272's ~600, and the highest Opteron 6000-series supported), what kind of improvement would I see?

Difficult to say for sure, but I would expect a noticeable improvement, though still far from full speed.

 

1 hour ago, bsim said:

Would my Supermicro backplane be a possible culprit?

What model is the backplane? From your description I assume it's a direct-connection model without an expander; if that's the case, no bottlenecks there.

Link to comment

From here: https://www.cpubenchmark.net/singleThread.html

It shows that if I swap in my spare Opteron 6328's, I should double my single-thread performance... As my motherboard only supports Opteron 6000-series CPUs at most, I'm thinking this is the best I can do... I'll see how it affects the parity check speed; otherwise I may try swapping in a different motherboard.

 

The bigger question is why server-level software only uses a single thread to do parity! I would have figured that with as many cores as I have, I wouldn't have any processor bottlenecks... This seems like a HUGE holdover from the older Unraid days... is there any mention of Unraid ever migrating away from this?

Link to comment

As an update, I've removed/replaced the 1TB drives in the array, and my parity speeds went up to the 50s... still low, but better... I plan on migrating out the 2TB drives to see if it improves things further.

 

I did confirm that my Supermicro backplane is simply a pass-through.

 

While I have the different processors on hand, I plan on swapping in dual AMD Opteron 6328's (OS6328WKT8GHK)... they roughly double my single-thread rating (higher clocks, but fewer cores)... a step just to determine whether the processors are really holding me back (vs. the motherboard itself).

 

After that, I plan on swapping in a new motherboard entirely, a Supermicro X9DR3-LN4F+ with dual Intel E5-2637 V2's. I gain my built-in KVM again, and gain 4 PCIe 3.0 x16 slots. I can't find the old motherboard's (Supermicro H8DGi) bus speed/lanes anywhere, but it was PCIe 2.0; the new motherboard will have a bus speed of up to 8 GT/s and PCIe 3.0. That gives me a great upgrade path to an SSD rack in the future.

Link to comment

Added 2 5TB drives, removed 3 1TB drives... will remove 3 2TB drives... plan on adding another 5TB drive... so yes, 3 slow drives will be removed overall. I was primarily thinking of the older drives' speeds dragging down my averages... the replacements will bring up my average direct drive speeds by 50-75MB/s... the question is whether this translates into faster parity speeds... I will find out!

Edited by bsim
Link to comment

So far (running a parity check right now) the slower speeds appear to have nothing to do with the single-thread rating of the processors... I'm now running processors with double the single-thread rating of the originals (dual Opteron 6328's in place of the dual Opteron 6272's)... and there's almost no change at all. I saw a bigger jump (~15MB/s) from migrating 3 1TB drives onto one 5TB drive and pulling the 1TB drives out of the array. Still pulling only mid-50s from a parity check.

 

I'm still leaning towards my issue being a limitation of the motherboard OR some sort of interaction between the motherboard and Unraid (drivers). Is there a way to tell which driver is being used for the drives through Unraid?

Is there a different driver for AHCI vs. the SAS controllers, or are they all rolled into one?

Can I tell which AHCI driver is being used?

 

---------------------------------------------------------------------------------------

UPDATE: I may have answered my own question with more research...

 

dmesg | grep -i --color ahci
ahci 0000:00:11.0: AHCI 0001.0100 32 slots 6 ports 3 Gbps 0x3f impl SATA mode

 

grep -i SATA /var/log/syslog | grep --color -i 'link up'
kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)

 

Are these only referencing the odd motherboard-connected unassigned device I have, or is this being used for the entire array?
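
One way I should be able to check which driver each disk is actually using (untested sketch; sdb is just an example device):

# show which kernel driver has claimed each storage controller
lspci -k | grep -A 3 -i 'sata\|sas\|raid'
# the full sysfs path for a disk shows which controller it hangs off
readlink -f /sys/block/sdb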

 

 

 

Edited by bsim
Link to comment
