SLOW OVERALL ARRAY SPEED



I'm hoping to get some advice on what to change.  The overall throughput for my array is slow (about 430 MB/s max).  I know it's not unRAID, I'm not implying that at all.  This speed issue was manageable until I added a second parity drive; now, when the cache drive writes back to the array, the array becomes so slow it's virtually unusable.

 

About my setup:

Desktop PC with a 7th gen i7 CPU.  

20 drives in the array, ranging from 1 TB to 4 TB; two are parity and one is a 480 GB SSD for cache, so 17 data drives.

The cache drive, one of the two parity drives, and two data drives are inside the tower, connected via SATA.

I have two Mediasonic 8-bay enclosures, both connected via eSATA to separate eSATA cards.  The cards support port multipliers.

 

I'm guessing the eSATA port multiplier cards are the main culprit, but how do I prove this?

 

Additionally, what is a normal overall array speed?  430 MB/s is what the entire array does during a parity rebuild, which means at the beginning, while the 1 TB drives are still rebuilding, the per-drive speed is around 20 MB/s.

 

Right now the cache drive is writing back to the array, and it's only writing at about 22 MB/s.  This is with all apps disabled except Plex, to keep the other I/O down.
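If it helps with diagnosis, per-disk throughput can be watched while the mover runs.  A minimal sketch, assuming iostat is available (it's part of the sysstat package; on unRAID it may need to be added via a plugin like NerdPack):

```
# show per-device throughput in MB/s (-m) with extended stats (-x),
# refreshed every 5 seconds while the mover copies cache -> array
iostat -mx 5
```

The wMB/s and %util columns should make it clear whether one disk, or everything behind one eSATA link, is the choke point.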

 

7 hours ago, nerbonne said:

Additionally, what is a normal overall array speed?  430 MB/s is what the entire array does during a parity rebuild

With no bottlenecks, you just multiply the maximum speed of the slowest disk by the number of disks; I have some servers doing about 5 GB/s.
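To find what the slowest disk can actually do, you can time a raw sequential read on each one.  A rough sketch (run it with the array idle; the glob will also hit the SSD and the boot flash, so ignore those lines):

```
#!/bin/bash
# time a ~3 second buffered sequential read on each SATA device
# and report MB/s; the slowest result sets the array's ceiling
for d in /dev/sd?; do
  echo "== $d"
  hdparm -t "$d"
done
```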

 

7 hours ago, nerbonne said:

I have two Mediasonic 8-bay enclosures, both connected via eSATA to separate eSATA cards.  The cards support port multipliers.

These won't be good for performance (or reliability), but they might not be the only reason for this:

7 hours ago, nerbonne said:

now, when the cache drive writes back to the array, the array becomes so slow it's virtually unusable.

There could be other things going on as well, like a CPU bottleneck.


I would guess the problem with throughput is SATA bus contention. If you have both those cases filled with drives, that is conservatively as much as 2400 MB/s (150 MB/s × 16) trying to be pushed through two 600 MB/s eSATA connections. That's not even taking into account the overhead of the two port multipliers in the cases switching between drives; I don't know how much latency SATA port multipliers add when switching.
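One way to demonstrate the contention: read a single drive in one enclosure by itself, then read several drives in that enclosure at once.  If the per-drive rate collapses in the parallel run, the shared eSATA link / port multiplier is the bottleneck.  A rough sketch — the /dev/sdX names are placeholders for drives that sit in the same enclosure:

```
#!/bin/bash
# baseline: one drive alone (iflag=direct bypasses the page cache)
dd if=/dev/sdc of=/dev/null bs=1M count=2048 iflag=direct

# contention test: four drives behind the same port multiplier at once
for d in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
  dd if="$d" of=/dev/null bs=1M count=2048 iflag=direct &
done
wait
```

On a controller with a dedicated port per drive, the parallel numbers stay close to the solo number; behind a port multiplier, all four drives split the single ~600 MB/s link.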

 

I have 9 drives hooked to two LSI 9211-8i cards. During parity checks or rebuilds I get a total throughput of about 1.3 GB/s.

  • 5 weeks later...

Hi all, thanks for the responses.  Not sure how many PCIe lanes the eSATA cards have, but I'm at the point where I want to get a rackmount chassis, just not sure where to start.  Maybe you guys have some suggestions?  I'm shooting for a 24-bay system, but I'll settle for 20 bays, 3.5 inch.  All my current drives are 4 TB or smaller, but I'd like the backplane to support larger drives; 10 TB seems to be the most cost-effective right now.  The part I'm struggling with is finding the right backplane and RAID controller.  No matter how much I google this, I just keep coming up with really old info, and most backplanes being sold on eBay don't specify what size drives they support, so I'm guessing they top out at 2 TB?

