nerbonne

SLOW OVERALL ARRAY SPEED


I'm hoping to get some advice on what to change. The overall throughput of my array is slow (about 430 MB/s max). I know it's not unRaid itself, I'm not implying that at all. This speed issue was manageable until I added a second parity drive; now when the cache drive writes back to the array, the array becomes so slow it's virtually unusable.

 

About my setup:

Desktop PC with a 7th gen i7 CPU.  

20 drives in the array, ranging from 1 TB to 4 TB: two are parity and one is a 480 GB SSD for cache, so 17 data drives.

The cache drive, one of the two parity drives, and two data drives are inside the tower, connected via SATA.

I have two Mediasonic 8-bay enclosures, both connected via eSATA to separate eSATA cards. The cards support port multiplication.

 

I'm guessing the eSATA port multiplier cards are the main culprit, but how do I prove this?
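One way to narrow it down (a rough sketch, not a built-in unRaid tool): time raw sequential reads from individual disks, first one at a time, then from several enclosure disks at once. The device names below are placeholders for your actual disks; run it as root with the array otherwise idle.

```python
#!/usr/bin/env python3
# Rough per-disk sequential-read benchmark (run as root, array idle).
# DEVICES is a placeholder list -- substitute your real disks.
import os
import time

DEVICES = ["/dev/sdb", "/dev/sdc"]  # e.g. one internal disk, one enclosure disk
CHUNK = 1 << 20                     # read in 1 MiB chunks
SAMPLE = 1 << 30                    # sample 1 GiB per disk

for dev in DEVICES:
    fd = os.open(dev, os.O_RDONLY)
    # Drop cached pages first so we time the disk, not RAM.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    done = 0
    start = time.monotonic()
    while done < SAMPLE:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        done += len(buf)
    elapsed = time.monotonic() - start
    os.close(fd)
    print(f"{dev}: {done / elapsed / 1e6:.0f} MB/s")
```

If each enclosure disk reads fine on its own but the combined speed flatlines near the eSATA link limit when you run several copies against different disks behind the same multiplier, you've proven the port multiplier is the bottleneck.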

 

Additionally, what is a normal overall array speed? 430 MB/s is what the entire array does during a parity rebuild, which means at the beginning, when the 1 TB drives are still rebuilding, the per-drive speed is around 20 MB/s.

 

Right now the cache drive is writing back to the array, and it's only writing at about 22 MB/s. This is with all apps disabled except Plex, to keep other I/O down.

 

7 hours ago, nerbonne said:

Additionally, what is a normal overall array speed? 430 MB/s is what the entire array does during a parity rebuild

With no bottlenecks you just multiply the maximum speed of the slowest disk by the number of disks; I have servers doing about 5 GB/s.
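To make that concrete, here's the same back-of-the-envelope math in Python; the 150 MB/s is only an assumed speed for the slowest spinner, not a measured figure:

```python
# Expected no-bottleneck parity-check speed: slowest disk x number of disks.
slowest_mb_s = 150   # assumed sequential speed of the slowest drive
disks = 20           # all array drives are read in parallel
print(f"expected: {slowest_mb_s * disks} MB/s")  # -> 3000 MB/s vs. the 430 observed
```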

 

7 hours ago, nerbonne said:

I have two Mediasonic 8-bay enclosures, both connected via eSATA to separate eSATA cards. The cards support port multiplication.

These won't be good for performance (or reliability), but they might not be the only reason for this:

7 hours ago, nerbonne said:

now when the cache drive writes back to the array, the array becomes so slow it's virtually unusable.

There could be other things going on as well, like a CPU bottleneck.


I would guess the problem with throughput is SATA bus contention. If you have both those enclosures filled with drives, that is conservatively as much as 2400 MB/s (150 MB/s × 16) trying to be pushed through two 600 MB/s eSATA connections. And that's not even taking into account the overhead of the two port multipliers switching between drives; I don't know how much latency that switching adds.
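For illustration, the same contention arithmetic for one enclosure, using the conservative figures above; real numbers will vary with the drives and the multiplier:

```python
# Bus-contention estimate for one 8-bay enclosure on a single eSATA link.
drives = 8
drive_mb_s = 150              # conservative per-drive sequential speed
link_mb_s = 600               # one 6 Gb/s eSATA link

demand = drives * drive_mb_s                     # 1200 MB/s of demand
ceiling = min(drive_mb_s, link_mb_s / drives)    # what each drive can actually get
print(f"demand: {demand} MB/s through a {link_mb_s} MB/s link")
print(f"per-drive ceiling: {ceiling:.0f} MB/s")  # ~75 MB/s before multiplier overhead
```

Port-multiplier switching overhead only drags the real figure further below that ~75 MB/s ceiling, which would go a long way toward explaining the ~20 MB/s per drive you're seeing.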

 

I have 9 drives hooked to two LSI 9211-8i cards. During parity checks or rebuilds I get a total throughput of about 1.3 GB/s.


The problem could also be a bus bottleneck. How many PCIe lanes do those eSATA cards use?
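For reference, rough usable bandwidth for a single PCIe lane by generation (approximate figures after encoding overhead); if those cards sit in x1 slots, the slot itself caps everything behind the card:

```python
# Approximate usable bandwidth of one PCIe lane, by generation.
lane_mb_s = {"PCIe 1.x": 250, "PCIe 2.0": 500, "PCIe 3.0": 985}
for gen, bw in lane_mb_s.items():
    print(f"{gen} x1: ~{bw} MB/s")
# A PCIe 2.0 x1 eSATA card tops out near 500 MB/s for the whole enclosure.
```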

