Single-disk performance impact on the array


Hey All,

 

Here's an interesting look at how a single disk's performance affects the entire array.  I ran a parity check on my 6-disk array yesterday, and cacti logged these graphs:

 

[Attached image: cacti graphs of per-disk array performance during the parity check]

 

Disks sdc, sdd and sdh are 6 TB WD Reds.

Disks sde and sdf are 2 TB Seagate somethings.

Disk sdg is a 1 TB WD Blue.

 

The "beginning" of a disk performs much better than the "end" of a disk, due to how the data is physically organized on the disk platters.

 

As the parity check progressed across the 1 TB disk, its declining performance slowed the entire array, until the scan reached the end of the 1 TB disk.  Then performance jumped back up, until the scan started slowing down again toward the end of the 2 TB disks.  Then it jumped back up again while it read through the remaining space on the 6 TB disks.
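That staircase shape can be reproduced with a toy model: assume each disk's sequential speed falls roughly linearly from its outer tracks to its inner tracks, and that the check runs at the speed of the slowest disk still being read at the current position. The speed numbers below are assumptions for illustration, not values taken from the graphs.

```python
def disk_speed(pos_tb, size_tb, outer=180.0, inner=90.0):
    """Approximate MB/s of one disk at absolute position pos_tb.

    Speed falls linearly from `outer` (first sector) to `inner`
    (last sector); returns None once the disk is fully scanned.
    These speeds are made-up round numbers, not measurements."""
    if pos_tb >= size_tb:
        return None
    frac = pos_tb / size_tb
    return outer - (outer - inner) * frac

def check_speed(pos_tb, disk_sizes_tb):
    """Parity-check speed: the array is locked to the slowest
    disk that still has data at this position."""
    speeds = [s for s in (disk_speed(pos_tb, sz) for sz in disk_sizes_tb)
              if s is not None]
    return min(speeds)

disks = [6, 6, 6, 2, 2, 1]  # TB, mirroring the array in the post

for pos in (0.5, 0.99, 1.01, 1.99, 2.01, 5.9):
    print(f"at {pos:5.2f} TB: {check_speed(pos, disks):6.1f} MB/s")
```

Running this shows the speed sagging as the scan nears 1 TB, jumping up once the 1 TB disk drops out, sagging again toward 2 TB, and jumping once more when only the 6 TB disks remain.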

 

I've known for years that in unRAID, a slow disk would impact performance of the entire array.  And having data that not only confirms the point, but also shows that the array is literally LOCKED to the performance of the slowest disk, is really eye-opening.

 

EDIT: Sorry, this was supposed to be posted in the lounge, please move if necessary.

1 hour ago, koyaanisqatsi said:

I've known for years that in unRAID, a slow disk would impact performance of the entire array.  And having data that not only confirms the point, but also shows that the array is literally LOCKED to the performance of the slowest disk, is really eye-opening.

 

Of course, this is just for parity checks, which aren't the "normal" usage.

 

When reading a file, only the disk the file is on is involved.

 

When writing a file (non-turbo), only the disk being written and the parity disk are involved.

 

When writing a file (turbo), the whole array is involved, but turbo recomputes parity from all the data disks instead of first reading back the old data and old parity, so each write needs fewer read-then-write operations.
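The difference between the two write modes can be sketched with XOR parity. This is a minimal illustration, not unRAID's actual code; the byte values are made up.

```python
def rmw_parity(old_data, old_parity, new_data):
    """Non-turbo (read/modify/write): read only the target disk and
    the parity disk, XOR out the old data and XOR in the new."""
    return old_parity ^ old_data ^ new_data

def reconstruct_parity(other_disks_data, new_data):
    """Turbo (reconstruct write): read every *other* data disk and
    recompute parity from scratch; no read of old data or old
    parity is needed before writing."""
    p = new_data
    for d in other_disks_data:
        p ^= d
    return p

# One byte per disk, four data disks:
data = [0x11, 0x22, 0x33, 0x44]
parity = data[0] ^ data[1] ^ data[2] ^ data[3]

new = 0x99  # overwrite disk 2's byte
via_rmw = rmw_parity(data[2], parity, new)
via_reconstruct = reconstruct_parity([data[0], data[1], data[3]], new)
assert via_rmw == via_reconstruct  # both leave parity consistent
```

Both paths produce the same parity byte; the trade-off is which disks must spin up and whether a read has to happen before the write.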

What the graphs show is that most of the time, the machine was managing 100 MB/s or more.  It was actually the larger disks that ended up with the lowest transfer rates near the end of the check.

 

But for most of your storage capacity, you can almost fill the bandwidth of a gbit network interface (about 110 MB/s) when reading a single stream from a disk.
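The ~110 MB/s figure follows from a back-of-the-envelope calculation: raw gigabit is 125 MB/s, and Ethernet/IP/TCP framing eats a few percent of that. The 94% efficiency used below is a rough assumption, not a measured value.

```python
# Raw gigabit on the wire, converted to megabytes per second:
raw_mb_s = 1e9 / 8 / 1e6           # 125.0 MB/s

# Knock off a few percent for frame headers, preamble, inter-frame
# gap, and TCP/IP overhead (94% is an assumed ballpark figure):
usable = raw_mb_s * 0.94           # ~117 MB/s theoretical ceiling

print(f"{usable:.0f} MB/s usable")  # real-world transfers land near 110
```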
