Outcasst

[6.7.2] Very poor array read speeds


Hi, I've updated to the latest stable version; however, I'm seeing very poor read speeds from the array.

 

If I transfer a file (70GB movie file) using MC from the Array to a share that's set to cache-only, I will see no more than 13-14 MB/s read speed from the array. However, transferring the same file over the network to my Windows PC yields an average of 100+ MB/s.

 

The array is running off an LSI 9211-8i controller and the cache drive is a 970 Evo 1TB.

 

I had the thought that it could be a PCI-E bandwidth issue, but everything seems to be running at maximum speed.
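A quick back-of-the-envelope check supports ruling out PCIe bandwidth. Assuming (these are assumptions, not measured values) the 9211-8i sits on a PCIe 2.0 x8 link (5 GT/s per lane, 8b/10b encoding, so ~500 MB/s usable per lane) and roughly 200 MB/s per spinning drive across 8 drives:

```shell
# Rough sanity check: is the LSI 9211-8i's PCIe link a plausible bottleneck?
# Assumed: PCIe 2.0 x8, ~500 MB/s usable per lane after 8b/10b encoding,
# 8 drives at ~200 MB/s each. All figures in MB/s, integer math.
LANES=8
MB_PER_LANE=500                       # usable bandwidth per PCIe 2.0 lane
LINK=$((LANES * MB_PER_LANE))         # total link bandwidth

DRIVES=8
PER_DRIVE=200
DEMAND=$((DRIVES * PER_DRIVE))        # worst-case aggregate drive throughput

echo "link: ${LINK} MB/s, worst-case demand: ${DEMAND} MB/s"
```

With ~4000 MB/s of link bandwidth against at most ~1600 MB/s of aggregate drive throughput, a 13-14 MB/s transfer is nowhere near a PCIe limit, which matches the links reporting full speed.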

 

Any suggestions or insights would be fantastic. Thanks.

Edited by Outcasst


Is it really going to the cache SSD? Or is there other read/write activity on the array at the same time?

 

 

Edited by Benson

1 hour ago, Benson said:

Is it really going to the cache SSD? Or is there other read/write activity on the array at the same time?

 

 

Yes, it's definitely going onto the cache. Also, while the transfer is in progress it eats about 20% of the 4930K, which is a lot for a single file transfer, right?

 

Other than that, the array is completely idle. No other read/writes are happening.

 

 

Edit: Now it's happening over the network as well. A burst of 112 MB/s, then it drops to an average of 7-13 MB/s.

 

I've tried moving files from different disks within the array, same result.

 

I have also benchmarked the drives using DiskSpeed, and they all show max read speeds of around 200 MB/s, so I don't think it's a hardware problem.
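One way to narrow this down further is to time a raw sequential read with dd from a single disk mount versus through the user share, since they take different code paths. The paths below are hypothetical placeholders; the block creates a small dummy file so the commands run anywhere, and on a real Unraid box you would substitute a large existing file under /mnt/diskN and its /mnt/user counterpart:

```shell
# Create a small dummy file so this sketch is self-contained.
# On Unraid, skip this and use a real large file instead.
FILE=/tmp/speedtest.bin
dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null

# Sequential read; the last line dd prints reports the throughput.
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -n 1
```

Comparing `dd if=/mnt/disk1/<path>` against `dd if=/mnt/user/<share>/<path>` separates raw disk speed from the user-share (FUSE) layer, which would show whether the slowdown is below or above the filesystem.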

Edited by Outcasst


If moves from the array to the cache aren't always capped at 1X MB/s, then this is a different, commonly reported issue.

 

Performing a TRIM on the SSD may help with write performance to the SSD.
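As a sketch of that suggestion (the mount point and schedule here are assumptions; Unraid users typically use the Dynamix SSD TRIM plugin rather than hand-editing cron), a one-off run or a nightly cron entry would look like:

```shell
# One-off manual TRIM of the cache mount; fstrim reports how many
# bytes were discarded. /mnt/cache is the assumed cache mount point.
fstrim -v /mnt/cache

# Hypothetical cron entry to run the same TRIM nightly at 03:00:
# 0 3 * * * /sbin/fstrim -v /mnt/cache
```

A single manual run is enough to test whether TRIM changes the write speeds seen here; scheduling it only matters if it does.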

 

Some people say that in version 6.7 the reported CPU usage includes I/O wait time, so it doesn't mean the CPU itself is busy.

 

Edited by Benson


I am also seeing horrendous I/O wait times on my SSD cache disks with 6.7.2, to the point that top shows wa at 89% and Unraid becomes unusable. The closest I have come to locating the issue (given the system's near-unusable state) is that mdrecoveryd has a state of "D" and 100% utilization (via top). I have downgraded back to 6.7.0, and so far the issue has not returned.
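For anyone trying to reproduce this diagnosis, here is a minimal way to spot D-state processes and read the system-wide iowait counter. These are standard Linux tools, nothing Unraid-specific:

```shell
# List processes in uninterruptible sleep ("D" state); these are what
# drive the high "wa" (I/O wait) percentage shown in top. Prints
# nothing if no process is currently blocked on I/O.
ps -eo pid,stat,comm | awk '$2 ~ /^D/'

# Cumulative iowait ticks since boot: the 5th value on the "cpu" line
# of /proc/stat (fields after "cpu": user nice system idle iowait ...).
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```

Sampling the iowait counter twice a few seconds apart shows whether iowait is still climbing; a process pinned in "D" state across samples (like mdrecoveryd here) is the usual suspect.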


So I can confirm this is an issue with 6.7.2.

 

As soon as I downgrade back to 6.6.7, the issue goes away and reads are at full speed. Back to 6.7.2 and the reads are slow again.

 

This is 100% repeatable.

7 minutes ago, Outcasst said:

So I can confirm this is an issue with 6.7.2.

 

As soon as I downgrade back to 6.6.7, the issue goes away and reads are at full speed. Back to 6.7.2 and the reads are slow again.

 

This is 100% repeatable.

You could try adjusting the tunable settings to see whether that fixes the problem.

 

 

