Outcasst, July 5, 2019:

Hi, I've updated to the latest stable version, but I'm seeing very poor read speeds from the array. If I transfer a 70GB movie file using MC from the array to a share set to cache-only, I see no more than 13-14 MB/s read speed from the array. Transferring the same file over the network to my Windows PC, however, averages 100+ MB/s. The array runs off an LSI 9211-8i controller and the cache drive is a 1TB 970 Evo. I thought it might be a PCI-E bandwidth issue, but everything appears to be running at full link speed. Any suggestions or insights would be appreciated. Thanks.
Vr2Io, July 5, 2019:

Is it really going to the cache SSD? Or is something else reading or writing to the array at the same time?
Outcasst, July 6, 2019:

Quoting Benson: "Is it really going to the cache SSD? Or is something else reading or writing to the array at the same time?"

Yes, it's definitely going to the cache. Also, while the transfer is in progress it uses about 20% of the 4930K, which is a lot for a single file transfer, right? Other than that, the array is completely idle; no other reads or writes are happening.

Edit: Now it's happening over the network too: a burst of 112 MB/s, then dropping to an average of 7-13 MB/s. I've tried moving files from different disks within the array, with the same result. I've also benchmarked the drives using DiskSpeed and they all show maximum read speeds around 200 MB/s, so I don't think it's a hardware problem.
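The DiskSpeed numbers can be cross-checked from the console with a quick dd read test. A minimal sketch; the scratch-file path is illustrative only, so point TESTFILE at a file on the disk you actually want to measure:

```shell
# Write a 256 MiB scratch file, then time reading it back with dd.
# To measure the disk rather than the page cache, drop caches first
# (as root): sync; echo 3 > /proc/sys/vm/drop_caches
TESTFILE=/tmp/readtest.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>/dev/null
dd if="$TESTFILE" of=/dev/null bs=1M   # final stderr line reports MB/s
rm -f "$TESTFILE"
```

If this per-disk figure is healthy but array-to-cache copies still crawl, the bottleneck is above the hardware, which is consistent with the version-dependent behaviour reported later in the thread.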
Vr2Io, July 6, 2019:

If moves from the array to the cache aren't always stuck at 1x MB/s, then this is a different common topic. Running TRIM on the SSD may help its write performance. Some say that in 6.7 the reported CPU usage includes I/O idle wait time, so it doesn't reflect CPU usage alone.
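For the TRIM suggestion above, a pass can be run by hand or scheduled. A sketch assuming the cache filesystem is mounted at /mnt/cache (Unraid's default; adjust the path if yours differs, and note a TRIM plugin can do the same job):

```shell
# One-off TRIM of the cache filesystem (run as root):
#   fstrim -v /mnt/cache
# Or as a weekly root crontab entry (Sundays at 03:00):
0 3 * * 0  /sbin/fstrim -v /mnt/cache
```

fstrim with -v prints how many bytes were discarded, so a first run on a long-untrimmed SSD will typically report a large figure.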
neurocis, July 10, 2019:

I'm also seeing horrendous I/O wait times on my SSD cache disks with 6.7.2, to the point that top shows wa at 89% and Unraid becomes unusable. The closest I've come to locating the issue (given the system's near-unusable state) is that mdrecoveryd is in the "D" state at 100% utilization (via top). I've downgraded back to 6.7.0 and so far the issue has not returned.
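The stuck mdrecoveryd can be confirmed without watching top: processes in uninterruptible sleep (state "D") are what drive the wa figure up. A minimal sketch using standard procps tools:

```shell
# Print the header plus any processes currently in uninterruptible
# sleep ("D" state); on an affected box, mdrecoveryd should appear here.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

An empty result (header only) means nothing is currently blocked on I/O; rerun it while a slow transfer is in progress to catch the offender.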
Outcasst, July 16, 2019:

So I can confirm this is an issue with 6.7.2. As soon as I downgrade to 6.6.7 the issue goes away and reads are at full speed; back on 6.7.2, the reads are slow again. This is 100% repeatable.
Vr2Io, July 16, 2019:

Quoting Outcasst: "So I can confirm this is an issue with 6.7.2."

You could try adjusting the disk tunable settings to see whether that fixes the problem.