TheGood Posted October 17, 2016

After having the beta for a while I decided to do a clean install of 6.2.1. I formatted all my drives to BTRFS and started moving my media back. So far so good: I was seeing the expected behavior on all my disks (checked them individually just to be sure), capping the LAN speed for the first 8-10GB (RAM caching?) and after that hovering at 60-65MB/s. Those transfers were achieved with reconstruct write and no cache on the shares.

At some point during the move back, I believe when my main media share filled one 3TB drive to 50% and spilled over to another drive, I started seeing speeds that are hard to explain. The file I copied in all transfers is 15GB:

Share to disk1, no cache, reconstruct write
Share to disk1, no cache, read-modify-write
Share to disk3, no cache, reconstruct write
Share to disk3, no cache, read-modify-write
Share to disk5, no cache, reconstruct write
Share to disk5, no cache, read-modify-write

Heavy drop on all the drives except disk3, which doubled its write speed compared to when it was below the 50% full mark. I'm guessing that has to do with doing the parity calculations alone, since its position on the parity disk is beyond the rest of the drives? In that case, why does it take that much of a hit with read-modify-write?

As for the rest, is it normal to see that much of an impact when recalculating parity with the array drives empty versus with data on them? And what's up with the fluctuation? It goes from 20-30MB/s to capping the Gbit LAN. I have been puzzled by these numbers for a couple of days; any help will be appreciated.

storm-syslog-20161018-0031.zip storm-diagnostics-20161018-0035.zip
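The two parity write modes being compared differ in which disks each write touches. A rough accounting of that difference, as a sketch (the function names and the per-write operation counts are my own illustration, not anything from the unRAID source):

```python
# Rough I/O accounting for the two unRAID parity write modes.
# Single-parity array; n_data is the number of data disks.

def rmw_ops():
    """Read-modify-write: read the old data block and old parity,
    then write the new data and new parity (4 ops on 2 disks)."""
    return {"reads": 2, "writes": 2, "disks_touched": 2}

def reconstruct_ops(n_data):
    """Reconstruct write (turbo write): read the same stripe from every
    OTHER data disk, recompute parity from scratch, write data + parity."""
    return {"reads": n_data - 1, "writes": 2, "disks_touched": n_data + 1}

print(rmw_ops())           # {'reads': 2, 'writes': 2, 'disks_touched': 2}
print(reconstruct_ops(5))  # {'reads': 4, 'writes': 2, 'disks_touched': 6}
```

This is why reconstruct write spins up the whole array but avoids the seek-heavy read-before-write on the target and parity disks, while read-modify-write keeps the other disks idle at the cost of roughly halving the effective speed of the two disks it does touch.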
TheGood Posted October 17, 2016 (Author)

Another note: with the Dolphin docker, when copying from cache to the disks via /mnt or /user (I don't know if FUSE plays any role here), I am getting almost the max speed of each disk as shown in the disk speed test.
JorgeB Posted October 18, 2016

I suspect you're hitting the DMI bottleneck. In your system it's only 1000MB/s theoretical max, usually ~750MB/s usable. Turbo write with 6 disks + cache + NIC doesn't leave much bandwidth to spare. To confirm, you'd need to get a PCIe controller and use it in the GPU slot; the other slots also share the DMI. It would also explain why local transfers are faster: you're using ~100MB/s less by not going through the NIC.
TheGood Posted October 18, 2016 (Author)

Good point, that would indeed explain the behavior I am seeing, but Intel rates the DMI between the ICH10 and the X58 at 2GB/s.
JorgeB Posted October 18, 2016

2000MB/s full duplex, i.e. 1000MB/s each way.
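A back-of-the-envelope budget shows how quickly turbo write eats into that one-way DMI limit. The per-stream figures below are assumptions taken from the speeds reported earlier in the thread (~110MB/s gigabit ingress, ~65MB/s sustained per disk), not measured values:

```python
# Rough one-way DMI budget during a turbo (reconstruct) write over Gbit LAN.
# Assumed figures: NIC ingress ~110 MB/s, ~65 MB/s per disk stream,
# 5 data disks + 1 parity disk, everything behind one DMI link.

NIC_IN = 110   # MB/s arriving from the LAN
STREAM = 65    # MB/s sustained per disk stream
N_DATA = 5     # data disks in the array

reads = (N_DATA - 1) * STREAM  # reconstruct write reads all other data disks
writes = 2 * STREAM            # target data disk + parity disk
total = NIC_IN + reads + writes

print(total)  # 500 MB/s of concurrent traffic on a ~750 MB/s usable link
```

Under these assumptions the array alone is already well past half the usable DMI bandwidth, so any extra activity (cache disk, bursts above 65MB/s) plausibly produces the fluctuation described above.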
TheGood Posted October 18, 2016 (Author)

Didn't know that. I will do some further testing with a PCIe NIC to see if there is a change, since the onboard NIC is shared with the SATA controller, and post back. Thanks a lot for the input!