[6.2.1] Weird Transfer Speeds


TheGood


After having the beta for a while, I decided to do a clean install of 6.2.1. I formatted all my drives to BTRFS and started moving my media back.

So far so good: I'm seeing the expected behavior on all my disks (I checked them individually just to be sure). Transfers cap the LAN speed for the first 8-10GB (RAM caching?) and after that hover around 60-65MB/s. Those transfers were done with reconstruct write and no cache on the shares.

 

At some point during the move back, I believe when my main media share filled one 3TB drive to 50% and switched to another drive, I started seeing speeds that are hard to explain.

 

disk_speed.png

 

The file copied in all of the transfers below is 15GB.

 

Share to disk1, no cache, reconstruct write

disk1.png

 

Share to disk1, no cache, read-modify-write

disk1_read_mod.png

 

Share to disk3, no cache, reconstruct write

disk3.png

 

Share to disk3, no cache, read-modify-write

disk3_read_mod.png

 

Share to disk5, no cache, reconstruct write

disk5.png

 

Share to disk5, no cache, read-modify-write

disk5_read_mod.png

 

There's a heavy drop on all the drives except disk3, which doubled its write speed compared to when it was below the 50% full mark. I'm guessing that has to do with it doing the parity calculations alone, since its position on the parity disk is beyond the end of the rest of the drives? In that case, why does it take that much of a hit with read-modify-write?
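For reference, here is a rough per-stripe I/O sketch of the two write modes as I understand them (the function names and counts are my own illustration, not Unraid's actual code):

```python
# Rough per-stripe I/O model of the two parity-write modes.
# Illustrative sketch only, not Unraid's implementation.

def read_modify_write_io(n_data_disks):
    # Read the old data block and old parity block, then write the
    # new data and recomputed parity: 2 reads + 2 writes, but only
    # 2 drives are touched, and each does a read followed by a write
    # of the same sector, costing extra platter rotation.
    return {"reads": 2, "writes": 2, "disks_busy": 2}

def reconstruct_write_io(n_data_disks):
    # Read the matching block from every other data disk, recompute
    # parity from scratch, then write new data + new parity: more
    # reads in total, but every drive streams sequentially in one
    # direction, so the target disk never has to seek back.
    return {"reads": n_data_disks - 1, "writes": 2,
            "disks_busy": n_data_disks + 1}

# With 5 data disks, reconstruct write keeps 6 drives streaming,
# while read-modify-write bounces 2 drives between reads and writes.
print(read_modify_write_io(5))  # {'reads': 2, 'writes': 2, 'disks_busy': 2}
print(reconstruct_write_io(5))  # {'reads': 4, 'writes': 2, 'disks_busy': 6}
```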

 

As far as the rest goes, is it normal to see that much of an impact when recalculating parity with the array drives empty versus with data on them? And what's up with the fluctuation? It goes from 20-30MB/s to capping the Gbit LAN.

 

I have been puzzled by those numbers for a couple of days; any help will be appreciated.

storm-syslog-20161018-0031.zip

storm-diagnostics-20161018-0035.zip


I suspect you're hitting the DMI bottleneck. In your system the DMI is only 1000MB/s theoretical max, usually ~750MB/s usable, and turbo write with 6 disks + cache + NIC doesn't leave much bandwidth. To confirm, you'd need to get a PCIe controller and use it in the GPU slot, since the other slots also share the DMI.

 

It would also explain why local transfers are faster: without the NIC you're using ~100MB/s less of that bandwidth.
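To put rough numbers on that (all figures here are my assumptions: ~750MB/s usable DMI, writes capped by a saturated gigabit link at ~110MB/s, 6 array disks active in turbo write):

```python
# Back-of-envelope DMI budget during a turbo (reconstruct) write.
# All numbers are illustrative assumptions, not measurements.

DMI_USABLE = 750   # MB/s, rough usable DMI bandwidth on this chipset
WRITE_RATE = 110   # MB/s, a saturated gigabit link
N_DISKS = 6        # array disks active in turbo write (data + parity)

# In reconstruct write every array disk moves data at the write rate:
# reads on the other data disks, writes on the target and parity disks.
array_traffic = N_DISKS * WRITE_RATE
nic_traffic = WRITE_RATE  # incoming LAN traffic also crosses the DMI

total = array_traffic + nic_traffic
print(f"{total} MB/s demanded of ~{DMI_USABLE} MB/s usable")  # 770 vs 750

# A local transfer drops the NIC's share, which fits the observation
# that local copies run ~100MB/s faster.
print(array_traffic <= DMI_USABLE)  # True: 660 MB/s fits
```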

