
Tuning NFS/SMB for 10/40GbE



I have two Unraid servers connected via direct-connect 40GbE NICs and I'm looking for advice on how best to tune NFS/SMB to get the fastest transfers possible between them.  The storage on each end is capable of 2.0GB/s in internal testing.  If I can even get half that I'd be happy, but my initial testing is barely breaking 200-250MB/s.  As you can see from the iperf3 testing below, connectivity is not the bottleneck.

 

The only thing I've tried changing is the MTU, from 1500 to 9000, on this direct connection, but it makes no difference.  I've been testing using rsync between the servers.  (A rough sketch of these steps is below.)
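For reference, a minimal sketch of how jumbo frames can be set and verified on a direct-connect link, plus a plain rsync-over-SSH invocation; the interface name, addresses, and paths are placeholders, not the actual setup, and note that SSH encryption alone can cap rsync throughput well below the link speed:

  # Set MTU 9000 on the direct-connect interface (run on both ends; eth4 is a placeholder)
  ip link set dev eth4 mtu 9000

  # Confirm jumbo frames actually pass end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers)
  ping -M do -s 8972 10.10.10.2

  # Example rsync over SSH with progress output (paths and IP are placeholders)
  rsync -a --progress /mnt/user/test/largefile root@10.10.10.2:/mnt/user/test/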

 

[Screenshot: iperf3 results between the two servers over the 40GbE link]
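A minimal sketch of the kind of iperf3 run used to verify raw link throughput; the IP address, stream count, and duration here are assumptions:

  # On the receiving server
  iperf3 -s

  # On the sending server, against the direct-link IP, with 4 parallel streams for 30 seconds
  iperf3 -c 10.10.10.2 -P 4 -t 30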

11 minutes ago, johnnie.black said:

That's odd, pv is usually much faster than rsync and closer to real device speed. If it's not working for you, you need to find another tool to test with; like I mentioned, rsync is not a good tool for benchmarking.
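A minimal sketch of the kind of pv-based test being described, assuming the remote share is already mounted locally (both paths are placeholders):

  # Stream a large file through pv to the mounted remote share for a live throughput readout
  pv /mnt/user/test/largefile > /mnt/remotes/server2/test/largefile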

 

Using cp it seems to be getting over 1GB/s, but it's hard to say for sure since there's no way to view real-time progress with cp.
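If live progress is wanted for a single-file copy, one alternative (an assumption, not what was used here) is dd with status=progress; paths are placeholders:

  dd if=/mnt/user/test/largefile of=/mnt/remotes/server2/test/largefile bs=1M status=progress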

 

EDIT:  I used nload to view current network usage on the NIC while doing a cp to an SMB mount and I get 18Gbps, so that is nice at least.
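For reference, a minimal nload invocation to watch live throughput on the 40GbE interface, shown in Gbit/s; the interface name is a placeholder:

  nload -u G eth4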

