IamSpartacus Posted April 24, 2020

I have two Unraid servers connected via direct-connect 40GbE NICs, and I'm looking for advice on how best to tune NFS/SMB to get the fastest transfers possible between them. The storage on each end is capable of 2.0GB/s in internal testing. If I can even get half that I'd be happy, but my initial testing is barely breaking 200-250MB/s. As you can see from the iperf3 testing below, connectivity is not the bottleneck. The only thing I've tried changing is the MTU from 1500 to 9000 on this direct connection, but it makes no difference. I've been testing using rsync between the servers.
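Roughly how I checked the link, for anyone wanting to reproduce it (the peer IP and interface name here are just placeholders for my direct-connect setup):

# on server B (receiver)
iperf3 -s

# on server A (sender); 10.10.10.2 stands in for server B's 40GbE address
iperf3 -c 10.10.10.2 -P 4

# jumbo frames test on the direct link (interface name will differ; check with: ip link)
ip link set eth2 mtu 9000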
JorgeB Posted April 24, 2020

41 minutes ago, IamSpartacus said: my initial testing is barely breaking 200-250MB/s

User or disk share? If user, try a disk share.
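In practice that just means pointing the transfer at /mnt/cache (or /mnt/diskX) instead of /mnt/user. A rough sketch; the share name and destination IP are placeholders:

# user share -- goes through Unraid's user-share (FUSE) layer
rsync -a /mnt/user/media/file.mkv root@10.10.10.2:/mnt/user/media/

# disk share -- reads straight from the pool/disk, bypassing that layer
rsync -a /mnt/cache/media/file.mkv root@10.10.10.2:/mnt/cache/media/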
IamSpartacus Posted April 24, 2020

6 minutes ago, johnnie.black said: User or disk share? If user, try a disk share.
JorgeB Posted April 24, 2020

rsync isn't built for speed, try pv.
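The idea with pv is to stream the file and have it report throughput as it goes. A minimal sketch, assuming the other server's share is already mounted locally (mount point and filename are placeholders):

# copy a large file to the remote mount while pv reports the live transfer rate
pv /mnt/cache/media/bigfile.mkv > /mnt/remotes/server2/media/bigfile.mkv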
IamSpartacus Posted April 24, 2020

5 minutes ago, johnnie.black said: rsync isn't built for speed, try pv.

This isn't looking promising:
JorgeB Posted April 24, 2020

Is that an SMB mount? Try to another local device first.
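That is, take the network out of the picture and see what pv reports for a purely local copy, something like this (paths are placeholders for two different local devices):

# local-only test: source and destination are both on the same server
pv /mnt/cache/media/bigfile.mkv > /mnt/disk1/media/bigfile.mkv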
IamSpartacus Posted April 24, 2020

3 minutes ago, johnnie.black said: Is that an SMB mount? Try to another local device first.

Yes, SMB mount. I get the same slowness going from one directory on cache to another with pv. This is a Samsung 960 Pro 1TB NVMe.
JorgeB Posted April 24, 2020

That's odd, pv is usually much faster than rsync and closer to the real device speed. If it's not working for you, you'll need to find another tool to test with; like I mentioned, rsync is not a good tool for benchmarking.
IamSpartacus Posted April 24, 2020

11 minutes ago, johnnie.black said: That's odd, pv is usually much faster than rsync and closer to the real device speed. If it's not working for you, you'll need to find another tool to test with; like I mentioned, rsync is not a good tool for benchmarking.

Using cp it seems to be getting over 1GB/s, but it's hard to say for sure since there is no way to view real-time progress with cp.

EDIT: I used nload to view current network usage on the NIC while doing a cp to an SMB mount and I'm seeing 18Gbps, so that is nice at least.
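For reference, this is roughly the setup (the interface name, mount point, and filename are placeholders for my hardware):

# terminal 1: live throughput on the 40GbE interface
nload eth2

# terminal 2: copy a large file to the other server's SMB mount
cp /mnt/cache/media/bigfile.mkv /mnt/remotes/server2/media/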
IamSpartacus Posted April 24, 2020

Ok, so the actual workloads I'm testing with (i.e. Sonarr/Radarr imports from one server to the next) are hitting about 12Gbps, so that is very solid. I can live with that :).