
SMB Performance Tuning


mgutt


 

At the moment I am trying to enable SMB Multichannel: my client has a 10G LAN port and my server has two 1G LAN ports. What I tried:

 

Enabled Multichannel and added speed capabilities per adapter/ip:

[screenshot]
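
For reference, a rough sketch of what ends up in the Samba config behind that screenshot (on Unraid this is the "SMB Extras" section, stored as /boot/config/smb-extra.conf; the IPs and speed values are from my setup and purely illustrative). Samba has to be restarted afterwards, e.g. with /etc/rc.d/rc.samba restart:

server multi channel support = yes
interfaces = "192.168.178.8;speed=1000000000" "192.168.178.9;speed=1000000000"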

 

Checked on the client if both IPs have been found and selected for Multichannel:

[screenshot]

 

But in the end it does not work, as eth1 is not used:

[screenshots]
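
The same can be cross-checked from the server side by counting the established connections on port 445 (192.168.178.21 is the client). With Multichannel active there should be roughly one connection per used server interface, so a single line here confirms that eth1 stays idle:

netstat -tn | grep ':445' | grep 192.168.178.21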

 

Then I remembered that it is not possible to mix RSS and non-RSS adapters:

https://docs.microsoft.com/en-us/archive/blogs/josebda/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0

Quote

Sample Configurations that do not use SMB Multichannel

The following are sample network configurations that do not use SMB Multichannel:

  • Single non-RSS-capable network adapters. This configuration would not benefit from multiple network connections, so SMB Multichannel is not used.
  • Network adapters of different speeds. SMB Multichannel will choose to use the faster network adapter. Only network interfaces of same type (RDMA, RSS or none) and speed will be used simultaneously by SMB Multichannel, so the slower adapter will be idle.

 

So the first step was to disable RSS on the client's adapter:

[screenshot]

 

But still no activity on eth1:

[screenshot]

 

Then I did:

- disabled IPv6 on the server and on the client (on the server roughly as sketched below)

- rebooted server and client
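
The server side can be sketched with the usual kernel switches (just an illustration, the result is the same if it is done through the Unraid network settings):

# disable IPv6 on all interfaces (runtime setting, reverted by a reboot)
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# verify that no IPv6 addresses are left
ip -6 addr show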

 

And now it works:

[screenshot]

 

Was it because of disabling RSS? No: after re-enabling IPv6 and rebooting, it no longer works, so IPv6 seems to be the culprit:

[screenshot]

 

As you can see, "thoth" resolves to an IPv6 address. I tried copying to both IPv4 addresses of the server directly, but that does not enable SMB Multichannel either:

[screenshots]

 

This is strange, as both target server adapters were found for both IPs:

[screenshot]

 

Maybe SMB multichannel works only for SMB server names? Let's try it out by adding "tower" as a new server name for .8:

[screenshot]
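
For illustration (assuming the name is simply mapped on the client; a "netbios aliases" entry on the Samba side would be the server-side alternative), a single line in C:\Windows\System32\drivers\etc\hosts is enough:

192.168.178.8   tower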

 

Again no success after copying to "tower":

[screenshot]

 

The next step was to disable IPv6 in the client's network adapter properties:

[screenshot]

 

Even rebooting the client does not help... 

 

I did a bit of research and found a blog where someone gets IPv6 addresses when executing Get-SmbMultichannelConnection:

https://blog.chaospixel.com/linux/2016/09/samba-enable-smb-multichannel-support-on-linux.html

[screenshot]

 

So I think my problem is that in my case this command returns only IPv4 addresses, even though the client's network adapter has IPv6 enabled. But why? 🤔

 

The SMB service on Unraid listens on IPv6:

netstat -lnp | grep smb
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      30644/smbd          
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      30644/smbd          
tcp6       0      0 :::139                  :::*                    LISTEN      30644/smbd          
tcp6       0      0 :::445                  :::*                    LISTEN      30644/smbd

 

And with "ping" the client resolves the SMB server name to an IPv6 address, too...

 

Just for fun I added the IPv6 addresses to the smb conf:

interfaces = "fd00::b62e:99ff:fea8:c72c;speed=1000000000" "fd00::b62e:99ff:fea8:c72a;speed=1000000000" "192.168.178.8;speed=1000000000" "192.168.178.9;speed=1000000000"

 

SMB now listens only on these specific IPs:

netstat -lnp --wide | grep smb
tcp        0      0 192.168.178.8:139       0.0.0.0:*               LISTEN      24774/smbd          
tcp        0      0 192.168.178.9:139       0.0.0.0:*               LISTEN      24774/smbd          
tcp        0      0 192.168.178.8:445       0.0.0.0:*               LISTEN      24774/smbd          
tcp        0      0 192.168.178.9:445       0.0.0.0:*               LISTEN      24774/smbd          
tcp6       0      0 fd00::b62e:99ff:fea8:c72a:139 :::*                    LISTEN      24774/smbd          
tcp6       0      0 fd00::b62e:99ff:fea8:c72c:139 :::*                    LISTEN      24774/smbd          
tcp6       0      0 fd00::b62e:99ff:fea8:c72a:445 :::*                    LISTEN      24774/smbd          
tcp6       0      0 fd00::b62e:99ff:fea8:c72c:445 :::*                    LISTEN      24774/smbd

 

And ironically the Windows client now reaches the server through one (?!) IPv6 address:

Get-SmbMultichannelConnection

Server Name Selected Client IP                             Server IP                 Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
----------- -------- ---------                             ---------                 ---------------------- ---------------------- ------------------ -------------------
THOTH       True     192.168.178.21                        192.168.178.9             13                     11                     False              False
THOTH       True     2003:e0:a71d:5700:e988:c813:e0d4:a16a fd00::b62e:99ff:fea8:c72c 13                     12                     False              False

 

But the transfer speed is still capped at the speed of a single link...

 

Next try: disable IPv6 on the client, force SMB to listen only on IPv4, set 192.168.178.8 as the IP of "THOTH" through the Windows hosts file and reboot the client. Still no success 🙈

# SMB Conf:
interfaces = "192.168.178.8;speed=1000000000" "192.168.178.9;speed=1000000000"
bind interfaces only = yes
netstat -lnp --wide | grep smb
tcp        0      0 192.168.178.8:139       0.0.0.0:*               LISTEN      31758/smbd          
tcp        0      0 192.168.178.9:139       0.0.0.0:*               LISTEN      31758/smbd          
tcp        0      0 192.168.178.8:445       0.0.0.0:*               LISTEN      31758/smbd          
tcp        0      0 192.168.178.9:445       0.0.0.0:*               LISTEN      31758/smbd

 

So let's do it the other way around. This time IPv6 stays enabled on the client, but on the server IPv6 is fully disabled...

[screenshot]

 

Not sure if this is important, but the server name now resolves to the second Ethernet port. I don't know why:

[screenshot]

 

Rebooted server and client... it still does not work.

 

... after several additional tests I found out that it is unreliable. For example, if I disable IPv6 only in the router and reboot the client, then SMB Multichannel works, but only for several minutes?! Then it fails again.

 

The next test is to disable IPv6 in the router and reboot all devices incl. switches. Maybe that's the reason?!

  • 3 weeks later...

In the last "All things Unraid" there was this Blog Thread: https://unraid.net/blog/how-to-beta-test-smb-multi-channel-support

Basically a summary of your first post ;-)

BUT two more parameters have been added to smb-extra: "aio read size" and "aio write size".

As I had no clue what that meant, I asked my pal Google and found a detailed explanation at Samba.org (you also have this link in your post: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html).

Besides the fact that (if I understand it right) "aio write size" and "aio read size" should already be "1" by default, I also found "aio max threads", whose description sounds interesting.
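
For reference, in the SMB Extras section those three parameters would look like this (a sketch based on the smb.conf man page, according to which "1" is already the default for the read/write sizes and 100 is the default for "aio max threads"):

aio read size = 1
aio write size = 1
aio max threads = 100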

But you wrote that none of the other parameters had any noticeable effect on speed. So you tested that specific parameter as well?

Multihreaded Samba should give, in theory, a big performance boost - for weak single core perfomance CPUs at least. Also couldn't find any hints which Samba version might be required for this.

2 minutes ago, jj1987 said:

But you wrote that none of the other parameters had any noticeable effect on speed. So you tested that specific parameter as well?

 

Yes, I tested everything and the only difference I found was that "aio write size" must be enabled and "write cache size" should not be zero. But "write cache size" has already been removed in Samba 4.12 because io_uring became the default:

https://wiki.samba.org/index.php/Samba_Features_added/changed#REMOVED_FEATURES_3

 

PS: Unraid 6.9.2 already uses Samba 4.12.14.
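
Both the Samba version and the effective aio defaults can be checked directly in the Unraid terminal; testparm prints the effective value of every parameter, including the defaults:

smbd --version
# reports 4.12.14 on Unraid 6.9.2
testparm -sv 2>/dev/null | grep -i aio
# shows the effective "aio read size", "aio write size" and "aio max threads"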

 

8 minutes ago, jj1987 said:

"aio write size" and "aio read size" should be already "1" on default

Yes.

 

And another important change in Samba 4.13 is the auto-detection of RSS. So maybe since Unraid 6.10 we don't need this line anymore:

interfaces = "10.10.10.10;capability=RSS,speed=10000000000"

 

So in the end we would only need to enable SMB Multichannel and everything should run perfectly.
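
In that case the whole SMB Extras block would shrink to a single line (assuming the RSS/speed auto-detection really makes the interfaces line obsolete):

server multi channel support = yes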

  • 4 months later...
On 9/8/2021 at 9:54 AM, mgutt said:

So maybe since Unraid 6.10 we don't need this line anymore:

interfaces = "10.10.10.10;capability=RSS,speed=10000000000"

To get RSS working in the 6.10 RC, the "interfaces = ..." line is still needed.

My config looks like this:

#unassigned_devices_start
#Unassigned devices share includes
   include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end

#server multi channel support = yes
interfaces = "192.168.0.50;capability=RSS,speed=1000000000"
#aio read size = 1
#aio write size = 1

 

But I don't think I get any advantage from RSS, because of this:

[screenshot]

Am I right?

  • 4 weeks later...
  • 2 weeks later...

@mgutt Great write up!
For #7, you mentioned "or disable the write cache". Have you tested disabling the write cache? If so, how do you do it properly? I ask because I was trying to diagnose slow SMB performance on my Unraid box and, to take the RAM cache out of the testing, set vm.dirty_background_ratio=0 and vm.dirty_ratio=0, and my SMB transfer speed tanked to about 5 MB/s. I replicated this in different Unraid versions and with an Ubuntu live USB image on the same hardware. This was to NVMe storage that I confirmed works at > 1.5 GB/s locally. I can transfer at 650 MB/s to the RAM cache when it is enabled in Unraid.
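
For reference, this is roughly what I ran; the two values at the end are only the usual kernel defaults (10/20), not a recommendation:

# take the RAM write cache out of the equation
sysctl -w vm.dirty_background_ratio=0
sysctl -w vm.dirty_ratio=0

# back to the typical defaults afterwards
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=20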

 

I'm very curious if others have run into this, and whether it is normal behavior for Linux, or if there is something goofy with my hardware, or I simply didn't disable the RAM cache correctly.

 

You can read more about the testing I did here:

 

  • 5 months later...

This is an excellent guide - very easy to follow and I did so! Unfortunately, browsing directories with thousands of files in them is still very slow. File transfer speeds are acceptable, but indexing/browsing is incredibly slow and I can't seem to get Folder Caching to work to boot. This was not a problem when the server was Windows 10, but using unRAID, browsing/indexing speeds are intolerably slow.

 

Any idea if RSS should help here, or any other mods?
 

 

  • 1 month later...
On 9/24/2020 at 1:14 PM, mgutt said:
egrep 'CPU|eth*' /proc/interrupts

It must return multiple lines (each for one CPU core) like this:

egrep 'CPU|eth0' /proc/interrupts
            CPU0       CPU1       CPU2       CPU3       
 129:   29144060          0          0          0  IR-PCI-MSI 524288-edge      eth0
 131:          0   25511547          0          0  IR-PCI-MSI 524289-edge      eth0
 132:          0          0   40776464          0  IR-PCI-MSI 524290-edge      eth0
 134:          0          0          0   17121614  IR-PCI-MSI 524291-edge      eth0

I've been trying to enable SMB Multichannel for a week. With the command, however, only 1 line was displayed for eth0. After I deleted the "old" NICs from network.cfg, 4 lines were displayed.

  • 3 months later...

So I know this thread is a little old, but I am wondering if I can get some clarification. I have an NVMe SSD in my Windows machine and another as a cache drive in Unraid. With a 10G Cat6a connection I am getting around 400 MB/s upload to Unraid. The weird part is that with an SFP+ adapter using fiber in a different machine I am lucky to get 200 MB/s. Both Windows machines are identical in specs, except the RJ45 NIC is an Intel 540 and the SFP+ one is a Mellanox. When I do a direct connection between the Windows machine and Unraid I hit about 800 MB/s. I have tried both a TP-Link 8-port 10G managed switch and a MikroTik 12-port managed switch, with the same results. Jumbo frames made little difference.
Thoughts? 

8 minutes ago, mgutt said:

Not sure why you are posting here, as you are probably having a hardware issue.

Actually, I believe it is a settings issue. I have swapped out the hardware for testing and I still get the same results. I think the issue could be related to SMB multi-threading. iPerf is giving me 6.7 Gbit/s, which is way faster than what I am seeing in uploads to Unraid. Unraid is running with default network settings, other than some jumbo frame testing I did, which made no real difference.

 

21 minutes ago, crowdx42 said:

iPerf is giving me 6.7 Gbit/s

Which is still extremely bad. I mean, 10G is 10G. Why should you only reach 6.7 Gbit/s while using RAM only (the iPerf default)?

 

Are you sure your cabling is good enough? Maybe you have interference. Try PING for a longer period and check whether you have fluctuations or even packet loss.
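
For example (placeholders, use your actual IPs); the summary at the end shows packet loss and the min/avg/max round-trip times:

# from the Unraid console: 10 minutes of pings at the default 1 s interval
ping -c 600 <client-ip>

# or from the Windows side
ping -n 600 <server-ip>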

 

24 minutes ago, crowdx42 said:

Jumbo frame

The impact of these is low. I use the default MTU and I reach 1 GB/s without problems. 


For the iPerf result I got above, I was using a Mellanox MCX311A with TP-Link SFP+ transceivers on both ends, connected with a 25 ft fiber cable. I also tried 10GTek transceivers with the same results. On FB, people said I might need to use parallel threads of iperf to max out the bandwidth. I am not sure how true that is, because if I directly connect the Unraid server to the Windows PC I get over 800 MB/s. I have tried both the current TP-Link switch, which has SFP+ ports only, and a MikroTik switch, which has RJ45 and SFP+ ports, and the results were the same.
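
For completeness, the parallel test that was suggested would look roughly like this (assuming iperf3 on both ends; -P sets the number of parallel streams):

# on the Unraid server
iperf3 -s

# on the Windows client: 4 parallel streams for 30 seconds
iperf3 -c <server-ip> -P 4 -t 30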


 

9 hours ago, crowdx42 said:

use parallel threads of iperf to max out the bandwidth. I am not sure how true that is

Yes, in some cases / setups.

 

On 2/3/2023 at 12:42 PM, crowdx42 said:

I am getting around 400 MB/s upload

I also got similar speeds with NVMe and never reached 10G when I tested a long time ago, but if I test using the memory cache, I get the full 10G SMB speed (MTU also 1500 and no SMB Multichannel needed)...... so since then I basically don't use NVMe / SSD for cache purposes.


For some reason I can't get RSS to work. I have two Mellanox MCX354A-FCBT cards in a peer-to-peer config and in Windows they show as RSS Capable:

[screenshot: Windows RSS]

 

But in Unraid I get zero lines with "egrep 'CPU|eth1' /proc/interrupts":

[screenshot: Unraid RSS]

(eth0 is the non RSS capable onboard NIC)

 

And SMB Multichannel is also not working. My Samba config looks like this:

server multi channel support = Yes
interfaces = "10.10.10.1;capability=RSS,speed=4000000000" "10.10.11.1;capability=RSS,speed=4000000000"

 

I can write to the server at 1.6 GB/s and read at 900 MB/s. Not bad of course, but the NVMe drives and 40GbE should be able to perform better.

1 hour ago, meganie said:

But in Unraid I get zero lines with "egrep 'CPU|eth1' /proc/interrupts":

This is related to the driver. You need to ask Limetech why this limitation exists. But first you should boot Ubuntu and verify that it is not a general issue under Linux.
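
A quick way to see what the driver offers (a sketch, if ethtool is available in the Unraid console): without multiple RX/TX queues there are no per-queue interrupts that RSS could distribute across cores:

ethtool -l eth1
# "Pre-set maximums" shows what the driver supports,
# "Current hardware settings" shows what is actually active.
# If more queues are supported but only 1 is active, it can be raised, e.g.:
# ethtool -L eth1 combined 8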

  • 1 month later...

I've replaced the hardware on the client side and RSS is working now.

Is RDMA still not implemented in Unraid/Samba? Otherwise I would consider upgrading to Windows 10 Pro for Workstations.

 

[screenshot]

 

Is it normal that it shows eth0 multiple times even though the NIC I use for SMB is eth1?

[screenshot]

 

[screenshot]

 

CrystalDiskMark Benchmark

[screenshot]

 

Copy file to the server:

[screenshot]

 

Copy file from the server:

[screenshot]


The non-pro variants support RDMA just fine: https://network.nvidia.com/pdf/user_manuals/ConnectX-3_VPI_Single_and_Dual_QSFP_Port_Adapter_Card_User_Manual.pdf

 

"Client RDMA Capable: False" is just listed because Windows 10 Pro doesn't support RDMA. That's why I would have to upgrade my client to Windows 10 Pro for Workstations to support it.

But as far as I know Unraid/Samba don't support it, still listed as prototype: https://wiki.samba.org/index.php/Roadmap#SMB2.2FSMB3
