mgutt Posted August 23, 2021 (Author)

At the moment I'm trying to enable SMB Multichannel in a setup where my client has a 10G NIC and my server has two 1G LAN ports.

What I tried: enabled Multichannel and added speed capabilities per adapter/IP, then checked on the client that both IPs had been found and selected for Multichannel. But it does not work, as eth1 is never used.

Then I remembered that it is not possible to mix RSS and non-RSS scenarios: https://docs.microsoft.com/en-us/archive/blogs/josebda/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0

Quote:
Sample Configurations that do not use SMB Multichannel
The following are sample network configurations that do not use SMB Multichannel:
- Single non-RSS-capable network adapters. This configuration would not benefit from multiple network connections, so SMB Multichannel is not used.
- Network adapters of different speeds. SMB Multichannel will choose to use the faster network adapter. Only network interfaces of the same type (RDMA, RSS or none) and speed will be used simultaneously by SMB Multichannel, so the slower adapter will be idle.

So the first step was to disable RSS on the client's adapter. But still no activity on eth1. Then I:
- disabled IPv6 on the server and on the client
- rebooted server and client

And now it works. Was it because of disabling RSS? No - after re-enabling IPv6 and rebooting, it does not work anymore.

As you can see, "thoth" resolves to an IPv6 address. I tried to copy to both IPv4 addresses of the server, but that does not enable SMB Multichannel either. This is strange, as for both IPs both target server adapters were found. Maybe SMB Multichannel only works with SMB server names? Let's try it out by adding "tower" as a new server name for .8: again no success after copying to "tower". Next step was to disable IPv6 in the network adapter properties. Even rebooting the client does not help...

I did a little bit of research, and in this blog I found someone who gets IPv6 addresses when he executes Get-SmbMultichannelConnection: https://blog.chaospixel.com/linux/2016/09/samba-enable-smb-multichannel-support-on-linux.html

So I think my problem is that, in my case, this command returns only IPv4 addresses even though the client's network adapter has IPv6 enabled. But why? 🤔 The SMB service on Unraid listens on IPv6:

netstat -lnp | grep smb
tcp   0  0 0.0.0.0:139  0.0.0.0:*  LISTEN  30644/smbd
tcp   0  0 0.0.0.0:445  0.0.0.0:*  LISTEN  30644/smbd
tcp6  0  0 :::139       :::*       LISTEN  30644/smbd
tcp6  0  0 :::445       :::*       LISTEN  30644/smbd

And the client also resolves the SMB server name to an IPv6 address with "ping"...

Just for fun I added the IPv6 addresses to the SMB conf:

interfaces = "fd00::b62e:99ff:fea8:c72c;speed=1000000000" "fd00::b62e:99ff:fea8:c72a;speed=1000000000" "192.168.178.8;speed=1000000000" "192.168.178.9;speed=1000000000"

SMB now listens only on these specific IPs:

netstat -lnp --wide | grep smb
tcp   0  0 192.168.178.8:139              0.0.0.0:*  LISTEN  24774/smbd
tcp   0  0 192.168.178.9:139              0.0.0.0:*  LISTEN  24774/smbd
tcp   0  0 192.168.178.8:445              0.0.0.0:*  LISTEN  24774/smbd
tcp   0  0 192.168.178.9:445              0.0.0.0:*  LISTEN  24774/smbd
tcp6  0  0 fd00::b62e:99ff:fea8:c72a:139  :::*       LISTEN  24774/smbd
tcp6  0  0 fd00::b62e:99ff:fea8:c72c:139  :::*       LISTEN  24774/smbd
tcp6  0  0 fd00::b62e:99ff:fea8:c72a:445  :::*       LISTEN  24774/smbd
tcp6  0  0 fd00::b62e:99ff:fea8:c72c:445  :::*       LISTEN  24774/smbd

And ironically the Windows client now reaches the server through one (?!) IPv6 address:

Get-SmbMultichannelConnection
Server Name  Selected  Client IP                              Server IP                  Client Interface Index  Server Interface Index  Client RSS Capable  Client RDMA Capable
-----------  --------  ---------                              ---------                  ----------------------  ----------------------  ------------------  -------------------
THOTH        True      192.168.178.21                         192.168.178.9              13                      11                      False               False
THOTH        True      2003:e0:a71d:5700:e988:c813:e0d4:a16a  fd00::b62e:99ff:fea8:c72c  13                      12                      False               False

But the transfer speed is still capped to a single link...

Next try: disable IPv6 on the client, force SMB to listen only on IPv4, set 192.168.178.8 as the IP of "THOTH" through the Windows hosts file and reboot the client - but still no success 🙈

# SMB conf:
interfaces = "192.168.178.8;speed=1000000000" "192.168.178.9;speed=1000000000"
bind interfaces only = yes

netstat -lnp --wide | grep smb
tcp   0  0 192.168.178.8:139  0.0.0.0:*  LISTEN  31758/smbd
tcp   0  0 192.168.178.9:139  0.0.0.0:*  LISTEN  31758/smbd
tcp   0  0 192.168.178.8:445  0.0.0.0:*  LISTEN  31758/smbd
tcp   0  0 192.168.178.9:445  0.0.0.0:*  LISTEN  31758/smbd

So let's do it the other way around: this time IPv6 stays enabled on the client, but it gets fully disabled on the server... Not sure if this is important, but the server name resolves to the second ethernet port. Don't know why. Server and client reboot... does not work.

...after several additional tests I found out that it is unreliable. For example, if I disable IPv6 only in the router and reboot the client, then SMB Multichannel works - but only for several minutes?! Then it fails again. Next test is to disable IPv4 in the router and reboot all devices incl. switches. Maybe that's the reason?!
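For anyone retracing these steps, here is a quick server-side sanity check I would run before blaming the client - a sketch only, assuming testparm and nc are available on the Unraid box and using the IPs from this post as placeholders:

# Show the effective multichannel/interface settings Samba actually loaded
testparm -s 2>/dev/null | grep -iE 'multi channel|interfaces'

# Confirm smbd is listening on both data IPs (ports 139/445)
netstat -lnp --wide | grep smbd

# From another Linux machine: confirm port 445 is reachable on each server IP
# (nc flags vary between netcat variants; -z scan, -v verbose, -w 2s timeout)
for ip in 192.168.178.8 192.168.178.9; do
    nc -zvw2 "$ip" 445
done

If both IPs answer on 445 but Get-SmbMultichannelConnection still only selects one of them, the problem is more likely on the client or name-resolution side than in the Samba config.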
jj1987 Posted September 8, 2021

In the last "All things Unraid" there was this blog thread: https://unraid.net/blog/how-to-beta-test-smb-multi-channel-support

Basically a summary of your start post, BUT two more arguments have been added to smb-extra: "aio read size" and "aio write size". As I had no clue what that meant, I asked my pal Google and found a detailed explanation at Samba.org (you also have this link in your post: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html).

Besides the fact that (if I understand it right) "aio write size" and "aio read size" should already be "1" by default, I also found "aio max threads", whose description sounds interesting. But you wrote that none of the other arguments had any noticeable effect on speed. So you tested that specific argument as well? Multithreaded Samba should, in theory, give a big performance boost - at least for CPUs with weak single-core performance. I also couldn't find any hints on which Samba version might be required for this.
mgutt Posted September 8, 2021 (Author)

2 minutes ago, jj1987 said:
But you wrote that none of the other arguments had any noticeable effect on speed. So you tested that specific argument as well?

Yes, I tested everything, and the only difference I found was that "aio write size" must be enabled and "write cache size" should not be zero. But "write cache size" was already removed in Samba 4.12 because io_uring became the default: https://wiki.samba.org/index.php/Samba_Features_added/changed#REMOVED_FEATURES_3

PS: Unraid 6.9.2 already uses Samba 4.12.14.

8 minutes ago, jj1987 said:
"aio write size" and "aio read size" should already be "1" by default

Yes. And another important change with Samba 4.13 is the auto-detection of RSS. So maybe since Unraid 6.10 we don't need this line anymore:

interfaces = "10.10.10.10;capability=RSS,speed=10000000000"

So finally we would only need to enable SMB Multichannel and everything should run perfectly.
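Put together, the extra settings discussed here could look like this - a minimal sketch, assuming the Unraid SMB extras live in /boot/config/smb-extra.conf and using 10.10.10.10 as a placeholder server IP:

# Append the multichannel settings to the SMB extras file
# (on Samba >= 4.13 the RSS capability should be auto-detected,
#  so the "interfaces" line may be unnecessary there)
cat >> /boot/config/smb-extra.conf << 'EOF'
server multi channel support = yes
interfaces = "10.10.10.10;capability=RSS,speed=10000000000"
aio read size = 1
aio write size = 1
EOF

# Restart Samba so the settings take effect (Slackware/Unraid style init script)
/etc/rc.d/rc.samba restart

As noted above, "aio read size" and "aio write size" should already default to 1 on current Samba versions, so listing them is mostly documentation.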
sonic6 Posted January 22, 2022

On 9/8/2021 at 9:54 AM, mgutt said:
So maybe since Unraid 6.10 we don't need this line anymore:
interfaces = "10.10.10.10;capability=RSS,speed=10000000000"

To get RSS working in the 6.10 RC, the "interfaces = ..." line is still needed. My config looks like this:

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
#server multi channel support = yes
interfaces = "192.168.0.50;capability=RSS,speed=1000000000"
#aio read size = 1
#aio write size = 1

But I don't think I get any advantage from RSS, because of this: am I right?
mgutt Posted January 22, 2022 (Author)

2 hours ago, sonic6 said:
But I don't think I get any advantage from RSS, because of this:

Sadly, yes.
mgutt Posted February 15, 2022 (Author)

The god of SMB Multichannel answered regarding one of my feature requests: https://bugzilla.samba.org/show_bug.cgi?id=14824

Maybe I'll find the time to test the command.
Sarge Posted February 26, 2022

@mgutt Great write-up! For #7, you mentioned "or disable the write cache" - have you tested disabling the write cache? If so, how do you do it properly?

I ask because I was trying to diagnose slow SMB performance on my Unraid box and in the process set vm.dirty_background_ratio=0 and vm.dirty_ratio=0 to take the RAM cache out of the testing, and my SMB transfer speed tanked to about 5 MB/s. I replicated this in different Unraid versions and with an Ubuntu live USB image on the same hardware. This was to NVMe storage that I confirmed works at > 1.5 GB/s locally on the system. I can transfer at 650 MB/s to the RAM cache when it's enabled in Unraid.

I'm very curious whether others have run into this, and whether it is normal behavior for Linux, there is something goofy with my hardware, or I didn't disable the RAM cache correctly. You can read more about the testing I did here:
mgutt Posted February 26, 2022 (Author)

15 hours ago, Sarge said:
vm.dirty_ratio=0 to take the RAM cache out of the testing, and my SMB transfer speed tanked to about 5 MB/s.

Yes, I had similar experiences. Reduce it to 100 MB instead, as mentioned here: https://forums.unraid.net/topic/97165-smb-performance-tuning/?do=findComment&comment=898772
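A minimal sketch of what "reduce it to 100 MB" could look like, assuming the linked post uses the byte-based sysctls (the exact numbers recommended there may differ):

# The *_bytes variants override the *_ratio variants as soon as they are non-zero.
sysctl -w vm.dirty_bytes=104857600             # ~100 MB of dirty data before writers must flush
sysctl -w vm.dirty_background_bytes=52428800   # ~50 MB before background writeback kicks in

# To make this survive a reboot on Unraid, the same commands could be added
# to the go file (path is Unraid-specific):
#   echo 'sysctl -w vm.dirty_bytes=104857600' >> /boot/config/go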
Sarge Posted February 26, 2022

2 hours ago, mgutt said:
Yes, I had similar experiences. Reduce it to 100 MB instead, as mentioned here: https://forums.unraid.net/topic/97165-smb-performance-tuning/?do=findComment&comment=898772

Will do, thank you!
Kyle Boddy Posted August 7, 2022

This is an excellent guide - very easy to follow, and I did so! Unfortunately, browsing directories with thousands of files in them is still very slow. File transfer speeds are acceptable, but indexing/browsing is incredibly slow, and I can't seem to get Folder Caching to work, to boot. This was not a problem when the server was Windows 10, but on Unraid, browsing/indexing speeds are intolerably slow. Any idea whether RSS should help here, or any other mods?
Seduron Posted September 10, 2022

On 9/24/2020 at 1:14 PM, mgutt said:
egrep 'CPU|eth*' /proc/interrupts
It must return multiple lines (one per CPU core) like this:
egrep 'CPU|eth0' /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
 129:   29144060          0          0          0  IR-PCI-MSI 524288-edge  eth0
 131:          0   25511547          0          0  IR-PCI-MSI 524289-edge  eth0
 132:          0          0   40776464          0  IR-PCI-MSI 524290-edge  eth0
 134:          0          0          0   17121614  IR-PCI-MSI 524291-edge  eth0

I've been trying to enable SMB Multichannel for a week. With that command, however, only 1 line was displayed for eth0. After I deleted the "old" NICs from network.cfg, 4 lines were displayed.
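For reference, a small loop to count the interrupt lines per NIC in one go - a sketch that assumes the driver labels its IRQs with the interface name (ethX), as in the output quoted above; some drivers use labels like ethX-TxRx-0 instead, which this still matches:

# One interrupt line per CPU core/queue is what you want to see for RSS
for nic in /sys/class/net/eth*; do
    name=$(basename "$nic")
    echo "$name: $(grep -c "$name" /proc/interrupts) interrupt line(s)"
done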
crowdx42 Posted February 3, 2023

So I know this thread is a little old, but I am wondering if I can get some clarification. I have an NVMe SSD in my Windows machine and also one as a cache drive in Unraid. With a 10G Cat6a connection I am getting around 400 MB/s uploading to Unraid. The weird part is that with an SFP+ adapter using fiber in a different machine I am lucky to get 200 MB/s. Both Windows machines are identical in specs, except the RJ45 NIC is an Intel X540 and the SFP+ NIC is a Mellanox. When I do a direct connection between the Windows machine and Unraid I hit about 800 MB/s. I have tried both a TP-Link 8-port 10G managed switch and a MikroTik 12-port managed switch, with the same results. Jumbo frames made little difference. Thoughts?
mgutt Posted February 6, 2023 (Author)

On 2/3/2023 at 5:42 AM, crowdx42 said:
The weird part is that with an SFP+ adapter using fiber in a different machine I am lucky to get 200 MB/s.

Not sure why you are posting here, as you probably have a hardware issue.
crowdx42 Posted February 6, 2023

8 minutes ago, mgutt said:
Not sure why you are posting here, as you probably have a hardware issue.

Actually, I believe it is a settings issue. I have swapped out the hardware for testing and I still get the same results. I think the issue could be related to SMB multithreading. iPerf is giving me 6.7 Gbit/s, which is way faster than what I am seeing in uploads to Unraid. Unraid is running with default network settings other than some jumbo frame testing I did, which made no real difference.
mgutt Posted February 6, 2023 (Author)

21 minutes ago, crowdx42 said:
iPerf is giving me 6.7 Gbit/s

Which is still extremely bad. I mean, 10G is 10G. Why would you reach only 6.7 when using RAM only (the iPerf default)? Are you sure your cabling is good enough? Maybe you have interference. Try PING for a longer period and check whether you have fluctuations or even packet loss.

24 minutes ago, crowdx42 said:
Jumbo frame

The impact of these is low. I use the default MTU and reach 1 GB/s without problems.
crowdx42 Posted February 6, 2023

For the iPerf result above, I was using a Mellanox MCX311A with TP-Link SFP+ transceivers on both ends, connected by a 25 ft fiber cable. I also tried 10Gtek transceivers with the same results. On Facebook, people said I might need to use parallel iPerf threads to max out the bandwidth; I am not sure how true that is, because if I directly connect the Unraid server to the Windows PC I get over 800 MB/s. I have tried both the current TP-Link switch, which has SFP+ ports only, and a MikroTik switch, which has RJ45 and SFP+ ports, and the results were the same.
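One way to test the parallel-streams theory with iperf3 - a sketch, with 192.168.1.100 as a placeholder for the Unraid IP (iperf3 may need to be installed on Unraid first, e.g. via a community plugin):

# On the Unraid server:
iperf3 -s

# On the client: single stream, then 4 and 8 parallel streams, 30 seconds each
iperf3 -c 192.168.1.100 -t 30
iperf3 -c 192.168.1.100 -t 30 -P 4
iperf3 -c 192.168.1.100 -t 30 -P 8

If the aggregate with -P climbs well past the single-stream 6.7 Gbit/s, the bottleneck is per-connection (CPU/interrupt handling) rather than the cabling, transceivers or switch.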
Vr2Io Posted February 7, 2023

9 hours ago, crowdx42 said:
use parallel iPerf threads to max out the bandwidth; I am not sure how true that is

Yes, in some cases/setups it helps.

On 2/3/2023 at 12:42 PM, crowdx42 said:
I am getting around 400 MB/s uploading

I also got a similar speed with NVMe - I never reached 10G when I tested this a long time ago. But if I test using the memory cache, I get the full 10G SMB speed (MTU also 1500, and no SMB Multichannel needed)... so since then I basically don't use NVMe/SSD for cache purposes.

Edited February 7, 2023 by Vr2Io
crowdx42 Posted February 7, 2023

How do I use the memory cache for testing?
Vr2Io Posted February 7, 2023

11 minutes ago, crowdx42 said:
How do I use the memory cache for testing?

Set it on the command line, or use the "Tips and Tweaks" plugin to set those two parameters. Of course, you must have enough memory to cache the whole file.
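Presumably the "two parameters" are vm.dirty_ratio and vm.dirty_background_ratio (an assumption - check the labels the Tips and Tweaks plugin actually shows). Set from the command line it could look like this:

# Allow a large share of RAM to hold dirty pages so a test file lands entirely
# in the page cache before anything touches the SSD/HDD
sysctl -w vm.dirty_ratio=80
sysctl -w vm.dirty_background_ratio=50

# Show the currently active values
sysctl vm.dirty_ratio vm.dirty_background_ratio

The values revert to the defaults on reboot unless they are persisted, and the test file has to be smaller than the available free RAM for the whole transfer to stay in the cache.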
crowdx42 Posted February 7, 2023

Thank you. I tried it, but it did not have any effect. I guess that's because the system is already writing between two NVMe SSDs, one in Unraid and one in the Windows PC.
meganie Posted February 8, 2023

For some reason I can't get RSS to work. I have two Mellanox MCX354A-FCBT in a peer-to-peer config, and in Windows they show as RSS capable. But on Unraid I get zero lines with "egrep 'CPU|eth1' /proc/interrupts" (eth0 is the non-RSS-capable onboard NIC), and SMB Multichannel is also not working. My Samba config looks like this:

server multi channel support = Yes
interfaces = "10.10.10.1;capability=RSS,speed=4000000000" "10.10.11.1;capability=RSS,speed=4000000000"

I can write to the server at 1.6 GB/s and read at 900 MB/s. Not bad, of course, but the NVMe drives and 40GbE should be able to perform better.
mgutt Posted February 9, 2023 (Author)

1 hour ago, meganie said:
But on Unraid I get zero lines with "egrep 'CPU|eth1' /proc/interrupts"

This is related to the driver. You would need to ask Limetech why this limitation exists. But first you should boot Ubuntu and verify that it is not a general issue under Linux.
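Before swapping hardware it may also be worth checking what the driver itself reports - a sketch using ethtool (assuming the Mellanox card is eth1 and its driver, likely mlx4_en for a ConnectX-3, allows changing the channel count at runtime):

# Which driver and firmware are bound to the interface?
ethtool -i eth1

# How many RX/TX (combined) queues does the driver expose?
# A maximum of 1 would explain why only one interrupt line shows up.
ethtool -l eth1

# If the maximum is higher than the current value, try raising it:
ethtool -L eth1 combined 8

The same commands work from an Ubuntu live boot, so they make it easy to compare against Unraid's bundled driver.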
meganie Posted March 10, 2023

I've replaced the hardware on the client side, and RSS is working now. Is RDMA still not implemented in Unraid/Samba? Otherwise I would consider upgrading to Windows 10 Pro for Workstations. Is it normal that it shows eth0 multiple times even though the NIC I use for SMB is eth1?

CrystalDiskMark benchmark, copying a file to the server and copying a file from the server: (screenshots)

Edited March 10, 2023 by meganie
rootd00d Posted March 13, 2023

Maybe you need the MCX354A-FCCT to support RDMA, because it's a ConnectX-3 Pro card instead of the non-Pro? I've just been speccing this out recently. This seems to be a major distinction between the two, at least according to Bing Chat.
meganie Posted March 13, 2023

The non-Pro variants support RDMA just fine: https://network.nvidia.com/pdf/user_manuals/ConnectX-3_VPI_Single_and_Dual_QSFP_Port_Adapter_Card_User_Manual.pdf

"Client RDMA Capable: False" is only listed because Windows 10 Pro doesn't support RDMA. That's why I would have to upgrade my client to Windows 10 Pro for Workstations to get it. But as far as I know, Unraid/Samba don't support it anyway - it's still listed as a prototype: https://wiki.samba.org/index.php/Roadmap#SMB2.2FSMB3