Uncast Episode 5: All about Unraid 6.10 and 10gbps


On 7/2/2021 at 9:18 PM, SpencerJ said:

there is a deep-dive discussion about 10gbps performance over SMB on Unraid

Enjoyed the podcast, but IMHO more important than working around the performance penalty introduced by user shares would be to improve it directly. For example, I still need to use disk shares for internal transfers if I want good performance, and no SMB improvement will help with that. Here is a transfer of the same folder contents (16 large files) from one pool to another, done with pv, first using disk shares, then using user shares:


disk shares: 46.4GiB 0:00:42 [1.09GiB/s] [==============================================>] 100%
user shares: 46.4GiB 0:02:45 [ 286MiB/s] [==============================================>] 100%
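For anyone who wants to reproduce a test along these lines, here is a sketch using /tmp stand-ins so it runs anywhere; on an Unraid server you would point the paths at a pool (e.g. /mnt/pool1) versus the matching user share (/mnt/user/...) instead — those paths and the file size here are placeholders, not taken from this thread:

```shell
#!/bin/sh
# Sketch: time a copy through pv, as in the runs quoted above.
# On Unraid, compare e.g.:
#   /mnt/pool1 -> /mnt/pool2          (disk shares, bypasses shfs/FUSE)
#   /mnt/pool1 -> /mnt/user/<share>   (user share, goes through shfs)
src=/tmp/urtest_src; dst=/tmp/urtest_dst
mkdir -p "$src" "$dst"
# 64 MiB dummy file as the payload (use something much larger for real tests)
dd if=/dev/zero of="$src/big.bin" bs=1M count=64 status=none
# pv prints live throughput; fall back to cat if pv is not installed
PIPE=$(command -v pv >/dev/null 2>&1 && echo pv || echo cat)
tar -cf - -C "$src" . | "$PIPE" | tar -xf - -C "$dst"
```

Running the same pipeline against both destination paths makes the disk-share vs. user-share gap directly comparable.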


If the base user shares performance could be improved it would also benefit SMB and any other transfers.







Just now, bonienl said:

Yet, in my testing I get a transfer rate of 800 MB/s when copying an 80 GB file from Windows (nvme disk) to a user share on Unraid (nvme RAID0 pool)

Yeah, user shares performance can vary a lot. I also had some servers/configs in the past where I could get around 600-800MB/s, but lately this is what I get on one of my main servers. There have been a lot of posts from other users with the same issue: between 300-500MB/s when using user shares, 1GB/s+ with disk shares.

14 hours ago, bonienl said:

but this is without the SMB multi channel feature

I don't dispute that; in fact, as mentioned, I've gotten similar results in the past, but it's not always possible. Curiously, user shares are currently still faster for me over SMB than for an internal server transfer:




This is my other main server, better but still not great:





My point was that this has been a known issue for many years now, and it affects some users particularly badly; here's just one example. If this could be improved, it would be much better than a setting that only helps SMB. Also, last time I checked, Samba's SMB multichannel was still considered experimental, though it should be mostly OK by now. But of course, if it's currently not possible to fix the base user share performance, then any other improvements are welcome, even if they don't help every type of transfer.

  • 2 weeks later...
On 7/6/2021 at 3:21 AM, JorgeB said:


My point was that this has been a known issue for many years now, that affects some users particularly bad, here's just one example


Thanks for pointing out my thread. This is THE ONLY thing that I have been disappointed in with UNRAID since 2014. It still plagues me today, unless I use the crazy workaround that I discussed in that thread. Without it, I never break 400MB/s, which is particularly sad when you have all the necessary hardware in place to hit 10Gb/s.

  • 3 weeks later...
On 8/5/2021 at 5:03 PM, glennv said:

Waiting eagerly for the announced blog post with instructions for testers to play with SMB multichannel, to finally get all the juice from my 10G. Hoping it's not a Windows-only thing... (OSX/Linux-only user)


I'll work on that this week ;-)  It shouldn't be a Windows-only thing.

  • 4 weeks later...
On 7/5/2021 at 2:43 PM, JorgeB said:

If the base user shares performance could be improved it would also benefit SMB and any other transfers.


We have a similar discussion in the German forums, and there are multiple users getting full speed while using user shares:



At the moment we are not sure if this is related to specific NICs or the new Unraid version.


aio read size = 1
aio write size = 1


Why is this disabled in Unraid's smb.conf?




In my tests, disabling this option reduces performance. So I would say it should be enabled by default, and not only for SMB multichannel scenarios.
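For anyone who wants to experiment, a minimal override fragment re-enabling Samba's async I/O might look like the following. In Samba, a value of 0 disables aio, while a value of 1 enables it for requests of any size; smb-extra.conf is the usual Unraid location for SMB overrides, but verify the right place on your own system:

```
[global]
    # 1 = use async I/O for reads/writes of any size; 0 = disabled
    aio read size = 1
    aio write size = 1
```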


PS You did not mention in the blog article that SMB multichannel alone is not enough: the adapters on both sides need RSS support. Otherwise you don't benefit from multichannel over a single 10G connection (you do benefit if you use multiple 10G connections).


RSS is auto-detected since Samba 4.13, so Unraid 6.10 is a requirement. More details in my Guide:
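For testing, the corresponding Samba option can be set as a global override; `server multi channel support` is the stock Samba parameter name, still flagged experimental on Linux at the time of this thread:

```
[global]
    # Enables SMB3 multichannel (experimental in Samba on Linux at the time)
    server multi channel support = yes
```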


On 8/17/2021 at 12:37 AM, Lilarcor said:

Will I get benefits if my server has 2.5Gb NIC, or it’s for 10Gb only?


There are two different things to know:


1.) SMB Multichannel enables transfers across multiple channels. This means that if you have multiple ethernet ports in the same IP range, SMB automatically splits a file transfer and uses all ports at the same time. That makes it possible to connect a server with two 1G ports to a client with one 2.5G port and transfer at ~200MB/s.


2.) The next point is RSS (receive side scaling). RSS allows transfers to be split across multiple CPU cores, so a 10G transfer does not hit the single-core limit that would otherwise bottleneck the transfer speed. Since an SSD is faster when writing with multiple threads, even 1G connections benefit a little. Note: RSS works only if the network adapters on both sides support it and multichannel is enabled. RSS can slow down transfer speeds if the target is an HDD, so consider installing more RAM and raising "vm.dirty_ratio" if you want to transfer directly to HDDs.
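A quick way to sanity-check both points on a Linux box is sketched below. `eth0` is a placeholder interface name, and a "Combined" count above 1 in the `ethtool -l` output is only a rough indicator of multi-queue/RSS capability, not a guarantee that Samba will use it:

```shell
#!/bin/sh
# Placeholder interface name; substitute your own (see `ip link`)
IFACE=eth0
# Multi-queue/RSS hint: "Combined" > 1 in the channel report
command -v ethtool >/dev/null 2>&1 && ethtool -l "$IFACE" 2>/dev/null || true
# Current write-back cache setting (percent of RAM usable for dirty pages)
cat /proc/sys/vm/dirty_ratio
# To buffer more of an HDD-bound transfer in RAM (temporary, until reboot):
#   sysctl -w vm.dirty_ratio=50
```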


So what does it mean for your scenario:

It depends. Let's say your server and client both use 2.5G. Then enabling SMB Multichannel alone wouldn't enhance the transfer speeds. But if both support RSS, the transfers will be split across multiple cores, so you benefit a little from parallel writes to your SSD, which matters more when you transfer many tiny files.


Or let's say your server has two 1G connections and your client 2.5G. Then enabling SMB Multichannel will boost the transfer to ~200MB/s. In this scenario RSS is not needed. PS: I tested this scenario with 2x 1G (server) and 1x 10G (client), and it worked only after I disabled RSS on the client side, as my server's ethernet adapters did not support RSS (mixed scenarios are not allowed!).
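As a back-of-the-envelope check on the ~200MB/s figure: two 1 Gbit/s links aggregate to 2000 Mbit/s, and dividing by 8 bits per byte gives the theoretical ceiling before protocol overhead:

```shell
# 2 links x 1000 Mbit/s, divided by 8 bits per byte = 250 MB/s ceiling;
# real-world SMB overhead brings this down to roughly ~200 MB/s
echo $(( 2 * 1000 / 8 ))
```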

2 minutes ago, JorgeB said:

Because with some disks performance is much worse if it's enabled:

Ok, yes. The reason could be the "aio max threads" setting. Its default is 100, so using an HDD as the target, which has poor I/O performance, could slow down the transfer.


I think a better solution would be to test a smaller "aio max threads" value. If it solves the issue, I would suggest using the small value by default and setting the higher value only after the user adds a cache pool. That still means reduced performance for HDD transfers, but I think those users won't do that often.


Or it could be added as an option to the SMB settings (as SMB Multichannel should be as well).
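As a sketch of that suggestion: the relevant stock Samba parameter is `aio max threads` (default 100), and a lowered value for HDD-only setups might look like this. The value 8 is an illustrative guess, not a tested recommendation:

```
[global]
    # default is 100; a lower value limits concurrent async I/O threads,
    # which may help slow HDD targets (8 is an untested example value)
    aio max threads = 8
```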

