• Slow SMB performance


    ptr727
    • Minor



    User Feedback

    Recommended Comments



    mgutt, is there a specific place where you share your Unraid SMB config? I read your performance improvement topic, but it would be easier to test same vs. same. I have big issues with SMB performance too and nothing seems to help it.

    Edited by theruck
    Link to comment
    3 hours ago, mgutt said:

    I don't think the SHFS overhead will be solved by a newer kernel.

     

    I wasn't suggesting that it would. I was just saying that all the development effort was going into moving forward to 6.9, rather than fixing remaining issues with 6.8, but it is taking longer than anticipated because the move to the new kernel has been problematic.

     

    Aspects of SHFS have been improved though, such as the dereferencing of paths to VM vdisks.

     

    Link to comment
    4 minutes ago, John_M said:

     

    I wasn't suggesting that it would. I was just saying that all the development effort was going into moving forward to 6.9, rather than fixing remaining issues with 6.8, but it is taking longer than anticipated because the move to the new kernel has been problematic.

     

    Aspects of SHFS have been improved though, such as the dereferencing of paths to VM vdisks.

     

    Sorry, but given the SMB issues, what would be the reason to remove AFP support in the new release then? It will just piss off more Mac users, especially having no real "support" from Limetech for the SMB performance issues.

    Edited by theruck
    Link to comment
    1 minute ago, theruck said:

    Sorry, but given the SMB issues, what would be the reason to remove AFP support in the new release then? It will just piss off more Mac users

     

    I regret the loss of AFP too, as I have a collection of older Macs that don't work so well with SMB, but I understand that Netatalk has always been a bit of a dog. Have you considered using NFS instead? macOS is Unix-like and connects to NFS shares if you start the NFS service first.
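    For reference, mounting an Unraid NFS export from the macOS Terminal typically looks something like this (a sketch only; the server IP, export path and mount point are placeholders, and it assumes the NFS service is already enabled in Unraid's settings):

    # Mount an Unraid NFS export on a Mac.
    # -o resvport is usually needed because Linux NFS servers reject requests
    # coming from the unprivileged source ports macOS uses by default.
    mkdir -p ~/nfs/Media
    sudo mount -t nfs -o resvport,rw 192.168.1.10:/mnt/user/Media ~/nfs/Media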

    Link to comment
    34 minutes ago, John_M said:

    Aspects of SHFS have been improved though, such as the dereferencing of paths to VM vdisks.

    Which is not an improvement. It's bypassing.

    Link to comment
    42 minutes ago, TexasUnraid said:

    I tried that PowerShell command, but it just goes to a new line and does not give any information, with a transfer active or not.

     

    OK, this means you don't have SMB Multichannel enabled. Is this valid for win2win as well?

    Link to comment
    1 minute ago, John_M said:

    Which is a good thing, when it can be done automatically.

    Yes. But this tweak works only with VMs, because a VM is one huge file which can't be split across multiple disks. All other paths in Unraid link to shares, which can theoretically target multiple files on multiple disks, so they can't be internally replaced by a direct disk path.

     

    I'm using the same tweak for multiple containers, but it works only for paths which permanently target the same disk. Like my appdata share with the "Prefer" cache setting, which is therefore permanently located at /mnt/cache/appdata, so I can bypass SHFS. But if I changed the appdata cache setting to "Yes" and started the mover, all my Docker containers would break. That's why Unraid itself can't implement this "improvement" by default.
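    For reference, a quick sanity check along those lines before pointing a container at the direct disk path (a sketch assuming a typical Unraid layout; the share name is the appdata example from above):

    # The bypass is only safe if the share currently lives entirely on the cache pool.
    ls -d /mnt/disk*/appdata 2>/dev/null   # should print nothing if no array disk holds appdata
    ls -d /mnt/cache/appdata               # the direct path you would map instead of /mnt/user/appdata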

     

    The only solution is to optimize SHFS. And as I proposed, they should start with the FUSE_MAX_PAGES_PER_REQ. I have a good feeling about that.
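    A rough way to put a number on that SHFS overhead, as a sketch only (the share name and file size are assumptions, and it assumes the share currently writes to the cache pool):

    # Write the same amount of data once through SHFS (/mnt/user) and once through
    # the direct disk path (/mnt/cache); the throughput gap is the FUSE/SHFS overhead.
    dd if=/dev/zero of=/mnt/user/Music/shfs-test.bin bs=1M count=4096 conv=fdatasync
    dd if=/dev/zero of=/mnt/cache/Music/direct-test.bin bs=1M count=4096 conv=fdatasync
    rm -f /mnt/user/Music/shfs-test.bin /mnt/cache/Music/direct-test.bin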

    Link to comment
    1 hour ago, theruck said:

    mgutt, is there a specific place where you share your Unraid SMB config

     

    I have only two lines, to enable SMB Multichannel and RSS, but neither is needed; it's only a bonus. Is it possible to access your PC remotely? I'd like to see what you are doing and how your paths / settings look while you suffer from the low performance. Maybe I can find your problem. If this is OK for you, PM me.

     

    Here is the proof:

    [screenshot: 2021-02-07 23:03]
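    The two lines themselves are only visible in the screenshot above, but going by Samba's documented syntax for multichannel and RSS-capable interfaces they presumably look roughly like this (the IP address is a placeholder; the speed value is 10 Gbit/s in bits per second):

    server multi channel support = yes
    interfaces = "192.168.178.10;capability=RSS,speed=10000000000"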

    Link to comment
    50 minutes ago, mgutt said:

     

    OK, this means you don't have SMB Multichannel enabled. Is this valid for win2win as well?

     

    I didn't test this; I don't have another Windows system on right now. They are all running default Windows 10 network settings though.

    Link to comment
    23 minutes ago, mgutt said:

     

    I have only two lines, to enable SMB Multichannel and RSS, but neither is needed; it's only a bonus. Is it possible to access your PC remotely? I'd like to see what you are doing and how your paths / settings look while you suffer from the low performance. Maybe I can find your problem. If this is OK for you, PM me.

     

    Here is the proof:

    [screenshot: 2021-02-07 23:03]

     

    The first line is pretty simple. For the second line, I am guessing that I just need to change the IP to my main IP address? I have both 1 gig and 10 gig. Do I comma-separate and add both addresses?

    Link to comment
    4 minutes ago, mgutt said:

     

    You could speed up the process a little bit by enabling this option (I did not, only selected it for this screenshot ;) )

     

    [screenshot: 2021-02-07 23:11]

    I considered this option, but I like the file integrity of CoW, particularly for backups, if possible.

    Link to comment
    Just now, TexasUnraid said:

    They are all running default Windows 10 network settings though.

    Then they will use SMB Multichannel, and if both network adapters support RSS, they will even use that, too, because it's the default:

    https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11)

    Quote

    Because SMB Multichannel is enabled by default, you do not have to install additional roles, role services, or features. The SMB client automatically detects and uses multiple network connections when the configuration is identified.

     

    This is the only "magical" difference between win2win and win2unraid. And if you don't use a direct disc path as your target you additionally suffer from the SFHS overhead.

    Link to comment
    2 minutes ago, mgutt said:

    Then they will use SMB Multichannel, and if both network adapters support RSS, they will even use that, too, because it's the default:

    https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11)

     

    This is the only "magical" difference between win2win and win2unraid. And if you don't use a direct disc path as your target you additionally suffer from the SFHS overhead.

    I know I tried to get Multichannel working on Unraid when I first set it up last year but could not get it working. I suppose those commands you listed above are the answer.

     

    I will add them and give it a try when I can take it offline in a day or 2.

    Link to comment
    10 minutes ago, TexasUnraid said:

    I will add them and give it a try when I can take it offline in a day or 2.

     

    Those commands work without a reboot. You only need to add them to smb-extra.conf and execute "samba restart".

     

    But you need to replace:

    10000000000

     

    with:

    1000000000

     

    if you're using a 1G adapter.

     

    But this won't help much as ViceVersa does not use multiple threads. But it will help if you have other background connections to Unraid.

     

    Quote

    I am guessing that I just need to change the IP to my main IP address? I have both 1 gig and 10 gig. Do I comma-separate and add both addresses?

     

    Do you need the 1G connection? If not, don't use it. I never tested multiple adapters, but you need to add the IP of the specific adapter.
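    Putting those steps together as a sketch (the file path is the usual persistent location for Unraid's SMB extras, the IP is a placeholder, and the speed value is the 1G variant mentioned above):

    # Append the two lines to the SMB extras file and reload Samba -- no reboot needed.
    echo 'server multi channel support = yes' >> /boot/config/smb-extra.conf
    echo 'interfaces = "192.168.1.10;capability=RSS,speed=1000000000"' >> /boot/config/smb-extra.conf
    samba restart

    The same lines can also be pasted into the SMB Extras box under Settings > SMB in the web GUI. Samba's interfaces option does accept several space-separated quoted entries for multiple adapters, but as noted above that combination is untested here.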

     

    Link to comment
    3 minutes ago, mgutt said:

     

    Those commands work without a reboot. You only need to add them to smb-extra.conf and execute "samba restart".

     

    But you need to replace:

    10000000000

     

    with:

    1000000000

     

    if you're using a 1G adapter.

     

    But this won't help much as ViceVersa does not use multiple threads. But it will help if you have other background connections to Unraid.

     

     

    Do you need the 1G connection? If not, don't use it. I never tested multiple adapters, but you need to add the IP of the specific adapter.

     

    Yeah, the 10 gig is just a P2P connection for one machine; I need the 1 gig for everything else.

    Link to comment
    1 hour ago, mgutt said:

    The only solution is to optimize SHFS. And as I proposed, they should start with the FUSE_MAX_PAGES_PER_REQ. I have a good feeling about that.

     

    Perhaps they will. I'm not disagreeing. I simply replied to someone who seemed annoyed by the apparent lack of developer comment in this thread.

    • Like 1
    Link to comment

    This may or may not be relevant: I switched from Unraid to Ubuntu for various reasons, among which was that I was experiencing this SMB slowdown issue. So this is using the exact same hardware but somewhat different software.

     

    I'm experiencing the exact same problem in Ubuntu that I had in Unraid: in a Windows VM <-> Linux SMB (Samba) setup, all on the same host, I see lousy read & write throughput, typically around 15 MB/s on hardware that's easily capable of >50 MB/s. Typically I see a burst of up to 200 MB/s for a few seconds, then a crash to 15 MB/s. I assume this is write-caching. It may be an entirely different issue, but it does seem to have followed me. I've just lived with the problem; in my case I'm hoping to move to VFIO at some point and bypass the whole SMB bottleneck.

     

    I don't want to confuse or sidetrack the issue, but to me this seems to suggest it's a more general upstream performance issue and not Unraid-specific. Has anyone skimmed the Ubuntu/Debian bug lists for similar performance degradation complaints?

     

    Link to comment
    23 hours ago, TexasUnraid said:

    Try it with 1,000,000+

    While the download worked without any problems, I now seem to hit your problem while uploading:

    [screenshot: 2021-02-08 19:19]

     

    If I pause the process, the smbd load immediately disappears:

    [screenshot: 2021-02-08 19:22]

     

    And resuming is as slow as before.

     

    Then I tried:

    - trimming the SSD (/sbin/fstrim -v /mnt/cache)

    - clearing the Linux page cache (sync; echo 1 > /proc/sys/vm/drop_caches)

    - restarting Samba (samba restart)

     

    Then I created a ramdisk and copied through Windows. The load of the smbd service rises while the transfer speed drops:

    [screenshot: 2021-02-08 19:40]
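    The ramdisk setup itself isn't shown; one common way to do it on the Unraid console looks like this (a sketch; size and mount point are assumptions):

    # Create a tmpfs-backed ramdisk to take the disks out of the equation,
    # then run the SMB copy against a share pointing at it.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
    # ... run the copy test ...
    umount /mnt/ramdisk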

     

    I remembered this tuning guide regarding directories that contain a huge number of files:

    https://www.samba.org/samba/docs/old/Samba3-HOWTO/largefile.html

     

    So I disabled case-insensitive name matching via "nano /etc/samba/smb-shares.conf":

    [screenshot: 2021-02-08 19:58]

     

    And yes, now the transfer and load remain stable:

    [screenshot: 2021-02-08 20:04]

     

     

    Could this be your problem? Do you have more than 10,000 files in a single sub-directory?
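    For reference, the change shown in the smb-shares.conf screenshot above presumably amounts to the settings recommended in the linked tuning guide, roughly like this per share (a sketch; the share name and path are illustrative):

    [Music]
        path = /mnt/user/Music
        case sensitive = yes
        default case = lower
        preserve case = no
        short preserve case = no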

    Link to comment

    Yes, the issue primarily presents itself when uploading files to the server; I don't download files from it very often.

     

    I don't think that any single directory has more than 10k files, generally only a few hundred per directory. There are around 500k directories last I checked, IIRC.

     

    I could have sworn that I saw an option for case sensitivity in the Unraid GUI someplace but can't find it now.

     

    I tried to update Samba with the multichannel settings but it broke my SMB connection in some way. I reverted it enough to keep it running and plan to reboot the server and do more testing later this week when I have some time.

    Edited by TexasUnraid
    Link to comment
    4 hours ago, TexasUnraid said:

    I don't think that any single directory has more than 10k files, generally only a few hundred per directory. There are around 500k directories last I checked, IIRC.

     

    OK, I changed the code to generate the 1M random files as follows:

     

    share_name="Music"
    mkdir "/mnt/cache/${share_name}/randomfiles"
    for n in {0..999}; do
        dirname=$( printf %03d "$n" )
        mkdir "/mnt/cache/${share_name}/randomfiles/${dirname}/"
    done
    for n in {1..1000000}; do
        filename=$( printf %07d "$n" )
        dirname=${filename:3:3}
        dd status=none if=/dev/urandom of="/mnt/cache/${share_name}/randomfiles/${dirname}/${filename}.bin" bs=4k count=$(( RANDOM % 5 + 1 ))
    done
    

     

    Now we get 1000 directories and each contains 1000 files.

     

     

    More tests follow...

     

    Link to comment
    2 minutes ago, RealActorRob said:

    @TexasUnraid

     

    The "Case-sensitive names" option is, somewhat strangely (to me), under the security settings for the share.

     

    Shares > click the share name, then it's there under SMB Security Settings.

     

     

    Bingo, I knew I had seen it someplace.

    • Like 1
    Link to comment




