• Slow SMB performance


    ptr727
    • Minor



    User Feedback

    Recommended Comments



    I was updating my CoreELEC media box today and had to delete ~40k thumbnails over SMB (100Mbit, not even gigabit) from the device's SD card. Even though it was working off an SD card, it still managed to delete the files almost 3x as fast as Unraid on a RAID0 cache pool manages during its first ~1000 files, before Unraid grinds to a halt.

     

    Both use Linux Samba implementations.

     

    I just found it interesting.

    Edited by TexasUnraid
    Link to comment

    That is interesting.  Can you do the following for testing?
     

    From a shell on your Unraid server do:

    1. dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.

      1. this will create a bunch of files named file.xxx 4k in size.  Be sure to keep the "/file." on the end of the path.

    2. time rm -rf /mnt/disk1/share/path/to/test/file.* 
      1. Be sure the path is accurate so you don't delete anything important!  This will return the time required to execute.
    3. Repeat these steps again, but this time in step 2 (and only in step 2) replace disk1 with user, so /mnt/user/share/path/to/test.
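
    Put together as one run, that looks roughly like this (just a sketch: the share and path are placeholders, and double-check the rm target before running it, since the wildcard deletes everything matching file.*):

        # create 2,560 files of 4 KiB each, written directly to disk1
        dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.
        # time the delete through the direct disk path
        time rm -f /mnt/disk1/share/path/to/test/file.*

        # recreate the same files, again directly on disk1
        dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.
        # time the delete again, this time through the /mnt/user (FUSE) path
        time rm -f /mnt/user/share/path/to/test/file.*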


    When I do this it takes about .3 seconds to delete directly from the disk, but when deleting from /mnt/user, deleting the same number of files of the same size, it takes 5.3 seconds.  This is from a 7200RPM SATA drive formatted with XFS.

     

    Now, I'm sure some additional latency is expected with the way unRAID references files within the array, so I don't think this is a smoking gun by any means.  But I'm curious to see what the difference is for you.

    Link to comment
    1 hour ago, _whatever said:

    That is interesting.  Can you do the following for testing?
     

    From a shell on your Unraid server do:

    1. dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.

      1. this will create a bunch of files named file.xxx 4k in size.  Be sure to keep the "/file." on the end of the path.

    2. time rm -rf /mnt/disk1/share/path/to/test/file.* 
      1. Be sure the path is accurate so you don't delete anything important!  This will return the time required to execute.
    3. Repeat these steps again, but this time in step 2 (and only in step 2) replace disk1 with user, so /mnt/user/share/path/to/test.


    When I do this it takes about .3 seconds to delete directly from the disk, but when deleting from /mnt/user, deleting the same number of files of the same size, it takes 5.3 seconds.  This is from a 7200RPM SATA drive formatted with XFS.

     

    Now, I'm sure some additional latency is expected with the way unRAID references files within the array, so I don't think this is a smoking gun by any means.  But I'm curious to see what the difference is for you.

     

    Here is the first test directly on the drive. It created the files at 22MB/s and deleted them in:

     

    real 0m0.116s

    user 0m0.008s

    sys 0m0.076s

     

    And using the same folder through the user share, it created them at 10.4MB/s and deleted them in:

     

    real 0m0.606s

    user 0m0.014s

    sys 0m0.112s

     

     

    So it's slower, but nothing that would explain the SMB performance; everything I do directly on Unraid pretty much runs at the expected speeds. The exception is Krusader, which sometimes gets locked to ~60MB/s for reasons I can't explain. If I do the copy in the console, or copy large files over SMB, it runs at full speed, so it must be something to do with Krusader.

     

    The only time I have slowdown issues is when using SMB, and then mostly only with small files (although if it copies a large file mixed in with a bunch of small files, it will sometimes go slower for some reason).

    Edited by TexasUnraid
    Link to comment

    If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem.

    Mount your SMB share under a user share (Unraid FUSE code in play), poor performance.

    Mount your SMB share under a disk share (no Unraid code), normal performance.
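
    For example, from a Linux client the two cases look roughly like this (just a sketch: tower, MyShare, the mount points and the username are placeholders, and the disk share has to be exported by the server first):

        # mounting the user share: the server-side path goes through Unraid's FUSE layer
        mount -t cifs //tower/MyShare /mnt/test-user -o username=me

        # mounting the disk share: the server-side path hits the disk's filesystem directly
        mount -t cifs //tower/disk1 /mnt/test-disk -o username=me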

    Link to comment
    13 minutes ago, ptr727 said:

    If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem.

    Mount your SMB share under a user share (Unraid FUSE code in play), poor performance.

    Mount your SMB share under a disk share (no Unraid code), normal performance.

    Interesting, how would I mount an SMB share to the disk and not the user folder?

     

    I find it interesting that if I use NFS to connect to the user share, everything works fine; that would seem to suggest it is not a FUSE issue?

     

    Also, I currently have all my shares set to only use 1 drive each, although I doubt it would make a difference.

    Link to comment

    Also, "Case-sensitive names: Auto"  makes significant performance hit with directories with many files:
     

    Quote

    Controls whether filenames are case-sensitive.

    The default setting of auto allows clients that support case sensitive filenames (Linux CIFSVFS) to tell the Samba server on a per-packet basis that they wish to access the file system in a case-sensitive manner (to support UNIX case sensitive semantics). No Windows system supports case-sensitive filenames so setting this option to auto is the same as setting it to No for them; however, the case of filenames passed by a Windows client will be preserved. This setting can result in reduced performance with very large directories because Samba must do a filename search and match on passed names.

    A setting of Yes means that files are created with the case that the client passes, and only accessible using this same case. This will speed very large directory access, but some Windows applications may not function properly with this setting. For example, if "MyFile" is created but a Windows app attempts to open "MYFILE" (which is permitted in Windows), it will not be found.

    A value of Forced lower is special: the case of all incoming client filenames, not just new filenames, will be set to lower-case. In other words, no matter what mixed case name is created on the Windows side, it will be stored and accessed in all lower-case. This ensures all Windows apps will properly find any file regardless of case, but case will not be preserved in folder listings. Note this setting should only be configured for new shares.
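
    For anyone who wants to experiment with this outside the GUI, the underlying Samba parameters look roughly like this (a sketch only: MyShare is a placeholder, and on Unraid this would normally go into the Samba extra configuration box rather than a hand-edited smb.conf):

        [MyShare]
            path = /mnt/user/MyShare
            case sensitive = yes     # skip the per-name case-insensitive directory search
            preserve case = yes      # keep the case the client used when creating the file
            default case = lower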

     

    Link to comment
    16 minutes ago, rhard said:

    Also, "Case-sensitive names: Auto"  makes significant performance hit with directories with many files:
     

     

    Yes, I already tested this. It seemed to make a minor difference, but that doesn't mean much when it is still 10-20x slower than Windows lol.

    Link to comment
    23 minutes ago, ptr727 said:

    If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem.

    Mount your SMB share under a user share (Unraid FUSE code in play), poor performance.

    Mount your SMB share under a disk share (no Unraid code), normal performance.

    I read the entire thread and saw your posts (and blog) explaining this.  I have not noticed the large performance issues others have reported; then again, I have not bothered to try a disk share vs. a user share.  I may try this to see what difference I see on my setup.

     

    However, I do think there is a problem with SMB, based on @TexasUnraid's packet captures.  I don't think it's an SMB problem directly; it's just being affected by something else under the hood, to your point, likely FUSE.  SMB is trying to verify writes but can't, which explains the "STATUS_OBJECT_NAME_NOT_FOUND" errors, which are the equivalent of "file not found".  This, I believe, is adding additional overhead and latency.

     

    9 minutes ago, rhard said:

    Also, "Case-sensitive names: Auto"  makes significant performance hit with directories with many files:
     

     

    I'm actually playing around with this now myself to see what difference, if any, I see in performance.  I typically don't deal with lots of small files, but I've generated a bunch now to play with.

    Link to comment

    If I copy a single large file 1GB in size, I can max out my client NIC at around 125MB/s.  However, if I copy a bunch of small 4K files, my transfer speeds drop to around 65KB/s.  I don't have any clients with NICs faster than 1Gbps, so I can't really test saturating my storage or the NIC on my server, and this isn't as good a test as others have done previously.  Still, that's pretty bad.  With 'case sensitive' set to true I saw no difference.

    I'm going to test copying the same files to a Windows 10 machine just for good measure.

    Edited by _whatever
    Link to comment

    Another setting worth checking, to see if it makes any difference in your tests, is Settings -> Global Share Settings -> Tunable (support Hard Links), set to No.

    Link to comment
    25 minutes ago, _whatever said:

    If I copy a single large file 1GB in size, I can max out my client NIC at around 125MB/s.  However, if I copy a bunch of small 4K files, my transfer speeds drop to around 65KB/s.  I don't have any clients with NICs faster than 1Gbps, so I can't really test saturating my storage or the NIC on my server, and this isn't as good a test as others have done previously.  Still, that's pretty bad.  With 'case sensitive' set to true I saw no difference.

    I'm going to test copying the same files to a Windows 10 machine just for good measure.

    Yep, basically the same results as what I am seeing. Now try that with ~1 million small files and watch your day go bye bye lol.

     

    When testing to a Windows machine, try using a Windows client as well; I only use Windows clients, so I'm not sure how a Linux client would fare talking to a Windows host.

    Edited by TexasUnraid
    Link to comment

    I was doing some testing on the possibility of the FUSE file system causing the slowdowns. I'm not sure how to mount a disk share via SMB, but I did finally figure out why my Krusader copies are stupidly slow sometimes and super fast at other times.

     

    Turns out if I use the /user path my speed is limited to ~50-60mb/s.

     

    Doing the exact same copy with the direct drive path, on the other hand, had things copying at over 1GB/s until the memory buffer filled, lol. Then it dropped to the expected ~120MB/s.

     

    So yes, the FUSE file system is a bottleneck for sure, although I have not been able to test it over SMB. I have no idea how to mount things manually in Linux.
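
    A way to reproduce that difference without Krusader or SMB in the picture is to time the same local write against both paths (a sketch: myshare and disk1 are placeholders, and conv=fdatasync is there so the RAM write cache doesn't hide the difference):

        # write 1 GiB through the FUSE user share path, then clean up
        dd if=/dev/zero of=/mnt/user/myshare/ddtest bs=1M count=1024 conv=fdatasync
        rm /mnt/user/myshare/ddtest

        # the same write straight to the disk the share lives on
        dd if=/dev/zero of=/mnt/disk1/myshare/ddtest bs=1M count=1024 conv=fdatasync
        rm /mnt/disk1/myshare/ddtest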

     

    If I could set up a manual share that mapped directly to the disk and get full performance, that would be an acceptable workaround for my use case.

     

    I don't really see myself letting most shares span multiple disks; there is simply no need, and it makes backups much simpler, as I have the shares split up to fit on individual disks right now.

    Edited by TexasUnraid
    Link to comment
    14 minutes ago, TexasUnraid said:

    If I could set up a manual share that mapped directly to the disk and get full performance, that would be an acceptable workaround for my use case.

     

    I don't really see myself letting most shares span multiple disks; there is simply no need, and it makes backups much simpler.

    Settings/Global Share Settings/Enable disk shares - set to Yes

    Link to comment
    19 minutes ago, TexasUnraid said:

    Turns out if I use the /user path my speed is limited to ~50-60mb/s.

    This on 6.8.3 or latest 6.9 beta?

    Link to comment
    30 minutes ago, limetech said:

    Settings/Global Share Settings/Enable disk shares - set to Yes

    I can't stop the array right now, as my trial expired this morning and I read somewhere that the trial extension can take a little time to activate? People are using the server right now. I will try changing that when I can afford some downtime.

     

    I was looking at this setting, and I'm not sure I understand how I actually create a disk share.

    
    
        If set to No, disk shares are unconditionally not exported.
    
        If set to Yes, disk shares may be exported. WARNING: Do not copy data from a disk share to a user share unless you know what you are doing. This may result in the loss of data and is not supported.
    
        If set to Auto, only disk shares not participating in User Shares may be exported.

    I don't see any options for disk shares when creating a share? Most of my shares right now are set to only use 1 disk so that must not be the answer?

    Link to comment
    30 minutes ago, limetech said:

    This on 6.8.3 or latest 6.9 beta?

    I reverted back to 6.8.3 when I saw my trial was drawing to a close.

     

    I'm pretty sure I saw this same issue with 6.9, but since I didn't figure out what was causing it until today, I can't be sure. I didn't know to test for it in particular; I previously thought it was just a permissions issue or something.

    Link to comment
    43 minutes ago, TexasUnraid said:

    I don't see any options for disk shares when creating a share? Most of my shares right now are set to only use 1 disk so that must not be the answer?

    You don't create disk shares like user shares.  As was mentioned previously by Limetech, in Global Share Settings, you just set Enable disk shares: to "Yes" if you want them to show up in the Shares tab.  If enabled, they will be listed under the user shares.

     


     

    Regardless of how many disks are assigned to a user share, if you are going through the user path (/mnt/user/[sharename]) it is going through the fuse file system. /mnt/disk# is direct access to a disk "share."

     

    The danger comes in mixing the two (copying or moving files between user and disk shares).

    Link to comment

    Ah, I see, so it just shares the whole disk. I was thinking it was like user shares but with a direct path to the disk.

     

    This might work; the main reason I'd use a disk share would be during backups, and I could set up the backup program to use the disk share while the mapped drives stay user shares.

    Link to comment

    OK, I enabled disk shares and did some quick testing today, and the performance is indeed significantly faster than using the user shares.

     

    Not quite up to Windows performance, but a whole lot better.

     

    I will have to see how things go when I do backups in a few days, that will be the true test.

     

    So yes, the issue does seem to be related to the FUSE file system. Seeing as it basically cuts performance in half even when working in Krusader, this is not really surprising.

    Link to comment
    31 minutes ago, TexasUnraid said:

    So yes, the issue does seem to be related to the fuse file system.

    This is long known.  I asked earlier whether you were using 6.8.3 or the 6.9 beta because in recent kernels extra work has gone into improving FUSE.  Also, this is via 10Gbit Ethernet, correct?

    Link to comment

    I see, in all my reading I never saw this mentioned.

     

    Sadly I didn't know to test this specifically when I was running 6.9 so nothing solid to report there.

     

    I have tested with the 10Gb card, the Intel T340 card, and the onboard Intel NIC; all give the same basic results with small files. Large files are the only time the 10Gb really gives a noticeable improvement.

     

    I will be doing a backup sometime in the next few days and that will really show how things are compared to windows.

    Edited by TexasUnraid
    Link to comment

    When copying large files to my Unraid server over gigabit Ethernet, I saturate the client NIC at around 120MB/s, and I see no difference between copying directly to disk or using a share in /mnt/user.  This is the same speed at which I can copy files to a Windows 10 share.  However, if I copy a lot of small files, I only get around 60KB/s to a share in /mnt/user, but around 120KB/s direct to a disk share.  The latter is the same performance I get when copying to a Windows 10 share.

    I know that when copying small files I should expect to see a performance hit, but that's a pretty big difference, and it's really the only way I'm able to reproduce any example of slowness.  Even though my server has a 4x1Gbps NIC configured using 802.3ad, I don't have a client with a NIC faster than 1Gbps that can get anywhere near saturating my cache drive.

     

    When doing packet captures while copying a few thousand 4k files, I notice that Unraid returns a lot of errors indicating it's waiting for an I/O operation to complete, so the client actually waits for a timeout to elapse.  I don't see these when capturing while copying the same files to/from a share on an Ubuntu box or a Windows 10 machine.  Yes, FUSE is slowing things down, but the SMB protocol is adding additional slowness that makes it more pronounced.  I've been playing around with various Samba settings but haven't had a lot of time to dedicate to it.  I need to get a capture while copying directly to a disk share to see what it looks like.
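
    For anyone who wants to look at the same thing, a capture along these lines should surface those error responses (a sketch: the interface name and file path are placeholders, and it assumes tcpdump is available on the server and tshark somewhere to read the capture):

        # on the Unraid server: capture SMB traffic to a file
        tcpdump -i eth0 -w /tmp/smb.pcap port 445

        # on a box with tshark: list only the SMB2 responses carrying an error status
        tshark -r /tmp/smb.pcap -Y "smb2.nt_status != 0x00000000" \
            -T fields -e frame.time_relative -e smb2.cmd -e smb2.nt_status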

    Link to comment

    Yep, the same basic performance I am seeing. Anything using FUSE seems to run at about half speed.

     

    SMB itself also seems to have issues; I will know more when I do the full backup, as it has to scan and sync over 1 million small files, so it's a good stress test. I will use the disk share this time around.

     

    I know the whole process used to take me about 1 hour to complete, Windows client to Windows server. So that is the baseline.

    Edited by TexasUnraid
    Link to comment

    I also have this problem. I get around 36Mbps speed directly to the cache disk share or to a user share that uses the cache.

    But when I mount /mnt/user with CloudMounter using SFTP, I get around 1300(!) Mbps. That is a 36x speed difference.

    Both tests were done from a Mac using LAN SpeedTest with a 500MB file.

    Link to comment


    35 minutes ago, Womabre said:

    I also have this problem. I get around 36Mbps speed directly to the cache disk share or to a user share that uses the cache.

    But when I mount /mnt/user with CloudMounter using SFTP, I get around 1300(!) Mbps. That is a 36x speed difference.

    Both tests were done from a Mac using LAN SpeedTest with a 500MB file.

    You really should use a much larger file for testing.  At least bigger than your RAM size.  I typically use 50-100GB files for testing.
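
    For example, something along these lines on the server will produce a suitable test file (a sketch: the path and size are placeholders; the point is just to exceed the server's RAM so the write cache can't absorb the whole transfer, and urandom keeps the data incompressible):

        # generate a 50 GiB file to use for SMB transfer testing
        dd if=/dev/urandom of=/mnt/user/myshare/bigtest.bin bs=1M count=51200 status=progress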

    Edited by StevenD
    Link to comment




