SMB browsing extremely slow, have tried caching and RSS tuning...



I am running the latest unRAID, 6.10.3, with Folder Caching installed and active, and with cache_pressure set to 1.
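For anyone wanting to verify that: as I understand it, the plugin's cache_pressure setting maps to the standard Linux vm.vfs_cache_pressure sysctl, so it can be checked from the unRAID console. A minimal sketch, assuming a stock shell:

# Show the current value; 1 tells the kernel to strongly prefer keeping
# dentry/inode caches in RAM rather than reclaiming them.
sysctl vm.vfs_cache_pressure
# What the plugin's setting of 1 should amount to:
sysctl -w vm.vfs_cache_pressure=1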

 

I have a few directories/shares that have thousands of files at the top level. Formerly these shares were on a Windows Server box; SMB browsing of the directories was a bit slow in Windows Explorer, but fine after indexing. From the command line, after a single file listing (presumably caching the file structure), access was nearly instant.
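The equivalent test on the unRAID side would look something like this (hypothetical share named media):

time ls -f /mnt/user/media > /dev/null   # first run: walks the disks
time ls -f /mnt/user/media > /dev/null   # repeat: should be served from the dentry cache

(-f skips sorting, so the timing reflects the enumeration itself.)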

 

This is not the case on unRAID. I have batch scripts that need to list the files in these directories nightly, and listings are somewhere between 200x and 1000x slower, despite the Folder Caching plugin, a 10G-SFP connection, local access, and RSS support being enabled (along with a client reboot and a server samba restart).
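To take Explorer overhead out of the equation, the same listing can be timed over bare SMB with smbclient; a sketch with hypothetical server and share names:

time smbclient //tower/media -N -c 'ls' > /dev/null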

 

Checking the Folder Caching logs, the plugin doesn't seem to be doing much: it just repeats an "Executed find" message every ten seconds or so, even while a client machine requesting a directory listing hangs for several minutes:

 

Quote

2022.08.07 16:21:56 Executed find in (0s) 00.18s, wavg=00.18s Idle____________  depth 9999 slept 10s Disks idle before/after scan 9998s/9998s Scan completed/timedOut counter cnt=14694/14695/0 mode=4 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU= 5%, filecount[9999]=202851
2022.08.07 16:22:07 Executed find in (0s) 00.17s, wavg=00.18s Idle____________  depth 9999 slept 10s Disks idle before/after scan 9998s/9998s Scan completed/timedOut counter cnt=14695/14696/0 mode=4 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU= 5%, filecount[9999]=202851
2022.08.07 16:22:17 Executed find in (0s) 00.18s, wavg=00.18s Idle____________  depth 9999 slept 10s Disks idle before/after scan 9998s/9998s Scan completed/timedOut counter cnt=14696/14697/0 mode=4 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU= 4%, filecount[9999]=202851
2022.08.07 16:22:27 Executed find in (0s) 00.18s, wavg=00.18s Idle____________  depth 9999 slept 10s Disks idle before/after scan 9998s/9998s Scan completed/timedOut counter cnt=14697/14698/0 mode=4 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU= 4%, filecount[9999]=202851
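For context, those "Executed find" lines appear to be the plugin literally re-running find over the cached trees to keep the directory entries warm. Running the equivalent by hand (hypothetical path) shows whether the walk itself is fast:

time find /mnt/user/media > /dev/null   # sub-second here means the cache is warm, yet SMB listings still crawl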


 

I applied these fixes without much change:

 

 

Any suggestions? Screenshots of my Folder Caching settings are uploaded along with Diagnostics.

 

[Screenshot: Folder Caching plugin settings]

 

And heck, while you're here: if you can look into why I can't bind my Mellanox 10G-SFP card to eth0, have a look at this thread, which has been dying a slow death:

 

 

morra-diagnostics-20220807-1625.zip

Link to comment
4 minutes ago, JorgeB said:

User shares can be very slow with many small files; if it's an option, make those shares use a single disk, then use a disk share instead of a user share.

 

I might try that in the future, but I'd probably just switch back to Windows if I were going to remigrate the data. Is there any way to cache the folder structure in RAM (I have 40 GB available) to reduce this issue? Is the Folder Caching plugin supposed to do this?
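As far as I can tell, that is exactly what the plugin attempts: it just loops find over the shares so the kernel keeps the directory entries in RAM. A crude DIY equivalent, with a hypothetical share path, would be a cron entry like:

*/5 * * * * find /mnt/user/media > /dev/null 2>&1

The catch, judging by the log above, is that the cache can be warm and SMB listings still crawl, so the bottleneck seems to be elsewhere.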

Link to comment

You might also try this setting in Windows Explorer. It tends to eliminate a lot of overhead compared with the other selections; Windows does a lot of file reading to compile the data for those choices. Begin by opening the server in Windows Explorer, then right-click on the share with the issue.

 

[Screenshot: Windows Explorer view setting for the share]

Link to comment
12 hours ago, Kyle Boddy said:

Checking the Folder Caching logs, the plugin doesn't seem to be doing much: it just repeats an "Executed find" message every ten seconds or so, even while a client machine requesting a directory listing hangs for several minutes:

 

I assume that you are talking about the Dynamix Cache Directories plugin. As I understand it, it is a mixed bag (and I do use it...). My experience is that you only want to cache a minimal number of files and folders. The RAM used by the plugin script (I understand that this plugin is a shell script) has a very low priority, so it is likely to be reclaimed whenever another process requests memory. As an example, on my media server I only cache the media folders, which means I only have about 3,000 items in the cache. That seems to work without many issues.

 

(When I go exploring inside of, say, the backup folders, there are occasional substantial delays while disks spin up and the file/directory information is transferred.  The more files and folders in a directory, the longer the delay.) 

Link to comment
5 hours ago, Frank1940 said:

You might also try this setting in Windows Explorer. It tends to eliminate a lot of overhead compared with the other selections; Windows does a lot of file reading to compile the data for those choices. Begin by opening the server in Windows Explorer, then right-click on the share with the issue.

 

[Screenshot: Windows Explorer view setting for the share]

 

Good advice. I do have this set, along with other client-side settings to improve access speed, and it's still 200x+ slower on unRAID.

Link to comment

I'm also experiencing slow SMB browsing, mainly in Kodi when doing a library scan for updates. I noticed this after switching off SMBv1 and moving to SMBv3. As a test I enabled an identical NFS share, and the media scanning was drastically faster: maybe 20 seconds vs. 1-2 minutes. However, I experienced weird audio-popping issues on the NFS share and switched back to SMB. Going back to SMBv1 isn't an option, since it's a dated and insecure protocol. I am able to watch movies, so raw transfer speed doesn't seem to be the issue.

 

Some more info: there are 127 movies in individual folders and 18 TV shows in individual folders, so not a ton.

 

Why is SMB so slow when NFS browsing / listing is near instant?
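For anyone who wants to reproduce the comparison from a Linux client, a sketch with hypothetical host and share names:

mkdir -p /mnt/nfs_test /mnt/smb_test
mount -t nfs tower:/mnt/user/media /mnt/nfs_test
mount -t cifs //tower/media /mnt/smb_test -o guest
time ls -f /mnt/nfs_test > /dev/null
time ls -f /mnt/smb_test > /dev/null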

Link to comment

From what others have said and what I've had to piece together (general support has not been very good with unRAID, I might add), this seems like an intractable problem with the FUSE filesystem that unRAID uses. I'll likely be switching back to Windows Server 2019, which is unfortunate.

Link to comment

Hi, I'm in the process of migrating to (or maybe away from, xD) unRAID because of this.

 

I'm noticing the same problems you are, but I don't really get it. I'm assuming a lot of people here use SMB, right? It isn't some unknown, niche, new technology, so how has no developer, or Limetech in general, cared to fix this throughout the entirety of unRAID's existence?

 

I'm of the mind that I MUST be doing something wrong, given how poor the performance is.


We can watch it in real time: the 100-1000x slower performance of unRAID (left) vs Windows Server (right).

 

[GIF: directory listing speed, unRAID user share (left) vs Windows Server (right)]

 

And then we can see the issue does not exist with disk shares, unless of course you dare use the unRAID user shares at the same time:

[GIF: the same listing against a disk share]

 

It's so bad that it even 'infects' the fast disk1 view; once the user shares' SMB exports are stopped, it instantly jumps back to being performant and in line with every other major implementation used by other providers...

 

I'm hoping someone will reply with 'damn pops, you are just dumb, you click these 5 buttons and magically the performance you expect is granted', but I'm not seeing it...

 

Are you also noticing similar things? Is everybody experiencing this? How the hell are you all happy with this? o__O

 

Link to comment

Shrug. I'm already planning on moving back to Windows Server 2019 because of the FUSE filesystem (primarily) and unRAID's generally poor support on this, as well as other issues (like my Mellanox thread).

 

It's insanely slow for large numbers of files. I only wish I had figured that out before I installed unRAID, paid for a license, and spent 3 weeks migrating data + configs. It's a shame, because I love the dashboard, the Docker ecosystem, and the tooling, but the core product is extremely flawed for this use case.

Link to comment

Haha, I'm in your exact situation (though I migrated away from a Windows Server 2019 install). I knew going in that SMB would be bad, but I wasn't prepared for just how bad it is.

 

I've pretty much reached the same conclusion as you (fortunately I haven't paid for a licence yet). I'm now trying desperately to think of how I can keep unRAID in the mix, but nothing springs to mind.

 

It's clear the SMB user shares aren't fit for purpose. So, given the developers' reluctance to fix this bug, what could we do? Part of what would be needed is something that takes the disk views and merges them into a single, unified directory; I wonder if something like that could be accomplished by a plugin.
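One conceivable approach (an untested idea, to be clear): a union mount such as mergerfs over the individual disks, exported over SMB in place of /mnt/user. A sketch with hypothetical mount points:

mergerfs -o defaults,allow_other /mnt/disk1:/mnt/disk2 /mnt/merged

The caveat is that mergerfs is itself FUSE-based, so it might just reintroduce the same overhead shfs has.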

 

I also tried that folder caching script. It worked (in that I could spin down a disk and still read the directories) but was plagued by the exact same issues... oh, I never tried the disk view with it; let me do that now.

 

[GIF: disk share listing with the folder caching script active]

 

Nope, still all the same.

Link to comment

What I found kind of interesting was that if you access via SFTP, it's super fast. This video shows cached results (I think), as the first time round it didn't perform as fast, though it wasn't far off.

 

All of this was done with my drives spun down, too.

[GIF: directory listing over SFTP with drives spun down]

 

I don't know what the FUSE filesystem they use governs and what it doesn't, but we can clearly see from the SFTP results that not everything it touches is slow. I don't like to go down without a fight, so maybe we can come up with a solution that circumvents it.

 

 

So I recorded more footage to get a feel for what is happening, or at least for what we can observe.

 

[GIF: further SFTP listing tests]

I thought maybe SFTP would only be fast when using the disks directly, but no: it's clearly just as fast when used from /mnt/user or /mnt/user0 (I believe this is the same as going through the shares).

[GIF: SFTP listing from /mnt/user and /mnt/user0]

Utilisation spikes quite a bit for what it is, considering that throughout all of this my drives were, and remained, spun down. This is basically just reading from RAM (or should be).

 

I looked at what happened in netdata during that time; since it's basically just this one operation, it's pretty obvious:

[GIF: netdata graphs during the SFTP test]

 

And then I did the same again whilst running the user shares over SMB:

[GIF: netdata graphs with user-share SMB running]

 

So yeah, I'm not really sure where to go from here...

 

 

OK, I returned to this idea after an hour of brainstorming. We know SFTP is great, right? So let's just do it live with SFTP:

 

https://github.com/winfsp/winfsp

https://github.com/winfsp/sshfs-win

 

When I mounted it as a network drive it didn't work at first, but the documentation mentions you can change the home location by adding .r, which let me in:
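For anyone following along, the mapping ends up looking something like this (hypothetical host and user; per the sshfs-win docs, the .r suffix makes the path start at / rather than the user's home directory):

net use X: \\sshfs.r\root@tower\mnt\user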

 

[GIF: mapping the share with sshfs-win]

 

[GIF: browsing the sshfs-win mount]

 

Averaging roughly 100 files per second or more. NICE.

No shitty overhead either.

 

But I'm sure we'll run into a bunch of edge cases. I mean, it's not like this will just work, right?

[GIF: small-file transfer over the sshfs-win mount]

 

Well, I'll be damned...

 

So what about network streaming of audio/video? Surely that won't be the same?

 

[GIF: video streaming over the sshfs-win mount]

 

Looks great!

 

Let's return to those small files: uploading them was fine, but maybe it'll be super slow when downloading?

[GIF: downloading small files over the sshfs-win mount]

Nope, great at 100+ files per second. The only bad thing was that during the 'discovery' period of the transfer, the video I had playing in the background (located on disk1, whereas the small-file tests live on the cache SSD) started to stutter. Only during the discovery period, though; once it was actually transferring, video playback returned to normal.

 

I'm pretty pleased with how this all panned out. I think this solution is a viable one for Windows users, and I can go somewhere from here at least. I'll do more testing later on, though.

 

Limetech, I wouldn't say no to a free key for all this testing and solution-based strategy :D

 

 

Link to comment

I'm using Unraid on a QNAP device, with some strange performance issues (slowness listing directories and running the find command on shares).
At one point I did some SFTP tests and also saw better performance compared to SMB access.
I really think sftpgo should be integrated into Unraid. I tried the Docker version, but it would be best to have it integrated.
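For reference, a minimal way to try the Docker version looks something like this (image name and default ports per the sftpgo docs; the host path is just an example):

docker run -d --name sftpgo \
  -p 2022:2022 -p 8080:8080 \
  -v /mnt/user:/srv/sftpgo/data \
  drakkan/sftpgo:latest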

Link to comment
  • 3 weeks later...

FWIW, I upgraded from 6.9.2 to 6.10.3 and see a fairly dramatic drop in SMB performance, similar to others. @limetech Can someone please do some checks from unRAID compared to other forms of access? I have a simple directory with 300 photos in it, and it can take 15 seconds just to load the directory listing. I'm using the Dynamix Cache Directories plugin as well. I can understand that shfs has some overhead, but such a huge order-of-magnitude difference in performance is really aggravating and should really be looked at.

 

Link to comment

Yep, it's a shame that this bug is completely ignored. Not sure why it is.

 

I made a bug post here:

 

But it seems it's completely ignored. unRAID's implementation of SMB is effectively non-functional; it's not even worth considering using it, it's absolute garbage.

There are some other options you can use, like SFTP or NFS. Those are slower than a properly working SMB (sometimes significantly), but passable. NFS is probably the best in terms of general directory browsing; it only takes 15 seconds or so to list my 75k-file directory using it.

 

For reference, I can instantly (in 1-2 seconds) view a directory with 500k files in it using Windows Server 2019 SMB.

 

Here are some random benchmarks I did:

 

===================================
Windows Server 2019 benchmark
13,238 Files, 116 Folders / 1.71 GB (1,845,219,328 bytes)

Transfer FROM pc TO server

Windows Server 2019
1 minute 45 seconds
delete @ ~700 files per second

Windows Server 2019 + DrivePool (a many-drive pooling solution; FUSE equivalent?)
1 minute 56 seconds
delete @ ~572 files per second

Transfer TO pc FROM server

Windows Server 2019
1 minute 22 seconds

Windows Server 2019 + DrivePool
1 minute 35 seconds
===================================
NFS unRAID benchmark
13,238 Files, 116 Folders / 1.71 GB (1,845,219,328 bytes)

Transfer FROM pc TO unRAID server
28 minutes 51 seconds (it started off really fast and progressively got slower; I bet if you did it in batches of 1k it'd remain fast throughout)
delete @ ~52 files per second (increasing over time to ~130 files per second)

Transfer FROM unRAID server TO pc
2 minutes 8 seconds
===================================
SSHFS / SFTP unRAID benchmark
13,238 Files, 116 Folders / 1.71 GB (1,845,219,328 bytes)

Using /mnt/cache_nvme (this should bypass shfs, according to some posts, and give disk-level speed)

Transfer FROM pc TO unRAID server
3 minutes 37 seconds

Transfer FROM unRAID server TO pc
1 minute 45 seconds
===================================

 

Link to comment
  • 4 months later...
  • 1 month later...
  • 4 weeks later...
On 3/13/2023 at 2:37 PM, trott said:

I added the following to my Samba extra config; it helps a lot:

ea support = no
store dos attributes = no

 

Hi there Trott,

Any chance you can expand on your steps to improve the SMB speed, please? Is this done on the Windows PC or on Unraid?

 

Thanks in advance :)
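From what I can gather while waiting for a reply: those two lines are server-side smbd options (ea support controls extended-attribute support, and store dos attributes makes Samba keep DOS attributes in xattrs, which costs extra metadata lookups per file), so they belong on the Unraid box, not the Windows PC. On Unraid they would normally go under Settings > SMB > SMB Extras, which I believe writes /boot/config/smb-extra.conf:

[global]
   ea support = no
   store dos attributes = no

Restart Samba afterwards for the change to take effect.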

Link to comment
