Lots of small files VERY VERY slow over SMB with unraid?



Ok, I've dealt with this since I started testing unraid but thought it was just because I was using an NTFS drive in UD and that was causing issues. I just finished adding my first drive to the array and transferring everything over. I am NOT using a parity drive right now.

 

I have ~1 million small files that have to be scanned and synced on a regular basis.

 

When these files were on a basic windows share the backup process would take ~1 hour start to finish for all the systems in the house. Pretty easy weekend morning project before people got up and needed their systems.

 

It is taking it 10 minutes for a small test folder with ~5k files to delete / copy / sync everything.

 

If I do this to a windows machine share, it takes less than a minute to do the same thing.

 

Is this normal for unraid? Is there a way to speed this up? When I did a full backup last week before adding the drive to the array, it took all day to do what used to take an hour. That ruins the primary use of the server.

 

Why would it be so slow? Raw throughput is fine, I can max out the gigabit connection just fine on large files.

 

EDIT: I am testing another folder now and it is going even slower. Normally it rolls through the files so fast I can hardly see them (several hundred files a second), but it is literally taking 1-2 seconds per file on the unraid share just to delete the old files before copying the new ones????!!??

 

To copy 2,000 small files to unraid (I tried both SSD cache and HDD array) took over 5 minutes.

 

To copy the same 2,000 files to a windows computer took 30 seconds.

 

This can't be normal? Once again, no parity drive is being used.
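For anyone wanting to reproduce this, a rough sketch of the kind of small-file test described above (paths and file count are placeholders; point DEST at an SMB-mounted unraid share to see the per-file overhead):

```shell
#!/bin/bash
# Sketch: measure per-file overhead by creating, copying, and
# deleting a batch of tiny files. SRC/DEST are temp dirs here --
# in a real test, DEST would be a mounted SMB share.
COUNT=2000
SRC=$(mktemp -d)
DEST=$(mktemp -d)   # e.g. a mount point like /mnt/smb-test

for i in $(seq 1 "$COUNT"); do
    head -c 1024 /dev/urandom > "$SRC/file_$i.bin"
done

time cp "$SRC"/* "$DEST"/      # copy phase
time rm -f "$DEST"/*           # delete phase

rm -rf "$SRC" "$DEST"
```

On a healthy gigabit link even a few thousand 1 KB files should finish each phase in well under a minute; minutes per phase points at per-file protocol overhead rather than raw throughput.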

Edited by TexasUnraid
Link to comment

given that you don't have a parity drive yet... I will note that having the files local on your computer vs. having them on a network drive are two different things... things to consider are bandwidth... wireless/wired connection... SSD vs. HDD... and protocol... these are a few things to consider when trying to figure out what is normal or not...

 

depending on your needs... perhaps a normal RAID might be a better fit... unraid writes each file to a single disk, i.e. slower on the read and write... a normal RAID stripes files across many disks... so when you go to access a file you have 2+ HDDs reading it (roughly twice as fast), but also twice the cost for 2 drives vs. 1...

 

just some things to consider when choosing your storage solution

Link to comment
3 minutes ago, mathomas3 said:

given that you don't have a parity drive yet... I will note that having the files local on your computer vs. having them on a network drive are two different things... things to consider are bandwidth... wireless/wired connection... SSD vs. HDD... and protocol... these are a few things to consider when trying to figure out what is normal or not...

 

depending on your needs... perhaps a normal RAID might be a better fit... unraid writes each file to a single disk, i.e. slower on the read and write... a normal RAID stripes files across many disks... so when you go to access a file you have 2+ HDDs reading it (roughly twice as fast), but also twice the cost for 2 drives vs. 1...

 

just some things to consider when choosing your storage solution

Yep, all considered and taken into account with the tests above. Also been doing these same backups weekly for years, so pretty well versed on what to expect by now lol.

 

Running a test with a few thousand files to a LAN based windows HDD share on another computer = ~30 seconds

 

Doing the exact same test to the raid0 SSD cache on unraid = over 5 minutes

 

Deleting files in particular goes stupidly slow on unraid, literally 1-2 seconds per ~10 KB file.

 

I can't explain it.

Link to comment

Well, just ran another test. I decided to enable NFS and try that to see if the results were any different.

 

Turns out I was on the right track. Speed over NFS is basically what I would expect, pretty close to the windows SMB transfer speed (Would need to do a much larger transfer to see the real differences).

 

I don't like NFS though since the security seems really weak. Is there a way to password protect an NFS share like SMB?

 

So the issue is with the SMB implementation in unraid and not the filesystem etc, that much I have narrowed down.

Link to comment

my array has 8 disks with 2 cache and 1 parity... the bottleneck in my system is the 1 parity drive... and I have never had that much of a lag in deleting files...

 

you might try deleting those files from the unraid box itself and see if the lag persists... I wouldn't expect it to...

 

you can open up a console in the web browser and use basic commands to navigate to your share...

 

normally it would be something like

cd /mnt/user/

ls (lists the contents of your current directory)

cd (name of your directory/share)

rm -rf ./* (please note that Linux expects you to know what you're doing... you can delete some important files... and it will let you...)

 

that can give you an idea how fast things can be deleted in your current setup... given that it goes well there... I would explore How you are accessing the share with your windows computer... your issue might be there...

 

 

Link to comment
4 minutes ago, TexasUnraid said:

Well, just ran another test. I decided to enable NFS and try that to see if the results were any different.

 

Turns out I was on the right track. Speed over NFS is basically what I would expect, pretty close to the windows SMB transfer speed (Would need to do a much larger transfer to see the real differences).

 

I don't like NFS though since the security seems really weak. Is there a way to password protect an NFS share like SMB?

 

So the issue is with the SMB implementation in unraid and not the filesystem etc, that much I have narrowed down.

I haven't done NFS with unraid... I would be of little use there

Link to comment
15 minutes ago, mathomas3 said:

my array has 8 disks with 2 cache and 1 parity... the bottleneck in my system is the 1 parity drive... and I have never had that much of a lag in deleting files...

 

you might try deleting those files from the unraid box itself and see if the lag persists... I wouldn't expect it to...

 

you can open up a console in the web browser and use basic commands to navigate to your share...

 

normally it would be something like

cd /mnt/user/

ls (lists the contents of your current directory)

cd (name of your directory/share)

rm -rf ./* (please note that Linux expects you to know what you're doing... you can delete some important files... and it will let you...)

 

that can give you an idea how fast things can be deleted in your current setup... given that it goes well there... I would explore How you are accessing the share with your windows computer... your issue might be there...

 

 

Yep, tried this early on; everything is fine when working directly on the unraid box, maybe a bit slower than windows but nothing worth talking about.

 

The issue only presents itself over SMB as far as I have noticed. The first few hundred files also delete as expected, then it screeches to a halt at 1-2 seconds per file from there on out.

 

Netdata doesn't have anything that stands out to me as an issue but also not really sure what to look for.

 

The fact NFS works as expected confirms the issue is samba related since that rules out hardware / network / filesystem issues.

Edited by TexasUnraid
Link to comment

You might want to experiment with turning on case sensitive names on your SMB share and see if it improves performance for your use case. It can be found in a share's settings under SMB Security Settings.
 

By default, directory/file names are case insensitive on SMB, but since Linux uses case sensitive file systems, the SMB server has to emulate the case insensitive behaviour; as the number of entries in a directory increases, that overhead grows significantly.
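For reference, the GUI toggle maps to Samba's per-share case handling options in smb.conf. A sketch of roughly what the setting changes (the share name here is a placeholder; unraid manages this file itself, so the GUI is the supported way to set it):

```
[backups]
    case sensitive = yes
    preserve case = yes
    short preserve case = yes
```

With `case sensitive = yes`, Samba can pass names straight through to the filesystem instead of scanning the directory for a case-insensitive match on every lookup, which is where the per-file cost in large directories comes from.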

Edited by remotevisitor
Link to comment

I have done quite a few more tests, and it doesn't seem to matter what method I use to interact with the files: if it goes over SMB, it is painfully slow for small files.

 

If it goes over NFS, it matches windows SMB performance which is 10-20x faster.

 

So for the time being I am thinking the best option is to enable NFS for backups and then disable it when they are done, as NFS is far from secure. This is not ideal, but it would at least get things working until a proper fix is found.

 

Can NFS be enabled / disabled from a script? I run small automated backups daily and would like NFS to be enabled automatically while these take place.

 

Then just manually enable NFS when I do the weekly backups.

 

Link to comment

I found the command for enabling or disabling NFS from a script:

/etc/rc.d/rc.nfsd start|stop|restart|status

Looks like if I leave the GUI set to off, that will be the default state. All the share settings are saved though, so if I manually turn NFS on it seems to work fine.

 

So I can set it to turn on for a daily backup and turn back off again to minimize the security hole, although it is still a gaping hole from what I am reading online.

 

Is it possible to set the NFS "home path" to a folder inside a share? So that only the folder I need to back up to would be exposed and anything below it would not be accessible? Changing my entire directory structure is not really practical for this.

Edited by TexasUnraid
Link to comment
