Improving Unraid Performance with Many Small Files



I purchased Unraid Plus for an HP ProLiant P4300 G2 server with dual quad-core CPUs, 32GB RAM, six 6Gb/s SAS disks, and a 500GB NVMe cache that tests at around 800MB/s. I installed a 10GbE NIC and have it directly connected to an iMac Pro's 10Gb interface. The server is set up with a bridged interface so the iMac can access the rest of the network. With iperf between the iMac Pro and the NAS I can hit 7Gb/s. Not quite 10Gb, but decent performance. (Maybe it's slow because I'm using a bridge interface?)

 

I'm using SMB shares that I have mounted on macOS and pretty much everything is peachy ... except that I'm often very disappointed in the performance I'm seeing in some scenarios. I can transfer large files at decent speeds, about 400 Megabytes per second. I can copy a 16GB movie in a minute or so. Awesome.

 

But not everything is awesome... if I copy a folder with a lot of smaller files over the network, I see dismal performance. A backup with a lot of text files and documents took 3-4 hours to copy 2GB. I copied a 70GB collection of photos and it took several hours. I'm not sure if it's related, but on the iMac Pro a Time Machine backup is taking over a week to back up 500GB of data, while my MacBook Pro, connected over Gigabit (not even 10Gb!), did this in about five hours. That might be something else entirely that's specific to this desktop.

 

Obviously there's more overhead when copying a lot of small files versus streaming one big one, TCP sliding window size and protocol overhead and so on, but this seems like an outrageous discrepancy.

 

This raises the question: is there anything I can do to improve performance with these smaller transfers? I've already got jumbo frames turned on both on the Unraid box and the iMac Pro, and I'm just using SMB for most things. I considered something like iSCSI, but I'm not sure whether it performs better; I know virtually nothing about it, and I don't think Unraid supports it anyway.

 

For doing these large backups or initial copies it almost seems like it would be faster to just TAR everything and transfer big files over to the NAS. But are there any performance optimizations that I might be missing, protocols I could try, or any other ways of connecting the NAS that might be faster for these sorts of tasks? I feel like something isn't right here, I'm getting performance that seems like a fraction of what I should get on this beast of a configuration with a 10Gb network connection and NVMe SSD cache.
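For example, the tar-first approach might look something like this (the folder name and share path here are just placeholders, not my actual setup):

```shell
# Bundle a folder of many small files into a single archive, then copy that
# one big file to the NAS instead. "Photos" and the share path are placeholders.
cd "$(mktemp -d)"
mkdir -p Photos && touch Photos/img1.jpg Photos/img2.jpg   # stand-in data
tar -czf Photos.tar.gz Photos
# cp Photos.tar.gz /Volumes/share/   # one big sequential transfer instead of many small ones
```

The per-file protocol round trips happen locally during the tar step instead of over SMB, which is exactly where the overhead seems to be.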

 

I'm really enjoying the Unraid OS but would like to see if I could improve on the speed of these smaller transfers. Does anyone have pro-tips or thoughts on things to check or try to make things a bit faster? Advice would be much appreciated!

 

 

Link to comment
16 hours ago, Supacon said:

A backup with a lot of text files and documents took 3-4 hours to copy 2GB. I copied a 70GB collection of photos and it took several hours.

That's very far from normal, but I don't have a Mac. There have been other reports of slow SMB performance with them; if you google "mac slow smb" you'll find lots of hits, and might also find something that helps.

Link to comment

Good advice; in hindsight it should have been more obvious that something Mac-specific was happening.

 

I did find this article which talked about one thing that helped a fair bit:

http://www.techkaki.com/slow-samba-file-copying-speeds-in-mac-os-x/

I turned off TCP delayed ACK on the offending Mac with the following command:

sudo sysctl -w net.inet.tcp.delayed_ack=0

This resets on reboot, so to make this permanent /etc/sysctl.conf needs the line

net.inet.tcp.delayed_ack=0

Performance increased significantly. Copying a folder with 1GB of photos went from taking around 90 seconds to 50, so that's not insignificant, but still seems suboptimal. I'll see what else I can dig up.

 

Incidentally, I had actually found and tried this solution before but saw no difference; in that case, though, I was trying to see if it could improve iperf3 performance (to get from 7Gb/s to 10Gb/s). My chief issue is with small files, and it makes sense that this setting would affect many small transfers more than one big one.

Edited by Supacon
Link to comment
7 minutes ago, Supacon said:

This resets on reboot, so to make this permanent /etc/sysctl.conf needs the line

/etc resets on reboot too. All the usual Linux OS folders are unpacked fresh from the archives on the flash drive into RAM, and the OS runs completely in RAM.

 

The usual method to get something like this to persist is a script that runs at startup using the User Scripts plugin.
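A minimal sketch of such a startup script (the setting shown is just a placeholder example, not a recommendation, and the demo writes to /tmp so it can be run harmlessly):

```shell
#!/bin/bash
# Example User Scripts "At Startup of Array" script. Unraid unpacks /etc into
# RAM on every boot, so any persistent tweak must be re-applied at startup.
CONF=/tmp/sysctl-demo.conf                     # a real script would edit /etc/sysctl.conf
echo 'net.core.rmem_max=16777216' > "$CONF"    # placeholder setting, not a recommendation
sysctl -p "$CONF" 2>/dev/null || true          # apply (requires root on a real box)
cat "$CONF"
```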

Link to comment

Hmm, interesting. I set "Case-sensitive names" to "Yes" for one share and tried my 1GB photo copy (which was only 350 files) and this copy job went down to 40 seconds. So that was a promising but slight improvement.

 

I then tried an 800MB folder of emails (many tiny files) and this wasn't too promising... parts of the copy job almost ran at oldskool dial-up speed, measured in KB/sec. In this case I'd probably be better off just zipping it up first. Copying this folder around between SSDs on my Mac is practically instant, taking maybe 10 seconds max.

 

Edit: Tried again with a 100MB subset of this data and it took 7 minutes. Pretty lame.

The same copy on a low-end Windows laptop over Gigabit took 3:40, so nearly twice as fast with slower hardware and network speed.
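For scale, 100MB in 7 minutes works out to an effective rate of (quick arithmetic check, using the numbers above):

```shell
# 100 MB copied in 7 minutes, expressed in KB/s
awk 'BEGIN { printf "%.0f KB/s\n", 100 * 1024 / (7 * 60) }'   # prints: 244 KB/s
```

Versus the roughly 400 MB/s I was seeing on large files, that's well over a thousand times slower.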

Edited by Supacon
Link to comment

I just attempted another change from this article, which discusses how to disable the "client signing" requirement; apparently this can speed things up if you aren't too concerned about security.

https://mackonsti.wordpress.com/2016/12/21/speed-up-smb-transfers-el-capitan-mac-os-10-11/

 

Doing this involves using the following command on the Mac:

printf "[default]\nsigning_required=no\n" | sudo tee /etc/nsmb.conf >/dev/null

 

Although I couldn't see any difference with smbutil statshares -a, trying my 1GB photo test again managed to transfer in 22 seconds, which is way faster than before. The 100MB small file test took 6 minutes. It seems there are some slight improvements, but still not close to where it could be.

 

 

Edited by Supacon
Link to comment

I did more testing, this time comparing Unraid's AFP implementation to SMB.

 

For my 120MB small file test, AFP transferred the files in 3:00 and SMB did the transfer in 6:43. Quite different results!

The 1GB photo transfer took 20–70 seconds over AFP and 30–60 seconds over SMB.

Finally, a 10GB movie transfer took 63 seconds over AFP and 37 seconds over SMB.

 

It seems that AFP has some performance advantages here for smaller files, but performs worse for large transfers. Perhaps I'll start using that instead of SMB when I know I'm transferring lots of small files... (despite the warnings about it being deprecated/obsolete). This isn't a terribly satisfying solution, but it's an option.

Edited by Supacon
Link to comment

Another problem I've been seeing sometimes is that when I'm writing a lot of files, the copy job just "freezes": nothing is written for a while, and all writing and network activity pauses, seemingly for no reason. I can't think of why this would happen. I used iStat Menus to show a graph of network utilization during these times. I wonder if the server is busy for some reason, but it shouldn't be doing anything else intensive, and it has 8 CPU cores, 32GB of RAM, and a very fast SSD. So weird.

 

Using Dynamix System Statistics I'm not seeing substantial CPU use, nor any disk activity or network activity during this time, so I don't think it's even doing parity activity or anything of the sort.

[Screenshot, 2020-04-03: Dynamix System Statistics during one of these pauses]

Edited by Supacon
Link to comment

Well... this just gets worse and worse. Apparently whatever I have done or am doing is causing even more issues.

 

First, I’m noticing that I basically never can hit 400MB/s anymore like I used to... at best I can hit 200MB/s, but usually much less than that, and only on very large files over a few GB.

 

Significantly worse than that, however, is that my iMac basically just craps out after copying anything for a while to the point where no network transfers at all work, and the only way to get Finder responding is to reboot the whole machine. This happens almost every time. It’s pretty miserable. The transfer just gets more and more sporadic, taking longer breaks between little spurts of transferring files until finally it stops altogether. If I’m lucky it might error, but more often Finder is completely frozen and I can’t even just restart Finder to continue on.

 

I’m going to have to probably start over and undo any changes I made to see if I can get this working reliably again. I’ll start by turning AFP back off in Unraid and maybe resetting the SMB security and ACK delay settings. I’m pretty unhappy with how badly this is going right now :(

Link to comment

Following up on my last issue with the Mac becoming unresponsive and having long pauses in the middle of transfers: one change that seems to have helped was turning off "Enhanced Mac Interoperability" in the Settings->SMB section and starting the array again.

 

I’m not sure if this was a fluke or not but things seemed to go way more smoothly with this off. I’m not necessarily noticing speeds that are different in general, but not having to reboot my Mac every time I try doing a big transfer is a welcome improvement. I will update this thread if I learn more about this, continue to see these issues, or find that this setting isn’t what made the difference.

 

The drawback to this change is that now I won’t be able to use Time Machine (which didn’t work well on my iMac Pro, but somehow worked quite well on my MacBook Pro).

Link to comment
  • 2 weeks later...

I have a problem with small files too. According to my investigations, this is an unRAID SHFS problem, not an SMB one. You can try creating a share on an unassigned SSD drive and comparing it to a normal share (even a cache-only one). In my case, it is much faster. It would be very interesting to hear about your experience. I like unRAID as a media server, but it is almost unusable as network storage for small-file projects.
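A crude way to compare, run from a shell on the server itself (the destinations here are temp directories so the snippet is self-contained; substitute your real /mnt/user/&lt;share&gt; and /mnt/cache/&lt;share&gt; or unassigned-device paths to test SHFS vs a direct disk):

```shell
# Generate 500 tiny files, then time copying them to each candidate destination.
SRC=$(mktemp -d)
for i in $(seq 1 500); do echo "data $i" > "$SRC/file$i.txt"; done
DST=$(mktemp -d)        # stand-in; compare /mnt/user/<share> vs /mnt/cache/<share>
time cp -r "$SRC" "$DST/run"
ls "$DST/run" | wc -l   # expect 500
```

Running the same copy against the user share path and the direct disk path isolates the SHFS layer from SMB and the network entirely.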

Link to comment

This is interesting and sounds plausible. How does one create a share on an unassigned drive like this? I don’t happen to have another SSD lying around to test this with, however.

 

It did seem like I got much better performance in a small test using only an SSD cache and a single hard drive with no parity, back when I was still evaluating Unraid. I get quite decent performance with something like Resilio Sync (averaging 150-200 MB/s), but trying to transfer files over SMB to Unraid is painful: either sub-1MB/s speeds or hang-ups and pauses that require me to reboot something.

Link to comment

Enabling the share is easy. It is a temporary solution for me for now. One problem: you cannot use the standard share UI to create different share folders with different parameters; you can only share the disk as a whole.

[Screenshot: enabling the share on the unassigned device]

There are different threads on the forum addressing strange problems with SMB/SHFS. Here are some of them: 

 

 

 

In the last topic, at the end, I did some tests comparing User Share vs Disk Share. I will now do the tests with an unassigned device as well.

Link to comment

I installed the Unassigned Devices plugin and tried enabling a share on an old 32GB 15K RPM SAS drive I have. The drive probably only does around 80MB/s writes, so I wasn't expecting much. Here are my quick and crude test results:

 

Doing a large file write (10GB over 10Gbit) to the SAS HDD was kind of sluggish and bursty, taking a few minutes, whereas it would probably write to the SSD in a minute.

 

Writing a folder with 100MB of small files only took about 3 minutes, however... doing the same write directly to the NVMe SSD cache took about four times as long. It's fascinating that writes are four times faster to a drive that runs at a tenth of the speed. Something is clearly very not good in Unraid. Could this be because I'm using two parity disks? Would I expect a performance increase by using only one?

Edited by Supacon
Mistakenly referred to SSD as ”SAS drive”
Link to comment

Another quick test I just ran was running the array with the parity disks removed. I thought a potential bottleneck might be calculating and writing parity, but performance (writing only to the cache, which doesn't really involve parity anyway) was the same. So I don't think parity is the issue here.

Link to comment

Another thing to test: try setting "Case-sensitive names" to "Yes" on a share. I did yesterday, and now my share is only 20-30% slower than the unassigned device (even though it's NVMe vs an old SATA SSD in a USB port).

 

Also, if you have a Samsung NVMe: in some threads here, people found it is not good with BTRFS, so I formatted mine to XFS.

Link to comment

I have tried jumbo frames (they're probably off right now, partly because I have to keep swapping ports since I don't have a 10Gb switch) and they make no appreciable difference.

 

If you read my post following tjb_altf4's, you'll see I have tried Case Sensitive names before and it did seem to make a minor difference. The weird thing is that all the tests I tried at that time aren't repeatable - if I try the exact same tests now under the same configuration I'll almost certainly get worse results. It's like this whole system just gets slower and slower over time. When I first set it up I was getting 400MB/s transfers on large files and now there's no way I can even get a consistent 200MB/s with large files. And small file transfers are horrendously slow, maybe 100KB/s on a good day.

 

The transfer to the SAS drive might have been limited by the drive itself, but I don't know why it was slow and stuttery. I'd expect a smooth, consistent speed, but it kept pausing and looked like it was choking at times.

Edited by Supacon
Link to comment
10 hours ago, rhard said:

Did you also enable Jumbo frames on your 10GbE? What is iperf3 -D 8 results?

Not sure what iperf3 -D 8 does; that command doesn't make sense to me. But a regular run of iperf3 -c (imac), where the iMac is the server, typically gets me up to 7Gb/s.

 

Interestingly, just now when I ran it, I had the hardware settings in System Preferences > Network set to Auto and was only getting 2.6Gb/s. Turning jumbo frames back on brought it back up to 7. In the past, turning jumbo frames on maybe got me from 6 to 7 Gb/s, so that's odd. Maybe it's because jumbo frames were on in Unraid but not on my Mac?

 

When I connect with Unraid as the server and use the Mac as the client, I get slower speeds, usually around 6.5 Gb/s with jumbo frames. In the past I'd get 6 Gb/s without jumbo frames.

Edited by Supacon
Link to comment

iPerf3 -c -D 8 will start 8 threads to fully saturate the network. With one thread I can also only get 6~7 Gb/s. With more threads it gets fully saturated at 1.17 GB/s. I don't know if SMB uses more threads, but at least you can check your network at full speed.

Edited by rhard
Link to comment

That appears to be the command to start as a daemon; doesn't that just mean it runs in the background? I ran the server with that parameter and it seems to make no difference. When I run the client with -k 8, I get an extremely fast speed that claims 20Gbits/sec, which seems like a dubious result.

 

Edit:

Ah, I think I figured out what you meant: -P 8 runs 8 parallel streams. Doing that didn't make it any faster for me; I only got 6.6 Gb/s.

At any rate, although my network speed isn't quite full 10Gbit, I don't think the network speed is the bottleneck.

Edited by Supacon
Link to comment
