
Maddening SMB Performance Issue


Solved by SixClover


I'm sure everyone clicked on this and thought "oh great, another SMB performance thread", lol. But I can assure you I have dived quite deep into this rabbit hole and emerged with an extremely specific question/issue on the subject.

 

I've set up a 10GbE network and can currently transfer data at ~8.5 Gbps without having done any tuning, confirmed by numerous and extensive iperf3 tests, so that's the baseline for my network throughput. In additional tests I've written 15 GB files to /mnt/cache/Media/Movies with dd, and I can tell you with absolute confidence my NVMe SSD cache can sustain 950 MB/s of write throughput. Absolutely. For memory I have a single 8 GB stick of DDR4-2666.
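
For reference, the network baseline came from plain iperf3 runs along these lines (the hostname is just a placeholder for my server):

iperf3 -s                   # on the server
iperf3 -c nas.local -t 30   # on the client, 30 second run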

 

And here is where I'm seeing a performance issue I can't make sense of: I start a file transfer over SMB, it begins at around 700 MB/s, and within moments it falls to 100 MB/s and stays there or worse.

 

As I monitor htop I can see a very obvious and consistent pattern: memory fills up and the transfer slows down. At face value that seems to explain it, but it really doesn't. My components are capable of dumping data from memory to disk at 950 MB/s; I've tested the throughput of my NVMe, and I bought it specifically for these speeds. And yet memory clears out at a very slow rate.

 

In htop, Samba is using maybe 10% of a CPU after transfers slow down, and shfs is writing maybe 100 MB/s to disk at most. Why are these processes moving so slowly? I'm pulling my hair out trying to understand why memory fills up so fast and doesn't drain to disk faster. Why does memory seem to be a bottleneck when there is plenty of idle CPU to do the work and so much unused write throughput on my SSD?
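
In case anyone wants to see the same thing on their box, this is roughly how I watch the page cache back up while a transfer runs (standard /proc/meminfo fields, nothing Unraid-specific):

watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'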

 

I am only using a single stick of memory and have ordered a second stick to see if dual channel offers any improvement. I'm not optimistic about that solving this issue.

 

Any ideas, insight or guidance would be greatly appreciated.

Link to comment
9 hours ago, SixClover said:

And here is where I'm seeing a performance issue I can't make sense of. I will begin a file transfer which starts off 700 MB/s over SMB and within moments falls to 100 MB/s and stays there or worse.

That suggests everything is OK with the LAN but the device can't keep up. What model is the NVMe?

Link to comment

It's a Corsair Force MP510, benchmarked at 950+ MB/s sustained write speed. I've run the following command a number of times with similar results.

 

dd if=/dev/zero of=/mnt/cache/Media/Movies/test.file bs=64M count=220 oflag=dsync
220+0 records in
220+0 records out
14763950080 bytes (15 GB, 14 GiB) copied, 15.3636 s, 961 MB/s

 

I also tried Unraid's FTP, and in iotop I can see vsftpd averaging 100 MB/s after a while, briefly 400 MB/s+. The Unraid web GUI also briefly shows the SSD writing over 400 MB/s, so I know it can actually write to the SSD faster.

 

It's very confusing; I don't understand how the NVMe SSD can sustain 950 MB/s with dd while every other transfer protocol I've tried so far slows down so much. It does seem to correlate with memory utilization, but I don't know of any way to analyze memory IO to see what's happening in more detail.
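
One thing I still plan to check is the kernel writeback tunables, in case the defaults are throttling the flush. Something like this should print them (I haven't touched any of these, so assume defaults):

sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs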

Edited by SixClover
Link to comment

Not familiar with that particular model, but zeros are highly compressible and some SSDs are much faster when writing only those. If you have decent non-SMR disks in the array, enable turbo write and test writing directly to the array; with modern disks you can sustain around 200 MB/s, and if that's the case it will confirm a device issue.

Link to comment

If you saw my last post, I misunderstood your comment. I just tested on a 6TB WD Red 7200 RPM non-SMR drive (verified it's not one of the drives quietly switched to SMR in last year's WD SMR drama). The parity disk is the same model/size/speed.

 

Over FTP (no encryption), the transfer averaged down to 75 MB/s.

 

Edited by SixClover
Link to comment

Before I test: the disks are not empty, is that a bust? All of my drives are nearly at capacity at the moment, and I don't have an empty HDD to test with. I'm not really sure this would address the NVMe issue anyway, so I kept researching and found something awesome.

 

For everyone's records, here is a really clever trick for testing write performance with large amounts of non-zero data. The line below uses openssl to encrypt zeros in memory, generating a stream of incompressible data to write to disk.

 

dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=/mnt/cache/Media/Movies/test.file bs=1M count=15000 iflag=fullblock
15000+0 records in
15000+0 records out
15728640000 bytes (16 GB, 15 GiB) copied, 17.0936 s, 920 MB/s

 

 

Here's the proof it wrote non-zero data at 920 MB/s; as you can see, the data is a cipher stream of varying bytes.

cat test.file | head -c 100 | xxd
00000000: 0e19 44e2 e693 258d a1e6 2da7 6e11 31e9  ..D...%...-.n.1.
00000010: 0523 df56 c2bb 77be 68d8 5e15 87e4 4ae7  .#.V..w.h.^...J.
00000020: b15d 3773 0d67 bb7f 2702 9c0f 3b91 8182  .]7s.g..'...;...
00000030: fba6 6ea5 8572 85b5 39f6 4e7f cb01 f5e3  ..n..r..9.N.....
00000040: b8e9 9b8a 799a 00b2 0845 e71a 34a6 e53c  ....y....E..4..<
00000050: 1a20 52f5 6dd7 2e3d 90d4 9e3c b757 7672  . R.m..=...<.Wvr
00000060: b365 3ad4                                .e:.

 

I think the NVMe performance is quite clear. It has also been independently tested and verified at 1,100 MB/s, and the official specs are higher still. This NVMe is built specifically for durability and speed, which is why I bought it.

Edited by SixClover
Link to comment
1 hour ago, SixClover said:

All of my drives are nearly at capacity at the moment.

If they are near capacity they're no good for testing, since the speed will be <100 MB/s.

 

The problem might not be the NVMe device, but it also doesn't sound like an SMB problem, unless it's a source problem. That's why it would be good to test with another device; a single unassigned fast HDD (empty) or an SSD would do.

Link to comment

I do have a cold spare in the event of a disk failure (I forgot about this, it was buried away), so I could use that in a test. I suppose I could also create a RAM disk. I'll explore some options there and continue testing.

 

It also occurred to me that I can use strace to record all of the system calls with timestamps; I might be able to figure out specifically where shfs/vsftpd spend most of their time while a transfer is running.
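
Roughly what I have in mind, assuming there's a single vsftpd process I can attach to (the output file name is arbitrary):

strace -f -tt -T -p "$(pidof vsftpd)" -o /tmp/vsftpd.trace   # per-syscall timestamps and durations
strace -c -f -p "$(pidof vsftpd)"                            # or just a summary of where the time goes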

Edited by SixClover
Link to comment

I definitely learned a lot of interesting details on this last run.

 

Writing to a new HDD still slows down to 100 MB/s.

 

/dev/shm is apparently a built-in RAM disk, very convenient. I tested a transfer with that as the destination: it started out at 1.0 GB/s (!!) but quickly slowed down to 100 MB/s as well (FTP memory -> (slow) -> RAM disk memory). Seriously, how does that one make any sense? Interesting fact: iotop does not capture IO operations to /dev/shm.

 

Something makes absolutely no sense. Can anyone confirm SMB writing to disk faster than 100 MB/s? What if this whole time people have been transferring into huge amounts of RAM and never noticed how slowly data moves from system memory to NVMe? I might be the only person to set up a 10GbE network to a NAS with only 8 GB of memory, so maybe I'm the first to see this behavior. (I'm just kidding. Or am I?! 😟)

 

I boosted the MTU across my network to 9k (jumbo frames on the NICs at both ends and on each SFP+ switch port). That got my network throughput up to a consistent 9.5 Gbps in iperf3, quite nice, but it didn't help anywhere else.
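
For anyone following along, the Linux side of the change was basically this (interface name and server IP are placeholders; the switch ports and the NIC at the other end need the matching setting). The ping verifies a full 9000-byte frame actually passes unfragmented:

ip link set dev eth0 mtu 9000
ping -M do -s 8972 <server-ip>   # 9000 minus 28 bytes of IP+ICMP headers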

 

I ran strace on vsftpd; ~92% of the time is spent on reads/writes, so I don't see any obvious signs of something interfering with memory flushing. To clarify, SMB and FTP on Unraid behave identically with respect to the slowdown, and the common factors between them are, of course, network, memory and storage. I feel 100% sure about storage and network performance at this point; it's the memory I'm not entirely sure of, or any other buffers/caches between memory and storage.

 

I tried moving a 2 GB file from /dev/shm (RAM disk) to /mnt/cache/Media/Movies (NVMe SSD) and the file transferred just about instantly. I suppose that confirms data can move from memory to disk at GB/s speeds. Notice that this transfer involves no transfer protocol applications (SMB, FTP, etc.); remove that software and things are lightning fast again. *** Interesting ***
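
Roughly the kind of test, in case anyone wants to reproduce it (file name and size are just examples):

dd if=/dev/urandom of=/dev/shm/test2g.bin bs=1M count=2000   # ~2 GB of incompressible data on the RAM disk
time cp /dev/shm/test2g.bin /mnt/cache/Media/Movies/         # then time the copy to the NVMe cache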

 

I'm borderline ready to write my own TCP file transfer software to really investigate what the heck is going on here; hypothetically I should see the same behavior in my own software as in FTP and SMB, and then I can analyze it in fine detail. I don't know what else to do.

 

 

Link to comment

I'm not talking about transfer speeds to the system at a high level. Can you specifically confirm with iotop and htop that 1) memory is clearing out rapidly to disk, and 2) IO to disk is > 100 MB/s? If you have enough memory to cache the whole file you transfer, you might never notice the difference.
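
Something along these lines should show it: -o limits the output to processes actually doing IO, -P groups by process rather than thread, and -a shows accumulated totals (flag support may vary by iotop version).

iotop -o -P -a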

 

I can transfer a small file at 700 MB/s and would otherwise be none the wiser. I believe I'm hitting this mystery bottleneck because I'm testing with files substantially larger than the memory I have available for caching: I can get 3-4 GB into a transfer at pretty high speeds, after that things crawl, and that's about how much free memory I typically have.

 

I'm sure your system is actually writing faster to disk, something's just off on my NAS.

Edited by SixClover
Link to comment
  • Solution

Solved.

 

My critical error was assuming my source would perform as expected; I never benchmarked the disks in my PC because I was too focused on the server. The disks in my PC are supposed to do at least 500 MB/s read, and while they can momentarily hit 500 MB/s+, they fall off very quickly. The relationship between memory filling up and the transfer slowing down was a coincidence. I started copying off a low-capacity NVMe SSD I had and transfer speeds stayed high and consistent. So I'll be replacing my PC's HDDs with higher-capacity SSDs that can maintain high read speeds for these transfers.
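
Lesson learned: benchmark the source too. If the source is a Linux box, a quick sustained-read check looks something like this (drop the caches first, as root, so you're reading from the disk and not from RAM; the file path is a placeholder for any large existing file):

sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/path/to/large.file of=/dev/null bs=1M status=progress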

 

Thanks for helping out, JorgeB. You did mention that the source could be the issue, but it took me a minute to get there.

 

Happy ending. :)

Edited by SixClover
Link to comment
2 minutes ago, SixClover said:

My critical error was assuming my source would perform as expected and didn't benchmark disks on my PC

Yep, that's why I mentioned

 

On 3/10/2022 at 2:46 PM, JorgeB said:

doesn't sound like an SMB problem, unless it's a source problem

 

Glad you found the issue.

Link to comment
