ryann

Cannot get 10Gb speeds on 6.8.3


Hey everyone,

 

I've had this issue for months (although I put it on the back burner for a bit). I have 10Gb networking set up at my home, and I can easily saturate the link between my desktop and laptop using iperf3 w/ -P4:

[screenshot: iperf3 results, desktop to laptop]

 

Unraid, however... I cannot get it anywhere close to 10Gb with iperf3 w/ -P4:

[screenshot: iperf3 results, desktop to Unraid]
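
For reference, the tests above were run with stock iperf3 commands, roughly like this (the IP is just a placeholder for whichever machine is the target):

# on the receiving end (laptop, or the Unraid box)
iperf3 -s

# on the sending end (my desktop), four parallel streams
iperf3 -c 10.0.10.5 -P 4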

 

Details:

- I tried setting my MTU to 9014 across the board. It works between my laptop and desktop, so they're not the issue. This goes through my US-16-XG switch, so that's also not the issue. 

- If I set my MTU to anything over 1500, I can no longer access the Web UI or my drives. Sounds like jumbo frames aren't supported... but they should be (Intel X722). 

- Using ethtool, I found that my RX/TX ring buffers default to 512. Setting them to 4096 doesn't change anything (commands sketched below). 

[screenshot: ethtool ring buffer output]

 

I'm somewhat confused that the RX Jumbo max is 0... but looking online, that doesn't seem all that uncommon. 
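
For anyone following along, these are roughly the commands involved (eth0 stands in for the actual interface name):

# check / change the MTU
ip link show eth0
ip link set eth0 mtu 9000

# show current and maximum ring buffer sizes
ethtool -g eth0

# bump the RX/TX rings to 4096
ethtool -G eth0 rx 4096 tx 4096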

 

I know my Unraid is unregistered (fun story there... my USB drive died this morning during a reboot to change the interfaces around). I'm running on a freebie drive until I get a new, trusted USB drive tomorrow.

 

Other info:

[screenshots: additional system and network details]

nazaretianrack-diagnostics-20200801-1224.zip


Forgot to mention... I get about the same speeds testing from a VM running on the server to the server's iperf3 server. So it's something local to the server itself.  

Edited by ryann


New experiment. I ran iperf3 on the VM running on the server. I get about 10Gb throughput... so the hardware seems to work. 

 

 

[screenshot: iperf3 results to the VM]


It affects file transfers as well:

 

From Desktop to VM running on server (on cache drive):

[screenshot: transfer speed, desktop to VM]

 

From Desktop to SMB share running on server (also on cache drive):

[screenshot: transfer speed, desktop to SMB share]


You have another 10G NIC (82599ES); does testing with that one make any difference?

You also have multiple VLANs set up; how about testing on a simple network first?

40 minutes ago, Benson said:

You have another 10G NIC (82599ES); does testing with that one make any difference?

You also have multiple VLANs set up; how about testing on a simple network first?

Yeah, that was the NIC I first noticed the issue on. When I found out today that the 82599ES didn't support SR-IOV, I switched to the X722. I fought with the 82599ES for several days, though. 

 

Great idea to try a simple network first. The VM and my PCs are all on the same VLAN, but the server is on a separate network. And that's exactly what's going on... testing within the same VLAN, I can get much closer to my 10Gbit ceiling.

 

So now I need to figure out how to fix that... 

 

Thanks so much! It's been driving me nuts. 


Last update. My network runs on top of Ubiquiti UniFi equipment. For the 10Gb network, I have a US-16-XG and a Dream Machine Pro. The US-16-XG is a layer 2 switch, so no routing table lives on the switch. To cross VLANs, traffic has to go back to the gateway (Dream Machine Pro). 
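
A quick way to confirm that is a traceroute from the desktop to the server (10.0.20.10 below is just a stand-in for the Unraid IP); if the first hop is the Dream Machine Pro, the traffic is being routed between VLANs rather than switched at layer 2:

# traceroute on Linux, tracert on Windows
traceroute 10.0.20.10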

3 hours ago, ryann said:

To cross VLANs, traffic has to go back to the gateway (Dream Machine Pro).

You mean some of the tests go through the UDM Pro (inter-VLAN routing)? That may be what's causing the slowdown.

 

I have an 82599ES, two other different 10G NICs, and a US-16-XG, and have no performance problems with Unraid.

iperf3 gets ~8Gbps (single session) in my testing, and SMB manages about 1GB/s.

Edited by Benson


I may be having the exact same issue.

I cannot get above 1.2Gbit (122Mbyte/sec) on my all-10Gb network.

 

It's like I'm able to saturate a 1Gbit link but it's all running on 10!

 

iPerf tests can saturate the 10Gbit links, but when I try to write to the disks (even with an SSD cache) I hit the 1Gb wall.

13 hours ago, SNIPER_X said:

I may be having the exact same issue.

I cannot get above 1.2Gbit (122Mbyte/sec) on my all-10Gb network.

 

It's like I'm able to saturate a 1Gbit link but it's all running on 10!

 

iPerf tests can saturate the 10Gbit links, but when I try to write to the disks (even with an SSD cache) I hit the 1Gb wall.

 

That's where I'm at now. What's funny is that I can set up a share in the VM running on the server and transfer at near-10Gb speeds, but over Unraid's SMB I'm hitting around 1.2Gbps, just like you. 

 

With iperf3 and a single thread I can hit around 9Gbps, and multiple streams reach 9.8Gbps. So I think my target should be close to 9Gbps for SMB transfers (with NVMe SSDs).

 

 

Edited by ryann


One important thing: on Windows the jumbo frame setting is 9014 (it includes the 14-byte Ethernet header), while the equivalent on Linux is an MTU of 9000. Setting a 9014 MTU on Linux may very well break network connectivity, so I would try 9000. Additionally, disable the SMBv1/2 support in Unraid - one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being used while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so are your laptop and desktop. Of course iperf should not be affected by this - but your SMB transfers will be.
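
If you do try 9000, one way to confirm jumbo frames survive the whole path (switch ports included) is a do-not-fragment ping sized just under the MTU; 8972 bytes is 9000 minus the 28 bytes of IP + ICMP headers. The IP below is just a placeholder for the Unraid box:

# Linux syntax; the Windows equivalent is: ping -f -l 8972 <host>
ping -M do -s 8972 10.0.20.10
# if this fails or complains the message is too long, something in the path
# is still running at 1500 MTU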

Just now, Xaero said:

One important thing: on Windows the jumbo frame setting is 9014 (it includes the 14-byte Ethernet header), while the equivalent on Linux is an MTU of 9000. Setting a 9014 MTU on Linux may very well break network connectivity, so I would try 9000. Additionally, disable the SMBv1/2 support in Unraid - one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being used while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so are your laptop and desktop. Of course iperf should not be affected by this - but your SMB transfers will be.

I'll admit... this isn't my day job, so I don't have a bunch of expertise. Just relying on what I can read. :)

 

I actually have my MTU set to 1500. I ran into issues getting some of my Docker containers running with jumbo frames enabled (I don't recall if it was 9000 or 9014). I know the lower MTU can affect speeds, but everything I've read so far says jumbo frames won't really fix the transfer issue. 

 

Let me figure out how to disable SMBv1/2 support. That does sound promising. 


Looks like I can add this to my SMB extra parameters? 

#disable SMB1 for security reasons
[global]
   min protocol = SMB2
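
I guess I can verify it took effect (after restarting SMB) by dumping Samba's effective config, something like:

# print the effective settings and check the protocol line
testparm -s | grep -i "min protocol"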

 


You may also need to add:

# ENABLE SMB MULTICHANNEL

server multi channel support = yes

I've not heard of issues using 9000 MTU with Docker yet, but I also cannot run 9000 MTU with my current network configuration (my ISP-provided modem will not connect above 1500 MTU).

There will be significant performance implications if you use mixed MTUs. If everything is 1500, or everything is 9000, things should be more or less the same outside of a large number (thousands) of large sequential transfers (gigabytes), where the larger MTU starts to pull ahead. With mixed MTUs, the problem is that any incoming jumbo packets must be fragmented before they are sent to a client that isn't using the larger MTU, which wastes a lot of resources on the switch or router. On the flip side, when the smaller-MTU client sends a packet it uses the smaller MTU, so the potential overhead savings of the large frame are lost, though that is not as bad an impact.
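
To put rough numbers on the jumbo frame benefit (assuming the usual 20-byte IP + 20-byte TCP headers and about 38 bytes of Ethernet framing overhead per packet):

1500 MTU: 1460 bytes of payload per ~1538 on the wire, about 94.9% efficient
9000 MTU: 8960 bytes of payload per ~9038 on the wire, about 99.1% efficient

So jumbo frames only buy a few percent of raw throughput; the bigger win is usually fewer packets per second and less CPU/interrupt load.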

EDIT:

Removed flawed testing, will update later with proper testing again. I'm not on a mixed MTU network atm so I can't actually test this haha.

Edited by Xaero

# SMB Performance & Security Tweaks
   min protocol = SMB2
   server multi channel support = yes
   aio read size = 1
   aio write size = 1

This is the best combo so far. 

 

Read speeds are around 850MB/s. Writes are still pretty low at 320MB/s.

Read from share to desktop (not perfect, but I can accept it):

[screenshot: read speed from share]

Write from desktop to share:

[screenshot: write speed to share]

 

I retested transferring a file from my Desktop PC (Windows 10) to the VM (Windows Server 2016) running on Unraid and hit my network bottleneck:

Read:

[screenshot: read speed from VM]

Write:

[screenshot: write speed to VM]
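
To rule out the cache device itself as the write bottleneck, I can also benchmark it directly on the server; roughly this (path and sizes are just placeholders, and the test file gets deleted afterwards):

# sequential write to the cache pool, bypassing the page cache
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=8192 oflag=direct

# sequential read of the same file
dd if=/mnt/cache/ddtest.bin of=/dev/null bs=1M iflag=direct
rm /mnt/cache/ddtest.bin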


Also note that mixed MTUs affect inbound (write) performance more than outbound (read) performance. The reason is pretty simple: inbound 9000-MTU packets must be split (fragmented) by the network appliance (switch) before they are transmitted to the client. That costs a fair amount of throughput per packet and increases latency as well. Whereas packets sent by the lower-MTU client are smaller than the large frame size and are simply sent as smaller frames, so latency isn't increased at all, but the overhead savings of the large frames are lost and there is a small (yet measurable) loss in throughput.

The performance definitely improved substantially just being able to take advantage of multichannel.


I did try SMB3 but didn't see any performance improvement. That said, I had misunderstood the original comment, and I have it set to a minimum of SMB3 now. 

 

With jumbo frames, I enabled 9014 on the server and 9014 on my desktop and don't see any change in performance. Like I mentioned in my last experiment, transferring files to and from the VM from my desktop behaves the way I would expect a 10Gbps connection to work, and it's reading and writing the same SSD on my server that the SMB share lives on. 
