Cannot get 10Gb speeds on 6.8.3



Hey everyone,

 

I've had this issue for months (although I put it on the back burner for a bit). I have 10Gb networking setup at my home. I can easily saturate the link between my normal desktop and laptop using iperf3 w/ -P4:

[screenshot: iperf3 result between desktop and laptop]

 

Unraid however... I cannot get it to work even close to 10Gb with iperf3 w/ -P4:

[screenshot: iperf3 result to the Unraid server]
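
For anyone wanting to reproduce the tests, they're roughly this (the receiver address is just a placeholder for whichever box is on the other end):

   # On the receiving machine (Unraid box, laptop, etc.):
   iperf3 -s

   # On the sending machine, 4 parallel streams for 30 seconds:
   iperf3 -c 192.168.1.10 -P 4 -t 30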

 

Details:

- I tried setting my MTU to 9014 across the board. It works between my laptop and desktop, so they're not the issue. This goes through my US-16-XG switch, so that's also not the issue. 

- If I set my MTU to anything over 1500, I can no longer access the Web UI or my drives. Sounds like jumbo frames aren't supported... but they should be (Intel X722). 

- Using ethtool, I found that my RX/TX ring buffers default to 512. Setting them to 4096 doesn't change anything. 

[screenshot: ethtool ring buffer output]

 

I'm somewhat confused why the RX Jumbo max is 0... but looking online, that doesn't seem all too uncommon. 
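
For reference, the ethtool commands I'm using look roughly like this (eth0 is a placeholder for whatever interface the X722 shows up as):

   # Show current ring buffer sizes and the hardware maximums:
   ethtool -g eth0

   # Bump RX/TX rings to 4096:
   ethtool -G eth0 rx 4096 tx 4096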

 

I know my Unraid is unregistered (fun story there... my USB drive died this morning during a reboot to change the interfaces around). Running on a freebie drive until I get a new trusted USB drive tomorrow.

 

Other info:

[additional screenshots]

 

 

 

 

nazaretianrack-diagnostics-20200801-1224.zip

Link to comment
40 minutes ago, Benson said:

You have another 10G NIC (82599ES); does it make any difference if you test with that one?

You also have multiple VLANs set up; how about testing on a simple network first?

Yea, that was the NIC I first noticed it on. When I found out today that the 82599ES didn't support SR-IOV, I switched to the X722. I fought the 82599ES for several days though. 

 

Great idea to try a simple network first. The VM and my PCs are all on the same VLAN; the server is on a separate network. And that's exactly what's going on... testing without crossing VLANs, I can get much closer to my 10Gbit ceiling.

 

So now I need to figure out how to fix that... 

 

Thanks so much! It's been driving me nuts. 

Link to comment

Last update. My network runs on top of Ubiquiti UniFi equipment. For the 10Gb network, I have a US-16-XG and a Dream Machine Pro. The US-16-XG is a layer 2 switch, so no routing table lives on the switch. To cross the VLAN, traffic has to go back to the gateway (Dream Machine Pro). 
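
A quick way to confirm the hairpin (sketch, with placeholder addresses): trace the path from a client on one VLAN to the server on the other. If the first hop is the Dream Machine Pro, the traffic is being routed rather than switched.

   # From a client on VLAN A to the server on VLAN B (tracert on Windows):
   traceroute 10.0.20.5
   #  1  10.0.10.1   <- UDM Pro (routed hop)
   #  2  10.0.20.5   <- Unraid server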

Link to comment
3 hours ago, ryann said:

To cross the VLAN, traffic has to go back to the gateway (Dream Machine Pro).

You mean some tests are going through the UDMP (inter-VLAN routing)? That may cause the slowdown.

 

I have an 82599ES, two other different 10G NICs, and a US-16-XG, and no performance problems with Unraid.

iperf3 got ~8Gbps (single session) in testing, but SMB hits 1GB/s.

Edited by Benson
Link to comment

I may be having the exact same issue.

I cannot get above 1.2Gbit (122Mbyte/sec) on my all-10Gb network.

 

It's like I'm able to saturate a 1Gbit link but it's all running on 10!

 

iperf tests can saturate the 10Gbit links, but when I try to write to disks (even with the SSD cache) I hit the 1Gb wall.

Link to comment
13 hours ago, SNIPER_X said:

I may be having the exact same issue.

I cannot get above 1.2Gbit (122Mbyte/sec) on my all-10Gb network.

 

It's like I'm able to saturate a 1Gbit link but it's all running on 10!

 

iperf tests can saturate the 10Gbit links, but when I try to write to disks (even with the SSD cache) I hit the 1Gb wall.

 

That's where I'm at now. What's funny is I can set up a share in my VM running on the server and transfer at near 10Gb speeds, but using SMB, I'm hitting around 1.2Gbps just as you are. 

 

With iperf3 on a single stream I can hit around 9Gbps, and with multiple streams 9.8Gbps. So I think my target should be close to 9Gbps for SMB transfers (with NVMe SSDs).
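
One sanity check before blaming SMB is making sure the cache SSD itself can sustain that locally; a rough test (the path is a placeholder for something that sits on the NVMe cache pool):

   # Sequential write to the cache pool, bypassing the page cache:
   dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=8192 oflag=direct

   # ...and read it back:
   dd if=/mnt/cache/testfile of=/dev/null bs=1M iflag=direct
   rm /mnt/cache/testfile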

 

 

Edited by ryann
Link to comment

One important thing is that Windows uses 9014 for jumbo frames (the value includes the 14-byte Ethernet header), while the equivalent on Linux is an MTU of 9000. Setting a 9014 MTU on Linux may very well break network connectivity, so I would try 9000. Additionally, disable the SMBv1/2 support in Unraid: one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being utilized while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so are your laptop and desktop. Of course iperf should not be affected by this, but your SMB transfers will be.
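
Roughly, on the Linux side that would be something like this (br0 is a placeholder for whatever bridge/interface Unraid is using; Unraid normally exposes the MTU under Settings -> Network Settings):

   # Set a 9000-byte MTU on the interface:
   ip link set dev br0 mtu 9000

   # Verify the jumbo path end to end; 8972 = 9000 - 20 (IP) - 8 (ICMP):
   ping -M do -s 8972 192.168.1.20        # from Linux
   ping -f -l 8972 192.168.1.20           # equivalent from Windows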

Link to comment
Just now, Xaero said:

One important thing is that Windows uses 9014 for jumbo frames (the value includes the 14-byte Ethernet header), while the equivalent on Linux is an MTU of 9000. Setting a 9014 MTU on Linux may very well break network connectivity, so I would try 9000. Additionally, disable the SMBv1/2 support in Unraid: one of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being utilized while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so are your laptop and desktop. Of course iperf should not be affected by this, but your SMB transfers will be.

I'll admit... this isn't my day job, so I don't have a bunch of expertise. Just relying on what I can read. :)

 

I actually have my MTU set to 1500. I ran into issues getting some of my Docker containers running with jumbo frames enabled (I don't recall if it was 9000 or 9014). I know the lower MTU can affect speeds, but everything I've read so far says jumbo frames won't really fix the transfer issue. 

 

Let me figure out how to disable SMBv1/2 support. That does sound promising. 

Link to comment

You may also need to add:

# ENABLE SMB MULTICHANNEL

server multi channel support = yes

I've not heard of issues using 9000 MTU with Docker yet, but I also cannot run 9000 MTU with my current network configuration (my ISP-provided modem will not connect above 1500 MTU).

There will be significant performance implications if you use mixed MTU. If everything is 1500, or everything is 9000, things should be more or less the same outside of a large number (thousands) of large sequential transfers (gigabytes), where the larger MTU will start to pull ahead. With mixed MTU, the problem is that any incoming packets must be fragmented when sent to a client that isn't using the larger MTU, which wastes a lot of resources on the switch or router. On the flip side, when the smaller-MTU client sends a packet it will use the smaller MTU, so the potential overhead savings of the large frame are lost, though this is not as bad of an impact.
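
As a rough back-of-the-envelope for a single full-size jumbo packet crossing to a 1500-MTU client (ignoring VLAN tags and Ethernet overhead):

   jumbo IP packet:    9000 bytes = 20 B header + 8980 B payload
   per-fragment max:   1500 - 20 = 1480 B of payload
   fragments needed:   ceil(8980 / 1480) = 7 frames, each with its own IP header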

EDIT:

Removed flawed testing, will update later with proper testing again. I'm not on a mixed MTU network atm so I can't actually test this haha.

Edited by Xaero
Link to comment
# SMB Performance & Security Tweaks
   min protocol = SMB2
   server multi channel support = yes
   aio read size = 1
   aio write size = 1

This is the best combo so far. 
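
For anyone following along, that block goes under [global] in /boot/config/smb-extra.conf (or the Settings -> SMB page), and then Samba needs a restart to pick it up; roughly (assuming the stock Slackware-style init script):

   # Restart Samba so the new [global] settings take effect:
   /etc/rc.d/rc.samba restart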

 

Read speeds are around 850MB/s. Writes are still pretty low at 320MB/s.

Read from share to desktop (not perfect, but I can accept it):

[screenshot: read transfer speed]

Write from desktop to share:

[screenshot: write transfer speed]

 

I retested transferring a file from my desktop PC (Windows 10) to the VM (Windows Server 2016) running on Unraid and hit my network bottleneck:

Read:

[screenshot: read transfer speed]

Write:

[screenshot: write transfer speed]

Link to comment

Also note that mixed MTU affects inbound (write) performance more than outbound (read) performance. The reason is pretty simple: inbound 9000-MTU packets must be split (fragmented) by the network appliance (switch) before they are transmitted to the client, which nets a rather substantial loss in throughput per packet and increases latency as well. Whereas packets transmitted by the lower-MTU client are already smaller than the large frame size, so rather than having to combine or split them, they are sent on as-is; latency isn't increased at all, but there is a small (yet measurable) loss in throughput.

The performance definitely improved substantially just being able to take advantage of multichannel.
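
A quick way to confirm multichannel is actually in use is to look at the SMB connections on the server while a transfer runs; with multichannel working you should see several established TCP connections to port 445 from the same client IP (sketch, assuming iproute2's ss is available on the box):

   # List established SMB connections while copying a file:
   ss -tn state established '( sport = :445 )'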

Link to comment

I did try SMB3, but didn't see any performance improvement. That said, I misunderstood the original comment. I have it set to a min of SMB3 now. 

 

With jumbo frames, I enabled 9014 on the server and 9014 on my desktop and don't see any change in performance. Like I mentioned in my last experiment, transferring files to and from the VM from my desktop behaves as I would expect a 10Gbps connection to, and it's reading and writing the same SSD on my server that is shared over SMB. 

Link to comment
  • 2 weeks later...
On 8/3/2020 at 12:05 PM, ryann said:

What's funny is I can set up a share in my VM running on the server and transfer at near 10Gb speeds, but using SMB, I'm hitting around 1.2Gbps

I've been struggling with that for years!  Last week the bright idea came to me to look into ACL in Samba.  Turned out that was enabled by default.  I disabled it, and BOOM!  For the first time in my life I saw Unraid saturate the network adapter!  

 

Try adding this in the global section of smb-extra.conf, and let us know how it goes.

    nt acl support = No
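
Once it's in place you can double-check that Samba actually picked it up; testparm dumps the effective (non-default) settings, so the override should show up under [global] (sketch):

   # Verify the setting took effect:
   testparm -s 2>/dev/null | grep -i 'nt acl support'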

 

Link to comment
2 hours ago, Pourko said:

I've been struggling with that for years!  Last week the bright idea came to me to look into ACL in Samba.  Turned out that was enabled by default.  I disabled it, and BOOM!  For the first time in my life I saw Unraid saturate the network adapter!  

 

Try adding this in the global section of smb-extra.conf, and let us know how it goes.

    nt acl support = No

 

I'm not sure if I successfully got this enabled or not. I took my array down, Docker hung, so it never went fully down. I had to force power-cycle the server. Now it's going through a parity check and I found out the Recycle Bin modified the SMB config so now I have two [global] sections and I cannot change that until the parity check finishes. 

 

If having two [global] sections is valid, then I don't see any improvement; it's either about the same or a little slower than before.  

Link to comment
26 minutes ago, ryann said:

I'm not sure if I successfully got this enabled or not. I took my array down, Docker hung, so it never went fully down. I had to force power-cycle the server. Now it's going through a parity check and I found out the Recycle Bin modified the SMB config so now I have two [global] sections and I cannot change that until the parity check finishes.

I don't know anything about Docker.  I was talking about the stock Samba service that comes with Unraid. It uses two config files -- one is recreated in RAM every time the server boots, the other (i.e., smb-extra.conf) is located in the config folder on your boot flash drive.  There's no recycle bins in any of that.

Link to comment
Just now, Pourko said:

I don't know anything about Docker.  I was talking about the stock Samba service that comes with Unraid. It uses two config files -- one is recreated in RAM every time the server boots, the other (i.e., smb-extra.conf) is located in the config folder on your boot flash drive.  There's no recycle bins in any of that.

Sorry for the confusion...

 

Recycle Bin is an Unraid plugin: 

 

 

Docker hung as I took my array down, and because I had to do a hard reset, my array got flagged as dirty. 

 

You can edit smb-extra.conf from /boot/config, or from the Settings->SMB page. Where I'm thinking I may have a bad config is where the plugin updated and seemed to goof up my config a bit:

 

veto files = /._*/.DS_Store/

[global]

# SMB Performance & Security Tweaks
   min protocol = SMB2
   server multi channel support = yes
   aio read size = 1
   aio write size = 1
   use sendfile = yes
   nt acl support = no
#   min receivefile size = 16384
#   socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=131072 SO_SNDBUF=131072
#vfs_recycle_start
#Recycle bin configuration
[global]
   syslog only = Yes
   syslog = 0
   logging = 0
   log level = 0 vfs:0
#vfs_recycle_end

 

It's easy enough to fix, but I cannot try it until the Parity check finishes (probably sometime tomorrow morning). 
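
For reference, the fix is just collapsing everything into a single [global] section, roughly like this (same settings as shown above, merged; the Recycle Bin plugin may re-add its own markers on its next update):

[global]
# SMB Performance & Security Tweaks
   min protocol = SMB2
   server multi channel support = yes
   aio read size = 1
   aio write size = 1
   use sendfile = yes
   nt acl support = no
# Recycle Bin plugin logging settings, merged into the same section
   syslog only = Yes
   syslog = 0
   logging = 0
   log level = 0 vfs:0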

Link to comment
On 8/2/2020 at 7:00 AM, ryann said:

Last update. My network runs on top of Ubiquiti UniFi equipment. For the 10Gb network, I have a US-16-XG and a Dream Machine Pro. The US-16-XG is a layer 2 switch, so no routing table lives on the switch. To cross the VLAN, traffic has to go back to the gateway (Dream Machine Pro). 

Before you go changing settings on everything: did you update your UDM to the newest beta? There was/is a known issue with VLAN traffic being routed out the 10Gb SFP+ port causing slowdowns and in general terrible performance, enough that most people were running the UDM Pro over the gigabit line to the XG-16 just to get 1Gb VLAN routing. 

Edited by halfelite
Link to comment

The UDMP backplane is 2G if I remember correctly, so I don't think you can push 10G through it. Not 100% sure, but I think they changed the backplane speed between board revisions 3 and 5, and the current board revision is 8. May wanna look into that on the UniFi forum? I may be way off :) 

Edited by johnwhicker
Link to comment
