
10GbE SMB additional settings



I am working to improve SMB performance and have found a collection of settings that work well. I would like to deploy them to all interfaces, but I get strange delays/drops when they are applied as written:

 

server multi channel support = yes
aio read size = 1
aio write size = 1
interfaces = "192.168.1.248;capability=RSS,speed=10000000000" "192.168.40.2;capability=RSS,speed=10000000000"

 

With both interfaces' characteristics specified, the main VM on 192.168.1.x hangs on file transfers and transfers are dreadfully slow.

If I remove the second interface (192.168.40.2), the main VM on 192.168.1.x works OK, but the VM on that interface does not exceed ~175 MB/s on transfers.

 

Are the interfaces listed correctly?
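One way to sanity-check how Samba parsed those lines (a sketch, assuming it is run from the Unraid console; eth0 is a placeholder for your actual 10GbE interface):

# Dump the effective config and confirm both interface entries survived parsing
testparm -s 2>/dev/null | grep -E "interfaces|multi channel|aio"

# Confirm the NIC actually exposes multiple receive queues, which RSS depends on
ethtool -l eth0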

 


I have hit a bit of a speed limit on my setup

 

iperf3 tests indicate 8+ Gb/s of available bandwidth between Unraid and my test PC, but I can't transfer over SMB faster than 490-520 MB/s.

 

Unraid v6.11.5 on

EPYCD8-2T w/ 7551

eth0 10Gb RJ45 --> SFP+ @ MikroTik CRS305-1G-4S+

test share/disk = INTEL SSDPF2KX038TZ

 

test PC = Unraid VM w/ mobo eth1 passed through

24 cores, 24 GB memory

disk = Seagate FireCuda 530

 

diagnostics attached

crunch-diagnostics-20240129-1604.zip


Are you transferring to a User Share or directly to the drive? Asking because the FUSE layer used to support User Shares can impose that sort of speed limit. If you are transferring to a User Share and it all lives on one device/pool, it can become an Exclusive Share (which bypasses FUSE), giving the same performance as transferring directly to the physical device.
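A quick way to see whether FUSE is the ceiling is to compare a local write through the user share path with the same write to the underlying pool path (a sketch; the share name "testshare" and the pool name "cache" are placeholders for your own):

# 4 GiB written through the FUSE (user share) path
dd if=/dev/zero of=/mnt/user/testshare/fuse_test.bin bs=1M count=4096 oflag=direct

# The same write directly to the pool the share lives on, bypassing FUSE
dd if=/dev/zero of=/mnt/cache/testshare/direct_test.bin bs=1M count=4096 oflag=direct

A large gap between the two results points at the FUSE layer rather than the network.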

3 hours ago, itimpi said:

Are you transferring to a User Share or directly to the drive? Asking because the FUSE layer used to support User Shares can impose that sort of speed limit. If you are transferring to a User Share and it all lives on one device/pool, it can become an Exclusive Share (which bypasses FUSE), giving the same performance as transferring directly to the physical device.

 

I have been doing my testing with both

I have an EVO 860 SSD as my main cache disk...so current performance saturates its read/write.

I have an Intel NVMe disk installed for VM VHDs and can test full 10GbE bandwidth through that.

 

Copying a test file from the RAID1 "cache-protected" pool to the NVMe gets speeds I would expect from RAID1 SAS SSD --> NVMe.

[screenshot: pool-to-NVMe copy speed]

 

Benchmarking the VHD for the VM (which is on the NVMe) gets results I expect.

[screenshot: VHD benchmark results]

 

iperf shows bandwidth is there

[screenshot: iperf result]

 

But when I transfer to a disk share using SMB, the speed doesn't translate.

[screenshot: SMB transfer speed]

 

 

At first I thought it was a networking issue, because originally my VM was going out through br0 on Unraid. I was able to pass one of the 10GbE ports from my motherboard directly through to the VM, which improved speed, but not to where I think it should be.


On 1/30/2024 at 3:21 AM, JorgeB said:

This looks like a dual-stream test; try a single stream, which should get you the closest result to a single transfer.
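For reference, the stream count in iperf3 can be forced explicitly (a sketch; the server address is taken from the interfaces line earlier in the thread and may not match your test path):

iperf3 -c 192.168.1.248 -P 1 -t 30   # single stream, closest to one SMB transfer
iperf3 -c 192.168.1.248 -P 4 -t 30   # four parallel streams, shows aggregate capacity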

[screenshot: single-stream iperf result]

 

This is indicating nothing better than gigabit...

All links are reporting 10Gb.

Any idea where to look into what I am doing wrong?

 

 

On 1/31/2024 at 1:54 PM, JorgeB said:

Low iperf results are usually related to NICs (or NIC drivers/settings), cables, the switch, the client PC, etc.
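On the Unraid side, those usual suspects can be checked from the console (a sketch; eth0 is a placeholder for the actual interface):

# Negotiated link speed and duplex
ethtool eth0 | grep -E "Speed|Duplex"

# Driver and firmware versions in use
ethtool -i eth0

# Any NIC errors or resets logged by the kernel
dmesg | grep -i eth0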

So I have a VM on this Unraid box that is on a VLAN (192.168.40.253).
It connects to Unraid SMB at 192.168.40.2.

[screenshot: iperf between the VLAN VM and Unraid]

 

File transfer rates are close to the reported bandwidth.

[screenshot: file transfer speed]

 

There are no wires involved, just the virtual 10GbE adapter and Unraid.

I have reset the VM network settings to default and adjusted max RSS to 6 to match the cores on the VM.

Unraid has all default network settings except for buffers set through the Tips & Tweaks plugin...which shouldn't matter for a VM using br0.x, no?

 

[screenshot: network settings]

 

I get better than gigabit, but something isn't right since I'm not seeing anywhere near the proper speeds.
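One thing that may be worth double-checking on the host side is whether the physical NIC exposes multiple queues and has its offloads enabled, since RSS in the guest cannot help if the host side is effectively single-queue (a sketch; eth0 is a placeholder):

# Current vs. maximum combined queues on the physical NIC
ethtool -l eth0

# Offloads that commonly affect 10GbE throughput
ethtool -k eth0 | grep -iE "scatter-gather|segmentation|receive-hashing"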

 

 

 

 

 

 


Just wanted to give my experience with this.
I haven't tested SMB, but I'm using 10 Gbit networking; it fluctuates a bit, but for the most part it runs above 8 Gbit out of 10 Gbit on iperf tests.
I've enabled jumbo frames on my switch at 9000 MTU, and the same for the NIC in Unraid and on the other server (Proxmox).
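For anyone wanting to confirm the 9000 MTU actually holds end to end, a quick check (a sketch; the interface name and target address are placeholders):

# Set 9000 MTU on the Unraid NIC
ip link set dev eth0 mtu 9000

# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000;
# this fails if any hop in the path is still at 1500
ping -M do -s 8972 -c 4 192.168.1.1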

Hardware used:
Switch:
USW EnterpriseXG 24

NIC:
X540-T2

I'm sure there's room for improvements, but this is where I'm at right now.

