therapist Posted January 24

I am working to improve SMB performance and have found a collection of settings that work well. I would like to deploy them to all interfaces, but I get strange delays/drops when they are applied as written:

server multi channel support = yes
aio read size = 1
aio write size = 1
interfaces = "192.168.1.248;capability=RSS,speed=10000000000" "192.168.40.2;capability=RSS,speed=10000000000"

With both interfaces specified, the main VM on 192.168.1.x hangs on file transfers and transfers dreadfully slowly. If I remove the second interface (192.168.40.2), the main VM on 192.168.1.x works OK, but the VM on that interface does not exceed ~175 MB/s on transfers.

Are the interfaces listed correctly?
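For context, this is roughly how the block reads with comments on what each option does; the per-interface syntax is Samba's interfaces option, with speed given in bits per second (so 10000000000 = 10 Gb/s), and the two IPs are the server addresses above:

# advertise SMB3 multi-channel to clients
server multi channel support = yes
# use asynchronous I/O for any read/write larger than 1 byte (i.e. effectively everything)
aio read size = 1
aio write size = 1
# one quoted entry per NIC; capability=RSS tells clients the NIC supports
# receive-side scaling, speed is the link speed in bits per second
interfaces = "192.168.1.248;capability=RSS,speed=10000000000" "192.168.40.2;capability=RSS,speed=10000000000"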
therapist Posted January 29 (Author)

I have hit a bit of a speed limit on my setup. iperf3 tests indicate 8+ Gbit/s of available bandwidth between Unraid and my test PC, but I can't transfer over SMB faster than 490-520 MB/s.

Unraid v6.11.5 on EPYCD8-2T w/ EPYC 7551
eth0: 10Gb RJ45 --> SFP+ on Mikrotik CRS305-1G-4S+
test share/disk: Intel SSDPF2KX038TZ
test PC: Unraid VM w/ motherboard eth1 passed through, 24 cores, 24 GB memory, disk = Seagate FireCuda 530

Diagnostics attached: crunch-diagnostics-20240129-1604.zip
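For anyone reproducing this, the bandwidth check was of this general form (192.168.1.248 from the first post stands in for whichever server IP the share is reached on):

# on the Unraid side: listen for tests
iperf3 -s
# on the test PC: a 30-second run towards Unraid, then the reverse direction
iperf3 -c 192.168.1.248 -t 30
iperf3 -c 192.168.1.248 -t 30 -R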
itimpi Posted January 29

Are you transferring to a User Share or directly to the drive? Asking because the FUSE layer used to support User Shares can impose that sort of speed limit. If you are transferring to a User Share and it lives entirely on one device/pool, it can become an Exclusive share (which bypasses FUSE), and then you get the same performance as transferring directly to the physical device.
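In practice the difference is which path the data takes on the server: /mnt/user/<share> goes through FUSE, while a pool path such as /mnt/cache/<share> does not. A quick local comparison, as a sketch only ("testshare" is a hypothetical cache-only share, and zeros will read artificially fast on a compressed filesystem):

# write 8 GiB through the FUSE (user share) path
dd if=/dev/zero of=/mnt/user/testshare/fuse-test.bin bs=1M count=8192 conv=fdatasync
# write the same amount via the pool path, bypassing FUSE
dd if=/dev/zero of=/mnt/cache/testshare/direct-test.bin bs=1M count=8192 conv=fdatasync
# clean up the test files
rm /mnt/cache/testshare/fuse-test.bin /mnt/cache/testshare/direct-test.bin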
therapist Posted January 30 (Author, edited)

3 hours ago, itimpi said:
Are you transferring to a User Share or directly to the drive? ...

I have been doing my testing with both.

I have an EVO 860 SSD as my main cache disk, so current performance saturates its read/write. I also have an Intel NVMe disk installed for VM VHDs and can test the full 10GbE bandwidth through that.

Copying a test file from the RAID1 "cache-protected" pool to the NVMe gets the speeds I would expect from a RAID1 SAS SSD --> NVMe copy. Benchmarking the VHD for the VM (which is on the NVMe) gets the results I expect, and iperf shows the bandwidth is there. But when I transfer to a disk share over SMB, the speed doesn't translate.

At first I thought it was a networking issue, because originally my VM was coming out through br0 on Unraid. I was able to pass one of the 10GbE ports from my motherboard directly through to the VM, which improved speed, but not to where I think it should be.

Edited January 30 by therapist
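While a transfer is running, it is also possible to check from the Unraid console which local address the SMB session lands on and how many TCP connections it actually opened (multi-channel should show more than one). A sketch; smbstatus and ss are part of the base system:

# list the Samba connections and the client machines/IPs behind them
smbstatus -b
# established TCP connections on the SMB port and the local IP they terminate on
ss -tn state established '( sport = :445 )'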
JorgeB Posted January 30

7 hours ago, therapist said:
iperf shows the bandwidth is there

That looks like a dual-stream test; try a single stream, that should give you the result closest to a single transfer.
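For anyone following along, the stream count is iperf3's -P flag (the default is one stream); something like this compares the two cases, using the same server IP as above:

# single TCP stream -- closest to what one SMB file copy sees
iperf3 -c 192.168.1.248 -P 1 -t 30
# two parallel streams -- aggregate bandwidth, which can hide a per-stream bottleneck
iperf3 -c 192.168.1.248 -P 2 -t 30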
therapist Posted January 31 (Author)

On 1/30/2024 at 3:21 AM, JorgeB said:
That looks like a dual-stream test; try a single stream...

A single stream is indicating nothing better than gigabit, yet all links report 10Gb. Any idea where to look for what I am doing wrong?
JorgeB Posted January 31

Low iperf results are usually related to the NICs (or NIC drivers/settings), cables, switch, client PC, etc.
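A few server-side checks can help narrow that down (sketch; eth0 is the 10GbE port named earlier, and the output just needs eyeballing):

# confirm the negotiated link speed and duplex
ethtool eth0
# look at offload settings (TSO/GSO/GRO/LRO); a disabled offload can cost a lot of CPU
ethtool -k eth0
# watch for drops/errors on the interface while a transfer runs
ip -s link show eth0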
therapist Posted February 1 (Author)

On 1/31/2024 at 1:54 PM, JorgeB said:
Low iperf results are usually related to the NICs (or NIC drivers/settings), cables, switch, client PC, etc.

So I have a VM on this Unraid box that is on a VLAN (192.168.40.253); it connects to Unraid SMB at 192.168.40.2, and file transfer rates are close to the reported bandwidth. There are no wires involved, just the virtual 10GbE adapter and Unraid.

I have reset the VM network settings to default and adjusted max RSS queues to 6 to match the cores on the VM. Unraid has all default network settings except for buffers set through the Tips & Tweaks plugin, which shouldn't matter for a VM using br0.x, no?

I get better than gigabit, but something isn't right, since I'm not seeing anywhere near the proper speeds.
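In case it matters, the host-side counterpart of that RSS setting for a virtio NIC is the queue count on the interface in the VM's XML. A sketch only: br0.40 and the queue count of 6 are placeholders mirroring the VLAN and RSS values above, and it assumes the NIC model is virtio:

<interface type='bridge'>
  <source bridge='br0.40'/>
  <model type='virtio'/>
  <!-- expose 6 queues so the guest can spread packet processing across 6 cores -->
  <driver name='vhost' queues='6'/>
</interface>

As far as I understand, a Linux guest then enables them with ethtool -L eth0 combined 6, while a Windows guest handles the equivalent through the virtio driver's RSS settings.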
JorgeB Posted February 2

In my experience a VM is not a good way to test; the virtual NIC usually performs very far from 10GbE line speed.
DiegoFLima Posted July 24

Hello, good afternoon. I don't know if you managed to solve the problem, but I'm in a similar situation. Could you tell me if you succeeded?
Nanobug Posted Friday at 09:15 AM

Just wanted to share my experience with this. I haven't tested SMB, but I'm using 10 Gbit networking; it fluctuates a bit, but for the most part it runs above 8 Gbit out of 10 Gbit on iperf tests. I've enabled jumbo frames on my switch at 9000 MTU, and the same for the NIC in Unraid and on the other server (Proxmox).

Hardware used:
Switch: USW EnterpriseXG 24
NIC: X540-T2

I'm sure there's room for improvement, but this is where I'm at right now.
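For anyone copying this, jumbo frames only help when every hop agrees on the MTU; a quick way to set it and then verify it end to end (eth0 and the target IP are placeholders, and the MTU can also be set in Unraid's network settings page):

# set the MTU on the NIC
ip link set eth0 mtu 9000
# verify a 9000-byte path: 8972 bytes of payload + 28 bytes of ICMP/IP headers,
# with fragmentation forbidden -- if this fails, some hop is still at 1500
ping -M do -s 8972 192.168.1.100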