SiNtEnEl Posted November 15, 2021 (edited)

Hello Unraid friends,

For some reason, since a couple of Unraid versions ago I have been struggling with my network speeds. In the past I was able to saturate the connection at 9 Gbit/s or more, but that dropped off a while ago. Speeds in either direction now seem to be capped at about 1.5 Gbit/s on my 10 GbE network. I simply lacked the time to fully debug it or find the root cause, so it has been like this for a long while.

[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   182 MBytes  1.53 Gbits/sec
[  4]   1.00-2.00   sec   183 MBytes  1.54 Gbits/sec
[  4]   2.00-3.00   sec   183 MBytes  1.54 Gbits/sec
[  4]   3.00-4.00   sec   183 MBytes  1.54 Gbits/sec
[  4]   4.00-5.00   sec   184 MBytes  1.54 Gbits/sec
[  4]   5.00-6.00   sec   183 MBytes  1.54 Gbits/sec
[  4]   6.00-7.00   sec   182 MBytes  1.52 Gbits/sec
[  4]   7.00-8.00   sec   183 MBytes  1.53 Gbits/sec
[  4]   8.00-9.00   sec   183 MBytes  1.54 Gbits/sec
[  4]   9.00-10.00  sec   185 MBytes  1.55 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.79 GBytes  1.54 Gbits/sec  sender
[  4]   0.00-10.00  sec  1.79 GBytes  1.54 Gbits/sec  receiver

Originally I had an untagged management network and a trunk towards Unraid, where I run a bridged network with the VLANs configured on it. At first I thought the bridge in my Unraid setup was to blame, so I redid the interfaces and set up an unbridged, untagged network (tagged on the switch). Same result, still limited to 1.5 Gbit/s. Then I suspected the 10 GbE ports on my Netgear GS110EMX, so I tested with a direct (crossover) connection between the 10 GbE NICs. Same result, capped at 1.5 Gbit/s. After that I tested some new cables, again with the same result.

After a lot of further testing I can only conclude that it seems to be an issue in Unraid or something driver related. I tried different MTUs and settings, all resulting in 1.5 Gbit/s, and there are no high packet drop rates either.

So, has anyone else experienced issues with the Intel Ethernet Controller 10-Gigabit X540-AT2 (rev 01) in combination with Unraid? I'm considering getting a new NIC to debug this further, but I'd rather not spend the extra money if this is likely driver related. Any ideas or tips are welcome.

Best regards,
Sintenel

Edited November 17, 2021 by SiNtEnEl: solved, Windows driver issue, not Unraid.
Vr2Io Posted November 16, 2021 (edited)

Check what PCIe version and link width the NIC is running at, or simply re-seat the NIC or try a different PCIe slot.

Edited November 16, 2021 by Vr2Io
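For reference, one quick way to confirm the negotiated PCIe link on the Unraid side is lspci. A minimal sketch, assuming the X540 sits at PCI address 0000:01:00.0 (substitute whatever address your own lspci or syslog shows):

# find the NIC's PCI address
lspci | grep -i ethernet

# LnkCap = what the card supports, LnkSta = what was actually negotiated
lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'

A 5.0 GT/s x8 link (roughly 32 Gb/s) is plenty for 10 GbE; a link that trained at x1 or 2.5 GT/s would point at the slot rather than the driver.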
SiNtEnEl Posted November 16, 2021 Author

Nov 15 21:12:00 UnNASty kernel: ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
Nov 15 21:12:00 UnNASty kernel: ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12 XDP Queue count = 0
Nov 15 21:12:00 UnNASty kernel: ixgbe 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
Nov 15 21:12:00 UnNASty kernel: ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection

It was already in an x8 slot, and I tried a different x8 slot as well. No joy, sadly.
Vr2Io Posted November 16, 2021

It seems like a network issue rather than a NIC issue. Please also check the other end, especially whether any firewall / antivirus software is limiting the speed, and make sure the MTU matches on both ends.
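A quick way to verify the MTU really matches end to end is a ping with the don't-fragment flag set: with jumbo frames (MTU 9000) configured everywhere, an 8972-byte payload (plus 28 bytes of headers) should pass unfragmented. A minimal sketch, with 10.0.0.10 standing in for the other host's address:

# on the Unraid / Linux side
ip link show eth0                    # shows the configured MTU
ping -M do -s 8972 -c 4 10.0.0.10    # -M do prohibits fragmentation

# on the Windows side
ping -f -l 8972 10.0.0.10

If these fail with "message too long" or "packet needs to be fragmented" while both hosts are set to MTU 9000, something in the path (switch, bridge or VLAN interface) is still at 1500.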
SiNtEnEl Posted November 17, 2021 Author

I'm using OPNsense for inter-VLAN routing, firewall and tunnels. No traffic was passing through it while testing (checked the logging, and also tested with OPNsense shut down). There are currently 6 VLANs on my network, mainly to separate management, Docker, trusted, guest, WLAN and IoT services from each other and from the outside.

- Tested with Windows 10 with firewall on: 1.55 Gbits/sec (old install)
- Tested MTU 1500 on both ends + Windows 11 clean: 1.54 Gbits/sec
- Tested MTU 9000 on both ends + Windows 11 clean: 1.55 Gbits/sec
- Tested on untagged network + Windows 11 clean: 1.55 Gbits/sec
- Tested on tagged VLAN20 network + Windows 11 clean: 1.55 Gbits/sec (VLAN on Unraid)
- Tested on tagged VLAN20 network + Windows 11 clean: 1.55 Gbits/sec (bridged + VLAN on Unraid)
- Tested with Windows 11 with firewall off: 1.55 Gbits/sec (clean install, OS + NIC driver)
- Tested with a direct (crossover) connection: 1.55 Gbits/sec (to rule out a switch issue)

I checked with Wireshark and don't see anything strange there either.

- Validated PCIe x8 on both ends, and tested different slots as well.
- Tested multiple CAT6 / CAT7 cables.

I'm planning to test with Linux on the desktop side today, to rule out Windows driver issues. If that gives the same results, I'm going to test with a different Linux on my Unraid server. If that turns up nothing, I will start replacing NICs, because there is nothing left to test. But my feeling is that it's an issue with the ixgbe kernel module on the Unraid end, since more people are complaining about it on the forums.
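While the ixgbe module is under suspicion, the driver's view of the interface can also be checked from the Unraid console with ethtool; offload features such as TSO/LSO are a common culprit for odd throughput caps. A minimal sketch, assuming the 10 GbE interface is eth0:

ethtool eth0                                         # negotiated speed and duplex
ethtool -i eth0                                      # driver name/version (should report ixgbe)
ethtool -k eth0 | grep -E 'segmentation|offload'     # TSO/GSO/GRO offload state
ethtool -S eth0 | grep -iE 'error|drop|discard'      # error and drop counters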
SiNtEnEl Posted November 17, 2021 Author (edited)

OK, it seems my feeling was wrong, as it looks like it's something to do with the Windows driver. The same hardware / network configuration performs at the full 10 Gbit/s under Linux. I used an instance with the same kernel module / driver as well as a different one, and saw no issues at all. I will need to test various other Windows drivers for the X540-T2 to see how to get rid of the bottleneck.

So this mystery is solved. Sorry, I suspected Unraid in this case. Thank you @Vr2Io for thinking along in this process.

Edited November 17, 2021 by SiNtEnEl: forgot a thank you
SiNtEnEl Posted December 13, 2021 Author

Update: the issue on my end was caused by Large Send Offload (LSO) breaking in the Intel driver on Windows. Disabling Large Send Offload (LSO) on Windows resolved it for me.
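For anyone hitting the same cap: LSO can be switched off either in Device Manager under the NIC's Advanced properties, or from an elevated PowerShell prompt via the NetAdapter module. A minimal sketch, assuming the adapter is named "Ethernet" (check the actual name with Get-NetAdapter first):

# show adapters and their current LSO state
Get-NetAdapter
Get-NetAdapterLso -Name "Ethernet"

# disable LSO for IPv4 and IPv6 traffic (the link will briefly drop while the setting is applied)
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6

Re-running iperf3 afterwards should show whether the 1.5 Gbit/s cap is gone.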
Bender Seb Posted March 17, 2023

On 12/13/2021 at 2:20 PM, SiNtEnEl said:
Update: the issue on my end was caused by Large Send Offload (LSO) breaking in the Intel driver on Windows. Disabling Large Send Offload (LSO) on Windows resolved it for me.

Same issue here, but disabling LSO did not help; my NIC (TP-Link TX-401) still bottlenecks at 1.5 Gbit/s.
SiNtEnEl Posted March 26, 2023 Author

On 3/17/2023 at 10:02 AM, Bender Seb said:
Same issue here, but disabling LSO did not help; my NIC (TP-Link TX-401) still bottlenecks at 1.5 Gbit/s.

Best to test with iperf3 between Unraid and Windows in both directions, to see on which side the bottleneck is: Windows vs. Unraid.
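A minimal sketch of such a test, assuming iperf3 is installed on both machines and the Unraid box is reachable at 10.0.0.2 (substitute your own address):

# on the Unraid side, start a server
iperf3 -s

# on the Windows side, test both directions
iperf3 -c 10.0.0.2 -t 10         # Windows -> Unraid
iperf3 -c 10.0.0.2 -t 10 -R      # -R reverses the test: Unraid -> Windows
iperf3 -c 10.0.0.2 -P 4          # 4 parallel streams, to rule out a single-stream limit

If only one direction (or only single-stream traffic) is capped, that points at the sending side's driver or offload settings rather than at the cable, switch or the other host.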