pish180 Posted February 5, 2020

Hi, I've recently decided to switch over to Unraid from FreeNAS and I've run into a few issues along the way. My current problem is that I can't get 10G performance out of my dual-port 10G network card. I've been tinkering with settings and reading many of the other posts, but a lot of those threads were never resolved, so I'm not sure what to do.

Currently the most I can get when testing the link with iperf3 (installed via Nerd Tools) is an average of 3.16 Gb/s, which is honestly not acceptable for this hardware. I've tried several settings in the Tips and Tweaks plugin: switched and tested flow control, offloading, buffer sizes, etc. The only change was that disabling offloading dropped it to 2 Gb/s, so that's the wrong direction.

I was reading a more recent thread where people mention speed drops after updating to the latest version. I started on this version, but could there have been an update that disrupted this? Any help would be appreciated. Thanks!

Info:
Dell C2100 server
Unraid version 6.8.2
eth0 (only NIC currently used): add-in dual-port 10G mezzanine card -> DAC -> UniFi switch

ethtool:

root@Zeus:~# ethtool eth0
Settings for eth0:
        Supported ports: [ FIBRE ]
        Supported link modes:   10000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

lspci:

root@Zeus:~# lspci | grep -i network
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

ifconfig (quite certain the drops are from trying to run iperf in the other direction, unsuccessfully):

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.241  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 60:eb:69:dc:4d:0e  txqueuelen 1000  (Ethernet)
        RX packets 75678149  bytes 113964418192 (106.1 GiB)
        RX errors 0  dropped 1791  overruns 0  frame 0
        TX packets 1446891  bytes 268037153 (255.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

log:

Feb 4 23:23:10 Zeus kernel: Intel(R) 10GbE PCI Express Linux Network Driver - version 5.6.5
Feb 4 23:23:10 Zeus kernel: Copyright(c) 1999 - 2019 Intel Corporation.
Feb 4 23:23:10 Zeus kernel: cryptd: max_cpu_qlen set to 1000
Feb 4 23:23:10 Zeus kernel: SSE version of gcm_enc/dec engaged.
Feb 4 23:23:10 Zeus kernel: ipmi_si IPI0001:00: Found new BMC (man_id: 0x001c4c, prod_id: 0x5399, dev_id: 0x20)
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0 eth0: mixed HW and IP checksum settings.
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: added PHC on eth0
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Linux Driver
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: eth0: (PCIe:2.5GT/s:Width x4)
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0 eth0: MAC: 60:eb:69:c5:d8:3e
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: eth0: PBA No: FFFFFF-0FF
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: LRO is disabled
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: MSI-X vectors supported: 1, no of cores: 24, max_msix_vectors: -1
Feb 4 23:23:10 Zeus kernel: mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 32
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: iomem(0x0000000078ac0000), mapped(0x00000000c63fdef4), size(16384)
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: ioport(0x0000000000008000), size(256)
Feb 4 23:23:10 Zeus kernel: kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround
Feb 4 23:23:10 Zeus kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
Feb 4 23:23:10 Zeus kernel: IPMI SSIF Interface driver
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: Allocated physical memory: size(1824 kB)
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: Current Controller Queue Depth(3640),Max Controller Queue Depth(3712)
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: Scatter Gather Elements per IO(128)
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0: Multiqueue Enabled: Rx Queue count = 24, Tx Queue count = 24 XDP Queue count = 0
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x4 link at 0000:00:03.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0 eth1: MAC: 2, PHY: 14, SFP+: 3, PBA No: FFFFFF-0FF
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0: 60:eb:69:dc:4d:0e
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0 eth1: Enabled Features: RxQ: 24 TxQ: 24 FdirHash
Feb 4 23:23:10 Zeus kernel: ixgbe 0000:02:00.0 eth1: Intel(R) 10 Gigabit Network Connection
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1 eth2: mixed HW and IP checksum settings.
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1: added PHC on eth2
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1: Intel(R) Gigabit Ethernet Linux Driver
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1: eth2: (PCIe:2.5GT/s:Width x4)
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1 eth2: MAC: 60:eb:69:c5:d8:3f
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1: eth2: PBA No: FFFFFF-0FF
Feb 4 23:23:10 Zeus kernel: igb 0000:05:00.1: LRO is disabled
Feb 4 23:23:10 Zeus kernel: mpt2sas_cm0: LSISAS2008: FWVersion(11.00.00.00), ChipRevision(0x02), BiosVersion(07.21.00.00)
bonienl Posted February 5, 2020

I don't have Intel 10G NICs, but Asus ones (Aquantia AQC107). Some quick tests between my two Unraid servers (6.8.2):

Default MTU (1500 bytes):

# iperf3 -i0 -t20 -c 10.0.101.11
Connecting to host 10.0.101.11, port 5201
[  5] local 10.0.101.12 port 45694 connected to 10.0.101.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-20.00  sec  21.7 GBytes  9.34 Gbits/sec    0    314 KBytes

Jumbo frames (9198 bytes):

# iperf3 -i0 -t20 -c 10.0.101.11
Connecting to host 10.0.101.11, port 5201
[  5] local 10.0.101.12 port 45690 connected to 10.0.101.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-20.00  sec  22.9 GBytes  9.83 Gbits/sec    0    429 KBytes
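[Editor's note: to reproduce the jumbo-frame comparison above, the MTU has to match on both endpoints and on the switch. Unraid normally sets this under Settings -> Network Settings; the following is only a rough CLI sketch for a temporary test, assuming your client and UniFi switch are configured to accept jumbo frames:]

# Temporary test only; repeat on both hosts, revert with "mtu 1500"
ip link set eth0 mtu 9000
# Verify the path really passes jumbo frames (8972 payload + 28 header bytes = 9000)
ping -M do -s 8972 192.168.10.241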
JorgeB Posted February 5, 2020

53 minutes ago, pish180 said:
    I was reading a more recent thread where people mention speed drops after updating to the latest version.

There are some users complaining of an SMB slowdown, but iperf should still give full speed, and until it does actual transfers won't be faster either; at best they match the iperf results.
Vr2Io Posted February 6, 2020

I can't remember the results on previous Unraid versions, it's been a long time since I tested that. BTW, on 6.8.2 I got 7.75 Gb/s with a single stream and 9.41 Gb/s with two streams (-P 2). Many factors can affect the result, e.g. the NIC settings on both ends, CPU clock speed, and so on. I notice your NIC is running at PCIe x4 and there are some dropped frames in your records.

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.02 GBytes  7.75 Gbits/sec    0   sender
[  4]   0.00-10.00  sec  9.02 GBytes  7.75 Gbits/sec        receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  5.47 GBytes  4.69 Gbits/sec    1   sender
[  4]   0.00-10.00  sec  5.46 GBytes  4.69 Gbits/sec        receiver
[  6]   0.00-10.00  sec  5.49 GBytes  4.72 Gbits/sec    0   sender
[  6]   0.00-10.00  sec  5.49 GBytes  4.72 Gbits/sec        receiver
[SUM]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec    1   sender
[SUM]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec        receiver

Server: Intel 82599ES (Windows)
Client: ConnectX-3 (Unraid)
Cable: Fiber
pish180 Posted February 6, 2020

3 hours ago, Benson said:
    BTW, on 6.8.2 I got 7.75 Gb/s with a single stream and 9.41 Gb/s with two streams (-P 2).

I ran it with the -P 2 option and it effectively doubled the throughput to 6.29 Gb/s, but I'm not sure what that means... The other end, the switch, and the cables between everything are all 100% good.
bonienl Posted February 6, 2020

19 minutes ago, pish180 said:
    I ran it with the -P 2 option and it effectively doubled the throughput to 6.29 Gb/s, but I'm not sure what that means...

It means there is latency on the link, which limits the throughput of a single stream. Try with -P 4 (= 4 concurrent streams) and see if you come closer to the link capacity.
pish180 Posted February 6, 2020

Latency? Caused by what? Bad hardware? It's a vanilla install with a few plugins and nothing running. My dual Xeon CPUs sit at 0-5% usage. The network link has nothing else going on either, and the remote computer I'm testing with is a total of 5 feet away. FreeNAS had ZERO issues maxing out the 10G link with the same host.

Four sessions put it up close to where it should be: 8.52 Gb/s.

So what can be done, in your opinion, to solve the problem?
bonienl Posted February 6, 2020

9 minutes ago, pish180 said:
    So what can be done, in your opinion, to solve the problem?

Starting multiple streams shows your link is capable of higher speeds. Latency can also come from slow acknowledgements by the receiver. You can test this by increasing the window size, e.g.:

iperf3 -i0 -t20 -c <server ip> -w5m

The above test uses a window size of 5 MB; try different values to see the effect.
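[Editor's note: for context on why window size matters, a single TCP stream can only keep one window of unacknowledged data in flight, so its ceiling is window size divided by round-trip time. A rough sketch of the arithmetic (the RTT value is illustrative; measure your own):]

# Bandwidth-delay product: window needed = bandwidth x RTT
# 10 Gb/s at 0.5 ms RTT: 10,000,000,000 bit/s x 0.0005 s = 5,000,000 bits, about 625 KB
# If the effective window is smaller than that, throughput caps at window / RTT.
ping -c 10 192.168.10.241   # measure the actual RTT between the two hosts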
bonienl Posted February 6, 2020

You can also test the opposite direction and compare:

iperf3 -i0 -t20 -c <server ip> -w5m -R
pish180 Posted February 6, 2020

The 1st test = 3.82 Gb/s. The 2nd test = 1.14 Gb/s. The -R option really slows things down no matter which window size I use, generally to under 1 Gb/s.
pish180 Posted February 6, 2020

I'm going to try another 10G NIC to see if I get the same results.
pish180 Posted February 6, 2020

So it's definitely not the NIC or the cables. FreeNAS (FreeBSD is the underlying OS, I believe) was easily able to saturate 100% of the 10G link with 100% the same hardware. Using the other NIC (the one that was successfully transferring at 10G speeds on FreeNAS) I'm still in the same bandwidth range running a standard iperf3 -c <host>: result = 3.2 Gb/s.

I'm certain the chipset is different from the dual-port 10G NIC (previous tests), but Unraid lists this controller as an Intel 82599ES 10-Gigabit SFI/SFP+ ... (rev 01), which is the same as the dual-port NIC. Both are using the same driver, ixgbe v5.6.5. Not sure if there is another driver, or perhaps something isn't right with how Linux is using the NIC. I see so many posts in the Unraid forums about slow link speeds that I'm really wondering whether this driver works properly with an SFP+ DAC. Honestly not sure, but I CAN positively say it's not a hardware problem at this point.
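[Editor's note: on the SFP+/DAC theory, the ixgbe module exposes an allow_unsupported_sfp parameter (it appears in the modinfo output posted further down). It normally governs whether a module is accepted at all rather than how fast it runs, so this is a long shot, but reloading the driver with it set is a cheap experiment. A sketch, assuming local console access since the 10G link drops while the module is unloaded:]

# CAUTION: run from the local console, not over the 10G link itself
modprobe -r ixgbe
modprobe ixgbe allow_unsupported_sfp=1,1   # one value per 82599 port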
pish180 Posted February 7, 2020

On 2/5/2020 at 9:40 PM, Benson said:
    Many factors can affect the result, e.g. the NIC settings on both ends, CPU clock speed, and so on. I notice your NIC is running at PCIe x4 and there are some dropped frames in your records.

Curious... Digging around some more. I don't know if this will help:

root@Zeus:~# cat /proc/interrupts
      CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 CPU8 CPU9 CPU10 CPU11 CPU12 CPU13 CPU14 CPU15 CPU16 CPU17 CPU18 CPU19 CPU20 CPU21 CPU22 CPU23
  0: 140 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 2-edge timer
  8: 0 0 0 0 0 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 8-edge rtc0
  9: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 9-fasteoi acpi
 18: 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 18-fasteoi i801_smbus
 20: 0 0 0 0 1585 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 20-fasteoi ehci_hcd:usb1
 21: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 21-fasteoi uhci_hcd:usb5, uhci_hcd:usb8
 22: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 22-fasteoi uhci_hcd:usb4, uhci_hcd:usb7
 23: 0 0 0 0 0 281484 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  IO-APIC 23-fasteoi ehci_hcd:usb2, uhci_hcd:usb3, uhci_hcd:usb6
 24: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  DMAR-MSI 0-edge dmar0
 25: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 2621440-edge eth1
 26: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 17067 0 0 0 0 0 0 0 0 0  PCI-MSI 2621441-edge eth1-TxRx-0
 27: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 512000-edge ahci[0000:00:1f.2]
 32: 0 0 0 0 0 0 0 0 0 0 357015 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 524288-edge mpt2sas0-msix0
 33: 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1048576-edge eth0-TxRx-0
 34: 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1048577-edge eth0-TxRx-1
 35: 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0  PCI-MSI 1048578-edge eth0-TxRx-2
 36: 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0  PCI-MSI 1048579-edge eth0-TxRx-3
 37: 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0  PCI-MSI 1048580-edge eth0-TxRx-4
 38: 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0  PCI-MSI 1048581-edge eth0-TxRx-5
 39: 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0  PCI-MSI 1048582-edge eth0-TxRx-6
 40: 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0  PCI-MSI 1048583-edge eth0-TxRx-7
 41: 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0  PCI-MSI 1048584-edge eth0-TxRx-8
 42: 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0 0  PCI-MSI 1048585-edge eth0-TxRx-9
 43: 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1 0  PCI-MSI 1048586-edge eth0-TxRx-10
 44: 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0 1  PCI-MSI 1048587-edge eth0-TxRx-11
 45: 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1048588-edge eth0-TxRx-12
 46: 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1048589-edge eth0-TxRx-13
 47: 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0 0  PCI-MSI 1048590-edge eth0-TxRx-14
 48: 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0 0  PCI-MSI 1048591-edge eth0-TxRx-15
 49: 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0 0  PCI-MSI 1048592-edge eth0-TxRx-16
 50: 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0 0  PCI-MSI 1048593-edge eth0-TxRx-17
 51: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0 0  PCI-MSI 1048594-edge eth0-TxRx-18
 52: 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0 0  PCI-MSI 1048595-edge eth0-TxRx-19
 53: 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0 0  PCI-MSI 1048596-edge eth0-TxRx-20
 54: 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0 0  PCI-MSI 1048597-edge eth0-TxRx-21
 55: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728 0  PCI-MSI 1048598-edge eth0-TxRx-22
 56: 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 16728  PCI-MSI 1048599-edge eth0-TxRx-23
 57: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1048600-edge eth0
 59: 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0  PCI-MSI 1050624-edge
 60: 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0  PCI-MSI 1050625-edge
 61: 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0  PCI-MSI 1050626-edge
 62: 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0  PCI-MSI 1050627-edge
 63: 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0  PCI-MSI 1050628-edge
 64: 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0  PCI-MSI 1050629-edge
 65: 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0  PCI-MSI 1050630-edge
 66: 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0  PCI-MSI 1050631-edge
 67: 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1  PCI-MSI 1050632-edge
 68: 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1050633-edge
 69: 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1050634-edge
 70: 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1050635-edge
 71: 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1050636-edge
 72: 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1050637-edge
 73: 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0 0  PCI-MSI 1050638-edge
 74: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0 0  PCI-MSI 1050639-edge
 75: 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0 0  PCI-MSI 1050640-edge
 76: 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0 0  PCI-MSI 1050641-edge
 77: 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0 0  PCI-MSI 1050642-edge
 78: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0 0  PCI-MSI 1050643-edge
 79: 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0 0  PCI-MSI 1050644-edge
 80: 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0 0  PCI-MSI 1050645-edge
 81: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192 0  PCI-MSI 1050646-edge
 82: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 192  PCI-MSI 1050647-edge
 85: 114189 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0  PCI-MSI 1572864-edge eth4-TxRx-0
 86: 0 169860 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0  PCI-MSI 1572865-edge eth4-TxRx-1
 87: 0 0 216488 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0  PCI-MSI 1572866-edge eth4-TxRx-2
 88: 0 0 0 102416 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0  PCI-MSI 1572867-edge eth4-TxRx-3
 89: 0 0 0 0 128894 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0  PCI-MSI 1572868-edge eth4-TxRx-4
 90: 0 0 0 0 0 81180 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0  PCI-MSI 1572869-edge eth4-TxRx-5
 91: 0 0 0 0 0 0 136703 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0  PCI-MSI 1572870-edge eth4-TxRx-6
 92: 0 0 0 0 0 0 0 106804 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0  PCI-MSI 1572871-edge eth4-TxRx-7
 93: 0 0 0 0 0 0 0 0 57671 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1  PCI-MSI 1572872-edge eth4-TxRx-8
 94: 1 0 0 0 0 0 0 0 0 71797 0 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1572873-edge eth4-TxRx-9
 95: 0 1 0 0 0 0 0 0 0 0 57061 0 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1572874-edge eth4-TxRx-10
 96: 0 0 1 0 0 0 0 0 0 0 0 112136 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1572875-edge eth4-TxRx-11
 97: 0 0 0 1 0 0 0 0 0 0 0 0 63151 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1572876-edge eth4-TxRx-12
 98: 0 0 0 0 1 0 0 0 0 0 0 0 0 63165 0 0 0 0 0 0 0 0 0 0  PCI-MSI 1572877-edge eth4-TxRx-13
 99: 0 0 0 0 0 1 0 0 0 0 0 0 0 0 50261 0 0 0 0 0 0 0 0 0  PCI-MSI 1572878-edge eth4-TxRx-14
100: 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 116555 0 0 0 0 0 0 0 0  PCI-MSI 1572879-edge eth4-TxRx-15
101: 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 154781 0 0 0 0 0 0 0  PCI-MSI 1572880-edge eth4-TxRx-16
102: 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 40547 0 0 0 0 0 0  PCI-MSI 1572881-edge eth4-TxRx-17
103: 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 79458 0 0 0 0 0  PCI-MSI 1572882-edge eth4-TxRx-18
104: 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 40797 0 0 0 0  PCI-MSI 1572883-edge eth4-TxRx-19
105: 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 36020 0 0 0  PCI-MSI 1572884-edge eth4-TxRx-20
106: 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 56664 0 0  PCI-MSI 1572885-edge eth4-TxRx-21
107: 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 77501 0  PCI-MSI 1572886-edge eth4-TxRx-22
108: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 29436  PCI-MSI 1572887-edge eth4-TxRx-23
109: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0  PCI-MSI 1572888-edge eth4
111: 0 0 0 0 0 0 0 0 0 0 0 304584 0 0 0 0 0 0 0 0 0 0 0 0  PCI-MSI 2097152-edge mpt2sas1-msix0
NMI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Non-maskable interrupts
LOC: 1692467 1710266 1673185 1703803 1598424 1811999 1799898 1787060 1767003 1724495 1709871 1688585 1688620 1693026 1675654 1665376 1712549 1735945 1726088 1709652 1720864 1739209 1707723 1749515  Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Spurious interrupts
PMI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Performance monitoring interrupts
IWI: 27268 19294 18211 17757 20987 17667 14834 14559 14011 14051 16412 15789 17586 17250 16644 16235 16292 15237 14571 14213 14085 13853 18696 13708  IRQ work interrupts
RTR: 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  APIC ICR read retries
RES: 535374 177067 102555 58015 41320 15504 7275 4434 3863 7388 5865 3934 2602 3481 2384 4920 14387 2416 2895 2407 2029 4267 6104 2352  Rescheduling interrupts
CAL: 35870 34629 37998 37205 33307 37872 8009 8126 8002 7979 7659 7947 34046 29983 32587 31342 32951 32059 3155 3438 3546 3540 3062 3177  Function call interrupts
TLB: 77 89 88 84 101 54 184 151 169 152 144 151 55 114 79 106 102 68 117 127 84 99 167 132  TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Threshold APIC interrupts
DFR: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Deferred Error APIC interrupts
MCE: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Machine check exceptions
MCP: 105 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106 106  Machine check polls
HYP: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Hypervisor callback interrupts
HRE: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Hyper-V reenlightenment interrupts
HVS: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Hyper-V stimer0 interrupts
ERR: 0
MIS: 0
PIN: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Posted-interrupt notification event
NPI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Nested posted-interrupt event
PIW: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  Posted-interrupt wakeup event
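[Editor's note: one thing worth checking on a dual-socket box like the C2100 is NUMA locality. The table above shows the eth4 queues spread across all 24 CPUs, so a single stream's interrupt handling, softirq work, and the iperf3 process itself can land on different sockets. Pinning the test to one CPU, ideally on the NIC's own NUMA node, is a quick way to rule that out; a sketch, with CPU 0 as an arbitrary choice:]

# Which NUMA node owns the NIC (may print -1 if the platform doesn't report it)
cat /sys/class/net/eth0/device/numa_node
# Pin the iperf3 server to CPU 0, then re-run the client test
taskset -c 0 iperf3 -s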
Vr2Io Posted February 7, 2020

Since you got full speed on FreeNAS, I believe the issue is with Unraid 6.8.2. If my memory is correct, previous Unraid versions shouldn't have this issue in an iperf3 test. Would you be willing to test a previous Unraid version? 😁
pish180 Posted February 7, 2020

top -d 1 while running the iperf3 command. I ran the command twice, and you can see the increase in the ksoftirqd process. Is this normal?
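[Editor's note: rising ksoftirqd time means packet processing is being deferred to the kernel's softirq threads; if NET_RX work piles up on one core, that core can bottleneck a single stream even while overall CPU usage looks idle. One way to watch where that work lands while iperf3 runs:]

# Per-CPU NET_RX/NET_TX softirq counters, refreshed every second
watch -n1 'grep -E "NET_RX|NET_TX" /proc/softirqs'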
pish180 Posted February 7, 2020

Looking at the release notes: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.8.2-x86_64.txt

It seems the ixgbe driver was updated in version 6.8.0 (2019-12-10).

@Benson What version are you on? And what driver are you using?
Vr2Io Posted February 7, 2020

Since my test (ConnectX-3, not Intel) also didn't get a full-speed result in iperf3, I don't think it is a driver issue. But to be fair, this needs further confirmation by downgrading the Unraid OS.
Vr2Io Posted February 7, 2020

5 minutes ago, pish180 said:
    @Benson What version are you on? And what driver are you using?

Same 6.8.2; my Intel NIC runs in Windows.
pish180 Posted February 7, 2020

Now, where to find an older version...
Vr2Io Posted February 7, 2020

11 minutes ago, pish180 said:
    Now, where to find an older version...

For example: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.0-x86_64.zip
pish180 Posted February 7, 2020

12 minutes ago, Benson said:
    For example: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.0-x86_64.zip

Thanks! I feel like they should have directory listing enabled... not sure why they don't.
pish180 Posted February 7, 2020

Adding this up here as an artifact:

root@Zeus:/etc/modprobe.d# modinfo ixgbe
filename:       /lib/modules/4.19.98-Unraid/updates/drivers/net/ethernet/intel/ixgbe/ixgbe.ko.xz
version:        5.6.5
license:        GPL
description:    Intel(R) 10GbE PCI Express Linux Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     EAC7860CAE7CF949DE3DAC6
alias:          pci:v00008086d000015E5sv*sd*bc*sc*i*
alias:          pci:v00008086d000015E4sv*sd*bc*sc*i*
alias:          pci:v00008086d000015CEsv*sd*bc*sc*i*
alias:          pci:v00008086d000015CCsv*sd*bc*sc*i*
alias:          pci:v00008086d000015CAsv*sd*bc*sc*i*
alias:          pci:v00008086d000015C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C4sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C3sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C2sv*sd*bc*sc*i*
alias:          pci:v00008086d000015AEsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ADsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ACsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ABsv*sd*bc*sc*i*
alias:          pci:v00008086d000015B0sv*sd*bc*sc*i*
alias:          pci:v00008086d000015AAsv*sd*bc*sc*i*
alias:          pci:v00008086d000015D1sv*sd*bc*sc*i*
alias:          pci:v00008086d00001563sv*sd*bc*sc*i*
alias:          pci:v00008086d00001560sv*sd*bc*sc*i*
alias:          pci:v00008086d00001558sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001557sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias:          pci:v00008086d00001528sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*
alias:          pci:v00008086d00001529sv*sd*bc*sc*i*
alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*
alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*
alias:          pci:v00008086d00001514sv*sd*bc*sc*i*
alias:          pci:v00008086d00001507sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*
alias:          pci:v00008086d00001517sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*
alias:          pci:v00008086d00001508sv*sd*bc*sc*i*
alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*
alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*
depends:
retpoline:      Y
name:           ixgbe
vermagic:       4.19.98-Unraid SMP mod_unload
parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
parm:           InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)
parm:           MQ:Disable or enable Multiple Queues, default 1 (array of int)
parm:           RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)
parm:           VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable (1 queue) 2-16 enable (default=8) (array of int)
parm:           max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)
parm:           VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)
parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)
parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)
parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)
parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)
parm:           LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)
parm:           LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)
parm:           FdirPballoc:Flow Director packet buffer allocation level: 1 = 8k hash filters or 2k perfect filters 2 = 16k hash filters or 4k perfect filters 3 = 32k hash filters or 8k perfect filters (array of int)
parm:           AtrSampleRate:Software ATR Tx packet sample rate (array of int)
parm:           MDD:Malicious Driver Detection: (0,1), default 1 = on (array of int)
parm:           LRO:Large Receive Offload (0,1), default 0 = off (array of int)
parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)
parm:           dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)
parm:           vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)
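[Editor's note: those parm entries are knobs you can experiment with; interrupt moderation (InterruptThrottleRate) in particular is a common suspect for single-stream results. A sketch of reloading the driver with a different setting (the values are illustrative, one per port, and the link drops during the reload, so use the local console):]

# CAUTION: takes the ixgbe ports down briefly; run from the local console
modprobe -r ixgbe
modprobe ixgbe InterruptThrottleRate=0,0   # 0 = throttling off, per the parm list above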
pish180 Posted February 7, 2020

Still having the issue with that version as well! Maybe someone from the Limetech team can chime in and provide some insight?

root@Tower:~# modinfo ixgbe
filename:       /lib/modules/4.18.8-unRAID/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko.xz
version:        5.1.0-k
license:        GPL
description:    Intel(R) 10 Gigabit PCI Express Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     78836A7EE2A82CE71119B7B
alias:          pci:v00008086d000015E5sv*sd*bc*sc*i*
alias:          pci:v00008086d000015E4sv*sd*bc*sc*i*
alias:          pci:v00008086d000015CEsv*sd*bc*sc*i*
alias:          pci:v00008086d000015C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C4sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C3sv*sd*bc*sc*i*
alias:          pci:v00008086d000015C2sv*sd*bc*sc*i*
alias:          pci:v00008086d000015AEsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ACsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ADsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ABsv*sd*bc*sc*i*
alias:          pci:v00008086d000015B0sv*sd*bc*sc*i*
alias:          pci:v00008086d000015AAsv*sd*bc*sc*i*
alias:          pci:v00008086d000015D1sv*sd*bc*sc*i*
alias:          pci:v00008086d00001563sv*sd*bc*sc*i*
alias:          pci:v00008086d00001560sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001557sv*sd*bc*sc*i*
alias:          pci:v00008086d00001558sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias:          pci:v00008086d00001528sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*
alias:          pci:v00008086d00001529sv*sd*bc*sc*i*
alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*
alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*
alias:          pci:v00008086d00001514sv*sd*bc*sc*i*
alias:          pci:v00008086d00001507sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*
alias:          pci:v00008086d00001517sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*
alias:          pci:v00008086d00001508sv*sd*bc*sc*i*
alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*
alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*
depends:        mdio
retpoline:      Y
intree:         Y
name:           ixgbe
vermagic:       4.18.8-unRAID SMP mod_unload
parm:           max_vfs:Maximum number of virtual functions to allocate per physical function - default is zero and maximum value is 63. (Deprecated) (uint)
parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599-based adapters (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)

Driver:

root@Tower:~# ethtool -i eth3
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x00012425
expansion-rom-version:
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
JorgeB Posted February 7, 2020

6 minutes ago, pish180 said:
    Maybe someone from the Limetech team can chime in and provide some insight?

Not much LT can do on their side if iperf is slow, other than possibly trying the out-of-tree driver, but I believe there were other problems with that one. Anyone else using the same Intel NIC who could run a test? This is what I get with a single stream and Mellanox NICs:

D:\temp\iperf>iperf3 -c 10.0.0.7
Connecting to host 10.0.0.7, port 5201
[  4] local 10.0.0.50 port 59456 connected to 10.0.0.7 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.03 GBytes  8.84 Gbits/sec
[  4]   1.00-2.00   sec  1.05 GBytes  9.03 Gbits/sec
[  4]   2.00-3.00   sec  1.05 GBytes  9.05 Gbits/sec
[  4]   3.00-4.00   sec  1.04 GBytes  8.94 Gbits/sec
[  4]   4.00-5.00   sec  1.04 GBytes  8.93 Gbits/sec
[  4]   5.00-6.00   sec  1.04 GBytes  8.97 Gbits/sec
[  4]   6.00-7.00   sec  1.01 GBytes  8.66 Gbits/sec
[  4]   7.00-8.00   sec  1.05 GBytes  9.06 Gbits/sec
[  4]   8.00-9.00   sec  1.05 GBytes  9.04 Gbits/sec
[  4]   9.00-10.00  sec  1.04 GBytes  8.91 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  10.4 GBytes  8.94 Gbits/sec  sender
[  4]   0.00-10.00  sec  10.4 GBytes  8.94 Gbits/sec  receiver

Reversed (receive from server):

D:\temp\iperf>iperf3 -c 10.0.0.7 -R
Connecting to host 10.0.0.7, port 5201
Reverse mode, remote host 10.0.0.7 is sending
[  4] local 10.0.0.50 port 59467 connected to 10.0.0.7 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.09 GBytes  9.36 Gbits/sec
[  4]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec
[  4]   2.00-3.00   sec  1.09 GBytes  9.36 Gbits/sec
[  4]   3.00-4.00   sec  1.12 GBytes  9.59 Gbits/sec
[  4]   4.00-5.00   sec  1.08 GBytes  9.29 Gbits/sec
[  4]   5.00-6.00   sec  1.11 GBytes  9.58 Gbits/sec
[  4]   6.00-7.00   sec  1.10 GBytes  9.48 Gbits/sec
[  4]   7.00-8.00   sec  1.10 GBytes  9.42 Gbits/sec
[  4]   8.00-9.00   sec  1.10 GBytes  9.43 Gbits/sec
[  4]   9.00-10.00  sec  1.10 GBytes  9.47 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.0 GBytes  9.44 Gbits/sec    1   sender
[  4]   0.00-10.00  sec  11.0 GBytes  9.44 Gbits/sec        receiver
bonienl Posted February 7, 2020

1 hour ago, pish180 said:
    Still having the issue with that version as well!

What version exactly did you test? Older Unraid versions use the in-tree Linux kernel driver (the 5.1.0-k you show above), while Unraid 6.8.2 uses Intel's own out-of-tree driver (5.6.5).

1 hour ago, pish180 said:
    Maybe someone from the Limetech team can chime in and provide some insight?

Limetech does not develop these ethernet drivers themselves. Your testing shows the link capacity is present when concurrent streams are used.