10gbit speeds?


MattD


Hello everyone! I'm trying to set up my network (new Unraid install) and can't seem to locate the bottleneck problem I'm having. I've attached an image of my current setup. For some reason I cannot hit 10gbit transfer speeds. I know there is some overhead, but I can't even come close; the most I've ever seen while testing was 200-300mbit up or down. I created a temp share with all files going to the cache. I have even tried a direct connection from my PC to the server itself and get the same results. I have a feeling it might be the server itself not being able to move that much data at once.

 

If anyone has any ideas I can look over to try and resolve this, I would be greatly in your debt :)

 

 

(Attached image: Capture.JPG, a diagram of the current setup)


The M.2 cache drive should be capable of delivering higher speeds than that.

Use something lower-level than Samba for testing first:
https://datapacket.com/blog/10gbps-network-bandwidth-test-iperf-tutorial/

Use iperf to test that the link is actually capable of full 10gbit speed.

If not, eliminating the switch is fairly easy (use a crossover cable, and set a static IP on the PC in the same subnet).
Rerun the iperf test. If the speed is rectified without the switch in place, several things can come up:

I've heard that these MikroTik devices have a "dual OS" and that sometimes RouterOS works faster than SwitchOS, and vice versa, depending on the internal hardware of the device itself.

If the link does test at full 10gbit speed, then you are looking at Samba performance tuning and ensuring both ends are using SMB3 and not SMB2.
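For reference, a minimal sketch of that round of testing (assuming the server's 10gbit interface is 192.168.11.50, as in the results posted below):

On the Unraid box, start a listener:
root@Tower:~# iperf3 -s

Then run the client from the Windows PC, in both directions:
C:\iperf>iperf3 -c 192.168.11.50
C:\iperf>iperf3 -c 192.168.11.50 -R

The -R flag reverses the test so the server sends to the PC. For the no-switch test, give the PC a static IP in the same subnet (e.g. 192.168.11.5, netmask 255.255.255.0), cable it straight to the server, and rerun both commands.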


Ok, here are the iperf results. I ran the test both with the setup I had prior and the way you suggested, with the switch eliminated; both setups gave the same results. The first result is the 10gbit connection between my PC and the Unraid box. The second result is the 1 gigabit connection.

 

C:\iperf>iperf3 -c 192.168.11.50
Connecting to host 192.168.11.50, port 5201
[  4] local 192.168.11.5 port 50483 connected to 192.168.11.50 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   292 MBytes  2.45 Gbits/sec
[  4]   1.00-2.00   sec   280 MBytes  2.35 Gbits/sec
[  4]   2.00-3.00   sec   276 MBytes  2.31 Gbits/sec
[  4]   3.00-4.00   sec   274 MBytes  2.30 Gbits/sec
[  4]   4.00-5.00   sec   280 MBytes  2.35 Gbits/sec
[  4]   5.00-6.00   sec   284 MBytes  2.38 Gbits/sec
[  4]   6.00-7.00   sec   285 MBytes  2.39 Gbits/sec
[  4]   7.00-8.00   sec   284 MBytes  2.38 Gbits/sec
[  4]   8.00-9.00   sec   285 MBytes  2.39 Gbits/sec
[  4]   9.00-10.00  sec   287 MBytes  2.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  2.76 GBytes  2.37 Gbits/sec                  sender
[  4]   0.00-10.00  sec  2.76 GBytes  2.37 Gbits/sec                  receiver

iperf Done.

 

C:\iperf>iperf3 -c 192.168.1.50
Connecting to host 192.168.1.50, port 5201
[  4] local 192.168.1.5 port 50489 connected to 192.168.1.50 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   109 MBytes   914 Mbits/sec
[  4]   1.00-2.00   sec   108 MBytes   909 Mbits/sec
[  4]   2.00-3.00   sec   110 MBytes   922 Mbits/sec
[  4]   3.00-4.00   sec   109 MBytes   917 Mbits/sec
[  4]   4.00-5.00   sec   110 MBytes   925 Mbits/sec
[  4]   5.00-6.00   sec   110 MBytes   920 Mbits/sec
[  4]   6.00-7.00   sec   110 MBytes   923 Mbits/sec
[  4]   7.00-8.00   sec   109 MBytes   917 Mbits/sec
[  4]   8.00-9.00   sec   110 MBytes   926 Mbits/sec
[  4]   9.00-10.00  sec   109 MBytes   914 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  receiver

iperf Done.

C:\iperf>

 

 

The 10gbit connection shows the same speed I get when I test moving files. Let me know your thoughts, thank you :)


 


I did 20 threads on this test, same results. It seems to be capped/bottlenecking at the 2gbit mark, unless my syntax is wrong... I used iperf3 -c 192.168.11.50 -t 20 -P 20

 

 

 

[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[  4]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[  6]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[  6]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[  8]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[  8]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 10]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 10]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 12]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 12]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 14]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 14]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 16]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 16]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 18]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 18]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 20]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 20]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 22]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 22]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 24]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 24]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 26]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 26]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 28]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 28]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 30]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 30]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 32]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 32]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 34]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 34]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 36]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 36]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 38]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 38]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 40]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 40]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[ 42]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  sender
[ 42]   0.00-20.08  sec   296 MBytes   124 Mbits/sec                  receiver
[SUM]   0.00-20.08  sec  5.79 GBytes  2.48 Gbits/sec                  sender
[SUM]   0.00-20.08  sec  5.79 GBytes  2.47 Gbits/sec                  receiver

iperf Done.

C:\iperf>

10 hours ago, MattD said:

I did 20 threads on this test, same results. It seems to be capped/bottlenecking at the 2gbit mark, unless my syntax is wrong... I used iperf3 -c 192.168.11.50 -t 20 -P 20

 

 

 


Push it higher. I had to go to 100 threads to hit 9.x Gbps, but at the same time, on 10 I hit 7.x Gbps, so you do have some other issue... How long are your Cat6 runs? Are they just Cat6 or 6a? Bends in any cables?
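For anyone reproducing this, -P sets the number of parallel streams and -t the duration in seconds, so stepping the count up looks like this (a sketch, using the same target IP as above):

C:\iperf>iperf3 -c 192.168.11.50 -t 20 -P 10
C:\iperf>iperf3 -c 192.168.11.50 -t 20 -P 50
C:\iperf>iperf3 -c 192.168.11.50 -t 20 -P 100

The [SUM] lines at the bottom of the output give the aggregate across all streams.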


You should be able to hit 10GbE speed, or close to it, with a single iperf thread; I get around 9Gbits. If not, a single transfer will never be that quick, since a transfer is usually no faster than a single iperf thread, unless you plan on doing multiple simultaneous transfers.

 

D:\temp\iperf>iperf3 -c 10.0.0.7
Connecting to host 10.0.0.7, port 5201
[  4] local 10.0.0.50 port 59456 connected to 10.0.0.7 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.03 GBytes  8.84 Gbits/sec
[  4]   1.00-2.00   sec  1.05 GBytes  9.03 Gbits/sec
[  4]   2.00-3.00   sec  1.05 GBytes  9.05 Gbits/sec
[  4]   3.00-4.00   sec  1.04 GBytes  8.94 Gbits/sec
[  4]   4.00-5.00   sec  1.04 GBytes  8.93 Gbits/sec
[  4]   5.00-6.00   sec  1.04 GBytes  8.97 Gbits/sec
[  4]   6.00-7.00   sec  1.01 GBytes  8.66 Gbits/sec
[  4]   7.00-8.00   sec  1.05 GBytes  9.06 Gbits/sec
[  4]   8.00-9.00   sec  1.05 GBytes  9.04 Gbits/sec
[  4]   9.00-10.00  sec  1.04 GBytes  8.91 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  10.4 GBytes  8.94 Gbits/sec                  sender
[  4]   0.00-10.00  sec  10.4 GBytes  8.94 Gbits/sec                  receiver

 

Reversed (receive from server):

D:\temp\iperf>iperf3 -c 10.0.0.7 -R
Connecting to host 10.0.0.7, port 5201
Reverse mode, remote host 10.0.0.7 is sending
[  4] local 10.0.0.50 port 59467 connected to 10.0.0.7 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.09 GBytes  9.36 Gbits/sec
[  4]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec
[  4]   2.00-3.00   sec  1.09 GBytes  9.36 Gbits/sec
[  4]   3.00-4.00   sec  1.12 GBytes  9.59 Gbits/sec
[  4]   4.00-5.00   sec  1.08 GBytes  9.29 Gbits/sec
[  4]   5.00-6.00   sec  1.11 GBytes  9.58 Gbits/sec
[  4]   6.00-7.00   sec  1.10 GBytes  9.48 Gbits/sec
[  4]   7.00-8.00   sec  1.10 GBytes  9.42 Gbits/sec
[  4]   8.00-9.00   sec  1.10 GBytes  9.43 Gbits/sec
[  4]   9.00-10.00  sec  1.10 GBytes  9.47 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.0 GBytes  9.44 Gbits/sec    1             sender
[  4]   0.00-10.00  sec  11.0 GBytes  9.44 Gbits/sec                  receiver

 

8 hours ago, 1812 said:

Push it higher. I had to go to 100 threads to hit 9.x Gbps, but at the same time, on 10 I hit 7.x Gbps, so you do have some other issue... How long are your Cat6 runs? Are they just Cat6 or 6a? Bends in any cables?

I did that as well with the same results; sorry, I should have posted that. The Cat6 cable is/was approx 35ft long, a straight shot from my PC to the server through a crawlspace, no major bends. I installed it personally so I know how it was run. I did take a different approach last night: I ditched the NICs and went with Mellanox fiber cards and an OM3 duplex fiber optic patch cable, with the same results, leading me to believe that it has to be my hardware. I don't know if the Dell R710 motherboard or its PCIe slots are even able to do 10gbit speeds.


 

 

8 hours ago, johnnie.black said:

You should be able to hit 10GbE speed, or close to it, with a single iperf thread; I get around 9Gbits. If not, a single transfer will never be that quick, since a transfer is usually no faster than a single iperf thread, unless you plan on doing multiple simultaneous transfers.

 



I thought the same thing. I've seen multiple YouTube videos of the same setup I have (different hardware) transferring a single file in Windows at almost 10gbit speeds.


Ok, pinned it down to being my personal PC. I moved the fiber card from my PC to my son's PC and now I can push 9.5gbit up and down. It makes no sense, since my hardware is better than his and much newer, but I guess it doesn't really come down to that in this case.

 

So that leaves just one thing: my Windows install/settings. Do you guys think it's worth doing a fresh install of Windows, or are there any settings I should be looking at?

 

Thanks guys.

3 hours ago, MattD said:

Ok, pinned it down to being my personal PC. I moved the fiber card from my PC to my son's PC and now I can push 9.5gbit up and down. It makes no sense, since my hardware is better than his and much newer, but I guess it doesn't really come down to that in this case.

So that leaves just one thing: my Windows install/settings. Do you guys think it's worth doing a fresh install of Windows, or are there any settings I should be looking at?

What motherboard? Which slot on the board? If it's consumer hardware, most likely one or more of the slots is gimped, since consumer CPUs don't have enough PCIe lanes to provide full, current-gen lanes to each slot.

 

Also, the reason we test with iperf is to more or less rule out OS tuning and configuration. If it works with iperf, the hardware is fine. If it DOESN'T work with iperf, something in the hardware is usually the bottleneck.
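One way to check what a slot is actually giving the card (a sketch, run from any Linux shell such as the Unraid console; the bus address 03:00.0 is just an example, use whatever the first command reports for your NIC):

root@Tower:~# lspci | grep -i ethernet
root@Tower:~# lspci -vv -s 03:00.0 | grep -i lnksta

LnkSta shows the negotiated PCIe speed and width; a link narrower or slower than the card's rating can hold a 10gbit NIC well below line rate.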

1 hour ago, MattD said:

Can a video card cause a bottleneck?

I don't think so. Two things you may check:

1. Is the power management setting set to power save, so the CPU turbo clock is limited to low?

2. The NIC's interrupt moderation setting; to my understanding, a huge number of interrupts will occur during a 10G transfer.
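Both of those can be checked from an elevated Windows prompt rather than digging through GUIs; a sketch, assuming the adapter is named "Ethernet" (substitute the name Get-NetAdapter reports):

PS C:\> powercfg /getactivescheme
PS C:\> Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "*Interrupt Moderation*"

The first shows the active power plan; the second lists the NIC's interrupt moderation setting, if the driver exposes it.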


Ok, I think I pinned it down to my OS (Windows 10). Here's what I did: ran a Cat6 crossover cable from my PC to the Unraid server and connected it with Asus XG-C100C adapters. Did the test in Windows, got about 2-3gbit in transfer.

 

Then I fired up my PC from a USB stick with Unraid on it, opened a terminal on both machines, and did the iperf test like I have been, and bam, got 9.4gbit on average. Did 5 tests and all were consistent.

 

root@Tower:~# iperf3 -c 192.168.11.50
Connecting to host 192.168.11.50, port 5201
[  4] local 192.168.11.55 port 58450 connected to 192.168.11.50 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.10 GBytes  9.42 Gbits/sec    0    441 KBytes      
[  4]   1.00-2.00   sec  1.10 GBytes  9.42 Gbits/sec    0    427 KBytes      
[  4]   2.00-3.00   sec  1.09 GBytes  9.41 Gbits/sec    0    427 KBytes      
[  4]   3.00-4.00   sec  1.10 GBytes  9.42 Gbits/sec    0    433 KBytes      
[  4]   4.00-5.00   sec  1.09 GBytes  9.41 Gbits/sec    0    427 KBytes      
[  4]   5.00-6.00   sec  1.10 GBytes  9.42 Gbits/sec    0    447 KBytes      
[  4]   6.00-7.00   sec  1.10 GBytes  9.42 Gbits/sec    0    444 KBytes      
[  4]   7.00-8.00   sec  1.09 GBytes  9.41 Gbits/sec    0    421 KBytes      
[  4]   8.00-9.00   sec  1.10 GBytes  9.42 Gbits/sec    0    436 KBytes      
[  4]   9.00-10.00  sec  1.09 GBytes  9.40 Gbits/sec    0    433 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec                  receiver

iperf Done.
root@Tower:~#

 

 

So I think that rules out a hardware issue, at least I think. I want to thank you all for the help so far. It's been a learning experience.

 

Time to collect my personal crap, do a format and fresh install of Windows, and see what that does :|

 

 

On 12/7/2019 at 2:58 PM, Benson said:

I don't think so. Two things you may check:

1. Is the power management setting set to power save, so the CPU turbo clock is limited to low?

2. The NIC's interrupt moderation setting; to my understanding, a huge number of interrupts will occur during a 10G transfer.

 

No power settings restricted; not sure about #2.

Man, this is a total bummer. Reinstalled Windows, updated, latest drivers, yada yada... still no good. Google and I have become best friends, to no end...

 

If going from Windows to the Unraid server, I get ~2.5gbit via iperf.

If going from Unraid to the Unraid server, I get 9.5gbit via iperf every time.

If going from an Ubuntu desktop to the Unraid server, I get 9.5gbit via iperf every time.

 

So it would seem my hardware can do it just fine, but Windows is holding me back somewhere. I've tried just about every setting I can find in Windows to adjust.

 

Anyone else have any thoughts for Windows?

On 12/11/2019 at 8:41 PM, MattD said:

Man, this is a total bummer. Reinstalled Windows, updated, latest drivers, yada yada... still no good. Google and I have become best friends, to no end...

If going from Windows to the Unraid server, I get ~2.5gbit via iperf.

If going from Unraid to the Unraid server, I get 9.5gbit via iperf every time.

If going from an Ubuntu desktop to the Unraid server, I get 9.5gbit via iperf every time.

It's probably one of the options in the Advanced settings for the NIC driver in Device Manager:
Device Manager -> Network Adapters -> right-click your adapter, click Properties -> Advanced.

Look for Recv Segment Coalescing (IPv4); if it's there, disable it.
There are other settings that can cause substantially reduced performance depending on the chipset. If you can, post a screenshot of that page (Alt+Prt Screen with that window in focus).
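The same check and change can also be done from an elevated PowerShell prompt; a sketch, assuming the adapter shows up under the name "Ethernet":

PS C:\> Get-NetAdapterRsc
PS C:\> Disable-NetAdapterRsc -Name "Ethernet"

Get-NetAdapterRsc shows whether Recv Segment Coalescing is currently enabled per adapter; after disabling it, rerun iperf3 to compare.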

