Gigabit speed in only one direction


jsmj


Hey guys, I'm troubleshooting an asymmetrical network link. I get gigabit speeds when going from client -> Unraid server, but 100 Mbits/sec (at best) when testing from Unraid -> client. I've changed cables, tested cables, and changed ports on the switch and router, and I'm out of ideas. The lights on the switch show a full-duplex 1000 link, as does Unraid on the dashboard. Here are a couple of iperf3 results with the server acting as client (sending):

 

root@Tower:~# iperf3 -c 192.168.1.86 -i 20
Connecting to host 192.168.1.86, port 5201
[  4] local 192.168.1.101 port 40006 connected to 192.168.1.86 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  36.6 MBytes  30.7 Mbits/sec  24588   79.2 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  36.6 MBytes  30.7 Mbits/sec  24588             sender
[  4]   0.00-10.00  sec  34.5 MBytes  29.0 Mbits/sec                  receiver

iperf Done.
root@Tower:~# iperf3 -c 192.168.1.208 -i 20
Connecting to host 192.168.1.208, port 5201
[  4] local 192.168.1.101 port 47854 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  94.8 MBytes  79.5 Mbits/sec  65838   67.9 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  94.8 MBytes  79.5 Mbits/sec  65838             sender
[  4]   0.00-10.00  sec  93.6 MBytes  78.5 Mbits/sec                  receiver

iperf Done.

 

And here are a couple more with the -R flag to show I get 1 Gbps in the other direction (receiving):

 

root@Tower:~# iperf3 -c 192.168.1.86 -i 20 -R
Connecting to host 192.168.1.86, port 5201
Reverse mode, remote host 192.168.1.86 is sending
[  4] local 192.168.1.101 port 40156 connected to 192.168.1.86 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   920 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  receiver

iperf Done.
root@Tower:~# iperf3 -c 192.168.1.208 -i 20 -R
Connecting to host 192.168.1.208, port 5201
Reverse mode, remote host 192.168.1.208 is sending
[  4] local 192.168.1.101 port 48004 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.08 GBytes   925 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.07 GBytes   923 Mbits/sec                  receiver

iperf Done.
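For reference, each of these runs assumes an iperf3 server is already listening on the target device (default port 5201), i.e. something like this on the other end:

# start a listener on the target device (default port 5201)
iperf3 -s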

 

 

My next thought is replacing the onboard Realtek NIC (8111C), but I'm kind of running out of expansion slots, so I want to be sure that's the issue before I tackle it. Any ideas? Logs are attached.

 

tower-diagnostics-20190821-1526.zip

Link to comment

If the link comes up at 1G in one direction, then it's unlikely to be a cable problem. 1G uses all 4 pairs full-duplex; if they work one way, then the cable is good with high probability.

You may want to try a different client device. The symptoms suggest that the client is not auto-negotiating a 1G link, but the server is. This could be due to settings on the client, or a defect in the client.

-- Tom

 

Link to comment
11 minutes ago, Tom3 said:

If the link comes up at 1G in one direction, then it's unlikely to be a cable problem. 1G uses all 4 pairs full-duplex; if they work one way, then the cable is good with high probability.

You may want to try a different client device. The symptoms suggest that the client is not auto-negotiating a 1G link, but the server is. This could be due to settings on the client, or a defect in the client.

-- Tom

 

I'm testing using two clients: another Unraid server (call it server B) and an Nvidia Shield. The Shield has an app that allows it to host an iperf3 connection. I get 1 Gbps in both directions when testing between those two machines, so they are both negotiating a 1G link. The problem only appears when I test either of those clients against the problem Unraid server (call it server A).

 

A => B = <100 Mbps
A => Shield = <100 Mbps
Shield => A = 1000 Mbps
B => A = 1000 Mbps
B <=> Shield = 1000 Mbps

 

Here are the iperfs between B and the Shield in both directions; to summarize, they are both able to negotiate a 1 Gbps link both ways:

Connecting to host 192.168.1.208, port 5201
[  4] local 192.168.1.86 port 53388 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  1.04 GBytes   889 Mbits/sec    0   5.66 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.04 GBytes   889 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.03 GBytes   887 Mbits/sec                  receiver

iperf Done.
root@TwoTower:~# iperf3 -c 192.168.1.208 -i 20 -R
Connecting to host 192.168.1.208, port 5201
Reverse mode, remote host 192.168.1.208 is sending
[  4] local 192.168.1.86 port 53394 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.01 GBytes   867 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.01 GBytes   869 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.01 GBytes   867 Mbits/sec                  receiver

iperf Done.
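In case it helps anyone repeating this kind of matrix, the same pairwise tests can be scripted from the box under test. A rough sketch using the two peer IPs above:

# run 10-second tests in both directions against each peer
for host in 192.168.1.86 192.168.1.208; do
    iperf3 -c "$host" -t 10        # this box sending
    iperf3 -c "$host" -t 10 -R     # this box receiving
done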

 

Link to comment

Ok, I misunderstood the directionality of the problem in the original post.

Check the interface settings on the problem Unraid server using the CLI ethtool command. Example from my system:

 

root@Tower:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
root@Tower:~#

Link to comment
7 minutes ago, Tom3 said:

Ok, I misunderstood the directionality of the problem in the original post.

Check the interface settings on the problem Unraid server using the CLI ethtool command.

Here's mine for the problem server. It looks slightly different from yours, particularly the "Supported pause frame use: Symmetric Receive-only" line and the port being MII while yours is "Twisted Pair". The PHYAD is different as well.

 

root@Tower:~# ethtool eth0
Settings for eth0:
	Supported ports: [ TP MII ]
	Supported link modes:   10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Half 1000baseT/Full 
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Half 1000baseT/Full 
	Advertised pause frame use: Symmetric Receive-only
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Link partner advertised link modes:  10baseT/Half 10baseT/Full 
	                                     100baseT/Half 100baseT/Full 
	                                     1000baseT/Full 
	Link partner advertised pause frame use: Symmetric Receive-only
	Link partner advertised auto-negotiation: Yes
	Link partner advertised FEC modes: Not reported
	Speed: 1000Mb/s
	Duplex: Full
	Port: MII
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: on
	Supports Wake-on: pumbg
	Wake-on: g
	Current message level: 0x00000033 (51)
			       drv probe ifdown ifup
	Link detected: yes
root@Tower:~# 

 

And here's the output from the other server that has a working 1G/1G link.

 

Settings for eth0:
	Supported ports: [ TP ]
	Supported link modes:   10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Advertised pause frame use: No
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Speed: 1000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 1
	Transceiver: internal
	Auto-negotiation: on
	MDI-X: off
	Supports Wake-on: g
	Wake-on: d
	Link detected: yes

 

Link to comment

The problem interface appears correct. Depending on age and vendor, some Ethernet NICs had auto-negotiation problems. You can turn off auto-negotiation for the problem interface, force 1000 full duplex, and see if it comes up correctly:

$ ethtool -s eth0 autoneg off speed 1000 duplex full

This is not a 'sticky' setting; it should revert to default on the next boot.

-- Tom

 

Link to comment
2 hours ago, Tom3 said:

The problem interface appears correct. Depending on age and vendor, some Ethernet NICs had auto-negotiation problems. You can turn off auto-negotiation for the problem interface, force 1000 full duplex, and see if it comes up correctly:

$ ethtool -s eth0 autoneg off speed 1000 duplex full

This is not a 'sticky' setting; it should revert to default on the next boot.

-- Tom

 

Edit: Never mind, the problem persists. I'm still at 1000 down / 100 up. I get 1G/1G only if the array is offline; if I start the array, the link goes back to 1G/100M 👎

 

So, the bad news: that command took the server off the network completely. I hooked up a monitor to try to do a graceful reboot from the command line, but couldn't get an image, so I eventually had to hard reset it.
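Aside for anyone who ends up in the same spot: if you can get to a local console (I couldn't get an image), re-enabling auto-negotiation should bring the link back without a hard reset, something like:

ethtool -s eth0 autoneg on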

 

The good news is my link is 1G/1G in all directions every which way. No idea why or what or how.

 

The following iperf tests were done with the array offline and result in 1G/1G.

iperf3 in both directions for the problem server <=> Shield:

root@Tower:~# iperf3 -c 192.168.1.208 -i 20
Connecting to host 192.168.1.208, port 5201
[  4] local 192.168.1.101 port 44286 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  1.06 GBytes   910 Mbits/sec    0   5.66 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.06 GBytes   910 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.06 GBytes   908 Mbits/sec                  receiver

iperf Done.
root@Tower:~# iperf3 -c 192.168.1.208 -i 20 -R
Connecting to host 192.168.1.208, port 5201
Reverse mode, remote host 192.168.1.208 is sending
[  4] local 192.168.1.101 port 44290 connected to 192.168.1.208 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.08 GBytes   932 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.09 GBytes   932 Mbits/sec                  receiver

iperf Done.

iperf3 in both directions for the other Unraid server (server B), A <=> B:

root@Tower:~# iperf3 -c 192.168.1.86 -i 20
Connecting to host 192.168.1.86, port 5201
[  4] local 192.168.1.101 port 42310 connected to 192.168.1.86 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0   5.66 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver

iperf Done.
root@Tower:~# iperf3 -c 192.168.1.86 -i 20 -R
Connecting to host 192.168.1.86, port 5201
Reverse mode, remote host 192.168.1.86 is sending
[  4] local 192.168.1.101 port 42314 connected to 192.168.1.86 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.

 

 

 

Edited by jsmj
Link to comment
  • 10 months later...
4 minutes ago, OFark said:

Did you ever resolve this? I seem to be getting 1Gbps into Unraid but only 480Mbps out.

My Ethernet controller and SATA controller were sharing bandwidth, and it was saturated. In the short term I moved the drives off the SATA ports and onto a PCIe SATA card, but eventually upgraded the mobo.

Link to comment
1 minute ago, jsmj said:

My Ethernet controller and SATA controller were sharing bandwidth, and it was saturated. In the short term I moved the drives off the SATA ports and onto a PCIe SATA card, but eventually upgraded the mobo.

How did you find that out? As in how can I find out if my motherboard does that?

Link to comment
Just now, OFark said:

How did you find that out? As in how can I find out if my motherboard does that?

The biggest clue for me was that I had symmetrical gigabit speeds when the array was stopped, but speeds fell off when I started the array.
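For what it's worth, a rough way to check on any board is to look at the PCI topology from a Linux console: if the onboard NIC and the SATA controller hang off the same bridge or shared chipset link, they can end up contending for bandwidth. Something like:

# tree view of the PCI topology; look for the Ethernet and SATA
# controllers sitting under the same bridge
lspci -tv

# identify the NIC and SATA controller entries and their kernel drivers
lspci -nnk | grep -A3 -Ei 'ethernet|sata'

The motherboard manual's block diagram, if it has one, usually shows the same sharing.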

Link to comment
