Very bad networking performance on Docker (unRAID v6.1.5)


ysss


Guys, I'm getting really crappy network performance from my Docker downloaders (SABnzbd, NZBGet)...

I had SABnzbd working well on unRAID v6.0.x, saturating my 50 Mbps cable connection... then I upgraded to v6.1.3 and the download speed dropped by roughly 95%, down to 2-3 Mbps. I tried both SABnzbd and NZBGet and they both exhibit the same poor performance. I'm now on the latest unRAID (v6.1.5) and I still get the same problem.

 

I can verify that it's not a problem with my internet connection, since I can still attain full speed by running NZBGet on a Windows Server 2011 box on the same network.

 

Looking at the NZBGet log, I see a lot of these:

 

ERROR Fri Dec 04 2015 00:11:44 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:41 Could not read from TLS-Socket: Connection closed by remote host
WARNING Fri Dec 04 2015 00:11:39 Blocking Usenetserver (secure.usenetserver.com) for 10 sec
ERROR Fri Dec 04 2015 00:11:39 Could not write to TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:39 Could not write to TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:39 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:38 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:28 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:20 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:11:01 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:10:53 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:10:44 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:10:38 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:10:28 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:10:08 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:09:53 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:09:44 Could not read from TLS-Socket: Connection closed by remote host
WARNING Fri Dec 04 2015 00:09:32 Blocking Usenetserver (secure.usenetserver.com) for 10 sec
ERROR Fri Dec 04 2015 00:09:32 Could not write to TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:09:32 Could not write to TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:09:32 Could not read from TLS-Socket: Connection closed by remote host
ERROR Fri Dec 04 2015 00:09:32 Could not read from TLS-Socket: Connection closed by remote host
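
For what it's worth, a quick way to test the TLS endpoint directly from the unRAID console is a plain handshake check (standard openssl client; 563 is the usual NNTPS port, substitute whatever SSL port your provider uses):

root@TOWER:~# openssl s_client -connect secure.usenetserver.com:563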

 

iptables output:

 

root@TOWER:~# iptables -L
Chain INPUT (policy ACCEPT)
target    prot opt source              destination
ACCEPT    udp  --  anywhere            anywhere            udp dpt:domain
ACCEPT    tcp  --  anywhere            anywhere            tcp dpt:domain
ACCEPT    udp  --  anywhere            anywhere            udp dpt:bootps
ACCEPT    tcp  --  anywhere            anywhere            tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target    prot opt source              destination
ACCEPT    all  --  anywhere            192.168.122.0/24    ctstate RELATED,ESTABLISHED
ACCEPT    all  --  192.168.122.0/24    anywhere
ACCEPT    all  --  anywhere            anywhere
REJECT    all  --  anywhere            anywhere            reject-with icmp-port-unreachable
REJECT    all  --  anywhere            anywhere            reject-with icmp-port-unreachable
DOCKER    all  --  anywhere            anywhere
ACCEPT    all  --  anywhere            anywhere            ctstate RELATED,ESTABLISHED
ACCEPT    all  --  anywhere            anywhere
ACCEPT    all  --  anywhere            anywhere

Chain OUTPUT (policy ACCEPT)
target    prot opt source              destination
ACCEPT    udp  --  anywhere            anywhere            udp dpt:bootpc

Chain DOCKER (1 references)
target    prot opt source              destination
ACCEPT    tcp  --  anywhere            172.17.0.3          tcp dpt:3306
ACCEPT    tcp  --  anywhere            172.17.0.4          tcp dpt:8083

 

ifconfig:

 

root@TOWER:~# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.13  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:25:90:f2:ba:50  txqueuelen 0  (Ethernet)
        RX packets 33477632  bytes 59074155057 (55.0 GiB)
        RX errors 0  dropped 1750  overruns 0  frame 0
        TX packets 15358137  bytes 29940594155 (27.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 4e:13:e6:e1:e2:f0  txqueuelen 0  (Ethernet)
        RX packets 1578952  bytes 16733740161 (15.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3816494  bytes 429773949 (409.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 9000
        ether 00:25:90:f2:ba:50  txqueuelen 1000  (Ethernet)
        RX packets 53796605  bytes 60672890338 (56.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34055686  bytes 30951924134 (28.8 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xf7400000-f747ffff

eth1: flags=4355<UP,BROADCAST,PROMISC,MULTICAST>  mtu 1500
        ether 00:25:90:f2:ba:51  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xf7300000-f737ffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 0  (Local Loopback)
        RX packets 38624  bytes 7306827 (6.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38624  bytes 7306827 (6.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth78746cc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 4e:13:e6:e1:e2:f0  txqueuelen 0  (Ethernet)
        RX packets 702415  bytes 7541894668 (7.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1629923  bytes 110230398 (105.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethd520727: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 96:19:12:b6:9b:83  txqueuelen 0  (Ethernet)
        RX packets 14874  bytes 9437434 (9.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15718  bytes 15048086 (14.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:4a:a6:16  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

 

I have tried both bridge and host network settings for this container; right now I'm leaving it on 'host' networking.
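
For reference, this is roughly how I've been switching between the two modes (unRAID's Docker GUI does the equivalent of these commands; the linuxserver/nzbget image name is just an example):

root@TOWER:~# docker run -d --name nzbget --net=host linuxserver/nzbget                  # host: shares the unRAID network stack
root@TOWER:~# docker run -d --name nzbget --net=bridge -p 6789:6789 linuxserver/nzbget   # bridge: NAT via docker0, web UI on 6789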

 

What could be causing my issues?

Are there connection timeout settings somewhere that I need to change?
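
(For NZBGet specifically, the only timeout knobs I'm aware of live in nzbget.conf; the values below are the stock defaults as far as I recall, so please verify against your own copy:)

ArticleTimeout=60   # seconds before a stalled article download is aborted
UrlTimeout=60       # same, for URL fetches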

 

What method should I use to diagnose the issue?
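
One way I can think of to isolate the Docker layer would be to benchmark raw TCP throughput host-vs-container (assuming iperf3 can be installed on both ends; it is not part of stock unRAID, and 10.0.0.x plus the nzbget container name are placeholders):

# on another LAN box, e.g. the Windows server:
iperf3 -s
# from the unRAID host itself:
root@TOWER:~# iperf3 -c 10.0.0.x -t 30
# from inside the container:
root@TOWER:~# docker exec -it nzbget iperf3 -c 10.0.0.x -t 30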

 

 

 

Thanks in advance,

 

-ysss

 

Edit: I've just noticed the dropped packets on br0 (1,750 on RX)...
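
(To see whether that counter is still climbing in real time, using only stock tools:)

root@TOWER:~# watch -d -n 1 'ifconfig br0 | grep dropped'
root@TOWER:~# ethtool -S eth0 | grep -i drop    # NIC-level drop counters, if the driver exposes them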

 

Edit 2: I noticed the mismatched MTU sizes (eth0 was at 9000)... I've disabled jumbo frame support on my switch and set the MTUs on all interfaces back to 1500. Still no dice...
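
(A sanity check that the path really passes 1500-byte frames now is a don't-fragment ping, where 1472 = 1500 minus 28 bytes of IP/ICMP headers, and 10.0.0.1 is assumed to be the gateway:)

root@TOWER:~# ping -M do -s 1472 10.0.0.1    # should succeed at MTU 1500
root@TOWER:~# ping -M do -s 8972 10.0.0.1    # should fail now that jumbo frames are off
root@TOWER:~# ip link set eth0 mtu 1500      # force the NIC itself back down, if it didn't revert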


Only an idea to check out… I also had lots of dropped (received) packets. The reason was the use of virtio for the network device inside a Proxmox VM. Solution: I set the network adapter to E1000 and all was good.
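
For anyone searching later, that change is a one-liner in the VM's config (e.g. /etc/pve/qemu-server/<vmid>.conf; the MAC and bridge below are placeholders):

# before: net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# after:  net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0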

 

I don't see a sig, but maybe you also run unRAID virtualized.

 

Hi, thanks for the idea.

 

No, I don't run unRAID virtualized and I don't have any KVM VMs running, just plain unRAID + some plugins + some Docker containers.

 

Sorry, I haven't made a sig... my hardware is something like this:

 

Xeon E3-1245 v3, Supermicro X10SL7-F, 24GB RAM, 21 drives (4TB and 5TB), v6.1.5, a pair of btrfs cache drives.

 

Edit: lots of clues in the netstat output below...

 

 

root@TOWER:~# netstat -s
Ip:
    32781930 total packets received
    6 with invalid addresses
    5434638 forwarded
    0 incoming packets discarded
    27321105 incoming packets delivered
    19649266 requests sent out
Icmp:
    4789688 ICMP messages received
    7241 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 116
        redirects: 4785546
        echo requests: 3576
        echo replies: 34
    3778 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 167
        echo request: 35
        echo replies: 3576
IcmpMsg:
        InType0: 34
        InType3: 116
        InType5: 4785546
        InType8: 3576
        InType9: 416
        OutType0: 3576
        OutType3: 167
        OutType8: 35
Tcp:
    7879 active connections openings
    249730 passive connection openings
    4 failed connection attempts
    1624 connection resets received
    13 connections established
    22479604 segments received
    22765918 segments send out
    65192 segments retransmited
    547 bad segments received.
    18628 resets sent
Udp:
    53034 packets received
    39 packets to unknown port received.
    0 packet receive errors
    24383 packets sent
    0 receive buffer errors
    0 send buffer errors
UdpLite:
TcpExt:
    3 resets received for embryonic SYN_RECV sockets
    1 ICMP packets dropped because they were out-of-window
    21509 ICMP packets dropped because socket was locked
    249640 TCP sockets finished time wait in fast timer
    17 packets rejects in established connections because of timestamp
    164466 delayed acks sent
    54 delayed acks further delayed because of locked socket
    Quick ack mode was activated 97851 times
    16451525 packets directly queued to recvmsg prequeue.
    110248693 bytes directly in process context from backlog
    21317026745 bytes directly received in process context from prequeue
    4014184 packet headers predicted
    14132819 packets header predicted and directly queued to user
    1154430 acknowledgments not containing data payload received
    630230 predicted acknowledgments
    423 times recovered from packet loss by selective acknowledgements
    Detected reordering 10 times using FACK
    Detected reordering 176 times using SACK
    2 congestion windows fully recovered without slow start
    36 congestion windows recovered without slow start by DSACK
    2339 congestion windows recovered without slow start after partial ack
    TCPLostRetransmit: 1
    334 timeouts after SACK recovery
    55 timeouts in loss state
    3612 fast retransmits
    356 forward retransmits
    12863 retransmits in slow start
    13197 other TCP timeouts
    TCPLossProbes: 18842
    TCPLossProbeRecovery: 6785
    56 SACK retransmits failed
    37 times receiver scheduled too late for direct processing
    98137 DSACKs sent for old packets
    135 DSACKs sent for out of order packets
    854 DSACKs received
    1305 connections reset due to unexpected data
    204 connections reset due to early user close
    1637 connections aborted due to timeout
    TCPDSACKIgnoredNoUndo: 28
    TCPSpuriousRTOs: 38
    TCPSackShifted: 1343
    TCPSackMerged: 9511
    TCPSackShiftFallback: 8989
    TCPRetransFail: 34
    TCPRcvCoalesce: 2240741
    TCPOFOQueue: 588267
    TCPOFOMerge: 134
    TCPChallengeACK: 550
    TCPSYNChallenge: 548
    TCPSpuriousRtxHostQueues: 18
    TCPAutoCorking: 206066
    TCPFromZeroWindowAdv: 6
    TCPToZeroWindowAdv: 6
    TCPWantZeroWindowAdv: 17
    TCPSynRetrans: 6457
    TCPOrigDataSent: 10208156
    TCPHystartTrainDetect: 1249
    TCPHystartTrainCwnd: 25603
    TCPHystartDelayDetect: 10
    TCPHystartDelayCwnd: 304
    TCPACKSkippedSynRecv: 2316
    TCPACKSkippedPAWS: 1
    TCPACKSkippedSeq: 6
    TCPACKSkippedTimeWait: 1
IpExt:
    InMcastPkts: 20080
    OutMcastPkts: 167
    InBcastPkts: 1308
    OutBcastPkts: 1017
    InOctets: 76598020249
    OutOctets: 48872190122
    InMcastOctets: 6774574
    OutMcastOctets: 21017
    InBcastOctets: 291599
    OutBcastOctets: 228108
    InNoECTPkts: 63960289
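
The line that jumps out at me is "redirects: 4785546" — nearly every ICMP message received is a redirect, which usually points at a router/gateway misconfiguration. If that turns out to be relevant, these are the stock Linux sysctls for checking and ignoring redirects (generic kernel knobs, nothing unRAID-specific):

root@TOWER:~# sysctl net.ipv4.conf.all.accept_redirects
root@TOWER:~# sysctl -w net.ipv4.conf.all.accept_redirects=0
root@TOWER:~# sysctl -w net.ipv4.conf.all.secure_redirects=0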

 

 

(3 months later...)

I'm having the same issue with the NZBGet Docker container.

 

Did you manage to resolve this?

 

Nope. I ended up installing NZBGet on a spare Windows server I have, and I haven't tried it on the unRAID machine again.

 

I'm put off from running VMs on unRAID, since it doesn't even support VLANs...
