Problem: Network Transfer Speeds Fall Back to 100Mb



Good evening. 

 

I recently rolled over to a new server - a Dell T320 - from an older Lenovo D10 ThinkStation.  Since moving servers, I have found that after a few minutes of uptime, all of my network transfers are capped at about 100Mb/s (that's 12.5MB/second or slower).  A restart of the server fixes it for a few copies, but anywhere from 10-30 minutes later the problem is back and transfers are throttled down to 12.5MB/second again.  I presume it's the server, because a reboot of the server corrects it - temporarily.

 

Here's my setup:  I have eth0 and eth1 (onboard ports) in a bond with eth2 and eth3 (Intel Pro 1000 VT dual-port card).  I have been reading that the Dell DRAC interface can cause issues when "sharing" the NIC with either eth0 or eth1 (it drops the speed down to 100Mb from what I've read).  As a result, I have downed both eth0 and eth1 from the command line (sudo ifconfig eth0 down, etc.) and repeated my testing after a reboot.  Like clockwork, the system allows a few copies at full 1,000Mb speeds, then drops itself back down to 100Mb.
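
For reference, these are roughly the commands I ran to take the onboard ports out of service (interface names as above; the ip equivalents should do the same thing):

sudo ifconfig eth0 down
sudo ifconfig eth1 down

# or, with the newer ip tooling:
sudo ip link set eth0 down
sudo ip link set eth1 down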

 

At this point I'm at a loss.  All my cables are Cat 6, I'm using an HP ProCurve 2810-24G switch, the interfaces on my switch all report as connected at 1,000Mb, the bond on the server (ethtool bond0) shows the speed at 2,000Mb/s, and the client pushing the data is also showing full gigabit on both its NIC and the switch.
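
In case it helps, this is roughly how I've been checking the negotiated link speeds (eth2 and eth3 are just my interface names, so adjust as needed):

ethtool bond0 | grep Speed
ethtool eth2 | grep -E 'Speed|Duplex'
ethtool eth3 | grep -E 'Speed|Duplex'

The plan is to run this again the next time a transfer drops, to see whether the links themselves are renegotiating down to 100Mb/s or whether they still claim 1,000Mb/s full duplex while the throughput falls.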

 

Any advice or help anyone could provide would be greatly appreciated.  I'm about at my wit's end.

mediaserver-syslog-20180309-2038.zip

7 hours ago, johnnie.black said:

Try an addon NIC.

Not sure what you mean by an add-on NIC.  If you mean use another NIC other than the onboard ports, that's what I've done.  The Intel Pro 1000 VT is a dual-port PCIe card that I've bonded with the two onboard ports - and I have shut down the two onboard ports so their interfaces cannot be used.  I've also disabled the DRAC in the BIOS to rule out the shared-bandwidth "feature" it provides.  The problem still persists.
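
For completeness, this is roughly how I've been confirming which slaves the bond is actually using after shutting down the onboard ports (trimmed to the interesting fields):

cat /proc/net/bonding/bond0 | grep -E 'Slave Interface|MII Status|Speed'

With eth0 and eth1 downed, they should show MII Status: down while eth2 and eth3 stay up at 1000 Mbps - which is what I'm seeing, yet the transfers still fall back to 100Mb speeds.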

2 hours ago, bonienl said:

 

Have you tried moving the server connections to different ports on your switch, or temporarily using a single interface in your bond by disconnecting the others?

 

I have moved the ports, but when I tried to drop all the NICs from the bond and run just a single interface (say, eth3), my Docker applications and VMs stopped functioning (no network connectivity for them).  So sadly, I'm not sure whether that approach would work - but I'm willing to try it.
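
One thing I'm thinking of trying next (just a sketch, assuming the standard Linux bonding sysfs interface is exposed) is pulling slaves out of bond0 at runtime instead of deleting the bond itself, so the containers and VMs keep their bridge on bond0:

echo "-eth0" | sudo tee /sys/class/net/bond0/bonding/slaves   # remove a slave from bond0
echo "+eth0" | sudo tee /sys/class/net/bond0/bonding/slaves   # add it back later

That way I could test on a single physical port without touching the bond the containers are attached to.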

 

I can find where to flip the connection options on the VM, but I can't seem to find how to change them for the Docker apps.  They seem to want to stick with bond0 since that's what was being used when they were created.
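
In case it's useful, this is roughly how I've been checking which network each container is tied to (the container name is just a placeholder for one of mine):

docker network ls
docker inspect -f '{{.HostConfig.NetworkMode}}' some-container

So far they all seem to point back at the bridge sitting on bond0, which matches the behaviour above.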

 

 

39 minutes ago, johnnie.black said:

You always need eth0.  You can choose which interface is eth0 by changing the ethernet rules at the bottom of the network settings page: select one of the Intel NIC's ports as eth0 and disable bonding.  You'll need to reboot for the changes to take effect.

Oh awesome, thanks!  I hadn't realized that by moving the MAC assignments around you could reorder the interfaces.  I've moved the onboard NICs to eth2 and eth3 and dropped them from the bond - as well as disabled them.  Will give it some more testing before I drop the bond altogether (I'm hoping to avoid that).
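
For anyone who finds this thread later: the reordering boils down to mapping MAC addresses to interface names.  Under the hood it's just udev-style rules, roughly like the below (MAC addresses are made up, and the exact file the GUI writes them to may differ):

# Intel Pro 1000 VT ports become eth0/eth1
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"
# onboard ports pushed down to eth2/eth3
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:03", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:04", NAME="eth3"

As johnnie.black said, a reboot is what actually applies the new mapping.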

