[SOLVED] Problem using Jumbo Frames

First off my setup:


This is my server configuration: http://lime-technology.com/forum/index.php?topic=5102.0

On the network I am using a managed switch, a 3Com 3CDSG8, with Jumbo Frames enabled and 1000 Mbps speed on the server side in the server room. QoS is disabled.

I am using a D-Link DGS-2208 8-Port 10/100/1000 Desktop Switch that supports Jumbo frames at the HTPC side in another room.

Both switches are connected via Cat 6, approximately 80 ft.

The HTPC is a Win 7 32-bit machine and the workstation is a Win 7 64-bit machine; both are set to use Jumbo Frames with MTU=9014.

In addition I have a QNAP NAS on the network with Jumbo Frames enabled.


This leaves the Unraid server on normal frames with the ifconfig eth0 command indicating MTU=1500.


Connected to the HTPC-side switch are the HTPC, a Denon receiver and a Panasonic PJ. Connected to the server-side switch, besides the cable linking the two switches, are the QNAP NAS, the UnRaid server and a workstation, plus two 100 Mbps devices (a Comcast SMB cable modem and a Sonos bridge).


I searched the forum here and found different opinions on the topic of Jumbo Frames: a speed increase in one case, and no difference in another case because on a gigabit LAN most of the time is spent waiting for the next packet. Wanting to make sure that I get the maximum read speed, I thought I'd experiment with Jumbo Frames. At present my write speed to the array is between 20 and 39 MB/s, depending on whether the copy comes from the workstation or from the NAS. I am using TeraCopy to get an idea of the transfer speed. I also understand that my read speed is much higher, but I have not measured it yet.
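For anyone wanting to sanity-check TeraCopy's numbers, a plain dd write to the share gives a second opinion, and the MB/s figure is just bytes over seconds. The mount point below is an assumption; substitute your own share path. A minimal sketch:

```shell
# Write a 1 GB test file straight to the array share and let dd report
# the rate (conv=fdatasync makes dd wait until the data hits the disk).
# Path /mnt/user/share is an assumption -- run this by hand on the server:
#   dd if=/dev/zero of=/mnt/user/share/testfile bs=1M count=1024 conv=fdatasync
#
# The arithmetic behind a TeraCopy-style figure: a 1.5 GB file that takes
# 40 seconds works out to about 38 MB/s, right around the range above.
bytes=$((1536 * 1024 * 1024))   # 1.5 GB in bytes
seconds=40
awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f MB/s\n", b / 1048576 / s }'
```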


The main reason I started to look into using Jumbo Frames was the length of time it took before TeraCopy started the actual copying to the array, and in a few instances Windows Explorer complained that the network location was no longer available. I realize that there may be another cause for this.


So from the command prompt in PuTTY I entered the ifconfig eth0 mtu 9000 command. It complained, and I found out that the on-board network will only go as high as mtu=8170. So I tried different MTU sizes (8170, 8014, 8000, 7000 and 6000), but in each instance it actually worsened network availability, to the point that PuTTY would no longer let me connect. Even worse, the web interface to the main page that allows me to reboot and shut down the server just got stuck on the message "transferring data from Tower". UnMenu did load, but when trying to switch to the UnRaid main page from UnMenu it again got stuck.


I shut down the server from the console, and with UnRaid back at mtu=1500 everything worked again. So none of the Jumbo Frame settings worked for me on the UnRaid server, and I am back to mtu=1500 for UnRaid while all other devices are set to Jumbo Frames. Here are a few questions and requests for help.


1. Since I have two 1-lane PCI-e slots available, would it be worthwhile to disable the on-board network connection and use a NIC that can set the Jumbo Frame size to mtu=9014?

2. If so, are there any recommendations for a NIC?

3. If I use a separate NIC, does UnRaid provide the drivers, and thus does it have to be a tested and approved NIC?

4. Is the driver for the on-board network part of the UnRaid OS or part of the BIOS?

5. Since this is an Asus motherboard, is there a need for, or a way of, updating the board to get Jumbo Frames with MTU size 9014 from the on-board NIC?

6. Or is the whole idea of using Jumbo Frames not worth the hassle because the speed increase is negligible?

7. Do the 5.0 beta releases make any changes to the usage of Jumbo Frames? (I could not find anything in the Change logs.)


Any help would be very much appreciated, as I am trying to get this server fine-tuned: this coming week I will prepare the box, which has 8 empty slots, for migration to 3 TB drives plus installation of the Supermicro AOC-SASLP-MV8 controller.







Well, I thought I'd eliminate, for $30, the possibility of the on-board NIC being dodgy. I got this Intel one from Newegg: http://www.newegg.com/Product/Product.aspx?Item=33-106-033. Users stated that it works very well with Linux, and one user made this interesting observation regarding Jumbo Frames:

It also supports -- and most importantly works well with -- jumbo frames (tested in Linux 2.6.32.x with an MTU of 9,000; there's no RFC on this, so 'jumbo' is a loose standard). A lot of cards that claim to support this don't actually work in practice.


Other Thoughts: Soap box time! A lot of people seem to think that jumbo frames are the be all, end all of gigabit networking. Well... they're not. First off, your switch must support it, and of course both ends of the connection must support it. Second, technically anything larger than an MTU of 1500 constitutes a jumbo frame, so one manufacturer's jumbo (4000) may not be another manufacturer's jumbo (9000). If they are different on each end, they will negotiate to the smaller of the two. Finally, jumbo frames add a lot of latency because of the tremendous size of each frame.


To summarize, jumbo MTUs are great for systems that primarily transfer large files over a fast network. It reduces IO-related CPU load, makes better use of your networking equipment, and ultimately allows you to move big files faster. What they are not great for is situations where latency is important, or in diverse network environments.
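One wrinkle worth adding when mixing Windows and Linux boxes: the "9014" that Windows drivers offer counts the 14-byte Ethernet header, while the Linux mtu value counts only the IP packet, so the two settings actually agree. A don't-fragment ping is then a cheap way to prove the whole switch path really carries jumbo frames. A sketch (the hostname "tower" is an assumption):

```shell
# Windows jumbo setting vs. Linux MTU: 9014 includes the 14-byte
# Ethernet header, so it corresponds to a Linux MTU of 9000.
win_jumbo=9014
linux_mtu=$((win_jumbo - 14))
# Largest payload for a don't-fragment ping: subtract the 20-byte IP
# header and the 8-byte ICMP header from the MTU.
ping_payload=$((linux_mtu - 28))
echo "Linux MTU: $linux_mtu, DF-ping payload: $ping_payload"
# Then verify end to end by hand:
#   Windows:  ping -f -l 8972 tower
#   Linux:    ping -M do -s 8972 tower
# If those fail while a plain ping works, something in the path
# (NIC, driver, or one of the switches) is not passing jumbo frames.
```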

I guess I spent $30 to answer a few of my own questions and will find out if my setup will allow an MTU=9014 setting. Which still leaves one unanswered question: in the version 5.0 beta releases, have there so far been any unpleasant experiences with this type of NIC, or with Jumbo Frames in general?





Well here we go, continuing with the monologue. ;D


I installed the Intel EXPI9301CTBLK network adapter (1000 Mbps) and disabled the on-board LAN. Well, what a difference, and I now understand the glowing reports and comments on Newegg about this NIC. It set MTU=9014 with no problem. No more hesitation and waiting when copying and moving data to the array. TeraCopy reports speeds between 42 MB/s and 47 MB/s while moving a 1.5 GB file directly to the array. Nice one; glad that this part is over. Now on to the bigger job, the upgrade path to 3 TB drives, but that I will post in another thread.




I would recommend this card: Intel PWLA8391GT PRO/1000 GT PCI Network Adapter

It is an Intel PCI card. If you want to add more drives to your array, the PCI-e slot can support 2 more drives; the PCI slots, sharing one bus, can only support 1 additional drive between them. But PCI is fine for gig-e.

Good suggestion. Maybe mod that last bit to "PCI is fine for (a well-engineered) gig-e".
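The bandwidth arithmetic behind that: classic 32-bit/33 MHz PCI tops out around 133 MB/s, shared across every device on the bus, while wire-speed gigabit Ethernet needs 125 MB/s. Tight, but enough for one NIC, and far above the 20-47 MB/s writes reported in this thread. A back-of-the-envelope check:

```shell
# Theoretical ceilings, in MB/s:
awk 'BEGIN {
    pci  = 33e6 * 4 / 1e6;   # 33 MHz x 4-byte (32-bit) transfers = 132 MB/s
    gige = 1e9 / 8 / 1e6;    # 1 Gbit/s on the wire = 125 MB/s
    printf "PCI bus: %.0f MB/s, gig-e: %.0f MB/s\n", pci, gige
}'
```

Which is also why a PCI disk controller and a PCI NIC on the same bus would be a bad combination, but a lone NIC is fine.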


I got its predecessor (PWLA8390MT) [from Monoprice 5 yrs ago ($22)] and was curious what the difference might be. From this discussion (also 5 yrs ago), it doesn't seem like much. It's an informative discussion, but moderately technical.


This newer card (dgaschk's one) can also be had from Amazon for $1 more, but with free shipping (saves $1.xx) [and you don't get any sleaze on you].


-- UhClem



Well, I had this up for a few days, and the first post included the request for a NIC recommendation; when none were forthcoming after a few days, I ordered the PCI-e card, which works like a charm, and I couldn't be happier. For me there is no need for additional ports, as the mobo has one 8x PCI-e and one 16x PCI-e slot plus two 1x PCI-e slots. I have ports for 17 drives but no more space for drives unless I change the 4-in-3 to 5-in-3 adapters.


But thanks for the recommendations, as someone else may benefit from them.


The motherboard is an ASUS M4A78-E.


Its on-board NIC is the Atheros L1E Gigabit LAN controller featuring AI NET 2. Here is what one user wrote in the Newegg feedback:


Cons: The ethernet. If you are a gamer, beware, you might experience more interface lag than with others like Gigabit. The Atheros consumes more of the CPU's capacity than ANY OTHER NIC CHIP OUT THERE RIGHT NOW!!! Asus cut costs by using the Atheros NIC controller. While overall, this isn't a problem, if you are a gamer, the 'lag' it produces could sour the deal.


This topic is now archived and is closed to further replies.
