
Intel Gigabit NIC - $25 shipped after promo


Rajahal

Recommended Posts

Posted

PCI is plenty fast enough to handle your network traffic.  You want to save PCIe for SATA controllers.

I'm confused. It's a bottleneck or it's not. Traffic would have to hit the slower PCI slot, regardless of the controller. Is it a wash having a PCI NIC/PCIe controller versus a PCIe NIC/PCI controller?

Posted

Do you think this NIC will be a good replacement for the built-in Realtek 8111DL that I am using now?

 

Unless you are having issues with the Realtek, I would not bother installing another component in the computer.

Posted

Consider the fastest speed you can get out of Gigabit networking.

 

From Tom's Hardware

So what is a gigabit? It is 1,000 megabits, not 1,000 megabytes. There are eight bits in a single byte, so let’s do the math: 1,000,000,000 bits divided by 8 bits = 125,000,000 bytes. There are about a million bytes in a megabyte, therefore a gigabit network should be capable of delivering a theoretical maximum transfer of about 125 MB/s.

 

While 125 MB/s might not sound as impressive as the word gigabit, think about it: a network running at this speed should be able to theoretically transfer a gigabyte of data in a mere eight seconds. A 10 GB archive could be transferred in only a minute and 20 seconds. This speed is incredible, and if you need a reference point, just recall how long it took the last time you moved a gigabyte of data back before USB keys were as fast as they are today.
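The arithmetic in the quote above can be sketched in a few lines (theoretical line rate only; real transfers lose some of this to Ethernet and TCP overhead):

```python
# Theoretical gigabit throughput, ignoring protocol overhead.
BITS_PER_GIGABIT = 1_000_000_000
BYTES_PER_MB = 1_000_000

max_bytes_per_sec = BITS_PER_GIGABIT / 8           # 125,000,000 bytes/s
max_mb_per_sec = max_bytes_per_sec / BYTES_PER_MB  # 125 MB/s

one_gb = 1_000 * BYTES_PER_MB                      # 1 GB in bytes
print(max_mb_per_sec)                   # 125.0 MB/s
print(one_gb / max_bytes_per_sec)       # 8.0 seconds for 1 GB
print(10 * one_gb / max_bytes_per_sec)  # 80.0 seconds for 10 GB
```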

 

Now if you base that on unRAID's maximum write speed to the parity-protected array (maybe 30-60 MB/s), then PCI gigabit is fast enough.

Even if you base it on a cache drive at 60MB/s, it's fast enough.

If you base it on the raw speed of a high density drive freshly formatted with no data @ 120MB/s, it's fast enough.
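The point of the three cases above is that the effective rate is the minimum of the disk speed and the link speed. A quick sketch (the ~110 MB/s practical ceiling for a 32-bit/33 MHz PCI bus is an assumed figure, not one from this thread):

```python
# Effective throughput is bounded by the slowest element in the path.
# Assumed ceilings (MB/s): gigabit line rate 125; practical 32-bit/33 MHz
# PCI bus ~110 (an assumption for illustration).
GIGABIT_MBPS = 125
PCI_BUS_MBPS = 110

def effective_rate(disk_mbps, nic_on_pci=True):
    """Bottlenecked single-stream rate: min of disk, link, and bus."""
    link = min(GIGABIT_MBPS, PCI_BUS_MBPS) if nic_on_pci else GIGABIT_MBPS
    return min(disk_mbps, link)

# Array write (~30) and cache drive (~60): the disk is the limit either way.
print(effective_rate(30))    # 30
print(effective_rate(60))    # 60
# A fresh, fast drive at 120 MB/s gets clipped only slightly by the PCI bus.
print(effective_rate(120))   # 110
```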

 

Where it can become an issue is when you combine other traffic on the PCI bus: IDE, PCI SATA, USB, and so on. For some of these the impact is minimal; for others it will hamper throughput slightly.

 

On older server-class boards the gigabit interface sits on the PCI-X bus if one is available. But if the gigabit network card is the only thing on your PCI bus, you should be OK.

 

From my tests in the past with ATTO and TTCP:

 

A gigabit-to-gigabit test with only network traffic ran at 990 Mbit/s between PCIe cards and at 770 Mbit/s between PCI-X cards. So there will be a slight performance hit, but probably not enough of one to make it much of an issue.

I had not tested gigabit to a PCI slot since I did not have any.

Posted

PCI is plenty fast enough to handle your network traffic.  You want to save PCIe for SATA controllers.

I'm confused. It's a bottleneck or it's not. Traffic would have to hit the slower PCI slot, regardless of the controller. Is it a wash having a PCI NIC/PCIe controller versus a PCIe NIC/PCI controller?

 

WeeboTech offered a great explanation in the post above.  I'll add one more small point of clarification in response to your question.

It will NOT be a wash having a PCI NIC + PCIe SATA controller versus a PCIe NIC + PCI SATA controller.  Putting a single hard drive on a PCI SATA controller will introduce no bottleneck iff (if and only if) there is nothing else using the PCI bus.  Putting two or more hard drives on a PCI bus will definitely introduce a bottleneck whenever both drives are accessed simultaneously (such as during a parity check or rebuild from parity).

No matter how many PCI slots a motherboard has, there is only one PCI bus, meaning that all PCI slots share bandwidth.  Therefore, putting both a SATA controller and a NIC on the PCI bus will introduce a bottleneck whenever both are used simultaneously.  PCIe slots do not share bandwidth, so having multiple PCIe SATA controllers and a PCIe NIC will introduce no bottleneck.

So theoretically, there's nothing wrong with using PCIe for everything - SATA controllers and NICs.  However, in practice, most of us consider PCIe slots to be sacred, for they can support many hard drives without introducing bottlenecks.  The PCI slot(s), on the other hand, are somewhat superfluous.  It typically isn't very useful to add a PCI SATA card to control just a single drive, so it is a far better use of the PCI bus to run your NIC off of it.
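The shared-bus point above can be put in a toy model (assumed numbers: ~110 MB/s as a practical shared ceiling for the one PCI bus, and 250 MB/s per lane as the PCIe 1.x figure; per-slot PCIe bandwidth is not divided among cards):

```python
# Toy model: all PCI cards split ONE bus; each PCIe card keeps its own lanes.
PCI_BUS_MBPS = 110      # assumed practical ceiling, shared by every PCI card
PCIE1_LANE_MBPS = 250   # PCIe 1.x per-lane bandwidth, per slot

def pci_share(active_devices):
    """Bandwidth each active PCI device gets: the single bus is divided."""
    return PCI_BUS_MBPS / active_devices

# NIC alone on the PCI bus: the full ~110 MB/s, fine for gigabit traffic.
print(pci_share(1))   # 110.0
# NIC + PCI SATA controller both active (e.g. streaming during a parity
# check): each gets only half the bus, a real bottleneck.
print(pci_share(2))   # 55.0
```

A PCIe NIC alongside PCIe SATA controllers never contends this way, which is the "not a wash" point above.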

 

Do you think this NIC will be a good replacement for the built-in Realtek 8111DL that I am using now?

 

I agree with Prostuff1, there's no reason to replace that NIC unless you are having problems with it.  The Realtek 8111DL has always been rock-solid in my experience.

  • 2 weeks later...
Posted

Will the Intel adapter be faster for reading off the array (let's say a fairly fast 7200 rpm 2 TB disk)?

 

Would it provide any advantages for multiple-access scenarios (like 2 or 4 machines playing back HD video from the server simultaneously)?

Posted

Will the Intel adapter be faster for reading off the array (let's say a fairly fast 7200 rpm 2 TB disk)?

 

Would it provide any advantages for multiple-access scenarios (like 2 or 4 machines playing back HD video from the server simultaneously)?

 

In some cases, yes, an Intel NIC can be faster.  However, as advised above, I wouldn't recommend replacing your NIC unless you are having serious issues with your current one (such as significantly hampered transfer speeds).  If you are currently seeing transfers to the parity-protected array in the range of 25-35 MB/s, then leave your server as it is; the Intel NIC won't make much of a difference (but it will consume slightly more power).

Archived

This topic is now archived and is closed to further replies.
