Is investing in a 2-port NIC worth it for an HP N40L server?



I am considering getting a NIC for my HP N40L server just to make sure I get the best network performance. Since I only have a PCI-e x16 slot left, is the additional money for a 2-port NIC worth the investment? If so, any recommendations for a low-profile one that works with unRAID? If not, any recommendations for a low-profile single-port NIC?

 

Thanks

O2G


I have a Trendnet TEG-S80G and I don't believe it does. I was thinking more along the lines of unRAID's road map, and that the use of 2 NICs may be on the horizon sometime. But if the switch does not support channel bonding, I think I'll save myself the $100 and go for a single-port NIC. Any recommendations for a NIC?
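For reference, this is roughly what channel bonding looks like on a Linux box (a sketch using iproute2; the interface names bond0/eth0/eth1 and the address are assumptions, and 802.3ad/LACP mode needs a managed switch on the other end, which an unmanaged TEG-S80G is not):

```shell
# Sketch: Linux link aggregation with iproute2 (interface names are assumptions).
# 802.3ad (LACP) mode requires a managed switch that supports it.
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # example address, adjust to your LAN
```

Without switch support on the other side you would be limited to failover-style modes anyway, which don't increase single-stream throughput.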


It works for me also, but all my other connected gear is set up with MTU=9000, while the HP onboard controller will only handle MTU=1500. I just want to make sure the box is tweaked for maximum transfer speed; writing to disks and internal processing within the server is what it is. I noticed that with an Intel PCI-e NIC my performance on the v4.7 server definitely improved, because of the way the Intel NIC handles network traffic compared to the onboard Atheros L1E Gigabit LAN controller. I guess $28 will tell me if it was worth it. It would have to go into the PCI-e x16 slot, unless there is such an animal as a PCI-e card with 2 SATA ports plus a NIC on it, and then unRAID would have to support it. Somehow I think that is wishful thinking.
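In case it helps anyone checking the same thing, here is roughly how to verify and raise the MTU on a Linux box (a sketch; "eth0" and the gateway address are assumptions, and every device on the path has to support jumbo frames for MTU 9000 to actually work):

```shell
# Show the current MTU on the interface (interface name is an assumption).
ip link show eth0 | grep -o 'mtu [0-9]*'

# Raise it to 9000 -- only if the NIC, switch, and peers all support jumbo frames.
ip link set dev eth0 mtu 9000

# Verify end to end: 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000.
# -M do forbids fragmentation, so an undersized hop will fail loudly.
ping -c 3 -M do -s 8972 192.168.1.1
```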


I am writing directly to the array and I get between 15MB/s and sometimes 32MB/s, and that is with 3TB Hitachi CoolSpin 5400rpm drives. I have yet to decide whether to use a cache drive or not. I have room for 6 drives in the server, and I am considering that once 240GB SSDs come down in price below $1 per gigabyte I might stick an SSD below the optical drive cage.
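One quick way to put a number on raw write speed is a dd test (a sketch; the TESTFILE path defaults to /tmp here and is an assumption -- point it at a share on the array to measure the array itself; conv=fdatasync forces the data to disk so the reported speed reflects the drives rather than RAM):

```shell
# Rough write-throughput check; the target path is an assumption.
TESTFILE="${TESTFILE:-/tmp/speedtest.bin}"
# Write 64 MB and flush it to disk before dd reports a speed.
RESULT=$(dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "$RESULT"
rm -f "$TESTFILE"
```

Comparing that figure against a plain network copy helps separate a disk bottleneck from a network one.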

 

Besides the 4 SATA ports from the motherboard, I have 2 ports from a PCI-e card. To use a cache drive, must I hack the BIOS to get full speed from the optical drive's port, or does the latest N40L firmware overcome this obstacle? Just in case you may know. Checking your build, it looks like I could always use the e-SATA port for the cache drive; I just didn't want to run a cable outside and then back in, as I now have cards in both PCI-e slots. I ordered this Intel NIC for $28 and will see how it works.

 

Thanks for any help and advice you may have,

O2G



 

Are you sure you are focusing on the right bottleneck? I do not run an unRAID system, but I reach speeds of over 100MB/s over the default NIC, without options like jumbo frames or link bonding. I run OpenIndiana with ZFS on 3x 2TB WD EARS 5400rpm drives.

 

People using WHS have reported better results: http://www.avforums.com/forums/networking-nas/1534144-hp-proliant-microserver-n40l-owners-thread-2.html

 

So if you reach speeds of around 15-32 MB/s, it seems to me that there are other factors you should look at before spending money on new hardware.
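For what it's worth, the ceiling on a gigabit link is easy to ballpark (a sketch; the ~94% figure is a rough assumption for TCP/IP framing overhead). Even a stock NIC at MTU 1500 has far more headroom than 15-32 MB/s, which suggests the network is not the limiting factor here:

```shell
# Back-of-envelope gigabit ceiling (the 94% efficiency figure is an assumption).
LINK_MBITS=1000          # gigabit Ethernet line rate in megabits/s
EFFICIENCY_PCT=94        # rough allowance for TCP/IP framing overhead
MAX_MBYTES=$(( LINK_MBITS * EFFICIENCY_PCT / 100 / 8 ))
echo "Practical gigabit ceiling: ~${MAX_MBYTES} MB/s"
```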

