
Multiple Nics setup?





Just started using Unraid.  I have 4 NICs on my server: 2x 10GbE and 2x 1GbE.  By default Unraid used one of my 10GbE NICs for its setup.  I would like to change this to a 1GbE port to connect to the rest of my network (IP range/switch/internet router)...


Then I would like to configure the two 10GbE ports with fixed IPs on two other ranges to make direct connections to specific machines.


Trying to avoid buying a 10GbE switch for now.


Thanks all, 


03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T

03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T

04:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

05:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

05:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)


If you want to change the order in which eth0, eth1, etc. are allocated to specific Ethernet devices, you need to set up a udev rule.
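For reference, a persistent-naming udev rule pins an interface name to a NIC's MAC address. A minimal sketch (the MAC addresses below are placeholders; substitute the ones shown by `ifconfig`/`ip link`, and note the file location can vary by distro and Unraid version):

```
# /etc/udev/rules.d/70-persistent-net.rules  (example only, placeholder MACs)
# Pin one of the 1GbE I350 ports to eth0 so Unraid uses it for management:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:25:90:aa:bb:01", NAME="eth0"
# Pin the two X552/X557 10GbE ports to eth2/eth3 for the direct links:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:cc:dd:01", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0c:c4:7a:cc:dd:02", NAME="eth3"
```

A reboot (or reloading udev rules) is needed for the renaming to take effect.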


Alternatively disable the ones you don't want in the BIOS.


Alternatively, have you tried using eth0 at 1 Gb/s? What happens if you connect it to a gigabit switch port? Previous twisted pair Ethernet iterations were backwards compatible so I'd be surprised if 10GBaseT isn't.



Should I make a diagram?  The idea is that all machines connect using normal DHCP and have access to the net and the shares.  But two specific machines will connect directly to Unraid using fixed IPs on another subnet, for faster access to the same shares that are also available on the 1-gig part of the network.  Not sure if I am describing this correctly.


I will probably need to read up on my networking commands under Linux.


Just to be sure: what you describe doing with udev is not possible in the GUI?


The 10-gig port works at 1 gig.  I would just rather have it available to connect directly to another 10-gig machine.
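For what it's worth, a direct point-to-point 10GbE link just needs fixed IPs on a subnet of its own at each end; no switch or gateway is involved. A sketch, assuming the 10GbE ports show up as eth2/eth3 on the server (interface names and addresses here are only examples):

```shell
# On the Unraid server: one private subnet per direct link
ip addr add 192.168.10.1/24 dev eth2   # link to machine A
ip link set eth2 up
ip addr add 192.168.20.1/24 dev eth3   # link to machine B
ip link set eth3 up

# On machine A, on its 10GbE port (e.g. eth0):
ip addr add 192.168.10.2/24 dev eth0
ip link set eth0 up
```

Machine A would then reach the shares over the fast link at 192.168.10.1, while normal traffic (internet, other clients) still goes over the 1GbE/DHCP network. These commands don't persist across reboots, so they'd need to go in a startup script or the network config.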


So should I reinstall with only one NIC active and then activate the others afterwards?  Because I do want to use these ports in the end.


Linus on LinusTechTips did something similar to this recently... I can't remember if it was to an unRAID machine or just another machine on the network, but he avoided the switch by linking the two 10-gig ports on each machine directly.


Try www.linustechtips.com; the video should be on there or on the YouTube channel.  Other people on there may know more.


Yes, I was doing this with FreeNAS.  I actually got reminded about Unraid by the Linus video.  I was hoping Unraid and an SSD cache would help my 10GbE speeds.


FreeNAS with 5x 3TB Reds in RAID-Z1 + 32 GB of RAM + a Xeon D-1540 was not playing nice with the 10GbE, only getting around 170-200 MB/s write.  I even added an SSD ZIL (Intel S3500 300 GB).


With all the Reds in a stripe I was still getting the same write speeds!?  I think Windows (SMB) shares on FreeNAS are not great for single-user transfers; they're probably more in line with multi-user use cases.


I will look into manual config without the GUI and come back with what I get from it.


Hi again,


Looks like I stuffed up my initial post.  I think Unraid 6.1.9 only has a driver for the "Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)".


So I'm only getting eth0 and eth1 in ifconfig.
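One way to confirm whether it's a driver issue is to check which kernel driver (if any) is bound to each PCI device. A quick diagnostic (output will obviously depend on the hardware and kernel):

```shell
# List Ethernet controllers with the kernel driver in use;
# the 10GbE X552/X557 would normally need the ixgbe driver.
lspci -k | grep -A 3 Ethernet
```

If the X552/X557 entries show no "Kernel driver in use" line, the ixgbe module is missing or doesn't support that device in this kernel build.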


I still want to do what I described, but it looks like I have another step to do first: install drivers?


Does anyone know if 6.2 will have support for the "Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T" baked in?


These are the 10GbE NICs integrated into the Xeon D SoC series.  My board is a Supermicro X10SDV-TLN4F.




Thank you all for your replies

  • 2 weeks later...


This topic is now archived and is closed to further replies.
