Define what NIC unraid uses



Working on getting my unRAID server set up and running, and looking forward to migrating my Plex server onto a Docker container here. To ease transfers between my Blu-ray ripping machine and the Plex server (on unRAID), I have added a 10-gigabit network card to connect each machine to my network. My Windows Blu-ray ripping machine was quite simple to set up with the 10-gigabit PCIe card (as you might imagine), but setting it up on unRAID has been quite the trick.

 

How can I define that PCIe NIC as the default NIC for unRAID? By default, the machine is using the first onboard NIC.

 

Also - this probably further complicates things, but I thought it was worth mentioning - the host PC has not one, but four onboard NICs - none of which I want to use.

 

Any help is appreciated!


@John_M - Disabling those onboard ports was my first instinct as well - unfortunately, it doesn't look like this particular model of Sun (X4600) will allow me to do so. How would I go about setting up a udev rule? Linux noob here.

 

 

@BobPhoenix - I'm not entirely sure what "stub" means - could you clarify?

 

Thanks for the help!


You can add a stub to syslinux.cfg (click on the flash drive link to edit it). My syslinux.cfg looks like this:

default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=pt intel_pstate=disable pci-stub.ids=11ab:6081 initrd=/bzroot
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest

 

 

 

My SAT2-MV8 controller is stubbed with the "pci-stub.ids=11ab:6081" above.

 

 

The value can be found by issuing a "lspci -nn" from command line in unRAID

 

 

So for my SAT2-MV8 it looks like this

0b:01.0 SCSI storage controller [0100]: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller [11ab:6081]
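If you want to pull just the ID out programmatically, something like this works (a sketch - the sample line is my SAT2-MV8 entry hard-coded as input, since the actual `lspci -nn` output differs per machine):

```shell
# Extract the [vendor:device] ID from an "lspci -nn" line.
# Sample input: the SAT2-MV8 entry quoted above.
line='0b:01.0 SCSI storage controller [0100]: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller [11ab:6081]'
# Match a bracketed hex pair like [11ab:6081]; the class code [0100] has
# no colon, so it is not matched. Strip the brackets afterwards.
id=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$id"   # prints 11ab:6081
```

On a live box you would feed it real output instead, e.g. `lspci -nn | grep -i ethernet`.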

 

 

Just add the appropriate pci-stub.ids= line to syslinux.cfg for your network controllers. As you can see above, the device ID is 11ab:6081 for my SAT2-MV8 controller.


I appreciate the info from you both, but I feel as I am more inexperienced in Linux than I had thought...  :-[

 

After reading through the info from BobPhoenix and the link from John_M, I'm at a loss. I'm really having trouble understanding the whole kernel/udev thing - it's much different from the Windows concepts I know.

 

I have included a picture of the results from "lspci -nn": http://imgur.com/JEOuntC

The "Melanox" listing is for the 10Gb card, and the four intel listings are for the onboard nics, I presume.

 

@BobPhoenix - What I'm getting from your post is that I should add these four Intel listings to the syslinux.cfg - is that correct? And can you clarify how exactly I do that?

 

@John_M - Does this info have any relevance to the udev edit method?

 

My apologies for the lack of knowledge - thanks for working with me!

 

Cheers!


Correct. Add the following to the syslinux.cfg file, which is editable from the GUI when you click on the word "Flash" on the Main tab in the unRAID GUI:

pci-stub.ids=8086:1010

 

 

So your syslinux.cfg might look like this if you only make this one modification:

default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append pci-stub.ids=8086:1010 initrd=/bzroot
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest

 

 

What it should do is hide the network controllers from unRAID. My assumption is that your 10Gb NIC is a Mellanox controller anyway.

 

 

Attached are some pictures.  Click on flash on the main tab in the unRAID GUI then scroll down until you see the syslinux config.

Click_on_Flash.png

Scroll_down_until_you_see_this.png


We're just offering you different approaches to address the problem. Bob is suggesting that you "hide" your unwanted NICs from unRAID by reserving them for use by potential VMs. The advantage is he's given you a very big hint as to how to do it.

 

I'm suggesting you could use a udev rule to force the association of the device name "eth0" with your NIC's MAC address. The advantage is you don't "waste" four perfectly good gigabit ports; you just shuffle their names.
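For reference, such a rule might look like this (a sketch - the filename and MAC address are placeholders; substitute the MAC your 10Gb card reports via `ip link` or `ifconfig`):

```
# /etc/udev/rules.d/70-persistent-net.rules (filename and MAC are placeholders)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"
```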

 

I also suggested two other approaches, namely turning off the unwanted NICs in the BIOS (which would be extremely simple to do, but unfortunately you don't think it's possible - it might be worth another look), or manually configuring the port with ifconfig.

 

Perhaps a feature request to allow an arbitrary number of Ethernet ports to be configured via the GUI is in order, though I imagine it must have already been requested.

 

  • 2 weeks later...

At the moment you can only configure eth0 via the GUI. So either disable all the other ports in the BIOS, or configure it manually, or set up a persistent udev rule that forces your card to be allocated as eth0.

 

John_M,

 

Not 100% true - my IPMI connection is eth0, and unRAID has no problem configuring my 10-gig card, which is eth1. I would like to enable the second onboard 1-gig port and try to use it for WOL, since 10-gig cards don't support it. I think I have the udev rules sorted out, but how do I get the file into the /etc/udev/rules.d folder at bootup?
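For reference, my current plan (a sketch - the rule filename is my own guess, and I believe the stock go file already contains the emhttp line) is to copy the rule into place from /boot/config/go, since unRAID rebuilds its root filesystem from the flash drive on every boot:

```
#!/bin/bash
# /boot/config/go - runs once at boot, before the web GUI starts
# copy the udev rule from the flash drive into the live filesystem
cp /boot/config/70-persistent-net.rules /etc/udev/rules.d/
udevadm control --reload-rules
# start the unRAID management GUI (already present in the stock go file)
/usr/local/sbin/emhttp &
```

Whether the rename actually takes effect probably depends on the rule being in place before the NIC driver loads, so treat this as a starting point rather than a known-good recipe.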

 

thanks

 

  • 2 years later...

I have my doubts as to whether that will fully work, but I will try it tomorrow - thanks for pointing it out. I saw it 100 times today and didn't think of using it like that, lol.

 

My problem is I've got eth0 selected correctly now, but the default gateway still insists on being the motherboard NIC. I ended up just putting a false private subnet on that to get around it; otherwise the whole server couldn't talk to the internet.
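In case it helps anyone else, here's roughly what I do from the console as a stopgap (an iproute2 sketch - the gateway address is an example, and the change doesn't survive a reboot):

```
# move the default route off the onboard NIC and onto eth0
ip route del default
ip route add default via 192.168.1.1 dev eth0
ip route show    # verify the default route now points at eth0
```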


Can you confirm that 100% of routed traffic also goes through that? Maybe yours is a different scenario; I'm having a really hard time getting traffic to move away from my onboard Ethernet. Even disabling it in the motherboard doesn't work. And when I move another interface to eth0, it keeps the gateway on the new name for the onboard - so the default route is now on eth2. I have even completely deleted the network config by removing the cfg file, assigned the correct interface to eth0, and ensured that the default route is going to eth0 (not onboard), yet upon reboot it reverts the default route back to the onboard (now eth2). I had to put a dummy subnet on it to get it to work. 25+ years doing plenty of IT technical work at an enterprise level, and Unraid is stumping me. :/


Thanks, good to know - I am not sure what's going on yet; there are other related issues that may be playing into it. For example, my dual-port NIC pings on both ports while only one port is plugged in. Neither is bonded and there is no shared bridge. I did see they're using the e1000e driver, which according to Intel is incorrect - it should be e1000 - so it could be something there. But that's the PCI stuff, and the problems I'm having are with the onboard. Anyway - I got so grumpy with it this morning that my chiropractor commented how tense I was 7 hours later - so I've clearly done enough of that today!
