
Network interfaces "in use by unraid"


DrKamp
Solved by DrKamp


Hello everyone,

I'm trying to isolate my network interfaces to use with VMs in Unraid.

I have a total of 5 ports:

  • 1x onboard NIC - 2.5Gb (Realtek RTL8125)
  • 1x quad-port PCIe network card - gigabit (Intel 82580)

 

The onboard NIC is connected to the LAN to access Unraid via the webGUI.

I'm trying to isolate 2 of the 4 ports on the Intel PCIe card (I'm willing to compromise and isolate all 4 of them).

 

As per my research, I'm trying to bind those to VFIO at boot using the Tools > System Devices option in Unraid.

However, all of them are marked as "In Use By Unraid" (see screenshot below), even with the VM and Docker services disabled.
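For reference, on recent Unraid versions (6.9+, if I'm not mistaken) the System Devices page writes the bindings to a small config file on the flash drive, so a successful bind should end up looking roughly like this (the PCI addresses below are placeholders; 8086:150e is the vendor:device ID of the 82580 ports):

```
# /boot/config/vfio-pci.cfg - written by Tools > System Devices
# Placeholder PCI addresses for two ports of the quad-port 82580 card
BIND=0000:03:00.0|8086:150e 0000:03:00.1|8086:150e
```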

 

My question is:

Are they all "in use by Unraid" because they're in the same IOMMU group as the 2.5Gb NIC that I use to access Unraid via the webGUI?

If so, what is the solution to separate them and bind them to VFIO at boot?
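For what it's worth, the grouping itself can be checked from the Unraid terminal; a small sketch (standard sysfs layout assumed) that prints each IOMMU group with the PCI addresses of its devices:

```shell
#!/bin/sh
# List each IOMMU group with the PCI addresses of its devices.
# If all five NICs print under the same group number, they can only
# be bound to vfio-pci together.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue                 # no groups: IOMMU disabled
    printf 'IOMMU group %s:\n' "${g##*/}"
    for d in "$g"/devices/*; do
        printf '    %s\n' "${d##*/}"        # e.g. 0000:03:00.0
    done
done
```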

 

If that's not the reason, why are my NICs marked as "in use by Unraid", and how can I set Unraid to release them?

For information, 2 of those ports (on the quad-port PCIe card) are bridged, each to a separate brX network.

The other 2 NICs are not bridged, nor bonded.

 

Thanks in advance to anyone who takes some time to answer ;-)

Have a great day !

 

 

Screenshot 2023-02-05 124301.png

unraid-diagnostics-20230205-1245.zip


Yes - to be able to use VFIO you have to pass through all the devices in the group.  To achieve what you want, you are going to have to get them into a different group.  Quite what the solution is with your motherboard I do not know - maybe trying a different PCIe slot might work?
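Another thing that is sometimes worth trying (no guarantee it helps on every board, and it does weaken the isolation guarantees between devices, so treat it as a workaround) is the PCIe ACS override option, which Unraid exposes in the VM Manager settings. With it enabled, the syslinux append line on the flash drive ends up looking something like this sketch:

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction initrd=/bzroot
```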

 

It might be worth clarifying why you need to isolate them - maybe just bridging them onto the local LAN would be enough to achieve what you want without using VFIO?


@itimpi, thanks for the answer.

I'm trying to set up a VM with OPNsense (similar to pfSense). That's why I need to pass through at least 2 of them (WAN and LAN).

My first approach was to bridge 2 of them individually, each to its own brX network: eth1 > br1 / eth2 > br2.

In the VM config, I attached those 2 brX networks and was able to see them in OPNsense without an issue.
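For reference, that setup corresponds to interface stanzas like these in the VM's XML (MAC addresses and PCI slots omitted; br1/br2 names as above):

```xml
<!-- two virtio NICs, one per Unraid bridge -->
<interface type='bridge'>
  <source bridge='br1'/>   <!-- WAN side -->
  <model type='virtio'/>
</interface>
<interface type='bridge'>
  <source bridge='br2'/>   <!-- LAN side -->
  <model type='virtio'/>
</interface>
```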

 

Things started to get complicated when I needed to get an IP from my ISP's DHCP. The requests need to be made on VLAN 100.

I have no issue creating a virtual interface in OPNsense with VLAN tag 100, using my WAN port as the parent interface.

However, I do not know how to make sure that the VLAN-100-tagged traffic from the VM is correctly passed through Unraid to my physical port...
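As far as I understand, a Linux bridge forwards VLAN-tagged frames unchanged unless VLAN filtering is enabled on it; a quick sketch to check that flag from the Unraid side (bridge name br1 assumed, as above):

```shell
#!/bin/sh
# Check whether VLAN filtering is enabled on br1; 0 means tagged
# frames pass through the bridge untouched.
f=/sys/class/net/br1/bridge/vlan_filtering
if [ -f "$f" ]; then
    echo "vlan_filtering on br1: $(cat "$f")"
else
    echo "bridge br1 not present on this host"
fi
```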

 

To limit the issues, I tried to get the physical ports directly into the VM using passthrough, to remove one unknown (how Unraid handles the tagged traffic from the VM's brX to the physical port).

 

I hope my explanation is clear (even though it's not really clear in my head 🙂 )

 

So to recap, here is what I need to achieve:

OPNsense WAN interface (VLAN 100 tagged traffic) -> OPNsense "physical" interface, which is in fact a virtIO NIC in the VM -> the virtIO NIC attaches to a brX interface in Unraid -> that brX is an Unraid network bridged to the eth1 physical port on the quad-port PCIe card.
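If it helps, one way to verify that last hop end-to-end (interface name eth1 assumed; requires tcpdump) is to watch the physical port for tagged frames while OPNsense requests its lease:

```shell
#!/bin/sh
# Wait up to 10s for one VLAN-100-tagged frame on eth1; seeing one while
# the OPNsense VM runs its DHCP request means the tag survived the brX hop.
if command -v tcpdump >/dev/null 2>&1; then
    timeout 10 tcpdump -e -i eth1 -c 1 'vlan 100' \
        || echo 'no VLAN 100 frame captured on eth1'
else
    echo 'tcpdump not available on this host'
fi
```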

 

Hope this helps to help me 🙂

 

