HOW TO: Using SR-IOV in UnRAID with 1Gb/10Gb/40Gb network interface cards (NICs)


BVD


Same benefits with 1Gb - just ensure that whatever Intel NIC you get is both genuine and has SR-IOV support per Intel's ARK page listing, and you'll be fine.

 

They're more expensive, but I typically go straight for new Intel i350 OEM cards these days. There's too much risk of fakes out there on the used market, no matter how careful you are.

  • 4 months later...

Thank you again for this guide - it's really helpful. I have the following situation that I can't get my head around, and I hope you can share some guidance on it.

I have 2 NICs in the system: one Realtek 2.5GbE NIC and an Intel X550-T2 10GbE NIC, which is providing the VFs. I have a TrueNAS VM using 1 VF from the X550-T2. Devices plugged into the X550 parent PF can access the TrueNAS server no problem, but I need to use the Realtek NIC port because it has WOL capability. Now, if I plug the Realtek NIC into the main switch, other devices connected to the main switch can no longer access the TrueNAS VM. I have bridged the Realtek NIC and the parent PF in Unraid's webUI, but that does not grant access to TrueNAS through the VF.

Now, how do I access the VF from the other NIC in the computer, or is it simply impossible? Can I plug both the Realtek NIC and the X550 NIC into the same main switch? Do the VFs created by the same parent PF share a network (2 VFs, one for the TrueNAS VM and one left unbound and bridged in Unraid)? I have searched online but this seems to be a difficult thing to find.

On 11/19/2022 at 4:34 PM, Bobo_unraid said:

I have 2 NICs in the system: one Realtek 2.5GbE NIC and an Intel X550-T2 10GbE NIC, which is providing the VFs. [...] Now, how do I access the VF from the other NIC in the computer, or is it simply impossible? Can I plug both the Realtek NIC and the X550 NIC into the same main switch? Do the VFs created by the same parent PF share a network?

 

It's all networking at that point - 

 

Firstly, only virtual functions should be used (mapped) if using SR-IOV - ensure that each machine needing to communicate on the network has both a VF attached and a valid network address assigned to it.

 

Confirm they're able to communicate by checking your router's IP assignments for the associated MAC addresses. Make sure the hostnames, MACs, and IPs you see from the router line up with what is shown on the clients themselves as well. It's likely your router still has an unexpired DHCP lease tying that MAC to the previous device; if that's the case, simply expire the lease. You can always assign static IPs on the router/DNS side so you don't have to worry about this in the future, should you need to do so.
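If it helps with lining those up, this is roughly what I'd check from the UnRAID terminal (just a sketch - 'eth0' stands in for whichever interface is your parent PF):

# List the VFs hanging off the PF and the MAC each one is currently using -
# these are the MACs that should show up in your router's DHCP lease table
ip link show eth0

# Then, inside the VM itself, confirm the interface reports the same MAC
# and has actually picked up a valid address
ip -br link
ip -br addr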

  • 1 month later...
On 2/28/2021 at 1:57 PM, BVD said:

Via terminal, just type the following command, substituting 'eth1' for whichever interface you're planning to set up:

sudo echo 4 > /sys/class/net/eth1/device/sriov_numvfs


... That's it... no reboots, no config file changes, just bind the VFs with the script and no downtime, which is pretty spectacular; you now have 4 VFs (virtual functions) and one PF (physical function).
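(For anyone following along, a quick way to confirm the VFs actually got created after running that - just a sketch, using eth1 as in the quote:)

# Maximum number of VFs the hardware supports
cat /sys/class/net/eth1/device/sriov_totalvfs

# Number currently active - should read 4 after the echo above
cat /sys/class/net/eth1/device/sriov_numvfs

# The new virtual functions also show up as their own PCI devices
lspci | grep -i 'virtual function'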

 

Like the idea of the no-reboot configuration!

 

But the reality is that somehow this sriov_numvfs file cannot be changed on my machine.

 

I tried SSHing in as root and running your command to echo the number in, and it shows "echo: write error: No such file or directory".

 

I then tried editing it with nano; on save, it says "Error writing... No such file or directory".

 

I was trying to virtualize my iGPU (Intel 12600K, Asrock Z690), so my file path was  /sys/class/graphics/fb0/device/sriov_numvfs

 

Do you have an idea why this is happening?


@levelel I don't have an iGPU to test, and am unsure whether sysfs supports active repartitioning of these devices.

 

For an iGPU, I really don't see the use case for it either, so the option simply may not exist - not sure. They're not scale-out devices, and I somewhat doubt much development effort was spent creating options for things such as variable partitioning and the like.

 

It should just be a 'set it and forget it' configuration, and since there (likely) isn't much of a case for regular reconfiguration, I'd just stick with the PCI device tree method. Understandably nobody likes to reboot if they don't have to, but in this case you do it once at initial system setup and never have to think about it again.
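One thing worth checking before anything else is whether sysfs is exposing the SR-IOV attributes for that device at all - a quick sketch, reusing the fb0 path from your post:

# If sriov_numvfs / sriov_totalvfs don't show up here, the driver isn't
# exposing SR-IOV for this device and the write will always fail
ls /sys/class/graphics/fb0/device/ | grep sriov

# If they do exist, this reports the maximum number of VFs on offer
cat /sys/class/graphics/fb0/device/sriov_totalvfs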

Edited by BVD

I installed the VF drivers on two Windows 10 VMs and was able to get IPs from DHCP when the cable is plugged in. However, I was unable to get the two VMs to communicate with each other using static IPs with the cable unplugged. I read somewhere that there should be an internal L2 switch for the PF and VFs on each port, but it doesn't seem to be working for me. How do I bridge VFs on the same port so that the VMs can communicate?
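One thing I've been wondering about is the per-VF link state - with the cable unplugged the PF reports link down, so maybe the VFs are just following it. Something like this is what I was thinking of trying (only a sketch, with eth0 standing in for the PF):

# Force the VF links up regardless of the physical port state, so the NIC's
# internal switch can still pass traffic between VFs on the same PF
ip link set dev eth0 vf 0 state enable
ip link set dev eth0 vf 1 state enable

# Each vf line should now show 'link-state enable'
ip link show eth0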

  • 2 months later...

Bumping this thread to post an update. I have a ConnectX-4 Lx, so the first method I tried was @jortan's, which is:

 

1. Checked the driver: mlx5_core

2. Created a file: /boot/config/modprobe.d/mlx5_core.conf

3. Added: options mlx5_core num_vfs=8

However, I get an error about the mlx5_core num_vfs option not being found (something like that) when I reboot Unraid.
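(In case anyone else hits the same thing, a quick way to check whether the driver on your kernel even exposes that option - just a sketch:)

# List the module parameters mlx5_core actually accepts on this kernel;
# if num_vfs isn't in the output, the modprobe.d option can't work
modinfo mlx5_core | grep -i parm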

 

So what I ended up doing is a hybrid of @BVD's and @jortan's methods.

 

1. I first did this:

 

On 2/28/2021 at 11:30 AM, BVD said:

Driver / Device specific steps

 

The following is the most generic option, and should work for most UnRAID deployments that contain SR-IOV supporting NICs, going back to around 6.4, but I would recommend no lower than 6.8.2 if you're working with any device using the i40e driver (save yourself the pain and upgrade!):

 

  1. Open your terminal and edit the go file
    nano /boot/config/go
  2. Add a line like the following to the bottom for each interface, specifying the number of VFs to create and replacing my PCI address (0000:17:00.3) with your own - I chose 4 per interface:
    echo 4 > /sys/bus/pci/devices/0000:17:00.3/sriov_numvfs
  3. Hit 'Ctrl+X', then 'Y', then Enter (just following the on-screen prompts to save the file), and it's time for a reboot (one of the joys of UnRAID!)
     
  4. Now that your system is back up and running, head to the system devices screen (Tools -> System Devices) - you should see something pretty:
    [screenshot: ixgbeVfDevices.png - the new virtual functions listed in System Devices]

 

2. then:

 

On 3/4/2021 at 6:56 AM, jortan said:

Bind to vfio on startup

 

Option 1 - unRAID GUI

 

In unRAID Tools \ System Devices, select all the "Virtual Function" devices and click Bind Selected to vfio at boot

 

[screenshot: System Devices - selecting the Virtual Function devices and binding them to vfio at boot]

 

If we do nothing else, this will fail, because the vfio-pci script is run before device drivers are loaded, as noted here.

 

Tell unRAID to re-run the vfio-pci script (after device drivers have loaded) by calling it from the /boot/config/go file:

 

# Relaunch vfio-pci script to bind virtual function adapters that didn't exist at boot time
/usr/local/sbin/vfio-pci >>/var/log/vfio-pci

 

Reboot.

 

If you check Tools \ System Devices \ View vfio logs, you should see the first run where the bindings failed, and then a second run where the bindings succeeded.

 

So now I have this: 

 

[screenshot: Screenshot 2023-04-21 at 12.31.13 PM]

 

I still have to play with the Permanent MAC Address but I'm just glad it worked after spending hours on this.
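(For the MAC side of it, the direction I'm planning to look at is giving each VF a fixed administrative MAC from the go file or a user script - just a sketch, with the interface and MAC as placeholders:)

# Pin VF 0 on eth0 to a fixed MAC so the guest keeps the same address
# (and DHCP lease) across VM restarts - substitute your own values
ip link set dev eth0 vf 0 mac 02:11:22:33:44:55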


Ok, I found another way of doing this. Not sure if it's better or worse, but it works.

 

# create and reload udev-rules 
cat << UDEVRULES >> /etc/udev/rules.d/99-sriov.rules
ACTION=="add", SUBSYSTEM=="net", ATTRS{vendor}=="0x15b3", ATTRS{device}=="0x1015", ATTR{device/sriov_drivers_autoprobe}="0", ATTR{device/sriov_numvfs}="8"
UDEVRULES

udevadm control --reload
udevadm trigger --action=add

 

References:

1. How to use the mlx5_core driver with Mellanox ConnectX-4 Lx in Debian? | ServeTheHome Forums

2. https://forums.unraid.net/topic/109081-udev-regeln-in-unraid/?do=findComment&comment=996973

 

  • 4 months later...

OK, I want to try this. What I have:

 

Mellanox CX4121A IBM 01GR253 Dual-Port SFP28

SR-IOV capable board, i7 13700K - SR-IOV with the iGPU works.

 

I tried all the methods; none are working as far as I can tell.

 

I want 4 VFs per controller.

 

I tried to imitate the how-to two posts above mine from @riduxd, but with no effect - I have no VFs shown, so none to bind after the first edit of the go file.

 

[screenshot: sriov1.png]

 

These are my NICs.

 

I would add this to my go file:

 

#!/bin/bash

# Start the Management Utility
/usr/local/sbin/emhttp & 
echo 12.884.901.888 >>
#p8 state nvidia
nvidia-persistenced
# -------------------------------------------------
# disable haveged as we trust /dev/random
# https://forums.unraid.net/topic/79616-haveged-daemon/?tab=comments#comment-903452
# -------------------------------------------------
/etc/rc.d/rc.haveged stop
echo 4 > /sys/devices/pci0000\:00/0000\:00\:02.0/sriov_numvfs && /usr/local/sbin/vfio-pci
modprobe i915 && sleep 5
echo 2 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs && /usr/local/sbin/vfio-pci
echo 2 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs
sed -i "s/\(strSpecialAddress.*\)\$gpu_function/\1\"0\"/" /usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php
#VF Mellanox
echo 4 > /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs
echo 4 > /sys/bus/pci/devices/0000:05:00.1/sriov_numvfs
# Relaunch vfio-pci script to bind virtual function adapters that didn't exist at boot time
/usr/local/sbin/vfio-pci >>/var/log/vfio-pci

 

 

right?

Edited by domrockt
On 4/22/2023 at 1:33 AM, riduxd said:

 

 

# create and reload udev-rules 
cat << UDEVRULES >> /etc/udev/rules.d/99-sriov.rules
ACTION=="add", SUBSYSTEM=="net", ATTRS{vendor}=="0x15b3", ATTRS{device}=="0x1015", ATTR{device/sriov_drivers_autoprobe}="0", ATTR{device/sriov_numvfs}="8"
UDEVRULES

udevadm control --reload
udevadm trigger --action=add

 

 

My NIC has the same ATTRS{vendor}=="0x15b3", ATTRS{device}=="0x1015", so this code needs to go in the go file?

 

 

# create and reload udev-rules
cat << UDEVRULES >> /etc/udev/rules.d/99-sriov.rules
ACTION=="add", SUBSYSTEM=="net", ATTRS{vendor}=="0x15b3", ATTRS{device}=="0x1015", ATTR{device/sriov_drivers_autoprobe}="0", ATTR{device/sriov_numvfs}="4"
UDEVRULES

udevadm control --reload
udevadm trigger --action=add

 

I just need 4 VFs?

  • 3 weeks later...
2 hours ago, frodr said:

Thanks @BVD for your guide. Great work.

 

Is it possible to run one port as-is and split the other into VFs? And run the VFs on a different subnet from the untouched port?

 

Yup! It's all based on the PCI address, so 02:00.1 could have 4 VFs while 02:00.0 is left untouched.
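In go-file terms that would look something like this (a sketch - substitute your own addresses):

# Split the second port (02:00.1) into 4 VFs; leave the first port (02:00.0) as a plain PF
echo 4 > /sys/bus/pci/devices/0000:02:00.1/sriov_numvfs

The untouched port keeps whatever bridge/subnet it already has, and the VFs can be given addresses on a different subnet from inside the VMs.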


Thanks for responding to my questions. Much appreciated.

 

Just so I'm sure, running this on 6.12.4:

 

1) Download the script: "wget 'https://raw.githubusercontent.com/andre-richter/vfio-pci-bind/master/vfio-pci-bind.sh'; mv vfio-pci-bind.sh /boot/config/"

 

2) Run the command: "sudo echo 4 > /sys/class/net/eth1/device/sriov_numvfs"

 

The driver is "ice".

 

Do I have to run the "wget...." after every restart?

 

 

//

Edited by frodr
Added info.

OK, first things first: I only get the gist of what the code does - call it an educated guess - but I made it work with you guys'/gals' help. 😅

All this on 6.12.4

 

So I have a 25GbE dual-port NIC, a Mellanox Technologies MT27710 Family [ConnectX-4 Lx].

 

I did

 

1) Use this command in the Terminal:

wget 'https://raw.githubusercontent.com/andre-richter/vfio-pci-bind/master/vfio-pci-bind.sh'; mv vfio-pci-bind.sh /boot/config/

 

2) Use these commands in the terminal:

echo 4 > /sys/class/net/eth0/device/sriov_numvfs

echo 4 > /sys/class/net/eth2/device/sriov_numvfs

eth0 and eth2 are my 2 NICs.

With these commands I get the IDs of the VFs in Tools -> System Devices, like so:

15b3:1016 0000:02:01.2

15b3:1016 0000:02:01.3

15b3:1016 0000:02:01.4

15b3:1016 0000:02:01.5

15b3:1016 0000:02:00.2

15b3:1016 0000:02:00.3

15b3:1016 0000:02:00.4

15b3:1016 0000:02:00.5

 

3) Made a user script and set it to run at the start of the array (a more compact loop version of the same script is sketched after step 4):

 

echo 4 > /sys/class/net/eth0/device/sriov_numvfs
echo 4 > /sys/class/net/eth2/device/sriov_numvfs
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:01.2
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:01.3
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:01.4
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:01.5
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:00.2
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:00.3
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:00.4
sudo bash /boot/config/vfio-pci-bind.sh 15b3:1016 0000:02:00.5
/usr/local/sbin/vfio-pci >>/var/log/vfio-pci

 

 

4) It's done.

[screenshot: sriov.png]

[screenshot: sriov2.png]
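As mentioned in step 3, the same user script can also be written as a loop over the VF addresses if you prefer something shorter (a sketch using my IDs - adjust to your own):

#!/bin/bash
# Create the VFs, then bind each one to vfio via vfio-pci-bind.sh
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
echo 4 > /sys/class/net/eth2/device/sriov_numvfs

for vf in 0000:02:01.{2..5} 0000:02:00.{2..5}; do
    bash /boot/config/vfio-pci-bind.sh 15b3:1016 "$vf"
done

# Relaunch the vfio-pci script so unRAID picks up the new bindings
/usr/local/sbin/vfio-pci >>/var/log/vfio-pci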

 

 

Next step: on my second Unraid server I have a 100GbE Mellanox x4 and it's SR-IOV capable, but it seems to be deactivated. :D Let's see how I activate it via Unraid - I think there are some hints in this thread.

 

*edit* *done*

Install the Mellanox plugin, open a terminal, and use the command below.

You need to use your own NIC ID, and you can set any number of VFs your card allows:

 

mstconfig -d 01:00.0 set SRIOV_EN=1 NUM_OF_VFS=4
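(To double-check it took, you can query the same settings back - a sketch; as far as I know the new firmware configuration only becomes active after a reboot:)

# Confirm the firmware settings actually changed
mstconfig -d 01:00.0 query | grep -E 'SRIOV_EN|NUM_OF_VFS'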

 

Now my 100GbE NIC has SR-IOV activated.

 

Then rinse and repeat steps 1) - 4) above.

 

[screenshot: sriov3.png]

 

[screenshot: sriov4.png]

 

I spun up a VM and voilà - the NIC is already installed; the other one that's not installed is the virtual one from Unraid. 😁

 

[screenshot: sriov5.png]

 

 

Edited by domrockt
18 hours ago, frodr said:

Should the boxes be ticked automatically after running the script? Mine are not.

No, you need to tick however many you want to use in VMs.

Tick the box and reboot as usual. :)

 

[screenshot: sr1.png]

 

 

Then create a VM and check that box (or boxes, if you need more than one in your VM).

 

[screenshot: sr2.png]

Edited by domrockt