
unRAID plugin for iGPU SR-IOV support


Recommended Posts

First of all, huge thank you to @zhtengw for the plugin and to @ich777 for all of the assistance in this thread.  I'm brand new to Unraid and trying to set up SR-IOV to use for a combined VM / transcoding setup.  This thread has been incredibly helpful in understanding what's going on and troubleshooting through each of the obstacles I've encountered.  Thank you both!

The issue I'm running into is simply that the plugin doesn't seem to accept my configuration changes or actually set up any virtual functions.  Following the instructions up to this point, I've been able to install it (running Unraid 6.12.4) and have it recognize that my CPU supports SR-IOV.  But when I set the VF number, nothing seems to happen regardless of which button I hit, both "Save to Config File" and "Enable Now".

I've tried rebooting after making the configuration changes, but once the system is back up, the settings page for the plugin is back to its default values: 7 available, none created, number of VFs set to 0:

I've tried only clicking one button (trying with both), and clicking them both in different orders.  Nothing seems to help.  Is there something else that I'm missing?


5 minutes ago, familial-mate3101 said:

I've tried only clicking one button (trying with both), and clicking them both in different orders.  Nothing seems to help.  Is there something else that I'm missing?

Sorry but I'm most certainly the wrong person to help here (I don't have a capable SR-IOV CPU), but without your Diagnostics or at least what hardware are you using no one will be able to help you.

11 minutes ago, familial-mate3101 said:

But when I set the VF number, nothing seems to happen regardless of which button I hit, both "Save to Config File" and "Enable Now".

 

Try doing it at the command line:

echo 2 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

 

then check if it was set successfully:

cat /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

 

If that doesn't work, it's probably not Unraid-related and you can try to get help here: https://github.com/strongtz/i915-sriov-dkms
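One extra sanity check worth doing first (my own sketch, not part of the plugin): the PF also exposes `sriov_totalvfs`, which reports the maximum VF count the device claims to support. If that attribute is missing, the SR-IOV-capable module almost certainly isn't loaded. The PCI address below is the usual iGPU location; adjust it for your system, and the helper name is made up for illustration.

```shell
check_sriov() {
    # Print the advertised VF limit for a PCI device directory, or a
    # hint that the SR-IOV module is probably not loaded.
    dev="$1"
    if [ -e "$dev/sriov_totalvfs" ]; then
        echo "max VFs: $(cat "$dev/sriov_totalvfs")"
    else
        echo "sriov_totalvfs missing under $dev (module not loaded?)"
    fi
}

# The iGPU usually sits at 0000:00:02.0:
check_sriov /sys/devices/pci0000:00/0000:00:02.0
```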

Edited by Daniel15
19 hours ago, Daniel15 said:

 

Try doing it at the command line:

echo 2 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

 

then check if it was set successfully:

cat /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

 

If that doesn't work, it's probably not Unraid-related and you can try to get help here: https://github.com/strongtz/i915-sriov-dkms


Running the first command you provided gives me a "No such file or directory" error.  Also, oddly enough, even without having any VFs set up, I can't set up any VMs to use the iGPU as their graphics card because of an address conflict:


 

I'm guessing this is because the SR-IOV registration shows the same VGA adapter address, but that's a total guess.  Either way, it's preventing me from using my iGPU at all in my VMs.  I don't have any of them set to use it (obviously, since this error happens when I try).

 

I'll take my issues with the plugin to the Github link you shared.  Thank you for doing that!

I've included diagnostics this time, sorry for missing that initially!

atlas-diagnostics-20231012-1233.zip

1 hour ago, draconastar said:

Running the first command you provided gives me a "No such file or directory" error.

I think this means that the module is not actually loading properly. If you're on 6.12.4, the plugin in the Unraid apps section does not work properly (it only supports Linux kernels up until the version included with 6.12.3) and you'll instead have to install the one attached to this comment:

 

Issues with the plugin itself (like the kernel module not loading at all) should go in this thread. If the module loads properly (i.e. lsmod shows it, and /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs exists) but there are issues with it, they should probably go in the GitHub project I linked to. I say "probably" because the code in that repo is actually taken from Intel's fork of the kernel which adds SR-IOV support, and I'm not sure if they're actually maintaining the code or not. It's probably the best place though.

 

I've heard that Intel are working on upstreaming SR-IOV support (adding the code to the mainline kernel), meaning a custom kernel module shouldn't be needed in the future.

 

Edited by Daniel15
54 minutes ago, Daniel15 said:

I think this means that the module is not actually loading properly. If you're on 6.12.4, the plugin in the Unraid apps section does not work properly (it only supports Linux kernels up until the version included with 6.12.3) and you'll instead have to install the one attached to this comment:

 

Issues with the plugin itself (like the kernel module not loading at all) should go in this thread. If the module loads properly (i.e. lsmod shows it, and /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs exists) but there are issues with it, they should probably go in the GitHub project I linked to. I say "probably" because the code in that repo is actually taken from Intel's fork of the kernel which adds SR-IOV support, and I'm not sure if they're actually maintaining the code or not. It's probably the best place though.

 

I've heard that Intel are working on upstreaming SR-IOV support (adding the code to the mainline kernel), meaning a custom kernel module shouldn't be needed in the future.

 


Appreciate this information.  I followed the instructions for 6.12.4 that you pasted, which is how I was able to install the plugin at all.  It fails if you try to pull it from the apps section (incompatible kernel).

Your mention of the module loading properly had me check to make sure that is the case.  A few other odd things I encountered:

  • The plugin is not listed in lsmod as far as I can tell.  The plugin's settings page is still visible and usable from the Settings tab, and the syslog has plenty of entries showing that 'i915-sriov' was installed.  Running lsmod | grep -i 'i915-sriov' returns no results.
  • I know that the sriov_numvfs file exists, because I can navigate to it and open it with nano, yet I still get the "No such file or directory" error when I try to write to it.  I'm starting to think this may be a big part of the issue.  When I open it with nano, change the 0 to a 2 (or any other value), and try to save, saving produces the same error, "No such file or directory".  This obviously makes no sense.  ls -l shows that my user (root) has both read and write permission, so I'm not sure what's going on there.  Perhaps the write is being blocked at the kernel or driver level?
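For what it's worth, editors like nano are the wrong tool for sysfs attributes: those files are backed by a kernel write handler, and that handler can itself return "No such file or directory" even though the path exists (for example when the driver never actually enabled SR-IOV). A small diagnostic sketch along those lines, with the helper name made up for illustration:

```shell
set_numvfs() {
    # Attempt to write a VF count to a sysfs attribute and report what
    # happened. The ENOENT error can come from the driver's write
    # handler itself, not the filesystem - which is why the file can
    # exist, be root-writable, and still refuse the write.
    attr="$1"; count="$2"
    if [ ! -e "$attr" ]; then
        echo "missing: $attr"
        return 1
    fi
    if echo "$count" > "$attr" 2>/dev/null; then
        echo "numvfs now $(cat "$attr")"
    else
        echo "write rejected by driver (check dmesg)"
        return 1
    fi
}

# Note: the module shows up in lsmod as plain "i915", not "i915-sriov".
set_numvfs /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs 2 || true
```

Checking `dmesg | tail` right after a failed write usually shows why the driver rejected it.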

I'm going to try completely removing the plugin and following the installation steps you pasted again, just in case I've missed something or configured something incorrectly.

Edited by draconastar

Since the switch from ASRock H670M-ITX to MSI MPG B760I,

I have been encountering the following error in the VFIO-PCI log:

Processing 0000:00:02.1 8086:4690 Error: Device 0000:00:02.1 does not exist, unable to bind device.

 

Unraid version: 6.11.5.  SR-IOV status: Supported.

 

I also noticed one error in the system log, but I'm not sure if it's related to the VFIO log:

kernel: i915 0000:00:02.0: [drm] GuC error state capture buffer may be too small: 2097152 < 2557128 (min = 852376)

 

Passing through the VGA device 0000:00:02 as a test, without the iGPU SR-IOV plugin, works without any problems.

On 9/19/2023 at 10:29 PM, domrockt said:

 

Hold my beer xD - I had it working back in 6.12.3. Afaik Sunshine worked fine, and Parsec worked too. I had to remove all other displays and add a virtual one like Amyuni USB Mobile Monitor, plus maybe a script that started that virtual monitor. It's been a while; I haven't needed that since I got my 3060 working for it.

 

Did you find a solution for this?

Just recently upgraded from an 8th-gen to a 12th-gen i5.

 

Managed to get the UHD 770 showing up in the VM, though I'm only able to boot as long as I have the Unraid virtual graphics enabled as well.

I've tested the Parsec virtual display, and github.com/itsmikethetech/Virtual-Display-Driver, with no luck, getting the same error when attempting to connect using Parsec.

 

Edit: Add Amyuni USB Mobile Monitor to that list as well; no luck.

 

Edited by LAS
On 9/19/2023 at 9:37 PM, ich777 said:

If you are already using Unraid 6.12.4, try the following:

  1. Run this from an Unraid terminal:

  2. Copy the two files from the post into this folder: /boot/config/plugins/i915-sriov/packages/6.1.49/
  3. Install the SR-IOV plugin
  4. Run this from an Unraid terminal:

  5. Reboot

Please help me. I followed your method, but it doesn't seem to have worked.

wbgtower-syslog-20231022-0818.zip

5 hours ago, wbgwwd said:

Please help me. I followed your method, but it doesn't seem to have worked.

Could you share a bit more information (screenshots, …)?

What does not work?

 

Sorry, but I can't help much here because I don't have hardware on hand to test. For now I only compile the packages and provide a way for users to use this plugin, since the developer of this plugin seems to have vanished.

This method definitely works, because it is working for others too.

 

Maybe someone else can help you, maybe @domrockt?

8 hours ago, wbgwwd said:

Please help me. I followed your method, but it doesn't seem to have worked.

 

 

1) Mainboard BIOS: enable SR-IOV there

2) Install the SR-IOV plugin

3) Make a user script with the following (echo 4 means 4 virtual GPUs):

#!/bin/bash
echo 4 > /sys/devices/pci0000\:00/0000\:00\:02.0/sriov_numvfs

4) Make it run on array start

5) Set 4 VFs in the plugin

6) Reboot

7) Bind as many VFs as you wish in the System Devices tab

8) Reboot
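A slightly defensive variant of the step-3 user script might look like this (my own sketch, assuming the iGPU at 0000:00:02.0 and 4 VFs wanted; the function name is illustrative):

```shell
#!/bin/bash
create_vfs() {
    # Create SR-IOV VFs on a PCI device, guarding against the two
    # common failure modes: the attribute not existing (module not
    # loaded) and a nonzero count already being set.
    dev="$1"; want="$2"
    if [ ! -e "$dev/sriov_numvfs" ]; then
        echo "sriov_numvfs missing - plugin/module not loaded?" >&2
        return 1
    fi
    # The kernel refuses a new nonzero count while VFs already exist,
    # so reset to 0 first.
    echo 0 > "$dev/sriov_numvfs"
    echo "$want" > "$dev/sriov_numvfs"
    echo "created $(cat "$dev/sriov_numvfs") VFs"
}

create_vfs /sys/devices/pci0000:00/0000:00:02.0 4 || true
```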

 

Edited by domrockt
On 10/22/2023 at 12:43 PM, domrockt said:

 

1) Mainboard BIOS: enable SR-IOV there

2) Install the SR-IOV plugin

3) Make a user script with the following (echo 4 means 4 virtual GPUs):

#!/bin/bash
echo 4 > /sys/devices/pci0000\:00/0000\:00\:02.0/sriov_numvfs

4) Make it run on array start

5) Set 4 VFs in the plugin

6) Reboot

7) Bind as many VFs as you wish in the System Devices tab

8) Reboot

 

I have been struggling with this for weeks and this was the fix! If you're on 6.12.4 or higher, follow the steps from the post above, and then these steps.


Maybe someone can provide some help. Here is what I have achieved on my Erying i9-12900HK mobile CPU, on a Mini-ITX board with Intel Iris Xe graphics.

 

So I installed SR-IOV and made 3 VFs:


 

Then I got 3 VGA cards in Hardware and bound them to VFIO:


 

I rebooted, of course, and afterwards tried all 3 cards in a Windows 10 VM. If I set them as primary cards I get no drivers, except in Parsec.

 

 

If I set VNC as primary and the Intel 2, 3, 4 cards as secondary, they show as installed in Device Manager, but the cards just don't work (Parsec says it is software decoding, and any GPU test shows 0).


 

I tried without SR-IOV by binding just the card to VFIO, and I get a Device Manager error 43.

 

I'm really tired of fighting with this, but it just doesn't work; maybe the community can help me sort it out.

1 hour ago, wbgwwd said:

It's all working fine for me on this version 6.11.5.

This is not a valid point, because first of all this plugin is highly experimental, as the maintainer's GitHub repo states; the version for 6.11.5 is based on a completely different commit; the kernel in 6.11.5 is a different one; and what it ultimately boils down to is that there are too many variables involved.

 

I would strongly recommend that you create an issue on the maintainer's GitHub.

 

TBH I'm not too sure if this repository is actively maintained anymore, since the last commit was my PR to make it compatible with kernel 6.5 last month, but I could be wrong about that.

 

BTW @fromonesource also mentioned that he got it working with an older guest driver.


Hello, first of all I'd like to thank you for developing the SR-IOV plugin. I'm currently on Unraid 6.12.4, and my CPU is an Intel i5-12400. I've successfully installed the plugin and created the virtual functions (screenshot attached).

Secondly, I previously followed this link https://zhuanlan.zhihu.com/p/563802258 aiming to get Plex to use the GPU as well. However, it seems that Plex isn't using the GPU as I had hoped, with GPU usage still at 0%. Here is the docker-compose for my Plex installation (screenshot attached). Is my setup correct? Assuming I've created two virtual GPUs from the integrated graphics, how can I assign one of them to Plex? If that isn't possible, can Plex make use of the GPU at all under SR-IOV virtualization? I'm looking forward to your response.

 

 


17 hours ago, coderZoe said:

Assuming I've virtualized two GPUs through integrated graphics, how can I assign one of them for Plex's use?

 

Nope, it is not how you imagine it!

 

You have one real iGPU, which is device 00:02.0.


 

That one is automatically used by Unraid as the Docker iGPU;
you need the Intel GPU TOP plugin to work with Plex.

 

Your two VF iGPUs can be passed through to VMs, and need to be stubbed like you have done.
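To make the Plex side concrete: for a container, the usual route is mapping /dev/dri (the PF) into it rather than a VF. A minimal docker-compose sketch — the image name and paths here are illustrative, not taken from this thread:

```yaml
services:
  plex:
    image: lscr.io/linuxserver/plex:latest   # illustrative image
    devices:
      - /dev/dri:/dev/dri   # hand the host iGPU (the PF at 00:02.0) to the container
    volumes:
      - /mnt/user/appdata/plex:/config      # illustrative path
    restart: unless-stopped
```

Hardware transcoding then still has to be enabled inside Plex's own transcoder settings.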


Should I be able to pass through the iGPU and HDMI port (with or without SR-IOV installed) on an Intel i5-12450H?

 

I've been struggling: Windows 10/11, Q35/i440fx, SR-IOV on and off.  I've tried various other tweaks like vga=off video=vesafb:off,efifb:off in syslinux.

 

0000:00:02.0 is "Intel Corporation Alder Lake-P GT1 [UHD Graphics]" whether or not SR-IOV is on.  Unlike with dGPUs, the audio device isn't at 00:02.1.  The 1f.0 device referenced below is the ISA bridge, which is in the same IOMMU group as the audio device.

 

The errors in the vm log are:
 

2023-11-02T20:08:40.746114Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:00:02.0","id":"hostdev0","bus":"pcie.0","addr":"0x2"}: IGD device 0000:00:02.0 cannot support legacy mode due to existing devices at address 1f.0
2023-11-02T20:08:41.388627Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:00:02.0
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=

 

I tried @SpaceInvaderOne's vBIOS dump script, which also failed.

 

I tried downstream, multifunction, and both, to get the audio device into its own IOMMU group, but I'm hoping that isn't necessary.  I'm left with these items in one IOMMU group in all scenarios:

 

[8086:5182] 00:1f.0 ISA bridge: Intel Corporation Alder Lake PCH eSPI Controller (rev 01)
[8086:51c8] 00:1f.3 Audio device: Intel Corporation Alder Lake PCH-P High Definition Audio Controller (rev 01)
[8086:51a3] 00:1f.4 SMBus: Intel Corporation Alder Lake PCH-P SMBus Host Controller (rev 01)
[8086:51a4] 00:1f.5 Serial bus controller: Intel Corporation Alder Lake-P PCH SPI Controller (rev 01)

 

I'm afraid to stub all of the above because I think some of those devices are required by the host.
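To see exactly which devices share a group before stubbing anything, the groups can be walked straight from sysfs. A small sketch, with the function name made up here for illustration:

```shell
list_iommu_groups() {
    # Print "group N: PCI-address" for every device under the IOMMU
    # groups tree, so you can see what shares a group with the audio
    # controller at 00:1f.3 before stubbing anything.
    root="${1:-/sys/kernel/iommu_groups}"
    for dev in "$root"/*/devices/*; do
        [ -e "$dev" ] || continue
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "group $group: $(basename "$dev")"
    done
}

list_iommu_groups | sort -V
```

Cross-referencing the output with lspci -nn shows which of those devices are safe to leave on the host.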

 

I procured a 12th-gen Intel mini PC rather than AMD because I thought Intel was further along and more likely to work here, based on threads like this, but I guess I needed to dig further.

 

I'm just looking for a direction to head, or another mini PC platform (AMD?) where this is working.  Just iterating through different combinations without really understanding where to head next is killing hours with little fruitful movement.

 

Thanks,

 

--dimes

On 10/2/2023 at 4:46 PM, xlucero1 said:

Your experience speaks volumes. The dummy plug did the trick; even after hours of research I couldn't figure out why I couldn't set the primary display to the iGPU. Thank you so much for helping get it enabled and working, truly. And thank you very much for the plugin @zhtengw, I really hope to figure out these other issues.

The problem I am experiencing now is two-part - It takes some tinkering to get the VM up to where I can utilize it - but when I can get it working in one of these two ways, I have these issues:

1. I was able to pass through the Intel iGPU to the previous VM, solo:
- enabled a virtual display with "usbmmidd_v2" & remoted in with Splashtop Desktop; but the screen goes black when I do anything extensive (i.e. change from 1920x1080 to 2560x1440, or add a second display).
example 1 tower-diagnostics-20231002-1315.zip

2. When I pass through GPU 1 'Virtual' & GPU 2 'Intel':
- I can see the iGPU in the VM but it is not active. The driver seems to be up to date.
example 2 tower-diagnostics-20231002-1439.zip
- This issue here would be ideal to fix first. 

I am not sure if I should bind the VFs at boot, or what the proper way to get it working is. I am getting these errors in the VM logs:

char device redirected to /dev/pts/0 (label charserial0)
qxl_send_events: spice-server bug: guest stopped, ignoring
2023-10-02T18:08:11.528775Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2023-10-02T18:08:11.528813Z qemu-system-x86_64: vfio_dma_map(0x148f5d750400, 0x382000000000, 0x20000000, 0x148f32200000) = -2 (No such file or directory)


If there's anything I can do that might be helpful, please let me know. Much appreciated.

Update: After keeping it running in the background for a couple of days, it runs okay and I am able to remote in with no issues. BUT when I turned off one VM & booted another, the Splashtop display went black & I cannot access it in Parsec.
So issue #1 above is possibly a server/plugin-related issue, and seems to be unrelated to the VM itself being underpowered or overdriven... (tower-diagnostics-20231004-0028.zip). TY

RDP (Remote Desktop) resolved my issues of 0% GPU utilization, hardware acceleration of programs, & the screen going black on Splashtop/Parsec/VNC. This article drove me to use RDP instead of the alternatives. I currently have iGPU 02.01 passed through solo. It has been stable for days. Thank you all again for your help @ich777 & @zhtengw & everyone else in this thread!

This topic is now closed to further replies.