[PLUGIN] Intel iGPU SR-IOV - Support Page


Recommended Posts

29 minutes ago, whallin said:

However, is QSV supposed to work in an SR-IOV setup? It works okay on the host, but in the Windows 10 guest, Handbrake seems to be failing to encode using QSV for an "unknown reason".

 

Hardware acceleration seems to work perfectly elsewhere; it seems to be just QSV that's misbehaving. Has anyone had success with QSV?

Works for me. I'm still on Unraid 6.12.4 though, so I'm not sure if it broke in newer Unraid versions. I'm running Windows Server 2022 with Blue Iris, using Intel video acceleration to record my security cameras. It doesn't use a lot of GPU power, but it's definitely using it:

 

[screenshot]

 

The iGPU is configured as the second graphics card, so that VNC still works properly:

[screenshot]

16 minutes ago, Daniel15 said:

Works for me. I'm still on Unraid 6.12.4 though, so I'm not sure if it broke in newer Unraid versions. I'm running Windows Server 2022 with Blue Iris, using Intel video acceleration to record my security cameras.

 

Moved my BI license to the VM in question for testing, and QSV certainly does work. Tried transcoding with ffmpeg itself, and that worked too. Must be a Handbrake issue. Thanks for the Blue Iris tip! 


The installation of the GPU SR-IOV plugin fixes the following errors on my ASRock N100DC-ITX motherboard + Unraid 6.12.8:

[  231.726795] i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f
[  278.497321] i915 0000:00:02.0: [drm] *ERROR* Unexpected DP dual mode adaptor ID 7f

But I have this line at boot (dmesg):

 

i915 0000:00:02.0: Direct firmware load for i915/adlp_dmc.bin failed with error -2

The requested binary is not available under /lib/firmware/i915; only adlp_dmc_ver2_16.bin is there. Is this a real issue?
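One thing that could be tried is pointing the generic filename the driver requests at the versioned blob that is already present. This is only a sketch of a commonly suggested workaround, not a verified fix; whether the versioned file is a drop-in for the generic name is an assumption:

```shell
# Hypothetical workaround sketch: if only the versioned DMC blob exists,
# symlink the generic name the driver requests to it. Compatibility of
# the versioned blob with the generic name is an assumption, not verified.
FWDIR=/lib/firmware/i915
REQUESTED=adlp_dmc.bin
VERSIONED=adlp_dmc_ver2_16.bin

if [ -e "$FWDIR/$VERSIONED" ] && [ ! -e "$FWDIR/$REQUESTED" ]; then
    # Relative symlink so it stays valid inside the firmware directory
    ln -s "$VERSIONED" "$FWDIR/$REQUESTED"
    echo "linked $REQUESTED -> $VERSIONED"
else
    echo "nothing to do (firmware present or versioned blob missing)"
fi
```

A missing DMC blob is usually non-fatal (it mainly affects display power-saving states), so this may be cosmetic either way.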

 

Another problem with the i915 driver: it prevents Unraid from waking from sleep! My server gets stuck and cannot be accessed. This is a real issue for me; the same problem with the i915 driver is reported on other forums, but I haven't found a workaround.

On 2/20/2024 at 12:59 AM, jakeshake said:

Hello,

 

Thank you for keeping the pace going with this plug-in.

 

I am curious - are you able to output video from a Windows VM using the HDMI or Display Port with the iGPU using this plug-in?

 

On 2/23/2024 at 1:52 PM, giganode said:

You should also be able to use a real monitor with an HDMI dongle. Still need to verify this, though.

 

I can verify that you can use a physical monitor, but you cannot use the onboard HDMI/DP ports.

With Intel GVT-g it was possible to use a USB-to-HDMI adapter. This also works with the new iGPUs; just pass through the USB device and give it a go.

With my adapter I had to install a driver first; this may not be the case for every adapter.

 


Rocking an i5-12400. I installed the plugin and got the drivers working in my Windows VM. No problems whatsoever.

However, is QSV supposed to work in an SR-IOV setup? It works okay on the host, but in the Windows 10 guest, Handbrake seems to be failing to encode using QSV for an "unknown reason".
 
Hardware acceleration seems to work perfectly elsewhere; it seems to be just QSV that's misbehaving. Has anyone had success with QSV?

QuickSync should work. Did you check whether Device Manager reports any issues, like code 43?

Sunshine, for example, uses QuickSync. I can use Sunshine, but Handbrake fails on both of my systems, although I know someone with an N100 whose Windows VM can use QuickSync in Handbrake.
16 minutes ago, dopeytree said:

Hi just trying to understand SR-IOV...

 

If you use this, does the GPU still need to be passed through? I.e. is it then solely available to VMs and not available to Docker containers?

 

Hi!

 

With SR-IOV you can share the device's resources with VMs. The device remains usable by the host system.

 

In simple terms, the host and Docker containers use ..

 

[screenshot]

 

.. while you pass through one of these VFs to a VM.

 

[screenshot]

 

The number of VFs (Virtual Functions) can be increased if needed; for Intel iGPUs it is limited to 7, as far as I have seen.

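Behind the plugin's GUI this maps to the standard Linux SR-IOV sysfs interface. A minimal sketch for inspecting it by hand; the PCI address 0000:00:02.0 is the typical iGPU slot and the reset-to-zero step reflects the generic kernel interface, not the plugin's internals:

```shell
# Inspect (and, as root, change) VFs via the kernel's SR-IOV sysfs knobs.
# 0000:00:02.0 is the usual PCI address of the Intel iGPU; verify with lspci.
GPU=/sys/bus/pci/devices/0000:00:02.0

if [ -e "$GPU/sriov_totalvfs" ]; then
    echo "supported VFs: $(cat "$GPU/sriov_totalvfs")"
    echo "current VFs:   $(cat "$GPU/sriov_numvfs")"
    # Changing the count generally requires resetting to 0 first (root only):
    # echo 0 > "$GPU/sriov_numvfs"
    # echo 2 > "$GPU/sriov_numvfs"
else
    echo "no SR-IOV capable device at $GPU"
fi
```

Each created VF then appears as its own PCI function (e.g. 00:02.1), which is what gets passed to the VM.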

12 minutes ago, giganode said:


I don't see any VFs in your list.

Which device exactly did you pass through?

Raptor Lake iGPU, from a Minisforum AR900i.

The device ID in the VM manager is 02:00:0, without the sound card, as it is in a different group and gives me an error.

The VM is working, but I'm getting error 43; not sure what that means for performance overall.

Raptor Lake iGPU, from a Minisforum AR900i.
The device ID in the VM manager is 02:00:0, without the sound card, as it is in a different group and gives me an error.
The VM is working, but I'm getting error 43; not sure what that means for performance overall.

Then you are doing it wrong.

You must not pass through the iGPU itself; you have to pass through one of its VFs.

I can't get it to work. I have installed the plugin from giganode and restarted my Unraid.

When I go into the settings, the VF number is not saved; it stays at 0 when I refresh the page. All VMs are stopped.

 

 


 

I have an Intel 13500, and my Unraid version is the latest, 6.12.8.

 

 


 

On 2/29/2024 at 12:34 PM, Lunixx said:

I can't get it to work. I have installed the plugin from giganode and restarted my Unraid.

When I go into the settings, the VF number is not saved; it stays at 0 when I refresh the page. All VMs are stopped.

I have an Intel 13500, and my Unraid version is the latest, 6.12.8.

 

 

Please post your Diagnostics.

On 2/29/2024 at 12:34 PM, Lunixx said:

When I go into the settings, the VF number is not saved; it stays at 0 when I refresh the page. All VMs are stopped.

 

Change the value and reboot ... the VFs are created at boot.

 

You can take a look at this after you've changed the value in the GUI:

 

root@AlsServerII:~# cat /boot/config/plugins/i915-sriov/i915-sriov.cfg
vfnumber=1
root@AlsServerII:~#

This is where they are stored; in my case, 1 VF is created at boot.

 

You can also edit the file directly with nano and reboot (bypassing the webui).
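The same change can be made non-interactively; a small sketch, where the cfg path is taken from the `cat` output above and the guard simply makes it a no-op off the Unraid host:

```shell
# Write the VF count straight into the plugin's config (bypasses the webui).
# The value is applied at the next boot, not live.
CFG=/boot/config/plugins/i915-sriov/i915-sriov.cfg

if [ -f "$CFG" ]; then
    echo "vfnumber=2" > "$CFG"
    cat "$CFG"
else
    echo "config not found at $CFG (run this on the Unraid host)"
fi
```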

On 2/23/2024 at 6:08 PM, Ocelot8391 said:

Handbrake seems to be failing to encode using QSV for an "unknown reason".

You should take a look at the Handbrake log to see what it says, and whether it's really a hardware issue for Handbrake or some missing setting in your encode.

 

I can confirm the plugin works nicely here (incl. Handbrake), thanks @giganode

 


 

As soon as other hardware encoders work, it should also work for Handbrake; for example, try setting a fixed fps for your output.

On 2/27/2024 at 8:21 AM, giganode said:

 

Which problem exactly?

Giga,

After more research, it appears the drivers are unable to initialize the graphics PCI device in the Linux guest (Manjaro).

Here is what I did... I installed the plugin and left it at 2 VFs by default.

I created a fresh Manjaro install with both the QXL and Intel cards selected in the VM template.

The VM boots fine and the card is there... but it does not initialize properly in the Linux guest.

 

lspci

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
00:01.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 05)
00:02.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.5 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:07.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
00:07.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
00:07.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
00:07.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
01:00.0 Ethernet controller: Red Hat, Inc. Virtio 1.0 network device (rev 01)
02:00.0 Communication controller: Red Hat, Inc. Virtio 1.0 console (rev 01)
03:00.0 SCSI storage controller: Red Hat, Inc. Virtio 1.0 block device (rev 01)
06:00.0 VGA compatible controller: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] (rev 04)

 

➜ ~ sudo dmesg | grep i915 
[    3.189249] i915 0000:06:00.0: [drm] *ERROR* Device is non-operational; MMIO access returns 0xFFFFFFFF!
[    3.189631] i915 0000:06:00.0: Device initialization failed (-5)
[    3.189634] i915 0000:06:00.0: Please file a bug on drm/i915; see https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs for details.
[    3.189636] i915: probe of 0000:06:00.0 failed with error -5

 

I suspect the Linux i915 drivers are not quite up to spec with SR-IOV. The AUR offers an i915-sriov-dkms package. Tried it, but no go.

 

I was able to get a Windows 11 guest working, but had to use RDP to get the display to show. I'd prefer to use NoMachine, but that isn't working yet.

 

Any thoughts would be appreciated....

 

buckyu-diagnostics-20240303-1526.zip

On 3/1/2024 at 10:07 PM, giganode said:

 

Please post your Diagnostics.

 

I have uploaded the complete zip file. I hope that is right.

 

On 3/2/2024 at 6:27 AM, alturismo said:

Change the value and reboot ... the VFs are created at boot.

You can take a look at this after you've changed the value in the GUI:

root@AlsServerII:~# cat /boot/config/plugins/i915-sriov/i915-sriov.cfg
vfnumber=1
root@AlsServerII:~#

This is where they are stored; in my case, 1 VF is created at boot.

You can also edit the file directly with nano and reboot (bypassing the webui).

 

I have already restarted the host several times, but the value always remains at 0.

I have had a look in the cfg. It says vfnumber=3, but I don't see any changes.

 


lstation-diagnostics-20240303-1023.zip

23 minutes ago, Lunixx said:

But I don't see any changes.

 

Maybe reduce them to 1 ... but I guess it won't help.

 

Feb 29 12:24:05 lstation root: ---Setting VFs to: 4---
Feb 29 12:24:05 lstation kernel: i915 0000:00:02.0: not enough MMIO resources for SR-IOV
Feb 29 12:24:05 lstation kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to enable 4 VFs (-ENOMEM)
Feb 29 12:24:05 lstation root: 
Feb 29 12:24:05 lstation root: -------------------------------------------------
Feb 29 12:24:05 lstation root: ---Installation from SR-IOV plugin successful!---
Feb 29 12:24:05 lstation root: -------------------------------------------------
Feb 29 12:24:05 lstation root: 
Feb 29 12:24:06 lstation root: plugin: i915-sriov.plg installed

 

The root cause is more likely your mainboard ... try some BIOS settings: Above 4G Decoding, UEFI boot, the latest BIOS installed; as a last resort, a syslinux startup parameter ...

 

https://access.redhat.com/solutions/37376
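One way to sanity-check the MMIO side from the shell before digging through the BIOS: with Above 4G Decoding enabled, you would typically expect at least one of the iGPU's BARs to be mapped above the 4 GiB boundary. A hedged sketch (assumes `lspci` from pciutils is installed):

```shell
# Show the iGPU's BAR placement. "Memory at ..." addresses above
# 0xffffffff indicate the BIOS reserved MMIO space beyond 4 GiB.
ADDR=00:02.0   # iGPU physical function

if command -v lspci >/dev/null 2>&1; then
    lspci -v -s "$ADDR" | grep -i "memory at" || echo "device $ADDR not found"
else
    echo "lspci not available"
fi
```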

On 3/3/2024 at 10:56 AM, alturismo said:

Maybe reduce them to 1 ... but I guess it won't help.

 

Feb 29 12:24:05 lstation root: ---Setting VFs to: 4---
Feb 29 12:24:05 lstation kernel: i915 0000:00:02.0: not enough MMIO resources for SR-IOV
Feb 29 12:24:05 lstation kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to enable 4 VFs (-ENOMEM)
Feb 29 12:24:05 lstation root: 
Feb 29 12:24:05 lstation root: -------------------------------------------------
Feb 29 12:24:05 lstation root: ---Installation from SR-IOV plugin successful!---
Feb 29 12:24:05 lstation root: -------------------------------------------------
Feb 29 12:24:05 lstation root: 
Feb 29 12:24:06 lstation root: plugin: i915-sriov.plg installed

 

The root cause is more likely your mainboard ... try some BIOS settings: Above 4G Decoding, UEFI boot, the latest BIOS installed; as a last resort, a syslinux startup parameter ...

 

https://access.redhat.com/solutions/37376

Thank you for the help. I've enabled Above 4G Decoding and now it's working!

On 3/14/2024 at 5:30 AM, DearSir said:

Mar 11 23:22:14 kernel: i915 0000:00:02.0: Direct firmware load for i915/adlp_dmc.bin failed with error -2

 

 

 

 

i5-12500H shows code 43 and can't be used...

Same problem with my ASRock N100DC-ITX motherboard + Unraid 6.12.8. Any ideas to investigate?


Mar 11 23:22:14 kernel: i915 0000:00:02.0: Direct firmware load for i915/adlp_dmc.bin failed with error -2

 

 

 

 


ERYING G660 ITX

i5-12500H shows code 43 and can't be used...

The BIOS is set to Above 4G Decoding, with 1 virtual GPU enabled.

It is recognized, but cannot be used.

