
unRAID plugin for iGPU SR-IOV support



6 hours ago, dirkinthedark said:

I don't know why this happens; if you are interested, let me know and I will do any troubleshooting steps you would like me to do.

It is really hard for me since I don't have any hardware on which I can even test SR-IOV.

 

However, you can post your Diagnostics again, once with the card bound to VFIO and once without.
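As a quick check alongside the Diagnostics, you can see which kernel driver each function of the iGPU is currently bound to. This is a hedged sketch, assuming the iGPU sits at the usual `0000:00:02.x` PCI address; adjust if yours differs:

```shell
# Print the driver bound to each function of the iGPU (e.g. i915 or vfio-pci).
for dev in /sys/bus/pci/devices/0000:00:02.*; do
  [ -e "$dev" ] || continue          # skip if the glob matched nothing
  if [ -L "$dev/driver" ]; then
    echo "${dev##*/}: $(basename "$(readlink "$dev/driver")")"
  else
    echo "${dev##*/}: no driver bound"
  fi
done
check_done="driver check complete"
echo "$check_done"
```

If a function shows `vfio-pci`, it is bound to VFIO; `i915` means the host graphics driver owns it.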

  • 2 weeks later...

When I have the iGPU VF passed through, the VM works fine via Remote Desktop. But when I change it back to VNC in the Unraid GUI, it never initializes the display. I've had these issues before after passing through an Nvidia GPU as well. Not sure why I can't go back to VNC after using passthrough on a prior VM boot.

16 hours ago, letrain said:

When I have the iGPU VF passed through, the VM works fine via Remote Desktop. But when I change it back to VNC in the Unraid GUI, it never initializes the display. I've had these issues before after passing through an Nvidia GPU as well. Not sure why I can't go back to VNC after using passthrough on a prior VM boot.

 

If you want to use VNC and also use the iGPU (or Nvidia GPU, whatever) in a VM, you need to have TWO video devices. The GPU should be added as the 2nd Graphics Card. There's a "+" button you can click to add the second one.

[screenshot: VM graphics card settings]

  • 2 weeks later...

Is this working with 6.12.6? I keep getting the error below when I try to download the plugin:

 

----------Downloading i915-sriov module Package for kernel v6.1.64----------

---------This could take some time, please don't close this window!-----------

--------Can't download i915-sriov module Package for kernel v6.1.64-----------

plugin: run failed: '/bin/bash' returned 1 Executing hook script: post_plugin_checks
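For context on why this download step can fail: the plugin fetches a prebuilt i915-sriov module package that must match the running kernel exactly, so a brand-new Unraid release with no package built yet will fail like this. A minimal sketch to confirm the kernel string the plugin is looking for:

```shell
# The package name in the error ("kernel v6.1.64") must match the running kernel.
kver=$(uname -r)
echo "Running kernel: ${kver}"
```

If your `uname -r` output is newer than any kernel the plugin has packages for, the download will fail until a matching build is published.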

1 hour ago, Tpole said:

Is this working with 6.12.6? I keep getting the error below when I try to download the plugin:

 

----------Downloading i915-sriov module Package for kernel v6.1.64----------

---------This could take some time, please don't close this window!-----------

--------Can't download i915-sriov module Package for kernel v6.1.64-----------

plugin: run failed: '/bin/bash' returned 1 Executing hook script: post_plugin_checks

You need to use the updated version from here:

 

3 hours ago, Daniel15 said:

You need to use the updated version from here:

 

Great, that installed! But now I can't seem to get it to add any VFs. No matter what I change, or whether I click Enable, it just stays like this, and I can't see any added VFs in the VM setup.

[screenshot attached]

11 hours ago, alturismo said:

You should set the VF number, hit Save ... and then

 

Did you do a reboot, like it's stated here?

[screenshot attached]

Yep, I've rebooted and no joy. Do I just hit Enable now, or Save to Config and then Enable? I've done both before a restart, and each on their own before a restart, and none of the three has worked.
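Under the hood, SR-IOV VFs are created through the kernel's standard sysfs interface, so poking it directly can show whether the loaded i915 module supports SR-IOV at all. This is a hedged troubleshooting sketch, assuming the iGPU sits at the usual `0000:00:02.0` address and that you run it as root:

```shell
pf=/sys/bus/pci/devices/0000:00:02.0   # physical function (adjust address if needed)
if [ -w "$pf/sriov_numvfs" ]; then
  echo 0 > "$pf/sriov_numvfs"   # the VF count must be reset to 0 before changing it
  echo 2 > "$pf/sriov_numvfs"   # then request the desired number of VFs
  result="requested 2 VFs"
else
  result="sriov_numvfs not writable: module lacks SR-IOV support, device absent, or not root"
fi
echo "$result"
```

If `sriov_numvfs` doesn't exist at all, the installed i915 module has no SR-IOV support, which would explain the count sticking at 0 regardless of what the plugin UI does.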

  • 2 weeks later...

Hi,

I am not familiar with Linux, but I did run the command in the Unraid terminal.

That's the result:

 

(just the last page; there are a lot more messages)

 

[  346.110367] eth0: renamed from veth610d776
[  346.136447] IPv6: ADDRCONF(NETDEV_CHANGE): veth8b3bbd4: link becomes ready
[  346.136494] docker0: port 15(veth8b3bbd4) entered blocking state
[  346.136498] docker0: port 15(veth8b3bbd4) entered forwarding state
[  347.767559] docker0: port 16(veth84f9567) entered blocking state
[  347.767565] docker0: port 16(veth84f9567) entered disabled state
[  347.770528] device veth84f9567 entered promiscuous mode
[  348.392266] eth0: renamed from vethf32abb2
[  348.403357] IPv6: ADDRCONF(NETDEV_CHANGE): veth84f9567: link becomes ready
[  348.403404] docker0: port 16(veth84f9567) entered blocking state
[  348.403408] docker0: port 16(veth84f9567) entered forwarding state
[  393.796257] vethf32abb2: renamed from eth0
[  393.817391] docker0: port 16(veth84f9567) entered disabled state
[  393.847800] docker0: port 16(veth84f9567) entered disabled state
[  393.848529] device veth84f9567 left promiscuous mode
[  393.848533] docker0: port 16(veth84f9567) entered disabled state
[ 1429.196177] hrtimer: interrupt took 27267 ns

5 hours ago, Commerzpunk said:

Hello,

I got the same on my Unraid 6.12.6.

 

Got a 12th Gen Intel® Core™ i3-12100

 

[screenshot attached]

 

Whatever I try, every combination, several restarts, it sticks at 0 and "Enable Now".

 

Does anyone have an idea how to get this plugin working?

I still haven't managed to get this to work either, so any help on this would be great!

On 1/26/2024 at 12:12 AM, Tpole said:

I still haven't managed to get this to work either, so any help on this would be great!

Do you have any active VFIO binds for the iGPU? I think I had this problem before I unbound everything.


That was the solution for me!

Go to the top menu, Tools, System Devices, unbind the iGPU. Reboot, and the plugin works. :)

 

No, it's not. As soon as I bind the iGPU again after a reboot, it's not working again.

 

Same as in the screenshots above. Always 0.

 

Who can help, please? I'd really like to use my iGPU in my VMs. :(

2 hours ago, Commerzpunk said:

That was the solution for me!

Go to the top menu, Tools, System Devices, unbind the iGPU. Reboot, and the plugin works. :)

 

No, it's not. As soon as I bind the iGPU again after a reboot, it's not working again.

 

Same as in the screenshots above. Always 0.

 

Who can help, please? I'd really like to use my iGPU in my VMs. :(

Take my tips with a grain of salt, but do you need to bind it at all? I just leave everything unbound.
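To confirm that nothing is actually left bound, you can look at the file where Unraid stores its boot-time VFIO binds. A minimal sketch, assuming the standard Unraid config location `/boot/config/vfio-pci.cfg`:

```shell
cfg=/boot/config/vfio-pci.cfg   # standard Unraid location for boot-time VFIO binds
if [ -s "$cfg" ]; then
  bind_state="binds present"
  echo "VFIO binds configured at boot:"
  cat "$cfg"
else
  bind_state="no binds"
  echo "no VFIO binds configured -- every device stays on its native driver"
fi
echo "$bind_state"
```

An empty or missing file means the "leave everything unbound" state described above.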

On 1/16/2024 at 9:32 PM, Daniel15 said:

You need to use the updated version from here:

 

I was able to successfully install this plugin and (unlike other recent commenters) successfully set the VF number.

 

However, I don't really understand what these virtual functions are or how many I need. I am just trying to pass my 12600K iGPU through to a Windows VM (top priority) and, if possible, also allow it to be used by my Plex Docker. If simultaneous use by the Plex Docker is not possible, I can just run the Plex server in the Windows VM.

 

When I activate 1, 4, or 7 VFs, I get the same error in the logs:

kernel: i915 0000:00:02.1: [drm] *ERROR* tlb invalidation response timed out for seqno 23

 

From the VM logs I get this:

char device redirected to /dev/pts/0 (label charserial0)
2024-02-06T00:36:01.107238Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:00:02.0","id":"hostdev0","bus":"pci.0","addr":"0x2"}: IGD device 0000:00:02.0 has no ROM, legacy mode disabled
2024-02-06T00:36:01.546538Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.546553Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:01.546647Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.546652Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:01.546779Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:00:02.0
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=
2024-02-06T00:36:01.551701Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.551710Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:01.551770Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.551774Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:01.557479Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.557488Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:01.557556Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.557559Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:01.577905Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.577917Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:01.577987Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:01.577991Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:03.139161Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:03.139188Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:03.139352Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:03.139356Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:03.143430Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:03.143439Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:03.143543Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:03.143547Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)
2024-02-06T00:36:05.514505Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:05.514546Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380010000000, 0x1000000, 0x14e185000000) = -22 (Invalid argument)
2024-02-06T00:36:05.514758Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T00:36:05.514766Z qemu-system-x86_64: vfio_dma_map(0x14e186a1ac00, 0x380000000000, 0x10000000, 0x14e175000000) = -22 (Invalid argument)

 

Here are the settings I used for the VM (see attached image).

 

No idea what to do here but would super appreciate any help!!

 

Screenshot 2024-02-05 at 4.38.42 PM.jpg

Screenshot 2024-02-05 at 4.41.33 PM.jpg


@WobbleBobble2 In your VM settings, change the graphics card (near the bottom of your screenshot) from 00:02.0 to one of the other functions (e.g. 00:02.1). 00:02.0 is reserved for the system itself and must not be used by a VM. Also make sure that each VM you want to pass the graphics card through to is using a different one (e.g. 00:02.1 for this VM, 00:02.2 for another VM, etc).

 

Using it in a Plex Docker should work fine. I've got a Plex container that has transcoding enabled, plus an Unmanic container that transcodes antenna TV recordings from MPEG2 to H.265, and I'm also using the iGPU in a Windows Server 2022 VM for Blue Iris at the same time. All works fine.
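Once VFs exist, each one appears in sysfs as a `virtfn` link under the physical function, which makes it easy to see exactly which PCI addresses (00:02.1, 00:02.2, ...) are available to hand out, one per VM. A hedged sketch, assuming the iGPU's physical function is at `0000:00:02.0`:

```shell
pf=/sys/bus/pci/devices/0000:00:02.0   # physical function: keep for the host, never pass to a VM
if [ -e "$pf/sriov_totalvfs" ]; then
  echo "VFs supported: $(cat "$pf/sriov_totalvfs"), currently enabled: $(cat "$pf/sriov_numvfs")"
  for vf in "$pf"/virtfn*; do
    [ -e "$vf" ] && echo "${vf##*/} -> $(basename "$(readlink "$vf")")"  # e.g. 0000:00:02.1
  done
  vf_state="listed"
else
  vf_state="no SR-IOV capability exposed at 00:02.0"
fi
echo "$vf_state"
```

Each listed VF address corresponds to one entry you can select as a graphics card in the VM settings; containers like Plex keep using the physical function via the normal i915 driver.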

33 minutes ago, Daniel15 said:

@WobbleBobble2 In your VM settings, change the graphics card (near the bottom of your screenshot) from 00:02.0 to one of the other functions (e.g. 00:02.1). 00:02.0 is reserved for the system itself and must not be used by a VM. Also make sure that each VM you want to pass the graphics card through to is using a different one (e.g. 00:02.1 for this VM, 00:02.2 for another VM, etc).

 

Using it in a Plex Docker should work fine. I've got a Plex container that has transcoding enabled, plus an Unmanic container that transcodes antenna TV recordings from MPEG2 to H.265, and I'm also using the iGPU in a Windows Server 2022 VM for Blue Iris at the same time. All works fine.

Thank you for the quick reply! OK, so I set the number of VFs to one in the SR-IOV plugin, then selected 00:02.1 in the VM, and that seems to have resolved the first error in my original post, which is in the main Unraid log.

 

However the second error still shows up in the VM log and the VM won't start:
 

-device '{"driver":"vfio-pci","host":"0000:00:02.1","id":"hostdev0","bus":"pci.6","addr":"0x10"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
2024-02-06T03:13:40.767622Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:40.767677Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -2 (No such file or directory)
2024-02-06T03:13:40.823130Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:40.823146Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)
2024-02-06T03:13:42.798137Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:42.798242Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)
2024-02-06T03:13:42.835529Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:42.835550Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)
2024-02-06T03:13:42.894002Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:42.894028Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)
2024-02-06T03:13:42.941425Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:42.941438Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)
2024-02-06T03:13:45.600164Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T03:13:45.600186Z qemu-system-x86_64: vfio_dma_map(0x14942d228c00, 0x381800000000, 0x20000000, 0x14940b000000) = -22 (Invalid argument)

Also great to know the iGPU can be used for Plex at the same time!


Also wanted to add a few more details, as I am still getting the same VFIO_MAP_DMA errors:

1. I am able to successfully get the 2 VFs (02.1 and 02.2) to show up in Settings > System Devices in their own IOMMU groups and bind them to VFIO at boot.

2. The Alder Lake sound card cannot be separated into its own IOMMU group on my motherboard, so it's in a group with the ISA bridge, SMBus, and Serial Bus.

3. I double-checked that the drive my domains folder is on still has plenty of space (another person with the VFIO_MAP_DMA error encountered this).

4. I thought the error might have something to do with me disabling the sound card in UEFI, but I re-enabled audio and got the same error. 

5. I also tried enabling and disabling the iGPU multi-monitor setting in UEFI, and it had no effect.

 

If you are having trouble getting the new VFs to show up, make sure you haven't bound the primary function 02.0 to VFIO at boot, as this will prevent the new VFs from showing up.
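That check can be done mechanically. A minimal sketch, assuming Unraid's boot-time VFIO binds live in the standard `/boot/config/vfio-pci.cfg` and the iGPU's primary function is `0000:00:02.0`:

```shell
cfg=/boot/config/vfio-pci.cfg
if [ -f "$cfg" ] && grep -q '0000:00:02\.0' "$cfg"; then
  pf_check="WARNING: 00:02.0 is bound to VFIO at boot -- new VFs will not appear"
else
  pf_check="OK: the primary function 00:02.0 is not bound to VFIO"
fi
echo "$pf_check"
```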

 

I'm posting the full VM log here along with my diagnostics in case anyone might be able to help:

-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Windows 10/master-key.aes"}' \
-blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/[removed]_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-i440fx-7.2,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-accel kvm \
-cpu host,migratable=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \
-m 16384 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184}' \
-overcommit mem-lock=off \
-smp 8,sockets=1,dies=1,cores=4,threads=2 \
-uuid [removed]\
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=35,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x2"}' \
-device '{"driver":"pci-bridge","chassis_nr":2,"id":"pci.2","bus":"pci.0","addr":"0x3"}' \
-device '{"driver":"pci-bridge","chassis_nr":3,"id":"pci.3","bus":"pci.0","addr":"0x6"}' \
-device '{"driver":"pci-bridge","chassis_nr":4,"id":"pci.4","bus":"pci.0","addr":"0x8"}' \
-device '{"driver":"pci-bridge","chassis_nr":5,"id":"pci.5","bus":"pci.0","addr":"0x9"}' \
-device '{"driver":"pci-bridge","chassis_nr":6,"id":"pci.6","bus":"pci.0","addr":"0xa"}' \
-device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pci.0","addr":"0x7.0x7"}' \
-device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pci.0","multifunction":true,"addr":"0x7"}' \
-device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pci.0","addr":"0x7.0x1"}' \
-device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pci.0","addr":"0x7.0x2"}' \
-device '{"driver":"ahci","id":"sata0","bus":"pci.0","addr":"0x4"}' \
-device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x5"}' \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device '{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0xc","drive":"libvirt-3-format","id":"virtio-disk2","bootindex":1,"write-cache":"on","serial":"vdisk1"}' \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/Win10_22H2_English_x64v1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device '{"driver":"ide-cd","bus":"sata0.0","drive":"libvirt-2-format","id":"sata0-0-0","bootindex":2}' \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.240-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device '{"driver":"ide-cd","bus":"sata0.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
-netdev tap,fd=36,id=hostnet0 \
-device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:73:76:08","bus":"pci.0","addr":"0xb"}' \
-chardev pty,id=charserial0 \
-device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
-chardev socket,id=charchannel0,fd=34,server=on,wait=off \
-device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
-device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
-audiodev '{"id":"audio1","driver":"none"}' \
-device '{"driver":"vfio-pci","host":"0000:00:02.2","id":"hostdev0","bus":"pci.6","addr":"0x10"}' \
-device '{"driver":"vfio-pci","host":"0000:00:1f.3","id":"hostdev1","bus":"pci.0","addr":"0xd"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
2024-02-06T08:45:42.252270Z qemu-system-x86_64: vfio: Cannot reset device 0000:00:1f.3, no available reset mechanism.
2024-02-06T08:45:42.256264Z qemu-system-x86_64: vfio: Cannot reset device 0000:00:1f.3, no available reset mechanism.
2024-02-06T08:45:42.755580Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.755597Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:42.755679Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.755682Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:42.761136Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.761146Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:42.762136Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.762143Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:42.762282Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.762286Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:42.811477Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.811492Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:42.814038Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.814046Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:42.814165Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:42.814169Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:44.531401Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.531442Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:44.533997Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.534011Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:44.534318Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.534323Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:44.577147Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.577193Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:44.584370Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.584398Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:44.584584Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.584590Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:44.635492Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.635553Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:44.677263Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:44.677285Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:46.544729Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:46.544760Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x381800000000, 0x20000000, 0x1475fec00000) = -22 (Invalid argument)
2024-02-06T08:45:46.583933Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:46.583951Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:46.584150Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:46.584156Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)
2024-02-06T08:45:57.747577Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:57.747600Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x38200010c000, 0x4000, 0x147a2884a000) = -22 (Invalid argument)
2024-02-06T08:45:57.747862Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2024-02-06T08:45:57.747872Z qemu-system-x86_64: vfio_dma_map(0x147620e48800, 0x382000000000, 0x100000, 0x1476252ff000) = -22 (Invalid argument)

 

 

Screenshot 2024-02-06 at 12.48.53 AM.jpg

Screenshot 2024-02-06 at 12.48.48 AM.jpg

hal9000-diagnostics-20240206-0053.zip

On 2/13/2024 at 1:18 AM, Scott Nielson said:

It appears that I am having problems downloading the i915-sriov module package for kernel v6.1.64.

 

It fails really quickly, just complains that it cannot download, and the install does not complete.

 

 

Are you using the newer version from this thread? See the "recommended post" at the top. The version in the apps section in Unraid does not work properly with newer Unraid versions.


Hi, I made the switch from the old plugin to the new ich777 version. Worked perfectly.

I'm still on Unraid 6.12.5 and would like to update to the new 6.12.8.

With the old plugin, we had to wait until the plugin was adapted to new Unraid versions.


Is it safe to update Unraid, and will the new plugin still work on the Unraid 6.12.8 Linux kernel 6.1.74?

 

Thx, Duglim

9 minutes ago, Duglim said:

Hi, I made the switch from the old plugin to the new ich777 version. Worked perfectly.

I'm still on Unraid 6.12.5 and would like to update to the new 6.12.8.

With the old plugin, we had to wait until the plugin was adapted to new Unraid versions.


Is it safe to update Unraid, and will the new plugin still work on the Unraid 6.12.8 Linux kernel 6.1.74?

 

Thx, Duglim

Yes, he builds for current releases.


Please use the new support thread over here:

 

Remove the old plugin from your system, install the new plugin through the CA App, and recreate your VFs.

 

 

I'll close this thread for further replies, please use the new support thread.

  • ich777 locked this topic