giganode

Community Developer

  • Posts: 230
  • Days Won: 2

Posts posted by giganode

  1. 1 hour ago, Ocelot8391 said:

     

    Okay, well that got me a new error.

     

    No filter class matching [sriov]
    Requested device sriov not found!

     

    To add some detail: I'm adding the VF as a second graphics card, while leaving VNC as the primary one. This is done so I can still access the machine.

     

    I'm certain the plugin and the virtualization work with my 12th gen: in Windows, with the same type of setup (VNC as primary + VF as secondary), it works without problems.

     

    Please post Diagnostics.

  2. I'm currently attempting to set up a VM running Debian 12 (kernel 6.1.0), using SR-IOV to give the VM hardware acceleration.
     
    However, "intel_gpu_top" keeps spitting out the following error:
    "No device filter specified and no discrete/integrated i915 devices found"
     
    When checking the output of "lspci", I clearly see the integrated UHD Graphics 730 there, so it is at least picked up in the VM. The only thing I've changed inside the VM is replacing the VA driver with the non-free version, nothing else.
     
    But I've hit a dead end in my troubleshooting and have no clue what to try next. Any tips or advice would be greatly appreciated.

    When using SR-IOV, try the following command:

    intel_gpu_top -d sriov
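If that filter is not recognized by your build of intel_gpu_top, it can also help to first confirm that the guest actually sees the VF and that the i915 driver claimed it. A minimal sketch (generic commands; device addresses will differ per system):

```shell
# Inside the guest: list display devices and the kernel driver bound to them.
# The VF should show up as an Intel VGA/display device with
# "Kernel driver in use: i915".
if command -v lspci >/dev/null 2>&1; then
    lspci -nnk | grep -iA3 'vga\|display' || echo "no display device listed"
else
    echo "lspci not available - install pciutils"
fi
```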

  3. On 3/31/2024 at 5:08 PM, NoobSpy said:

    Hi, simple question: with my i5-12400, if I install this, will I never need my GPU again for VM use, as long as I have this CPU or higher?

     

    You can use the iGPU's resources for VMs while Unraid is still able to use them, yes. 

     

    Just connect via a service like Sunshine/Moonlight, Parsec, RDP or VNC. If you need a physical monitor, use a USB-to-HDMI dongle.

     

    But keep in mind, this plugin relies on a repo which no longer seems to be maintained.

    So don't sell your GPU just yet 😄

     

    Once there is more information about the future of this plugin, it will be posted here.

    • Like 1
  4. 14 hours ago, Encore said:

    I'm facing a strange issue: if I reboot the server, my Windows VM runs fine with SR-IOV on my i5-14500,

     

    but if I shut the VM down, my whole GPU disappears: no GPU in the device list, and the plugin shows no GPU found until I reboot the server.

     

     

    i915 0000:00:02.1: [drm] *ERROR* tlb invalidation response timed out for seqno 23

     

    Mar 29 05:57:58 NAS kernel: vfio-pci 0000:00:02.1: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: Running in SR-IOV VF mode
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] GT0: GUC: interface version 0.1.4.1
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] VT-d active for gfx access
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] Using Transparent Hugepages
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] GT0: GUC: interface version 0.1.4.1
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: GuC firmware PRELOADED version 1.4 submission:SR-IOV VF
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: HuC firmware PRELOADED
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] Protected Xe Path (PXP) protected content support initialized
    Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] PMU not supported for this GPU.
    Mar 29 05:57:58 NAS kernel: sdd: sdd1 sdd2 sdd3 sdd4
    Mar 29 05:57:58 NAS kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.1 on minor 1
    Mar 29 05:57:58 NAS kernel: ata6.00: Enabling discard_zeroes_data
    Mar 29 05:57:58 NAS kernel: sdd: sdd1 sdd2 sdd3 sdd4
    Mar 29 05:57:58 NAS usb_manager: Info: rc.usb_manager  vm_action Windows 11 stopped end -
    Mar 29 05:57:59 NAS kernel: i915 0000:00:02.1: [drm] *ERROR* tlb invalidation response timed out for seqno 23
    Mar 29 05:57:59 NAS kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=io+mem
    Mar 29 05:57:59 NAS kernel: i915 0000:00:02.2: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
    Mar 29 05:57:59 NAS kernel: pci 0000:00:02.1: Removing from iommu group 19
    Mar 29 05:57:59 NAS kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=io+mem:owns=io+mem
    Mar 29 05:57:59 NAS kernel: pci 0000:00:02.2: Removing from iommu group 20
    Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
    Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
    Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
    Mar 29 05:58:00 NAS unassigned.devices: Partition '/dev/sdd2' does not have a file system and cannot be mounted.
    Mar 29 05:58:01 NAS kernel: i915 0000:00:02.0: Disabled 2 VFs
    Mar 29 05:58:01 NAS kernel: Console: switching to colour dummy device 80x25
    Mar 29 05:58:01 NAS acpid: input device has been disconnected, fd 11
    Mar 29 05:58:01 NAS kernel: pci 0000:00:02.0: Removing from iommu group 0

    Please post Diagnostics.

  5. On 3/26/2024 at 7:35 PM, PaulW08 said:

    Anyone having issues with their Unraid server crashing when using this plugin? I think it has to do with the VM locking up at some point and then causing the Unraid server to lock up. It's been nearly impossible to grab logs since I have no idea when it is going to happen, but this past time the VM crashed and I caught it. My WebUI became unusable, but luckily I have a BLIKVM hooked up to my system; I had booted into GUI mode and was able to pull some info. Unfortunately I had to take screenshots and couldn't copy and paste logs. A VM log is also screenshotted. I get those errors on the VM, but performance seems fine besides the crash at some point. 

    Screenshot 2024-03-26 at 2.12.27 PM.png

    Screenshot 2024-03-26 at 2.12.48 PM.png

    Screenshot 2024-03-26 at 2.34.23 PM.png

     

    For now you can ignore the VFIO_MAP_DMA errors.

     

    Is the VM running 24/7?

     

    Please post Diagnostics.

     

  6. On 3/19/2024 at 12:58 AM, Deadboy01 said:

    I am having a similar issue to Lunixx: every time I set the number of VFs and restart the system, it still says 0 available. I've confirmed that Above 4G Decoding is enabled in my BIOS, as well as SR-IOV. I then checked my system log and found this:

     

    Mar 18 23:43:35 Nostromo root: plugin: installing: i915-sriov.plg
    Mar 18 23:43:35 Nostromo root: Executing hook script: pre_plugin_checks
    Mar 18 23:43:35 Nostromo root: plugin: running: anonymous
    Mar 18 23:43:35 Nostromo root: plugin: creating: /usr/local/emhttp/plugins/intel-i915-sriov/README.md - from INLINE content
    Mar 18 23:43:35 Nostromo root: plugin: checking: /boot/config/plugins/i915-sriov/unraid-i915-sriov-2023.11.22.txz - MD5
    Mar 18 23:43:35 Nostromo root: plugin: skipping: /boot/config/plugins/i915-sriov/unraid-i915-sriov-2023.11.22.txz already exists
    Mar 18 23:43:35 Nostromo root: plugin: running: upgradepkg --install-new /boot/config/plugins/i915-sriov/unraid-i915-sriov-2023.11.22.txz
    Mar 18 23:43:35 Nostromo root: 
    Mar 18 23:43:35 Nostromo root: +==============================================================================
    Mar 18 23:43:35 Nostromo root: | Installing new package /boot/config/plugins/i915-sriov/unraid-i915-sriov-2023.11.22.txz
    Mar 18 23:43:35 Nostromo root: +==============================================================================
    Mar 18 23:43:35 Nostromo root: 
    Mar 18 23:43:35 Nostromo root: Verifying package unraid-i915-sriov-2023.11.22.txz.
    Mar 18 23:43:35 Nostromo root: Installing package unraid-i915-sriov-2023.11.22.txz:
    Mar 18 23:43:35 Nostromo root: PACKAGE DESCRIPTION:
    Mar 18 23:43:35 Nostromo root: Package unraid-i915-sriov-2023.11.22.txz installed.
    Mar 18 23:43:35 Nostromo root: plugin: running: anonymous
    Mar 18 23:43:36 Nostromo root: patching file usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php
    Mar 18 23:43:36 Nostromo root: Hunk #1 succeeded at 816 (offset 36 lines).
    Mar 18 23:43:36 Nostromo root: 
    Mar 18 23:43:36 Nostromo root: ------------------------SR-IOV package found locally!-------------------------
    Mar 18 23:43:36 Nostromo root: ----------------SR-IOV package already installed, nothing to do----------------
    Mar 18 23:43:36 Nostromo root: ---------------------Kernel Module 'i915' already enabled----------------------
    Mar 18 23:43:38 Nostromo root: ---Setting VFs to: 2---
    Mar 18 23:43:38 Nostromo kernel: pci 0000:00:02.0: no driver bound to device; cannot configure SR-IOV
    Mar 18 23:43:38 Nostromo root: 
    Mar 18 23:43:38 Nostromo root: -------------------------------------------------
    Mar 18 23:43:38 Nostromo root: ---Installation from SR-IOV plugin successful!---
    Mar 18 23:43:38 Nostromo root: -------------------------------------------------
    Mar 18 23:43:38 Nostromo root: 
    Mar 18 23:43:38 Nostromo root: plugin: i915-sriov.plg installed
    Mar 18 23:43:38 Nostromo root: plugin: i915-sriov.plg installed
    Mar 18 23:43:38 Nostromo root: Executing hook script: post_plugin_checks
    Mar 18 23:43:38 Nostromo root: plugin: installing: libvirtwol.plg
    Mar 18 23:43:38 Nostromo root: Executing hook script: pre_plugin_checks
    Mar 18 23:43:38 Nostromo root: plugin: checking: /boot/config/plugins/libvirtwol/libvirt-python-env-4.10.0-x86_64-1.txz - MD5
    Mar 18 23:43:38 Nostromo root: plugin: skipping: /boot/config/plugins/libvirtwol/libvirt-python-env-4.10.0-x86_64-1.txz already exists
    Mar 18 23:43:38 Nostromo root: plugin: running: upgradepkg --install-new /boot/config/plugins/libvirtwol/libvirt-python-env-4.10.0-x86_64-1.txz

     

    Crucially, the part that caught my attention was "pci 0000:00:02.0: no driver bound to device; cannot configure SR-IOV", so I checked the new System Drivers page in Tools, where I could see that the i915 driver is listed as disabled. I found a file called i915.conf containing the text "blacklist i915" in /boot/config/modprobe.d, so I deleted the file and rebooted. However, the plugin still failed to work. I checked, and the file i915.conf was still there, so I deleted it again and this time created a new blank i915.conf with touch and restarted. This time the file was overwritten with a new file that contained the text "blacklist i915" again. I also found a blank copy of i915.conf in /etc/modprobe.d. I have deleted the file multiple times, but each time I restart it is there again. There are no scripts that create this file, so I don't know where it is coming from.

     

    I've also tried going into the Tools section of the GUI, opening the System Drivers section and trying to remove the blacklist from there. I have edited the modprobe config, as well as deleting it and finishing by pressing the "Rebuild Modules" button. But each time the i915 file is removed, only to reappear when I reboot.

     

    Any help would be appreciated as I feel like I am going around in circles here.

     

    40 minutes ago, Deadboy01 said:

    When I did this I get the error message "No intel graphics card present" from the Intel Graphics SR-IOV page under Settings

     

    I cannot believe I missed this, updating the BIOS has resolved another issue I was facing, but I am still experiencing the same issues with regards to the SR-IOV plugin.

     

    Sorry for the late answer. Can you please do the following:

     

    Uninstall all Intel iGPU-related plugins -> fully shut down the server -> start the server, install intel_gpu_top and afterwards the SR-IOV plugin -> go to the SR-IOV plugin settings page, hit "enable now", set VFs to 2 and save to file -> reboot

     

    Don't touch any files manually. After the reboot, check the log files etc. and repost Diagnostics, please.
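To see at a glance whether the blacklist file is really gone and the VFs were actually created after the reboot, you can run something like this from the Unraid terminal (paths are the ones from the log above; 0000:00:02.0 is the usual iGPU address and may differ on your board):

```shell
# Is there still an i915 blacklist entry on the flash drive?
cat /boot/config/modprobe.d/i915.conf 2>/dev/null || echo "no i915 blacklist file"

# How many VFs are currently enabled on the iGPU's physical function?
# This sysfs file only exists while a driver is bound to the PF.
cat /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs 2>/dev/null \
  || echo "sriov_numvfs missing - no driver bound to the PF"
```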

     

    Please also run this in a terminal and post the outcome:

     

    intel_gpu_top -d sriov

     

     

  7. On 2/29/2024 at 12:34 PM, Lunixx said:

    I can't get it to work. I have installed the plugin from giganode and restarted my Unraid.

    When I go into the settings, the VF number is not saved; it stays at 0 when I refresh the page. All VMs are stopped.

     

     

    [screenshot]

     

    I have an Intel 13500 and my Unraid version is the latest, 6.12.8.

     

     

    [screenshot]

     

     

    Please post your Diagnostics.

    • Thanks 1
  8. Raptor Lake iGPU, from a Minisforum AR900i.
    The device ID in the VM manager is 02:00:0, without the sound card, as it is in a different group and gives me an error.
    The VM is working, I'm just getting error 43; not sure what that means for performance overall.

    Then you are doing it wrong.

    You mustn't pass through the iGPU itself.
    You have to pass through one of its VFs.
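On the host, the PF and its VFs are easy to tell apart in lspci output. A quick sketch (00:02.0 is the typical iGPU address and is an assumption; adjust to your system):

```shell
# The physical function is 0000:00:02.0; the VFs show up as 0000:00:02.1, .2, ...
# Pass one of the .1/.2/... functions to the VM - never .0 itself.
if command -v lspci >/dev/null 2>&1; then
    lspci -D | grep '0000:00:02\.' || echo "no device at 00:02.x on this machine"
else
    echo "lspci not available"
fi
```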
    • Thanks 1
  9. 16 minutes ago, dopeytree said:

    Hi just trying to understand SR-IOV...

     

    If you use this, does the GPU still need to be passed through? I.e., is the GPU then solely available to VMs and no longer available to Docker containers?

     

    Hi!

     

    With SR-IOV you are able to share the device's resources with VMs. The device is still usable by the host system.

     

    In simple terms, the host and Docker containers use ..

     

    [screenshot]

     

    .. while you passthrough one of these VFs to a VM.

     

    [screenshot]

     

    The number of VFs (Virtual Functions) can be increased if needed; for Intel iGPUs it is limited to 7, as far as I have seen.

    [screenshot]
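For the curious: the plugin sets the VF count through the kernel's standard SR-IOV sysfs interface. A hedged sketch of what that amounts to (for illustration only; the plugin does this for you on Unraid, and the PCI address is an example):

```shell
# Enable 2 VFs on the iGPU's physical function via the standard sysfs knob.
# Example only - on Unraid the plugin handles this; the address may differ.
PF=/sys/bus/pci/devices/0000:00:02.0
if [ -w "$PF/sriov_numvfs" ]; then
    echo 0 > "$PF/sriov_numvfs"   # the count must be reset to 0 before changing it
    echo 2 > "$PF/sriov_numvfs"
else
    echo "no writable sriov_numvfs at $PF (no SR-IOV capable driver bound, or not root)"
fi
```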

  10. Rocking an i5-12400. I installed the plugin and got the drivers working in my Windows VM. No problems whatsoever.

    However, is QSV supposed to work in an SR-IOV setup? It works okay on the host, but in the Windows 10 guest, Handbrake seems to be failing to encode using QSV for an "unknown reason".
     
    Hardware acceleration seems to work perfectly elsewhere; it seems to be just QSV that's acting up. Has anyone had success with QSV?

    QuickSync should work. Did you check if device manager reports issues like code 43 or something else?

    Sunshine, for example, uses QuickSync. I can use Sunshine, but HandBrake fails on both of my systems, although I know someone with an N100 whose Windows VM can use QuickSync in HandBrake.
  11. On 2/20/2024 at 12:59 AM, jakeshake said:

    Hello,

     

    Thank you for keeping the pace going with this plug-in.

     

    I am curious - are you able to output video from a Windows VM using the HDMI or Display Port with the iGPU using this plug-in?

     

    On 2/23/2024 at 1:52 PM, giganode said:

    You should also be able to use a real monitor with an HDMI dongle. I still need to verify this, though.

     

    I can verify that you can use a physical monitor, but you cannot use the onboard HDMI/DP ports.

    With Intel GVT-g it was possible to use a USB-to-HDMI adapter. This also works with the new iGPUs; just pass the USB device through and give it a go.

    With my adapter I had to install a driver first; this may not be the case for every adapter.

     

    [screenshot]

    • Like 1
  12. 1 hour ago, Dabear3 said:

     

    giganode,

    Thank you for your effort and time on this plug-in !

    I upgraded my 11-year-old unRaid hardware (which went surprisingly well), but I have been pulling my hair out for a week trying to get this VM iGPU working. Your plugin is what finally did the trick.

     

    WobbleBobble,

    It appears I am at the same point as you. I have the same UHD Graphics 770, the same "VFIO_MAP_DMA failed" error, I'm using Windows Remote Desktop for access, and I have the same problem with binding the sound card.

    I don't have a solution but if/when I do, I'll be sure to post.

    Please also continue to share if you find a fix on your end.

    Thank you,

     

    [screenshots]

     

     

    Thank you!

     

    Unbind all devices from vfio and reboot your system. You don't need the audio controller for sound.

    You can install Steam and use its built-in audio device, or you can install this Virtual Soundcard and use Moonlight/Sunshine.

     

     

    In terms of the VFIO_MAP_DMA errors, at the moment I think this is a hardware-specific error message.

    My main system gets the same messages (a 13500, also with UHD 770), but my N100 does not.

    But anyway, it seems that it does not have an impact on the VM. So unless there are issues affecting the VM, I suggest we take a look from time to time and observe the situation. Maybe future updates will eliminate this error 😊

     

     

    Btw, I just did some testing with my Windows 11 VM and got some good results:

     

    [screenshot: FurMark]

    [screenshot: YouTube Stats for Nerds]

     

    • Like 1
  13. On 2/20/2024 at 12:59 AM, jakeshake said:

    Hello,

     

    Thank you for keeping the pace going with this plug-in.

     

    I am curious - are you able to output video from a Windows VM using the HDMI or Display Port with the iGPU using this plug-in?

     

    You should also be able to use a real monitor with an HDMI dongle. I still need to verify this, though.

  14. 6 minutes ago, just4lyl said:

    Thank you for your reply. If I don't bind vfio, will it affect Plex hardware decoding in Docker?

     

    No, it doesn't. You can run a VM and transcode in Plex simultaneously.

     

    You just have to make sure that you do not pass through the iGPU itself. Pass a VF through to the VM and everything should be fine 🙂
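A quick way to convince yourself of this: as long as only VFs (not the PF) are passed to VMs, the host keeps its /dev/dri nodes, which is exactly what a Plex container maps for transcoding. A small sketch:

```shell
# The card/renderD nodes stay present on the host while a VF is in use by a VM;
# Plex in Docker transcodes through one of these renderD* nodes.
ls -l /dev/dri 2>/dev/null || echo "no /dev/dri on this machine"
```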

  15. 18 minutes ago, just4lyl said:

    After successfully installing the plug-in, binding vfio and restarting,

    [screenshot]

    the virtual machine cannot start automatically, but it can be started manually, and there is an error message.

    [screenshot]

     

    Don't bind the VFs to vfio. Please unbind, reboot and try again 🙂

  16. 31 minutes ago, WobbleBobble2 said:

    Thanks so much for taking a look at this!

     

    I just checked the BIOS, and the memory allocated to the iGPU was set to "Auto." I changed it to the maximum allowed, which is 1024 MB, but no joy; same errors (see way below for the VM log errors). However, investigating my Unraid log following your lead, it seems the VM is incorrectly attempting to use VF1 (02.0) despite me setting it to VF2 (00:02.2) in the VM settings (see attached image). 

    Screenshot 2024-02-20 at 12.23.31 PM.jpg

     

    What makes you think it uses 02.0 instead of 02.1 or 02.2?

     

    Have you tried waiting a few minutes to see if the VM comes up?

  17. 3 hours ago, WobbleBobble2 said:

    I stopped binding the sound card at startup (Tools > System Devices) and set Sound Card to "None" in the VM settings. But I'm still getting the same "VFIO_MAP_DMA failed" error. 

     

    
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.240-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
    -device '{"driver":"ide-cd","bus":"sata0.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
    -netdev tap,fd=36,id=hostnet0 \
    -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:73:76:08","bus":"pci.0","addr":"0xc"}' \
    -chardev pty,id=charserial0 \
    -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
    -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
    -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
    -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
    -audiodev '{"id":"audio1","driver":"none"}' \
    -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
    -k en-us \
    -device '{"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pci.0","addr":"0x2"}' \
    -device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.0","addr":"0xe"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/0 (label charserial0)
    qxl_send_events: spice-server bug: guest stopped, ignoring
    2024-02-20T05:13:29.846302Z qemu-system-x86_64: terminating on signal 15 from pid 18602 (/usr/sbin/libvirtd)
    2024-02-20 05:13:30.271+0000: shutting down, reason=shutdown
    2024-02-20 05:14:00.278+0000: starting up libvirt version: 8.7.0, qemu version: 7.2.0, kernel: 6.1.64-Unraid, hostname: HAL9000
    LC_ALL=C \
    PATH=/bin:/sbin:/usr/bin:/usr/sbin \
    HOME='/var/lib/libvirt/qemu/domain-6-Windows 10' \
    XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-6-Windows 10/.local/share' \
    XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-6-Windows 10/.cache' \
    XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-6-Windows 10/.config' \
    /usr/local/sbin/qemu \
    -name 'guest=Windows 10,debug-threads=on' \
    -S \
    -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-6-Windows 10/master-key.aes"}' \
    -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
    -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/9ff111d8-9ac0-34f8-4fdf-cbc8b866a6fa_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
    -machine pc-i440fx-7.2,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
    -accel kvm \
    -cpu host,migratable=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \
    -m 16384 \
    -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184}' \
    -overcommit mem-lock=off \
    -smp 8,sockets=1,dies=1,cores=4,threads=2 \
    -uuid 9ff111d8-9ac0-34f8-4fdf-cbc8b866a6fa \
    -display none \
    -no-user-config \
    -nodefaults \
    -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime \
    -no-hpet \
    -no-shutdown \
    -boot strict=on \
    -device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x3"}' \
    -device '{"driver":"pci-bridge","chassis_nr":2,"id":"pci.2","bus":"pci.0","addr":"0x6"}' \
    -device '{"driver":"pci-bridge","chassis_nr":3,"id":"pci.3","bus":"pci.0","addr":"0xb"}' \
    -device '{"driver":"pci-bridge","chassis_nr":4,"id":"pci.4","bus":"pci.0","addr":"0x8"}' \
    -device '{"driver":"pci-bridge","chassis_nr":5,"id":"pci.5","bus":"pci.0","addr":"0x9"}' \
    -device '{"driver":"pci-bridge","chassis_nr":6,"id":"pci.6","bus":"pci.0","addr":"0xa"}' \
    -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pci.0","addr":"0x7.0x7"}' \
    -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pci.0","multifunction":true,"addr":"0x7"}' \
    -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pci.0","addr":"0x7.0x1"}' \
    -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pci.0","addr":"0x7.0x2"}' \
    -device '{"driver":"ahci","id":"sata0","bus":"pci.0","addr":"0x4"}' \
    -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x5"}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
    -device '{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0xc","drive":"libvirt-3-format","id":"virtio-disk2","bootindex":1,"write-cache":"on","serial":"vdisk1"}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/Win10_22H2_English_x64v1.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
    -device '{"driver":"ide-cd","bus":"sata0.0","drive":"libvirt-2-format","id":"sata0-0-0","bootindex":2}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.240-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
    -device '{"driver":"ide-cd","bus":"sata0.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
    -netdev tap,fd=36,id=hostnet0 \
    -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:73:76:08","bus":"pci.0","addr":"0x2"}' \
    -chardev pty,id=charserial0 \
    -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
    -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
    -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
    -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
    -audiodev '{"id":"audio1","driver":"none"}' \
    -device '{"driver":"vfio-pci","host":"0000:00:02.1","id":"hostdev0","bus":"pci.6","addr":"0x10"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/0 (label charserial0)
    2024-02-20T05:14:06.722207Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:06.722263Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -2 (No such file or directory)
    2024-02-20T05:14:06.774309Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:06.774325Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:43.108555Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:43.108571Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:46.343716Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:46.343732Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:46.398145Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:46.398160Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:48.175406Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:48.175432Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:48.203421Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:48.203440Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:48.249706Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:48.249734Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:48.276368Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:48.276391Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    2024-02-20T05:14:50.563239Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
    2024-02-20T05:14:50.563261Z qemu-system-x86_64: vfio_dma_map(0x14c7a0048800, 0x381000000000, 0x20000000, 0x14c77de00000) = -22 (Invalid argument)
    

     

     

    Okay, please take a look at this topic, as this is not an issue related to the plugin.