Intel NUC Enthusiast: Trying to use that Arc mobile GPU



Hello dear Unraid enthusiasts,

 

I recently got my hands on an Intel NUC Enthusiast (i7-12700H + Arc A770M).

I first checked on the forum and saw the difficulties people faced with the desktop version, but I still wanted to try it out and report.


I thought it would be pretty nice if the onboard iGPU and dGPU could be used separately.

I own a Lenovo Legion 5 Pro with the same hardware, but with an NVIDIA GPU instead of the Arc.
One of the limitations of that system is a MUX switch that forces me to choose the dGPU or iGPU at boot and stick with it.
In the case of the Intel NUC, there's no switch.
I was able to use the iGPU for Jellyfin without much issue, and to boot a W11 VM... But the problems started quickly after I shut it off.

It turns out that I have issues on VM reboots, and sometimes even on the first boot.

At one point, I booted it twice and force-stopped it twice before a failed 3rd reboot; that gives me hope.
I tried to troubleshoot it alone for a few days without success.

The situation may not be solvable, considering the hardware is known not to be optimised for virtualisation... But I thought I could take a shot, or at least report the results for the curiosity of the experiment.

 

Any help will be appreciated and I will gladly run any suggested test. :)

 

 

Here's the VM Config.
I also tried Q35 OVMF / OVMF TPM.


[Screenshots of the VM settings page]

The XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 11</name>
  <uuid>0e85f3e7-fc89-0df4-9421-0245b2511260</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/0e85f3e7-fc89-0df4-9421-0245b2511260_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/SSD/VDISKS/W11_Gaming.vdisk'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/SSD/VDISKS/virtio-win-0.1.229-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:50:06:12'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='3'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
    </tpm>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc092'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc342'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

 

 

The log when the freeze happens:

[Screenshot of the system log at the moment of the freeze]

 

An example of the log I obtained by enabling logging directly to the flash drive:

May 26 10:37:13 Server  rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="2611" x-info="https://www.rsyslog.com"] start
May 26 10:37:24 Server kernel: docker0: port 2(veth6dfe6d5) entered disabled state
May 26 10:37:24 Server kernel: veth9356112: renamed from eth0
May 26 10:37:24 Server kernel: docker0: port 2(veth6dfe6d5) entered disabled state
May 26 10:37:24 Server kernel: device veth6dfe6d5 left promiscuous mode
May 26 10:37:24 Server kernel: docker0: port 2(veth6dfe6d5) entered disabled state
May 26 10:37:51 Server kernel: br0: port 2(vnet0) entered blocking state
May 26 10:37:51 Server kernel: br0: port 2(vnet0) entered disabled state
May 26 10:37:51 Server kernel: device vnet0 entered promiscuous mode
May 26 10:37:51 Server kernel: br0: port 2(vnet0) entered blocking state
May 26 10:37:51 Server kernel: br0: port 2(vnet0) entered forwarding state
May 26 10:37:54 Server kernel: vfio-pci 0000:04:00.0: enabling device (0000 -> 0002)
May 26 10:37:54 Server  acpid: input device has been disconnected, fd 7
May 26 10:37:54 Server  acpid: input device has been disconnected, fd 5
May 26 10:37:54 Server  acpid: input device has been disconnected, fd 6
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 1/KVM/3612 took a split_lock trap at address: 0x7fe6108c
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 5/KVM/3616 took a split_lock trap at address: 0x7fe6108c
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 4/KVM/3615 took a split_lock trap at address: 0x7fe6108c
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 3/KVM/3614 took a split_lock trap at address: 0x7fe6108c
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 2/KVM/3613 took a split_lock trap at address: 0x7fe6108c
May 26 10:37:55 Server kernel: x86/split lock detection: #AC: CPU 6/KVM/3617 took a split_lock trap at address: 0x7fe6108c
May 26 10:38:00 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 26 10:38:00 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 26 10:39:05 Server kernel: docker0: port 2(veth5b614b9) entered blocking state
May 26 10:39:05 Server kernel: docker0: port 2(veth5b614b9) entered disabled state
May 26 10:39:05 Server kernel: device veth5b614b9 entered promiscuous mode
May 26 10:39:06 Server kernel: eth0: renamed from vethf4f19bd
May 26 10:39:06 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth5b614b9: link becomes ready
May 26 10:39:06 Server kernel: docker0: port 2(veth5b614b9) entered blocking state
May 26 10:39:06 Server kernel: docker0: port 2(veth5b614b9) entered forwarding state
May 26 10:50:01 Server kernel: x86/split lock detection: #AC: CPU 0/KVM/3611 took a split_lock trap at address: 0xfffff8045d2b248f
May 26 10:58:24 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 26 10:58:24 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 26 10:58:45 Server kernel: br0: port 2(vnet0) entered disabled state
May 26 10:58:45 Server kernel: device vnet0 left promiscuous mode
May 26 10:58:45 Server kernel: br0: port 2(vnet0) entered disabled state
May 26 10:58:45 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 26 10:58:45 Server kernel: input: Logitech G512 SE as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.1/3-6.1:1.0/0003:046D:C342.0005/input/input14
May 26 10:58:45 Server kernel: hid-generic 0003:046D:C342.0005: input,hidraw0: USB HID v1.11 Keyboard [Logitech G512 SE] on usb-0000:00:14.0-6.1/input0
May 26 10:58:45 Server kernel: input: Logitech G512 SE Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.1/3-6.1:1.1/0003:046D:C342.0006/input/input15
May 26 10:58:45 Server kernel: hid-generic 0003:046D:C342.0006: input,hiddev96,hidraw1: USB HID v1.11 Keyboard [Logitech G512 SE] on usb-0000:00:14.0-6.1/input1
May 26 10:58:45 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 26 10:58:45 Server kernel: input: Logitech G203 LIGHTSYNC Gaming Mouse as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.3/3-6.3:1.0/0003:046D:C092.0007/input/input18
May 26 10:58:45 Server kernel: hid-generic 0003:046D:C092.0007: input,hidraw2: USB HID v1.11 Mouse [Logitech G203 LIGHTSYNC Gaming Mouse] on usb-0000:00:14.0-6.3/input0
May 26 10:58:45 Server kernel: input: Logitech G203 LIGHTSYNC Gaming Mouse Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.3/3-6.3:1.1/0003:046D:C092.0008/input/input19
May 26 10:58:45 Server kernel: hid-generic 0003:046D:C092.0008: input,hiddev97,hidraw3: USB HID v1.11 Keyboard [Logitech G203 LIGHTSYNC Gaming Mouse] on usb-0000:00:14.0-6.3/input1
May 26 10:59:00 Server kernel: vfio-pci 0000:03:00.0: not ready 1023ms after FLR; waiting
May 26 10:59:02 Server kernel: vfio-pci 0000:03:00.0: not ready 2047ms after FLR; waiting
May 26 10:59:06 Server kernel: vfio-pci 0000:03:00.0: not ready 4095ms after FLR; waiting
May 26 10:59:11 Server kernel: vfio-pci 0000:03:00.0: not ready 8191ms after FLR; waiting
May 26 10:59:21 Server kernel: vfio-pci 0000:03:00.0: not ready 16383ms after FLR; waiting
May 26 10:59:39 Server kernel: vfio-pci 0000:03:00.0: not ready 32767ms after FLR; waiting
May 26 11:00:16 Server kernel: vfio-pci 0000:03:00.0: not ready 65535ms after FLR; giving up

 

 

I tried these arguments one by one:

video=efifb:off, video=vesafb:off and pcie_no_flr=8086:5690
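
For reference, these go on the append line of /boot/syslinux/syslinux.cfg (Main > Flash > Syslinux Configuration in the GUI). A rough sketch of the boot entry with all three added at once, assuming the stock layout (I actually tested them one at a time):

label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off video=vesafb:off pcie_no_flr=8086:5690 initrd=/bzroot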

[Screenshot of the modified boot options]

 

 

 

 

 

server-diagnostics-20230527-1453.zip

Edited by dboris
23 minutes ago, dboris said:

[full original post quoted above — trimmed]

I suspect it will be down to you running 6.11.5, as Arc needs a later kernel. I have an Arc A770 and need to be running the 6.12 RC, plus a modprobe to activate it for host use. That release runs kernel 6.1.

 

I have my A770 passed through to a Windows 11 VM.

 


  

1 hour ago, SimonF said:

I suspect it will be down to you running 6.11.5, as Arc needs a later kernel. I have an Arc A770 and need to be running the 6.12 RC, plus a modprobe to activate it. That release runs kernel 6.1

 

And I just finished reading the INTEL ARC SUPPORT thread, where you contributed a lot, in case I could find a tip.

 

I did that, but I'm still facing the same issue, same with an Ubuntu VM.

 

 

[Screenshots attached]

 

Edited by dboris
38 minutes ago, dboris said:

  

 

I did that, but I'm still facing the same issue, same with an Ubuntu VM. [...]

 

I know I cannot pass through the UHD 770 iGPU to any VM, so I'm not sure if this will be the same. I'll do some research; do you have a link to the NUC support page or manual?

1 hour ago, SimonF said:

I know I cannot pass through the UHD 770 iGPU to any VM, so I'm not sure if this will be the same. I'll do some research; do you have a link to the NUC support page or manual?

 

You should find all the related official docs on Intel's support page:

https://www.intel.com/content/www/us/en/products/sku/196170/intel-nuc-12-enthusiast-kit-nuc12snki72/support.html

 

It has the "A770M" Arc dGPU. I'm not sure I understand the relation with the UHD 770.

Regarding the iGPU (Intel Xe), I have no issue; it behaves the same as its desktop counterparts and I can pass it to Docker containers.

 

[Screenshot attached]

 

 

Edited by dboris
2 hours ago, dboris said:

 

You should find all the related official docs on Intel's support page:

https://www.intel.com/content/www/us/en/products/sku/196170/intel-nuc-12-enthusiast-kit-nuc12snki72/support.html

[...]

 

 

Found this https://kevinquillen.com/getting-intel-arc-770m-work-fedora-37

 

It states that to get Fedora to work, they had to do the following:

 

Also in the BIOS, disable ACPI and PCIe power management features.

 

In terminal, run sudo nano /etc/modprobe.d/i915.conf and add options i915 force_probe=5690, and save the file. If you have an Arc GPU other than the 770m, you need to look up the corresponding PCI ID instead of 5690 here.
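
In command form, that step boils down to something like this (a sketch using the 5690 ID from the article; other Arc models need their own PCI device ID):

# write the force_probe option so the i915 driver picks up the Arc GPU
echo "options i915 force_probe=5690" | sudo tee /etc/modprobe.d/i915.conf
# then reboot (or unload/reload i915) for it to take effect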

2 hours ago, SimonF said:

Found this https://kevinquillen.com/getting-intel-arc-770m-work-fedora-37 [...]

 

Turns out I had edited /boot/config/modprobe.d/i915.conf with 56a0 (the wrong GPU ID).

It was overwriting /etc/modprobe.d/i915.conf on reboot. :)
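
For anyone hitting the same thing: the ID is easy to double-check, and since the flash copy overwrites /etc/modprobe.d/ on reboot, both files need the same (correct) line. A quick sketch, assuming the A770M's 8086:5690 ID:

# confirm the dGPU's vendor:device ID (the A770M should show as [8086:5690])
lspci -nn | grep -Ei 'vga|display'
# keep the persistent flash copy and the live copy in sync
echo "options i915 force_probe=5690" > /boot/config/modprobe.d/i915.conf
echo "options i915 force_probe=5690" > /etc/modprobe.d/i915.conf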

 

So I edited both files again, rebooted, checked the value, did the BIOS update (357.0057) and changed the two BIOS settings...

I tried removing the options video=vesafb:off and pcie_no_flr=8086:5690.
I made another VM, making sure to pick the latest Q35 TPM version.
The W11 VM booted once, with a screen plugged in, and shut down without issues.

GOOD.

But then, on restart, I faced it again (log covering on/off/on):

 


May 27 13:41:43 Server kernel: br0: port 2(vnet0) entered blocking state
May 27 13:41:43 Server kernel: br0: port 2(vnet0) entered disabled state
May 27 13:41:43 Server kernel: device vnet0 entered promiscuous mode
May 27 13:41:43 Server kernel: br0: port 2(vnet0) entered blocking state
May 27 13:41:43 Server kernel: br0: port 2(vnet0) entered forwarding state
May 27 13:41:47 Server acpid: input device has been disconnected, fd 7
May 27 13:41:47 Server acpid: input device has been disconnected, fd 5
May 27 13:41:47 Server acpid: input device has been disconnected, fd 6
May 27 13:41:58 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 27 13:41:58 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 27 13:42:45 Server kernel: x86/split lock detection: #AC: CPU 5/KVM/23966 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:45 Server kernel: x86/split lock detection: #AC: CPU 6/KVM/23967 took a split_lock trap at address: 0x746cdb56
May 27 13:42:45 Server kernel: x86/split lock detection: #AC: CPU 4/KVM/23965 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:45 Server kernel: x86/split lock detection: #AC: CPU 7/KVM/23968 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:46 Server kernel: x86/split lock detection: #AC: CPU 2/KVM/23963 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:46 Server kernel: x86/split lock detection: #AC: CPU 1/KVM/23962 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:46 Server kernel: x86/split lock detection: #AC: CPU 3/KVM/23964 took a split_lock trap at address: 0x746cc9dd
May 27 13:42:47 Server kernel: x86/split lock detection: #AC: CPU 0/KVM/23961 took a split_lock trap at address: 0x746cc9dd
May 27 13:43:00 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 27 13:43:00 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 27 13:43:19 Server kernel: br0: port 2(vnet0) entered disabled state
May 27 13:43:19 Server kernel: device vnet0 left promiscuous mode
May 27 13:43:19 Server kernel: br0: port 2(vnet0) entered disabled state
May 27 13:43:20 Server kernel: usb 3-6.1: reset full-speed USB device number 5 using xhci_hcd
May 27 13:43:20 Server kernel: input: Logitech G512 SE as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.1/3-6.1:1.0/0003:046D:C342.0005/input/input14
May 27 13:43:20 Server kernel: hid-generic 0003:046D:C342.0005: input,hidraw0: USB HID v1.11 Keyboard [Logitech G512 SE] on usb-0000:00:14.0-6.1/input0
May 27 13:43:20 Server kernel: input: Logitech G512 SE Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.1/3-6.1:1.1/0003:046D:C342.0006/input/input15
May 27 13:43:20 Server kernel: hid-generic 0003:046D:C342.0006: input,hiddev96,hidraw1: USB HID v1.11 Keyboard [Logitech G512 SE] on usb-0000:00:14.0-6.1/input1
May 27 13:43:20 Server kernel: usb 3-6.3: reset full-speed USB device number 6 using xhci_hcd
May 27 13:43:20 Server kernel: input: Logitech G203 LIGHTSYNC Gaming Mouse as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.3/3-6.3:1.0/0003:046D:C092.0007/input/input18
May 27 13:43:20 Server kernel: hid-generic 0003:046D:C092.0007: input,hidraw2: USB HID v1.11 Mouse [Logitech G203 LIGHTSYNC Gaming Mouse] on usb-0000:00:14.0-6.3/input0
May 27 13:43:20 Server kernel: input: Logitech G203 LIGHTSYNC Gaming Mouse Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6.3/3-6.3:1.1/0003:046D:C092.0008/input/input19
May 27 13:43:20 Server kernel: hid-generic 0003:046D:C092.0008: input,hiddev97,hidraw3: USB HID v1.11 Keyboard [Logitech G203 LIGHTSYNC Gaming Mouse] on usb-0000:00:14.0-6.3/input1
May 27 13:43:37 Server kernel: br0: port 2(vnet1) entered blocking state
May 27 13:43:37 Server kernel: br0: port 2(vnet1) entered disabled state
May 27 13:43:37 Server kernel: device vnet1 entered promiscuous mode
May 27 13:43:37 Server kernel: br0: port 2(vnet1) entered blocking state
May 27 13:43:37 Server kernel: br0: port 2(vnet1) entered forwarding state
May 27 13:43:52 Server kernel: vfio-pci 0000:03:00.0: not ready 1023ms after FLR; waiting
May 27 13:43:54 Server kernel: vfio-pci 0000:03:00.0: not ready 2047ms after FLR; waiting
May 27 13:43:57 Server kernel: vfio-pci 0000:03:00.0: not ready 4095ms after FLR; waiting
May 27 13:44:02 Server kernel: vfio-pci 0000:03:00.0: not ready 8191ms after FLR; waiting
May 27 13:44:12 Server kernel: vfio-pci 0000:03:00.0: not ready 16383ms after FLR; waiting

 

And after a reset... no config change... booting doesn't work anymore. :D
After deleting the uuid, loader and nvram: it still doesn't work.
Recreating the same VM config from scratch: same.

Turning the screen on and off: same.

I still haven't found why it sometimes seems to work fine, despite spending hours rebooting the NUC.

Edited by dboris
  • 2 weeks later...

Managed to get my W11 VM to boot a few times without changing much... I had disabled passthrough and passed the GPU's audio card.

I benchmarked the GPU :) so everything was working.
I thought I had found the solution.

Then I got multiple system crashes with the same issue, with the same previously working changes.

It's inconsistent... I think I'm better off waiting for the 6.2 kernel.

Edited by dboris
  • 3 weeks later...
On 6/10/2023 at 5:12 PM, dboris said:

Managed to get my W11 VM to boot a few times without changing much... [...]

Hey! I saw this post since I am planning on buying this: https://tweakers.net/pricewatch/1687200/intel-nuc-11-enthusiast-barebone/specificaties/

 

Is it correct that you can use the iGPU for Jellyfin (or in my case Plex) and still use the NVIDIA (or in your case Intel) GPU for passthrough to a VM?

 

And is your Unraid instance booted UEFI or not?

On 6/29/2023 at 8:29 AM, Mirano said:

Is it correct that you can use the iGPU for Jellyfin (or in my case Plex) and still use the NVIDIA (or in your case Intel) GPU for passthrough to a VM?

And is your Unraid instance booted UEFI or not?

 

With this model, I can use both the iGPU and dGPU at the same time, as I mentioned.
The only issue is the Intel Arc support.
With an NVIDIA GPU you shouldn't be facing the same issues.

My instance of Unraid is booted UEFI, but it shouldn't change much.
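
To illustrate the iGPU side: the container only needs the render node under /dev/dri, which leaves the dGPU free for vfio passthrough. A minimal sketch with the official Jellyfin image (paths are just an example; the Unraid Docker template does the equivalent by adding /dev/dri as a device):

# example only: expose the iGPU's render node to the container for hardware transcoding
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /mnt/user/appdata/jellyfin:/config \
  -p 8096:8096 \
  jellyfin/jellyfin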

