dboris

Members
  • Posts: 66
  • Joined

  • Last visited

About dboris

  • Birthday 04/14/1992

  • Gender
    Male



  1. Update: I was able to benchmark the GPU in a VM. The results were so underwhelming, probably because of the ReBAR requirements, that I just stopped trying to improve the situation.
  2. Sooo it worked! On the other hand, I still randomly get the same error on:

     [8086:460d] 00:01.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 (rev 02)

     I modified the script you gave me accordingly, but I still get the bug. The server UI is unresponsive, but I was able to check the related reset_method file: it's empty. However, I noticed that after crashes it's usually not empty ("bus"). The hook, /etc/libvirt/hooks/qemu.d/Window11-Arc_reset:

     ```php
     #!/usr/bin/env php
     <?php
     function log_to_log($m, $type = "NOTICE") {
         if ($type == "DEBUG") return NULL;
         $m = print_r($m, true);
         $m = str_replace("\n", " ", $m);
         $m = str_replace('"', "'", $m);
         $cmd = "/usr/bin/logger " . '"' . $m . '"';
         exec($cmd);
     }

     # Reset
     # echo > /sys/bus/pci/devices/0000:03:00.0/reset_method
     # echo > /sys/bus/pci/devices/0000:04:00.0/reset_method
     #
     # Note: if this fails then the VM will not start.

     $vmname = "Windows 11";
     log_to_log("Hooks Arc Clear " . $argv[2] . " VM:" . $argv[1]);
     if ($argv[2] == 'prepare' && $argv[1] == "$vmname") {
         log_to_log("Clear /sys/bus/pci/devices/0000:03:00.0/reset_method");
         file_put_contents("/sys/bus/pci/devices/0000:03:00.0/reset_method", " ");
         log_to_log("Clear /sys/bus/pci/devices/0000:04:00.0/reset_method");
         file_put_contents("/sys/bus/pci/devices/0000:04:00.0/reset_method", " ");
         log_to_log("Clear /sys/bus/pci/devices/0000:01:00.0/reset_method");
         file_put_contents("/sys/bus/pci/devices/0000:01:00.0/reset_method", " ");
     }
     ?>
     ```
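Editor's note: for readers who would rather not depend on PHP, here is a minimal shell sketch of the same idea. It assumes the standard libvirt hook convention (libvirt invokes /etc/libvirt/hooks/qemu, or scripts under qemu.d/ on setups that support it, with $1 = guest name and $2 = operation such as prepare/start/stopped/release); the PCI addresses are the ones from this thread, and the SYSFS variable is my own addition so the function can be exercised outside a real system:

```shell
#!/bin/bash
# Hedged sketch of a libvirt qemu hook: before the "Windows 11" guest is
# prepared, blank out reset_method for the Arc GPU and its audio function
# so the kernel attempts no reset on them (the workaround from this thread).
SYSFS="${SYSFS:-/sys}"   # overridable purely for testing outside a real box
VMNAME="Windows 11"

# Write an empty string to a device's reset_method file, deselecting all
# reset methods for that device.
clear_reset_method() {
    printf '' > "${SYSFS}/bus/pci/devices/$1/reset_method"
}

if [ "$2" = "prepare" ] && [ "$1" = "$VMNAME" ]; then
    /usr/bin/logger "qemu hook: clearing reset_method before starting $VMNAME"
    clear_reset_method 0000:03:00.0   # Arc dGPU
    clear_reset_method 0000:04:00.0   # Arc audio function
fi
```

The hook must be executable (chmod +x) and, like the PHP original, a write failure here would keep the VM from starting.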
  3. As far as I understand, the Arc GPU won't crash if I clear the reset_method for both devices, i.e. make sure this file is empty: /sys/bus/pci/devices/0000:03:00.0/reset_method, and the same for the audio device, 0000:04:00.0. So the first thing I'll do is confirm this manual fix works: restart the VM, clear, etc. I'll report my findings here tonight! On the other hand, I don't yet understand where I should put the script or the UUID part; I'll check the contents of /etc/libvirt/hooks tonight for clues too. Don't hesitate if you have any ideas of directions to take, even if it involves testing; my array is already crying in bad blocks. I've already spent countless (worthless?) hours trying to get that NUC's GPU working, as it's a perfect candidate for my use case. Wish I had found that post last month.
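Editor's note: rather than opening the files in nano, the current state of both devices can be checked in one go. A small helper (my own sketch; SYSFS is overridable only so it can be tried anywhere) that prints each device's reset_method, where an empty value after the colon is the "no reset" state described above:

```shell
# Print "<pci-address>: <reset methods>" for each address given, reading
# from the sysfs tree (an empty value means no reset method is selected).
SYSFS="${SYSFS:-/sys}"
show_reset_method() {
    local dev f
    for dev in "$@"; do
        f="${SYSFS}/bus/pci/devices/${dev}/reset_method"
        [ -e "$f" ] && printf '%s: %s\n' "$dev" "$(cat "$f")"
    done
    return 0
}
```

Usage for the two devices in this thread: show_reset_method 0000:03:00.0 0000:04:00.0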
  4. Some people on the Proxmox forum were able to fix the A770M issue I'm encountering. Any idea how to translate it to Unraid? Thanks,
  5. Has anyone tested passing their Arc dGPU to a VM? I still get the same system crash :). My poor array isn't enjoying the multiple hard resets. I'd be interested to know whether I messed up my configuration or it's simply not working yet.
  6. I think you are simply in the wrong directory. I had the same issue. Go down one directory and check for the presence of the unRAIDServer folder.
  7. I tried with my A770M and I still get the same error. I was getting some errors before, but after adding the GPU and audio as passthrough on boot, and the audio as a function in the VM, it seems OK on boot. "Make sure the GuC/HuC firmware loaded without any FAIL or ERROR.":

     ```
     cat /sys/kernel/debug/dri/0/gt/uc/guc_info
     cat /sys/kernel/debug/dri/0/gt/uc/huc_info
     ```

     ```
     [   15.979594] i915 0000:00:02.0: enabling device (0006 -> 0007)
     [   15.980248] i915 0000:00:02.0: [drm] VT-d active for gfx access
     [   15.980311] i915 0000:00:02.0: [drm] Using Transparent Hugepages
     [   15.989526] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=mem
     [   15.993661] i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/adlp_dmc_ver2_16.bin (v2.16)
     [   15.993905] mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_component_ops [i915])
     [   16.160451] i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.bin version 70.5.1
     [   16.160466] i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc.bin version 7.9.3
     [   16.174909] i915 0000:00:02.0: [drm] HuC authenticated
     [   16.175918] i915 0000:00:02.0: [drm] GuC submission enabled
     [   16.175921] i915 0000:00:02.0: [drm] GuC SLPC enabled
     [   16.176633] i915 0000:00:02.0: [drm] GuC RC: enabled
     [   16.177691] mei_pxp 0000:00:16.0-fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1: bound 0000:00:02.0 (ops i915_pxp_tee_component_ops [i915])
     [   16.177865] i915 0000:00:02.0: [drm] Protected Xe Path (PXP) protected content support initialized
     [   16.179762] [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0
     [   16.190923] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     [   16.195256] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     [   16.195437] i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
     ```

     ```
     cat: /sys/kernel/debug/dri/0/gt/uc/guc_info: No such file or directory
     cat: /sys/kernel/debug/dri/0/gt/uc/huc_info: No such file or directory
     ```

     However, in the log I get the same errors and freezes as I had before updating the kernel. Once I boot the VM I get:

     ```
     Jul 12 15:17:36 Server nmbd[9631]: [2023/07/12 15:17:36.016923, 0] ../../source3/nmbd/nmbd_become_lmb.c:398(become_local_master_stage2)
     Jul 12 15:17:36 Server nmbd[9631]: *****
     Jul 12 15:17:36 Server nmbd[9631]:
     Jul 12 15:17:36 Server nmbd[9631]: Samba name server SERVER is now a local master browser for workgroup WORKGROUP on subnet 192.168.1.1
     Jul 12 15:17:36 Server nmbd[9631]:
     Jul 12 15:17:36 Server nmbd[9631]: *****
     Jul 12 15:17:55 Server webGUI: Successful login user root from 192.168.1.10
     Jul 12 15:21:13 Server kernel: br0: port 2(vnet0) entered blocking state
     Jul 12 15:21:13 Server kernel: br0: port 2(vnet0) entered disabled state
     Jul 12 15:21:13 Server kernel: device vnet0 entered promiscuous mode
     Jul 12 15:21:13 Server kernel: br0: port 2(vnet0) entered blocking state
     Jul 12 15:21:13 Server kernel: br0: port 2(vnet0) entered forwarding state
     Jul 12 15:21:29 Server kernel: vfio-pci 0000:03:00.0: not ready 1023ms after FLR; waiting
     Jul 12 15:21:31 Server kernel: vfio-pci 0000:03:00.0: not ready 2047ms after FLR; waiting
     Jul 12 15:21:34 Server kernel: vfio-pci 0000:03:00.0: not ready 4095ms after FLR; waiting
     Jul 12 15:21:39 Server kernel: vfio-pci 0000:03:00.0: not ready 8191ms after FLR; waiting
     Jul 12 15:21:49 Server kernel: vfio-pci 0000:03:00.0: not ready 16383ms after FLR; waiting
     Jul 12 15:22:05 Server ntpd[1666]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
     Jul 12 15:22:06 Server kernel: vfio-pci 0000:03:00.0: not ready 32767ms after FLR; waiting
     Jul 12 15:22:42 Server kernel: vfio-pci 0000:03:00.0: not ready 65535ms after FLR; giving up
     Jul 12 15:22:55 Server kernel: vfio-pci 0000:03:00.0: not ready 1023ms after bus reset; waiting
     Jul 12 15:22:57 Server kernel: vfio-pci 0000:03:00.0: not ready 2047ms after bus reset; waiting
     Jul 12 15:23:00 Server kernel: vfio-pci 0000:03:00.0: not ready 4095ms after bus reset; waiting
     Jul 12 15:23:05 Server kernel: vfio-pci 0000:03:00.0: not ready 8191ms after bus reset; waiting
     Jul 12 15:23:15 Server kernel: vfio-pci 0000:03:00.0: not ready 16383ms after bus reset; waiting
     Jul 12 15:23:33 Server kernel: vfio-pci 0000:03:00.0: not ready 32767ms after bus reset; waiting
     Jul 12 15:24:09 Server kernel: vfio-pci 0000:03:00.0: not ready 65535ms after bus reset; giving up
     ```

     Once again, the GPU is untouched, and the audio is passed as a multifunction device.
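Editor's note on reading that log: the "not ready Nms" milestones follow the kernel's doubling backoff when polling a device after a reset. Each wait is twice the previous plus one millisecond (the 2^n - 1 pattern), capped at 65535 ms, after which the kernel gives up and tries the next reset method; summed, that is roughly two minutes of stall per attempt, which matches the timestamps above. A tiny model of the sequence (my own illustration, not kernel code):

```shell
# Model of the backoff: each wait is 2*prev + 1 ms, capped at 65535 ms.
reset_wait_milestones() {
    local w=1023 out=""
    while [ "$w" -le 65535 ]; do
        out="$out $w"
        w=$(( 2 * w + 1 ))
    done
    echo "${out# }"
}
reset_wait_milestones   # prints: 1023 2047 4095 8191 16383 32767 65535
```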
  8. You were on point. 6.4 is now released.
  9. With this model, I can use both the iGPU and dGPU at the same time, as I mentioned. The only issue is the Intel Arc support. With an NVIDIA GPU you shouldn't be facing the same issues. My Unraid instance is booted UEFI, but that shouldn't change much.
  10. Managed to get my W11 VM to boot a few times without changing much... I had disabled passthrough and passed the GPU's audio device. I benchmarked the GPU, so everything was working; I thought I had found the solution. Then I got multiple system crashes with the same issue, despite the same previously working changes. It's incoherent... I think I'm better off waiting for the 6.2 kernel.
  11. Turns out I had edited /boot/config/modprobe.d/i915.conf with 56a0.. (the wrong GPU ID). It was overwriting /etc/modprobe.d/i915.conf on reboot :). So I edited both files again, rebooted, checked the value, did the BIOS update (357.0057) and changed the two BIOS settings... Tried removing the options video=vesafb:off and pcie_no_flr=8086:5690. Made another VM, making sure to pick Q35 with TPM, latest version. The W11 VM booted once, with the screen plugged in, and shut down without issues. GOOD. But then, on restart, I faced it again (log containing on/off/on). And after a reset... no config change... booting doesn't work anymore :D. After deleting the UUID, loader and NVRAM: still doesn't work. Rebuilding the same VM config from scratch: same. Turning the screen on and off: same. I still haven't found why it sometimes seems to work fine, despite spending hours rebooting the NUC.
  12. You should find all the related official docs on Intel's support page: https://www.intel.com/content/www/us/en/products/sku/196170/intel-nuc-12-enthusiast-kit-nuc12snki72/support.html It has the A770M Arc dGPU; I'm not sure I understand the relation to the UHD 770. Regarding the iGPU (Intel Xe), I have no issues: it behaves the same as its desktop iGPU counterparts and I can pass it to Docker containers.
  13. And I just finished reading the INTEL ARC SUPPORT thread, where you contributed a lot, in case I could find a tip. I did that, but I'm still facing the same issue, same with an Ubuntu VM.
  14. Hello dear Unraid enthusiasts, I recently got my hands on an Intel NUC Enthusiast (1200H + Arc A770M). I first checked the forum and saw the difficulties faced on the desktop version, but still wanted to try it out and report. I thought it would be pretty nice if the onboard iGPU and dGPU could be used separately. I own a Lenovo Legion 5 Pro with similar hardware, but with an NVIDIA GPU instead of the Arc. One of the limitations of that system is the presence of a MUX switch, which forces me to choose the dGPU or iGPU at boot and stick with it. In the case of the Intel NUC, there's no switch. I was able to use the iGPU for Jellyfin without much trouble, and to boot a W11 VM... but problems started quickly after I shut it off. It turns out that I have issues on VM reboots, and sometimes even on boot. At one point, I booted it twice and force-stopped it twice before a failed third reboot; it gives me hope. I tried to troubleshoot it alone for a few days without success. That doesn't mean the situation can't be solved, considering the hardware is known to not be optimised for virtualisation... but I thought I could take a shot, or at least report for the curiosity of the experiment. Any help will be appreciated, and I will gladly run any suggested test. Here's the VM config. I also tried Q35 OVMF / OVMF TPM. The XML: The log when the freeze happens: An example of a log I obtained by activating logging directly on the flash: I tried these arguments one by one: video=efifb:off, video=vesafb:off and pcie_no_flr=8086:5690. server-diagnostics-20230527-1453.zip
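Editor's note on where those kernel arguments live: on Unraid they are persisted on the flash drive by adding them to the append line of /boot/syslinux/syslinux.cfg (also editable from Main > Flash > Syslinux configuration in the webGUI). A sketch of the combination tried above, assuming the stock label layout; note that pcie_no_flr is not a mainline kernel parameter, so it only takes effect on the patched kernels discussed in the Arc support thread:

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off video=vesafb:off pcie_no_flr=8086:5690 initrd=/bzroot
```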