Acidcliff

Members
  • Posts: 20
  • Joined


Acidcliff's Achievements: Noob (1/14)

Reputation: 6

  1. [ 43.683341] i915 0000:00:02.2: Device initialization failed (-71)
     [ 43.683344] i915: probe of 0000:00:02.2 failed with error -71
     => I'm by far no expert, but this seems to be the culprit - it looks as if Unraid isn't able to initialize the VFs, so Win11 is probably not able to use them. What CPU model are you using? Have you enabled SR-IOV in the BIOS? Have you done a reboot after installing the plugin? What's your "lspci -v" output?
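     A minimal sketch of the checks asked for above, assuming the iGPU PF sits at the usual 0000:00:02.0 address (the address is an assumption, not stated in the post):

        # i915 kernel messages, including the SR-IOV/VF probe errors quoted above
        dmesg | grep -i i915
        # PF and VFs on the iGPU (the post asks for "lspci -v" output)
        lspci -v -s 00:02
        # Number of VFs currently enabled on the PF (0 would mean no VFs were created)
        cat /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs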
  2. Tried that (via the uninstall function on the plugins page) - same error. And now I have the problem that the plugin doesn't want to install anymore. Edit: I also tried to replace the libvirt.php (that was patched by the previous version of the plugin) with the libvirt.php.orig backup -> but it didn't work.
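     A minimal sketch of that restore step, assuming the backup sits next to the patched file (the file path is taken from the patch output in the next post, the .orig name from the post above):

        # Restore the unpatched libvirt.php from the backup left by the previous plugin version
        cp /usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php.orig \
           /usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php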
  3. plugin: installing: i915-sriov.plg
     Executing hook script: pre_plugin_checks
     plugin: downloading: i915-sriov.plg ... done
     Executing hook script: pre_plugin_checks
     +==============================================================================
     | Skipping package unraid-i915-sriov-2023.03.30 (already installed)
     +==============================================================================
     patching file usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php
     Hunk #1 FAILED at 780.
     1 out of 1 hunk FAILED -- saving rejects to file usr/local/emhttp/plugins/dynamix.vm.manager/include/libvirt.php.rej
     plugin: run failed: /bin/bash
     Executing hook script: post_plugin_checks
     Getting this error when trying to update the plugin to the latest version.
  4. Yay, I finally got it working!!! Tried everything: reinstalling the plugin, intel_gpu_top, using the modified kernel instead of the addon... but the error 43 always remained. The solution was the following: although I already had the Intel GPU driver installed, I had to reinstall it with the "clean install" option checked (otherwise it didn't work). After the reinstall everything worked like a charm. I'm finally able to run my Win11 VM on VF 2.1 with Parsec (using the Virtual Display Driver) (don't forget to remove the Red Hat display driver).
  5. Hey, it's amazing that you have brought it this far! I have the following problem: when I try to bind one of the VFs (02.1) to the VM (Win 11), the iGPU does not seem to work (Code 43 in the Device Manager). I'm running a 12600K on 6.11.5. VFs are created and visible in the device overview. 02.1 is bound to vfio at boot. Any idea why it's not working?

        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='de'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <audio id='1' type='none'/>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x10' function='0x0'/>
        </hostdev>

        Loading config from /boot/config/vfio-pci.cfg
        BIND=0000:00:02.1|8086:4680
        ---
        Processing 0000:00:02.1 8086:4680
        Error: Device 0000:00:02.1 does not exist, unable to bind device
        ---
        vfio-pci binding complete

        Devices listed in /sys/bus/pci/drivers/vfio-pci:

        Loading config from /boot/config/vfio-pci.cfg
        BIND=0000:00:02.1|8086:4680
        ---
        Processing 0000:00:02.1 8086:4680
        Vendor:Device 8086:4680 found at 0000:00:02.1
        IOMMU group members (sans bridges):
        /sys/bus/pci/devices/0000:00:02.1/iommu_group/devices/0000:00:02.1
        Binding...
        0000:00:02.1 already bound to vfio-pci
        Successfully bound the device 8086:4680 at 0000:00:02.1 to vfio-pci
        ---
        vfio-pci binding complete

        Devices listed in /sys/bus/pci/drivers/vfio-pci:
        lrwxrwxrwx 1 root root 0 Mar 28 16:01 0000:00:02.1 -> ../../../../devices/pci0000:00/0000:00:02.1
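     A minimal sanity check, not from the post, assuming the same VF address (0000:00:02.1) and the PF at 0000:00:02.0: confirm the VF is actually enumerated and bound to vfio-pci before the VM starts.

        # Which kernel driver currently owns the VF (should show vfio-pci)
        lspci -nnk -s 00:02.1
        # VF count on the PF; 0 here would explain the "does not exist" message in the first log
        cat /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs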
  6. Well, hope dies last. I tried all the ports - no change. C9 would be a dream, but unfortunately the MSI board apparently won't give me anything below C2.
  7. Hi everyone, great guide, great thread, and above all it's really amazing how constructively and kindly everyone here helps each other!
     My setup:
       • Unraid 6.11.3
       • MB: MSI PRO Z690-A DDR4
       • CPU: 12600K
       • HDDs: 5x SATA
       • SSD: 1x M.2
       • Powertop 2.15; as far as I can tell, ASPM is fully enabled
       • USB: ConBee II and JetFlash Unraid USB stick
     With this setup I currently don't get below C6 overall, even with the array stopped. However, when I use the ConBee II via zigbee2mqtt (Docker), I no longer get below C2. Autosuspend for the ConBee also shows as "Bad" in powertop, and even after tuning it seems to keep reverting. If I stop the zigbee2mqtt container, it goes back to C6. I can already guess the answer... but do you think it would be possible in principle to reach a better level with the ConBee?
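     A rough sketch, not from the post, of re-applying USB autosuspend to the ConBee II after powertop's tuning gets reset (the 1cf1 vendor ID for dresden elektronik is an assumption):

        #!/bin/bash
        # Re-enable runtime autosuspend for any dresden elektronik (ConBee) USB device
        for vendor in /sys/bus/usb/devices/*/idVendor; do
            [ -f "$vendor" ] || continue
            if [ "$(cat "$vendor")" = "1cf1" ]; then
                echo auto > "$(dirname "$vendor")/power/control"
            fi
        done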
  8. I think I will be able to test it on the weekend!
  9. Looking at the code and commits, it seems to me that Intel's LTS kernel 5.10 has added SR-IOV support for the i915 driver (although nothing has happened to it since the commit on 24 Feb). Intel's 5.15 LTS kernel does not have it, though. It's also not in the 5.18 Linux kernel. Not sure what state the updated SR-IOV driver in 5.10 is in. I'm no Linux pro, so I'm also not sure if and how it is possible to compile the adapted driver and add it to the Unraid kernel. But if it were possible, it would definitely be cool.
  10. Not quite firm with AMD CPUs, but have you tried the following?
       • Setting the CPU scaling governor to a more power-saving strategy (e.g. powersave)
       • Pinning CPU cores so fewer of them get used (may mean less area to cool - but I could be wrong)
       • Undervolting the CPU
       • Making sure that the iGPU is in a power-saving state (for Intel & Nvidia this means at least installing the GPU drivers and enabling it - e.g. through "nvidia-smi --persistence-mode=1")
       • Reducing other heat-producing components and the case temperature in your build (e.g. spinning down HDDs, using Gigabit LAN rather than multi-gig - if applicable - and increasing case fan speed)
     At what temperature is your CPU currently running? If you've got enough headroom you might get away with just unplugging the fan (at your own risk, of course...). Alternatively (don't know how much room you have) you could also think about using a small AIO and leading the cooling outside; that could be more efficient than having the cooler near the HDDs and other stuff...
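     A minimal sketch of the first suggestion, setting the powersave governor through the standard cpufreq sysfs paths (the paths are standard, not taken from the post):

        # Switch every core to the powersave scaling governor
        for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo powersave > "$gov"
        done
        # Verify the change took effect on core 0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor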
  11. Yep - applied everything from powertop - but to be honest that didn't make too much of a difference (maybe because my HDDs are mostly spun down anyway). But applying "powersave" as the scaling governor made a huge difference, bringing the system down by about 5-10 W to 47-50 W*. Maybe I'll look into undervolting - but overall I think I can be happy with 50 W for this setup.
     *Current setup: CPU: Intel 12600K, GPU: MSI GTX 1050 Aero, 5x HDDs (~32 TB), 32 GB RAM 3600 MHz, MSI Z690 PRO A, PNY XLR8 CS3030 1 TB M.2, Arctic Liquid Freezer 420, 3x 140 mm fans (all stopped for the test), no OC, XMP on, 1 VM running (Debian Linux), 12 Docker containers running (mainly home automation stuff)
  12. Had a deeper look into that - thank you again for pointing it out. In fact, while not under load the GTX 1050 was running in P-state P0. Instead of running a VM to get it down to P8, I used the following, which seems to work (haven't yet had the opportunity to measure the impact): nvidia-smi --persistence-mode=1
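     A quick way to check whether the card actually leaves P0 after that (the query fields are standard nvidia-smi options, not taken from the post):

        # Enable persistence mode as in the post, then query performance state and power draw
        nvidia-smi --persistence-mode=1
        nvidia-smi --query-gpu=name,pstate,power.draw --format=csv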
  13. I think it would be possible to go even below the 36 W, since I use a 420 mm AIO, which is way overkill (but a silent PC was important to me). The Alder Lake iGPU has Quick Sync - but you'll run into the same problem that I'm currently trying to solve: beginning with Intel Gen 11, Intel has dropped support for GVT-g/GVT-d and switched to SR-IOV. So at least with the current Unraid plugins (like Intel GPU TOP) you won't be able to virtualize the iGPU and use it for your Docker containers and VMs in parallel. Unfortunately there is close to no material or tutorials to be found on the topic of "SR-IOVing" an iGPU. As long as this isn't solved you'll probably be stuck with using dedicated GPUs (and I guess the 1050 is one of the least power-hungry cards that also has proper HW encoding features - features that, for example, the 1030 lacks). Details on my problems with the iGPU SR-IOV here:
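     For context, and not from the post: on a kernel/driver that does expose SR-IOV on the iGPU, VFs are created the same way as on NICs, via sriov_numvfs. A generic sketch, assuming the PF sits at 0000:00:02.0:

        # How many VFs the PF advertises
        cat /sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs
        # Enable two VFs (they then show up as 0000:00:02.1, 0000:00:02.2, ...)
        echo 2 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs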
  14. First of all, a huge thank-you to you @BVD! Imo it's the most complete and most user-friendly guide to SR-IOV that I could find anywhere. Really cool! I know that you focused this thread on NICs, but I tried to apply the guide to Intel's iGPUs, which (supposedly) support SR-IOV on Gen 11+. I don't want to hijack this thread - details are in a separate thread: When I try to use the VFs from the iGPU, I get an error stating that the VF needs a VF token to be used (from my research, a shared secret UUID between the PF and VF). Most sources about VF tokens I could find are from discussions around DPDK, so I guess it's also a topic one could stumble upon when using VFs on NICs. I was wondering if someone here has ever had to deal with something similar and knows a way of setting the token with the workflow described here.