SimonF Posted December 3, 2021

Mine is the iGPU. If you pass a GPU through to a VM, it is no longer seen by the host drivers, so GPU stats will not work.
DeadDevil6210 Posted December 4, 2021 (edited)

11 hours ago, Deadlystrike said:
Did you do any other modifications? I rolled back all my changes and started with just the Intel GPU TOP plugin, the blacklisted driver, and the /dev/dri volume. With these, HW encoding still works for H.264, but not H.265/HEVC. I also tried adding the force_probe and that yielded no difference.

I did add the following line to my go file: i915.force_probe=4680 (I have the 12600K iGPU; I don't know whether the other 12th-gen parts have a different GPU ID, so be aware that 4680 can be different; looking at the Rocket Lake GPUs, the IDs do vary). Judging by your rollback steps, that's the only thing I think you didn't do.

BUT: my server crashed every time somebody hit an HEVC file on Plex that needed transcoding, so something makes the whole server freeze. I have now deleted the /dev/dri entry from Plex and everything is fine again; the server survived the night without hanging. Will wait for kernel 5.15 or 5.16 in Unraid.

Update, syslog:

Dec 4 08:39:24 Wol-Ent-NAS ntpd[1295]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 4 08:40:45 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for preemption time out
Dec 4 08:40:45 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:4:28fffffd, in Plex Transcoder [23848]
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:4:28fffffd, in Plex Transcoder [23848]
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for stopped heartbeat on vcs0
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on vcs0
Dec 4 08:40:54 Wol-Ent-NAS kernel: [drm:__uc_sanitize [i915]] *ERROR* Failed to reset GuC, ret = -110
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to reset chip
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm:add_taint_for_CI [i915]] CI tainted:0x9 by intel_gt_reset+0x26d/0x292 [i915]
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] Plex Transcoder[23848] context reset due to GPU hang
Dec 4 08:40:59 Wol-Ent-NAS kernel: Fence expiration time out i915-0000:00:02.0:Plex Transcoder[23848]:23f4!

Anybody who has a good understanding of this?

Edited December 4, 2021 by DeadDevil6210
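For anyone chasing similar freezes: the relevant i915 events can be filtered out of a syslog with a quick grep. A minimal sketch follows; a few sample lines from the log above are embedded so it runs anywhere, but on a live server you would point the grep at /var/log/syslog instead.

```shell
# Pull i915 GPU-hang / reset events out of a syslog.
# Sample lines are embedded so this is self-contained; on a live
# Unraid box, replace the sample file with /var/log/syslog.
cat > /tmp/sample_syslog.txt <<'EOF'
Dec 4 08:40:45 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] Resetting vcs0 for preemption time out
Dec 4 08:40:45 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:4:28fffffd, in Plex Transcoder [23848]
Dec 4 08:40:54 Wol-Ent-NAS kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to reset chip
Dec 4 08:39:24 Wol-Ent-NAS ntpd[1295]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
EOF
# Only the i915 hang/reset lines match; the unrelated ntpd line does not.
grep -E 'i915 .*(GPU HANG|Resetting|Failed to reset)' /tmp/sample_syslog.txt
```

If this grep returns anything after a Plex HEVC transcode, the freeze is most likely the same GPU hang as above rather than a container misconfiguration.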
1HP Posted December 4, 2021

12 hours ago, SimonF said:
Mine is the iGPU. If you pass a GPU through to a VM, it is no longer seen by the host drivers, so GPU stats will not work.

Aah OK, thank you!
Colin Haber Posted December 6, 2021

Anyone have any luck getting the iGPU to run with Jellyfin? Got the drivers loaded, but I'm seeing these errors on VAAPI transcodes:

ffmpeg version 4.3.1-Jellyfin Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-gpl --enable-version3 --enable-static --enable-libfontconfig --enable-fontconfig --enable-gmp --enable-gnutls --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --arch=amd64 --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-vdpau --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvenc --enable-nvdec --enable-ffnvcodec
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
[AVHWDeviceContext @ 0x563ff779b240] libva: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so has no function __vaDriverInit_1_0
[AVHWDeviceContext @ 0x563ff779b240] libva: /usr/lib/jellyfin-ffmpeg/lib/dri/i965_drv_video.so init failed
[AVHWDeviceContext @ 0x563ff779b240] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.
Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': Input/output error
Error parsing global options: Input/output error
MadMatt337 Posted December 8, 2021 (edited)

In case anyone is curious: I am setting up a new server now with a 12600K and an ASRock Z690 Steel Legend, and my onboard NIC (Dragon RTL8125BG) is working correctly on 6.10.0-rc2.
Deadlystrike Posted December 9, 2021

20 hours ago, MadMatt337 said:
In case anyone is curious: I am setting up a new server now with a 12600K and an ASRock Z690 Steel Legend, and my onboard NIC (Dragon RTL8125BG) is working correctly on 6.10.0-rc2.

Is your NIC working at gigabit or 2.5 gigabit speeds? My understanding is that 2.5 gigabit support is featured in Linux kernel 5.15, and the current RC2 of Unraid is on 5.14. From what I have been reading, 5.15 and 5.16 bring a ton of enhancements and stability to Intel Alder Lake: hybrid (big/little) core support, graphics, and 2.5 gigabit networking. So the next Unraid RC3 should be pretty fire for us.
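Whether a given build clears that 5.15 bar can be checked from the shell. A small sketch of a helper (nothing Unraid ships; it relies on GNU sort's -V version ordering) that compares kernel versions:

```shell
# kernel_at_least REQUIRED [CURRENT]
# Succeeds if CURRENT (default: the running kernel from `uname -r`)
# is at least REQUIRED. sort -V does the version-aware comparison:
# if REQUIRED sorts first (or equal), CURRENT >= REQUIRED.
kernel_at_least() {
  required="$1"
  current="${2:-$(uname -r)}"
  [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

kernel_at_least 5.15 5.14.21 || echo "5.14.21: predates the Alder Lake fixes"
kernel_at_least 5.15 5.15.6  && echo "5.15.6: new enough"
```

On an actual server you would just call `kernel_at_least 5.15` and let it read `uname -r` itself.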
SimonF Posted December 9, 2021

16 hours ago, Deadlystrike said:
My understanding is that 2.5 gigabit support is featured in Linux kernel 5.15, and the current RC2 of Unraid is on 5.14.

This was for MSI motherboards; others may work.
MadMatt337 Posted December 10, 2021

On 12/8/2021 at 7:23 PM, Deadlystrike said:
Is your NIC working at gigabit or 2.5 gigabit speeds? My understanding is that 2.5 gigabit support is featured in Linux kernel 5.15, and the current RC2 of Unraid is on 5.14.

I cannot confirm actual functionality at 2.5Gb speeds right now, unfortunately, as I have not upgraded my network to support it just yet. But my system info is showing it as a supported link mode, so I can only assume it would work.
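For reference, the "supported link mode" check can be scripted. A hedged sketch below; on a real box you would feed it the actual `ethtool eth0` output, but a sample of that output is embedded here so the snippet is self-contained. The mode string to look for is 2500baseT/Full.

```shell
# Does the NIC advertise 2.5 Gb/s? Grep the ethtool output for the
# 2500baseT/Full link mode. Sample output stands in for `ethtool eth0`.
sample='Settings for eth0:
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full'
if printf '%s\n' "$sample" | grep -q '2500baseT/Full'; then
  echo "2.5GbE advertised"
else
  echo "2.5GbE not listed"
fi
```

A supported link mode only means the driver and PHY can negotiate it; as noted above, you still need a 2.5GbE switch or peer to see it actually link at that speed.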
sylus Posted December 12, 2021

On 12/3/2021 at 7:57 AM, DeadDevil6210 said:
For everyone interested in power draw: my Eve Energy smart wall outlet says that in complete idle, power draw goes down to 10-20W; when transcoding on CPU only, it jumped instantly to 50-60W. I can now test power draw for GPU transcoding.

The power consumption was only 10-20W while transcoding with the iGPU? I expected it to be higher. In most cases I read about 50-60W idling.
Blobbonator Posted December 18, 2021

On 12/4/2021 at 8:27 AM, DeadDevil6210 said:
My server crashed every time somebody hit an HEVC file on Plex that needed transcoding. So something makes the whole server freeze. I now deleted the /dev/dri entry from Plex and everything is fine again; the server survived the night without hanging. Will wait for kernel 5.15 or 5.16 in Unraid.

I got it running now (using the official Plex container); I don't even need to provide the /dev/dri device to the container. But for HDR H.265 content I had to disable HDR tone mapping in the Transcoder settings. Transcoding a 2160p HDR movie to 1080p. My Plex transcode settings (sorry, they're in German). Pretty happy for now.

My setup: I'm using intel_gpu_top on 6.10-rc2; I had to remove everything from my go file (so no chmod 777 for /dev/dri and no modprobe command), set my primary output in the BIOS to the iGPU, blacklisted the driver as mentioned above, and added i915.force_probe=4680 to my boot device. Hope this might help.
danielh1267 Posted December 19, 2021

Please forgive my ignorance, but when you mention adding "i915.force_probe=4680" to your boot device, where exactly is it that you added it? I placed it into the "go" file, and now the web GUI won't load anymore after rebooting my server.
danielh1267 Posted December 19, 2021

11 minutes ago, danielh1267 said:
Please forgive my ignorance, but when you mention adding "i915.force_probe=4680" to your boot device, where exactly is it that you added it? I placed it into the "go" file, and now the web GUI won't load anymore after rebooting my server.

After removing i915.force_probe=4680 from my go file, the web GUI problem is resolved. Still curious where you added it to your boot device. Thanks!
Hoopster Posted December 19, 2021

44 minutes ago, danielh1267 said:
Still curious where you added it to your boot device.

Added to syslinux.cfg, on the kernel /bzimage line of whatever the default boot option is. In the example below it is added to "unRAID OS":

default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage i915.force_probe=4680
  append initrd=/bzroot mitigations=off
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest

I have no idea if this is what you need to do, but that is how you add kernel parameters directly to the boot process.
SimonF Posted December 19, 2021

5 hours ago, danielh1267 said:
i915.force_probe=4680

Install Intel GPU TOP by ich777 from Community Apps; that will apply the force_probe. If you are running 6.10-RC2 you will need to blacklist the i915 driver and reboot for Intel GPU TOP to work correctly. Run this on the command line and reboot:

echo "blacklist i915" > /boot/config/modprobe.d/i915.conf

Or add it to syslinux as above, as suggested by @Hoopster.
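Spelled out, that one-liner just drops a modprobe config file on the flash drive. A sketch below; TARGET defaults to a scratch directory here so it is safe to dry-run anywhere, while on a live server you would set TARGET=/boot/config/modprobe.d (the actual Unraid path).

```shell
# Create the i915 blacklist file, then show what it contains.
# The real Unraid path is /boot/config/modprobe.d/i915.conf; a temp
# directory is used by default so this sketch is harmless to run.
TARGET="${TARGET:-$(mktemp -d)}"
echo "blacklist i915" > "$TARGET/i915.conf"
cat "$TARGET/i915.conf"
# To revert once i915 is supported out of the box:
#   rm /boot/config/modprobe.d/i915.conf   (then reboot)
```

Because /boot is the flash drive, the file survives reboots, which is why a single echo is enough to keep the driver blacklisted permanently until you delete it.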
Yellow NL Posted December 20, 2021

Aside from Plex transcoding: any chance of passing the iGPU through to a Windows VM?

And aside from the iGPU support, how is the buying advice regarding Alder Lake as a whole vs the Ryzen 5000 series?
SimonF Posted December 23, 2021

On 12/9/2021 at 5:42 PM, SimonF said:
This was for MSI motherboards; others may work.

Testing on a dev release now: it has support for the 2.5Gb NIC on my MSI Z690 PRO.

root@computenode:~# lspci | grep 06:00.0
06:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
root@computenode:~# cat /var/log/syslog | grep igc
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0: enabling device (0000 -> 0002)
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0: PCIe PTM not supported by PCIe bus/controller
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0 (unnamed net_device) (uninitialized): PHC added
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0 eth0: MAC:
Dec 23 10:24:23 computenode kernel: igc 0000:06:00.0 eth0: PHC removed
Dec 23 10:24:24 computenode kernel: igc 0000:06:00.0: PCIe PTM not supported by PCIe bus/controller
Dec 23 10:24:24 computenode kernel: igc 0000:06:00.0 (unnamed net_device) (uninitialized): PHC added
Dec 23 10:24:24 computenode kernel: igc 0000:06:00.0: 4.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x1 link)
Dec 23 10:24:24 computenode kernel: igc 0000:06:00.0 eth0: MAC:
Dec 23 10:24:35 computenode kernel: igc 0000:06:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
SimonF Posted December 23, 2021

On 12/20/2021 at 10:49 AM, Yellow NL said:
Aside from Plex transcoding: any chance of passing the iGPU through to a Windows VM? And aside from the iGPU support, how is the buying advice regarding Alder Lake as a whole vs the Ryzen 5000 series?

I have not got iGPU passthrough to work as yet, but I need to test with the new dev release.
Excited_idiot Posted December 23, 2021

Do y'all have any clue when Unraid should support kernel 5.15 officially? Is the upgrade process to a supported version fairly straightforward?

Also, that blacklist i915 command: are there any negatives/drawbacks to doing that? Or an easy way to "undo" it once this hardware is officially supported?
JorgeB Posted December 23, 2021

7 minutes ago, Excited_idiot said:
Do y'all have any clue when Unraid should support kernel 5.15 officially?

v6.10-rc3, when released, will be using kernel 5.15.x, or newer if it takes a long time, which I doubt.
SimonF Posted December 23, 2021

1 hour ago, Excited_idiot said:
Also, that blacklist i915 command: are there any negatives/drawbacks to doing that? Or an easy way to "undo" it once this hardware is officially supported?

The file just needs to be removed to revert. Or you can add the force_probe to your syslinux setup if you don't want to blacklist.

1 hour ago, Excited_idiot said:
Is the upgrade process to a supported version fairly straightforward?

Yes, just go to Update OS; if it's an RC, select "next" rather than "stable".
SimonF Posted December 29, 2021

On 12/23/2021 at 5:17 PM, SimonF said:
I have not got iGPU passthrough to work as yet, but I need to test with the new dev release.

Still have not been able to pass through the iGPU; Windows freezes even with it as a second GPU. And something strange happened today: the GPU was removed from its IOMMU group

computenode kernel: pci 0000:00:02.0: Removing from iommu group 2

and it hasn't returned, so I will need to reboot. I'm also getting strange errors from the Nvidia K4000 when I shut down the VM; may be a QEMU issue. This pair of messages repeats roughly once a second (from 17:21:04 through 17:21:12):

Dec 28 17:21:04 computenode kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e0000-0x000e3fff window]
Dec 28 17:21:04 computenode kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
Dec 28 17:21:05 computenode kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e0000-0x000e3fff window]
Dec 28 17:21:05 computenode kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
pepper33 Posted December 30, 2021 (edited)

Sorry to butt in. I have also gotten a 12600K and the MSI Z690 Pro DDR4 and will be setting the server up in the following days with two 16TB HDDs and two NVMe drives (500GB for write cache and 1TB for Dockers and such). So I just wanted to ask: does the Unraid install go smoothly, and is it functional for file storage, if I simply use a USB Ethernet adapter and don't mess with Plex or VMs yet? Thanks for the help.
MadMatt337 Posted December 30, 2021

11 hours ago, pepper33 said:
So I just wanted to ask: does the Unraid install go smoothly, and is it functional for file storage, if I simply use a USB Ethernet adapter and don't mess with Plex or VMs yet?

Mine has been running smoothly on 6.10-rc2 for the last few weeks. I have been running Plex (with HW transcoding on the iGPU; not used much as I direct stream everything, but it did work fine in testing), Sonarr, Radarr, NZBGet, and a Windows 10 VM with a separate video card passed through (on-and-off use right now, but no issues yet). Setup was smooth; nothing out of the ordinary other than the couple of minor extra steps above for Plex HW transcoding.
Hurde Posted January 5, 2022

On 12/30/2021 at 5:17 PM, MadMatt337 said:
Have been running Plex (with hw transcoding on with the igpu, although not used much as I direct stream everything, but did work fine in testing). Setup was smooth, nothing out of the ordinary other than the couple minor extra steps above for Plex HW transcoding.

Have you tried HW transcoding HEVC/H.265 material with HDR tone mapping enabled? I think that is currently the only thing that doesn't work correctly.
MadMatt337 Posted January 5, 2022

8 hours ago, Hurde said:
Have you tried HW transcoding HEVC/H.265 material with HDR tone mapping enabled? I think that is currently the only thing that doesn't work correctly.

I have not yet, as I was still in the process of moving most of my media over from another server, but I can hopefully do some testing today. Will report back once I try it.