ross232 Posted October 7, 2020

Thanks for the rapid release. Updating to 6.9 beta30 now.
efschu Posted October 15, 2020

On 9/29/2020 at 5:47 PM, thor2002ro said:
"... update zfs to 2.0 rc2, fix zfs startup script ..."

Thanks for the work. I'm on 5.9.0rc8-thor-Unraid+.NV.6.9b30.zip and zfs-2.0.0-rc2-x86_64-thor.tgz. JFYI, the ZFS zpool devices in /dev/<zpool>/<poolname> are still missing.
thor2002ro Posted October 15, 2020

Does running this in a terminal work?

./etc/rc.d/rc.zfs start
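For anyone checking the same thing: the quickest sanity test after running the startup script is whether the zfs kernel module is actually loaded. A minimal sketch; the helper name and the sample lsmod output are illustrative, not from a real box:

```shell
# Returns 0 if the given `lsmod` output lists the zfs module
zfs_loaded() {
  printf '%s\n' "$1" | grep -q '^zfs '
}

# On a live box you would feed it the real thing:
#   zfs_loaded "$(lsmod)" && echo "zfs module loaded"
```

If the module is missing, re-running the rc.zfs script (or checking dmesg for a failed modprobe) is the next step.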
thor2002ro Posted October 18, 2020

Updated the zfs package and reuploaded it.
rilles Posted October 25, 2020

I have an AMD 2200G with built-in graphics. The machine boots up fine, but the display stops working at this point:

Tower kernel: amdgpu 0000:09:00.0: Direct firmware load for amdgpu/raven_gpu_info.bin failed with error -2
Tower kernel: amdgpu 0000:09:00.0: amdgpu: Failed to load gpu_info firmware "amdgpu/raven_gpu_info.bin"
Tower kernel: amdgpu 0000:09:00.0: amdgpu: Fatal error during GPU init
Tower kernel: [drm] amdgpu: finishing device.
Tower kernel: amdgpu: probe of 0000:09:00.0 failed with error -2

What do I need to add to stop the amdgpu fail?
rilles Posted October 25, 2020

(Replying to my post above.) After lots of googling and some guessing, this worked:

modprobe.blacklist=amdgpu
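On Unraid, the usual way to make a boot parameter like this persistent is to add it to the "append" line in syslinux.cfg on the flash drive. A small sketch of that edit, assuming the stock config location and layout (the helper name is illustrative):

```shell
# Append a kernel parameter to the "append" line(s) of a syslinux.cfg,
# skipping the edit if the parameter is already there (idempotent).
add_boot_param() {
  cfg="$1"; param="$2"
  grep -q "$param" "$cfg" && return 0
  sed -i "s/^\( *append .*\)$/\1 $param/" "$cfg"
}

# e.g. add_boot_param /boot/syslinux/syslinux.cfg modprobe.blacklist=amdgpu
```

After editing, a reboot is needed for the parameter to take effect.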
thor2002ro Posted October 26, 2020

You forgot to update the bzfirmware. Look into the "5.8.1-20200814" release; there's a bzfirmware update. The one included with unraid is really basic...
thor2002ro Posted October 26, 2020

5.9.1-20201026 release
- update to 5.9.1
- updated paragon ntfs3 v10
- improvements to pcie aer
- nvidia driver update 450.80.02
- updated bzfirmware
- update/add drivers out of tree:
  - corefreq kernel module (added utils for corefreq module; run corefreq-cli-run) d058a1e
  - zfs drivers 3928ec5
  - tbsecp3 drivers 3cdeaee
  - asus-wmi-sensors driver 3
  - r8125 driver 9.003.05
  - r8152 driver 2.13.0
  - ryzen_smu driver 44a0f687
  - tn40xx driver 0.3.6.17.3
  - zenpower driver 0.1.12
rilles Posted October 26, 2020

bzfirmware was the missing bit. All good. Hoping to do some load testing now to see if my AMD still crashes with this kernel, newer than the 6.8.3 one. Thanks for doing this.
rango3221 Posted October 28, 2020

Thank you for this amazing kernel. Performance in general, and in VMs, feels snappier. But I do have one issue when using the nvidia driver utils. I have installed everything as shown in the first post, but when using GPU transcoding in the Jellyfin docker container, playback stalls, complaining about a CUDA init error. The same transcoding works when using the unraid nvidia builds. Any idea what could be the issue?
loomitz Posted November 2, 2020

As in the last comment, the VMs feel better, but I can't get the Nvidia driver to work in dockers. When I run nvidia-smi I get:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P2000        On   | 00000000:0D:00.0 Off |                  N/A |
| 47%   37C    P8     4W /  75W |      0MiB /  5059MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

but in Plex or Emby it's not working. If I try:

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

I get this error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.1, please update your driver to a newer version, or use an earlier cuda container\\\\n\\\"\"": unknown.

Thanks for your work.
thor2002ro Posted November 3, 2020

Yes, I can reproduce it... I think the last kernel version has an issue with docker nvidia... will look into it for the next version...
thor2002ro Posted November 4, 2020

I found the problem: it needs an updated driver. The nvidia docker image is using CUDA 11.1, and the 450.80.02 driver included in the last kernel has CUDA 11.0, lol. It's a mismatch...
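That mismatch can be sanity-checked before launching a container by comparing the driver's reported CUDA version against what the image requires. A minimal sketch, with the two version strings hard-coded from this thread (the helper name is an assumption):

```shell
# Compare two dotted version strings: returns 0 if $1 >= $2
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

driver_cuda="11.0"     # what nvidia-smi reports for driver 450.80.02
container_cuda="11.1"  # what the failing nvidia/cuda image requires

if version_ge "$driver_cuda" "$container_cuda"; then
  echo "OK: driver satisfies the container requirement"
else
  echo "Mismatch: container needs CUDA >= $container_cuda, driver provides $driver_cuda"
fi
```

The other workaround, as the error message itself suggests, is to pull an earlier cuda container tag that matches the installed driver.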
thor2002ro Posted November 6, 2020

5.9.4-20201105
- update to 5.9.4
- alternative unraid driver disk spindown using hdparm
- nvidia driver update 455.38
- updated bzfirmware (get the last one from the 5.9.1-20201026 release)
- update/add drivers out of tree:
  - corefreq kernel module (added utils for corefreq module; run corefreq-cli-run) 5816cf3
  - zfs drivers 52e585a
  - tbsecp3 drivers 3cdeaee
  - asus-wmi-sensors driver 3
  - r8125 driver 9.003.05
  - r8152 driver 2.13.0
  - ryzen_smu driver 44a0f687
  - tn40xx driver 0.3.6.17.3
  - zenpower driver 0.1.12
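For context, the hdparm-based spindown mentioned in the notes above boils down to issuing a standby command per disk: hdparm -y puts a drive into standby immediately, while hdparm -S sets an idle timeout instead. A dry-run sketch with example device names; this is not the actual driver code:

```shell
# Build the spindown command for one disk.
# -y: standby now; alternatively -S 242 would mean standby after 1 hour idle.
spindown_cmd() {
  printf 'hdparm -y %s' "$1"
}

for dev in /dev/sdb /dev/sdc; do     # example device names
  echo "would run: $(spindown_cmd "$dev")"
done
```

Dropping the echo (and running as root against real devices) would perform the spindown for real.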
loomitz Posted November 11, 2020

Don't know what I'm doing wrong, but when I try your kernel everything works except the nvidia drivers with docker; Plex gives me an unknown error. ZFS is working, Zen is working, it's only the nvidia drivers with docker. The performance in VMs is just better and snappier.
thor2002ro Posted November 12, 2020

On 11/11/2020 at 12:50 PM, loomitz said:
"...all works but the nvidia drivers with docker stop working..."

Are you on the latest 5.9.4? I'm using a test version of 5.10, but it uses the same nvidia driver, so it should work.
dunioo Posted November 16, 2020

Hello, great work, thank you. Is there any reason why it wouldn't work with Plex (linuxserver/plex)? The card shows up in nvidia-smi from both the host and the container, but Plex (despite enabling hardware acceleration) doesn't use it. I have installed your kernel + nvidia driver utils from the latest release on unraid 6.8.3. Do you have any advice as to what might be the reason? Thanks in advance.
thor2002ro Posted November 18, 2020

Good news: I figured out the nvidia issue, and nvidia and docker will work in the next version.
thor2002ro Posted November 18, 2020

release 5.10rc4-20201118
- update to 5.10rc4
- nvidia driver update 455.45.01
- fixed nvidia docker
- corsair power supply kernel driver
- Add RCEC handling to PCI/AER v11
- add support for virtio-mem: Big Block Mode (BBM)
- update ntfs3 driver v12
- updated bzfirmware (get the last one from the 5.9.1-20201026 release)
- removed all old AMD GPU reset quirks
- add support for the new AMD GPU reset module vendor_reset (auto resets all AMD GPUs when needed; please don't run any other reset methods; to use, add "modprobe vendor_reset" to the /config/go file). Supports:
  - AMD Polaris 10
  - AMD Polaris 11
  - AMD Polaris 12
  - AMD Vega 10 (Vega 56/64)
  - AMD Vega 20 (Radeon VII)
  - AMD Navi 10 (5600XT, 5700, 5700XT)
  - AMD Navi 12 (Pro 5600M)
  - AMD Navi 14 (Pro 5300, RX 5300, 5500XT)
- update/add drivers out of tree:
  - corefreq kernel module (added utils for corefreq module; run corefreq-cli-run) e4c271f
  - zfs drivers 0ca45cb
  - tbsecp3 drivers removed (not building)
  - asus-wmi-sensors driver 3
  - r8125 driver 9.003.05
  - r8152 driver 2.13.0
  - ryzen_smu driver 44a0f687
  - tn40xx driver 0.3.6.17.3
  - zenpower driver 0.1.12
  - vendor_reset module 6140e2f
- RR272x_1x does not build, so removed
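For the vendor_reset step above, an idempotent way to add the modprobe line to the go file might look like this; the function name is illustrative, and the default path assumes the stock Unraid flash layout:

```shell
# Ensure "modprobe vendor_reset" runs at boot by adding it to the go file,
# without duplicating the line on repeated runs.
enable_vendor_reset() {
  go_file="${1:-/boot/config/go}"   # stock Unraid go file location (assumption)
  grep -qxF 'modprobe vendor_reset' "$go_file" 2>/dev/null || \
    echo 'modprobe vendor_reset' >> "$go_file"
}

# e.g. enable_vendor_reset
```

grep -qxF matches the whole line literally, so the append only happens once no matter how many times this runs.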
dunioo Posted November 18, 2020

I updated to the newest version. It does seem that the nvidia installation has gone further. Unfortunately, somehow it still failed; the logs have the following errors:

/bin/bash: line 14: nvidia-smi: command not found
installation... failed to initialize NVML: Driver/library version mismatch

Here is a photo of this: [screenshot attachment not reproduced]

For some reason, it caused the web panel to not start.

P.S. The hypervisor/libvirt errors are not related; they were there before (something with the VM backup plugin).
thor2002ro Posted November 18, 2020

You need to update the whole thing, not just the packages... the packages are tied to the kernel... bzimage and bzmodules.
dunioo Posted November 18, 2020

I did replace all of the provided files with the new release, for both the nvidia drivers and the kernel package (did it once again just to be sure), but the problem persists. In the first post you put this:

"dont try to install the utils packages on any modified rootfs .... they need to be installed on stock unraid rootfs"

Does that mean that I should replace those files using the original unraid files from before I started?
thor2002ro Posted November 18, 2020

No... you don't need the original files... but the kernel you are loading seems to be 5.9.4, not 5.10rc4... so the bzimage is not the right one, for sure...
dunioo Posted November 18, 2020

I am probably doing something very stupid here, but it seems to me that those are exactly the same files: [screenshot attachment not reproduced]
thor2002ro Posted November 18, 2020

Dunno what to say... maybe it was a flash write error. Try overwriting the files again, be sure to safely remove the flash stick, and try again...
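One way to rule out a bad copy is to compare the files on the flash drive byte-for-byte against the ones in the extracted release; a sketch, with the function name and paths as assumptions:

```shell
# Compare the kernel files copied to the flash drive against the extracted
# release; any mismatch suggests a bad copy or a flash write error.
verify_copies() {
  src_dir="$1"; dst_dir="$2"; status=0
  for f in bzimage bzmodules bzfirmware; do
    if cmp -s "$src_dir/$f" "$dst_dir/$f"; then
      echo "$f: OK"
    else
      echo "$f: MISMATCH"; status=1
    fi
  done
  return $status
}

# e.g. verify_copies ./release /boot
```

cmp -s is a byte-for-byte comparison, so it catches truncated or corrupted writes that identical file names and sizes in a file manager would hide.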