thor2002ro

Members · 51 posts

Everything posted by thor2002ro

  1. Works great. I have some suggestions: an option to back up docker.img would be useful if you have lots of custom dockers, and I think the flash backup could use compression. The rest looks good. Good job 👌
  2. Seems everyone is ignoring the fact that your server will need to permanently phone home. I guess next they will be pushing ads to the server web interface. Not OK with this; my server should be fully private.
  3. I will update it when I get some time, but I'm currently a little swamped at work. The current kernel works great at the moment, so I'm not in any hurry; I didn't observe any issues with the current build on my server.
  4. As I remember, the 5700 XT reset issue is with non-reference/Founders Edition cards; it doesn't matter what kernel version you are on.
  5. Sorry to hear that. My Vega 56 has been great with the reset module, apart from not being able to reboot a VM: I need to shut it down and start it up again if I need to reboot.
  6. Sounds weird; I don't know about Plex. I use Jellyfin and Emby. I have a GTX 970 (primary) and a Vega 56; I use the Nvidia card for transcoding and can start a VM with it, no issue. I don't need to shut down any docker container or prepare it in any way, and after the VM is stopped the driver resumes. While the VM is running, the docker container transcodes with the CPU; when the GPU is available again, it switches back to the GPU on the next transcode.
     When transcoding, the power state should usually be P2, and P0 when idle in power saving. Also, starting and stopping a VM with the Nvidia card usually resets Nvidia persistence mode. This is tested extensively, since I use it every day (Nvidia for a Linux VM, Vega for a gaming VM), and I never once needed to reset the Nvidia card. I've never seen the P8 state when transcoding; that seems like it's just playing video. The "nvidia-smi -pm 1" command the Nvidia install uses should enable persistence mode for all cards, but as mentioned, only until a VM uses the card; then it resets. I will investigate the nvidia-persistenced daemon; maybe it will keep the cards' persistence mode enabled more reliably.
     You could try disabling autostart on any docker container that uses the card and see if stopping and starting the VM with passthrough is fine. Depending on the platform you are using, there might be IOMMU issues. Don't forget to look at the system log and dmesg on the command line.
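The persistence-mode check described above can be sketched as a small script; this is a minimal, hedged example assuming nvidia-smi is on the PATH (enabling persistence mode needs root, and the query field names are the standard nvidia-smi ones). It exits cleanly on machines with no NVIDIA driver:

```shell
# Sketch: re-enable persistence mode after a VM releases the card,
# then show each GPU's persistence mode and power state (P0/P2/P8).
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -pm 1   # enable persistence mode for all cards (needs root)
    nvidia-smi --query-gpu=name,persistence_mode,pstate --format=csv
else
    echo "nvidia-smi not found; nothing to check"
fi
```

Running this after stopping a passthrough VM shows whether persistence mode was reset by the VM lifecycle, which is the behavior described in the post.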
  7. I wouldn't, if they changed the version of the md driver. But on the other hand it won't explode; it just won't start the array, that's it.
  8. Glad you fixed it. I really need to make a plugin-like install method to make stuff easier. PS: after some testing, it seems the amdgpu driver doesn't need to be blacklisted if you use an AMD GPU for passthrough to a Windows VM; that works great. Not so great with a Linux VM, though: it crashes the driver.
  9. Dunno what to say; maybe it was a flash write error. Try to overwrite the files again, be sure to safely remove the flash stick, and try again.
  10. No, you don't need the original files. But the kernel you are loading seems to be 5.9.4, not 5.10rc4, so the bzimage is not the right one for sure.
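A quick way to confirm which kernel actually booted: the running kernel release comes from the loaded bzimage, not from whatever files sit on the flash drive. No assumptions here beyond a Linux shell:

```shell
# Print the running kernel release. If this says 5.9.4 while you
# expected 5.10rc4, the bzimage that was booted is the old one.
uname -r
```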
  11. You need to update the whole thing, not just the packages; the packages are tied to the kernel (bzimage and bzmodules).
  12. release 5.10rc4-20201118
      • update to 5.10rc4
      • nvidia driver update 455.45.01; fixed nvidia docker
      • corsair power supply kernel driver
      • add RCEC handling to PCI/AER v11
      • add support for virtio-mem: Big Block Mode (BBM)
      • update ntfs3 driver v12
      • updated bzfirmware (get the latest from the 5.9.1-20201026 release)
      • removed all old AMD GPU reset quirks
      • add support for the new AMD GPU reset module vendor_reset (auto-resets all AMD GPUs when needed; please don't run any other reset methods; to use it, add "modprobe vendor_reset" to the /config/go file). Supports:
        AMD | Polaris 10
        AMD | Polaris 11
        AMD | Polaris 12
        AMD | Vega 10 | Vega 56/64
        AMD | Vega 20 | Radeon VII
        AMD | Navi 10 | 5600XT, 5700, 5700XT
        AMD | Navi 12 | Pro 5600M
        AMD | Navi 14 | Pro 5300, RX 5300, 5500XT
      • update/add out-of-tree drivers:
        corefreq kernel module (added utils for the corefreq module; run corefreq-cli-run) e4c271f
        zfs drivers 0ca45cb
        tbsecp3 drivers removed (not building)
        asus-wmi-sensors driver 3
        r8125 driver 9.003.05
        r8152 driver 2.13.0
        ryzen_smu driver 44a0f687
        tn40xx driver 0.3.6.17.3
        zenpower driver 0.1.12
        vendor_reset module 6140e2f
        RR272x_1x removed (does not build)
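The go-file change mentioned in the vendor_reset note above is a single line. A sketch of what /config/go on the flash drive would look like afterwards (the modprobe line is the only addition from the release notes; the emhttp line is what the stock go file ships with, included here for context):

```shell
#!/bin/bash
# /config/go -- runs once at boot on unRAID
# Load the vendor_reset module before the array/VMs start (from the release notes).
modprobe vendor_reset
# Start the Management Utility (stock go-file line).
/usr/local/sbin/emhttp &
```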
  13. 5.9.4-20201105
      • update to 5.9.4
      • alternative unraid driver: disk spindown using hdparm
      • nvidia driver update 455.38
      • updated bzfirmware (get the latest from the 5.9.1-20201026 release)
      • update/add out-of-tree drivers:
        corefreq kernel module (added utils for the corefreq module; run corefreq-cli-run) 5816cf3
        zfs drivers 52e585a
        tbsecp3 drivers 3cdeaee
        asus-wmi-sensors driver 3
        r8125 driver 9.003.05
        r8152 driver 2.13.0
        ryzen_smu driver 44a0f687
        tn40xx driver 0.3.6.17.3
        zenpower driver 0.1.12
  14. I found the problem: it needs an updated driver. The nvidia docker is using CUDA 11.1, and the 450.80.02 driver included in the last kernel has CUDA 11.0, lol. It's a mismatch.
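A quick way to see the two versions involved in a mismatch like this; a minimal sketch assuming nvidia-smi is installed (the banner of plain nvidia-smi prints the CUDA version the driver supports). It exits cleanly on machines with no NVIDIA driver:

```shell
# Show the installed kernel driver version, then the nvidia-smi banner,
# whose header line includes "CUDA Version: X.Y". A docker image built
# for CUDA 11.1 needs a driver that supports CUDA 11.1 or newer.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
    nvidia-smi | head -n 4
else
    echo "no NVIDIA driver installed"
fi
```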
  15. Yes, I can reproduce it. I think the last kernel version has an issue with docker nvidia; I will look into it in the next version.
  16. 5.9.1-20201026 release
      • update to 5.9.1
      • updated paragon ntfs3 v10
      • improvements to PCIe AER
      • nvidia driver update 450.80.02
      • updated bzfirmware
      • update/add out-of-tree drivers:
        corefreq kernel module (added utils for the corefreq module; run corefreq-cli-run) d058a1e
        zfs drivers 3928ec5
        tbsecp3 drivers 3cdeaee
        asus-wmi-sensors driver 3
        r8125 driver 9.003.05
        r8152 driver 2.13.0
        ryzen_smu driver 44a0f687
        tn40xx driver 0.3.6.17.3
        zenpower driver 0.1.12
  17. You forgot to update the bzfirmware. Look into the "5.8.1-20200814" release; there's a bzfirmware update there. The one included with unraid is really basic.