ich777 Posted June 15, 2020 (Author)
6 hours ago, Dazog said: Can we be provided an option to specify where we get the Nvidia drivers? For example, Nvidia has beta drivers listed here: http://developer.download.nvidia.com/compute/cuda/11.0.1/local_installers/cuda_11.0.1_450.36.06_linux.run - could the container get a custom field for our own download link?
You can do that; I'll write you a private message.
ich777 Posted June 15, 2020 (Author)
3 hours ago, Marshalleq said: Very nice plugin! I think I have some ideas to expand on the ZFS section too...
Let me know what you have in mind in a private message.
ich777 Posted June 15, 2020 (Author)
6 hours ago, mkfelidae said: Looks good here: it shows the Nvidia GPU information I would need to pass a GPU to a Docker container, shows my ZFS information (currently no pools is correct, I haven't set any up yet) and also shows that there are DVB adapters on my system. Fine work, I must say.
What DVB cards are you using? I hope you don't mind me asking, but can you post a screenshot if you are using the LibreELEC, Xbox One USB or TBS drivers?
mkfelidae Posted June 15, 2020
6 hours ago, ich777 said: What DVB cards are you using?
I use two USB tuner sticks, both from Hauppauge: a WinTV-HVR (a single ATSC tuner with composite video input as well) and a WinTV-DualHD (a dual ATSC tuner with no other features). The two appear to use different drivers, and the DualHD shows up as the same driver twice. This uses the LibreELEC drivers as far as I know; I have always used the LibreELEC build before, as that was the only build that showed my tuners.
ich777 Posted June 15, 2020 (Author)
48 minutes ago, mkfelidae said: I use two USB tuner sticks, both from Hauppauge, and the DualHD shows up as the same driver twice.
Thank you very much! I should rename that a little bit in the plugin: it actually shows the loaded kernel modules for the installed DVB cards/tuners (there is no other good way of showing the real name while the cards/tuners are in use by another application). The version that is shown is also the version of the first kernel module in the first slot, but I think this should do for now, since they are showing up.
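The module lookup described above can be sketched from sysfs. This is a hedged guess at the mechanism, not the plugin's actual code; the paths follow the standard Linux DVB sysfs layout:

```shell
#!/bin/bash
# List each distinct kernel module backing the installed DVB adapters.
# Sketch only: the real plugin implementation may differ.
list_dvb_modules() {
  for dev in /sys/class/dvb/*/device/driver/module; do
    # resolve the module symlink to its name, e.g. em28xx
    [ -e "$dev" ] && basename "$(readlink -f "$dev")"
  done | sort -u   # a dual tuner exposes two adapters, so dedupe
}
list_dvb_modules
```

With this approach a DualHD would print its shared module once instead of twice, which matches the "same driver twice" observation above being a per-adapter view.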
suyac Posted June 16, 2020 (edited)
Found this in the logs while the container was working; maybe it's nothing, but I'll put it here anyway:
/opt/scripts/start-server.sh: line 235: [: 0.8.4: integer expression expected
---One or more Stock Unraid v6.8.3 files not found, downloading...---
---Latest version for ZFS: v0.8.4---
/opt/scripts/start-server.sh: line 235: [: 0.8.4: integer expression expected
---One or more Stock Unraid v6.8.3 files not found, downloading...---
---Successfully downloaded Stock Unraid v6.8.3---
Other than that it looks good. I only have a GPU in my test server, no tuner cards.
Edited June 16, 2020 by suyac
ich777 Posted June 17, 2020 (Author, edited)
13 hours ago, suyac said: /opt/scripts/start-server.sh: line 235: [: 0.8.4: integer expression expected
This is actually on my to-do list but not a high priority, since it will not affect any user (at least unless someone sets ZFS to version 0.7 or lower). FIXED
13 hours ago, suyac said: Other than that it looks good. I only have a GPU in my test server, no tuner cards.
Appreciated.
Edited June 17, 2020 by ich777
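For reference, that warning comes from `[` only doing integer comparisons, so a dotted version string like 0.8.4 trips it. A common fix is to compare with `sort -V`; this is a sketch of the idea, not necessarily how it was fixed in start-server.sh:

```shell
#!/bin/bash
# Something like [ "$ZFS_VERSION" -ge 1 ] fails with
# "integer expression expected" on "0.8.4", because [ compares integers only.
# sort -V understands dotted version strings, so compare via sorting instead:
version_ge() {   # returns 0 (true) if $1 >= $2
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if version_ge "0.8.4" "0.8.0"; then
  echo "ZFS 0.8.4 is at least 0.8.0"
fi
```

If the smaller of the two versions sorts first and equals the minimum, the installed version is new enough; this handles 0.7 vs 0.8 as well as 0.8.4 vs 0.8.0.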
Marshalleq Posted June 17, 2020
I see beta 22 has been released here. So when I get a chance, I will install it and compile for it. I won't be able to try for another 4 hours at least, though. @ich777 it will be interesting to see if this container handles it already or needs changes to make it work.
Alphacosmos Posted June 18, 2020
On 6/15/2020 at 8:20 AM, ich777 said: @TexasDave, @Ramiii, @MowMdown, @Jus, @Marshalleq, @suyac, @monstahnator, @scottc, @Alphacosmos, @mkfelidae, @sjaak The beta plugin is out right now: https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg Please feel free to test and report back if something is wrong or not working properly.
Yeah, got it going, nice work man. How would I amend the container to work with the new 6.9 beta 22 that released a few hours ago? I'd love to test the pools.
ich777 Posted June 18, 2020 (Author, edited)
9 hours ago, Marshalleq said: I see beta 22 has been released. It will be interesting to see if this container handles it already or needs changes to make it work.
Should just work fine.
EDIT: Made a small modification so that DVB builds everything (one module was missing).
4 hours ago, Alphacosmos said: Yeah, got it going, nice work man. How would I amend the container to work with the new 6.9 beta 22 that released a few hours ago?
Please read the bottom of the first page, but I would do it like this:
1. Update to the new stock beta and reboot.
2. Download/redownload the template from the CA App and change the following things:
3. Change the repository from 'ich777/unraid-kernel-helper:6.8.3' to 'ich777/unraid-kernel-helper:6.9.0'.
4. Select the build options that you prefer, click on 'Show more settings...' and set Beta Build to 'true'.
5. Start the container; it will create the folder '/stock/beta' inside the main folder.
6. Place the files bzimage, bzroot, bzmodules and bzfirmware in the folder from step 5 (after the start of the container you have 2 minutes to copy over the files; if you don't copy them within these 2 minutes, simply restart the container and the build will start once it finds all the files).
7. Reboot.
Doesn't work with 6.9.0 beta22 @Marshalleq & @Alphacosmos & @Dazog
EDIT2: Fixed everything, it now builds successfully. Please be sure to update to the new beta first, reboot, and then do the steps above.
Edited June 18, 2020 by ich777
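The file-copy steps above can be scripted. The appdata path and the container name below are assumptions (adjust them to your own template settings), and the 2-minute window behaviour is as described in the instructions:

```shell
#!/bin/bash
# Sketch: verify the four stock files are in the beta folder, then restart
# the container so its 2-minute copy window re-arms.
# BETA_DIR and the container name are assumptions, not confirmed defaults.
BETA_DIR="/mnt/user/appdata/unraid-kernel-helper/stock/beta"

all_files_present() {   # returns 0 if all four stock files exist in $1
  for f in bzimage bzroot bzmodules bzfirmware; do
    [ -f "$1/$f" ] || return 1
  done
}

if all_files_present "$BETA_DIR"; then
  docker restart Unraid-Kernel-Helper   # container name is an assumption
else
  echo "Copy bzimage/bzroot/bzmodules/bzfirmware from /boot into $BETA_DIR first"
fi
```

Checking for all four files before restarting avoids burning the 2-minute window on an incomplete copy.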
Dazog Posted June 18, 2020
1 hour ago, ich777 said: Should just work fine. Please read the bottom of the first page, but I would do it like this: ...
Is it possible the Nvidia drivers do not support kernel 5.7?
unRAID rc.docker: EmbyServer: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
I'm getting errors with any Docker container using the Nvidia card, and the card doesn't show up in the plugin.
ich777 Posted June 18, 2020 (Author)
Just now, Dazog said: Is it possible the Nvidia drivers do not support kernel 5.7? I'm getting errors with any Docker container using the Nvidia card, and the card doesn't show up in the plugin.
Something changed at build; I have to look into this, but I have to go to my real work now...
Dazog Posted June 18, 2020
Just now, ich777 said: Something changed at build; I have to look into this, but I have to go to my real work now...
No worries, rolling back to the non-Nvidia build for now. Let me know later if you need testing help.
Dazog Posted June 18, 2020
6.9 beta 22 works. Fixed in under an hour. Best!!!
ich777 Posted June 18, 2020 (Author)
Everything is now fixed, and a prebuilt Unraid 6.9.0beta22 with nVidia built in has been uploaded: Unraid 6.9.0beta22 with nVidia
Pducharme Posted June 18, 2020
1 hour ago, ich777 said: Everything is now fixed, and a prebuilt Unraid 6.9.0beta22 with nVidia built in has been uploaded.
Wow, that's very nice work!!
ich777 Posted June 18, 2020 (Author, edited)
14 minutes ago, Pducharme said: Wow, that's very nice work!!
Appreciated. You could also build your own kernel/images if you are interested. Please read the descriptions in the template and the first post of this thread carefully and you should be ready to go.
Edited June 18, 2020 by ich777
Pducharme Posted June 18, 2020
OK, I'm now on your prebuilt 6.9.0beta22 with the NVIDIA driver baked in. Works well. I can confirm my Plex still does HW transcoding, and System Temps now works fine on my X570 motherboard with a Ryzen 9 3900X.
Question: is there a newer version of the NVIDIA driver available? I thought that 440.82 is pretty old (not sure).
ich777 Posted June 18, 2020 (Author, edited)
28 minutes ago, Pducharme said: OK, I'm now on your prebuilt 6.9.0beta22 with the NVIDIA driver baked in. Works well. I can confirm my Plex still does HW transcoding, and System Temps now works fine on my X570 motherboard with a Ryzen 9 3900X.
Keep in mind you can always build them yourself (a plugin for my kernel/images is also available in the CA App). Good to hear!
28 minutes ago, Pducharme said: I thought that 440.82 is pretty old (not sure).
This is the latest version for Linux (note that the Linux and Windows drivers are not on the same version; the Windows drivers are always ahead). You can always look up the latest release for Linux here: Click, or you can go to the index of all versions: Click. (There is also a beta version available, but you have to implement that manually in the build script.)
Edited June 18, 2020 by ich777
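If you want to script that lookup, NVIDIA has long published a latest.txt on its Linux driver mirror. This is a sketch; verify the endpoint still exists before relying on it:

```shell
#!/bin/bash
# Fetch the latest Linux x86_64 driver version from NVIDIA's download mirror.
# latest.txt contains a line like:
#   440.82 440.82/NVIDIA-Linux-x86_64-440.82.run
parse_version() { awk '{print $1; exit}'; }   # first field of the first line

latest=$(curl -fsSL https://download.nvidia.com/XFree86/Linux-x86_64/latest.txt \
           | parse_version)
echo "Latest Linux driver: ${latest:-unknown (download failed?)}"
```

A script like this could feed a custom driver version into a build instead of hardcoding 440.82, which is the kind of custom download option discussed earlier in the thread.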
Pducharme Posted June 18, 2020
21 minutes ago, ich777 said: (There is also a beta version available, but you have to implement that manually in the build script.)
Oh! This is what got me confused; I didn't see it was a beta of the driver.
Does anyone know how to add one of the language packs (I mean the official ones already created)? If I go to Settings / Display, there is only English in the drop-down box.
ich777 Posted June 18, 2020 (Author, edited)
9 minutes ago, Pducharme said: Oh! This is what got me confused; I didn't see it was a beta of the driver. Does anyone know how to add one of the language packs?
No, this is not a beta driver; there is just also a beta driver available (the CUDA driver package). My container is built with the latest official Linux version of the nVidia drivers.
You have to update the CA App if it's not on the newest version, and then search for your language in the CA App. I can only speak for German, but that works just fine.
Edited June 18, 2020 by ich777
Pducharme Posted June 18, 2020
@ich777 Thanks! Now in French. I'm still trying to find a way to get my Win10 VM booting. I know that QEMU 5.0 has an issue with Ryzen 3000 series CPUs causing a kernel crash.
ich777 Posted June 18, 2020 (Author, edited)
3 minutes ago, Pducharme said: I'm still trying to find a way to get my Win10 VM booting. I know that QEMU 5.0 has an issue with Ryzen 3000 series CPUs causing a kernel crash.
Is this possibly what you are searching for (the comment from @rango3221)? Please keep in mind that I also got problems when having the nVidia drivers preinstalled for transcoding/Steam Docker containers and trying to run a VM on the same machine. I'm on Intel, and I think you actually need a second graphics card to boot up the VM; this is also addressed in the Linuxserver.io thread.
Edited June 18, 2020 by ich777
Pducharme Posted June 19, 2020
11 hours ago, ich777 said: Is this possibly what you are searching for (the comment from @rango3221)?
After reading a couple of options: since this VM only hosts backup server software, I switched the CPU to Emulated instead of Host Passthrough. I don't really mind the degraded performance. There is a CPU feature passed through with QEMU 5.0 that doesn't work on Ryzen, only on EPYC, and that's what is causing this. Alternatively, you can modify the domain XML of the VM to add a bunch of stuff.
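For anyone taking the emulated-CPU route via the domain XML instead of the GUI, the change boils down to swapping host passthrough for a fixed CPU model. The model name and topology below are illustrative only, not taken from an actual working config:

```xml
<!-- before: <cpu mode='host-passthrough' check='none'/> -->
<!-- after: emulate a fixed CPU model so QEMU 5.0 does not pass the
     problematic host feature through to the Ryzen guest.
     Model and topology here are examples; adjust to your VM. -->
<cpu mode='custom' match='exact' check='none'>
  <model fallback='allow'>EPYC</model>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```

With mode='custom' the guest sees only the features of the named model, which sidesteps the passed-through feature mentioned above at the cost of some performance.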
Marshalleq Posted June 19, 2020
Hmmm, so I just went to do this: my Docker runs on ZFS, and apparently I need to have installed (or otherwise obtained) the new kernel first. Since Docker doesn't start without ZFS for me, I can't build the new kernel. I'm sure there's a manual download somewhere; I will try to use that, but I have a feeling there's some optimisation potential for the container here somewhere... Possibly I could install the other ZFS plugin in order to build this ZFS kernel, lol. Seems backwards, but it'll work, I'm sure; probably the easiest option, thinking about it.