Jus Posted June 7, 2020

Wow, thanks for the swift response. So as I understand it, the v6.9.0 beta1 release from the first page would be the most straightforward for me. OK, will try that. I don't have critical data on the array yet; it is just a week old, as I am new and experimenting with it. I just spent quite some time setting things up, and I'm trying to be careful not to totally break it.
ich777 (Author) Posted June 7, 2020

2 minutes ago, Jus said: Wow, thanks for the swift response. So as I understand it, the v6.9.0 beta1 release from the first page would be the most straightforward for me. OK, will try that. I don't have critical data on the array yet; it is just a week old, as I am new and experimenting with it.

Always keep in mind to back up your old files from the USB flash drive (bzimage, bzroot, bzmodules, bzfirmware) to your local computer.
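That backup step can be sketched as a small shell function. The file names are the ones mentioned in the post above; the paths in the example invocation are placeholders, since the actual flash mount on Unraid is /boot but the backup destination is entirely up to you:

```shell
#!/bin/bash
# Hedged sketch: back up the Unraid boot files before replacing them.
# The file list comes from the post above; destination path is an example.
backup_boot_files() {
    local boot="$1" backup="$2" f
    mkdir -p "$backup" || return 1
    for f in bzimage bzroot bzmodules bzfirmware; do
        cp -v "$boot/$f" "$backup/" || return 1
    done
}

# On a live server this would look something like (hypothetical destination):
# backup_boot_files /boot "/mnt/user/backups/boot-$(date +%Y%m%d)"
```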
Marshalleq Posted June 7, 2020

Yes, that's good advice when playing with kernels (they're in /boot, BTW).
Jus Posted June 8, 2020 (edited)

@ich777 @Marshalleq It worked like a charm! So far absolutely no issues noticed; everything is running as it should. Will do some stress testing later on just to be 100% sure that it is stable enough for me not to worry anymore. Thanks a bunch!!!

Edited June 8, 2020 by Jus
TexasDave Posted June 9, 2020

Not looking to start any controversy, but what are the pros/cons of this method versus the Linuxserver.io plugin? I finally broke down and picked up a 1050 Ti to do Plex, Unmanic, and F@H, and I'm trying to decide what route to go. Thanks!
ich777 (Author) Posted June 9, 2020 (edited)

1 hour ago, TexasDave said: Not looking to start any controversy, but what are the pros/cons of this method versus the Linuxserver.io plugin? I finally broke down and picked up a 1050 Ti to do Plex, Unmanic, and F@H, and I'm trying to decide what route to go. Thanks!

Totally understandable (I also use many containers from Linuxserver.io myself). I mainly created this container because I needed some extra modules in the kernel, plus nVidia drivers together with DVB drivers in the images (so that I can pass the DVB streams from my DigitalDevices cards, which are managed by TVHeadend, to Emby and transcode them within Emby). I decided to release the container to the public so that everyone who wants to update the drivers or needs ZFS support can build their own images.

My personal pros:
- You can build your own images/kernel with your preferred add-ons (nVidia drivers, DVB drivers, ZFS) and with the latest driver versions (or simply download prebuilt images from the first post in this thread).
- You can customize the build process if you want to add or remove something (Custom Build mode in the template).
- You can also build images/kernels for beta builds (description in the first post).
- By default, everything is done automatically after the container is started and is finished when the container stops (please keep an eye on the logs for status information).

My personal cons:
- I have no plugin (yet) to read the GPU UUID from the graphics card. This can be done from the terminal with the command 'nvidia-smi -L' (without quotes); you can also use the Linuxserver.io plugin to get the UUID, but it won't show some information.
- You have to place the files onto your USB flash device yourself (don't forget to back up the existing files in case something goes wrong).
- The build process can take quite long, but on modern hardware not more than 15 minutes.

It's up to you which way you choose. You can try both; you only have to replace the files bzroot, bzimage, bzmodules, and bzfirmware, and that's it. But the main difference is that you can build the images yourself with the latest drivers (I think Linuxserver.io only builds with the then-current drivers when a major Unraid release is done). If you have any questions, please feel free to contact me or write a short post in this thread.

Edited June 9, 2020 by ich777
TexasDave Posted June 9, 2020 (edited)

Thank you for this work and your answer; it is why I love unRAID and the unRAID community!

Edit: so the only work/maintenance needed is to download the container, run it, and move the files each time unRAID gets a new version? Or whenever we want to update drivers?

Edited June 9, 2020 by TexasDave
ich777 (Author) Posted June 9, 2020 (edited)

51 minutes ago, TexasDave said: Edit: so the only work/maintenance needed is to download the container, run it, and move the files each time unRAID gets a new version? Or whenever we want to update drivers?

Basically yes, but here is a short tutorial. (Make sure that you stop all VMs and Docker containers before you start building, to avoid problems in the first place.)

1. Go to the CA App.
2. Search for Unraid-Kernel-Helper and click download.
3. In the template, select which drivers/features you want to enable (you can leave the other options, for example the versions, as they are, even if you don't install them).
4. Optionally disable the Backup option if you want to do the backup by hand.
5. Select the cleanup method (I recommend leaving it at 'moderate'; after the build finishes it will clean up the directory, leaving only the downloaded zip files/drivers and the output folder).
6. Click Apply.
7. Open the log window for the container and wait for it to finish (after it finishes, the container will stop).
8. Copy the created files from the output directory to your USB boot device (be sure to back up the old bzroot, bzimage, bzmodules, and bzfirmware locally to your computer).
9. Reboot the server.
10. After the restart, delete the output folder from the kernel directory and also remove the container from your Docker tab (see below for why).

If you want to build the images/kernel again because you need more features or a new driver version is released (delete the container from your server first if you left it installed from a previous build):

1. Go to the CA App.
2. Search for Unraid-Kernel-Helper and click download (I recommend this because over time I will add features/drivers and change the template slightly, so this way the template is always the newest version).
3. Continue at step 3 from the tutorial above, with the advantage that the container doesn't have to download the kernel or the Unraid files again (it only downloads them again if a newer version of Unraid has been released).

(Please note: if you are using the Cleanup Appdata app, I recommend not deleting the main folder for this container, since if you build the images/kernel again, the container then doesn't need to download the necessary files, for example the kernel, again; it only re-downloads if a newer version of Unraid has been released.)

I hope this all makes sense. If you try the container, it would be nice to let me know whether everything worked well.

Edited June 9, 2020 by ich777
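Steps 7 to 9 of the tutorial above can be sketched as shell. All three paths here are assumptions (the real output folder location depends on your template settings), so adjust them to your own setup:

```shell
#!/bin/bash
# Hedged sketch of steps 7-8: keep a local copy of the old images, then
# install the freshly built ones from the container's output folder.
install_built_images() {
    local output="$1" boot="$2" backup="$3" f
    mkdir -p "$backup" || return 1
    for f in bzimage bzroot bzmodules bzfirmware; do
        cp "$boot/$f" "$backup/"  || return 1   # back up the old file first
        cp "$output/$f" "$boot/"  || return 1   # then install the new one
    done
}

# Example invocation on a live server (all paths hypothetical), followed by
# the reboot from step 9:
# install_built_images /mnt/user/appdata/kernel/output /boot /mnt/user/backups/boot
# reboot
```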
MowMdown Posted June 9, 2020 (edited)

1 hour ago, ich777 said: Select the cleanup method (I recommend leaving it at 'moderate'; after the build finishes it will clean up the directory, leaving only the downloaded zip files/drivers and the output folder).

There is a typo in the Docker config: you left off the 'e' at the end of 'moderate', so it reads 'moderat'. I'm afraid that if the default value 'moderat' is left as-is, there will be issues; it should be changed to 'moderate'. There are some other typos where the trailing 'e' is also missing.

Edited June 9, 2020 by MowMdown
ich777 (Author) Posted June 9, 2020

2 minutes ago, MowMdown said: There is a typo in the Docker config: you left off the 'e' at the end of 'moderate', so it reads 'moderat'. I'm afraid that if the default value 'moderat' is left as-is, there will be issues; it should be changed to 'moderate'.

Thank you for reporting that; I have already changed the template, but it will take a few hours to update. (By the way, the container simply does no cleanup of the directory if the value is set to anything other than 'moderate' or 'full', or is left blank.)
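The fallback behavior described here can be illustrated with a small case statement. This is a hedged sketch of the template logic as described in the post, not the container's actual code, and the thread does not detail how 'moderate' and 'full' differ:

```shell
#!/bin/bash
# Illustration only: any value other than 'moderate' or 'full' (including
# the old typo 'moderat' or an empty value) results in no cleanup at all.
cleanup_action() {
    case "$1" in
        moderate|full) echo "cleanup" ;;     # exact scope per mode not shown here
        *)             echo "no-cleanup" ;;  # unknown/blank values are harmless
    esac
}
```

So the typo was benign: a build configured with 'moderat' simply skipped cleanup rather than misbehaving.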
Ramiii Posted June 10, 2020

Hello, I used the pre-built image. It detected my DVB card in the container, but it fails to scan every time. When I revert back to the original DVB version (Unraid DVB), it works. Just reporting this; I will do my own build and try again. Thanks, Rami
ich777 (Author) Posted June 10, 2020 (edited)

11 minutes ago, Ramiii said: Hello, I used the pre-built image. It detected my DVB card in the container, but it fails to scan every time. When I revert back to the original DVB version (Unraid DVB), it works.

Which prebuilt image are you using? Can you please tell me which DVB cards you are using? Also, which Unraid DVB build are you using?

EDIT: Are you using TVHeadend? Can you also provide a screenshot of the Adapters page?

Edited June 10, 2020 by ich777
Ramiii Posted June 10, 2020

Quoting ich777: Which prebuilt image are you using? Can you please tell me which DVB cards you are using? Also, which Unraid DVB build are you using?

I'm using this card (TeVii S480 / S482 Dual DVB-S2 PCIe) and Unraid 6.8.3.
Ramiii Posted June 10, 2020

Thanks to @ich777 and his help. We tested a couple of builds, and now mine is fully working with both the DVB card and the GPU. Really appreciate it.
TexasDave Posted June 10, 2020

Went through the process to install. Smooth as butter. 🙂 Now, to get it to work with Plex (plus F@H and Unmanic), do I need to follow the Linuxserver.io instructions to reconfigure the containers?

Plex:
- Add --runtime=nvidia
- Copy the GPU UUID to the existing NVIDIA_VISIBLE_DEVICES parameter

And for F@H and Unmanic:
- Add --runtime=nvidia
- Add NVIDIA_DRIVER_CAPABILITIES and set it to all
- Add NVIDIA_VISIBLE_DEVICES and set it to the GPU UUID

And should I use the Linuxserver.io plugin to get the GPU UUID, or another command? How best to test whether it is working? Thanks!
ich777 (Author) Posted June 10, 2020 (edited)

1 hour ago, TexasDave said: Went through the process to install. Smooth as butter. 🙂 Now, to get it to work with Plex (plus F@H and Unmanic), do I need to follow the Linuxserver.io instructions to reconfigure the containers? And should I use the Linuxserver.io plugin to get the GPU UUID, or another command? How best to test whether it is working?

Nice. Yep, exactly (this can be done for any container that supports hardware acceleration on nVidia graphics cards). You can install the nVidia plugin from Linuxserver.io, or you can open a terminal window on your server and type 'nvidia-smi -L' to see all installed graphics cards along with their UUIDs. I think for Plex you must also enable hardware transcoding in its settings, but I'm not 100% sure, since I'm using Emby. If you are transcoding something and want to watch whether it's working, open a terminal window on your server and type 'watch nvidia-smi'; this brings up a view with all the information about your graphics card and, at the bottom, which processes are using it.

EDIT: Forgot to say: if nvidia-smi works, the driver works in general; after that it's only a configuration matter for the containers.

EDIT2: Also please note that most consumer cards only support simultaneous encoding of 3 streams: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

Edited June 10, 2020 by ich777
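Put together, the container changes discussed in these two posts look roughly like the following on a plain docker command line. On Unraid you would instead add --runtime=nvidia under "Extra Parameters" and the two variables in the container template; the container name, image, and UUID below are all placeholders:

```shell
# Hedged example of wiring a container to the nVidia runtime.
# The UUID is a placeholder; list the real one with: nvidia-smi -L
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  linuxserver/plex

# To verify: confirm the driver sees the card, then watch live GPU usage
# while a transcode is running.
# nvidia-smi -L
# watch nvidia-smi
```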
TexasDave Posted June 10, 2020

It is working very well with Folding@Home. I will now confirm all is well with Plex and see if I can use it with Unmanic. Great feedback in the log, so I felt I knew what was going on at all times. I think it is a cool tool for learning what goes into building the kernel. Thanks!!
ich777 (Author) Posted June 14, 2020

@TexasDave, @Ramiii, @MowMdown, @Jus, @Marshalleq, @suyac, @monstahnator, @scottc, @Alphacosmos, @mkfelidae, @sjaak

The beta plugin is out right now: https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg

Please feel free to test it and report back if something is wrong or not working properly.
Ramiii Posted June 14, 2020

3 minutes ago, ich777 said: The beta plugin is out right now. Please feel free to test it and report back if something is wrong or not working properly.

Thanks a lot. Installed it; it shows my GPU but not the DVB card.
ich777 (Author) Posted June 14, 2020

Just now, Ramiii said: Thanks a lot. Installed it; it shows my GPU but not the DVB card.

Can you please send me a screenshot of the plugin page in a private message (so as not to spam the thread)? My DVB solution is a little *special* at the moment and needs some fine-tuning.
Dazog Posted June 14, 2020

Can we be given an option to specify where to get the nVidia drivers from? For example, nVidia has beta drivers listed here: http://developer.download.nvidia.com/compute/cuda/11.0.1/local_installers/cuda_11.0.1_450.36.06_linux.run

Maybe an option in the Docker template where we can enter our own download link?
Dazog Posted June 15, 2020

1 hour ago, ich777 said: The beta plugin is out right now. Please feel free to test it and report back if something is wrong or not working properly.

Works for my nVidia-only build. Shows all the information. Well done.
scottc Posted June 15, 2020

Works perfectly for my nVidia custom build. Thank you.
mkfelidae Posted June 15, 2020

Looks good here: it shows the nVidia GPU information I would need to pass a GPU to a Docker container, shows my ZFS information ("currently no pools" is correct, as I haven't set any up yet), and also shows that there are DVB adapters on my system. Fine work, I must say.
Marshalleq Posted June 15, 2020

Very nice plugin! I think I have some ideas to expand on the ZFS section too...