[Support] ich777 - Nvidia/DVB/ZFS/iSCSI/MFT Kernel Helper/Builder Docker



Unraid Kernel Helper/Builder


With this container you can build your own customized Unraid Kernel.

 

Prebuilt images for direct download are at the bottom of this post.

 

By default it will create the Kernel/Firmware/Modules/Root filesystem with the nVidia drivers.

Currently supported drivers/modules: Nvidia, DigitalDevices, LibreElec, TBS OpenSource, iSCSI Target, Intel iGPU, ZFS, Mellanox Firmware Tools, Navi Reset Patch, gnif/vendor-reset, Intel Relax RMRR Patch

 

nVidia Driver installation: If you build the images with the nVidia drivers, please make sure that no other process is using the graphics card, otherwise the installation will fail and no nVidia drivers will be installed.

 

ZFS installation: Make sure that you uninstall every plugin that enables ZFS for you, otherwise it is possible that the built images will not work.

 

iSCSI Target: The Unraid-Kernel-Helper plugin now has a basic GUI for creating/deleting IQNs, FileIO/Block Volumes, LUNs and ACLs (please note that some buttons are not visible on Chrome, Edge, etc.; I recommend using Firefox).

ATTENTION: Always mount a block volume with the path: '/dev/disk/by-id/...' (otherwise you risk data loss)!

For instructions on how to create a target read the manuals: Manual Block Volume.txt Manual FileIO Volume.txt
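To see why the '/dev/disk/by-id/...' path matters: the plain /dev/sdX names can change between reboots, while the links under /dev/disk/by-id stay tied to the physical drive. A quick way to find the right path (the output naturally depends on your drives):

```shell
# List the persistent identifiers for all drives; use the full
# /dev/disk/by-id/... path of the disk you want to export when creating
# a block volume - never a bare /dev/sdX name, which can change between
# reboots and thus risks data loss.
ls -l /dev/disk/by-id/
```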

 

ATTENTION: Please read the description of the variables carefully! Once you have started the container, don't interrupt the build process; the container will shut down automatically when everything is finished.

I recommend opening a console window and typing 'docker attach Unraid-Kernel-Helper' (without quotes; replace 'Unraid-Kernel-Helper' with your container name) to view the log output. (You can also open a log window from the Docker page, but this can be very laggy if you select many build options.) The build itself should finish in roughly 30 minutes, but some tasks can take much longer depending on your hardware, so please be patient.
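For reference, the console commands for following the build look like this (assuming the default container name; yours may differ):

```shell
# Attach to the running container and watch the build log live.
docker attach Unraid-Kernel-Helper

# To detach again WITHOUT stopping the build, press Ctrl-p then Ctrl-q.
# (Ctrl-c would send an interrupt into the container instead.)
```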

 

 

Plugin available (it will show all information about the images/drivers/modules that it can detect):

https://raw.githubusercontent.com/ich777/unraid-kernel-helper-plugin/master/plugins/Unraid-Kernel-Helper.plg

Or simply download it through the CA App

 

 

This is how the build of the images works (simplified):

  1. The build process begins as soon as the container starts (you will see that the container is stopped when the process is finished).
    Please be sure to set the build options that you need.
  2. Use the logs, or better, open up a console window and type 'docker attach Unraid-Kernel-Helper' (without quotes) to follow the log (the browser log window can be very laggy depending on how many components you choose).
    The whole process status is outlined by watching the logs (the button on the right of the container).
  3. The image is built into /mnt/cache/appdata/kernel/output-VERSION by default. You need to copy the output files to /boot on your USB key manually, and you need to delete or move the folder for any subsequent builds.
  4. A backup is copied to /mnt/cache/appdata/kernel/backup-VERSION. Copy it to a drive external to your Unraid server so you can easily copy it straight back onto the Unraid USB if something goes wrong.
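Steps 3 and 4 above can be sketched as a small shell helper (a hypothetical script of my own, not something the container ships; the function name and the 'previous' rollback folder are just examples):

```shell
# Sketch: install the built bz* files onto the flash drive and move the
# output folder aside so the next build starts clean.
install_kernel_files() {
  local output="$1"  # e.g. /mnt/cache/appdata/kernel/output-6.8.3
  local flash="$2"   # normally /boot on Unraid

  # Keep a rollback copy of the current files on the flash drive.
  mkdir -p "$flash/previous"
  cp "$flash"/bz* "$flash/previous/" 2>/dev/null || true

  # Install the freshly built images (bzimage, bzroot, bzmodules, ...).
  cp "$output"/bz* "$flash/"

  # Rename the output folder so a subsequent build is not blocked.
  mv "$output" "$output.done"
}

# Example (adjust the version to what you actually built):
# install_kernel_files /mnt/cache/appdata/kernel/output-6.8.3 /boot
```

Keep the 'previous' folder around until the new Kernel has booted cleanly, so you can copy the old files back if something goes wrong.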

 

THIS CONTAINER WILL NOT CHANGE ANYTHING IN YOUR EXISTING INSTALLATION OR ON YOUR USB KEY/DRIVE; YOU HAVE TO MANUALLY COPY THE CREATED FILES FROM THE OUTPUT FOLDER TO YOUR USB KEY/DRIVE AND REBOOT YOUR SERVER.

 

PLEASE BACK UP YOUR EXISTING USB DRIVE FILES TO YOUR LOCAL COMPUTER IN CASE SOMETHING GOES WRONG!
I AM NOT RESPONSIBLE IF YOU BREAK YOUR SERVER OR ANYTHING ELSE WITH THIS CONTAINER; THIS CONTAINER IS HERE TO HELP YOU EASILY BUILD A NEW IMAGE AND UNDERSTAND HOW IT ALL WORKS.

 

UPDATE NOTICE: Please redownload the template from the CA App to keep the template up to date.

 

Forum Notice: When something isn't working with or on your server and you make a forum post, always mention that you are using a Kernel built by this container!

Note that LimeTech does not support custom Kernels; if something is not working while you are running this Kernel, please ask in this thread.

 

CUSTOM_MODE:
This is only for advanced users!
In this mode the container stops right at the beginning and copies the build script, plus the dependencies for building the DVB and joydev kernel modules, to the main directory. I highly recommend this mode for changing things in the build script, such as adding patches or other modules to build. Connect to the container's console with 'docker exec -ti NAMEOFYOURCONTAINER /bin/bash', then go to the /usr/src directory; the build script there is executable.

 

Thanks to @Leoyzen, klueska from nVidia and linuxserver.io for the motivation to look into how this all works... ;)

 

For safety reasons I recommend shutting down all other containers and VMs during the build process, especially when building with the nVidia drivers!

 

After you have finished building the images, I recommend deleting the container! If you want to build again, please redownload it from the CA App so that the template/container is always the newest version!

 

!!! Please also note: if you build anything Beta, keep an eye on the logs, especially when it comes to building the Kernel (everything before the message '---Starting to build Kernel vYOURKERNELVERSION in 10 seconds, this can take some time, please wait!---' is very important) !!!

 

 

 

Notice for Custom pre-built images for Unraid version 6.9.0beta35 and up:

Since Unraid changed the game completely with the release of version 6.9.0beta35 (you can now install the Nvidia, DVB and many other addons to Unraid, more than I can even imagine at the time of writing), I will not make or post pre-built images here for the new versions.

However, I will keep updating the container in general so that you can build your own custom images if you want an 'all in one' solution.

(You have to be at least on Unraid 6.9.0beta35 to see the Plugins in the CA App for Nvidia, DVB,...)

 

If you like my work, please consider making a donation

Donate

 


CHANGELOG:

08.04.2021:

  • Changed template to be compatible with unRAID v6.9.2

19.03.2021:

  • Added possibility to load AMD Drivers/Kernel Module amdgpu on startup and install 'radeontop'

02.03.2021:

  • Changed template to be compatible with unRAID v6.9.1
  • Added Firewire
  • Added GVT-g

02.03.2021:

  • Changed template to be compatible with unRAID v6.9.0

19.02.2021:

  • Added Coral Acceleration Module Drivers

29.01.2021:

  • Added USBip-HOST (in addition to this you can install the Plugin 'Unraid USBip GUI' from the CA App from @SimonF)

26.01.2021:

  • Added hpsahba patch build mode for certain HP RAID controllers for more information here: Click

18.01.2021:

  • Fixed a bug where ZFS was not shown in the build overview if it was enabled

14.01.2021:

  • Fixed a bug where the Kernel module zfs was not loaded when the variable 'Load ZFS Pools on Array start/stop' was set to 'false'

13.01.2021:

  • Added possibility to select branch for builds with gnif/vendor-reset

03.01.2021:

17.12.2020:

  • Updated Container to support Kernel v5.9.13
  • Added sendemail

28.10.2020:

  • The Unraid-Kernel-Helper plugin now has a basic GUI for creating/deleting IQNs, FileIO/Block Volumes, LUNs and ACLs.

24.08.2020:

  • Updated the Container to use version file instead of pulling every version number from Github repository (for 6.9.0)

13.08.2020:

  • Fixed build/download with Kernels that end with a '0' (for 6.9.0)
  • Fixed building with a custom UNAME (for 6.9.0)

11.08.2020:

  • Fixed build of TBS Open Sources drivers (for 6.9.0)

08.08.2020:

  • Optimized compression of images (for 6.8.3 and 6.9.0)
  • Added multithreaded compression of images (for 6.8.3 and 6.9.0)
  • Added additional check if all necessary nVidia tools compiled correctly
  • Optimized iSCSI startup

04.08.2020:

  • Added iSCSI Target support - for 6.8.3 and 6.9.0 (please note that at this time the feature is command line only; please download the manuals from the first post on how to create a target - ATTENTION: always create a block volume with a path of the form '/dev/disk/by-id/...', otherwise you risk data loss).

16.07.2020:

  • Added possibility to automatically download beta builds (the variable BETA_BUILD needs to contain the beta version, for example 'beta25' or 'beta24' - for 6.9.0)
  • Fixed nVidia build error messages - for 6.9.0
  • Added DONTWAIT variable - for 6.9.0
  • Added additional CUSTOM_BUILD options - for 6.9.0
  • Code cleanup - for 6.9.0

10.07.2020:

  • Added build stage for Mellanox Firmware Tools (for 6.9.0)
  • Added warning on failed download (for 6.9.0)
  • Code cleanup

08.07.2020:

  • Updated Container to be compatible with beta24

02.07.2020:

  • Fixed move from container-toolkit to nvidia-container-toolkit on Github

30.06.2020:

  • ZFS Pools now loaded/unloaded on Array start/stop (manual load with 'zpool import -a' or unload with 'zpool export -a' is always possible)
  • Added possibility to load Intel Drivers/Kernel Module i915 on startup

20.06.2020:

  • Added possibility to build ZFS from 'master' branch on Github (for 6.9.0)
  • Updated Plugin (added additional information from ZFS pool)

18.06.2020:

  • Added fix for nVidia driver v440.82 and Kernel 5.7 (for 6.9.0 beta22)

15.06.2020:

  • Added option to save the full log output from the build process to a file in the main directory
  • Released Unraid-Kernel-Helper-Plugin (in the CA App)
  • Switched to gcc version 9.3.0-13 (for 6.9.0)

10.06.2020:

  • Separated the DVB drivers so they are no longer all installed at once (valid options are: 'digitaldevices', 'libreelec', 'tbsos', 'xboxoneusb')

07.06.2020:

  • Added possibility to build Beta versions (for 6.9.0)
  • Added finishing sound (will only play on the motherboard pc speaker and only for 6.9.0 and up)
  • Words are hard (fixed a few typos)

06.06.2020:

  • Added TBS OpenSource drivers to the DVB build step

05.06.2020:

  • Corrected an error where zpools were not loaded automatically on boot

04.06.2020:

  • Added ZFS to Kernel build options
  • Added option to include user specific Kernel patch files with automatic build
  • Words are hard (fixed a few typos)

31.05.2020:

  • Added possibility to insert custom Kernel version

26.05.2020:

  • Fixed build steps so that the latest 'nvidia-container-runtime' and 'nvidia-container-toolkit' can be built
  • Fixed CUSTOM_MODE sleep (if CUSTOM_MODE was set to true and the script 'buildscript.sh' was executed from the main directory it says again that CUSTOM_MODE is enabled)
  • Added end message with version numbers
  • Added a warning if build mode nVidia is selected (if a process uses the graphics card the installation of the nVidia drivers will fail)
  • Words are hard (fixed a few typos and sentences that were not comprehensible)

25.05.2020:

  • Initial release

can i be the first to say, wow!, i can see how this will be VERY useful for people wanting to pass through hardware to containers, and now being able to build out an image is impressive work indeed!, and of course it takes the load off LSIO to produce the custom image every time unraid bumps the version, a real game changer!.

5 minutes ago, binhex said:

can i be the first to say, wow!, i can see how this will be VERY useful for people wanting to pass through hardware to containers, and now being able to build out an image is impressive work indeed!, and of course it takes the load off LSIO to produce the custom image every time unraid bumps the version, a real game changer!.

Appreciated, this was no easy task since I had zero understanding of how to compile a kernel; now I know a little bit... :D

I was also looking for a way to upgrade my drivers a little faster and to add custom kernel modules the 'easy' way (I totally understand that linuxserver can't build a new image for each new driver version...)

If you got any suggestions feel free to contact me. ;)

 

Btw: I uploaded the container already but it will take a bit to update in the CA App.

10 minutes ago, ich777 said:

If you got any suggestions feel free to contact me

i think my only current suggestion is possibly to add to the OP that limetech will not support custom kernels at this time, so there is no official support whatsoever  if something should go wrong (obviously there is community help though).

Just now, binhex said:

i think my only current suggestion is possibly to add to the OP that limetech will not support custom kernels at this time, so there is no official support whatsoever (obviously there is community help though).

Should i make it a little bigger or is it not clear? English is not my native language... :/

But i will do that, thank you ;)

 

44 minutes ago, ich777 said:

Forum Notice: When something isn't working with or on your server and you make a forum post, always mention that you are using a Kernel built by this container!

 

7 minutes ago, ich777 said:

Forum Notice: When something isn't working with or on your server and you make a forum post, always mention that you are using a Kernel built by this container!

 

 

yeah i saw that but it doesn't actually state that limetech won't support you, just that you need to specify it's a custom kernel in your post, which makes it kinda sound like limetech MAY support you, just my viewpoint you understand :-).

 

edit - saw your alteration, looks good 🙂

2 hours ago, Alphacosmos said:

This is amazing. I have been trying to figure out something like this for a while now. Now i can transcode plex with GPU and use a USB DVR. Its perfect. Now all it needs is the ability to add device drivers as required.

You can do that by setting CUSTOM_MODE to 'true'.

The container will then copy the build script and also the DVB patch file to the main directory and you can edit it there and add or remove things, then you can simply run it. ;)

(You also have to remove the first condition because the script stops if CUSTOM_MODE is set to 'true', this will be fixed in the next few days so that you don't have to do that manually)

1 hour ago, ich777 said:

You can do that by setting CUSTOM_MODE to 'true'.

The container will then copy the build script and also the DVB patch file to the main directory and you can edit it there and add or remove things, then you can simply run it. ;)

(You also have to remove the first condition because the script stops if CUSTOM_MODE is set to 'true', this will be fixed in the next few days so that you don't have to do that manually)

See thats amazing as well. I think alot of people are going to be using this and you have saved people alot of time. Thank you!


This is great, thank you so much, I was waiting for this so I can have my GPU and DVB both working. Before that I had to choose.

 

Just to be sure, the default settings will build a kernel with both drivers and I have to install manually, or do I need to set custom mode?

7 minutes ago, Ramiii said:

This is great, thank you so much, I was waiting for this so I can have my GPU and DVB both working. Before that I had to choose.

 

Just to be sure, the default settings will build a kernel with both drivers and I have to install manually, or do I need to set custom mode?

If you leave it as it is, it will build with nVidia and DVB (DigitalDevices, LibreELEC & Xbox USB Tuner currently included).

If you need other drivers you will need to enable the Custom mode and edit the build script.

4 hours ago, sjaak said:

Nice to see this! it would also be awesome if it got an option to include the Vega Reset Bug Patch (and/or Navi Reset Bug Patch)...

You can build the kernel completely yourself by setting CUSTOM_MODE to 'true'; the container will then copy the build script and all necessary files to the main directory, and you can customize the script itself, or copy and paste it line by line and add your patch files in between.


Finally!  Amazing!  Thankyou!

 

I dub thee, the official community kernel!

 

Items on my wishlist to include are:

  1. The awesome ZFS plugin from @steini84 - he has previously published all the build scripts, and while he's always very accommodating about building a new version for us, it would be amazing to link the two.
  2. NFS updates
  3. Samba updates

 

Tips for anyone else first doing this that I didn't know:

 

  1. The build process begins as soon as the docker starts (you will see the docker image is stopped when the process is finished)
  2. Use the logs.  The whole process status is outlined by watching the logs (the button on the right of the docker)
  3. The image is built into /mnt/cache/appdata/kernel/output-version by default.  You need to copy this to /boot on your USB key manually and you also need to delete it or move it for any subsequent builds
  4. There is a backup copied to /mnt/cache/appdata/kernel/backup-version.  I would copy this to another drive external to your unraid box, that way you can easily copy it straight onto the unraid USB if something goes wrong.

 

As a guide, the whole process took about 10 minutes on my Threadripper 1950x (32 threads). The actual compilation of the kernel seemed to be about 1 minute, so clearly there's a lot of other things going on.

 

Hope that helps someone.

 

 


Thanks for the post! ;)

If you can give me links where i can get these scripts or updates, please PM me. ;)

 

A big part of the build process is the compression of bzroot since this is a single core task...

 

I mainly built this container because I needed additional kernel modules for my DebianBuster-Nvidia container (I use this for streaming Steam games to my mobile phone or older laptops, but I haven't had time to fix this with the kernel modules since I've got a lot of work)

 

Little side note: you can always build a custom kernel if you set the option CUSTOM_MODE to 'true'; the container will then stop right at the beginning and copy the build script to the main directory, and then it's only a matter of copy and paste, or you change the script to your preference.

 

EDIT: Please always delete the container and the template when you have finished building the kernel, and redownload it from the CA App so that the template is always the newest version.

 

EDIT2: I'm thinking about adding your tutorial to the first post and adding my own things to it, if you are OK with that.

15 hours ago, sjaak said:

Nice to see this! it would also be awesome if it got an option to include the Vega Reset Bug Patch (and/or Navi Reset Bug Patch)...

Can you give me links to where i can get them and i will see if i can integrate them easily.

 

Please always delete the container and the template when you have finished building the kernel, and redownload it from the CA App so that the template is always the newest version.

4 hours ago, ich777 said:

Can you give me links to where i can get them and i will see if i can integrate them easily.

 

Please always delete the container and the template when you have finished building the kernel, and redownload it from the CA App so that the template is always the newest version.

i can't find the complete patches (coffee hasn't kicked in yet...) but levelonetech is my information source:

https://forum.level1techs.com/t/vega-10-and-12-reset-application/145666

https://forum.level1techs.com/t/navi-reset-kernel-patch/147547

 

maybe you can ask @Leoyzen for the patches? he is also creating a custom kernel and includes those patches.

i'd prefer to be able to choose to install one or both...

1 hour ago, sjaak said:

i can't find the complete patches (coffee hasn't kicked in yet...) but levelonetech is my information source:

https://forum.level1techs.com/t/vega-10-and-12-reset-application/145666

https://forum.level1techs.com/t/navi-reset-kernel-patch/147547

 

maybe you can ask @Leoyzen for the patches? he is also creating a custom kernel and includes those patches.

i'd prefer to be able to choose to install one or both...

You can customize the build process completely with the options that i've mentioned above.

 

Sorry but i will not investigate too much since I'm really busy at the moment.

If you find the links to the patches somewhere please let me know and i will look into it. ;)

6 hours ago, ich777 said:

You can customize the build process completely with the options that i've mentioned above.

 

Sorry but i will not investigate too much since I'm really busy at the moment.

If you find the links to the patches somewhere please let me know and i will look into it. ;)

will do soon, if i get some free time ;)

On 5/30/2020 at 4:25 PM, ich777 said:

Thanks for the post! ;)

If you can give me links where i can get these scripts or updates, please PM me. ;)

 

A big part of the build process is the compression of bzroot since this is a single core task...

 

I mainly built this container because I needed additional kernel modules for my DebianBuster-Nvidia container (I use this for streaming Steam games to my mobile phone or older laptops, but I haven't had time to fix this with the kernel modules since I've got a lot of work)

 

Little side note: you can always build a custom kernel if you set the option CUSTOM_MODE to 'true'; the container will then stop right at the beginning and copy the build script to the main directory, and then it's only a matter of copy and paste, or you change the script to your preference.

 

EDIT: Please always delete the container and the template when you have finished building the kernel, and redownload it from the CA App so that the template is always the newest version.

 

EDIT2: I'm thinking about adding your tutorial to the first post and adding my own things to it, if you are OK with that.

Yes of course, add them to the top - no need to ask!

 

Samba is here https://github.com/samba-team/samba

The ZFS code that steini uses is at https://github.com/Steini1984/unRAID6-ZFS

 

ZFS is the main one I'm interested in right now though.

 

Steini's ZFS is for a plugin though, so I'm not sure what that means; I expect he'd be keen to help though, he's great like that. I'm quite sure there's a huge win in this for him (not having to compile the code each time himself) and for us (not having to annoy him and wait for every beta version to be compiled). This is a key advantage really: we're usually at the mercy of others when it comes to testing kernels on the rc path. steini is very good, but others have refused at times.

 

Thanks again, this docker is amazing!

 
