[Support] ich777 - Nvidia/DVB/ZFS/iSCSI/MFT Kernel Helper/Builder Docker



4 hours ago, ich777 said:

Can you try to disable all your VMs and also your Docker containers at startup, reset all your IOMMU assignments, reboot, and then try to assign it and see if it works?

 

Can't imagine why this isn't working...

 

I tried a 'clean' flash drive (read: a new setup, only the license key was reused, no configs), so a config conflict isn't the problem.

 

4 hours ago, ich777 said:

EDIT: You also tried the prebuilt one from the first post in this thread?

yep:

6 hours ago, sjaak said:

I tried the prebuilt images from the TS, no luck either.

I think my hardware has a glitch...

Link to post

15 hours ago, ich777 said:

Have you tried to compile it with the nvidia driver alone and without your patch file?

 

Have you installed a cache drive in your system?

Which Unraid version are you on?

Can you provide a full log? You can enable the 'save to log' option to save the output to a logfile.

 

Something seems wrong; the file should be created when the modules are compiled.

I haven't tried it alone. I was going to try to do it tonight, since last night I realized that the Nvidia script to allow more streams only works up to driver version 450.57, so the latest won't work.

 

I do have a cache drive on my server. I don't know if that should change anything though.

 

I'm trying to build based on the 6.9.0-beta25 version; I currently have the prebuilt one from your first post running with the nvidia drivers. But I'm having all kinds of issues with VMs not working, so I was thinking about going back to 6.8.3. Still, I'd like to try to make one with newer drivers and a newer kernel, like you mentioned in one of your posts before. If that is possible.

 

I did enable the log, and it saves a file, but since it's a custom build and you have to do it through the console, the only thing it saves in the log is a message telling me that I have to do it through the console...

Link to post
2 hours ago, sjaak said:

 

I tried a 'clean' flash drive (read: a new setup, only the license key was reused, no configs), so a config conflict isn't the problem.

 

yep:

I think my hardware has a glitch...

Can't imagine why it's not working, since I don't do anything different from linuxserver.io in the build process...

Possibly the new drivers are the problem.

Have you also tried it with the old nVidia driver version?

 

1 hour ago, PickleRick said:

I haven't tried it alone. I was going to try to do it tonight, since last night I realized that the Nvidia script to allow more streams only works up to driver version 450.57, so the latest won't work.

 

I do have a cache drive on my server. I don't know if that should change anything though.

You can change the driver version from 'latest' to your preferred version.

 

Yes, that can be a problem if there is no cache drive in the system.

 

1 hour ago, PickleRick said:

If that is possible.

That should be possible...

 

I would have asked whether something was using one of the cards during the build process, but you also said that you tried the prebuilt one and that doesn't work either, so that can't be the problem.

 

1 hour ago, PickleRick said:

I did enable the log, and it saves a file, but since it's a custom build and you have to do it through the console, the only thing it saves in the log is a message telling me that I have to do it through the console...

Can you try to build a completely clean build with nVidia drivers and Custom Build turned off?

 

Oh, and are you sure that nothing was using the nVidia card while building the images? If something was, the build will fail and the images won't work.

Link to post
8 hours ago, ich777 said:

Can't imagine why it's not working, since I don't do anything different from linuxserver.io in the build process...

Possibly the new drivers are the problem.

Have you also tried it with the old nVidia driver version?

[...]

Yep, the 440.100, same as the LS.io images.

I'm gonna wait for beta26 and test it again.

The only difference I see is that the LS.io images are bigger than the builds from this Docker.

Link to post
2 minutes ago, sjaak said:

Yep, the 440.100, same as the LS.io images.

I'm gonna wait for beta26 and test it again.

The only difference I see is that the LS.io images are bigger than the builds from this Docker.

So the 440.100 driver works when you build with my container?

 

Yes, I think it's because I use a different compression.

How much bigger are they?

A typical bzroot from my container with nVidia should be around 200MB to 230MB I think, depending on the nVidia driver version...

Link to post
4 hours ago, ich777 said:

So the 440.100 driver works when you build with my container?

No, only the LS.io version; none of the builds from this container work :(

 

edit:

The current LS.io image on the flash drive (6.9.0-beta25) vs. the output from this container:

bzfirmware:         8.8MB    (container: 9.9MB)
bzfirmware.sha256:  65 bytes (container: 106 bytes)
bzimage:            4.8MB    (container: 4.8MB)
bzimage.sha256:     65 bytes (container: 103 bytes)
bzmodules:          23.9MB   (container: 23MB)
bzmodules.sha256:   65 bytes (container: 105 bytes)
bzroot:             232.4MB  (container: 231.0MB)
bzroot.sha256:      65 bytes (container: 102 bytes)

The output from this container is 6.9.0-beta25 with Nvidia drivers 450.57.

Edited by sjaak
Link to post
10 minutes ago, sjaak said:

No, only the LS.io version; none of the builds from this container work :(

Okay, then I can't help, sorry.

I don't know what could be the issue...

 

You are the first person for whom the images don't work.

 

Is your bzroot built with this container around 200MB?

Link to post
1 minute ago, ich777 said:

Is your bzroot built with this container around 200MB?

 

13 minutes ago, sjaak said:

bzroot: 232.4MB (container: 231.0MB)

 

The LS.io version runs fine; the AMD reset patch only works with the Windows 10 VM, which is not my daily driver, so no big problem... ;)

Link to post
1 minute ago, sjaak said:

 

 

The LS.io version runs fine; the AMD reset patch only works with the Windows 10 VM, which is not my daily driver, so no big problem... ;)

Sorry, I can't help since I don't know what the problem is...

 

It seems everything is fine. ;)

 

Do you know that you can bind cards in the IOMMU menu to the vfio driver?

 

Possibly something else is the problem; I just tried a GTX 1050Ti and a GTX 750 in one machine and had no problems.

Link to post
7 minutes ago, ich777 said:

Sorry, I can't help since I don't know what the problem is...

 

It seems everything is fine. ;)

 

Do you know that you can bind cards in the IOMMU menu to the vfio driver?

 

Possibly something else is the problem; I just tried a GTX 1050Ti and a GTX 750 in one machine and had no problems.

All three GPUs are selectable for binding to the vfio driver; only the AMD Vega 64 is stubbed.

Only the GPU that Unraid uses for its GUI (the GT710) has problems; the 1050Ti, which is in use for Plex transcoding, works fine.

If I switch the primary GPU in the BIOS from the GT710 to the 1050Ti, then the boot GPU stays in the P0 state and has the GUI issues.

Link to post
22 hours ago, ich777 said:
On 9/5/2020 at 3:02 PM, PickleRick said:

If that is possible.

That should be possible...

 

I would have asked whether something was using one of the cards during the build process, but you also said that you tried the prebuilt one and that doesn't work either, so that can't be the problem.

I wasn't able to try it last night since I'm moving a bunch of stuff at the house right now. I should be able to tonight, though. Nothing was using it during the build; I shut down all the Dockers and VMs before I ran it.

 

22 hours ago, ich777 said:
On 9/5/2020 at 3:02 PM, PickleRick said:

I did enable the log, and it saves a file, but since it's a custom build and you have to do it through the console, the only thing it saves in the log is a message telling me that I have to do it through the console...

Can you try to build a completely clean build with nVidia drivers and Custom Build turned off?

 

Oh, and are you sure that nothing was using the nVidia card while building the images? If something was, the build will fail and the images won't work.

I'm going to try to do a couple of builds tonight with different variations and will let you know what happens. The only reason I saw the error at all was during the build in the console; it spits out that error really quickly and continues building the rest anyway.

Link to post

I manually applied the AMD USB/audio patch (just so I could take the custom variable out of the equation) and then used the container to build a beta version with the 450.57 Nvidia drivers, and it is all working fine now. Other than the problems the beta brought on for my VMs, but those don't have anything to do with this container.

 

Keep up the good work sir and thank you.

Link to post

Sorry if this has already been mentioned, but will this work if you're using a custom build already (say, LSIO's nVidia build), or does it need to be stock before making any changes? My understanding was that, given you're overwriting the existing files in /boot, it should just "work".

 

I've gone through and built a custom build with nVidia and DVB enabled (I'm on Unraid 6.8.3), overwrote the files within /boot with the newly compiled drivers (using WinSCP to get to the boot drive) and rebooted. The LSIO nVidia build remains after reboot :( The log files don't show anything failing when I built the custom build myself using the CA App (logs attached).

 

I've also tried using a prebuilt 6.8.3 with nVidia and DVB enabled, but got the same result.

 

Not sure if I'm missing anything?

EDIT: For whatever reason, recompiling the same settings for a third time (nothing changed), it now loads both nVidia and DVB drivers (attached log):

 


 

Thanks for your hard work @ich777! :) 

2020-09-12_10.29.31.log

2020-09-12_11.38.43.log

Edited by evakq8r
Link to post
3 hours ago, evakq8r said:

EDIT: For whatever reason, recompiling the same settings for a third time (nothing changed), it now loads both nVidia and DVB drivers (attached log):

Good to hear that it works now.

 

LS.io does exactly the same thing, except that they build the image and you download the prebuilt image through the plugin; the plugin from LS.io will always be there if you don't uninstall it.

 

Hope this makes things clearer. ;)

 

EDIT: Forgot to say that no process should be using the nVidia card while building the images, otherwise the installation of the nVidia driver fails.

Link to post
2 hours ago, ich777 said:

Good to hear that it works now.

 

LS.io does exactly the same thing, except that they build the image and you download the prebuilt image through the plugin; the plugin from LS.io will always be there if you don't uninstall it.

 

Hope this makes things clearer. ;)

 

EDIT: Forgot to say that no process should be using the nVidia card while building the images, otherwise the installation of the nVidia driver fails.

Thanks! And I had turned off all VMs and Dockers to ensure that was the case; I just thought it was odd for this to work on the third go. Murphy's Law...

Link to post
8 hours ago, evakq8r said:

Thanks! And I had turned off all VMs and Dockers to ensure that was the case; I just thought it was odd for this to work on the third go. Murphy's Law...

That's very strange since the prebuilts should work OOB.

 

EDIT: Sometimes the drivers don't show up in the plugin (I use a very different method than LS.io).

Sometimes you just have to test whether everything works... :D

Link to post
  • 2 weeks later...

@ich777 Were there any updates to the template in the latest version, or just to the core itself? I only updated rather than uninstalling/reinstalling. I have so many values defined in the template now that I really try to avoid deleting it altogether if I don't have to 😁.

Edited by cybrnook
Link to post
13 minutes ago, cybrnook said:

@ich777 Were there any updates to the template in the latest version, or just to the core itself? I only updated rather than uninstalling/reinstalling. I have so many values defined in the template now that I really try to avoid deleting it altogether if I don't have to 😁.

Just an update to the core itself: for 6.9.0, the components needed for nVidia are now precompiled by me and just inserted, which speeds up the build significantly... ;)

The latest template only adds Mellanox Tools and, I think, some other small things (hint: you can also define a variable 'DONTWAIT' and set it to 'true'; then all warning messages are skipped).

You only have to edit the non-hidden settings; the others are only there in case somebody wants to download a different version of the container.

Link to post
