[Plugin] Nvidia-Driver


ich777

Recommended Posts

3 minutes ago, Savinon said:

I installed it from the linuxserver repo and I also tried the official one.

Have you got a valid PlexPass subscription? Have you enabled HW acceleration within the container? Have you generated a new PLEX_CLAIM and used it on the first start?

 

EDIT: Can you try Jellyfin from the CA App only for testing purposes?

Just now, ich777 said:

Have you got a valid PlexPass subscription? Have you enabled HW acceleration within the container? Have you generated a new PLEX_CLAIM and used it on the first start?

I purchased one before I decided to go through with this, and I generated one every time I installed the Plex docker. Is it possible the driver isn't talking to the docker?

1 minute ago, Savinon said:

Is it possible the driver isn't talking to the docker? 

From what I've seen the driver is working. If you followed the instructions step by step it should work. Can you share a screenshot of your Plex template with the Advanced View turned on?
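To confirm the driver side independently of any container, you can run nvidia-smi from the Unraid terminal (assuming the plugin finished installing the driver):

```shell
# Lists the driver version, the GPU model and any processes using it;
# if this errors out, the problem is the driver install, not the container.
nvidia-smi
```

If nvidia-smi reports the card but Plex still won't transcode, the issue is almost certainly in the container template rather than the plugin.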

1 minute ago, Savinon said:

I tried that as well, and I also read that the 'all' variable is no longer needed for this docker container.

 

For the official Plex container that's simply not true, and for Jellyfin or Emby that's not true either...

From what I know the @binhex plexpass container also works fine, but you have to put 'all' in the variables.
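For clarity, putting 'all' in the variables refers to the values of the two Nvidia variables in the container template, roughly:

```shell
# Values as described in the post; set these in the container template
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=all
```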


Hi @ich777

 

I ran into an issue this morning when upgrading to 6.9.1. I had your latest version of the plugin installed and I had manually downloaded the latest GPU driver before upgrading to 6.9.1. After upgrading to 6.9.1 none of my Nvidia GPU enabled containers would start (bad parameter execution). I then went to the Plugins page and noticed your plugin had disappeared. I went back into Community Apps to get it, and the first time I downloaded it the UI told me it was skipping the install because the plugin was already installed, however it wasn't listed in the Unraid UI (in Plugins or Settings). I went back to the community plugin page and tried again, and this time it allowed me to install.

At this point the GPU and GPU ID were not recognized. So I disabled Docker and re-enabled Docker, then went back to the Nvidia plugin, and the GPU / GPU ID was still not there. Gave the server another reboot and all is working now.

My assumption is that updates in 6.9.1, maybe related to Docker updates, were somehow interfering with your plugin. Just wanted to report this back to you.

Thanks again for your work on this plugin. Much appreciated.

3 minutes ago, repomanz said:

I had manually downloaded the latest gpu driver before upgrading to 6.9.1.

You mean with the 'Download' button, or am I wrong? The download button doesn't help if you upgrade Unraid within the WebGUI and then download the driver, because the plugin can't know what version it should download; it always downloads the Nvidia driver for the currently running version of Unraid.

Hope that makes sense to you...

 

3 minutes ago, repomanz said:

After upgrading to 6.9.1 none of my Nvidia GPU enabled containers would start (bad parameter execution).

Have you got an internet connection at boot, or do you virtualize pfSense or some other kind of firewall on Unraid itself?

 

15 minutes ago, repomanz said:

At this point the GPU and GPU ID were not recognized. So I disabled Docker and re-enabled Docker, then went back to the Nvidia plugin and the GPU / GPU ID was still not there.

This is really strange but if a restart does the trick it's fine I think... :D

 

16 minutes ago, repomanz said:

My assumption is that updates in 6.9.1, maybe related to Docker updates, were somehow interfering with your plugin.

Not as far as I know, since I downgrade and upgrade very often on my dev machine, even down to beta35. Yesterday I downgraded to 6.9.0 and then upgraded back to 6.9.1 without a problem.

 

The only thing you have to be sure of is that you have an internet connection at boot, since the plugin looks for newer drivers and, if you upgrade, for the appropriate version of the driver.

4 minutes ago, MrGreen718 said:

I cannot get this to work in any Plex docker, including binhex-Plexpass or the official Plex docker. I've also tried every other Plex docker as well, using your recommended variables. I keep getting the following errors when I do it. The attached error is from the official Plex docker.

[Attachments: F51802CA-59EB-488C-92EA-97622B47D816.jpeg; Screenshot 2021-03-13 at 10.27.36 AM.pdf]

 

Check that you don't have any space " " at the front or back of the Nvidia Visible Devices ID; this can happen when you copy the ID.
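A quick way to check for that from the Unraid terminal; the UUID below is a placeholder, substitute the one from your template:

```shell
# Hypothetical GPU UUID pasted with a stray leading space
GPU_ID=" GPU-00000000-0000-0000-0000-000000000000"

# xargs with no arguments strips surrounding whitespace
TRIMMED="$(printf '%s' "$GPU_ID" | xargs)"

# The brackets make any remaining stray spaces visible
printf '[%s]\n' "$TRIMMED"
```

If the printed value shows a gap between the bracket and "GPU-", the template value still carries whitespace.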

Edited by sjaak
9 minutes ago, ich777 said:

You mean with the 'Download' button, or am I wrong? The download button doesn't help if you upgrade Unraid within the WebGUI and then download the driver, because the plugin can't know what version it should download; it always downloads the Nvidia driver for the currently running version of Unraid.

Hope that makes sense to you...


My Unraid machine gets its DNS from a VM running on Unraid, so DNS is not up at boot. To counter that, I used the download button within your plugin so the driver was cached at boot.

All working now though; next time I'll change DNS to resolve to 1.1.1.1 as a safeguard.

 

14 minutes ago, repomanz said:

To counter that, I used the download button within your plugin so it was cached at boot.

No, because when you upgrade, the kernel version changes; the plugin detects that and has to download the driver for the new kernel version, and that's why it fails.

 

16 minutes ago, repomanz said:

All working now though; next time I'll change DNS to resolve to 1.1.1.1 as a safeguard.

I would strongly recommend setting the default DNS server of Unraid itself to the DNS from your router, or whatever you prefer, since the server itself always needs working name resolution; for the Dockers you can always use the DNS server from your VM. ;)

That would be my recommendation (I also do it like that on my server).

32 minutes ago, MrGreen718 said:

I cannot get this to work in any Plex docker, including binhex-Plexpass or the official Plex docker. I've also tried every other Plex docker as well, using your recommended variables. I keep getting the following errors when I do it. The attached error is from the official Plex docker.

Have you restarted the server, or disabled and re-enabled the Docker service, as the instructions on the first page and the red box that appears told you?

 

I think not, because your Docker service told you that the runtime 'nvidia' wasn't found, and that's exactly the problem when you install the driver and don't read the instructions or even the red box... :D
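For context, the 'nvidia' runtime the Docker service is complaining about is what the Extra Parameters setting in the Unraid template passes to Docker; on the command line the equivalent would be roughly the following (image name and UUID are placeholders):

```shell
# The 'nvidia' runtime only exists after the Docker service has been
# restarted (or the server rebooted) following the driver install.
docker run -d \
  --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-00000000-0000-0000-0000-000000000000 \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  plexinc/pms-docker
```

If the runtime is missing, Docker refuses to create the container with exactly the "runtime not found" kind of error described here.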


Been using this plug-in for the past two months, from 6.9.0-rc2 through 6.9.1, and it has worked perfectly with Linuxserver Plex when following the setup instructions.

 

My biggest complaint is that I needed to buy a Quadro P400 to put in the server to get the most out of HW transcoding. ich777 - you owe me $120. *grin*

 

Though not your fault, I do get spammed by the Nvidia driver bug that seems to have been around for quite some time:


Mar 15 13:16:50 Malta-Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

Mar 15 13:16:50 Malta-Tower kernel: caller _nv000708rm+0x1af/0x200 [nvidia] mapping multiple BARs

 

Can you suggest the latest workaround to keep this from filling up my log?

Currently I keep checking to see if it is happening; starting a video in Plex that requires transcoding (then closing the video) will get the driver back to the P8/gpu_idle state.
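On the power-state side, a quick way to check the current state from the terminal rather than by eye (assuming nvidia-smi from the plugin is available):

```shell
# P0 = full power, P8 = idle; power.draw shows the current wattage
nvidia-smi --query-gpu=name,pstate,power.draw --format=csv,noheader
```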


Looking for some input here guys regarding GPU use. I'm on 6.9.1 and using the Nvidia plugin, which enables the Dockers to use all three of my GPUs; only one is used for Emby. Now the issue is when I assign one to a VM I get this error...

 

Mar 15 23:37:01 Ultron kernel: NVRM: Attempting to remove minor device 1 with non-zero usage count!

 

So from what I can gather the Nvidia plugin captures the GPUs and won't allow them to be used by VMs. Using the VFIO option reserves the spare two GPUs, enabling their use by VMs. Okay, great, but this means the VMs must be running, otherwise the cards are in full P0 mode (full power).

 

Is there a way to boot without VFIO using the Nvidia plugin, use the spare cards with the VMs, and then return them to the Nvidia plugin pool when the VM shuts down, for power management and other uses?

Or is there a way to keep the spare cards on VFIO and run a script to put them in P8 mode?

9 minutes ago, david279 said:

Using Nvidia persistence mode may keep the cards at idle. I run this command in a user script when the array starts up, using the User Scripts plugin.

 

nvidia-smi --persistence-mode=1

Will that work if the cards are using VFIO though?

If I haven't got them enabled with VFIO the drivers will set the cards to the correct power state, however this means I can't use them with the VMs as I get this error...

Mar 15 23:37:01 Ultron kernel: NVRM: Attempting to remove minor device 1 with non-zero usage count!

This locks the entire VM section up requiring a reboot.
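For reference, david279's suggestion as a User Scripts "At Startup of Array" script might look roughly like this; it is only a sketch, and whether it helps for cards bound to VFIO is doubtful, since the Nvidia driver never attaches to those:

```shell
#!/bin/bash
# Keep the driver loaded between uses so the cards can settle
# into their idle power state (P8) instead of re-initializing.
nvidia-smi --persistence-mode=1

# Show the resulting state of each GPU
nvidia-smi --query-gpu=index,name,pstate --format=csv
```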

