[Plugin] Linuxserver.io - Unraid Nvidia


Recommended Posts

9 hours ago, Jhp612 said:

Will the RC1 build work for RC3?

If you follow the Unraid upgrade path, you will lose Nvidia support, as you will revert to stock Unraid (no extra drivers).

 

The way to keep it working is to upgrade via the plugin, and they may not necessarily supply builds for every RC version.

 

Long story short: no. If it's available from the plugin, yes; otherwise, sit tight.

Link to comment
6 hours ago, dgreig said:

Are you guys planning on releasing new versions for the other 6.8 RC builds, or just waiting until 6.8 goes final?

Thanks so much for releasing RC1; the 6.8 series has been a long time coming, and still getting to use my P2000 is a fantastic bonus.

 

cheers!

I'll try to make builds for each RC release, but home/work/family gets in the way sometimes, and if there are changes to the build in terms of out-of-tree drivers or other issues, it can take longer. Making a build takes about an hour, so after making too many silly errors I left it running when I went to bed last night, as I needed the sleep. I woke up this morning, checked that it was working as expected, and uploaded it.

  • Thanks 1
Link to comment

Anyone else getting this via the plugin when checking for new builds?

 

Warning: file_get_contents(https://lsio.ams3.digitaloceanspaces.com/?prefix=unraid-nvidia/): failed to open stream: HTTP request failed! HTTP/1.1 503 Service Unavailable in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 45

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 47

Warning: array_keys() expects parameter 1 to be array, null given in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 51

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 51

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 70

Warning: array_multisort(): Argument #3 is expected to be an array or a sort flag in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 73

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 91

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/Unraid-Nvidia/include/exec.php on line 103
 

Link to comment
Anyone else getting this via the plugin when checking for new builds?
Yeah, DigitalOcean has an issue at the minute with its Spaces, and that's where our builds are hosted.
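
If you want to check whether it has recovered yourself, a quick unofficial test from the Unraid terminal is to look at the HTTP status code the bucket listing returns (the URL is the one from the warnings above; 200 means it's back, 503 means it's still down):

# Print just the HTTP status code of the builds bucket listing
curl -s -o /dev/null -w '%{http_code}\n' "https://lsio.ams3.digitaloceanspaces.com/?prefix=unraid-nvidia/"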

Sent from my Mi A1 using Tapatalk

  • Thanks 1
Link to comment
Quote

 

Investigating

Our Engineering team is investigating an issue with Object Storage and Spaces performance in AMS3 region. During this time, you may experience issues with accessing Spaces. We apologize for the inconvenience and will share an update once we have more information.

Posted about 1 hour ago. Oct 19, 2019 - 10:56 UTC

 

 

Link to comment
On 10/18/2019 at 12:32 PM, Pducharme said:

 

Someone already told me this simple rule:

 

  • A GPU can be shared among Dockers (at the same time)
  • A GPU dedicated to a VM is only available to that VM until the VM is powered down

Applying that rule to your question, I would say no, it won't fall back to Quick Sync in the Docker. If you turn off the VM, then maybe, I guess? (I can't test, since I'm running a Ryzen 9 3900X on my Unraid, so no Quick Sync.)

@Pducharme Sorry if I was not terribly clear: the Intel system has both a discrete GPU (a plugged-in card) and Quick Sync, which is part of the graphics integrated into the CPU.

I have the NVIDIA card set as the one for the Dockers and the one VM to use. I know that if I have the VM running, the Dockers cannot use it, but will they fall back to the Quick Sync graphics integrated into the CPU, or just to software?

 

 

Link to comment
2 hours ago, wesman said:

@Pducharme Sorry if I was not terribly clear: the Intel system has both a discrete GPU (a plugged-in card) and Quick Sync, which is part of the graphics integrated into the CPU.

I have the NVIDIA card set as the one for the Dockers and the one VM to use. I know that if I have the VM running, the Dockers cannot use it, but will they fall back to the Quick Sync graphics integrated into the CPU, or just to software?

 

 

I believe this could happen with Emby, as you can select multiple transcoders, but Plex just uses one or the other. From our testing of VMs using GPUs that are assigned to containers, we found it locked the server up.

Link to comment
1 hour ago, j0nnymoe said:

I believe this could happen with Emby, as you can select multiple transcoders, but Plex just uses one or the other. From our testing of VMs using GPUs that are assigned to containers, we found it locked the server up.

Yeah, the one scenario was: if a GPU is being actively used for transcoding when you start a VM with that GPU passed through, Unraid gets locked up.

 

If you start the VM when there is no active transcode, the GPU will no longer be available to the container, so it will fall back to something else with no crashes. Whether it falls back to the iGPU or to software, I don't know; I never tried it. But it's really a Plex question.
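
If you want to check for an active transcode before starting the VM, something like this from the Unraid terminal should show whether anything is still holding the card (just a rough sketch, not something the plugin does for you):

# Show any processes holding the Nvidia device nodes
fuser -v /dev/nvidia*
# The process table at the bottom of nvidia-smi's output also lists active sessions
nvidia-smi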

Link to comment
4 hours ago, Dazog said:

Has anyone written a bash script to run these commands after the array has started?

 

nvidia-smi --persistence-mode=1
fuser -v /dev/nvidia*

 

It allows the cards to go into P8 when not in use (power-saving mode).

I ain't very good at bash scripts.

Use the User Scripts plugin.
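
A minimal version for it could look like this (untested sketch; schedule it to run at the start of the array):

#!/bin/bash
# Rough sketch for the User Scripts plugin, scheduled to run when the array starts.
# Persistence mode keeps the driver loaded so idle cards can drop to P8.
nvidia-smi --persistence-mode=1
# Optional: list anything still holding the Nvidia device nodes.
fuser -v /dev/nvidia*

It should only need to run once per boot, so array start is a reasonable trigger.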

  • Thanks 1
Link to comment
On 10/20/2019 at 7:39 PM, Dazog said:

Has anyone written a bash script to run these commands after the array has started?

 

nvidia-smi --persistence-mode=1
fuser -v /dev/nvidia*

 

It allows the cards to go into P8 when not in use (power-saving mode).

I ain't very good at bash scripts.

You could probably put that in the go file.

Link to comment
2 minutes ago, Sic79 said:

 


Interesting, I have a problem with the P-state on my Nvidia card too. Can you please explain where this go file is? I would like to test whether it can help me work around this temporarily while waiting for a bug fix.

 

The go file is a script that is run when the server boots. It lives at `/boot/config/go`.
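
If you want to try the persistence-mode command there, appending something like this to the end of it should do (untested sketch; in these Nvidia builds the driver is baked into the OS, so the command is already available at boot):

# Appended to the end of /boot/config/go
nvidia-smi --persistence-mode=1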

Link to comment

I'm having some difficulty getting this working. I have installed a P2000 card, installed the plugin, and downloaded and installed the Unraid Nvidia build. After a fresh reboot, I can go to the Settings -> Unraid Nvidia page and see the GPU listed there, but after a few minutes, if I go back, it's gone and all the GPU entries are blank. I am not passing the P2000 to any VM; I have added it only to my Plex Docker.

 

diag attached if it helps.

 

Edit: Never mind, I am an idiot, stupid mistake. All seems to be working now, and thanks for the awesome work.

 

Edited by mattekure
reuploaded new diag.
Link to comment
  • trurl locked this topic
This topic is now closed to further replies.