[Plugin] Linuxserver.io - Unraid Nvidia



This might be completely unrelated, but I thought I would share:

 

I have a P2000 and I added the patch for both decode and encode. When a parity check started running, I was getting random failures in Plex. Sometimes it would buffer endlessly. Sometimes it would start and then stop video. Remote users were unable to view libraries. However, the moment I stopped the parity check, everything went back to normal.

 

I wouldn't report this here, but this setup has been running fine with the same users, devices, and a parity check every month without issues for two years. The only new things are the patch and the latest build of Unraid.

 

I hope this is helpful. Thank you so much for all your hard work.

11 hours ago, mkyb14 said:

think this is what you're asking for?

[screenshots of the Docker template settings attached]

That looks ok, on the off chance, just delete all the nvidia params and type them in manually rather than copy/paste.  I know that sounds weird, but I have a theory.

10 hours ago, CHBMB said:

That looks ok, on the off chance, just delete all the nvidia params and type them in manually rather than copy/paste.  I know that sounds weird, but I have a theory.

OK, so manually typing everything in worked... I even made sure after pasting it in previously to delete any spacing and re-type the last letter to make sure there were no stray spaces...

 

ok, back in business!  Thank you. 

15 hours ago, CHBMB said:

That looks ok, on the off chance, just delete all the nvidia params and type them in manually rather than copy/paste.  I know that sounds weird, but I have a theory.

 

I just did a first-time install and transcoding was not working.  Went to the last page and found this suggestion.  Manually typed in all the parameters in the Plex docker and BAM, now up and running.  Many, many thanks for the hard work on this capability.

8 hours ago, pilgrimHK said:

 

I just did a first-time install and transcoding was not working.  Went to the last page and found this suggestion.  Manually typed in all the parameters in the Plex docker and BAM, now up and running.  Many, many thanks for the hard work on this capability.

Yeah, I think the reason is that when people copy/paste from a code block on the forum, it adds a character that isn't visible.  @SpaceInvaderOne has also noticed it, and I had encountered the issue previously from looking at the Nextcloud upgrade CLI instructions I wrote.
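A quick way to demonstrate the problem locally (the zero-width space below is an illustrative stand-in for whatever invisible character the forum's code blocks insert - an assumption, not a confirmed byte):

```shell
#!/bin/bash
# Two strings that render identically on screen; the first carries a
# zero-width space (U+200B) of the kind a browser copy/paste can pick up.
pasted=$'--runtime=nvidia\xe2\x80\x8b'
typed='--runtime=nvidia'

if [ "$pasted" = "$typed" ]; then
    echo "strings match"
else
    echo "hidden character present"
fi

# od -c exposes the bytes the eye can't see:
printf '%s' "$pasted" | od -c
```

Retyping the value by hand, as suggested above, is the simplest fix.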

On 2/27/2019 at 9:55 PM, Xaero said:

#!/bin/bash

# This should return the name of the Docker container running Plex,
# assuming a single Plex container on the system.
con="$(docker ps --format "{{.Names}}" | grep -i plex)"

echo ""
echo "<b>Applying hardware decode patch...</b>"
echo "<hr>"

# Check whether "Plex Transcoder2" already exists.
exists=$(docker exec -i "$con" stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?)

# stat exits 1 when the file is missing (patch not yet applied); other
# non-zero codes (e.g. docker exec failing) fall through to the else.
if [ "$exists" -eq 1 ]; then
	docker exec -i "$con" mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
	docker exec -i "$con" /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
	docker exec -i "$con" chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
	docker exec -i "$con" chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
	docker restart "$con"
	echo ""
	echo '<font color="green"><b>Done!</b></font>' # Green means go!
else
	echo ""
	echo '<font color="red"><b>Patch already applied or invalid container!</b></font>' # Red means stop!
fi

 

Is there a quick script to reverse this? Not really a Linux guy, and I want to remove it for testing.

 

Edit: OK, I'm dumb. Editing any property on the Docker container seems to reset the patch.

Edited by fr0stbyt3
33 minutes ago, fr0stbyt3 said:

Is there a quick script to reverse this? Not really a Linux guy, and I want to remove it for testing.

 

Edit: OK, I'm dumb. Editing any property on the Docker container seems to reset the patch.

Yep - just force an update of the docker to reset it to stock.

12 hours ago, fr0stbyt3 said:

Is there a quick script to reverse this? Not really a Linux guy, and I want to remove it for testing.

 

Edit: OK, I'm dumb. Editing any property on the Docker container seems to reset the patch.

I'd also suggest using the copy on the gist I linked:
https://gist.github.com/Xaero252/9f81593e4a5e6825c045686d685e2428

It now checks for a couple of things that can happen. For example, with the old version, if you started a VM with the card passed through, all transcodes would stop working until the VM was stopped; now it simply falls back to CPU decoding. It also skips files that use the MPEG-4 AVI container, as they have problems with the ffmpeg build Plex uses thus far.

And yeah, force update, or changing any property of the docker will use a fresh copy of the docker, making this very easy to rollback from.
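For the record, the manual reversal is just undoing the rename the patch performed: remove the wrapper and restore the original binary's name. A minimal sketch, assuming the layout the patch script creates (PMS_DIR stands in for /usr/lib/plexmediaserver inside the container, where each command would be wrapped in docker exec; a force update achieves the same result with less typing):

```shell
#!/bin/bash
# Reverse the hardware-decode patch: the patch renamed the real binary
# to "Plex Transcoder2" and wrote a small wrapper script in its place,
# so restoring stock means overwriting the wrapper with the original.
PMS_DIR="${PMS_DIR:-/usr/lib/plexmediaserver}"

if [ -f "$PMS_DIR/Plex Transcoder2" ]; then
    mv -f "$PMS_DIR/Plex Transcoder2" "$PMS_DIR/Plex Transcoder"
    echo "Patch reversed"
else
    echo "Nothing to reverse - stock layout"
fi
```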

1 hour ago, Xaero said:

I'd also suggest using the copy on the gist I linked:
https://gist.github.com/Xaero252/9f81593e4a5e6825c045686d685e2428

It now checks for a couple of things that can happen. For example, with the old version, if you started a VM with the card passed through, all transcodes would stop working until the VM was stopped; now it simply falls back to CPU decoding. It also skips files that use the MPEG-4 AVI container, as they have problems with the ffmpeg build Plex uses thus far.

And yeah, force update, or changing any property of the docker will use a fresh copy of the docker, making this very easy to rollback from.

Thank you. Again, I appreciate you guys. 

 

Any idea why parity check would cause issues with this?

8 hours ago, fr0stbyt3 said:

Any idea why parity check would cause issues with this?

Parity check must read all disks in the parity array.

 

If any of your docker files (docker.img, appdata) are on the array then your dockers will be impacted. Typically you want your system, domains, and appdata shares to be cache-prefer, and to have all of their files on cache. If any of these files are on the array then docker performance will be impacted due to the slower writes to the parity array, and your dockers will keep parity and array disk(s) spinning. And of course, parity check will be competing for the same disks.

 

You can easily see which disks any user share is using by clicking Compute... for the share on the User Shares page.
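The same answer is available from the command line: a user share is just the union of identically named top-level folders across the array disks and the cache, so listing where those folders exist shows which disks the share touches. A sketch (ROOT and SHARE are parameterised only so it's easy to dry-run; on Unraid the root is /mnt):

```shell
#!/bin/bash
# List every disk/pool that holds part of a given user share by checking
# for the share's top-level folder on each mount point.
ROOT="${ROOT:-/mnt}"
SHARE="${SHARE:-appdata}"

for d in "$ROOT"/disk[0-9]* "$ROOT"/cache; do
    if [ -d "$d/$SHARE" ]; then
        echo "$SHARE has files on: ${d##*/}"
    fi
done
```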

On 3/15/2019 at 8:22 PM, fr0stbyt3 said:

This might be completely unrelated, but I thought I would share:

 

I have a P2000 and I added the patch for both decode and encode. When a parity check started running, I was getting random failures in Plex. Sometimes it would buffer endlessly. Sometimes it would start and then stop video. Remote users were unable to view libraries. However, the moment I stopped the parity check, everything went back to normal.

 

I wouldn't report this here, but this setup has been running fine with the same users, devices, and a parity check every month without issues for two years. The only new things are the patch and the latest build of Unraid.

 

I hope this is helpful. Thank you so much for all your hard work.

Also know that the encode process is heavily impacted by read and write performance. I'm not sure how nvdec handles its buffer queueing, but if the buffer isn't filled with enough data, you will notice the video stop playing. This is read-limited performance, and it would be heavily impacted by a parity check, especially for high-bitrate media.

 

The nvenc side of the house is limited by how much data is being fed into it by the decoder and by the write speed of the destination media. If you are transcoding to tmpfs (RAM), this will almost never be your bottleneck, as the encoded media is typically much smaller and lower-bitrate than the source media.
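As a rough sanity check on the read-limited side, converting stream bitrate (megabits per second) into disk throughput (megabytes per second) shows how little sustained bandwidth a transcode's source actually needs, and how little a contended disk has to lose before the decoder's buffer runs dry:

```shell
#!/bin/bash
# Back-of-envelope: sustained read throughput a transcode's source file
# needs. 8 bits per byte, so Mbps / 8 = MB/s.
for bitrate_mbps in 8 40 80; do
    echo "${bitrate_mbps} Mbps source needs ~$(( bitrate_mbps / 8 )) MB/s of reads"
done
```

A healthy spinning disk manages 150+ MB/s sequentially, but a parity check turns playback reads into contended seeks, so even the modest ~5 MB/s a 40 Mbps remux needs can go unmet.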

On 3/14/2019 at 8:16 PM, jwoolen said:

I was able to install the plugin and get Unraid Nvidia installed, but after I reboot, my system hangs at this message:

 

Freeing SMP alternatives memory: 32K

 

I’ve disabled everything I can think of. I’m down to one processing core, one stick of RAM, all PCH devices disabled, hyperthreading disabled. Nothing gets me past that point. My system works fine with the standard version of unRAID. What am I doing wrong?

 

Core system specs:

CPU: 9980XE

MB: Rampage VI Extreme Omega

Cache: Samsung 970 Pro 1TB

Memory: G.Skill Trident Z RGB 4x16GB DDR4-3200MHz 14-14-14-34

 

For anyone else who may be hitting this wall: I was able to get unRAID Nvidia to boot successfully with the hardware above by disabling Intel Speed Shift in the BIOS. This wasn't necessary with the normal unRAID build. Not sure how significant this is.

Edited by jwoolen
9 hours ago, Xaero said:

Also know that the encode process is heavily impacted by read and write performance. I'm not sure how nvdec handles its buffer queueing, but if the buffer isn't filled with enough data, you will notice the video stop playing. This is read-limited performance, and it would be heavily impacted by a parity check, especially for high-bitrate media.

 

The nvenc side of the house is limited by how much data is being fed into it by the decoder and by the write speed of the destination media. If you are transcoding to tmpfs (RAM), this will almost never be your bottleneck, as the encoded media is typically much smaller and lower-bitrate than the source media.

Plex has its own SSD. I use the same SSD for transcoding. The data is read from the array like normal. Could it be as simple as I was saturating the SSD?

20 hours ago, fr0stbyt3 said:

Plex has its own SSD. I use the same SSD for transcoding. The data is read from the array like normal. Could it be as simple as I was saturating the SSD?

I highly doubt it, but it's possible. If you were encoding to platters, sure. But on an SSD you really shouldn't be bottlenecking. I think even 8K footage would encode to an SSD okay, depending on the workload it was under.

On 3/13/2019 at 3:57 AM, Xaero said:

There's a sidebar entry titled nvidia smi. Click on that. Note that the command is "nvidia-smi" but the configuration option is "nvidia_smi: yes" 

If you have entered everything correctly, and restarted the docker per the instructions it should show. 

 

Alternatively, you could add my script from a couple posts ago to userscripts and run it.

Used your script, bashed into the container, and manually checked the Python file - the setting is in the file! Appreciate the script, as I hadn't realized this setting was needed in addition to making the hardware available to the container. I assume by sidebar you mean the right-hand side where all of the various charts live. Despite adding the hardware to the container and modifying the Python file - with a reboot - no dice :( I've even done a search of the page for the word NVIDIA and not found it, lol. Is this a specific plugin that you've also added, or is the functionality in NetData already? Standard NetData container, right? Mine is from titpetric/netdata. I can run the SMI command in the container by bashing in - it sees the hardware. Very strange; I must be missing something simple. Is it buried under one of the headings that can expand?

 

On 3/13/2019 at 4:11 AM, Allram said:

 

Thanks, the link made it very easy - much appreciated! I'll get it yet :D


[screenshot: the netdata sidebar showing the nvidia smi entry]

After running the script posted, you can see the nvidia smi entry highlighted in purple.
And yes, this is with titpetric/netdata from Community Apps.
Do you have the container set to save its configuration to an appdata folder, perhaps? If so, I think on startup it copies or symlinks the one in the appdata folder over the top of the one that script writes to. It also overwrites the file every time the container updates.
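Since an update wipes the change, one way to cope is to make the edit idempotent and re-run it after each update. A hedged sketch (CONF points at netdata's python.d.conf; inside the titpetric/netdata container the path is assumed to be /etc/netdata/python.d.conf, and you'd run this via docker exec or a user script):

```shell
#!/bin/bash
# Idempotently enable the nvidia_smi collector in netdata's python.d.conf.
# Note: the sidebar entry reads "nvidia smi", but the config key uses an
# underscore: "nvidia_smi: yes".
CONF="${CONF:-/etc/netdata/python.d.conf}"

if [ -f "$CONF" ] && ! grep -q '^nvidia_smi:' "$CONF"; then
    echo 'nvidia_smi: yes' >> "$CONF"
fi
# Afterwards, restart the container so python.d re-reads the file, e.g.:
#   docker restart netdata
```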

Edited by Xaero

First off, thank you so much to the LSIO team for this.  You made this process so easy, and I know full well how much time it took to get this working so well.

 

I think I have this working and I see the (hw) next to my test transcode, but watch nvidia-smi is showing nothing under processes (first pic).  It's definitely reading that there is one, because when I stop the video it shows "No processes found", as you can see in the second pic.

 

[screenshots: nvidia-smi output during and after the test transcode]

Edited by IamSpartacus
1 hour ago, Pducharme said:

Run the nvidia-smi from Unraid cli directly, not from the docker cli, this way youll see the process name when it's in use.

 

Ahhh sweet, that worked thanks.

Any reason why, during mobile syncs, I'm still seeing very high CPU usage (testing with the same video I'm testing live transcoding with)?  I see the GPU is being used, but there's still a lot of CPU usage during the process as well.  Lack of decode support?

 

EDIT:  Never mind, found the decode patch script.

Edited by IamSpartacus
On 3/9/2019 at 8:50 PM, Xaero said:

That is something that would need support added for the netdata docker. There's a plugin here:
https://github.com/coraxx/netdata_nv_plugin

That would allow you to monitor various GPU statistics via netdata. Note that it doesn't monitor the nvdec and nvenc pipelines, so you wouldn't be able to see the transcoding usage - just the memory usage.

Edit:
Updated my script on gist;
https://git.io/fhhe3

 

What's the best way to go about getting this script to run automatically after every restart/update of the Plex container?

Edited by IamSpartacus
6 hours ago, IamSpartacus said:

 

What's the best way to go about getting this script to run automatically after every restart/update of the Plex container?

You don't have to run it if the container is restarted, only if it is updated, force updated, or removed and reinstalled.
That said, there isn't really a good way to fire events based on container update/restart at the moment (at least not built into Unraid, or any plugins).

If you are using the CA Auto Updater and have it scheduled to update the container, you can also use CA User Scripts and schedule the script to run on the same schedule as the CA Auto Updater, just five minutes later; that way it will run after the container has been updated.  We could ask Limetech, or the CA User Scripts or CA Auto Updater devs, if they can add an event for container update.  That said, I just have the script set to run every day after my containers are scheduled to update, and it works fine.
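To make the five-minute offset concrete, here is roughly what the two schedules look like expressed as cron entries (the auto-update command and the user-script path are hypothetical placeholders; in practice you set both through the plugins' schedule settings rather than editing cron directly):

```shell
# Daily container update check at 02:00 (CA Auto Updater's job)...
0 2 * * *   <CA Auto Updater container check - placeholder>
# ...and the nvdec patch script five minutes later, after the freshly
# updated (stock) container has replaced the patched one.
5 2 * * *   bash /boot/config/plugins/user.scripts/scripts/plex_nvdec_patch/script
```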

1 hour ago, Xaero said:

You don't have to run it if the container is restarted, only if it is updated, force updated, or removed and reinstalled.
That said, there isn't really a good way to fire events based on container update/restart at the moment (at least not built into Unraid, or any plugins).

If you are using the CA Auto Updater and have it scheduled to update the container, you can also use CA User Scripts and schedule the script to run on the same schedule as the CA Auto Updater, just five minutes later; that way it will run after the container has been updated.  We could ask Limetech, or the CA User Scripts or CA Auto Updater devs, if they can add an event for container update.  That said, I just have the script set to run every day after my containers are scheduled to update, and it works fine.

 

Ahh, I've been playing around with a test container for all this, so each time I've been adding/removing paths - that's why it's been needing to be run again.  Makes sense now.

 

I only do manual container updates so I can easily just run the script on demand anytime I update Plex.


Tested transcoding six 4K HDR10 50-80 Mbps files down to 2-4 Mbps streams all at once.  Obviously not at all a real-world test, because I'll never be transcoding 4K (at least not on purpose), but it was cool to see how well my 1080 Ti handled it.

 

Thanks again so much to the LSIO team.  I owe you all a round.  Donate link?

 

[screenshot: nvidia-smi during the six-stream test]

Edited by IamSpartacus
  • trurl locked this topic
This topic is now closed to further replies.