Xaero last won the day on March 10

About Xaero

  • Rank
    Advanced Member


  1. @CHBMB I too see this high power consumption, and I know why it's happening. The nvidia driver doesn't initialize power management until an Xorg server is running. The only way to force a power profile on Linux currently is to use nvidia-settings, like so: nvidia-settings --ctrl-display :0 -a "[gpu:0]/GPUPowerMizerMode=2" which requires a running Xorg display. I've been digging around in sysfs to see if this value is stored anywhere else, but there doesn't seem to be; it looks like the cards are locked into performance mode... Perhaps this is worth bringing up with nvidia? In the meantime, I'm going to continue digging to see if I can find a way (perhaps an nvidia-settings docker?) to force the power state.
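     A minimal sketch of how you could at least detect the stuck state from a script, without an Xorg display. The P0/P8 state names come from nvidia-smi itself, but the check_stuck helper is my own assumption; in real use the two arguments would come from nvidia-smi --query-gpu=pstate,utilization.gpu --format=csv,noheader,nounits:

```shell
#!/bin/sh
# check_stuck <pstate> <gpu-util%>: report whether the card looks stuck
# in the P0 (maximum performance) state while doing no work.
check_stuck() {
  pstate=$1
  util=$2
  if [ "$pstate" = "P0" ] && [ "$util" -eq 0 ]; then
    echo stuck    # idle, yet still in the highest performance state
  else
    echo ok       # either busy, or already in a lower power state
  fi
}

# Real use (hypothetical; tr strips the comma from the CSV output):
#   check_stuck $(nvidia-smi --query-gpu=pstate,utilization.gpu \
#     --format=csv,noheader,nounits | tr -d ',')
```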
  2. You should be providing that library in your docker container if you need nvidia-settings. If you are running nvidia-settings on unraid itself (not sure why you would need to), then you would need to grab the slackware packages for that library. That library is part of the Xorg project and has nothing to do with Nvidia driver function outside of the GUI tool.
  3. I've moved my script from a gist to a dedicated github repository. I was trying to avoid this, as this script shouldn't need to exist in the near future, and it's a single source file with a pretty basic function. I've also created a dedicated thread on the forum for this script and any issues that result from using it. @CHBMB, please feel free to point people to either the repository or that thread for issues you believe to be caused by the wrapper script. Anybody using the wrapper script originally posted to Reddit, the Plex forums, or earlier in this thread, be advised that you *should* move to the newer script written by Revr3nd and facilitated by my User Script. The nvidia decoder doesn't like certain formats, and those aren't filtered by earlier versions of the script. If you use those formats with a decode script that doesn't filter, you WILL have problems. This is likely the reason Plex has been so reluctant to enable hardware decoding on Linux for nvidia. Emby is likely already filtering which content is transcoded by the GPU out of the box, though I have not taken a look under the hood.
  4. This post is under construction

     Plex nvdec wrapper script

     Please use this thread to report issues or discuss the use of the nvdec wrapper script. DO NOT report issues resulting from the use of this script to Unraid, LS.IO, Revr3nd, Plex, or nvidia.

     What is this?
     This is a wrapper script to enable nvidia-based hardware decoding in Plex dockers running on unraid-nvidia. You must be running an unraid-nvidia build and have a working transcode environment using your nvidia card for this script to do anything. To find out more about the unraid-nvidia project and install it yourself, see this post.

     How do I get it?
     Click here to visit the github repository, where you will find detailed instructions on how to set up this script.

     Reporting issues
     At a minimum, please provide a brief description of the issue you are facing and a copy of the System Log from the current boot WHILE the issue is present. Do not reboot until after you have copied the log. To post the log, please use a paste service like paste.ubuntu.com rather than pasting into the forum directly. Full diagnostic zips may be required, but are generally overkill for troubleshooting what's happening with pci-e devices. In particular, this advice applies to anyone reporting that use of this script caused their card to "drop" from unraid-nvidia, as that may be something more serious that needs to be submitted through the proper channels. Please also feel free to mention any improvements for this OP.
  5. I've mentioned, at the top of my script, to post any issues using it on the gist itself rather than reporting them to Plex, Unraid, LSIO, or Revr3nd (the guy whose wrapper my script downloads), so that I can filter out issues caused by error on my part or the end user's. My apologies, since this was apparently insufficient. I'll go ahead and make a thread, though my ability to provide support will be limited, as I am currently without a PC until it's unpacked.
  6. I'm not at any of my normal computers, but knowing the codes for these hidden characters can be useful when diagnosing this type of problem. It's possible that it's something simple, like not consistently using UTF-8 (i.e. it's set as a meta tag, so the client renders with it, but the server itself isn't aware of the character set being used, or the content being submitted is in a different character set).
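     One quick way to test that character-set theory is to validate the raw bytes as UTF-8 with iconv. This is a generic sketch, not specific to any forum software, and the filenames are placeholders:

```shell
#!/bin/sh
# is_utf8 <file>: print "valid" if the file's bytes decode cleanly as
# UTF-8, "invalid" otherwise. A failure suggests the content was
# produced in a different character set (e.g. Latin-1) than the page
# declares in its meta tag.
is_utf8() {
  if iconv -f UTF-8 -t UTF-8 "$1" >/dev/null 2>&1; then
    echo valid
  else
    echo invalid
  fi
}
```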
  7. I think these concerns are best directed at the nvidia team, specifically the nvidia-docker team. Plex can't really do anything about what the driver or kernel decides to do with the card when it's done using it. The kernel or driver should be telling the card to enter a different P-state when it's idle, and that's not happening.
  8. This information could prove useful for creating an (albeit hacky) solution, if need be. I don't currently have any hardware set up where I can test anything (my server is in boxes, currently). One thing you may try is enabling persistence mode: nvidia-smi -pm 1 This will cost a couple of watts of idle usage, but will force the drivers to stay loaded even when no job is running. It's possible the drivers are exiting as soon as the transcode jobs finish and not changing the power state back to idle. Or, if it's already enabled, you could try disabling it: nvidia-smi -pm 0 A hacky, scripted solution would be to monitor the nvidia-smi output for a condition of both LOW GPU, NVENC, and NVDEC utilization and a HIGH power state, and issue nvidia-smi --gpu-reset, which would reset the GPU, allowing it to idle again. Both of these are hacky workarounds. I too would echo the Plex team on this: post on the nvidia developer forums with this information, and in particular point out the use of the new nvidia docker blobs, as it could quite possibly be an issue there. Once I have my server up and running again, I'll have a poke at replicating and/or resolving this issue. https://devtalk.nvidia.com/
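     The hacky monitor described above can be sketched as a small decision function. The 30 W idle threshold and the dmon field numbers in the comment are assumptions you would tune for your card and driver version:

```shell
#!/bin/sh
# should_reset <power_watts> <sm%> <enc%> <dec%>: decide whether the GPU
# looks wedged, i.e. drawing significant power with every engine idle.
should_reset() {
  pwr=$1; sm=$2; enc=$3; dec=$4
  if [ "$sm" -eq 0 ] && [ "$enc" -eq 0 ] && [ "$dec" -eq 0 ] \
     && [ "$pwr" -gt 30 ]; then
    echo reset
  else
    echo leave-alone
  fi
}

# Real use might look like this (the awk field numbers are a guess,
# check your own `nvidia-smi dmon` header before trusting them):
#   set -- $(nvidia-smi dmon -c 1 | awk '!/^#/ {print $2, $5, $7, $8}')
#   [ "$(should_reset "$@")" = "reset" ] && nvidia-smi --gpu-reset
```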
  9. That's the GPU core utilization; it shouldn't really increase with nvenc or nvdec usage. That's the whole point of nvenc and nvdec: giving gamers a way to accelerate video encoding for streaming. For a streaming site like Netflix, it doesn't really make sense to transcode video live for each stream; they just store the video pre-encoded. For a gamer to stream to Twitch, though, video needs to be encoded on the fly without impacting rendering performance; enter nvenc and nvdec. The spike you are seeing is a little suspicious, but the important metric to look at is the enc and dec columns of nvidia-smi dmon.
  10. Where are you looking to see the utilization? The default nvidia-smi screen shows fan speed and GPU core utilization percentages, which aren't applicable to the nvenc/nvdec pipelines. You'll want to use nvidia-smi dmon to get the columns of percentages, and pay attention in particular to the enc and dec columns. If you watch them for the duration of a short video, you should see how it works: it fills a buffer rapidly with video, and then idles. With multiple streams it will simply use the idle durations from the other streams to buffer a new stream. You'd need about a dozen or more streams before they'd have to double up the duty cycle, and that's where you MIGHT start seeing decreased performance. Of course, this also means you would need to be able to DECODE fast enough to feed 12+ simultaneous encodes, which is probably more of a problem.
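      A sketch of pulling just those two columns out of the dmon output. Rather than hard-coding positions, it reads the column names from dmon's own header row; the sample header layout in the test is from my machine's driver and may differ on yours:

```shell
#!/bin/sh
# enc_dec: filter `nvidia-smi dmon` output (on stdin) down to the enc
# and dec utilization columns, located by name from the header row.
enc_dec() {
  awk '
    $1 == "#" && $2 == "gpu" {      # header row: "# gpu pwr ... enc dec ..."
      # data rows have no leading "#", so each data field sits one
      # position left of its header field
      for (i = 2; i <= NF; i++) {
        if ($i == "enc") e = i - 1
        if ($i == "dec") d = i - 1
      }
      next
    }
    $1 == "#" { next }              # units row, skip it
    e && d { print $e, $d }
  '
}

# Real use:  nvidia-smi dmon | enc_dec
```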
  11. If you purchased this second hand, it's possible that someone flashed a custom BIOS designed for overclocking or cryptocurrency mining. To undo that, you'd want to find the OEM firmware for that card and flash it; know that flashing a card ALWAYS poses the risk of bricking it. It's also possible that the fan motor is using a decent amount of power, as it seems to be idling at 40% when it could turn the fan completely off at a lower temperature. It's also a higher-wattage GPU, and I'm not certain what the manufacturer's idle power consumption claims are, or how accurate those claims are.
  12. The Nvidia Linux blobs only support manipulating the power states through the PowerMizer and Coolbits flags for X11. Since the driver isn't being used to initialize a display, the card does its own default power management and software has little control. You MAY be able to force lower power states by echoing values to sysfs nodes for the card, which can be found in some DRM subfolder of the /sys directory. This probably won't persist, and could lead to instability.
  13. Use the older wrapper script. Without the marap filter, it's just `exec Plex Transcoder2 -hwaccel qsv "$@"` -hwaccel should only affect the decoder; the encoder is set with the encoder parameter later in the string, so this should work to enable quicksync for decode and nvenc for encode. YMMV.
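      To make that concrete, here is the whole wrapper as I understand it, wrapped in a function so it can be exercised; the real script would be just the body, with exec in front of the command so the wrapper is replaced by the real transcoder. The PLEX_REAL variable and the default path are assumptions for illustration:

```shell
#!/bin/sh
# plex_wrapper: stand-in for the wrapper script. It calls the renamed
# real binary ("Plex Transcoder2") with -hwaccel qsv inserted ahead of
# Plex's own argument list, so decode goes through QuickSync while the
# encoder flag Plex appends later still selects nvenc.
plex_wrapper() {
  REAL="${PLEX_REAL:-/usr/lib/plexmediaserver/Plex Transcoder2}"
  "$REAL" -hwaccel qsv "$@"
}
```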
  14. https://stevens.li/guides/video/converting-hdr-to-sdr-with-ffmpeg/ Some good evidence of the issues with HDR tone mapping and FFmpeg, and some really promising results using some command-line switches.
  15. I'll have to add a check to see if the container exists and is running, and start it if not... Also, the script only needs to be run if the container has been updated, or its configuration has been edited and saved (which causes a force update).
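      A sketch of that check, assuming the container is managed with plain docker commands; "plex" is a placeholder container name, and the state strings are the ones docker inspect -f '{{.State.Status}}' reports:

```shell
#!/bin/sh
# container_action <status>: map a container status (or "missing" when
# `docker inspect` fails because the container doesn't exist) to the
# action the script should take.
container_action() {
  case "$1" in
    running)        echo nothing ;;  # already up, nothing to do
    exited|created) echo start   ;;  # exists but isn't running
    *)              echo skip    ;;  # missing, paused, dead: handle manually
  esac
}

# Real use:
#   status=$(docker inspect -f '{{.State.Status}}' plex 2>/dev/null || echo missing)
#   [ "$(container_action "$status")" = "start" ] && docker start plex
```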