b3rs3rk

Everything posted by b3rs3rk

  1. Are VENDOR and GPUID set in your cfg file now?
  2. You should select the Vendor and GPU on the GPUStat settings page and click Apply at the bottom to fill those out and then it should work.
  3. Well that's annoying: it's working exactly how it's supposed to. Can you share the contents of your gpustat.cfg file from the /usr/local/emhttp/plugins/gpustat directory?
  4. I should have been more decisive. Sorry. Run this one and send the result: php -r "var_dump(@simplexml_load_string(shell_exec('nvidia-smi -q -x -g GPU-23265d6c-1de2-5786-b964-e20b2a209ad6 2>&1')));"
  5. Okay, so the simplexml_load_string is definitely failing because that's the only time it should return False. Try this (I substituted your UUID so the below should work for your install): php -r "var_dump(shell_exec('nvidia-smi -q -x -g GPU-23265d6c-1de2-5786-b964-e20b2a209ad6 2>&1'));" If it looks like normal XML, try this: php -r "var_dump(@simplexml_load_string(shell_exec('nvidia-smi -q -x -g GPU-23265d6c-1de2-5786-b964-e20b2a209ad6 2>&1')));" If that dumps a SimpleXMLElement, I'm not sure what is wrong as that is basically what the plugin does. If it doesn't, try the commands again but remove the 2>&1 at the end of the shell_exec commands to show stderr. Then bundle the outputs up in a file and paste here for troubleshooting purposes.
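For anyone following along, the check those one-liners are probing can be sketched in Python (this is just an illustration of the same failure mode, not the plugin's actual PHP): simplexml_load_string returns false on empty or malformed input, exactly like the parse error handled below.

```python
# Python sketch of the plugin's XML-validity check. In the PHP version,
# @simplexml_load_string(...) returns false where this returns None.
import xml.etree.ElementTree as ET

def load_smi_xml(raw):
    """Return the parsed root element, or None if nvidia-smi output isn't valid XML."""
    if not raw or not raw.strip():
        return None  # nvidia-smi produced nothing (e.g. bad UUID or driver error)
    try:
        return ET.fromstring(raw)
    except ET.ParseError:
        return None  # an error message or stderr noise came back instead of XML

# A plain-text error (what a failing nvidia-smi prints) fails the check:
print(load_smi_xml("No devices were found"))  # None
# A minimal well-formed document passes:
print(load_smi_xml("<nvidia_smi_log><gpu/></nvidia_smi_log>") is not None)  # True
```

If the first command dumps readable XML but the second still dumps false, the problem is in whatever is getting mixed into the output stream, which is why dropping the 2>&1 is the next step.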
  6. I'm not seeing anything that would cause this to happen. A 302 error will occur when the data retrieved from the nvidia-smi call is not an instance of valid XML or it is empty XML. If it wasn't valid XML, you'd probably be getting a stack error when I try to use @simplexml_load_string on it to turn it into a SimpleXMLElement. When you have the chance, nano the Nvidia.php file: nano -c +287 /usr/local/emhttp/plugins/gpustat/lib/Nvidia.php This should take you directly to line 287 in that file, which should be a blank line. Change that line to read: var_dump($data); exit; Save it and exit nano, then re-run the gpustatus.php file like you did for the troubleshooting steps in the first post (great reading skills btw) and send me the result.
  7. Did you go to the Settings page and set something non-default? Usually the Vendor is set to Change Me on install and has to be set to NVIDIA unless you have an old config file present. EDIT: I see that it is throwing a 302 error. Let me look through the full Nvidia-smi output and see if I can identify the problem.
  8. @ich777 how much do I owe you for being my front line support engineer lol
  9. The first figure is usually dynamic based on the power state. To reduce power consumption, the card uses Gen 1 when idle and ramps up to the maximum as determined by the power state. The value in parentheses is static: it's the maximum supported by your card or your PCI Express bus, whichever is lower. My maximum, for example, displays as Generation 2 because I'm using an older motherboard/chipset, even though the Quadro P4000 I'm using supports 3.0.
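If you want to see both figures yourself, they're in the nvidia-smi -q -x output. Here's a rough Python sketch of pulling them out; the element names follow the XML I've seen from recent drivers, so treat them as assumptions that could shift between driver versions:

```python
# Pull the dynamic (current) vs. static (max) PCIe generation out of
# nvidia-smi -q -x style XML. SAMPLE stands in for real nvidia-smi output.
import xml.etree.ElementTree as ET

SAMPLE = """<nvidia_smi_log><gpu>
  <pci><pci_gpu_link_info><pcie_gen>
    <max_link_gen>2</max_link_gen>
    <current_link_gen>1</current_link_gen>
  </pcie_gen></pci_gpu_link_info></pci>
</gpu></nvidia_smi_log>"""

root = ET.fromstring(SAMPLE)
gen = root.find("./gpu/pci/pci_gpu_link_info/pcie_gen")
current = gen.findtext("current_link_gen")  # drops to 1 at idle power states
maximum = gen.findtext("max_link_gen")      # capped by the card OR the bus
print(f"Gen {current} (max: Gen {maximum})")  # Gen 1 (max: Gen 2)
```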
  10. Deepstack and DisqueTV app detection released. Emby detection has been removed due to false positives. Until we define a method to uniquely determine that ffmpeg is being launched by Emby, it will remain disabled.
  11. Maybe in some cases on Windows there are separate driver packages, but for Linux they are not separate. https://www.nvidia.com/Download/driverResults.aspx/171392/en-us If you click Supported Products you'll see pretty much everything is supported by the one Linux driver package.
  12. Just to check, the parent PID wasn't the same as the PID in nvidia-smi right? I may have something wrong in my thinking here. Ugh, makes it so much harder when I can't test this stuff on my own. I'll see if I can run a deepstack instance and get it to spawn these processes. I'm guessing you need a camera to do it?
  13. @gowg We might have to do something different with this one. Using the same PID from nvidia-smi get the parent PID: ps -o ppid= -p <insert_nvidia-smi_pid> Then take that pid and feed it into the previous command I sent you instead of the PID from nvidia-smi and paste me the result. I'm hoping that the parent thread that spawns these python3 processes has obvious language involving deepstack.
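The ps -o ppid= lookup above can also be done by reading /proc directly, which is handy if you want to script the walk up the process tree. A minimal Linux-only Python sketch (same information ps reads, just parsed from /proc/&lt;pid&gt;/status):

```python
# Look up a process's parent PID by parsing /proc/<pid>/status (Linux only).
# Equivalent to: ps -o ppid= -p <pid>
import os

def parent_pid(pid):
    """Return the PPid of a process, raising if the field is missing."""
    with open(f"/proc/{pid}/status") as fh:
        for line in fh:
            if line.startswith("PPid:"):
                return int(line.split()[1])
    raise ValueError(f"no PPid found for {pid}")

# Sanity check against the interpreter's own parent:
print(parent_pid(os.getpid()) == os.getppid())  # True
```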
  14. Not necessarily. If the full command invocation contains keywords that tie the process to deepstack I can add that as a detection with recent code changes. When deepstack is using the GPU, get the PID from nvidia-smi and then run: echo $(ps -fp <insert_pid_from_nvidia-smi> -o command) And paste me the full result.
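To make the idea concrete, the keyword-based detection described above boils down to something like the Python sketch below. The keyword lists here are made up for illustration; the real detection strings depend on what the ps output actually shows:

```python
# Toy version of keyword-based app detection: given the full command line
# behind a PID (what `ps -fp <pid> -o command` prints), match it against
# known per-app keywords. Keywords below are hypothetical examples.
APP_KEYWORDS = {
    "deepstack": ["deepstack", "intelligencelayer"],
    "plex": ["plex transcoder"],
}

def detect_app(command_line):
    """Return the first app whose keyword appears in the command line, else None."""
    lowered = command_line.lower()
    for app, keywords in APP_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return app
    return None  # unknown process; the widget would show no app badge

print(detect_app("python3 /app/intelligencelayer/shared/detection.py"))  # deepstack
print(detect_app("/usr/bin/ffmpeg -i input.mkv"))                        # None
```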
  15. I updated to 6.9.2 today and updated to the latest NVIDIA driver (465.19.01) and I'm still not having any issues with a Quadro P4000. I've already pushed a fix for the double NVIDIA in the product name which will go in the next release. Other than that, the plugin is displaying data just as I expected.
  16. I don't use any of the data in the <supported_clocks> parent element. It could be entirely non-existent from the nvidia-smi output and it wouldn't make any difference. So whatever you're seeing is probably irrelevant.
  17. You're free to roll back to a previous version of my plugin by manually installing the PKG file from the Github repository if you're insistent that it was from my code changes. All of the files are dated, so you can go back as far as you want. But if your nvidia-smi continues to not display the metrics that aren't updating, it's not going to make any difference as that's where I get the data from. EDIT: After reviewing your setup in your signature block, I don't think anyone can help you unless someone reports a similar problem that has a less complex installation of UnRAID running. To be frank, there are plenty of things that can go wrong or function differently if you're running UnRAID within a VM, especially when attempting to pass through a GPU to it and poll the statistics as if it were bare metal. I would reach out to the folks in the sub forum dedicated to running UnRAID as a VM and see if anyone is having a similar issue and how to resolve it.
  18. I just assumed they didn't have the sensors necessary to probe that data or it wasn't implemented for that GPU's driver. Here's hoping they don't do that to the P4000.
  19. Sure, but I find it hard to believe they deprecated anything with his Quadro being Pascal.
  20. It was just a stab in the dark really. But I don't know what could be wrong with it. My code depends on nvidia-smi functioning properly and near as I can tell my code is doing exactly what it is supposed to do.
  21. Request a feature enhancement on the project's Github page making sure to include the commands shown as running processes in nvidia-smi while the application is using the GPU. Providing at least a 32x32 pixel square transparent png image to display in the widget when the app is running will make it easier for me to add as well. Click here for a good example of such an enhancement request.
  22. @5STAR If nvidia-smi doesn't provide me the data, I can't display it on the widget, and that's exactly what I'm seeing in your output. I'm thinking something is wrong with the NVIDIA driver's NVML library or the nvidia-smi utility itself. You're using driver version 465.19.01; my server is still on 460.56 and it has no such issue. Can you try going into your Nvidia-Driver Plugin settings and rolling back the driver? Just set it to 460.56 (which is known good, for me at least), reboot your UnRAID server, then try again. If the widget works as expected after the rollback, I'd stay on that driver until we can determine what the issue is. If that doesn't fix it, we'll probably have to file a bug with Nvidia or just wait for successive driver releases and hope they alleviate the issue. I don't know, might be a better question for @ich777 as he may have heard something else about this.
  23. Look in the original post of this thread and provide the troubleshooting info at the bottom of it.
  24. AMD Sensor support released for APU (Temp) and dGPU (Temp/Fan/Power).
  25. Interesting. Looks like (at least some of) the dGPUs have fan and power metering as well that can be added.