Chad Kunsman

Members
  • Content Count: 55
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Chad Kunsman
  • Rank: Advanced Member

  1. "GPU Reset couldn't run because GPU 00000000:01:00.0 is the primary GPU." is the message I get when I try to reset. Persistence mode on or off makes no difference. Only resetting the Plex docker container causes it to downclock. The hacky solution for scripting may come into play if nothing else works. My 'something else' I'm trying is going to be to give a P400 a shot to see if behavior is any different. I know for a fact its general power usage will be less. Maybe it will also not be affected by this bug. If it is, I'll give the script a try and definitely echo these concerns to the Plex team. Thank you!
  2. By the way, I have half-solved this problem after more testing. The card can and will clock itself down while idle UNTIL it has to perform transcoding work. Once the transcode job is finished, it will never clock itself down again until I restart the Plex docker container. See the moment I restarted the container here: http://i.imgur.com/zeXKiFZ.png See a thread from another user who seems to have the exact same issue here: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685/2 I really hope there's a solution, as this is about $40 a year down the drain in extra power usage just because the card isn't idling as it should.
  3. That's pretty disappointing. I'd like to pursue another angle, then. Is it possible the card is not clocking down because something on the system side is keeping it busy/locked? Is there any way to search for things from that angle? Instead of assuming it doesn't know how to idle, let me assume something is keeping it from idling. But I don't know how to search for that. An nvidia-smi command, maybe? (A rough query sketch is below this list.) Additionally, does this card have some sort of NVRAM on it that would permanently save settings applied to it, even across power cycles or being removed from the computer? I'm trying to understand whether some sort of permanent setting might have been applied to the card that I could undo, for example by moving it to a different OS.
  4. Is there any way to determine why the power state is like this? Why would it be stuck at P0? I bought this card used. Is it possible to permanently set the power state until it is manually reset? Maybe the previous owner did that? If so, I'm wondering how I can reset it. I did try some reset commands through nvidia-smi, but I got errors along the lines of (paraphrasing) "You can't do this because this is your only video card". (A sketch for dumping the performance/clock details is below this list.)
  5. Not sure if this is the best thread for this, but any ideas why my IDLE (as far as I know) power usage on my freshly installed GTX 1060 is 28 watts? It's not doing anything. Idle usage on this card should easily be in the single digits.
  6. How about outgoing? Is that important at all, or of any concern? The client is set to a random outgoing port.
  7. It's PIA. Thank you so much and apologies for missing that in the FAQ.
  8. Hello, what's the best way to validate that the ports (incoming and outgoing) are all set up correctly and that everything is working okay? I am getting lower upload speeds than I would expect and am not sure how to validate that the ports are good end to end. (A rough local-check sketch is below this list.)
  9. Had this happen due to what I am pretty sure is docker containers acting up. Is there any way I can either put limits on how much RAM a docker container can use, or stop Unraid from killing VMs when it thinks it's out of memory? (A sketch of a per-container memory cap is below this list.)
  10. How important is it that containers be stopped in order to perform a backup? I have miscellaneous issues with containers either not coming back up after being shut down or files getting corrupted (Tautulli) on shutdown/startup. So, if possible, I'd like to not stop containers if I don't have to. Will certain files get skipped if they're in use, or something?
  11. If a container were using a lot of RAM, wouldn't I see that represented as 'used' in the chart I pasted above? I'm totally willing to entertain the notion that a container was using a lot of RAM, but I would hope my monitoring tools would be able to see it happening.
  12. One was 1 GB min / 2 GB max, the other is 2 GB min and max. Both are Ubuntu (Lubuntu and Xubuntu). Here's a Grafana snapshot from when the VMs were killed. You can see the RAM usage dip after they were killed. Note the large amount of free (cached) memory.
  13. Sure. Attached. vault-diagnostics-20181013-1328.zip
  14. Noticed my VMs would occasionally be down (once every few weeks) and finally captured some info in the logs, but I don't know how to interpret what I see there. I do know that Unraid thinks it's out of RAM and is shutting down my VMs, but I have 16 GB of RAM and the 'used' amount never really got over about 9 GB; the rest was shown as either cache or free in Grafana. So it seems like something else is going on other than simply running out of RAM. Log file attached. Any ideas what the likely culprit is? (A sketch for pulling the OOM-killer lines out of the syslog is below this list.) Unraid 6.5.3 crash.txt
  15. I have a Splunk instance set up and I've already started sending the Unraid server logs there. Is there a way to send all my container logs there too, somewhat easily? (A sketch using Docker's Splunk logging driver is below this list.) Thank you.
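
For the scripting workaround mentioned in post 1: a minimal sketch, assuming the container is named 'plex' and that polling nvidia-smi once a minute is acceptable; the idle threshold and interval are made up for illustration.

#!/usr/bin/env python3
# Workaround sketch for post 1: restart the Plex container when the GPU
# stays pinned at P0 while idle. Container name, interval, and threshold
# are assumptions, not confirmed values.
import subprocess
import time

CONTAINER = "plex"   # assumed container name
IDLE_CHECKS = 6      # consecutive idle samples before restarting
INTERVAL = 60        # seconds between samples

def gpu_sample():
    # Ask nvidia-smi for the current performance state and GPU utilization.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=pstate,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    pstate, util = [field.strip() for field in out.splitlines()[0].split(",")]
    return pstate, int(util)

idle_count = 0
while True:
    pstate, util = gpu_sample()
    # Count samples where the card is stuck at P0 but doing no work.
    idle_count = idle_count + 1 if (pstate == "P0" and util == 0) else 0
    if idle_count >= IDLE_CHECKS:
        # Restarting the container releases the transcode session so the card can downclock.
        subprocess.run(["docker", "restart", CONTAINER], check=True)
        idle_count = 0
    time.sleep(INTERVAL)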
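
For post 3, a rough way to check whether something is still holding the GPU open; the assumption here is that a lingering compute process or NVENC session is what prevents downclocking.

#!/usr/bin/env python3
# Sketch for post 3: list anything still attached to the GPU.
import subprocess

# Any compute processes still attached to the card.
procs = subprocess.check_output(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv,noheader"],
    text=True,
).strip()
print("compute processes:", procs or "none")

# Any encoder (NVENC) sessions still open.
sessions = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=encoder.stats.sessionCount",
     "--format=csv,noheader"],
    text=True,
).strip()
print("open encoder sessions:", sessions)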
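
For post 4, a sketch that simply dumps the card's performance and clock details, which is where any pinned applications clocks would show up. Resetting applications clocks (commented out) is only an assumption about what a previous owner might have changed.

#!/usr/bin/env python3
# Sketch for post 4: inspect performance state, throttle reasons, and clocks.
import subprocess

# Full performance/clock report, including any configured applications clocks.
subprocess.run(["nvidia-smi", "-q", "-d", "PERFORMANCE,CLOCK"], check=True)

# If applications clocks were set by a previous owner, this clears them
# back to defaults (may require root):
# subprocess.run(["nvidia-smi", "-rac"], check=True)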
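
For post 8, a rough local check. It can only confirm that the client is listening on the incoming port and that outbound connections work; true end-to-end verification of the incoming port needs a check from outside the VPN. The port number is a placeholder.

#!/usr/bin/env python3
# Sketch for post 8: basic local port checks. The incoming port below is a
# placeholder; use whatever port the VPN/client is actually configured with.
import socket

INCOMING_PORT = 51413  # placeholder

# 1) Is the client listening locally on the incoming port?
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    listening = s.connect_ex(("127.0.0.1", INCOMING_PORT)) == 0
print("client listening locally:", listening)

# 2) Can we make outbound connections at all? (Outgoing ports are normally
#    ephemeral, so a random outgoing port is not a concern by itself.)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    outbound_ok = s.connect_ex(("1.1.1.1", 443)) == 0
print("outbound connectivity:", outbound_ok)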
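
For post 9, a sketch of capping a single container's RAM with `docker update`; the container name and the 2 GiB figure are placeholders. On Unraid the equivalent flags can also be added to the container template's extra parameters.

#!/usr/bin/env python3
# Sketch for post 9: hard-limit one container's memory so it cannot starve
# the VMs. Container name and limit are placeholders.
import subprocess

CONTAINER = "tautulli"  # placeholder

subprocess.run(
    ["docker", "update", "--memory=2g", "--memory-swap=2g", CONTAINER],
    check=True,
)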
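
For post 14, a sketch that pulls the out-of-memory lines out of the syslog so it is easier to see which process triggered the kill; the /var/log/syslog path is an assumption (the same lines appear in the diagnostics zip).

#!/usr/bin/env python3
# Sketch for post 14: show only the OOM-killer lines from the syslog.
import re

with open("/var/log/syslog") as fh:
    for line in fh:
        if re.search(r"oom-killer|Out of memory|Killed process", line):
            print(line.rstrip())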
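
For post 15, a sketch of pointing the Docker daemon's default log driver at Splunk's HTTP Event Collector; the token and URL are placeholders, and the same options can instead be passed per container with --log-driver/--log-opt if only some containers should be forwarded.

#!/usr/bin/env python3
# Sketch for post 15: write a daemon.json that sends container logs to Splunk.
# Token and URL are placeholders; restart the Docker service afterwards.
import json

daemon_config = {
    "log-driver": "splunk",
    "log-opts": {
        "splunk-token": "<HEC-token>",                # placeholder
        "splunk-url": "https://splunk.example:8088",  # placeholder
    },
}

with open("/etc/docker/daemon.json", "w") as fh:
    json.dump(daemon_config, fh, indent=2)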