iammeuru

  1. Really? In Unraid I show as up to date, but I don't see any variables in the template for the internal node, other than the one I added to turn it off explicitly. I'm mid-transcode and am not going to force an update right now just to find out, but maybe I'm misunderstanding what I need to update to get the latest and greatest?
  2. I didn't have those options in the template; I just added the variables myself. I'm now running perfectly after deleting all of them except internalNode on the Tdarr (server) container and setting it to false, with the external node running as its own separate Docker container (see the first sketch after this list). Apparently there was some sort of conflict, but it's working now. Basically, if you're coming from 2.00.13 to 2.00.14 and you had a functional setup: make sure both containers are as up to date as possible, create a new variable in the template called internalNode, and set its value to false. You should then be able to connect as you did before, and it will work. Maybe they'll update the template with the correct variables for internal node usage, or someone will write a guide on getting it running with an internal node, and then maybe I'll switch, but probably not at this point: this works, and I don't see a reason to abandon a functional setup unless someone can describe a tangible benefit beyond shedding the small overhead of a second container. I do applaud the team for trying to get it all into one package for that reason, and it surely would have been the way I went if I'd started a little later than I did.
  3. I forced an update of the server's Docker image (2.00.14) and shut down my node's Docker image. I added new variables to the Tdarr image template for nodeID, nodeIP, a node port (as a port mapping), NVIDIA_VISIBLE_DEVICES, NVIDIA_DRIVER_CAPABILITIES, and internalNode, plus the extra parameter "--runtime=nvidia" (roughly the second sketch after this list), and it didn't work: no node registers in the UI and there's no mention of one in the logs. So now I'm going to remove all of that, set internalNode=false, and try my previous setup with the tdarr_node Docker image (also 2.00.14), configured exactly as it was before the update from 2.00.13 to 2.00.14, when it worked. I assume that won't work either, and if so I'll just downgrade everything back to 2.00.13. I'm not sure why the variables needed for internal node use aren't in the template by default; maybe I don't need all of the ones I added, or I've configured something wrong, but I can't see where. This has to be affecting basically everyone who was running their Tdarr transcode work locally on their Unraid server, so I think the image template needs to change to make this less confusing. Right now I have a 2080 Super I can't use at all for Tdarr transcodes, which leaves me at about 75% of my previous workload capacity, right in the middle of transcoding my entire media library. I'm also not really buying that the internal node was "interfering", because I saw the node register under its proper name when I first updated everything, but maybe someone here can explain how to verify that and I'll check.
  4. I don't know that this is exactly correct. I have not set internalNode=true after the update, and I still have the node Docker image running with all of the appropriate GPU settings (verified to be the same as when it was working under 2.00.13). That node registers, updates plugins, etc., but unlike my other three nodes (which work correctly and transcode perfectly), it hits the following error on every attempted transcode (see the capability check sketched after this list):

     [h264_cuvid @ 0x559e07dba1c0] Cannot load libnvcuvid.so.1
     [h264_cuvid @ 0x559e07dba1c0] Failed loading nvcuvid.
     Stream mapping:
       Stream #0:0 -> #0:0 (h264 (h264_cuvid) -> hevc (hevc_nvenc))
       Stream #0:1 -> #0:1 (copy)
     Error while opening decoder for input stream #0:0 : Operation not permitted
  5. Too bad I already updated everything to 2.00.14. Only the Tdarr node Docker container on my Unraid server itself is having issues; my other three machines running a node are comfortably plugging away and successfully transcoding media. (The server also lives on the same Unraid machine the malfunctioning node was running on.)
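
For reference, here is a minimal sketch of the two-container layout described in post 2, written as plain docker run commands rather than Unraid template fields. The image tags, host paths, ports, and the serverIP/serverPort variable names are assumptions for illustration; only internalNode, nodeID, and the NVIDIA parameters come from the posts themselves.

    # Tdarr server with the built-in node switched off (internalNode=false), as in post 2.
    docker run -d --name tdarr \
      -p 8265:8265 -p 8266:8266 \
      -e internalNode=false \
      -v /mnt/user/appdata/tdarr/server:/app/server \
      -v /mnt/user/appdata/tdarr/configs:/app/configs \
      -v /mnt/user/media:/media \
      ghcr.io/haveagitgat/tdarr:2.00.14

    # Separate external node container (the pre-2.00.14 style setup), pointed at the server above.
    docker run -d --name tdarr_node \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -e nodeID=unraid-gpu-node \
      -e serverIP=192.168.1.10 \
      -e serverPort=8266 \
      -v /mnt/user/appdata/tdarr/configs:/app/configs \
      -v /mnt/user/media:/media \
      ghcr.io/haveagitgat/tdarr_node:2.00.14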
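
And this is roughly what the single-container attempt from post 3 translates to once the Unraid template is expressed as a docker run command: the same NVIDIA parameters and node variables, but attached to the server container with internalNode=true instead of a second container. Values not named in the post (paths, ports, the nodeID and nodeIP strings) are placeholders.

    # Single Tdarr container expected to run its own internal node (the post 3 attempt).
    docker run -d --name tdarr \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -e internalNode=true \
      -e nodeID=unraid-internal-node \
      -e nodeIP=0.0.0.0 \
      -p 8265:8265 -p 8266:8266 -p 8267:8267 \
      -v /mnt/user/appdata/tdarr/server:/app/server \
      -v /mnt/user/appdata/tdarr/configs:/app/configs \
      -v /mnt/user/media:/media \
      ghcr.io/haveagitgat/tdarr:2.00.14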
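
On the libnvcuvid.so.1 error in post 4: with the NVIDIA container runtime, the decode libraries are only mounted into a container when NVIDIA_DRIVER_CAPABILITIES includes video (or is all), so one thing worth checking on the failing node is what it actually received. This is a diagnostic sketch, not a confirmed fix; the container name tdarr_node is assumed.

    # What driver capabilities was the failing node container started with?
    # NVDEC/nvcuvid is only injected when this includes "video" (or is "all").
    docker exec tdarr_node printenv NVIDIA_DRIVER_CAPABILITIES

    # Was the decode library actually mounted into the container?
    # No output here would line up with "Cannot load libnvcuvid.so.1" from h264_cuvid.
    docker exec tdarr_node ldconfig -p | grep -i nvcuvid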