I forced an update to the Docker image for the server (2.00.14) and shut down my node's Docker image. I then added new variables to the Tdarr image template: nodeID, nodeIP, Node Port (as a port), NVIDIA_VISIBLE_DEVICES, NVIDIA_DRIVER_CAPABILITIES, and internalNode, plus the extra parameter "--runtime=nvidia". That didn't work: no node registers in the UI, and there's no mention of one in the logs. So now I'm going to remove all of that, set internalNode=false, and go back to my previous setup with the tdarr_node Docker image (also 2.00.14), configured exactly as it was when it worked before the update from 2.00.13 to 2.00.14.
I assume that won't work either, and if not, I'll just downgrade everything back to 2.00.13. I'm not sure why the variables needed for internal-node use aren't in the template by default. It may be that I don't actually need all of the variables I added, or that I've misconfigured something, but I can't see where I went wrong. This must be affecting basically everyone who was running Tdarr on their Unraid server for local transcode work, so I think the image template needs to change to make this less confusing.
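For reference, here's roughly the docker run equivalent of what I ended up with in the template. This is just a sketch built from the variables I listed above; the exact variable names Tdarr expects may have changed between 2.00.13 and 2.00.14, and the IP, node name, and ports here are placeholders, not my actual values:

```shell
# Hypothetical docker run equivalent of the Unraid template as I configured it.
# Variable names are the ones I added; I'm not certain they're all correct for 2.00.14.
docker run -d \
  --name=tdarr \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e internalNode=true \
  -e nodeID=MyInternalNode \
  -e nodeIP=192.168.1.10 \
  -p 8265:8265 -p 8266:8266 -p 8267:8267 \
  ghcr.io/haveagitgat/tdarr:2.00.14
```

If someone can confirm which of these the 2.00.14 server image actually reads, that would narrow down whether I misconfigured it or the template is missing something.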
Right now I have a 2080 Super that I can't use at all for Tdarr transcode work, which leaves me at about 75% of my previous workload capacity, right in the middle of transcoding my entire media library.
I'm not really buying that the internal node was "interfering", because I saw the node register under its proper name when I first updated everything, but maybe someone here can explain how to verify that, and I'll check.