alexbn71 · Posted September 13, 2022

Fast, free, self-hosted Artificial Intelligence Server for any platform, any language.

CodeProject.AI Server is a locally installed, self-hosted, fast, free, and open-source Artificial Intelligence server. There is no off-device or out-of-network data transfer and no messing around with dependencies, and it can be used from any platform, in any language. Runs as a Windows Service or a Docker container.

Docker image: https://hub.docker.com/r/codeproject/ai-server
MountainMining · Posted January 4, 2023

I'm looking to get this set up on my Unraid box. What GPU did you use for this? Are you also using Blue Iris? If so, is Blue Iris installed in a VM on Unraid or on a separate Windows box? Thanks.
reapa · Posted April 24, 2023

Hey, I think you need to add the volume paths, and a GPU template would be really helpful too, e.g. one that includes `--gpus all`.
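For anyone templating this by hand, a rough sketch of the equivalent `docker run` invocation follows. Port 32168 is the server's default; the container name and the host-side appdata paths are placeholders I've assumed for a typical Unraid setup, so adjust them to your own layout.

```shell
# Hypothetical sketch: GPU image with all GPUs passed through.
# Host paths under /mnt/user/appdata are assumptions -- map them to
# wherever your Unraid appdata actually lives.
docker run -d \
  --name codeproject-ai \
  --gpus all \
  -p 32168:32168 \
  -v /mnt/user/appdata/codeprojectai:/etc/codeproject/ai \
  codeproject/ai-server:gpu
```

In the Unraid template the `--gpus all` part goes in the "Extra Parameters" field rather than on a command line.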
JM2005 · Posted August 15, 2023

I updated the Docker container to the latest version, and the module list shows a couple of modules with updates available, but clicking Update does nothing. Reviewing the logs shows the following error:

Module description for 'FaceProcessing' is invalid. A 'pre-installed' module can't be downloaded
lzrdking71 · Posted August 20, 2023

On 8/15/2023 at 12:32 PM, JM2005 said: "I updated the docker to latest version and when you look at the modules loaded it shows a couple modules that have updates available, when you click update, it will not allow you to update the modules. Reviewing the logs and it shows the following error below. Module description for 'FaceProcessing' is invalid. A 'pre-installed' module can't be downloaded"

@JM2005, it looks like there is an active issue for this on GitHub: https://github.com/codeproject/CodeProject.AI-Server/issues/59#issue-1857894572.
lzrdking71 Posted August 20, 2023 Share Posted August 20, 2023 (edited) If you want to use a Coral USB TPU module, I got it to work by setting the docker to privileged....adding the device to the template (see screenshot below)....and then downloading the module in the webui. I got the idea from looking at how the Frigate template in CA handles it then doing it the same way. Edited August 20, 2023 by lzrdking71 Quote Link to comment
gulo · Posted December 29, 2023

Looking for any additional help with the CodeProject.AI Unraid Docker container and a USB Coral. Per the previous post, I added a device to the container using the value "/dev/bus/usb" and set the container to privileged mode. Running "lsusb" in Unraid showed the Coral as "ID 1a6e:089a Global Unichip Corp." Per the official instructions, I installed and reinstalled the ObjectDetection (Coral) module several times until it finally found the Coral, which changed how it appears in the lsusb output to "ID 18d1:9302 Google Inc." I also see "objectdetection_coral_adapter.py: Edge TPU detected" in the CodeProject.AI server log.

However, I am unable to run any images. The first object detection attempt returns the following error: "RuntimeError: Encountered an unresolved custom op. Did you miss a custom op or delegate? Node number 11 (EdgeTpuDelegateForCustomOp) failed to invoke." All subsequent attempts return: "The interpreter is in use. Please try again later". If I restart the container, or stop/start the Coral module, it goes back to the first error message. Any ideas?
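The ID change described above is expected behaviour for the Coral: it enumerates under one vendor ID before the Edge TPU runtime initialises it and re-enumerates under another afterwards. A quick check for either state, assuming `lsusb` is available on the host:

```shell
# Before the runtime touches it, the Coral shows as Global Unichip Corp
# (1a6e:089a); once initialised it re-enumerates as Google Inc. (18d1:9302).
lsusb | grep -Ei '1a6e:089a|18d1:9302'
```

If the device flips back to 1a6e:089a after a container restart, the passthrough may be losing the re-enumerated device, which is one reason the full `/dev/bus/usb` mapping (rather than a single device node) is usually recommended.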
only-university6482 · Posted January 25

Hi all,

I just updated the CodeProject.AI container today and it is no longer working with my GTX 970 GPU; I haven't figured it out yet. It had been working fine for two years. Rebooted, etc. Nothing. Changing back to the CodeProject.AI_ServerGPU container app works fine. Is something in the new image perhaps not compatible with older GTX GPUs? Please let me know if anyone runs CodeProject.AI_Server with a GTX 900-series card, to see whether it detects it. Mine defaults to CPU and I can't seem to change it; it has always picked up the GPU automatically before.

Many thanks
mgiggs · Posted February 12

Hi all,

The web UI keeps reminding me that a later version is available, but I cannot see any way to update the solution. I'm assuming the Docker Hub image needs to be updated to 2.5.x? When does this usually happen? It looks like the latest version has been out for about a month.

Cheers,
Michael
curtis-r Posted February 14 Share Posted February 14 (edited) I was able to update the Docker image last night via the unraid/docker page, by clicking 'update' in the Version column, but I can't update modules (through the CodeProject GUI). This github link from an ealier post describes my issue exactly but claims it was fixed in a release last October. Anyone else having trouble when they try to update a module? Edited February 16 by curtis-r Quote Link to comment
curtis-r · Posted February 15

Now I'm (more) perplexed. I uninstalled two modules I wasn't using, then uninstalled the CodeProject Docker app entirely. I manually deleted the residual CodeProject cache folder through MC and restarted Unraid. Now when I reinstall the CodeProject Docker container, it shows as installed not only the module I can't update, but also the other two modules I had uninstalled. This is on a fresh install! Where is it getting this ghost data from?
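One possible source of the "ghost" state is a host-side volume the template maps into the container: anything written there survives an uninstall and reinstall. A hedged way to check, assuming the common Unraid appdata location and a placeholder container name (yours may differ; the container's actual mappings are on the Docker tab):

```shell
# Look for leftover module/config state that outlives the container.
# /mnt/user/appdata/codeprojectai is an assumed path, not a known default.
ls -la /mnt/user/appdata/codeprojectai/

# Confirm which host paths the running container actually mounts.
docker inspect codeproject-ai --format '{{json .Mounts}}'
```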
5252525111 · Posted February 23

For those running CP.AI using the GPU Docker container: it seems CodeProject hasn't been building it for about 5 months. What you can do is switch to a CUDA container and get the latest version. Simply change `codeproject/ai-server:gpu` to one of the following, depending on your GPU's capabilities:

- `codeproject/ai-server:cuda11_7`
- `codeproject/ai-server:cuda12_2`
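Outside Unraid's template editor, the tag swap above looks roughly like this; the container name and extra flags are placeholders carried over from a typical GPU setup.

```shell
# Sketch: replace the stale :gpu tag with a CUDA-specific one.
docker pull codeproject/ai-server:cuda12_2
docker stop codeproject-ai && docker rm codeproject-ai
docker run -d --name codeproject-ai --gpus all -p 32168:32168 \
  codeproject/ai-server:cuda12_2
```

On Unraid itself, editing the "Repository" field of the existing template to the new tag and hitting Apply achieves the same thing while keeping your path and variable mappings.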
only-university6482 Posted March 22 Share Posted March 22 (edited) On 2/23/2024 at 2:02 PM, 5252525111 said: On 2/23/2024 at 2:02 PM, 5252525111 said: For those running CP.AI using the gpu docker container. It seems like CP hasn't been building it for about 5 months. What you can do is change it to a cuda container and have the latest version. Simply update `codeproject/ai-server:gpu` to one of the following depending on your GPUs capabilities. - `codeproject/ai-server:cuda11_7` - `codeproject/ai-server:cuda12_2` This worked thanks but still no mesh option on this one. Regarding the mesh I have the standard codeproject/ai-server running with its mesh function. Ran up several other CPAI nodes (a few on lxc proxmox debian and alpine containers and one windows installation on a VM. The unraid mesh does not communicate with any - added them to hostnames - they are visible but cannot ping them. UDP port enabled also. The lxc containers read the windows cpai but not the other way around. Would really like to get the mesh working on the unraid cpai container app. What other security/firewall settings do I need to change to get them to communicate with each other? Thanks Edited March 22 by only-university6482 Quote Link to comment
kyoumei · Posted April 11

I am running codeproject/ai-server with the cuda12_2 tag, and it works well. However, I've noticed that the container is 11.1 GB when looking at the Docker container size section. Is this normal? Thanks
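A size in that range is plausible for a CUDA image, since it bundles the CUDA runtime plus the Python ML stacks the modules need. To separate the shared image size from data the container itself has written (the container name is a placeholder):

```shell
# Image size as pulled from the registry.
docker image ls codeproject/ai-server

# SIZE column: the container's writable layer, with the total
# (image + layer) virtual size shown in parentheses.
docker ps -s --filter name=codeproject-ai

# Per-image and per-container breakdown across the whole host.
docker system df -v
```

If the writable layer itself is growing into the gigabytes, downloaded modules are probably being stored inside the container instead of on a mapped volume, which also means they vanish on a container rebuild.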