m0ngr31 Posted December 5, 2023

Summary: A support thread for the apps that I've written or just added to Unraid.

Current Apps:
- DailyNotes
- EPlusTV
- Genmon
- OpenRA Server
- ollama
- ollama-webui
- ComfyUI-Magic
- Refact
- Tabby
- Stable Diffusion
mechxero Posted December 10, 2023

The ollama container seems to have been blacklisted. Is this permanent?
m0ngr31 Posted December 19, 2023 (Author)

On 12/10/2023 at 12:34 AM, mechxero said: The ollama container seems to have been blacklisted. Is this permanent?

I'm not sure. I don't even know how that happened.
Random.Name Posted February 18

I have ollama and the website working, but my GPU (P2000) stays idle while my CPU spikes. Any chance the HW acceleration is not fully working, or am I missing something? How do I go about troubleshooting that?
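A quick way to check whether the container is using the card at all (a sketch, assuming the Unraid Nvidia driver plugin is installed so `nvidia-smi` is available, and that the container is named `ollama` — adjust the name to match your setup):

```shell
# On the Unraid host: watch GPU utilization and VRAM usage live while
# you send ollama a prompt from another window. If utilization stays
# at 0% during generation, inference is running on the CPU.
watch -n 1 nvidia-smi

# Confirm the container itself can see the GPU. If this errors out,
# the GPU was never passed through to the container in the first place.
docker exec ollama nvidia-smi
```

If the second command fails, the problem is the container's GPU passthrough settings rather than ollama itself.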
hernandito Posted March 27

Hello... following up. I don't see @m0ngr31's Ollama template in CA. I am trying @Joly0's template, and I can get it running using the Ollama-WebUI Docker. I do NOT think it is using the Nvidia GPU. I have a GTX 1050 and the Nvidia drivers installed. Is there a way to get my GPU to work with Ollama? Thank you!
Joly0 Posted March 28 (edited)

Hey guys, could you try going into advanced mode while editing the container, removing both variables for the Nvidia devices, and replacing the argument in "Extra Parameters" with "--gpus=all"? That worked for me. If you guys get it to work as well, I will update the template. At least with these settings my GPU is used.

Also, when I created that template the settings were working, so it looks like ollama changed something.

Edited March 28 by Joly0
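For anyone running outside the Unraid template, the suggestion above corresponds roughly to this `docker run` invocation (a sketch, not the exact template — the volume path is an assumption; the image name and port are the standard ones from the ollama/ollama image):

```shell
# Run ollama with the --gpus flag instead of the NVIDIA_VISIBLE_DEVICES /
# NVIDIA_DRIVER_CAPABILITIES environment variables. Requires the NVIDIA
# Container Toolkit (or Unraid's Nvidia driver plugin) on the host.
docker run -d --name ollama \
  --gpus=all \
  -p 11434:11434 \
  -v /mnt/user/appdata/ollama:/root/.ollama \
  ollama/ollama
```

In the Unraid template editor, `--gpus=all` goes into the "Extra Parameters" field in advanced mode.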
hernandito Posted March 29 (edited)

Thank you @Joly0. I have not tried what you described above, but I did a little more research as well. After installing your Docker, I then went into the console and followed this guide: https://www.jeremymorgan.com/blog/generative-ai/run-llm-locally-ubuntu/

I had to install the nvidia-cuda-toolkit by simply entering:

apt install nvidia-cuda-toolkit

I think it's working, but I can't really tell the difference. I only have a 1050 GPU with low RAM. I captured the GPU stats before asking a question and again while it was "thinking" (screenshots not reproduced here); I would have thought everything would be pegged at maximum.

It is working, and it is providing some nice answers. I added the LLaVA model, and it is nice that I can upload an image and it can tell me what it sees. Neat, but not sure how useful it is.

My real goal is to use a text-to-image model. Trying to investigate how to do this... There is also a Docker for this called "Stable-Diffusion WebUI Docker", but you have to build it yourself (mine froze in the middle of it): https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup 😜

** EDIT ** I just realized that @m0ngr31 has a template for this!! Trying it out now. Big download.

** EDIT #2 ** My Nvidia card has too little memory. I guess it was too much to hope for. Not sure if investing in a better GPU is worth it. 😕

It is not easy figuring out the whole AI thing in general terms: OpenAI, Hugging Face, TensorFlow, Conda, the Elon Musk lawsuit, Grok, what is free vs. what you pay for... this is just stuff I am seeing without diving into the topic. It would be nice to have a chunk of popular AI Dockers. Maybe there should be a forum section for AI.

Anyway, excited about this... Thanks, H.

Edited March 29 by hernandito
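Rather than eyeballing the GPU stats, recent ollama builds can report directly which processor a loaded model is running on (a sketch, assuming the container is named `ollama` and a new enough ollama version that supports `ollama ps`):

```shell
# Load a model first by sending it a prompt, then ask ollama where
# the loaded model is running. The PROCESSOR column should show
# "100% GPU" when CUDA is in use; "100% CPU" means the card is not
# being picked up by the container.
docker exec ollama ollama ps
```

Note that a low-VRAM card (like a 1050) may also produce a split such as "40%/60% CPU/GPU" when the model doesn't fully fit in video memory.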