[Support] m0ngr31 - Dockers


Hello... following up... I still don't see @m0ngr31's Ollama template in CA.

 

I am trying @Joly0's template and can get it running with the Ollama-WebUI docker, but I do NOT think it is using the NVIDIA GPU. I have a GTX 1050 and the NVIDIA drivers installed.

 


 

Is there a way to get my GPU to work with Ollama?

 

Thank you!

 

 


Hey guys, could you try going into advanced mode while editing the container, removing both variables for the NVIDIA devices, and replacing the argument in "Extra Parameters" with "--gpus=all"?

That worked for me. If you get it to work as well, I will update the template.
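
For reference, outside of the template that flag is roughly equivalent to a plain docker run like this (port and in-container path are Ollama's defaults; the appdata path is just an example for Unraid, adjust to your setup):

docker run -d --name ollama \
  --gpus=all \
  -p 11434:11434 \
  -v /mnt/user/appdata/ollama:/root/.ollama \
  ollama/ollama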

 

At least with these settings my GPU is being used: [screenshot]

 

Also, when I created the template those settings were working, so it looks like Ollama changed something.

Edited by Joly0

Thank you @Joly0. I have not tried what you described above, but I did a little more research as well.

 

After installing your Docker, I then went into the console and followed this guide:

https://www.jeremymorgan.com/blog/generative-ai/run-llm-locally-ubuntu/

 

I had to install the nvidia-cuda-toolkit by simply entering:

apt install nvidia-cuda-toolkit

 

I think it's working, but I can't really tell the difference. I only have a 1050 GPU with low VRAM. Here are the GPU stats before asking a question, and then while it is "thinking". I would have thought everything would be pegged at maximum.
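
If anyone wants to run the same check from the console, watching nvidia-smi while a prompt is running shows whether GPU memory and utilization actually move (I'm assuming the container is named "Ollama"; adjust to yours):

# on the Unraid console (host side):
watch -n 1 nvidia-smi
# or inside the container, assuming the NVIDIA runtime is passed through:
docker exec -it Ollama nvidia-smi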

 

Before asking a question: [GPU stats screenshot]

While processing my question: [GPU stats screenshot]

 

It is working.... and it is providing some nice answers.

 

I added the LLaVA model, and it is nice that I can upload an image and have it tell me what it sees. Neat, but I'm not sure how useful it is.
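
For anyone who wants to script it instead of using the WebUI, the same thing can be done against Ollama's HTTP API; the image is sent base64-encoded in the "images" field (11434 is the default port, and I'm assuming the model was pulled as "llava"):

curl http://localhost:11434/api/generate -d '{
  "model": "llava",
  "prompt": "What is in this picture?",
  "images": ["<base64 of the image>"]
}'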

 

My real goal is to use a text-to-image model, and I'm trying to figure out how to do this... There is also a Docker for this called "Stable Diffusion WebUI Docker", but you have to build it yourself (mine froze in the middle of the build): https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup 😜
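
If I'm reading the linked Setup wiki right, the build boils down to a two-step docker compose: first downloading the models, then building the chosen UI profile (profile names are from my reading of the wiki, so double-check there):

docker compose --profile download up --build
docker compose --profile auto up --build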

 

** EDIT **

I just realized that @m0ngr31 has a template for this!! Trying it out now. Big download.

 

** EDIT #2 **

My NVIDIA card has too little memory. I guess it was too much to hope for. Not sure if investing in a better GPU is worth it. 😕

 

It is not easy figuring out the whole AI thing in general terms: OpenAI, Hugging Face, TensorFlow, Conda, the Elon Musk lawsuit, Grok, what is free vs. what you pay for... this is just stuff I am seeing without diving into the topic.

 

It would be nice to have a collection of popular AI Dockers... Maybe there should be a forum section for AI.

 

Anyway, excited about this...

 

Thanks,

 

H.

 

Edited by hernandito
