grtgbln (Author) Posted July 29

1 hour ago, UnraidTobias said: Hi @grtgbln, can you please provide a correct support-forum link in your gpt4all container and add an example of how to set it up properly, with an LLM and a frontend as examples? That would be so helpful, probably for many of us. Thanks so much in advance!

The support link on the template points to the GitHub page for GPT4All, which contains documentation about the project. Please direct your project-specific questions there.
Aegisnir Posted August 4

Hello. I'm trying out your Automatic1111 container, but I can't find where the webui.sh file is located. I can see it inside the container via the command line, but not inside the appdata directory when I navigate through the Unraid file system or a file browser. Can you please advise where this and the rest of the data is located? I assume it's writing directly into docker.img without using the appdata folder?
UnraidTobias Posted August 4 (edited)

On 7/29/2024 at 10:16 PM, grtgbln said: The support link on the template points to the GitHub page for GPT4All, which contains documentation about the project. Please direct your project-specific questions there.

Hi, I found out that the issue on my side is that the Nvidia card I have is too old, so the Nvidia drivers do not get detected. But according to the GPT4All GitHub readme, a GPU is not needed at all. Is there an option to provide the container without the Nvidia/GPU requirement, or make it optional? Or could you elaborate a bit on why it is necessary? Thank you

Edited August 4 by UnraidTobias
Krakout Posted August 10

Can someone help me install LocalAI with AMD support? I don't understand the instructions about adding devices and variables: "For AMD GPU support, add /dev/kfd and /dev/dri each as a Device and add the required Variables: https://localai.io/features/gpu-acceleration/#setup-example-dockercontainerd" Thanks!
bmartino1 Posted August 10 (edited)

10 hours ago, Krakout said: Can someone help me install LocalAI with AMD support? I don't understand the instructions about adding devices and variables...

Not possible; the models won't use the AMD GPU. I recommend using the AIO CPU version of the image instead: localai/localai:latest-aio-cpu. To pass an AMD GPU (if any) through to a Docker container, add the following to the Extra Parameters option:

--device=/dev/dri --device=/dev/kfd

Edit: I see they finally updated and fixed it.

Edited August 10 by bmartino1 (redaction)
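For anyone running the container outside of the Unraid template, the settings above translate into a docker run invocation roughly like this. This is a sketch only: the image tag and device flags come from the post above, while the port and host model path are assumptions to be checked against the LocalAI docs.

```shell
# Sketch of a LocalAI container with AMD ROCm device passthrough.
# /dev/dri and /dev/kfd must exist on the host (AMD GPU with amdgpu driver).
# The 8080 port and the host-side models path are assumptions, not from the post.
docker run -d --name localai \
  --device=/dev/dri \
  --device=/dev/kfd \
  -p 8080:8080 \
  -v /mnt/user/appdata/localai/models:/build/models \
  localai/localai:latest-aio-cpu
```

In the Unraid UI, the two `--device` flags go in Extra Parameters, while the port and volume are added as a normal Port and Path on the template.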
EvilOni Posted August 10

Hi, I managed to get the AnythingLLM Docker image running. It works fine for RAG tasks, however I have an issue with attaching URLs via the bulk web scraper. I see the following error in the log:

[collector] info: Discovering links...
[collector] error: Failed to get page links from https://learn.microsoft.com/en-us/azure/well-architected/reliability. Error: Failed to launch the browser process!
[1006:1006:0810/184605.831237:FATAL:zygote_host_impl_linux.cc(127)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/main/docs/linux/suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.

Any help would be appreciated.
bmartino1 Posted August 10 (edited)

14 minutes ago, EvilOni said: Hi, I managed to get the AnythingLLM Docker image running. It works fine for RAG tasks, however I have an issue with attaching URLs via the bulk web scraper. I see the following error in the log: [collector] error: Failed to get page links from https://learn.microsoft.com/en-us/azure/well-architected/reliability. Error: Failed to launch the browser process! ... No usable sandbox! ... Any help would be appreciated.

For Ameren PSP (Power Smart Pricing) I ran a Python script for Home Assistant to scrape the table data for the price at each hour. You may be hitting an issue where the site you are scraping needs to be rendered in a browser first, since a script has to run before there is anything to scrape. In any case, per the error, it appears the browser (Chromium) is failing to launch.

You can review similar Python scraping here: I have a Discord message bot that successfully renders the page for me and sends me this data. I have one LXC set up for myself and one set up for a friend's solar panel business. While I'm not familiar with the AnythingLLM Docker image or its capabilities, if it's doing web scraping you may need to do something similar to collect the data.
I found it easier to use Python Selenium with Chrome installed:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

and called:

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=chrome_options)

to launch the browser. You may need similar settings.

Edited August 10 by bmartino1 (spelling/grammar)
EvilOni Posted August 13 (edited)

@bmartino1 OK, I figured out the problem. You need to add an argument to the Extra Parameters of the Docker container settings:

--cap-add SYS_ADMIN

This is referenced in the AnythingLLM documentation, so the author should probably include it as part of the instructions: https://docs.anythingllm.com/installation/self-hosted/local-docker#recommend-way-to-run-dockerized-anythingllm

Edited August 13 by EvilOni
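For context, SYS_ADMIN grants the container the capability Chromium needs to set up its SUID/namespace sandbox, which avoids resorting to the less safe --no-sandbox workaround from the error message. Outside of Unraid, the fix would look roughly like this; the port and storage path follow the AnythingLLM docs linked above, but verify them against your own setup before relying on this sketch.

```shell
# Sketch of running AnythingLLM with the capability Chromium's sandbox needs.
# --cap-add SYS_ADMIN is the fix from the post above; the port and the
# host-side storage path should be checked against the AnythingLLM docs.
docker run -d --name anythingllm \
  --cap-add SYS_ADMIN \
  -p 3001:3001 \
  -v /mnt/user/appdata/anythingllm:/app/server/storage \
  mintplexlabs/anythingllm
```

On Unraid, only the `--cap-add SYS_ADMIN` part goes in Extra Parameters; the template already handles the port and path mappings.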
grtgbln (Author) Posted August 13

58 minutes ago, EvilOni said: @bmartino1 OK, I figured out the problem. You need to add an argument to the Extra Parameters of the Docker container settings: --cap-add SYS_ADMIN. This is referenced in the AnythingLLM documentation, so the author should probably include it as part of the instructions: https://docs.anythingllm.com/installation/self-hosted/local-docker#recommend-way-to-run-dockerized-anythingllm

Thanks for diagnosing the issue; I have updated the template accordingly.
TheFullTimer Posted August 18 (edited)

For the Invoke-AI container, the paths for models and outputs can be mapped elsewhere, assuming your storage is fast or you have a spare NVMe drive lying around. The paths are:
- Container Path: /invokeai_root/outputs
- Container Path: /invokeai_root/models

Edited August 18 by TheFullTimer
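For reference, the equivalent bind mounts in docker run form would look something like this. Only the two container paths come from the post above; the host paths under /mnt/nvme and the image tag are hypothetical stand-ins for your own setup.

```shell
# Sketch of mapping Invoke-AI's outputs and models onto a fast NVMe pool.
# The /mnt/nvme host paths are hypothetical; substitute your own shares.
docker run -d --name invokeai \
  -v /mnt/nvme/invokeai/outputs:/invokeai_root/outputs \
  -v /mnt/nvme/invokeai/models:/invokeai_root/models \
  ghcr.io/invoke-ai/invokeai:latest
```

On Unraid, each mapping is added as a Path on the container template with the Container Path values listed above.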
disco4000 Posted September 8

Hi, the Invoke-AI container is pulling 4.5 GB with every update, almost every day. Do all these big files really change every time there is an update, or is this a misconfiguration in the Docker build file? Cheers, d.
grtgbln (Author) Posted September 8 (edited)

3 hours ago, disco4000 said: Hi, the Invoke-AI container is pulling 4.5 GB with every update, almost every day. Do all these big files really change every time there is an update, or is this a misconfiguration in the Docker build file?

I doubt the underlying dependencies are actually changing, but part of the Dockerfile installs things like NVIDIA SMI and PyTorch on every build without caching those layers, which unfortunately means redownloading these base layers every time the image updates. https://github.com/invoke-ai/InvokeAI/blob/5eb919f6020118764b8b5ece4f7660bd31f52472/docker/Dockerfile#L36

Edited September 8 by grtgbln
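The general pattern behind this is Docker's layer cache: a layer is reused only if it and every layer before it are unchanged. A minimal sketch (not InvokeAI's actual Dockerfile) of the two orderings:

```dockerfile
# Hypothetical sketch of the layer-caching issue, NOT InvokeAI's Dockerfile.

# Cache-hostile ordering: copying frequently changing source code before
# the heavy dependency install invalidates the install layer on every
# release, so the multi-GB PyTorch/CUDA layers get rebuilt and re-shipped.
COPY . /app
RUN pip install torch

# Cache-friendly ordering: install the stable, heavy dependencies first,
# then copy the application code. Only the small final layers change
# between releases, so updates stay small.
RUN pip install torch
COPY . /app
```

Even with a cache-friendly Dockerfile, CI systems that build from scratch without a shared cache will still produce images whose big layers have new digests, which is consistent with what users see here.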
MrTroll Posted September 12

On 9/9/2024 at 2:45 AM, grtgbln said: I doubt the underlying dependencies are actually changing, but part of the Dockerfile installs things like NVIDIA SMI and PyTorch on every build without caching those layers, which unfortunately means redownloading these base layers every time the image updates.

It's also been moved to the pre-release branch now; is this expected behavior?
grtgbln (Author) Posted Saturday at 07:22 AM

On 9/12/2024 at 12:48 AM, MrTroll said: It's also been moved to the pre-release branch now; is this expected behavior?

It's set up to pull whatever the InvokeAI team deems the "latest" Docker image: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai/273561071?tag=latest