JackG13 Posted March 28, 2023
Hi, I am looking to get this GitHub project working: https://github.com/nsarrazin/serge. It is a way to run the LLaMA models in a chat interface on a local machine, without needing a GPU, through a web UI. It uses a Docker Compose file, but before that it requires cloning the repository. I am not sure how to get this working smoothly. Any help would be appreciated!
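For reference, the basic workflow the project expects is just a clone followed by Compose. A minimal sketch, assuming the compose file sits at the repository root and Docker with the Compose plugin is already installed on the host:

    # fetch the repository (the compose file ships inside it)
    git clone https://github.com/nsarrazin/serge.git
    cd serge

    # build and start the stack in the background
    docker compose up -d

    # the web UI should then be reachable on whichever port the compose
    # file publishes -- check docker-compose.yml for the exact mapping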
dopeytree Posted June 12, 2023
It's now in the app store.
rosswaters Posted March 5
I am having issues with Serge. I have a very large server with lots of RAM, CPUs, and GPUs. I installed Serge and downloaded two models; neither worked. I can see them in the directory if I go into the appdata folder, but the Serge GUI just tells me to download a model, and the load-model list is blank as if nothing has been downloaded. I looked at the logs and saw this error:

    # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

That fix would work great if I were on Ubuntu or something else with /etc/*.conf files. Any thoughts on this? I have way more resources than are required.
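That message is the standard Redis startup warning, and vm.overcommit_memory is a kernel parameter, so it has to be set on the Docker host rather than inside the container. A rough sketch of how that could look on an Unraid host (the /boot/config/go path for persistence is an assumption about a typical Unraid setup; adjust for your own system):

    # run this on the host (console/SSH session), not inside the Serge container,
    # since containers share the host kernel's overcommit setting
    sysctl vm.overcommit_memory=1

    # Unraid does not keep a persistent /etc/sysctl.conf, so one common approach
    # is to append the same command to the go file so it runs again after a reboot
    # (assumed path -- verify it matches your install)
    echo "sysctl vm.overcommit_memory=1" >> /boot/config/go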