Holaf Posted September 23, 2023
Nope, sorry. But you can look at /entry.sh: that script does everything. (It can be a bit messy; I'm not used to doing this kind of thing.) For AMD GPUs I didn't do anything, mainly because I don't have one on my servers, so it would be hard for me to test.
hans-o Posted October 4, 2023
I've been using automatic1111 just fine and wanted to give Fooocus a shot after its v2 update, but it boots up with the following error:

RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.

Is there an easy way to manually set the versions? Thanks
Holaf Posted October 7, 2023
Sorry, I've been busy lately; I'll fix this ASAP.
Edit: simple fix :) Stop the container, go into the Fooocus folder, and remove the venv directory. When you restart the container it will reinstall all the correct dependencies 👍
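For reference, that fix as shell commands. The container name and the Fooocus folder number are assumptions (they vary per install), so this sketch uses a scratch directory to stand in for the appdata share, with the docker commands left commented out:

```shell
# Stand-in for /mnt/user/appdata/stable-diffusion -- point APPDATA at your real share.
APPDATA="$(mktemp -d)"
mkdir -p "$APPDATA/05-fooocus/venv"    # fake install with a stale venv (folder name is a guess)

# docker stop stable-diffusion         # 1. stop the container (name is an assumption)
rm -rf "$APPDATA/05-fooocus/venv"      # 2. remove the venv so dependencies reinstall
# docker start stable-diffusion        # 3. restart; the entry script rebuilds the venv

[ -d "$APPDATA/05-fooocus/venv" ] || echo "venv removed"
```

On the next start the container sees the missing venv and recreates it with matching PyTorch/torchvision CUDA versions.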
ShadowVlican Posted October 11, 2023
On 9/12/2023 at 10:20 AM, Avsynthe said:
Hey all, I'm having an issue where Stable Diffusion never releases RAM on AUTOMATIC1111. The more I generate, the higher it goes. The server went down today and I couldn't figure out why; the last snapshot of the system showed 99% of 64GB memory used. I realised SD is just compounding away. This happens no matter what model I use, with VAE models increasing it quicker for obvious reasons. Switching models makes no difference; it just continues on. I've had to limit SD to 20GB RAM, so it'll eventually crash when it hits that. Is anyone else experiencing this?

I turned OFF --medvram and now system RAM doesn't leak anymore! I read this thread to figure it out: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6850
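For anyone looking for where that flag lives in a stock AUTOMATIC1111 install: it is usually passed via COMMANDLINE_ARGS in webui-user.sh (the exact path inside this container is an assumption); dropping --medvram from that line is the change described above:

```shell
# webui-user.sh fragment (sketch; the file location inside this container is an assumption)
# Before: export COMMANDLINE_ARGS="--listen --medvram"
export COMMANDLINE_ARGS="--listen"   # --medvram removed to stop the RAM growth
```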
JapanFreak Posted October 19, 2023
Hi, noob here. I tried to install this and added the GPU ID to the container, but it still failed and I have no idea why or what to do. Maybe I have the wrong NVIDIA driver? Should I install the open-source one or another version?
Holaf Posted October 19, 2023
I have the same driver, so it should work 🤔 Can you try to uninstall/reinstall the plugin? Can you also check that the driver is running with the command "nvidia-smi"?
RoboCanvas Posted October 19, 2023
Try taking out the specific GPU ID and using "all" (no quotes) instead.
JapanFreak Posted October 19, 2023
51 minutes ago, ubermetroid said:
Try taking out the specific GPU ID and using "all" (no quotes).

Error response from daemon: Unknown runtime specified nvidia. See 'docker run --help'. The command failed.

That's what I get with "all".

2 hours ago, Holaf said:
I have the same driver, so it should work 🤔 Can you try to uninstall/reinstall the plugin? Can you also check that the driver is running with the command "nvidia-smi"?

That's what I get when I run the command; it says "off", so I'm not sure it works. As for reinstalling, I tried like 6 times, and every time I get: Error response from daemon: Unknown runtime specified nvidia. See 'docker run --help'. Thanks for your replies.
Holaf Posted October 20, 2023
I believe the runtime comes with the NVIDIA driver, so to me it's a bug in the NVIDIA plugin or on your server, but not in my container 🤷♂️
ados Posted October 29, 2023
Installed, and it was working OK with Easy Diffusion, but I really wanted to get Lama Cleaner for the constant image fixing I do. It seemed to install but won't stay running; the Docker container just stops before the web UI will load. The last line in the log before it closes is: ModuleNotFoundError: No module named 'torch._utils'
Holaf Posted October 30, 2023
Thanks for the info, I'll take a look at this tomorrow.
Holaf Posted October 31, 2023
@ados I tried a fresh installation of Lama Cleaner alone, and a fresh installation of Lama Cleaner with Easy Diffusion already installed, and it worked in both cases 🤔 Can you stop your container, delete both the "cache" and "50-lama-cleaner" folders, and then relaunch your container? I think it's your best chance of getting it working 🤞
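A sketch of those steps as shell commands (the container name is an assumption, and a scratch directory stands in for the appdata share here so the rm is safe to try):

```shell
APPDATA="$(mktemp -d)"                                  # stand-in for the real appdata share
mkdir -p "$APPDATA/cache" "$APPDATA/50-lama-cleaner"    # pretend existing install

# docker stop stable-diffusion                          # container name is an assumption
rm -rf "$APPDATA/cache" "$APPDATA/50-lama-cleaner"      # delete both folders
# docker start stable-diffusion                         # Lama Cleaner reinstalls from scratch
```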
danbru1989 Posted November 1, 2023
Any ideas why ControlNet is disabled when using Invoke? I noticed that my models/sd-1/controlnet folder is empty.
Holaf Posted November 1, 2023
I did a fresh install and InvokeAI downloaded the ControlNet models by itself (it took some time before I could connect to the web UI). Perhaps you could try removing or renaming the current InvokeAI folder and installing it again.
danbru1989 Posted November 1, 2023
I renamed the 03-invakeai folder and started up the container. It re-downloaded Invoke, and ControlNet is accessible now. Thank you! Maybe there was a better way to do it that would have retained all my assets and settings, but this worked and I didn't have anything important there.
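For anyone who does want to keep their assets, renaming (as done above) leaves a backup you can copy models and outputs back from. A sketch, with a scratch directory standing in for the appdata share (the folder name is taken from the post; the share path is an assumption):

```shell
APPDATA="$(mktemp -d)"                                 # stand-in for the real appdata share
mkdir -p "$APPDATA/03-invakeai/outputs"                # pretend install with assets

mv "$APPDATA/03-invakeai" "$APPDATA/03-invakeai.bak"   # rename instead of delete
# On the next container start a fresh 03-invakeai is created;
# copy models/outputs back from the .bak folder afterwards.
```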
Holaf Posted November 1, 2023
Honestly, the way InvokeAI handles models/settings is still a bit of a mystery to me 😅
blanketred Posted November 4, 2023
How do you install it as CPU-only? I don't have a GPU in my unRAID server. Thank you!
ados Posted November 5, 2023
On 10/31/2023 at 8:33 PM, Holaf said:
Can you stop your container, delete both the "cache" and "50-lama-cleaner" folders, and then relaunch your container? ...

Before seeing this, I tried switching to SD.Next and also had issues with it not loading and the Docker container crashing. This was with a fresh container too. I applied the suggested fix of deleting the cache and SD.Next folders in the container's appdata, but that has not got it running. The only thing I have working is Easy Diffusion. 😢 I should mention the SD.Next crash is caused by:

-bash: line 1: 73 Illegal instruction bash webui.sh --listen --port=9000 --insecure --medvram --allow-code
Holaf Posted November 8, 2023
Strange 🤔 I'm working on a new version; I hope to push it to Docker Hub this weekend. It's taking a while because I'm modifying all the scripts and have to test multiple installations from scratch 😣 I hope this next update will work for you.
FoxxMD Posted November 14, 2023
Hi @Holaf, and thanks for the amazing container. I'm using SD.Next with almost everything working correctly. However, using Models -> CivitAI -> download for embeddings logs an error like this:

2023-11-14T17:42:13.998575208Z 17:42:13-994976 ERROR CivitAI download: name=easynegative.pt
2023-11-14T17:42:13.998614237Z url=https://civitai.com/api/download/models/9536 path=
2023-11-14T17:42:13.998627355Z temp=models/Stable-diffusion/22863553.tmp size=0Mb
2023-11-14T17:42:13.998638689Z removed invalid download: bytes=25323
2023-11-14T17:42:14.001201822Z
2023-11-14T17:42:14.002561798Z Exception in thread Thread-68 (download_civit_model_thread):
2023-11-14T17:42:14.002596562Z Traceback (most recent call last):
2023-11-14T17:42:14.002618000Z File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
2023-11-14T17:42:14.002935406Z self.run()
2023-11-14T17:42:14.002967860Z File "/usr/lib/python3.10/threading.py", line 953, in run
2023-11-14T17:42:14.003280728Z self._target(*self._args, **self._kwargs)
2023-11-14T17:42:14.003312062Z File "/opt/stable-diffusion/04-SD-Next/webui/modules/modelloader.py", line 174, in download_civit_model_thread
2023-11-14T17:42:14.004375606Z os.rename(temp_file, model_file)
2023-11-14T17:42:14.004402584Z FileNotFoundError: [Errno 2] No such file or directory: 'models/Stable-diffusion/22863553.tmp' -> 'models/Stable-diffusion/easynegative.pt'

I have no issues downloading from Hugging Face, or downloading regular SD 1.5/LoRA models from CivitAI. Additionally, after manually moving files into appdata/stable-diffusion/models/embeddings and restarting the container, it still does not move them into the correct folder in 04-SD-Next.
2023-11-14T17:52:43.096595854Z removing folder /opt/stable-diffusion/04-SD-Next/webui/models/VAE and create symlink
2023-11-14T17:52:43.109880467Z moving folder /opt/stable-diffusion/04-SD-Next/webui/embeddings to /opt/stable-diffusion/models/embeddings
2023-11-14T17:52:43.118850832Z sending incremental file list
2023-11-14T17:52:43.163582694Z
2023-11-14T17:52:43.163651074Z sent 155 bytes received 12 bytes 334.00 bytes/sec
2023-11-14T17:52:43.163666876Z total size is 100,167 speedup is 599.80

I have to manually move files from appdata/stable-diffusion/models/embeddings to appdata/04-SD-Next/webui/models/embeddings, after which point they are recognized by SD.Next.
FoxxMD Posted November 15, 2023
On 10/10/2023 at 8:23 PM, ShadowVlican said:
i turned OFF -medvram and now system RAM doesn't leak anymore! i read this thread to figure it out: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6850

This didn't work for me, but the comments in that issue were the clue I needed. For anyone else running into memory leaks when using SD.Next:

1. Exec into the container and install the updated malloc library:
apt update
apt -y install libgoogle-perftools-dev

2. Create the file /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh with this content:
export LD_PRELOAD=libtcmalloc.so
echo "libtcmalloc loaded"

3. Make the file executable. Open the unRAID web terminal and run:
chmod +x /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh

EDIT: @Holaf, can you please add libgoogle-perftools-dev as a dependency in the image? Or add a hook into entry.sh so end users can run a script as root on container start/creation, so these kinds of modifications can be made without needing to modify the Docker image.
echofire Posted November 17, 2023
On 11/15/2023 at 1:29 PM, FoxxMD said:
This didn't work for me, but the comments in that issue were the clue I needed. For anyone else running into memory leaks when using SD.Next: ...

THANK YOU! This resolved my issue as well with my 02-sd-webui installation.
Holaf Posted November 17, 2023
Just pushed a big update. I tested as much as I could, but I'm sure I let some bugs slip through... Don't hesitate to report them here.
Joly0 Posted November 18, 2023
Hey @Holaf, would it be possible to get the Dockerfile and the scripts you used to build this container onto GitHub, GitLab, or anywhere else, so people can look at them and maybe improve them? Also, is it possible to get a changelog? You said you pushed a big update, but I would like to know what changed. Having a changelog and the source code would help us better understand the changes.
RoboCanvas Posted November 18, 2023
What is the big update?