[SUPPORT] - stable-diffusion Advanced



Nope, sorry.
But you can look at /entry.sh
It's this script that does everything. (It can be a bit messy; I'm not used to doing this kind of thing.)


For AMD GPUs I didn't do anything, mainly because I don't have one on my servers, so it would be hard for me to test.


I have been using automatic1111 just fine and wanted to give Fooocus a shot after its v2 update, but it's booting up with the following error:

 


RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions.
PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8.
Please reinstall the torchvision that matches your PyTorch install.

 

Is there an easy way to manually set the versions? Thanks!


Sorry, I was busy lately; I will fix this ASAP.

Edit:
Simple fix :)
Stop the container, go into the Fooocus folder, and remove the venv directory. When you restart the container, it will reinstall all the correct dependencies 👍
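
On Unraid that reset could be scripted from the host; the container name ("stable-diffusion") and the paths here are assumptions, so adjust them to match your install:

```shell
# Sketch: remove a stale venv so the next start reinstalls dependencies.
# The container name "stable-diffusion" is an assumption; change as needed.
reset_venv() {
  local venv_dir="$1"    # full path to the venv inside your Fooocus folder
  docker stop stable-diffusion
  rm -rf "$venv_dir"
  docker start stable-diffusion
}
```

Call it with the full path to the venv; the exact Fooocus folder name depends on your appdata layout.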

On 9/12/2023 at 10:20 AM, Avsynthe said:

Hey all,

 

I'm having an issue where stable diffusion never releases RAM on AUTOMATIC1111. The more I generate, the higher it goes.

The server went down today and I couldn't figure out why the last snapshot of the system showed 99% of 64GB memory used. I realised SD's memory usage is just compounding away. This happens no matter what model I use, with VAE models increasing it quicker for obvious reasons. Switching models makes no difference; it just continues on.

 

I've had to limit SD to 20GB of RAM, so it'll eventually crash when it hits that. Is anyone else experiencing this?

 

I turned OFF --medvram and now system RAM doesn't leak anymore!

I read this thread to figure it out: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6850

51 minutes ago, ubermetroid said:

Try taking out the specific GPU ID and use "all" (no quotes).

Error response from daemon: Unknown runtime specified nvidia.
See 'docker run --help'.

The command failed.

That's what I get with "all".

 

2 hours ago, Holaf said:

I have the same driver, so it should work 🤔
Can you try to uninstall/reinstall the plugin?
Can you also check whether the driver is running with the command "nvidia-smi"?

[screenshot of nvidia-smi output]

That's what I get when I run the command; it says "Off", so I'm not sure it's working.

As for reinstalling, I tried about six times; every time I get "Error response from daemon: Unknown runtime specified nvidia.
See 'docker run --help'."

Thanks for your replies.


Installed and working OK with Easy Diffusion, but I really wanted to get Lama Cleaner for the constant image fixing I do.
It seemed to install, but it will not stay running; the container just stops before the web UI loads.

The last line in the log before it closes is: ModuleNotFoundError: No module named 'torch._utils'


@ados
I tried a fresh installation of lama cleaner alone, and a fresh installation of lama cleaner with easy-diffusion already installed, and it worked in both cases 🤔

Can you stop your container, delete both the "cache" and "50-lama-cleaner" folders, and then relaunch your container?
I think it's your best chance to get it working 🤞
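
From the host, that reset might look like this; the container name and the appdata path are assumptions:

```shell
# Sketch: wipe the cache and lama-cleaner folders so they reinstall cleanly.
reset_lama_cleaner() {
  local appdata="$1"   # e.g. /mnt/user/appdata/stable-diffusion
  docker stop stable-diffusion        # assumed container name
  rm -rf "$appdata/cache" "$appdata/50-lama-cleaner"
  docker start stable-diffusion
}
```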

On 10/31/2023 at 8:33 PM, Holaf said:

@ados
I tried a fresh installation of lama cleaner alone, and a fresh installation of lama cleaner with easy-diffusion already installed, and it worked in both cases 🤔

Can you stop your container, delete both the "cache" and "50-lama-cleaner" folders, and then relaunch your container?
I think it's your best chance to get it working 🤞

Before seeing this, I tried switching to SD.Next and also had issues with it not loading and the container crashing.

This was with a fresh container too.

I applied the suggested fix of deleting the cache and SD.Next folders in the container appdata, but still have not got it running.

The only UI I have working is Easy Diffusion. 😢

 

I should mention the SD.Next crash is caused by:

-bash: line 1: 73 Illegal instruction bash webui.sh --listen --port=9000 --insecure --medvram --allow-code


Hi @Holaf and thanks for the amazing container.

 

I'm using SD.Next with almost everything working correctly. However, using Models -> CivitAI -> Download for embeddings logs an error like this:

 

2023-11-14T17:42:13.998575208Z 17:42:13-994976 ERROR    CivitAI download: name=easynegative.pt               
2023-11-14T17:42:13.998614237Z                          url=https://civitai.com/api/download/models/9536path= 
2023-11-14T17:42:13.998627355Z                          temp=models/Stable-diffusion/22863553.tmp size=0Mb   
2023-11-14T17:42:13.998638689Z                          removed invalid download: bytes=25323                 
2023-11-14T17:42:14.001201822Z 
2023-11-14T17:42:14.002561798Z Exception in thread Thread-68 (download_civit_model_thread):
2023-11-14T17:42:14.002596562Z Traceback (most recent call last):
2023-11-14T17:42:14.002618000Z   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
2023-11-14T17:42:14.002935406Z     self.run()
2023-11-14T17:42:14.002967860Z   File "/usr/lib/python3.10/threading.py", line 953, in run
2023-11-14T17:42:14.003280728Z     self._target(*self._args, **self._kwargs)
2023-11-14T17:42:14.003312062Z   File "/opt/stable-diffusion/04-SD-Next/webui/modules/modelloader.py", line 174, in download_civit_model_thread
2023-11-14T17:42:14.004375606Z     os.rename(temp_file, model_file)
2023-11-14T17:42:14.004402584Z FileNotFoundError: [Errno 2] No such file or directory: 'models/Stable-diffusion/22863553.tmp' -> 'models/Stable-diffusion/easynegative.pt'

 

I have no issues downloading from Hugging Face or downloading regular SD 1.5/LoRA models from CivitAI. Additionally, after manually moving files into appdata/stable-diffusion/models/embeddings and restarting the container, it still does not move them into the correct folder in 04-SD-Next.

 

2023-11-14T17:52:43.096595854Z removing folder /opt/stable-diffusion/04-SD-Next/webui/models/VAE and create symlink
2023-11-14T17:52:43.109880467Z moving folder /opt/stable-diffusion/04-SD-Next/webui/embeddings to /opt/stable-diffusion/models/embeddings
2023-11-14T17:52:43.118850832Z sending incremental file list
2023-11-14T17:52:43.163582694Z 
2023-11-14T17:52:43.163651074Z sent 155 bytes  received 12 bytes  334.00 bytes/sec
2023-11-14T17:52:43.163666876Z total size is 100,167  speedup is 599.80

 

I have to manually move files from

appdata/stable-diffusion/models/embeddings -> appdata/04-SD-Next/webui/models/embeddings

after which point they are recognized by SD.Next.
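
As a stopgap, that manual copy can be scripted from the Unraid terminal; the paths follow the post above and may differ on your system:

```shell
# Sketch: copy embeddings into the folder SD.Next actually scans,
# without overwriting anything already there (-n = no clobber).
sync_embeddings() {
  local src="$1"   # e.g. /mnt/user/appdata/stable-diffusion/models/embeddings
  local dst="$2"   # e.g. the 04-SD-Next/webui/models/embeddings folder
  mkdir -p "$dst"
  cp -n "$src"/* "$dst"/
}
```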

On 10/10/2023 at 8:23 PM, ShadowVlican said:

 

I turned OFF --medvram and now system RAM doesn't leak anymore!

I read this thread to figure it out: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6850

 

This didn't work for me but the comments in that issue were the clue I needed. For anyone else running into memory leaks when using SD.Next:

 

1. Exec into the container and install the updated malloc library:

 

apt update
apt -y install libgoogle-perftools-dev

 

2. Create the file /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh with this content:

 

export LD_PRELOAD=libtcmalloc.so
echo "libtcmalloc loaded"

 

3. Make the file executable

Open the Unraid web terminal and run:

chmod +x /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh
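
The three steps can be combined into a single host-side function; the container name, the webui path, and the function name are assumptions based on the steps above:

```shell
# Sketch: apply the tcmalloc workaround in one go from the Unraid host.
apply_tcmalloc_fix() {
  local container="$1"   # e.g. stable-diffusion
  local webui_dir="$2"   # e.g. /mnt/user/appdata/stable-diffusion/04-SD-Next/webui
  docker exec "$container" apt update
  docker exec "$container" apt -y install libgoogle-perftools-dev
  printf 'export LD_PRELOAD=libtcmalloc.so\necho "libtcmalloc loaded"\n' \
    > "$webui_dir/webui-user.sh"
  chmod +x "$webui_dir/webui-user.sh"
}
```

Note that the apt install lands in the container's writable layer, so it has to be reapplied whenever the container is recreated or its image is updated, which is why baking the dependency into the image would be the cleaner fix.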

 

EDIT: @Holaf can you please add libgoogle-perftools-dev as a dependency in the image?

 

OR add a hook into entry.sh so end users can run a script as root on container start/creation, allowing these kinds of modifications without modifying the Docker image.
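
Such a hook could be just a few lines in entry.sh; the script path and the function name below are purely illustrative:

```shell
# Hypothetical entry.sh hook: run a user-supplied script, if present, at start.
run_user_hook() {
  local hook="$1"   # e.g. a custom-init.sh inside the mounted config folder
  if [ -x "$hook" ]; then
    echo "Running user hook: $hook"
    "$hook"
  fi
}
```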

On 11/15/2023 at 1:29 PM, FoxxMD said:

 

This didn't work for me but the comments in that issue were the clue I needed. For anyone else running into memory leaks when using SD.Next:

 

1. Exec into the container and install the updated malloc library:

 

apt update
apt -y install libgoogle-perftools-dev

 

2. Create the file /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh with this content:

 

export LD_PRELOAD=libtcmalloc.so
echo "libtcmalloc loaded"

 

3. Make the file executable

Open the Unraid web terminal and run:

chmod +x /mnt/user/appdata/stable-diffusion/04-SD-Next/webui/webui-user.sh

 

EDIT: @Holaf can you please add libgoogle-perftools-dev as a dependency in the image?

 

OR add a hook into entry.sh so end users can run a script as root on container start/creation, allowing these kinds of modifications without modifying the Docker image.

THANK YOU!  This resolved my issue as well with my 02-sd-webui installation.


Hey @Holaf, would it be possible to publish the Dockerfile and the scripts you used to build this container on GitHub, GitLab, or anywhere else, so people can look at them and maybe improve them? Also, is it possible to get a changelog? You said you pushed a big update, but I would like to know what changed. Having a changelog and the source code would make it much easier to understand what changed.

