
[SUPPORT] - stable-diffusion Advanced


This is the official support topic for this advanced version of stable-diffusion. Post your requests and comments about the template or the container here.




The goal of this docker container is to provide an easy way to run different WebUIs for stable-diffusion.

You can choose between the following:


  • 01 - Easy Diffusion:
    The easiest way to install and use Stable Diffusion on your computer. https://github.com/easydiffusion/easydiffusion
  • 02 - Automatic1111:
    A browser interface based on the Gradio library for Stable Diffusion. https://github.com/AUTOMATIC1111/stable-diffusion-webui
  • 03 - InvokeAI:
    InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. It offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products. https://github.com/invoke-ai
  • 04 - SD.Next:
    This project started as a fork of the Automatic1111 WebUI and has grown significantly since then; although it has diverged considerably, substantial features from the original are still ported over. https://github.com/vladmandic/automatic
  • 05 - ComfyUI:
    A powerful and modular Stable Diffusion GUI and backend. https://github.com/comfyanonymous/ComfyUI


Docker Hub: https://hub.docker.com/r/holaflenain/stable-diffusion

GitHub: https://github.com/superboki/UNRAID-FR/tree/main/stable-diffusion-advanced

Documentation: https://hub.docker.com/r/holaflenain/stable-diffusion

Donation: https://fr.tipeee.com/superboki


With this amount of VRAM it will be hard to produce images larger than 512x512 (and even at that resolution I'm not sure it will work).
Some interfaces (Easy Diffusion, for instance) have an option for low-memory GPUs; you could try that.
Note that it will also use a lot of system RAM (20GB to 25GB).
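
For Automatic1111 specifically, the low-memory option is a launch flag: `--medvram` and `--lowvram` are its documented low-VRAM modes. A sketch of appending one to the interface's parameters.txt (the path is an assumption based on this container's layout; the demo falls back to a scratch file so it is safe to paste):

```shell
# Point PARAMS at the real 02-sd-webui/parameters.txt in your appdata share;
# the mktemp fallback is only so this demo runs without the container.
PARAMS="${PARAMS:-$(mktemp)}"
# --lowvram is Automatic1111's documented flag for very small GPUs
# (--medvram is the milder variant). Append it once if it's missing.
grep -q -- '--lowvram' "$PARAMS" || echo '--lowvram' >> "$PARAMS"
cat "$PARAMS"
```

Restart the container afterwards so the webui relaunches with the new flag.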


There is a parameters.txt for each interface.

For InvokeAI you can edit the file stable-diffusion/03-invokeai/parameters.txt and remove or comment out the line that contains --max_loaded_models=2
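
A one-liner sketch of that edit using GNU sed (the file-exists guard is only so the command is safe to paste when the path isn't present):

```shell
# Relative path as given above; adjust to wherever your appdata share is mounted.
FILE="${FILE:-stable-diffusion/03-invokeai/parameters.txt}"
if [ -f "$FILE" ]; then
  # Prefix the offending line with '#' so InvokeAI ignores it on next start.
  sed -i '/--max_loaded_models=2/s/^/# /' "$FILE"
else
  echo "edit $FILE inside your appdata share"
fi
```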


I will fix this in my next update (hopefully this weekend)


21 hours ago, ubermetroid said:

EE micromamba download failed


That was a local network issue, not related to the container :)

However, I did update the container with the latest install script for easy-diffusion.

There are now six choices for image generation, and two other tools:
50-lama-cleaner (inpainting)
70-kohya (model training)

3 hours ago, fixer said:

Is it possible to upgrade to SDXL 1.0 with refiners, and if so, is there documentation anywhere?

Like FriendlyFriend said, SDXL is just a model and can be used with at least ComfyUI, Automatic1111 (1.5+), SD.Next, and Fooocus.
I believe you can find tutorials on YouTube for each one.
And if you're unsure what to do, the easiest way to use SDXL is with Fooocus (interface 06). It only works with SDXL, and you have nothing to do except write prompts. :)

3 hours ago, FriendlyFriend said:

I want to run SD with a cloud GPU via an extension: https://github.com/omniinfer/sd-webui-cloud-inference. Installation fails when there is no GPU installed. Is there a workaround?

The easiest way is to edit the file 02-sd-webui/parameters.txt and add this parameter:


I don't know if this extension will work, but at least the interface should launch.
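
The exact parameter from the original post was not preserved in this copy; for reference, `--skip-torch-cuda-test` is the webui's documented flag for letting it start without a usable CUDA device. A sketch of adding a flag like that (path is an assumption; the demo falls back to a scratch file):

```shell
# Point PARAMS at 02-sd-webui/parameters.txt in your appdata share;
# the mktemp fallback is only so this demo runs without the container.
PARAMS="${PARAMS:-$(mktemp)}"
# --skip-torch-cuda-test lets the webui start when no CUDA device is found.
grep -q -- '--skip-torch-cuda-test' "$PARAMS" \
  || echo '--skip-torch-cuda-test' >> "$PARAMS"
cat "$PARAMS"
```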


edit: I did test it, and it works 👍


Edited by Holaf

EDIT: Updating the UI container path to /opt/stable-diffusion seems to have fixed the issue (I had it set this way previously, so it was likely deleting the easydiffusion folder and restarting the docker that did the trick).



I've picked this up and have been successfully testing it with my 1050Ti.

Where I've become stuck is adding new models (and changing the outputs folder), TIs, and LoRAs - I don't seem to be able to get the docker to pick them up. My preference is to leave them in a share, but whether it's from a share or appdata, the models won't populate in the UI.


Current config:

UI Path: container path and host path both point to /mnt/user/StableDiffusion/

Outputs: Host Path /mnt/user/StableDiffusion/outputs, container path /outputs
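
For comparison, here is the mapping that the EDIT at the top reports working, expressed as a plain docker run fragment (on Unraid this is set through the container template rather than on the command line; GPU passthrough flags are omitted, and the image name is taken from the Docker Hub link in the first post):

```shell
# UI data mapped over the container's install root (/opt/stable-diffusion),
# outputs mapped to the container path /outputs, per the config above.
docker run -d --name stable-diffusion \
  -v /mnt/user/StableDiffusion/:/opt/stable-diffusion \
  -v /mnt/user/StableDiffusion/outputs:/outputs \
  holaflenain/stable-diffusion
```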


Share structure: (directory-tree screenshot not preserved; in the original, one subfolder was marked "CKPT MODELS ARE HERE")
Can anyone tell me what I'm doing wrong? I've tried multiple configs to get this working, but haven't had any luck. I'm using EasyDiffusion for now, and dropping the models in that subdir, or in appdata, didn't seem to help either.

Edited by WaxedWookie
Solution found and added to top of post.

Hey - thanks for adding this to Unraid. I've had good success generating images with the Tesla P4 I use with containers.


I have found a problem - and seem to have just repeated it. I'm using 02-sd-webui. When I install the Dreambooth extension and then restart the UI as prompted, the container dies and won't restart.
I think I had the same problem, caused the same way, about 2 weeks ago. I tried to fix it by removing the sd_dreambooth_extension folder from the extensions folder, but that didn't work, so I just did a total remove/reinstall.


My Python fu is not strong enough to see from the log output if there is a fix I should apply. If anyone has ideas, that would be great. Also, is there a better way to uninstall an extension that causes problems like this?


The log says:

Traceback (most recent call last):
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1086, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 85, in <module>
    from accelerate import __version__ as accelerate_version
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/checkpointing.py", line 24, in <module>
    from .utils import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/utils/__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/utils/bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from .autograd._functions import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 5, in <module>
    import bitsandbytes.functional as F
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/functional.py", line 13, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 113, in <module>
    lib = CUDASetup.get_instance().lib
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 109, in get_instance
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 59, in initialize
    binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 125, in evaluate_cuda_setup
    cuda_version_string = get_cuda_version(cuda, cudart_path)
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 45, in get_cuda_version
    check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))
  File "/usr/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__
    func = self.__getitem__(name)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python3: undefined symbol: cudaRuntimeGetVersion

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/stable-diffusion/02-sd-webui/webui/launch.py", line 48, in <module>
  File "/opt/stable-diffusion/02-sd-webui/webui/launch.py", line 44, in main
  File "/opt/stable-diffusion/02-sd-webui/webui/modules/launch_utils.py", line 432, in start
    import webui
  File "/opt/stable-diffusion/02-sd-webui/webui/webui.py", line 13, in <module>
  File "/opt/stable-diffusion/02-sd-webui/webui/modules/initialize.py", line 16, in imports
    import pytorch_lightning  # noqa: F401
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 35, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/batch_size_finder.py", line 24, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 27, in <module>
    from torchmetrics import Metric
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
    from torchmetrics import functional  # noqa: E402
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 120, in <module>
    from torchmetrics.functional.text._deprecated import _bleu_score as bleu_score
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/__init__.py", line 50, in <module>
    from torchmetrics.functional.text.bert import bert_score  # noqa: F401
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/bert.py", line 23, in <module>
    from torchmetrics.functional.text.helper_embedding_metric import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/helper_embedding_metric.py", line 27, in <module>
    from transformers import AutoModelForMaskedLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizerBase
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1076, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1088, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
python3: undefined symbol: cudaRuntimeGetVersion
[+] accelerate version 0.21.0 installed.
[+] diffusers version 0.19.3 installed.
[+] transformers version 4.30.2 installed.
[+] bitsandbytes version 0.35.4 installed.
Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --no-half-vae --disable-nan-check --api
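
On the uninstall question: a hedged cleanup sketch, assuming the paths shown in the traceback above. Deleting the extension folder alone may not be enough, because packages the extension installed into the webui's venv (bitsandbytes, accelerate) persist there; removing the venv as well forces a clean dependency rebuild on the next container start.

```shell
# Paths taken from the traceback; adjust WEBUI to your install.
WEBUI="${WEBUI:-/opt/stable-diffusion/02-sd-webui/webui}"
# Remove the extension itself...
rm -rf "$WEBUI/extensions/sd_dreambooth_extension"
# ...and the venv, so packages the extension pulled in (bitsandbytes,
# accelerate, ...) are reinstalled from scratch on the next start.
rm -rf "$WEBUI/venv"
```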



Hey all,


I'm having an issue where Stable Diffusion never releases RAM on AUTOMATIC1111. The more I generate, the higher it climbs.

The server went down today and I couldn't figure out why the last snapshot of the system showed 99% of 64GB memory used. I realised SD's memory use just keeps compounding. This happens no matter which model I use, with VAE models increasing it faster for obvious reasons. Switching models makes no difference; it just keeps growing.


I've had to limit SD to 20GB of RAM, so it'll eventually crash when it hits that. Is anyone else experiencing this?

Edited by Avsynthe

I believe this is "normal".
With ComfyUI my container is currently using 35GB of RAM.
I suspect most of this RAM is used by Python libraries. Most people using these tools run them on their local computers rather than on servers, so they restart them often.
In any case, I won't be able to do anything about this, unfortunately :(



After some further digging around the time I posted this, I found a thread on the AUTOMATIC1111 GitHub where this seems to be a fairly common occurrence for some users, though apparently it isn't meant to be normal. It also doesn't look like they know exactly what's causing it.




Surely it's meant to release RAM after each generation, right? Regardless, with the restart-unless-stopped flag it's just an occasional delay after a number of generations. A small workaround. Thanks for the reply!
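
The "restart unless stopped" workaround mentioned here is Docker's restart policy: when the container is killed on hitting its memory limit, Docker brings it straight back up. On Unraid this is normally set in the container template, but the CLI equivalent for an existing container would be (container name is an assumption):

```shell
# Re-apply the restart policy without recreating the container.
docker update --restart=unless-stopped stable-diffusion
```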

