superboki Posted August 15, 2023

Here is the official page dedicated to the support of this advanced Stable Diffusion container. You can post your requests/comments regarding the template or the container here.

The goal of this Docker container is to provide an easy way to run different WebUIs for Stable Diffusion. You can choose between the following:

01 - Easy Diffusion: The easiest way to install and use Stable Diffusion on your computer. https://github.com/easydiffusion/easydiffusion
02 - Automatic1111: A browser interface based on the Gradio library for Stable Diffusion. https://github.com/AUTOMATIC1111/stable-diffusion-webui
03 - InvokeAI: A leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. It offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products. https://github.com/invoke-ai
04 - SD.Next: This project started as a fork of the Automatic1111 WebUI and has grown significantly since then; although it has diverged considerably, any substantial features from the original work are ported as well. https://github.com/vladmandic/automatic
05 - ComfyUI: A powerful and modular Stable Diffusion GUI and backend. https://github.com/comfyanonymous/ComfyUI

Docker Hub: https://hub.docker.com/r/holaflenain/stable-diffusion
GitHub: https://github.com/superboki/UNRAID-FR/tree/main/stable-diffusion-advanced
Documentation: https://hub.docker.com/r/holaflenain/stable-diffusion
Donation: https://fr.tipeee.com/superboki
dbinott Posted August 15, 2023

I have a GeForce GTX 1050 Ti with only 4GB. Should I bother trying to install?
Holaf Posted August 15, 2023

With that amount of VRAM it will be hard to produce images larger than 512x512 (and even at that resolution I'm not sure it will work). Some interfaces (Easy Diffusion, for instance) have an option for low-memory GPUs; you could try that. Note that it will also use a lot of RAM (20GB to 25GB).
ingenious-loafer1556 Posted August 17, 2023

Fantastic work making a CA app. After successfully installing the container, I'm getting an issue where the container loads and then stops after ~40s. Logs: _stable-diffusion_logs.txt
Holaf Posted August 17, 2023

There is a parameters.txt for each interface. For InvokeAI you can edit the file stable-diffusion\03-invokeai\parameters.txt and remove or comment out the line that contains --max_loaded_models=2. I will fix this in my next update (hopefully this weekend).
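Holaf's instruction can be sketched as a one-liner. The demo below runs against a throwaway copy under /tmp with placeholder contents (the first flag line is an assumption, not the container's real file); on an Unraid box the real file is your appdata share's stable-diffusion/03-invokeai/parameters.txt.

```shell
# Build a throwaway copy of parameters.txt (contents are placeholders)
mkdir -p /tmp/sd-demo/03-invokeai
printf -- '--host 0.0.0.0\n--max_loaded_models=2\n' > /tmp/sd-demo/03-invokeai/parameters.txt

# Comment the flag out rather than deleting it, so it is easy to restore later
sed -i 's/^--max_loaded_models/# --max_loaded_models/' /tmp/sd-demo/03-invokeai/parameters.txt

cat /tmp/sd-demo/03-invokeai/parameters.txt
```

Running the same `sed` against the real file (with the container stopped) and then restarting should keep the flag out of the InvokeAI launch.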
RoboCanvas Posted August 19, 2023

Logs:

Downloading micromamba from https://micro.mamba.pm/api/micromamba/linux-64/latest to /opt/stable-diffusion/01-easy-diffusion/installer_files/mamba/micromamba
EE micromamba download failed

Any idea how to fix this?
Holaf Posted August 19, 2023

Can you try removing the folder 01-easy-diffusion, updating the container to the latest version, and running the installation again?
Holaf Posted August 20, 2023

21 hours ago, ubermetroid said:
EE micromamba download failed

That was a local network issue, not related to the container. However, I did update the container with the latest install script for Easy Diffusion.

There are now six choices for image generation:
01-easy-diffusion
02-sd-webui
03-invokeai
04-SD-Next
05-comfy-ui
06-Fooocus

and two other tools:
50-lama-cleaner (inpainting)
70-kohya (model training)
fixer Posted August 20, 2023

Is it possible to upgrade to SDXL 1.0 with refiners, and if so, is there documentation anywhere?
FriendlyFriend Posted August 20, 2023

I want to run SD with a cloud GPU via an extension: https://github.com/omniinfer/sd-webui-cloud-inference. Installation fails when there is no GPU installed. Is there a workaround?
FriendlyFriend Posted August 20, 2023

3 minutes ago, fixer said:
Is it possible to upgrade to SDXL 1.0 with refiners, and if so, is there documentation anywhere?

SDXL base and refiner are models that you load in the WebUI.
Holaf Posted August 20, 2023

3 hours ago, fixer said:
Is it possible to upgrade to SDXL 1.0 with refiners, and if so, is there documentation anywhere?

Like FriendlyFriend said, SDXL is just a model and can be used with at least ComfyUI, Automatic1111 (1.5+), SD.Next and Fooocus. I believe you can find tutorials on YouTube for each one. And if you're unsure what to do, the easiest way to use SDXL is Fooocus (interface 06): it only works with SDXL, and you have nothing to do except write prompts.
Holaf Posted August 20, 2023

3 hours ago, FriendlyFriend said:
I want to run SD with a cloud GPU with an extension https://github.com/omniinfer/sd-webui-cloud-inference. Installation fails when there is no gpu installed. Is there a workaround?

The easiest way is to edit the file 02-sd-webui/parameters.txt and add this parameter: --skip-torch-cuda-test. I don't know if the extension will work, but at least the interface should launch.

Edit: I did test it, and it works 👍
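That edit can be scripted. The sketch below works on a throwaway copy under /tmp (the existing flags are placeholders I made up); the real file sits in your appdata share at stable-diffusion/02-sd-webui/parameters.txt.

```shell
# Throwaway copy of 02-sd-webui/parameters.txt; existing flags are placeholders
f=/tmp/sd-demo/02-sd-webui/parameters.txt
mkdir -p "$(dirname "$f")"
printf -- '--listen --port 9000\n' > "$f"

# Append the flag only if it is not already present, so re-running is harmless
grep -q -- '--skip-torch-cuda-test' "$f" || echo '--skip-torch-cuda-test' >> "$f"

cat "$f"
```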
WaxedWookie Posted August 22, 2023

EDIT: Updating the UI container path to /opt/stable-diffusion seems to have fixed the issue (I had it set this way previously, so it was likely deleting the easydiffusion folder and restarting the Docker that did the trick).
__________

I've picked this up and have been successfully testing it with my 1050 Ti. Where I've become stuck is adding new models (and changing the outputs folder), TIs, and LoRAs: I don't seem to be able to get the Docker to pick them up. My preference is to leave them in a share, but whether it's from a share or appdata, the models won't populate in the UI.

Current config:
UI path: container path and host path point to /mnt/user/StableDiffusion/
Outputs: host path /mnt/user/StableDiffusion/outputs, container path /outputs

Share structure:
-StableDiffusion
 -outputs
 -models
  -embeddings
  -hypernetwork
  -lora
  -stable-diffusion **CKPT MODELS ARE HERE**
  -vae
 -01-easy-diffusion
 -03-invokeai

Can anyone tell me what I'm doing wrong? I've tried multiple configs to get this working, but haven't had any luck. I'm using Easy Diffusion for now, and dropping the models in that subdir, or in appdata, didn't seem to help either.
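For anyone comparing against their own setup, the share layout above (with the container-side path fixed to /opt/stable-diffusion, per the EDIT) can be reproduced as a script. It is built under /tmp here purely for illustration; on the server the host path is /mnt/user/StableDiffusion, and the subfolder names follow the post, so treat anything beyond those as an assumption.

```shell
# Recreate the share layout from the post under /tmp (illustration only).
# On the server: host path /mnt/user/StableDiffusion -> container path /opt/stable-diffusion.
root=/tmp/StableDiffusion-demo
mkdir -p "$root/outputs"
mkdir -p "$root/models/embeddings"
mkdir -p "$root/models/hypernetwork"
mkdir -p "$root/models/lora"
mkdir -p "$root/models/stable-diffusion"   # checkpoint (.ckpt/.safetensors) files go here
mkdir -p "$root/models/vae"
mkdir -p "$root/01-easy-diffusion"
mkdir -p "$root/03-invokeai"

touch "$root/models/stable-diffusion/example.safetensors"   # placeholder checkpoint

find "$root" | sort
```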
WaxedWookie Posted August 22, 2023

On 8/16/2023 at 1:16 AM, dbinott said:
I have GeForce GTX 1050 Ti with only 4GB. Should I bother trying to install?

It works well enough for me with a 4GB 1050 Ti. I'd say give it a shot.
ShadowVlican Posted September 2, 2023

Thanks! It finally works! I had this extension installed a while ago, but it kept deleting my embeddings after every start, rendering the whole thing useless. Glad to see it's been fixed now.
Holaf Posted September 2, 2023

Glad to hear that it's working fine now.
wm-te Posted September 8, 2023

Hey, thanks for adding this to Unraid. I've had good success generating images with the Tesla P4 I use with containers.

I have found a problem, and seem to have just repeated it. I'm using 02-sd-webui. When I install the Dreambooth extension and then restart the UI as prompted, the container dies and won't restart. I think I had the same problem, caused the same way, about two weeks ago. I tried to fix it by removing the folder sd_dreambooth_extension from the extensions folder, but that didn't work, so I just did a total remove/reinstall. My Python fu is not strong enough to see from the log output whether there is a fix I should apply. If anyone has ideas, that would be great. Also, is there a better way to uninstall an extension that causes problems like this?

The log says:

Traceback (most recent call last):
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1086, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 85, in <module>
    from accelerate import __version__ as accelerate_version
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/checkpointing.py", line 24, in <module>
    from .utils import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/utils/__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/accelerate/utils/bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from .autograd._functions import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 5, in <module>
    import bitsandbytes.functional as F
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/functional.py", line 13, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 113, in <module>
    lib = CUDASetup.get_instance().lib
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 109, in get_instance
    cls._instance.initialize()
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 59, in initialize
    binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 125, in evaluate_cuda_setup
    cuda_version_string = get_cuda_version(cuda, cudart_path)
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 45, in get_cuda_version
    check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))
  File "/usr/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__
    func = self.__getitem__(name)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python3: undefined symbol: cudaRuntimeGetVersion

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/stable-diffusion/02-sd-webui/webui/launch.py", line 48, in <module>
    main()
  File "/opt/stable-diffusion/02-sd-webui/webui/launch.py", line 44, in main
    start()
  File "/opt/stable-diffusion/02-sd-webui/webui/modules/launch_utils.py", line 432, in start
    import webui
  File "/opt/stable-diffusion/02-sd-webui/webui/webui.py", line 13, in <module>
    initialize.imports()
  File "/opt/stable-diffusion/02-sd-webui/webui/modules/initialize.py", line 16, in imports
    import pytorch_lightning  # noqa: F401
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 35, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/batch_size_finder.py", line 24, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 27, in <module>
    from torchmetrics import Metric
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
    from torchmetrics import functional  # noqa: E402
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 120, in <module>
    from torchmetrics.functional.text._deprecated import _bleu_score as bleu_score
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/__init__.py", line 50, in <module>
    from torchmetrics.functional.text.bert import bert_score  # noqa: F401
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/bert.py", line 23, in <module>
    from torchmetrics.functional.text.helper_embedding_metric import (
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/torchmetrics/functional/text/helper_embedding_metric.py", line 27, in <module>
    from transformers import AutoModelForMaskedLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizerBase
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1076, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/opt/stable-diffusion/02-sd-webui/webui/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1088, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
python3: undefined symbol: cudaRuntimeGetVersion

[+] accelerate version 0.21.0 installed.
[+] diffusers version 0.19.3 installed.
[+] transformers version 4.30.2 installed.
[+] bitsandbytes version 0.35.4 installed.
Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --no-half-vae --disable-nan-check --api
Holaf Posted September 9, 2023

Hello, it's now fixed in the latest version of my container (1.5.1). FYI, I found the fix here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12770
Avsynthe Posted September 12, 2023

Hey all, I'm having an issue where Stable Diffusion never releases RAM on Automatic1111. The more I generate, the higher it goes. The server went down today and I couldn't figure out why; the last snapshot of the system showed 99% memory used of 64GB. I realised SD is just compounding away. This happens no matter what model I use, with VAE models increasing it quicker for obvious reasons. Switching models makes no difference; it just continues on. I've had to limit SD to 20GB RAM, so it'll eventually crash when it hits that. Is anyone else experiencing this?
Holaf Posted September 13, 2023

I believe this is "normal". With ComfyUI my container is currently using 35GB of RAM. I suspect that most of this RAM is used by Python libraries. Most of the people using these tools run them on their local computers rather than on servers, so they restart them often. In any case, I won't be able to do anything about this, unfortunately.
Avsynthe Posted September 13, 2023

After some further digging around the time of posting this, I found a thread on the AUTOMATIC1111 GitHub where it seems to be a somewhat common occurrence for some, but apparently it isn't meant to be normal. It also doesn't look like they know 100% what's causing it. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2180

Surely it's meant to release RAM after each generation, right? Regardless, with the restart-unless-stopped flag, it's more of just a delay once in a while after a number of generations. Small little workaround. Thanks for the reply!
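The cap-and-auto-restart workaround described above boils down to two Docker options. On Unraid these would typically go in the template's "Extra Parameters" field; the 20g value is just the number from the posts above, so treat both values as examples rather than recommendations (this is a config fragment, not a runnable script):

```shell
# Unraid "Extra Parameters" (equivalently, flags to `docker run`):
# cap the container's RAM, and restart it automatically after the OOM kill
--memory=20g --restart=unless-stopped
```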
ShadowVlican Posted September 18, 2023

Experiencing the same RAM problem as well. After a while of generating, it'll use up all my RAM and crash. I don't remember this happening when running A1111 on Windows.
Joly0 Posted September 21, 2023

I'm curious whether it's possible to run this with an AMD GPU instead of an Nvidia one. For example, InvokeAI supports running with ROCm instead of CUDA, but I don't see any way to get this running with ROCm, only CUDA. How can I do that?
Joly0 Posted September 21, 2023

And another thing: are the Dockerfile and the scripts used to create this container published somewhere? I'd like to take a look but can't find anything anywhere.