[SUPPORT] - stable-diffusion Advanced



Hey @Holaf.

Just wanted to say well done mate. Project looks fantastic, the ability to use different interfaces just rocks. Really impressive stuff.

 

I've got one question; it's perhaps obvious, but I can't find a solution to my problem.

 

Is there a way to unload the model from the GPU memory if not being used for some time, e.g. 15 minutes? A command or a script, or perhaps a line in the config to modify?

 

I use the same GPU for LLMs, and I'm limited to 12 GB (RTX A2000), which doesn't allow me to run both of those things simultaneously. Which is fair :)

 

Whenever I use llama, it loads the model into GPU memory, and after a few minutes of inactivity llama releases the memory, so I can use SD without any further action on my side. If I reverse the scenario, SD loads the model into GPU memory but never clears it (see attached screenshot). I can't run llama without restarting the SD Docker container.
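
For now the best workaround I can think of is a watchdog that restarts the container after the GPU sits idle for a while. A rough sketch (the container name and threshold are made up, and it assumes 0% utilization means idle):

#!/bin/bash
# Rough watchdog sketch: restart the SD container after ~15 minutes of GPU
# inactivity so the model is unloaded and the VRAM is freed. Assumes the
# container is named "stable-diffusion"; run it from cron or a screen session.
IDLE_LIMIT=15   # minutes of continuous idle before restarting
idle=0
while true; do
  util=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits | head -n1)
  if [ "$util" -eq 0 ]; then
    idle=$((idle + 1))
  else
    idle=0
  fi
  if [ "$idle" -ge "$IDLE_LIMIT" ]; then
    docker restart stable-diffusion
    idle=0
  fi
  sleep 60
done

But a built-in option would obviously be much nicer.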

 

[attached screenshot: GPU memory still allocated by stable-diffusion while idle]

 

I use Easy Diffusion as my preferred interface.

 

Keep up the good work!

Thanks.

It would be awesome if the code was on GitHub. It would help in edge cases where we need to add a dependency or something to the container. I know we can do that now, using your image as a base image, but it's a little bit harder not knowing exactly how it was built.


@Joly0 @BigD I got tired of waiting for the code so I just reverse engineered it 😄

 

https://github.com/FoxxMD/stable-diffusion-multi

 

The master branch is, AFAIK, the same as the current latest tag of holaf's image. I haven't published an image on Docker Hub for this, but you can use it to build your own locally.
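
Building it locally is just a clone and a build (this assumes the Dockerfile sits at the repo root; the tag is whatever you want):

# Build the master branch locally; pick any tag name you like
git clone https://github.com/FoxxMD/stable-diffusion-multi.git
cd stable-diffusion-multi
docker build -t stable-diffusion-multi:local .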

 

The lsio branch is my rework of holaf's code to run on Linuxserver.io's ubuntu base image. It is published on Docker Hub as foxxmd/stable-diffusion:lsio. It includes a fix for the SD.Next memory leak, and I plan on making more improvements next week. If anyone wants to migrate from holaf's image to this one, make sure you check the migration steps, as the folder structure is slightly different.

 

Also, I haven't thoroughly checked that everything actually works... just 04-SD-Next on my dev machine (no GPU available here yet). I will test both master/lsio more thoroughly next week when I get around to improvements.

 

EDIT: see this post below for updated image, repository, and migration steps

___

 

I'm also happy to add @Holaf as an owner on my GitHub repository if that makes it easier for them to contribute code, or to fold my changes into their repository when/if they make it available. I don't want to fragment the community, but I desperately needed to make these changes, and it's been almost a month waiting for the code at this point.

16 hours ago, FoxxMD said:

@Joly0 @BigD I got tired of waiting for the code so I just reverse engineered it 😄

 

https://github.com/FoxxMD/stable-diffusion-multi

 

The lsio branch is my rework of holaf's code to run on Linuxserver.io's ubuntu base image. It is published on Docker Hub as foxxmd/stable-diffusion:lsio. It includes a fix for the SD.Next memory leak, and I plan on making more improvements next week. If anyone wants to migrate from holaf's image to this one, make sure you check the migration steps, as the folder structure is slightly different.

This looks great! Any chance you could make a PR with your changes to holaf's official repo, now that it is released? I would like to see it using lsio as a base image rather than plain ubuntu. The memory leak fix might also be useful for everyone.


For anyone else who might be a dummy like me and couldn't get SDXL models like Juggernaut working on SD.Next: I found I needed to go to System > Settings > Execution backend, change it to "diffusers", then restart SD.Next.

 

Not sure why I don't have to do this on the AUTOMATIC1111 setup, which is a similar program.
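
For reference, SD.Next also seems to accept the backend as a launch argument, which might save the trip through the UI (I haven't verified the flag on every version, so double-check it against yours):

# Possible launch-arg equivalent of the UI setting (verify against your SD.Next build)
./webui.sh --backend diffusers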


Hi, 

I have two small problems.
Even if I change the output location, it still generates the images in the default output folder (when I go back to the Docker settings, my custom path is still there). Do you have an idea? I use the Fooocus GUI.

The other problem is that the images are generated by the "nobody" user, which means I constantly have to give myself write permission before I can delete them.
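
For now I work around it from the host with something like this (99:100 is unRAID's default nobody:users, and the path is a guess at the default appdata share):

# One-off fix from the unRAID host: make the generated images writable again.
# Adjust the path to wherever your outputs actually live.
chown -R 99:100 /mnt/user/appdata/stable-diffusion/outputs
chmod -R ug+rw /mnt/user/appdata/stable-diffusion/outputs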

And really good work, the Docker container works very well! It's great!

7 hours ago, Joly0 said:

This looks great! Any chance you could make a PR with your changes to holaf's official repo, now that it is released? I would like to see it using lsio as a base image rather than plain ubuntu. The memory leak fix might also be useful for everyone.

Yes, I'll make PRs. The memory leak fix and the lsio rework are not dependent on each other, so they'll be separate.

4 hours ago, FoxxMD said:

Yes, I'll make PRs. The memory leak fix and the lsio rework are not dependent on each other, so they'll be separate.

Nice. I am currently working on getting this to run with AMD cards and ROCm. We'll see how it goes. If we both succeed, this project will take a big leap forward.


Hi, 

 

Using ComfyUI, I have a custom module that requires NDI Python to be installed for it to work.

Using the Docker instance console, I installed pip and tried installing ndi with

 

pip install ndi-python


but it keeps giving me the error below, and the ComfyUI module won't work.

Can someone point me in the right direction?
 

 

Prestartup times for custom nodes:
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager-main

Total VRAM 32365 MB, total RAM 128681 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA RTX 5000 Ada Generation : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
### Loading: ComfyUI-Manager (V1.6.4)
### ComfyUI Revision: 1804 [614b7e73] | Released on '2023-12-09'
Loading ComfyUI-NDI nodes begin----------
Traceback (most recent call last):
  File "/opt/stable-diffusion/05-comfy-ui/ComfyUI/nodes.py", line 1800, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/comfyui-NDI/__init__.py", line 13, in <module>
    import NDIlib as ndi
  File "/opt/stable-diffusion/05-comfy-ui/env/lib/python3.11/site-packages/NDIlib/__init__.py", line 7, in <module>
    from .NDIlib import *
ModuleNotFoundError: No module named 'NDIlib.NDIlib'

Cannot import /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/comfyui-NDI module for custom nodes: No module named 'NDIlib.NDIlib'

Import times for custom nodes:
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI_toyxyz_test_nodes-main
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI_toyxyz_test_nodes
   0.0 seconds (IMPORT FAILED): /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/comfyui-NDI
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUi-NoodleWebcam
   0.0 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager-main
   0.1 seconds: /opt/stable-diffusion/05-comfy-ui/ComfyUI/custom_nodes/Jovimetrix

Setting output directory to: /outputs/05-comfy-ui
Starting server

To see the GUI go to: http://0.0.0.0:9000
^C
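
One lead I'm chasing, in case it helps: the traceback shows NDIlib did install into the venv at /opt/stable-diffusion/05-comfy-ui/env, but its compiled NDIlib.NDIlib extension is missing, which usually means pip fell back to a source build. Reinstalling with the venv's own pip and insisting on a prebuilt wheel might help (untested):

# Reinstall inside the venv ComfyUI actually runs from (path taken from the traceback),
# refusing source builds so the compiled NDIlib extension has to come from a wheel.
/opt/stable-diffusion/05-comfy-ui/env/bin/pip install --force-reinstall --only-binary :all: ndi-python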

 


What is the current state of this? I messed up and deleted my old working install, and I have been running in circles trying to get things back. I've tried the normal method, then FoxxMD's guide, but I don't know if I'm wasting time while things are still being worked on. I've spent all day watching logs from my two Unraid servers and am throwing in the towel at this point... my eyes hurt.


@Joly0 @BigD

 

I forked holaf's repository, available at foxxmd/stable-diffusion, and have been building on it instead of the repo from my last post. There are individual pull requests open on his repository for all my improvements, BUT the main branch on my repo has everything combined as well, and that is where I'll be working until/if holaf merges my PRs.

 

My combined main branch is also available as a Docker image at foxxmd/stable-diffusion:latest on Docker Hub and at ghcr.io/foxxmd/stable-diffusion:latest.

 

I have only tested with SD.Next, but everything else should also work.

 

To migrate from holaf's image to mine on unRAID, edit (or create) the stable-diffusion template as follows (a docker run equivalent for non-unRAID users is sketched after the list):

  • Repository => foxxmd/stable-diffusion:latest
  • Edit Stable-Diffusion UI Path
    • Container Path => /config
  • Remove Outputs
    • These will still be generated at /mnt/user/appdata/stable-diffusion/outputs
  • Add Variable
    • Name/Key => PUID
    • Value => 99
  • Add Variable
    • Name/Key => PGID
    • Value => 100

_______

 

Changes (as of this post):

 

  • Switched to Linuxserver.io ubuntu base image
  • Installed missing git dependency
  • Fixed SD.Next memory leak
  • For SD.Next and automatic1111
    • Packages (venv) are only re-installed if your container is out-of-date with the upstream git repository -- this reduces startup time after the first install by about 90%
    • Packages can be forcibly reinstalled by setting the environment variable CLEAN_ENV=true on your docker container (a Variable in the unRAID template); see the example below
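
For example, to force a clean reinstall on the next start (same run as the migration sketch above, with the one-time flag added):

# Wipes and reinstalls the venv packages on startup. Remove the variable again
# afterwards, otherwise it will presumably reinstall on every start.
docker run -d \
  --name stable-diffusion \
  --gpus all \
  -e PUID=99 \
  -e PGID=100 \
  -e CLEAN_ENV=true \
  -p 9000:9000 \
  -v /mnt/user/appdata/stable-diffusion:/config \
  foxxmd/stable-diffusion:latest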

______

If you have issues, you must post your problem along with the WEBUI_VERSION you are using.


Hi, first of all thanks to everyone who is supporting the project and putting so much effort into it. :)

 

I have a question: is it possible to run it on an Intel integrated GPU or on the CPU? The Docker container from "holaflenain" targets nvidia by default, and for my home server with an Intel 13500T an extra graphics card is overkill and not efficient.

4 hours ago, Nordzwerg said:

Hi, first of all thanks to everyone who is supporting the project and putting so much effort into it. :)

 

I have a question: is it possible to run it on an Intel integrated GPU or on the CPU? The Docker container from "holaflenain" targets nvidia by default, and for my home server with an Intel 13500T an extra graphics card is overkill and not efficient.

 

You don't want to run it on the CPU, and integrated GPUs are much too weak to be usable. Stable Diffusion requires a reasonably recent dedicated graphics card with a decent amount of VRAM (8GB+ recommended) to run at reasonable speeds. It can run on a CPU, but you'll be waiting something like an hour to generate one image.
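
If you really want to try it anyway, most of the bundled UIs have a CPU fallback flag; for example ComfyUI (verify the flag against your version):

# Forces ComfyUI to run entirely on the CPU. Expect single images to take
# many minutes to hours depending on the model and resolution.
python main.py --cpu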


@FoxxMD I reviewed your PRs and it looks like great work. I hope it makes its way into the project at some point.


Hopefully it's not a problem to discuss your fork a bit in this thread, considering the two projects may eventually merge.

 

I'm trying out your branch on my system (which is not Unraid, just a Windows box with WSL2, so I'm a bit of an outlier).

Launching the foxxmd version, I saw this error on run:

fooocus-fox  | s6-overlay-suexec: fatal: can only run as pid 1

It turns out this is because I was using init: true in docker-compose.yml. I was doing that so I could stop containers quickly: the signal handling isn't correct, so containers wait 10 seconds before they are killed. I'm able to start Fooocus by removing that init/tini proxy process, but now the container takes 10 seconds to close every time.
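
For anyone else who hits this: the docker CLI equivalent of that compose option is --init, which makes the collision easy to reproduce:

# --init injects tini as PID 1, so s6-overlay is no longer PID 1 and aborts:
docker run --rm --init foxxmd/stable-diffusion:latest
# fails with: s6-overlay-suexec: fatal: can only run as pid 1

# Without --init (or init: true in compose), s6 is PID 1 and the container starts:
docker run --rm foxxmd/stable-diffusion:latest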


@FoxxMD after the first install, I am running into this error with Fooocus on ghcr.io/foxxmd/stable-diffusion:latest:
 

fooocus-fox  |     onnxruntime 1.16.3 depends on numpy>=1.24.2
fooocus-fox  |
fooocus-fox  | To fix this you could try to:
fooocus-fox  | 1. loosen the range of package versions you've specified
fooocus-fox  | 2. remove package versions to allow pip attempt to solve the dependency conflict
fooocus-fox  |
fooocus-fox  | ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
fooocus-fox  | [System ARGV] ['launch.py', '--listen', '0.0.0.0', '--port', '9000']
fooocus-fox  | Traceback (most recent call last):
fooocus-fox  |   File "/config/06-Fooocus/Fooocus/launch.py", line 24, in <module>
fooocus-fox  |     from modules.config import path_checkpoints, path_loras, path_vae_approx, path_fooocus_expansion, \
fooocus-fox  |   File "/config/06-Fooocus/Fooocus/modules/config.py", line 7, in <module>
fooocus-fox  |     import modules.sdxl_styles
fooocus-fox  |   File "/config/06-Fooocus/Fooocus/modules/sdxl_styles.py", line 5, in <module>
fooocus-fox  |     from modules.util import get_files_from_folder
fooocus-fox  |   File "/config/06-Fooocus/Fooocus/modules/util.py", line 1, in <module>
fooocus-fox  |     import numpy as np
fooocus-fox  | ModuleNotFoundError: No module named 'numpy'

 


@BigD ah yes, that happens because s6 is the init process for the container, and using init: true injects tini, so you end up with the container init'ing tini init'ing s6!

 

23 minutes ago, BigD said:

now the container takes 10 seconds to close every time.

 

This is likely happening because some process started by Fooocus is not respecting shutdown signals or is frozen (or it may be holaf's entry process for the container!)

 

You can adjust how long s6 waits for processes to finish gracefully using environment variables that customize s6's behavior. I would look at S6_KILL_FINISH_MAXTIME and S6_KILL_GRACETIME.
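
For example (both values are in milliseconds; the defaults are 3000 and 5000 if I remember right, so these just shorten the wait):

# Shorten how long s6 waits before hard-killing lingering processes on shutdown.
docker run -d \
  -e S6_KILL_GRACETIME=1000 \
  -e S6_KILL_FINISH_MAXTIME=2000 \
  foxxmd/stable-diffusion:latest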
