[SUPPORT] - stable-diffusion Advanced


Recommended Posts

14 hours ago, Holaf said:

I don't have this kind of error on my side :/
FYI, I made a screenshot of my template if you want to compare with yours.


I saw that I made a mistake with the Fooocus save folder; it should be fixed now (3.0.2).

If you still have those crashes, can you post the log output?
Thanks :)

 

I've done some tests with Fooocus and everything works now. Thanks for all the work you do, it's really appreciated. 😁

Link to comment

@superboki Thank you again for taking the time to put all this together and make it work. It's already come a long way from early 2023, when I first found the docker image.

 

Please keep up the great work!  

 

PS: Yes, I'm going to ask for something (sorry, I don't even know if it's possible), but I wish there was an option, almost an overlay GUI, that would let us quick-swap UIs if we have them installed... with a centralized output directory as well, rather than each UI having its own.

Regardless, thanks again.

Edited by echofire
Link to comment

Hello there, 

Two small questions out of curiosity:
Is it possible to give one of our shares access to the various GUIs (Automatic1111 or Kohya)? If so, what's the best way? Do we add a Host Path to the template? If so, what should we put as the container path? Or is there another way?

And I want to test creating a LoRA with Kohya. I saw that you can install cuDNN 8.6 to help, and since I've just got an RTX 3050 8 GB, I figure it wouldn't be a bad thing. I searched a bit on the internet and couldn't find how to install it on Unraid (I only saw instructions for Windows and Ubuntu). Has anyone here tested it on Unraid, and did it work?

Thanks

Link to comment

Oh, maybe I spoke too fast; looking more closely at the installation log, it looks like cuDNN is included with Torch (if I understood correctly) 😁

 

11:37:31-861037 INFO     Version: v22.4.1                                                                                                                
11:37:31-869485 INFO     nVidia toolkit detected                                
11:37:39-146793 INFO     Torch 2.1.2+cu121                                      
11:37:40-028922 INFO     Torch backend: nVidia CUDA 12.1 cuDNN 8902             
11:37:40-043726 INFO     Torch detected GPU: NVIDIA GeForce RTX 3050 VRAM 7973  
                                      Arch (8, 6) Cores 20  

 
 

Link to comment

Hey guys, is something wrong with the output directory? I keep getting this in ComfyUI:
 

[screenshot: ComfyUI output-directory error]

 

EDIT:

I found out that the parameters.txt file was wrong and still had the old path. Maybe this should be added as a migration step in the container.
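Something like this could serve as that migration step (the file location and the old path here are assumptions, check your own /config layout first):

```shell
# Sketch of a possible migration step (file name and the old path are
# assumptions): rewrite a stale output path in parameters.txt, keeping a backup.
migrate_param_path() {
  file="$1"; old="$2"; new="$3"
  cp "$file" "$file.bak"              # keep a backup before editing
  sed -i "s#${old}#${new}#g" "$file"  # '#' delimiter avoids escaping the slashes
}

# Hypothetical usage:
# migrate_param_path /config/05-comfy-ui/parameters.txt /old/outputs /config/outputs
```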

Edited by Joly0
Link to comment

Ok @superboki @FoxxMD @Holaf, I have found a serious issue with this container. The problem is still the error above. I ran into the same issue with ComfyUI when I tried to install nodes or extensions using ComfyUI-Manager. What I have found out so far is that git invokes a git-remote-https helper every time you run a "git clone https://xyz" command, which is basically every time you use ComfyUI-Manager or, in the example above in Stable Diffusion, the model downloader for certain things.
I have checked, and on the docker itself git is correctly installed with HTTPS support. However, and this is the interesting part: the git-remote-https helper is missing in the conda environment. The executables for ComfyUI, for example, are located under "/config/05-comfy-ui/env/bin".
The attached screenshot shows everything that is installed regarding git in the conda environment (git-lfs, by the way, is also recommended; I had to install that manually using "conda install -c conda-forge git-lfs -y").
So obviously git can't use HTTPS for clone commands, and therefore ComfyUI-Manager can't install anything; I constantly get error messages.
I have also checked all the channels I could find for a git package with HTTPS support that can be installed through conda install, but none have it. So we have an issue here. I am not really familiar with conda and all the environment stuff, so I have no further ideas; I am just doing trial and error at the moment to find a solution.
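For anyone who wants to reproduce the check from inside the container: git's HTTPS support lives in a separate helper binary on its exec path, so you can look for it directly:

```shell
# Check whether this git installation can clone over HTTPS: the
# git-remote-https helper must exist in git's exec path.
exec_path="$(git --exec-path)"
echo "exec path: $exec_path"
ls "$exec_path" | grep -q 'remote-https' \
  && echo "git-remote-https present" \
  || echo "git-remote-https MISSING"
```

Run it once with the system git and once inside the conda environment (e.g. with /config/05-comfy-ui/env/bin first in PATH) to see the difference.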

Maybe one of you has a solution in mind, but this is what I found. IMO this is a serious issue, so it should get fixed ASAP.

Edited by Joly0
Link to comment

Hi everyone,

I've been experimenting with various Stable Diffusion solutions and have successfully implemented many of them. However, I'm encountering an issue with FaceFusion. Despite its capabilities, it doesn't seem to leverage my NVIDIA 3090 GPU effectively. I believe CUDA not appearing as an option means it's not loading or not configured properly.

Has anyone else faced this issue? If so, are there any specific configurations or tricks to optimize GPU utilization for FaceFusion? Your insights would be greatly appreciated!

For reference, here are the solutions I have working so far: sd, invokeai, comfy-ui

Edited by CoreyG
Link to comment
On 12/19/2023 at 8:33 AM, xhaloz said:

Ok, I've done some digging. If I add FaceFusion via the docker settings or within the Automatic1111 extensions, it does not detect the GPU, and thus doesn't give me a "CUDA" option.

The container works fine with the GPU for generating art, and the device shows up in nvidia-smi when you docker exec into the container. However, if I activate the venv within the docker container and run

python3 -c "import tensorflow; print(tensorflow.config.experimental.list_physical_devices('GPU'))"

It returns []

 

For some reason TensorFlow does not detect the GPU, and FaceFusion needs that detection to provide the CUDA option. Any thoughts?

Did you ever solve this?

Link to comment
On 1/5/2024 at 9:03 PM, Joly0 said:

Ok @superboki @FoxxMD @Holaf, I have found a serious issue with this container. The problem is still the error above. I ran into the same issue with ComfyUI when I tried to install nodes or extensions using ComfyUI-Manager. What I have found out so far is that git invokes a git-remote-https helper every time you run a "git clone https://xyz" command, which is basically every time you use ComfyUI-Manager or, in the example above in Stable Diffusion, the model downloader for certain things.
I have checked, and on the docker itself git is correctly installed with HTTPS support. However, and this is the interesting part: the git-remote-https helper is missing in the conda environment. The executables for ComfyUI, for example, are located under "/config/05-comfy-ui/env/bin".
The attached screenshot shows everything that is installed regarding git in the conda environment (git-lfs, by the way, is also recommended; I had to install that manually using "conda install -c conda-forge git-lfs -y").
So obviously git can't use HTTPS for clone commands, and therefore ComfyUI-Manager can't install anything; I constantly get error messages.
I have also checked all the channels I could find for a git package with HTTPS support that can be installed through conda install, but none have it. So we have an issue here. I am not really familiar with conda and all the environment stuff, so I have no further ideas; I am just doing trial and error at the moment to find a solution.

Maybe one of you has a solution in mind, but this is what I found. IMO this is a serious issue, so it should get fixed ASAP.

 

Can you try modifying this line in 05.sh to include curl?

 

conda install -c conda-forge git curl python=3.11 pip gxx libcurand --solver=libmamba -y

 

Link to comment

I currently run ComfyUI and A1111 on a Win11 VM (with GPU passthrough) on an unRAID server. All output, models, etc. are kept in the VM (nothing on the array). I think using this docker might be a cleaner solution. My main concern is whether there would be a performance hit. I have a 12 GB VRAM Nvidia 4070. Theoretically (or practically), is there a significant downside/upside to moving to the docker?

 

If I do the migration, I'm thinking of following these steps:

1. Remove GPU binding

2. Reboot

3. Start Win11 VM and make sure it's running OK without GPU

4. Install this Stable Diffusion docker

5. Install SD GUIs and organise folder mappings

6. Go into Win11 VM and copy the necessary files to the corresponding docker mapped folders (I'm guessing mainly json workflows, models, custom nodes, extensions, embeddings, LoRAs).

7. Might have to fix some model locations in the workflows, but that shouldn't be a big problem.
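For the copy step, I'm picturing something like this (both paths are placeholders for wherever the VM files and the docker folders end up mounted):

```shell
# Sketch for the copy step: pull SD assets from the (mounted) VM folders into
# the container's mapped folders without clobbering anything the container
# already created. Both paths are placeholders.
copy_sd_assets() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  cp -an "$src"/. "$dst"/   # -a preserve attributes, -n never overwrite
}

# Hypothetical usage:
# copy_sd_assets /mnt/vm-share/ComfyUI/models /mnt/user/appdata/stable-diffusion/05-comfy-ui/models
```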

 

Any suggestions or advice on this?

 

Link to comment
  • 2 weeks later...
On 1/12/2024 at 3:53 AM, sonofdbn said:

I currently run Comfyui and A1111 on a Win11 VM (with GPU passthrough) on an unRAID server. All output and models etc. are kept in the VM (nothing on the array). I think using this docker might be a cleaner solution. My main concern is whether there would be a performance hit. I have a 12GB VRAM Nvidia 4070. Theoretically (or practically) is there a significant downside/upside in moving to the docker?

 

If I do the migration, I'm thinking of following these steps:

1. Remove GPU binding

2. Reboot

3. Start Win11 VM and make sure it's running OK without GPU

4. Install this Stable Diffusion docker

5. Install SD GUIs and organise folder mappings

6. Go into Win11 VM and copy the necessary files to the corresponding docker mapped folders (I'm guessing mainly json workflows, models, custom nodes, extensions, embeddings, LoRAs).

7. Might have to fix some model locations in the workflows, but that shouldn't be a big problem.

 

Any suggestions or advice on this?

 

You are getting a performance hit by running Windows 11 in a VM; Docker would remove the crazy amount of overhead Windows introduces. If you're going to use a VM and take your GPU away from your docker environment, I would use a Linux VM (Ubuntu without a GUI at all). But yeah, you could move all of the Stable Diffusion files out of your VM and just find the corresponding folders of this docker container; they should mostly map 1:1. Just map your /config directory to a place you can access, and that is where the files will exist.

 

 

Link to comment
On 1/9/2024 at 4:30 AM, CoreyG said:

Did you ever solve this?

No, but I did find a solution that needs to be tested. Apparently, when you install extensions, they install onnxruntime, which creates a conflict with the existing onnx files. When this happens, it disables CUDA. But again, I have not tested this out. If I do, I will report back here and tag you.

Link to comment
On 1/9/2024 at 4:30 AM, CoreyG said:

Did you ever solve this?

Oh, also: https://github.com/facefusion/facefusion

 

uses PyTorch now instead of TensorFlow, but the version within the Holaf container uses TensorFlow for some reason, even though the script pulls from the original repo, which creates that issue as well. I'll check this out later tonight and see if I can push a change to their GitHub.

Edited by xhaloz
Link to comment
14 hours ago, xhaloz said:

You are getting a performance hit by running Windows 11 in a VM; Docker would remove the crazy amount of overhead Windows introduces. If you're going to use a VM and take your GPU away from your docker environment, I would use a Linux VM (Ubuntu without a GUI at all). But yeah, you could move all of the Stable Diffusion files out of your VM and just find the corresponding folders of this docker container; they should mostly map 1:1. Just map your /config directory to a place you can access, and that is where the files will exist.

 

Thanks, I'm trying this out now.

 

I have (05) ComfyUI installed. If I want to try something else like (70) Kohya, do I just edit the template and select a different WebUI port and WEBUI_VERSION 70 and apply? Or would this require a separate container?

 

For ComfyUI I see there's a parameters.txt file that allows you to set the output directory, which defaults to a path in /config, and models are also stored in /config. In my case /config is actually a folder in the appdata share on an NVME cache drive. This drive is likely going to run out of space quickly if I store models and output there.

 

I know that ComfyUI has a yaml file that lets you point to another folder for models. So I can put models somewhere else and point ComfyUI at that location. How does the docker container use the models folder that it sets up? Will all installed UIs automatically look there in addition to anything set up by each UI? And if so, can I move that models folder somewhere else? I would prefer not to move my appdata share. I suppose I could set up the stable diffusion config folder outside appdata, but that doesn't seem right, although I can't think of a good reason why it would be wrong. (Maybe appdata backup would be a bit of a problem?)
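The sort of thing I mean in that yaml looks roughly like this (the folder names are standard ComfyUI categories; the base_path is just a guess at a container-side mapping, not something from the template):

```yaml
# extra_model_paths.yaml -- hypothetical entry pointing ComfyUI at models
# kept outside /config; adjust base_path to your own mapping
array_models:
    base_path: /data/models
    checkpoints: checkpoints
    loras: loras
    vae: vae
    embeddings: embeddings
```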

 

I've tried Pinokio (a browser-based front-end for various Stable Diffusion UIs) on Windows, and it seems to be a little like this container: it lets you download and install various UIs, but the documentation is a bit sparse.

 

Link to comment
13 hours ago, sonofdbn said:

I have (05) ComfyUI installed. If I want to try something else like (70) Kohya, do I just edit the template and select a different WebUI port and WEBUI_VERSION 70 and apply? Or would this require a separate container?

 

For ComfyUI I see there's a parameters.txt file that allows you to set the output directory, which defaults to a path in /config, and models are also stored in /config. In my case /config is actually a folder in the appdata share on an NVME cache drive. This drive is likely going to run out of space quickly if I store models and output there.

 

I know that ComfyUI has a yaml file that lets you point to another folder for models. So I can put models somewhere else and point ComfyUI at that location. How does the docker container use the models folder that it sets up? Will all installed UIs automatically look there in addition to anything set up by each UI? And if so, can I move that models folder somewhere else? I would prefer not to move my appdata share. I suppose I could set up the stable diffusion config folder outside appdata, but that doesn't seem right, although I can't think of a good reason why it would be wrong. (Maybe appdata backup would be a bit of a problem?)

 

I've tried Pinokio (a browser-based front-end for various Stable Diffusion UIs) on Windows, and it seems to be a little like this container: it lets you download and install various UIs, but the documentation is a bit sparse.

 

You can just edit the template and change the WEBUI_VERSION.
Interfaces are stored in different folders and work alongside each other.
You can even run multiple containers pointing to the same local folder at the same time.

If you want to split data, you can edit the container and add a path like this:
[screenshot: extra path added to the container template]

 

You can do the same thing for the output folder, which also has a tendency to grow fast :)
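Expressed as plain docker flags, the extra template paths above amount to something like this (the host-side shares and the image name here are examples, not fixed values):

```shell
# Hypothetical equivalent of adding extra paths in the Unraid template:
# map a models share and an outputs share into the container alongside /config.
docker run -d --name stable-diffusion \
  -v /mnt/user/appdata/stable-diffusion:/config \
  -v /mnt/user/sd-models:/data/models \
  -v /mnt/user/sd-outputs:/data/outputs \
  holaflenain/stable-diffusion
```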

Indeed, Pinokio is a cool project for a Windows workstation. On Windows you also have Stability Matrix, which offers to install different web UIs, but it has a very small collection compared to Pinokio.

Link to comment

I'm trying to upgrade to the new version with comfy-ui and for some reason it seems to be looking for `/outputs` and not `/config/outputs`:

 

comfy-ui  | FileNotFoundError: [Errno 2] No such file or directory: '/outputs/05-comfy-ui/'
comfy-ui  |
comfy-ui  | During handling of the above exception, another exception occurred:
comfy-ui  |
comfy-ui  | Traceback (most recent call last):
comfy-ui  |   File "/config/05-comfy-ui/ComfyUI/execution.py", line 153, in recursive_execute
comfy-ui  |     output_data, output_ui = get_output_data(obj, input_data_all)
comfy-ui  |                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
comfy-ui  |   File "/config/05-comfy-ui/ComfyUI/execution.py", line 83, in get_output_data
comfy-ui  |     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
comfy-ui  |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
comfy-ui  |   File "/config/05-comfy-ui/ComfyUI/execution.py", line 76, in map_node_over_list
comfy-ui  |     results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
comfy-ui  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
comfy-ui  |   File "/config/05-comfy-ui/ComfyUI/nodes.py", line 1359, in save_images
comfy-ui  |     full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, self.output_dir, images[0].shape[1], images[0].shape[0])
comfy-ui  |                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
comfy-ui  |   File "/config/05-comfy-ui/ComfyUI/folder_paths.py", line 246, in get_save_image_path
comfy-ui  |     os.makedirs(full_output_folder, exist_ok=True)
comfy-ui  |   File "<frozen os>", line 215, in makedirs
comfy-ui  |   File "<frozen os>", line 225, in makedirs
comfy-ui  | PermissionError: [Errno 13] Permission denied: '/outputs'
comfy-ui  |
comfy-ui  | Prompt executed in 9.03 seconds
comfy-ui  | gc collect
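A workaround I'm considering (assuming the new layout really does want a top-level /outputs, and that the path just needs to exist and be writable) is to map a host folder there:

```shell
# Hypothetical workaround: mount a writable host folder at /outputs so the
# container can create /outputs/05-comfy-ui/. The host path is a placeholder.
docker run -d --name comfy-ui \
  -v /mnt/user/appdata/stable-diffusion:/config \
  -v /mnt/user/sd-outputs:/outputs \
  holaflenain/stable-diffusion
```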

 

Link to comment
