
[SUPPORT] - stable-diffusion Advanced


Recommended Posts

I have been using invoke-ai.

It seems it updated to version 4.0 upon startup.

Now it's not launching and keeps looping. I tried to find the command that is "not found" and edited parameters.txt, but I'm not sure what it's referring to.

I've included a screenshot. Has anyone else had this?

 

 

Screenshot_2024-04-02-15-44-30-357-edit_com.server.auditor.ssh.client.jpg

Link to comment
30 minutes ago, Aer said:

I have been using invoke-ai.

It seems it updated to version 4.0 upon startup.

Now it's not launching and keeps looping. I tried to find the command that is "not found" and edited parameters.txt, but I'm not sure what it's referring to.

I've included a screenshot. Has anyone else had this?

 

It started doing the same thing to me last night - I thought maybe I'd screwed something up by stopping and restarting the container.

Link to comment
21 hours ago, ippikiookami said:

 

It started doing the same thing to me last night - I thought maybe I'd screwed something up by stopping and restarting the container.

Yes, mine automatically updated after restarting the container too.

If you have any luck getting into InvokeAI, let me know.

Link to comment
On 3/14/2024 at 8:20 AM, Araso said:

The two other main files I want back are 'styles.csv' and 'ui-config.json'. The reason this took 6.5 hours is that I first tried restoring all of the files and directories from a backup in one go, but then I got a bunch of errors. So I deleted everything again and restarted from scratch. Then I began restoring things one by one, with a stop/start of the container in between, to figure out exactly what was causing the errors. That is why it took so long, and it is how I figured out that reinstalling extensions from the tab is the way to go.

 

In 'parameters.forge.txt' I had been using the following for a long time:

 

I've had the looping happen a couple of times now; it's hard to say what triggers it. The first time, I thought it was after an unraid docker update, but just a bit ago it happened while I was making images and tried to change the checkpoint... it simply stopped and gave me odd errors. I lost the log, but it was about being unable to access date-named files, corresponding to a change in ckpt, in this folder: "..appdata2/stable-diffusion/02-sd-webui/webui/config_states/"

I tried restarting, and it just looped, taking forever to do so. Previously I had added/uncommented these lines in the parameters.txt file:
--reinstall-xformers
--reinstall-torch

I then re-commented them afterwards, so I can only assume this is why it started looping when I restarted the container after the errors.


And that alone fixed the looping: no folders deleted, etc., and no configs/extensions lost. Not sure this helps everyone, but it seems to have worked for me twice now. Hope it fixes the issue at least for some.
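
For anyone hunting for those lines later, a sketch of how that part of parameters.txt might look once re-commented (the '#' comment style and surrounding content are assumptions; check your own file):

# one-shot reinstall flags: uncomment, restart the container once, then comment out again
#--reinstall-xformers
#--reinstall-torch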

 

I do have a question: how did you get Forge installed on this docker? I haven't tried it yet on my desktop, but all the word is that it's much better for speed, etc., on the smaller cards. I'm getting about 1-5 it/s for 512-1024 sized images on my current RTX 2000 12GB.

 

 


Note: I run unraid in a 2U E5 20-core Xeon Supermicro chassis. I recently got an RTX 2000 12GB unit off eBay, and it has been fantastic. It only burns about 60 watts while churning out images, and works well for Plex transcoding at the same time. While running SD on my desktop (RTX 2080 8GB), it pretty much eats my whole machine; I can't play games, etc., while it's running, and can't 'pause' the queue to do so. $400 for a server GPU that sips power while munching images is a good deal. The 12GB vs. 8GB alone is a huge thing. Sadly, no 24GB low-profile card is likely to ever appear, but a 4U chassis and a new server may happen at some point.
 

Link to comment
11 hours ago, whitewlf said:

I do have a question: How did you get Forge installed on this docker?

 

The way I do it is to edit the WEBUI_VERSION key, separating my choices with a pipe:

 

webui-selection.png.d188e7ef2f362d34e62aec4ed43c791d.png

 

Then after saving, I get a nice and easy dropdown to select my preferred UI:

 

webui-dropdown.png.a1a7757a1210cfc92089c3c165a301fa.png

 

Literally just add this to (or replace the existing value of) the WEBUI_VERSION key:

02.forge
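
For a concrete sense of the pipe-separated form, the key ends up holding something like the line below (the first entry is just a placeholder for whatever your WEBUI_VERSION already contains; only "02.forge" comes from the post above):

WEBUI_VERSION = <existing value>|02.forge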

 

It should be noted that at the moment, Forge development seems to have stalled somewhat, unfortunately. Some things are broken, Regional Prompter being one of them. I use RP a lot, so I switched back to standard A1111. If you're careful, the memory optimisations you lose by giving up Forge don't really matter. Hopefully Forge picks back up again, but at least there's a choice between the two.

Link to comment

This is the first time I've installed this container. I want to point Models and Outputs to a different path than appdata.

I followed a previous post which added a Path variable for /config/models. Works.

However, changing the output path isn't working. The same post mentions /config/output, but no matter what I set this to, the outputs are always going to appdata/.../02-sd-webui/webui/output.

If I can't change this in the CA container template, is there another way to do so?

Thanks.

Edited by rubicon
Link to comment
4 hours ago, rubicon said:

This is the first time I've installed this container. I want to point Models and Outputs to a different path than appdata.

I followed a previous post which added a Path variable for /config/models. Works.

However, changing the output path isn't working. The same post mentions /config/output, but no matter what I set this to, the outputs are always going to appdata/.../02-sd-webui/webui/output.

If I can't change this in the CA container template, is there another way to do so?

Thanks.

 

Not sure if this is what you are stumbling against, but the script this runs on does some voodoo when it starts. Once you see it, you can understand its usefulness and how to work with it. You just cannot work -against- it. It took a little snooping to figure out on my own, but it is probably explained on the git page.

 

(From my understanding) in order to keep a central point of storage between multiple installations/types of stable diffusion, it creates symlinks inside the container that point each common folder (models, output, loras, VAEs, embeddings, etc.) at the shared location. I included a trimmed snippet below of the log output showing what gets associated to where in the 02 A1111 installation I am using (a minimal shell sketch of the same pattern follows the log):

moving folder /config/02-sd-webui/webui/models/Stable-diffusion to /config/models/stable-diffusion
removing folder /config/02-sd-webui/webui/models/Stable-diffusion and create symlink

moving folder /config/02-sd-webui/webui/models/hypernetworks to /config/models/hypernetwork
removing folder /config/02-sd-webui/webui/models/hypernetworks and create symlink

moving folder /config/02-sd-webui/webui/models/Lora to /config/models/lora
removing folder /config/02-sd-webui/webui/models/Lora and create symlink

moving folder /config/02-sd-webui/webui/models/VAE to /config/models/vae
removing folder /config/02-sd-webui/webui/models/VAE and create symlink

moving folder /config/02-sd-webui/webui/embeddings to /config/models/embeddings
removing folder /config/02-sd-webui/webui/embeddings and create symlink

moving folder /config/02-sd-webui/webui/models/ESRGAN to /config/models/upscale
removing folder /config/02-sd-webui/webui/models/ESRGAN and create symlink

moving folder /config/02-sd-webui/webui/models/BLIP to /config/models/blip
removing folder /config/02-sd-webui/webui/models/BLIP and create symlink

moving folder /config/02-sd-webui/webui/models/Codeformer to /config/models/codeformer
removing folder /config/02-sd-webui/webui/models/Codeformer and create symlink

moving folder /config/02-sd-webui/webui/models/GFPGAN to /config/models/gfpgan
removing folder /config/02-sd-webui/webui/models/GFPGAN and create symlink

moving folder /config/02-sd-webui/webui/models/LDSR to /config/models/ldsr
removing folder /config/02-sd-webui/webui/models/LDSR and create symlink

moving folder /config/02-sd-webui/webui/models/ControlNet to /config/models/controlnet
removing folder /config/02-sd-webui/webui/models/ControlNet and create symlink

moving folder /config/02-sd-webui/webui/outputs to /config/outputs/02-sd-webui
removing folder /config/02-sd-webui/webui/outputs and create symlink

Run Stable-Diffusion-WebUI
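
And here is the promised minimal shell sketch of that move-and-symlink pattern, using the Lora folder as an example (paths follow the log above; the container's actual startup script is more involved than this):

# move the UI's own folder into the shared /config/models tree...
mv /config/02-sd-webui/webui/models/Lora /config/models/lora
# ...then leave a symlink at the old location so the UI still finds its files
ln -s /config/models/lora /config/02-sd-webui/webui/models/Lora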

 

 

Also, I cannot remember if I added this or if it is the default, but I forwarded the output to an array share in the Docker template, and the full models folder points to another array share. Initially I thought it would be nice to keep checkpoints etc. on my cache NVMe, but I've been hoarding again and now have a full TB of models/loras/etc. Plus, I don't swap checkpoints often; I simply run dozens of prompts simultaneously like a lunatic. I do a lot of seed shopping, so the checkpoint just sits on the card, and loras are small, so there is no reason to burn the faster storage, for me. Plus, the share uses the array cache anyway.

 

image.thumb.png.b33cc505daa47056515cd07acf06b105.png


 

 

Edited by whitewlf
Link to comment
On 4/2/2024 at 4:50 PM, Aer said:

I have been using invoke-ai.

It seems it updated to version 4.0 upon startup.

Now it's not launching and keeps looping. I tried to find the command that is "not found" and edited parameters.txt, but I'm not sure what it's referring to.

I've included a screenshot. Has anyone else had this?

 

 

Screenshot_2024-04-02-15-44-30-357-edit_com.server.auditor.ssh.client.jpg

Thanks a lot, I'll take a look at it.

Link to comment

Whitewlf, thanks for the suggestions.

I looked at the directories in the log file compared to what was created on the server.
 

moving folder /config/02-sd-webui/webui/outputs to /config/outputs/02-sd-webui
removing folder /config/02-sd-webui/webui/outputs and create symlink

 

The directory structure looks like this:

 

output/
outputs -> /config/outputs/02-sd-webui/


In the WebUI, the path settings are set to defaults and look like this:
 

output/txt2img-images
output/img2img-images
output/extras-images


I was confused about why the created symlink is named "outputs" while the actual output directory is "output". All of my saves were going into "output", which is inside the appdata folder.

For Container Path, I used /config/output/ and pointed it to my server share. However, images would still be saved into appdata, not my share. I tried variations of the Container Path value, such as /config/outputs, but it still saved into appdata.

I finally found a Container Path value which works:
 

Container Path = /config/02-sd-webui/webui/output

 

I thought I had tried this before, but I may have used /config/02-sd-webui/webui/outputs, which didn't work (guessing) because the install script was already creating a directory named "outputs" and the symlink.

 

I'm not sure if this will break something later on, but it's working; my saves are going to my share. I'm also unclear how other folks (yourself included) are using a different Container Path with success.
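
For reference, the same mapping expressed as a raw Docker volume flag would look roughly like this (the host-side path is a made-up example; substitute your actual share):

-v /mnt/user/sd-outputs:/config/02-sd-webui/webui/output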

 

747785386_Screenshot2024-04-05at11_47_33AM.thumb.png.be03cccb76e824311cbc8b4601d433a3.png
 

 

Edited by rubicon
Link to comment
On 4/2/2024 at 4:50 PM, Aer said:

I have been using invoke-ai.

It seems it updated to version 4.0 upon startup.

Now it's not launching and keeps looping. I tried to find the command that is "not found" and edited parameters.txt, but I'm not sure what it's referring to.

I've included a screenshot. Has anyone else had this?

 

 

Yep, same here.

Link to comment

@anknv @Aer @rubicon @ippikiookami (and those I forgot 😅)

I just pushed a new version that should fix the issues with InvokeAI.
They removed almost all launch parameters, so the file "parameters.txt" is no longer used and will be renamed.
Instead, a default config.yaml file will be placed in the "03-invokeai" folder; you can modify it to suit your preferences.

I had very little time to test, so I hope I didn't leave too many bugs 🤞

The fixes I made for the auto1111 path and for Kohya are also in the latest version.
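
If it helps, here is a minimal sketch of the kind of overrides that config.yaml can carry, assuming InvokeAI's documented host/port settings; treat the default file the container generates as the authoritative reference, not this:

host: 0.0.0.0
port: 9090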
 

Edited by Holaf
Link to comment

Anyone know why this happens with the latest update (05-comfy-ui)?

Log from 04/15/2024, ~08:41 PM:

Prestartup times for custom nodes:
   0.9 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 12037 MB, total RAM 96462 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention

Traceback (most recent call last):
  File "/config/05-comfy-ui/ComfyUI/nodes.py", line 1864, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/__init__.py", line 18, in <module>
    from .glob import manager_core as core
  File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 9, in <module>
    import git
ModuleNotFoundError: No module named 'git'

Cannot import /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager module for custom nodes: No module named 'git'

Import times for custom nodes:
   0.0 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/websocket_image_save.py
   0.1 seconds (IMPORT FAILED): /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager

Setting output directory to: /config/outputs/05-comfy-ui
Starting server

To see the GUI go to: http://0.0.0.0:9000

 

Link to comment

I don't have this issue 🤔
Did you try clearing the venv to see if that's enough to solve the problem? (Keep in mind that if you installed some Python packages via the manager, you'll have to reinstall them.)
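
If clearing the venv doesn't do it, another thing that might be worth trying for the missing 'git' module is installing GitPython (the package that provides 'import git') from the container console into whichever venv ComfyUI uses; the path below is an assumption:

source /config/05-comfy-ui/venv/bin/activate   # venv location is a guess - adjust to your install
pip install GitPython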

Link to comment

Hello, super big newb here. I am trying to run this on unraid with a 5950X and a 3090 Ti. It does not seem to want to use the GPU, which stays at only 2 percent usage while my CPU gets pinned. I have the Nvidia plugin installed and updated. Any idea what I should look for or do would be appreciated.

Link to comment
Can you open a terminal in the container and type this command: nvidia-smi
and then tell me if you get an error?
 
I managed to find the problem. My server was previously in sleep mode; apparently this causes a problem with the GPU. Restarting the server fixed it. Thank you for replying, though!

Sent from my Pixel 6 Pro using Tapatalk

Link to comment

I've installed this for the 1st time and chose 04 SD.Next.

I see a bunch of 'WARNING Package version mismatch' entries in the log; will those be aligned in the next update?

 

Quote

11:27:36-921135 WARNING  Package version mismatch: tqdm 4.66.2 required 4.66.1
11:27:36-922221 INFO     Installing package: tqdm==4.66.1
11:27:39-162149 WARNING  Package version mismatch: accelerate 0.29.3 required 0.28.0
11:27:39-163294 INFO     Installing package: accelerate==0.28.0
11:27:41-934982 INFO     Installing package: opencv-contrib-python-headless==4.9.0.80
11:27:45-634186 WARNING  Package version mismatch: diffusers 0.27.2 required 0.27.0
11:27:45-635316 INFO     Installing package: diffusers==0.27.0
11:27:49-200685 INFO     Installing package: einops==0.4.1
11:27:51-285687 INFO     Installing package: gradio==3.43.2
11:28:00-532099 WARNING  Package version mismatch: huggingface_hub 0.22.2 required 0.21.4
11:28:00-533244 INFO     Installing package: huggingface_hub==0.21.4

 

Link to comment
9 hours ago, shpitz461 said:

Also, is there a way to censor the images? Just launched it for the 1st time and already showing tits:

 

image.thumb.png.de5064561650eb1f5616974ca4613055.png

There is in InvokeAI (option 3): under Settings, enable the NSFW checker.

Link to comment

Thanks for this! I'm having trouble getting the docker to use my 1070 Ti.

  • I have Nvidia drivers installed and up-to-date
  • I set NVIDIA_VISIBLE_DEVICES to my GPU-xxx GUID
  • I also tried setting NVIDIA_VISIBLE_DEVICES to all but that had no effect
  • I have --runtime=nvidia in the Extra parameters
  • I also tried adding a variable for NVIDIA_DRIVER_CAPABILITIES set to all but that had no effect
  • I also saw an error message in the logs on startup about there being no CUDA_VISIBLE_DEVICES, so I tried adding that variable as well, set to my GPU-xxx GUID, to no avail

All that said, the GPU is being passed through - stable diffusion just isn't using it. When I go into the docker shell I can run `nvidia-smi` and I can see the GPU, but it always says "No running processes found" even if I'm actively generating an image. 

 

# nvidia-smi
Tue Apr 30 09:36:46 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.78                 Driver Version: 550.78         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1070 Ti     Off |   00000000:01:00.0 Off |                  N/A |
|  0%   37C    P0             32W /  180W |       0MiB /   8192MiB |      4%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

 

Though the docker technically works, it appears to be running on CPU only, so it is painfully slow. Any ideas? I've passed GPUs through to many dockers, but usually once they show up in the nvidia-smi results, I'm good to go. I'm not sure how to troubleshoot the GPU not being used even though it is available.
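
For what it's worth, one way to check from the container console whether PyTorch itself sees the card (the venv path below is an assumption; activate whichever venv the chosen UI actually uses):

source /config/02-sd-webui/venv/bin/activate   # adjust to your install's venv
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"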

 

Thanks in advance!

Link to comment

Hello, I'm very much a beginner and haven't used unraid in years, but I wanted to try setting up SD on it. However, I can't access the WebUI, most likely because I have incorrect PUID and PGID values.

 

I don't run rootless containers via podman, so I tried finding out the PUID and PGID of the user I created. Using "id -u" or "id -g" only returned 0, since I'm logged in as root. However, running "id 'my username'" gave me this: "uid=1000(my username) gid=100(users) groups=100(users)". So I tried entering PUID = 1000 and PGID = 100, but with no success.

So I'm wondering how to find out the PGID and PUID of my user (in case I did it wrong). And if I did do it right, I'm curious why I still can't connect to the Web UI.
 

*In the pic I use PUID 99, as I wanted to try the default value.*

 

image.thumb.png.b790e0763414c6f576dbfb0a1fba7dfd.png

Link to comment
