[Guide] InvokeAI: A Stable Diffusion Toolkit - Docker



13 hours ago, mickr777 said:

No problem, it can be called whatever you like anyway, as long as it's the same in the XML.

 

I got it working and it's great. I'm using an old GTX 970 with 4GB VRAM and image generation works fine. Inpainting gives an "out of VRAM" error, FYI.

 

However, I've been trying for hours to get the "Waifu Diffusion" model with the kl-f8-anime VAE loaded, but I can't seem to do it. It shows up in the WebUI, and when I press "Load" it seems to do something and then says "New model loaded", but it didn't actually load the Waifu Diffusion model. Also, when I restart my container it says this in the logs:

 

>> Scanning Model: Waifu Diffusion
>> Model scanned ok!
>> Loading Waifu Diffusion from /userfiles/models/ldm/waifu-diffusion-v1-4/wd-v1-3-float32.ckpt
   | Forcing garbage collection prior to loading new model
** model Waifu Diffusion could not be loaded: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'

 

I tried different versions and different VAE files, but I cannot get it to work. Do you know how I'd do this? I've had another Stable Diffusion container running before and I did manage to get Waifu Diffusion working, but that container broke completely, which is why I set a new one up with your tutorial.

6 hours ago, YourNightmar3 said:

I got it working and it's great. I'm using an old GTX 970 with 4GB VRAM and image generation works fine. Inpainting gives an "out of VRAM" error, FYI.

Inpainting is a little more hungry on VRAM than txt2img; I think you need at least 6GB of VRAM at the moment.

 

6 hours ago, YourNightmar3 said:

However, I've been trying for hours to get the "Waifu Diffusion" model with the kl-f8-anime VAE loaded, but I can't seem to do it. It shows up in the WebUI, and when I press "Load" it seems to do something and then says "New model loaded", but it didn't actually load the Waifu Diffusion model. Also, when I restart my container it says this in the logs:

 

>> Scanning Model: Waifu Diffusion
>> Model scanned ok!
>> Loading Waifu Diffusion from /userfiles/models/ldm/waifu-diffusion-v1-4/wd-v1-3-float32.ckpt
   | Forcing garbage collection prior to loading new model
** model Waifu Diffusion could not be loaded: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'

 

I tried different versions and different VAE files, but I cannot get it to work. Do you know how I'd do this? I've had another Stable Diffusion container running before and I did manage to get Waifu Diffusion working, but that container broke completely, which is why I set a new one up with your tutorial.

I haven't had that issue myself, but I have the diffusers version of Waifu Diffusion running.

 

You can try the diffusers version by manually adding this to your /userfiles/config/models.yaml; then, on first switch to it in the WebUI, it will download the required files:

waifu-diffusion-v1.4:
  description: has been conditioned on high-quality anime images through fine-tuning
  repo_id: hakurei/waifu-diffusion
  format: diffusers
  width: 512
  height: 512
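If you'd rather do it from a shell, appending the entry can be sketched like this. models.yaml lives at /userfiles/config/models.yaml inside the container; the host-side location depends on your volume mapping, so the default below is only a placeholder assumption:

```shell
# Append the diffusers entry for Waifu Diffusion to models.yaml.
# Inside the container the file is /userfiles/config/models.yaml;
# set YAML to wherever that path is mapped on your host.
YAML="${YAML:-models.yaml}"      # placeholder default for this sketch
cat >> "$YAML" <<'EOF'
waifu-diffusion-v1.4:
  description: has been conditioned on high-quality anime images through fine-tuning
  repo_id: hakurei/waifu-diffusion
  format: diffusers
  width: 512
  height: 512
EOF
```

After restarting the container, the new entry should show up in the WebUI model list and the weights will be fetched on first switch.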

 

 

23 hours ago, mickr777 said:

I haven't had that issue myself, but I have the diffusers version of Waifu Diffusion running.

 

You can try the diffusers version by manually adding this to your /userfiles/config/models.yaml; then, on first switch to it in the WebUI, it will download the required files:

waifu-diffusion-v1.4:
  description: has been conditioned on high-quality anime images through fine-tuning
  repo_id: hakurei/waifu-diffusion
  format: diffusers
  width: 512
  height: 512

 

That works great, thank you! What exactly is the difference between the diffusers version and the other version? I feel like my results with this are a little worse than the examples on their GitHub.

21 minutes ago, YourNightmar3 said:

That works great, thank you! What exactly is the difference between the diffusers version and the other version?

Diffusers are meant to be more streamlined in the backend and easier to train, and it seems to be the way everything is going.

 

21 minutes ago, YourNightmar3 said:

I feel like my results with this are a little worse than the examples on their GitHub.

Never trust the example images on any model, as they always pick the best examples to show.

 

Also look into negative prompting: after your normal prompt, add text inside [ ], e.g. [cropped, worst quality, low quality, artifacts, blurry, bad hands]. Everything that goes in the negative prompt [ ] it tries not to add to the image.

 

I found Anything v3.0 is better at anime (but it's best to use a negative prompt of [nude], as it can be a little NSFW):

anythingv-3.0:
  description: Anything V3 - a latent diffusion model for anime
  repo_id: Linaqruf/anything-v3.0
  format: diffusers
  width: 512
  height: 512

 

1 hour ago, mickr777 said:

I found Anything v3.0 is better at anime (but it's best to use a negative prompt of [nude], as it can be a little NSFW):

anythingv-3.0:
  description: Anything V3 - a latent diffusion model for anime
  repo_id: Linaqruf/anything-v3.0
  format: diffusers
  width: 512
  height: 512

 

 

Thanks, it works very well indeed! Do you know if this version of the container has a web API that you can call to generate images and get the result returned? Or if there is one like that? I've been looking for a self-hosted InvokeAI web API like this for a little while to use in some hobby projects. I can't really find much information about it on Google.

3 hours ago, YourNightmar3 said:

 

Thanks, it works very well indeed! Do you know if this version of the container has a web API that you can call to generate images and get the result returned? Or if there is one like that? I've been looking for a self-hosted InvokeAI web API like this for a little while to use in some hobby projects. I can't really find much information about it on Google.

I'm not sure, I haven't focused on that side of it, but you can ask here: https://github.com/invoke-ai/InvokeAI/discussions


Hello! I have this issue:

Traceback (most recent call last):
  File "/InvokeAI/scripts/invoke.py", line 3, in <module>
    import ldm.invoke.CLI
ModuleNotFoundError: No module named 'ldm'

 

50 minutes ago, Draco1544 said:

Hello! I have this issue:

Traceback (most recent call last):
  File "/InvokeAI/scripts/invoke.py", line 3, in <module>
    import ldm.invoke.CLI
ModuleNotFoundError: No module named 'ldm'

 

Sounds like something didn't get downloaded correctly. Try deleting the pyvenv.cfg file under the appdata/invokeai/venv/ folder and rerunning the container.
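From a host shell, the delete-and-restart step looks something like this; the appdata path and container name below are assumptions, so adjust them to your setup:

```shell
# Deleting pyvenv.cfg forces the Python virtual environment to be
# rebuilt the next time the container starts.
VENV=/mnt/user/appdata/invokeai/venv   # assumed appdata path; adjust to yours
rm -f "$VENV/pyvenv.cfg"
# Then restart the container from the Unraid UI, or e.g.:
# docker restart InvokeAI               # container name is an assumption
```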

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker (updated to Support v2.3 Update of InvokeAI)
15 hours ago, Elvin said:

Unfortunately this did not work for me.

I'm very sorry, I just realised I had changed the guide by mistake to a test build of the Dockerfile, start.sh and my-invoke.xml I was playing with. I have fixed this; please retry following the guide now.


Hey! Glad to see this thread is kept up to date. My install broke a month or two ago and I was kind of tired of fighting it... that said, I trashed everything today and started working to get it re-set up. I am encountering the following:

 

** An error occurred while attempting to initialize the model: "[Errno 2] No such file or directory: '/userfiles/configs/models.yaml'"
** This can be caused by a missing or corrupted models file, and can sometimes be fixed by (re)installing the models.
Do you want to run invokeai-configure script to select and/or reinstall models? [y] 

 

I saw another post that mentioned this; they mentioned creating the folders and restarting the container, which did not work. I also validated that my Hugging Face token was put in.

 

Here are the contents of the ./userfiles directory

 

[screenshot: contents of the ./userfiles directory]

 

Any thoughts on approach?

1 hour ago, wes.crockett said:

Hey! Glad to see this thread is kept up to date. My install broke a month or two ago and I was kind of tired of fighting it... that said, I trashed everything today and started working to get it re-set up. I am encountering the following:

 

** An error occurred while attempting to initialize the model: "[Errno 2] No such file or directory: '/userfiles/configs/models.yaml'"
** This can be caused by a missing or corrupted models file, and can sometimes be fixed by (re)installing the models.
Do you want to run invokeai-configure script to select and/or reinstall models? [y] 

 

I saw another post that mentioned this; they mentioned creating the folders and restarting the container, which did not work. I also validated that my Hugging Face token was put in.

 

Here are the contents of the ./userfiles directory

 

[screenshot: contents of the ./userfiles directory]

 

Any thoughts on approach?

Yes, I ran into this the other day; the config folder is being made in the root folder of the Docker container and not in the userfiles folder (ebr from InvokeAI is looking into it).

 

The temp fix was to add export INVOKEAI_ROOT=/userfiles to start.sh before the "invokeai-configure --root="/userfiles/" --yes" line.

 

I have fixed the start.sh script in the main post; if you delete the /venv/pyvenv.cfg file and rebuild the Docker container with the new start.sh change, it should work and rebuild the environment.
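The relevant section of start.sh then ends up looking roughly like this (a sketch; the configure line is shown commented here because the full script lives in the main post):

```shell
# Export the runtime root so InvokeAI creates its config folder under
# /userfiles (the mapped volume) instead of the container's root folder.
export INVOKEAI_ROOT=/userfiles
# The existing configure line follows it unchanged:
# invokeai-configure --root="/userfiles/" --yes
```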

4 hours ago, mickr777 said:

Yes, I ran into this the other day; the config folder is being made in the root folder of the Docker container and not in the userfiles folder (ebr from InvokeAI is looking into it).

 

The temp fix was to add export INVOKEAI_ROOT=/userfiles to start.sh before the "invokeai-configure --root="/userfiles/" --yes" line.

 

I have fixed the start.sh script in the main post; if you delete the /venv/pyvenv.cfg file and rebuild the Docker container with the new start.sh change, it should work and rebuild the environment.

That did the trick. Thanks!

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker

Due to a major source tree restructure in the InvokeAI Git repository, the main start.sh script had to be changed.

 

If you did a manual install, the changes are in the main post under the start.sh code.

If you are using the Docker repository install, I have pushed the changes already; you should get an update notice.

On 3/3/2023 at 5:48 PM, mickr777 said:

Due to a major source tree restructure in the InvokeAI Git repository, the main start.sh script had to be changed.

 

If you did a manual install, the changes are in the main post under the start.sh code.

If you are using the Docker repository install, I have pushed the changes already; you should get an update notice.

Hi mickr777,

 

Thanks for maintaining this image.  Seeing this after the latest update:

 

>> Initialization file /home/invokeuser/userfiles/invokeai.init found. Loading...
* Initializing, be patient...
>> Internet connectivity is True
>> InvokeAI, version 3.0.0+a0
>> InvokeAI runtime directory is "/home/invokeuser/userfiles"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> xformers memory-efficient attention is available and enabled
>> NSFW checker is disabled
>> Current VRAM usage:  0.00G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
  | Using faster float16 precision
  | Loading diffusers VAE from stabilityai/sd-vae-ft-mse
Fetching 15 files: 100%|██████████| 15/15 [00:00<00:00, 173318.35it/s]
  | Default image dimensions = 512 x 512
>> Model loaded in 6.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Loading embeddings from /home/invokeuser/userfiles/embeddings
>> Textual inversion triggers:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)

* --web was specified, starting web server...
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/invokeuser/venv/bin/invokeai:8 in <module>                             │
│                                                                              │
│   5 from invokeai.frontend.CLI import invokeai_command_line_interface        │
│   6 if __name__ == '__main__':                                               │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])     │
│ ❱ 8 │   sys.exit(invokeai_command_line_interface())                          │
│   9                                                                          │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/frontend/CLI/CLI.py:170 in main           │
│                                                                              │
│    167 │                                                                     │
│    168 │   # web server loops forever                                        │
│    169 │   if opt.web or opt.gui:                                            │
│ ❱  170 │   │   invoke_ai_web_server_loop(gen, gfpgan, codeformer, esrgan)    │
│    171 │   │   sys.exit(0)                                                   │
│    172 │                                                                     │
│    173 │   if not infile:                                                    │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/frontend/CLI/CLI.py:1027 in               │
│ invoke_ai_web_server_loop                                                    │
│                                                                              │
│   1024                                                                       │
│   1025 def invoke_ai_web_server_loop(gen: Generate, gfpgan, codeformer, esrg │
│   1026 │   print("\n* --web was specified, starting web server...")          │
│ ❱ 1027 │   from invokeai.backend.web import InvokeAIWebServer                │
│   1028 │                                                                     │
│   1029 │   # Change working directory to the stable-diffusion directory      │
│   1030 │   os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__),  │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/backend/web/__init__.py:4 in <module>     │
│                                                                              │
│   1 """                                                                      │
│   2 Initialization file for the web backend.                                 │
│   3 """                                                                      │
│ ❱ 4 from .invoke_ai_web_server import InvokeAIWebServer                      │
│   5                                                                          │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/backend/web/invoke_ai_web_server.py:22 in │
│ <module>                                                                     │
│                                                                              │
│     19 from PIL.Image import Image as ImageType                              │
│     20 from werkzeug.utils import secure_filename                            │
│     21                                                                       │
│ ❱   22 import invokeai.frontend.web.dist as frontend                         │
│     23                                                                       │
│     24 from .. import Generate                                               │
│     25 from ..args import APP_ID, APP_VERSION, Args, calculate_init_img_hash │
╰──────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'invokeai.frontend.web'

 

4 hours ago, neopterygii said:

Hi mickr777,

 

Thanks for maintaining this image.  Seeing this after the latest update:

 

ModuleNotFoundError: No module named 'invokeai.frontend.web'

 

Make sure you have rebuilt the container with the new changes in the main post, and try deleting everything in the invokeai/invokeai folder (leave userfiles and venv alone) and allowing it to git clone that folder again.

 

There is another issue that I am looking into: on start it's asking for user input to move the models from the diffusers folder to hub, but this is stopping the container from starting.

32 minutes ago, mickr777 said:

Make sure you have rebuilt the container with the new changes in the main post, and try deleting everything in the invokeai/invokeai folder (leave userfiles and venv alone) and allowing it to git clone that folder again.

Thanks, it looks like somewhere along the way the InvokeAI folder permissions changed to root:

 

Cloning into 'InvokeAI'...
/home/invokeuser/InvokeAI/.git: Permission denied

 

looks like it's loading properly after adjusting them
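For anyone hitting the same permission problem, resetting ownership can be sketched like this; the fix_owner helper, the 99:100 uid:gid (Unraid's usual default) and the appdata path are all assumptions, so check what user your container actually runs as:

```shell
# Recursively reset ownership on a directory, skipping quietly if it
# does not exist. Helper name and defaults are illustrative only.
fix_owner() {
  [ -d "$2" ] || return 0
  chown -R "$1" "$2"
}
fix_owner 99:100 /mnt/user/appdata/invokeai   # uid:gid and path are assumptions
```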


If you get this and the Docker container stops:

>> ALERT:
>> The location of your previously-installed diffusers models needs to move from
>> invokeai/models/diffusers to invokeai/models/hub due to a change introduced by
>> diffusers version 0.14. InvokeAI will now move all models from the "diffusers" directory
>> into "hub" and then remove the diffusers directory. This is a quick, safe, one-time
>> operation. However if you have customized either of these directories and need to
>> make adjustments, please press ctrl-C now to abort and relaunch InvokeAI when you are ready.
>> Otherwise press <enter> to continue.

 

You will need to manually move all the files in /userfiles/models/diffusers/ into /userfiles/models/hub/ and delete the diffusers folder; then it should load.

 

Once in the WebUI, use the model manager to change any models that were pointing to models/diffusers/ to now use models/hub/.
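With the container stopped, the one-time move can be sketched like this from a shell with access to the userfiles volume (the migrate_models helper is just for illustration):

```shell
# Move the contents of $1 into $2, then remove the now-empty $1.
migrate_models() {
  [ -d "$1" ] || return 0              # nothing to do if already migrated
  mkdir -p "$2"
  mv "$1"/* "$2"/ 2>/dev/null || true  # tolerate an empty source directory
  rmdir "$1"                           # only succeeds once the dir is empty
}
migrate_models /userfiles/models/diffusers /userfiles/models/hub
```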

16 hours ago, neopterygii said:

Thanks, it looks like somewhere along the way the InvokeAI folder permissions changed to root:

 

Cloning into 'InvokeAI'...
/home/invokeuser/InvokeAI/.git: Permission denied

 

looks like it's loading properly after adjusting them

 

Sorry to question this, but did you perhaps change the permissions to a different user, or simply edit them to be usable for everyone? I'm running into the same issue listed above by neopterygii.

 

Edit: I was wrong, my error is different. Posting what I'm seeing here:

 

>> Initialization file /home/invokeuser/userfiles/invokeai.init found. Loading...
Fetching 15 files: 100%|██████████| 15/15 [00:00<00:00, 27282.98it/s]
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/invokeuser/venv/bin/invokeai:8 in <module>                             │
│                                                                              │
│   5 from invokeai.frontend.CLI import invokeai_command_line_interface        │
│   6 if __name__ == '__main__':                                               │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])     │
│ ❱ 8 │   sys.exit(invokeai_command_line_interface())                          │
│   9                                                                          │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/frontend/CLI/CLI.py:170 in main           │
│                                                                              │
│    167 │                                                                     │
│    168 │   # web server loops forever                                        │
│    169 │   if opt.web or opt.gui:                                            │
│ ❱  170 │   │   invoke_ai_web_server_loop(gen, gfpgan, codeformer, esrgan)    │
│    171 │   │   sys.exit(0)                                                   │
│    172 │                                                                     │
│    173 │   if not infile:                                                    │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/frontend/CLI/CLI.py:1027 in               │
│ invoke_ai_web_server_loop                                                    │
│                                                                              │
│   1024                                                                       │
│   1025 def invoke_ai_web_server_loop(gen: Generate, gfpgan, codeformer, esrg │
│   1026 │   print("\n* --web was specified, starting web server...")          │
│ ❱ 1027 │   from invokeai.backend.web import InvokeAIWebServer                │
│   1028 │                                                                     │
│   1029 │   # Change working directory to the stable-diffusion directory      │
│   1030 │   os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__),  │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/backend/web/__init__.py:4 in <module>     │
│                                                                              │
│   1 """                                                                      │
│   2 Initialization file for the web backend.                                 │
│   3 """                                                                      │
│ ❱ 4 from .invoke_ai_web_server import InvokeAIWebServer                      │
│   5                                                                          │
│                                                                              │
│ /home/invokeuser/InvokeAI/invokeai/backend/web/invoke_ai_web_server.py:29 in │
│ <module>                                                                     │
│                                                                              │
│     26 from ..generator import infill_methods                                │
│     27 from ..globals import Globals, global_converted_ckpts_dir, global_mod │
│     28 from ..image_util import PngWriter, retrieve_metadata                 │
│ ❱   29 from ..model_management import merge_diffusion_models                 │
│     30 from ..prompting import (                                             │
│     31 │   get_prompt_structure,                                             │
│     32 │   get_tokenizer,                                                    │
╰──────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'merge_diffusion_models' from 
'invokeai.backend.model_management' 
(/home/invokeuser/InvokeAI/invokeai/backend/model_management/__init__.py)

 

I have updated the Docker build with the 1st method (non-manual) as of yesterday, and my models are listed in /userfiles/models/hub/.

8 hours ago, Loyotaemi said:

 

Sorry to question this, but did you perhaps change the permissions to a different user, or simply edit them to be usable for everyone? I'm running into the same issue listed above by neopterygii.

 

Edit: I was wrong, my error is different. Posting what I'm seeing here:

 

ImportError: cannot import name 'merge_diffusion_models' from 'invokeai.backend.model_management'

 

I have updated the Docker build with the 1st method (non-manual) as of yesterday, and my models are listed in /userfiles/models/hub/.

 

It appears to be an error in the InvokeAI code; I have reported the bug:

https://github.com/invoke-ai/InvokeAI/issues/2872

 

Looks like there is a PR:

https://github.com/invoke-ai/InvokeAI/pull/2876

Once the PR is merged, just start your InvokeAI Docker container and it should retrieve the change.

 

Edit: PR merged; all should be working again.
