[Guide] InvokeAI: A Stable Diffusion Toolkit - Docker



22 hours ago, mickr777 said:

What image are you on, main or v2.3? Also, what errors are showing for you?

 

Currently, while the migration to nodes is in progress, I have locked the main image to the prenodes tag, so it shouldn't be getting any updates until the migration has finished; anything they do upstream should not affect it.

 

As for the current main branch on GitHub, I have a working image for it, but lots still doesn't work: the UI needs to be used in dev mode, and things change so much with every update that I didn't think anyone on main wants that yet.

 

I am on the main branch. Obviously I get a few hiccups here and there, but their GitHub is fairly responsive at fixing stuff, and it has usually been relatively stable for me. I'm now getting this in my log, with an inability to load into the webui:
 

`[21-05-2023 21:18:02]::[InvokeAI]::INFO --> Patchmatch initialized
[21-05-2023 21:18:03]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[21-05-2023 21:18:03]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[21-05-2023 21:18:03]::[InvokeAI]::INFO --> GFPGAN Initialized
[21-05-2023 21:18:03]::[InvokeAI]::INFO --> CodeFormer Initialized
[21-05-2023 21:18:03]::[InvokeAI]::INFO --> Face restoration initialized
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> Patchmatch initialized
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> GFPGAN Initialized
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> CodeFormer Initialized
[21-05-2023 21:19:17]::[InvokeAI]::INFO --> Face restoration initialized
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> Patchmatch initialized
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> GFPGAN Initialized
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> CodeFormer Initialized
[21-05-2023 21:19:48]::[InvokeAI]::INFO --> Face restoration initialized
From https://github.com/invoke-ai/InvokeAI
ff0e79f..650d69e main -> origin/main
02d2cbc..23abaae feat/images-service -> origin/feat/images-service
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> Patchmatch initialized
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> GFPGAN Initialized
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> CodeFormer Initialized
[22-05-2023 07:45:17]::[InvokeAI]::INFO --> Face restoration initialized
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> Patchmatch initialized
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> GFPGAN Initialized
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> CodeFormer Initialized
[22-05-2023 07:55:55]::[InvokeAI]::INFO --> Face restoration initialized
From https://github.com/invoke-ai/InvokeAI
0ce628b..fab5df9 v2.3 -> origin/v2.3
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> Patchmatch initialized
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> InvokeAI, version 3.0.0+a0
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> InvokeAI runtime directory is "/home/invokeuser/userfiles"
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> GFPGAN Initialized
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> CodeFormer Initialized
[22-05-2023 07:57:49]::[InvokeAI]::INFO --> Face restoration initialized
Downloading sd-inpainting-1.5:
Downloading stable-diffusion-2.1:
** models.yaml exists. Renaming to models.yaml.orig
Successfully created new configuration file /home/invokeuser/userfiles/configs/models.yaml

** INVOKEAI INSTALLATION SUCCESSFUL **
If you installed manually from source or with 'pip install': activate the virtual environment
then run one of the following commands to start InvokeAI.

Web UI:
invokeai-web

Command-line client:
invokeai

If you installed using an installation script, run:
/home/invokeuser/userfiles/invoke.sh

Add the '--help' argument to see all of the command-line switches available for use.

Loading InvokeAI WebUI.....
invoke> Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
invoke> Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
invoke> Checking if The Git Repo Has Changed....
Updates Found, Updating the local Files....
Updating ff0e79f..650d69e
Fast-forward
invokeai/frontend/web/src/app/components/InvokeAIUI.tsx | 11 +++++++++--
.../src/features/gallery/components/CurrentImageButtons.tsx | 8 +++++++-
.../web/src/features/gallery/components/HoverableImage.tsx | 8 +++++++-
invokeai/frontend/web/src/features/gallery/store/actions.ts | 4 ++++
4 files changed, 27 insertions(+), 4 deletions(-)
Loading InvokeAI WebUI.....
invoke> Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
invoke> Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
invoke> `



Seems like it's getting stuck on VAEs or the install again. I was under the impression that something in the actual main branch changed, and their view was that we're possibly loading the legacy web server, which they shut down around the same time this stopped working for me (a few days ago, as I understand it).


Sometimes I am out of my league talking about this stuff, so sorry if I am using any terms incorrectly/confusingly!

Edited by ShadowUnraidLegends
1 hour ago, ShadowUnraidLegends said:

 

I am on the main branch... I'm now getting this in my log, with an inability to load into the webui. Seems like it's getting stuck on VAEs or the install again, and their view was that we're possibly loading the legacy web server, which they shut down around the same time this stopped working for me.

OK, that's why. Yes, all of the legacy web server has been removed from the main branch on GitHub, but the nodes web server isn't fully ready for public use (there are still lots of changes happening). Currently the frontend needs to be built with yarn on each start and runs on port 5173, the backend needs to be run separately, and the whole user configuration file has changed.
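(For reference, a typical dev-mode workflow of that kind would look roughly like the commands below. This is only a sketch of a standard yarn/Vite setup and may not be exactly what mickr777 did; the frontend path comes from the diff earlier in the thread, the invokeai-web entry point from the install output above, and 5173 is Vite's default dev port.)

# frontend dev server (Vite, serves on port 5173 by default)
cd InvokeAI/invokeai/frontend/web
yarn install
yarn dev --host

# backend, run separately in another shell
invokeai-web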

 

So you need to git clone -b prenodes if you still want a working install (or use the main dockerhub image in the first post).
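That is, roughly (the repo URL is the one shown in the fetch output in the logs above):

git clone -b prenodes https://github.com/invoke-ai/InvokeAI.git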

 

I have a docker image that loads the nodes UI, but this will all change before the public release of InvokeAI v3.0.

 

I can post what I had to do to get the nodes UI working currently, when I'm back in the office, if you want to see it.

Edited by mickr777

Added the Main docker for anyone wanting to migrate from prenodes to the v3.0 beta.

(This docker will be the default one for updates; I did it this way so as not to force everyone on prenodes to do the update.)

 

If you wish to do this migration, go to the main post for instructions.

 

DO NOT do this if you don't want to do the migration to the v3.0 beta yet.

Edited by mickr777

I am getting this error. I am using the full manual install; I tried many times and it always shows this error. Can you help me check it?

 

Checking if The Git Repo Has Changed....
remote: Enumerating objects: 39, done.
remote: Counting objects: 100% (39/39), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 39 (delta 28), reused 39 (delta 28), pack-reused 0
Unpacking objects: 100% (39/39), 7.85 KiB | 618.00 KiB/s, done.
From https://github.com/invoke-ai/InvokeAI
   9fb0b0959..ccbfa5d86  sdxl-support -> origin/sdxl-support
Local Files Are Up to Date
cp: cannot stat '/home/invokeuser//invokeai.yaml': No such file or directory
[2023-07-16 07:49:31,847]::[InvokeAI]::INFO --> Patchmatch initialized
INFO:     Started server process [71]
INFO:     Waiting for application startup.
[2023-07-16 07:49:32,396]::[InvokeAI]::DEBUG --> InvokeAI version 3.0.0+b5
[2023-07-16 07:49:32,396]::[InvokeAI]::DEBUG --> Internet connectivity is True
[2023-07-16 07:49:32,765]::[InvokeAI]::DEBUG --> config file=/home/invokeuser/invokeai/configs/models.yaml
[2023-07-16 07:49:32,765]::[InvokeAI]::DEBUG --> GPU device = cuda
[2023-07-16 07:49:32,765]::[InvokeAI]::DEBUG --> Maximum RAM cache size: 6.0 GiB
[2023-07-16 07:49:32,765]::[InvokeAI]::WARNING --> The file /home/invokeuser/invokeai/configs/models.yaml was not found. Initializing a new file
ERROR:    Traceback (most recent call last):

  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 671, in lifespan
    async with self.lifespan_context(app):
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 566, in __aenter__
    await self._router.startup()
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 648, in startup
    await handler()
  File "/home/invokeuser/InvokeAI/invokeai/app/api_app.py", line 77, in startup_event
    ApiDependencies.initialize(
  File "/home/invokeuser/InvokeAI/invokeai/app/api/dependencies.py", line 120, in initialize
    model_manager=ModelManagerService(config,logger),
  File "/home/invokeuser/InvokeAI/invokeai/app/services/model_manager_service.py", line 325, in __init__
    self.mgr = ModelManager(
  File "/home/invokeuser/InvokeAI/invokeai/backend/model_management/model_manager.py", line 317, in __init__
    self.initialize_model_config(self.config_path)
  File "/home/invokeuser/InvokeAI/invokeai/backend/model_management/model_manager.py", line 415, in initialize_model_config
    with open(config_path,'w') as yaml_file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/invokeuser/invokeai/configs/models.yaml'

ERROR:    Application startup failed. Exiting.
Task was destroyed but it is pending!
task: <Task pending name='Task-3' coro=<FastAPIEventService.__dispatch_from_queue() done, defined at /home/invokeuser/InvokeAI/invokeai/app/api/events.py:33> wait_for=<Future pending cb=[Task.task_wakeup()]>>
 

1 hour ago, suiwaychin said:

I am getting this error. I am using the full manual install; I tried many times and it always shows this error...

FileNotFoundError: [Errno 2] No such file or directory: '/home/invokeuser/invokeai/configs/models.yaml'

ERROR:    Application startup failed. Exiting.
 

 

I am getting this same exact error.  I just tried installing this today.

 

Edited by Spiffy
19 minutes ago, suiwaychin said:

Are you sure we should put this in the userfiles folder? But this error says it cannot find it in invokeai/configs:

 

 The file /home/invokeuser/invokeai/configs/models.yaml was not found. Initializing a new file

There shouldn't be any configs folder in the invokeai folder, but you can add the folder and models.yaml there and see if that helps.
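For example, roughly (the container name "InvokeAI" is an assumption for your setup; the path comes straight from the error above):

# create the missing configs folder and an empty models.yaml inside the container
docker exec InvokeAI mkdir -p /home/invokeuser/invokeai/configs
docker exec InvokeAI touch /home/invokeuser/invokeai/configs/models.yaml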

 

43 minutes ago, suiwaychin said:

I tried; it doesn't work either.

Found the issue (I think): grab the updated start.sh file info from the main post and delete the container (but untick the "delete image" tick box).

Recreate the container as per the main post.

 

Or use the Simplified Install for main, as I pushed an update to it.

 

(Either way, make sure to delete appdata/invokeai/venv/pyvenv.cfg and, if you have it, appdata/invokeai/userfiles/invokeai.init.)
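From the Unraid terminal that would look roughly like this (assuming the appdata share is at the default /mnt/user/appdata):

rm /mnt/user/appdata/invokeai/venv/pyvenv.cfg
rm -f /mnt/user/appdata/invokeai/userfiles/invokeai.init   # only if it exists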

Edited by mickr777
5 hours ago, mickr777 said:

Found the issue (I think): grab the updated start.sh file from the main post and recreate the container, or use the Simplified Install for main... either way, make sure to delete appdata/invokeai/venv/pyvenv.cfg and, if you have it, appdata/invokeai/userfiles/invokeai.init.

Thank you for the help, it works now. It had been troubling me for the past few days 😂

4 hours ago, wes.crockett said:

To go from 3.0 a0 to 3.0 release, do we need to delete the container, image, and directories?

If you're on the manual setup, you should only have to delete and rebuild the container with the new files. If you're on the Docker Hub image, remove the container and delete the image, then add the docker back but change the repo to mickr777/invokeai_unraid_main and start it; it should migrate everything for you 🤞 (minus your outputs folder, which needs to be backed up, as it will not show in the new version).
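(From the command line, the equivalent image change would be roughly the pull below; on Unraid you would normally just edit the Repository field in the container template, and the :latest tag is an assumption.)

docker pull mickr777/invokeai_unraid_main:latest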

Edited by mickr777

Hey Buddy

 

Just wanted to say thank you for creating this!

 

I have just installed as per your guide, although for a while I couldn't figure out why my docker was filling up. After a few spaceinvader docker videos (this guy is a rock star) I was able to complete the install and launch the container... Woot Woot!

 

I haven't generated anything yet; just wondering how to confirm the GPU is being used, as I have an Nvidia GeForce RTX 2070.

 

Also, are there any good guides for using prompts, including negative prompts, and which model to use? I am a total noob, so I'm still finding my way around.

 

Again thank you for this !

 

PEACE

Kosti

35 minutes ago, Kosti said:

Just wanted to say thank you for creating this! ... I haven't generated anything yet; just wondering how to confirm the GPU is being used, as I have an Nvidia GeForce RTX 2070. Also, are there any good guides for using prompts, including negative prompts, and which model to use?

https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#how-dynamic-prompts-work

That might help

 

If you have Discord, you should join the InvokeAI server:

https://discord.gg/ZmtBAhwWhy

 

Also, if it is using the GPU, you should see it on startup in your docker logs:

[Screenshot: docker log output showing the GPU being detected]
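(To double-check from the command line as well, something like the following should work; the container name "InvokeAI" is an assumption, nvidia-smi is only available if the NVIDIA runtime is passed through, and the venv path matches the logs in this thread.)

# is the GPU visible inside the container?
docker exec InvokeAI nvidia-smi

# can PyTorch inside the container see CUDA?
docker exec InvokeAI /home/invokeuser/venv/bin/python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no CUDA device')"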

Edited by mickr777
13 hours ago, mickr777 said:

https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#how-dynamic-prompts-work ... Also, if it is using the GPU, you should see it on startup in your docker logs.

 

Great, thanks.

 

Yep, seems it's detected:

 

[2023-07-26 21:18:04,801]::[InvokeAI]::INFO --> InvokeAI version 3.0.0
[2023-07-26 21:18:04,801]::[InvokeAI]::INFO --> Root directory = /home/invokeuser/userfiles
[2023-07-26 21:18:04,801]::[InvokeAI]::DEBUG --> Internet connectivity is True
[2023-07-26 21:18:05,101]::[InvokeAI]::DEBUG --> Config file=/home/invokeuser/userfiles/configs/models.yaml
[2023-07-26 21:18:05,101]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 2070
[2023-07-26 21:18:05,101]::[InvokeAI]::DEBUG --> Maximum RAM cache size: 6.0 GiB
[2023-07-26 21:18:05,104]::[InvokeAI]::INFO --> Scanning /home/invokeuser/userfiles/models for new models
[2023-07-26 21:18:05,303]::[InvokeAI]::INFO --> Scanned 0 files and directories, imported 0 models
[2023-07-26 21:18:05,303]::[InvokeAI]::INFO --> Model manager service initialized
INFO:     Application startup complete.

 


Fix for the UI not loading: I forced the docker to use the dev WebUI.

 

If you are using the Docker Hub image:

Click the InvokeAI container and click Update.

Once updated, right-click the container and go to Edit.

 

Change this line to the following:

[Screenshot: the container template setting to change]

 

Next, find the WebUI port, click Edit, and change it to the following:

[Screenshot: the WebUI port setting]

 

To access the webui you now have to use yourunraidip:5173.

 

If you used the manual install:

I have updated the files in the main post; you will need to rebuild your image and container with them.

Once built, change the ports the same as above.

 

Also, if you get "startup aborted":

There was an issue where the needed models were not downloaded. To force a download, delete the file /invokeai/venv/pyvenv.cfg and rerun the docker; this will force the download of the missing models.

Edited by mickr777

Sorry, I tried to update to the latest one, but it failed as follows; can you help check?

 

 

Local Files Are Up to Date
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/home/invokeuser/venv/lib/python3.10/site-packages/patchmatch".
[2023-07-27 13:03:50,720]::[InvokeAI]::INFO --> Patchmatch initialized
/home/invokeuser/venv/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(

An exception has occurred: /home/invokeuser/userfiles/models/core/convert/CLIP-ViT-bigG-14-laion2B-39B-b160k is missing
== STARTUP ABORTED ==
** One or more necessary files is missing from your InvokeAI root directory **
** Please rerun the configuration script to fix this problem. **
** From the launcher, selection option [7]. **
** From the command line, activate the virtual environment and run "invokeai-configure --yes --skip-sd-weights" **
Press any key to continue...Checking if The Git Repo Has Changed....
Local Files Are Up to Date
[2023-07-27 13:33:30,957]::[InvokeAI]::INFO --> Patchmatch initialized
/home/invokeuser/venv/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(

An exception has occurred: /home/invokeuser/userfiles/models/core/convert/CLIP-ViT-bigG-14-laion2B-39B-b160k is missing
== STARTUP ABORTED ==
** One or more necessary files is missing from your InvokeAI root directory **
** Please rerun the configuration script to fix this problem. **
** From the launcher, selection option [7]. **
** From the command line, activate the virtual environment and run "invokeai-configure --yes --skip-sd-weights" **
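(For reference, the configure step this error message suggests could be run inside the container with roughly the following; the container name "InvokeAI" is an assumption, and the venv path is taken from the log above.)

docker exec -it InvokeAI /home/invokeuser/venv/bin/invokeai-configure --yes --skip-sd-weights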

 

18 minutes ago, suiwaychin said:

Sorry, I tried to update to the latest one, but it failed as follows...

An exception has occurred: /home/invokeuser/userfiles/models/core/convert/CLIP-ViT-bigG-14-laion2B-39B-b160k is missing
== STARTUP ABORTED ==

 

There was an issue where the needed models were not downloaded. To force a download, delete only the file appdata/invokeai/venv/pyvenv.cfg (in Unraid) and rerun the docker; this will force the download of the missing models.

Edited by mickr777
  • 4 weeks later...

Thanks for this, OP, much appreciated.

 

I'm getting the following error when clicking "Invoke" for the first time after the container successfully starts.

 

[2023-08-23 15:52:01,758]::[uvicorn.access]::INFO --> 192.168.0.165:51221 - "POST /api/v1/sessions/ HTTP/1.1" 200
[2023-08-23 15:52:01,778]::[uvicorn.error]::DEBUG --> < TEXT '42["unsubscribe",{"session":"4cdb7088-dedc-4b12-98b0-9ffddac5428c"}]' [68 bytes]
[2023-08-23 15:52:01,781]::[uvicorn.error]::DEBUG --> < TEXT '42["subscribe",{"session":"eb903a5d-0b69-46d3-9054-1104f5818247"}]' [66 bytes]
[2023-08-23 15:52:01,797]::[uvicorn.access]::INFO --> 192.168.0.165:51221 - "PUT /api/v1/sessions/eb903a5d-0b69-46d3-9054-1104f5818247/invoke?all=true HTTP/1.1" 500
[2023-08-23 15:52:01,798]::[uvicorn.error]::ERROR --> Exception in ASGI application

Traceback (most recent call last):
  File "/home/invokeuser/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/invokeuser/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi_events/middleware.py", line 43, in __call__
    await self.app(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi/routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "/home/invokeuser/venv/lib/python3.10/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/invokeuser/InvokeAI/invokeai/app/api/routers/sessions.py", line 263, in invoke_session
    ApiDependencies.invoker.invoke(session, invoke_all=all)
  File "/home/invokeuser/InvokeAI/invokeai/app/services/invoker.py", line 25, in invoke
    invocation = graph_execution_state.next()
  File "/home/invokeuser/InvokeAI/invokeai/app/services/graph.py", line 772, in next
    prepared_id = self._prepare()
  File "/home/invokeuser/InvokeAI/invokeai/app/services/graph.py", line 960, in _prepare
    create_results = self._create_execution_node(next_node_id, iteration_mappings)  # type: ignore
  File "/home/invokeuser/InvokeAI/invokeai/app/services/graph.py", line 875, in _create_execution_node
    self.execution_graph.add_edge(new_edge)
  File "/home/invokeuser/InvokeAI/invokeai/app/services/graph.py", line 302, in add_edge
    self._validate_edge(edge)
  File "/home/invokeuser/InvokeAI/invokeai/app/services/graph.py", line 391, in _validate_edge
    raise InvalidEdgeError(
invokeai.app.services.graph.InvalidEdgeError: Fields are incompatible: cannot connect 143a9a08-8a5e-4311-b3fe-d72618510e4f.a to 81e29919-cb5f-4e1b-98d3-cc8284e34ee6.start

 

If anyone has any idea how to resolve this, I would appreciate your help very much. Cheers.

53 minutes ago, wildfyre said:

I'm getting the following error when clicking "Invoke" for the first time after the container successfully starts...

invokeai.app.services.graph.InvalidEdgeError: Fields are incompatible: cannot connect 143a9a08-8a5e-4311-b3fe-d72618510e4f.a to 81e29919-cb5f-4e1b-98d3-cc8284e34ee6.start

If anyone has any idea how to resolve this, I would appreciate your help very much.

 

Friends, I figured it out. The console output says this 

[2023-08-23 16:50:41,249]::[uvicorn.error]::INFO --> Uvicorn running on http://0.0.0.0:9090 (Press CTRL+C to quit)

So I pointed to http://<my-docker-IP>:9090 and tried to use the app there.

Do not use 9090, use 5173 instead, as configured in the docker template.
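A quick way to sanity-check which port answers what, from any machine on the LAN (the IP is just a placeholder):

curl -I http://<your-unraid-ip>:9090/    # backend API served by uvicorn
curl -I http://<your-unraid-ip>:5173/    # web UI port set in the docker template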
