[Guide] InvokeAI: A Stable Diffusion Toolkit - Docker



This is a simple unofficial Docker container for Unraid for InvokeAI: A Stable Diffusion Toolkit. Most data is stored in appdata and updates are pulled with git, so the base image rarely needs rebuilding. On start it checks git for updates, updates the Python venv if changes are needed, and automatically starts the web UI.

 

This Docker tracks the main branch of InvokeAI; since that branch is updated constantly, things may break every now and then.

 

Check out their hard work at https://github.com/invoke-ai/InvokeAI - Changelog

or join the InvokeAI discord https://discord.gg/ZmtBAhwWhy

 

https://github.com/invoke-ai/InvokeAI/releases

 

I only know a small amount about making Docker images, so this is probably not the canonical way (I worked it all out with Google), but it works for me and I thought others might find it useful.

 

(If you are interested, I am using the ASUS Phoenix GeForce RTX 3060 12GB. It gives good speed and VRAM for the price, with low power usage and a small size, together with the Nvidia driver plugin for Unraid.)

 

Make sure you agree with the CreativeML Responsible AI License before installing:

https://huggingface.co/spaces/CompVis/stable-diffusion-license

 

My opinion for the best experience (it may or may not run on other hardware):

  • Minimum requirements: an Nvidia GPU with at least 6 GB VRAM (Pascal-based or newer), 12 GB free system RAM, 20 GB storage space
  • Recommended requirements: an Nvidia GPU with 8 GB+ VRAM (Turing-based or newer), 16 GB+ free system RAM, 40 GB+ storage space
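
If you want a quick sanity check against the VRAM bullet above, you can compare what nvidia-smi reports (in MiB) to the 6 GB floor. This helper is only an illustration; the function name and message wording are mine, not part of InvokeAI or Unraid:

```shell
# Pass in the MiB value reported by:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
# (helper name and output text are illustrative only)
meets_vram_minimum() {
  local mib="${1:-0}"
  if [ "$mib" -ge 6144 ]; then
    echo "ok: ${mib} MiB meets the 6 GB minimum"
  else
    echo "warning: ${mib} MiB is below the 6 GB minimum"
  fi
}
```

Feed it the live value with `meets_vram_minimum "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)"` (assuming the Nvidia driver plugin is installed so nvidia-smi is available).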

 

There are now two options for creating the Docker container:

 

Option 1 - Simplified Install (using Docker Hub) - Main (recommended)

1. Building the Docker container

  • Make a file called my-invokeai.xml or my-invokeai_prenodes.xml in \config\plugins\dockerMan\templates-user (on your Unraid flash drive)
  • Add the text from one of the options below to it and save
  • In the Unraid GUI, go to the Docker tab, click "Add Container", and in the template dropdown select your user template "invokeai"
  • Change the port/host paths if required
  • Add your Hugging Face access token if you wish to have auto download of some models/concepts/diffusers

 

For the InvokeAI main branch (unstable, as it gets all updates):

<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI</Name>
  <Repository>mickr777/invokeai_unraid_main</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support>https://forums.unraid.net/topic/130913-guide-invokeai-a-stable-diffusion-toolkit-docker/</Support>
  <Project>https://github.com/invoke-ai/InvokeAI/</Project>
  <Overview>Simplified Docker with auto update for InvokeAI and Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:5173]/</WebUI>
  <TemplateURL/>
  <Icon>https://i.ibb.co/N2c008N/invokeai.png</Icon>
  <ExtraParams>--gpus all -it</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="InvokeAI" Target="/home/invokeuser/InvokeAI/" Default="/mnt/cache/appdata/invokeai/invokeai/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/invokeai/</Config>
  <Config Name="userfiles" Target="/home/invokeuser/userfiles/" Default="/mnt/cache/appdata/invokeai/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/userfiles/</Config>
  <Config Name="venv" Target="/home/invokeuser/venv/" Default="/mnt/cache/appdata/invokeai/venv/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/venv/</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto download recommended models please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"></Config>
  <Config Name="Webui Port" Target="5173" Default="5173" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">5173</Config>
  <Config Name="cache" Target="/home/invokeuser/.cache" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/invokeai/cache</Config>
</Container>

 

2. Last steps

  • After the container builds, on first run the Python venv will be created and some models/weights/diffusers preloaded (this can take a while and downloads 20 GB+ of data; open the Docker log for progress)
  • Once this is done, open any web browser and point it to [Your Unraid IP]:5173

 

---------------------------------------------------------------------------------------------

 

Option 2 - Full Manual Install - Main Branch (or if you just want to see what I did to get it working)

1. First we will create the needed files

  • Create a folder on one of the drives on your Unraid server (I named mine invokeai)
  • Inside that folder, create a text file called Dockerfile (make sure it has no .txt extension)
  • Add this to it and save:

 

FROM ubuntu:22.04

RUN apt-get update \
  && DEBIAN_FRONTEND="noninteractive" \
  apt-get install -y \
  git \
  dos2unix \
  python3-pip \
  python3-venv \
  libopencv-dev \
  sudo \
  && apt-get clean

RUN apt-get install -y curl sudo
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash - && sudo apt-get install -y nodejs
RUN npm install -g pnpm

RUN useradd --create-home -u 99 -g 100 invokeuser
WORKDIR /home/invokeuser
ADD invokeai.yaml .
ADD start.sh .
RUN dos2unix start.sh
RUN chmod +x start.sh
USER invokeuser

ENTRYPOINT ["/bin/bash", "start.sh"]

 

  • Then create a text file called start.sh
  • Add this to it and save:

 

#!/bin/bash

# sets a variable for the home directory
HOMEDIR="/home/invokeuser/"

# Checks if the git repo has been cloned and clones it if needed
if [ -f "$HOMEDIR/InvokeAI/pyproject.toml" ] ; then
    git config --global --add safe.directory InvokeAI
    cd $HOMEDIR/InvokeAI/
else
    echo "Cloning Git Repo into Local Folder..."
    git config --global --add safe.directory InvokeAI
    git clone -b main https://github.com/invoke-ai/InvokeAI.git InvokeAI
    cd $HOMEDIR/InvokeAI/
fi

# Updates to the main branch if on the pre-nodes tag
BRANCH="$(git rev-parse --abbrev-ref HEAD)"
if [[ "$BRANCH" = "pre-nodes" ]]; then
    git fetch
    git checkout main
    source $HOMEDIR/venv/bin/activate
    pip install --use-pep517 --no-cache-dir --upgrade -e .
    invokeai-configure --root="$HOMEDIR/userfiles/" --yes
fi

# Checks if Python Environment has been made and creates it if it has not
if [ -f "$HOMEDIR/venv/pyvenv.cfg" ] ; then
    source $HOMEDIR/venv/bin/activate
else
    echo "Creating Python Environment...."
    python3  -m venv $HOMEDIR/venv/
    source $HOMEDIR/venv/bin/activate
    pip install --use-pep517 --no-cache-dir -e .
    echo "Preloading Important Model/Weights...."
    invokeai-configure --root="$HOMEDIR/userfiles/" --yes
fi 

# Installs onnxruntime-gpu 1.16.3 if an old version or none is found
if [ -d "$HOMEDIR/venv/lib/python3.10/site-packages/onnxruntime_gpu-1.15.1"* ] || [ ! -d "$HOMEDIR/venv/lib/python3.10/site-packages/onnxruntime_gpu"* ]; then
    source $HOMEDIR/venv/bin/activate
    pip install onnxruntime-gpu==1.16.3
fi

# Updates xformers to 0.0.23 if an old version or none is found
if [ -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers-0.0.22"* ] || [ ! -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers/" ] ; then
    source $HOMEDIR/venv/bin/activate
    pip install xformers==0.0.23
fi

# Checks if the git repo has had any changes and updates if needed
echo "Checking if The Git Repo Has Changed...."
git fetch
UPSTREAM=${1:-'@{u}'}
LOCAL=$(git rev-parse @)
REMOTE=$(git rev-parse "$UPSTREAM")
BASE=$(git merge-base @ "$UPSTREAM")

if [ "$LOCAL" = "$REMOTE" ]; then
    echo "Local Files Are Up to Date"
elif [ "$LOCAL" = "$BASE" ]; then
    echo "Updates Found, Updating the local Files...."
    git stash
    git config pull.rebase true
    git pull
    git stash pop
fi

# Gets date modified for upcoming if statements
current=$(date +%s)
last_modified_env=$(stat -c "%Y" "$HOMEDIR/InvokeAI/pyproject.toml")
last_modified_sup=$(stat -c "%Y" "$HOMEDIR/userfiles/models/core/convert/bert-base-uncased/tokenizer.json")

# Updates Python Environment if changes have been made to the pyproject.toml file
if [ -f "$HOMEDIR/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_env)) -lt 60 ] ; then
    echo "Updates Found, Updating python Environment...."
    pip install --use-pep517 --no-cache-dir --upgrade -e .
fi

# Updates Support Models if they have not been updated for 60days
if [ -f "$HOMEDIR/userfiles/models/core/convert/bert-base-uncased/tokenizer.json" ] && [ $(($current-$last_modified_sup)) -gt 5184000 ] ; then
    echo "Updating Support Models...."
    invokeai-configure --root="$HOMEDIR/userfiles/" --yes
fi

# Adds a line to the vite.app.config.ts file to allow network access to the webui
cd $HOMEDIR/InvokeAI/invokeai/frontend/web/config/
if ! grep -q -F "host: true," "vite.app.config.ts" ; then
    sed -i "31i host: true," vite.app.config.ts
fi

# Updates the invokeai.yaml with one that is setup for the docker
cd $HOMEDIR/userfiles/
if ! grep -q -F "root: /home/invokeuser/userfiles" "invokeai.yaml" ; then
   cp -fr $HOMEDIR/invokeai.yaml $HOMEDIR/userfiles/invokeai.yaml
fi

# Installs and activates the pnpm dev environment then loads the web UI
echo "Loading InvokeAI WebUI....."
cd $HOMEDIR/InvokeAI/invokeai/frontend/web/
pnpm install
pnpm dev & ( cd $HOMEDIR/InvokeAI/ && python scripts/invokeai-web.py --root="$HOMEDIR/userfiles/" )
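
For reference, the up-to-date check in the middle of start.sh can be pulled out into a small standalone function if you want to test it outside the container. This is just a sketch of the same rev-parse/merge-base logic; the function name is mine:

```shell
# Reports whether the current checkout is up to date with, behind,
# or diverged from its upstream branch. Run inside a git work tree.
repo_status() {
  local local_rev remote_rev base
  git fetch --quiet
  local_rev=$(git rev-parse @)
  remote_rev=$(git rev-parse '@{u}')
  base=$(git merge-base @ '@{u}')
  if [ "$local_rev" = "$remote_rev" ]; then
    echo "up-to-date"
  elif [ "$local_rev" = "$base" ]; then
    echo "behind"       # safe to pull
  else
    echo "diverged"     # local commits exist that are not on the remote
  fi
}
```

The stash/pull/stash-pop dance in the script only runs in the "behind" case, which is why a locally modified checkout still updates cleanly.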

 

Then create a text file called invokeai.yaml

Add this to it and save:

InvokeAI:
  Web Server:
    host: 0.0.0.0
    port: 9090
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Features:
    esrgan: true
    internet_available: true
    log_tokenization: false
    nsfw_checker: false
    patchmatch: true
    restore: true
  Memory/Performance:
    always_use_cpu: false
    free_gpu_mem: false
    max_loaded_models: 2
    max_cache_size: 6.0
    precision: auto
    sequential_guidance: false
    xformers_enabled: true
    tiled_decode: false
  Paths:
    root: /home/invokeuser/userfiles
    autoimport_dir: autoimport/main
    lora_dir: autoimport/lora
    embedding_dir: autoimport/embedding
    controlnet_dir: autoimport/controlnet
    conf_path: /home/invokeuser/userfiles/configs/models.yaml
    models_dir: /home/invokeuser/userfiles/models
    legacy_conf_dir: /home/invokeuser/userfiles/configs/stable-diffusion
    db_dir: /home/invokeuser/userfiles/databases
    outdir: /home/invokeuser/userfiles/outputs
    from_file: null
    use_memory_db: false
  Models:
    model: null
  Logging:
    log_handlers:
    - console
    log_format: color
    log_level: debug

 

2. Building the Docker image

  • Open the Unraid terminal
  • cd to the folder where the files we just created are stored
  • Run this command and wait for it to finish:
docker build . -t invokeai_docker_main

 

3. Building the Docker container

  • Make a file called my-invokeai.xml in \config\plugins\dockerMan\templates-user (on your Unraid flash drive)
  • Add the text below to it and save
  • In the Unraid GUI, go to the Docker tab, click "Add Container", and in the template dropdown select your user template "invokeai"
  • Change the port/host paths if required
  • Add your Hugging Face token if you wish to have auto download of some models/concepts/diffusers

 

<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI</Name>
  <Repository>invokeai_docker_main</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support>https://forums.unraid.net/topic/130913-guide-invokeai-a-stable-diffusion-toolkit-docker/</Support>
  <Project>https://github.com/invoke-ai/InvokeAI/</Project>
  <Overview>Simplified Docker with auto update for InvokeAI and Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:5173]/</WebUI>
  <TemplateURL/>
  <Icon>https://i.ibb.co/N2c008N/invokeai.png</Icon>
  <ExtraParams>--gpus all -it</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="InvokeAI" Target="/home/invokeuser/InvokeAI/" Default="/mnt/cache/appdata/invokeai/invokeai/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/invokeai/</Config>
  <Config Name="userfiles" Target="/home/invokeuser/userfiles/" Default="/mnt/cache/appdata/invokeai/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/userfiles/</Config>
  <Config Name="venv" Target="/home/invokeuser/venv/" Default="/mnt/cache/appdata/invokeai/venv/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/venv/</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto download recommended models please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"></Config>
  <Config Name="Webui Port" Target="5173" Default="" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">5173</Config>
  <Config Name="cache" Target="/home/invokeuser/.cache" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/invokeai/cache</Config>
</Container>

 

4. Last steps

  • After the container builds, on first run the Python venv will be created and some models/weights/diffusers preloaded (this can take a while and downloads 20 GB+ of data; open the Docker log for progress)
  • Once this is done, open any web browser and point it to [Your Unraid IP]:5173 (or the port you set)

Last Notes:

  • If you run into errors after updates, cleaning out the /invokeai/invokeai/ folder and/or deleting the file /invokeai/venv/pyvenv.cfg and rerunning the Docker container can force a partial rebuild and fix a lot of issues
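
Those reset steps can be sketched as a tiny helper. The function name and the path argument are my own additions; point it at wherever your appdata folder for this container lives:

```shell
# Clears the cloned source tree (forces a re-clone on next start) and
# removes pyvenv.cfg (forces the venv checks in start.sh to run again).
reset_invokeai() {
  local appdata="${1:?usage: reset_invokeai /path/to/appdata/invokeai}"
  rm -rf "$appdata/invokeai"/*       # the /invokeai/invokeai/ folder from the note above
  rm -f  "$appdata/venv/pyvenv.cfg"  # triggers a partial venv rebuild
}
```

For the paths used in the templates above, that would be `reset_invokeai /mnt/user/appdata/invokeai`, followed by restarting the container.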

 

 

Feel free to comment with any suggestions

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker (updated to Support V2.2.4 of InvokeAI)
2 hours ago, LittelD said:

argh damm.... is it without gpu possible :D?

Yes, you can run from the CPU, but it is extremely slow. To do this, in my-invokeai.xml from the guide

change

<ExtraParams>--gpus all</ExtraParams>

to

<ExtraParams>--gpus 0</ExtraParams>

 

1 hour ago, mickr777 said:

Yes you can run from cpu, but is extremely slow, to do this in my-invokeai.xml from the guide

change

<ExtraParams>--gpus all</ExtraParams>

to

<ExtraParams/>

 

sorry not working 

 

getting following error then docker stops suddenly

 

venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0

 

28 minutes ago, LittelD said:

sorry not working 

 

getting following error then docker stops suddenly

 

venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0

 

OK, I updated the script. Try building again: you will need to remove the Docker container, delete the image and the folders that were made, and start the guide again, this time with

<ExtraParams>--gpus 0</ExtraParams>

 


I got it all set up, but when I start it, it does this...

 

Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
>> Patchmatch initialized
* Initializing, be patient...
>> InvokeAI runtime directory is "/userfiles"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Initializing safety checker
>> Current VRAM usage:  1.22G
>> Scanning Model: stable-diffusion-1.5
>> Model Scanned. OK!!
>> Loading stable-diffusion-1.5 from /userfiles/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
   | Loading VAE weights from: /userfiles/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
>> Model loaded in 7.57s
>> Max VRAM used to load the model: 3.38G 
>> Current VRAM usage:3.38G
>> Current embedding manager terms: *
>> Setting Sampler to k_lms

* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> goodbye!

2 hours ago, VonHex said:

I got it all set up but when I start it, it does this...

 

Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
>> Patchmatch initialized
* Initializing, be patient...
>> InvokeAI runtime directory is "/userfiles"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Initializing safety checker
>> Current VRAM usage:  1.22G
>> Scanning Model: stable-diffusion-1.5
>> Model Scanned. OK!!
>> Loading stable-diffusion-1.5 from /userfiles/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
   | Loading VAE weights from: /userfiles/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
>> Model loaded in 7.57s
>> Max VRAM used to load the model: 3.38G 
>> Current VRAM usage:3.38G
>> Current embedding manager terms: *
>> Setting Sampler to k_lms

* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> goodbye!

Looks like their build script now creates the invokeai.init file on start, so I needed to change my script a little. But if you edit userfiles/invokeai.init, delete everything in it, and add just --web --host="0.0.0.0", that should fix it.
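
A sketch of that fix as a one-line helper (the function name is mine, and the path in the usage line assumes the appdata layout from the guide):

```shell
# Overwrites the given invokeai.init with only the flags the
# Docker web UI needs, as described in the reply above.
write_init() {
  printf '%s\n' '--web --host="0.0.0.0"' > "$1"
}
```

Run it against your mapped userfiles folder, e.g. `write_init /mnt/user/appdata/invokeai/userfiles/invokeai.init`, then restart the container.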

On 12/17/2022 at 12:56 AM, mickr777 said:

Ok I Updated the script, try building again you will need to remove the docker, delete the img and the folders made and start the guide again try 

<ExtraParams>--gpus 0</ExtraParams>

 

Thanks a lot, somehow it didn't work.

But I ordered a Tesla M40. I will wait and try it then :)

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker (updated to Support V2.2.5 of InvokeAI)

My M40 arrived... but I'm still getting an error :D

docker run
  -d
  --name='InvokeAI'
  --net='bridge'
  -e TZ="Europe/Berlin"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="UnraidTower"
  -e HOST_CONTAINERNAME="InvokeAI"
  -e 'HUGGING_FACE_HUB_TOKEN'='xxxxxxxxxxxxxxxxxxx'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.webui='http://[IP]:[PORT:7790]/'
  -l net.unraid.docker.icon='https://i.ibb.co/LPkz8X8/logo-13003d72.png'
  -p '7790:7790/tcp'
  -v '/mnt/cache/appdata/invokeai/invokeai/':'/InvokeAI/':'rw'
  -v '/mnt/cache/appdata/invokeai/userfiles/':'/userfiles/':'rw'
  -v '/mnt/user/appdata/invokeai/venv':'/venv':'rw'
  --gpus all 'invokeai_docker'
7db9xxxxxxxxxxxxx0eb5327xxxxxx
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

 

ay ay, seems it's not so easy with my config :D

2 hours ago, LittelD said:

My M40 arrived... but im still getting an error :D

docker run
  -d
  --name='InvokeAI'
  --net='bridge'
  -e TZ="Europe/Berlin"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="UnraidTower"
  -e HOST_CONTAINERNAME="InvokeAI"
  -e 'HUGGING_FACE_HUB_TOKEN'='xxxxxxxxxxxxxxxxxxx'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.webui='http://[IP]:[PORT:7790]/'
  -l net.unraid.docker.icon='https://i.ibb.co/LPkz8X8/logo-13003d72.png'
  -p '7790:7790/tcp'
  -v '/mnt/cache/appdata/invokeai/invokeai/':'/InvokeAI/':'rw'
  -v '/mnt/cache/appdata/invokeai/userfiles/':'/userfiles/':'rw'
  -v '/mnt/user/appdata/invokeai/venv':'/venv':'rw'
  --gpus all 'invokeai_docker'
7db9xxxxxxxxxxxxx0eb5327xxxxxx
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

 

ay ay seems not to be easy with my config :D

Did you install the Unraid Nvidia driver plugin?

Plus, it is a good idea to install the NVTOP and GPU Statistics plugins alongside it.

 

 

Also, in your my-invokeai.xml, if you are using a different host port, change only the host-side value like this and leave the rest at 9090:

<Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">7790</Config>

 

35 minutes ago, mickr777 said:

Did you install the unraid nvidia driver plug in?

Also good to install NVTOP and GPU Statistics plugins with it too

 

 

Also in your my-invokeai.xml only change the port like this leave the rest 9090, if your using a different default port

<Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">7790</Config>

 

Yeah well, as far as I found out, Tesla cards are not supported by the plugin. Trying to find some other way :(

10 minutes ago, LittelD said:

yeah well, as far as i found out Tesla cards are not supported by the plugin. trying to find some other way :(

Ah, I just saw you purchased a Tesla M40. Worst case, you might have to create a Windows or Linux VM, pass the GPU through, install the GPU driver in the VM, and then install InvokeAI in it using their installer. (But that is outside the scope of my guide.)

 

https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.5

1 minute ago, mickr777 said:

Ah I just saw you purchased a Telsa m40, worse case you might have to create a windows or linux vm and passthrough the gpu and install the gpu driver in the vm and then InvokeAI in it using there installer.

 

https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.5

nooooo, passing through the card seems not to be that easy either hahahaha

Germans would say: "vom Regen in die Traufe" (out of the frying pan into the fire) :D


Hi, thanks for this. After clearing the persistent store a couple of times I am still hitting this crash:

 

You may download the recommended models (about 10GB total), select a customized set, or
completely skip this step.

Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: 
A problem occurred during initialization.
The error was: "EOF when reading a line"
Traceback (most recent call last):
  File "/InvokeAI/ldm/invoke/CLI.py", line 96, in main
    gen = Generate(
  File "/InvokeAI/ldm/generate.py", line 160, in __init__
    mconfig             = OmegaConf.load(conf)
  File "/venv/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 189, in load
    with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/userfiles/configs/models.yaml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/InvokeAI/scripts/configure_invokeai.py", line 780, in main
    errors.add(download_weights(opt))
  File "/InvokeAI/scripts/configure_invokeai.py", line 597, in download_weights
    choice = user_wants_to_download_weights()
  File "/InvokeAI/scripts/configure_invokeai.py", line 127, in user_wants_to_download_weights
    choice = input('Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: ')
EOFError: EOF when reading a line

 

I was able to resolve this by running

 

mkdir /userfiles/configs

 

then rebuilding the container (it errored again on a missing file), and finally

 

cp -r /invokeai/configs/stable-diffusion /userfiles/configs/.

 

It appears there is an expectation that the configs directory exists beforehand, ready for the models.yaml file and stable-diffusion prefs.

 

I'm a newb to this, so more than likely this is a hack.

 

Cheers!

 

Ref: https://github.com/invoke-ai/InvokeAI/issues/1420

3 hours ago, neurocis said:

Hi, thanks for this, after clearing the persistent store a couple times I am still having this crash:

 

You may download the recommended models (about 10GB total), select a customized set, or
completely skip this step.

Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: 
A problem occurred during initialization.
The error was: "EOF when reading a line"
Traceback (most recent call last):
  File "/InvokeAI/ldm/invoke/CLI.py", line 96, in main
    gen = Generate(
  File "/InvokeAI/ldm/generate.py", line 160, in __init__
    mconfig             = OmegaConf.load(conf)
  File "/venv/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 189, in load
    with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/userfiles/configs/models.yaml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/InvokeAI/scripts/configure_invokeai.py", line 780, in main
    errors.add(download_weights(opt))
  File "/InvokeAI/scripts/configure_invokeai.py", line 597, in download_weights
    choice = user_wants_to_download_weights()
  File "/InvokeAI/scripts/configure_invokeai.py", line 127, in user_wants_to_download_weights
    choice = input('Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: ')
EOFError: EOF when reading a line

 

I was able to resolve this by

 

mkdir /userfiles/configs

 

, rebuilding the container (error again - missing file) then,

 

cp -r /invokeai/configs/stable-diffusion /userfiles/configs/.

 

Appears there is an expectation that configs exists pre, ready for the models.yaml file and stable-diffusion prefs.

 

Newb to this so more than likely this is a hack.

 

Cheers!

 

Ref: https://github.com/invoke-ai/InvokeAI/issues/1420

Thank you. It looks like when no Hugging Face token is given, the folders are not created. I have updated start.sh and made some notes at the bottom of the first post.

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker (updated to Support Diffuser Update of InvokeAI)

I'm getting this error after following the instructions, when creating the container with my XML file:

 

docker: Error response from daemon: pull access denied for invokeai_docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

 

Edit: in the Docker template screen I changed the repo from "invokeai_docker" to "invokeai" and now it works. Maybe it has something to do with the fact that I had another "invokeai_docker" container previously.

6 minutes ago, YourNightmar3 said:

Im getting this error after following the instructions, when creating the container with my XML file: 

 

docker: Error response from daemon: pull access denied for invokeai_docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

That normally happens if the Docker image doesn't exist locally.

 

When making the Docker image, did you use this:

docker build . -t invokeai_docker

The name there needs to match the Repository in the XML, as we are only making a local image, not an online one.

33 minutes ago, mickr777 said:

that normally happens if the docker image doesn't exist locally.

 

when making the docker image did you use this

docker build . -t invokeai_docker

as the name there needs to match the repo in the xml, as we are only making a local image not an online one

 

Ah, thank you. I'm not very experienced with Docker and I must have changed the name when I executed the docker build command. It's working now.

  • mickr777 changed the title to [Guide] InvokeAI: A Stable Diffusion Toolkit - Docker

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.
Note: Your post will require moderator approval before it will be visible.

Guest
Reply to this topic...

×   Pasted as rich text.   Restore formatting

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.