mickr777 Posted November 13, 2022

This is a simple Docker for Unraid for InvokeAI: A Stable Diffusion Toolkit. Most data is stored in appdata and updates come via git pull, so the base image does not need constant rebuilds. On start it checks git for updates, updates the Python venv if changes are needed, and auto-starts the web UI.

This Docker uses the main branch of InvokeAI, which is updated all the time, so things might break every now and then. Check out their hard work at https://github.com/invoke-ai/InvokeAI - Changelog

Latest docker changes (4 May 23): just uploaded a quick fix to Docker Hub, locking the current main branch to the pre-nodes tag until the main migration to nodes is done. However, if there are files under /invokeai/invokeai/ they will need to be deleted before starting the docker so it can pull the pre-nodes tag.

I know only a small amount about making a Docker, so this is probably not the right way, as I just used Google to work it all out, but it worked for me and I thought others might find it useful. (If you are interested, I am using the ASUS Phoenix GeForce RTX 3060 12GB; it gives good speed and VRAM for the price, with low power usage and a small size, plus the Nvidia driver plugin for Unraid.)

Make sure you agree with the CreativeML Responsible AI License before installing: https://huggingface.co/spaces/CompVis/stable-diffusion-license

My opinion for the best experience (it may or may not run on other hardware):
Minimum requirements: an Nvidia GPU with at least 6GB VRAM (Pascal-based cards or newer), 12GB free system RAM, 20GB storage space
Recommended requirements: an Nvidia GPU with 8GB+ VRAM (Turing-based cards or newer), 16GB+ free system RAM, 40GB+ storage space

There are now two options to make the docker container:

Option 1 - Simplified Install (using Docker Hub) - Main and v2.3 Branch
1. Building the docker container

Make a file called my-invokeai.xml or my-invokeai_v23.xml in \config\plugins\dockerMan\templates-user (on your Unraid flash drive), add the text from one of the options below to it, and save. Then go to your Unraid GUI, open the Docker tab, click "Add Container", and in the template dropdown box select your user template "invokeai". Change the port/host paths if required, and add your Huggingface access token if you wish to have auto-download of some models/concepts/diffusers.

For the InvokeAI main branch (currently locked to the pre-nodes tag while the node migration happens):

```xml
<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI</Name>
  <Repository>mickr777/invokeai_unraid</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support>https://forums.unraid.net/topic/130913-guide-invokeai-a-stable-diffusion-toolkit-docker/</Support>
  <Project>https://github.com/invoke-ai/InvokeAI/</Project>
  <Overview>Simplified Docker with auto update for InvokeAI and Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:9090]/</WebUI>
  <TemplateURL/>
  <Icon>https://i.ibb.co/LPkz8X8/logo-13003d72.png</Icon>
  <ExtraParams>--gpus all</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">9090</Config>
  <Config Name="InvokeAI" Target="/home/invokeuser/InvokeAI/" Default="/mnt/cache/appdata/invokeai/invokeai/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/invokeai/</Config>
  <Config Name="userfiles" Target="/home/invokeuser/userfiles/" Default="/mnt/cache/appdata/invokeai/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/userfiles/</Config>
  <Config Name="venv" Target="/home/invokeuser/venv/" Default="/mnt/cache/appdata/invokeai/venv/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/venv/</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto download recommended models please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"/>
</Container>
```
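If you prefer working from the Unraid terminal, here is a minimal sketch of placing the template on the flash drive (assuming the standard flash mount at /boot and that you saved the XML above as my-invokeai.xml in your current directory):

```bash
# Create the user-templates folder if it does not exist yet
mkdir -p /boot/config/plugins/dockerMan/templates-user

# Copy the template so it appears in the "Add Container" template dropdown
cp my-invokeai.xml /boot/config/plugins/dockerMan/templates-user/
```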
For InvokeAI v2.3.* (a temporary image locked to v2.3.*; it will not have an upgrade path to main later, so it is best to make this a separate container if you wish to use it. I made this because some wanted updates are in v2.3.* but not in main yet, e.g. LoRAs):

```xml
<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI v2.3</Name>
  <Repository>mickr777/invokeai_unraid_v2.3</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support>https://forums.unraid.net/topic/130913-guide-invokeai-a-stable-diffusion-toolkit-docker/</Support>
  <Project>https://github.com/invoke-ai/InvokeAI/</Project>
  <Overview>Simplified Docker for InvokeAI v2.3.* and Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:9090]/</WebUI>
  <TemplateURL/>
  <Icon>https://i.ibb.co/LPkz8X8/logo-13003d72.png</Icon>
  <ExtraParams>--gpus all</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">9090</Config>
  <Config Name="InvokeAI" Target="/home/invokeuser/InvokeAI/" Default="/mnt/cache/appdata/invokeai_v23/invokeai/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai_v23/invokeai/</Config>
  <Config Name="userfiles" Target="/home/invokeuser/userfiles/" Default="/mnt/cache/appdata/invokeai_v23/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai_v23/userfiles/</Config>
  <Config Name="venv" Target="/home/invokeuser/venv/" Default="/mnt/cache/appdata/invokeai_v23/venv/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai_v23/venv/</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto download recommended models please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"/>
</Container>
```

2. Last steps: after the container build, on first run the Python venv will be created and some models/weights/diffusers will be preloaded (this can take a while and will download 20GB+ of data; open the docker log for progress). Once this is done, load up any web browser and point it to [Your Unraid IP]:9090.

---------------------------------------------------------------------------------------------

Option 2 - Full Manual Install - Main Branch (or if you just want to see what I did to get it to work)

1. First we will create the needed files

Create a folder on one of the drives on your Unraid server (I named mine invokeai). Inside that folder create a text file called Dockerfile (make sure you remove the .txt extension if it has one), add this to it and save:

```dockerfile
FROM ubuntu:22.04

RUN apt-get update \
    && DEBIAN_FRONTEND="noninteractive" \
    apt-get install -y \
    git \
    dos2unix \
    python3-pip \
    python3-venv \
    libopencv-dev \
    && apt-get clean

WORKDIR /usr/lib/x86_64-linux-gnu/pkgconfig/
RUN ln -sf opencv4.pc opencv.pc

RUN useradd --create-home -u 99 -g 100 invokeuser
WORKDIR /home/invokeuser

ADD start.sh .
RUN dos2unix start.sh
RUN chmod +x start.sh

USER invokeuser
ENTRYPOINT ["/bin/bash", "start.sh"]
```

Then create a text file called start.sh, add this to it and save:

```bash
#!/bin/bash
HOMEDIR="/home/invokeuser/"

if [ -f "$HOMEDIR/InvokeAI/pyproject.toml" ] ; then
    git config --global --add safe.directory InvokeAI
    cd $HOMEDIR/InvokeAI/
else
    echo "Cloning Git Repo into Local Folder..."
    git config --global --add safe.directory InvokeAI
    git clone -b pre-nodes https://github.com/invoke-ai/InvokeAI.git InvokeAI
    cd $HOMEDIR/InvokeAI/
fi

if [ -f "$HOMEDIR/venv/pyvenv.cfg" ] ; then
    source $HOMEDIR/venv/bin/activate
else
    echo "Creating Python Environment...."
    echo '--web --host="0.0.0.0"' > $HOMEDIR/userfiles/invokeai.init
    python3 -m venv $HOMEDIR/venv/
    source $HOMEDIR/venv/bin/activate
    pip install --use-pep517 -e .
    echo "Preloading Important Model/Weights...."
    invokeai-configure --root="$HOMEDIR/userfiles/" --yes
fi

if [ -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers-0.0.16"* ] || [ -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers-0.0.17"* ] || [ -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers-0.0.18"* ] || [ ! -d "$HOMEDIR/venv/lib/python3.10/site-packages/xformers/" ] ; then
    source $HOMEDIR/venv/bin/activate
    pip install xformers==0.0.19.dev516
fi

echo "Checking if The Git Repo Has Changed...."
git fetch
UPSTREAM=${1:-'@{u}'}
LOCAL=$(git rev-parse @)
REMOTE=$(git rev-parse "$UPSTREAM")
BASE=$(git merge-base @ "$UPSTREAM")

if [ $LOCAL = $REMOTE ]; then
    echo "Local Files Are Up to Date"
elif [ $LOCAL = $BASE ]; then
    echo "Updates Found, Updating the Local Files...."
    git config pull.rebase true
    git pull
fi

current=$(date +%s)
last_modified_env=$(stat -c "%Y" "$HOMEDIR/InvokeAI/pyproject.toml")
last_modified_pre=$(stat -c "%Y" "$HOMEDIR/InvokeAI/invokeai/backend/config/invokeai_configure.py")

if [ -f "$HOMEDIR/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_env)) -lt 60 ] ; then
    echo "Updates Found, Updating Python Environment...."
    pip install --use-pep517 --upgrade -e .
fi

if [ -f "$HOMEDIR/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_pre)) -lt 60 ] ; then
    echo "Updates Found, Updating Model Preload...."
    invokeai-configure --root="$HOMEDIR/userfiles/" --yes
fi

echo "Loading InvokeAI WebUI....."
invokeai --root="$HOMEDIR/userfiles/"
```

2. Building the docker image

Open the Unraid terminal, cd to the folder where the two files we just created are stored, then run this command and wait for it to finish:

```bash
docker build . -t invokeai_docker
```
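Before moving on, it may be worth confirming the image actually built; a small sketch (the name must match whatever you passed to -t above):

```bash
# Lists locally built images named invokeai_docker; an empty result means the build failed
docker images invokeai_docker
```

The template in the next step references the image by this exact name, since we are building a local image rather than pulling one from Docker Hub.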
3. Building the docker container

Make a file called my-invokeai.xml in \config\plugins\dockerMan\templates-user (on your Unraid flash drive), add the text below to it, and save. Then go to your Unraid GUI, open the Docker tab, click "Add Container", and in the template dropdown box select your user template "invokeai". Change the port/host paths if required, and add your Huggingface token if you wish to have auto-download of some models/concepts/diffusers.

```xml
<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI</Name>
  <Repository>invokeai_docker</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support/>
  <Project/>
  <Overview>InvokeAI Docker For Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:9090]/</WebUI>
  <TemplateURL/>
  <Icon>https://i.ibb.co/LPkz8X8/logo-13003d72.png</Icon>
  <ExtraParams>--gpus all</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">9090</Config>
  <Config Name="InvokeAI" Target="/home/invokeuser/InvokeAI/" Default="/mnt/cache/appdata/invokeai/invokeai/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/invokeai/</Config>
  <Config Name="userfiles" Target="/home/invokeuser/userfiles/" Default="/mnt/cache/appdata/invokeai/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/userfiles/</Config>
  <Config Name="venv" Target="/home/invokeuser/venv/" Default="/mnt/cache/appdata/invokeai/venv/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/venv/</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto download recommended models please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"/>
</Container>
```

4. Last steps: after the container build, on first run the Python venv will be created and some models/weights/diffusers will be preloaded (this can take a while and will download 20GB+ of data; open the docker log for progress). Once this is done, load up any web browser and point it to [Your Unraid IP]:9090.

Last notes:

- If you run into errors after updates, cleaning out the /invokeai/invokeai/ folder and/or deleting the file /invokeai/venv/pyvenv.cfg and rerunning the docker can force a partial rebuild and fix a lot of issues.
- If you are low on system RAM, you can add --max_loaded_models 0 to your /userfiles/invokeai.init to disable model caching (see the sketch at the end of this post).

Feel free to comment with any suggestions.
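As mentioned in the last notes, a minimal sketch of what /userfiles/invokeai.init might contain on a low-RAM system, assuming the defaults this guide's start.sh writes on first run (available flags vary by InvokeAI version):

```
--web --host="0.0.0.0"
--max_loaded_models 0
```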
ghzgod Posted November 13, 2022

This is awesome!!! Thank you
LittelD Posted December 16, 2022

Can you tell me what I need to do to get it running with a FirePro W4100?
mickr777 Posted December 16, 2022

8 hours ago, LittelD said: Can you tell me what I need to do to get it running with a FirePro W4100?

If I am not mistaken, that GPU only has 2GB VRAM. You need a GPU with at least 4GB VRAM (8GB is highly recommended).
LittelD Posted December 16, 2022

1 hour ago, mickr777 said: If I am not mistaken, that GPU only has 2GB VRAM. You need a GPU with at least 4GB VRAM (8GB is highly recommended).

Argh, damn... is it possible without a GPU? :D
mickr777 Posted December 16, 2022

2 hours ago, LittelD said: Argh, damn... is it possible without a GPU? :D

Yes, you can run it from the CPU, but it is extremely slow. To do this, in my-invokeai.xml from the guide change

```xml
<ExtraParams>--gpus all</ExtraParams>
```

to

```xml
<ExtraParams>--gpus 0</ExtraParams>
```
LittelD Posted December 16, 2022

1 hour ago, mickr777 said: Yes, you can run it from the CPU, but it is extremely slow. To do this, in my-invokeai.xml from the guide change <ExtraParams>--gpus all</ExtraParams> to <ExtraParams/>

Sorry, it is not working; I get the following error and then the docker stops suddenly:

```
venv/lib/python3.10/site-packages/torch/cuda/__init__.py:88: UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). Did you run some cuda functions before calling NumHipDevices() that might have already set an error? Error 101: hipErrorInvalidDevice (Triggered internally at ../c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
```
mickr777 Posted December 16, 2022

28 minutes ago, LittelD said: Sorry, it is not working; I get the following error and then the docker stops suddenly...

OK, I updated the script. Try building again: you will need to remove the docker, delete the image and the folders that were made, and start the guide again. Try <ExtraParams>--gpus 0</ExtraParams>.
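For anyone debugging GPU detection like this, a hedged sketch of a quick check once the container is actually running (the container name and venv path are the ones this guide's template sets up; adjust if yours differ):

```bash
# Prints True if PyTorch inside the container can reach a CUDA device, False if it will fall back to CPU
docker exec InvokeAI /home/invokeuser/venv/bin/python -c "import torch; print(torch.cuda.is_available())"
```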
VonHex Posted December 20, 2022

I got it all set up, but when I start it, it does this...

```
Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
>> Patchmatch initialized
* Initializing, be patient...
>> InvokeAI runtime directory is "/userfiles"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Initializing safety checker
>> Current VRAM usage: 1.22G
>> Scanning Model: stable-diffusion-1.5
>> Model Scanned. OK!!
>> Loading stable-diffusion-1.5 from /userfiles/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
   | Loading VAE weights from: /userfiles/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
>> Model loaded in 7.57s
>> Max VRAM used to load the model: 3.38G
>> Current VRAM usage:3.38G
>> Current embedding manager terms: *
>> Setting Sampler to k_lms
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> goodbye!
```
mickr777 Posted December 20, 2022

2 hours ago, VonHex said: I got it all set up, but when I start it, it does this...

Looks like their build script makes the invokeai.init file on start now, so I needed to change my script a little. But if you edit the userfiles/invokeai.init file, delete everything in it, and just add --web --host="0.0.0.0" to it, that should fix it.
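From the Unraid terminal, that edit could look like this sketch (the host path assumes the default appdata mapping from this guide; adjust it to match your userfiles volume):

```bash
# Overwrite invokeai.init so it contains only the web UI flags
echo '--web --host="0.0.0.0"' > /mnt/cache/appdata/invokeai/userfiles/invokeai.init
```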
VonHex Posted December 21, 2022

Now it just does this:

```
Checking if The Git Repo Has Changed....
Local Files Are Up to Date
Loading InvokeAI WebUI.....
>> Initialization file /userfiles/invokeai.init found. Loading...
>> Initialization file /userfiles/invokeai.init found. Loading...
```
VonHex Posted December 21, 2022

Nevermind, I guess it started working haha
LittelD Posted December 24, 2022

On 12/17/2022 at 12:56 AM, mickr777 said: OK, I updated the script. Try building again: you will need to remove the docker, delete the image and the folders that were made, and start the guide again. Try <ExtraParams>--gpus 0</ExtraParams>.

Thanks a lot; somehow it didn't work. But I ordered a Tesla M40. I will wait and try it then.
LittelD Posted January 9

My M40 arrived... but I'm still getting an error:

```
docker run -d --name='InvokeAI' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="UnraidTower" -e HOST_CONTAINERNAME="InvokeAI" -e 'HUGGING_FACE_HUB_TOKEN'='xxxxxxxxxxxxxxxxxxx' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:7790]/' -l net.unraid.docker.icon='https://i.ibb.co/LPkz8X8/logo-13003d72.png' -p '7790:7790/tcp' -v '/mnt/cache/appdata/invokeai/invokeai/':'/InvokeAI/':'rw' -v '/mnt/cache/appdata/invokeai/userfiles/':'/userfiles/':'rw' -v '/mnt/user/appdata/invokeai/venv':'/venv':'rw' --gpus all 'invokeai_docker'
7db9xxxxxxxxxxxxx0eb5327xxxxxx
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
```

Ay ay, it seems not to be easy with my config.
mickr777 Posted January 9

2 hours ago, LittelD said: My M40 arrived... but I'm still getting an error...

Did you install the Unraid Nvidia driver plugin? It is also a good idea to install the NVTOP and GPU Statistics plugins with it.

Also, if you are using a different default port, in your my-invokeai.xml only change the port like this and leave the rest as 9090:

```xml
<Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">7790</Config>
```
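Before rebuilding anything, it may help to confirm the host side is working; a hedged sketch of two quick checks from the Unraid terminal (exact output varies by driver and Docker version):

```bash
# Should list the GPU and driver version if the Nvidia driver plugin loaded correctly
nvidia-smi

# Should mention an nvidia runtime once Docker's GPU support is in place
docker info | grep -i runtime
```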
LittelD Posted January 9

35 minutes ago, mickr777 said: Did you install the Unraid Nvidia driver plugin? It is also a good idea to install the NVTOP and GPU Statistics plugins with it...

Yeah, well, as far as I found out, Tesla cards are not supported by the plugin. Trying to find some other way.
mickr777 Posted January 9

10 minutes ago, LittelD said: Yeah, well, as far as I found out, Tesla cards are not supported by the plugin. Trying to find some other way.

Ah, I just saw you purchased a Tesla M40. Worst case, you might have to create a Windows or Linux VM, pass through the GPU, install the GPU driver in the VM, and then install InvokeAI in it using their installer (but that is outside the scope of my guide). https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.5
LittelD Posted January 9

1 minute ago, mickr777 said: Ah, I just saw you purchased a Tesla M40. Worst case, you might have to create a Windows or Linux VM, pass through the GPU, install the GPU driver in the VM, and then install InvokeAI in it using their installer. https://github.com/invoke-ai/InvokeAI/releases/tag/v2.2.5

Noooo, passing through the card seems not to be that easy either hahahaha. Germans would say... vom Regen in die Traufe (out of the frying pan, into the fire).
neurocis Posted January 13

Hi, thanks for this. After clearing the persistent store a couple of times I am still having this crash:

```
You may download the recommended models (about 10GB total), select a customized set, or completely skip this step.
Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]:
A problem occurred during initialization. The error was: "EOF when reading a line"
Traceback (most recent call last):
  File "/InvokeAI/ldm/invoke/CLI.py", line 96, in main
    gen = Generate(
  File "/InvokeAI/ldm/generate.py", line 160, in __init__
    mconfig = OmegaConf.load(conf)
  File "/venv/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 189, in load
    with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/userfiles/configs/models.yaml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/InvokeAI/scripts/configure_invokeai.py", line 780, in main
    errors.add(download_weights(opt))
  File "/InvokeAI/scripts/configure_invokeai.py", line 597, in download_weights
    choice = user_wants_to_download_weights()
  File "/InvokeAI/scripts/configure_invokeai.py", line 127, in user_wants_to_download_weights
    choice = input('Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: ')
EOFError: EOF when reading a line
```

I was able to resolve this with mkdir /userfiles/configs, rebuilding the container (error again: missing file), then cp -r /invokeai/configs/stable-diffusion /userfiles/configs/. It appears there is an expectation that configs exists beforehand, ready for the models.yaml file and stable-diffusion prefs. I am new to this, so more than likely this is a hack. Cheers!

Ref: https://github.com/invoke-ai/InvokeAI/issues/1420
mickr777 Posted January 13

3 hours ago, neurocis said: Hi, thanks for this. After clearing the persistent store a couple of times I am still having this crash...

Thank you. It looks like when no Huggingface token is given, the folders are not created. I have updated start.sh and made some notes at the bottom of the first post.
mickr777 Posted January 15

If anyone gets a crash and the docker exits after the diffusers update today, delete the contents of the folder /userfiles/models/hub/ and rerun the docker.
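From the Unraid terminal, that cleanup could look like this sketch (the host path assumes the default userfiles mapping from this guide; adjust if yours differs):

```bash
# Remove the cached diffusers downloads; they will be re-fetched on the next start
rm -rf /mnt/cache/appdata/invokeai/userfiles/models/hub/*
```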
YourNightmar3 Posted January 16

I'm getting this error after following the instructions, when creating the container with my XML file:

```
docker: Error response from daemon: pull access denied for invokeai_docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
```

Edit: In the docker template screen I changed the repo from "invokeai_docker" to "invokeai" and now it works. Maybe it has something to do with the fact that I had another "invokeai_docker" container previously.
mickr777 Posted January 16

6 minutes ago, YourNightmar3 said: I'm getting this error after following the instructions, when creating the container with my XML file...

That normally happens if the docker image doesn't exist locally. When making the docker image, did you use this?

```bash
docker build . -t invokeai_docker
```

The name there needs to match the repo in the XML, as we are only making a local image, not an online one.
YourNightmar3 Posted January 16

33 minutes ago, mickr777 said: That normally happens if the docker image doesn't exist locally...

Ah, thank you. I'm not very experienced with Docker, and I must have changed the name when I executed the docker build command. It's working now.