Everything posted by Araso

  1. The way I do it is to edit the WEBUI_VERSION key, separating my choices with a pipe (there's a sketch of this after this post). Then, after saving, I get a nice and easy dropdown to select my preferred UI. Literally just add (or replace the existing value with) this in the WEBUI_VERSION key: 02.forge It should be noted that, at the moment, Forge development seems to have kind of stalled, unfortunately. Some things are broken, Regional Prompter being one of them. I use RP a lot, so I switched back to standard A1111. If you're careful, the memory optimisations you lose by leaving Forge don't really matter. Hopefully Forge picks back up again, but at least there's a choice between the two.
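A rough sketch of the pipe-separated value described above. The exact UI codes depend on the template; "02.forge" is from the post, while "02" for standard A1111 is an assumption, so check the parameter files shipped with the container:

    # Unraid template variable (Name/Key: WEBUI_VERSION).
    # A pipe-separated value renders as a dropdown in the template editor;
    # pick whichever UI you want before starting the container.
    WEBUI_VERSION=02|02.forge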
  2. I must admit, I didn't think to do this. I expect, at most, to remove the env but to leave everything else in place whenever I start the container. But since there's nothing there I've customised or need to keep, I've gone ahead and deleted everything. I imagine since you said you don't use Kohya that you delete the whole directory regularly, which I don't do, and that's why we've been getting different results sometimes. The entire log now (after three starts): [migrations] started [migrations] no migrations found usermod: no changes ─────────────────────────────────────── _____ __ __ _____ _____ _____ _____ | | | | __|_ _| | | | --| | |__ | | | | | | | | | |_____|_____|_____| |_| |_____|_|_|_| _____ __ __ _ __ ____ | __ | | | | | | \ | __ -| | | | |__| | | |_____|_____|_|_____|____/ Based on images from linuxserver.io ─────────────────────────────────────── To support LSIO projects visit: https://www.linuxserver.io/donate/ ─────────────────────────────────────── GID/UID ─────────────────────────────────────── User UID: 99 User GID: 100 ─────────────────────────────────────── [custom-init] No custom files found, skipping... App is starting! [ls.io-init] done. Local branch up-to-date, keeping existing venv Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done # All requested packages already installed. Channels: - conda-forge - defaults Platform: linux-64 Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done # All requested packages already installed. Requirement already satisfied: pip in /config/70-kohya/env/lib/python3.10/site-packages (24.0) 15:20:04-388204 INFO Python version is 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] 15:20:04-556792 INFO Submodule initialized and updated. 15:20:04-562357 INFO Installing python dependencies. This could take a few minutes as it downloads files. 15:20:04-563542 INFO If this operation ever runs too long, you can rerun this script in verbose mode to check. 15:20:04-565226 INFO Kohya_ss GUI version: v23.0.15 15:20:04-567755 INFO Installing modules from requirements_linux.txt... 15:20:04-570023 INFO Installing modules from requirements.txt... 15:20:04-575680 INFO Installing package: -e ./sd-scripts 15:20:27-118198 INFO Configuring accelerate... 15:20:27-120563 WARNING Could not automatically configure accelerate. Please manually configure accelerate with the option in the menu or with: accelerate config. LAUNCHING KOHYA_SS ! 15:20:32-792246 INFO headless: True Sorted! Now you can check the forum without expecting yet another list of errors from me. Cheers for all your work!
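For reference, a minimal sketch of the "delete everything" approach described above, assuming the usual Unraid appdata share and a container named stable-diffusion (both assumptions - adjust to your setup):

    # Stop the container, remove the Kohya app directory, then start it
    # again so the next launch reinstalls Kohya from scratch.
    docker stop stable-diffusion
    rm -rf /mnt/user/appdata/stable-diffusion/70-kohya
    docker start stable-diffusion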
  3. OK, tested: To start with, I simply started the container without touching the env or anything else: kohya-log-first-start.txt So there's a message in the log: Remote branch is ahead. If you encouter any issue after upgrade, try to clean venv for clean packages install There was also an error: error: Your local changes to the following files would be overwritten by checkout: README.md library/sdxl_original_unet.py Please commit your changes or stash them before you switch branches. Aborting fatal: Unable to checkout '6b1520a46b1b6ee7c33092537dc9449d1cc4f56f' in submodule path 'sd-scripts' 22:47:26-432880 ERROR Error during Git operation: Command '['git', 'submodule', 'update', '--init', '--recursive', '--quiet']' returned non-zero exit status 1. So I deleted the file 'Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch' and started the container again while I watched to make sure it wiped the env correctly. It did, so that was all working as intended. Here's the log: kohya-log-second-start.txt I still got that error: error: Your local changes to the following files would be overwritten by checkout: README.md library/sdxl_original_unet.py Please commit your changes or stash them before you switch branches. Aborting fatal: Unable to checkout '6b1520a46b1b6ee7c33092537dc9449d1cc4f56f' in submodule path 'sd-scripts' 22:54:37-118317 ERROR Error during Git operation: Command '['git', 'submodule', 'update', '--init', '--recursive', '--quiet']' returned non-zero exit status 1. Then I stopped and started the container again for a third time: kohya-log-third-start.txt This also had the same error as above. All three times I started the container I could still load the UI without problems. What I haven't done, because it takes such a long time to do, is to actually test creating a LoRA. For all I know it might work without any problems or it might fail. As I say - it takes a long time to test something like that. I've never made any changes to those two files: README.md library/sdxl_original_unet.py So it's not a conflict there to prevent overwriting user-modified files. There's something odd here because I imagine it's pretty much an identical process for all UIs but I don't see these problems in A1111 or Forge. So it's something specific only to Kohya, as far as I can tell. But they're all only downloading the latest branch from Github, so they should all work in the same way.
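One hedged way to clear the "local changes would be overwritten by checkout" state reported above, run from a shell inside the container using the paths from the log. It simply discards the two flagged files so the submodule update can proceed:

    cd /config/70-kohya/kohya_ss/sd-scripts
    # Throw away whatever local modifications git thinks exist in these files
    git checkout -- README.md library/sdxl_original_unet.py
    cd ..
    # Re-run the submodule update that was failing
    git submodule update --init --recursive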
  4. It might be worth mentioning that the last commit, as you can see here, was 5 days ago. So the env that was there should have been fully up to date and not in need of an update at all (a quick way to verify this is sketched after this post). All of these reinstalls have been happening well after that last commit, so I think it's maybe a bad idea to prevent the normal updating procedure. I think it's something else - maybe some sort of conflict somewhere. But at the very least, there is now the option of deleting the file to wipe the env if needed, should you go down that route.
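A quick sanity check of whether the local clone is really behind the remote before any wipe is triggered - path assumed from the install logs, run from a shell inside the container:

    cd /config/70-kohya/kohya_ss
    git fetch origin
    # Shows "ahead N" / "behind N" relative to the tracked upstream branch
    git status -sb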
  5. This is fixed and all existing files were moved over. Nice! This is also fixed. This, unfortunately, is not fixed. I keep getting this every time: Remote branch is ahead. Wiping venv for clean packages install Updating 6162193..5bbb4fc Full logs for the first three starts after installing this new version: kohya-log-first-start.txt, kohya-log-second-start.txt, kohya-log-third-start.txt
  6. Everything seems to be working. However, I have some observations.

1. The permissions files
Two files are created in /appdata/stable-diffusion:
Delete this file to clean virtual env and dependencies at next launch
Delete this file to reset access rights at next launch
I haven't tested these, for reasons I'll get to below, but I would suggest naming them slightly differently, like this:
Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch
Delete_this_file_to_reset_access_rights_at_next_launch
The reason is that it can be tricky to delete files with spaces in their names if anyone wants to do this from the console (see the example after this post). Having underscores and no spaces removes all doubt and the need for quotation marks or escape characters. Linux-based systems and spaces don't mix together very well.

2. Kohya installs fine, but...
Kohya has installed without any problems and I've loaded the WebUI fine. However, it wipes the env and reinstalls on every start. This is without deleting the 'Delete this file to clean virtual env and dependencies at next launch' file. Specifically, this directory is wiped every time: /appdata/stable-diffusion/70-kohya/env
These are the logs: kohya-log-first-install.txt, kohya-log-second-start.txt, kohya-log-third-start.txt
The ones called 'second' and 'third' are actually later than that - they're more like 'fifth' and 'sixth' - but I didn't realise it was wiping the env until after a few stop/starts, so I didn't save those earlier logs. They're all pretty much the same anyway. Every time it reinstalls I can still open the UI, so it still works - it just takes a long, long time to go through all of this reinstallation first.

3. Kohya works.
Kohya seems to be working fine. I've trained a (very crappy) quick LoRA and tested it in A1111 and I can see it's definitely working. It used CUDA correctly. Although, as I mentioned in my previous post, I've always had NVIDIA_VISIBLE_DEVICES set correctly, so I don't know if it would work properly without that variable.

4. A1111 outputs are still going to the wrong directory
To be clear, I'm talking about standard Automatic1111. You already fixed Forge to go to the correct directory, but A1111 is still saving outputs in:
/appdata/stable-diffusion/02-sd-webui/webui/output
instead of:
/appdata/stable-diffusion/outputs/02-sd-webui

That's pretty much it for now. It takes forever when Kohya reinstalls during every start of the container, so I haven't had much time to do a deep test of everything. At least Kohya installs now, and it's not exactly a daily-use kind of thing, so if it would take a lot of work to fix, then I say leave it until you have the time. Also, this didn't leave me with much time to test deleting the files to wipe the env or change permissions. For one thing, Kohya is deleting the env anyway without me having to touch that file, and I didn't really want to test deleting my working A1111 or Forge. Hopefully some or all of these fixes are quick and easy. As soon as you have any new version I'll be on it for testing.
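To illustrate the point about spaces in file names (host path assumed to be the usual Unraid appdata share):

    # With spaces, the whole name has to be quoted or escaped:
    rm "/mnt/user/appdata/stable-diffusion/Delete this file to clean virtual env and dependencies at next launch"
    # With underscores it is just:
    rm /mnt/user/appdata/stable-diffusion/Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch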
  7. I've always had this variable set: NVIDIA_VISIBLE_DEVICES Does this mean this variable now does nothing and/or can be safely removed from the template?: CLEAN_ENV I'll test Kohya and see what's what.
  8. If you read my posts a bit farther back, the only real change I've made was adding UMASK after I had some trouble with file permissions. To do this, I added a variable to the template: Name/Key: UMASK Value: 000 I can't think of anything significant off the top of my head that I changed in A1111's settings. I've been using Forge and had to switch to standard A1111 due to a Forge bug that's there at the moment, so my settings are basically at the defaults other than some cosmetic things. You could try disabling all your custom extensions except this one, then enabling your other extensions one-by-one until you find where it breaks. I've heard of people having trouble with conflicting extensions before. If it's not that, then maybe stop A1111 and temporarily move config.json and ui-config.json somewhere safe and start it up again so it recreates a default config (see the sketch after this post). One thing I noticed was that it worked in the ComfyUI tab just fine with 'realvisxlV40_v40LightningBakedvae.safetensors', but it errored out when I tried to generate anything with that model in the txt2img tab. It would only work with a full SDXL model. For me, using standard SDXL takes way too long when I've been used to turbo and lightning models for a while now, so I've had to disable this extension just to be able to use them. The point of me saying all of this is: maybe if you have a turbo or lightning model loaded, then that alone could cause you problems.
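A minimal sketch of the config reset mentioned above, assuming the usual Unraid appdata location and that A1111 keeps its config in the webui directory (both assumptions):

    # Stop the container first, then park the configs so fresh defaults
    # are generated on the next start.
    cd /mnt/user/appdata/stable-diffusion/02-sd-webui/webui
    mv config.json config.json.bak
    mv ui-config.json ui-config.json.bak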
  9. Actually, yes. I didn't even know this extension was available or even possible, so cheers for the heads up. I've installed it and first thing is that it tells you to reload the UI. This isn't enough - a full stop and start of the container makes it work. Presumably you've done that, though. My log: ################################################################ Launching launch.py... ################################################################ glibc version is 2.35 Check TCMalloc: libtcmalloc_minimal.so.4 libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 Python 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] Version: v1.8.0 Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5 CUDA 12.1 Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api Civitai Helper: Get Custom Model Folder 20:02:49 - ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA Loading weights [d6a48d3e20] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxllightning/realvisxlV40_v40LightningBakedvae.safetensors Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml Civitai Helper: Set Proxy: Running on local URL: http://0.0.0.0:9000 To create a public link, set `share=True` in `launch()`. [sd-webui-comfyui] Started callback listeners for process webui [sd-webui-comfyui] Starting subprocess for comfyui... [sd-webui-comfyui] Created a reverse proxy route to ComfyUI: /sd-webui-comfyui/comfyui Startup time: 59.9s (prepare environment: 16.8s, import torch: 20.4s, import gradio: 2.8s, setup paths: 7.2s, initialize shared: 0.3s, other imports: 1.7s, list SD models: 0.2s, load scripts: 6.5s, create ui: 3.1s, gradio launch: 0.2s, add APIs: 0.4s). Applying attention optimization: xformers... done. Model loaded in 4.9s (load weights from disk: 0.2s, create model: 0.6s, apply weights to model: 3.5s, calculate empty prompt: 0.3s). [ComfyUI] [sd-webui-comfyui] Setting up IPC... [ComfyUI] [sd-webui-comfyui] Using inter-process communication strategy: File system [ComfyUI] [sd-webui-comfyui] Started callback listeners for process comfyui [ComfyUI] [sd-webui-comfyui] Patching ComfyUI... 
[ComfyUI] [sd-webui-comfyui] Launching ComfyUI with arguments: --listen 127.0.0.1 --port 8189 [ComfyUI] ** ComfyUI startup time: 2024-03-20 20:03:11.148687 [ComfyUI] ** Platform: Linux [ComfyUI] ** Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] [ComfyUI] ** Python executable: /config/02-sd-webui/conda-env/bin/python3 [ComfyUI] ** Log path: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/comfyui.log [ComfyUI] ### Loading: ComfyUI-Manager (V2.10.2) [ComfyUI] ### ComfyUI Revision: 2077 [4b9005e9] | Released on '2024-03-20' [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] registered ws - sandbox_tab - 7fdd974d65054604850394a243948462 [ComfyUI] registered ws - preprocess_latent_img2img - fff2e5a47a004d34a784737b769602da [ComfyUI] registered ws - postprocess_latent_txt2img - 57d1208bf4254d4eb7a75f53330c7a8c [ComfyUI] registered ws - postprocess_img2img - 95e9cc98ce84468e942c503f0bb45829 [ComfyUI] registered ws - preprocess_img2img - 48382e9789424d21ac3e6da647c14cc1 [ComfyUI] registered ws - postprocess_latent_img2img - 0262e3f624aa4c35ae54783f2e19ac7b [ComfyUI] registered ws - before_save_image_txt2img - 2fe9c48759004cd7a7d68faec7fd8de2 [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [ComfyUI] registered ws - postprocess_txt2img - 8074d5435f164a26ad0fb02317be8a8a [ComfyUI] registered ws - before_save_image_img2img - 11f4a5b8fb2e4dbaaac9f154a7cd3ccb [ComfyUI] registered ws - postprocess_image_img2img - 34002d2f69d94cb9a459ddc77987b38c [ComfyUI] registered ws - 
postprocess_image_txt2img - 2e0a84b53e034bb18eda0e806f6d4ecc [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json [ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json Then I get: I've only had a few minutes of testing it. Generating from the ComfyUI tab is working fine. As you can see, I've changed the checkpoint, resolution and CGF/steps. However, generating from the txt2img tab results in: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) So I'm going to have to figure that one out.
  10. A quickie bug report which should be an easy fix: I've had to temporarily switch from Forge to standard A1111 because Regional Prompter is broken in Forge at the moment. Standard A1111 saves output images to: /appdata/stable-diffusion/02-sd-webui/webui/output Instead of: /appdata/stable-diffusion/outputs It's easy enough to get to them, but things like Infinite Image Browsing are locked to whatever location is defined as the output. This means I can't find something I generated in Forge and send it with all its parameters to txt2img, for example. And vice versa - when Forge is patched for Regional Prompter, I won't be able to browse easily to what I've been creating in standard A1111. At least not without moving everything manually.
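Not from the thread, but a possible stop-gap until the container is patched: point A1111's output directory at the shared outputs folder with a symlink. Paths are assumed, and the container should be stopped first:

    cd /mnt/user/appdata/stable-diffusion
    # Park the existing output directory, then replace it with a link
    # into the shared /outputs tree so both UIs see the same images.
    mv 02-sd-webui/webui/output 02-sd-webui/webui/output.bak
    mkdir -p outputs/02-sd-webui
    ln -s ../../outputs/02-sd-webui 02-sd-webui/webui/output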
  11. I don't want to pile anything extra on you since, at least for me, Forge is the most important thing. But I have been trying to install kohya and I consistently get: Downloading and Extracting Packages: ...working... done Preparing transaction: ...working... done Verifying transaction: ...working... done Executing transaction: ...working... done '/opt/sd-install/parameters/70.txt' -> '/config/70-kohya/parameters.txt' Cloning into 'kohya_ss'... Already up to date. Requirement already satisfied: pip in ./venv/lib/python3.10/site-packages (23.0.1) Collecting pip Using cached pip-24.0-py3-none-any.whl (2.1 MB) Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 23.0.1 Uninstalling pip-23.0.1: Successfully uninstalled pip-23.0.1 Successfully installed pip-24.0 Obtaining file:///config/70-kohya/kohya_ss/sd-scripts (from -r requirements.txt (line 46)) ERROR: file:///config/70-kohya/kohya_ss/sd-scripts (from -r requirements.txt (line 46)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found. venv folder does not exist. Not activating... Warning: LD_LIBRARY_PATH environment variable is not set. Certain functionalities may not work correctly. Please ensure that the required libraries are properly configured. If you use WSL2 you may want to: export LD_LIBRARY_PATH=/usr/lib/wsl/lib/ Traceback (most recent call last): File "/config/70-kohya/kohya_ss/setup/validate_requirements.py", line 18, in <module> from kohya_gui.custom_logging import setup_logging File "/config/70-kohya/kohya_ss/kohya_gui/custom_logging.py", line 6, in <module> from rich.theme import Theme ModuleNotFoundError: No module named 'rich' Validation failed. Exiting... /entry.sh: line 11: /70: No such file or directory /entry.sh: line 12: /config/scripts/70: No such file or directory /entry.sh: line 13: /config/scripts/70.sh: No such file or directory error in webui selection variable App is starting! Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Then it just bootloops endlessly. This is with a completely clean install after deleting the whole '70-kohya' directory. I also tried both with and without the UMASK variable by removing it from the template altogether and I get the same error every time. Running kohya would be nice for me and the fix might be something really simple that was overlooked. If it's quick and easy to fix - then great. If it's more than that then I can live without it as long as Forge is working.
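The failing step above points at an sd-scripts directory with no setup.py or pyproject.toml, i.e. the submodule was never populated. A hedged thing to try from a shell inside the container (path taken from the log) before restarting:

    cd /config/70-kohya/kohya_ss
    # Pull in the sd-scripts submodule that pip is trying to install from
    git submodule update --init --recursive
    ls sd-scripts/   # should no longer be empty if this was the problem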
  12. An update... I did all of this yesterday, but I got a bit sick of it in the end, so I'm reporting it today. I spent about 6.5 hours on this - reinstalling takes a long time when you do it over and over.

Deleting conda-env and installing over the top didn't work (kind of as expected). So I renamed the whole '/appdata/stable-diffusion/02-sd-webui' to '/appdata/stable-diffusion/02-sd-webui-bak' and then it did install without a problem. However, I've spent some time installing extensions and customising my config, so I'd like all of that back. Simply copying files over will not work - some do, but not all. It's more reliable to reinstall extensions from the tab than to copy all of the directories over from the backup. The two other main files I want back are 'styles.csv' and 'ui-config.json'.

The reason I was doing this for 6.5 hours was that I first tried restoring all of the files and directories from a backup in one go - but then I got a bunch of errors. So I deleted everything again and restarted from scratch, then began restoring things one-by-one with a stop/start of the container in between to figure out exactly what was causing the errors. This is why it took so long, and it's how I figured out that reinstalling extensions from the tab is the way to go.

In 'parameters.forge.txt' I was using, for a long time:
# Web + Network
--listen
--port 9000
# options
--enable-insecure-extension-access
# --xformers
# --api
--cuda-malloc
--cuda-stream
--pin-shared-memory
# --no-half-vae
# --disable-nan-check
# --update-all-extensions
# --reinstall-xformers
# --reinstall-torch
Now, for some reason, this refuses to work without errors. So I've gone back to the default included with this container:
# Web + Network
--listen
--port 9000
# options
--enable-insecure-extension-access
--xformers
--api
--cuda-malloc
--cuda-stream
#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

Anyway, a couple of observations:

Firstly, I've finally remembered that every time I've had to reset permissions on directories and files to manipulate them in the past, it removes the executable flag from .sh files. This is part of what's been preventing the container from booting properly. When you run the 'chown'ing directory to ensure correct permissions.' step, it might be worth also checking that any .sh files are still executable if they need to be (see the one-liner after this post).

Secondly, after a fair bit of reading around and looking at a few container templates, I've concluded that nobody really knows what to set UMASK to. linuxserver set their containers to 022; binhex sets them to 000. I tested (with a whole 'delete everything and start again' approach) the linuxserver approach of a UMASK of 022. When I generate images, it gives me:
Directories: drwxr-xr-x
Files: -rw-r--r--
I cannot do anything with these files/directories unless I reset their permissions, so this is no good. When I use a UMASK of 000, it gives me outputs with:
Directories: drwxrwxrwx
Files: -rw-rw-rw-
These files/directories I can do something with. They have the 'rw' flag all the way through, so I can delete any generations I don't want straight from my image viewer. This is all I ever wanted, so this works for me.

The other thing I wanted, since I make use of Dynamic Prompts, is access to /appdata/stable-diffusion/02-sd-webui/forge/extensions/sd-dynamic-prompts/wildcards. Now this also works as expected. Because this directory is so buried, and because this container resets permissions on every start, it became very tedious very quickly to browse all the way through just to reset permissions on this one directory. This is why I was simply resetting permissions on the whole /appdata/stable-diffusion/02-sd-webui directory, which led to other problems.

The short version is (at least for me):
Use a UMASK of 000
Use the default parameters.forge.txt
Only reinstall extensions from the Extensions tab

Anyway, so it's sorted now. Hopefully?
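A one-liner for the point above about .sh files losing their executable flag after a recursive permissions reset (host path assumed):

    # Re-add the execute bit on every shell script under the appdata share
    find /mnt/user/appdata/stable-diffusion -name '*.sh' -exec chmod +x {} +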
  13. Instead of possibly breaking my working install, I added a new container using the existing stable-diffusion template, but renamed it stable-diffusion-TEST. Installing a completely new and clean copy into a brand new config directory went without a hitch.

I think it's this constant permission juggling that's going on. I think the permissions should be set once and then left alone, instead of the way things currently are, where they are reset every single time. It also resets the output directory, meaning I can't perform any operations like deleting unwanted generations until I manually reset permissions myself. I'd prefer it if it was more like binhex's method (at the very bottom of here), where there is a text file inside the config directory and nothing happens while it exists. If you manually delete the file, then permissions are reset and a new text file is placed there to stop it from repeatedly happening. Just some file called 'delete_me_to_reset_permissions.txt' or something.

For example, when I download with sonarr and radarr, they will create directories and files with permissions where I can move, rename, delete or do anything else with my file manager:
sonarr directories: drwxrwxrwx
sonarr files: -rw-rw-rw-
radarr directories: drwxrwxrwx
radarr files: -rw-rw-rw-
However, the output of a test generation I've just done in this separate and completely clean install of this container gives directories and files with these permissions:
This container directories: drwxr-xr-x
This container files: -rw-r--r--
This means I cannot delete files or directories I don't want unless I manually reset the permissions. My PUID and PGID are correct.

This finally led me to the solution to at least this problem: the short version is that you might want to specify a UMASK variable in the template. I've just started my original :3.1.0 install and generated a few images, and it now creates:
Directories: drwxrwxrwx
Files: -rw-rw-rw-
To get this, I added a variable to the template:
Name/Key: UMASK
Value: 000
Linuxserver base images have UMASK built in, so I made use of that. More info here. (A quick demonstration of what these umask values mean follows this post.)

As for updating my main install to :latest - maybe tomorrow. I haven't really had time today and I didn't want to break my existing install when it's working, but I'll get around to it when I have a bit more time.
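A quick demonstration of what the two UMASK values above translate to for newly created files and directories (files start from mode 666, directories from 777, and the umask bits are masked off):

    umask 022; touch f022; mkdir d022   # f022: -rw-r--r--   d022: drwxr-xr-x
    umask 000; touch f000; mkdir d000   # f000: -rw-rw-rw-   d000: drwxrwxrwx
    ls -ld f022 d022 f000 d000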
  14. Yes, those errors are in the log after removing conda-env. I know I called it 'venv' but it's the same thing. Then I deleted conda-env and changed the tag to :3.1.0 and no errors at all. Just to be clear - this is still running Forge. What did you change?
  15. I updated to the latest version and got these in the log: Building wheels for collected packages: insightface Building wheel for insightface (pyproject.toml): started Building wheel for insightface (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for insightface (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [221 lines of output] WARNING: pandoc not enabled running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-312 creating build/lib.linux-x86_64-cpython-312/insightface copying insightface/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface creating build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/common.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/face_analysis.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/mask_renderer.py -> build/lib.linux-x86_64-cpython-312/insightface/app creating build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/insightface_cli.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/model_download.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/rec_add_mask_param.py -> build/lib.linux-x86_64-cpython-312/insightface/commands creating build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/image.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/pickle_object.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/rec_builder.py -> build/lib.linux-x86_64-cpython-312/insightface/data creating build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/arcface_onnx.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/attribute.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/inswapper.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/landmark.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/model_store.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/model_zoo.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/retinaface.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/scrfd.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty copying insightface/thirdparty/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty creating build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/constant.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/download.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying 
insightface/utils/face_align.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/filesystem.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/storage.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/utils creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d copying insightface/thirdparty/face3d/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/fit.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/load.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/morphabel_model.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model running egg_info writing insightface.egg-info/PKG-INFO writing dependency_links to insightface.egg-info/dependency_links.txt writing entry points to insightface.egg-info/entry_points.txt writing requirements to insightface.egg-info/requires.txt writing top-level names to insightface.egg-info/top_level.txt reading manifest file 'insightface.egg-info/SOURCES.txt' writing manifest file 'insightface.egg-info/SOURCES.txt' /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 
'insightface.data.images' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.data.images' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.data.images' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.data.images' to be distributed and are already explicitly excluding 'insightface.data.images' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. ******************************************************************************** !! check.warn(importable) /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.data.objects' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.data.objects' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.data.objects' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.data.objects' to be distributed and are already explicitly excluding 'insightface.data.objects' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html Failed to build insightface [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. 
******************************************************************************** !! check.warn(importable) /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.thirdparty.face3d.mesh.cython' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.thirdparty.face3d.mesh.cython' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.thirdparty.face3d.mesh.cython' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.thirdparty.face3d.mesh.cython' to be distributed and are already explicitly excluding 'insightface.thirdparty.face3d.mesh.cython' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. ******************************************************************************** !! 
check.warn(importable) creating build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/Tom_Hanks_54745.png -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_black.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_blue.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_green.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_white.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/t1.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images creating build/lib.linux-x86_64-cpython-312/insightface/data/objects copying insightface/data/objects/meanshape_68.pkl -> build/lib.linux-x86_64-cpython-312/insightface/data/objects creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core.h -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.c -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.pyx -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/setup.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython running build_ext building 'insightface.thirdparty.face3d.mesh.cython.mesh_core_cython' extension creating build/temp.linux-x86_64-cpython-312 creating build/temp.linux-x86_64-cpython-312/insightface creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Iinsightface/thirdparty/face3d/mesh/cython -I/tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/numpy/core/include -I/home/abc/miniconda3/include/python3.12 -c insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -o build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython/mesh_core.o error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for insightface ERROR: Could not build wheels for insightface, which is required to install pyproject.toml-based projects And: Building wheels for collected packages: lmdb Building wheel for lmdb (setup.py): started Building wheel for lmdb (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. 
│ exit code: 1 ╰─> [22 lines of output] py-lmdb: Using bundled liblmdb with py-lmdb patches; override with LMDB_FORCE_SYSTEM=1 or LMDB_PURE=1. patching file lmdb.h patching file mdb.c py-lmdb: Using CPython extension; override with LMDB_FORCE_CFFI=1. running bdist_wheel running build running build_py creating build/lib.linux-x86_64-cpython-312 creating build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/__init__.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/__main__.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/_config.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/cffi.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/tool.py -> build/lib.linux-x86_64-cpython-312/lmdb running build_ext building 'cpython' extension creating build/temp.linux-x86_64-cpython-312 creating build/temp.linux-x86_64-cpython-312/build creating build/temp.linux-x86_64-cpython-312/build/lib creating build/temp.linux-x86_64-cpython-312/lmdb gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Ilib/py-lmdb -Ibuild/lib -I/home/abc/miniconda3/include/python3.12 -c build/lib/mdb.c -o build/temp.linux-x86_64-cpython-312/build/lib/mdb.o -DHAVE_PATCHED_LMDB=1 -UNDEBUG -w error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lmdb Running setup.py clean for lmdb Failed to build lmdb ERROR: Could not build wheels for lmdb, which is required to install pyproject.toml-based projects And: CUDA Stream Activated: True Traceback (most recent call last): File "/config/02-sd-webui/forge/launch.py", line 51, in <module> main() File "/config/02-sd-webui/forge/launch.py", line 47, in main start() File "/config/02-sd-webui/forge/modules/launch_utils.py", line 541, in start import webui File "/config/02-sd-webui/forge/webui.py", line 19, in <module> initialize.imports() File "/config/02-sd-webui/forge/modules/initialize.py", line 53, in imports from modules import processing, gradio_extensons, ui # noqa: F401 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/forge/modules/processing.py", line 18, in <module> import modules.sd_hijack File "/config/02-sd-webui/forge/modules/sd_hijack.py", line 5, in <module> from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches File "/config/02-sd-webui/forge/modules/sd_hijack_optimizations.py", line 13, in <module> from modules.hypernetworks import hypernetwork File "/config/02-sd-webui/forge/modules/hypernetworks/hypernetwork.py", line 13, in <module> from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors File "/config/02-sd-webui/forge/modules/sd_models.py", line 20, in <module> from modules_forge import forge_loader File "/config/02-sd-webui/forge/modules_forge/forge_loader.py", line 5, in <module> from ldm_patched.modules import model_detection File "/config/02-sd-webui/forge/ldm_patched/modules/model_detection.py", line 5, in <module> import ldm_patched.modules.supported_models File "/config/02-sd-webui/forge/ldm_patched/modules/supported_models.py", line 5, in <module> from . 
import model_base File "/config/02-sd-webui/forge/ldm_patched/modules/model_base.py", line 6, in <module> from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 22, in <module> from ..attention import SpatialTransformer, SpatialVideoTransformer, default File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/attention.py", line 21, in <module> import xformers ModuleNotFoundError: import of xformers halted; None in sys.modules /entry.sh: line 11: /02.forge: No such file or directory /entry.sh: line 12: /config/scripts/02.forge: No such file or directory /entry.sh: line 13: /config/scripts/02.forge.sh: No such file or directory error in webui selection variable App is starting! Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Then it just bootloops endlessly. This is after I delete the venv so it's a clean install. I changed the tag back to :3.1.0 and that installs and runs perfectly fine.
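Both wheel builds above die on "'/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied", which looks like a lost execute bit rather than a pip problem. A hedged check and fix from a shell inside the container:

    # Confirm whether the execute bit is missing, then restore it
    ls -l /config/02-sd-webui/conda-env/bin/gcc
    chmod +x /config/02-sd-webui/conda-env/bin/gcc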
  16. @Max-SDU and @HealthCareUSA: This thread is for container support more than anything, whereas your issues seem to be with the WebUI itself. Also, @Max-SDU, you don't mention which front end you're using when you see this error. Either way, a quick search for the term: NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs finds this as the first result on the Automatic1111 Github, where there's lots of mention of deleting the venv and reinstalling. Maybe try that first, and if that's not enough to solve your problems, try opening an issue on the relevant Github. In fact, at the first sign of trouble I've got into the habit of deleting the venv before anything else. Yes, it takes a while, but it's not a step you can discount, since it can fix a lot from one version to another. Also, have you tried WebUI Forge? It's much more memory efficient than standard A1111 (you do need a decent amount of RAM though) and it has other bug fixes and optimisations as well.
  17. @Holaf What are the differences between these versions?
TAG latest - last pushed 17 hours ago by holaflenain - 430293c40eb5
TAG 3.1.0 - last pushed 16 hours ago by holaflenain - 07cbd209f7e3
TAG test - last pushed 20 hours ago by holaflenain - 381488f205f3
Which should I switch to? Are the changes you made in :test now in :latest or :3.1.0, with :test deprecated? Usually :latest and the highest numbered version are one and the same, so why are :latest and :3.1.0 not the same? I don't know which to go for.
  18. I found another problem, except this time it was user error - sort of. At some point between when I last updated and tested things and made the post above, you'd pushed out another update. However, I hadn't seen any notification of a new update. Just a short while ago I noticed there was an update for the container, and when I went to look at Docker Hub I saw that it had been pushed 14 hours ago. So all today I've been using an out-of-date version. The moral of this story is that I will have to manually check for updates before I even start the container each time. The good news is that you have, indeed, fixed the file saving location. When I posted above, I was referring to the older version. This new version is correctly saving everything in the right place, so that's fixed. The bad news is in a log file so long I won't post it here in a code block because there are so many lines of it. So I'll attach it to avoid a huge wall of text. It's not even the whole of the log, because so much shoots past the log window that I can't catch it all and most of it disappears. On the plus side: everything I use still seems to work, somehow... All the standard things plus ReActor is what I've been using. The end of the attached log says: ERROR: Could not build wheels for insightface, onnx, which is required to install pyproject.toml-based projects Problems with insightface installing were what I had before when I was trying to use InstantID. It isn't affecting ReActor from what I can tell so far, which is good. However, this error keeps happening after a few full stops and starts of the container, so it's not a one-off. The only way to get a full log would be if you could send log output to disk. Maybe rotate the last three logs or something? (See the note after this post.) log.txt
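In the meantime, the full container log can be pulled from the host rather than from the scrolling log window, since docker already keeps it. A sketch, with the container name assumed:

    docker logs stable-diffusion > /mnt/user/appdata/stable-diffusion/full-log.txt 2>&1
    # Or cap and rotate it at the docker level (json-file logging driver options):
    #   --log-opt max-size=10m --log-opt max-file=3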
  19. Ah. Changelogs would be useful. I still have DOCKER_MODS installing bc, so I can now take that out. Good to know. I'd added a parameter to my file (parameters.txt), which is no longer the file read by Forge (that's now parameters.forge.txt), so I need to add it to the Forge file and revert it in the A1111 file. On my side, it's still saving to: appdata/stable-diffusion/02-sd-webui/forge/output/ My first thought was that I'd need to wipe out: appdata/stable-diffusion/02-sd-webui/conda-env/ But what's in there isn't configuration files, so I'd want to be looking at the actual config files instead. In which file is this value changed, and what has it been changed to? Unless it's a change within one of the .sh files inside the container itself, in which case why isn't it working since I updated to the new :test tag version? Maybe I'll delete the container and start again, just for my own peace of mind.
  20. @Holaf I see there's been a new :test tag which I've updated to. I'm seeing this in the log: *** Cannot import xformers Traceback (most recent call last): File "/config/02-sd-webui/forge/modules/sd_hijack_optimizations.py", line 160, in <module> import xformers.ops File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/__init__.py", line 8, in <module> from .fmha import ( File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module> from . import attn_bias, cutlass, decoder, flash, small_k, triton, triton_splitk File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/triton_splitk.py", line 21, in <module> if TYPE_CHECKING or _has_triton21(): ^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 192, in _has_triton21 if not _has_a_version_of_triton(): ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 176, in _has_a_version_of_triton import triton # noqa: F401 ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/__init__.py", line 20, in <module> from .compiler import compile, CompilationError File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/__init__.py", line 1, in <module> from .compiler import CompiledKernel, compile, instance_descriptor File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/compiler.py", line 27, in <module> from .code_generator import ast_to_ttir File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/code_generator.py", line 8, in <module> from .. import language File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module> from . import math File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/math.py", line 4, in <module> from . 
import core File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/core.py", line 1375, in <module> @jit ^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 542, in jit return decorator(fn) ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 534, in decorator return JITFunction( ^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 433, in __init__ self.run = self._make_launcher() ^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 388, in _make_launcher scope = {"version_key": version_key(), ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 120, in version_key ptxas = path_to_ptxas()[0] ^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/backend.py", line 114, in path_to_ptxas result = subprocess.check_output([ptxas_bin, "--version"], stderr=subprocess.STDOUT) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 466, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 548, in run with Popen(*popenargs, **kwargs) as process: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1026, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1953, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) PermissionError: [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas' --- *** Error loading script: preprocessor_marigold.py Traceback (most recent call last): File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 710, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/loaders/unet.py", line 27, in <module> from ..models.embeddings import ImageProjection, MLPProjection, Resampler File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 23, in <module> from .attention_processor import Attention File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 32, in <module> import xformers.ops File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/__init__.py", line 8, in <module> from .fmha import ( File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module> from . import attn_bias, cutlass, decoder, flash, small_k, triton, triton_splitk File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/triton_splitk.py", line 21, in <module> if TYPE_CHECKING or _has_triton21(): ^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 192, in _has_triton21 if not _has_a_version_of_triton(): ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 176, in _has_a_version_of_triton import triton # noqa: F401 ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/__init__.py", line 20, in <module> from .compiler import compile, CompilationError File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/__init__.py", line 1, in <module> from .compiler import CompiledKernel, compile, instance_descriptor File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/compiler.py", line 27, in <module> from .code_generator import ast_to_ttir File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/code_generator.py", line 8, in <module> from .. import language File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module> from . import math File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/math.py", line 4, in <module> from . 
import core File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/core.py", line 1375, in <module> @jit ^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 542, in jit return decorator(fn) ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 534, in decorator return JITFunction( ^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 433, in __init__ self.run = self._make_launcher() ^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 388, in _make_launcher scope = {"version_key": version_key(), ^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 120, in version_key ptxas = path_to_ptxas()[0] ^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/backend.py", line 114, in path_to_ptxas result = subprocess.check_output([ptxas_bin, "--version"], stderr=subprocess.STDOUT) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 466, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 548, in run with Popen(*popenargs, **kwargs) as process: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1026, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1953, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) PermissionError: [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 710, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/unet_2d_condition.py", line 22, in <module> from ..loaders import UNet2DConditionLoadersMixin File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 700, in __getattr__ module = self._get_module(self._class_to_module[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 712, in _get_module raise RuntimeError( RuntimeError: Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback): [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/config/02-sd-webui/forge/modules/scripts.py", line 544, in load_scripts script_module = script_loading.load_module(scriptfile.path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/forge/modules/script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/forge/extensions-builtin/forge_preprocessor_marigold/scripts/preprocessor_marigold.py", line 10, in <module> from marigold.model.marigold_pipeline import MarigoldPipeline File "/config/02-sd-webui/forge/extensions-builtin/forge_preprocessor_marigold/marigold/model/marigold_pipeline.py", line 9, in <module> from diffusers import ( File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 701, in __getattr__ value = getattr(module, name) ^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 700, in __getattr__ module = self._get_module(self._class_to_module[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 712, in _get_module raise RuntimeError( RuntimeError: Failed to import diffusers.models.unet_2d_condition because of the following error (look up to see its traceback): Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback): [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas' I'm not sure what this affects but I've run a quick generate and 
I got my results. That was a straightforward generation with a simple positive prompt and no negative, so whatever the tracebacks above break, it may be something I haven't exercised yet, such as ControlNet. I'll be trying more complex workflows shortly. Any chance of a changelog? Edit: What is the 'Holaf_tests' tag for?
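For what it's worth, the PermissionError at the bottom of those tracebacks looks like the ptxas binary bundled inside Triton simply isn't marked executable in the conda env. A possible workaround - just a guess based on the path in the traceback, and the container name below is a placeholder for whatever yours is called:

# open a shell in the container
docker exec -it stable-diffusion bash

# restore the execute bit on the bundled ptxas (path taken from the traceback above)
chmod +x /config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas

If the permission gets reset whenever the env is rebuilt, the same chmod could probably go into a startup script, but I haven't tested that.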
  21. Wonder no more... Because this GitHub issue was closed a short time ago, and because Forge pulls the latest code every time it boots, the time taken to generate with ReActor selected has now been massively reduced. A test on the same batch size is now down to around 36 seconds. The only caveat is that in ReActor, under Restore Face, GFPGAN must be selected - CodeFormer is still just as slow. There are always pros and cons with bleeding-edge software. There's still a CPU spike though, so it's not exactly running on the GPU, but I'll take it.
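On the CPU part: judging by the libcublasLt.so.11 errors in my earlier post below, onnxruntime-gpu seems to be looking for CUDA 11 runtime libraries that aren't present in the env, which would explain why the CUDA execution provider fails and everything falls back to the CPU. A sketch of what I might try, purely on the assumption that the missing cuBLAS/cuDNN 11 wheels are the only blocker (the env path is the one from the Forge logs, and I haven't verified this inside the container):

# install the CUDA 11 runtime wheels into the Forge conda env using its own pip
/config/02-sd-webui/conda-env/bin/pip install nvidia-cublas-cu11 nvidia-cudnn-cu11

# make sure the loader can see them before the UI starts (would need to go in whatever launches Forge)
export LD_LIBRARY_PATH=/config/02-sd-webui/conda-env/lib/python3.11/site-packages/nvidia/cublas/lib:/config/02-sd-webui/conda-env/lib/python3.11/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH

No promises - but if it works, ReActor should stop spiking the CPU and use the GPU provider instead.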
  22. I've installed the :test tag and have indeed been testing it. Other than the running-on-the-CPU issue, I've found only three minor issues:

1. Starting Forge does not read from parameters.forge.txt. In the log I see this:

Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api
Arg --medvram is removed in Forge. Now memory management is fully automatic and you do not need any command flags. Please just remove this flag. In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram

The contents of parameters.txt:

# Web + Network
--listen
--port 9000

# options
--enable-insecure-extension-access
--medvram
--xformers
--api
#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

The contents of parameters.forge.txt:

# Web + Network
--listen
--port 9000

# options
--enable-insecure-extension-access
--xformers
--api
--cuda-malloc
--cuda-stream
#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

So it looks like Forge is reading parameters.txt rather than parameters.forge.txt. Forge appears to simply ignore the --medvram flag, so I could remove it from parameters.txt, but then standard A1111 would lose it. And if I ever want a flag only for when I run Forge, it will end up applying to both UIs, which could be problematic.

2. I figured out why Forge isn't saving its outputs: it actually is saving them, just in an unexpected location. Every other UI (A1111, ComfyUI, etc.) sends its outputs to: appdata/stable-diffusion/outputs/ Forge, however, sends its outputs to: appdata/stable-diffusion/02-sd-webui/forge/output/ How can I change this so Forge saves somewhere inside appdata/stable-diffusion/outputs/ as well? (See the config sketch at the end of this post.)

3.
I'm still seeing this when I start Forge: webui.sh: line 246: bc: command not found webui.sh: line 246: [: -eq: unary operator expected I've been reading: https://docs.linuxserver.io/general/container-customization/ https://github.com/linuxserver/docker-mods/tree/universal-package-install?tab=readme-ov-file Essentially, what I've done is add some variables to the template: - DOCKER_MODS=linuxserver/mods:universal-package-install - INSTALL_PACKAGES=bc Which gives me this in the log: **** Adding bc to OS package install list **** [mod-init] **** Installing all mod packages **** Get:1 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB] Get:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB] Get:3 http://archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB] Get:4 http://archive.ubuntu.com/ubuntu jammy/main Sources [1,668 kB] Get:5 http://archive.ubuntu.com/ubuntu jammy/restricted Sources [28.2 kB] Get:6 http://archive.ubuntu.com/ubuntu jammy/universe Sources [22.0 MB] Get:7 http://archive.ubuntu.com/ubuntu jammy/multiverse Sources [361 kB] Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB] Get:9 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB] Get:10 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB] Get:11 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1,792 kB] Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse Sources [21.8 kB] Get:13 http://archive.ubuntu.com/ubuntu jammy-updates/main Sources [595 kB] Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/universe Sources [398 kB] Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/restricted Sources [70.1 kB] Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [50.4 kB] Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [1,907 kB] Get:18 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1,343 kB] Get:19 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1,786 kB] Get:20 http://archive.ubuntu.com/ubuntu jammy-security/universe Sources [231 kB] Get:21 http://archive.ubuntu.com/ubuntu jammy-security/main Sources [316 kB] Get:22 http://archive.ubuntu.com/ubuntu jammy-security/multiverse Sources [12.1 kB] Get:23 http://archive.ubuntu.com/ubuntu jammy-security/restricted Sources [65.9 kB] Get:24 http://archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1,070 kB] Get:25 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1,502 kB] Get:26 http://archive.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [1,859 kB] Get:27 http://archive.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [44.6 kB] Fetched 55.5 MB in 12s (4,745 kB/s) Reading package lists... Reading package lists... Building dependency tree... Reading state information... The following NEW packages will be installed: bc 0 upgraded, 1 newly installed, 0 to remove and 19 not upgraded. Need to get 87.6 kB of archives. After this operation, 220 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 bc amd64 1.07.1-3build1 [87.6 kB] Fetched 87.6 kB in 16s (5,319 B/s) Selecting previously unselected package bc. (Reading database ... 49023 files and directories currently installed.) Preparing to unpack .../bc_1.07.1-3build1_amd64.deb ... Unpacking bc (1.07.1-3build1) ... Setting up bc (1.07.1-3build1) ... [custom-init] No custom files found, skipping... Done! 
So bc is installed and the error is gone. I know it's only a small thing and it didn't appear to affect anything, but I do prefer not to see errors in my logs.

I've run some generations to compare speeds. With the same batch size, I get 21s without ReActor and 1m10s with it. It's not a scientifically controlled test down to the exact seed, but the gap is big enough that it wouldn't matter here anyway. I do wonder how much faster it would be if it could run on the GPU; I have nothing to compare against, since I only started using ReActor after my earlier problems with InstantID, and I've only ever been able to run it on this :test tagged version.

So, at least for me, there are really only minor issues if you don't count:

2024-02-26 02:09:53.027799529 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
2024-02-26 02:10:32.154622901 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
2024-02-26 02:10:33.533160219 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
2024-02-26 02:10:34.076110107 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
2024-02-26 02:10:34.562393148 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

I'm happier than I was before, because even if something has to run on the CPU - at least it runs. Cheers!

I've only tested Forge so far - not standard A1111. I think you might be suggesting the above errors don't happen in A1111, so I'll get to testing that when I can. Also: if you push this to :latest, will you announce it? Otherwise I might be stuck on :test forever.
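Regarding issue 2 above, here's the config sketch I mentioned: the output folders look like the standard A1111 path options, so they should be changeable either from Settings > Paths for saving in the UI, or by editing Forge's config.json directly. This assumes Forge keeps a config.json under /config/02-sd-webui/forge/ like A1111 does, that it kept A1111's option names, and that /config inside the container maps to appdata/stable-diffusion on the host - so treat it as a sketch, not gospel:

"outdir_txt2img_samples": "/config/outputs/txt2img-images",
"outdir_img2img_samples": "/config/outputs/img2img-images",
"outdir_extras_samples": "/config/outputs/extras-images",
"outdir_save": "/config/outputs/saved",

With those set (and the UI restarted), Forge's images should land under appdata/stable-diffusion/outputs/ alongside everything else.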
  23. I read the past few posts, specifically about WebUI Forge and the custom scripts directory and from there I found the Github repo with sd-webui-forge.sh in it. So I put that in the right place, set the template to boot from it and soon I had WebUI Forge installed. A slight issue during every bootup of this script: webui.sh: line 246: bc: command not found webui.sh: line 246: [: -eq: unary operator expected This doesn't appear to affect anything. It runs fine regardless. The second issue is the main one for me. I installed WebUI Forge because I'd read articles and watched videos about it and that it was much more memory efficient than Automatic1111. Plus it has bug fixes and some built-in extensions that A1111 doesn't have. One of those is built-in version of Photomaker. This is similar to InstantID so I thought I'd try it and... It was... fine? It worked without issue and did the job to a point but it's still not what I'm after - that's still InstantID because when I had it working before it was near flawless. However, if I show a bit more of that log: ################################################################ Launching launch.py... ################################################################ glibc version is 2.35 Check TCMalloc: libtcmalloc_minimal.so.4 webui.sh: line 246: bc: command not found webui.sh: line 246: [: -eq: unary operator expected libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 Python 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] Version: f0.0.14v1.8.0rc-latest-184-g43c9e3b5 Commit hash: 43c9e3b5ce1642073c7a9684e36b45489eeb4a49 Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually. So insightface is not installed, which is necessary for InstantID to work. 
For completeness, if I try to use InstantID regardless: Traceback (most recent call last): File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/lib_ipadapter/IPAdapterPlus.py", line 560, in load_insight_face from insightface.app import FaceAnalysis ModuleNotFoundError: No module named 'insightface' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function prediction = await anyio.to_thread.run_sync( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper response = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py", line 847, in run_annotator result = preprocessor( ^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/scripts/forge_ipadapter.py", line 77, in __call__ insightface=self.load_insightface(), ^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/scripts/forge_ipadapter.py", line 71, in load_insightface self.cached_insightface = opInsightFaceLoader(name='antelopev2')[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/lib_ipadapter/IPAdapterPlus.py", line 562, in load_insight_face raise Exception(e) Exception: No module named 'insightface' Like everything else, searching for a way around this is geared toward Windows, WSL, Linux or Mac - nothing for Docker. It's beyond me how I would go about activating a venv in a container and pip installing something that way. So I'm kinda stuck. Even if I did get insightface installed in WebUI Forge, I would still be stuck with my previous problem because: glibc version is 2.35 I will say, though, that even though Stable Diffusion is fast moving and things could change at any moment - WebUI Forge seems to be the way to go right now. The memory efficiency is so much better than standard A1111. Edit, I found another bug with WebUI Forge: It doesn't save images after generation. I can save images manually one-by-one (they go to log/images) but there is only ever a preview, meaning everything is lost upon a new generation.
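On the insightface problem specifically: since the custom script builds its own env under /config/00-custom/sd-webui-forge/env (that path is straight from the tracebacks), there's no need to work out how to "activate" anything - calling that env's own pip from a shell inside the container should be enough. A sketch, with the container name as a placeholder and no guarantee that insightface actually builds cleanly in there:

# open a shell in the container (replace the name with whatever yours is called)
docker exec -it stable-diffusion bash

# install insightface directly into the Forge env by using that env's pip
/config/00-custom/sd-webui-forge/env/bin/pip install insightface

Using the env's pip binary sidesteps activation entirely, which is usually the simplest way to add a package inside a container. If the install fails while compiling, that would be a missing-build-tools problem rather than a path problem.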
  24. I'm having a real struggle trying to get either InstantID or ReActor to work in Automatic1111. Log when attempting to use InstantID: ################################################################ Launching launch.py... ################################################################ Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] Version: v1.7.0 Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api Civitai Helper: Get Custom Model Folder [-] ADetailer initialized. version: 24.1.2, num models: 9 CivitAI Browser+: Aria2 RPC started ControlNet preprocessor location: /config/02-sd-webui/webui/extensions/sd-webui-controlnet/annotator/downloads 2024-02-17 22:17:30,284 - ControlNet - INFO - ControlNet v1.1.440 2024-02-17 22:17:30,475 - ControlNet - INFO - ControlNet v1.1.440 WARNING ⚠️ user config directory '/home/abc/.config/Ultralytics' is not writeable, defaulting to '/tmp' or CWD.Alternatively you can define a YOLO_CONFIG_DIR environment variable for this path. Loading weights [4726d3bab1] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxlturbo/dreamshaperXL_v2TurboDpmppSDE.safetensors 2024-02-17 22:17:33,538 - AnimateDiff - INFO - Injecting LCM to UI. 2024-02-17 22:17:34,314 - AnimateDiff - INFO - Hacking i2i-batch. 2024-02-17 22:17:34,359 - ControlNet - INFO - ControlNet UI callback registered. Civitai Helper: Set Proxy: Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml Running on local URL: http://0.0.0.0:9000 To create a public link, set `share=True` in `launch()`. Startup time: 84.7s (prepare environment: 20.1s, import torch: 15.3s, import gradio: 3.0s, setup paths: 4.2s, initialize shared: 0.5s, other imports: 2.1s, setup codeformer: 0.6s, setup gfpgan: 0.1s, list SD models: 0.1s, load scripts: 34.0s, create ui: 3.8s, gradio launch: 0.6s). Applying attention optimization: xformers... done. Model loaded in 6.9s (load weights from disk: 1.4s, create model: 0.4s, apply weights to model: 4.6s, calculate empty prompt: 0.4s). 
2024-02-17 22:19:59,112 - ControlNet - INFO - Preview Resolution = 512 Traceback (most recent call last): File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict output = await app.get_blocks().process_api( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api result = await self.call_function( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function prediction = await anyio.to_thread.run_sync( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper response = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/controlnet_ui/controlnet_ui_group.py", line 1013, in run_annotator result, is_image = preprocessor( ^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/utils.py", line 80, in decorated_func return cached_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/utils.py", line 64, in cached_func return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/global_state.py", line 37, in unified_preprocessor return preprocessor_modules[preprocessor_name](*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/processor.py", line 801, in run_model_instant_id self.load_model() File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/processor.py", line 739, in load_model from insightface.app import FaceAnalysis File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module> from . import app File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module> from .mask_renderer import * File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module> from ..thirdparty import face3d File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module> from . import mesh File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module> from .cython import mesh_core_cython ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so) Log when attempting to use ReActor: ################################################################ Launching launch.py... 
################################################################ Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] Version: v1.7.0 Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e CUDA 11.8 Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api Civitai Helper: Get Custom Model Folder [-] ADetailer initialized. version: 24.1.2, num models: 9 CivitAI Browser+: Aria2 RPC started ControlNet preprocessor location: /config/02-sd-webui/webui/extensions/sd-webui-controlnet/annotator/downloads 2024-02-17 22:27:16,501 - ControlNet - INFO - ControlNet v1.1.440 2024-02-17 22:27:16,690 - ControlNet - INFO - ControlNet v1.1.440 WARNING ⚠️ user config directory '/home/abc/.config/Ultralytics' is not writeable, defaulting to '/tmp' or CWD.Alternatively you can define a YOLO_CONFIG_DIR environment variable for this path. *** Error loading script: console_log_patch.py Traceback (most recent call last): File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts script_module = script_loading.load_module(scriptfile.path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/console_log_patch.py", line 4, in <module> import insightface File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module> from . import app File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module> from .mask_renderer import * File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module> from ..thirdparty import face3d File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module> from . import mesh File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module> from .cython import mesh_core_cython ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so) --- *** Error loading script: reactor_api.py Traceback (most recent call last): File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts script_module = script_loading.load_module(scriptfile.path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_api.py", line 17, in <module> from scripts.reactor_swapper import EnhancementOptions, swap_face, DetectionOptions File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module> import insightface File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module> from . 
import app File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module> from .mask_renderer import * File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module> from ..thirdparty import face3d File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module> from . import mesh File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module> from .cython import mesh_core_cython ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so) --- *** Error loading script: reactor_faceswap.py Traceback (most recent call last): File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts script_module = script_loading.load_module(scriptfile.path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_faceswap.py", line 18, in <module> from reactor_ui import ( File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/reactor_ui/__init__.py", line 2, in <module> import reactor_ui.reactor_tools_ui as ui_tools File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/reactor_ui/reactor_tools_ui.py", line 2, in <module> from scripts.reactor_swapper import build_face_model, blend_faces File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module> import insightface File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module> from . import app File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module> from .mask_renderer import * File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module> from ..thirdparty import face3d File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module> from . 
import mesh File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module> from .cython import mesh_core_cython ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so) --- *** Error loading script: reactor_swapper.py Traceback (most recent call last): File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts script_module = script_loading.load_module(scriptfile.path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module module_spec.loader.exec_module(module) File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module> import insightface File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module> from . import app File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module> from .mask_renderer import * File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module> from ..thirdparty import face3d File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module> from . import mesh File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module> from .cython import mesh_core_cython ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so) --- 22:27:20 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA Loading weights [4726d3bab1] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxlturbo/dreamshaperXL_v2TurboDpmppSDE.safetensors 2024-02-17 22:27:21,571 - AnimateDiff - INFO - Injecting LCM to UI. 2024-02-17 22:27:22,412 - AnimateDiff - INFO - Hacking i2i-batch. 2024-02-17 22:27:22,458 - ControlNet - INFO - ControlNet UI callback registered. Civitai Helper: Set Proxy: Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml Running on local URL: http://0.0.0.0:9000 To create a public link, set `share=True` in `launch()`. Startup time: 85.3s (prepare environment: 26.2s, import torch: 15.4s, import gradio: 3.1s, setup paths: 4.5s, initialize shared: 0.5s, other imports: 2.2s, setup codeformer: 0.6s, setup gfpgan: 0.1s, list SD models: 0.1s, load scripts: 27.9s, create ui: 3.7s, gradio launch: 0.6s). Applying attention optimization: xformers... done. Model loaded in 6.7s (load weights from disk: 1.4s, create model: 0.5s, apply weights to model: 3.3s, calculate empty prompt: 1.5s). 
The error with both of these is: ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found When I console into the container and run: strings /home/abc/miniconda3/bin/../lib/libstdc++.so.6 | grep GLIBCXX I get: root@1dd670cc5061:/# strings /home/abc/miniconda3/bin/../lib/libstdc++.so.6 | grep GLIBCXX GLIBCXX_3.4 GLIBCXX_3.4.1 GLIBCXX_3.4.2 GLIBCXX_3.4.3 GLIBCXX_3.4.4 GLIBCXX_3.4.5 GLIBCXX_3.4.6 GLIBCXX_3.4.7 GLIBCXX_3.4.8 GLIBCXX_3.4.9 GLIBCXX_3.4.10 GLIBCXX_3.4.11 GLIBCXX_3.4.12 GLIBCXX_3.4.13 GLIBCXX_3.4.14 GLIBCXX_3.4.15 GLIBCXX_3.4.16 GLIBCXX_3.4.17 GLIBCXX_3.4.18 GLIBCXX_3.4.19 GLIBCXX_3.4.20 GLIBCXX_3.4.21 GLIBCXX_3.4.22 GLIBCXX_3.4.23 GLIBCXX_3.4.24 GLIBCXX_3.4.25 GLIBCXX_3.4.26 GLIBCXX_3.4.27 GLIBCXX_3.4.28 GLIBCXX_3.4.29 GLIBCXX_DEBUG_MESSAGE_LENGTH _ZNKSt14basic_ifstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4 _ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEv@@GLIBCXX_3.4.5 _ZNKSbIwSt11char_traitsIwESaIwEE11_M_disjunctEPKw@GLIBCXX_3.4 _ZNKSt14basic_ifstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5 GLIBCXX_3.4.21 GLIBCXX_3.4.9 _ZSt10adopt_lock@@GLIBCXX_3.4.11 GLIBCXX_3.4.10 GLIBCXX_3.4.16 GLIBCXX_3.4.1 _ZNSt19istreambuf_iteratorIcSt11char_traitsIcEEppEv@GLIBCXX_3.4 GLIBCXX_3.4.28 _ZNSs7_M_copyEPcPKcm@GLIBCXX_3.4 GLIBCXX_3.4.25 _ZNSt19istreambuf_iteratorIcSt11char_traitsIcEEppEv@@GLIBCXX_3.4.5 _ZNSs7_M_moveEPcPKcm@@GLIBCXX_3.4.5 _ZNKSt13basic_fstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4 _ZNKSt13basic_fstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4 _ZNSbIwSt11char_traitsIwESaIwEE4_Rep26_M_set_length_and_sharableEm@@GLIBCXX_3.4.5 _ZNSs4_Rep26_M_set_length_and_sharableEm@GLIBCXX_3.4 _ZSt10defer_lock@@GLIBCXX_3.4.11 _ZN10__gnu_norm15_List_node_base4swapERS0_S1_@@GLIBCXX_3.4 _ZNSs9_M_assignEPcmc@@GLIBCXX_3.4.5 _ZNKSbIwSt11char_traitsIwESaIwEE15_M_check_lengthEmmPKc@@GLIBCXX_3.4.5 _ZNKSt14basic_ifstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5 _ZNSbIwSt11char_traitsIwESaIwEE7_M_moveEPwPKwm@GLIBCXX_3.4 GLIBCXX_3.4.24 _ZNVSt9__atomic011atomic_flag12test_and_setESt12memory_order@@GLIBCXX_3.4.11 GLIBCXX_3.4.20 _ZNSt11char_traitsIwE2eqERKwS2_@@GLIBCXX_3.4.5 GLIBCXX_3.4.12 _ZNSi6ignoreEv@@GLIBCXX_3.4.5 GLIBCXX_3.4.2 _ZNSt11char_traitsIcE2eqERKcS2_@@GLIBCXX_3.4.5 GLIBCXX_3.4.6 GLIBCXX_3.4.15 _ZNKSt13basic_fstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5 _ZNSs9_M_assignEPcmc@GLIBCXX_3.4 GLIBCXX_3.4.19 _ZNKSt14basic_ofstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4 _ZNSt19istreambuf_iteratorIwSt11char_traitsIwEEppEv@GLIBCXX_3.4 GLIBCXX_3.4.27 _ZN10__gnu_norm15_List_node_base7reverseEv@@GLIBCXX_3.4 _ZN10__gnu_norm15_List_node_base4hookEPS0_@@GLIBCXX_3.4 _ZNSt11char_traitsIwE2eqERKwS2_@GLIBCXX_3.4 _ZNSbIwSt11char_traitsIwESaIwEE7_M_copyEPwPKwm@GLIBCXX_3.4 _ZNSbIwSt11char_traitsIwESaIwEE7_M_copyEPwPKwm@@GLIBCXX_3.4.5 GLIBCXX_3.4.23 GLIBCXX_3.4.3 GLIBCXX_3.4.7 _ZNSi6ignoreEl@@GLIBCXX_3.4.5 _ZNKSbIwSt11char_traitsIwESaIwEE11_M_disjunctEPKw@@GLIBCXX_3.4.5 _ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEv@GLIBCXX_3.4 _ZNKSt13basic_fstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5 _ZNSbIwSt11char_traitsIwESaIwEE7_M_moveEPwPKwm@@GLIBCXX_3.4.5 GLIBCXX_3.4.18 _ZNSbIwSt11char_traitsIwESaIwEE4_Rep26_M_set_length_and_sharableEm@GLIBCXX_3.4 _ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEl@@GLIBCXX_3.4.5 _ZSt15future_category@@GLIBCXX_3.4.14 _ZNSi6ignoreEl@GLIBCXX_3.4 GLIBCXX_3.4.29 _ZNSt11char_traitsIcE2eqERKcS2_@GLIBCXX_3.4 _ZNKSs15_M_check_lengthEmmPKc@GLIBCXX_3.4 _ZN10__gnu_norm15_List_node_base8transferEPS0_S1_@@GLIBCXX_3.4 
_ZNSbIwSt11char_traitsIwESaIwEE9_M_assignEPwmw@GLIBCXX_3.4 _ZNVSt9__atomic011atomic_flag5clearESt12memory_order@@GLIBCXX_3.4.11 _ZNKSt14basic_ofstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5 _ZNKSt14basic_ofstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4 _ZNSs7_M_moveEPcPKcm@GLIBCXX_3.4 _ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEl@GLIBCXX_3.4 _ZNSbIwSt11char_traitsIwESaIwEE9_M_assignEPwmw@@GLIBCXX_3.4.5 _ZNKSbIwSt11char_traitsIwESaIwEE15_M_check_lengthEmmPKc@GLIBCXX_3.4 _ZNKSs11_M_disjunctEPKc@@GLIBCXX_3.4.5 _ZN10__gnu_norm15_List_node_base6unhookEv@@GLIBCXX_3.4 GLIBCXX_3.4.22 _ZNSt19istreambuf_iteratorIwSt11char_traitsIwEEppEv@@GLIBCXX_3.4.5 _ZNSi6ignoreEv@GLIBCXX_3.4 _ZNSs7_M_copyEPcPKcm@@GLIBCXX_3.4.5 GLIBCXX_3.4.8 GLIBCXX_3.4.13 _ZSt11try_to_lock@@GLIBCXX_3.4.11 _ZNKSt14basic_ofstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5 GLIBCXX_3.4.17 GLIBCXX_3.4.4 _ZNKSs15_M_check_lengthEmmPKc@@GLIBCXX_3.4.5 _ZNKSt14basic_ifstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4 _ZNSs4_Rep26_M_set_length_and_sharableEm@@GLIBCXX_3.4.5 GLIBCXX_3.4.26 _ZNKSs11_M_disjunctEPKc@GLIBCXX_3.4 root@1dd670cc5061:/# When I run: strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX I get: root@1dd670cc5061:/# strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX GLIBCXX_3.4 GLIBCXX_3.4.1 GLIBCXX_3.4.2 GLIBCXX_3.4.3 GLIBCXX_3.4.4 GLIBCXX_3.4.5 GLIBCXX_3.4.6 GLIBCXX_3.4.7 GLIBCXX_3.4.8 GLIBCXX_3.4.9 GLIBCXX_3.4.10 GLIBCXX_3.4.11 GLIBCXX_3.4.12 GLIBCXX_3.4.13 GLIBCXX_3.4.14 GLIBCXX_3.4.15 GLIBCXX_3.4.16 GLIBCXX_3.4.17 GLIBCXX_3.4.18 GLIBCXX_3.4.19 GLIBCXX_3.4.20 GLIBCXX_3.4.21 GLIBCXX_3.4.22 GLIBCXX_3.4.23 GLIBCXX_3.4.24 GLIBCXX_3.4.25 GLIBCXX_3.4.26 GLIBCXX_3.4.27 GLIBCXX_3.4.28 GLIBCXX_3.4.29 GLIBCXX_3.4.30 GLIBCXX_DEBUG_MESSAGE_LENGTH root@1dd670cc5061:/# So GLIBCXX_3.4.32 is not available. However, I've recently had to wipe out this container but I've had this container installed in the past. I think I first installed it around late November/early December. During this time, I have InstantID installed and working with no errors. I have no logs from this time because - why would I? It was just working. I don't understand how it worked just a week ago and now, with a clean install - it just won't. I also see this container has only been updated less than a week ago so I thought it could be that. So I pulled an older tag and tested. I tried another. Neither the :latest tag, nor any other will let InstantID or ReActor run without this GLIBCXX_3.4.30 error. I've tried any number of ways to fix this. Basically do a search for 'GLIBCXX_3.4.32 not found' and attempt any of the fixes, like symbolic links or even outright replacing the files. I've even messed around with trying to set 'LD_LIBRARY_PATH'. I just cannot get this to work. I can't find a way to update GCC inside the container either. The only thing that looks promising is from here which says: sudo add-apt-repository ppa:ubuntu-toolchain-r/test sudo apt-get update sudo apt-get install --only-upgrade libstdc++6 So how does one add a ppa to a linuxserver base image? I can't even specify a lower version of insightface as they don't make it available. I'm truly stuck which is doubly annoying because it did actually work for me before. I will say that everything else I use in Automatic1111 works perfectly fine. Also, I'm running on unRAID v6.12.6 and installed this container from CA. I can't be the only one who is wanting to use InstantID or ReActor. Anyone else have it working or not working? Any fix for this?
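To answer my own question about the PPA: the linuxserver container customisation docs describe a custom scripts folder (mapped to /custom-cont-init.d) whose scripts run as root at every container start, so in principle something like the script below would do it. This is only a sketch - the filename is arbitrary, I haven't run it, and since the traceback shows Python resolving /home/abc/miniconda3/lib/libstdc++.so.6 rather than the system copy, upgrading the OS library alone may not be enough; forcing the system library with LD_PRELOAD, or moving the conda copy aside, might also be needed.

#!/bin/bash
# goes in the host folder mapped to /custom-cont-init.d (filename is arbitrary)
apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt-get update
apt-get install -y --only-upgrade libstdc++6

Running it from a custom-cont-init script means it re-applies on every start, which matters here because anything installed at runtime is lost whenever the container is recreated.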