Araso

Members
  • Posts: 24

  1. The way I do it is to edit the WEBUI_VERSION key, separating my choices with a pipe. Then, after saving, I get a nice and easy dropdown to select my preferred UI.

Literally just add this (or replace the existing value) in the WEBUI_VERSION key: 02.forge

It should be noted that, at the moment, Forge development seems to have stalled, unfortunately. Some things are broken, Regional Prompter being one of them. I use RP a lot, so I switched back to standard A1111. If you're careful, losing the memory optimisations that Forge gives you doesn't really matter much. Hopefully Forge picks back up again, but at least there's a choice between the two.
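If it helps, here's roughly what that variable looks like if you run the container from the command line instead of the template editor. The container and image names below are just placeholders for whatever your setup uses; only the WEBUI_VERSION value is the point.

  # Pipe-separated values are what produce the selection dropdown mentioned above.
  # Quote the value so the shell doesn't treat | as a pipe; a single value such as
  # '02.forge' just launches that UI directly.
  docker run -d \
    --name stable-diffusion \
    -e WEBUI_VERSION='02.forge|02.sd-webui' \
    some/stable-diffusion-image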
  2. I must admit, I didn't think to do this. I expected that, at most, the env would be removed and everything else left in place whenever I start the container. But since there's nothing there I've customised or need to keep, I've gone ahead and deleted everything. I imagine that since you said you don't use Kohya, you delete the whole directory regularly - which I don't - and that's why we've sometimes been getting different results.

The entire log now (after three starts):

[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────
(linuxserver.io ASCII art banner)
Based on images from linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID: 99
User GID: 100
───────────────────────────────────────
[custom-init] No custom files found, skipping...
App is starting!
[ls.io-init] done.
Local branch up-to-date, keeping existing venv
Channels:
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
# All requested packages already installed.
Channels:
 - conda-forge
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
# All requested packages already installed.
Requirement already satisfied: pip in /config/70-kohya/env/lib/python3.10/site-packages (24.0)
15:20:04-388204 INFO Python version is 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
15:20:04-556792 INFO Submodule initialized and updated.
15:20:04-562357 INFO Installing python dependencies. This could take a few minutes as it downloads files.
15:20:04-563542 INFO If this operation ever runs too long, you can rerun this script in verbose mode to check.
15:20:04-565226 INFO Kohya_ss GUI version: v23.0.15
15:20:04-567755 INFO Installing modules from requirements_linux.txt...
15:20:04-570023 INFO Installing modules from requirements.txt...
15:20:04-575680 INFO Installing package: -e ./sd-scripts
15:20:27-118198 INFO Configuring accelerate...
15:20:27-120563 WARNING Could not automatically configure accelerate. Please manually configure accelerate with the option in the menu or with: accelerate config.
LAUNCHING KOHYA_SS !
15:20:32-792246 INFO headless: True

Sorted! Now you can check the forum without expecting yet another list of errors from me. Cheers for all your work!
  3. OK, tested.

To start with, I simply started the container without touching the env or anything else: kohya-log-first-start.txt

So there's a message in the log:

Remote branch is ahead. If you encouter any issue after upgrade, try to clean venv for clean packages install

There was also an error:

error: Your local changes to the following files would be overwritten by checkout:
README.md
library/sdxl_original_unet.py
Please commit your changes or stash them before you switch branches.
Aborting
fatal: Unable to checkout '6b1520a46b1b6ee7c33092537dc9449d1cc4f56f' in submodule path 'sd-scripts'
22:47:26-432880 ERROR Error during Git operation: Command '['git', 'submodule', 'update', '--init', '--recursive', '--quiet']' returned non-zero exit status 1.

So I deleted the file 'Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch' and started the container again while I watched to make sure it wiped the env correctly. It did, so that part was working as intended. Here's the log: kohya-log-second-start.txt

I still got that error:

error: Your local changes to the following files would be overwritten by checkout:
README.md
library/sdxl_original_unet.py
Please commit your changes or stash them before you switch branches.
Aborting
fatal: Unable to checkout '6b1520a46b1b6ee7c33092537dc9449d1cc4f56f' in submodule path 'sd-scripts'
22:54:37-118317 ERROR Error during Git operation: Command '['git', 'submodule', 'update', '--init', '--recursive', '--quiet']' returned non-zero exit status 1.

Then I stopped and started the container again for a third time: kohya-log-third-start.txt

This also had the same error as above. All three times I started the container, I could still load the UI without problems. What I haven't done, because it takes such a long time, is actually test creating a LoRA. For all I know it might work fine, or it might fail - as I say, it takes a long time to test something like that.

I've never made any changes to those two files (README.md and library/sdxl_original_unet.py), so it shouldn't be a conflict caused by protecting user-modified files. There's something odd here, because I imagine the update process is pretty much identical for all the UIs, yet I don't see these problems in A1111 or Forge. So it's something specific to Kohya, as far as I can tell. They're all just pulling the latest branch from GitHub, so they should all behave the same way.
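If anyone else hits the same checkout failure, one thing that should clear it by hand is throwing away whatever git thinks was modified inside the submodule and retrying. This is a rough sketch only - the paths are taken from the error above, and it's obviously no substitute for the container handling it itself:

  cd /config/70-kohya/kohya_ss/sd-scripts
  git checkout -- README.md library/sdxl_original_unet.py   # discard the local changes git is complaining about
  cd /config/70-kohya/kohya_ss
  git submodule update --init --recursive                    # retry the checkout that aborted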
  4. It might be worth mentioning that the last commit, as you can see here, was 5 days ago. So the env that was there should have been fully up to date and not in need of an update at all. All of these reinstalls have been happening well after that last commit, so I think preventing the normal update procedure is maybe a bad idea. I think it's something else - maybe some sort of conflict somewhere. But at the very least, if you do go down that route, there's now the option of deleting the file to wipe the env when needed.
  5. This is fixed, and all existing files were moved over. Nice! This also is fixed. This, unfortunately, is not fixed - I keep getting this every time:

Remote branch is ahead. Wiping venv for clean packages install
Updating 6162193..5bbb4fc

Full logs for the first three starts after installing this new version: kohya-log-first-start.txt, kohya-log-second-start.txt, kohya-log-third-start.txt
  6. Everything seems to be working. However, I have some observations.

1. The permissions files

Two files are created in /appdata/stable-diffusion:

Delete this file to clean virtual env and dependencies at next launch
Delete this file to reset access rights at next launch

I haven't tested these, for reasons I'll get to below, but I would suggest naming them slightly differently, like this:

Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch
Delete_this_file_to_reset_access_rights_at_next_launch

The reason is that it can be tricky to delete files with spaces in them if anyone wants to do this from the console (see the quick example at the end of this post). Having underscores and no spaces removes all doubt - no need for quotation marks or escape characters. Linux-based systems and spaces don't mix very well.

2. Kohya installs fine, but...

Kohya has installed without any problems and I've loaded the WebUI fine. However, it wipes the env and reinstalls on every start. This is without deleting the 'Delete this file to clean virtual env and dependencies at next launch' file. Specifically, this directory is wiped every time: /appdata/stable-diffusion/70-kohya/env

These are the logs: kohya-log-first-install.txt, kohya-log-second-start.txt, kohya-log-third-start.txt

The ones called 'second' and 'third' are actually later than that - more like 'fifth' and 'sixth' - but I didn't realise it was wiping the env until after a few stop/starts, so I didn't save those earlier logs. They're all pretty much the same, though. Every time it reinstalls I can still open the UI, so it still works - it just takes a long, long time to go through all of the reinstallation first.

3. Kohya works

Kohya seems to be working fine. I've trained a (very crappy) quick LoRA and tested it in A1111, and I can see it's definitely working. It used CUDA correctly. Although, as I mentioned in my previous post, I've always had NVIDIA_VISIBLE_DEVICES set correctly, so I don't know if it would work properly without that variable.

4. A1111 outputs are still going to the wrong directory

To be clear, I'm talking about standard Automatic1111. You already fixed Forge to go to the correct directory, but A1111 is still saving outputs in:
/appdata/stable-diffusion/02-sd-webui/webui/output
instead of:
/appdata/stable-diffusion/outputs/02-sd-webui

That's pretty much it for now. It takes forever when Kohya reinstalls on every start of the container, so I haven't had much time to do a deep test of everything. At least Kohya installs now, and it's not exactly a daily-use kind of thing, so if it would take a lot of work to fix, then I say leave it until you have the time. This also didn't leave me with much time to test deleting the files to wipe the env or reset permissions - for one thing, Kohya is deleting the env anyway without me having to touch that file, and I didn't really want to test deleting my working A1111 or Forge.

Hopefully some or all of these fixes are quick and easy. As soon as you have a new version, I'll be on it for testing.
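The quick example I mentioned in observation 1 - deleting the sentinel files from the console with and without spaces in the name (paths as they appear in this container's appdata):

  # With spaces, the whole path has to be quoted (or every space escaped):
  rm "/appdata/stable-diffusion/Delete this file to clean virtual env and dependencies at next launch"

  # With underscores, it just works, and tab completion is painless too:
  rm /appdata/stable-diffusion/Delete_this_file_to_clean_virtual_env_and_dependencies_at_next_launch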
  7. I've always had this variable set: NVIDIA_VISIBLE_DEVICES

Does this mean the CLEAN_ENV variable now does nothing and/or can be safely removed from the template?

I'll test Kohya and see what's what.
  8. If you read my posts a bit farther back, the only real change I've made was adding UMASK after I had some trouble with file permissions. To do this, I added a variable to the template:

Name/Key: UMASK
Value: 000

I can't think of anything significant off the top of my head that I changed in A1111's settings. I've been using Forge and had to switch to standard A1111 due to a Forge bug that's there at the moment, so my settings are basically at the defaults other than some cosmetic things.

You could try disabling all your custom extensions except this one, then enable your other extensions one-by-one until you find where it breaks. I've heard of people having trouble with conflicting extensions before. If not that, then maybe stop A1111, temporarily move config.json and ui-config.json somewhere safe, and start it up again so it recreates a default config (rough commands at the end of this post).

One thing I noticed was that it worked in the ComfyUI tab just fine with 'realvisxlV40_v40LightningBakedvae.safetensors', but it errored out when I tried to generate anything with that model in the txt2img tab. It would only work with a full SDXL model. For me, standard SDXL takes way too long when I've been used to turbo and lightning models for a while now, so I've had to disable this extension just to be able to use them. The point of saying all of this is: maybe if you have a turbo or lightning model loaded, that alone could be causing you problems.
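For the config reset, something along these lines is all I mean. The container name here is a placeholder for whatever yours is called; the webui path matches the one from my logs earlier in the thread:

  docker stop stable-diffusion                      # placeholder container name
  cd /appdata/stable-diffusion/02-sd-webui/webui
  mv config.json config.json.bak                    # keep the originals so you can put them back
  mv ui-config.json ui-config.json.bak
  docker start stable-diffusion                     # A1111 recreates default config files on launch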
  9. Actually, yes. I didn't even know this extension was available or even possible, so cheers for the heads up. I've installed it, and the first thing is that it tells you to reload the UI. That isn't enough - a full stop and start of the container is what makes it work. Presumably you've done that, though.

My log:

################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Python 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
CUDA 12.1
Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api
Civitai Helper: Get Custom Model Folder
20:02:49 - ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA
Loading weights [d6a48d3e20] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxllightning/realvisxlV40_v40LightningBakedvae.safetensors
Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Civitai Helper: Set Proxy:
Running on local URL: http://0.0.0.0:9000
To create a public link, set `share=True` in `launch()`.
[sd-webui-comfyui] Started callback listeners for process webui
[sd-webui-comfyui] Starting subprocess for comfyui...
[sd-webui-comfyui] Created a reverse proxy route to ComfyUI: /sd-webui-comfyui/comfyui
Startup time: 59.9s (prepare environment: 16.8s, import torch: 20.4s, import gradio: 2.8s, setup paths: 7.2s, initialize shared: 0.3s, other imports: 1.7s, list SD models: 0.2s, load scripts: 6.5s, create ui: 3.1s, gradio launch: 0.2s, add APIs: 0.4s).
Applying attention optimization: xformers... done.
Model loaded in 4.9s (load weights from disk: 0.2s, create model: 0.6s, apply weights to model: 3.5s, calculate empty prompt: 0.3s).
[ComfyUI] [sd-webui-comfyui] Setting up IPC...
[ComfyUI] [sd-webui-comfyui] Using inter-process communication strategy: File system
[ComfyUI] [sd-webui-comfyui] Started callback listeners for process comfyui
[ComfyUI] [sd-webui-comfyui] Patching ComfyUI...
[ComfyUI] [sd-webui-comfyui] Launching ComfyUI with arguments: --listen 127.0.0.1 --port 8189
[ComfyUI] ** ComfyUI startup time: 2024-03-20 20:03:11.148687
[ComfyUI] ** Platform: Linux
[ComfyUI] ** Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
[ComfyUI] ** Python executable: /config/02-sd-webui/conda-env/bin/python3
[ComfyUI] ** Log path: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/comfyui.log
[ComfyUI] ### Loading: ComfyUI-Manager (V2.10.2)
[ComfyUI] ### ComfyUI Revision: 2077 [4b9005e9] | Released on '2024-03-20'
[ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json (this line is repeated nine times)
[ComfyUI] registered ws - sandbox_tab - 7fdd974d65054604850394a243948462
[ComfyUI] registered ws - preprocess_latent_img2img - fff2e5a47a004d34a784737b769602da
[ComfyUI] registered ws - postprocess_latent_txt2img - 57d1208bf4254d4eb7a75f53330c7a8c
[ComfyUI] registered ws - postprocess_img2img - 95e9cc98ce84468e942c503f0bb45829
[ComfyUI] registered ws - preprocess_img2img - 48382e9789424d21ac3e6da647c14cc1
[ComfyUI] registered ws - postprocess_latent_img2img - 0262e3f624aa4c35ae54783f2e19ac7b
[ComfyUI] registered ws - before_save_image_txt2img - 2fe9c48759004cd7a7d68faec7fd8de2
[ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json (this line is repeated twice)
[ComfyUI] registered ws - postprocess_txt2img - 8074d5435f164a26ad0fb02317be8a8a
[ComfyUI] registered ws - before_save_image_img2img - 11f4a5b8fb2e4dbaaac9f154a7cd3ccb
[ComfyUI] registered ws - postprocess_image_img2img - 34002d2f69d94cb9a459ddc77987b38c
[ComfyUI] registered ws - postprocess_image_txt2img - 2e0a84b53e034bb18eda0e806f6d4ecc
[ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json
[ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json
[ComfyUI] FETCH DATA from: /config/02-sd-webui/webui/extensions/sd-webui-comfyui/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json

Then I get:

I've only had a few minutes of testing it. Generating from the ComfyUI tab is working fine. As you can see, I've changed the checkpoint, resolution and CFG/steps. However, generating from the txt2img tab results in:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

So I'm going to have to figure that one out.
  10. A quickie bug report which should be an easy fix: I've had to temporarily switch from Forge to standard A1111 because Regional Prompter is broken in Forge at the moment.

Standard A1111 saves output images to:
/appdata/stable-diffusion/02-sd-webui/webui/output
instead of:
/appdata/stable-diffusion/outputs

It's easy enough to get to them, but things like Infinite Image Browsing are locked to whatever location is defined as the output. This means I can't find something I generated in Forge and send it with all its parameters to txt2img, for example. And vice versa - when Forge is patched for Regional Prompter, I won't be able to browse easily to what I've been creating in standard A1111, at least not without moving everything manually.
  11. I don't want to pile anything extra on you since, at least for me, Forge is the most important thing. But I have been trying to install Kohya and I consistently get:

Downloading and Extracting Packages: ...working... done
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
'/opt/sd-install/parameters/70.txt' -> '/config/70-kohya/parameters.txt'
Cloning into 'kohya_ss'...
Already up to date.
Requirement already satisfied: pip in ./venv/lib/python3.10/site-packages (23.0.1)
Collecting pip
Using cached pip-24.0-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.0.1
Uninstalling pip-23.0.1:
Successfully uninstalled pip-23.0.1
Successfully installed pip-24.0
Obtaining file:///config/70-kohya/kohya_ss/sd-scripts (from -r requirements.txt (line 46))
ERROR: file:///config/70-kohya/kohya_ss/sd-scripts (from -r requirements.txt (line 46)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
venv folder does not exist. Not activating...
Warning: LD_LIBRARY_PATH environment variable is not set. Certain functionalities may not work correctly. Please ensure that the required libraries are properly configured.
If you use WSL2 you may want to: export LD_LIBRARY_PATH=/usr/lib/wsl/lib/
Traceback (most recent call last):
File "/config/70-kohya/kohya_ss/setup/validate_requirements.py", line 18, in <module>
from kohya_gui.custom_logging import setup_logging
File "/config/70-kohya/kohya_ss/kohya_gui/custom_logging.py", line 6, in <module>
from rich.theme import Theme
ModuleNotFoundError: No module named 'rich'
Validation failed. Exiting...
/entry.sh: line 11: /70: No such file or directory
/entry.sh: line 12: /config/scripts/70: No such file or directory
/entry.sh: line 13: /config/scripts/70.sh: No such file or directory
error in webui selection variable
App is starting!
Channels:
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done

Then it just bootloops endlessly. This is with a completely clean install after deleting the whole '70-kohya' directory. I also tried both with and without the UMASK variable, by removing it from the template altogether, and I get the same error every time.

Running Kohya would be nice for me, and the fix might be something really simple that was overlooked. If it's quick and easy to fix, then great. If it's more than that, then I can live without it as long as Forge is working.
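For what it's worth, that "does not appear to be a Python project" line suggests the sd-scripts submodule never actually got populated. A rough way to check and re-fetch it by hand (paths taken from the error above, and obviously not a substitute for the container fixing this itself):

  ls /config/70-kohya/kohya_ss/sd-scripts             # should contain setup.py / pyproject.toml; empty means the clone failed
  cd /config/70-kohya/kohya_ss
  git submodule update --init --recursive sd-scripts  # re-fetch the pinned sd-scripts commit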
  12. An update... I did all of this yesterday, but I got a bit sick of it in the end so I'm reporting it today. I spent about 6.5 hours on it - reinstalling takes a long time when you do it over and over.

Deleting conda-env and installing over the top didn't work (kind of as expected). So I renamed the whole '/appdata/stable-diffusion/02-sd-webui' to '/appdata/stable-diffusion/02-sd-webui-bak', and then it did install without a problem. However, I've spent some time installing extensions and customising my config, so I'd like all of that back. Simply copying files over will not work - some do, but not all. It's more reliable to reinstall extensions from the Extensions tab than to copy all of the directories over from the backup. The two other main files I want back are 'styles.csv' and 'ui-config.json'.

The reason I was at this for 6.5 hours is that I first tried restoring all of the files and directories from a backup in one go, but then I got a bunch of errors. So I deleted everything again and restarted from scratch, then began restoring things one-by-one with a stop/start of the container in between to figure out exactly what was causing the errors. That's why it took so long, and it's how I figured out that reinstalling extensions from the tab is the way to go.

In 'parameters.forge.txt' I was using, for a long time:

# Web + Network
--listen
--port 9000
# options
--enable-insecure-extension-access
# --xformers
# --api
--cuda-malloc
--cuda-stream
--pin-shared-memory
# --no-half-vae
# --disable-nan-check
# --update-all-extensions
# --reinstall-xformers
# --reinstall-torch

Now, for some reason, this refuses to work without errors. So I've gone back to the default included with this container:

# Web + Network
--listen
--port 9000
# options
--enable-insecure-extension-access
--xformers
--api
--cuda-malloc
--cuda-stream
#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

Anyway, a couple of observations.

Firstly, I've finally remembered that every time I've had to reset permissions on directories and files to manipulate them in the past, it removes the executable flag from .sh files. This is part of what's been preventing the container from booting properly. It might be worth, at the point where you run the "chown'ing directory to ensure correct permissions." step, also checking that any .sh files are still executable if they need to be.

Secondly, after a fair bit of reading around and looking at a few container templates, I've concluded that nobody really knows what to set UMASK to. linuxserver set their containers to 022; binhex sets them to 000.

I tested (with a whole 'delete everything and start again' approach) the linuxserver approach of a UMASK of 022. When I generate images, it gives me:

Directories: drwxr-xr-x
Files: -rw-r--r--

I cannot do anything with these files/directories unless I reset their permissions, so this is no good.

When I use a UMASK of 000, it gives me outputs with:

Directories: drwxrwxrwx
Files: -rw-rw-rw-

These files/directories I can do something with. They have the 'rw' flag all the way through, so I can delete any generations I don't want straight from my image viewer. This is all I ever wanted, so this works for me. The other thing I wanted, since I make use of Dynamic Prompts, is access to /appdata/stable-diffusion/02-sd-webui/forge/extensions/sd-dynamic-prompts/wildcards. Now this also works as expected.

Because this directory is so buried, and because this container resets permissions on every start, it became very tedious very quickly to browse all the way through to reset permissions on just this directory. This is why I was simply resetting permissions on the whole /appdata/stable-diffusion/02-sd-webui directory, which led to other problems.

The short version is (at least for me):

Use a UMASK of 000
Use the default parameters.forge.txt
Only reinstall extensions from the Extensions tab

Anyway, so it's sorted now. Hopefully?
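In case it helps anyone make sense of those two results: the UMASK value is just bits masked off the defaults new files and directories are created with, which is exactly why 022 gave me read-only-for-group/other and 000 gave me full rw. Quick way to see it on any Linux box:

  # 666 (files) and 777 (directories) are the usual creation defaults; umask knocks bits off them.
  umask 022          # 666 -> 644 (-rw-r--r--), 777 -> 755 (drwxr-xr-x) - what I saw with the linuxserver default
  touch test-022 && mkdir dir-022
  umask 000          # 666 -> 666 (-rw-rw-rw-), 777 -> 777 (drwxrwxrwx) - what I get with UMASK=000
  touch test-000 && mkdir dir-000
  ls -ld test-* dir-*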
  13. Instead of possibly breaking my working install, I added a new container using the existing stable-diffusion template but renamed it stable-diffusion-TEST. Installing a completely new and clean copy into a brand new config directory went without a hitch.

I think it's this constant permission juggling that's the problem. I think the permissions should be set once and then left alone, instead of the current behaviour where they are reset every single time. It also resets the output directory, meaning I can't perform any operations like deleting unwanted generations until I manually reset permissions myself.

I'd prefer it if it was more like binhex's method (at the very bottom of here), where there is a text file inside the config directory and nothing happens as long as it exists. If you manually delete the file, permissions are reset and a new text file is put back to stop it from happening repeatedly. Just some file called 'delete_me_to_reset_permissions.txt' or something (there's a rough sketch of the idea at the end of this post).

For example, when I download with sonarr and radarr, they create directories and files with permissions where I can move, rename, delete or do anything else with my file manager:

sonarr directories: drwxrwxrwx
sonarr files: -rw-rw-rw-
radarr directories: drwxrwxrwx
radarr files: -rw-rw-rw-

However, the output of a test generation I've just done in this separate and completely clean install of this container gives directories and files with these permissions:

This container directories: drwxr-xr-x
This container files: -rw-r--r--

This means I cannot delete files or directories I don't want unless I manually reset the permissions. My PUID and PGID are correct.

This finally led me to the solution to at least this problem. The short version is that you might want to specify a UMASK variable in the template. I've just started my original :3.1.0 install and generated a few images, and it now creates:

Directories: drwxrwxrwx
Files: -rw-rw-rw-

To get this, I added a variable to the template:

Name/Key: UMASK
Value: 000

Linuxserver base images have UMASK built in, so I made use of that. More info here.

As for updating my main install to :latest - maybe tomorrow. I haven't really had time today and I didn't want to break my existing install while it's working, but I'll get around to it when I have a bit more time.
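The sketch of the sentinel-file idea I mean, in case it's useful. The file name and the chown target are made up for illustration - this is the pattern, not the container's actual script:

  SENTINEL=/config/delete_me_to_reset_permissions.txt
  if [ ! -f "$SENTINEL" ]; then
      chown -R abc:abc /config                        # one-off ownership/permissions pass
      find /config -name '*.sh' -exec chmod +x {} +   # keep start scripts executable
      touch "$SENTINEL"                               # recreate the marker so this only runs again when the user deletes it
  fi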
  14. Yes, those errors are in the log after removing conda-env. I know I called it 'venv' but it's the same thing. Then I deleted conda-env and changed the tag to :3.1.0 and no errors at all. Just to be clear - this is still running Forge. What did you change?
  15. I updated to the latest version and got these in the log: Building wheels for collected packages: insightface Building wheel for insightface (pyproject.toml): started Building wheel for insightface (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for insightface (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [221 lines of output] WARNING: pandoc not enabled running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-312 creating build/lib.linux-x86_64-cpython-312/insightface copying insightface/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface creating build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/common.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/face_analysis.py -> build/lib.linux-x86_64-cpython-312/insightface/app copying insightface/app/mask_renderer.py -> build/lib.linux-x86_64-cpython-312/insightface/app creating build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/insightface_cli.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/model_download.py -> build/lib.linux-x86_64-cpython-312/insightface/commands copying insightface/commands/rec_add_mask_param.py -> build/lib.linux-x86_64-cpython-312/insightface/commands creating build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/image.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/pickle_object.py -> build/lib.linux-x86_64-cpython-312/insightface/data copying insightface/data/rec_builder.py -> build/lib.linux-x86_64-cpython-312/insightface/data creating build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/arcface_onnx.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/attribute.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/inswapper.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/landmark.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/model_store.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/model_zoo.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/retinaface.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo copying insightface/model_zoo/scrfd.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty copying insightface/thirdparty/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty creating build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/constant.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/download.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying 
insightface/utils/face_align.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/filesystem.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/storage.py -> build/lib.linux-x86_64-cpython-312/insightface/utils copying insightface/utils/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/utils creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d copying insightface/thirdparty/face3d/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh copying insightface/thirdparty/face3d/mesh/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy copying insightface/thirdparty/face3d/mesh_numpy/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/fit.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/load.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model copying insightface/thirdparty/face3d/morphable_model/morphabel_model.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model running egg_info writing insightface.egg-info/PKG-INFO writing dependency_links to insightface.egg-info/dependency_links.txt writing entry points to insightface.egg-info/entry_points.txt writing requirements to insightface.egg-info/requires.txt writing top-level names to insightface.egg-info/top_level.txt reading manifest file 'insightface.egg-info/SOURCES.txt' writing manifest file 'insightface.egg-info/SOURCES.txt' /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 
'insightface.data.images' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.data.images' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.data.images' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.data.images' to be distributed and are already explicitly excluding 'insightface.data.images' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. ******************************************************************************** !! check.warn(importable) /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.data.objects' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.data.objects' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.data.objects' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.data.objects' to be distributed and are already explicitly excluding 'insightface.data.objects' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html Failed to build insightface [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. 
******************************************************************************** !! check.warn(importable) /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.thirdparty.face3d.mesh.cython' is absent from the `packages` configuration. !! ******************************************************************************** ############################ # Package would be ignored # ############################ Python recognizes 'insightface.thirdparty.face3d.mesh.cython' as an importable package[^1], but it is absent from setuptools' `packages` configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'insightface.thirdparty.face3d.mesh.cython' is explicitly added to the `packages` configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'insightface.thirdparty.face3d.mesh.cython' to be distributed and are already explicitly excluding 'insightface.thirdparty.face3d.mesh.cython' via `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`, you can try to use `exclude_package_data`, or `include-package-data=False` in combination with a more fine grained `package-data` configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any `.py` files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. ******************************************************************************** !! 
check.warn(importable) creating build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/Tom_Hanks_54745.png -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_black.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_blue.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_green.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/mask_white.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images copying insightface/data/images/t1.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images creating build/lib.linux-x86_64-cpython-312/insightface/data/objects copying insightface/data/objects/meanshape_68.pkl -> build/lib.linux-x86_64-cpython-312/insightface/data/objects creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core.h -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.c -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.pyx -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython copying insightface/thirdparty/face3d/mesh/cython/setup.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython running build_ext building 'insightface.thirdparty.face3d.mesh.cython.mesh_core_cython' extension creating build/temp.linux-x86_64-cpython-312 creating build/temp.linux-x86_64-cpython-312/insightface creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Iinsightface/thirdparty/face3d/mesh/cython -I/tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/numpy/core/include -I/home/abc/miniconda3/include/python3.12 -c insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -o build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython/mesh_core.o error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for insightface ERROR: Could not build wheels for insightface, which is required to install pyproject.toml-based projects And: Building wheels for collected packages: lmdb Building wheel for lmdb (setup.py): started Building wheel for lmdb (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. 
│ exit code: 1 ╰─> [22 lines of output] py-lmdb: Using bundled liblmdb with py-lmdb patches; override with LMDB_FORCE_SYSTEM=1 or LMDB_PURE=1. patching file lmdb.h patching file mdb.c py-lmdb: Using CPython extension; override with LMDB_FORCE_CFFI=1. running bdist_wheel running build running build_py creating build/lib.linux-x86_64-cpython-312 creating build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/__init__.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/__main__.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/_config.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/cffi.py -> build/lib.linux-x86_64-cpython-312/lmdb copying lmdb/tool.py -> build/lib.linux-x86_64-cpython-312/lmdb running build_ext building 'cpython' extension creating build/temp.linux-x86_64-cpython-312 creating build/temp.linux-x86_64-cpython-312/build creating build/temp.linux-x86_64-cpython-312/build/lib creating build/temp.linux-x86_64-cpython-312/lmdb gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Ilib/py-lmdb -Ibuild/lib -I/home/abc/miniconda3/include/python3.12 -c build/lib/mdb.c -o build/temp.linux-x86_64-cpython-312/build/lib/mdb.o -DHAVE_PATCHED_LMDB=1 -UNDEBUG -w error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lmdb Running setup.py clean for lmdb Failed to build lmdb ERROR: Could not build wheels for lmdb, which is required to install pyproject.toml-based projects And: CUDA Stream Activated: True Traceback (most recent call last): File "/config/02-sd-webui/forge/launch.py", line 51, in <module> main() File "/config/02-sd-webui/forge/launch.py", line 47, in main start() File "/config/02-sd-webui/forge/modules/launch_utils.py", line 541, in start import webui File "/config/02-sd-webui/forge/webui.py", line 19, in <module> initialize.imports() File "/config/02-sd-webui/forge/modules/initialize.py", line 53, in imports from modules import processing, gradio_extensons, ui # noqa: F401 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/02-sd-webui/forge/modules/processing.py", line 18, in <module> import modules.sd_hijack File "/config/02-sd-webui/forge/modules/sd_hijack.py", line 5, in <module> from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches File "/config/02-sd-webui/forge/modules/sd_hijack_optimizations.py", line 13, in <module> from modules.hypernetworks import hypernetwork File "/config/02-sd-webui/forge/modules/hypernetworks/hypernetwork.py", line 13, in <module> from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors File "/config/02-sd-webui/forge/modules/sd_models.py", line 20, in <module> from modules_forge import forge_loader File "/config/02-sd-webui/forge/modules_forge/forge_loader.py", line 5, in <module> from ldm_patched.modules import model_detection File "/config/02-sd-webui/forge/ldm_patched/modules/model_detection.py", line 5, in <module> import ldm_patched.modules.supported_models File "/config/02-sd-webui/forge/ldm_patched/modules/supported_models.py", line 5, in <module> from . 
import model_base File "/config/02-sd-webui/forge/ldm_patched/modules/model_base.py", line 6, in <module> from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 22, in <module> from ..attention import SpatialTransformer, SpatialVideoTransformer, default File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/attention.py", line 21, in <module> import xformers ModuleNotFoundError: import of xformers halted; None in sys.modules /entry.sh: line 11: /02.forge: No such file or directory /entry.sh: line 12: /config/scripts/02.forge: No such file or directory /entry.sh: line 13: /config/scripts/02.forge.sh: No such file or directory error in webui selection variable App is starting! Channels: - defaults Platform: linux-64 Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Then it just bootloops endlessly. This is after I delete the venv so it's a clean install. I changed the tag back to :3.1.0 and that installs and runs perfectly fine.
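For anyone landing on the same "gcc ... failed: Permission denied" wall before a fixed image shows up, the pattern fits what I found earlier with .sh files losing their executable flag - the conda env's own binaries appear to have lost theirs too. A rough manual band-aid, with the path taken straight from the error message, to try at your own risk with the container stopped:

  # Put the execute bit back on everything in the conda env's bin directory,
  # then let pip retry the insightface/lmdb wheel builds on the next start.
  find /config/02-sd-webui/conda-env/bin -type f -exec chmod +x {} +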