[SUPPORT] - stable-diffusion Advanced



@brainbone_ @Joly0, honestly I don't know if I want to add more stuff ... it's already hard to test everything.

That's why I added the ability to run custom scripts :)
(It should already be possible in the latest version, if I didn't forget to push it 😅)

I have a script for Forge available here:

https://github.com/grokuku/stable-diffusion-custom-scripts/
 

You just have to put it in this folder:

/mnt/user/appdata/stable-diffusion/scripts


and in the container config you replace the number (e.g. 02) with the script name (e.g. sd-webui-forge.sh).
I did not test it much, but it seems to work.

The WebUI will be installed in this folder:
/mnt/user/appdata/stable-diffusion/00-custom/sd-webui-forge
 

And outputs will be in this one:
/mnt/user/appdata/stable-diffusion/outputs/00-custom/sd-webui-forge
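As a concrete sketch of the steps above (the raw download URL and the `WEBUI_VERSION` variable name are assumptions based on later posts in this thread; check your template):

```shell
# Put the custom script into the scripts folder on the host
cd /mnt/user/appdata/stable-diffusion/scripts
wget https://github.com/grokuku/stable-diffusion-custom-scripts/raw/main/sd-webui-forge.sh

# Then, in the container template, replace the WebUI number with the
# script name, e.g.:
#   WEBUI_VERSION=sd-webui-forge.sh
```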

 

Link to comment
3 hours ago, Wesley_Sun said:

Hi @Holaf,

 

Thank you for bringing Stable Diffusion to the CA App.

 

May I know if there is a way to split /models and /outputs onto a different hard disk? I would like to move the heavy I/O to a separate SSD instead of keeping it under /appdata.

 

Thank you

Yes, there is a way :)
In Unraid, at the bottom of the container config there is an option to add another path.

You'll have to add two paths:
one that points to this path inside the container:
/config/outputs

and one to:
/config/models

For example, for outputs it should look like this:
[screenshot: Unraid path mapping for /config/outputs]
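In raw Docker terms (the Unraid template UI does the equivalent), the two extra mappings could look like this; the host paths are hypothetical examples for a dedicated SSD:

```shell
# Hypothetical host paths on a separate SSD; container paths are from the post.
docker run ... \
  -v /mnt/disks/fast-ssd/sd-models:/config/models \
  -v /mnt/disks/fast-ssd/sd-outputs:/config/outputs \
  ...
```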


I'm having a real struggle trying to get either InstantID or ReActor to work in Automatic1111.

 

Log when attempting to use InstantID:

################################################################
Launching launch.py...
################################################################
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api
Civitai Helper: Get Custom Model Folder
[-] ADetailer initialized. version: 24.1.2, num models: 9
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: /config/02-sd-webui/webui/extensions/sd-webui-controlnet/annotator/downloads
2024-02-17 22:17:30,284 - ControlNet - INFO - ControlNet v1.1.440
2024-02-17 22:17:30,475 - ControlNet - INFO - ControlNet v1.1.440
WARNING ⚠️ user config directory '/home/abc/.config/Ultralytics' is not writeable, defaulting to '/tmp' or CWD.Alternatively you can define a YOLO_CONFIG_DIR environment variable for this path.
Loading weights [4726d3bab1] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxlturbo/dreamshaperXL_v2TurboDpmppSDE.safetensors
2024-02-17 22:17:33,538 - AnimateDiff - INFO - Injecting LCM to UI.
2024-02-17 22:17:34,314 - AnimateDiff - INFO - Hacking i2i-batch.
2024-02-17 22:17:34,359 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Set Proxy: 
Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Running on local URL:  http://0.0.0.0:9000

To create a public link, set `share=True` in `launch()`.
Startup time: 84.7s (prepare environment: 20.1s, import torch: 15.3s, import gradio: 3.0s, setup paths: 4.2s, initialize shared: 0.5s, other imports: 2.1s, setup codeformer: 0.6s, setup gfpgan: 0.1s, list SD models: 0.1s, load scripts: 34.0s, create ui: 3.8s, gradio launch: 0.6s).
Applying attention optimization: xformers... done.
Model loaded in 6.9s (load weights from disk: 1.4s, create model: 0.4s, apply weights to model: 4.6s, calculate empty prompt: 0.4s).
2024-02-17 22:19:59,112 - ControlNet - INFO - Preview Resolution = 512
Traceback (most recent call last):
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/controlnet_ui/controlnet_ui_group.py", line 1013, in run_annotator
    result, is_image = preprocessor(
                       ^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/utils.py", line 80, in decorated_func
    return cached_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/utils.py", line 64, in cached_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/global_state.py", line 37, in unified_preprocessor
    return preprocessor_modules[preprocessor_name](*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/processor.py", line 801, in run_model_instant_id
    self.load_model()
  File "/config/02-sd-webui/webui/extensions/sd-webui-controlnet/scripts/processor.py", line 739, in load_model
    from insightface.app import FaceAnalysis
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module>
    from . import app
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module>
    from .mask_renderer import *
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module>
    from ..thirdparty import face3d
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module>
    from . import mesh
  File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module>
    from .cython import mesh_core_cython
ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so)

 

Log when attempting to use ReActor:

################################################################
Launching launch.py...
################################################################
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
CUDA 11.8
Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api
Civitai Helper: Get Custom Model Folder
[-] ADetailer initialized. version: 24.1.2, num models: 9
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: /config/02-sd-webui/webui/extensions/sd-webui-controlnet/annotator/downloads
2024-02-17 22:27:16,501 - ControlNet - INFO - ControlNet v1.1.440
2024-02-17 22:27:16,690 - ControlNet - INFO - ControlNet v1.1.440
WARNING ⚠️ user config directory '/home/abc/.config/Ultralytics' is not writeable, defaulting to '/tmp' or CWD.Alternatively you can define a YOLO_CONFIG_DIR environment variable for this path.
*** Error loading script: console_log_patch.py
    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/console_log_patch.py", line 4, in <module>
        import insightface
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module>
        from . import app
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module>
        from .mask_renderer import *
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module>
        from ..thirdparty import face3d
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module>
        from . import mesh
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module>
        from .cython import mesh_core_cython
    ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so)

---
*** Error loading script: reactor_api.py
    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_api.py", line 17, in <module>
        from scripts.reactor_swapper import EnhancementOptions, swap_face, DetectionOptions
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module>
        import insightface
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module>
        from . import app
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module>
        from .mask_renderer import *
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module>
        from ..thirdparty import face3d
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module>
        from . import mesh
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module>
        from .cython import mesh_core_cython
    ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so)

---
*** Error loading script: reactor_faceswap.py
    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_faceswap.py", line 18, in <module>
        from reactor_ui import (
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/reactor_ui/__init__.py", line 2, in <module>
        import reactor_ui.reactor_tools_ui as ui_tools
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/reactor_ui/reactor_tools_ui.py", line 2, in <module>
        from scripts.reactor_swapper import build_face_model, blend_faces
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module>
        import insightface
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module>
        from . import app
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module>
        from .mask_renderer import *
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module>
        from ..thirdparty import face3d
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module>
        from . import mesh
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module>
        from .cython import mesh_core_cython
    ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so)

---
*** Error loading script: reactor_swapper.py
    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/modules/scripts.py", line 469, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/webui/extensions/sd-webui-reactor/scripts/reactor_swapper.py", line 11, in <module>
        import insightface
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/__init__.py", line 18, in <module>
        from . import app
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/__init__.py", line 2, in <module>
        from .mask_renderer import *
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/app/mask_renderer.py", line 8, in <module>
        from ..thirdparty import face3d
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/__init__.py", line 3, in <module>
        from . import mesh
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/__init__.py", line 9, in <module>
        from .cython import mesh_core_cython
    ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /config/02-sd-webui/webui/venv/lib/python3.11/site-packages/insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpython-311-x86_64-linux-gnu.so)

---
22:27:20 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA
Loading weights [4726d3bab1] from /config/02-sd-webui/webui/models/Stable-diffusion/sdxlturbo/dreamshaperXL_v2TurboDpmppSDE.safetensors
2024-02-17 22:27:21,571 - AnimateDiff - INFO - Injecting LCM to UI.
2024-02-17 22:27:22,412 - AnimateDiff - INFO - Hacking i2i-batch.
2024-02-17 22:27:22,458 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Set Proxy: 
Creating model from config: /config/02-sd-webui/webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Running on local URL:  http://0.0.0.0:9000

To create a public link, set `share=True` in `launch()`.
Startup time: 85.3s (prepare environment: 26.2s, import torch: 15.4s, import gradio: 3.1s, setup paths: 4.5s, initialize shared: 0.5s, other imports: 2.2s, setup codeformer: 0.6s, setup gfpgan: 0.1s, list SD models: 0.1s, load scripts: 27.9s, create ui: 3.7s, gradio launch: 0.6s).
Applying attention optimization: xformers... done.
Model loaded in 6.7s (load weights from disk: 1.4s, create model: 0.5s, apply weights to model: 3.3s, calculate empty prompt: 1.5s).

 

The error with both of these is: ImportError: /home/abc/miniconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found

 

When I console into the container and run:

 

strings /home/abc/miniconda3/bin/../lib/libstdc++.so.6 | grep GLIBCXX

 

I get:

root@1dd670cc5061:/# strings /home/abc/miniconda3/bin/../lib/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_3.4.22
GLIBCXX_3.4.23
GLIBCXX_3.4.24
GLIBCXX_3.4.25
GLIBCXX_3.4.26
GLIBCXX_3.4.27
GLIBCXX_3.4.28
GLIBCXX_3.4.29
GLIBCXX_DEBUG_MESSAGE_LENGTH
_ZNKSt14basic_ifstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4
_ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEv@@GLIBCXX_3.4.5
_ZNKSbIwSt11char_traitsIwESaIwEE11_M_disjunctEPKw@GLIBCXX_3.4
_ZNKSt14basic_ifstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5
GLIBCXX_3.4.21
GLIBCXX_3.4.9
_ZSt10adopt_lock@@GLIBCXX_3.4.11
GLIBCXX_3.4.10
GLIBCXX_3.4.16
GLIBCXX_3.4.1
_ZNSt19istreambuf_iteratorIcSt11char_traitsIcEEppEv@GLIBCXX_3.4
GLIBCXX_3.4.28
_ZNSs7_M_copyEPcPKcm@GLIBCXX_3.4
GLIBCXX_3.4.25
_ZNSt19istreambuf_iteratorIcSt11char_traitsIcEEppEv@@GLIBCXX_3.4.5
_ZNSs7_M_moveEPcPKcm@@GLIBCXX_3.4.5
_ZNKSt13basic_fstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4
_ZNKSt13basic_fstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4
_ZNSbIwSt11char_traitsIwESaIwEE4_Rep26_M_set_length_and_sharableEm@@GLIBCXX_3.4.5
_ZNSs4_Rep26_M_set_length_and_sharableEm@GLIBCXX_3.4
_ZSt10defer_lock@@GLIBCXX_3.4.11
_ZN10__gnu_norm15_List_node_base4swapERS0_S1_@@GLIBCXX_3.4
_ZNSs9_M_assignEPcmc@@GLIBCXX_3.4.5
_ZNKSbIwSt11char_traitsIwESaIwEE15_M_check_lengthEmmPKc@@GLIBCXX_3.4.5
_ZNKSt14basic_ifstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5
_ZNSbIwSt11char_traitsIwESaIwEE7_M_moveEPwPKwm@GLIBCXX_3.4
GLIBCXX_3.4.24
_ZNVSt9__atomic011atomic_flag12test_and_setESt12memory_order@@GLIBCXX_3.4.11
GLIBCXX_3.4.20
_ZNSt11char_traitsIwE2eqERKwS2_@@GLIBCXX_3.4.5
GLIBCXX_3.4.12
_ZNSi6ignoreEv@@GLIBCXX_3.4.5
GLIBCXX_3.4.2
_ZNSt11char_traitsIcE2eqERKcS2_@@GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.15
_ZNKSt13basic_fstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5
_ZNSs9_M_assignEPcmc@GLIBCXX_3.4
GLIBCXX_3.4.19
_ZNKSt14basic_ofstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4
_ZNSt19istreambuf_iteratorIwSt11char_traitsIwEEppEv@GLIBCXX_3.4
GLIBCXX_3.4.27
_ZN10__gnu_norm15_List_node_base7reverseEv@@GLIBCXX_3.4
_ZN10__gnu_norm15_List_node_base4hookEPS0_@@GLIBCXX_3.4
_ZNSt11char_traitsIwE2eqERKwS2_@GLIBCXX_3.4
_ZNSbIwSt11char_traitsIwESaIwEE7_M_copyEPwPKwm@GLIBCXX_3.4
_ZNSbIwSt11char_traitsIwESaIwEE7_M_copyEPwPKwm@@GLIBCXX_3.4.5
GLIBCXX_3.4.23
GLIBCXX_3.4.3
GLIBCXX_3.4.7
_ZNSi6ignoreEl@@GLIBCXX_3.4.5
_ZNKSbIwSt11char_traitsIwESaIwEE11_M_disjunctEPKw@@GLIBCXX_3.4.5
_ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEv@GLIBCXX_3.4
_ZNKSt13basic_fstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5
_ZNSbIwSt11char_traitsIwESaIwEE7_M_moveEPwPKwm@@GLIBCXX_3.4.5
GLIBCXX_3.4.18
_ZNSbIwSt11char_traitsIwESaIwEE4_Rep26_M_set_length_and_sharableEm@GLIBCXX_3.4
_ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEl@@GLIBCXX_3.4.5
_ZSt15future_category@@GLIBCXX_3.4.14
_ZNSi6ignoreEl@GLIBCXX_3.4
GLIBCXX_3.4.29
_ZNSt11char_traitsIcE2eqERKcS2_@GLIBCXX_3.4
_ZNKSs15_M_check_lengthEmmPKc@GLIBCXX_3.4
_ZN10__gnu_norm15_List_node_base8transferEPS0_S1_@@GLIBCXX_3.4
_ZNSbIwSt11char_traitsIwESaIwEE9_M_assignEPwmw@GLIBCXX_3.4
_ZNVSt9__atomic011atomic_flag5clearESt12memory_order@@GLIBCXX_3.4.11
_ZNKSt14basic_ofstreamIcSt11char_traitsIcEE7is_openEv@@GLIBCXX_3.4.5
_ZNKSt14basic_ofstreamIcSt11char_traitsIcEE7is_openEv@GLIBCXX_3.4
_ZNSs7_M_moveEPcPKcm@GLIBCXX_3.4
_ZNSt13basic_istreamIwSt11char_traitsIwEE6ignoreEl@GLIBCXX_3.4
_ZNSbIwSt11char_traitsIwESaIwEE9_M_assignEPwmw@@GLIBCXX_3.4.5
_ZNKSbIwSt11char_traitsIwESaIwEE15_M_check_lengthEmmPKc@GLIBCXX_3.4
_ZNKSs11_M_disjunctEPKc@@GLIBCXX_3.4.5
_ZN10__gnu_norm15_List_node_base6unhookEv@@GLIBCXX_3.4
GLIBCXX_3.4.22
_ZNSt19istreambuf_iteratorIwSt11char_traitsIwEEppEv@@GLIBCXX_3.4.5
_ZNSi6ignoreEv@GLIBCXX_3.4
_ZNSs7_M_copyEPcPKcm@@GLIBCXX_3.4.5
GLIBCXX_3.4.8
GLIBCXX_3.4.13
_ZSt11try_to_lock@@GLIBCXX_3.4.11
_ZNKSt14basic_ofstreamIwSt11char_traitsIwEE7is_openEv@@GLIBCXX_3.4.5
GLIBCXX_3.4.17
GLIBCXX_3.4.4
_ZNKSs15_M_check_lengthEmmPKc@@GLIBCXX_3.4.5
_ZNKSt14basic_ifstreamIwSt11char_traitsIwEE7is_openEv@GLIBCXX_3.4
_ZNSs4_Rep26_M_set_length_and_sharableEm@@GLIBCXX_3.4.5
GLIBCXX_3.4.26
_ZNKSs11_M_disjunctEPKc@GLIBCXX_3.4
root@1dd670cc5061:/#

 

When I run:

 

strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX

 

I get:

root@1dd670cc5061:/# strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_3.4.22
GLIBCXX_3.4.23
GLIBCXX_3.4.24
GLIBCXX_3.4.25
GLIBCXX_3.4.26
GLIBCXX_3.4.27
GLIBCXX_3.4.28
GLIBCXX_3.4.29
GLIBCXX_3.4.30
GLIBCXX_DEBUG_MESSAGE_LENGTH
root@1dd670cc5061:/#

 

So GLIBCXX_3.4.32 is not available.
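To avoid eyeballing long symbol dumps like the ones above, the highest exported tag can be extracted directly (a small sketch; `sort -V` compares version strings numerically):

```shell
# Print the highest GLIBCXX version tag a given libstdc++ exports.
highest_glibcxx() {
  strings "$1" | grep -E '^GLIBCXX_[0-9.]+$' | sort -V | tail -n 1
}

# e.g. inside the container:
#   highest_glibcxx /home/abc/miniconda3/lib/libstdc++.so.6
#   highest_glibcxx /usr/lib/x86_64-linux-gnu/libstdc++.so.6
```

Per the logs above, the conda copy tops out at GLIBCXX_3.4.29 and the system copy at GLIBCXX_3.4.30, so neither satisfies the GLIBCXX_3.4.32 requirement.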

 

However, I recently had to wipe this container. I've had it installed in the past; I think I first installed it around late November/early December. During that time I had InstantID installed and working with no errors. I have no logs from then because, why would I? It was just working. I don't understand how it worked just a week ago and now, with a clean install, it just won't.

 

I also saw that this container was updated less than a week ago, so I thought that could be the cause. I pulled an older tag and tested, then tried another. Neither the :latest tag nor any other will let InstantID or ReActor run without this GLIBCXX_3.4.32 error.

 

I've tried any number of ways to fix this. Basically, search for 'GLIBCXX_3.4.32 not found' and I've attempted every fix that comes up: symbolic links, even outright replacing the files. I've also messed around with setting LD_LIBRARY_PATH. I just cannot get this to work, and I can't find a way to update GCC inside the container either. The only thing that looks promising is from here, which says:

 

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install --only-upgrade libstdc++6

 

So how does one add a PPA to a linuxserver base image? I can't even pin an older version of insightface, as they don't make one available. I'm truly stuck, which is doubly annoying because it did actually work for me before.
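On the PPA question: linuxserver.io images are Ubuntu-based (the log above reports glibc 2.35, i.e. Ubuntu 22.04), so in principle the commands can be run from a console inside the running container. This is a sketch, untested in this particular container:

```shell
# Inside the container's console:
apt-get update
apt-get install -y software-properties-common   # provides add-apt-repository
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt-get update
apt-get install -y --only-upgrade libstdc++6
```

Note that changes made this way are lost whenever the container is recreated; baking the same steps into a derived image (a Dockerfile `FROM` the original image) is the durable option.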

 

I will say that everything else I use in Automatic1111 works perfectly. Also, I'm running Unraid v6.12.6 and installed this container from CA.

 

I can't be the only one wanting to use InstantID or ReActor. Does anyone else have them working, or not working? Any fix for this?

21 hours ago, Holaf said:

@brainbone_ @Joly0, honestly I don't know if I want to add more stuff ... it's already hard to test everything.

That's why I added the ability to run custom scripts :)
(It should already be possible in the latest version, if I didn't forget to push it 😅)

I have a script for Forge available here:

https://github.com/grokuku/stable-diffusion-custom-scripts/
 

You just have to put it in this folder:

/mnt/user/appdata/stable-diffusion/scripts


and in the container config you replace the number (e.g. 02) with the script name (e.g. sd-webui-forge.sh).
I did not test it much, but it seems to work.

The WebUI will be installed in this folder:
/mnt/user/appdata/stable-diffusion/00-custom/sd-webui-forge
 

And outputs will be in this one:
/mnt/user/appdata/stable-diffusion/outputs/00-custom/sd-webui-forge

 

You might be right, adding and testing all those WebUIs is a hassle, but at the same time, not everyone can write a bash script. Also, I think only a few people here know that this custom-script method exists, or that your repo with custom scripts exists.

 

It might be a good idea to add some kind of testing environment, so the scripts can be tested more or less automatically after a new release/commit.


I read the past few posts, specifically the ones about WebUI Forge and the custom scripts directory, and from there I found the GitHub repo with sd-webui-forge.sh in it. I put the script in the right place, set the template to boot from it, and soon had WebUI Forge installed.

 

There is a slight issue during every bootup of this script:

webui.sh: line 246: bc: command not found
webui.sh: line 246: [: -eq: unary operator expected

 

This doesn't appear to affect anything; it runs fine regardless.
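For context, that warning appears to come from webui.sh using `bc` for a version comparison, and `bc` is not in the image. If the message bothers you, it can presumably be silenced from the container console (the change is lost when the container is recreated):

```shell
# Inside the container's console:
apt-get update && apt-get install -y bc
```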

 

The second issue is the main one for me. I installed WebUI Forge because I'd read articles and watched videos saying it was much more memory efficient than Automatic1111. It also has bug fixes and some built-in extensions that A1111 doesn't, one of which is a built-in version of PhotoMaker. That's similar to InstantID, so I tried it and... it was... fine? It worked without issue and did the job up to a point, but it's still not what I'm after. That's still InstantID, because when I had it working before it was near flawless.

 

However, if I show a bit more of that log:

################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
webui.sh: line 246: bc: command not found
webui.sh: line 246: [: -eq: unary operator expected
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Python 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Version: f0.0.14v1.8.0rc-latest-184-g43c9e3b5
Commit hash: 43c9e3b5ce1642073c7a9684e36b45489eeb4a49
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.

 

So insightface is not installed, and it is required for InstantID to work.

 

For completeness, if I try to use InstantID regardless:

Traceback (most recent call last):
  File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/lib_ipadapter/IPAdapterPlus.py", line 560, in load_insight_face
    from insightface.app import FaceAnalysis
ModuleNotFoundError: No module named 'insightface'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/env/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/lib_controlnet/controlnet_ui/controlnet_ui_group.py", line 847, in run_annotator
    result = preprocessor(
             ^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/scripts/forge_ipadapter.py", line 77, in __call__
    insightface=self.load_insightface(),
                ^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/scripts/forge_ipadapter.py", line 71, in load_insightface
    self.cached_insightface = opInsightFaceLoader(name='antelopev2')[0]
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/00-custom/sd-webui-forge/stable-diffusion-webui-forge/extensions-builtin/sd_forge_ipadapter/lib_ipadapter/IPAdapterPlus.py", line 562, in load_insight_face
    raise Exception(e)
Exception: No module named 'insightface'

 

Like everything else, searching for a way around this turns up advice geared toward Windows, WSL, bare-metal Linux or Mac, and nothing for Docker. It's beyond me how I would go about activating a venv in a container and pip installing something that way.
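For what it's worth, activation isn't strictly needed: an environment's own pip can be invoked directly by path. A sketch using the env path from the Forge tracebacks above (the container name is a placeholder; adjust to yours):

```shell
# From the Unraid host, open a console in the container:
docker exec -it stable-diffusion bash

# Then install into Forge's environment by calling its pip directly:
/config/00-custom/sd-webui-forge/env/bin/pip install insightface
```

Whether the import then succeeds is a separate question (the GLIBCXX issue above may still apply), but this is how pip can target a specific venv without activating it.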

 

So I'm kinda stuck. Even if I did get insightface installed in WebUI Forge, I would still be stuck with my previous problem, because:

glibc version is 2.35

 

I will say, though, that even if Stable Diffusion is fast moving and things could change at any moment, WebUI Forge seems to be the way to go right now. The memory efficiency is so much better than standard A1111.

 

Edit: I found another bug with WebUI Forge:

 

It doesn't save images after generation. I can save images manually one by one (they go to log/images), but there is only ever a preview, meaning everything is lost upon a new generation.


Is there a way to use this with a Radeon GPU?

I have an RX 550 with 4 GB that has nothing else to do, so I could use it.

I tried to install your Docker, but in the settings at the beginning there's a place to enter a GPU ID, and it only asks for an NVIDIA number.

So I stopped installing, searched the net for an answer, didn't find anything, and now I'm asking here.

Thanks for your time.

Link to comment

@Joly0 @Araso If you use the tag "test" you can try the next version earlier.

I changed my mind and added Forge in this version ...
Since it's a clean fork of auto1111, instead of having a number of its own you must use WEBUI_VERSION="02.forge"
It will be installed next to auto1111 and it will use the same environment.
If you choose to do so you can remove those two folders :
/mnt/user/appdata/stable-diffusion/02-sd-webui/env

/mnt/user/appdata/stable-diffusion/02-sd-webui/webui/venv
(I will auto-remove them in a future version if this one is working fine)
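As a one-off cleanup, that removal can be sketched in shell (stop the container first; the appdata path is the one from the post and may differ on your system):

```shell
# Remove the now-redundant per-UI environments; Forge and auto1111 share
# one environment in this version, and they can be rebuilt if ever needed.
cleanup_old_envs() {
    local appdata="$1"   # e.g. /mnt/user/appdata/stable-diffusion
    rm -rf "$appdata/02-sd-webui/env" \
           "$appdata/02-sd-webui/webui/venv"
}

# cleanup_old_envs /mnt/user/appdata/stable-diffusion
```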

@Araso With a clean install, ReActor and InstantID work fine in auto1111. Since the test version isn't using the same conda environment, you can try switching to it to see if that's enough to fix your issue.
They're also working in Forge, but I have a CUDA version mismatch between onnx and torch, so you'll see errors in the log and some parts of the process will run on the CPU (it's still fast, but it will hit your computer's overall performance for a few seconds)

In this test version I have updated Lama Cleaner (now IOPaint). It will install in a new folder.

@Olivilo I don't support AMD GPUs because I can't test them easily. You should take a look at Joly0's repo; he is working on it:
https://forums.unraid.net/topic/143645-support-stable-diffusion-advanced/?do=findComment&comment=1371508

  • Thanks 2
Link to comment

I've installed the :test tag and have indeed been testing it. Other than the running on the CPU issue, I've found only three minor issues:

 

1. Starting Forge does not read from parameters.forge.txt

 

In the log I see this:

Launching Web UI with arguments: --listen --port 9000 --enable-insecure-extension-access --medvram --xformers --api
Arg --medvram is removed in Forge.
Now memory management is fully automatic and you do not need any command flags.
Please just remove this flag.
In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram

 

The contents of parameters.txt:

# Web + Network
--listen
--port 9000

# options
--enable-insecure-extension-access
--medvram
--xformers
--api

#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

 

The contents of parameters.forge.txt:

# Web + Network
--listen
--port 9000

# options
--enable-insecure-extension-access
--xformers
--api
--cuda-malloc
--cuda-stream

#--no-half-vae
#--disable-nan-check
#--update-all-extensions
#--reinstall-xformers
#--reinstall-torch

 

So it looks like Forge is reading parameters.txt rather than parameters.forge.txt.

 

Although I think Forge simply ignores this flag, I could remove it from parameters.txt - but then if I want to use standard A1111, I might need it to be there. And if I want to add something specific for when I run Forge, it would apply to both versions, which could be problematic.
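A hypothetical sketch (not the container's actual launcher code) of how a per-UI parameters file with a fallback could be selected:

```shell
# Prefer parameters.<ui>.txt when it exists, otherwise fall back to the
# shared parameters.txt; comment and blank lines are stripped before the
# flags are joined into a single argument string.
pick_params() {
    local dir="$1" ui="$2"
    local file="$dir/parameters.$ui.txt"
    [ -f "$file" ] || file="$dir/parameters.txt"
    grep -vE '^[[:space:]]*(#|$)' "$file" | tr '\n' ' '
}

# e.g. COMMANDLINE_ARGS=$(pick_params /config/02-sd-webui forge)
```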

 

2. I figured out why Forge isn't saving its outputs. I was wrong - it actually is saving them, just in an unexpected location.

 

All versions (A1111, ComfyUI, etc.) send their outputs to:

appdata/stable-diffusion/outputs/

 

Except for Forge, which sends its outputs to:

appdata/stable-diffusion/02-sd-webui/forge/output/

 

How can I change this to make Forge save somewhere inside of this location?:

appdata/stable-diffusion/outputs/
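While waiting for a proper fix, one possible workaround (untested, a sketch only - stop the container first, and note that all paths are assumptions taken from the post above) is to replace Forge's local output folder with a symlink into the shared outputs tree:

```shell
# Point Forge's hard-wired output folder at the shared outputs location
# via a symlink, so everything ends up in the same place on disk.
link_forge_outputs() {
    local appdata="$1"   # e.g. /mnt/user/appdata/stable-diffusion
    mkdir -p "$appdata/outputs/02-sd-webui/forge" \
             "$appdata/02-sd-webui/forge"
    rm -rf "$appdata/02-sd-webui/forge/output"
    ln -s "$appdata/outputs/02-sd-webui/forge" \
          "$appdata/02-sd-webui/forge/output"
}

# link_forge_outputs /mnt/user/appdata/stable-diffusion
```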

 

3. I'm still seeing this when I start Forge:

webui.sh: line 246: bc: command not found
webui.sh: line 246: [: -eq: unary operator expected

 

I've been reading:

 

https://docs.linuxserver.io/general/container-customization/

https://github.com/linuxserver/docker-mods/tree/universal-package-install?tab=readme-ov-file

 

Essentially, what I've done is add some variables to the template:

- DOCKER_MODS=linuxserver/mods:universal-package-install
- INSTALL_PACKAGES=bc

 

Which gives me this in the log:

**** Adding bc to OS package install list ****
[mod-init] **** Installing all mod packages ****
Get:1 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy/main Sources [1,668 kB]
Get:5 http://archive.ubuntu.com/ubuntu jammy/restricted Sources [28.2 kB]
Get:6 http://archive.ubuntu.com/ubuntu jammy/universe Sources [22.0 MB]
Get:7 http://archive.ubuntu.com/ubuntu jammy/multiverse Sources [361 kB]
Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
Get:9 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
Get:10 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
Get:11 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1,792 kB]
Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse Sources [21.8 kB]
Get:13 http://archive.ubuntu.com/ubuntu jammy-updates/main Sources [595 kB]
Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/universe Sources [398 kB]
Get:15 http://archive.ubuntu.com/ubuntu jammy-updates/restricted Sources [70.1 kB]
Get:16 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [50.4 kB]
Get:17 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [1,907 kB]
Get:18 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1,343 kB]
Get:19 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1,786 kB]
Get:20 http://archive.ubuntu.com/ubuntu jammy-security/universe Sources [231 kB]
Get:21 http://archive.ubuntu.com/ubuntu jammy-security/main Sources [316 kB]
Get:22 http://archive.ubuntu.com/ubuntu jammy-security/multiverse Sources [12.1 kB]
Get:23 http://archive.ubuntu.com/ubuntu jammy-security/restricted Sources [65.9 kB]
Get:24 http://archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1,070 kB]
Get:25 http://archive.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1,502 kB]
Get:26 http://archive.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [1,859 kB]
Get:27 http://archive.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [44.6 kB]
Fetched 55.5 MB in 12s (4,745 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  bc
0 upgraded, 1 newly installed, 0 to remove and 19 not upgraded.
Need to get 87.6 kB of archives.
After this operation, 220 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 bc amd64 1.07.1-3build1 [87.6 kB]
Fetched 87.6 kB in 16s (5,319 B/s)
Selecting previously unselected package bc.
(Reading database ... 49023 files and directories currently installed.)
Preparing to unpack .../bc_1.07.1-3build1_amd64.deb ...
Unpacking bc (1.07.1-3build1) ...
Setting up bc (1.07.1-3build1) ...
[custom-init] No custom files found, skipping...
Done!

 

So bc is installed and the error is gone. I know this is only a small thing, and it didn't appear to really affect anything, but I prefer not to see errors in my logs.

 

I've run some generations to compare speeds: with exactly the same batch size, I get 21s without ReActor and 1m10s with it. It's not scientific down to the exact seed, but that wouldn't make any difference here anyway. I wonder how much faster it would be if it could run on the GPU. I have nothing to compare it with, since I only tried ReActor after my earlier problems with InstantID, and I've only been able to use it with this :test tagged version.

 

So, at least for me, there are really only minor issues if you don't count:

2024-02-26 02:09:53.027799529 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-26 02:10:32.154622901 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-26 02:10:33.533160219 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-26 02:10:34.076110107 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

2024-02-26 02:10:34.562393148 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

 

I'm happier than I was before because even if something does have to run on the CPU - at least it runs. Cheers!

 

I've only tested Forge so far - not standard A1111. I think you might be suggesting the above errors don't happen in A1111 so I'll get to testing that when I can.

 

Also: If you push this to :latest, will you announce it? Otherwise I might be stuck on :test forever.

Link to comment
2 hours ago, Araso said:

I wonder how much difference there would be if it could run on the GPU.

 

Wonder no more...

 

Due to this GitHub issue being closed a short time ago, and the fact that Forge pulls every new commit each time it boots, the time taken to generate with ReActor selected has been massively reduced. A test on the same batch size is now down to around 36 seconds.

 

The only caveat is that in ReActor, under Restore Face, GFPGAN must be selected - CodeFormer is still just as slow.

 

There are always pros and cons with bleeding-edge software. There's still a CPU spike, though, so it's not exactly running on the GPU, but I'll take it.
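One way to see how much of the work actually lands on the GPU is to watch utilisation during a generation (run on the unraid host; assumes the NVIDIA driver is installed there). A dry-run sketch:

```shell
# Build a monitoring command: GPU utilisation and VRAM use, refreshed
# every second while a generation is in flight. Drop the echo to run it.
QUERY="utilization.gpu,memory.used"
CMD="nvidia-smi --query-gpu=$QUERY --format=csv -l 1"
echo "$CMD"
```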

Link to comment

@Holaf

 

I see there's been a new :test tag which I've updated to. I'm seeing this in the log:

*** Cannot import xformers
    Traceback (most recent call last):
      File "/config/02-sd-webui/forge/modules/sd_hijack_optimizations.py", line 160, in <module>
        import xformers.ops
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/__init__.py", line 8, in <module>
        from .fmha import (
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module>
        from . import attn_bias, cutlass, decoder, flash, small_k, triton, triton_splitk
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/triton_splitk.py", line 21, in <module>
        if TYPE_CHECKING or _has_triton21():
                            ^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 192, in _has_triton21
        if not _has_a_version_of_triton():
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 176, in _has_a_version_of_triton
        import triton  # noqa: F401
        ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/__init__.py", line 20, in <module>
        from .compiler import compile, CompilationError
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/__init__.py", line 1, in <module>
        from .compiler import CompiledKernel, compile, instance_descriptor
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/compiler.py", line 27, in <module>
        from .code_generator import ast_to_ttir
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/code_generator.py", line 8, in <module>
        from .. import language
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module>
        from . import math
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/math.py", line 4, in <module>
        from . import core
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/core.py", line 1375, in <module>
        @jit
         ^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 542, in jit
        return decorator(fn)
               ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 534, in decorator
        return JITFunction(
               ^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 433, in __init__
        self.run = self._make_launcher()
                   ^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 388, in _make_launcher
        scope = {"version_key": version_key(),
                                ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 120, in version_key
        ptxas = path_to_ptxas()[0]
                ^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/backend.py", line 114, in path_to_ptxas
        result = subprocess.check_output([ptxas_bin, "--version"], stderr=subprocess.STDOUT)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 466, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 548, in run
        with Popen(*popenargs, **kwargs) as process:
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1026, in __init__
        self._execute_child(args, executable, preexec_fn, close_fds,
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1953, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    PermissionError: [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas'

---
*** Error loading script: preprocessor_marigold.py
    Traceback (most recent call last):
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 710, in _get_module
        return importlib.import_module("." + module_name, self.__name__)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/loaders/unet.py", line 27, in <module>
        from ..models.embeddings import ImageProjection, MLPProjection, Resampler
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 23, in <module>
        from .attention_processor import Attention
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 32, in <module>
        import xformers.ops
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/__init__.py", line 8, in <module>
        from .fmha import (
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module>
        from . import attn_bias, cutlass, decoder, flash, small_k, triton, triton_splitk
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/fmha/triton_splitk.py", line 21, in <module>
        if TYPE_CHECKING or _has_triton21():
                            ^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 192, in _has_triton21
        if not _has_a_version_of_triton():
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/xformers/ops/common.py", line 176, in _has_a_version_of_triton
        import triton  # noqa: F401
        ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/__init__.py", line 20, in <module>
        from .compiler import compile, CompilationError
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/__init__.py", line 1, in <module>
        from .compiler import CompiledKernel, compile, instance_descriptor
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/compiler.py", line 27, in <module>
        from .code_generator import ast_to_ttir
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/compiler/code_generator.py", line 8, in <module>
        from .. import language
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module>
        from . import math
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/math.py", line 4, in <module>
        from . import core
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/language/core.py", line 1375, in <module>
        @jit
         ^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 542, in jit
        return decorator(fn)
               ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 534, in decorator
        return JITFunction(
               ^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 433, in __init__
        self.run = self._make_launcher()
                   ^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 388, in _make_launcher
        scope = {"version_key": version_key(),
                                ^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/runtime/jit.py", line 120, in version_key
        ptxas = path_to_ptxas()[0]
                ^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/backend.py", line 114, in path_to_ptxas
        result = subprocess.check_output([ptxas_bin, "--version"], stderr=subprocess.STDOUT)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 466, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 548, in run
        with Popen(*popenargs, **kwargs) as process:
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1026, in __init__
        self._execute_child(args, executable, preexec_fn, close_fds,
      File "/config/02-sd-webui/conda-env/lib/python3.11/subprocess.py", line 1953, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    PermissionError: [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas'

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 710, in _get_module
        return importlib.import_module("." + module_name, self.__name__)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/importlib/__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/models/unet_2d_condition.py", line 22, in <module>
        from ..loaders import UNet2DConditionLoadersMixin
      File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 700, in __getattr__
        module = self._get_module(self._class_to_module[name])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 712, in _get_module
        raise RuntimeError(
    RuntimeError: Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback):
    [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas'

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/config/02-sd-webui/forge/modules/scripts.py", line 544, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/forge/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/config/02-sd-webui/forge/extensions-builtin/forge_preprocessor_marigold/scripts/preprocessor_marigold.py", line 10, in <module>
        from marigold.model.marigold_pipeline import MarigoldPipeline
      File "/config/02-sd-webui/forge/extensions-builtin/forge_preprocessor_marigold/marigold/model/marigold_pipeline.py", line 9, in <module>
        from diffusers import (
      File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 701, in __getattr__
        value = getattr(module, name)
                ^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 700, in __getattr__
        module = self._get_module(self._class_to_module[name])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/conda-env/lib/python3.11/site-packages/diffusers/utils/import_utils.py", line 712, in _get_module
        raise RuntimeError(
    RuntimeError: Failed to import diffusers.models.unet_2d_condition because of the following error (look up to see its traceback):
    Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback):
    [Errno 13] Permission denied: '/config/02-sd-webui/conda-env/lib/python3.11/site-packages/triton/common/../third_party/cuda/bin/ptxas'

 

I'm not sure what this affects, but I've run a quick generation and got my results. That was a straight-up generation with a simple positive prompt and no negative. I'll be using it with more complex setups in a bit, so whatever the above is, maybe it affects something else like ControlNet.

 

Any chance of a changelog?

 

Edit: What is the 'Holaf_tests' tag for?

Edited by Araso
Extra question.
Link to comment

Holaf_tests is just the branch where I test things; it's broken most of the time ^^

The last things I changed in the test version are what you reported:
I added the "bc" command and fixed the bugs with the parameters file and the output folder.

I updated the test version again and I don't see errors on my side 🤔

Link to comment

Ah. Changelogs would be useful. :)

 

I still have DOCKER_MODS installing bc - so I can now take that out. Good to know.

 

I had added a parameter to my file (parameters.txt), which is no longer the file read by Forge (that's parameters.forge.txt now), so I need to add it to the Forge file and revert it in the A1111 file.

 

On my side, it's still saving to:

appdata/stable-diffusion/02-sd-webui/forge/output/

 

My first thought is I'd need to wipe out:

appdata/stable-diffusion/02-sd-webui/conda-env/

 

But what's in there is not configuration files. So I'd want to be looking at some of those. In which file is this value changed and what has it been changed to?

 

Unless it's a change within one of the .sh files inside the container itself, in which case why isn't it working since I updated to the new :test tag version?

 

Maybe I'll delete the container and start again just for my own peace of mind.

Link to comment

I found another problem, except this time it was user error - sort of. At some point between when I last updated and tested things and made the post above, you'd pushed out another update, but I hadn't seen any notification of it. Just a short while ago I noticed there was an update for the container, and when I looked at Docker Hub, I saw that it had been pushed 14 hours ago. So all day today I've been using an out-of-date version.

 

The moral of this story is that I will have to manually check for updates before I even start the container each time.

 

The good news is that you have, indeed, fixed the file saving location. When I posted above, I was referring to the older version. This new version is correctly saving everything in the right place. So that's fixed.

 

The bad news is in a log so long that I won't post it here in a code block. I'll attach it instead to avoid a huge wall of text.

 

It's not even the whole of the log because so much shoots past the log window I can't catch it all and most of it disappears.

 

On the plus side: Everything I use still seems to work, somehow... All the standard things plus ReActor is what I've been using. The end of the attached log says:

ERROR: Could not build wheels for insightface, onnx, which is required to install pyproject.toml-based projects

 

Problems installing insightface were what I had before, when I was trying to use InstantID. It isn't affecting ReActor from what I can tell so far, which is good. However, this error keeps appearing after several full stops and starts of the container, so it's not a one-off.

 

The only way to get a full log would be if you could send log output to disk. Maybe rotate the last three logs or something?
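Until then, a full log can be captured from the unraid host with docker itself (container name and log path are assumptions). A dry-run sketch:

```shell
# 'docker logs -f' streams everything the container has written, including
# the lines that scroll past the web log window. Drop the echo to run it.
CONTAINER="stable-diffusion"
LOGFILE="/mnt/user/appdata/stable-diffusion/container.log"
echo "docker logs --timestamps -f $CONTAINER >> $LOGFILE 2>&1 &"
```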

 

log.txt

Link to comment

@Holaf What are the differences between these versions:

TAG
latest
Last pushed 17 hours ago by holaflenain
430293c40eb5

TAG
3.1.0
Last pushed 16 hours ago by holaflenain
07cbd209f7e3

TAG
test
Last pushed 20 hours ago by holaflenain
381488f205f3

 

Which should I switch to?

 

Are the changes you made in :test now in :latest or :3.1.0 and :test is deprecated now?

 

Usually, :latest and the highest numbered version are one and the same. Why are :latest and :3.1.0 not the same?

 

I don't know which to go for.
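Locally, you can settle the question by pulling the tags and comparing image IDs - identical IDs mean identical images (the repository name below is an assumption based on the pusher's username). A dry-run sketch:

```shell
# Print the pull commands for each tag of interest; after pulling,
# 'docker images' shows whether any tags share the same IMAGE ID.
REPO="holaflenain/stable-diffusion"   # assumed repository name
for TAG in latest 3.1.0 test; do
    echo "docker pull $REPO:$TAG"
done
echo "docker images $REPO"
```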

Link to comment

My GPU is P104-100, when  I am trying to generate a picture, something wrong appear:

 

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 8040, 8, 40) (torch.float16)
     key         : shape=(1, 8040, 8, 40) (torch.float16)
     value       : shape=(1, 8040, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (7, 0) but your GPU has capability (6, 1) (too old)
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`[email protected]` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40

Time taken: 3.0 sec.

 

How to solve this?

Link to comment
13 hours ago, Max-SDU said:

My GPU is P104-100, when  I am trying to generate a picture, something wrong appear:

 

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 8040, 8, 40) (torch.float16)
     key         : shape=(1, 8040, 8, 40) (torch.float16)
     value       : shape=(1, 8040, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (7, 0) but your GPU has capability (6, 1) (too old)
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`[email protected]` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40

Time taken: 3.0 sec.

 

How to solve this?

I'm getting the same issue with a 2080 Super that I just installed in my own server, using Automatic1111's WebUI.

Link to comment

@Max-SDU and @HealthCareUSA: This thread is for container support more than anything, whereas your issue seems to be with the WebUI itself. Also, @Max-SDU, you don't mention which front end you're using when you see this error.

 

Either way, doing a quick search for the term:

NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs

 

Finds me this as the first result, an issue on the Automatic1111 GitHub where there are lots of mentions of deleting the venv and reinstalling. Maybe try that first, and if it's not enough to solve your problem, try opening an issue on the relevant GitHub.
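For what it's worth, the operative part of that wall of text is the capability check: every xformers kernel is refused because the card reports compute capability (6, 1). A quick way to pull just those lines out of a log; the here-doc below stands in for your real log file, so in practice pipe the container log in instead:

```shell
# Extract the "capability" lines so the GPU-too-old mismatch is obvious.
grep -oE "your GPU has capability \([0-9], [0-9]\)" <<'EOF' | sort -u
requires device with capability > (7, 0) but your GPU has capability (6, 1) (too old)
requires device with capability > (8, 0) but your GPU has capability (6, 1) (too old)
EOF
# → your GPU has capability (6, 1)
```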

 

In fact, at the first sign of trouble I've got into the habit of deleting the venv before anything else. Yes, it takes a while, but it's not a step you can discount, since it can fix a lot from one version to another.
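Deleting the venv forces a clean reinstall of the WebUI's Python packages on the next start. A sketch; the path is an assumption based on this container's layout (the 02-sd-webui folder shows up in logs elsewhere in the thread), so adjust the folder to the UI you actually run, and stop the container first:

```shell
# Assumed host-side path to A1111's venv in this container's appdata;
# removing it triggers a clean dependency reinstall on the next launch.
VENV=/mnt/user/appdata/stable-diffusion/02-sd-webui/webui/venv
rm -rf "$VENV"
```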

 

Also, have you tried WebUI Forge? It's much more memory-efficient than standard A1111 (though you do need a decent amount of RAM), and it has other bug fixes and optimisations as well.

Link to comment
On 12/29/2023 at 11:14 AM, domrockt said:

Can the Intel Arc A770 be supported? AFAIK there are modifications that allow it.

 

Regards, Dom

 

(I got it working with a regular Docker command and Portainer)

(converted a Docker container to Unraid)

Would you be able to point me in that direction? I have an arc card as well that I wanted to try out.

Link to comment
10 minutes ago, Italiandevil0505 said:

Would you be able to point me in that direction? I have an arc card as well that I wanted to try out.

 

I don't have the Arc GPU anymore. It did work, but only "OK" for most AI use cases. Here is my Docker template, and I used this repo:

 

I now use my RTX A4500 for that.

 

nuullll/ipex-arc-sd:latest

template.thumb.png.7fbec9637915cd7c3f2dc9ba82442c30.png

 

Edited by domrockt
Link to comment
On 3/8/2024 at 8:56 AM, domrockt said:

 

I don't have the Arc GPU anymore. It did work, but only "OK" for most AI use cases. Here is my Docker template, and I used this repo:

 

I now use my RTX A4500 for that.

 

 

 

 

Thanks a bunch, I'll give it a try. That's a hell of an upgrade for you haha

Link to comment

StableSwarmUI migrated to .NET 8 in one of its more recent versions, and it now fails to build because the install script still uses the .NET 7 SDK.

 

MSBuild version 17.4.8+6918b863a for .NET
  Determining projects to restore...
/usr/lib/dotnet/sdk/7.0.116/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.TargetFrameworkInference.targets(144,5): error NETSDK1045: The current .NET SDK does not support targeting .NET 8.0.  Either target .NET 7.0 or lower, or use a version of the .NET SDK that supports .NET 8.0. [/config/07-StableSwarm/StableSwarmUI/src/StableSwarmUI.csproj]

Build FAILED.

/usr/lib/dotnet/sdk/7.0.116/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.TargetFrameworkInference.targets(144,5): error NETSDK1045: The current .NET SDK does not support targeting .NET 8.0.  Either target .NET 7.0 or lower, or use a version of the .NET SDK that supports .NET 8.0. [/config/07-StableSwarm/StableSwarmUI/src/StableSwarmUI.csproj]
    0 Warning(s)
    1 Error(s)

Time Elapsed 00:00:00.69
error in webui selection variable

 

Can this be updated, or even made version-agnostic for future-proofing?
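One way to make the script version-agnostic would be to read the target framework out of the project file instead of hard-coding .NET 7. This is only a sketch under that assumption (a stand-in csproj is written to /tmp so the snippet is self-contained):

```shell
# Write a stand-in csproj shaped like the one StableSwarmUI ships...
cat > /tmp/demo.csproj <<'EOF'
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
</Project>
EOF

# ...then extract TargetFramework rather than assuming a fixed version.
TARGET=$(grep -oP '(?<=<TargetFramework>)[^<]+' /tmp/demo.csproj)
echo "project targets ${TARGET}"   # → project targets net8.0
```

The install script could then fetch a matching SDK (for example via Microsoft's dotnet-install.sh with the appropriate channel) instead of breaking on the next framework bump.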

Link to comment

I updated to the latest version and got these in the log:

 

Building wheels for collected packages: insightface
  Building wheel for insightface (pyproject.toml): started
  Building wheel for insightface (pyproject.toml): finished with status 'error'
  error: subprocess-exited-with-error
  
  × Building wheel for insightface (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [221 lines of output]
      WARNING: pandoc not enabled
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-cpython-312
      creating build/lib.linux-x86_64-cpython-312/insightface
      copying insightface/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface
      creating build/lib.linux-x86_64-cpython-312/insightface/app
      copying insightface/app/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/app
      copying insightface/app/common.py -> build/lib.linux-x86_64-cpython-312/insightface/app
      copying insightface/app/face_analysis.py -> build/lib.linux-x86_64-cpython-312/insightface/app
      copying insightface/app/mask_renderer.py -> build/lib.linux-x86_64-cpython-312/insightface/app
      creating build/lib.linux-x86_64-cpython-312/insightface/commands
      copying insightface/commands/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/commands
      copying insightface/commands/insightface_cli.py -> build/lib.linux-x86_64-cpython-312/insightface/commands
      copying insightface/commands/model_download.py -> build/lib.linux-x86_64-cpython-312/insightface/commands
      copying insightface/commands/rec_add_mask_param.py -> build/lib.linux-x86_64-cpython-312/insightface/commands
      creating build/lib.linux-x86_64-cpython-312/insightface/data
      copying insightface/data/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/data
      copying insightface/data/image.py -> build/lib.linux-x86_64-cpython-312/insightface/data
      copying insightface/data/pickle_object.py -> build/lib.linux-x86_64-cpython-312/insightface/data
      copying insightface/data/rec_builder.py -> build/lib.linux-x86_64-cpython-312/insightface/data
      creating build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/arcface_onnx.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/attribute.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/inswapper.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/landmark.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/model_store.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/model_zoo.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/retinaface.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      copying insightface/model_zoo/scrfd.py -> build/lib.linux-x86_64-cpython-312/insightface/model_zoo
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty
      copying insightface/thirdparty/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty
      creating build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/constant.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/download.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/face_align.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/filesystem.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/storage.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      copying insightface/utils/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/utils
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d
      copying insightface/thirdparty/face3d/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      copying insightface/thirdparty/face3d/mesh/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/io.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/light.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/render.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/transform.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      copying insightface/thirdparty/face3d/mesh_numpy/vis.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh_numpy
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model
      copying insightface/thirdparty/face3d/morphable_model/__init__.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model
      copying insightface/thirdparty/face3d/morphable_model/fit.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model
      copying insightface/thirdparty/face3d/morphable_model/load.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model
      copying insightface/thirdparty/face3d/morphable_model/morphabel_model.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/morphable_model
      running egg_info
      writing insightface.egg-info/PKG-INFO
      writing dependency_links to insightface.egg-info/dependency_links.txt
      writing entry points to insightface.egg-info/entry_points.txt
      writing requirements to insightface.egg-info/requires.txt
      writing top-level names to insightface.egg-info/top_level.txt
      reading manifest file 'insightface.egg-info/SOURCES.txt'
      writing manifest file 'insightface.egg-info/SOURCES.txt'
      /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.data.images' is absent from the `packages` configuration.
      !!
      
              ********************************************************************************
              ############################
              # Package would be ignored #
              ############################
              Python recognizes 'insightface.data.images' as an importable package[^1],
              but it is absent from setuptools' `packages` configuration.
      
              This leads to an ambiguous overall configuration. If you want to distribute this
              package, please make sure that 'insightface.data.images' is explicitly added
              to the `packages` configuration field.
      
              Alternatively, you can also rely on setuptools' discovery methods
              (for example by using `find_namespace_packages(...)`/`find_namespace:`
              instead of `find_packages(...)`/`find:`).
      
              You can read more about "package discovery" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
      
              If you don't want 'insightface.data.images' to be distributed and are
              already explicitly excluding 'insightface.data.images' via
              `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
              you can try to use `exclude_package_data`, or `include-package-data=False` in
              combination with a more fine grained `package-data` configuration.
      
              You can read more about "package data files" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/datafiles.html
      
      
              [^1]: For Python, any directory (with suitable naming) can be imported,
                    even if it does not contain any `.py` files.
                    On the other hand, currently there is no concept of package data
                    directory, all directories are treated like packages.
              ********************************************************************************
      
      !!
        check.warn(importable)
      /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.data.objects' is absent from the `packages` configuration.
      !!
      
              ********************************************************************************
              ############################
              # Package would be ignored #
              ############################
              Python recognizes 'insightface.data.objects' as an importable package[^1],
              but it is absent from setuptools' `packages` configuration.
      
              This leads to an ambiguous overall configuration. If you want to distribute this
              package, please make sure that 'insightface.data.objects' is explicitly added
              to the `packages` configuration field.
      
              Alternatively, you can also rely on setuptools' discovery methods
              (for example by using `find_namespace_packages(...)`/`find_namespace:`
              instead of `find_packages(...)`/`find:`).
      
              You can read more about "package discovery" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
      
              If you don't want 'insightface.data.objects' to be distributed and are
              already explicitly excluding 'insightface.data.objects' via
              `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
              you can try to use `exclude_package_data`, or `include-package-data=False` in
              combination with a more fine grained `package-data` configuration.
      
              You can read more about "package data files" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/datafiles.html
      
      
Failed to build insightface
              [^1]: For Python, any directory (with suitable naming) can be imported,
                    even if it does not contain any `.py` files.
                    On the other hand, currently there is no concept of package data
                    directory, all directories are treated like packages.
              ********************************************************************************
      
      !!
        check.warn(importable)
      /tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'insightface.thirdparty.face3d.mesh.cython' is absent from the `packages` configuration.
      !!
      
              ********************************************************************************
              ############################
              # Package would be ignored #
              ############################
              Python recognizes 'insightface.thirdparty.face3d.mesh.cython' as an importable package[^1],
              but it is absent from setuptools' `packages` configuration.
      
              This leads to an ambiguous overall configuration. If you want to distribute this
              package, please make sure that 'insightface.thirdparty.face3d.mesh.cython' is explicitly added
              to the `packages` configuration field.
      
              Alternatively, you can also rely on setuptools' discovery methods
              (for example by using `find_namespace_packages(...)`/`find_namespace:`
              instead of `find_packages(...)`/`find:`).
      
              You can read more about "package discovery" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
      
              If you don't want 'insightface.thirdparty.face3d.mesh.cython' to be distributed and are
              already explicitly excluding 'insightface.thirdparty.face3d.mesh.cython' via
              `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
              you can try to use `exclude_package_data`, or `include-package-data=False` in
              combination with a more fine grained `package-data` configuration.
      
              You can read more about "package data files" on setuptools documentation page:
      
              - https://setuptools.pypa.io/en/latest/userguide/datafiles.html
      
      
              [^1]: For Python, any directory (with suitable naming) can be imported,
                    even if it does not contain any `.py` files.
                    On the other hand, currently there is no concept of package data
                    directory, all directories are treated like packages.
              ********************************************************************************
      
      !!
        check.warn(importable)
      creating build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/Tom_Hanks_54745.png -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/mask_black.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/mask_blue.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/mask_green.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/mask_white.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      copying insightface/data/images/t1.jpg -> build/lib.linux-x86_64-cpython-312/insightface/data/images
      creating build/lib.linux-x86_64-cpython-312/insightface/data/objects
      copying insightface/data/objects/meanshape_68.pkl -> build/lib.linux-x86_64-cpython-312/insightface/data/objects
      creating build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/mesh_core.h -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.c -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpp -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.pyx -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      copying insightface/thirdparty/face3d/mesh/cython/setup.py -> build/lib.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      running build_ext
      building 'insightface.thirdparty.face3d.mesh.cython.mesh_core_cython' extension
      creating build/temp.linux-x86_64-cpython-312
      creating build/temp.linux-x86_64-cpython-312/insightface
      creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty
      creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d
      creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh
      creating build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython
      gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Iinsightface/thirdparty/face3d/mesh/cython -I/tmp/pip-build-env-k2d9v2k7/overlay/lib/python3.12/site-packages/numpy/core/include -I/home/abc/miniconda3/include/python3.12 -c insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp -o build/temp.linux-x86_64-cpython-312/insightface/thirdparty/face3d/mesh/cython/mesh_core.o
      error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for insightface
ERROR: Could not build wheels for insightface, which is required to install pyproject.toml-based projects

 

And:

Building wheels for collected packages: lmdb
  Building wheel for lmdb (setup.py): started
  Building wheel for lmdb (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [22 lines of output]
      py-lmdb: Using bundled liblmdb with py-lmdb patches; override with LMDB_FORCE_SYSTEM=1 or LMDB_PURE=1.
      patching file lmdb.h
      patching file mdb.c
      py-lmdb: Using CPython extension; override with LMDB_FORCE_CFFI=1.
      running bdist_wheel
      running build
      running build_py
      creating build/lib.linux-x86_64-cpython-312
      creating build/lib.linux-x86_64-cpython-312/lmdb
      copying lmdb/__init__.py -> build/lib.linux-x86_64-cpython-312/lmdb
      copying lmdb/__main__.py -> build/lib.linux-x86_64-cpython-312/lmdb
      copying lmdb/_config.py -> build/lib.linux-x86_64-cpython-312/lmdb
      copying lmdb/cffi.py -> build/lib.linux-x86_64-cpython-312/lmdb
      copying lmdb/tool.py -> build/lib.linux-x86_64-cpython-312/lmdb
      running build_ext
      building 'cpython' extension
      creating build/temp.linux-x86_64-cpython-312
      creating build/temp.linux-x86_64-cpython-312/build
      creating build/temp.linux-x86_64-cpython-312/build/lib
      creating build/temp.linux-x86_64-cpython-312/lmdb
      gcc -pthread -B /home/abc/miniconda3/compiler_compat -fno-strict-overflow -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -O2 -isystem /home/abc/miniconda3/include -fPIC -Ilib/py-lmdb -Ibuild/lib -I/home/abc/miniconda3/include/python3.12 -c build/lib/mdb.c -o build/temp.linux-x86_64-cpython-312/build/lib/mdb.o -DHAVE_PATCHED_LMDB=1 -UNDEBUG -w
      error: command '/config/02-sd-webui/conda-env/bin/gcc' failed: Permission denied
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for lmdb
  Running setup.py clean for lmdb
Failed to build lmdb
ERROR: Could not build wheels for lmdb, which is required to install pyproject.toml-based projects

 

And:

CUDA Stream Activated:  True
Traceback (most recent call last):
  File "/config/02-sd-webui/forge/launch.py", line 51, in <module>
    main()
  File "/config/02-sd-webui/forge/launch.py", line 47, in main
    start()
  File "/config/02-sd-webui/forge/modules/launch_utils.py", line 541, in start
    import webui
  File "/config/02-sd-webui/forge/webui.py", line 19, in <module>
    initialize.imports()
  File "/config/02-sd-webui/forge/modules/initialize.py", line 53, in imports
    from modules import processing, gradio_extensons, ui  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/02-sd-webui/forge/modules/processing.py", line 18, in <module>
    import modules.sd_hijack
  File "/config/02-sd-webui/forge/modules/sd_hijack.py", line 5, in <module>
    from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
  File "/config/02-sd-webui/forge/modules/sd_hijack_optimizations.py", line 13, in <module>
    from modules.hypernetworks import hypernetwork
  File "/config/02-sd-webui/forge/modules/hypernetworks/hypernetwork.py", line 13, in <module>
    from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
  File "/config/02-sd-webui/forge/modules/sd_models.py", line 20, in <module>
    from modules_forge import forge_loader
  File "/config/02-sd-webui/forge/modules_forge/forge_loader.py", line 5, in <module>
    from ldm_patched.modules import model_detection
  File "/config/02-sd-webui/forge/ldm_patched/modules/model_detection.py", line 5, in <module>
    import ldm_patched.modules.supported_models
  File "/config/02-sd-webui/forge/ldm_patched/modules/supported_models.py", line 5, in <module>
    from . import model_base
  File "/config/02-sd-webui/forge/ldm_patched/modules/model_base.py", line 6, in <module>
    from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep
  File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 22, in <module>
    from ..attention import SpatialTransformer, SpatialVideoTransformer, default
  File "/config/02-sd-webui/forge/ldm_patched/ldm/modules/attention.py", line 21, in <module>
    import xformers
ModuleNotFoundError: import of xformers halted; None in sys.modules
/entry.sh: line 11: /02.forge: No such file or directory
/entry.sh: line 12: /config/scripts/02.forge: No such file or directory
/entry.sh: line 13: /config/scripts/02.forge.sh: No such file or directory
error in webui selection variable

App is starting!
Channels:
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done

 

Then it just boot-loops endlessly. This is after deleting the venv, so it's a clean install.

 

I changed the tag back to :3.1.0 and that installs and runs perfectly fine.
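Both wheel builds above die on the same thing: executing /config/02-sd-webui/conda-env/bin/gcc returns "Permission denied". Before pinning the old tag, it may be worth checking the execute bit on that file (note that if the share it lives on is mounted noexec, chmod won't help, since that's a mount option issue). A sketch, with the path taken straight from the log:

```shell
# Check the compiler the build is failing on; restore the execute bit if it
# was lost (e.g. by a copy or restore that dropped file permissions).
GCC=/config/02-sd-webui/conda-env/bin/gcc
if [ -e "$GCC" ] && [ ! -x "$GCC" ]; then
    chmod +x "$GCC"
fi
```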

Link to comment
