
[Support] Comfyui (Nvidia) Docker



ComfyUI is a web UI for Stable Diffusion.

Since a Flux example was recently added, I created this container builder to test it.

As I built it with support for specifying the UID and GID, I made an Unraid version :)


2024-09-02: Approved for CA :)
 

 

After installing the tool from Community Apps and letting it run for the first time (to install all the required Python packages), please restart it.

This restart lets ComfyUI Manager adjust its settings so missing custom nodes can be installed.

 

The source code is public and can be found at https://github.com/mmartial/ComfyUI-Nvidia-Docker

Docker builds are released at https://hub.docker.com/r/mmartial/comfyui-nvidia-docker

 

The "run directory" is where the needed folders will be created (please see the GitHub README for up-to-date information):

- HF is the expected location of HF_HOME (the HuggingFace installation directory)

- ComfyUI is the git clone of the tool, with all its sub-directories, among which:

  - custom_nodes for additional support nodes, for example ComfyUI-Manager,

  - models and all its sub-directories, where checkpoints, clip, loras, unet, etc. have to be placed,

  - input and output, where input images are to be placed and generated images will end up.

- venv is the virtual environment where all the required Python packages for ComfyUI and other additions will be placed. A default ComfyUI installation requires about 5 GB of packages on top of the container itself; those packages live in this venv folder.


All those folders will be created using the WANTED_UID and WANTED_GID parameters (defaulting to Unraid's 99:100), allowing the end user to place checkpoints, unet, LoRA and other required models directly into the "run/ComfyUI/models" sub-folders.

 

Note that the base container does not include weights/models; you must obtain those and install them in the proper directories under the mount you have selected for the "run" folder.
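As a sketch, composing the model paths on the host side might look like this (the appdata path is the one used later in this thread; your own mapping may differ, so treat it as an assumption):

```shell
# Sketch: where model files land under the mount selected for the "run" folder.
# The appdata path below is the one shown later in this thread; adjust to your mapping.
RUN=/mnt/user/appdata/comfyui-nvidia/mnt          # host-side "run" directory (assumption)
CKPT="$RUN/ComfyUI/models/checkpoints"            # checkpoints go here
LORAS="$RUN/ComfyUI/models/loras"                 # LoRA files go here
echo "$CKPT"                                      # prints the checkpoints path
# To fetch a model into place (uncomment to run):
# wget -P "$CKPT" https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
```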

 

Output files will be placed into the "run/ComfyUI/output" folder.

 

Hopefully, you will enjoy it. Please share your ideas.

FYSA: if interested, the template is available at
https://raw.githubusercontent.com/mmartial/unraid-templates/main/templates/ComfyUI-Nvidia-Docker.xml

Screenshot 2024-09-02 at 2.33.37 PM.png

Edited by martial

[FAQ]

 

To avoid maintaining multiple sources of truth (which, from past experience, diverge fast :) ), please check the main README.md at https://github.com/mmartial/ComfyUI-Nvidia-Docker and the extra FAQ at https://github.com/mmartial/ComfyUI-Nvidia-Docker/blob/main/extras/FAQ.md in addition to this support thread.

 

----- First start

The tool will download the latest ComfyUI source code from GitHub, install all its requirements in a Python virtualenv, then do the same for the ComfyUI-Manager custom_node. This step takes some time, as it downloads all the required Python packages. Everything is installed in the "run" directory: "run/venv" holds the Python packages and "run/ComfyUI" holds the ComfyUI sources.

You must restart the container after its first start for ComfyUI-Manager to enable custom_nodes installation.

Link: Direct link to section on the project's GitHub page

 

 

----- Custom init script

If you have an RTX 4000-series GPU, you might want to start ComfyUI with --fast.

Create a "run/user_script.bash" file and adapt it following the instructions on the project's GitHub.

 

 

----- First run
The first time you use the WebUI, you should see the bottle example.

This example requires the "v1-5-pruned-emaonly.ckpt" file.

It is available, for example, at https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt

 

To get the WebUI to see it, first put it in the models/checkpoints folder:

cd /mnt/user/appdata/comfyui-nvidia/mnt/ComfyUI/models/checkpoints

wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt

 

After the download is complete, click "Refresh" in the WebUI, then "Queue Prompt".

Link: Direct link to section on the project's GitHub

 

 

----- Re-installing ComfyUI or cleaning up virtualenv

If you encounter trouble with the container and want to reinstall ComfyUI or clean up the Python package installation directory:

Do not delete the `run` directory, as you will lose all your models, output images, etc.

Instead, I recommend stopping the container, renaming `run` as `run.old`, and re-running the container. It will re-install everything cleanly.

You will then still have the content of `run.old` (models, VAEs, LoRAs, outputs, ...) under the `run.old/ComfyUI` base.
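The stop/rename/restart sequence can be sketched as follows (the container name and host path are assumptions from this thread; adjust to your setup):

```shell
#!/bin/bash
# Sketch of the clean re-install flow described above.
# docker stop ComfyUI-Nvidia-Docker          # 1) stop the container first (name is an assumption)

reset_run() {   # 2) rename the run directory so the next start rebuilds a fresh one
  mv "$1" "$1.old"
}
# reset_run /mnt/user/appdata/comfyui-nvidia/mnt   # host path shown elsewhere in this thread

# docker start ComfyUI-Nvidia-Docker         # 3) re-installs everything cleanly
```

Your models and outputs remain available under the renamed `.old` directory, ready to be copied back.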

 

 

----- Updating Comfy

ComfyUI-Manager is here to help with this process.

The "Manager" button provides a few useful options, among them the "Update All", "Update ComfyUI", and "Fetch Updates" buttons.

The git commit of your ComfyUI (and its date) will be shown in the info box on the right of the manager UI.

Please see ComfyUI-Manager's "How to Use" section for details.

 

 

 

Edited by martial
27 minutes ago, birdwatcher said:

Thanks for your efforts! I'm really looking forward to trying out Flux.

Has this been indexed in CA? When I search for ComfyUi the only result is ComfyUi-Magic by futrlabsmagic.

 

Hello, I submitted it to CA about a week ago.

I understand the team has to do a review before allowing new content, so I hope it will be released shortly.

 

The wrapper code is public (it builds a Docker base, then pulls content from ComfyUI's and ComfyUI-Manager's GitHub repos) on the GitHub listed in the first post.

 

The template file is public if you know how to add it manually.

https://raw.githubusercontent.com/mmartial/unraid-templates/main/templates/ComfyUI-Nvidia-Docker.xml

 

 


You are my hero. For realz man. 

 

ComfyUI-Magic: with that docker I was somehow able to make the Flux models work, until I updated ComfyUI with Manager. It broke, and I was going crazy trying to get it working again. Not even clean installs worked. 

 

Your docker saved me so much headache and pain. 

 

To install the docker, just follow the instructions from the InvokeAI post: same thing, copy the XML file to your USB config and run it. That InvokeAI docker has worked well too, another alternative for running models.

 

Regardless, your docker just works. 

Checkpoint models, and the ones that require the VAE/CLIP, all load. No more errors when loading workflows from the Flux examples.

 

I'm so beyond grateful mate. Now I can stop using ComfyUI on Windows and run this docker. 

 

Again, thanks so much for your post.

8 hours ago, unraidrocks said:

ComfyUI-Magic: This docker, somehow, I was able to make the Flux models work until I updated ComfyUI with manager. It broke and was going crazy trying to get it to work. Not even clean installs work.

[...]

To install the docker, just follow the instructions from this post InvokeAI, same thing, have to copy the XML file to your USB config and just run it. This InvokeAI has worked well too, another alternative to run models.

[...]

I'm so beyond grateful mate. Now I can stop using ComfyUI on Windows and run this docker.

 

Thank you for the kind words. I am delighted to hear the tool is useful. :)
 

I'm sorry to hear about the trouble.

If you encounter trouble with this one, I recommend stopping the container, renaming `run` as `run.old`, and re-running the container. It will re-install everything cleanly.

You will then have the content of `run.old` for all the models, vae, loras,... and outputs in the `run.old/ComfyUI` base.


I was also using a frontend on Windows, but the amount of VRAM used by other tools was killing the performance of any runs.

I originally built the container on an Ubuntu system and tested Flux1Dev (full fp16) on a 3090 without any other UI. The speed-up is real; we are talking less than 30 seconds per image generation compared to the minutes I kept reading about.

 

I am familiar with Forge, Invoke, Fooocus, but the nodes/workflows worked best for me.

 




I have a question about updating the docker to support this and future models from Flux and beyond. Inside the docker, the pip command works and updated the requirements. However, when I type "python" the docker doesn't seem to recognize python as installed. If I ever need to run python commands inside the docker, can you please provide details on how to do that? 

 

I'm trying to juggle between this docker and ComfyUI on Windows. Most videos and documentation online have steps for installing on Windows, so I have to figure out how to translate the Windows instructions to this docker. Still trying to get this to work, but at least updating ComfyUI from Manager didn't break anything. 

 

python3 -s -m pip install -U bitsandbytes --user

 

Turns out that bitsandbytes needs to be installed not only through pip but inside the Python environment as well. I tried that command and it seems to have installed. However, the NF4 custom node just can't be loaded and doesn't recognize bitsandbytes: it can't find the module. 

 

Not sure if you've had a chance to try the Flux NF4 models, but they require some manual steps to update ComfyUI. 

 

Thanks again.

Edited by unraidrocks

I have not tried the NF4 node; I tried FP16 (and FP8, I believe) and they worked on my GPU.
 

The Python packages are installed via a virtual environment, so it is possible to "activate" it, and any install will then be done alongside the already-installed components. I documented this on GitHub at https://github.com/mmartial/ComfyUI-Nvidia-Docker?tab=readme-ov-file#51-virtualenv

Basically, use the menu dropdown to get a shell on the container (making sure it is bash), then source the venv/bin/activate.
Any "pip3 install" command will then add packages to the virtual env.
 

The same thing should work from the WebUI with ComfyUI Manager’s “pip install” option.

To make those changes repeatable if you update the container, I would also look at the "user_script.bash" section (right after the virtualenv one), as it was made to allow "at restart" modifications, such as pip package installs.
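A minimal sketch of such a "run/user_script.bash" (the venv path is the one used later in this thread, and the exact hook behavior is documented on the project's GitHub, so treat the details here as assumptions):

```shell
#!/bin/bash
# Sketch: a run/user_script.bash that re-applies pip installs on each restart.
# The /comfy/mnt/venv path comes from this thread; verify against the project's README.
VENV=/comfy/mnt/venv
if [ -f "$VENV/bin/activate" ]; then
  . "$VENV/bin/activate"             # activate the ComfyUI virtualenv
  pip3 install -U bitsandbytes       # example package discussed in this thread
fi
```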

Is there a step-by-step (text) version of that NF4 description I can look at, so I can try?

Edited by martial

 

https://civitai.com/models/638187/flux1-dev-v1v2-flux1-schnell-bnb-nf4?modelVersionId=721627

 

Above are the files for Flux NF4 and the model from CivitAI. I was curious to make this new model work, and that's where I ran into issues. It's not so straightforward. I stumbled on a Reddit post that explains what needs to be done to install the NF4 models and how to fix errors.

 

 

 

https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4

 

The above is the custom node that needs to be installed for ComfyUI, but you also need to install its requirements.txt using pip, and run the "bitsandbytes" install inside Python as well. 
 

python -s -m pip install -U bitsandbytes --user

 

I believe everything installed correctly in the Docker environment, but it will not work because it wasn't installed into "/comfy/mnt/venv". I need to update the venv just as you mentioned in your docker documentation. Still trying to figure all this out, but regardless.

 

Thanks again for response. 

 

 

7 hours ago, unraidrocks said:

 

https://civitai.com/models/638187/flux1-dev-v1v2-flux1-schnell-bnb-nf4?modelVersionId=721627

[...]

 

 

 


I have confirmed the steps can be done using ComfyUI-Manager directly; no need for the CLI.

 

I am going to refer to the steps listed as sections in https://github.com/mmartial/ComfyUI-Nvidia-Docker/blob/main/extras/FAQ.md
 

1) Update ComfyUI

2) Update ComfyUI-Manager

3) ComfyUI_bitsandbytes_NF4

 

I also note that NF4 was marked as deprecated in favor of GGUF, which can be directly installed using the Manager that ships with the tool.

I made an entry in that FAQ about it if you want to try it.

 

 










OLDER -- still useful, but see the FAQ
Thanks, that does help.

 

I am going to try and will update this post if/when I get it working, but FYSA, here are the current steps I took:

1) Update Comfy to the latest version

- Start ComfyUI

- Open "Manager"

- use "Update ComfyUI"

- check the container's logs, and wait for it to be completed (just to be safe)

- reload the webpage

- Open "Manager", and check the date of the last pulled git commit (the box on the right with the news, if you scroll down). Mine says 2024-09-03, so the very latest Comfy.

1b) It is recommended to update ComfyUI-Manager as well

from "Manager", select "Custom Nodes Manager", select "Installed", and use "Try Update" on "ComfyUI-Manager"; wait, check the logs, restart once prompted, and reload the page to be safe


[Step 2 is likely optional because I believe it would be done by Manager in step 3, but since I tested it, I am keeping it here for other people who would like to add packages]
2) Start installing the NF4 requirements:

- Either click on the icon in the Docker tab and use "> Console" (I recommend first making sure the "Shell" is set to "Bash" in the "Edit" options), or, from a shell on your Unraid system, find the name of your Comfy container (docker container ls). In general, it should be ComfyUI-Nvidia-Docker; if that is the case you can copy/paste: docker exec -it ComfyUI-Nvidia-Docker /bin/bash

- once within the running container, activate the venv: source /comfy/mnt/venv/bin/activate

- your terminal will now show a (venv) before the rest of the prompt

- we can install everything we want at this point; it will end up in the venv: pip3 install bitsandbytes

- verify installation if you want: pip3 freeze | grep bits

 returns bitsandbytes==0.43.3 on my system

 

3) Install the node itself (since I did step 2 first, mine said bitsandbytes was already installed, but it installed the node):

- From Manager, select “Custom Node Manager”

- "install from Git" and give it the URL: https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4.git

- after success, restart and reload page

Double-clicking on the canvas and searching for nf4 shows a new entry with NF4 in the title

 

Might have to restart the UI a couple of times... per the log, the first time it complained that bitsandbytes was missing, and after the second ComfyUI-Manager restart it installed it

 


4) Grab the weights (they go in checkpoints, not unet)

- In the `docker exec` terminal:

cd /comfy/mnt/ComfyUI/models/checkpoints

wget 'https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/resolve/main/flux1-dev-bnb-nf4-v2.safetensors?download=true' -O flux1-dev-bnb-nf4-v2.safetensors

 

You will also need the ae.safetensors file

 

I found a test Workflow https://openart.ai/workflows/mentor_ai/flux-nf4-comfyui-basic-workflow/7QgBrjXFKDO57w0orNVc

Seems to run within 8GB of VRAM and appears to generate results :)

 

 

Screenshot 2024-09-03 at 5.28.15 PM.png

Edited by martial

Yes, your instructions worked, and I was able to produce a Flux-NF4 image from the workflow provided. 

 

The most important lesson I learned was how to install/update custom Python packages inside the venv folder. I guess this applies to dockers and Linux boxes alike? This is big. I had followed the instructions but installed bitsandbytes into the docker, not the venv folder. That's why my ComfyUI always reported an error when loading NF4.

 

This has been a huge help. I'm also now very comfortable updating ComfyUI through the Manager. In my previous experience with another docker and the Windows version, everything just broke when I ran an update, so I've been scarred by that. 

 

If I can ask another question: LoRAs. I've applied them to checkpoints and they work, but I'm unable to use a LoRA on the full version of Flux Dev. I've attached an image with the workflow I tried. Do you happen to have a workflow that can use a LoRA?

 

Thanks again for your help 

ComfyUI_00407_.png

15 minutes ago, unraidrocks said:

Yes, your instructions worked and I was able to produce a Flux-NF4 from workflow provided. 

 

[...]

 

If I can ask another question, Lora, I've applied them to Checkpoints and work, I'm unable to use Lora on full version of Flux Dev. I've attached an image with workflow I've tried using. Do you happen to have a Workflow that can use Lora?

 

Thanks again for your help 

 

Glad to hear you got NF4 working.

I saw on the repo that they recommend people use GGUF, by the way.

 

I used a custom node named "Power Lora" from "rgthree". I am attaching an image of the workflow.

The workflow itself is embedded in this image https://raw.githubusercontent.com/mmartial/ComfyUI-Nvidia-Docker/main/workflow/ComfyUI-Flux1Dev-ExtendedWorkflow.png

When you drop it in Comfy, the missing node will be displayed; you can use Manager to "Install Missing Nodes" (gotta love Manager)

That workflow was used for a Flux LoRA I trained on myself (which is kind of why I started building this container: to gain as much free VRAM as possible :) )

It is at https://blg.gkr.one/20240818-flux_lora_training/ if you are curious.

 

 

As for the detailed documentation above: my pleasure. I am never sure what people will try, so I can only document it when people ask.
I tried to build the container to be "recoverable" (at worst, as I mentioned in the FAQ, delete the venv and restart the container; it should re-download and re-install everything).
 

The "Manager" is a potent tool for performing software installations, especially if the node is in the list found by "Search." 

Comfy-Flux1Dev_alternate.png

Edited by martial

Is the 3090 good enough to do training or is it better to get a 4090?

 

I finally hit a wall with that LoRA workflow: the card I'm using, a 3060 Ti, can't handle more than one LoRA or else I get out-of-memory errors. 

 

I need this 5090 to come out now. 

 

I'm also downloading the GGUF models to try them out. My card is finally hitting a wall; I have to lower the output to 512x512. 

 

I'll reach out if I have any questions.

 

Thanks

8 minutes ago, unraidrocks said:

Is the 3090 good enough to do training or is it better to get a 4090?

 

[...]

 

The 3090 is more than enough to train this LoRA; you need a GPU with at least 24GB of memory.
Although it takes 4 hours to train on a 3090, the result is stunning. 

Many people rent a GPU for a few hours to do this training, so investing in a 3090 is not necessary.

 

I ran the training on an Ubuntu 24.04 system, not Unraid, although I believe it could be done using the base container I use for Comfy, from a shell within.

 

Edited by martial

Got another question, not related to the docker; it has more to do with workflows and image imports. 

 

I've noticed some workflows show a warning icon saying they will conflict with other plugins installed in ComfyUI. 

 

Is it safe to import workflows and install missing custom nodes even if there is a conflict?

Will the conflict cause ComfyUI to break?

Is the conflict just about incompatible nodes used in the workflow, or about incompatible nodes already existing in ComfyUI?

 

I've tried searching and don't really have a clear answer, so I've been cautious about adding workflows and installing nodes that show the warning. 

 

Thanks 


Just completed training my first model. It really is impressive. I had purchased an image generator from a website; this model is just as good. I just wish I had discovered this earlier. But regardless... 

 

The only tricky part was setting up the venv for the trainer. I kept getting a weird error: the command just didn't detect the video card in the docker. I had to play around with removing/downgrading certain pip requirements. I don't know which package I downgraded or upgraded, but eventually it worked. The only other important thing is getting the Hugging Face READ token. It was all downhill from there. 

 

 

Package                   Version
------------------------- -----------
absl-py                   2.1.0
accelerate                0.34.2
aiofiles                  23.2.1
albucore                  0.0.15
albumentations            1.4.15
annotated-types           0.7.0
antlr4-python3-runtime    4.9.3
anyio                     4.4.0
attrs                     24.2.0
bitsandbytes              0.43.3
certifi                   2024.8.30
cffi                      1.17.1
charset-normalizer        3.3.2
clean-fid                 0.1.35
click                     8.1.7
clip-anytorch             2.6.0
contourpy                 1.3.0
controlnet_aux            0.0.7
cycler                    0.12.1
dctorch                   0.1.2
diffusers                 0.31.0.dev0
docker-pycreds            0.4.0
einops                    0.8.0
eval_type_backport        0.2.0
exceptiongroup            1.2.2
fastapi                   0.114.1
ffmpy                     0.4.0
filelock                  3.16.0
flatbuffers               24.3.25
flatten-json              0.1.14
fonttools                 4.53.1
fsspec                    2024.9.0
ftfy                      6.2.3
gitdb                     4.0.11
GitPython                 3.1.43
gradio                    4.44.0
gradio_client             1.3.0
grpcio                    1.66.1
h11                       0.14.0
hf_transfer               0.1.8
httpcore                  1.0.5
httpx                     0.27.2
huggingface-hub           0.24.7
idna                      3.8
imageio                   2.35.1
importlib_metadata        8.5.0
importlib_resources       6.4.5
inquirerpy                0.3.4
invisible-watermark       0.2.0
jax                       0.4.32
jaxlib                    0.4.32
Jinja2                    3.1.4
jsonmerge                 1.9.2
jsonschema                4.23.0
jsonschema-specifications 2023.12.1
k-diffusion               0.1.1.post1
kiwisolver                1.4.7
kornia                    0.7.3
kornia_rs                 0.1.5
lazy_loader               0.4
lpips                     0.1.4
lycoris_lora              1.8.3
Markdown                  3.7
markdown-it-py            3.0.0
MarkupSafe                2.1.5
matplotlib                3.9.2
mdurl                     0.1.2
mediapipe                 0.10.15
ml-dtypes                 0.4.0
mpmath                    1.3.0
networkx                  3.3
ninja                     1.11.1.1
numpy                     1.26.4
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu12         9.1.0.70
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu12          2.20.5
nvidia-nvjitlink-cu12     12.6.68
nvidia-nvtx-cu12          12.1.105
omegaconf                 2.3.0
open_clip_torch           2.26.1
opencv-contrib-python     4.10.0.84
opencv-python             4.10.0.84
opencv-python-headless    4.9.0.80
opt-einsum                3.3.0
optimum-quanto            0.2.4
orjson                    3.10.7
oyaml                     1.0
packaging                 24.1
pandas                    2.2.2
peft                      0.12.0
pfzy                      0.3.4
pillow                    10.4.0
pip                       22.0.2
platformdirs              4.3.2
prodigyopt                1.0
prompt_toolkit            3.0.47
protobuf                  4.25.4
psutil                    6.0.0
py-cpuinfo                9.0.0
pycparser                 2.22
pydantic                  2.9.1
pydantic_core             2.23.3
pydub                     0.25.1
Pygments                  2.18.0
pyparsing                 3.1.4
python-dateutil           2.9.0.post0
python-dotenv             1.0.1
python-multipart          0.0.9
python-slugify            8.0.4
pytorch-fid               0.3.0
pytz                      2024.2
PyWavelets                1.7.0
PyYAML                    6.0.2
referencing               0.35.1
regex                     2024.9.11
requests                  2.32.3
rich                      13.8.1
rpds-py                   0.20.0
ruff                      0.6.4
safetensors               0.4.5
scikit-image              0.24.0
scipy                     1.14.1
seaborn                   0.13.2
semantic-version          2.10.0
sentencepiece             0.2.0
sentry-sdk                2.14.0
setproctitle              1.3.3
setuptools                59.6.0
shellingham               1.5.4
six                       1.16.0
smmap                     5.0.1
sniffio                   1.3.1
sounddevice               0.5.0
starlette                 0.38.5
sympy                     1.13.2
tensorboard               2.17.1
tensorboard-data-server   0.7.2
text-unidecode            1.3
tifffile                  2024.8.30
timm                      1.0.9
tokenizers                0.19.1
toml                      0.10.2
tomlkit                   0.12.0
torch                     2.4.1
torchdiffeq               0.2.4
torchsde                  0.2.6
torchvision               0.19.1
tqdm                      4.66.5
trampoline                0.1.2
transformers              4.44.2
triton                    3.0.0
typer                     0.12.5
typing_extensions         4.12.2
tzdata                    2024.1
ultralytics               8.2.92
ultralytics-thop          2.0.6
urllib3                   2.2.3
uvicorn                   0.30.6
wandb                     0.18.0
wcwidth                   0.2.13
websockets                12.0
Werkzeug                  3.0.4
zipp                      3.20.1

 

 

Backing up the training venv. Made so much progress in one month, and I don't regret buying the 3090. The second the 5090 is out, I'm getting it. 

 

This docker rocks man. 

 

 

 

This is the error I got at first before the fix: 

libGL.so.1: cannot open shared object file: No such file or directory

 

This was the post that helped me solve the error: 

https://github.com/ultralytics/ultralytics/issues/1270

 

Trial and error: I played around with uninstalling/upgrading, but I posted the pip list of the working setup with the 3090 docker above.

Edited by unraidrocks
9 hours ago, unraidrocks said:

[...]

 

This is the error I got at first before the fix: 

libGL.so.1: cannot open shared object file: No such file or directory

 

This was the post that helped me solve the error: 

https://github.com/ultralytics/ultralytics/issues/1270

 

Trial and error, I played around with uninstalling/upgrading. But posted pip list of working results with the 3090 docker.


Wonderful to hear. I am always curious about the workflows people come up with.

I had this `libGL.so.1` issue recently with "AdvancedLivePortrait" and solved it via the `user_script.bash` script:

 

echo "== Adding system package"
DEBIAN_FRONTEND=noninteractive sudo apt-get update
DEBIAN_FRONTEND=noninteractive sudo apt-get install -y libgl1 libglib2.0-0

 

