[SUPPORT] - Sygil-webui (sd-webui)


Recommended Posts

Overview: Support thread for the Docker XML template of the Streamlit UI for Stable Diffusion.

Application: Streamlit ui for Stable Diffusion

Docker Hub: https://hub.docker.com/r/hlky/sd-webui https://hub.docker.com/r/tukirito/sygil-webui

GitHub: https://github.com/Sygil-Dev/sygil-webui/

Documentation: https://sygil-dev.github.io/sygil-webui/

Official Discord: https://discord.gg/gyXNe4NySY

Official Support: https://github.com/Sygil-Dev/sygil-webui/discussions

 

Note: For issues with the software itself or the Docker image, I recommend you log in to the Discord server to get support from the team there. This thread is simply to assist with Unraid integration and general support.

 

By default the XML template uses the :latest tag, which pulls a roughly 4 GB image. On boot the Stable Diffusion models will be downloaded to the mapped folder /mnt/user/appdata/sd-webui/sd/outputs (unless you changed it from the default). The :runpod tag is more up to date; however, it downloads about 32 GB into your Unraid docker image on INSTALL (not on run). This may quickly fill up your docker vDisk, so use it at your own risk.
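Before pulling the larger tag, it is worth checking how much room is left on the filesystem backing Docker. A minimal sketch (the 32 GB figure is from this thread; the path to check and the image tag are assumptions — on Unraid you would point it at the docker vDisk):

```shell
#!/bin/sh
# Sketch: check free space before pulling the ~32 GB :runpod tag.
REQUIRED_GB=32

# Print available space in whole GB for a given path (defaults to /).
# Uses POSIX df: column 4 is available 1K-blocks.
free_gb() {
    df -Pk "${1:-/}" | awk 'NR==2 {print int($4/1048576)}'
}

# On Unraid, check the docker vDisk location instead of /.
avail=$(free_gb /)
if [ "$avail" -gt "$REQUIRED_GB" ]; then
    echo "ok: ${avail} GB free"
    # docker pull tukirito/sygil-webui:runpod
else
    echo "warning: only ${avail} GB free, need ~${REQUIRED_GB} GB"
fi
```

The pull itself is left commented out so the check can be run safely first.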
 

UPDATE:

For those with a graphics card with a small amount of memory (P2000, P2200, etc.), try using a smaller model such as Waifu-Diffusion, or look for one you like here: https://huggingface.co/models?library=diffusers

 

Download the new model's .ckpt file and place it in your sd\models\custom folder. It should appear in the dropdown box in the webui (it may require a restart). For example, you can try midjourney-v4-diffusion, which is 2.13 GB (https://huggingface.co/prompthero/midjourney-v4-diffusion/blob/main/mdjrny-v4.ckpt).
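From the Unraid terminal, the steps above can be sketched roughly like this. The appdata path is an assumption (adjust to your template's mapping), and note that for a direct download the Hugging Face link needs /resolve/ instead of /blob/:

```shell
#!/bin/sh
# Sketch: drop a custom model into the folder mapped to sd/models/custom.
# APPDATA default here is hypothetical -- on Unraid it would typically be
# /mnt/user/appdata/sd-webui.
APPDATA="${APPDATA:-/tmp/sd-webui}"
MODEL_DIR="$APPDATA/sd/models/custom"
mkdir -p "$MODEL_DIR"

# ~2.13 GB download, so it is left commented out here:
# wget -O "$MODEL_DIR/mdjrny-v4.ckpt" \
#   "https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt"

# Restart the container so the new .ckpt shows up in the model dropdown:
# docker restart sd-webui
echo "model folder ready: $MODEL_DIR"
```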

Edited by pyrater
updated docker hub

It is most likely due to using too much graphics card memory. Try using a smaller model such as Waifu-Diffusion, or look for one you like here: https://huggingface.co/models?library=diffusers

 

Download the new model's .ckpt file and place it in your sd\models\custom folder. It should appear in the dropdown box in the webui (it may require a restart). For example, you can try midjourney-v4-diffusion, which is 2.13 GB (https://huggingface.co/prompthero/midjourney-v4-diffusion/blob/main/mdjrny-v4.ckpt).

 

9 hours ago, veritas2884 said:

I loaded the midjourney ckpt file into the custom folder, chmod 777'd it, and then shut down and restarted the docker container. I can see the midjourney option in the dropdown, but when I go to generate an image, I get this. Any ideas where I went wrong?

 

Triart and Waifu work.

Screenshot 2022-11-15 120048.png

 

You need more VRAM, I think.

 

Try upgrading to a better video card, or try the Waifu option.

I get the following when using txt2vid. The preview images are shown, but at the end no video is displayed:

FileNotFoundError: [Errno 2] No such file or directory: '/sd/outputs/txt2vid-samples/986090480_an-apple.mp4'

Traceback:
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
    exec(code, module.__dict__)
  File "/sd/scripts/webui_streamlit.py", line 174, in <module>
    layout()
  File "/sd/scripts/webui_streamlit.py", line 146, in layout
    layout()
  File "/sd/scripts/txt2vid.py", line 787, in layout
    video, seed, info, stats = txt2vid(prompts=prompt, gpu=st.session_state["defaults"].general.gpu,
  File "/sd/scripts/txt2vid.py", line 572, in txt2vid
    st.session_state["preview_video"].video(open(video_path, 'rb').read())


@drbaltar

 

Quote

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
    exec(code, module.__dict__)
  File "/sd/scripts/webui_streamlit.py", line 174, in <module>
    layout()
  File "/sd/scripts/webui_streamlit.py", line 146, in layout
    layout()
  File "/sd/scripts/txt2vid.py", line 787, in layout
    video, seed, info, stats = txt2vid(prompts=prompt, gpu=st.session_state["defaults"].general.gpu,
  File "/sd/scripts/txt2vid.py", line 418, in txt2vid
    load_diffusers_model(weights_path, torch_device)
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/caching/cache_utils.py", line 253, in wrapper
    return get_or_create_cached_value()
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
  File "/sd/scripts/txt2vid.py", line 278, in load_diffusers_model
    raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
OSError: You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.
 

 

This is what I get when I try txt2vid. Under Settings it says it's under construction... that part may not be done yet.


I installed this and managed to get it up and going. However, it seems to be hammering my CPU and not utilizing the GPU.

 

I tried the default 'all' for the NVIDIA_VISIBLE_DEVICES variable and also adding my GPU ID, with the same result: the CPU pegs out all cores while the GPU stays idle. I have an ASUS NVIDIA GeForce RTX 3060 w/ 12 GB GDDR6.


@pyrater yes, I tried with the GPU ID. No action on the GPU with "all" or the GPU ID. Plex, Jellyfin, Tdarr, and a few other containers work with the GPU. I did notice that sd-webui does not have the NVIDIA_DRIVER_CAPABILITIES variable like the other containers that use the GPU have. Think I should add that var?
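For reference, adding that variable through the Unraid template is equivalent to passing it on a docker run command line. A rough sketch only — the container name, image tag, port, and volume mapping here are assumptions from this thread, not the template's exact values:

```shell
# Sketch of the GPU-related settings as a docker run (not the actual template).
docker run -d --name sd-webui \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -p 8501:8501 \
  -v /mnt/user/appdata/sd-webui:/sd \
  hlky/sd-webui:latest
```

NVIDIA_VISIBLE_DEVICES controls which GPUs are exposed to the container, while NVIDIA_DRIVER_CAPABILITIES controls which driver features (compute, video, etc.) are available inside it.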

22 hours ago, ency98 said:

@pyrater yes, I tried with the GPU ID. No action on the GPU with "all" or the GPU ID. Plex, Jellyfin, Tdarr, and a few other containers work with the GPU. I did notice that sd-webui does not have the NVIDIA_DRIVER_CAPABILITIES variable like the other containers that use the GPU have. Think I should add that var?

 

You can try, though I doubt it would work. Not sure, as I cannot duplicate the error.

 


Well, I figured out my issue. If anyone else is having an issue where the CPU pegs out and you're using an Nvidia GPU, go to Settings and make sure you see your GPU as installed. It seems that on my last reboot Unraid did not pick up my GPU. I rebooted my server and made sure the GPU was picked up, then fired up this container and it worked like a charm.
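A quick way to check where the GPU disappears is to run nvidia-smi on the Unraid host first, then inside the container. A minimal sketch (the container name "sd-webui" is an assumption):

```shell
#!/bin/sh
# GPU-visibility check: run on the Unraid host, then inside the container.
if command -v nvidia-smi >/dev/null 2>&1; then
    if nvidia-smi --query-gpu=name,memory.total --format=csv,noheader; then
        echo "gpu visible to the driver"
    else
        echo "driver present but no GPU detected -- a host reboot may help"
    fi
else
    echo "nvidia-smi not found -- Nvidia driver plugin not loaded"
fi

# Same check inside the container:
# docker exec sd-webui nvidia-smi
```

If the host sees the GPU but the container does not, the problem is in the container's GPU settings; if the host itself does not see it, a reboot (as above) is the first thing to try.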

35 minutes ago, ency98 said:

Well, I figured out my issue. If anyone else is having an issue where the CPU pegs out and you're using an Nvidia GPU, go to Settings and make sure you see your GPU as installed. It seems that on my last reboot Unraid did not pick up my GPU. I rebooted my server and made sure the GPU was picked up, then fired up this container and it worked like a charm.

 

Glad this worked out for you. My GPU has been showing up as expected the whole time in Sygil, but I still have this CPU issue.


What about in the Nvidia settings in Unraid? The container had no issues "seeing" the GPU, but it was Unraid that was not seeing the GPU.

Also, how much RAM and what kind of CPU do you have? I found that loading the models will peg out my i9 for a bit, and it will often hang (pegging out the CPU) when switching between models; I need to restart the container when that happens. Could be you're getting CPU- or memory-bound while loading the model rather than when generating an image. I don't know your setup, but it's worth a shot.

8 minutes ago, ency98 said:

What about in the Nvidia settings in Unraid? The container had no issues "seeing" the GPU, but it was Unraid that was not seeing the GPU.

Also, how much RAM and what kind of CPU do you have? I found that loading the models will peg out my i9 for a bit, and it will often hang (pegging out the CPU) when switching between models; I need to restart the container when that happens. Could be you're getting CPU- or memory-bound while loading the model rather than when generating an image. I don't know your setup, but it's worth a shot.

 

The Nvidia driver and nvidia-smi both see my GPU (1660 Super). I've got a 5600X with 32 GB RAM. I'm experiencing these issues on a fresh start of the container: just starting it fresh, then clicking the Generate button with the default prompt, models, etc.


Yeah, that's more than enough horsepower, and similar to my setup.

 

What about starting the container and then selecting a different model before generating? The default SD 1.4 gives me an EOF error. I found that to get things to work I need to try to generate something on SD 1.4 first to get the error; then, when I change the model to the SD 1.5 I downloaded, it will work 70-ish % of the time. But after generating a few images, if I try to switch models the CPU will peg out and everything gets unresponsive until I restart the container.
