pyrater Posted November 14, 2022 (edited)

Overview: Support for the Docker XML TEMPLATE of the Streamlit UI for Stable Diffusion.
Application: Streamlit UI for Stable Diffusion
Docker Hub: https://hub.docker.com/r/hlky/sd-webui https://hub.docker.com/r/tukirito/sygil-webui
GitHub: https://github.com/Sygil-Dev/sygil-webui/
Documentation: https://sygil-dev.github.io/sygil-webui/
Official Discord: https://discord.gg/gyXNe4NySY
Official Support: https://github.com/Sygil-Dev/sygil-webui/discussions

Note: for issues with the software itself or with the Docker image, I recommend logging in to the Discord server to get support from the team there. This thread is simply to assist with unraid integration and general support.

By default the XML template uses the :latest tag, which downloads a roughly 4 GB image. On boot, the Stable Diffusion models are downloaded to the mapped folder /mnt/user/appdata/sd-webui/sd/outputs (unless you changed it from the default). The :runpod tag is more up to date, but it downloads 32 GB to your unraid docker image on INSTALL (not on run). This can quickly fill up your base docker image, so use it at your own risk.

UPDATE: For those with a graphics card with a small amount of memory (P2000, P2200, etc.), try using a smaller model such as Waifu-Diffusion, or look for one you like here: https://huggingface.co/models?library=diffusers. Download the new model's .ckpt file and place it in your sd\models\custom folder. It should then appear in the dropdown box in the webui (this may require a restart). For example, you can try midjourney-v4-diffusion, which is 2.13 GB (https://huggingface.co/prompthero/midjourney-v4-diffusion/blob/main/mdjrny-v4.ckpt).

Edited May 1 by pyrater: updated docker hub
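A minimal shell sketch of the custom-model steps above, run from the unraid terminal. The paths are the template defaults, and the download URL swaps the /blob/ page link above for Hugging Face's /resolve/ direct-download form; adjust both if your setup differs.

```shell
# Default host-side model folder from this template (adjust if remapped)
CUSTOM=/mnt/user/appdata/sd-webui/sd/models/custom
mkdir -p "$CUSTOM"

# ~2.13 GB download; /resolve/main/ is the direct-download form of the
# /blob/main/ page linked in the post above
wget -O "$CUSTOM/mdjrny-v4.ckpt" \
  "https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt"

# Make sure the container can read it, then restart the container so the
# model shows up in the webui dropdown
chmod 644 "$CUSTOM/mdjrny-v4.ckpt"
```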
veritas2884 Posted November 14, 2022

Thank you for putting this together. I am using a P2000 and keep getting a CUDA out-of-memory error. Is there a way to limit it and make it work?
ubermetroid Posted November 14, 2022

1 hour ago, veritas2884 said: Thank you for putting this together. I am using a P2000 and keep getting a Cuda out of Memory error. Is there a way to limit it and make it work?

Me too. I think it's time to upgrade to more than 5 GB.
pyrater Posted November 14, 2022 (edited)

It is most likely due to using too much of the graphics card's memory. Try using a smaller model such as Waifu-Diffusion, or look for one you like here: https://huggingface.co/models?library=diffusers. Download the new model's .ckpt file and place it in your sd\models\custom folder. It should then appear in the dropdown box in the webui (this may require a restart). For example, you can try midjourney-v4-diffusion, which is 2.13 GB (https://huggingface.co/prompthero/midjourney-v4-diffusion/blob/main/mdjrny-v4.ckpt).

Edited November 14, 2022 by pyrater
ubermetroid Posted November 15, 2022

I could not just copy and paste the file over; I had to reset permissions. Working great now, though.
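For anyone hitting the same permissions problem, a minimal sketch of resetting the mode so the container user can read the model. The real file on unraid would be /mnt/user/appdata/sd-webui/sd/models/custom/mdjrny-v4.ckpt; it is demonstrated here on a scratch file so the commands are safe to try anywhere.

```shell
# Stand-in for the copied .ckpt; substitute the real path on your server
ckpt=$(mktemp)
chmod 644 "$ckpt"     # read access for everyone is enough; 777 is overkill
stat -c %a "$ckpt"    # prints the resulting mode: 644
rm -f "$ckpt"
```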
veritas2884 Posted November 15, 2022 (edited)

I loaded the midjourney .ckpt file into the custom folder, chmod 777'd it, and then shut down and restarted the docker. I can see the midjourney option in the dropdown, but when I go to generate an image, I get this. Any ideas where I went wrong? Triart and Waifu work.

Edited November 15, 2022 by veritas2884
ubermetroid Posted November 16, 2022

9 hours ago, veritas2884 said: I loaded the mid journey ckpt file into custom folder, chmod 777'd it, and then shutdown and restarted the docker. I can see the Mid journey option in the drop down, but when I go to generate an image, I get this. Any ideas where I went wrong? Triart and Waifu work.

You need more VRAM, I think. Try upgrading to a better video card, or try the Waifu option.
drbaltar Posted November 16, 2022

I get the following when using txt2vid. The preview images are shown, but at the end no video is displayed.

FileNotFoundError: [Errno 2] No such file or directory: '/sd/outputs/txt2vid-samples/986090480_an-apple.mp4'
Traceback:
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
    exec(code, module.__dict__)
  File "/sd/scripts/webui_streamlit.py", line 174, in <module>
    layout()
  File "/sd/scripts/webui_streamlit.py", line 146, in layout
    layout()
  File "/sd/scripts/txt2vid.py", line 787, in layout
    video, seed, info, stats = txt2vid(prompts=prompt, gpu=st.session_state["defaults"].general.gpu,
  File "/sd/scripts/txt2vid.py", line 572, in txt2vid
    st.session_state["preview_video"].video(open(video_path, 'rb').read())
pyrater Posted November 17, 2022

That's a new one for me. I haven't really messed with the video stuff, but I can take a look today after work. It might also be better to post in the official Discord support channel.
pyrater Posted November 20, 2022

@drbaltar

Quote:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 562, in _run_script
    exec(code, module.__dict__)
  File "/sd/scripts/webui_streamlit.py", line 174, in <module>
    layout()
  File "/sd/scripts/webui_streamlit.py", line 146, in layout
    layout()
  File "/sd/scripts/txt2vid.py", line 787, in layout
    video, seed, info, stats = txt2vid(prompts=prompt, gpu=st.session_state["defaults"].general.gpu,
  File "/sd/scripts/txt2vid.py", line 418, in txt2vid
    load_diffusers_model(weights_path, torch_device)
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/caching/cache_utils.py", line 253, in wrapper
    return get_or_create_cached_value()
  File "/opt/conda/lib/python3.8/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
  File "/sd/scripts/txt2vid.py", line 278, in load_diffusers_model
    raise OSError("You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.")
OSError: You need a huggingface token in order to use the Text to Video tab. Use the Settings page from the sidebar on the left to add your token.

This is what I get when I try txt2vid. Under Settings it says it's under construction... that part may not be done.
ubermetroid Posted November 20, 2022

Does this docker keep up to date with the main branch of SD WebUI?
pyrater Posted November 21, 2022 (edited)

It is the docker maintained by the developers. The official dev response is: "that isnt being updated atm There's also hlky/sd-webui:runpod which includes a lot of models already".

Edited November 21, 2022 by pyrater
ency98 Posted November 25, 2022

I installed this and managed to get it up and going. However, it seems to be hammering my CPU and not utilizing the GPU. I tried the default "all" for the NVIDIA_VISIBLE_DEVICES variable, and also adding my GPU ID, with the same result: the CPU pegs out all cores while the GPU stays idle. I have an ASUS NVIDIA GeForce RTX 3060 with 12 GB GDDR6.
russelg Posted November 26, 2022

I'm having the exact same issue as the above poster. A GTX 1660 Super 6 GB in my case (yes, I know this card has issues by default; I've toggled the options I need to).
ubermetroid Posted November 26, 2022

I have NVIDIA_VISIBLE_DEVICES: all and the docker picks up my P2000. @ency98, if you figure it out, let me know that the 12 GB 3060 works. That looks like the cheapest and best upgrade option.
pyrater Posted November 27, 2022

Do you have the nvidia driver package installed in unraid? (I assume yes.) If so, try putting the GPU ID instead of "all" and see if that works. Not sure why it wouldn't work, as it works fine with my P2200.
ency98 Posted November 27, 2022

@pyrater, yes, I tried with the GPU ID. No action on the GPU with "all" or with the GPU ID. Plex, Jellyfin, Tdarr, and a few other containers work with the GPU. I did notice that sd-webui does not have the NVIDIA_DRIVER_CAPABILITIES variable that the other containers using the GPU have. Think I should add that variable?
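For comparison, here is roughly what the unraid template's GPU fields translate to as a plain docker run. This is a sketch: the port and volume mappings are assumptions based on the template defaults mentioned in this thread, not confirmed values. NVIDIA_DRIVER_CAPABILITIES=all is what most GPU containers set and is harmless to add.

```shell
# NVIDIA_VISIBLE_DEVICES: "all", or a specific GPU UUID from `nvidia-smi -L`
# NVIDIA_DRIVER_CAPABILITIES: which driver features are exposed to the container
docker run -d --name sd-webui \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/sd-webui/sd:/sd \
  -p 8501:8501 \
  hlky/sd-webui:latest
```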
pyrater Posted November 28, 2022

22 hours ago, ency98 said: @pyrater yes I tried with the GPU ID. No action on the GPU with "all" or the GPU ID. Plex, jellyfin, tdarr, and a few other containers work with the GPU. I did notice that the SD-WEBUI does not have the NVIDIADRIVERCAPABILITES variable like the other containers that use the GPU have. Think I should add that var?

You can try; I doubt it would work. Not sure, as I cannot duplicate the error.
ency98 Posted November 29, 2022

Well, I figured out my issue. If anyone else is having an issue where the CPU pegs out while you're using an Nvidia GPU, go to Settings and make sure unraid sees your GPU as installed. It seems that on my last reboot unraid did not pick up my GPU. I rebooted my server and made sure the GPU was picked up, then fired up this container and it worked like a charm.
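A quick way to check both layers from the unraid terminal, assuming your container is named sd-webui (substitute whatever name your template used):

```shell
# On the unraid host: confirm the driver plugin sees the card at all
nvidia-smi -L

# Inside the container: confirm the GPU was actually passed through
docker exec sd-webui nvidia-smi
```

If the first command lists no GPU, the problem is at the unraid/driver level (as in the post above); if only the second fails, it's the container's device mapping.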
russelg Posted November 29, 2022

35 minutes ago, ency98 said: Well I figured out my issue. If any one else is having an issue where your CPU pegs out and you using an Nvidia GPU go to settings and make sure your see your GPU as installed. Seems like on my last reboot unraid did not pick up my GPU. I rebooted my server and made sure the GPU was picked up. I fired up this container and it worked like a charm.

Glad this worked out for you. My GPU has been showing up as expected the whole time in sygil, but I still have this CPU issue.
ency98 Posted November 30, 2022

What about in the nvidia settings in unraid? The container had no issues "seeing" the GPU, but it was unraid that was not seeing it. Also, how much RAM and what kind of CPU do you have? I found that loading a model will peg out my i9 for a bit, and it will often hang (pegging out the CPU) when switching between models; I need to restart the container when that happens. You could be getting CPU- or memory-bound while loading the model rather than when generating an image. I don't know your setup, but it's worth a shot.
russelg Posted November 30, 2022

8 minutes ago, ency98 said: What about in the nvidia setting in unraid? The container had no issues "seeing" the gpu. But it was unraid that was not seeing the GPU. Also how much ram and what kind of CPU do you have? I found that loading the models will peg out my i9 for a bit and will often hang (pegging out the CPU) when switching between models. I need to restart the container when that happens. Could be your getting CPU or memory bound while loading the model and not when generating an image. Dont know your setup but worth a shot.

The nvidia driver and nvidia-smi both see my GPU (1660 Super). I've got a 5600X with 32 GB of RAM. I'm experiencing these issues on a fresh start of the container: just starting it fresh, then clicking the Generate button with the default prompt, model, etc.
ency98 Posted November 30, 2022

Yeah, that's more than enough horsepower, and similar to my setup. What about starting the container and then selecting a different model before generating? The default SD 1.4 gives me an EOF error. I found that to get things to work, I need to try to generate something on SD 1.4 first to get the error; then, when I change to the SD 1.5 model I downloaded, it works about 70% of the time. But after generating a few images, if I try to switch models the CPU pegs out and everything gets unresponsive until I restart the container.
Draco1544 Posted February 7

Hello! Does your container support CPU compute, or do I need a GPU?
Reptar Posted February 9

Is this docker still being updated with new GitHub commits?