Posts posted by Outlet6957
-
Anyone know why this happens with the latest update (05-comfy-ui)?
04/15/2024 08:40:42 PM Prestartup times for custom nodes:
04/15/2024 08:40:42 PM 0.9 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager
04/15/2024 08:40:42 PM
04/15/2024 08:40:54 PM Total VRAM 12037 MB, total RAM 96462 MB
04/15/2024 08:40:54 PM Set vram state to: NORMAL_VRAM
04/15/2024 08:40:54 PM Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
04/15/2024 08:40:54 PM VAE dtype: torch.bfloat16
04/15/2024 08:41:02 PM Using pytorch cross attention
04/15/2024 08:41:09 PM Traceback (most recent call last):
04/15/2024 08:41:09 PM File "/config/05-comfy-ui/ComfyUI/nodes.py", line 1864, in load_custom_node
04/15/2024 08:41:09 PM module_spec.loader.exec_module(module)
04/15/2024 08:41:09 PM File "<frozen importlib._bootstrap_external>", line 940, in exec_module
04/15/2024 08:41:09 PM File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
04/15/2024 08:41:09 PM File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/__init__.py", line 18, in <module>
04/15/2024 08:41:09 PM from .glob import manager_core as core
04/15/2024 08:41:09 PM File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 9, in <module>
04/15/2024 08:41:09 PM import git
04/15/2024 08:41:09 PM ModuleNotFoundError: No module named 'git'
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Cannot import /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager module for custom nodes: No module named 'git'
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Import times for custom nodes:
04/15/2024 08:41:09 PM 0.0 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/websocket_image_save.py
04/15/2024 08:41:09 PM 0.1 seconds (IMPORT FAILED): /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Setting output directory to: /config/outputs/05-comfy-ui
04/15/2024 08:41:09 PM Starting server
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM To see the GUI go to: http://0.0.0.0:9000
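For anyone hitting this, a quick check from inside the container's venv (a sketch; the `git` module in the traceback comes from the GitPython pip package, not the git CLI):

```python
# Sketch: ComfyUI-Manager's `import git` needs the GitPython pip package.
# This reports whether the current interpreter/venv can see it.
import importlib.util

def has_gitpython() -> bool:
    """True if `import git` would succeed in this interpreter."""
    return importlib.util.find_spec("git") is not None

if not has_gitpython():
    # Inside the container's venv this would typically be fixed by:
    #   pip install GitPython
    print("GitPython missing - run: pip install GitPython")
```

If it reports the module missing, activating the venv, installing GitPython, and restarting the container should let ComfyUI-Manager import again.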
-
On 1/9/2024 at 4:30 AM, CoreyG said:
Did you ever solve this ?
Oh also, https://github.com/facefusion/facefusion
uses PyTorch now instead of TensorFlow, but the copy inside holaf's container uses TensorFlow for some reason, even though the script pulls from the original repo, which creates that issue as well. I'll check this out later tonight and see if I can push a change to their GitHub.
-
On 1/9/2024 at 4:30 AM, CoreyG said:
Did you ever solve this ?
No, but I did find a possible solution that needs to be tested. Apparently, when you install extensions they pull in their own onnxruntime package, which conflicts with the existing ONNX install; when that happens, CUDA gets disabled. But again, I have not tested this. If I do, I will report back here and tag you.
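One way to verify that theory from inside the venv (a sketch; it assumes the standard onnxruntime API, where the CPU-only wheel shadowing the GPU wheel is the usual symptom of this conflict):

```python
# Sketch: if an extension pulled in the CPU-only onnxruntime wheel over the
# GPU build, CUDAExecutionProvider disappears from the provider list.
import importlib.util

def onnx_providers():
    """Return onnxruntime's provider list, or None if it isn't installed."""
    if importlib.util.find_spec("onnxruntime") is None:
        return None
    import onnxruntime
    return onnxruntime.get_available_providers()

providers = onnx_providers()
print(providers)  # CUDA working => "CUDAExecutionProvider" appears in the list
```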
-
On 1/12/2024 at 3:53 AM, sonofdbn said:
I currently run Comfyui and A1111 on a Win11 VM (with GPU passthrough) on an unRAID server. All output and models etc. are kept in the VM (nothing on the array). I think using this docker might be a cleaner solution. My main concern is whether there would be a performance hit. I have a 12GB VRAM Nvidia 4070. Theoretically (or practically) is there a significant downside/upside in moving to the docker?
If I do the migration, I'm thinking of following these steps:
1. Remove GPU binding
2. Reboot
3. Start Win11 VM and make sure it's running OK without GPU
4. Install this Stable Diffusion docker
5. Install SD GUIs and organise folder mappings
6. Go into Win11 VM and copy the necessary files to the corresponding docker mapped folders (I'm guessing mainly json workflows, models, custom nodes, extensions, embedding, LoRAs).
7. Might have to fix some model locations in the workflows, but that shouldn't be a big problem.
Any suggestions or advice on this?
You are already taking a performance hit by running Windows 11 in a VM; Docker removes the crazy amount of overhead Windows introduces. If you're going to keep a VM and take the GPU away from your Docker environment, I would use a Linux VM (Ubuntu without a GUI at all). But yes, you could move all of the Stable Diffusion files out of your VM and into the corresponding folders of this Docker container; they should mostly map 1:1. Just map your /config directory to a place you can access, and that is where the files will live.
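The copy step could even be scripted; a rough sketch (the folder names here are illustrative assumptions, not the container's actual layout, so check your own /config tree first):

```python
# Sketch of the VM -> /config migration copy. Folder mappings are
# ILLUSTRATIVE; adjust them to match your actual container layout.
import shutil
from pathlib import Path

MAPPINGS = {
    "models": "webui/models",
    "extensions": "webui/extensions",
    "embeddings": "webui/embeddings",
}

def migrate(vm_root, config_root, dry_run=True):
    """Copy each mapped folder from the old VM share into /config.

    Returns the list of destination paths that would be (or were) written.
    """
    copied = []
    for src_rel, dst_rel in MAPPINGS.items():
        src = Path(vm_root) / src_rel
        dst = Path(config_root) / dst_rel
        if src.is_dir():
            if not dry_run:
                shutil.copytree(src, dst, dirs_exist_ok=True)
            copied.append(str(dst))
    return copied
```

Running it with `dry_run=True` first shows what would be copied before anything touches the array.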
-
On 12/13/2023 at 7:16 PM, FoxxMD said:
I forked holaf's repository, available at foxxmd/stable-diffusion, and have been improving on it instead of using the repo from the last post. There are individual pull requests open for all my improvements on his repository, BUT the main branch on my repo has everything combined as well, and that is where I'll be working until/if holaf merges my PRs.
My combined main branch is also available as a docker image at foxxmd/stable-diffusion:latest on dockerhub and ghcr.io/foxxmd/stable-diffusion:latest
I have tested with SD.Next only but everything else should also work.
To migrate from holaf to my image on unraid edit (or create) the stable-diffusion template:
- Repository => foxxmd/stable-diffusion:latest
-
Edit Stable-Diffusion UI Path
- Container Path => /config
-
Remove Outputs
- These will still be generated at /mnt/user/appdata/stable-diffusion/outputs
-
Add Variable
- Name/Key => PUID
- Value => 99
-
Add Variable
- Name/Key => PGID
- Value => 100
_______
Changes (as of this post):
- Switched to Linuxserver.io ubuntu base image
- Installed missing git dependency
- Fixed SD.Next memory leak
-
For SD.Next and automatic1111
- Packages (venv) are only re-installed if your container is out-of-date with upstream git repository -- this reduces startup time after first install by like 90%
- Packages can be forced to be reinstalled by setting the environmental variable CLEAN_ENV=true on your docker container (Variable in unraid template)
______
If you have issues, you must post your problem along with the WEBUI_VERSION you are using.
Bruh, you are a champion. Thank you and happy holidays!
-
Ok, I've done some digging. If I add FaceFusion via the docker settings or within the automatic1111 extensions, it does not detect the GPU and thus does not give me a "CUDA" option.
The container works fine with the GPU for generating art, and the device shows up in nvidia-smi when you docker exec into the container. However, if I activate the venv within the docker container and run python3 -c "import tensorflow; print(tensorflow.config.experimental.list_physical_devices('GPU'))"
it returns []
For some reason TensorFlow does not detect the GPU, and FaceFusion needs that detection to offer the CUDA option. Any thoughts?
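A quick side-by-side inside the venv might narrow it down (a sketch using only the standard torch and tensorflow APIs):

```python
# Sketch: if torch sees the GPU but tensorflow reports none, the culprit is
# the tensorflow build (e.g. a CPU-only wheel), not the driver or container.
def gpu_report():
    report = {}
    try:
        import torch
        report["torch"] = torch.cuda.is_available()
    except ImportError:
        report["torch"] = None  # torch not installed in this venv
    try:
        import tensorflow as tf
        report["tensorflow"] = len(tf.config.list_physical_devices("GPU")) > 0
    except ImportError:
        report["tensorflow"] = None  # tensorflow not installed in this venv
    return report

print(gpu_report())
```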
-
I know you mentioned some models are not production ready, FaceFusion included. Is that warning the reason I don't see "CUDA" as a provider in the FaceFusion webui?
-
Thank you for the awesome plugin and keeping the nerd packages alive. Could you please add ripgrep?
-
Question: if I set the bonding mode to No as indicated, how do you access VMs now? They pull a 192.168.122.x network address (the libvirt default NAT network) due to br0 going away with that setting.
-
I am having this same issue. Any custom dockers I create will not boot up after I reboot the server; it says "network id not found" and I have to recreate the containers. Did you ever find a fix for this?
-
On 4/23/2020 at 6:27 AM, mskerman said:
Did anyone have any luck with this? I'm no expert with Unraid or Linux - have had it for ~ 8 years now and the UNRAID server has been working fine, but I've never had time to do more than file serving with it till now. I just acquired some IP cams and want to get BI going without dedicating another discrete machine to the task. I managed to get the BI docker loaded and can see my cameras, but there is an issue I'm having trouble getting around..
No matter what network settings I have tried, I cannot access the cameras via the web interface on LAN or WAN. Seems like WINE has some sort of firewall built in. I forward the port, but the web page will not connect remotely or locally. Canyouseeme confirms the port (81) is not blocked by the ISP. - Tried Bridge and Host - No luck. - 8080 noip vnc works fine on LAN and WAN network.
I have installed BI on another PC running win 10 and can access it via the web page on LAN or WAN.
Any suggestions would be welcome.
Am on the verge of giving up and using a VM instead. Just seems like a huge waste of resources when the Docker is so close to working.
I can get this up and running by modifying a shell file within the docker container itself. However, I am curious why the environment variable BLUEIRIS_VERSION=4 does not work.
-
On 3/23/2020 at 11:48 PM, klogg said:
This is just for anyone else who hits this wall. Setting up the Bookstack container, I got an endless amount of:
Illuminate\Database\QueryException : SQLSTATE[HY000] [1045] Access denied for user 'xxxx'@'yyy' (using password: YES) (SQL: select * from information_schema.tables where table_schema = bookstack and table_name = migrations and table_type = 'BASE TABLE')
In the end, I noticed in one of the log files in the container persistence files (./appdata/bookstack/...) that it had my mariaDB username and password logged, and the password was truncated. Despite trying both the `root` user and the custom `bookstack` user I created, both passwords contained a `#` character that wasn't being handled properly, and the password would truncate.
Removed the `#` and it fired up like I hadn't lost 3 evenings chasing my tail. Hope this helps someone.
/klogg
Jesus christ, thank you so much. I was going nuts. I used a password manager to generate the DB password and it had some symbols in it, including a '#'. I appreciate you!!
-
On 12/30/2019 at 1:59 AM, A75G said:
Thank you for the guide been using it for months now it really helped me on my certs.
One question is there way to reduce the size of the IMG because my SSD isn't keeping up with it?
You can reduce the size by converting to a QCOW2 image instead; it is MUCH smaller than the .img file above. The command for that is:
qemu-img convert -f vmdk -O qcow2 ./GNS3_VM-disk1.vmdk ./GNS3VM-disk1.qcow2
I've been running GNS3 for about 2 years now with this method.
Here are my file sizes for reference:
1.5G ./GNS3VM-disk1.qcow2
42M ./GNS3VM-disk2.qcow2
532M ./GNS3_VM-disk1.vmdk
1.9M ./GNS3_VM-disk2.vmdk
As far as the settings for the VM go, the only thing I changed was that the BIOS stays SeaBIOS but the Machine is i440fx-4.2. Also, the Primary vDisk Bus is now SCSI instead of SATA.
Let me know if you get stuck!
-
On 11/10/2019 at 6:03 PM, ffhelllskjdje said:
do you happen to have the Borg tutorial stored anywhere? The reddit link is deleted
Yeah, I can provide my borg script here. If you need help with it, let me know. Borg makes a local backup and rclone clones it off-site, which gives you 3 copies of your data, 2 of them local. The script also will not re-run if rclone hasn't finished its last operation (slow internet) or if a parity sync is running. The key factor in not having everything constantly re-checked by Borg is --files-cache=mtime,size: I was noticing that every time I ran Borg it would re-index files that hadn't changed. That option fixed it, which has to do with unRAID's constantly changing inode values. The borg docs are very good (https://borgbackup.readthedocs.io/en/stable/).
Let me know if you get stuck. Obviously this script won't work until you set up your repository.
#!/bin/sh
LOGFILE="/boot/logs/TDS-Log.txt"
LOGFILE2="/boot/logs/Borg-RClone-Log.txt"

# Close if rclone/borg running
if pgrep "borg" || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Close if parity sync running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

#This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/Backups/Borg/'

#This is the location you want Rclone to send the BORG_REPO to
export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

#Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='<MYENCRYPTIONKEYPASSWORD>'
#or this to ask an external program to supply the passphrase: (I leave this blank)
#export BORG_PASSCOMMAND=''

#I store the cache on the cache instead of tmp so Borg has persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

#Backup the most important directories into an archive (I keep a list of excluded directories in the excluded.txt file)
SECONDS=0
echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE

borg create \
    --verbose \
    --info \
    --list \
    --filter AMEx \
    --files-cache=mtime,size \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude-from /mnt/disks/Backups/Borg/Excluded.txt \
    \
    $BORG_REPO::'{hostname}-{now}' \
    \
    /mnt/user/Archive \
    /mnt/disks/Backups/unRAID-Auto-Backup \
    /mnt/user/Backups \
    /mnt/user/Nextcloud \
    /mnt/user/system/ \
    >> $LOGFILE2 2>&1

backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE

borg prune \
    --list \
    --prefix '{hostname}-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    >> $LOGFILE2 2>&1

prune_exit=$?

#echo "$(date "+%m-%d-%Y %T") : Borg pruning has completed" 2>&1 | tee -a $LOGFILE

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Execute if no errors
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $(($borgstart/ 3600))h:$(($borgstart% 3600/60))m:$(($borgstart% 60))s" 2>&1 | tee -a $LOGFILE
    #Reset timer
    SECONDS=0
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
    rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
    rclonestart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $(($rclonestart/ 3600))h:$(($rclonestart% 3600/60))m:$(($rclonestart% 60))s" 2>&1 | tee -a $LOGFILE
# All other errors
else
    echo "$(date "+%m-%d-%Y %T") : Borg has errors code:" $global_exit 2>&1 | tee -a $LOGFILE
fi

exit ${global_exit}
-
On 7/12/2019 at 2:56 AM, ProphetSe7en said:
Hi
I was looking into the borg+rclone setup and it looks like this is what I need.
Is it possible to change the script to send Discord notifications instead of email notifications when a backup errors out or completes?
Super late reply, but yes, you can get Discord notifications via Discord's Slack-compatible webhook endpoint (note the /slack suffix on the URL). You would need to check the Discord docs for that, but it's super easy. The command to fire off the notification:
############
WEBH_URL="https://discordapp.com/api/webhooks/<MYDISCORDWEBHOOKNUMBER>/<MYOTHERDISCORDWEBHOOKNUMBER>/slack"
APP_NAME="unRAID Server"
TITLE="$1"
MESSAGE="$2"
############
TITLE=$(echo -e "$TITLE")
MESSAGE=$(echo -e "$MESSAGE")
curl -X POST --header 'Content-Type: application/json' \
    -d "{\"username\": \"$APP_NAME\", \"text\": \"*$TITLE* \n $MESSAGE\"}" \
    $WEBH_URL 2>&1
-
On 4/22/2019 at 2:23 PM, TheFreemancer said:
For some reason I can't use EmbyCon (Kodi Add-On) with Jellyfin. There are many reports that it works fine.
When I try to log in from EmbyCon I get connection refused.
[2019-04-22 09:21:10.166 -03:00] [WRN] HTTP Response 204 to "192.168.0.177". Time (slow): 0:00:00.7501618. "http://192.168.0.198:8097/emby/Sessions/Capabilities/Full"
[2019-04-22 09:21:10.950 -03:00] [WRN] HTTP Response 200 to "192.168.0.177". Time (slow): 0:00:00.5905623. "http://192.168.0.198:8097/emby/LiveTv/Programs/Recommended?userId=8543016d2cdd4114861281e08d78b976&IsAiring=true&limit=1&ImageTypeLimit=1&EnableImageTypes=Primary%2CThumb%2CBackdrop&EnableTotalRecordCount=false&Fields=ChannelInfo%2CPrimaryImageAspectRatio"
[2019-04-22 09:21:26.285 -03:00] [INF] WS "http://192.168.0.198:8097/embywebsocket?api_key=d5305bee3e844051840d1ca835ff75e5&deviceId=TW96aWxsYS81LjAgKFdpbmRvd3MgTlQgMTAuMDsgV2luNjQ7IHg2NCkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzczLjAuMzY4My4xMDMgU2FmYXJpLzUzNy4zNnwxNTU1OTI5MDQzMzY5". UserAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36"
[2019-04-22 09:21:41.825 -03:00] [INF] Authentication request for "nobody" "has been denied".
[2019-04-22 09:21:41.872 -03:00] [ERR] Invalid user or password entered.
[2019-04-22 09:21:47.486 -03:00] [INF] Authentication request for "nobody" "has been denied".
[2019-04-22 09:21:47.488 -03:00] [ERR] Invalid user or password entered.
There's no password for this user.
Already tried with password and got nothing.
Also Jellyfin binds to a different IP address compared to Emby.
Jellyfin 172.17.0.4 as local, while Emby uses the correct one 192.168.0.198.
I also had to change the default TCP port to 8097 so it would not conflict with Emby.
I'm just saying this for the sake of saying it, because I even tried disabling Emby and having Jellyfin at its default port mapping, and the problem still persists.
So they fixed the password issue with yesterday's hotfix; however, clients need to be updated too, so a lot of people are currently having login issues.
-
43 minutes ago, chi110r said:
I fixed it by myself.
The kodi sync plugin was installed twice
Mine is also crashing; it looks like the "reports" plugin was installed twice. I can see a lot of people having an issue after this update.
-
Does anyone know how to get the pihole docker to resolve a local subdomain.domain.com to a local IP address? I am hosting a small webserver via unRAID, and pihole does not know to resolve it locally when I access the website from a LAN device.
Every time I go to the site, I am seen as "external" traffic. If I can resolve this, I could also resolve my Plex issues. I just don't know how to tell pihole to point the domain to the internal unRAID IP. I've tried variables such as extra_hosts etc. I am exhausted.
Edit: I got name resolution working by adding a 02-custom.conf in the /mnt/user/appdata/dnsmasq.d folder. The format for that .conf file is
subdomain.domain.com 192.168.1.X, where X completes the unRAID server IP. However, the website stops working when I do this, even though nslookup subdomain.domain.com shows the 192.168.1.X address. I am using nginx/letsencrypt to host the proxy portion of the site; it basically forwards the subdomain to the internal IP of another docker. Any help would be appreciated.
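For anyone copying this later: a plain "name IP" line is hosts-file style, whereas the dnsmasq-native directive I'd normally expect in a dnsmasq.d `.conf` file (an assumption on my part, not verified against this container) looks like:

```
address=/subdomain.domain.com/192.168.1.X
```

Both make nslookup answer with the LAN IP; if the site still breaks after that, the nginx/letsencrypt proxy config is the next place I'd look.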
-
On 3/18/2019 at 12:26 PM, jowi said:
I've installed this using the excellent video. I can get into the webui etc., and everything seems to work. I gave it an IP address of 192.168.1.10, and I'm using the Cloudflare DNS (1.1.1.1 / 1.0.0.1). Now, if I change my router's DNS (Netgear WNDR3700) to use 192.168.1.10, I can't browse to any sites... it just won't work.
If I return the router to 1.1.1.1/1.0.0.1 and configure my Mac manually so it uses pihole as its DNS server, I can browse perfectly, and I can see in pihole's query log that everything is logged etc., so it does work... But why won't it work if I configure it on my router?
I also gave the pihole docker a static IP address in the router.
What am i missing?
Hey jowi,
Can you ping your Netgear from pihole's docker? Meaning, you terminal into unRAID and type "docker exec -it pihole ping 192.168.1.X", where X is the last octet of your Netgear device. See if you get replies; they may not be communicating.
-
Does anyone know how to point a local domain "subdomain.domain.com" to a local IP address? Whenever I reach my subdomain, pihole thinks I am coming from an external network. I want pihole to resolve any LAN devices going to this subdomain so the traffic stays internal. There are lots of answers online for this, but they all involve pihole running on a Raspberry Pi rather than in docker.
I have exhausted many of my resources to figure this out. Thanks in advance. -
So I am trying to set up a reverse proxy with this. I can access the docker on LAN via the reverse proxy address; however, if I try to access that same link on WAN, I get "Forbidden". To do some testing, I want to enable HTTPS via the default port, but when I manually add it to the docker it does not resolve via HTTPS://LOCALIP:8920
Any ideas? Also, why wasn't the HTTPS port included in the template? Any particular reason? Thanks Binhex!
-
Makes sense, yes borg is installed. I will take a look!
Edit: Let me just say thank you, you always reply and always add/update packages. You're pure gold and I thank you so much.
-
Python 3 won't uninstall. Every other package uninstalls if I have it set to Off, but python 3 doesn't do anything when I hit apply. The page refreshes and it shows ON again even though I set it to off.
-
Is there a reason the "fuzzy image searching" functionality is not included in the docker? I see EXIF data and content scans only.
[SUPPORT] - stable-diffusion Advanced
in Docker Containers
Posted
Did you ever get this working in ComfyUI? I have the EXACT same issue. If so, what was the fix?