CoreyG

Members • Posts: 5

Everything posted by CoreyG

  1. Hi everyone, I've been experimenting with various Stable Diffusion solutions and have successfully implemented many of them. However, I'm encountering an issue with FaceFusion. Despite its capabilities, it doesn't seem to leverage my NVIDIA 3090 GPU effectively. Since CUDA doesn't appear as an option, I suspect it isn't loading or isn't configured properly. Has anyone else faced this issue? If so, are there any specific configurations or tricks to optimize GPU utilization for FaceFusion? Your insights would be greatly appreciated! For reference, here are the solutions I have working so far: sd, invokeai, comfy-ui
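     A quick way to narrow down where it's failing, a minimal sketch assuming FaceFusion is running on a pip-installed ONNX Runtime (adjust the commands to whatever venv or container you launch it from):

     # confirm the driver can see the 3090 at all
     nvidia-smi
     # list the execution providers ONNX Runtime reports; CUDA should appear in this list
     python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
     # if the output is only ['CPUExecutionProvider'], the CPU-only build is installed;
     # swapping to the GPU build is one possible fix (version pinning may be needed)
     pip uninstall -y onnxruntime
     pip install onnxruntime-gpu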
  2. @jmmrly I don't have much info to help, but try these; I recently had issues when using a custom IP for dockers:
     1. Set Docker to bridge mode.
     2. Privileged mode must be on.
     3. If you have VMs, try it first with them off.
     If this does work, you can set debug mode to true and add the verbose option line to the .ovpn file. It looks like this (there's a log-tailing sketch after this snippet):

     remote us-ny.vpnunlimitedapp.com 1194
     client
     dev tun
     persist-key
     ping 5
     ping-exit 30
     nobind
     comp-lzo no
     remote-random
     remote-cert-tls server
     auth-nocache
     route-metric 1
     cipher AES-256-CBC
     auth sha512
     float
     verb 4    <--- remove this comment and arrows; this line adds the extra debug info --->
     <ca>
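     Once verb 4 is in place, the extra handshake detail lands in the container log; a rough way to follow it from the host (the container name here is an assumption, substitute your own):

     # follow the OpenVPN output and filter for the interesting lines
     docker logs -f binhex-privoxyvpn 2>&1 | grep -iE 'tls|auth|tun'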
  3. Thanks for the reply. I agree; I settled on bridge mode and just changed the ports. Just to know my future options, I'm playing around with custom VLANs to see whether that avoids the conflict. Thank you.
  4. Hope someone has a solution to this one; I'm at a loss and there isn't much online or in the forums. I've been using this Docker for a while and it worked well until recently. There appears to be a negative relationship between this Docker and any VM I run. I have mostly Windows but some Kali Linux for fun; either causes the following problem: when I run this docker after a VM has started, it does not create a connection with my VPN provider. I can see the tun0 that is created upon success using "ifconfig", and once it's created things work correctly. I have been experimenting to see if there is something unique about my network setup or VMs, but nothing seems unusual. I have even set up a test Unraid server on an old laptop, with similar results on a stock install. Any suggestions? I can't see why a docker and VMs would be this bonded to begin with. Perhaps there is some networking-related docker or VM that uses similar naming, causing this. Would appreciate any suggestions; it's just not stable when I need it to be.

     Environment: latest version of binhex/arch-privoxyvpn, Unraid 6.9.2; can get more details if requested. Cheers.

     Update 9-5-21 @binhex Not sure if it's a feature or a bug. Here is what I found: if you change the VPN container's default network from bridge to br0 and assign it an IP address (to avoid port-mapping conflicts), then the above issue/relationship occurs. If you use bridge, there does not appear to be an issue.

     MY TEST (a scripted version is sketched after this post):
     1. LAUNCH any VM and wait until it starts (I use Kali, it's quick).
     2. LAUNCH binhex-privoxyvpn and open the console.
     3. RUN "watch ifconfig" (this polls ifconfig forever).

     =========== Below is what I use to identify the issue: tun0 fails to establish and typically never will ===========

     eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
             ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
             RX packets 291  bytes 98024 (95.7 KiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 322  bytes 51706 (50.4 KiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
             inet 127.0.0.1  netmask 255.0.0.0
             loop  txqueuelen 1000  (Local Loopback)
             RX packets 20  bytes 1056 (1.0 KiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 20  bytes 1056 (1.0 KiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     4. SHUT DOWN the VM.
     5. WAIT about 5-10 seconds and tun0 will appear, looking something like this:

     =========== Below is what I believe is a successful connection ===========

     (eth0 and lo are unchanged from above)

     tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
             inet 10.200.0.58  netmask 255.255.255.255  destination 10.200.0.57
             unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
             RX packets 154  bytes 69799 (68.1 KiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 239  bytes 26190 (25.5 KiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     Great app, hope this helps. If there is something you want to test, I have the rigs set up to run stuff quickly. Cheers, -C
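     For anyone reproducing the test above, the console-watching step can also be scripted from the Unraid host; a rough sketch, assuming the container is named binhex-privoxyvpn (adjust to your own container name):

     # poll the container every 2 seconds and report whether tun0 has come up
     while true; do
       if docker exec binhex-privoxyvpn ifconfig tun0 >/dev/null 2>&1; then
         echo "$(date '+%T') tun0 up"
       else
         echo "$(date '+%T') tun0 missing"
       fi
       sleep 2
     done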