Zeze21

Members
  • Posts

    112
  • Joined

  • Last visited


Zeze21's Achievements

Apprentice (3/14)

9 Reputation

4 Community Answers

  1. Sorry, my mistake, I did not see the guide.
  2. Hi, I am trying to install the FreePBX SIP server as a VM. Unfortunately I can't get it to have a network device (none is showing up during the install process). My mainboard has three RJ45 ports, so in theory I could dedicate one of them to FreePBX. Can someone tell me how to pass that port through to the VM, or is there a simpler solution? Thank you. Sent from my Pixel 6 Pro using Tapatalk
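     In case it helps anyone who finds this later: on Unraid the simpler solution is usually not passing a whole NIC through, but attaching the VM to the br0 bridge so it gets its own address on the LAN. A minimal sketch of the relevant interface block in the VM's XML; the MAC address is just a placeholder, and virtio needs drivers in the guest (e1000 is the safe fallback):

     <!-- sketch only: attach the VM to Unraid's default br0 bridge -->
     <interface type='bridge'>
       <source bridge='br0'/>
       <mac address='52:54:00:aa:bb:cc'/>   <!-- placeholder MAC -->
       <model type='virtio'/>               <!-- or 'e1000' if the guest lacks virtio drivers -->
     </interface>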
  3. So I have trying to set up this frigate container for some time now and it took me a while to figure out that it apparently keeps restarting within several seconds. (I thought i had a faulty config.yml because i just got the spinning wheel on the web ui and tried to mess around with it) This is the error i get: Traceback (most recent call last): File "/usr/local/go2rtc/create_config.py", line 27, in <module> config: dict[str, any] = yaml.safe_load(raw_config) File "/usr/local/lib/python3.9/dist-packages/yaml/__init__.py", line 125, in safe_load return load(stream, SafeLoader) File "/usr/local/lib/python3.9/dist-packages/yaml/__init__.py", line 81, in load return loader.get_single_data() File "/usr/local/lib/python3.9/dist-packages/yaml/constructor.py", line 49, in get_single_data node = self.get_single_node() File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 36, in get_single_node document = self.compose_document() File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 55, in compose_document node = self.compose_node(None, None) File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 84, in compose_node node = self.compose_mapping_node(anchor) File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 133, in compose_mapping_node item_value = self.compose_node(node, item_key) File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 84, in compose_node node = self.compose_mapping_node(anchor) File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 127, in compose_mapping_node while not self.check_event(MappingEndEvent): File "/usr/local/lib/python3.9/dist-packages/yaml/parser.py", line 98, in check_event self.current_event = self.state() File "/usr/local/lib/python3.9/dist-packages/yaml/parser.py", line 428, in parse_block_mapping_key if self.check_token(KeyToken): File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 116, in check_token self.fetch_more_tokens() File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 223, in fetch_more_tokens return self.fetch_value() File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 577, in fetch_value raise ScannerError(None, None, yaml.scanner.ScannerError: mapping values are not allowed here in "<unicode string>", line 13, column 11: ffmpeg: ^ s6-rc: info: service legacy-services: stopping s6-rc: info: service legacy-services successfully stopped s6-rc: info: service nginx: stopping s6-rc: info: service go2rtc-healthcheck: stopping s6-rc: info: service go2rtc-healthcheck successfully stopped s6-rc: info: service nginx successfully stopped s6-rc: info: service nginx-log: stopping s6-rc: info: service frigate: stopping s6-rc: info: service frigate successfully stopped s6-rc: info: service go2rtc: stopping s6-rc: info: service frigate-log: stopping s6-rc: info: service nginx-log successfully stopped s6-rc: info: service go2rtc successfully stopped s6-rc: info: service go2rtc-log: stopping s6-rc: info: service frigate-log successfully stopped s6-rc: info: service go2rtc-log successfully stopped s6-rc: info: service log-prepare: stopping s6-rc: info: service s6rc-fdholder: stopping s6-rc: info: service log-prepare successfully stopped s6-rc: info: service legacy-cont-init: stopping s6-rc: info: service s6rc-fdholder successfully stopped s6-rc: info: service legacy-cont-init successfully stopped s6-rc: info: service fix-attrs: stopping s6-rc: info: service fix-attrs successfully stopped s6-rc: info: service s6rc-oneshot-runner: 
stopping s6-rc: info: service s6rc-oneshot-runner successfully stopped 2023-12-24 11:13:28.919764877 File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 133, in compose_mapping_node 2023-12-24 11:13:28.919765669 item_value = self.compose_node(node, item_key) 2023-12-24 11:13:28.919780186 File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 84, in compose_node 2023-12-24 11:13:28.919781148 node = self.compose_mapping_node(anchor) 2023-12-24 11:13:28.919782120 File "/usr/local/lib/python3.9/dist-packages/yaml/composer.py", line 127, in compose_mapping_node 2023-12-24 11:13:28.919783051 while not self.check_event(MappingEndEvent): 2023-12-24 11:13:28.919783933 File "/usr/local/lib/python3.9/dist-packages/yaml/parser.py", line 98, in check_event 2023-12-24 11:13:28.919784684 self.current_event = self.state() 2023-12-24 11:13:28.919785606 File "/usr/local/lib/python3.9/dist-packages/yaml/parser.py", line 428, in parse_block_mapping_key 2023-12-24 11:13:28.919807958 if self.check_token(KeyToken): 2023-12-24 11:13:28.919809081 File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 116, in check_token 2023-12-24 11:13:28.919809842 self.fetch_more_tokens() 2023-12-24 11:13:28.919810744 File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 223, in fetch_more_tokens 2023-12-24 11:13:28.919811645 return self.fetch_value() 2023-12-24 11:13:28.919812547 File "/usr/local/lib/python3.9/dist-packages/yaml/scanner.py", line 577, in fetch_value 2023-12-24 11:13:28.919813288 raise ScannerError(None, None, 2023-12-24 11:13:28.919814080 yaml.scanner.ScannerError: mapping values are not allowed here 2023-12-24 11:13:28.919814851 in "<unicode string>", line 13, column 11: 2023-12-24 11:13:28.919815573 ffmpeg: 2023-12-24 11:13:28.919816244 ^ 2023-12-24 11:13:28.919816885 2023-12-24 11:13:28.919817687 ************************************************************* 2023-12-24 11:13:28.919818498 *** End Config Validation Errors *** 2023-12-24 11:13:28.919819310 ************************************************************* 2023-12-24 11:13:29.277951226 [INFO] Preparing go2rtc config... 2023-12-24 11:13:29.535628713 [INFO] The go2rtc service exited with code 1 (by signal 0) 2023-12-24 11:13:30.100706110 [INFO] Service Frigate exited with code 1 (by signal 0) 2023-12-24 11:13:30.111917236 [INFO] The go2rtc-healthcheck service exited with code 256 (by signal 15) 2023-12-24 11:13:30.149506121 [INFO] Service NGINX exited with code 0 (by signal 0) What did i do wrong? EDIT: AHA! The name of the cam is not a mapping this was throwing me off!!
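     For anyone who hits the same ScannerError: the camera name has to be a YAML mapping key, i.e. it needs its own colon, with ffmpeg nested one level below it. A minimal sketch of the shape Frigate expects (camera name and RTSP path are placeholders, not my real config):

     cameras:
       front_door:                 # the camera name itself is a mapping key - note the colon
         ffmpeg:
           inputs:
             - path: rtsp://user:pass@10.10.11.50:554/stream   # placeholder stream URL
               roles:
                 - detect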
  4. OK, this is a general question: is it possible to use PXE to boot a VM from the server? To clarify: let's say I have a working VM on Unraid and a PXE-capable device; could I use that device to boot the VM locally? To be more precise: I have a Windows 11 VM all set up and working on Unraid, and I also have a laptop that supports PXE booting. Is there some way to boot that Windows 11 VM so it runs on the laptop via PXE, or is that just not how PXE works? Thanks for your time. Sent from my Pixel 6 Pro using Tapatalk
  5. Media scraping

    I would suggest tinyMediaManager for movies and series (highly customizable) as a Docker image, and MusicBrainz Picard for music files, also as a Docker image; see the tinyMediaManager sketch below. Sent from my Pixel 6 Pro using Tapatalk
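     A rough idea of how that can look as a Compose file; the image name, tag, port and paths are from memory, so please double-check the Community Applications template or Docker Hub before copying this:

     services:
       tinymediamanager:
         image: tinymediamanager/tinymediamanager:latest   # assumed image name - verify on Docker Hub
         ports:
           - "4000:4000"                                    # web UI port, as I remember it
         volumes:
           - /mnt/user/appdata/tinymediamanager:/data       # tMM settings and database
           - /mnt/user/media:/media                         # your movie/series library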
  6. To be honest, I was in the same situation. SWAG had been running fine for (I want to say years, but at least months) and I couldn't for the life of me figure out what was wrong. I switched to Nginx Proxy Manager (there is a version with a configuration web interface, and boy, is it easy to set up; it also gives you a nice overview of what is running, see the Compose sketch below). So yeah... I just don't use SWAG anymore. Sent from my Pixel 6 Pro using Tapatalk
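     For reference, the container I mean is Nginx Proxy Manager. A minimal sketch along the lines of its quick-start Compose file (host paths are examples; on Unraid you would normally just install it from Community Applications):

     services:
       nginx-proxy-manager:
         image: jc21/nginx-proxy-manager:latest
         restart: unless-stopped
         ports:
           - "80:80"      # HTTP
           - "443:443"    # HTTPS
           - "81:81"      # admin web UI
         volumes:
           - /mnt/user/appdata/npm/data:/data
           - /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt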
  7. Hello everyone, ever since I installed that awesome Stable Diffusion container I have been looking for more AI Docker containers. Right now I'm wondering whether there is a good, easy-to-use voice-cloning AI container. (This is actually for my Home Assistant installation, or more specifically for my robot vacuums: I want to give them custom German voices.) In general I like running things on my own server instead of somewhere in the cloud, where who knows what happens to my data. So if anyone has a good recommendation for me, I would greatly appreciate it. Thank you. Sent from my Pixel 6 Pro using Tapatalk
  8. Well, I had to reinstall the VM, as I did not find any way to make it run again.
  9. OK, the solution was simple: the Docker containers I wanted to connect to the rest of the network had to run in br0 mode. I hope anybody who has the same issue finds this helpful.
  10. OK, the solution was simple: the Docker containers I wanted to connect to the rest of the network had to run in br0 mode; a Compose sketch of that is below. I hope anybody who has the same issue finds this helpful.
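     To make that a bit more concrete: in the Unraid Docker template this is just "Network Type: Custom: br0" plus a fixed IP. Expressed as a Compose sketch (the service, image and IP are only examples, and it assumes the br0 custom network already exists on the host):

     services:
       swag:
         image: lscr.io/linuxserver/swag:latest
         networks:
           br0:
             ipv4_address: 10.10.10.20   # example address from my 10.10.0.0/16 subnet

     networks:
       br0:
         external: true                  # created by Unraid, not by Compose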
  11. Unfortunately the update to 6.12.4 did not change anything; I still get a 502 (Bad Gateway) when trying to access Home Assistant via the web.
  12. Docker seems to not be able to communicate outside the docker network after several problems i had initially when upgrading from 6.11.5 to 6.12.3. I have a bit of a special setup: I have a fritzbox 7590 as router with dhcp disabled. it's Ipv4 is 10.10.10.1 with the subnetmask 255.255.0.0 I have a raspberry pi with adguard home running - it acts as my dns server and has dhcp enabled. Its Ipv4 adress is 10.10.10.2 and gives out ip adresses from 10.10.100.1 to 10.10.100.255 My smart home devices have all set ip adresses in th range of 10.10.11.1 to 10.10.11.255 Home assistant runs as VM and has the ipv4 10.10.11.1 My unraid server has 10.10.10.10 I have a windows 11 VM with 10.10.10.11 I have several docker containers (nextcloud, guacamole, home assistance and others) which run all as bridge and are accessible from their respective subdomains. While nextcloud, bitwarden and others work fine, Guacamole works semi (I can load up guacamole as service but can not connect to my VM) and home assistant just gives me a 502. To my understanding the common thing here is that the docker containers seem to be unable connecting outside the dockernetwork itself. (at least that's how i would be able to explain why guacamole loads up fine but i can not connect to a vm and home assistant does not work) This is my docker config: In Swag: nextcloud.subdomain.config: ## Version 2023/06/24 # make sure that your nextcloud container is named nextcloud # make sure that your dns has a cname set for nextcloud # assuming this container is called "swag", edit your nextcloud container's config # located at /config/www/nextcloud/config/config.php and add the following lines before the ");": # 'trusted_proxies' => ['swag'], # 'overwrite.cli.url' => 'https://nextcloud.example.com/', # 'overwritehost' => 'nextcloud.example.com', # 'overwriteprotocol' => 'https', # # Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this: # array ( # 0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it. 
# 1 => 'nextcloud.example.com', # ), server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name cloud.*; include /config/nginx/ssl.conf; client_max_body_size 0; location / { include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.10.10; set $upstream_port 1443; set $upstream_proto https; proxy_pass $upstream_proto://$upstream_app:$upstream_port; # Hide proxy response headers from Nextcloud that conflict with ssl.conf # Uncomment the Optional additional headers in SWAG's ssl.conf to pass Nextcloud's security scan proxy_hide_header Referrer-Policy; proxy_hide_header X-Content-Type-Options; proxy_hide_header X-Frame-Options; proxy_hide_header X-XSS-Protection; # Disable proxy buffering proxy_buffering off; } } guacamole.subdomain.config: ## Version 2023/05/31 # make sure that your guacamole container is named guacamole # make sure that your dns has a cname set for guacamole server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name remote.*; include /config/nginx/ssl.conf; client_max_body_size 0; # enable for ldap auth (requires ldap-location.conf in the location block) #include /config/nginx/ldap-server.conf; # enable for Authelia (requires authelia-location.conf in the location block) #include /config/nginx/authelia-server.conf; # enable for Authentik (requires authentik-location.conf in the location block) #include /config/nginx/authentik-server.conf; location / { # enable the next two lines for http auth #auth_basic "Restricted"; #auth_basic_user_file /config/nginx/.htpasswd; # enable for ldap auth (requires ldap-server.conf in the server block) #include /config/nginx/ldap-location.conf; # enable for Authelia (requires authelia-server.conf in the server block) #include /config/nginx/authelia-location.conf; # enable for Authentik (requires authentik-server.conf in the server block) #include /config/nginx/authentik-location.conf; include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.10.10; set $upstream_port 8088; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; proxy_buffering off; } } homeassistant.subdomain.config ## Version 2023/05/31 # make sure that your homeassistant container is named homeassistant # make sure that your dns has a cname set for homeassistant # As of homeassistant 2021.7.0, it is now required to define the network range your proxy resides in, this is done in Homeassitants configuration.yaml # https://www.home-assistant.io/integrations/http/#trusted_proxies # Example below uses the default dockernetwork ranges, you may need to update this if you dont use defaults. 
# # http: # use_x_forwarded_for: true # trusted_proxies: # - 172.16.0.0/12 server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name home.*; include /config/nginx/ssl.conf; client_max_body_size 0; # enable for ldap auth (requires ldap-location.conf in the location block) #include /config/nginx/ldap-server.conf; # enable for Authelia (requires authelia-location.conf in the location block) #include /config/nginx/authelia-server.conf; # enable for Authentik (requires authentik-location.conf in the location block) #include /config/nginx/authentik-server.conf; location / { # enable the next two lines for http auth #auth_basic "Restricted"; #auth_basic_user_file /config/nginx/.htpasswd; # enable for ldap auth (requires ldap-server.conf in the server block) #include /config/nginx/ldap-location.conf; # enable for Authelia (requires authelia-server.conf in the server block) #include /config/nginx/authelia-location.conf; # enable for Authentik (requires authentik-server.conf in the server block) #include /config/nginx/authentik-location.conf; include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.11.1; set $upstream_port 8123; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; } location ~ ^/(api|local|media)/ { include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.11.1; set $upstream_port 8123; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; } } The swag logs give absolutely no error: [migrations] started [migrations] 01-nginx-site-confs-default: skipped [migrations] done ─────────────────────────────────────── ██╗ ███████╗██╗ ██████╗ ██║ ██╔════╝██║██╔═══██╗ ██║ ███████╗██║██║ ██║ ██║ ╚════██║██║██║ ██║ ███████╗███████║██║╚██████╔╝ ╚══════╝╚══════╝╚═╝ ╚═════╝ Brought to you by linuxserver.io ─────────────────────────────────────── To support the app dev(s) visit: Certbot: https://supporters.eff.org/donate/support-work-on-certbot To support LSIO projects visit: https://www.linuxserver.io/donate/ ─────────────────────────────────────── GID/UID ─────────────────────────────────────── User UID: 99 User GID: 100 ─────────────────────────────────────── using keys found in /config/keys Variables set: PUID=99 PGID=100 TZ=Europe/Berlin URL=mydomain.com SUBDOMAINS=cloud,heim,home,media,remote,robot,vaultwarden,vpn EXTRA_DOMAINS= ONLY_SUBDOMAINS=true VALIDATION=http CERTPROVIDER= DNSPLUGIN=cloudflare EMAIL= STAGING=false Using Let's Encrypt as the cert provider SUBDOMAINS entered, processing Sub-domains processed are: cloud.mydomain.com,heim.mydomain.com,home.mydomain.com,media.mydomain.com,remote.mydomain.com,robot.mydomain.com,vaultwarden.mydomain.com,vpn.mydomain.com No e-mail address entered or address invalid http validation is selected Certificate exists; parameters unchanged; starting nginx The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am). [custom-init] No custom files found, skipping... [ls.io-init] done. Server ready So to my understanding swag should be configured correctly but the docker network seems to have a slight "hickup". Just to clearify the log is with the correct domain not mydomain.com. Also the VMs work fine when connecting to them within my own network or via vpn (windows can be loaded up via rdp) Home assitant can be accessed via 10.10.11.1:8123 Can someone please help me out? Thank you all so much!
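     One thing worth double-checking in a setup like this (it is the same hint as the comment block in the homeassistant sample config above): Home Assistant itself has to trust the proxy, otherwise it rejects forwarded requests. Roughly, in Home Assistant's configuration.yaml; the exact ranges depend on your Docker network and on whether SWAG sits on br0:

     http:
       use_x_forwarded_for: true
       trusted_proxies:
         - 172.16.0.0/12    # default Docker bridge ranges
         - 10.10.10.10      # the Unraid host, in case SWAG is reached via its IP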
  13. I suddenly have the problem that I can not access the VM pages on my Unraid server anymore. Neither the VM page nor the VM settings page is accessible. The logs state:

      Aug 26 03:09:30 Server nginx: 2023/08/26 03:09:30 [error] 7827#7827: *298579 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.10.100.31, server: , request: "GET /Main HTTP/1.1", subrequest: "/auth-request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "10.10.10.10"
      Aug 26 03:09:30 Server nginx: 2023/08/26 03:09:30 [error] 7827#7827: *298579 auth request unexpected status: 504 while sending to client, client: 10.10.100.31, server: , request: "GET /Main HTTP/1.1", host: "10.10.10.10"
      Aug 26 03:11:02 Server nginx: 2023/08/26 03:11:02 [error] 7827#7827: *299096 upstream timed out (110: Connection timed out) while reading upstream, client: 10.10.100.31, server: , request: "GET /VMs HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "10.10.10.10", referrer: "http://10.10.10.10/Docker"
      Aug 26 03:11:03 Server nginx: 2023/08/26 03:11:03 [error] 7827#7827: *299140 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.10.100.31, server: , request: "GET /Settings HTTP/1.1", subrequest: "/auth-request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "10.10.10.10", referrer: "http://10.10.10.10/VMs"
      Aug 26 03:11:03 Server nginx: 2023/08/26 03:11:03 [error] 7827#7827: *299140 auth request unexpected status: 504 while sending to client, client: 10.10.100.31, server: , request: "GET /Settings HTTP/1.1", host: "10.10.10.10", referrer: "http://10.10.10.10/VMs"

      Also, when I open the Docker page I get the message "Array must be Started to view Docker containers.", yet I can see the Docker containers, access them and modify them. I can not soft-restart the server. server-diagnostics-20230826-0959.zip
  14. Swag seems to not be able to communicate outside the docker network after several problems i had initially when upgrading from 6.11.5 to 6.12.3. I have a bit of a special setup: I have a fritzbox 7590 as router with dhcp disabled. it's Ipv4 is 10.10.10.1 with the subnetmask 255.255.0.0 I have a raspberry pi with adguard home running - it acts as my dns server and has dhcp enabled. It's Ipv4 adress is 10.10.10.2 and gives out ip adresses from 10.10.100.1 to 10.10.100.255 My smart home devices have all set ip adresses in th range of 10.10.11.1 to 10.10.11.255 Home assistant runs as VM and has the ipv4 10.10.11.1 My unraid server has 10.10.10.10 I have a windows 11 VM with 10.10.10.11 I have several docker containers (nextcloud, guacamole, home assistance and others) which run all as bridge and are accessible from their respective subdomains. While nextcloud, bitwarden and others work fine, Guacamole works semi (I can load up guacamole as service but can not connect to my VM) and home assistant just gives me a 502. To my understanding the common thing here is that swag seems to be unable connecting outside the dockernetwork itself. (at least that's how i would be able to explain why guacamole loads up fine but i can not connect to a vm and home assistant does not work) This is my docker config: nextcloud.subdomain.config: ## Version 2023/06/24 # make sure that your nextcloud container is named nextcloud # make sure that your dns has a cname set for nextcloud # assuming this container is called "swag", edit your nextcloud container's config # located at /config/www/nextcloud/config/config.php and add the following lines before the ");": # 'trusted_proxies' => ['swag'], # 'overwrite.cli.url' => 'https://nextcloud.example.com/', # 'overwritehost' => 'nextcloud.example.com', # 'overwriteprotocol' => 'https', # # Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this: # array ( # 0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it. 
# 1 => 'nextcloud.example.com', # ), server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name cloud.*; include /config/nginx/ssl.conf; client_max_body_size 0; location / { include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.10.10; set $upstream_port 1443; set $upstream_proto https; proxy_pass $upstream_proto://$upstream_app:$upstream_port; # Hide proxy response headers from Nextcloud that conflict with ssl.conf # Uncomment the Optional additional headers in SWAG's ssl.conf to pass Nextcloud's security scan proxy_hide_header Referrer-Policy; proxy_hide_header X-Content-Type-Options; proxy_hide_header X-Frame-Options; proxy_hide_header X-XSS-Protection; # Disable proxy buffering proxy_buffering off; } } guacamole.subdomain.config: ## Version 2023/05/31 # make sure that your guacamole container is named guacamole # make sure that your dns has a cname set for guacamole server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name remote.*; include /config/nginx/ssl.conf; client_max_body_size 0; # enable for ldap auth (requires ldap-location.conf in the location block) #include /config/nginx/ldap-server.conf; # enable for Authelia (requires authelia-location.conf in the location block) #include /config/nginx/authelia-server.conf; # enable for Authentik (requires authentik-location.conf in the location block) #include /config/nginx/authentik-server.conf; location / { # enable the next two lines for http auth #auth_basic "Restricted"; #auth_basic_user_file /config/nginx/.htpasswd; # enable for ldap auth (requires ldap-server.conf in the server block) #include /config/nginx/ldap-location.conf; # enable for Authelia (requires authelia-server.conf in the server block) #include /config/nginx/authelia-location.conf; # enable for Authentik (requires authentik-server.conf in the server block) #include /config/nginx/authentik-location.conf; include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.10.10; set $upstream_port 8088; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; proxy_buffering off; } } homeassistant.subdomain.config ## Version 2023/05/31 # make sure that your homeassistant container is named homeassistant # make sure that your dns has a cname set for homeassistant # As of homeassistant 2021.7.0, it is now required to define the network range your proxy resides in, this is done in Homeassitants configuration.yaml # https://www.home-assistant.io/integrations/http/#trusted_proxies # Example below uses the default dockernetwork ranges, you may need to update this if you dont use defaults. 
# # http: # use_x_forwarded_for: true # trusted_proxies: # - 172.16.0.0/12 server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name home.*; include /config/nginx/ssl.conf; client_max_body_size 0; # enable for ldap auth (requires ldap-location.conf in the location block) #include /config/nginx/ldap-server.conf; # enable for Authelia (requires authelia-location.conf in the location block) #include /config/nginx/authelia-server.conf; # enable for Authentik (requires authentik-location.conf in the location block) #include /config/nginx/authentik-server.conf; location / { # enable the next two lines for http auth #auth_basic "Restricted"; #auth_basic_user_file /config/nginx/.htpasswd; # enable for ldap auth (requires ldap-server.conf in the server block) #include /config/nginx/ldap-location.conf; # enable for Authelia (requires authelia-server.conf in the server block) #include /config/nginx/authelia-location.conf; # enable for Authentik (requires authentik-server.conf in the server block) #include /config/nginx/authentik-location.conf; include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.11.1; set $upstream_port 8123; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; } location ~ ^/(api|local|media)/ { include /config/nginx/proxy.conf; include /config/nginx/resolver.conf; set $upstream_app 10.10.11.1; set $upstream_port 8123; set $upstream_proto http; proxy_pass $upstream_proto://$upstream_app:$upstream_port; } } The swag logs give absolutely no error: [migrations] started [migrations] 01-nginx-site-confs-default: skipped [migrations] done usermod: no changes ─────────────────────────────────────── ██╗ ███████╗██╗ ██████╗ ██║ ██╔════╝██║██╔═══██╗ ██║ ███████╗██║██║ ██║ ██║ ╚════██║██║██║ ██║ ███████╗███████║██║╚██████╔╝ ╚══════╝╚══════╝╚═╝ ╚═════╝ Brought to you by linuxserver.io ─────────────────────────────────────── To support the app dev(s) visit: Certbot: https://supporters.eff.org/donate/support-work-on-certbot To support LSIO projects visit: https://www.linuxserver.io/donate/ ─────────────────────────────────────── GID/UID ─────────────────────────────────────── User UID: 99 User GID: 100 ─────────────────────────────────────── using keys found in /config/keys Variables set: PUID=99 PGID=100 TZ=Europe/Berlin URL=mydomain.com SUBDOMAINS=cloud,heim,home,media,remote,robot,vaultwarden,vpn EXTRA_DOMAINS= ONLY_SUBDOMAINS=true VALIDATION=http CERTPROVIDER= DNSPLUGIN=cloudflare EMAIL= STAGING=false Using Let's Encrypt as the cert provider SUBDOMAINS entered, processing Sub-domains processed are: cloud.mydomain.com,heim.mydomain.com,home.mydomain.com,media.mydomain.com,remote.mydomain.com,robot.mydomain.com,vaultwarden.mydomain.com,vpn.mydomain.com No e-mail address entered or address invalid http validation is selected Certificate exists; parameters unchanged; starting nginx The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am). [custom-init] No custom files found, skipping... [ls.io-init] done. Server ready Just to clearify the log is with the correct domain not mydomain.com. Also the VMs work fine when connecting to them within my own network or via vpn (windows can be loaded up via rdp) Home assitant can be accessed via 10.10.11.1:8123 Can someone please help me out? Thank you all so much!
  15. I changed a lot on my server (maybe you have seen posts from me concerning upgrading from 6.11.5 to 6.12.3... A long story short: I upgraded, things didn't work i had to change many things around, now everything works.....except home assistant I have it running as VM with the following config: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='1'> <name>Home Assistant</name> <uuid>6b3c36bf-a787-ff1d-8879-dc5c66f63613</uuid> <description>Unsere Heimautomatisierung</description> <metadata> <vmtemplate xmlns="unraid" name="Linux" icon="Hassio_2.png" os="linux"/> </metadata> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>2097152</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>2</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='17'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-7.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/6b3c36bf-a787-ff1d-8879-dc5c66f63613_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='1' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='writeback'/> <source file='/mnt/user/domains/homeassistant/haos_ova-9.4.qcow2' index='1'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x15'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' 
index='7' model='pcie-to-pci-bridge'> <model name='pcie-pci-bridge'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:2d:ef:ff'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='e1000'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Home Assistant/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='3'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='de'> <listen type='address' address='0.0.0.0'/> </graphics> <audio id='1' type='none'/> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <hostdev mode='subsystem' type='usb' managed='no'> <source startupPolicy='optional'> <vendor id='0x10c4'/> <product id='0xea60'/> <address bus='5' device='2'/> </source> <alias name='hostdev0'/> <address type='usb' bus='0' port='1'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source startupPolicy='optional'> <vendor id='0x2357'/> <product id='0x0604'/> <address bus='5' device='3'/> </source> <alias name='hostdev1'/> <address type='usb' bus='0' port='2'/> </hostdev> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </memballoon> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> As you can clearly see the mac adress should be: 52:54:00:2d:ef:ff But when i check the Unraid VM 
Page I see: Also the IP adresses look strange to me. Meaning they are not from any range I have set up (Docker uses different Ip Addresses) Here is my log: 2023-08-16 18:27:02.845+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 6.1.38-Unraid, hostname: Server LC_ALL=C \ PATH=/bin:/sbin:/usr/bin:/usr/sbin \ HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant' \ XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.local/share' \ XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.cache' \ XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-1-Home Assistant/.config' \ /usr/local/sbin/qemu \ -name 'guest=Home Assistant,debug-threads=on' \ -S \ -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-Home Assistant/master-key.aes"}' \ -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \ -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/6b3c36bf-a787-ff1d-8879-dc5c66f63613_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \ -machine pc-q35-7.1,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \ -accel kvm \ -cpu host,migratable=on,topoext=on,host-cache-info=on,l3-cache=off \ -m 4096 \ -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":4294967296}' \ -overcommit mem-lock=off \ -smp 2,sockets=1,dies=1,cores=1,threads=2 \ -uuid 6b3c36bf-a787-ff1d-8879-dc5c66f63613 \ -no-user-config \ -nodefaults \ -chardev socket,id=charmonitor,fd=34,server=on,wait=off \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=utc,driftfix=slew \ -global kvm-pit.lost_tick_policy=delay \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device '{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}' \ -device '{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}' \ -device '{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}' \ -device '{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}' \ -device '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"}' \ -device '{"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"}' \ -device '{"driver":"pcie-pci-bridge","id":"pci.7","bus":"pci.1","addr":"0x0"}' \ -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pcie.0","addr":"0x7.0x7"}' \ -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pcie.0","multifunction":true,"addr":"0x7"}' \ -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pcie.0","addr":"0x7.0x1"}' \ -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pcie.0","addr":"0x7.0x2"}' \ -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.2","addr":"0x0"}' \ -blockdev 
'{"driver":"file","filename":"/mnt/user/domains/homeassistant/haos_ova-9.4.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \ -device '{"driver":"virtio-blk-pci","bus":"pci.3","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk2","bootindex":1,"write-cache":"on"}' \ -netdev tap,fd=35,id=hostnet0 \ -device '{"driver":"e1000","netdev":"hostnet0","id":"net0","mac":"52:54:00:2d:ef:ff","bus":"pci.7","addr":"0x1"}' \ -chardev pty,id=charserial0 \ -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \ -chardev socket,id=charchannel0,fd=33,server=on,wait=off \ -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \ -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"3"}' \ -audiodev '{"id":"audio1","driver":"none"}' \ -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \ -k de \ -device '{"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pcie.0","addr":"0x1"}' \ -device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/005/002","id":"hostdev0","bus":"usb.0","port":"1"}' \ -device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/005/003","id":"hostdev1","bus":"usb.0","port":"2"}' \ -device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.4","addr":"0x0"}' \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on char device redirected to /dev/pts/0 (label charserial0) qxl_send_events: spice-server bug: guest stopped, ignoring I can not access Home Assistant from the network and Home Assistant says that it has no real Network connection: I don't really know what to do... Any help is greatly appreciated