[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



Hi,

My SWAG container runs on a custom bridge network, but some of my containers (Jellyfin, Plex...) need to run on the 'Host' network (for DLNA support).
How can I configure the nginx conf files for those services?
I tried editing: "set $upstream_app 192.168.1.5;" but it doesn't work.

Is there any way to communicate with the host?
Thanks
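For context, the edit being attempted here is pointing the upstream at the host's LAN IP and the app's host port instead of a container name. A minimal sketch, assuming the Unraid host is 192.168.1.5 and Jellyfin's default port 8096 (both assumptions; adjust to your setup):

```nginx
# jellyfin.subdomain.conf fragment (sketch) - the upstream is the host's
# LAN IP rather than a container name, because the app runs on the 'Host'
# network and is not reachable by name on the custom bridge
location / {
    include /config/nginx/proxy.conf;
    set $upstream_app 192.168.1.5;   # Unraid host LAN IP (assumed)
    set $upstream_port 8096;         # Jellyfin's host port (assumed)
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```

With a literal IP the `resolver 127.0.0.11` line is not needed; that resolver only serves container names on a user-defined bridge.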


I've been trying to get SWAG to work with Home Assistant for a few days now and have been running into some problems.

 

I've searched the forums, and since I'm using the docker containers on Unraid, I'm using the subdomain templates. I've set upstream_app to 'home-assistant', but this did not solve the problem.

 

I can access it from inside my network fine, but from outside my network I get a '502 Bad Gateway' error.

 

I'm running other docker containers successfully through SWAG: Nextcloud, a website, and a few other things.

 

A couple of things:

Using base HomeAssistant template in SWAG

Using Google domains with the proper subdomains

Using no extra configuration in the configuration.yaml in HA

 

It would be great if anyone has ideas or suggestions.


I've had SWAG working for a while now (for Nextcloud, Emby, etc.), but now I am trying to get it to work for Bitwarden and I am getting a 522 error.

 

A CNAME has been created and added to SWAG for the LE certificate.

 

Bitwarden container is on the same proxynet as SWAG.

 

I can reach Bitwarden using localip:port, but not by domain name. I'm using the default bitwarden.subdomain.conf file with a minor change to the location /admin block. I changed the Bitwarden container name from bitwardenrs to just bitwarden to match the file.

# make sure that your dns has a cname set for bitwarden and that your bitwarden container is not using a base url
# make sure your bitwarden container is named "bitwarden"
# set the environment variable WEBSOCKET_ENABLED=true on your bitwarden container

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name bitwarden.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 128M;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /admin {
        return 404;
    }

    location /notifications/hub {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 3012;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /notifications/hub/negotiate {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}

I'm drawing a blank on what to try next.

2 minutes ago, Endy said:

I've had SWAG working for awhile now (For Nextcloud, Emby, etc.), but I am trying to get it to work for Bitwarden, but I am getting a 522 error. [...]

Sorry, I meant to delete my post; I managed to fix it. It turns out I had my domain spelled wrong at my domain host.


I apologize if this has been asked already but I've done a quick search and couldn't find the answer....

 

How do I set up the reverse proxy to point to a server that isn't hosted on the same docker network? i.e., point it to a different server altogether, with a different IP?

 

For instance, I tried the following to point to my Pi-hole, but it doesn't seem to work:


 

# make sure that your dns has a cname set for pihole and that your pihole container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name pihole.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 192.168.1.1 valid=30s;
        set $upstream_app 192.168.1.129;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_hide_header X-Frame-Options;
    }

    location /admin {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 192.168.1.1 valid=30s;
        set $upstream_app 192.168.1.129;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_hide_header X-Frame-Options;
    }
}

 

On 12/5/2020 at 5:51 PM, quinchu said:

I've been trying to get SWAG to work with HomeAssistant for a few days now. Been running into some problems. [...]

My problem ended up being that I was using Bridge mode; as soon as I set the container up to use its own IP address, everything started to work fine.
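With the container on its own IP, the proxy conf points at that address directly instead of a container name. A sketch of the changed lines, assuming Home Assistant was given 192.168.1.10 (an assumed address) and its default port 8123:

```nginx
# homeassistant.subdomain.conf fragment (sketch with hypothetical values)
location / {
    include /config/nginx/proxy.conf;
    set $upstream_app 192.168.1.10;  # container's dedicated IP (assumed)
    set $upstream_port 8123;         # Home Assistant's default port
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```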


Hello - 

As of last night, my swag/letsencrypt container can't retrieve new certs. It's getting stuck in the challenge phase and reporting that it might be a firewall issue. I've not made any changes to my firewall (pfSense), and I can see the traffic passing from the WAN to the Unraid box (see logs below). I've tried removing and re-adding the container to no avail. I also verified that my DNS and CNAME records are correct and resolve to my IP via duckdns.

I changed my actual domain name to mydomain.com in the logs for privacy.

 

Please assist. I've attached below the logs from swag and also a screenshot of my firewall log showing the traffic passing through. 

Let me know if any other logs would help. 

 

Thank you.



[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Chicago
URL=mydomain.com
SUBDOMAINS=cloud,docsrv,bw1,cctv
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
DNSPLUGIN=
[email protected]
STAGING=false

SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d cloud.mydomain.com -d docsrv.mydomain.com -d bw1.mydomain.com -d cctv.mydomain.com
E-mail address entered: [email protected]
http validation is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Account registered.
Requesting a certificate for cloud.mydomain.com and 3 more domains
Performing the following challenges:
http-01 challenge for bw1.mydomain.com
http-01 challenge for cctv.mydomain.com
http-01 challenge for cloud.mydomain.com
http-01 challenge for docsrv.mydomain.com
Waiting for verification...
Challenge failed for domain cctv.mydomain.com

Challenge failed for domain cloud.mydomain.com

Challenge failed for domain docsrv.mydomain.com

Challenge failed for domain bw1.mydomain.com

http-01 challenge for cctv.mydomain.com
http-01 challenge for cloud.mydomain.com
http-01 challenge for docsrv.mydomain.com
http-01 challenge for bw1.mydomain.com
Cleaning up challenges
Some challenges have failed.

IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: cctv.mydomain.com
Type: connection
Detail: Fetching
http://cctv.mydomain.com/.well-known/acme-challenge/axPVzX1R_0b-z_RUA4RFpx49MHuuuNahNC_j91SaWqI:
Timeout during connect (likely firewall problem)

Domain: cloud.mydomain.com
Type: connection
Detail: Fetching
http://cloud.mydomain.com/.well-known/acme-challenge/U45bVQqmdevLwK7wKy0AbDomXOzoZgLvqYXYgRzxld0:
Timeout during connect (likely firewall problem)

Domain: docsrv.mydomain.com
Type: connection
Detail: Fetching
http://docsrv.mydomain.com/.well-known/acme-challenge/YgPNqYJs7Vm9eBME8emyeRH50UR4qpMlfjmkCtmPNjo:
Timeout during connect (likely firewall problem)

Domain: bw1.mydomain.com
Type: connection
Detail: Fetching
http://bw1.mydomain.com/.well-known/acme-challenge/udN7LtCgSavk48_JkuoDJ6VUMaNXx3_AtP3g95NnWM8:
Timeout during connect (likely firewall problem)

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

  

pfsense.PNG


Hello 

I moved from Letsencrypt to SWAG not long ago and I have a question about something I would like to set up, but I'm not sure if it's possible.

 

I have 1 external IP, and I run around 3-4 VMs internally. So at the moment, people who connect to those VMs have to use something like the below:

vm1 - remote1.domain.co.uk:8081
vm2 - remote2.domain.co.uk:8082

 

and so on with port numbers. The hardware firewall I have then port-forwards 8081 and 8082 to the correct internal IP address for that VM and the port for RDP.

 

With SWAG, is it possible to set something up like the below (without the port number)? In a config file you would specify the URL, e.g. remote1.domain.co.uk, along with the IP that goes with that URL, and then it would know which internal server to pass the RDP request to.

 

vm1 - remote1.domain.co.uk
vm2 - remote2.domain.co.uk

 

Hope that makes sense :) - it did in my head, lol.

 

 


I used to use the Let's Encrypt docker, but I am having trouble with SWAG. I have forwarded router ports 443 and 80 to my Unraid server ports 5443 and 5080. I am using DynDNS and my URL is wayner.dyndns.org. This resolves properly to my WAN IP address.

 

In the docker config I have a domain of dyndns.org and a subdomain of wayner. I have 'only subdomains' set to true, and validation is set to http.

 

But when I start the docker I get a challenge failed for wayner.dyndns.org.

 

 

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

What am I doing wrong? Do I have to manually create a certificate? I have other ports that are forwarded and they work.

9 hours ago, wayner said:

I used to use the Let's Enccrypt docker but I am having trouble with swag. [...]

wayner.dyndns.org goes in the domain field. You do not own dyndns.org.
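In other words, the container variables would look something like this (a sketch based on the post above; the full DynDNS hostname goes in URL, and SUBDOMAINS stays empty since you can't issue certs for subdomains of dyndns.org you don't control):

```shell
# SWAG environment variables for a single DynDNS hostname (sketch)
URL=wayner.dyndns.org    # the full hostname you control
SUBDOMAINS=              # empty - dyndns.org itself is not yours
ONLY_SUBDOMAINS=false
VALIDATION=http          # requires port 80 forwarded to the container
```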

14 hours ago, learnin2walk said:

As of last night, my swag/letsencrypt container can't retrieve new certs. It's getting stuck in the challenge phase and reporting it might be a firewall issue. [...]

Have you followed this troubleshooting guide? https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/


Can someone help me with this? I previously had Nextcloud working with Letsencrypt. I upgraded to SWAG and changed my local network numbering (went from 192.168.69.x to 192.168.1.x), and now I cannot get SWAG to route from my external domain to the local Nextcloud docker. I use a subfolder config. Here are what I think are the relevant details:

 

First the log error shows:

 

2020/12/15 19:40:58 [error] 451#451: *12 nextcloud could not be resolved (110: Operation timed out), client: 192.168.1.1, server: _, request: "GET /nextcloud/ HTTP/2.0", host: "x.net"

I have my Unifi security gateway mapping port 443 to the local network port 444 for SWAG:

 


 

I can reach my Nextcloud instance on the local network on port 1443:

 


 

I have used the default subfolder config in SWAG:

 

## Version 2020/12/09
# Assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#  'trusted_proxies' => ['swag'],
#  'overwritewebroot' => '/nextcloud',
#  'overwrite.cli.url' => 'https://your-domain.com/nextcloud',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#  array (
#    0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#    1 => 'your-domain.com',
#  ),

# Redirects for DAV clients
location = /.well-known/carddav {
    return 301 $scheme://$host/nextcloud/remote.php/dav;
}

location = /.well-known/caldav {
    return 301 $scheme://$host/nextcloud/remote.php/dav;
}

location /nextcloud {
    return 301 $scheme://$host/nextcloud/;
}

location ^~ /nextcloud/ {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app nextcloud;
    set $upstream_port 443;
    set $upstream_proto https;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    rewrite /nextcloud(.*) $1 break;
    proxy_max_temp_file_size 2048m;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    proxy_redirect off;
    proxy_ssl_session_reuse off;
}

Naturally, I appropriately renamed it to nextcloud.subfolder.conf

I also edited the Nextcloud config as recommended in the nextcloud.subfolder.conf:

 

<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'datadirectory' => '/data',
  'instanceid' => 'x',
  'passwordsalt' => 'x',
  'secret' => 'x',
  'trusted_domains' => 
  array (
    0 => '192.168.1.99:1443',
    1 => 'www.x.net',
    2 => 'x.net',
  ),
  'trusted_proxies' => 
  array (
    0 => 'swag',
  ),
  'overwritewebroot' => '/nextcloud',
  'overwrite.cli.url' => 'https://x.net/nextcloud',
  'dbtype' => 'mysql',
  'version' => '18.0.6.0',
  'dbname' => 'nextcloud',
  'dbhost' => '192.168.1.99:3306',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'x',
  'dbpassword' => 'x',
  'installed' => true,
  'maintenance' => false,
  'theme' => '',
  'loglevel' => 0,
  'mysql.utf8mb4' => true,
);

In the browser, I keep getting "502 Bad Gateway"

 

 The Nextcloud error log shows this:

 

2020/12/15 19:41:53 [error] 374#374: *156 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.69, server: _, request: "GET /nextcloud/ocs/v2.php/apps/text/workspace?path=%2F HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.99:1443", referrer: "https://192.168.1.99:1443/nextcloud/index.php/apps/files/"

 

I'm starting to lose my mind. Greatly appreciate any assistance. Happy Holidays and thanks!

8 hours ago, commander-flatus said:

Can someone help me with this? Previously had Nextcloud working with Letsencrypt. Upgraded to SWAG and changed my local network numbering (went from 192.168.69.x to 192.168.1.x) and now I cannot get SWAG to get from my external domain to the local Nextcloud docker. [...]

Change the upstream port from 443 to 1443 in the proxy conf.

Our confs are based on using a custom bridge and talking internally on that bridge, but you are using the default bridge, so you have to set the port to the one you used in the Nextcloud template.

25 minutes ago, saarg said:

Change the upstream port from 443 to 1443 in the proxy-conf. [...]

Thank you for taking time to reply to my question.

I tried that. I still get a 502 error when accessing via my custom domain. My SWAG nginx log shows:

2020/12/16 04:38:19 [error] 455#455: *1 nextcloud could not be resolved (110: Operation timed out), client: 192.168.1.1, server: _, request: "GET /nextcloud/ HTTP/2.0", host: "x.net"

My nextcloud nginx error log shows:

2020/12/16 04:38:37 [error] 375#375: *23 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.69, server: _, request: "GET /nextcloud/ocs/v2.php/apps/text/workspace?path=%2F HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.99:1443", referrer: "https://192.168.1.99:1443/nextcloud/index.php/apps/files/"

 

1 hour ago, commander-flatus said:

Thank you for taking time to reply to my question. I tried that. I still get a 502 error when accessing via my custom domian. [...]

502 means swag can't connect to nextcloud.

You have added your domain to two array entries. Try removing the www.x.net.

How are you accessing the URL? Inside your network or from the outside?

Link to comment
1 hour ago, saarg said:

502 means swag can't connect to nextcloud.

You have added your domain to two array entries. Try removing the www.x.net.

How are you accessing the URL? Inside your network or from the outside?

I removed www.x.net. I still get 502 both inside the network and from the outside when using x.net/nextcloud.

I can access it fine from inside the network on 192.168.1.99:1443/nextcloud

 

My ISP does not block ports.

 

My nginx log from swag:

 

2020/12/16 07:13:20 [error] 455#455: *211 nextcloud could not be resolved (110: Operation timed out), client: 174.235.137.23, server: _, request: "GET /nextcloud/ HTTP/2.0", host: "polydipsia.net"
2020/12/16 07:35:05 [error] 455#455: *276 nextcloud could not be resolved (110: Operation timed out), client: 192.168.1.1, server: _, request: "GET /nextcloud/status.php HTTP/1.1", host: "polydipsia.net:443"
2020/12/16 07:36:49 [error] 455#455: *309 nextcloud could not be resolved (110: Operation timed out), client: 192.168.1.1, server: _, request: "GET /nextcloud/status.php HTTP/1.1", host: "polydipsia.net:443"

I'm open to reinstalling nextcloud, there's nothing special in the container. I'm concerned there's something wonky in the settings. I'm running the latest version (19, I believe). Can you point me to a tutorial on configuring networking between SWAG and nextcloud using SWAG's custom network?

 

This Unraid server has been up for years and I only do light maintenance. From my reading, the default bridge network isn't really recommended anymore (I believe it was at the time).

 

Once again, thank you for the assistance.

 

Edited by commander-flatus
Link to comment
On 12/15/2020 at 4:10 AM, saarg said:

I did, thanks. I enabled cloudflare dns validation and was able to get past the errors, but still can't access my apps. This is what the nginx log shows: 

192.168.5.1 is my pfsense box.

 

Anything else that I could check?  Like I said, I have not made any change in my pfsense or unraid config. I have rebooted both just in case to no avail. 

2020/12/16 15:08:49 [error] 472#472: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: cloud.*, request: "GET / HTTP/2.0", upstream: "https://172.19.0.4:1443/", host: "cloud.mydomain.com"
2020/12/16 15:08:50 [error] 472#472: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: cloud.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://172.19.0.4:1443/favicon.ico", host: "cloud.mydomain.com", referrer: "https://cloud.mydomain.com/"

 

Link to comment
39 minutes ago, learnin2walk said:

I did, thanks. I enabled cloudflare dns validation and was able to get past the errors, but still can't access my apps. This is what the nginx log shows: 

192.168.5.1 is my pfsense box.

 

Anything else that I could check?  Like I said, I have not made any change in my pfsense or unraid config. I have rebooted both just in case to no avail. 


2020/12/16 15:08:49 [error] 472#472: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: cloud.*, request: "GET / HTTP/2.0", upstream: "https://172.19.0.4:1443/", host: "cloud.mydomain.com"
2020/12/16 15:08:50 [error] 472#472: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: cloud.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://172.19.0.4:1443/favicon.ico", host: "cloud.mydomain.com", referrer: "https://cloud.mydomain.com/"

 

Nginx has nothing to do with the renewal process, so there's no point looking in those logs for an answer.

You have not said anything about the result of what you checked from the troubleshooting guide.

Link to comment
4 hours ago, saarg said:

Nginx has nothing to do with the renewal process, so there's no point looking in those logs for an answer.

You have not said anything about the result of what you checked from the troubleshooting guide.

Sorry, I ended up figuring it out. Cox blocks incoming port 80, so the http validation would not work. I switched to dns + cloudflare to bypass that. 
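For anyone else with an ISP that blocks port 80: the switch described above boils down to two template variables plus a credentials file. A minimal sketch (the environment variable names come from the SWAG image, the ini keys from the certbot cloudflare plugin; the token value is a placeholder):

```ini
; Container environment (Unraid template):
;   VALIDATION=dns
;   DNSPLUGIN=cloudflare
;
; /config/dns-conf/cloudflare.ini — fill in ONE auth method:
dns_cloudflare_api_token = YOUR_CLOUDFLARE_TOKEN
; or the older global key pair:
; dns_cloudflare_email = you@example.com
; dns_cloudflare_api_key = YOUR_GLOBAL_API_KEY
```

With DNS validation, certbot proves domain ownership by creating a TXT record via the Cloudflare API, so no inbound port 80 is ever needed.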

The only problem I have now is with the onlyoffice container not working via the domain. It works fine with the internal unraid_ip:port, but I get a "400 bad request" error if using the domain name. I'm using the proxy conf file from spaceinvader's video, which I'm attaching. Is there something wrong that jumps out? 

# only office doc server
server {
    listen 443 ssl;

    server_name opendoc.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_docs OnlyOfficeDocumentServer;
        proxy_pass https://$upstream_docs:443;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

 

Link to comment
8 hours ago, learnin2walk said:

Sorry, I ended up figuring it out. Cox blocks incoming port 80, so the http validation would not work. I switched to dns + cloudflare to bypass that. 

The only problem I have now is with the onlyoffice container not working via the domain. It works fine with the internal unraid_ip:port, but I get a "400 bad request" error if using the domain name. I'm using the proxy conf file from spaceinvader's video, which I'm attaching. Is there something wrong that jumps out? 


# only office doc server
server {
    listen 443 ssl;

    server_name opendoc.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_docs OnlyOfficeDocumentServer;
        proxy_pass https://$upstream_docs:443;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

 

Are you using a custom bridge?

The name of the container has to be all lowercase. Yours is OnlyOfficeDocumentServer, which will not work. Change the container name to all lowercase in both the proxy-conf and the container template.
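Applied to the conf posted above, the rename is just this (sketch, assuming the container is also renamed to onlyofficedocumentserver in the Unraid template so that the two stay in sync):

```nginx
# The upstream name is resolved by Docker's embedded DNS (the 127.0.0.11
# resolver already in the conf), so it must match the container name exactly:
resolver 127.0.0.11 valid=30s;
set $upstream_docs onlyofficedocumentserver;  # was OnlyOfficeDocumentServer
proxy_pass https://$upstream_docs:443;
```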

Link to comment
On 12/17/2020 at 5:43 AM, saarg said:

Are you using a custom bridge?

The name of the container has to be all lowercase. Yours is OnlyOfficeDocumentServer, which will not work. Change the container name to all lowercase in both the proxy-conf and the container template.

Yes, I'm using a custom bridge. I've changed the name to all lowercase but it still shows 400 bad request error. I also tried changing the port in the proxy_pass line to match the port mapped in the container (4430) but that gave me a 502 gateway error message. 

Are there any logs we can look at to see what's going on? 

edit: I looked at the nginx log and it shows this message: 

2020/12/18 07:34:27 [error] 469#469: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: opendoc.*, request: "GET / HTTP/2.0", upstream: "https://172.19.0.2:4430/", host: "opendoc.mydomain.com"
2020/12/18 07:34:27 [error] 469#469: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: opendoc.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://172.19.0.2:4430/favicon.ico", host: "opendoc.mydomain.com", referrer: "https://opendoc.mydomain.com/"

 

Edited by learnin2walk
Link to comment
15 minutes ago, learnin2walk said:

Yes, I'm using a custom bridge. I've changed the name to all lowercase but it still shows 400 bad request error. I also tried changing the port in the proxy_pass line to match the port mapped in the container (4430) but that gave me a 502 gateway error message. 

Are there any logs we can look at to see what's going on? 

edit: I looked at the nginx log and it shows this message: 


2020/12/18 07:34:27 [error] 469#469: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: opendoc.*, request: "GET / HTTP/2.0", upstream: "https://172.19.0.2:4430/", host: "opendoc.mydomain.com"
2020/12/18 07:34:27 [error] 469#469: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.5.1, server: opendoc.*, request: "GET /favicon.ico HTTP/2.0", upstream: "https://172.19.0.2:4430/favicon.ico", host: "opendoc.mydomain.com", referrer: "https://opendoc.mydomain.com/"

 

If you use a custom bridge, you use the container port, not the port you mapped on the host side.

As for the 400 bad request, I don't know what it could be.
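The port distinction looks like this in the proxy_pass line — a sketch, assuming OnlyOffice listens on 443 inside the container and the template maps it to 4430 on the host:

```nginx
# On a custom bridge, SWAG connects to the container directly,
# so use the container-internal port:
proxy_pass https://$upstream_docs:443;

# The host-mapped port (4430) only applies when proxying via the
# host's LAN IP, e.g. on the default bridge:
# proxy_pass https://192.168.x.x:4430;
```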

Link to comment
