[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



Hey guys,

 

I'm setting up this container from scratch now instead of moving my old Letsencrypt folder. The readme in the proxy-confs folder says this:

 

- If you are using unraid, create a custom network in command line via `docker network create [networkname]`, then go to docker service settings (under advanced) and set the option `Preserve user defined networks:` to `Yes`. Then in each container setting, including the swag container, in the network type dropdown, select `Custom : [networkname]`.  This is a necessary step as the bridge network that unraid uses by default does not allow container to container communication.
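For reference, the command-line side of those steps looks roughly like this (the network name `proxynet` is just an example):

```shell
# Create a user-defined bridge network (the name is an example)
docker network create proxynet

# List networks to confirm it exists
docker network ls

# Later, confirm which containers have joined it
docker network inspect proxynet
```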

 

And I've done that. Yet when I try the default 

set $upstream_app plex;
set $upstream_port 32400;

 

It doesn't work (502 Bad Gateway). I have to edit it to use either the docker IP and container port:

set $upstream_app 172.18.0.2;
set $upstream_port 32400;

 

or use the host IP and host port:

set $upstream_app 192.168.1.7;
set $upstream_port [HOST PORT];

 

 

As far as I can tell, the first example working means container-to-container communication is possible, which means the docker network setup has been successful. I don't understand why the DNS lookup from the sample config doesn't work.

 

I'd like to get this working before setting it up on all the other containers too. It would be sweet if I didn't have to hard-code the IPs/ports, as I think using the names is more robust.
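For context on where that DNS lookup happens: the sample confs include /config/nginx/resolver.conf, which is supposed to point nginx at Docker's embedded DNS. A typical generated resolver.conf looks something like this (illustrative; the exact contents on your install may differ):

```nginx
# resolver.conf is generated by SWAG at startup from the container's
# /etc/resolv.conf; on a user-defined Docker network this is normally
# Docker's embedded DNS at 127.0.0.11
resolver 127.0.0.11 valid=30s;
```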

22 minutes ago, alturismo said:

but maybe post a screenshot like this of your docker(s) around swag, so we can see how it looks.

 

 

Here's an example with tautulli:

[screenshot of the relevant docker container settings]

 

domain.com/tautulli works if tautulli.subfolder.conf looks like this:

set $upstream_app 172.18.0.10;
set $upstream_port 8181;
set $upstream_proto http;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;

 

But if I have it like this, I just get a 404 on domain.com/tautulli:

set $upstream_app tautulli;
set $upstream_port 8181;
set $upstream_proto http;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;

 

As you can see, I'm still able to ping tautulli from my swag container, and it resolves to the correct IP, so I'm not really sure what's going on here.
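For completeness, these are the sort of checks I mean (assuming the container is named `swag`):

```shell
# Resolve and ping the tautulli container from inside swag
docker exec -it swag ping -c 1 tautulli

# Ask Docker's embedded DNS directly
docker exec -it swag nslookup tautulli 127.0.0.11
```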

 

8 minutes ago, Fredrick said:

As you can see, I'm still able to ping tautulli from my swag container, and it resolves to the correct IP, so I'm not really sure what's going on here.

 

True... it all looks fine from here I'd say, as name resolution and pinging are working as expected.

 

As a last idea, I would say use

docker network create proxynet

 

without any special characters like the underscore in your network name. I still can't imagine that would be it, but it may be worth a try. Of course, change the networks via the dropdown in the docker settings to proxynet (or any other name without special characters).

Link to comment
1 hour ago, alturismo said:

without any special characters like the underscore in your network name. I still can't imagine that would be it, but it may be worth a try. Of course, change the networks via the dropdown in the docker settings to proxynet (or any other name without special characters).

 

I've tried this too now, with no luck unfortunately. I still need to hard-code the IPs or it will just give me a 404.

 

I've also tried restarting my server just to make sure. 

42 minutes ago, Fredrick said:

I've also tried restarting my server just to make sure. 

 

Sorry, then I'm out of ideas. I personally don't use this setup anymore; I only use custom:br0 with individual IPs per docker. But you are right, it should work with names... it has always worked here too, also in the custom:br0 setup...

 

Maybe some swag pro can chime in ;)

2 hours ago, sonic6 said:

@Fredrick can you post your tautulli.subfolder.conf?

Here it is:

## Version 2021/05/18
# first go into tautulli settings, under "Web Interface", click on show advanced, set the HTTP root to /tautulli and restart the tautulli container

location ^~ /tautulli {
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app tautulli;
    set $upstream_port 8181;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

}

location ^~ /tautulli/api {
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app tautulli;
    set $upstream_port 8181;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

}

location ^~ /tautulli/newsletter {
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app tautulli;
    set $upstream_port 8181;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

}

location ^~ /tautulli/image {
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app tautulli;
    set $upstream_port 8181;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

}

 

/tautulli has been set as the base URL as per the instructions on the second line. This problem is not isolated to tautulli either; the same goes for Sonarr, Overseerr and Organizr, which I've also tested.

 

error.log from nginx:

2022/01/04 20:11:36 [error] 517#517: *2532 tautulli could not be resolved (3: Host not found), client: 192.168.1.1, server: domain.com, request: "GET /tautulli HTTP/2.0", host: "domain.com"

 

I have also verified with docker network inspect proxynet that the containers resolve on the same network. It's not marked as default and ICC is enabled.
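Something else that may be worth checking (an assumption on my part, not a confirmed cause): whether the resolver SWAG generated actually matches the DNS server Docker gives the container:

```shell
# The resolver nginx uses for variable-based proxy_pass lookups
docker exec swag cat /config/nginx/resolver.conf

# The DNS server Docker gives the container (used by ping/libc)
docker exec swag cat /etc/resolv.conf
```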

 

My site-confs/default too, for reference. I've commented out the location / block as per this guide:

## Version 2021/04/27 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/default

error_page 502 /502.html;

# redirect all traffic to https
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

# main server block
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name domain.com;

    # enable subfolder method reverse proxy confs
    include /config/nginx/proxy-confs/*.subfolder.conf;

    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    #location / {
    #    try_files $uri $uri/ /index.html /index.php?$args =404;
    #}

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

}

# enable subdomain method reverse proxy confs
include /config/nginx/proxy-confs/*.subdomain.conf;
# enable proxy cache for auth
proxy_cache_path cache/ keys_zone=auth_cache:10m;
4 minutes ago, Fredrick said:

Here it is:

my sample looks a bit different (this is the subdomain version):

 

## Version 2021/05/18
# make sure that your dns has a cname set for tautulli and that your tautulli container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name tautulli.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app tautulli;
        set $upstream_port 8181;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/tautulli)?/api {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app tautulli;
        set $upstream_port 8181;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/tautulli)?/newsletter {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app tautulli;
        set $upstream_port 8181;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/tautulli)?/image {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app tautulli;
        set $upstream_port 8181;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

}

 


I have found more weird stuff. I changed this block in the subfolder.conf: I commented out the variable assignments and instead hard-coded the values into the proxy_pass line.

location ^~ /tautulli {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /ldaplogin;

    # enable for Authelia, also enable authelia-server.conf in the default site config
    #include /config/nginx/authelia-location.conf;

    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    #set $upstream_app tautulli;
    #set $upstream_port 8181;
    #set $upstream_proto http;
    proxy_pass http://tautulli:8181;

}

 

And this works! Why, why why does this work? 
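A plausible explanation, and it matches documented nginx behavior, though I'd verify it for this image: with a literal hostname, nginx resolves the name once at config load using the system resolver (inside the container, Docker's DNS), and the `resolver` directive is never consulted. With variables, nginx defers the lookup to request time and uses the `resolver` directive instead. So if resolver.conf points at a DNS server that can't see container names, only the variable form fails:

```nginx
# Literal hostname: resolved once at startup via the container's
# /etc/resolv.conf (Docker's embedded DNS) - works
proxy_pass http://tautulli:8181;

# Variable form: resolved at request time via the 'resolver' directive -
# logs "could not be resolved (3: Host not found)" if that resolver
# cannot resolve Docker container names
set $upstream_app tautulli;
proxy_pass http://$upstream_app:8181;
```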

11 minutes ago, Fredrick said:

And this works! Why, why why does this work? 

Great to hear. My SWAG predates these variables and I never used them, so I don't know whether they normally work without problems.

 

Maybe a wrong editor or encoding?


Hey guys. I followed SpaceInvaderOne's guide to installing nextcloud and swag. I have my own domain name via Cloudflare, and all of that is set up per his instructions. My unraid is behind a pfSense firewall, and everything is configured well there. I can access the swag startup page from my public IP, but NOT from my domain; I get a Cloudflare 522. Anyone know how I can fix this?

16 hours ago, toxicrevenger said:

Hey guys. Followed SpaceInvaderOne's guide to installing nextcloud and swag. Have my own domain name via cloudflare and all that is setup per his instructions. My unraid is behind a pfsense firewall, and everything is configured well there. I can access the swag startup page from my public IP, but NOT from my domain. I get a cloudflare 522. Anyone know how I can fix this?

I had similar issues, and was able to resolve them by moving my Cloudflare SSL/TLS setting from Flexible to Full (Strict). That said, I had already configured the subdomain for Nextcloud, and SWAG was configured to use Cloudflare DNS to validate the domain and create SSL/TLS certificates. Give that a shot and see if it clears anything up for you. You may also need to use the "purge cache" option in Cloudflare under Overview.


Hi!

 

I'm having problems renewing my certificates (they expire tomorrow) and I'm hoping I can get some help from you.

In my router I have port forwards for 80 and 443 to ports 180 and 1443 on my unraid server.

My SWAG container has HTTP as 180 and HTTPS as 1443.
External access to my dockers works (at least over https; I don't have anything to access over http).

If I try to browse http://domain.io I get redirected to https://domain.io
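If I understand my own setup correctly, the chain for the HTTP challenge should be:

```
Internet :80  -> router -> unraid :180  -> SWAG container :80
Internet :443 -> router -> unraid :1443 -> SWAG container :443
```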

 

This is what the log says:

 


Renewing an existing certificate for domain.io and 2 more domains

Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: a.domain.io
Type: connection
Detail: Fetching http://a.domain.io/.well-known/acme-challenge/XjpwrDrMtR1f43_cIMGKxKvBYjOsdeB4sHyPD2omvJM: Timeout during connect (likely firewall problem)

Domain: domain.io
Type: connection
Detail: Fetching http://domain.io/.well-known/acme-challenge/b1X2c6qobCn94AmrVvNsDy-EjV-IdiaLUK2kxg3xJDs: Timeout during connect (likely firewall problem)

Domain: b.domain.io
Type: connection
Detail: Fetching http://b.domain.io/.well-known/acme-challenge/2MOMj6EHRpPNpM4QhApiQV-vWXXUi_nZv1_hULvgLWc: Timeout during connect (likely firewall problem)

Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.

Failed to renew certificate domain.io with error: Some challenges have failed.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
All renewals failed. The following certificates could not be renewed:

 

Any suggestions?

 

EDIT: Well, it was all my fault, as usual.

My firewall constantly complains about intrusion attempts, so I geoblock countries.
Let's Encrypt couldn't fetch the challenge because I had blocked the USA.


So I'm struggling right now.
I followed spaceinvader's guide to set up nextcloud with swag last night, and had everything working as it should.
I then decided to try to add a second subdomain for qbit, and something borked completely!
Now nothing is working.
I have port 80 set to 180 in unraid and port 80 forwarded to 180 on my router; 443 set to 1443 and 443 forwarded to 1443 on the router.

At the moment I can't even access it on my local LAN. 192.168.x.x:180 times out, and 192.168.x.x:1443 times out. I've tried removing swag and doing a totally clean install; still the same.
Everything is configured right, but nothing is working.
If I click on the docker and try to open the webui, it times out.
The setup right now is word for word after spaceinvader's guide, and an open-port check says both 80 and 443 are open (not that it matters for the local network), and if I shut down swag the ports read as closed.


So SWAG has been working just fine on my server for several months, but today suddenly I can't access the two dockers I have that utilize it, Nextcloud and Ombi. Checking the log for SWAG, all I see is:

 

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Los_Angeles
URL=[REDACTED]
SUBDOMAINS=ombi,cloud
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
CERTPROVIDER=
DNSPLUGIN=
EMAIL=[REDACTED]
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d ombi.[REDACTED].com -d cloud.[REDACTED].com
E-mail address entered: [REDACTED]
http validation is selected
Certificate exists; parameters unchanged; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing...
**** The following nginx confs have different version dates than the defaults that are shipped. ****
**** This may be due to user customization or an update to the defaults. ****
**** To update them to the latest defaults shipped within the image, delete these files and restart the container. ****
**** If they are user customized, check the date version at the top and compare to the upstream changelog via the link. ****
/config/nginx/ssl.conf
/config/nginx/site-confs/default
/config/nginx/proxy.conf
/config/nginx/nginx.conf
/config/nginx/authelia-server.conf
/config/nginx/authelia-location.conf

**** The following reverse proxy confs have different version dates than the samples that are shipped. ****
**** This may be due to user customization or an update to the samples. ****
**** You should compare them to the samples in the same folder to make sure you have the latest updates. ****
/config/nginx/proxy-confs/ombi.subdomain.conf
/config/nginx/proxy-confs/nextcloud.subdomain.conf

[cont-init.d] 70-templates: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready

 

I don't know if those **** lines are the cause of my issue, or if it's something else. Can anyone offer some assistance?

 

UPDATE: I found the problem was my ISP had pushed an update to my gateway that broke port forwarding. I resolved the issue by removing the port forwarding from the static IP of my server, and instead applied it to the "friendly name" entry in the device list. This was a Pace 5268AC AT&T Fiber router, if that helps anyone looking at this in the future.

8 hours ago, alturismo said:

 

Most would ask now what happened when you tried it... but to make it short: yes ;)

That was easy. I just needed to use set $upstream_app 192.168.1.252; (whatever your internal IP is).


Then allow for the reverse proxy in Home Assistant by adding this to configuration.yaml:

 

http:
  use_x_forwarded_for: true
  trusted_proxies:
  - 192.168.1.79
  ip_ban_enabled: true
  login_attempts_threshold: 5

 

1 hour ago, Trenta27 said:

Anyone get the SWAG dashboard mod to work? It says that it is working just fine but I'm unable to access it from the "https://dashboard.xxxxxxxx.com" domain on my LAN.

 

https://www.linuxserver.io/blog/introducing-swag-dashboard

You need local DNS, or to edit the name resolution on your device, so that "dashboard.YOURDOMAIN.com" resolves to your external IP, or (if SWAG uses host ports 80/443) to your unraid server.
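As a quick way to test that on a single client, a hosts-file entry works (the IP and domain here are examples):

```
# /etc/hosts on Linux/macOS, or
# C:\Windows\System32\drivers\etc\hosts on Windows
192.168.1.7   dashboard.yourdomain.com
```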

