[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


I have run into a problem where it seems that letsencrypt or nginx is not forwarding the subdomains to the proper ports. Every time I enter the subdomain as written, it redirects to my Unraid Tower GUI. Here is my proxy-conf file for Ombi. I can't seem to figure it out, but my brain is also fried from struggling with my ISP and consumer modems.

 

# make sure that your dns has a cname set for ombi and that your ombi container is not using a base url
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name ombi.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_ombi ombi;
        proxy_pass http://$upstream_ombi:3579;
    }

    # This allows access to the actual api
    location ~ (/ombi)?/api {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_ombi ombi;
        proxy_pass http://$upstream_ombi:3579;
   }

    # This allows access to the documentation for the api
    location ~ (/ombi)?/swagger {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_ombi ombi;
        proxy_pass http://$upstream_ombi:3579;
   }
   if ($http_referer ~* /ombi) {
       rewrite ^/swagger/(.*) /ombi/swagger/$1? redirect;
   }
}

 

Link to post

Hi All,

 

Been trying this morning to get LetsEncrypt working but am getting stuck on getting a certificate issued: "Timeout during connect"

 

-------------------------------------
[linuxserver.io ASCII banner]

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Australia/Brisbane
URL=servebeer.com
SUBDOMAINS=darkremote
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=brandanweeks@gmail.com
STAGING=

2048 bit DH parameters present
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d darkremote.servebeer.com
E-mail address entered: brandanweeks@gmail.com
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for darkremote.servebeer.com
Waiting for verification...
Challenge failed for domain darkremote.servebeer.com
http-01 challenge for darkremote.servebeer.com
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: darkremote.servebeer.com
Type: connection
Detail: Fetching
http://darkremote.servebeer.com/.well-known/acme-challenge/rqRuSRoXiqrXg9GmbfHo9h0gi8LmYjL2PHgp_rtZ1Qk:
Timeout during connect (likely firewall problem)

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

Here is my port forwarding:

 

[screenshot: router port-forwarding rules]

 

And here is my docker run command:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='letsencrypt' --net='bridge' --privileged=true -e TZ="Australia/Brisbane" -e HOST_OS="Unraid" -e 'EMAIL'='********@gmail.com' -e 'URL'='REDACTED' -e 'SUBDOMAINS'='REDACTED' -e 'ONLY_SUBDOMAINS'='true' -e 'DHLEVEL'='2048' -e 'VALIDATION'='http' -e 'DNSPLUGIN'='' -e 'PUID'='99' -e 'PGID'='100' -p '180:80/tcp' -p '1444:443/tcp' -v '/mnt/user/appdata/letsencrypt':'/config':'rw' -v '/mnt/user':'/unraid':'rw' 'linuxserver/letsencrypt' 

51200ee6003878292897acde5133facdc76a0d6cc84fb6e79e263256cfe56857

 

Now, I've read through threads etc and tried the following to no avail:

 

  • Directed port 80 at another service to attempt to connect / confirm no ISP blocking - all good
  • Tried setting container share to /mnt/disk1 
  • Reinstalled the latest docker / app from CA for this about 6 times fresh

One thing I wanted to check... Should I be able to access the NGINX landing page, regardless of whether the cert is installed, at HTTP port 180 / HTTPS port 1444 on the Unraid server's IP? Because I can't get to it locally or externally. I only ever get 'Connection Refused'.

 

Is it possible that the docker isn't responding to HTTP requests? How would I check?
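One low-level way to check, from another machine on the LAN, is a raw TCP connect against the mapped ports. A minimal sketch, assuming bash and coreutils' timeout; 192.168.1.10 is a placeholder for the Unraid server's IP:

```shell
# Try a plain TCP connect via bash's /dev/tcp pseudo-device.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed or filtered"
  fi
}

check_port 192.168.1.10 180    # mapped container port 80
check_port 192.168.1.10 1444   # mapped container port 443
```

'Connection refused' (reported as closed here) means nothing is listening on that port at all, which points at the port mapping or the container rather than the certificate.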

 

Sorry, I've tried everything I can think of and tried to find people with the same issue but a lot of them haven't resolved the issue or never responded if it was fixed 😕

 

Thanks in advance!

 

Edit: This is working now. I decided to call my ISP anyway to at least see if they could see anything trying to connect. Turns out port 80 / 443 was blocked.

 

I assumed it wasn't, as I was able to remotely connect over port 80 to other services. They said it could have been hairpin NAT on my router making those tests appear to work, basically working it out for me.

 

As a general lesson I guess - always call your ISP FIRST to make sure that those ports are going to be open on their side before you go any further.

Edited by Brandan
Updated resolution
Link to post
11 hours ago, BabyPandaSteak said:

I have run into a problem where it seems that letsencrypt or nginx are not forwarding the subdomains to the proper ports. [...]

Check your port forwarding

Link to post

I think there is an issue with letsencrypt/nginx. If I port forward directly to the related docker (e.g., ombi), it will load correctly to the ombi landing page. I have my Letsencrypt docker Container: 80 set to 180 and 443 set to 1443 but forwarding to 180 results in cloudflare error 521 "Web server is down". If I set to listen to 80 via:

server {
	listen 80;
	listen [::]:80;
	server_name _;
	return 301 https://$host$request_uri;
}

it returns an ERR_TOO_MANY_REDIRECTS error.

 

I am probably making a fool's mistake here as I am very new to a lot of this material.

Link to post
56 minutes ago, BabyPandaSteak said:

I think there is an issue with letsencrypt/nginx. If I port forward directly to the related docker (e.g., ombi), it will load correctly to the ombi landing page. [...]

Turn cloudflare proxy off. Click on the orange cloud and make sure it's gray
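When you hit ERR_TOO_MANY_REDIRECTS like the post above, the redirect chain can be traced with curl to see which hop keeps bouncing. A sketch, assuming curl is available; example.com stands in for the real domain:

```shell
# Print each hop's status line and Location header, stopping after 5 redirects.
trace_redirects() {
  curl -sSIL --max-redirs 5 "$1" | grep -iE '^(HTTP/|location)'
}

# Placeholder usage:
# trace_redirects http://example.com/
```

A loop shows the same Location header repeating; with Cloudflare's Flexible SSL mode that is typically Cloudflare fetching the origin over HTTP while the origin 301s everything back to HTTPS.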

Link to post

Hi everyone. I had all my reverse proxying set up and working nicely with my old Asus RT-N66U router. But recently due to an ISP switch to gigabit connection, I now have to use a Smart / RG SR515AC router.

 

I don't think this router supports NAT reflection or whatever it was that allowed me to access my-domain.net from within the home network.

 

The reverse proxy works perfectly fine and my-domain.net works when I'm outside the network.

 

I've spent three days on getting the router to do it but I might be SOL. And I've come here to find out if there are other options.

 

Is there something I can do on the Nginx side or anything else that doesn't involve the router? I've read about editing my HOST file or running my own DNS server but seems overkill ...
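For reference, the hosts-file route mentioned above is one line per client machine. A sketch; 192.168.1.10 is a placeholder for the server's LAN IP and my-domain.net for the real domain:

```text
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts
192.168.1.10   my-domain.net  ombi.my-domain.net
```

The catch is that it has to be repeated on every device, which is why a static DNS entry in the router (or a small local DNS server) is usually the less painful option.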

Link to post
Why do you have to use that? Why not spin up a pfsense box or buy your own router that can handle gigabit speeds.

Sent from my SM-N960U using Tapatalk

Link to post
4 hours ago, ijuarez said:

Why do you have to use that? Why not spin up a pfsense box or buy your own router that can handle gigabit speeds.

Sent from my SM-N960U using Tapatalk
 

Unfortunately I'm contracted to rent the router from the ISP. I've tried my previous Asus with the connection and I think it's too old to handle the speed. So I'm trying to make the best of the situation since I'm stuck with it and I can't afford a new router yet.

 

Someone on SNBFORUMS suggested a static DNS entry in the router which I did and it works now.

Edited by vurt
Link to post
Good, great to hear. That stinks that they force you to rent their router.

Sent from my SM-N960U using Tapatalk

Link to post

Hi again, I have something different I want to do but not sure how...

I'm trying to use Nginx to login to my Hikvision NVR using HTTPS, not a docker so a little different...

I created the sub domain name, successfully added it in Letsencrypt and for nginx, what I did (and I'm really not sure this is the correct way) is to create a file named "nvr" and added this in it:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /config/www;
    index index.html index.htm index.php;

    server_name nvr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.149:88;
    }
} 

When I restart everything and test I get

403 Forbidden

nginx/1.14.2

 

and in logs, I get:

2019/03/16 21:18:10 [error] 358#358: *1 open() "/config/nginx/.htpasswd" failed (2: No such file or directory), client: 192.168.1.254, server: nvr.*, request: "GET / HTTP/2.0", host: "nvr.mydomain.com"
2019/03/16 21:18:10 [error] 358#358: *1 open() "/config/nginx/.htpasswd" failed (2: No such file or directory), client: 192.168.1.254, server: nvr.*, request: "GET /favicon.ico HTTP/2.0", host: "nvr.sfnetwork.ca", referrer: "https://nvr.mydomain.com/"

Anything obvious to resolve this? Maybe I'm not doing this the correct way too...

Thanks in advance...

Link to post
3 minutes ago, sfnetwork said:

Hi again, I have something different I want to do but not sure how... I'm trying to use Nginx to log in to my Hikvision NVR using HTTPS. [...]

Did you create an .htpasswd file? The command to do so is in the image readme linked in the first post

Link to post
4 minutes ago, aptalca said:

Did you create an .htpasswd file? The command to do so is in the image readme linked in the first post

No, not at all... Is that to protect the site or if the site has a password itself? Not optional?

Looked optional "If you'd like to password protect your sites, you can use htpasswd."

Edited by sfnetwork
Link to post
4 hours ago, sfnetwork said:

No, not at all... Is that to protect the site or if the site has a password itself? Not optional?

Looked optional "If you'd like to password protect your sites, you can use htpasswd."

Yeah, it's optional for default config but you specifically turned it on when you included the following two lines in your conf:

auth_basic "Restricted";
auth_basic_user_file /config/nginx/.htpasswd;

 

Either create the .htpasswd or comment those lines out. The error you're getting specifically says that nginx can't find the .htpasswd file
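For reference, the readme's approach runs htpasswd inside the container; an equivalent entry can also be generated with openssl on any host. A sketch; the container name 'letsencrypt' and user 'myuser' are placeholders:

```shell
# Inside the container (name matches the docker run earlier in the thread):
# docker exec -it letsencrypt htpasswd -c /config/nginx/.htpasswd myuser

# Equivalent entry via openssl (-apr1 is the htpasswd-compatible MD5 scheme):
printf 'myuser:%s\n' "$(openssl passwd -apr1 'changeme')" > .htpasswd
cat .htpasswd
```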

Link to post
6 hours ago, ijuarez said:

Good, great to hear.

Sent from my SM-N960U using Tapatalk
 

I spoke too soon. The static DNS entry only works when I'm hardwired to the router. When I'm on WiFi, I run into the same problem again, router directing port 80 calls to its admin page instead of my webserver. 😕

Link to post
6 hours ago, aptalca said:

Yeah, it's optional for the default config but you specifically turned it on when you included the auth_basic lines in your conf. [...]

OK, it works when I set the password, but if I comment it out or delete its content, the https URL still prompts me for credentials... any way to bypass it?

Link to post
6 minutes ago, sfnetwork said:

OK, it works when I set the password, but if I comment it out or delete its content, the https URL still prompts me for credentials... any way to bypass it?

OK, I got it, I had to comment out the auth_basic lines. It works perfectly now! :)

 

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /config/www;
    index index.html index.htm index.php;

    server_name nvr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
#        auth_basic "Restricted";
#        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.149:88;
    }
} 
Link to post

edit// The proxy changes propagated and now it loads.

 

I have my namecheap domain pointed at cloudflare via dns. I pass the dns challenge via cloudflare fine.

 

On Cloudflare, is it correct to have an A record with my domain pointing to my home IP address?

Then I have CNAME records pointing www to my domain, sonarr to my domain, etc.?
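In zone-file terms, the layout described above would look roughly like this (203.0.113.7 is a documentation-range placeholder for the home IP, my-domain.net for the real domain):

```text
; apex A record plus CNAMEs for the subdomains
@        IN  A      203.0.113.7
www      IN  CNAME  my-domain.net.
sonarr   IN  CNAME  my-domain.net.
```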

[screenshot: Cloudflare DNS records]

 

When I have port 80 disabled in the default config I get error 521 web server is down.

When I have it enabled by removing the # on lines 5-10 I then get err_too_many_redirects.

 

I have cloudflare proxy turned off, but it's only been about 30 minutes. How long does it take for that to all clear out?

 

I am able to punch in my home ip address /radarr for example and that does load for me.

 

I'm also trying with subdomains and subfolders with the same results.

 

[screenshots]

 

Edited by munit85
Link to post
You need DDNS so Namecheap knows your IP

Sent from my SM-N960U using Tapatalk

Link to post
6 minutes ago, ijuarez said:

I guess it should, so you have a static IP?

Sent from my SM-N960U using Tapatalk
 

My ISP doesn't really change mine much. Maybe because I'm on fiber? Not sure.

 

I just tried to load things again and it seems to be working now. I think turning off the cloudflare proxy finally propagated and now it works.

Link to post

That was going to be my next suggestion. I use it since all of my sites have their own Comodo SSL certs; I still run LE.

Glad you got it working

 

Sent from my SM-N960U using Tapatalk

 

 

 

Link to post
On 3/11/2019 at 6:23 AM, sfnetwork said:

OMG I finally found the issue!!!

It's about CloudFlare CNAME records...

I had to disable the traffic going through cloudflare:

[screenshot: Cloudflare DNS with the orange-cloud proxy disabled]

 

Now EVERYTHING works perfectly....
Hope this can help someone else and avoid losing so much time lol

This was my problem as well. I don't know why, but once the changes propagated everything worked. 

 

thanks.

Link to post
