[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

It seems like it, but I'm not. If I stop Letsencrypt, none of my sites work. If I start it up, they start working again while still throwing that error.

 

I thought it might be because of some weird behavior with subdomains listening on the same port, but that is not the case: I switched all the ports around so each listen directive is different, and that just throws errors for each port I specified.

Edited by phiyuku
Link to comment
16 hours ago, phiyuku said:

It seems like it, but I'm not. If I stop Letsencrypt, none of my sites work. If I start it up, they start working again while still throwing that error.

 

I thought it might be because of some weird behavior with subdomains listening on the same port, but that is not the case: I switched all the ports around so each listen directive is different, and that just throws errors for each port I specified.

 

Did you change the nginx.conf to make it run as a daemon? That would be the reason. Set it to "daemon off;" 
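For reference, here's a sketch of how the top of the container's /config/nginx/nginx.conf typically begins (the surrounding directives are taken from a config pasted later in this thread; exact contents may differ between versions):

```nginx
# nginx must run in the foreground inside the container;
# if it daemonizes, the container's init thinks the service exited
daemon off;

user abc;
worker_processes 4;
pid /run/nginx.pid;
```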

Link to comment
On 11/2/2017 at 8:29 AM, aptalca said:

 

Did you change the nginx.conf to make it run as a daemon? That would be the reason. Set it to "daemon off;" 

 

That did it, thank you. There actually wasn't any daemon directive, but I explicitly set daemon off and that fixed it.

Edited by phiyuku
Link to comment

Hello, 

 

I have been trying to set up my letsencrypt docker following this guide: https://cyanlabs.net/tutorials/the-complete-unraid-reverse-proxy-duck-dns-dynamic-dns-and-letsencrypt-guide/

I get the following error from the docker when I run it. 

 

 

Failed authorization procedure. DOMAIN.duckdns.org (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Error getting validation data
IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: DOMAIN.duckdns.org
Type: connection
Detail: Error getting validation data

 

I have also tried a noip dynamic DNS, but I received the same error. I also disabled my firewall to check whether that was the problem, but I received the same error.

 

Ideally, what I want to do is use the domain I bought, but I bought my domain through Squarespace and have no idea how to do that.

 

Any advice in this situation would be appreciated. 

Link to comment
On 12/11/2016 at 5:11 PM, CHBMB said:

Here's what I got in nginx

 

 


    location /plexpy/ {
        proxy_pass http://192.168.0.1:8181;
        include /config/nginx/proxy.conf;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
        auth_basic "Restricted";
        auth_basic_user_file /config/.htpasswd;
    }
 

 

 

Here's what I got in Plexpy

 

[screenshot of the PlexPy web interface settings]

And in my plexpy docker log

 

 


2016-12-11 15:25:11 - INFO :: MainThread : PlexPy WebStart :: Starting PlexPy web server on http://0.0.0.0:8181/plexpy/
 

 

 

I've used this exact configuration and it is working for me from the external world using my domain etc. However, when configured this way I can no longer access the webui from my local LAN; I end up getting a 404 error. Any ideas?

Link to comment
 
I've used this exact configuration and it is working for me from the external world using my domain etc. However, when configured this way I can no longer access the webui from my local LAN; I end up getting a 404 error. Any ideas?
Add /plexpy to the address you use on your local LAN (e.g. http://192.168.0.1:8181/plexpy/)

Sent from my SGH-I337M using Tapatalk

Link to comment

I just recently switched to the latest RC from the stable build and now my container is getting the following error:

 

/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint letsencrypt (fc2e171b205d381282bc2fe1064943f0a4b0947cb88a04cd12dc693a55c02a90): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use.

 

Any ideas?

Link to comment
19 minutes ago, Diode663 said:

I just recently switched to the latest RC from the stable build and now my container is getting the following error:

 

/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint letsencrypt (fc2e171b205d381282bc2fe1064943f0a4b0947cb88a04cd12dc693a55c02a90): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use.

 

Any ideas?

 

The Unraid GUI uses 443, so the letsencrypt container cannot bind to it. You can disable Unraid's https in settings.
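An alternative, if you'd rather keep Unraid's https enabled, is to map a different host port to the container's internal 443 and forward external 443 to that port on your router. A hypothetical sketch of the mapping (container name and host ports are examples, not taken from this thread):

```shell
# host port 444 -> container port 443;
# on the router, forward WAN 443 to unraid-ip:444
docker run -d --name=letsencrypt -p 81:80 -p 444:443 linuxserver/letsencrypt
```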

  • Upvote 1
Link to comment

Well shit. I tried disabling https in Unraid to no effect, so I switched the https port to 445 and restarted, and now I cannot get into the dashboard. But I can see all of my shares, and letsencrypt does seem to be working, so I guess that's a bonus. Is there a way to undo this from the terminal?

Link to comment
On 12/11/2016 at 2:11 PM, CHBMB said:

Here's what I got in nginx

 

 


    location /plexpy/ {
        proxy_pass http://192.168.0.1:8181;
        include /config/nginx/proxy.conf;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
        auth_basic "Restricted";
        auth_basic_user_file /config/.htpasswd;
    }
 

 

 

Here's what I got in Plexpy

 

[screenshot of the PlexPy web interface settings]

And in my plexpy docker log

 

 


2016-12-11 15:25:11 - INFO :: MainThread : PlexPy WebStart :: Starting PlexPy web server on http://0.0.0.0:8181/plexpy/
 

 

I set this in my /site-confs default file, but when I go to my xxxxx.duckdns.org/plexpy, I get:

This site can’t be reached

_’s server DNS address could not be found.

 

Link to comment
5 hours ago, puncho said:

I set this in my /site-confs default file, but when I go to my xxxxx.duckdns.org/plexpy, I get:

This site can’t be reached

_’s server DNS address could not be found.

 

 

This is how the location block should look:

 

    location /plexpy {
        proxy_pass http://xxx.xxx.xxx.xxx:8181;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 90;
        proxy_set_header X-Forwarded-Proto $scheme;
        set $xforwardedssl "off";
        if ($scheme = https) {
                set $xforwardedssl "on";
        }
        proxy_set_header X-Forwarded-Ssl $xforwardedssl;
        proxy_redirect ~^(http(?:s)?://)([^:/]+)(?::\d+)?(/.*)?$ $1$2:$server_port$3;
    }

 

Also make sure the server block has the server_name directive set correctly:

 

server_name example.com;

 

I believe yours right now is:

 

server_name _;
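Putting the two pieces together, a minimal sketch of how the whole server block in the default site config might look (domain and IP are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name xxxxx.duckdns.org;

    location /plexpy {
        proxy_pass http://xxx.xxx.xxx.xxx:8181;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
```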

Link to comment
19 hours ago, aptalca said:

 

The Unraid GUI uses 443, so the letsencrypt container cannot bind to it. You can disable Unraid's https in settings.

 

So here is what I tried:

 

I disabled HTTPS then restarted - won't load the GUI

I changed the HTTPS port and restarted - won't load the GUI

 

Is there something I am missing here?

  • Upvote 1
Link to comment
On 3/22/2017 at 4:50 PM, local.bin said:

Thanks for that. It does indeed work as you suggest.

 

I am trying to incorporate sendmail using the standard fail2ban actions in action.d, so that the email content is created by fail2ban and gives me the detail of who has been banned, etc.

 

The fail2ban docs talk about adding the following to jail.local, which works, but because sendmail does not have the base config set up via its config (-H 'exec openssl s_client -quiet -tls1 -connect smtp.gmail.com:465' -auMYEMAILADDRESS -apMYPASSWORD), it fails, as sendmail tries to send from localhost.


mta = mail

action = %(action_mw)s 

I'm having some trouble with this. 

 

My jail.local in /config looks like this

# This is the custom version of the jail.conf for fail2ban
# Feel free to modify this and add additional filters
# Then you can drop the new filter conf files into the fail2ban-filters
# folder and restart the container

[DEFAULT]

# "bantime" is the number of seconds that a host is banned.
bantime  = 600

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 600

# "maxretry" is the number of failures before a host get banned.
maxretry = 5


[ssh]

enabled = false


[nginx-http-auth]

enabled  = true
filter   = nginx-http-auth
port     = http,https
logpath  = /config/log/nginx/error.log
mta = sendmail
action = sendmail-whois[name=letsencrypt, dest=<[email protected]>]

[nginx-badbots]

enabled  = true
port     = http,https
filter   = nginx-badbots
logpath  = /config/log/nginx/access.log
maxretry = 2


[nginx-botsearch]

enabled  = true
port     = http,https
filter   = nginx-botsearch
logpath  = /config/log/nginx/access.log

And in config/action.d I copied the sendmail-whois.conf to sendmail-whois.local 

# Fail2Ban configuration file
#
# Author: Cyril Jaquier
#
#

[INCLUDES]

before = sendmail-common.conf

[Definition]

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip> from `uname -n`
            Date: `LC_ALL=C date +"%%a, %%d %%h %%Y %%T %%z"`
            From: <sendername> <<sender>>
            To: <dest>\n
            Hi,\n
            The IP <ip> has just been banned by Fail2Ban after
            <failures> attempts against <name>.\n\n
            Here is more information about <ip> :\n
            `/usr/bin/whois <ip> || echo missing whois program`\n
            Regards,\n
            Fail2Ban" | /usr/sbin/sendmail -t -v -H 'exec openssl s_client -quiet -tls1 -connect smtp.gmail.com:465' -au<username> -ap<password> <dest>

[Init]

# Default name of the chain
#
name = default

But I get this in fail2ban.log 

2017-11-07 22:16:42,999 fail2ban.jail           [310]: INFO    Jail 'nginx-http-auth' started
2017-11-07 22:16:43,001 fail2ban.jail           [310]: INFO    Jail 'nginx-botsearch' started
2017-11-07 22:16:43,002 fail2ban.jail           [310]: INFO    Jail 'nginx-badbots' started
2017-11-07 22:16:43,009 fail2ban.utils          [310]: ERROR   printf %b "Subject: [Fail2Ban] letsencrypt: started on `uname -n`
Date: `LC_ALL=C date +"%a, %d %h %Y %T %z"`
From: Fail2Ban <fail2ban>
To: <email@gmail.com>\n
Hi,\n
The jail letsencrypt has been started successfully.\n
Regards,\n
Fail2Ban" | /usr/sbin/sendmail -f fail2ban <email@gmail.com> -- stderr:
2017-11-07 22:16:43,009 fail2ban.utils          [310]: ERROR    -- stderr: '/bin/sh: syntax error: unexpected end of file'
2017-11-07 22:16:43,009 fail2ban.utils          [310]: ERROR   printf %b "Subject: [Fail2Ban] letsencrypt: started on `uname -n`
Date: `LC_ALL=C date +"%a, %d %h %Y %T %z"`
From: Fail2Ban <fail2ban>
To: <email@gmail.com>\n
Hi,\n
The jail letsencrypt has been started successfully.\n
Regards,\n
Fail2Ban" | /usr/sbin/sendmail -f fail2ban <email@gmail.com> -- returned 2
2017-11-07 22:16:43,010 fail2ban.actions        [310]: ERROR   Failed to start jail 'nginx-http-auth' action 'sendmail-whois': Error starting action Jail('nginx-http-auth')/sendmail-whois

It's like it skips the .local file and uses the sendmail-whois.conf file?

I even completely removed the container and deleted the image and /config folder, but this still happens.

 

When I bash into the container and do this: 

sendmail -t -v  -H 'exec openssl s_client -quiet -tls1 -connect smtp.gmail.com:465' -auMYEMAILADDRESS -apMYPASSWORD <mail.txt

it works just fine. 

 

Anyone know what's wrong?

Link to comment

IGNORE THIS!! I'm going to leave this here for prudence's sake in case someone else has the same issue.

 

It turns out the template DID have a "www" subdomain, which I removed (I removed the "subdomains" variable completely). For some reason this did not work. So I remade the subdomains variable, manually removed the container, and remade the container using the Unraid GUI. This time it worked.

 

------------------------------------------------------------------------------

 

I'm having some issues with getting a new cert generated by letsencrypt / certbot. Everything was working fine up until this morning.

 

It is trying to get a cert for www.mydomain.ddns.net - but that does not exist since it is a DDNS service - there is only mydomain.ddns.net. How do I change it so that it does not try to validate www.mydomain.ddns.net? My unraid docker config does not have any subdomains listed for it.

 

See the log below.

 

-------------------------------------
_ _ _
| |___| (_) ___
| / __| | |/ _ \
| \__ \ | | (_) |
|_|___/ |_|\___/
|_|

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
2048 bit DH parameters present
SUBDOMAINS entered, processing
Sub-domains processed are: -d www.MYDOMAIN.ddns.net
E-mail address entered: REDACTED
Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
usage:
certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert.
certbot: error: argument --cert-path: No such file or directory

Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for MYDOMAIN.ddns.net
tls-sni-01 challenge for www.MYDOMAIN.ddns.net
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. www.MYDOMAIN.ddns.net (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: DNS problem: NXDOMAIN looking up A for www.MYDOMAIN.ddns.net

IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: www.MYDOMAIN.ddns.net
Type: connection
Detail: DNS problem: NXDOMAIN looking up A for www.MYDOMAIN.ddns.net

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
/var/run/s6/etc/cont-init.d/50-config: line 127: cd: /config/keys/letsencrypt: No such file or directory
[cont-init.d] 50-config: exited 1.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

 

Edited by cglatot
Link to comment

Hello,

 

I've got the letsencrypt docker running, as well as some other dockers (mostly Nextcloud and Plex).

 

I tried to redirect all http requests to https using a site-config file as shown below:

 

server {
	listen 80;
	server_name *.example.com example.com;
	return 301 https://$host$request_uri;
}

The site-configs for Nextcloud etc. are in separate files and the default site-config-file is just an empty file.

 

Whenever I try to access my Cloud via https://cloud.example.com (note the https), everything is working fine.

But whenever I try to access my Cloud via http://cloud.example.com, no redirect takes place and I am prompted to enter my Unraid username and password (like root and password).

 

Does anyone have an idea what could be the reason for that?

Link to comment
1 hour ago, Altair said:

Hello,

 

I've got the letsencrypt docker running, as well as some other dockers (mostly Nextcloud and Plex).

 

I tried to redirect all http requests to https using a site-config file as shown below:

 


server {
	listen 80;
	server_name *.example.com example.com;
	return 301 https://$host$request_uri;
}

The site-configs for Nextcloud etc. are in separate files and the default site-config-file is just an empty file.

 

Whenever I try to access my Cloud via https://cloud.example.com (note the https), everything is working fine.

But whenever I try to access my Cloud via http://cloud.example.com, no redirect takes place and I am prompted to enter my Unraid username and password (like root and password).

 

Does anyone have an idea what could be the reason for that?

 

Could be your site config for nextcloud or could be browser cache. 301 is a permanent redirect.
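One way to rule the cache out while testing: browsers cache 301s aggressively but do not cache 302s, so you can temporarily make the redirect in the config above temporary:

```nginx
server {
	listen 80;
	server_name *.example.com example.com;
	# 302 = temporary redirect, not cached; switch back to 301 once it works
	return 302 https://$host$request_uri;
}
```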

Link to comment
1 minute ago, aptalca said:

 

Could be your site config for nextcloud or could be browser cache. 301 is a permanent redirect.

 

I tried opening the page in IE, which I hadn't used before (so nothing cached), and I still get the same result: e.g. http://subd.example.com/ redirects me to the Unraid login dialog.

 

It happens not only when I try to access my cloud, but also whenever I open e.g. http://plex.example.com for Plex.

Thus, I conclude that it has something to do with letsencrypt.

Link to comment
On 4/22/2017 at 5:35 PM, dukiethecorgi said:

 

Hey, got it working! The problem was the location of the GeoIP.dat file; it defaulted to /usr/share/GeoIP/GeoIP.dat, so I created /config/geodata, changed the config, and manually downloaded the data.

Would you be able to share how you set this up? I would like to also implement Geo blocking.

 

Thanks.

Edited by FalconX
Link to comment
16 hours ago, Altair said:

 

I tried opening the page in IE, which I hadn't used before (so nothing cached), and I still get the same result: e.g. http://subd.example.com/ redirects me to the Unraid login dialog.

 

It happens not only when I try to access my cloud, but also whenever I open e.g. http://plex.example.com for Plex.

Thus, I conclude that it has something to do with letsencrypt.

 

Then it's your site config. Without seeing that, we have no idea

Link to comment

For the most part I'm happy with the docker and it does what I need it to do; however, I'm running into "413 Request Entity Too Large" when I try to upload anything larger than 10MB to my Nextcloud docker. Downloading works fine. Uploading directly to Nextcloud (not using the reverse proxy) works fine as well.

 

Example error from the log:

2017/11/14 16:17:29 [error] 339#339: *79 client intended to send too large body: 16014220 bytes, client: 79.223.239.124, server: cloud.*, request: "PUT /remote.php/webdav/Photos/2017/11/17-11-14%2013-30-11%200175.mov HTTP/1.1", host: "cloud.dixl.me"

 

My config for this particular reverse proxy:

server {
        listen 443 ssl;

        root /config/www;
        index index.html index.htm index.php;

        server_name cloud.*;

        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ciphers 'stuff I'm not sure I want to share';
        ssl_prefer_server_ciphers on;

        client_max_body_size 0;
        client_body_temp_path /unraid/www/cache/;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
#               auth_basic "Restricted";
#               auth_basic_user_file /config/nginx/.htpasswd;
                include /config/nginx/proxy.conf;
                proxy_pass https://192.168.59.140:443;
        }
}

 

Any help would be appreciated.

Link to comment
21 minutes ago, Napper198 said:

For the most part I'm happy with the docker and it does what I need it to do; however, I'm running into "413 Request Entity Too Large" when I try to upload anything larger than 10MB to my Nextcloud docker. Downloading works fine. Uploading directly to Nextcloud (not using the reverse proxy) works fine as well.

 

Example error from the log:


2017/11/14 16:17:29 [error] 339#339: *79 client intended to send too large body: 16014220 bytes, client: 79.223.239.124, server: cloud.*, request: "PUT /remote.php/webdav/Photos/2017/11/17-11-14%2013-30-11%200175.mov HTTP/1.1", host: "cloud.dixl.me"

 

My config for this particular reverse proxy:


server {
        listen 443 ssl;

        root /config/www;
        index index.html index.htm index.php;

        server_name cloud.*;

        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ciphers 'stuff I'm not sure I want to share';
        ssl_prefer_server_ciphers on;

        client_max_body_size 0;
        client_body_temp_path /unraid/www/cache/;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
#               auth_basic "Restricted";
#               auth_basic_user_file /config/nginx/.htpasswd;
                include /config/nginx/proxy.conf;
                proxy_pass https://192.168.59.140:443;
        }
}

 

Any help would be appreciated.

 

There's a solution on this thread, I don't have it on hand. Search for client-size=10m; it's in the nginx config.

 

It needs to be zero.
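For reference, the nginx directive in question is spelled client_max_body_size; a sketch of the contexts where it can be set (the most specific one wins):

```nginx
http {
    client_max_body_size 0;      # 0 disables the body size check

    server {
        client_max_body_size 0;  # can also be set per-server or per-location
    }
}
```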

Link to comment
32 minutes ago, ijuarez said:

 

There's a solution on this thread, I don't have it on hand. Search for client-size=10m; it's in the nginx config.

 

It needs to be zero.

 

If you mean client_max_body_size, that is also zero in the config.

Any idea about the wording of that post? I can't seem to find it, unfortunately.

 

user abc;
worker_processes 4;
pid /run/nginx.pid;
include /etc/nginx/modules/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {
        # Basic Settings
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        client_max_body_size 0;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Logging Settings
        access_log /config/log/nginx/access.log;
        error_log /config/log/nginx/error.log;

        # Gzip Settings
		gzip off;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Virtual Host Configs
        include /etc/nginx/conf.d/*.conf;
        include /config/nginx/site-confs/*;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7

}

 

Edited by Napper198
Link to comment
