[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



15 minutes ago, Rene said:

 

I don't know how to do the Let's Encrypt part; I'm not good with the command line stuff.

 

No need for the command line. Just do everything in the unRAID GUI.

 

Or do you mean the reverse proxy part?

Edited by GilbN
Link to comment
On 4/8/2018 at 5:43 PM, wgstarks said:

 

 

3 hours ago, GilbN said:

 

1. You are proxy_passing http, not https, AND you are proxy_passing the letsencrypt container?! You need to proxy_pass the NEXTCLOUD container: proxy_pass https://192.168.1.113:8443;

 

2. Your config.php is wrong. It needs to be 'overwrite.cli.url' => 'https://YOURsubdomain.duckdns.org', NOT your local IP to Nextcloud.

Holy shit, it works! After all that time, it works. Thank you so much, I am incredibly appreciative. You have no idea how good it feels to see ownCloud come up secured.
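For anyone else hitting the same two problems, a minimal sketch of what the nginx side of that fix looks like (the LAN IP, port, file path and duckdns name are the placeholders from the exchange above, and a complete Nextcloud proxy conf will need more headers than this):

# e.g. /config/nginx/site-confs/nextcloud (sketch only)
server {
    listen 443 ssl;
    server_name YOURsubdomain.duckdns.org;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        include /config/nginx/proxy.conf;   # the container's stock proxy include (assumption)
        # proxy_pass must hit the Nextcloud container over https, not the letsencrypt container
        proxy_pass https://192.168.1.113:8443;
    }
}

On the Nextcloud side, 'overwrite.cli.url' in config.php then carries the public https address exactly as quoted above, not the LAN IP.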

Link to comment
4 minutes ago, phreeq said:

I just know that I'm missing something, but I can't figure out what. If any of y'all can give me a hand, I'd appreciate it.

 

Docker.png

http forward.png

https forward.png

log.png

NAT.png

 

Can't do a wildcard with http validation, IIRC. Read the FAQ on GitHub.
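For reference, wildcard certs need DNS-01 validation, so the relevant container variables (the same ones that show up in the 50-config output later in this thread) would look something like this; the cloudflare plugin is only an example, and where the plugin credentials go is covered in the image's README:

VALIDATION=dns
DNSPLUGIN=cloudflare
URL=mydomain.com
SUBDOMAINS=wildcard

Whether your image version accepts wildcard for SUBDOMAINS, and which DNS plugins it ships, is exactly what the GitHub FAQ covers, so check there first.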

Edited by GilbN
Link to comment
2 hours ago, myt30 said:

I dug through most of this post and I'm sure I'm missing something obvious, but could someone please look through my settings? I have tried everything that I can think of and I can't get it to work. Thank you, folks.

docker config.PNG

Docker Log.PNG

port forward 1.PNG

port forward 2.PNG

 

Hmm, the port forwarding seems to be correct. Is the IP on duckdns correct?

 

If so, try restarting your router. And if that doesn't work, try setting the config folder location to /mnt/cache or /mnt/disk instead of /mnt/user

Link to comment
22 hours ago, aptalca said:

 

Hmm, the port forwarding seems to be correct. Is the IP on duckdns correct?

 

If so, try restarting your router. And if that doesn't work, try setting the config folder location to /mnt/cache or /mnt/disk instead of /mnt/user

The IP is correct; if I ping the domain name it comes up with my IP.

I appreciate your help, so far I've tried these:

restart router (Plex port forward does work)

update firmware on router

reinstall docker from user to cache

change domain name to staticserver.duckdns.org instead of duckdns.org

change to different ports

 

I'm kind of at a loss for other things to try. Is there a docker for testing port forward settings?

Link to comment
4 hours ago, myt30 said:

I'm kind of at a loss for other things to try. Is there a docker for testing port forward settings?

Install a docker for a plain web server configured for host ports 85 and 4445, and see if you can get external 80 and 443 answered properly there. You can have multiple dockers answering on the same ports as long as they aren't ever running at the same time.
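If it helps, a rough sketch of that test using the linuxserver nginx image (the image and port numbers here are only an example; any plain web server container would do, and the letsencrypt container should be stopped while you test):

docker run -d --name=porttest -p 85:80 -p 4445:443 linuxserver/nginx

Point your router's external 80/443 forwards at host ports 85/4445 for the duration of the test, then hit your duckdns address from outside your LAN. If the default nginx page loads, the forwarding is fine and the problem is elsewhere.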

Link to comment
18 hours ago, jonathanm said:

Install a docker for a plain web server configured for host ports 85 and 4445, and see if you can get external 80 and 443 answered properly there. You can have multiple dockers answering on the same ports as long as they aren't ever running at the same time.

Thank you for your help. Unfortunately my understanding of unRAID and networking is pretty basic, and mostly comes from YouTube. While I think I get what you are saying, I am not sure how to do it. Any chance you could reference a video or a specific docker container?

Link to comment

So I just got an expiry notice from Letsencrypt that my certs are going to expire in 20 days.  Yet when I restart my Letsencrypt docker, it shows:

 

Quote

http validation is selected
Certificate exists; parameters unchanged; attempting renewal
<------------------------------------------------->

<------------------------------------------------->
cronjob running on Thu Apr 19 09:52:10 EDT 2018
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------

No renewals were attempted.
No hooks were run.
 

 

 

So the question is, why are they not renewing?

Link to comment
1 hour ago, IamSpartacus said:

So I just got an expiry notice from Letsencrypt that my certs are going to expire in 20 days.  Yet when I restart my Letsencrypt docker, it shows:

 

 

 

So the question is, why are they not renewing?

Have you changed the list of domains at any point while using the LE docker? It's very possible that the certificate that is expiring is not on the list of currently used domains.

Link to comment
1 hour ago, jonathanm said:

Have you changed the list of domains at any point while using the LE docker? It's very possible that the certificate that is expiring is not on the list of currently used domains.

 

All the domains listed in the email I got from LE about them expiring are active and match what's in my current config.  The only difference is that the email address associated with the domains has changed.  So the email I got was sent to my old email address; the current (same) domains are registered to a different email.

 

I thought maybe the email didn't apply because the certs were tied to a different email, but on LE's website they mention that you won't get an expiry notice if the same domain is renewed, even by a different email account.  So that's what has me confused.  When I look at my current cert, it says they are set to expire on May 8... so why are they not renewing when the LE container restarts?

Link to comment
8 hours ago, IamSpartacus said:

 

All the domains listed in the email I got from LE about them expiring are active and match what's in my current config.  The only difference is that the email address associated with the domains has changed.  So the email I got was sent to my old email address; the current (same) domains are registered to a different email.

 

I thought maybe the email didn't apply because the certs were tied to a different email, but on LE's website they mention that you won't get an expiry notice if the same domain is renewed, even by a different email account.  So that's what has me confused.  When I look at my current cert, it says they are set to expire on May 8... so why are they not renewing when the LE container restarts?

 

I don't know. You clipped the log, so we can't see why no renewals were attempted.

 

Post a full log.

Link to comment
11 hours ago, aptalca said:

 

I don't know. You clipped the log, so we can't see why no renewals were attempted.

 

Post a full log.

 

Here you go, with just my personal info redacted.

 

Quote

 

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/New_York
URL=MY_DOMAIN
SUBDOMAINS=MY_SUBDOMAINS
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=MY_EMAIL
STAGING=

Backwards compatibility check. . .
No compatibility action needed
2048 bit DH parameters present
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Sub-domains processed are:  -d MY_SUBDOMAIN1 -d MY_SUBDOMAIN2 -d MY_SUBDOMAIN3 -d MY_SUBDOMAIN4
E-mail address entered: MY_EMAIL
http validation is selected
Certificate exists; parameters unchanged; attempting renewal
<------------------------------------------------->

<------------------------------------------------->
cronjob running on Thu Apr 19 09:52:10 EDT 2018
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------

No renewals were attempted.
No hooks were run.
-------------------------------------------------------------------------------
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready

 

 

Edited by IamSpartacus
Link to comment
28 minutes ago, IamSpartacus said:

 

Here you go, with just my personal info redacted.

 

 

 

That is strange.

 

Run the following commands on unraid terminal:

docker exec -it letsencrypt bash
cat /var/log/letsencrypt/letsencrypt.log
ls -al /config/etc/letsencrypt/renewal

 

Link to comment
15 minutes ago, aptalca said:

 

That is strange.

 

Run the following commands on unraid terminal:


docker exec -it letsencrypt bash
cat /var/log/letsencrypt/letsencrypt.log
ls -al /config/etc/letsencrypt/renewal

 

 

 

Here you are:

 

Quote

root@972df1a70c01:/$ cat /var/log/letsencrypt/letsencrypt.log
2018-04-20 02:08:02,223:DEBUG:certbot.main:certbot version: 0.23.0
2018-04-20 02:08:02,224:DEBUG:certbot.main:Arguments: ['-n', '--pre-hook', 'if ps aux | grep [n]ginx: > /dev/null; then s6-svc -d /var/run/s6/services/nginx; fi', '--post-hook', "if ps aux | grep 's6-supervise nginx' | grep -v grep > /dev/null; then s6-svc -u /var/run/s6/services/nginx; fi;     cd /config/keys/letsencrypt &&     openssl pkcs12 -export -out privkey.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -passout pass:"]
2018-04-20 02:08:02,224:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#certbot-route53:auth,PluginEntryPoint#dns-cloudflare,PluginEntryPoint#dns-cloudxns,PluginEntryPoint#dns-digitalocean,PluginEntryPoint#dns-dnsimple,PluginEntryPoint#dns-dnsmadeeasy,PluginEntryPoint#dns-google,PluginEntryPoint#dns-luadns,PluginEntryPoint#dns-nsone,PluginEntryPoint#dns-rfc2136,PluginEntryPoint#dns-route53,PluginEntryPoint#manual,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)
2018-04-20 02:08:02,252:DEBUG:certbot.log:Root logging level set at 20
2018-04-20 02:08:02,253:INFO:certbot.log:Saving debug log to /var/log/letsencrypt/letsencrypt.log
2018-04-20 02:08:02,254:DEBUG:certbot.renewal:no renewal failures
root@972df1a70c01:/$ ls -al /config/etc/letsencrypt/renewal
total 0
drwxrwxrwx 2 abc abc   6 Feb 10 12:53 .
drwxrwxrwx 9 abc abc 108 Apr 20 02:08 ..
 

 

Edited by IamSpartacus
Link to comment
3 hours ago, IamSpartacus said:

 

 

Here you are:

 

 

 

Very strange. Your letsencrypt setup is missing the renewal config file, which should be auto-generated along with the cert. Could be due to a bug from the last time your cert was generated, or it was somehow deleted by mistake.

 

Change the container settings and add or remove a subdomain, which should force cert generation from scratch. After that, you can run the same commands and the last one should list a conf file with the name of your domain

Link to comment
5 minutes ago, aptalca said:

 

Very strange. Your letsencrypt setup is missing the renewal config file, which should be auto-generated along with the cert. Could be due to a bug from the last time your cert was generated, or it was somehow deleted by mistake.

 

Change the container settings and add or remove a subdomain, which should force cert generation from scratch. After that, you can run the same commands and the last one should list a conf file with the name of your domain

 

Yup, editing the config forced a new cert generation and seems to have fixed it.  I'll keep that in mind if I ever have issues again in the future.  Thank you!

Link to comment

I've gotten everything working so far for use with Ombi on unRAID 6.1.9 (I'll eventually be moving everything over to the sister server on v6.5.0).

 

My questions now are really focused on security.

I have this server on my home network, with a few business servers running on the same network, and some business data even in the same unRAID server.

I'm hoping I may list my setup configurations here, and someone may be able to answer a few questions.

(All sensitive information fields have been redacted)

 

DDNS

 

DDNS is handled through freedns.afraid.org, where mysub.strangled.net resolves to my home's dynamically assigned IP provided by my ISP.

The DDNS update is maintained by the DDNS updater in my DD-WRT flashed router.

 

DNS 

 

The domain I use is actually a split domain, as the base domain mydomain.com points to an external business mail server.  I set up a separate sub-domain for this, we'll call it mattflix.mydomain.com.  This is a CNAME using Cloudflare DNS that resolves to my DDNS domain provided by freedns.  This setup looks like so:

  • mattflix.mydomain.com --> mysub.strangled.net --> external IP at home.

Questions:

 

I assume that because I am using a sub-domain of mydomain.com and not the base domain, this is what limits me to a single letsencrypt/nginx/site-confs file (default), instead of what I've seen in this thread about using multiple files, one for each subdomain?

  • I've tried every possible way I could find or think of to make this work: with the default, without it, with the main server block in the default and a separate block in each site conf. But every time I have more than one site-conf file it kills the page and gives a connection refused. (It does the same if I try to list more than one sub-domain/location in the default file.)

I'm just confirming a suspicion here, and I know I can switch to mydomain.com/service instead of subdomains, or just buy a dedicated base domain. I just want to confirm that is the solution, or whether I'm doing something wrong.

 

 

Also: I currently have a couple of subdomain.mydomain.com names that resolve to the same DDNS destination.  The problem is they all translate to mattflix.mydomain.com.

  • I'd like, if possible, to only allow mattflix.mydomain.com to resolve, and have any other valid subdomain.mydomain.com either time out or error. I tried but just couldn't get it; I played around with the server listen block, and I suspect I'm just missing something in there?

 

 

Port Forwarding / LetsEncrypt Docker Container Setup

 

Firewall port forwarding is standard: External --> Internal  80 ---> 81 | 443 --> 444 

LetsEncrypt Docker Configuration

 

/letsencrypt/nginx/site-confs/default

## Source: https://github.com/1activegeek/nginx-config-collection/blob/master/apps/ombi/ombi.md

server {
      listen		80;
       server_name	mattflix.mydomain.com;
       return		301 https://$host$request_uri;
}

server {
listen 443 ssl;
server_name mattflix.mydomain.com;

## Set root directory & index
        root /config/www;
        index index.html index.htm index.php;

## Turn off client checking of client request body size
        client_max_body_size 0;

## Custom error pages
	error_page 400 401 402 403 404 /error.php?error=$status;


#SSL settings
        include /config/nginx/strong-ssl.conf;


location / {
	## Default <port> is 5000, adjust if necessary
		proxy_pass  http://myipaddress:38084;

	## Using a single include file for commonly used settings
		include /config/nginx/proxy.conf;

	proxy_cache_bypass $http_upgrade;
	proxy_set_header Connection keep-alive;
	proxy_set_header Upgrade $http_upgrade;
	proxy_set_header X-Forwarded-Host $server_name;
	proxy_set_header X-Forwarded-Ssl on;
	}
	## Required for Ombi 3.0.2517+
		if ($http_referer ~* /) {
			rewrite ^/dist/([0-9\d*]).js /dist/$1.js last;
		}
}

 

/letsencrypt/nginx/strong-ssl.conf

## Source: https://github.com/1activegeek/nginx-config-collection
## READ THE COMMENT ON add_header X-Frame-Options AND add_header Content-Security-Policy IF YOU USE THIS ON A SUBDOMAIN YOU WANT TO IFRAME!

## Certificates from LE container placement
	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

## Strong Security recommended settings per cipherli.st
	ssl_dhparam /config/nginx/dhparams.pem; # Bit value: 4096
	ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
	ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
	ssl_session_timeout  10m;

## Settings to add strong security profile (A+ on securityheaders.io/ssllabs.com)
	add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
	add_header X-Content-Type-Options nosniff;
	add_header X-XSS-Protection "1; mode=block";

	#SET THIS TO none IF YOU DONT WANT GOOGLE TO INDEX YOU SITE!
		add_header X-Robots-Tag none;

	## Use *.domain.com, not *.sub.domain.com when using this on a sub-domain that you want to iframe!
		add_header Content-Security-Policy "frame-ancestors https://*.$server_name https://$server_name";

	## Use *.domain.com, not *.sub.domain.com when using this on a sub-domain that you want to iframe!
		add_header X-Frame-Options "ALLOW-FROM https://*.$server_name" always;

	add_header Referrer-Policy "strict-origin-when-cross-origin";
	proxy_cookie_path / "/; HTTPOnly; Secure";
	more_set_headers "Server: Classified";
	more_clear_headers 'X-Powered-By';

	#ONLY FOR TESTING!!! READ THIS!: https://scotthelme.co.uk/a-new-security-header-expect-ct/
		add_header Expect-CT max-age=0,report-uri="https://domain.report-uri.com/r/d/ct/reportOnly";

 

/letsencrypt/nginx/proxy.conf

client_max_body_size 10m;
client_body_buffer_size 128k;

#Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;

# Basic Proxy Config
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect  http://  $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 32 4k;

 

This all gets me an A+ on securityheaders.io, which is great and all, but what does that actually do for me?

I'm concerned about brute force, DDOS, and some thug trying to muscle into my server.

 

I've been trying to read up on fail2ban and its implementation. However, from what I've found, because I'm using this for Ombi and it authenticates off of the user's Plex account, this bypasses fail2ban?

 

I've tried monitoring the fail2ban status however I get this error when I try to check the status:

 

docker exec -it LetsEncrypt bash
root@d3fc185ce9d5:/$ fail2ban-client -i
Fail2Ban v0.10.1 reads log file that contains password failure report
and bans the corresponding IP addresses using firewall rules.

fail2ban> status nginx-http-auth
 Failed to access socket path: /var/run/fail2ban/fail2ban.sock. Is fail2ban running?
fail2ban>

I know the first time I ran the command, everything reported, though it reported no activity.

 

/letsencrypt/fail2ban/jail.local

# This is the custom version of the jail.conf for fail2ban
# Feel free to modify this and add additional filters
# Then you can drop the new filter conf files into the fail2ban-filters
# folder and restart the container

[DEFAULT]

#       ##"bantime" is the number of seconds that a host is banned.
                bantime  = 259200

#       ## A host is banned if it has generated "maxretry" during the last "findtime" seconds.
                findtime  = 600

#       ## "maxretry" is the number of failures before a host get banned.
                maxretry = 3


        [ssh]

                enabled = false


        [nginx-http-auth]

                enabled  = true
                filter   = nginx-http-auth
                port     = http,https
                logpath  = /config/log/nginx/error.log
#               ignoreip = myipaddress.0/24


        [nginx-badbots]

                enabled  = true
                port     = http,https
                filter   = nginx-badbots
                logpath  = /config/log/nginx/access.log
                maxretry = 2


        [nginx-botsearch]

                enabled  = true
                port     = http,https
                filter   = nginx-botsearch
                logpath  = /config/log/nginx/access.log



## Unbanning

#       ## SSH into the container with:
#               docker exec -it LetsEncrypt bash

#       ## Enter fail2ban interactive mode:
#               fail2ban-client -i

#       ## Check the status of the jail:
#               status nginx-http-auth

#       ## Unban with:
#       set nginx-http-auth unbanip 77.16.40.104

#       ## If you already know the IP you want to unban you can just type this:
#               docker exec -it letsencrypt fail2ban-client set nginx-http-auth unbanip 77.16.40.104

 

 

I know there's no such thing as perfectly 100% secure, but with ports 80 & 443 being open, and only relying on Ombi/Plex password security, I just feel like my ass is hanging in the wind.

Any guidance would be most appreciated.

Edited by Drider
Added proxy.conf
Link to comment
3 hours ago, Drider said:

I've gotten everything working so far for use with Ombi on unRAID 6.1.9... My questions now are really focused on security. ... Any guidance would be most appreciated.
 

There are a lot of questions here so I'll take a stab and try to answer a few. But before I start I should let you know, it seems you took a lot of these configs from guides or other people. They are all so heavily modified that it is difficult for me to help troubleshoot. Nginx is a highly capable and thus complicated piece of software. For specific nginx config questions, I would recommend asking the folks who wrote the specific configs you're using. 

 

First of all, I don't quite understand why you came up with such a confusing domain name forwarding structure. To me it seems all you needed was to use your main domain for business and set A records for certain subdomains that point to your home server. If you're already using Cloudflare for managing those DNS records, I'm sure your DD-WRT router can update those subdomains on Cloudflare with IP changes. You shouldn't need to use the freedns DDNS as an intermediate.
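As an illustration only: updating a Cloudflare A record with your current WAN IP is a single call against their v4 API, which DD-WRT's custom DDNS field or a small cron script can make. Every identifier below is a placeholder:

# sketch: push the current public IP into an existing A record on Cloudflare
ZONE_ID=your_zone_id            # from the Cloudflare dashboard
RECORD_ID=your_dns_record_id    # id of the mattflix A record
IP=$(curl -s https://ifconfig.me)

curl -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: your_cloudflare_api_key" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"mattflix.mydomain.com\",\"content\":\"${IP}\",\"ttl\":120}"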

 

In letsencrypt, you can do one of two things: 1) set the URL as mydomain.com, set the subdomains as matt,matt2,matt3 and set only_subdomains to true. That way it won't try to validate mydomain.com, but you can add as many subdomains as you want. Or 2) set the URL to matt.mydomain.com and set subdomains to 1,2,3, and you'll get a cert that covers 1.matt.mydomain.com, etc.
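In container-variable terms (the same URL/SUBDOMAINS/ONLY_SUBDOMAINS fields shown in the 50-config log earlier in the thread), the two options would look roughly like this, with the subdomain names as placeholders:

# option 1: validate only the listed subdomains, never the base domain
URL=mydomain.com
SUBDOMAINS=matt,matt2,matt3
ONLY_SUBDOMAINS=true

# option 2: treat matt.mydomain.com as the base and hang subdomains off it
URL=matt.mydomain.com
SUBDOMAINS=1,2,3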

 

With regards to multiple site configs, it's up to you. Nginx simply combines them all into one giant config file through include statements in nginx.conf and other sub-confs. The only rule is, there has to be one named default, otherwise the container will create one. You likely had issues due to duplicate servers or locations. 
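For example, the container's stock nginx.conf pulls every file in the site-confs folder in with a single include along these lines (the exact path is an assumption; check your own nginx.conf), which is why one duplicate server or location in any file can break all of them:

# inside the http { } block of /config/nginx/nginx.conf
include /config/nginx/site-confs/*;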

 

If you want your server to only respond to certain requests, play with the server name. Keep in mind that you can define a default server for each port, so if nginx gets a request with an unrecognized destination address, it will send it to the default server. So you can create a separate server block as a catch-all by defining it as the default and making it do whatever you want: serve a 404, redirect to Google, etc.
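A minimal sketch of such a catch-all, reusing the cert include from the configs posted above (the 404 is just one choice of behaviour):

# default/catch-all server: answers any request whose Host doesn't match another server_name
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;
    include /config/nginx/strong-ssl.conf;   # provides the ssl_certificate/key so 443 can answer
    return 404;
}

The existing mattflix.mydomain.com server block stays as it is; only requests for other names fall through to this one.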

Edited by aptalca
Link to comment

Thank you for your reply, it's very informative, and definitely gives a better understanding of what I'm trying to accomplish with DNS/DDNS.

 

2 hours ago, aptalca said:

There are a lot of questions here so I'll take a stab and try to answer a few. But before I start I should let you know, it seems you took a lot of these configs from guides or other people. They are all so heavily modified that it is difficult for me to help troubleshoot. Nginx is a highly capable and thus complicated piece of software. For specific nginx config questions, I would recommend asking the folks who wrote the specific configs you're using. 

You're right, I did go through what felt like 5,000 forum posts and hundreds of configs, splicing together the files I currently use.  I believe I have a good base, providing some security (again, I'm not sure how much more I need), without mucking it up with too much that's unneeded.  I have posted the same security-related questions on the forums where I found information outside of Lime.

 

2 hours ago, aptalca said:

First of all, I don't quite understand why you came up with such a confusing domain name forwarding structure. To me it seems all you needed was to use your main domain for business, and set A records for certain subdomains that point to your home server. If you're already using cloudflare for managing those dns records, I'm sure your ddwrt router can update those subdomains on cloudflare with ip changes. You shouldn't need to use the freedns ddns as an intermediate

I was not aware I could cut out the intermediary DDNS by using Cloudflare.  I've always been used to using DDNS with some kind of update client; I didn't know Cloudflare could do this automatically.

  • I'll have to do some digging to understand how this is properly configured, as I'm a little foggy on what host my subdomain A record should point to. I assume my current dynamically assigned IP?
  • I'm not entirely sure how I'd get a client updating the records. The router I use has a field for a custom DDNS service, but I'm not sure where to even begin with that. I guess some more searching may shed some light.

 

2 hours ago, aptalca said:

In letsencrypt, you can do one of two things, 1) set the url as mydomain.com, set the subdomains as matt,matt2,matt3 and set only_subdomains to true. That way it won't try to validate mydomain.com but you can add as many subdomains as you want, or 2) set the url to matt.mydomain.com and set subdomains to 1,2,3 and you'll get a cert that covers 1.matt.mydomain.com etc. 

I believe I understand what you're saying here, and it sounds like all I need to do is set only_subdomains to true. That will accomplish what I'm looking for: only allowing mattflix.ouritservice.com while refusing connections to the other subdomains pointed here (given my default and subdomain site-confs are correct), as well as allowing me to use separate site-conf files.

 

I think I've had the lightbulb "ahh" moment.  .. at least I hope.

 

Now, if anyone could provide some insight into the security side of this using Ombi and Plex user logins.

 

 

Thanks for your response!

Edited by Drider
spelling, prolly missed more..
Link to comment
10 hours ago, Drider said:

Thank you for your reply, it's very informative... Now, if anyone could provide some insight into the security side of this using Ombi and Plex user logins.

 

For Ombi you can set up .htpasswd and have fail2ban ban the IP after X failed logins. Fail2ban is already set up to do that with [nginx-http-auth]. I would add ignoreip = x.x.x.x/24 so you don't ban yourself. Like this:

 

[nginx-http-auth]

enabled  = true
filter   = nginx-http-auth
port     = http,https
logpath  = /config/log/nginx/error.log
ignoreip = 192.168.1.0/24
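If you go the .htpasswd route, the rough shape is: create the file once with htpasswd -c /config/nginx/.htpasswd someuser (run it inside the container if htpasswd is available there, or generate the file elsewhere and copy it in), then wrap the Ombi location in basic auth, e.g.:

location / {
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;   # assumed location under /config
    include /config/nginx/proxy.conf;
    proxy_pass http://myipaddress:38084;
}

Failed basic-auth attempts land in /config/log/nginx/error.log, which is the logpath the [nginx-http-auth] jail above is already watching, so fail2ban picks them up without further changes.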

 

Or you could set up Organizr and use server authentication so that only users that are logged in to Organizr can access domain.com/ombi, and set up fail2ban on the Organizr login page.

With Organizr, users that log in will automatically be logged into Ombi/Plex using SSO: https://imgur.com/a/rcwq6rg

 

You can also set up geoblocking, which will block any country of your choosing.
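As a sketch of the geoblocking idea, assuming the container's nginx was built with the GeoIP module and that you've placed a MaxMind country database at the path shown (both are assumptions; check the image's documentation before relying on it):

# http block: map each country code to an allow/deny flag
geoip_country /config/geoip/GeoIP.dat;    # path to the MaxMind country database (assumed)

map $geoip_country_code $blocked_country {
    default yes;    # block everything...
    US      no;     # ...except the countries you list
    CA      no;
}

# server block: drop blocked visitors without a response
if ($blocked_country = yes) {
    return 444;
}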

Edited by GilbN
Link to comment
