[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



I wondered if you had any plans of supporting manual updates, or forcing the use of the same CA cert, in order to be able to manage hpkp pinning?

 

I have the certs setup and working, but the auto update would mean I likely lock myself out of my site each time the renewal occurs.

 

I wondered if anyone had a working hpkp pinning process at all?

 

Thanks in advance

 

certbot-http-public-key-pinning-hpkp/
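For anyone searching later, a minimal sketch of the pinning pieces involved (hedged; the cert path assumes this container's usual location, and HPKP needs a pre-generated backup pin or a key rotation will lock visitors out):

# extract the base64 SPKI sha256 pin from the current cert's public key
openssl x509 -in /config/keys/letsencrypt/fullchain.pem -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -binary | base64

# then in the nginx server block; <primary> and <backup> are placeholders
add_header Public-Key-Pins 'pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=2592000' always;

This is also exactly why the auto-renewal is a problem: if renewal generates a fresh private key, the pin no longer matches.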

1 hour ago, CHBMB said:

That's some config thing you've put in, I would guess. Have you added the word "unraid" to a config file somewhere?

Thanks, fixed it. It was a direct copy-and-paste oops. Now I'm wondering why the redirection is no longer working. No errors in the logs either.

I wondered if you had any plans of supporting manual updates, or forcing the use of the same CA cert, in order to be able to manage hpkp pinning? […]


Probably not. It seems to be more hassle than it's worth. IE and Edge don't support it either.

Plus, the whole purpose of this container is the automated certs.

You can use the plain nginx container and add the certs yourself if you only want manual updates. You can also run the letsencrypt container once, let it create the certs and stop it, then use the certs in the plain nginx container.
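A minimal sketch of that second workflow, assuming the container is named letsencrypt and shares its keys folder with the plain nginx container (names and mappings here are hypothetical):

# run the letsencrypt container once so it writes the certs, then stop it
docker start letsencrypt   # writes /config/keys/letsencrypt/fullchain.pem and privkey.pem
docker stop letsencrypt

# in the plain nginx container's site config, point at the same files
ssl_certificate /config/keys/letsencrypt/fullchain.pem;
ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

Renewal is then entirely manual: start the letsencrypt container again whenever you're ready to re-pin.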

OK, thanks.

20 minutes ago, aptalca said:

Probably not. It seems to be more hassle than it's worth. […]
Probably right, just me trying to get an A+ instead of an A :)

Seems there is some development on the LE side that may make the process more automatic, so maybe there will be more options in the future.


I'm not able to get more than one subdomain working.

This is my nginx/site-confs/default:

# listening on port 80 disabled by default, remove the "#" signs to enable
# redirect all traffic to https
#server {
#	listen 80;
#	server_name _;
#	return 301 https://$host$request_uri;
#}

# main server block
server {
	listen 443 ssl default_server;

	root /config/www;
	index index.html index.htm index.php;

	server_name _;

	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
	ssl_dhparam /config/nginx/dhparams.pem;
	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
	ssl_prefer_server_ciphers on;

	client_max_body_size 0;

	location / {
		try_files $uri $uri/ /index.html /index.php?$args =404;
	}

	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		# With php7-cgi alone:
		fastcgi_pass 127.0.0.1:9000;
		# With php7-fpm:
		#fastcgi_pass unix:/var/run/php7-fpm.sock;
		fastcgi_index index.php;
		include /etc/nginx/fastcgi_params;
	}

# sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
# notice this is within the same server block as the base
# don't forget to generate the .htpasswd file as described on docker hub
#	location ^~ /cp {
#		auth_basic "Restricted";
#		auth_basic_user_file /config/nginx/.htpasswd;
#		include /config/nginx/proxy.conf;
#		proxy_pass http://192.168.1.50:5050/cp;
#	}
}

# sample reverse proxy config without url base, but as a subdomain "cp", ip and port same as above
# notice this is a new server block, you need a new server block for each subdomain
server {
	listen 443 ssl;

	root /config/www;
	index index.html index.htm index.php;

	server_name ombi.*;

	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
	ssl_dhparam /config/nginx/dhparams.pem;
	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
	ssl_prefer_server_ciphers on;

	client_max_body_size 0;

	location / {
		include /config/nginx/proxy.conf;
		proxy_pass http://192.168.0.178:3579;	
	}
}

This config correctly works with mydomain.com and ombi.mydomain.com 

When I copy/paste the second server block (and change the subdomain/port), I get the following error when starting the letsencrypt docker:

nginx: [emerg] unexpected end of file, expecting ";" or "}" in /config/nginx/site-confs/default:98

 

Any tips on how to get multiple subdomains working?

I'm absolutely sure I'm copying the entire server block (everything from server { to the closing })
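That error usually means a } or ; got lost in the paste; nginx reports the line where it ran out of file, not where the brace went missing. Assuming the container is named letsencrypt, the quickest check is to validate the config inside it:

docker exec letsencrypt nginx -t

And a trimmed template for each additional subdomain (the subdomain and backend address here are hypothetical; copy the ssl_ciphers/dhparam lines from your existing blocks if you want them):

server {
    listen 443 ssl;
    server_name sonarr.*;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.0.178:8989;
    }
}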

 

EDIT: sorry, I realized this is for unRAID only. Mods, feel free to delete if the question isn't relevant. I've also posted to the linuxserver.io forums.


Good Evening all,

 

I'm pulling my hair out trying to get nginx to work as a reverse proxy only; I have a wildcard cert I'll use :) I'll say it now: thanks for anything you can suggest!

 

I have an old nginx server running on a VM that I'd love to move to a Docker container. Moving yet another thing over to unRAID would be great, but it's got me stumped as to how in the heck to make it work :)

 

The error in the nginx logs is:

"nginx: [emerg] bind() to 192.168.1.82:8888 failed (99: Address not available)"

 

I have port 80 forwarded to 8888 (my unraid box)

 

My config file:

 

server {
    # The IP that you forwarded in your router (nginx proxy)
    listen 192.168.1.82:8888; # this is not the default_server

    # Make site accessible from http://localhost/
    server_name share.mydomain.com;

    # The internal IP of the VM that hosts your Apache config
    set $upstream 192.168.1.82:4443;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
    }
}

server {
    listen 192.168.1.82:8888; # this is not the default_server
    server_name req.mydomain.com;
    set $upstream 192.168.1.82:3579;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
    }
}

server {
    listen 192.168.1.82:4444 ssl;

    # SSL config
    ssl on;
    ssl_certificate /mnt/user/appdata/nginx/ctx.mydomain.com.crt;
    ssl_certificate_key /mnt/user/appdata/nginx/ctx.mydomain.com.key;

    server_name ctx.mydomain.com;
    set $upstream 192.168.1.233;

    location / {
        proxy_pass_header Authorization;
        proxy_pass https://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        proxy_ssl_session_reuse off;
    }
}

server {
    listen 192.168.1.82:4444 ssl;

    # SSL config
    ssl on;
    ssl_certificate /mnt/user/appdata/nginx/keys/share.mydomain.com.crt;
    ssl_certificate_key /mnt/user/appdata/nginx/share.mydomain.com.key;

    server_name share.server.com;
    set $upstream 192.168.1.82:4444;

    location / {
        proxy_pass_header Authorization;
        proxy_pass https://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        proxy_ssl_session_reuse off;
    }
}
 

 

 


If 192.168.1.82 is the IP of your unRAID server, you are doing it wrong.

A Docker container gets its own IP in a totally different subnet, and it's dynamic depending on what order the containers get started up.

You should make the nginx configuration be the default_server (i.e. any interface IP); that way it doesn't care about this detail.

21 minutes ago, ken-ji said:

If 192.168.1.82 is the IP of your unRAID server, you are doing it wrong. […]

82 is my unRAID box. All my other dockers I've just set to a different port; perhaps I'm hosing up life there. I've got Plex, Ombi, and Nextcloud all running that way.

 

I guess I should say the only things I want external-facing are my Nextcloud and my Ombi.

 
 

Docker is forwarding port 8888 to 80. Nginx inside the container will see/listen on port 80. There won't be a connection coming from/to localhost

Drop the listen lines, and just use server_name ones:

server {
    listen 80 default;
    root /var/www/default;
}
server {
    listen 80;
    server_name mediastore;
    server_name 192.168.2.5;

    location / {
        proxy_pass http://192.168.2.5:8080;
    }

    location /transmission {
        satisfy any;
        allow 192.168.2.0/24;
        auth_basic "Transmission Remote Web Client";
        auth_basic_user_file /config/transmission.passwd;
        proxy_pass http://192.168.2.5:9091;
    }

    location /kibana {
        proxy_pass http://192.168.2.5:9000;
        rewrite ^/kibana$ /kibana/ permanent;
        rewrite ^/kibana/(.*) /$1 break;
        access_log off;
    }
}

Here's mine, where I proxy the unRAID WebUI (on 8080), Transmission on 9091, and Kibana on 9000. My unRAID is on 192.168.2.5.

And note that I don't listen on a specific IP; I just set up valid server_names to use and an empty default (kind of a jail).

1 hour ago, ken-ji said:

Drop the listen lines, and just use server_name ones […]

 

Thanks, I was able to piece it together from there. That being said, regarding what you mentioned earlier: is this considered an insecure thing to do? Perhaps I should keep it on its separate nginx server. Your thoughts?


Well, if there were a vulnerability in NGINX, it could be a point of attack.

That said, you only expose the stuff you want to be external-facing if you must, and use VPNs to access anything else.

My config, by the way, is internal-facing. A separate config (with SSL) and separate DNS names are used to "isolate" public-facing services.

On 11/17/2016 at 8:09 AM, joachimvadseth said:

Ok thanks, but first things first - how do I access the /mnt/user/appdata folder from my mac? A long long time ago I used ubuntu and mounting sshfs was not that big a deal and CLI is not my happiest place to work.. :)

 

Yeah, I was able to side-load my SSL info so Nextcloud was SSL-secured; unfortunately Plex Requests isn't SSL yet, so that's a no-go. I was trying to avoid setting up something more serious like a NetScaler, but it looks like that may be the "best" thing to do, I suppose.

6 minutes ago, acbaldwi said:

Yeah, I was able to side-load my SSL info so Nextcloud was SSL-secured […]

This particular NGINX docker is supposed to do the SSL part for you and reverse proxy connections in, so that accesses to the service are SSL'd with an auto-renewing certificate from the Let's Encrypt service. As far as the proxied web app is concerned, it is not using SSL.
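A minimal sketch of that pattern, reusing the cert paths from the configs earlier in the thread (the backend address is hypothetical):

# TLS terminates at nginx; the app behind it is proxied over plain HTTP
server {
    listen 443 ssl;
    server_name app.mydomain.com;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.82:3579;
    }
}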


Pretty sure I royally screwed something up.

 

I tried to add a new subdomain and regenerate certs, but I kept receiving an unknownhost DNS error that was newly logged after I made the change. In the process of troubleshooting the DNS I restarted the docker a few times and ended up getting this error:

 

There were too many requests of a given type :: Error creating new authz :: Too many invalid authorizations recently.

 

After a bit of googling it looks like I now have to wait a week, and the Let's Encrypt docker won't even start now. There should really be a warning in the description about the rate limits, and instructions on how to put this in test-cert mode, so others don't make the same mistake I did. Is there any way around this to at least get things up and running again?
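(For reference, Let's Encrypt runs a staging endpoint whose rate limits are much looser and whose certs are not browser-trusted. With plain certbot that's roughly the following; whether this container exposes an equivalent test mode is a separate question:)

certbot certonly --staging --standalone -d x.mydomain.com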




It's a letsencrypt thing and outside of our control. You can contact them about it.

Or you can get a free subdomain from duckdns and create as many sub-subdomains as you need

With regards to the DNS error, I can't say anything without logs or more info on your setup.
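A rough sketch of the duckdns route, going by the URL/SUBDOMAINS handling visible in the container logs later in this thread (the exact variable names and values here are assumptions):

# hypothetical docker run fragment for a duckdns domain with sub-subdomains
-e URL=mydomain.duckdns.org
-e SUBDOMAINS=ombi,nextcloud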




If you set up TeamViewer I can look into it for you. I don't mind helping those that need help. Of course, only when I'm free.

Thanks for the offers to help, guys, but I got it fixed. Looks like I didn't have to wait a week after all; when I updated my dockers this morning it worked without issue. It would be nice if the docker was at least allowed to stay running, so the reverse proxy would still work even if there are issues with the certificates.


The container should run even if the certs are not generated, but nginx won't start because the config requires those certs.

If you remove the two lines defining the certs from your site config, nginx should start.
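Those are the two lines from the default site config quoted earlier; commented out they look like this:

#ssl_certificate /config/keys/letsencrypt/fullchain.pem;
#ssl_certificate_key /config/keys/letsencrypt/privkey.pem;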

I can't seem to get this one up and running - and can't seem to figure out how anyone else did it :)

The thing is, letsencrypt tries to validate my domain before nginx is even up - and that fails since no one is listening on port 443 (no nginx)

Since this occurs on the init script, which fails, s6 terminates everything bringing the container down - so nginx doesn't even come up.

 

How did you guys set it up? An easy fix for me would be running another container with just nginx, but that misses the point of having both in the same container :o

 

Here are my logs:


-------------------------------------
[linuxserver.io ASCII banner]

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
2048 bit DH parameters present
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d x.mydomain.com
Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
usage:
certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

(default) run Obtain & install a cert in your current webserver
certonly Obtain cert, but do not install it (aka "auth")
install Install a previously obtained cert in a server
renew Renew previously obtained certs that are near expiry
revoke Revoke a previously obtained certificate
register Perform tasks related to registering with the CA
rollback Rollback server configuration changes made during install
config_changes Show changes made to server config during installation
plugins Display information about installed plugins
certbot: error: argument --cert-path: No such file or directory

Generating new certificate
WARNING: The standalone specific supported challenges flag is deprecated.

Please use the --preferred-challenges flag instead.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for x.mydomain.com
/usr/lib/python2.7/site-packages/OpenSSL/rand.py:58: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
result_code = _lib.RAND_bytes(result_buffer, num_bytes)
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. x.mydomain.com (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Failed to connect to xx.xxx.xx.xx:443 for tls-sni-01 challenge

IMPORTANT NOTES:
- If you lose your account credentials, you can recover through
e-mails sent to [email protected].
- The following errors were reported by the server:

Domain: x.mydomain.com
Type: connection
Detail: Failed to connect to xx.xxx.xx.xx:443 for tls-sni-01

challenge

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
/var/run/s6/etc/cont-init.d/50-config: line 105: cd: /config/keys/letsencrypt: No such file or directory
[cont-init.d] 50-config: exited 1.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
 

1 hour ago, natiz said:

The thing is, letsencrypt tries to validate my domain before nginx is even up - and that fails since no one is listening on port 443 (no nginx)

Pretty sure it's not verifying the port is open, it's just verifying that your DNS is properly in place. Does the FQDN that you are trying to get SSL-certified resolve to your current public IP when you ping it? It doesn't need to answer, just properly resolve. If not, then, no, the container won't start.
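A quick way to compare the two from any shell (the IP-echo service here is just one example):

dig +short x.mydomain.com        # what the world resolves your FQDN to
curl -s https://icanhazip.com    # your current public IP; the two should match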

45 minutes ago, jonathanm said:

Pretty sure it's not verifying the port is open, it's just verifying that your DNS is properly in place. […]

 

Yes, it's resolved and answered. I do think it's trying to connect, as the error clearly says:


Failed authorization procedure. x.mydomain.com (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Failed to connect to xx.xxx.xx.xx:443 for tls-sni-01 challenge

 

Also, see this: https://tools.ietf.org/html/draft-ietf-acme-acme-01#section-7.3

3 hours ago, natiz said:

The thing is, letsencrypt tries to validate my domain before nginx is even up - and that fails since no one is listening on port 443 (no nginx) […]

 

I think you misunderstand; it's not nginx that answers. Otherwise nobody would ever be able to install this the first time, as the nginx server wouldn't be ready to answer. Are your ports open and forwarded correctly? Check that 443 on your firewall/router is being forwarded to unRAID, and post your docker run command. (Link in my sig.)
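A quick sanity check from outside your LAN (e.g. a phone on mobile data; the domain is a placeholder):

curl -vk https://x.mydomain.com/ 2>&1 | head -n 20

If the TCP connection itself fails, the problem is the router/firewall forwarding rather than the container.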

