[Support] Linuxserver.io - Letsencrypt (Nginx)


I’m getting ready to set up a reverse proxy for my Tautulli and Ombi containers, but I wanted to see where I should buy my domain first. I know it’s possible to just use DuckDNS, but I wanted a cheap domain that my parents would remember, ideally under $5 for the year.
 
I’m going to follow SpaceinvaderOne’s guide on YouTube, so if anyone has any advice, I’d greatly appreciate that as well.

I don't think you'll find any provider under that, unless they have a sale, and then only for the first year.


1 hour ago, CHBMB said:

Namecheap is my default go-to provider, using Cloudflare as DNS.

Thanks!

 

Do you have to set an A record for the Cloudflare DNS?


Yes, an A record for my TLD, then CNAMEs for my subdomains, and a wildcard cert from LE.
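For anyone wondering what that looks like in practice, here is a hedged sketch of the records described (the names and IP are placeholders; the wildcard cert itself is requested by the letsencrypt container, not configured in DNS):

```
; apex A record pointing at your public IP
example.com.            A      203.0.113.10
; subdomains as CNAMEs back to the apex
www.example.com.        CNAME  example.com.
tautulli.example.com.   CNAME  example.com.
ombi.example.com.       CNAME  example.com.
```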


So I got an e-mail yesterday saying that my certificate expires in 20 days. I checked the Let's Encrypt log and it says that the renewal conf file is broken. How do I fix it?

 

cronjob running on Wed Jan 30 02:08:00 CET 2019
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/example.domain.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/certbot/renewal.py", line 67, in _reconstitute
    renewal_candidate = storage.RenewableCert(full_path, config)
  File "/usr/lib/python2.7/site-packages/certbot/storage.py", line 461, in __init__
    self._check_symlinks()
  File "/usr/lib/python2.7/site-packages/certbot/storage.py", line 520, in _check_symlinks
    "expected {0} to be a symlink".format(link))
CertStorageError: expected /etc/letsencrypt/live/example.domain.com/cert.pem to be a symlink
Renewal configuration file /etc/letsencrypt/renewal/example.domain.com.conf is broken. Skipping.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

No renewals were attempted.
No hooks were run.

Additionally, the following renewal configurations were invalid: 
  /etc/letsencrypt/renewal/example.domain.com.conf (parsefail)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0 renew failure(s), 1 parse failure(s)

 

1 hour ago, strike said:

So I got an e-mail yesterday that my certificate expires in 20 days. I check the let's encrypt log and it says that the renewal conf file is broken, how do I fix it? [quoted log trimmed; see the post above]

Did you restore your letsencrypt appdata from a backup, or copy it to a different location, over the last couple of months? If so, the method you used replaced the symlinks with copies of the actual files, so now certbot/letsencrypt is not happy.

 

You can change one of the parameters in the container settings to force it to revoke and delete the old cert and create a new one. Adding a subdomain should do it.
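For reference, certbot expects every file under live/&lt;domain&gt;/ to be a symlink into archive/&lt;domain&gt;/, which is exactly what the _check_symlinks() traceback above is testing. A self-contained sketch of what a repaired layout looks like, using a scratch directory (the real trees live under /etc/letsencrypt, and the "cert1.pem" suffix depends on what your archive actually contains):

```shell
# Mimic certbot's layout in a scratch directory; /etc/letsencrypt
# holds the real archive/ and live/ trees.
root=$(mktemp -d)
mkdir -p "$root/archive/example.domain.com" "$root/live/example.domain.com"
touch "$root/archive/example.domain.com/cert1.pem"

# A bad copy: what a non-symlink-preserving backup leaves behind.
touch "$root/live/example.domain.com/cert.pem"

# Replace the plain file with the relative symlink certbot expects:
cd "$root/live/example.domain.com"
rm -f cert.pem
ln -s ../../archive/example.domain.com/cert1.pem cert.pem

# _check_symlinks() is satisfied once this is a symlink again:
[ -L cert.pem ] && echo "cert.pem is a symlink"
```

The same would have to be done for privkey.pem, chain.pem, and fullchain.pem, which is why forcing the container to issue a fresh cert is usually the easier fix.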

12 minutes ago, aptalca said:

Did you restore your letsencrypt app data from a backup or did you copy it to a different location over the last couple of months? ... [quote trimmed]

Ah, you're totally right. I did restore the appdata from a backup; I forgot I did that. I'll try adding a subdomain, hopefully that will fix it. Thank you, sir!

 

Edit: I also copied the appdata using MC (Midnight Commander) and forgot to check the "preserve symlinks" option; that was probably what messed it up.

Edited by strike


I used the settings from the SpaceinvaderOne video to configure the reverse proxy and everything works well, but does anyone know how I could add MariaDB to the reverse proxy? I need to query my database from outside my server. At the moment the only way I've found is to open the port for it, because I'm not able to create the conf file for it in letsencrypt.

Any idea?

Thanks


I found a problem uploading a file from my Nextcloud client:

 

"413 Request entity too large" to "PUT https://mynextclouddomain/remote.php/dav/uploads/myuser/2144831028/00000001" (skipped due to earlier error, try again in 24 hours)

 

And to be more exact, it tries to upload and stops at exactly 90 MB before erroring out again.

 

I found that the solution is to add the following to the subdomain .conf:

 

client_max_body_size 10024M

 

But this didn't seem to work. Do I need to put it somewhere else? According to this thread I might, but it doesn't apply to this specific docker container:

 

https://github.com/nextcloud/docker/issues/32
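For what it's worth, the directive has to sit inside the server (or http) context of the conf that actually serves the subdomain. A hedged sketch of a nextcloud subdomain conf with the limit lifted — the server_name, include paths, and upstream address follow the usual linuxserver layout but are assumptions, and 0 disables the size check entirely:

```nginx
server {
    listen 443 ssl;
    server_name nextcloud.*;

    include /config/nginx/ssl.conf;

    # 0 = unlimited; a fixed cap such as 16G also works
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass https://192.168.1.10:444;
    }
}
```

Note that Nextcloud itself may also cap upload sizes in its own PHP settings, so raising only the nginx limit is not always enough.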

 

 

Edited by gacpac


Last August I set this all up and haven't touched it since, but my cert expired, and so began the process of working out why auto-renew failed.

 

In the end it was because there wasn't an "A record" for the heimdall subdomain I had set up. I never ended up using it, so I must have deleted the record off of Namecheap. I looked at the default file and I had commented out all the heimdall stuff, but the letsencrypt log showed it failing because the record wasn't defined. I simply recreated the "A record", let the docker run its auto-renew at 2 am like it does every night, and it was successful.

 

I would have thought that by deleting it from Namecheap and commenting out the heimdall stuff in default, it wouldn't even know to look for it. There must be another location.

 

Does anyone with experience of this and Namecheap have any idea where letsencrypt is getting its info from?

 

Whenever I start the letsencrypt docker it always references the three places I defined:

 

Sub-domains processed are: -d www.xxxx.com -d start.xxxx.com -d heimdall.xxxx.com

 

 

Edited by Ockingshay


Is there a way to give the letsencrypt docker a different LAN IP address to the Unraid server? The reason is that because the Unraid server uses ports 80 and 443, the port mapping on the router and in the docker config has to be something else (I used 180 and 1443, as suggested by @SpaceInvaderOne in his very helpful tutorial). I want to use ports 80 and 443 because I have a local DNS server, so I was planning to point domains like plex.domain.com at the IP of the docker rather than the IP of the Unraid server. Thanks for your time, guys.

Just now, dgwharrison said:

Is there a way to give the letsencrypt docker a different LAN IP address to the Unraid server? ... [quote trimmed]

Adjust the ports that the Unraid webUI runs on:

[screenshot of the Unraid webUI port settings]

1 hour ago, Ockingshay said:

August last year i set this all up and haven't touch it since, but my cert expired and so began the process of working out why auto-renew failed. ... [quote trimmed]

OMG, dickhead... it was defined in the docker container haha. I'll leave this here though, just in case someone else is as stupid as me.

1 hour ago, dgwharrison said:

Is there a way to give the letsencrypt docker a different LAN IP address to the Unraid server? ... [quote trimmed]

Or you could set up port redirection in your router. I have mine as a public redirect of 443 to (local) dockerip:444.


It feels a little weird to be posting new issues into this mammoth discussion, but so far, miraculously, most if not all issues seem to be addressed... so here I go again:

I have been trying to move an important development website from Bluehost to my unRAID server. The site uses MySQL databases. The programmer who helped me develop this site has been trying to get it to run from within my letsencrypt docker, but has informed me that Nginx ignores the .htaccess file. I can only take his word for it, as I simply don't understand enough about what differentiates Apache and Nginx to help him with this part.

He created the following .htaccess file:

<IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteBase /
    RewriteCond %{REQUEST_URI} ^system.*
    RewriteRule ^(.*)$ /index.php/$1 [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php?/$1 [L]
</IfModule>

What needs to be done in order for Nginx to work the same way as Apache does with this .htaccess file?
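Nginx doesn't read .htaccess files at all, so the rules have to be translated into the site's server block. A hedged, untested translation of the block above (this is the common front-controller pattern; the location names are assumptions, and the block would go in the site conf under /config/nginx/site-confs):

```nginx
# inside the server { } block for the site

# replaces the two !-f / !-d RewriteConds: serve the file or
# directory if it exists, otherwise fall back to index.php
location / {
    try_files $uri $uri/ /index.php?/$request_uri;
}

# replaces the ^system.* rule
location ~ ^/system {
    rewrite ^/(.*)$ /index.php/$1 last;
}
```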


Docker: Bitwarden

Issue: 502 Bad Gateway Error

 

I'm able to access Bitwarden from a subdomain but only if I hardcode the docker IP address in the conf file. When using upstream, I get a 502 error. Not sure what I'm doing wrong.


I used this sample radarr conf:

        set $upstream_radarr radarr;
        proxy_pass http://$upstream_radarr:7878;

This should work but I get a 502 error. The name of the docker container begins with a capital letter so no typo here. 

        set $upstream_bitwarden Bitwarden;
        proxy_pass http://$upstream_bitwarden:80;

The conf with the hardcoded IP that works:

# make sure that your dns has a cname set for bitwarden

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name bitwarden.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_bitwarden Bitwarden;
        proxy_pass http://10.10.8.28:80;
    }
}

Note: The conf works even without adding "set $upstream_bitwarden Bitwarden;". 

Edited by Katherine




Docker: Bitwarden
Issue: 502 Bad Gateway Error

I'm able to access Bitwarden from a subdomain but only if I hardcode the docker IP address in the conf file. When using upstream, I get a 502 error. Not sure what I'm doing wrong. ... [rest of quote trimmed]

Are you using a custom network interface?

This: set $upstream_bitwarden Bitwarden

will not work until you do. With a custom docker network interface, the internal DNS will resolve "Bitwarden" to 10.10.8.28, but it won't work with the standard bridge interface.
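As a sketch, a user-defined network can be created from the Unraid console like this (the network name "proxynet" and the container names are examples; on Unraid the same thing can be done by picking a custom network type in each container's template):

```shell
docker network create proxynet           # user-defined bridge with embedded DNS
docker network connect proxynet letsencrypt
docker network connect proxynet bitwarden
```

Once both containers sit on the same user-defined network, Docker's embedded DNS at 127.0.0.11 (the resolver already set in the conf) can translate container names to IPs.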


I use 1Password as my password manager on my iOS devices. I've noticed that the password autofill in iOS 12 automatically recognizes certain websites like "facebook.com" and automatically recommends those passwords without having to open 1Password. This doesn't happen with my personal domain. My Google searches seem to be coming up empty. Is there some modification I need to make to my Nginx configuration to allow the iOS password autofill to recognize my web site and connect it with the saved password?

 

It seems it's entirely possible this functionality doesn't work with http basic authentication. I've noticed when I go to my NextCloud page it automatically suggests passwords from my domain. It's only been since iOS 12 came out that any kind of password autofill has worked with basic authentication.

 

If this isn't going to work with http basic authentication is there a way to configure a central authentication page that automatically redirects to the requested subdomain or subfolder when authentication is successful?


Sharing some notes on how I got my Grafana subdomain to work with the reverse proxy in letsencrypt. Hopefully this helps someone with a similar setup. I am simply a journeyman when it comes to nginx, encryption, and docker, so take my cobbling with a grain of salt, and make sure you back up your .conf files before you screw around!

 

I have the grafana container created by grafana and the letsencrypt container from linuxserver, both in the same proxy network I set up.

 

For most of my other subdomains the proxy works fine with the conf line:

proxy_pass http://$upstream_[app]:[port]; 

However, for whatever reason, that did not work with Grafana for me. I commented out the proxy_pass conf line and changed it to the following, where the IP is the internal IP for Unraid:

proxy_pass http://192.168.1.20:3000;

I changed the settings for the Grafana container and set my GF_SERVER_ROOT_URL to:

https://[mydomain].com

Of course, I also added grafana to the letsencrypt subdomain list.
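In docker run terms, that env var change amounts to something like the following (the container name, network name, and domain are illustrative, not anything canonical from the thread):

```shell
docker run -d --name=grafana \
  --net=proxynet \
  -e GF_SERVER_ROOT_URL=https://grafana.example.com \
  -p 3000:3000 \
  grafana/grafana
```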

 

An additional configuration I added to all of my subdomain .conf files to force http to https was a new server block as follows:

server {
    listen 80;
    listen [::]:80;
    server_name [subdomain].*;
    return 301 https://$host$request_uri;
}

So now inside and outside my network going to grafana.[mydomain].com sends me to the Grafana login page.

8 hours ago, Katherine said:

Docker: Bitwarden

Issue: 502 Bad Gateway Error

I'm able to access Bitwarden from a subdomain but only if I hardcode the docker IP address in the conf file. When using upstream, I get a 502 error. ... [rest of quote trimmed]

DNS via container name doesn't handle uppercase; it has to be all lowercase. You need to change the container name to bitwarden.


I have a pretty weird one going on here. If I’m connected outside of my network and if I hit https://subdomain.domain.me, the site comes up and works great. Now if I am connected (wired or Wi-Fi) to my network and hit the same site, I get the following message...

 

”cannot open page because it cannot connect to the server”. 

 

 

What’s really weird is that it used to work and after one of the recent updates it has stopped working. Any ideas what’s going on? 

9 hours ago, aptalca said:

DNS via container name doesn't handle uppercase; it has to be all lowercase. You need to change the container name to bitwarden.

And... it is working. 😎

 

Thank you so much.

 

I've started getting this warning in the log file since I started using stream: 

nginx: [warn] could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size

Looking up the documentation, these directives are related to stream:

Syntax: variables_hash_bucket_size size;
Default: variables_hash_bucket_size 64;
Context: stream

Syntax: variables_hash_max_size size;
Default: variables_hash_max_size 1024;
Context: stream

I'm wondering where I should place these settings: under proxy-confs, site-confs, or in nginx.conf?
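Given that both directives take the stream context, my understanding (an assumption, not something confirmed in this thread) is that they belong inside the stream{} block of /config/nginx/nginx.conf itself, rather than in proxy-confs or site-confs, which are included into the http context. Roughly:

```nginx
# /config/nginx/nginx.conf
stream {
    # values are illustrative; raise them past the defaults the
    # warning names (max_size 1024, bucket_size 64)
    variables_hash_max_size 2048;
    variables_hash_bucket_size 128;

    # ...existing stream/proxy definitions...
}
```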

On 12/20/2018 at 5:34 PM, aptalca said:

I believe you can just enable in the ssl.conf

Unfortunately it was not that simple. Alpine Linux was using LibreSSL, which does not yet support TLS 1.3; with Alpine 3.9, however, they have switched to OpenSSL 1.1.1a, which does support TLS 1.3. In addition, the main docker nginx repo has just rebased their image to Alpine 3.9, officially supporting TLS 1.3!

 

@aptalca Please update your repo to Alpine 3.9, as this will allow everyone to enable TLS 1.3! 😄

6 hours ago, microservices said:

Unfortunately it was not that simple. Linux alpine was using libressl which does not yet support TLS1.3... [quote trimmed]

In due time

On 1/30/2019 at 2:31 PM, aptalca said:

Did you restore your letsencrypt app data from a backup or did you copy it to a different location over the last couple of months? ... [quote trimmed]

 

I have the same problem: I got an email that my cert is going to expire.

I didn't change anything in my docker config for over a year, so I don't really know what is causing this.

So I tried adding a subdomain in the container settings, which triggered a cert renewal.

But I still have the problem that the renewal process is somehow not working properly.

When checking the letsencrypt logfile, it hasn't changed in more than 20 days now.

This is the last entry:

 

cronjob running on Sat Jan 19 02:08:00 CET 2019
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/xxxxxxxserver.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/xxxxxxxserver.com/fullchain.pem expires on 2019-02-18 (skipped)
No renewals were attempted.
No hooks were run.

Even the manual renewal by adding a subdomain did not produce a log entry.

What's going on here?
