[Support] Linuxserver.io - Letsencrypt (Nginx)


1 hour ago, Jerky_san said:

So I checked again this morning. It succeeded in renewing, but letsencrypt still died and I can't access any websites hosted behind it, so it's not that.

Anything in the docker log?

10 minutes ago, aptalca said:

Anything in the docker log?

Not that I can see; it just says "server ready" until I restart it. The ports go completely down but the docker itself is still running. The error logs don't show anything either, but it was able to renew the cert last night, so it must have gone down after that happened.

1 hour ago, Jerky_san said:

Not that I can see; it just says "server ready" until I restart it. The ports go completely down but the docker itself is still running. The error logs don't show anything either, but it was able to renew the cert last night, so it must have gone down after that happened.

Can you post the output of "ps -ef" from inside the container when that happens?


Am I doing this wrong, or what am I not understanding here? (Which is a lot.)

 

I'm playing with a Gotify docker container for push notifications.

I'm playing with this letsencrypt docker for SSL certificates.

 

Is it possible to use the SSL certs from the letsencrypt container in the Gotify container, and if so, how?

 

The Gotify config file has a section for SSL:


  ssl:
    enabled: false # if https should be enabled
    redirecttohttps: true # redirect to https if site is accessed by http
    listenaddr: "" # the address to bind on, leave empty to bind on all addresses
    port: 443 # the https port
    certfile: # the cert file (leave empty when using letsencrypt)
    certkey: # the cert key (leave empty when using letsencrypt)
    letsencrypt:
      enabled: false # if the certificate should be requested from letsencrypt
      accepttos: false # if you accept the tos from letsencrypt
      cache: data/certs # the directory of the cache from letsencrypt

But this seems to require that letsencrypt is running within the same docker container?

 

I've tried just copying the files from appdata/letsencrypt to a folder in appdata/gotify, but the files "weren't found", so I'm not sure where Gotify was looking for them. The main config file is in appdata/gotify/config; I tried putting the certs there as well.

 

Gotify doesn't have a support thread here so I'll try in the letsencrypt thread, since I need letsencrypt files ;)

 

Thanks for any assistance.


Success

 

I modified all the lines from okavangonextcloud to okavangonextcloud.duckdns.org and that did the trick. Not sure if they should all be like that, but it worked. I was able to log in via the internal webgui and externally.

 

'trusted_domains' =>
  array (
    0 => '192.168.1.138:444',
    1 => 'okavangonextcloud.duckdns.org',
  ),
  'dbtype' => 'mysql',
  'version' => '19.0.0.12',
  'overwrite.cli.url' => 'https://okavangonextcloud.duckdns.org',
  'overwritehost' => 'okavangonextcloud.duckdns.org',

7 hours ago, Energen said:

Is it possible to use the SSL certs from the letsencrypt container in the Gotify container, and if so, how? [...] But this seems to require that letsencrypt is running within the same docker container?

It's explained in the readme, but you really should reverse proxy rather than share certs
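For the reverse-proxy route, a minimal proxy conf for the letsencrypt container might look something like the sketch below. The subdomain name, the container name gotify, and the internal port 80 are assumptions for illustration; adjust them to your setup, and check the sample proxy-confs shipped with the image for the canonical layout.

```nginx
# /config/nginx/proxy-confs/gotify.subdomain.conf (hypothetical)
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name gotify.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # container name and internal port are assumptions
        set $upstream_app gotify;
        set $upstream_port 80;
        proxy_pass http://$upstream_app:$upstream_port;
    }
}
```

With this in place, Gotify keeps ssl.enabled set to false and nginx terminates HTTPS with the cert the letsencrypt container already manages, so no cert files need to be copied between containers.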

19 hours ago, aptalca said:

Can you post the output of ”ps -ef" from inside the container when that happens?

This morning it appears it didn't go down, so perhaps it really is connected to the cron job? It renewed the night before and still went down, but last night it simply said "Cert not yet due for renewal" and everything is running this morning. Very strange.


Hi, 

 

I'm having a problem with my Nextcloud application that doesn't allow me to upload files larger than 50 MB. I have checked everything in the installation, and the problem is leading me to believe it's the letsencrypt docker container.

 

I have updated post_max_size and upload_max_filesize in php-local.ini to 3G. I also added client_max_body_size and proxy_max_temp_file_size 30720M to the subdomain.conf.

 

I don't know what I'm missing, or if this is happening to anyone else on the forum.

 

Server replied "413 Request Entity Too Large" to "PUT https://nextcloud.mydomain.com/remote.php/dav/uploads/myprofile/213883260/00000001" 

 


@gacpac 50 MB sounds a lot like you're uploading over WebDAV.

If it is WebDAV, Windows limits WebDAV transfers to 50 MB by default. You can increase the limit by editing

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters

and changing the FileSizeLimitInBytes key, up to a maximum of 4294967295 (4 GB).

This only applies if WebDAV is the protocol you're using.
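For illustration, the same registry change expressed as a .reg file (the dword is in hex; ffffffff = 4294967295). You may need to restart the WebClient service or reboot for it to take effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:ffffffff
```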

12 minutes ago, alturismo said:

@gacpac 50 MB sounds a lot like you're uploading over WebDAV. [...] change FileSizeLimitInBytes up to a maximum of 4294967295 (4 GB), but only if WebDAV is the protocol you're using.

I'm using whatever the Nextcloud application uses. I'm looking to replace Dropbox with Nextcloud completely. Let me give it a try.


@alturismo your registry fix actually worked for accessing Nextcloud manually over WebDAV. But the Windows application still doesn't work, even though the app seems to use WebDAV as well. It doesn't make sense; is there a setting I should be looking at regarding WebDAV, or something else?

 

If someone had this problem with the nextcloud app, maybe I'm not the only one. 

 

 

Posted (edited)

I followed the comment linked below. I modified the proxy.conf; I'm not sure if that opens a security hole for the other subdomains, but it did the job.

 

https://github.com/nextcloud/docker/issues/762#issuecomment-504225433

 

Quote:

@JanMalte
I finally solved my issue regarding 413 responses with files over 10mb. I'm not sure if it'll help you, but I fixed my issue by editing the proxy.conf file for my letsencrypt container. It can be found in your appdata directory at ~/appdata/letsencrypt/nginx/proxy.conf.

You can also enter the container by typing docker exec -it letsencrypt bash and then edit /config/nginx/proxy.conf.

The first line in this file reads client_max_body_size 10m;. Change the 10m to the size you desire. Then restart the letsencrypt container via docker restart letsencrypt and that should fix the issue.

 

Update

 

Based on the changelog for proxy.conf, it's better to remove client_max_body_size from proxy.conf so that the value set in the subdomain.conf takes effect instead.
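A sketch of setting the limit per subdomain instead of globally; the server block below is illustrative (your real subdomain conf will have more in it), and the upstream name nextcloud and port are assumptions. A value of 0 disables the size check entirely; something like 10G caps it instead:

```nginx
# excerpt from a hypothetical /config/nginx/proxy-confs/nextcloud.subdomain.conf
server {
    listen 443 ssl;
    server_name nextcloud.*;

    include /config/nginx/ssl.conf;

    # override the upload limit for this subdomain only
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass https://nextcloud:443;
    }
}
```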

Edited by gacpac


Seems like you can only have one domain (plus multiple subdomains). Is there a way to get all of my domains in this? Like:

www.example.tld

plex.example.tld

www.example2.tld

 

4 hours ago, uek2wooF said:

Seems like you can only have one domain (plus multiple subdomains). Is there a way to get all of my domains in this? Like:

www.example.tld

plex.example.tld

www.example2.tld

 

Did you read the readme on GitHub? It's explained there.


This seems to have stopped working for me this morning for no obvious reason. I've scanned all of the error logs and there's no errors logged that are any different to those that sporadically appear from time to time anyway. I'm perplexed.

1 minute ago, allanp81 said:

This seems to have stopped working for me this morning for no obvious reason. I've scanned all of the error logs and there's no errors logged that are any different to those that sporadically appear from time to time anyway. I'm perplexed.

There's nothing we can do to help with that info either.


No, I realise that; I just wasn't sure if anyone was experiencing the same "issue".

 

It seems to be a DNS issue; I'm getting different IPs returned when I look up my domain name depending on which computer I'm using.

On 6/23/2020 at 8:49 AM, Jerky_san said:

o-o welp that helped.. was trying to renew my domain that is behind cloudflare so it was failing.. Danke Danke

I ended up converting from HTTP to DNS validation through Cloudflare with my own domain instead of DuckDNS, and that fixed my issue. I've been meaning to move from DuckDNS to my own domain anyway; this just finally gave me the motivation.

Edited by SeveredDime


I managed to config various subdomains to the relevant dockers but I'm struggling with the simplest of stuff.

 

So I have some "pac" proxy scripts that I save at /mnt/cache/proxy, mapped to /config/www/proxy in the letsencrypt docker (since it has nginx, and my understanding is that nginx works as an HTTP server).

I want to point proxy.domain.com to /config/www/proxy.

The end result is to be able to type https://proxy.domain.com/script01.pac in the browser and have the script downloaded / loaded.

 

It seems rather simple, but I just can't get it to work. Can someone please help with the conf file? Many thanks.
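A minimal subdomain conf for serving static files from /config/www/proxy could look like the sketch below. The server name and the location block are assumptions; application/x-ns-proxy-autoconfig is the conventional MIME type for .pac files:

```nginx
# /config/nginx/proxy-confs/proxy.subdomain.conf (hypothetical)
server {
    listen 443 ssl;
    server_name proxy.*;

    include /config/nginx/ssl.conf;

    # serve files straight from the mapped folder
    root /config/www/proxy;

    # hand out .pac files with the proxy-autoconfig MIME type
    location ~ \.pac$ {
        default_type application/x-ns-proxy-autoconfig;
    }
}
```

Browsers would then fetch https://proxy.domain.com/script01.pac directly from the mapped host folder, with no proxy_pass involved.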

 

 

 

 


Hi,

 

I currently use this docker for the nextcloud and bitwarden containers, and that works great.
Now I'm trying to set up a WordPress site inside the www folder of the letsencrypt docker, and I want to redirect www.mysite.io to mysite.io.

But if I put www in the subdomain field, the certificate will be for www.mysite.io, and then visitors get redirected to mysite.io and a cert warning shows up.

 

If I don't enter www in the subdomain field, I get this error:

 

No subdomains defined
E-mail address entered: yo@yomail.com
http validation is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Saving debug log to /var/log/letsencrypt/letsencrypt.log
No match found for cert-path /config/etc/letsencrypt/live/www.mysite.io/fullchain.pem!
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mysite.io
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mysite.io/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mysite.io/privkey.pem
Your cert will expire on 2020-09-29. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

 

# redirect www to https://[domain.com]
server {
 listen 80;
 listen 443 ssl http2;
 server_name www.mysite.io; 
 return 301 https://mysite.io$request_uri;
}

# redirect http to https://[domain.com]
server {
    listen 80;
    server_name mysite.io; 
    return 301 https://mysite.io$request_uri;
}

# server config
server {
 listen 443 ssl http2;
 server_name mysite.io;

Does anyone know what I've done wrong?

Edited by lusitopp

52 minutes ago, lusitopp said:

Now I'm trying to set up a WordPress site inside the www folder of the letsencrypt docker, and I want to redirect www.mysite.io to mysite.io. [...] ERROR: Cert does not exist! Please see the validation error above. [...] Does anyone know what I've done wrong?

Post your docker run.


Hi everyone,

 

I currently have a working letsencrypt setup (port forwarded 443 -> 444) + DuckDNS for all of the docker containers hosted on my unRAID box. I added a new machine to my network that runs Windows 10 and hosts a Jellyfin instance.

 

If I want to use a different domain (not DuckDNS) for Jellyfin, how should I set things up so that I can keep my existing DuckDNS domain + my new domain, all as HTTPS?

 

Edit: I was able to get it working by pointing my Cloudflare DNS at my public IP and creating the appropriate proxy-conf with server_name set to my non-DuckDNS domain. If I understand correctly, this means my SSL cert isn't actually being created / handled by the letsencrypt docker and is instead being managed at Cloudflare. Does anyone see any issues with this approach?

Edited by Ezro

1 hour ago, Ezro said:

If I want to use a different domain (not DuckDNS) for Jellyfin, how should I set things up so that I can keep my existing DuckDNS domain + my new domain, all as HTTPS?

Use the extra domains variable, and in the proxy-conf replace jellyfin with the IP of the Windows 10 computer. I don't remember the variable name right now, but it's the one above the port; it might be host.
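A sketch of that proxy-conf edit, pointing the upstream at a machine outside Docker. The LAN IP and port here are placeholders (8096 is Jellyfin's usual default HTTP port); substitute your own values:

```nginx
# jellyfin.subdomain.conf, upstream on another host (illustrative)
server {
    listen 443 ssl;
    server_name jellyfin.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        # LAN IP and port of the Windows 10 machine running Jellyfin
        set $upstream_app 192.168.1.50;
        set $upstream_port 8096;
        proxy_pass http://$upstream_app:$upstream_port;
    }
}
```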

23 hours ago, aptalca said:

post your docker run

  

As soon as I copy/pasted my docker run I saw what I did wrong: 'only subdomains' was set to true. After changing it to false, I now get a certificate for https://mysite.io.

But another question that someone might be able to help me with.
With WordPress there are frequent updates to plugins, themes, and WordPress itself. Trying to update from the admin page prompts me for an FTP username and password, but I don't have an FTP server.
I understand this is because the user that runs the page doesn't have access to the WordPress folders; does anyone know how to set that up?

Quote

In order to install themes or plugins directly, without needing to provide an FTP user and password to WordPress, edit the wp-config.php file and add this line:

define('FS_METHOD', 'direct');

If you still can't install directly and WordPress keeps asking for FTP credentials, check that the wp-content folder is writable by the www-data user, or by whichever user runs your Apache or Nginx server.

 

7 hours ago, lusitopp said:

  

As soon as I copy/pasted my docker run I saw what I did wrong: 'only subdomains' was set to true. [...] I understand this is because the user that runs the page doesn't have access to the WordPress folders; does anyone know how to set that up?

 

The user is abc and its PUID is set to 99 (unless you changed it). It should have access to those folders on the host.

