[Request/Done] Let's Encrypt Container


rix


Hey all,

 

Firstly, this is an awesome container, thank you so much :)

 

I've got everything set up how I want, and after testing each address, login credentials, etc., nginx seems to be working as it should, but I would appreciate feedback on my config.

 

My question is regarding the openvpn-as docker: I'm struggling to find any decent documentation on setting up nginx to pass an OpenVPN connection through.

 

Is this actually possible, and is it worth it? I assumed that having only one port open on my router was the thing to aim for, but if I end up with two open ports and they're both secure, is the recommendation still to pass OpenVPN through nginx, assuming that's even possible? From what I can find, it looks like it isn't.

 

I'd appreciate any advice, and also any config examples from others who have done something similar.

 

Thank you :)

 

server {
  listen 80;
  server_name xxxx.dyndns.biz;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl default_server;

  root /config/www;
  index index.html index.htm index.php;

  server_name xxxx.dyndns.biz 192.168.1.100;

  ssl_certificate /config/keys/fullchain.pem;
  ssl_certificate_key /config/keys/privkey.pem;
  ssl_dhparam /config/nginx/dhparams.pem;
  ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
  ssl_prefer_server_ciphers on;

  client_max_body_size 0;

  location / {
    try_files $uri $uri/ /index.html /index.php?$args =404;
  }

  location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }

  location /plexpy {
    proxy_pass http://192.168.1.100:8191;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }

  location /couchpotato {
    proxy_pass http://192.168.1.100:8083;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }

  location /sickrage {
    proxy_pass http://192.168.1.100:8082;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }

  location /plexrequests {
    proxy_pass http://192.168.1.100:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_shared;
  }

  location /nzbhydra {
    proxy_pass http://192.168.1.100:5075;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }

  location /mylar {
    proxy_pass http://192.168.1.100:8090;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }

  location /sabnzbd {
    proxy_pass http://192.168.1.100:8081;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd_admin;
  }
}

Link to comment

First off, @aptalca, thanks for the great docker! I switched over from Linuxserver.io Nginx after I saw that your docker supported Let's Encrypt.

 

1. I'm having a problem with CloudFlare. CloudFlare processes everything first and then passes it on to my home server; this is to prevent DDoS. But it also means certbot is unable to complete the challenges and get an SSL certificate. Disabling CloudFlare every time I need a new certificate would be easy, but it's not the ideal solution, since it's tedious to do that every thirty days (or whenever the certificate expires).

 

Then I came across this: https://community.letsencrypt.org/t/how-to-get-a-lets-encrypt-certificate-while-using-cloudflare/6338?u=pfg  Basically, we only need the --webroot flag to force certbot to complete the challenges over standard HTTP. Sure, it's a bit less secure, but after the certificate verification we can keep redirecting to HTTPS, so the security impact is minimized.
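
For reference, invoked by hand the webroot method looks roughly like this (a sketch on my part; I'm assuming the container's web root is /config/www, adjust to taste):

# --webroot answers the HTTP-01 challenge by placing a file under
# <webroot>/.well-known/acme-challenge/ which Let's Encrypt fetches over port 80
certbot certonly --webroot -w /config/www -d example.com -d www.example.com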

 

I propose a checkbox when we set up the docker, so that we can tell the container to use the --webroot flag.

 

2. http://lime-technology.com/forum/index.php?topic=43696.msg437353#msg437353

 

This is great, but what if I want to do cloud.example.com, transmission.example.com, and so on? Can you tell me how to set up Nginx that way? I'm fairly new to Nginx and reverse proxies, so I need a lot of help from others. Thanks in advance.

 

3. When I need to add more subdomains, will the docker automatically add certs for them if I just edit the container?

Link to comment


1. Webroot is not just a flag, it's a different method altogether. This container does the validation over port 443 because on many systems port 80 is blocked or used by other services (e.g. the unRAID GUI). The webroot method does not support port 443 validation.

 

Against DDoS, this container includes fail2ban, which for home servers should be more than enough (although I doubt anyone would be hosting a high-traffic, potential DDoS-target site on a home unRAID server). Fail2ban is also useful against hacking attempts like brute-forcing htpasswd credentials.

 

By the way, certs expire in 90 days. Once they are over 60 days old, this container will attempt renewals every night at 2am and at each container start.

 

2. The easiest way is to duplicate the server block and have a separate server block for each subdomain, where the server name is subdomain.domain.url. In each server block you can include the proxy bit for that subdomain, as in the sketch below. There are plenty of guides online for that.
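
A minimal sketch of one such per-subdomain block (hostnames and the backend address are hypothetical; it assumes the cert covers the subdomain and the backend app is reachable from the container):

server {
    listen 443 ssl;
    server_name cloud.example.com;

    ssl_certificate /config/keys/fullchain.pem;
    ssl_certificate_key /config/keys/privkey.pem;

    location / {
        # everything under this subdomain goes to one backend app
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.100:8181;
    }
}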

 

3. Yes. It should recognize changes to the subdomains, revoke the existing certs and get a new set. Keep in mind that if you do it too many times in a row, Let's Encrypt may throttle you and block new requests for a period of time.

Link to comment


1. So do you mean using CloudFlare with this docker container is basically impossible? Then how do I make it get the certificate through CF?

 

2, 3. Thanks!

 

Basically, I need CF; I'm going to be hosting my own blog and really don't want my server crashing down on me. Other hosting is NOT an option, so please don't recommend that.

Link to comment

It sounds to me like at some point between 60 and 90 days you need to turn CF off.

 


 

Yeah, I know, but why does TLS validation fail with CF? I just want to know why, so I can do some more troubleshooting.

Link to comment

I think the bit where aptalca tells you it's a completely different method, not a flag.

 

Meaning no troubleshooting is necessary as the container doesn't use the method you want it to.

 


 

OK, @aptalca and @CHBMB, after some further digging I found this:

 

https://community.letsencrypt.org/t/using-certbot-behind-cloudflare/16068

 

So basically we don't need to use webroot; we just need to explicitly tell certbot to retrieve the cert over HTTP.
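
If I read that thread right, it boils down to something like this (an untested sketch on my part, not how the container currently calls certbot):

# prefer the HTTP-01 challenge over port 80 instead of validating over 443,
# which CloudFlare's proxy should pass straight through to the origin
certbot-auto certonly --standalone --preferred-challenges http -d example.com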

 

Will this work or is it a dumb idea?

 

Thanks,

ideaman924

Link to comment

It sounds like aptalca does the verification over port 443 because port 80 is blocked on a lot of systems.

 

So it would either need a different container to use the method you linked to, or some sort of way to implement a variable.

 

Whether either is something he would consider, or indeed is technically possible within the docker, I couldn't say.

 

Especially as allowing LE to renew between 60 and 90 days is a valid solution, although it would require intervention on your part.

 

Link to comment


Exactly, which is why we should give users an option to choose when setting up the docker.

 

That way people with port 80 blocked can continue to use 443, while users who need CF can enable cert retrieval via port 80.

Link to comment

If it's possible....

 

It's not "we" giving the users an option though, you're asking someone else to potentially rewrite and/or restructure the container to fit a use case, which there is a work around for...

 

Don't underestimate the time and effort it takes to get some of this stuff working, and remember that all the developers are doing it in their free time.

 

 


 

 

Link to comment


Sorry, I'm currently coffee-deprived and didn't really think about what I typed. I understand the devs are doing this in their free time. No disrespect intended; I didn't mean it that way.

 

It was a recommendation I typed off the top of my head, without thinking about who was going to make the changes. I certainly couldn't, so I should probably shut my mouth.

Link to comment

Having a weird issue. I had this container working, but I broke my Nextcloud/MariaDB setup a while ago (back in 2015) and only recently got it back online. When I try to get my certs, they keep failing because it says the domain is unknown. I am using a Google domain, with pfSense doing the dynamic DNS, and that works just fine; I have port 443 forwarded. To test this, I set LS Nextcloud as the forwarded 443 target and it's reachable from the outside world, but this container is not working for me. Is it possibly related to the CF issue mentioned previously?

 

Domain: mangosflick.com
Type: unknownHost
Detail: No valid IP addresses found for mangosflick.com

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
/etc/my_init.d/firstrun.sh: line 138: cd: /config/keys: No such file or directory

 

Attached is a screenshot from my pfSense box; the domain that is not greyed out is the active DynDNS entry.


Link to comment


I don't quite understand your setup, but based on the error message it seems you have not set a proper DNS record for "mangosflick.com". In other words, your top domain, mangosflick.com, does not point to your home server, and therefore Let's Encrypt cannot validate ownership.

 

If you want to keep it that way and have a cert only for the subdomain nextcloud.mangosflick.com, then set the only_subdomains option to true so it won't try to validate the top domain (see the sketch below).
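
In docker run terms that would be something like the following (a sketch; the variable names should match what the container template calls them, so double-check against your own settings, and the image name is a placeholder):

# URL is the top domain only; SUBDOMAINS is comma separated with no spaces;
# ONLY_SUBDOMAINS=true skips validating the top domain
docker run -d --name=letsencrypt \
  -e URL=mangosflick.com \
  -e SUBDOMAINS=nextcloud \
  -e ONLY_SUBDOMAINS=true \
  -p 443:443 \
  <letsencrypt image>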

 

Link to comment


Set only_subdomains to true and got close.

 



*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/firstrun.sh...
Setting the correct time

Current default time zone: 'America/Chicago'
Local time is now: Tue Oct 18 16:04:36 CDT 2016.
Universal Time is now: Tue Oct 18 21:04:36 UTC 2016.

Using existing nginx.conf
Using existing nginx-fpm.conf
Using existing site config
Using existing landing page
Using existing jail.local
Using existing fail2ban filters
SUBDOMAINS entered, processing
Sub-domains processed are: -d nextcloud.mangosflick.com.
Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Upgrading certbot-auto 0.8.1 to 0.9.3...
Replacing certbot-auto...
Creating virtual environment...
Installing Python packages...
Installation succeeded.
Installation succeeded.
usage:
certbot-auto [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

(default) run Obtain & install a cert in your current webserver
certonly Obtain cert, but do not install it (aka "auth")
install Install a previously obtained cert in a server
renew Renew previously obtained certs that are near expiry
revoke Revoke a previously obtained certificate
register Perform tasks related to registering with the CA
rollback Rollback server configuration changes made during install
config_changes Show changes made to server config during installation
plugins Display information about installed plugins
letsencrypt: error: argument --cert-path: No such file or directory

2048 bit DH parameters present
Generating new certificate
usage:
certbot-auto [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

(default) run Obtain & install a cert in your current webserver
certonly Obtain cert, but do not install it (aka "auth")
install Install a previously obtained cert in a server
renew Renew previously obtained certs that are near expiry
revoke Revoke a previously obtained certificate
register Perform tasks related to registering with the CA
rollback Rollback server configuration changes made during install
config_changes Show changes made to server config during installation
plugins Display information about installed plugins
letsencrypt: error: argument -d/--domains/--domain: expected one argument

/etc/my_init.d/firstrun.sh: line 138: cd: /config/keys: No such file or directory
Error opening input file cert.pem

cert.pem: No such file or directory
* Starting nginx nginx
...fail!
* Starting authentication failure monitor fail2ban
ERROR No file(s) found for glob /config/log/nginx/error.log

ERROR Failed during configuration: Have not found any log file for nginx-http-auth jail

...fail!
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 366
Oct 18 16:05:20 5f6c84693161 syslog-ng[375]: syslog-ng starting up; version='3.5.3'

 

Reading through the logs, it seems my nginx did not get installed correctly.

Link to comment


No, the certs weren't created due to an error. Please post a screenshot of your container settings.

 

EDIT: Wait, do you have a period at the end of your url? Did you enter it as "mangosflick.com."? Then that's probably the issue.

Link to comment

 


Update: it was not a period that was causing the issue. I got it working; if you visit https://nextcloud.mangosflick.com it resolves. Reading through all of the posts, I realize that the instructions on creating the container are not very clear, at least for me.

 

Don't forget to enter in the host ports, e-mail address, the domain url(without any subdomains like www) and the subdomains (just the subdomains, comma separated, no spaces) (under advanced view). 

 

So what I was doing, because of the instructions above, was putting the TLD (mangosflick.com) in Key 2 and my subs (www, nextcloud) in Key 3, and this would fail. The reason it failed was that I did not have a DynDNS entry for just the TLD, only for www and nextcloud.

 

I figured this out when I tried using my DuckDNS address for an old subdomain: when it failed, it pulled an old IP, because that subdomain is currently not being updated. I bought the domain specifically for this container; I like DuckDNS, but I like having my own domain.

 

So when I used the full address, boom, it worked. I suggest maybe putting that in the info.

 

Thanks for all the help aptalca and the other members.

 

 

 

Link to comment

I'm wondering if someone can help me out with a few other parameters I need to add to a site's config file to get my Collabora container to work with my Nextcloud container.

 

This is the site I am referencing: https://nextcloud.com/collaboraonline/

 

So far I am able to get every other container working through the proxy, for example:

 

server {
    listen 443 ssl;
    server_name nextcloud.xxx.xxx;

    ssl_certificate /config/keys/fullchain.pem;
    ssl_certificate_key /config/keys/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass https://127.0.0.1:943/;
    }
}

 

 

This works fine, but the documentation says I need to use options like ProxyPassMatch and ProxyPassReverse:

<VirtualHost *:443>
  ServerName office.nextcloud.com:443

  # SSL configuration, you may want to take the easy route instead and use Lets Encrypt!
  SSLEngine on
  SSLCertificateFile /path/to/signed_certificate
  SSLCertificateChainFile /path/to/intermediate_certificate
  SSLCertificateKeyFile /path/to/private/key
  SSLProtocol all -SSLv2 -SSLv3
  SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
  SSLHonorCipherOrder on

  # Encoded slashes need to be allowed
  AllowEncodedSlashes On

  # Container uses a unique non-signed certificate
  SSLProxyEngine On
  SSLProxyVerify None
  SSLProxyCheckPeerCN Off
  SSLProxyCheckPeerName Off

  # keep the host
  ProxyPreserveHost On

  # static html, js, images, etc. served from loolwsd
  # loleaflet is the client part of LibreOffice Online
  ProxyPass           /loleaflet https://127.0.0.1:9980/loleaflet retry=0
  ProxyPassReverse    /loleaflet https://127.0.0.1:9980/loleaflet

  # WOPI discovery URL
  ProxyPass           /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0
  ProxyPassReverse    /hosting/discovery https://127.0.0.1:9980/hosting/discovery

  # Main websocket
  ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws

  # Admin Console websocket
  ProxyPass           /lool/adminws wss://127.0.0.1:9980/lool/adminws

  # Download as, Fullscreen presentation and Image upload operations
  ProxyPass           /lool https://127.0.0.1:9980/lool
  ProxyPassReverse    /lool https://127.0.0.1:9980/lool
</VirtualHost>
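
For what it's worth, an nginx translation of those ProxyPass directives would look roughly like this (an untested sketch; it assumes loolwsd is reachable at 127.0.0.1:9980, and nginx does not verify the upstream's self-signed cert by default):

server {
    listen 443 ssl;
    server_name office.xxx.xxx;

    ssl_certificate /config/keys/fullchain.pem;
    ssl_certificate_key /config/keys/privkey.pem;

    # static loleaflet files and the WOPI discovery URL
    location ~ ^/(loleaflet|hosting/discovery) {
        proxy_pass https://127.0.0.1:9980;
        proxy_set_header Host $host;
    }

    # main/admin websockets and the /lool operations
    location ~ ^/lool {
        proxy_pass https://127.0.0.1:9980;
        proxy_http_version 1.1;                   # needed for the websocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 36000s;                # keep long-lived sockets open
    }
}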

 

Link to comment

Guys, urgent help needed. I've been trying to solve this for hours, but I'm clueless, so it'd be nice if somebody could chime in on what I'm doing wrong.

 

Basically, my configuration logic is:

1. Check if server_name is set. If not (the user is connecting by IP), bin the request and return 404.

2. Check if the user is using HTTP. HTTP is bad, so force the user to HTTPS.

3. Then send the user to the appropriate destination based on server_name.

 

My configuration (personal info removed):

 

# Automatically bin requests from anywhere else
server {
    listen 80;
    listen 443 ssl;
    server_name _;
    return 404;
}

# Redirect all http requests to secure https
server {
    listen 80;
    return 301 https://$server_name$request_uri;
}

# Main server block (for wordpress)
server {
    listen 443 ssl;

    root /config/www;
    index index.html index.htm index.php;

    server_name xxx.com www.xxx.com;

    ssl_certificate /config/keys/fullchain.pem;
    ssl_certificate_key /config/keys/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

# Server block for Nextcloud
server {
    listen 443 ssl;

    server_name cloud.xxx.com www.cloud.xxx.com;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass https://xxx.xxx.xxx.xxx:xxx;
    }
}

 

However, this is not working: the browsers I tested (Chrome, Edge, IE) give either a 404 or an empty response (the URL bar just blanks out). How do I get it to work?

 

Thanks in advance.
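
One likely culprit in the config above (an observation, not a confirmed fix): the HTTP-redirect block sets no server_name, so $server_name expands to an empty string and the 301 points at "https:///...", which matches the blank URL bar described. A sketch of that block using $host instead:

# Redirect all http requests to secure https
server {
    listen 80;
    # $host carries the hostname the client actually requested;
    # $server_name would be empty here because none is configured
    return 301 https://$host$request_uri;
}

The Nextcloud server block is also declared "listen 443 ssl" without any ssl_certificate of its own, which nginx will refuse to start with unless a certificate is defined at the http level.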

Link to comment
