linuxserver.io

[Support] Linuxserver.io - Letsencrypt (Nginx)

3812 posts in this topic Last Reply


I'm trying to write a config file to reverse proxy to GitLab-CE, but I can't get it to connect. I have the following config:

 

server {
    listen 443 ssl;

    server_name git.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_gitlab GitLab-CE;
        proxy_pass http://$upstream_gitlab:9080;
    }
}

and this as extra parameters for the GitLab-CE docker, minus config for email and backups (and with the real domain replaced with example.com):

--env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';gitlab_rails['gitlab_ssh_host']='192.168.69.99';"

but I just get 502 Bad Gateway when I try to connect.

 

The page at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#supporting-proxied-ssl suggests adding 

nginx['listen_port'] = 80;
nginx['listen_https'] = false;

but this hasn't helped at all.

 

The nginx error log has this:

2018/09/22 14:26:57 [error] 372#372: *224 connect() failed (111: Connection refused) while connecting to upstream,
client: <MY EXTERNAL IP>, server: git.*, request: "GET / HTTP/1.1",
upstream: "http://172.18.0.3:9080/", host: "git.example.com"

TBH, I'm a bit stumped here. Does anybody have any clue as to what could be going wrong?

 

Thanks 😎

8 hours ago, ElectricBadger said:

I'm trying to write a config file to reverse proxy to GitLab-CE, but I can't get it to connect. […] I just get 502 Bad Gateway when I try to connect. […] TBH, I'm a bit stumped here. Does anybody have any clue as to what could be going wrong?

Uppercase doesn't work in dns hostnames in nginx. 

 

Change GitLab-CE to all lowercase or use the ip instead
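In this setup, that change would look something like the following sketch (assuming the container is renamed to all-lowercase gitlab, so that the hostname nginx resolves matches the container name):

```nginx
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    # hostname must be lowercase and must match the container name
    set $upstream_gitlab gitlab;
    proxy_pass http://$upstream_gitlab:9080;
}
```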

10 hours ago, aptalca said:

Uppercase doesn't work in dns hostnames in nginx. 

 

Change GitLab-CE to all lowercase or use the ip instead

Thanks — I'd already noticed that after I posted, though. I changed "GitLab-CE" to "gitlab" and edited the name of the Docker image to "gitlab", but it didn't fix it.

 

Changing the 80 in

nginx['listen_port'] = 80;

to 9080, which is the port that GitLab's actually listening on, however, did. Can't believe I didn't spot that 🙄
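For anyone hitting the same thing later: putting the pieces together, the omnibus config would end up roughly like this — a sketch, with example.com standing in for the real domain; the key point is that listen_port matches the port the proxy targets:

```shell
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com'; \
nginx['listen_port'] = 9080; \
nginx['listen_https'] = false; \
gitlab_rails['gitlab_ssh_host']='192.168.69.99';"
```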

 

Thanks for your help!


I would like to thank linuxserver for making such an excellent container. I have been using it with sonarr, radarr, nextcloud and transmission for weeks and it is so good.

 

Thing is, I am now trying to switch from transmission to deluge (by linuxserver too), and the default network type is host. If I set it to custom, I cannot access the app. May I know what I should do? Thanks.


SOLVED - I Think.

 

So I need some help, please.

 

When it comes to containers I am fairly new. I have followed SpaceInvaderOne's great video for getting this set up, but I need some help: I want to use this with Booksonic and don't have the first clue where exactly to make the changes to allow access, or what exactly I need to put in the booksonic.subdomain.conf or the booksonic.subfolder.conf (they don't exist, so I would be starting from scratch). If someone can point me to a guide or example that would be great.
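For reference, a booksonic.subdomain.conf following the pattern of the other subdomain confs in this thread might look roughly like this — purely a sketch, assuming the container is named booksonic and listens on Booksonic's default port 4040:

```nginx
server {
    listen 443 ssl;

    server_name booksonic.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # assumes the container is named "booksonic" (lowercase)
        set $upstream_booksonic booksonic;
        proxy_pass http://$upstream_booksonic:4040;
    }
}
```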

 

Thanks

Edited by MMW


I have been noticing that letsencrypt often does not stop properly: it thinks it has stopped, but unraid still says it is running.

The only way to fix it is to stop and start the docker system.

Anyone else seeing this?


Hi, first of all this is a fantastic docker and great work. 

 

Let me explain my situation. 

 

My unraid server has two NICs, each connected to a separate network with its own ISP, WAN IP, and subnet.

In the unraid network settings I have eth0 for WAN 1 and eth1 for WAN 2.

 

I have installed two instances of this LE container, each fetching certificates for one of my two custom domains.

 

Each custom domain points to its respective WAN IP.

 

Both LE dockers are set to bridge mode. 

 

My issue is that when I browse custom domain 1, it takes me to the right destination and the reverse proxy works.

 

Custom domain 2 does not take me to the destination, and I get a timed-out message.

 

When I swap WAN 1 from eth0 to eth1 and vice versa in the unraid network settings, my WAN 2 IP works and WAN 1 no longer works.

 

Is it possible that the LE docker is hard-coded to only receive incoming connections on eth0 and not eth1?

 

 

 

3 hours ago, hus2020 said:

Hi, first of all this is a fantastic docker and great work. […] My unraid server has two NICs, each connected to a separate network with its own ISP, WAN IP, and subnet. […] Is it possible that the LE docker is hard-coded to only receive incoming connections on eth0 and not eth1?

If I understand you correctly, you are using the same bridge for both containers? Then it will only use the network card associated with that bridge.

I don't remember the unraid network setup off the top of my head, but you might need to create an extra bridge for eth1.

2 minutes ago, saarg said:

 

If I understand you correctly, you are using the same bridge for both containers? 

Then it will only use the network card associated with the bridge. 

I don't remember the unraid network setup in my head, but you might need to create one extra bridge for eth1. 

Technically, unraid takes eth0 as the primary interface and creates a bridge from it.

 

In my docker settings, however, I see that unraid automatically creates two bridges for the two separate NICs. For NIC 2, though, unraid does not detect the gateway IP and shows it blank. That is why whichever WAN I put on eth0 works, while the WAN I tie to eth1 never does.

 

I am not sure if this can be fixed by adding a static route between the two LAN gateways in unraid. Can anyone advise?

 

(screenshot of the unraid Docker network settings attached)


So I had this working until recently, but I moved, and it seems that Cox blocks port 80 nowadays. I had been using DuckDNS for my domain with HTTP validation. That no longer works at all; my certs have expired and now they can't renew. So I decided to try DNS validation... Cloudflare doesn't seem to work with DuckDNS, since my domain isn't a domain I've registered myself.

 

Is my only option to buy a domain name? Is there any way to make the DNS validation work with DuckDNS domains?

 

 


Hi.  I am trying to get Collabora working in nextcloud.  In the letsencrypt settings, I put in two subdomains, aaa,bbb.  I can see it says "Sub-domains processed are: -d aaa.duckdns.org -d bbb.duckdns.org"

 

I can also see 

 

The following certs are not due for renewal yet:
/etc/letsencrypt/live/aaa.duckdns.org/fullchain.pem expires on 2018-12-24 (skipped)
No renewals were attempted.
No hooks were run.

 

so for aaa, I get it to work properly.  I don't see anything like that for bbb.  

 

When I do open the log for letsencrypt, I see 

 

nginx: [emerg] BIO_new_file("/path/to/certficate") failed (SSL: error:02FFF002:system library:func(4095):No such file or directory:fopen('/path/to/certficate', 'r') error:20FFF080:BIO routines:CRYPTO_internal:no such file)

 

I have in my Collabora config file the following:

 

 

server {
    listen       443 ssl;
    server_name  bbb.*;
    
    include /config/nginx/ssl.conf;

    #ssl_certificate /path/to/certficate;
    #ssl_certificate_key /path/to/key;
    
    # static files
    location ^~ /loleaflet {
        proxy_pass https://192.168.1.98:9980;
        proxy_set_header Host $http_host;
    }

    # WOPI discovery URL
    location ^~ /hosting/discovery {
        proxy_pass https://192.168.1.98:9980;
        proxy_set_header Host $http_host;
    }

   # main websocket
   location ~ ^/lool/(.*)/ws$ {
       proxy_pass https://192.168.1.98:9980;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "Upgrade";
       proxy_set_header Host $http_host;
       proxy_read_timeout 36000s;
   }
   
   # download, presentation and image upload
   location ~ ^/lool {
       proxy_pass https://192.168.1.98:9980;
       proxy_set_header Host $http_host;
   }
   
   # Admin Console websocket
   location ^~ /lool/adminws {
       proxy_pass https://192.168.1.98:9980;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "Upgrade";
       proxy_set_header Host $http_host;
       proxy_read_timeout 36000s;
   }
}

 

Where 192.168.1.98 is my Unraid box's IP address.  I don't know what to put in:

 

    #ssl_certificate /path/to/certficate;
    #ssl_certificate_key /path/to/key;

 

As I cannot find the pem file in /etc/letsencrypt/live for bbb.  I only see a pem for aaa.

 

Can anyone share how?

 

Edited by jang430


@jang430, You need to click on "Theme" on the bottom left of the forum and change to Unraid Light, then edit your post so it shows up in both themes. Right now it's completely unreadable for those using the light theme.

12 hours ago, jang430 said:

Hi.  I am trying to get Collabora working in nextcloud.  In the settings of letsencrypt, I put in 2 subdomains, aaa,bbb. […] I cannot find the pem file in /etc/letsencrypt/live for bbb; I only see a pem for aaa.  Can anyone share how?

You shouldn't be changing certificate locations. The container handles them automatically through symlinks. Leave them as default.
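For context, ssl.conf in this image already carries the certificate directives, pointing at symlinks the container keeps up to date — something along these lines (illustrative sketch; exact contents vary by image version):

```nginx
# lines like these already live in /config/nginx/ssl.conf,
# so a proxy conf only needs "include /config/nginx/ssl.conf;"
ssl_certificate /config/keys/letsencrypt/fullchain.pem;
ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
```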

14 hours ago, smashingtool said:

So I had this working until recently, but I moved and it seems that Cox blocks port 80 nowadays. […] Is my only option to buy a domain name? Is there any way to make the DNS validation work with DuckDNS domains?

Currently, no support for dns validation with duckdns. 

 

I recommend looking on namecheap, getting one with the lowest annual renewal (there were some $4/yr ones last I checked) and setting it up with cloudflare
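Once you have a domain on cloudflare, switching the container to DNS validation is mostly environment variables — a sketch of the relevant settings (example.com and the wildcard setting are placeholders for your own values):

```shell
--env URL=example.com \
--env SUBDOMAINS=wildcard \
--env VALIDATION=dns \
--env DNSPLUGIN=cloudflare
# then add your cloudflare API credentials to /config/dns-conf/cloudflare.ini
```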

35 minutes ago, aptalca said:

You shouldn't be changing certificate locations. Container handles them automatically through symlinks. Leave them as default. 

Hi.  I don't understand what you meant by this.  I actually don't see any certificates.  I just copied and pasted the lines above from another site and named the file Collabora.something.conf.

 

I don't understand what DNS validation is, but do you mean I have to get a cheap domain to use?  Is that the reason the config above isn't generating any .pem file?

 

If I did indeed get a domain from namecheap, is there anything I should change in the lines above apart from server_name?

Edited by jang430

5 hours ago, jang430 said:

Hi.  I don't understand what you meant by this.  I actually don't see any certificates. […] If I did indeed get a domain from namecheap, is there anything I should change in the lines above apart from server_name?

 

Where does @aptalca say you need to get a new domain? 


Maybe this is the correct place to ask for help, I am unsure, but here goes. I'm trying to set up nextcloud using SpaceInvaderOne's reverse proxy guide, and I've followed the instructions to a T, or as closely as I could. The problem I'm facing is that with sonarr, nextcloud, pretty much anything, all I get is the letsencrypt landing page that says welcome to our server. There are no errors in the logs and the status is server ready. Can anyone tell me where I messed up?

46 minutes ago, Sinister said:

Maybe this is the correct place to ask for help […] all I get is the letsencrypt landing page that says welcome to our server. […] Can anyone tell me where I messed up?

Probably better to ask @SpaceInvaderOne as it's his guide you're following and it's impossible to tell anyone where they've gone wrong without a lot more detail.

Edited by CHBMB

22 hours ago, CHBMB said:

Probably better to ask @SpaceInvaderOne as it's his guide you're following and it's impossible to tell anyone where they've gone wrong without a lot more detail.

I'll definitely go ahead and do that. I just figured an expert would be able to pinpoint what's wrong, as I've seen this mentioned a few times. For now I'll post my config files.

This is my nextcloud config.php:

<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'datadirectory' => '/data',
  'instanceid' => 'ocvagavyiudq',
  'passwordsalt' => '66cAD3+nO/lNspfvs9urUMR3Y/n3Am',
  'secret' => 'qXhHGYee7Dftjk3h/6a/UPchu1pWJBUgHNZm/iLiIExSwZjJ',
  'trusted_domains' =>
  array (
    0 => '192.168.1.113:444',
    1 => 'xxxxx.duckdns.org',
  ),
  'overwrite.cli.url' => 'https://xxxxxx.duckdns.org/',
  'overwritehost' => 'xxxxx.duckdns.org',
  'overwriteprotocol' => 'https',
  'dbtype' => 'mysql',
  'version' => '14.0.1.1',
  'dbname' => 'nextcloud',
  'dbhost' => '192.168.1.113:3306',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'xxxxxxxx',
  'dbpassword' => 'xxxxxxxxx',
  'installed' => true,
);

 

 

This is my proxy-confs file:

 

# make sure that your dns has a cname set for nextcloud
# assuming this container is called "letsencrypt", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#  'trusted_proxies' => ['letsencrypt'],
#  'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
#  'overwritehost' => 'nextcloud.your-domain.com',
#  'overwriteprotocol' => 'https',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#  array (
#    0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#    1 => 'nextcloud.your-domain.com',
#  ),

server {
    listen 443 ssl;

    server_name xxxxx.duckdns.org*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        proxy_max_temp_file_size 2048m;
        proxy_pass https://$upstream_nextcloud:443;
    }
}

 

Edited by Sinister
sensitive information

4 hours ago, saarg said:

 

Where does @aptalca say you need to get a new domain? 

@saarg, @aptalca suggested I get a cheap domain for this purpose.  I don't mind, as he mentioned there are renewals as cheap as 4 USD.  The only problem is that even with a domain name like that, I can't find any step-by-step instructions on how to get collabora working.

 

@Sinister, I've tried asking @SpaceInvaderOne, but didn't get any reply.  I hope he sees this post.

 

Thanks guys, hope someone can help me move forward.

1 minute ago, jang430 said:

@saarg, @aptalca suggested I get a cheap domain for this purpose. […] Thanks guys, hope someone can help me move forward.

The reply about dns validation was to someone else, not you.

 

ssl.conf already defines the certs, so you don't have to do that manually. I don't know where you got those two lines from.


I see.  Sorry.  

 

Of these lines, these were the originals in the Collabora instructions I followed.  I commented them out since I didn't know where to point them.

 

    #ssl_certificate /path/to/certficate;
    #ssl_certificate_key /path/to/key;

 

Instead, I went into my nextcloud conf file, copied "include /config/nginx/ssl.conf;" and put that in.

 

So I should be using the original 

 

#ssl_certificate /path/to/certficate;
#ssl_certificate_key /path/to/key;   ?

 

But I don't know where to point it.

 

 

 

 

Edited by jang430

8 minutes ago, jang430 said:

@saarg, @aptalca suggested I get a cheap domain for this purpose. […] Thanks guys, hope someone can help me move forward.

Same, I've tried asking and been at this for 16 hours. I've watched the video back and forth many times. There's definitely something missing here that only a super advanced person would know.

