[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



Greetings! I've recently set up Unraid on a new (to me) server and I'm looking to migrate things like Ombi, Sonarr, Radarr and others to Dockers. I also have Organizr and all the other apps set up as subdomains through a reverse proxy on an nginx web server running on a Pi 3. I have a wildcard SSL cert set up through Let's Encrypt that I manually renew every 90 days. I'm looking to set up this Docker to get my website migrated to the new server. Will I have any issues getting new certs through here, given that I already have the wildcard cert?

 

Thanks!

Link to comment
3 hours ago, slim2169 said:

Greetings! I've recently set up Unraid on a new (to me) server and I'm looking to migrate things like Ombi, Sonarr, Radarr and others to Dockers. I also have Organizr and all the other apps set up as subdomains through a reverse proxy on an nginx web server running on a Pi 3. I have a wildcard SSL cert set up through Let's Encrypt that I manually renew every 90 days. I'm looking to set up this Docker to get my website migrated to the new server. Will I have any issues getting new certs through here, given that I already have the wildcard cert?

 

Thanks!

No, Let's Encrypt allows multiple certs for the same domains (within some rate limits)

Link to comment

Hello,

I have created a .htaccess in www and an .htpasswd at "/nginx/.htpasswd", but it still lets me in without asking for a username and password. What am I doing wrong? Thank you!

 

.htaccess:

AuthName "Restricted Area"
AuthType Basic
AuthUserFile /mnt/disks/Samsung_SSD_860_EVO_1TB_6Y3105056W/appdata/letsencrypt/nginx/.htpasswd
require valid-user

 

Edited by milfer322
Link to comment
6 hours ago, milfer322 said:

Hello,

I have created a .htaccess in www and an .htpasswd at "/nginx/.htpasswd", but it still lets me in without asking for a username and password. What am I doing wrong? Thank you!

 

.htaccess:


AuthName "Restricted Area"
AuthType Basic
AuthUserFile /mnt/disks/Samsung_SSD_860_EVO_1TB_6Y3105056W/appdata/letsencrypt/nginx/.htpasswd
require valid-user

 

Read the instructions in the readme on how to use htpasswd
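As a sketch of what the readme's htpasswd step amounts to — the container name letsencrypt, the username myuser, and the password below are placeholders, and the readme remains the authoritative reference:

```shell
#!/bin/sh
# Option 1: let the container's bundled htpasswd utility create the file
# (run on the Docker host; container name is a placeholder):
#   docker exec -it letsencrypt htpasswd -c /config/nginx/.htpasswd myuser
#
# Option 2: generate an entry by hand with openssl. The -apr1 scheme is
# the Apache-MD5 format that nginx's auth_basic understands.
make_htpasswd_entry() {
    # $1 = username, $2 = password, $3 = 8-character salt
    printf '%s:%s\n' "$1" "$(openssl passwd -apr1 -salt "$3" "$2")"
}

# Print an entry for placeholder credentials; append it to the htpasswd file.
make_htpasswd_entry myuser secret abcdefgh
```

Either way, the resulting file must end up at /config/nginx/.htpasswd inside the container (i.e. under appdata/letsencrypt/nginx/ on the host).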

Link to comment

Hello, for the life of me I can't get my calibre-web to work externally with LetsEncrypt + DuckDNS + nginx.

 

Does anyone know why?

 

1. I right clicked the LetsEncrypt app and clicked edit and added my new duckdns subdomain (let's say calibrewebDNS) to the subdomains.

2. I right clicked the DuckDNS app and clicked edit and added my new duckdns subdomain (let's say calibrewebDNS) to the subdomains.

3. I then navigated to appdata > letsencrypt > nginx > proxy-confs and made a copy of calibre-web.subdomain.conf.sample and removed .sample from the end of the file.

4. Finally I edited the file with Notepad and replaced "server_name calibre-web.*;" with "server_name calibrewebDNS.*;"

 

 

I can't access the webui from calibrewebdns.duckdns.org.

 

 

 

Link to comment

I have the Nextcloud Docker up and running and have used this container to get certificates, which worked absolutely painlessly. Thanks!

Since Nextcloud is using ports 80 and 443, I cannot have this container running all the time to renew certificates before they expire. I wrote a little script that will be executed weekly to hopefully renew the certificates automatically:

 

#!/bin/bash

# Stop nextcloud container to free up ports 80 and 443
docker stop --time=30 nextcloud

# Start letsencrypt container
docker start letsencrypt
sleep 1m
docker stop --time=30 letsencrypt

# Remove old backup certs
# (-f so the first run does not fail when no backups exist yet)
rm -f /mnt/user/appdata/nextcloud/keys/cert.crt_old
rm -f /mnt/user/appdata/nextcloud/keys/cert.key_old

# Backup current certs
mv /mnt/user/appdata/nextcloud/keys/cert.crt /mnt/user/appdata/nextcloud/keys/cert.crt_old
mv /mnt/user/appdata/nextcloud/keys/cert.key /mnt/user/appdata/nextcloud/keys/cert.key_old

# Copy new certs
cp /mnt/user/appdata/letsencrypt/keys/letsencrypt/cert.pem /mnt/user/appdata/nextcloud/keys/cert.crt
cp /mnt/user/appdata/letsencrypt/keys/letsencrypt/privkey.pem /mnt/user/appdata/nextcloud/keys/cert.key

# Start nextcloud container
docker start nextcloud

Will this make the letsencrypt container recognise the certificates and renew them if they are about to expire? How long before they expire will they be renewed?

 

Thanks again!

Link to comment
50 minutes ago, jesta said:

I have the Nextcloud Docker up and running and have used this container to get certificates, which worked absolutely painlessly. Thanks!

Since Nextcloud is using ports 80 and 443, I cannot have this container running all the time to renew certificates before they expire. I wrote a little script that will be executed weekly to hopefully renew the certificates automatically:

 


#!/bin/bash

# Stop nextcloud container to free up ports 80 and 443
docker stop --time=30 nextcloud

# Start letsencrypt container
docker start letsencrypt
sleep 1m
docker stop --time=30 letsencrypt

# Remove old backup certs
# (-f so the first run does not fail when no backups exist yet)
rm -f /mnt/user/appdata/nextcloud/keys/cert.crt_old
rm -f /mnt/user/appdata/nextcloud/keys/cert.key_old

# Backup current certs
mv /mnt/user/appdata/nextcloud/keys/cert.crt /mnt/user/appdata/nextcloud/keys/cert.crt_old
mv /mnt/user/appdata/nextcloud/keys/cert.key /mnt/user/appdata/nextcloud/keys/cert.key_old

# Copy new certs
cp /mnt/user/appdata/letsencrypt/keys/letsencrypt/cert.pem /mnt/user/appdata/nextcloud/keys/cert.crt
cp /mnt/user/appdata/letsencrypt/keys/letsencrypt/privkey.pem /mnt/user/appdata/nextcloud/keys/cert.key

# Start nextcloud container
docker start nextcloud

Will this make the letsencrypt container recognise the certificates and renew them if they are about to expire? How long before they expire will they be renewed?

 

Thanks again!

Isn't it easier to just reverse proxy Nextcloud through the letsencrypt container? That way you can have both running at the same time, and you don't have to copy any certs or expose ports for Nextcloud.
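For anyone following along, the bundled proxy-confs make that approach straightforward: copy nextcloud.subdomain.conf.sample to nextcloud.subdomain.conf in /config/nginx/proxy-confs and restart the container. A trimmed sketch of what such a conf does — assuming the Nextcloud container is named nextcloud and both containers share a custom Docker network; the real sample ships extra headers and is the one to actually use:

```nginx
server {
    listen 443 ssl;
    server_name nextcloud.*;

    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```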

Link to comment
7 minutes ago, jesta said:

Maybe, but I've got it set up like this, the VPN tunnel was available for other reasons, and I'd rather not run another Docker on my VPS. It is only a single core with 2 GB of RAM... Any comments on whether my script will do what I want it to?

If you are not using unraid, this is not the right place for support. Use our discourse or discord.

 

Letsencrypt needs to run if the cert is to be renewed. The check is scheduled to run at night, at around 2 a.m.

 

Link to comment
1 minute ago, saarg said:

If you are not using unraid, this is not the right place for support. Use our discourse or discord.

 

Letsencrypt needs to run if the cert is to be renewed. The check is scheduled to run at night, at around 2 a.m.

 

I am running Unraid.

I know letsencrypt needs to be running to renew.

The question is: will starting up the container that previously ran fine and got me certificates issued renew those certificates if they are about to expire? And how long before they expire will it do so?

Link to comment
13 hours ago, milfer322 said:

I read it.

[screenshot of the readme]

But I don't see anything about .htaccess.

I generated the htpasswd just as the readme file indicates.

That's the point I'm trying to make. We don't use .htaccess; that's an Apache thing. Just create the htpasswd file as it says and uncomment the relevant auth lines in the confs.
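To make the "uncomment the relevant auth lines" part concrete, a sketch (the exact lines and their position vary per conf, so check your own file):

```nginx
# In the location block of /config/nginx/proxy-confs/<app>.subdomain.conf,
# the auth lines ship commented out:
#
#    #auth_basic "Restricted";
#    #auth_basic_user_file /config/nginx/.htpasswd;
#
# Removing the leading # enables basic auth for that app:
auth_basic "Restricted";
auth_basic_user_file /config/nginx/.htpasswd;
```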

Link to comment
11 hours ago, WhazZzZzup17 said:

Hello, for the life of me I can't get my calibre-web to work externally with LetsEncrypt + DuckDNS + nginx.

 

Does anyone know why?

 

1. I right clicked the LetsEncrypt app and clicked edit and added my new duckdns subdomain (let's say calibrewebDNS) to the subdomains.

2. I right clicked the DuckDNS app and clicked edit and added my new duckdns subdomain (let's say calibrewebDNS) to the subdomains.

3. I then navigated to appdata > letsencrypt > nginx > proxy-confs and made a copy of calibre-web.subdomain.conf.sample and removed .sample from the end of the file.

4. Finally I edited the file with Notepad and replaced "server_name calibre-web.*;" with "server_name calibrewebDNS.*;"

 

 

I can't access the webui from calibrewebdns.duckdns.org.

 

 

 

Why don't you take one step at a time? Start with reading the docs, because they tell you to put the top duckdns address you have control over as the URL, which would include your subdomain.

 

Then you can enter whatever you like into the subdomains field and they'll cover your sub-subdomains.

 

Then check the logs to make sure the cert was created successfully.

 

Then check to make sure your main homepage is working (or try the www version if you did wildcard).

 

Only then attempt the reverse proxy.

 

Don't try to set up 5 things at once and then get confused because it didn't work.

Link to comment
8 hours ago, jesta said:

I am running Unraid.

I know letsencrypt needs to be running to renew.

The question is: will starting up the container that previously ran fine and got me certificates issued renew those certificates if they are about to expire? And how long before they expire will it do so?

You said you were running it on a single-core VPS, and I don't see how you can run Unraid on a VPS, as you need a USB drive for Unraid's license.

 

I think letsencrypt tries to renew when it's less than 30 days before the cert expires.

Link to comment
6 hours ago, saarg said:

You said you were running it on a single-core VPS, and I don't see how you can run Unraid on a VPS, as you need a USB drive for Unraid's license.

 

I think letsencrypt tries to renew when it's less than 30 days before the cert expires.

Every night at 2:08.

 

On container start it only tries to renew if the cert is expired or expiring within 24 hours.
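If in doubt, the remaining lifetime of a cert can be checked by hand — a sketch assuming openssl and GNU date are available on the host; the path shown is the container's usual cert location under appdata and may differ on your system:

```shell
#!/bin/sh
# Print the number of whole days until the given certificate expires.
days_left() {
    # openssl prints e.g. "notAfter=May 30 12:00:00 2025 GMT"; keep the date part.
    end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
    echo $(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
}

# Example (adjust the appdata path to your setup):
# days_left /mnt/user/appdata/letsencrypt/keys/letsencrypt/fullchain.pem
```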

Link to comment
20 hours ago, aptalca said:

That's the point I'm trying to make. We don't use .htaccess; that's an Apache thing. Just create the htpasswd file as it says and uncomment the relevant auth lines in the confs.

Which files do I need to edit for the auth lines? I don't think the readme explains all the necessary steps :( 
Thanks for everything.

Link to comment
5 hours ago, milfer322 said:

Which files do I need to edit for the auth lines? I don't think the readme explains all the necessary steps :( 
Thanks for everything.

Did you look at the config files?

https://github.com/linuxserver/docker-letsencrypt/blob/master/root/defaults/default#L43

 

https://github.com/linuxserver/reverse-proxy-confs/blob/master/bazarr.subdomain.conf.sample#L17

Edited by aptalca
Link to comment

Hey all, I'm in need of some assistance using this docker. Previously I was able to get it to work pretty easily (it's awesome, thanks!), but now a change in my setup broke it and my understanding of this is limited.

 

I own a domain (let's say mydomain.eu) on NameCheap.

Previously I had a static IP and I wanted to access, for example, netdata.mydomain.eu. In Namecheap, I created an A record pointing from mydomain.eu (* Host) to my static IP, and all worked flawlessly with netdata (and other services) in subdomains.

 

Now I changed my ISP and I do not have a static IP anymore. I want things to work identical as before. As a dynamic DNS, I can use my ASUS router's dynamic DNS settings, which gives access to myrouter.asuscomm.com.

Thus, I erased the A record, and created a CNAME record from mydomain.eu (* Host) to myrouter.asuscomm.com

 

Everything seems to be fine, I can ping and see the traffic on the 443 port, but certificates can't be validated. I'm getting errors like this one:

 

Domain: netdata.mydomain.eu
Type: connection
Detail: Fetching
http://netdata.mydomain.eu/.well-known/acme-challenge/long_code_here:
Timeout during connect (likely firewall problem)

 

How can I fix this? I'm guessing that using a CNAME record is not the proper way to do it and/or I need a DNS plugin?

 

Thanks in advance for any help!

Link to comment

I’ve tinkered with "satisfy any" and "satisfy all" in various reverse proxy conf files of the linuxserver/letsencrypt docker to understand how they work. What I’d like to implement requires a bit more complexity. Specifically, I’d like to configure the reverse proxy for specific applications to:

 

Allow 192.168.1.0/24 (private LAN) without NGINX Basic Auth

Allow 172.27.224.0/20 (OpenVPN Clients) without NGINX Basic Auth

Allow 192.168.2.0/24 (Ubiquiti Guest Wi-Fi with 24 hour Vouchers) with NGINX Basic Auth

Deny Internet


This is for a residential network.

 

I am aware that many applications can be configured internally to require/bypass authentication. The intent is to disable all application-specific authentication and use the NGINX authentication so it can be bypassed/required based upon the source address of the request.

Initially, I thought the following might be conceptually correct, but sources (NGINX: If Is Evil) indicate that using "if" in a location block is "evil" and that it can be unpredictable/bad if the conditional's action is anything other than a "return" or "rewrite". In the following, the if clause includes auth_basic and auth_basic_user_file. Note that the offending code is commented out in case it is destructive and someone copies/pastes without reading.

 

Is this the proper conceptual method of accomplishing the goal? Is there a way to do this without violating the "If Is Evil" mantra?

 

Note: Just testing this with sonarr since I know the unmodified sonarr conf already works.

 

# Sonarr reverse proxy config for NGINX
# File location: \\unraid\appdata\letsencrypt\nginx\proxy-confs\sonarr.subdomain.conf
# Modified from sonarr.subdomain.conf.sample
# Make sure that your dns has a cname set for sonarr and that your sonarr container is not using a base url

# set the variable allowed_ips to 1 if the client ip is in an allowed range
# otherwise set the variable to 0. Used in conditional, below, to allow/deny access.
# Allow access from private LAN, OpenVPN clients and Guest Wi-Fi. Deny all others.

geo $allowed_ips {
    default 0;
    192.168.1.0/24 1;
    192.168.2.0/24 1;
    172.27.224.0/20 1;
}

# set the variable auth_ips to 1 if the client is in a range requiring Auth
# otherwise set the variable to 0. Used in conditional, below, to require/bypass authentication.
# Require authentication from Guest Wi-Fi (192.168.2.0/24); Bypass authentication for all others.
# Note: Only gets applied to requests that have already passed the network exclusion defined above.

geo $auth_ips {
    default 0;
    192.168.2.0/24 1;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name sonarr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # if allowed_ips is 0, then the login is from an IP address that is excluded, so return 403 Forbidden
        if ( $allowed_ips = 0 ) {
            return 403;
        }

        # NOTE: Not tested! Do NOT use the following pending review by someone far more knowledgeable.
        # Violates recommended use of IF in an NGINX location block as it results in other than
        # return or rewrite. Reference: https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
        # The following code is commented out in case someone tries to copy/paste from forum without reading.
        # if auth_ips is 1, then the login is from an IP address that requires authentication
        #if ( $auth_ips = 1 ) {
        #    auth_basic "Restricted";
        #    auth_basic_user_file /config/nginx/.htpasswd;
        #}

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app sonarr;
        set $upstream_port 8989;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/sonarr)?/api {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app sonarr;
        set $upstream_port 8989;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}
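One way to sidestep "if is evil" entirely, for what it's worth: newer nginx (1.21.0+, per the nginx docs — worth checking against the version bundled in the image) allows variables in auth_basic, so the geo result can be fed through a map, with the special value off disabling authentication for trusted networks. An untested sketch building on the geo blocks above:

```nginx
# Untested sketch: map the guest-network flag to an auth_basic realm.
# The special value "off" disables basic auth for that request.
map $auth_ips $auth_realm {
    1       "Restricted";
    default off;
}

# Then, inside the location / block, no "if" is needed for the auth part:
#
#     if ( $allowed_ips = 0 ) {
#         return 403;    # plain return inside "if" is the documented-safe case
#     }
#
#     auth_basic $auth_realm;                        # requires nginx >= 1.21.0
#     auth_basic_user_file /config/nginx/.htpasswd;
```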

 

Edited by splerman
Link to comment

Any chance anyone can post their working Dokuwiki nginx configuration, as well as any changes to settings they made within Dokuwiki itself? I've been trying various things without any success. There have been a few comments in this thread on the subject, but nobody ever confirmed what the final configuration actually was...

 

edit: OK, finally figured out what was wrong. In the nginx configuration file, I was updating the port number to match the Docker container when it needs to remain the app's port number. Also, zero changes were required within Dokuwiki to make it work. Dumb on my part, I know; I'm still learning...

Edited by bigmak
Figured out issue
Link to comment

Hey guys and gals,

This might be an obvious thing, but I am a noob to the more advanced stuff on here.

I am trying to access my Nextcloud and my server via the internet, but I am unable to do so. (I would also like to run a WordPress website on it at some point.)
I have followed SpaceInvader One's guides, but I have had to mix and match between them since my ISP uses port 80. That means I have used his guide on how to set up a reverse proxy and then overwritten everything that he mentions in his DNS-validation video.
I have my own domain.

But no matter what I do, it simply says "This site can't be reached".

Link to comment

All,

 

Any help you all can provide would be greatly appreciated. I am stuck in a “less than desirable” network layout and have been beating my head against this wall for the past few days and I am at a loss. Apologies in advance for the book.

 

Background: I have started experimenting with the letsencrypt docker as a reverse proxy to access services externally and so far, external services are working. I purchased my own domain name and have that CNAME point over to DuckDNS. I am using a pfSense VM on Unraid as my router and configured everything as Spaceinvader One recommends.

Unfortunately, I am in an apartment with a roommate who refuses to let his devices fall under my network because he does not want a chance of his games being disrupted. So, I am currently forced to have my pfSense router double NAT underneath his Spectrum (ISP) provided router (I know that is terrible and it does pain me to say it). I was able to place my pfSense router in the DMZ on his router to at least get external services working (i.e. nextcloud, etc.).

However, even though pfSense supports NAT reflection, the ISP router does not. So, I cannot access the devices through their domain name (i.e. nextcloud.mydomain.com) and thus do not have https connections on the local network. I thought this would not be a big deal and I would use DNS host overrides in pfSense to do a split DNS; however, the pfSense host override does not allow DNS host assignments to IP and port (i.e. 192.168.1.5:443). It goes straight to port 80/443. The result is that anything I try to resolve on the server dumps me at the Unraid WebUI.

 

Objectives: Hopefully, that is enough background. My two objectives I cannot find answers on anywhere are:

1.       How should I work around this host override/NAT reflection issue? I am open to other ideas, but I was thinking of swapping the Unraid WebUI and letsencrypt proxy ports so it routes through the proxy, but then I can't find anywhere that says how to have letsencrypt make a cert and pass through the Unraid UI as a subdomain (i.e. UnraidUI.mydomain.com).

2.       Related: how can I have the letsencrypt reverse proxy provide valid domains and certificates to other devices and Dockers yet restrict them to only the local network? For example, I would like letsencrypt to provide a valid domain name and cert to my pfSense router residing on 192.168.1.1, so that it had a valid cert from letsencrypt but the subdomain 'pfsense.mydomain.com' was only accessible from the internal network.

I am open to any other solutions be they in docker, Unraid, or pfSense.

 

Thanks in advance for any help. This has been making my eyes bleed for days.

 

V/R

 

Revrto

Edited by Revrto
spelling errors
Link to comment

Hello everyone,

 

I am setting up Letsencrypt following SpaceInvaderOne's video tutorial.

 

I am having a hard time getting the validation process to pass successfully.

 

I own a domain name and my IP is static, so I did not enter "duckdns.org" in the container settings, since that would be useless; I entered my custom domain name instead.

Also, I have already created two subdomains which are pointing at my public static IP.

 

The HTTP and HTTPS ports I entered in the container template before installing are forwarded to my Unraid server's local static IP.

 

I should probably also mention that I find it weird that the Letsencrypt container is displayed in the Dashboard tab but not in the Docker tab...

 

Could you please give me a hint as to what to check or change to get this to work?

 

Thank you in advance.

 

Here are the logs :

Quote

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Let's Encrypt: https://letsencrypt.org/donate/

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Los_Angeles
URL=mydomain.net
SUBDOMAINS=firstsubdomain,secondsubdomain
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=somethingsomething@gmail.com
STAGING=

2048 bit DH parameters present
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d firstsubdomain.mydomain.net -d secondsubdomain.mydomain.net
E-mail address entered: somethingsomething@gmail.com
http validation is selected
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for firstsubdomain.mydomain.net
http-01 challenge for secondsubdomain.mydomain.net
Waiting for verification...
Challenge failed for domain firstsubdomain.mydomain.net
Challenge failed for domain secondsubdomain.mydomain.net
http-01 challenge for firstsubdomain.mydomain.net
http-01 challenge for secondsubdomain.mydomain.net
Cleaning up challenges
Some challenges have failed.

 

Edited by CiaoCiao
Link to comment
9 hours ago, Revrto said:

All,

 

Any help you all can provide would be greatly appreciated. I am stuck in a “less than desirable” network layout and have been beating my head against this wall for the past few days and I am at a loss. Apologies in advance for the book.

 

Background: I have started experimenting with the letsencrypt docker as a reverse proxy to access services externally and so far, external services are working. I purchased my own domain name and have that CNAME point over to DuckDNS. I am using a pfSense VM on Unraid as my router and configured everything as Spaceinvader One recommends.

Unfortunately, I am in an apartment with a roommate who refuses to let his devices fall under my network because he does not want a chance of his games being disrupted. So, I am currently forced to have my pfSense router double NAT underneath his Spectrum (ISP) provided router (I know that is terrible and it does pain me to say it). I was able to place my pfSense router in the DMZ on his router to at least get external services working (i.e. nextcloud, etc.).

However, even though pfSense supports NAT reflection, the ISP router does not. So, I cannot access the devices through their domain name (i.e. nextcloud.mydomain.com) and thus do not have https connections on the local network. I thought this would not be a big deal and I would use DNS host overrides in pfSense to do a split DNS; however, the pfSense host override does not allow DNS host assignments to IP and port (i.e. 192.168.1.5:443). It goes straight to port 80/443. The result is that anything I try to resolve on the server dumps me at the Unraid WebUI.

 

Objectives: Hopefully, that is enough background. My two objectives I cannot find answers on anywhere are:

1.       How should I work around this host override/NAT reflection issue? I am open to other ideas, but I was thinking of swapping the Unraid WebUI and letsencrypt proxy ports so it routes through the proxy, but then I can't find anywhere that says how to have letsencrypt make a cert and pass through the Unraid UI as a subdomain (i.e. UnraidUI.mydomain.com).

2.       Related: how can I have the letsencrypt reverse proxy provide valid domains and certificates to other devices and Dockers yet restrict them to only the local network? For example, I would like letsencrypt to provide a valid domain name and cert to my pfSense router residing on 192.168.1.1, so that it had a valid cert from letsencrypt but the subdomain 'pfsense.mydomain.com' was only accessible from the internal network.

I am open to any other solutions be they in docker, Unraid, or pfSense.

 

Thanks in advance for any help. This has been making my eyes bleed for days.

 

V/R

 

Revrto

Well, I couldn't get NAT reflection to work on pfSense even without double NAT, so maybe that's some consolation for you. I am also using split DNS. With that, we have no choice but to run letsencrypt on at least port 443. You'll have to change Unraid's HTTPS port to something else. I kept Unraid on port 80 for HTTP, so when I hit my addresses inside my LAN, I use the HTTPS endpoint and all is well.

Link to comment
