[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



4 minutes ago, sfnetwork said:

OMG I finally found the issue!!!

It's about Cloudflare CNAME records...

I had to disable the traffic going through Cloudflare:

[screenshot: Cloudflare DNS settings with proxying disabled]

 

Now EVERYTHING works perfectly...
Hope this can help someone else avoid losing so much time, lol

No, that's not correct. I have LE running with Cloudflare through their CDN network. Your configuration is not correct.

First, you have your docker on 180/443, and in pfSense you open up 80 and 443? That should be 80 forwarding to 180 and 443 to 443. But then in your Nextcloud config you hard-redirect to port 444.


So if I were you, I would walk through your config from the beginning; it seems like you skipped some steps. And for your LE validation you can use the cloudflare.ini file if you aren't using that already.
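For reference, DNS validation through Cloudflare reads its credentials from that cloudflare.ini file (in the linuxserver image it lives under /config/dns-conf/); a minimal sketch, where both values are placeholders for your own Cloudflare account details:

```ini
# certbot-dns-cloudflare credentials file -- both values below are placeholders
dns_cloudflare_email = [email protected]
dns_cloudflare_api_key = your-global-api-key-here
```

With VALIDATION=dns and DNSPLUGIN=cloudflare set on the container, certbot uses this file instead of port-80 HTTP validation, which is why a blocked port 80 stops mattering for cert renewal.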

1 minute ago, Kaizac said:

No, that's not correct. I have LE running with Cloudflare through their CDN network. Your configuration is not correct.

First, you have your docker on 180/443, and in pfSense you open up 80 and 443? That should be 80 forwarding to 180 and 443 to 443. But then in your Nextcloud config you hard-redirect to port 444.


So if I were you, I would walk through your config from the beginning; it seems like you skipped some steps. And for your LE validation you can use the cloudflare.ini file if you aren't using that already.

The 80/180 port mapping doesn't matter, since I validated my certificates through Cloudflare DNS (I just need to delete the NAT). Anyway, my port 80 is blocked by my ISP.
As for the CNAME records, that's really how it got working... ping was returning their IP, not mine (I get that's the point, but it didn't work with nginx)

1 minute ago, sfnetwork said:

The 80/180 port mapping doesn't matter, since I validated my certificates through Cloudflare DNS (I just need to delete the NAT). Anyway, my port 80 is blocked by my ISP.
As for the CNAME records, that's really how it got working... ping was returning their IP, not mine (I get that's the point, but it didn't work with nginx)

Does it still work when you put the CDN back on and the pings give you their IP back? You probably don't want to hear this, but the way you configured your subdomains, opening it all to the internet, is really asking for trouble. The advised procedure is to use a VPN (which you can easily set up since you are on pfSense) and then access your dockers like Sabnzbd and Radarr through it. Only dockers like Nextcloud should be opened to the internet. Please make sure you have an idea what you are doing, because right now it seems to me like you are just following some guides and not really understanding what is going on.

 

Also I wonder if you run into problems with your Nextcloud since you put MariaDB on bridge and your Nextcloud on proxynet. I expect them to have problems connecting, but maybe they work fine?

 

 

5 minutes ago, Kaizac said:

Does it still work when you put the CDN back on and the pings give you their IP back? You probably don't want to hear this, but the way you configured your subdomains, opening it all to the internet, is really asking for trouble. The advised procedure is to use a VPN (which you can easily set up since you are on pfSense) and then access your dockers like Sabnzbd and Radarr through it. Only dockers like Nextcloud should be opened to the internet. Please make sure you have an idea what you are doing, because right now it seems to me like you are just following some guides and not really understanding what is going on.

 

Also I wonder if you run into problems with your Nextcloud since you put MariaDB on bridge and your Nextcloud on proxynet. I expect them to have problems connecting, but maybe they work fine?

 

 

Thanks, I already have OpenVPN set up and it works great. I just wanted to test it out. But really good advice; I might leave only Nextcloud exposed like that and use the rest through the VPN.
For MariaDB, no issue; I presume Nextcloud communicates with it behind the scenes, directly via the bridge IP and port.

No, it doesn't work when I enable the Cloudflare proxy on the CNAME

Edited by sfnetwork
On 3/1/2019 at 3:03 PM, CorneliousJD said:

Ever since a recent update I keep getting this in my log over and over again. 

 

How can I make sure I fix this? 

 

2019-03-01 15:02:04,922 fail2ban.jailreader [20575]: ERROR No file(s) found for glob /fail2ban/loginLog.json
2019-03-01 15:02:04,922 fail2ban [20575]: ERROR Failed during configuration: Have not found any log file for organizr-auth jail

I am still trying to fix this. Creating the loginLog.json doesn't seem to fix it.

 

I would assume the container would create these two files when/where it needs them but that doesn't seem to be the case. 

1 minute ago, sfnetwork said:

Thanks, I already have OpenVPN set up and it works great. I just wanted to test it out. But really good advice; I might leave only Nextcloud exposed like that and use the rest through the VPN.
For MariaDB, no issue; I presume Nextcloud communicates with it behind the scenes, directly via the bridge IP and port.

No, it doesn't work when I enable the Cloudflare proxy on the CNAME

If you are happy with it, then it's cool. But if you prefer routing through their CDN, you should be able to get it to work. My set-up is identical to yours; I just configured things differently. I'm using VLANs and giving dockers their own IP. If you want to troubleshoot through them, let me/us know. If you're fine with the current state, then enjoy :).

1 minute ago, Kaizac said:

If you are happy with it, then it's cool. But if you prefer routing through their CDN, you should be able to get it to work. My set-up is identical to yours; I just configured things differently. I'm using VLANs and giving dockers their own IP. If you want to troubleshoot through them, let me/us know. If you're fine with the current state, then enjoy :).

Of course, I would be interested in routing through their CDN.
Could you describe your setup (using VLANs with dockers, and maybe how to get it working with a CNAME going through their CDN)?
Thank you BTW, I really appreciate it! A little new to this...

2 minutes ago, sfnetwork said:

Of course, I would be interested in routing through their CDN.
Could you describe your setup (using VLANs with dockers, and maybe how to get it working with a CNAME going through their CDN)?
Thank you BTW, I really appreciate it! A little new to this...

By the way, did you also allow the WAN to access your port forward? So not just creating a pfSense port forward, but also an associated firewall rule on your WAN interface?

 

I'm running a DuckDNS docker on my unraid box, but any other dynamic DNS service works to get your IP pushed to a DNS domain. That address I put in the CNAME alias in Cloudflare. So kaizac.duckdns.org is in the alias of every CNAME I want routed to my home address.

 

So I think you followed SpaceInvader's guide to get proxynet for your dockers. I'm not much of a fan of that construction, and I created nginx configs in the site-confs of LE. So it's up to you whether you want to make that change. But I think for testing purposes you can just put everything back on bridge and use your unraid-IP:port in your nginx configs, and it should work.

 

If you want to go my route:

Within pfSense I created a few VLANs. You don't have to do this, but I like to keep it clean. You can also just give the dockers an address on your local LAN IP subnet. With a VLAN you can use pfSense to block your dockers from your local IP subnet if you so desire. Then I also created that VLAN in the network interface settings of unraid.

 

After that is done, you give all the dockers that need to access other dockers, or need to be accessed from your LAN, an IP on your VLAN or LAN network, depending on whether you use a VLAN or not.

 

Make sure that when you give your LE docker its own IP, you also change the firewall rules in pfSense.

 

When the dockers have their own IPs, you have to change your nginx configs to point to the right IP and port.
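As a sketch of that last step, assuming you gave a container (say, Sonarr) a static IP of 192.168.20.10 on the VLAN; both the address and the port here are placeholders for your own values:

```nginx
location / {
    include /config/nginx/proxy.conf;
    # no Docker-DNS resolver needed any more: point nginx
    # straight at the static IP you assigned to the container
    proxy_pass http://192.168.20.10:8989;
}
```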

 

 

37 minutes ago, Kaizac said:

Does it still work when you put the CDN back on and the pings give you their IP back? You probably don't want to hear this, but the way you configured your subdomains, opening it all to the internet, is really asking for trouble. The advised procedure is to use a VPN (which you can easily set up since you are on pfSense) and then access your dockers like Sabnzbd and Radarr through it. Only dockers like Nextcloud should be opened to the internet. Please make sure you have an idea what you are doing, because right now it seems to me like you are just following some guides and not really understanding what is going on.

 

Also I wonder if you run into problems with your Nextcloud since you put MariaDB on bridge and your Nextcloud on proxynet. I expect them to have problems connecting, but maybe they work fine?

 

 

Exposing GUIs like Sab and Radarr is the exact reason for a reverse proxy. As long as proper auth is used (HTTP auth with fail2ban, as implemented in this image), it's perfectly fine.


OK, a little issue specifically with Sonarr when using nginx...
I run two dockers of Sonarr (one in English only and the other in French only).

[screenshot: the two Sonarr dockers]

I set up nginx for Sonarr using the sample, and this one works perfectly...

[screenshot: nginx proxy-conf files]
For the other, I created another conf and simply tried matching the docker name:

# make sure that your dns has a cname set for sonarr and that your sonarr container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name sonarr-fr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_sonarr sonarr;
        proxy_pass http://$upstream_sonarr:8989;
    }

    location ~ (/sonarr)?/api {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_sonarr sonarr-fr;
        proxy_pass http://$upstream_sonarr:8989;
   }
}
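One thing worth double-checking in the config above: the `location /` block still sets the upstream to `sonarr` (the English container), while only the `/api` location points at `sonarr-fr`. If this conf is meant to proxy the French instance, both locations would presumably need the sonarr-fr name:

```nginx
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    # use the French container's name here as well
    set $upstream_sonarr sonarr-fr;
    proxy_pass http://$upstream_sonarr:8989;
}
```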

The problem (again, specifically with the second one) is that when I open Sonarr using the HTTPS URL, I get this error...

[screenshot: Sonarr error message]

 

Since nginx seems to be causing that, I'm asking here, but it might be a Sonarr matter; not sure at all...

It still works fine when using the default LAN URL.

 

This is my docker with the issue:

[screenshot: the sonarr-fr docker settings]

Edited by sfnetwork

Hi, I have a problem with my extra domain not working! I have myprivatedomain.co.uk with subdomains for sonarr, radarr, deluge, nextcloud and heimdall all working fine, and I want mybusinessdomain.co.uk to work with the subdomains unifi and unms. UNMS has its own built-in letsencrypt support which isn't working either; would the letsencrypt docker somehow block it? If I go to https://unms.mybusinessdomain.co.uk I get the nginx landing page; if I add the 6443 port number to the domain name I get the UNMS log-in page, but unsecured! I get the same thing with https://unifi.mybusinessdomain.co.uk, which I'm trying to reverse proxy. The letsencrypt log file shows everything seems to be OK!

 

-------------------------------------
_ ()
| | ___ _ __
| | / __| | | / \
| | \__ \ | | | () |
|_| |___/ |_| \__/


Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/London
URL=MYPERSONALDOMAIN.co.uk
SUBDOMAINS=nextcloud,sonarr,radarr,deluge,tautulli,heimdall
EXTRA_DOMAINS=unifi.MYBUSINESSDOMAIN.co.uk
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
[email protected]
STAGING=

2048 bit DH parameters present
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d nextcloud.MYPERSONALDOMAIN.co.uk -d sonarr.MYPERSONALDOMAIN.co.uk -d radarr.MYPERSONALDOMAIN.co.uk -d deluge.MYPERSONALDOMAIN.co.uk -d tautulli.MYPERSONALDOMAIN.co.uk -d heimdall.MYPERSONALDOMAIN.co.uk
EXTRA_DOMAINS entered, processing
Extra domains processed are: -d unifi.MYBUSINESSDOMAIN.co.uk
E-mail address entered: [email protected]
http validation is selected
Certificate exists; parameters unchanged; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [warn] could not build optimal variables_hash, you should increase either variables_hash_max_size: 1024 or variables_hash_bucket_size: 64; ignoring variables_hash_bucket_size
Server ready

 

I have renamed the unifi subdomain.conf file to match the others, tried it as default, and tried adding my whole domain name under server_name:

 

# make sure that your dns has a cname set for unifi and that your unifi container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name unifi.MYBUSINESSDOMAIN.co.uk;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_unifi Unifi;
        proxy_pass https://$upstream_unifi:8443;
    }

    location /wss {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_unifi Unifi;
        proxy_pass https://$upstream_unifi:8443;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_ssl_verify off;
    }

}

My site.conf is untouched and at its default; is that where I'm going wrong? If so, what do I put in my site.conf? I've read the instructions over and over and am now stuck!

Cheers,

Tim

 

3 hours ago, MothyTim said:

Hi, I have a problem with my extra domain not working! I have myprivatedomain.co.uk with subdomains for sonarr, radarr, deluge, nextcloud and heimdall all working fine, and I want mybusinessdomain.co.uk to work with the subdomains unifi and unms. UNMS has its own built-in letsencrypt support which isn't working either; would the letsencrypt docker somehow block it? If I go to https://unms.mybusinessdomain.co.uk I get the nginx landing page; if I add the 6443 port number to the domain name I get the UNMS log-in page, but unsecured! I get the same thing with https://unifi.mybusinessdomain.co.uk, which I'm trying to reverse proxy. The letsencrypt log file shows everything seems to be OK!

 



No idea about the UNMS built-in letsencrypt, so I can't help you with https://unms.mybusinessdomain.co.uk. When you use the 6443 port, you are bypassing the letsencrypt container and connecting directly to unms, so no proper cert, only self-signed, hence the warning message.

 

With regards to the unifi.blah address: use the proxy conf for it, make sure you only have one active unifi conf, and set the server name to that full unifi domain URL. I also noticed that you changed your container name to "Unifi". That won't work, as nginx won't resolve names with uppercase letters. Make sure your unifi container is named "unifi".

 

Just keep the unifi proxy conf default except for the server name directive.

Edited by aptalca
1 hour ago, aptalca said:

No idea about the UNMS built-in letsencrypt, so I can't help you with https://unms.mybusinessdomain.co.uk. When you use the 6443 port, you are bypassing the letsencrypt container and connecting directly to unms, so no proper cert, only self-signed, hence the warning message.

 

With regards to the unifi.blah address: use the proxy conf for it, make sure you only have one active unifi conf, and set the server name to that full unifi domain URL. I also noticed that you changed your container name to "Unifi". That won't work, as nginx won't resolve names with uppercase letters. Make sure your unifi container is named "unifi".

 

Just keep the unifi proxy conf default except for the server name directive.

Thanks, that fixed Unifi; as simple as a capital letter! I knew it would be something daft like that! :)

UNMS is a puzzle though as the error in Chrome says

This server could not prove that it is unms.mybusinessdomain.co.uk; its security certificate is from nextcloud.mypersonaldomain.co.uk. This may be caused by a misconfiguration or an attacker intercepting your connection.

The only place that has both my personal domain and my business domain is the letsencrypt docker, so is it somehow interfering?

Cheers,

Tim


Ok, so I gave up with the UNMS built-in letsencrypt and had another go at reverse proxying it, and success! I renamed the unifi subdomain.conf, replaced all mentions of unifi with unms, corrected the port, and it seems to be working! I haven't tried connecting a device from outside my network yet, but that's for another day!

Thanks for your help!

Cheers,

Tim


How do I redirect HTTP to HTTPS? I am using the included ombi.subdomain.conf in proxy-confs.

 

Currently I have to write https://ombi.mysite.com to go to Ombi. I would like to be able to only have to write ombi.mysite.com in the browser to get there.

If I try this right now I just get an error saying "This site can’t be reached".

 

EDIT in case anyone wants to do this too:

Go to /mnt/user/appdata/nextcloud/letsencrypt/nginx/site-confs/default

Remove the "#" signs on lines 5-10. Now everything should redirect from HTTP to HTTPS.
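Once uncommented, those lines typically form a catch-all HTTP server block along these lines (the exact contents vary between versions of the image):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name _;
    # send every plain-HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}
```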

Edited by Limpeklimpe
Added solution
1 hour ago, Limpeklimpe said:

How do I redirect HTTP to HTTPS? I am using the included ombi.subdomain.conf in proxy-confs.

 

Currently I have to write https://ombi.mysite.com to go to Ombi. I would like to be able to only have to write ombi.mysite.com in the browser to get there.

If I try this right now I just get an error saying "This site can’t be reached".

 

 

First, confirm your ISP isn't blocking port 80. If it is, you can't reach that port from the WAN.

If not, redirect port 80 in your router's NAT settings to the HTTP port of the letsencrypt container.

Edited by sfnetwork
56 minutes ago, sfnetwork said:

First, confirm your ISP isn't blocking port 80. If it is, you can't reach that port from the WAN.

If not, redirect port 80 in your router's NAT settings to the HTTP port of the letsencrypt container.

Thanks for your reply.

 

According to my ISP, and other people also using the same ISP, no, they do not block port 80.

I checked port 80 using this http://www.portchecktool.com/ and it tells me "Connection refused".

 

Here are my settings in Letsencrypt

[screenshot: letsencrypt container port settings]

I have forwarded port 80 to 180 and port 443 to 1443.

HTTPS works so I think everything is pointing correctly?

Edited by Limpeklimpe

I have posted twice, and this is the third time now, so apologies if this is annoying; I just haven't gotten any feedback, and recent updates have not fixed this.

 

My running logs of this container used to be clean, just ending with "Server Ready", but now it spills out this info over and over again, every few minutes.

 

2019-03-13 19:00:06,640 fail2ban.jailreader [30585]: ERROR No file(s) found for glob /fail2ban/loginLog.json
2019-03-13 19:00:06,640 fail2ban [30585]: ERROR Failed during configuration: Have not found any log file for organizr-auth jail

 

Any advice on how to solve this?

 

EDIT - I actually just finally solved this; boy, do I feel dumb!

There's a part of the template asking for your organizr folder. I had changed it to /appdata/organizr-v1/ while I worked on v2, so it was pointing to the wrong spot.

 

Edited by CorneliousJD
1 hour ago, Limpeklimpe said:

Thanks for your reply.

 

According to my ISP, and other people also using the same ISP, no, they do not block port 80.

I checked port 80 using this http://www.portchecktool.com/ and it tells me "Connection refused".

 

Here are my settings in Letsencrypt

[screenshot: letsencrypt container port settings]

I have forwarded port 80 to 180 and port 443 to 1443.

HTTPS works so I think everything is pointing correctly?

It really looks like port 80 is blocked. You could confirm it by forwarding 80 to anything in your LAN (or using a tool that listens on the port you want) and seeing for yourself. That's how I confirmed my port 80 was blocked, after my ISP also told me it wasn't. I called them back and they finally confirmed it was. They told me that usually, even for commercial accounts, port 80 is blocked if you don't have a static IP. Well, for my ISP at least. But given that your 443 works and seems to be set up the same as 80, I would bet it's blocked.
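For the "seeing for yourself" part, a small TCP reachability check run from outside your network can stand in for the web-based port checker; this is a generic sketch, not tied to any particular container, and the host name in the usage comment is a placeholder:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect; True means something answered on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from OUTSIDE your network against your WAN address, e.g.:
# port_reachable("your.wan.address", 80)
```

If this comes back False for port 80 while 443 succeeds with the same NAT rules, that points at an ISP block rather than at your forwarding.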

1 hour ago, sfnetwork said:

It really looks like port 80 is blocked. You could confirm it by forwarding 80 to anything in your LAN (or using a tool that listens on the port you want) and seeing for yourself. That's how I confirmed my port 80 was blocked, after my ISP also told me it wasn't. I called them back and they finally confirmed it was. They told me that usually, even for commercial accounts, port 80 is blocked if you don't have a static IP. Well, for my ISP at least. But given that your 443 works and seems to be set up the same as 80, I would bet it's blocked.

I got it working now. Port 80 was not blocked. I had to go to /mnt/user/appdata/nextcloud/letsencrypt/nginx/site-confs/default and remove the "#" signs on lines 5-10. Then all HTTP traffic is redirected to HTTPS.

7 minutes ago, Limpeklimpe said:

I got it working now. Port 80 was not blocked. I had to go to /mnt/user/appdata/nextcloud/letsencrypt/nginx/site-confs/default and remove the "#" signs on lines 5-10. Then all HTTP traffic is redirected to HTTPS.

Good. I still wonder why it wasn't working the way you had it. Did you test from outside your network, typing the HTTP URL?

