[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

31 minutes ago, aptalca said:

That's likely the issue. Don't use double auth. Use either grafana's built in or htpasswd.

I think you nailed it. What I did was enable Grafana's auth.proxy so that it passes the username from basic authentication to Grafana for login, instead of showing the Grafana login box. I added the required header to the nginx conf and made a few other changes, like usernames and some settings I found online (they may not have done anything, but it seems to work so I left them in). It now stays logged in for longer; I managed an hour before posting this. Hopefully it continues.
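
In case it helps anyone else, here is roughly the shape of my proxy conf (a minimal sketch; the container name grafana, port 3000, and the X-WEBAUTH-USER header are Grafana's stock auth.proxy values, so adjust for your setup):

server {
    listen 443 ssl;

    server_name grafana.*;

    include /config/nginx/ssl.conf;

    location / {
        # htpasswd provides the single login prompt; Grafana shows no login box
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_grafana grafana;

        # forward the basic-auth username to Grafana's auth.proxy
        # (grafana.ini: [auth.proxy] enabled = true, header_name = X-WEBAUTH-USER)
        proxy_set_header X-WEBAUTH-USER $remote_user;
        proxy_pass http://$upstream_grafana:3000;
    }
}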

 

Thanks Aptalca for pointing me in the right direction.

Link to comment
19 hours ago, Platanos said:

Hi guys!

 

Sorry to bother you but I must be missing something pretty obvious.

 

Just installed this docker and I'm trying to access the basic index.html that's in the www folder, but when I go to MY_LOCAL_IP:8080 (the port I configured when installing the docker) I always get a connection error.

 

Any idea what I might be doing wrong?

 

Thanks in advance!

 

Sorry just pushing this up in case it got missed!

 

Thanks again

Link to comment
6 hours ago, Platanos said:

Thanks! Sorry I assumed I could try it locally! I’ll try to set it up and give feedback! 

Quote

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

New certificate generated; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready

Everything seems to be A-OK! Thanks for all the help! Just one question: does the certificate renew automatically, or will I have to manually run "certbot renew"?

Link to comment
3 hours ago, Platanos said:

Everything seems to be A-OK! Thanks for all the help! Just one question: does the certificate renew automatically, or will I have to manually run "certbot renew"?

It's renewed automatically, as long as you don't change the port forwarding or whatever else is necessary for validation.

Link to comment
3 hours ago, sgt_spike said:

How do I configure multiple domains?

There is an EXTRA_DOMAINS variable: https://github.com/linuxserver/docker-letsencrypt/blob/master/README.md

That will let you get a cert that covers them. 

 

Then you manage the different domains via server name directives in the site config. 

 

For example, each server block would be for a different domain and you set the server names accordingly. 
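
Something like this (a bare-bones sketch with made-up domain names):

server {
    listen 443 ssl;

    # requests for the first domain land in this block
    server_name firstdomain.com www.firstdomain.com;

    include /config/nginx/ssl.conf;
    root /config/www/firstdomain;
}

server {
    listen 443 ssl;

    # second domain, covered by the same cert via EXTRA_DOMAINS
    server_name seconddomain.com;

    include /config/nginx/ssl.conf;
    root /config/www/seconddomain;
}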

Link to comment
On 1/5/2019 at 5:00 PM, aptalca said:

In your case, make sure that the ip on duckdns matches your home ip. And also make sure that nothing changed with regards to the port mapping on unraid and port forwarding on your router. 

I verified both IP addresses and they match.

 

Here are the Google Wifi router port forwarding settings; these settings worked before.

 

[Screenshot: Google Wifi port forwarding settings]

 

I deleted these and added them again but still the same issue.

 

My setup is --> ATT Uverse router --> Google Wifi router --> ethernet --> unraid server


I ran a port scanner tool and found that both ports 80 and 443 timed out (Connection timed out). But when I connected through a VPN service, the same ports showed open. I checked with my ISP (AT&T) and they do not block ports 80 and 443.


I watched the SpaceInvader One video again and modified the LE docker settings, but I still get the same error.
 

Variables set:
PUID=99
PGID=100
TZ=America/Chicago
URL=duckdns.org
SUBDOMAINS=subdomain1,subdomain2
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
[email protected]
STAGING=
2048 bit DH parameters present
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d subdomain1.duckdns.org -d subdomain2.duckdns.org
E-mail address entered: [email protected]
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for subdomain1.duckdns.org
http-01 challenge for subdomain2.duckdns.org
Waiting for verification...
Cleaning up challenges

 

Here is the error message:


- The following errors were reported by the server:

 

Domain: subdomain1.duckdns.org
Type: connection
Detail: Fetching
http://subdomain1.duckdns.org/.well-known/acme-challenge/token:
Timeout during connect (likely firewall problem)

Domain: subdomain2.duckdns.org
Type: connection
Detail: Fetching
http://subdomain2.duckdns.org/.well-known/acme-challenge/token2:
Timeout during connect (likely firewall problem)

ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

 

I am running out of ideas to fix this issue.

 

 

 

Link to comment
2 hours ago, stlrox said:

I verified both IP addresses and they match. [...] I am running out of ideas to fix this issue.

Have you forwarded the ports in the ATT Router also?

Link to comment
3 hours ago, stlrox said:

I verified both IP addresses and they match. [...] I am running out of ideas to fix this issue.

I too am having issues getting the container to obtain the certs. I also have a Google Wifi. If I visit mydns.dnsserver.com:port during startup, I get a message about the ACME challenge, so I know the port forwarding, CNAME, and DNS are working. All of my other services are also working externally by port number. I have a Pi-hole in the mix as DNS only that I thought was causing trouble, but I removed it and got the same issues. I'm completely stumped.

Link to comment
3 hours ago, saarg said:

 

Have you forwarded the ports in the ATT Router also?

 

Yes, I tried that, but it didn't work. Here are the screenshots. Let me know if I entered the wrong information here.

 

Doesn't this apply to all devices connected to the ATT router (and Google Wifi)?

 

[Screenshots: ATT router port forwarding settings]

 

Link to comment
10 hours ago, stlrox said:

Yes, I tried but that didn't work. Here are the screenshots. Let me know if I entered the wrong information here.

If the ATT modem is forwarding WAN port 80 to LAN 180, and the google wifi is connected to the ATT LAN, then you need to tell the google wifi to forward port 180 to your unraid docker IP, not 80.

 

The info has to make it all the way through and back: Internet port 80 <-> ATT WAN port 80, which the ATT moves to LAN port 180 <-> Google Wifi WAN port 180, which should forward to LAN port 180 <-> Docker maps 180 to port 80 inside the LE container <-> application.

 

Same thing with 443.

 

The next device in the chain has to be listening on the correct port; you told Google Wifi to listen on 80, but told the ATT to talk to 180.

Link to comment
1 hour ago, jonathanm said:

If the ATT modem is forwarding WAN port 80 to LAN 180, and the google wifi is connected to the ATT LAN, then you need to tell the google wifi to forward port 180 to your unraid docker IP, not 80. [...]

You, sir, deserve an award. 

 

I corrected port forwarding in Google Wifi and bingo!

 

[Screenshot: corrected Google Wifi port forwarding settings]

2048 bit DH parameters present
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d subdomain1.duckdns.org -d subdomain2.duckdns.org
E-mail address entered: [email protected]
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for subdomain1.duckdns.org
http-01 challenge for subdomain2.duckdns.org
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/subdomain1.duckdns.org/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/subdomain1.duckdns.org/privkey.pem
Your cert will expire on 2019-04-09. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

New certificate generated; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
New certificate generated; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready

 

Link to comment

After setting up port forwarding, everything worked correctly with the Nextcloud container. I was able to access Nextcloud using the duckdns domain. But in the excitement of that working, I then tried to set up duckdns external access for Home Assistant. I messed up the Nginx configuration files and broke both the Nextcloud and Home Assistant setups.

 

Could someone point out where I can grab the original nginx/site-confs/default file?

 

Thank you

 

 

 

 

Link to comment
53 minutes ago, stlrox said:

[...] Could someone point out where I can grab the original nginx/site-confs/default file?

The default should appear if you delete it and restart the container. 

Link to comment
4 hours ago, saarg said:

The default should appear if you delete it and restart the container. 

Nice, thank you for the tip. I did that and the default file came back.

 

I left that default file as it is and made changes to the 'nextcloud.subdomain.conf' file. Could someone confirm the entries in this file?

 

As of now, when I browse to 'WebUI' from the Nextcloud container, I get the 'Welcome to our server' message. This setup worked before, and I don't know what change I made to break it.

server {
    listen 443 ssl;

    server_name subdomain1.*;

    root /config/www;
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        proxy_max_temp_file_size 2048m;
        proxy_pass https://$upstream_nextcloud:443;
    }
}


 

Link to comment
2 hours ago, stlrox said:

I left that default file as it is and made changes to the 'nextcloud.subdomain.conf' file. Could someone confirm the entries in this file? [...]


 

Did you follow the directions at the top of the sample conf file? 

 

Once set up, you're supposed to access nextcloud through your domain, not the local ip, because it will redirect everything to your domain.

 

If you're seeing the welcome message, you have an issue with a redirect. Clear your browser cache, then recheck your settings. 

Link to comment
12 hours ago, aptalca said:

Did you follow the directions at the top of the sample conf file? [...] Clear your browser cache, then recheck your settings.

 

Thank you. I cleared the cache and was able to access the Nextcloud UI via subdomain1.duckdns.org.

 

When I am at home, why can't I access the Nextcloud UI using the IP address? Just curious.

 

Link to comment
8 hours ago, stlrox said:

 

[...] When I am at home, why can't I access the Nextcloud UI using the IP address? Just curious.

 

Because your nextcloud is configured to redirect all connections to the domain name.

Link to comment

Need help trying to reverse proxy a web server called RadioFeed. I've tried a default config, but it isn't fully passing the web server through.

 

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name scanner.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        proxy_pass http://10.1.60.48:5000;
    }
}
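
One variation I plan to try is adding websocket upgrade headers to the location block, in case RadioFeed streams its live content over websockets (purely a guess on my part):

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;

        # only useful if the app actually uses websockets (an assumption)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";

        proxy_pass http://10.1.60.48:5000;
    }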

 

 

Link to comment

Hi, I'm trying to set up a reverse proxy for the unraid webUI. I only want it on my local network, not exposed to the internet. The main reasons are to have uniform subdomains for all services on my local network and to make the annoying invalid certificate warning go away.

 

So far, I have been quite successful, but I am struggling with php responses getting buffered, which is something I don't want; for example, the docker update popup box will only display its text once the update is finished.

 

Here is my config so far:

server {
    listen 443 ssl;

    server_name unraid.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        fastcgi_keep_conn on;
        fastcgi_buffering off;
        proxy_buffering off;
        gzip off;

        resolver 127.0.0.11 valid=30s;
        set $upstream_unraid $REDACTED;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://$upstream_unraid:4433;
    }
}

I have simply tried to adapt one of the existing templates for the unraid UI.

None of the buffering-related statements changed anything. I have also tried not including /config/nginx/proxy.conf, still with no change.
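
From what I've read, fastcgi_* directives only apply when nginx itself speaks FastCGI via fastcgi_pass; since this block uses proxy_pass, presumably only the proxy_* lines matter. Here is a trimmed location block I also tried, with the same result:

    location / {
        include /config/nginx/proxy.conf;

        # with proxy_pass, proxy_buffering is the relevant switch;
        # the fastcgi_* directives should be no-ops here
        proxy_buffering off;
        gzip off;

        resolver 127.0.0.11 valid=30s;
        set $upstream_unraid $REDACTED;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass https://$upstream_unraid:4433;
    }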

 

I would be glad if someone could help me figure out why php responses are still getting buffered.

Link to comment
