[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



11 hours ago, aptalca said:

Symlinks work as long as nginx inside the container can follow it and access the target. I'm assuming the symlink is pointing to a share hosting your movies on unraid, but the letsencrypt container does not have access to that share (location not mapped) so nginx read the link but cannot find the target.

 

Here's what you can do:

1) map your movies location into your letsencrypt container as "/movies" and create symlinks in your www folder that point to "/movies/filename"
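The mapping-plus-symlink approach can be sketched like this (paths are examples, not this poster's actual setup; the volume mapping is added in the container's settings, e.g. `-v /mnt/user/movies:/movies`):

```shell
# The symlink target must be the path as seen *inside* the container,
# i.e. /movies/..., not the unraid host path to the share.
WEBROOT=$(mktemp -d)              # stand-in for appdata/letsencrypt/www
ln -s /movies/MyMovie.mkv "$WEBROOT/MyMovie.mkv"
readlink "$WEBROOT/MyMovie.mkv"   # prints /movies/MyMovie.mkv
```

nginx inside the container follows the link to `/movies/MyMovie.mkv`, which now exists thanks to the mapping.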

Doh! Obvious when I think about it. Thanks for that, working like a sweetie now :)

Link to post


12 hours ago, halorrr said:

Sorry, I flip-flopped some wording there, but yes, that is how I have it set up.

 

The remote one is in host mode. I replaced the line to make it:


proxy_pass http://192.168.0.88:32400;

which is the host ip, and in the plex server settings I filled in "Custom server access URLs" with my domain https://plex.thedomainiamusing.xyz:443

 

But then I run into:

 

The instances of Cerberus (the desired remote plex instance) are duplicated: https://monosnap.com/direct/Zm2FD9Pzdj1X3hIpBg3sQNB2FvQHys. Trying to click any of my libraries just reloads the plex dashboard, and trying to click either of the two instances listed in activity gives me a "no soup for you" error: https://monosnap.com/direct/PbS0MMy3GDvqYhIoKoc8cJly0yA7Rj

Both of these servers work fine directly from the local IP or launched through the plex.tv page. I only have these issues behind the letsencrypt reverse proxy. Not sure what would cause this.

Could it be browser cache? There shouldn't be duplicates unless two Plex servers are running with the same name but different server IDs.

Link to post
On 4/9/2019 at 8:04 PM, halorrr said:

Sorry, I flip-flopped some wording there, but yes, that is how I have it set up.

 

The remote one is in host mode. I replaced the line to make it:


proxy_pass http://192.168.0.88:32400;

which is the host ip, and in the plex server settings I filled in "Custom server access URLs" with my domain https://plex.thedomainiamusing.xyz:443

 

But then I run into:

 

The instances of Cerberus (the desired remote plex instance) are duplicated: https://monosnap.com/direct/Zm2FD9Pzdj1X3hIpBg3sQNB2FvQHys. Trying to click any of my libraries just reloads the plex dashboard, and trying to click either of the two instances listed in activity gives me a "no soup for you" error: https://monosnap.com/direct/PbS0MMy3GDvqYhIoKoc8cJly0yA7Rj

Both of these servers work fine directly from the local IP or launched through the plex.tv page. I only have these issues behind the letsencrypt reverse proxy. Not sure what would cause this.

 

Are you using Chrome? As @aptalca said, it's probably browser cache. I've had this exact same issue on Chrome, but no such issue with Firefox. Try clearing the browser cache and trying again.

Link to post

I tried to update my key because the expiration date was coming up, but that failed. Now this is the error I get when I start my docker. I tried to entirely remove it and all the appdata info and do a fresh install, and that doesn't work.

 

 

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for **************.duckdns.org
Waiting for verification...
Challenge failed for domain **************.duckdns.org
http-01 challenge for **************.duckdns.org
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: **************.duckdns.org
Type: connection
Detail: Fetching
http://**************.duckdns.org/.well-known/acme-challenge/MuDzz0atZ15Jyw1q2Jf0XZRzjmXWO7qa8d-Y0w9YtK4:
Timeout during connect (likely firewall problem)

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

Link to post
10 hours ago, djlcurly said:

I tried to update my key because the expiration date was coming up, but that failed. Now this is the error I get when I start my docker. I tried to entirely remove it and all the appdata info and do a fresh install, and that doesn't work.

 

 

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for **************.duckdns.org
Waiting for verification...
Challenge failed for domain **************.duckdns.org
http-01 challenge for **************.duckdns.org
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: **************.duckdns.org
Type: connection
Detail: Fetching
http://**************.duckdns.org/.well-known/acme-challenge/MuDzz0atZ15Jyw1q2Jf0XZRzjmXWO7qa8d-Y0w9YtK4:
Timeout during connect (likely firewall problem)

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

Check the IP on duckdns; if it's correct, check your port mapping.
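Those two checks can be sketched as follows (the subdomain, `dig`, and the `ifconfig.me` lookup service are all assumptions; any "what is my IP" service works):

```shell
# check_match DNS_IP WAN_IP -> "match" or "mismatch"
check_match() {
    if [ "$1" = "$2" ]; then echo "match"; else echo "mismatch"; fi
}

# Real usage, assuming dig and curl are available:
#   dns_ip=$(dig +short yoursub.duckdns.org | tail -n1)
#   wan_ip=$(curl -s https://ifconfig.me)
#   check_match "$dns_ip" "$wan_ip"
#
# On "mismatch", update the duckdns record first; on "match", move on to
# verifying that ports 80 and 443 are forwarded to the container's host.
check_match 1.2.3.4 1.2.3.4   # prints match
```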

Link to post

I would like some help understanding my problem. First, I have things "working", but it bothers me that following the video from Spaceinvader didn't quite work.

 

As I mentioned, I followed the instructions. When I point myself to the subdomain, I get the nginx default page; somehow it isn't seeing the subdomain redirection. I followed some other instructions that mentioned I can create a file in the appdata/letsencrypt/nginx/site-confs directory, and I did so with the name nextcloud.

 

I can't seem to get it to work properly using the same method with sonarr, as I get a bad gateway message.

 

So, the questions I have are:

1.  Why does it not work with proxy-confs? (Note that I tried using port 444, and also the IP in proxy_pass, with no difference.)

2.  What is the secret sauce to get sonarr working the same way I got nextcloud working, or to get it working properly in proxy-confs?

 

Hopefully this wasn't answered before, as I did search and read quite a few posts.

 

File: appdata/letsencrypt/nginx/site-confs/nextcloud

server {
	listen 443 ssl;
	server_name nextcloud.domainname.org;

	root /config/www;
	index index.html index.htm index.php;
	
	###SSL Certificates
	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
	
	###Diffie–Hellman key exchange ###
	ssl_dhparam /config/nginx/dhparams.pem;
	
	###SSL Ciphers
	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
	
	###Extra Settings###
	ssl_prefer_server_ciphers on;
	ssl_session_cache shared:SSL:10m;

        ### Add HTTP Strict Transport Security ###
	add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
	add_header Front-End-Https on;

	client_max_body_size 0;

	location / {
		proxy_pass https://10.99.2.10:444/;
		proxy_max_temp_file_size 2048m;
		include /config/nginx/proxy.conf;
	}
}

File: appdata/letsencrypt/nginx/proxy-confs/nextcloud.subdomain.conf

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name nextcloud.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        proxy_max_temp_file_size 2048m;
        proxy_pass https://$upstream_nextcloud:443;
    }
}
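One possible reason the proxy-conf version fails while the site-conf works: its upstream is the container name (`nextcloud`), which Docker's embedded DNS at 127.0.0.11 only resolves when the containers share a user-defined network; on the default bridge that lookup fails. As a sketch (reusing the IP and port from the site-conf above, which are this poster's values), the same location block can point at the host IP and mapped port instead:

```nginx
    location / {
        include /config/nginx/proxy.conf;
        # Literal IP: no Docker DNS resolver needed
        set $upstream_nextcloud 10.99.2.10;
        proxy_max_temp_file_size 2048m;
        proxy_pass https://$upstream_nextcloud:444;
    }
```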

 

Link to post
2 hours ago, djlcurly said:

Well, uhm, it won't get a cert. And it really doesn't seem like it has anything to do with a firewall; I turned mine off.

I do hope you turned it on again after the test! 

Link to post
8 hours ago, djlcurly said:

Well, uhm, it won't get a cert. And it really doesn't seem like it has anything to do with a firewall; I turned mine off.

Ok, as a test, stop the letsencrypt container. Then set up the regular nginx container with the same exact port mappings as letsencrypt. See if you can reach the container using your domain name at both ports 80 and 443

Link to post
39 minutes ago, aptalca said:

Ok, as a test, stop the letsencrypt container. Then set up the regular nginx container with the same exact port mappings as letsencrypt. See if you can reach the container using your domain name at both ports 80 and 443

Thanks, I actually did this last night, not necessarily as a test but because I have kind of given up on the letsencrypt app. I'm now trying to figure out how to make my own SSL key that I can just put in the nginx docker, because that one actually is working. It just doesn't have a key.

Link to post

Hi, I have a question on xdebug. I would like to debug my php google assistant project. I read multiple articles referring to using remote xdebug. As far as I understand, this would require xdebug to be set up inside the docker container; php-xdebug would need to be installed, etc.

 

I would like to know if I can use xdebug inside the letsencrypt docker container. I like this container for its swift updates to keep me secure, and I know that actually installing anything inside the container will break upon the next update. Hence this question: is xdebug enabled (but hidden from the config), or can it be added in future? Or, lastly, is there another docker container I should use if I intend to debug?

Link to post

What is the recommended way to secure docker containers behind nginx?

 

My setup is pretty standard docker containers using a bridge network.

--net='my-bridge' \

-p '8989:8989/tcp' \ ...

 

I can access this docker container on either https://{subdomain} or {ip-address}:8989.

 

Should I just use iptables to block all ports except 22 and 443?

Link to post
8 hours ago, jenga201 said:

What is the recommended way to secure docker containers behind nginx?

 

My setup is pretty standard docker containers using a bridge network.

--net='my-bridge' \

-p '8989:8989/tcp' \ ...

 

I can access this docker container on either https://{subdomain} or {ip-address}:8989.

 

Should I just use iptables to block all ports except 22 and 443?

If the reverse proxy is already set up via container name as dns hostname, you can remove the 8989 port mapping
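To illustrate (the container name `sonarr` and the network setup here are assumptions; port 8989 comes from the post above): with both containers on the same user-defined bridge, the proxy-conf reaches the app through Docker's internal DNS by container name, so the host-side `-p '8989:8989/tcp'` mapping can be dropped and the app stops being reachable at {ip-address}:8989 entirely:

```nginx
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS (user-defined networks only)
    set $upstream_sonarr sonarr;     # container name, not the host IP
    proxy_pass http://$upstream_sonarr:8989;
}
```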

Link to post

I thought I'd tidy up a bit and move all the sample files from the proxy-confs folder to proxy-confs/_samples. But something noticed I'd done that and automatically recreated them again a few minutes later.

 

Having them in the sample folder as the live files makes it difficult to find what I want (and makes tab-completion more of a pain). Is there a way I can disable this auto-recreate, or tell it that the samples live in a different folder?

Link to post
8 hours ago, ElectricBadger said:

I thought I'd tidy up a bit and move all the sample files from the proxy-confs folder to proxy-confs/_samples. But something noticed I'd done that and automatically recreated them again a few minutes later.

 

Having them in the sample folder as the live files makes it difficult to find what I want (and makes tab-completion more of a pain). Is there a way I can disable this auto-recreate, or tell it that the samples live in a different folder?

It's done this way so the sample files get updated in case there are changes to them.

You can of course rip out that logic and create your own folders if you need to move the sample files elsewhere, but then you need to build your own container.

 

Is it really that hard to find the files that are not named .sample?

Tip of the day: Sort on file type. 

Link to post
9 hours ago, ElectricBadger said:

I thought I'd tidy up a bit and move all the sample files from the proxy-confs folder to proxy-confs/_samples. But something noticed I'd done that and automatically recreated them again a few minutes later.

 

Having them in the sample folder as the live files makes it difficult to find what I want (and makes tab-completion more of a pain). Is there a way I can disable this auto-recreate, or tell it that the samples live in a different folder?

No, that's the logic built in.

 

You can do "ls *.conf" to see the active ones.

 

How is tab completion difficult? It's the first part of the naming scheme that's unique per app, and it matches the conf before the .sample

And how frequently do you edit them honestly?

Link to post
9 hours ago, ElectricBadger said:

I thought I'd tidy up a bit and move all the sample files from the proxy-confs folder to proxy-confs/_samples. But something noticed I'd done that and automatically recreated them again a few minutes later.

 

Having them in the sample folder as the live files makes it difficult to find what I want (and makes tab-completion more of a pain). Is there a way I can disable this auto-recreate, or tell it that the samples live in a different folder?

Or, leave the samples folder completely alone, copy the ones you need to the folder below, rename, edit them there, and keep all your active .conf files in the ../letsencrypt/nginx folder. Much less to sort through, and when you need a new conf sample you know exactly where to look. Having multiple subfolders to sort through for your conf files may make sense on a huge multi site install, but I'm fine with keeping them all together.

Link to post
4 hours ago, jonathanm said:

Or, leave the samples folder completely alone, copy the ones you need to the folder below, rename, edit them there, and keep all your active .conf files in the ../letsencrypt/nginx folder. Much less to sort through, and when you need a new conf sample you know exactly where to look. Having multiple subfolders to sort through for your conf files may make sense on a huge multi site install, but I'm fine with keeping them all together.

That only works with subdomain confs, not subfolder ones.

Link to post

Is there a way to fix this error? it appears in the container log

nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"

 

I think it's related to the ssl_trusted_certificate nginx parameter, but I don't know which pem file to point it at.

I am using duckdns
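For reference, OCSP stapling generally needs nginx to be able to resolve and reach the OCSP responder at runtime, plus a trusted chain file; that warning often just means the DNS lookup failed inside the container when the config was loaded. A hedged sketch of the relevant directives (the fullchain.pem path follows this container's layout; the resolver address is only an example):

```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /config/keys/letsencrypt/fullchain.pem;
resolver 8.8.8.8 valid=300s;   # any DNS server the container can reach
resolver_timeout 5s;
```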

 

On another note, is anyone able to get 100% in this test? https://www.ssllabs.com/ssltest/index.html

Link to post
