[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



I'm following the SpaceInvader One guide here: https://www.youtube.com/watch?v=I0lhZc25Sro&feature=youtu.be. However, I keep getting stuck at the part where letsencrypt first boots up and tries to get a certificate, and I get an error that I haven't been able to solve.

 

Network is set up as follows:

ISP -> Modem (bridge mode) -> Netgear Orbi router (ports 80 and 443 mapped to Unraid's IP, not the custom net, on ports 180 and 1443) -> Unraid server -> letsencrypt docker on customnet

 

DuckDNS container is set up and updating my public IP routinely (a ping to mydomain.duckdns.org returns my external IP).

 

When I fire up letsencrypt I get this in the logs:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
-------------------------------------
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/Berlin
URL=duckdns.org
SUBDOMAINS=mydomain
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=myemail@gmail.com
STAGING=

2048 bit DH parameters present
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d mysubdomain.duckdns.org
E-mail address entered: myemail@gmail.com
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mysubdomain.duckdns.org
Waiting for verification...
Challenge failed for domain mysubdomain.duckdns.org
http-01 challenge for mysubdomain.duckdns.org
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
 - The following errors were reported by the server:
   Domain: mysubdomain.duckdns.org
   Type: connection
   Detail: Fetching http://mysubdomain.duckdns.org/.well-known/acme-challenge/TEN3u0g3N88iLRAEqryMvo6GJ71lsvCxP9hMbC5vwg8: Connection refused

   To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address. Additionally, please check that your computer has a publicly routable IP address and that no firewalls are preventing the server from communicating with the client. If you're using the webroot plugin, you should also verify that you are serving files from the webroot path you provided.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

At this point, I'm not sure where else to check.
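In case it helps anyone hitting the same "Connection refused": the failure happens before certbot is even involved, because Let's Encrypt's servers simply can't open port 80 on the public IP. A quick sanity check, run from outside the LAN (e.g. a phone on mobile data); the hostname is a placeholder:

```shell
# 1) Does the DuckDNS name resolve to the current public IP?
nslookup mysubdomain.duckdns.org || echo "DNS lookup failed"

# 2) Does anything answer on port 80? Let's Encrypt connects to port 80 on
#    the public IP, which the router should forward to Unraid port 180.
#    The exact token path doesn't matter; even a 404 proves the port is open.
curl -sv http://mysubdomain.duckdns.org/.well-known/acme-challenge/test \
  || echo "connection refused/timed out: router or port-forward problem, not certbot"
```

If step 2 fails from outside but the container answers on port 180 from inside the LAN, the problem is the router's forwarding rule (or the ISP blocking port 80), not the container.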

21 hours ago, Lavoslav said:

Are you sure the ports are open? try with https://www.canyouseeme.org/

canyouseeme says no, but I have ports 80 and 443 forwarded to the Unraid IP's ports 180 and 1443 (which is what I set for letsencrypt). At any rate, I installed the NGINX Reverse Proxy container, switched letsencrypt off and used ports 180 and 1443 for that. NGINX Reverse Proxy worked immediately; I can even access it through the CNAME I have forwarded to DuckDNS.


Hi

I have a few Hikvision cameras. I’m trying to create a rule on an NGINX proxy on letsencrypt docker to forward traffic to the cameras.
If I do a simple port forward on my router to the Camera’s IP address and port, it works, but trying the same in nginx fails.

For some reason I can forward the port 80 traffic (access to admin console, which I don’t actually want to forward from outside) but not the server traffic (port 8000 needed to access the camera via the iVMS 4500 phone app).

 

Do you have any idea/examples of the NGINX rules that are required?

Here is what I have so far:

server {
    listen 8000;

    server_name [REDACTED];

    access_log /config/nginx/app.log;
    error_log /config/nginx/app.error.log;

    location / {
        proxy_pass http://192.168.0.34:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}

The error_log gives me errors like this:

2019/04/05 13:00:43 [error] 348#348: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: [REDACTED], request: "GET /ay HTTP/1.1", upstream: "http://192.168.0.34:8000/ay", host: "[REDACTED]:8000", referrer: "http://[REDACTED]:8000/"

The access_log gives me this:

185.69.144.xx - - [05/Apr/2019:14:11:57 +0100] "\x00\x00\x00\xE0Z\[REDACTED]\x00\x00\x00\x016~\x02" 400 173 "-" "-"

 

Any ideas?

Thanks


You must add a new port rule to the docker: container port 8000, host port 8000. Then it should work. If you want to listen on port 80, configure the docker with Custom: br1 mode and assign it an IP in your home network.

You must understand that a container cannot use the same port 80 as the Unraid web interface (bridge mode means the container uses the same IP as the host, i.e. Unraid).

You can also change the web interface port to anything other than 80, but I think the Custom: br1 way is better.


Thanks @ich777

 

I guess I wasn't clear enough.

Port forwarding is set up OK in docker-compose, and if I forward the traffic on the camera's port 80 it works without trouble, but if I forward the traffic to port 8000 I get the errors posted above. I guess I must be missing something in the headers so the communication works properly; I just don't know what is required.
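One thing worth checking: the admin console on port 80 speaks HTTP, but the binary bytes in the access_log above suggest the iVMS traffic on port 8000 isn't HTTP at all, so an http-level proxy_pass can't parse it. If that's the case, a raw TCP pass-through via nginx's stream module may work better. This is only a sketch (camera IP taken from the config above); a stream block must sit outside the http block, e.g. in nginx.conf, and port 8000 still has to be mapped into the container:

```nginx
# in nginx.conf, at the same level as the http {} block
stream {
    server {
        listen 8000;
        # forward raw TCP bytes to the camera: no HTTP parsing, no headers
        proxy_pass 192.168.0.34:8000;
    }
}
```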

 

Thanks 


Thanks for replying.

I'm not sure how it's set up; I never changed this, so I guess bridge?

Here is my docker-compose:

 

    letsencrypt:
      image: linuxserver/letsencrypt:latest
      container_name: letsencrypt
      restart: unless-stopped
      cap_add:
      - NET_ADMIN
      volumes:
      - /etc/localtime:/etc/localtime:ro
      - /NAS/letsencrypt/config:/config
      - /var/www/html:/var/www/html
      environment:
      - PGID=1000
      - PUID=1000
      - EMAIL=[REDACTED]
      - URL==[REDACTED]
      - SUBDOMAINS==[REDACTED]
      - VALIDATION=http
      - TZ=Europe/London
      ports:
      - "443:443"
      - "8000:8000" 

Where should I change the setting to try Custom: br1?


I have a Firebird database in Docker that I would like to connect through nginx. As I understand it, I need to set up the stream module for that to work?

Firebird is running on ports 3050 and 5000. Is there a way to forward a specific subdomain to a specific database in that container through nginx, the same way I'm forwarding Nextcloud and similar containers?
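On the stream module question: a database connection is plain TCP, so it would indeed go through a stream block rather than an http server. The catch is that plain TCP carries no hostname, so nginx cannot route by subdomain there (unless the client speaks TLS with SNI, via ssl_preread); it can only route by listen port. A sketch with placeholder upstream IPs:

```nginx
# stream {} lives outside http {}; routing here is per-port, not per-subdomain
stream {
    server {
        listen 3050;
        proxy_pass 192.168.1.10:3050;  # placeholder IP, database A
    }
    server {
        listen 5000;
        proxy_pass 192.168.1.10:5000;  # placeholder IP, database B
    }
}
```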

18 minutes ago, ich777 said:

Don't understand the question.

You've got an nginx docker and a Firebird database on the same server. Do you want to connect the two, or do you want to reverse proxy the Firebird database through nginx?

I want to reverse proxy the Firebird database.

I have a Windows app that uses a Firebird database; my idea is to host that database on the Unraid server and access it through the reverse proxy.

22 minutes ago, INTEL said:

I want to reverse proxy the Firebird database.

I have a Windows app that uses a Firebird database; my idea is to host that database on the Unraid server and access it through the reverse proxy.

You wouldn't reverse proxy a database connection via an HTTP proxy. Just point your Windows application at the Firebird SQL port number and away you go.


As I thought... I already have that kind of setup running, but on a Windows server. Since I have multiple users who need to connect to their own databases (different databases, all running Firebird), I was thinking of running a Firebird docker and mapping each user to the corresponding database using a different subdomain.


Hey folks, 

 

I have an issue with Symlinks within NGINX.

 

I have a couple of TV Shows in my Media share which are updated and served via the usual Sonarr/Plex setup. I have a user who wishes to also download these shows so that they have a copy stored locally. Initially, I manually copied them over to a folder in my WWW folder and sent the user the link. But then I thought I could automate it and save myself some cache space by just creating a Symlink from the TV Show folder to the WWW folder. This appears to work in principle and the folder shows up in the right place within my LAN but via web browser it gives a 404 error.

 

I understand that NGINX should allow Symlinks by default so I'm further assuming it's a permissions issue. Is there any way to get this working while at the same time not messing up my Sonarr/Plex permissions?

 

Thanks in advance :)


So I am trying to get Nextcloud to work with letsencrypt using SpaceInvader One's guide. Note that Nextcloud worked before configuring it for letsencrypt. When trying to connect to my subdomain I get a 502 Bad Gateway. In the nginx log:

2019/04/09 08:49:58 [error] 353#353: *162 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.1, server: cloud.*, request: "GET /apps/files/ HTTP/2.0", upstream: "https://172.18.0.4:444/apps/files/", host: "cloud.mydomain.com" 

 

Nextcloud docker:

changed network to custom

changed port to 444

 

NextCloud config.php

    1 => 'cloud.mydomain.com',

  'trusted_proxies' => ['letsencrypt'], ( tried with and without this line. )
  'overwrite.cli.url' => 'https://cloud.mydomain.com',
  'overwritehost' => 'cloud.mydomain.com',
  'overwriteprotocol' => 'https',

 

nextcloud.subdomain.conf

just changed nextcloud.* to cloud.*

 

Any suggestions? I have it working with Ombi; I just can't seem to figure it out with Nextcloud.


Hopefully someone here can help me out. I've been having an odd issue. First I'll give some details about my setup.

 

My server has two instances of plex:
plex which is run in Host mode.

plex-eros which is run in br0 bridge mode.

 

https://monosnap.com/direct/GhwdEpmQ069JRyrl9oXW7rwe0pE59g

 

Only plex is shared with others and needs to be accessible outside my network with the reverse proxy.

 

If I leave 

https://$upstream_plex:32400

in the config then I get a "502 Bad Gateway" error

 

If I configure it as if I'm in bridge mode and make it:

 

https://192.168.0.88:32400

I am then able to load the Plex page and log into my user, but then things get weird. The instances of Cerberus (the regular Plex instance) are duplicated (https://monosnap.com/direct/Zm2FD9Pzdj1X3hIpBg3sQNB2FvQHys); clicking any of my libraries just reloads the Plex dashboard, and clicking either of the two instances listed in Activity gives me a "no soup for you" error: https://monosnap.com/direct/PbS0MMy3GDvqYhIoKoc8cJly0yA7Rj

Both these servers work fine directly from the local IP or launched through the plex.tv page. I only have these issues behind the letsencrypt reverse proxy. Any idea what I'm doing wrong?

9 hours ago, NeoDude said:

Hey folks, 

 

I have an issue with Symlinks within NGINX.

 

I have a couple of TV Shows in my Media share which are updated and served via the usual Sonarr/Plex setup. I have a user who wishes to also download these shows so that they have a copy stored locally. Initially, I manually copied them over to a folder in my WWW folder and sent the user the link. But then I thought I could automate it and save myself some cache space by just creating a Symlink from the TV Show folder to the WWW folder. This appears to work in principle and the folder shows up in the right place within my LAN but via web browser it gives a 404 error.

 

I understand that NGINX should allow Symlinks by default so I'm further assuming it's a permissions issue. Is there any way to get this working while at the same time not messing up my Sonarr/Plex permissions?

 

Thanks in advance :)

Symlinks work as long as nginx inside the container can follow the link and access the target. I'm assuming the symlink points to a share hosting your movies on Unraid, but the letsencrypt container does not have access to that share (the location isn't mapped), so nginx reads the link but cannot find the target.

 

Here's what you can do:

1) Map your movies location into your letsencrypt container as "/movies" and create symlinks in your www folder that point to "/movies/filename".
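A minimal illustration of that idea, using throwaway paths under /tmp (in practice "/movies" would be the share mapped into the container and the www folder lives under /config):

```shell
# pretend /tmp/demo/movies is the share mapped into the container as /movies
mkdir -p /tmp/demo/movies/ShowName /tmp/demo/www

# the symlink must point at the CONTAINER-side path, because nginx
# resolves it inside the container, not on the host
ln -s /tmp/demo/movies/ShowName /tmp/demo/www/ShowName

ls -l /tmp/demo/www
```

The key point is that the link target has to be a path that exists from the container's point of view; a link to /mnt/user/Media/... will 404 unless that exact path is also mapped in.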

4 hours ago, olemal said:

So I am trying to get Nextcloud to work with letsencrypt using SpaceInvader One's guide. Note that Nextcloud worked before configuring it for letsencrypt. When trying to connect to my subdomain I get a 502 Bad Gateway. In the nginx log:

2019/04/09 08:49:58 [error] 353#353: *162 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.0.1, server: cloud.*, request: "GET /apps/files/ HTTP/2.0", upstream: "https://172.18.0.4:444/apps/files/", host: "cloud.mydomain.com" 

 

Nextcloud docker:

changed network to custom

changed port to 444

 

NextCloud config.php

    1 => 'cloud.mydomain.com',

  'trusted_proxies' => ['letsencrypt'], ( tried with and without this line. )
  'overwrite.cli.url' => 'https://cloud.mydomain.com',
  'overwritehost' => 'cloud.mydomain.com',
  'overwriteprotocol' => 'https',

 

nextcloud.subdomain.conf

just changed nextcloud.* to cloud.*

 

Any suggestions? I have it working with Ombi; I just can't seem to figure it out with Nextcloud.

You shouldn't have changed the port if it's connecting via container name.

Read the docs for letsencrypt; it's all explained there, and the top of each config tells you what to change. Don't change anything else unless you know exactly what you're doing (changing the server name to cloud is fine).
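For context, the upstream part of the stock nextcloud.subdomain.conf looks roughly like this (approximate, from memory of the LSIO template; check your own copy rather than relying on these exact lines):

```nginx
server {
    listen 443 ssl;
    server_name cloud.*;

    location / {
        # Docker's embedded DNS resolves the container name...
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        # ...and nginx connects to the container's INTERNAL port 443
        proxy_pass https://$upstream_nextcloud:443;
    }
}
```

Because proxy_pass targets the container name on its internal port, pointing it at 444 only works if the container itself actually listens on 444; the host-side port mapping is irrelevant for container-to-container traffic on the custom network, which is why the 502 shows "Connection refused" on port 444.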

1 hour ago, halorrr said:

Hopefully someone here can help me out. I've been having an odd issue. First I'll give some details about my setup.

 

My server has two instances of plex:
plex which is run in Host mode.

plex-eros which is run in br0 bridge mode.

 

https://monosnap.com/direct/GhwdEpmQ069JRyrl9oXW7rwe0pE59g

 

Only plex is shared with others and needs to be accessible outside my network with the reverse proxy.

 

If I leave 


https://$upstream_plex:32400

in the config then I get a "502 Bad Gateway" error

 

If I configure it as if I'm in bridge mode and make it:

 


https://192.168.0.88:32400

I am then able to load the Plex page and log into my user, but then things get weird. The instances of Cerberus (the regular Plex instance) are duplicated (https://monosnap.com/direct/Zm2FD9Pzdj1X3hIpBg3sQNB2FvQHys); clicking any of my libraries just reloads the Plex dashboard, and clicking either of the two instances listed in Activity gives me a "no soup for you" error: https://monosnap.com/direct/PbS0MMy3GDvqYhIoKoc8cJly0yA7Rj

Both these servers work fine directly from the local IP or launched through the plex.tv page. I only have these issues behind the letsencrypt reverse proxy. Any idea what I'm doing wrong?

Leave the one you want to access remotely in host networking, and read the top of the proxy conf to see exactly what you need to change for host mode.

14 minutes ago, aptalca said:

Leave the one you want to access remotely in host networking, and read the top of the proxy conf to see exactly what you need to change for host mode.

Sorry, I flip-flopped some wording there, but yes, that is how I have it set up.

 

Quote

# for host mode,
# replace the line "proxy_pass https://$upstream_plex:32400;" with "proxy_pass https://HOSTIP:32400;" HOSTIP being the IP address of plex
# in plex server settings, under network, fill in "Custom server access URLs" with your domain (ie. "https://plex.yourdomain.url:443")

The remote one is in host mode. I replaced the line to make it:

proxy_pass http://192.168.0.88:32400;

which is the host ip, and in the plex server settings I filled in "Custom server access URLs" with my domain https://plex.thedomainiamusing.xyz:443

 

But then I run into:

 

The instances of Cerberus (the desired remote Plex instance) are duplicated (https://monosnap.com/direct/Zm2FD9Pzdj1X3hIpBg3sQNB2FvQHys); clicking any of my libraries just reloads the Plex dashboard, and clicking either of the two instances listed in Activity gives me a "no soup for you" error: https://monosnap.com/direct/PbS0MMy3GDvqYhIoKoc8cJly0yA7Rj

Both these servers work fine directly from the local IP or launched through the plex.tv page. I only have these issues behind the letsencrypt reverse proxy. Not sure what would cause this.

