[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

10 minutes ago, Titopr21 said:

modify server name, add binhex- before mineos.*; it should be server_name binhex-mineos.*;

 

Thanks for the suggestion; I just tried that and it didn't work. I always thought that the server_name was for the subdomain, which I had set to mineos. Whenever I try binhex-mineos I get the same result, it just changes the URL from mineos.domain.com to binhex-mineos.domain.tech.

I think the only thing that needs to mention binhex is the set $upstream_mineos line, and I've got that set to the Docker container name.
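For reference, the relevant lines would look something like the sketch below, following the same pattern as the other proxy confs in this thread. The internal port (8443) and https are assumptions about the MineOS web UI, so substitute whatever your container actually uses:

# hypothetical mineos.subdomain.conf sketch -- container name, port and protocol are assumptions
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mineos.*;                 # matches the subdomain, not the container name

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app binhex-mineos;  # must match the Docker container name exactly
        set $upstream_port 8443;          # the container's internal web UI port (assumed)
        set $upstream_proto https;        # MineOS usually serves its UI over https (assumed)
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}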

Link to comment
2 hours ago, Alex.b said:

Hello,

 

I'm trying to set up letsencrypt with a reverse proxy. I'm blocked on the first step:


Domain: ****cloud.duckdns.org
Type: connection
Detail: Fetching
http://****cloud.duckdns.org/.well-known/acme-challenge/4u00bIH-L*******:
Timeout during connect (likely firewall problem)

ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

I did all ports forwarding on my router:

 

[screenshot: router port forwarding rules]

 

On my let's encrypt docker:

[screenshot: letsencrypt docker configuration]

 

Locally, when I point to ***cloud.duckdns.org, I land on my router's configuration page.

On 4G (outside my network), I have a timeout.

 

When I check the ports on https://www.canyouseeme.org/, I get a success for port 280 but an error for port 2443.

 

Update: after some research, it turns out my ISP blocks port 443 for forwarding. What can I do?

Your port forwarding rules are wrong: you switched destination and external. External should be 80 & 443; destination should be 280 & 2443.

Link to comment

I came across this in my letsencrypt container log. It's the only highlighted text and everything seems to be working ok, but I don't want to be complacent.

nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

I'm currently trying to resolve some nextcloud iOS app camera roll upload issues and just making sure this isn't related.

 

Link to comment
24 minutes ago, bigbangus said:

I came across this in my letsencrypt container log. It's the only highlighted text and everything seems to be working ok, but I don't want to be complacent.


nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

I'm currently trying to resolve some nextcloud iOS app camera roll upload issues and just making sure this isn't related.

 

Not related, just an alert

Link to comment
5 hours ago, Nosirus said:

Is it really useful to create a proxy network? What's the point of doing it?

 

I guess it must be awkward with the WireGuard plugin unless you're using Heimdall or Organizr?

It's just another bridge network, like the default bridge all containers run on. The difference is that a user-defined bridge allows containers to connect to each other using container names as DNS hostnames.

 

See here: https://blog.linuxserver.io/2017/10/17/using-docker-networks-for-better-inter-container-communication/
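That name-based lookup is what the bundled proxy confs rely on; a minimal sketch of the pattern (the app name and port below are placeholders):

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS, available on a user-defined bridge
        set $upstream_app someapp;       # resolved by container name, so both containers must be on that network
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }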

Link to comment

For the life of me I can't figure out what I'm doing wrong.

I have three duckdns subdomains set up, one each for the server, nextcloud, and sonarr; they ping fine and point to my external IP.

I use Google Domains and have three CNAME records (server, sonarr, and nextcloud) pointed at the server duckdns subdomain. These records seem to be working as well; I can ping them and they resolve to the external IP.

I have port forwards set up to send ports 443 and 80 on to 1443 and 180.

 

Hopefully someone can figure out what's wrong, I've gone around in circles and am at wit's end, thank you!

[two screenshots attached]

Link to comment
3 hours ago, homefloyd said:

For the life of me I can't figure out what I'm doing wrong.

I have three duckdns subdomains set up, one each for the server, nextcloud, and sonarr; they ping fine and point to my external IP.

I use Google Domains and have three CNAME records

Why get duckdns involved if you have your own domain?

Link to comment
11 hours ago, trurl said:

Why get duckdns involved if you have your own domain?

I was following along with the reverse proxy video by Spaceinvader One; that's what he did and it worked, so I figured I shouldn't deviate. Any suggestions on what I might be doing wrong?

Link to comment
8 minutes ago, homefloyd said:

I was following along with the reverse proxy video by Spaceinvader One; that's what he did and it worked, so I figured I shouldn't deviate. Any suggestions on what I might be doing wrong?

Theoretically it should work.

How long ago did you set up the CNAME? It can take up to 48 hours to get updated and propagated.

Try using the Google DNS server 8.8.8.8, since it's theoretically the fastest to be updated with Google's own domain settings.

 

Having a CNAME of your own domain point to duckdns is probably not a good idea, to be honest.

 

Can you just use Google's own DDNS instead (assuming you don't have a static IP)?

In the DNS page of the Google Domains website, look for Synthetic records and add a Dynamic DNS entry; this generates a subdomain (e.g. subdomain.example.com) plus a username and password.

Then use those credentials to configure your router's DDNS settings.

Then point your CNAMEs at the DDNS subdomain, e.g. server -> subdomain.example.com.

 

With a static IP, obviously use an A record instead.

Link to comment

Hello everyone. Let me first preface this by saying that I'm not 100% sure it's this container that's causing my problem, but since it's the one that points to Cloudflare, it's a good candidate.

 

So I have several subdomains set up that have worked fine for months. Connection and speed were great. Within the last few weeks, though, the connections via the subdomains have become sluggish and sometimes don't even complete properly. All the subdomains in Cloudflare use SSL and are directed to DuckDNS to ensure proper routing. If I connect via IP locally or use the DuckDNS address with the proper port, it all works as it should; it's only when I try to connect via a Cloudflare subdomain that I experience issues.

 

Has this happened to anyone else? If so, any suggestions? I've checked the letsencrypt log shown in the Docker area, but there are no errors.

 

Thanks.

Link to comment

I've been pulling my hair out trying to figure out what I am doing wrong getting Piwigo to work with letsencrypt.

I used Spaceinvader One's guide to get them up and running, and have several other subdomains working: Jellyfin, Radarr, Sonarr, Lidarr, etc. So letsencrypt is working, and I am getting no errors there.
I have my subdomains/DDNS set up through freedns.afraid.org.

I set up Piwigo this morning, changed the port mapping from 80 to 8018, and it works locally.
But trying to reach it via piwigo.mydomain.com gives me a 502 Bad Gateway error.

Here is the piwigo.subdomain.conf file I am using.
 

# make sure that your dns has a cname set for piwigo

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name piwigo.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app piwigo;
        set $upstream_port 8018;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}




Looking at my nginx error.log, I just noticed it's giving me this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: piwigo.*, request: "GET / HTTP/2.0", upstream: "http://172.18.0.8:8018/", host: "piwigo.mydomain.com"


2020/07/19 15:33:28 [error] 432#432: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: piwigo.*, request: "GET /favicon.ico HTTP/2.0", upstream: "http://172.18.0.8:8018/favicon.ico", host: "piwigo.mydomain.com", referrer: "https://piwigo.mydomain.com/"

Obviously, my actual domain has been replaced with mydomain.com in the errors above and in the .conf file.

So what am I doing wrong?


 

Edited by buellmule
Link to comment
10 minutes ago, buellmule said:

I've been pulling my hair out trying to figure out what I am doing wrong getting Piwigo to work with letsencrypt. [...] I set up Piwigo this morning, changed the port mapping from 80 to 8018, and it works locally. But trying to reach it via piwigo.mydomain.com gives me a 502 Bad Gateway error. [...] Looking at my nginx error.log: *1 connect() failed (111: Connection refused) while connecting to upstream [...] So what am I doing wrong?

Don't change the port in the proxy conf. Leave it at the default value. The containers talk internally using the container port and not the host port.
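In this case the relevant lines of piwigo.subdomain.conf should keep the container's internal port (80, going by the post above) rather than the 8018 host mapping; a sketch:

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app piwigo;
        set $upstream_port 80;    # the container port; the 8018 host-side mapping is only for direct LAN access
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;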

Link to comment
2 minutes ago, saarg said:

Don't change the port in the proxy conf. Leave it at the default value. The containers talk internally using the container port and not the host port.

DOOOH! Thank you!!!!
I thought that since I changed the port to 8018, I needed to change it in the conf file too. All fixed and working in less than 30 minutes. THANK YOU!

Link to comment

Hi,

I have successfully set up several containers using letsencrypt and they are all working well, but I would now like to access the GUI of my two unRAID servers remotely in the same way. I know the port is 80 and I have set up my CNAME records, but I do not know how to write a proxy conf file for this. I have all the examples in the proxy-confs folder, but there is nothing for the unRAID GUI. Can anyone help, please?

 

Many Thanks!

Link to comment
11 hours ago, mbc0 said:

I would like to be able to access the GUI of my two unRAID servers remotely in the same way [...] there is nothing in the proxy-confs folder for the unRAID GUI. Can anyone help, please?

I would advise you to use a VPN if you need to access the unRAID GUI. If you have letsencrypt on a custom bridge, it will not be able to talk to one of the servers anyway.

Link to comment

Hello, 

 

Can I start by saying that I only moved over to unRAID from Synology in the past 48 hours, and the support, both technical and from the community, has been first class. Things are so much smoother and easier; I should have made the transition a long time ago.

 

I've followed several of Spaceinvader One's excellent YouTube videos and have successfully set up Sonarr, Radarr, Sabnzbd, DuckDNS, MariaDB, Bitwarden, Letsencrypt, NextCloud, server-wide encryption, a cache drive, and several plugins, and I'm very happy. The Linuxserver containers are first class.

 

One issue I have been stuck on for the past few hours, though, is getting NextCloud to work with Letsencrypt. I managed to set it up and had it working locally; however, when I moved it behind the reverse proxy like I did with Bitwarden, I get timeout errors.

 

I have looked through the forums for similar issues; however, none of the potential solutions have fixed it.

 

The CNAME entries have been checked and double-checked. They were entered only today; however, the Bitwarden one took effect instantly. I have double-checked the Letsencrypt and NextCloud config files, and they appear correct too. I've run out of ideas.

 

Thanks in advance. 

 

I have attached several screenshots below, which I hope will be enough for someone to point me in the right direction; if any further information is needed, please let me know.

[six screenshots attached]

Link to comment

I think this letsencrypt thing is the hardest part of unRAID.

I have tried multiple things and cannot get it working.

I bought a domain at Namecheap.

[screenshot: Namecheap DNS records]

I placed CNAME records: in the host field I put server and sonarr, and in the value field I put my duckdns URL.

In my Asus router with Merlin firmware I set up port forwarding:

[screenshot: router port forwarding rules]

When I check the ports, 180 shows as open and 1443 gives no response. As far as I can tell, my ISP is not blocking ports 443 and 80.

When I install letsencrypt following the Spaceinvader One guide, I see these errors in the log:

Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Let's Encrypt: https://letsencrypt.org/donate/

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/London
URL=xxxx.xx
SUBDOMAINS=sonarr,server
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
DNSPLUGIN=
[email protected]
STAGING=

SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d sonarr.xxxx.xx -d server.xxxx.xx
E-mail address entered: [email protected]
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for server.xxxx.xx
http-01 challenge for sonarr.xxxx.xx
Waiting for verification...
Challenge failed for domain server.xxxx.xx

Challenge failed for domain sonarr.xxxx.xx

http-01 challenge for server.xxxx.xx
http-01 challenge for sonarr.xxxx.xx
Cleaning up challenges
Some challenges have failed.

IMPORTANT NOTES:
- The following errors were reported by the server:

Domain: server.xxxx.xx
Type: connection
Detail: Fetching
http://server.xxxx.xx/.well-known/acme-challenge/3yva4Rpbi0bKXT6gm3yl23U759AHPNh6nYTkfSg2FGk:
Timeout during connect (likely firewall problem)

Domain: sonarr.xxxx.xx
Type: connection
Detail: Fetching
http://sonarr.xxxx.xx/.well-known/acme-challenge/Hch6QTW5eTlxfTKK1fh-P38kWMBVcsRjDcCk2f7Rkb0:
Timeout during connect (likely firewall problem)

To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

The router has DDNS enabled; I don't know if this is a factor. I'm a novice at all this networking and domain stuff.

What am I doing wrong? How can I make letsencrypt work? Could it be an issue with duckdns?

Edited by J05u
Link to comment
22 minutes ago, J05u said:

I think this letsencrypt thing is the hardest part of unRAID. [...] I placed CNAME records: in the host field I put server and sonarr, and in the value field I put my duckdns URL. In my Asus router with Merlin firmware I set up port forwarding. [...] Timeout during connect (likely firewall problem) [...] What am I doing wrong? How can I make letsencrypt work? Could it be an issue with duckdns?

 

I'm finding my way too; however, I believe you have your internal and external ports the wrong way round in your router's firewall rules.

[screenshot attached]

Link to comment
46 minutes ago, J05u said:

I bought a domain at Namecheap.

46 minutes ago, J05u said:

The router has DDNS enabled; I don't know if this is a factor. I'm a novice at all this networking and domain stuff.

What am I doing wrong? How can I make letsencrypt work? Could it be an issue with duckdns?

1) You have your own domain, 2) you are using duckdns, and 3) your router has DDNS.

 

You should only need one of these three. Since your router has DDNS, why get the other things involved at all?

 

Link to comment
20 minutes ago, trurl said:

1) You have your own domain, 2) you are using duckdns, and 3) your router has DDNS.

 

You should only need one of these three. Since your router has DDNS, why get the other things involved at all?

 

I think because of my limited knowledge. I have now managed to get it working with duckdns and my domain name; duckdns is only used for my CNAMEs, as I don't have a static IP.

@LoneTraveler you were right, I had set up the port forwarding wrong. I had tried that before and nothing worked, but I think that was because of the CNAMEs, so I then messed up the port forwarding.

 

Edited by J05u
Link to comment
22 hours ago, LoneTraveler said:

One issue I have been stuck on for the past few hours, though, is getting NextCloud to work with Letsencrypt. I managed to set it up and had it working locally; however, when I moved it behind the reverse proxy like I did with Bitwarden, I get timeout errors. [...]

 

Hello, 

 

I've managed to resolve my issue. For anyone who is having similar difficulty, I'll explain here.

 

After creating the NextCloud database in MariaDB, I had to edit the 'custom.cnf' file in "...mnt/cache/.appdata/mariadb".

Specifically, I edited line 124, changing '#bind-address=0.0.0.0' to 'bind-address=0.0.0.0' without quotations. 

 

This allowed me to progress through the initial admin setup page of NextCloud without the 504 error. 

 

One last point, though: when I placed NextCloud behind the Letsencrypt reverse proxy, I had to restore the above edit to its default, i.e. put the # back, before I could connect.

 

Hope this helps. 

Link to comment

Is there a way to edit a proxy-conf file to direct traffic to an external machine?

Basically, I had this set up and working with Tautulli and letsencrypt in Docker containers on my server. I've since moved my Tautulli installation to an external machine for better tracking and notifications. However, I'd like to forward the traffic that was going to my old container via tautulli.mydomain.com to the new machine on the network. I've got my ports opened up and tried some basic changes to tautulli.subdomain.conf, but no luck.

I'm not even certain this is possible. But I figured I'd ask! Thanks!
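In case it helps anyone suggest a fix, the sort of edit I attempted in tautulli.subdomain.conf looked roughly like this (the IP and port are placeholders for my setup):

    location / {
        include /config/nginx/proxy.conf;
        # point at the external machine's LAN address instead of the old container name
        set $upstream_app 192.168.1.50;   # placeholder LAN IP of the machine now running Tautulli
        set $upstream_port 8181;          # placeholder; whatever port Tautulli listens on there
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }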

Link to comment
