[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



EDIT: I fixed the problem by adding a custom variable called "CERTBOT_DOMAIN" and putting in my domain name without the "duckdns.org" part; after that the server started and was reachable with a valid SSL cert. Yay! I hope I didn't break something and that this is just a small glitch in a chain of many tools.

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Hi, I'm trying to get DNS-over-HTTPS for my Android phones. It seems Google won't let you define your own DNS server while still getting an address over DHCP, something every computer and even iOS devices can do...

 

Well, I installed SWAG and tried everything from port mapping with 81/444 as suggested, to using a completely separate IP together with all the port forwarding, but it always fails the challenge. I assume the only problem is that the curl call doesn't use the correct address (see the "Could not resolve host" error below), but where do I change it to include the DuckDNS address? It tries to curl <DUCKDNSTOKEN>&txt=<CHALLENGETEXT>, which does not have http://... in front.

 

The error message shows the token I entered manually, so I'm pretty sure the curl call itself is just malformed!
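
For comparison, a successful DuckDNS TXT update is just an HTTPS GET against the duckdns.org update API, with the token and the challenge text passed as query parameters. A minimal sketch (subdomain, token and challenge are placeholders):

# Roughly what the auth hook needs to end up calling; values are placeholders:
curl -s "https://www.duckdns.org/update?domains=<your-subdomain>&token=<DUCKDNSTOKEN>&txt=<CHALLENGETEXT>&verbose=true"
# DuckDNS answers "OK" on success and "KO" on failure - which matches the "KO" visible in the log below.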

 

Can you tell me where I have to fix that? Here's the log output:

 

Quote

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...

-------------------------------------
[linuxserver.io ASCII-art logo]
Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/Berlin
URL=xxx.duckdns.org
SUBDOMAINS=www,
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=duckdns
CERTPROVIDER=
DNSPLUGIN=cloudflare
EMAIL=xxx
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Sub-domains processed are: -d www.xxx.duckdns.org
E-mail address entered: xxx
duckdns validation is selected
the resulting certificate will only cover the main domain due to a limitation of duckdns, ie. subdomain.duckdns.org
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Account registered.
Requesting a certificate for xxx.duckdns.org
Hook '--manual-auth-hook' for xxx.duckdns.org ran with output:
KOsleeping 60
Hook '--manual-auth-hook' for xxx.duckdns.org ran with error output:
[curl progress meter output omitted]
curl: (6) Could not resolve host: <MY_DuckDNS_Token>&txt=<TXT-content>

Certbot failed to authenticate some domains (authenticator: manual). The Certificate Authority reported these problems:
Domain: xxx.duckdns.org
Type: unauthorized
Detail: Incorrect TXT record "<Manually_entered_token>" found at _acme-challenge.xxx.duckdns.org

Hint: The Certificate Authority failed to verify the DNS TXT records created by the --manual-auth-hook. Ensure that this hook is functioning correctly and that it waits a sufficient duration of time for DNS propagation. Refer to "certbot --help manual" and the Certbot User Guide.

Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: Cert does not exist! Please see the validation error above. Make sure your DUCKDNSTOKEN is correct.

 

Edited by gentux
Found the solution!
Link to comment

Yesterday I upgraded my license from Basic to Plus. I am now seeing a situation where I am no longer able to stay logged in to Unraid.net. When I click the Sign In button in the upper right-hand corner, it brings up the login page. I enter my credentials and click Submit. I'm successfully logged in and the page closes. Then, a few seconds later, I am signed out (I didn't sign out myself) and have to sign in again.

 

I also notice that on the My Servers page I show as offline. The log file doesn't show any issues.

 

Any pointers to how to troubleshoot would be appreciated.

 

Thanks.

(Screenshots attached: the signed-out state and the My Servers page.)

Edited by dius
Link to comment

My SWAG container constantly stops working. I have Nextcloud, Guacamole, and Overseerr routed through SWAG. Over the past couple of years I've only had a couple of instances where I couldn't access one of these applications over https, and a restart of SWAG fixed it. Now I've gone as far as setting up a user script to restart SWAG every hour, but even that doesn't work. For example, I restarted SWAG less than 5 minutes ago and everything worked, but when I just checked again I can't access my apps.

Any help is appreciated.

Link to comment

I have set up Nextcloud, MariaDB, and SWAG, following SpaceInvader One's YouTube video as closely as I can. My problem is that after I change the config files for SWAG and Nextcloud, I can no longer access the Nextcloud webUI; it just brings up the default "Welcome to SWAG" page. Even when I shut down the SWAG container, the Nextcloud webUI stays the same. Has anyone had the same issue?

 

Edit - solved. When you are using a cache drive, the Docker container will by default put the config files on your cache drive. So where the video edits the files, use /mnt/cache instead of /mnt/user.
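
If in doubt, you can check which path the SWAG config files actually live under before editing. A quick sketch, assuming the default appdata share and a container named "swag":

# From the Unraid terminal: compare the user share view with the cache disk directly
ls -la /mnt/user/appdata/swag/nginx/proxy-confs/
ls -la /mnt/cache/appdata/swag/nginx/proxy-confs/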

 

 

Edited by Ben deBoer
Link to comment

Wondering if someone might be willing to help point me in the right direction on what is going on with my issue.

As of 3am this morning everything was working fine with my SWAG setup; then I went to bed... When I woke up today, my website (dlongo.net) is no longer accessible from inside my local network (the site just times out with ERR_CONNECTION_TIMED_OUT). But it seems to work fine if I turn on my VPN or access it from my mobile connection. Also, if I ping dlongo.net it resolves to the correct IP. Anyone have any ideas on what I can check? I'm just kind of lost at this point.

Link to comment
On 1/30/2022 at 5:48 AM, Rex099 said:

Wondering if someone might be willing to help point me in the right direction on what is going on with my issue.

As of 3am this morning everything was working fine with my SWAG setup; then I went to bed... When I woke up today, my website (dlongo.net) is no longer accessible from inside my local network (the site just times out with ERR_CONNECTION_TIMED_OUT). But it seems to work fine if I turn on my VPN or access it from my mobile connection. Also, if I ping dlongo.net it resolves to the correct IP. Anyone have any ideas on what I can check? I'm just kind of lost at this point.

Maybe your modem/router won't allow traffic to loop back in from the external interface. Look for an option called "NAT loopback" or "NAT hairpinning"; I think turning that on could solve your issue, although I'm not sure you really want that - it may be better to access the server internally with an address that isn't routed out and back in.
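
If the router has no such option, a common alternative is split DNS: make the domain resolve to the server's LAN IP from inside the network so local clients never leave the LAN. A minimal sketch, assuming a LAN IP of 192.168.1.10 (adjust for your setup, ideally in your local DNS server such as Pi-hole or the router's dnsmasq rather than per client):

# On a Linux LAN client, map the public hostname to the internal address:
echo "192.168.1.10  dlongo.net" | sudo tee -a /etc/hosts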

Link to comment

Hi all,

I keep seeing in the logs:

No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

If I restart the container, it doesn't appear in the logs, but eventually re-appears.

The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

echo $MAXMINDDB_LICENSE_KEY

It returns the correct value.

 

The only mention of this issue that I can find is this:

https://github.com/linuxserver/docker-swag/issues/139

 

Similar to that page, if I run:

# /config/geoip2db# ls -lah

it returns:
 

sh: /config/geoip2db#: not found

 

But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

I'm not sure how to manually check if those changes have been applied in the Docker or not.
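
One way to check from the Unraid host whether the key and the GeoIP database actually made it into the container is to look inside it directly. A sketch, assuming the container is named "swag":

# Key as the container sees it, and the key the cron job will actually use:
docker exec swag /bin/sh -c 'echo "$MAXMINDDB_LICENSE_KEY"'
docker exec swag cat /etc/libmaxminddb.cron.conf
# Database files (note: the "# /config/geoip2db#" in the linked issue is a shell prompt, not part of the command):
docker exec swag ls -lah /config/geoip2db
# Run the weekly update script by hand and watch its output:
docker exec swag /etc/periodic/weekly/libmaxminddb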

 

Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

 

EDIT: Solution found:

 

Edited by jademonkee
Added ref to later post w/ solution
Link to comment

Greetings friends, I'm seeking some troubleshooting help. I'm running SWAG and Nextcloud and have had that running smoothly for a few months. Last week, I suddenly couldn't access Nextcloud from outside my home network.

 

The specific error I get is: ERR_CONNECTION_TIMED_OUT

 

I still can access Nextcloud inside my home network with no issues. The only changes I made recently to Unraid were adding a new data disk and setting up an ssd as a cache drive. Nothing has been changed in my router config.

I've been reading this topic and checking various log files and such. The only error I'm seeing in the SWAG log is: 

No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1


I've verified that my IP is correct with duckdns. I've verified that my ports 80 and 443 are being forwarded to 180 and 1443 in my router config. However, when I use an online port checking tool I get a 'Connection timed out' error. My research on this error has suggested that it could indicate a problem with router config or my ISP is blocking those ports. 

Given that it was working up until a week ago, I'm not sure how to narrow this down. Has my ISP started blocking ports 80 and 443? Could my router have an issue even though my ports correctly show as forwarded in the config? Is it the MaxMind error? Is there something else entirely I should be digging into?
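
To help separate a router/ISP problem from a container problem, a couple of quick checks may be worth running (a sketch; hostname, IP and ports are the examples from this post):

# From a device outside your network (e.g. a phone on mobile data):
curl -v --connect-timeout 10 http://yourdomain.duckdns.org/
# From inside the LAN, confirm SWAG itself answers on the mapped ports:
nc -zv <unraid-ip> 180
nc -zv <unraid-ip> 1443
# If the LAN checks succeed but the external one times out, the issue is in front of Unraid
# (router port forward or ISP), not in the SWAG container.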

Many thanks for any help on this!

 

Running Unraid 6.9.2. Here's the full log in case that's helpful:

 

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/New_York
URL=mycustomdomain.duckdns.org
SUBDOMAINS=mycustomdomain
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=false
VALIDATION=duckdns
CERTPROVIDER=
DNSPLUGIN=cloudflare
[email protected]
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Sub-domains processed are: -d mycustomdomain.mycustomdomain.duckdns.org
E-mail address entered: [email protected]
duckdns validation is selected
the resulting certificate will only cover the main domain due to a limitation of duckdns, ie. subdomain.duckdns.org
Certificate exists; parameters unchanged; starting nginx
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing...
[cont-init.d] 70-templates: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

 

Link to comment
  • 2 weeks later...
On 2/6/2022 at 11:35 AM, vikingbrent said:

Greetings friends, I'm seeking some troubleshooting help. I'm running SWAG and Nextcloud and have had that running smoothly for a few months. Last week, I suddenly couldn't access Nextcloud from outside my home network.

 

The specific error I get is: ERR_CONNECTION_TIMED_OUT

 

I still can access Nextcloud inside my home network with no issues. The only changes I made recently to Unraid were adding a new data disk and setting up an ssd as a cache drive. Nothing has been changed in my router config.

Turns out restarting my router fixed everything. For whatever reason it had stopped forwarding ports. Just wanted to share in case someone else is beating their head against a wall with this issue. Cheers!

Link to comment

Hello,

 

I'm looking for a suggestion on how to set up SWAG / Nextcloud so that it also uses a port behind the domain...

 

All is working fine, port fwd etc.

 

1. When I enter the URL (from the internet, outside my LAN): mydomain.duckdns.org

It goes directly to my Nextcloud login page.

What I want to achieve is to type a URL with a port, e.g. mydomain.duckdns.org:8888, and then get the Nextcloud login page.

I might want to use this for multiple Docker containers, without creating/changing a config for each one (as SpaceInvader One does).

 

In example:

Nextcloud - "mydomain.duckdns.org:8888"
Grafana - "mydomain.duckdns.org:8889"
Plex - "mydomain.duckdns.org:8890"

 

 

Is this possible? If so, what do I need to change in the config? My current Nextcloud proxy conf is below (a rough port-based sketch follows it).

 

## Version 2021/05/18
# make sure that your dns has a cname set for nextcloud
# assuming this container is called "swag", edit your nextcloud container's config
# located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
#  'trusted_proxies' => ['swag'],
#  'overwrite.cli.url' => 'https://nextcloud.your-domain.com/',
#  'overwritehost' => 'nextcloud.your-domain.com',
#  'overwriteprotocol' => 'https',
#
# Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
#  array (
#    0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
#    1 => 'nextcloud.your-domain.com',
#  ),

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name mydomain.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }
}
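
For what it's worth, a port-per-app setup is possible in principle: each extra external port needs its own server block listening on that port, plus a matching port mapping on the SWAG container and a forward on the router. A rough sketch only, not the recommended subdomain approach; the port number, file location and upstream values are assumptions for illustration:

# hypothetical extra conf, e.g. dropped into /config/nginx/site-confs/
server {
    listen 8888 ssl;
    listen [::]:8888 ssl;

    server_name mydomain.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}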

 

 

2. After I set up SWAG, when I try to open Nextcloud locally via its WebUI button, it redirects to "mydomain.duckdns.org" and I'm not able to open it.

The only way it works is if I enter the Nextcloud Docker IP and port manually: 192.168.1.123:8989

How do I get it working the "old way"?


thanks

 

 

Link to comment
On 2/5/2022 at 7:10 AM, jademonkee said:

Hi all,

I keep seeing in the logs:

No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

If I restart the container, it doesn't appear in the logs, but eventually re-appears.

The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

echo $MAXMINDDB_LICENSE_KEY

It returns the correct value.

 

The only mention of this issue that I can find is this:

https://github.com/linuxserver/docker-swag/issues/139

 

Similar to that page, if I run:

# /config/geoip2db# ls -lah

it returns:
 

sh: /config/geoip2db#: not found

 

But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

I'm not sure how to manually check if those changes have been applied in the Docker or not.

 

Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?

Link to comment
On 11/10/2021 at 3:52 PM, Astrayel said:

Hi all,

I have had Unraid for two years now and I've always found solutions to my issues.

But today, I can't.

 

I use SWAG as a reverse proxy with subdomains for many containers. Everything is fine except for Plex, which ran perfectly until three days ago and then stopped without warning. I can't figure out what's going wrong.

In SWAG log, I have lines like this :

*11 plex could not be resolved (3: Host not found), client: X.X.X.X, server: plex.*, request: "GET /favicon.ico HTTP/2.0", host: "plex.mydomain.com", referrer: "https://plex.mydomain.com/"

 

When I open a console in the SWAG container to look up the names, here is the result:

 nslookup radarr
Server:         127.0.0.11
Address:        127.0.0.11:53
Non-authoritative answer:
*** Can't find radarr: No answer
Non-authoritative answer:
Name:   radarr
Address: 172.18.0.10
root@XXX:/# nslookup plex
Server:         127.0.0.11
Address:        127.0.0.11:53

** server can't find plex: NXDOMAIN

** server can't find plex: NXDOMAIN

 

The container has the right name (plex). However, it is in network type host, whereas radarr is on the same Docker network as SWAG.

 

And I did not change anything apart from updating the container.

 

Any idea ?


A little late, but this may help someone who has the same issue.

I'm not sure why Plex behaves differently from other containers and I didn't investigate the cause further, but adding the following as "Extra Parameters" in the container configuration solved the problem for me:
 

-p '32400:32400' -h 'plex.example.org'


Also, I had to pass the hostname as "-h" to get Plex to start. After that, Plex can be pinged from SWAG and resolves correctly.
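
To confirm the fix from the SWAG side, a quick check from the Unraid terminal (a sketch, assuming the proxy container is named "swag"):

# Name resolution and reachability from inside the proxy's network:
docker exec swag nslookup plex
docker exec swag curl -sk https://plex:32400/identity   # may need plain http depending on Plex's "Secure connections" setting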

Link to comment

Hello,

 

I am getting a lot of error messages in my error.log and have no clue what they could mean. I looked it up on Google, but that didn't really help much. As far as I can tell everything works, but these errors are new / I never noticed them before. I tried increasing the timeouts, which didn't help.

 

Error Log example:

2022/02/20 14:40:31 [error] 634#634: *10069 upstream prematurely closed connection while reading response header from upstream, client: xxxxxxx, server: plex.*, request: "GET /:/websockets/notifications?X-Plex-Token=xxxxxxxx-Plex-Language=de-de HTTP/1.1", upstream: "https://192.168.3.100:32400/:/websockets/notifications?X-Plex-Token=QR-xxxxxxxX-Plex-Language=de-de", host: "xxxxxxx"


 

Proxy Config:

client_body_buffer_size 128k;

#Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 240s;
proxy_send_timeout 240s;
proxy_connect_timeout 240s;

# TLS 1.3 early data
proxy_set_header Early-Data $ssl_early_data;

# Basic Proxy Config
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Ssl on;
proxy_redirect  http://  $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
#proxy_cookie_path / "/; HTTPOnly; Secure"; # enable at your own risk, may break certain apps
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 32 4k;
proxy_headers_hash_bucket_size 128;
proxy_headers_hash_max_size 1024;

 

Plex Config:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name plex.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;
    proxy_redirect off;
    proxy_buffering off;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;
    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app 192.168.3.100;
        set $upstream_port 32400;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_set_header X-Plex-Client-Identifier $http_x_plex_client_identifier;
        proxy_set_header X-Plex-Device $http_x_plex_device;
        proxy_set_header X-Plex-Device-Name $http_x_plex_device_name;
        proxy_set_header X-Plex-Platform $http_x_plex_platform;
        proxy_set_header X-Plex-Platform-Version $http_x_plex_platform_version;
        proxy_set_header X-Plex-Product $http_x_plex_product;
        proxy_set_header X-Plex-Token $http_x_plex_token;
        proxy_set_header X-Plex-Version $http_x_plex_version;
        proxy_set_header X-Plex-Nocache $http_x_plex_nocache;
        proxy_set_header X-Plex-Provides $http_x_plex_provides;
        proxy_set_header X-Plex-Device-Vendor $http_x_plex_device_vendor;
        proxy_set_header X-Plex-Model $http_x_plex_model;
    }
}

 

Errors also show up in Plex (screenshot attached).

Link to comment
On 2/5/2022 at 7:10 AM, jademonkee said:

Hi all,

I keep seeing in the logs:

No MaxMind license key found; exiting. Please enter your license key into /etc/libmaxminddb.cron.conf
run-parts: /etc/periodic/weekly/libmaxminddb: exit status 1

If I restart the container, it doesn't appear in the logs, but eventually re-appears.

The key I have provided in the Docker variable 'GeoIP2 License key' is current and correct, and if I run the command

echo $MAXMINDDB_LICENSE_KEY

It returns the correct value.

 

The only mention of this issue that I can find is this:

https://github.com/linuxserver/docker-swag/issues/139

 

Similar to that page, if I run:

# /config/geoip2db# ls -lah

it returns:
 

sh: /config/geoip2db#: not found

 

But the page says that the issue has been solved. Could it be that I had to manually apply those changes? I'm usually pretty good at looking at the logs after an update to see if any configs need to be manually updated, but maybe I missed it?

I'm not sure how to manually check if those changes have been applied in the Docker or not.

 

Your help is appreciated - I'm concerned that Geo IP blocking is not working while this is happening.

 

On 2/16/2022 at 1:38 PM, jademonkee said:

I'm still receiving this error after the periodic (weekly) license check. Is anyone else using Geo IP blocking seeing the same error in their logs? Should I be seeking support in a different forum?

I'm still receiving this error.

Am I looking in the wrong place for a solution to this? Are LSIO still present in this forum?

Link to comment
5 hours ago, arturovf said:

can you share your config for mattermost please

#upstream backend {
#   server 10.10.10.2:8065;
#   keepalive 32;
#}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
   listen 80;
   server_name _;
   return 301 https://$host$request_uri;
}

server {
   listen 443 ssl;

   server_name    mattermost.*;

   location ~ /api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       client_body_timeout 60;
       send_timeout 300;
       lingering_timeout 5;
       proxy_connect_timeout 90;
       proxy_send_timeout 300;
       proxy_read_timeout 90s;
       proxy_pass http://UNRAIDIP:8065;
   }

   location / {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_http_version 1.1;
       proxy_pass http://UNRAIDIP:8065;
   }
}

 

I hope it helps

Link to comment
10 hours ago, arturovf said:

Unfortunately no; at first it works, but after a few minutes of using it (logging off and trying to log on again) it crashes nginx entirely. (Other reverse-proxied sites stop working as well.)

Managed to narrow it down to fail2ban banning me 😮

 

When hitting the Mattermost login screen I get some 401 lines in the nginx unauthorized log, which trigger a fail2ban ban:

REMOTE_IP - - [23/Feb/2022:22:49:20 -0600] "GET /plugins/playbooks/api/v0/settings HTTP/2.0" 401 15 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15"
REMOTE_IP - - [23/Feb/2022:22:49:20 -0600] "GET /plugins/playbooks/api/v0/bot/connect HTTP/2.0" 401 15 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15"
REMOTE_IP - - [23/Feb/2022:22:49:20 -0600] "GET /api/v4/teams?page=0&per_page=200&include_total_count=false&exclude_policy_constrained=false HTTP/2.0" 401 202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15"
REMOTE_IP - - [23/Feb/2022:22:49:20 -0600] "GET /api/v4/analytics/old?name=standard&team_id= HTTP/2.0" 401 202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.3 Safari/605.1.15"

Does anybody have a clue what might be happening?
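
If it is indeed fail2ban, you can confirm and lift the ban from inside the SWAG container while sorting out the 401s. A sketch, assuming the container is named "swag" (the jail name is an assumption; the first command lists the real ones):

# List jails, inspect the one watching the unauthorized log, then unban your IP:
docker exec swag fail2ban-client status
docker exec swag fail2ban-client status nginx-unauthorized
docker exec swag fail2ban-client set nginx-unauthorized unbanip REMOTE_IP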

 

This is my config, based on input from fellow user @Heciruam and the Mattermost documentation (https://docs.mattermost.com/install/config-ssl-http2-nginx.html):

upstream backend {
    server 192.168.52.182:8065;
    keepalive 32;
    }

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
   listen 80;
   server_name board.*;
   return 301 https://$host$request_uri;
}

server {
   listen 443 ssl;

   server_name    board.*;
   
    

    location ~ /api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        client_body_timeout 60;
        send_timeout 300;
        lingering_timeout 5;
        proxy_connect_timeout 90;
        proxy_send_timeout 300;
        proxy_read_timeout 90s;
        proxy_pass http://backend;
    }

    location / {
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache mattermost_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_http_version 1.1;
        proxy_pass http://backend;
    }
}

 

Edited by arturovf
code tags
Link to comment

Edit: I made a new container using SWAG instead of the old letsencrypt one. I only changed the domain and email settings: same result.

 

Edit 2: Using "nc -l localhost -p 80" and shutting down the SWAG container, I made sure I could access port 80 from outside. I'm not sure what else I changed, but now it works. You're still welcome to help, but for now I'll be adding my old settings back slowly.

 

Hi, I just had to change my domain and now I can't seem to make letsencrypt/SWAG work again. I changed the domain name in every file and setting I could think of, but I guess I'm forgetting something important.

 

Requesting a certificate for mydomain.fun and www.mydomain.fun

Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:

Domain: mydomain.fun
Type: connection
Detail: Fetching http://mydomain.fun/.well-known/acme-challenge/AxgorMtHjklmjngO0kvrKsu3Pi-EuATqWmPA9x-tvUc: Timeout during connect (likely firewall problem)

Domain: www.mydomain.fun
Type: connection
Detail: Fetching http://www.mydomain.fun/.well-known/acme-challenge/Lo35xswjM0aVaWMmlHuYYLNu3VgF5GEHvGHSGGPeiao: Timeout during connect (likely firewall problem)

Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.


Some challenges have failed.

Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

 

"www" is the only subdomain I tried adding so far.

 

Over at Namecheap, I've got:

AAAA Record        @       *IPV6 address*

Cname Record      ombi   mydomain.fun

Cname Record      www   mydomain.fun

 

 

Any idea what I might have forgotten, or where I could find more info? The logs I'm getting aren't super useful (from the little I understand).
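
Since Let's Encrypt validates against whatever the public DNS returns (and with only an AAAA record above, it will try IPv6), it may be worth checking what the records actually resolve to and whether port 80 is reachable on that address. A sketch with placeholder names:

# What the public DNS currently returns:
dig +short A mydomain.fun
dig +short AAAA mydomain.fun
dig +short CNAME www.mydomain.fun
# From outside the LAN, check that port 80 actually reaches the server (a 404 here is fine, a timeout is not):
curl -v --connect-timeout 10 http://mydomain.fun/.well-known/acme-challenge/test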

Edited by Matmat07_2
More debug, SOLVED, kinda..
Link to comment
