[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



8 hours ago, Schwaby412 said:

Hey all, during an upgrade of SWAG it failed. I've now removed it completely, along with any files associated with it, but when I go to reinstall fresh it won't launch. I get this error:

 

driver failed programming external connectivity on endpoint swag (c09c532ca4cfa43c71f2affae682c2387117adc17070dda5db1b07ecc3d7b35f): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.

 

Port 80 is already in use... you may have changed your network settings, e.g. to host mode while Unraid is already listening on port 80. Take a look at what you've set in your SWAG docker network settings.
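A quick way to see what is already holding the port (a sketch; run from the Unraid terminal):

# what is already listening on port 80 on the host?
netstat -tlnp | grep ':80 '
# is another container publishing it?
docker ps --format '{{.Names}}\t{{.Ports}}' | grep ':80->'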


Hi, I updated the Swag container and now my Bitwarden instance is not working anymore. Checking the Swag log, I found a message asking me to update the nginx conf files, so I updated the conf file inside the nginx folder with the new template, renamed the container from bitwardenrs to bitwarden as requested in that file, and set WEBSOCKET_ENABLED=true on the bitwarden container. I still can't access it from outside. Any hint?
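For reference, a minimal sketch of what the bitwarden container side needs to match the new conf (the image tag, network name and data path below are only examples):

# - the container name must be "bitwarden" so nginx can resolve it
# - it must share SWAG's custom docker network (here "proxynet")
# - WEBSOCKET_ENABLED=true turns on the websocket server on port 3012
docker run -d \
  --name=bitwarden \
  --net=proxynet \
  -e WEBSOCKET_ENABLED=true \
  -v /path/to/bitwarden/data:/data \
  bitwardenrs/server:latest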

 

Previous conf file

 

# BITWARDEN
# make sure that your dns has a cname or an a record set for the subdomain bitwarden
# This config file will work as is when using a custom docker network, the same as letsencrypt (proxynet).
# However, the container name is expected to be "bitwardenrs" as it is by default in the template, as this name is used to resolve.
# If you are not using the custom docker network for this container then change the line "server bitwardenrs:80;" to "server [YOUR_SERVER_IP]:8086;" Also remove line 7

resolver 127.0.0.11 valid=30s;
upstream bitwarden {
    server bitwardenrs:80;
}

server {
    listen 443 ssl;
    server_name bitwarden.*;
    include /config/nginx/ssl.conf;
    client_max_body_size 128M;

    location / {
        proxy_pass http://bitwarden;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /notifications/hub {
        proxy_pass http://bitwarden;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /notifications/hub/negotiate {
        proxy_pass http://bitwarden;
    }
}

 

New Conf file

 

## Version 2020/12/09
# make sure that your dns has a cname set for bitwarden and that your bitwarden container is not using a base url
# make sure your bitwarden container is named "bitwarden"
# set the environment variable WEBSOCKET_ENABLED=true on your bitwarden container

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name bitwarden.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 128M;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /admin {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /notifications/hub {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 3012;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location /notifications/hub/negotiate {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app bitwarden;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}


 

On 4/1/2021 at 1:02 PM, muppie said:

Hi! 

I rebooted my containers yesterday and after that my SWAG container won't listen on the ports I've chosen. It worked flawlessly the days before. I use 80/443 and they are port-forwarded in my pfSense. I thought I'd mucked something up in pfSense, so I wiped it and started over, but no success. When I tried Nginx Proxy Manager, the port is suddenly open, even on the same LAN IP. As soon as I stop Nginx Proxy Manager and start SWAG, the port is closed again. I have other port forwards set up in pfSense and they work too.

 

Swag is in br0.

 

Does anyone have a clue what happened? I'm on the latest version of SWAG. I've forced updates and wiped SWAG too, but no success.

There was nothing wrong with SWAG. I messed up my Cloudflare settings, which caused the error.


Hi,

I have been trying to set up something I am not sure is possible with my current setup and SWAG: basically, to reverse proxy HTTP-only services on my Unraid machine from a domain like photoprism.lan to their container IP and port (2342).

I have SWAG running on my Unraid 6.9.1 host, listening on ports 80 and 443; those are port-forwarded from my router for external access. I can successfully access my services running over HTTPS behind the subdomain certs I have generated for Nextcloud and Bitwarden: nextcloud.mydomain.com and bitwarden.mydomain.com. Everything also works fine internally: I have two entries on my Pi-hole internal DNS server that resolve nextcloud.mydomain.com and bitwarden.mydomain.com to the local Unraid IP where SWAG's nginx is listening.

 

Now I am trying to use the nginx reverse proxy in SWAG to locally access a new service on my Unraid, in this case PhotoPrism. The thing is that the PhotoPrism GUI runs on port 2342 over plain HTTP. I would like to access PhotoPrism with a domain (different from the external one used for Nextcloud and Bitwarden) and without needing to type the port each time, for example http://photoprism.lan with no port (I have added a DNS entry in Pi-hole to resolve photoprism.lan to the Unraid IP where SWAG's nginx is listening), but I have not found a way to configure a proxy conf in nginx that proxies this domain to the right IP and port. What I have tried, among many other things, is to put a file (local-servers.conf) inside the proxy-confs folder of nginx with:

 

server {
    listen 80;
    server_name photoprism.*;

    location / {

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app photoprism;
        set $upstream_port 2342;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}

 

I have also tried server_name photoprism.lan*.
Although the internal docker DNS resolves the container name fine, I have also tried setting the proxy_pass to the final docker IP and port, with no luck.
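For reference, such an IP-based variant would look roughly like this (a sketch only; the IP below is a placeholder):

server {
    listen 80;
    server_name photoprism.*;

    location / {
        include /config/nginx/proxy.conf;
        # placeholder host/container IP plus the PhotoPrism port
        proxy_pass http://192.168.1.10:2342;
    }
}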

When I try to go to http://photoprism.lan I get redirected to https://photoprism.lan/ and see the default nginx page:

 

Welcome to our server

The website is currently being setup under this address.

For help and support, please contact: me@example.com


Is this because, by default, only HTTPS is configured to be proxied?
Is there any way of allowing HTTP for the internal LAN without compromising security?
My certs are for subdomains, as stated above, like nextcloud.mydomain.com, but PhotoPrism is not in that domain (it's photoprism.lan) - does this cause the failure?

 

Thanks!


Hey, looking for some help with SWAG: nothing is getting banned at all. I tried to trigger a ban by failing the login 10 times in a row, and maxretry is set to 2. I thought it might be that I needed to add a .local filter for bitwardenrs and a path to the log, but even after doing this I'm still getting nothing.

 

[Definition]
failregex = ^\s*\[ERROR\]\s+Username or password is incorrect. Try again.(?:, 2FA invalid)?\. <HOST>$
 

this is my bitwarden.local inside filter.d

 

[bitwarden]

enabled  = true
filter   = bitwarden
logpath  = /config/log/containers/bitwarden.log
maxretry = 2
 

inside jail.local
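(For checking whether the filter actually matches the log, fail2ban-regex can be run inside the container - a sketch, assuming the container is named swag and the paths above:)

docker exec -it swag fail2ban-regex \
    /config/log/containers/bitwarden.log \
    /config/fail2ban/filter.d/bitwarden.local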

 

I have attached my fail2ban log.

I changed the fail2ban log level to DEBUG and then HEAVYDEBUG.

Still can't see why it isn't picking up the failed attempts.

 

 

Any help would be much appreciated.

 

fail2ban.log


Hi! Back again, not with an issue but just a question this time.

I know that by configuring SWAG a certain way and using a Cloudflare domain, you can hide your WAN IP behind Cloudflare's.

Instead of using a VPS/VPN as a layer between my server and the world, I'm tempted to do the same.

BUT, I'm not at Cloudflare. My domain (and webhosts) are at OVH. I know OVH has Let's Encrypt certificates, CDNs and all the bells and whistles, but I would like directions (or even a step-by-step if you don't mind) on how to set up SWAG with OVH to keep my WAN IP private.

It's pretty critical to nail this the first time, as I can't allow myself much downtime.
My Owncloud, Nextcloud, and virtualized desktops are used on a daily basis for some pretty time-sensitive jobs, and I would be as afraid of a few hours of downtime as of an attack targeting my WAN IP directly, the latter still being a very real threat with my current setup.

For info, the current setup is the good ol' no-ip gig: a no-ip tracker, with the CNAME field of the front-facing OVH domain filled with the no-ip domain, and SWAG handling the SSL certificate for the OVH domain.


Commenting to see if anyone has gotten Snapdrop working through a SWAG reverse proxy on Unraid. I tried setting up the configs myself with partial success, but it's not working properly. It would also be awesome to see a default config file for linuxserver/snapdrop included with SWAG!


Hi all, 

 

Quick (hopefully) question; 

I've followed SpaceInvaderOne's video about setting up a reverse proxy and Nextcloud, using SWAG to make it externally accessible.

Everything is working fine - and Nextcloud is the only externally accessible app I've setup. 

 

However, when I access my static IP address directly from a browser (not via nextcloud.mydomain.com) I get a 'Welcome to your SWAG instance' page.

 

Is this a security issue? Is there any way to direct ALL traffic that hits port 80 or 444 at my address straight to my Nextcloud instance?

 

Cheers! 

 

2 hours ago, BenW said:

However, when I access my static IP address directly from a browser (not via nextcloud.mydomain.com) I get a 'Welcome to your SWAG instance' page. Is this a security issue? Is there any way to direct ALL traffic that hits port 80 or 444 at my address straight to my Nextcloud instance?


That would make a reverse proxy more or less obsolete... you could then skip SWAG and just forward the ports to your NC instance directly, though cert creation would be another story...

 

As an option, use a rewrite rule in SWAG that sends all incoming requests to the root to your NC domain.
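A minimal sketch of that, assuming it goes into the default server block in /config/nginx/site-confs/default and that nextcloud.yourdomain.com stands in for your own subdomain:

    # inside the server block that serves the SWAG welcome page
    location / {
        # send anything hitting the bare IP / root domain to the Nextcloud subdomain
        return 301 https://nextcloud.yourdomain.com$request_uri;
    }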

2 hours ago, alturismo said:

 

That would make a reverse proxy more or less obsolete...

Thanks - I'll give the rewrite rule from root to the NC domain a go, but is leaving it as-is a security risk?

 

21 hours ago, BenW said:

Thanks - I'll give the rewrite rule from root to the NC domain a go, but is leaving it as-is a security risk?

 

Nope, all good, as long as you don't manually put extra stuff into the /www folder - and if you do, you should know what you're doing.


Having an issue migrating from the old letsencrypt image to SWAG. I followed the instructions on the repo, and now I'm getting:

 

Error determining zone_id: 9103 Unknown X-Auth-Key or X-Auth-Email. Please confirm that you have supplied valid Cloudflare API credentials. (Did you enter the correct email address and Global key?)
ERROR: Cert does not exist! Please see the validation error above. Make sure you entered correct credentials into the /config/dns-conf/cloudflare.ini file

 

I'm also getting this warning; I ran `chmod 600` on the file but the warning will not go away:

Unsafe permissions on credentials configuration file: /config/dns-conf/cloudflare.ini

My credentials are correct in cloudflare.ini. I've tried rolling my API token, generating completely new ones, even using the email/Global API Key combo, and nothing is working.
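For comparison, the two credential formats the file accepts look roughly like this (values are placeholders; use one or the other):

# /config/dns-conf/cloudflare.ini
# option 1: a scoped API token (needs Zone:Read and DNS:Edit on the zone)
dns_cloudflare_api_token = <your-api-token>

# option 2: account email plus the Global API Key
#dns_cloudflare_email = you@example.com
#dns_cloudflare_api_key = <your-global-api-key>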

 

Stumped here.

 

/var/log/letsencrypt.log hits the first exception here:

2021-04-14 12:28:36,683:DEBUG:urllib3.connectionpool:https://api.cloudflare.com:443 "GET /client/v4/zones?name=docker.muth.dev&per_page=1 HTTP/1.1" 403 None
2021-04-14 12:28:36,692:DEBUG:certbot._internal.error_handler:Encountered exception:
Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/certbot_dns_cloudflare/_internal/dns_cloudflare.py", line 187, in _find_zone_id
    zones = self.cf.zones.get(params=params)  # zones | pylint: disable=no-member
  File "/usr/lib/python3.8/site-packages/CloudFlare/cloudflare.py", line 672, in get
    return self._base.call_with_auth('GET', self._parts,
  File "/usr/lib/python3.8/site-packages/CloudFlare/cloudflare.py", line 126, in call_with_auth
    return self._call(method, headers, parts,
  File "/usr/lib/python3.8/site-packages/CloudFlare/cloudflare.py", line 502, in _call
    raise CloudFlareAPIError(code, message)
CloudFlare.exceptions.CloudFlareAPIError: Unknown X-Auth-Key or X-Auth-Email

 

I can curl the https://api.cloudflare.com/client/v4/user/tokens/verify endpoint just fine:

"messages":[{"code":10000,"message":"This API Token is valid and active","type":null}]

 

 

I am a genius.

 

I renamed the directory that my compose and config files live in from letsencrypt/ to swag/, but forgot to update the volume mount path as well. Amazing, lol. All is well.
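For anyone else doing the same rename: the host side of the config volume has to follow it too, e.g. a sketch assuming a compose file (paths are examples):

volumes:
  - /path/to/swag:/config   # was /path/to/letsencrypt:/config before the rename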


Hi all. I upgraded from letsencrypt to SWAG this week, and initially things were fine. But then a separate cache-corruption issue occurred and I had to reformat my cache drive. Now that I've run Mover twice (once to move everything to the array, and again after the reformat to move it back to the cache), things aren't working as expected: SWAG is failing to run and seems to be missing certain files. Honestly I'm not much of an expert here - I can follow along with @SpaceInvaderOne's videos and google enough to be dangerous, but this has me stuck. Would anyone be kind enough to provide some guidance on what I should try? I don't have a CA backup because the last one ran before I had SWAG, and I can't restore letsencrypt because the app has been delisted and I no longer have the template saved.

 

To support the app dev(s) visit:
Certbot: https://supporters.eff.org/donate/support-work-on-certbot

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Los_Angeles
URL=[***OMITTED***]
SUBDOMAINS=ombi,cloud
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=http
CERTPROVIDER=
DNSPLUGIN=
EMAIL=[***OMITTED***]
STAGING=false

Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d ombi.[***OMITTED***] -d cloud.[***OMITTED***]
E-mail address entered: [***OMITTED***]
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Renewing an existing certificate for ombi.[***OMITTED***] and cloud.[***OMITTED***]
An unexpected error occurred:

There were too many requests of a given type :: Error creating new order :: too many certificates already issued for exact set of domains: cloud.[***OMITTED***],ombi.[***OMITTED***]: see https://letsencrypt.org/docs/rate-limits/

Please see the logfiles in /var/log/letsencrypt for more details.
Can't open privkey.pem for reading, No such file or directory
22963416648520:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:69:fopen('privkey.pem','r')

22963416648520:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:76:

unable to load private key
cat: privkey.pem: No such file or directory
cat: fullchain.pem: No such file or directory
New certificate generated; starting nginx
Starting 2019/12/30, GeoIP2 databases require personal license key to download. Please retrieve a free license key from MaxMind,
and add a new env variable "MAXMINDDB_LICENSE_KEY", set to your license key.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 60-renew: executing...
Can't open /config/keys/letsencrypt/fullchain.pem for reading, No such file or directory
23299000830792:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:69:fopen('/config/keys/letsencrypt/fullchain.pem','r')

23299000830792:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:76:

unable to load certificate
The cert is either expired or it expires within the next day. Attempting to renew. This could take up to 10 minutes.
<------------------------------------------------->

<------------------------------------------------->
cronjob running on Sat Apr 17 11:46:29 PDT 2021
Running certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/ombi.[***OMITTED***]-0001.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/ombi.[***OMITTED***].conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/certbot/_internal/renewal.py", line 70, in _reconstitute
renewal_candidate = storage.RenewableCert(full_path, config)
File "/usr/lib/python3.8/site-packages/certbot/_internal/storage.py", line 468, in __init__
self._check_symlinks()
File "/usr/lib/python3.8/site-packages/certbot/_internal/storage.py", line 538, in _check_symlinks
raise errors.CertStorageError(
certbot.errors.CertStorageError: expected /etc/letsencrypt/live/ombi.[***OMITTED***]/cert.pem to be a symlink
Renewal configuration file /etc/letsencrypt/renewal/ombi.[***OMITTED***].conf is broken. Skipping.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The following certificates are not due for renewal yet:
/etc/letsencrypt/live/ombi.[***OMITTED***]-0001/fullchain.pem expires on 2021-07-16 (skipped)
No renewals were attempted.
No hooks were run.

Additionally, the following renewal configurations were invalid:
/etc/letsencrypt/renewal/ombi.[***OMITTED***].conf (parsefail)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0 renew failure(s), 1 parse failure(s)
[cont-init.d] 60-renew: exited 0.
[cont-init.d] 70-templates: executing...
**** The following reverse proxy confs have different version dates than the samples that are shipped. ****

**** This may be due to user customization or an update to the samples. ****
**** You should compare them to the samples in the same folder to make sure you have the latest updates. ****
/config/nginx/proxy-confs/ombi.subdomain.conf
/config/nginx/proxy-confs/nextcloud.subdomain.conf

[cont-init.d] 70-templates: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

 


Just a thing I found out, in case someone has the same problem and didn't know:
I tried to update the certs today but got a message that authentication did not work.
It turned out that since my domain is now on Cloudflare, I had to turn off the proxy for the subdomains SWAG is using; then the authentication worked, and afterwards I could turn the proxy setting back on.


Having issues with Overseerr. It works great and is fast and snappy with the network set to bridge. When I add it to proxynet, however, it is consistently slow and sometimes hangs for minutes at a time. This happens both via the local IP and the external domain. All of my other dockers on proxynet work fine. I brought this up on the Overseerr Discord support and they insist it's a docker problem.


Good evening all,

 

My web host raised their rates again and... well... screw that... $15/mo for a simple, mostly static, WP site... I don't think so.

 

I already have MariaDB, SWAG, and a Nextcloud instance set up and working. I blew away SWAG and redid it so that the certificate includes the top-level domain and not just the subdomains. My subdomain for Nextcloud is working, as well as one I called web.exampledomain.com [using my actual domain]. The web one goes straight to the SWAG page, as does www.exampledomain.com. That said, I get 'ERR_TOO_MANY_REDIRECTS' when I just try https://exampledomain.com.

 

I am using Cloudflare for dns. Once I get my site up and running, I plan to transition my DNS registration to Cloudflare entirely for $8/yr.

 

Any idea where to look as to why https://exampledomain.com would get a Too Many Redirects error while the subdomains (including www) do not?

5 minutes ago, wes.crockett said:

Any idea where to look as to why https://exampledomain.com would get a Too Many Redirects error while the subdomains (including www) do not?

It's always the setting you don't think about... I set SSL/TLS to Full on Cloudflare and it's good to go. (With Flexible, Cloudflare hits the origin over plain HTTP and SWAG redirects that back to HTTPS, hence the loop.)


Configuration change needed after latest Nextcloud update to Nextcloud 21.0.1

 

Error message:

[screenshot: Nextcloud admin warnings that /.well-known/webfinger and related endpoints are not resolving correctly]

 

My existing configuration in Swag:


server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name maindomain.dk;

    include /config/nginx/ssl.conf;
#   add_header X-Frame-Options "SAMEORIGIN" always; 
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";


    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }
}

 

I have found different solutions on the net that I can't "translate" into my SWAG configuration file.

 

Quote

To be more precise, I have added the following lines to my nginx config file:

    location = /.well-known/webfinger {
            rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    }

    location = /.well-known/nodeinfo {
            rewrite ^/.well-known/nodeinfo /public.php?service=nodeinfo last;
    }

 

Or this one:

location ^~ /.well-known {
        location = /.well-known/carddav     { return 301 /remote.php/dav/; }
        location = /.well-known/caldav      { return 301 /remote.php/dav/; }
        # Anything else is dynamically handled by Nextcloud
        location ^~ /.well-known            { return 301 /index.php$uri; }
        try_files $uri $uri/ =404;
    }

 

If anyone has this working, it would be great if you could share your configuration file ;-)


Update: I tried this:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name maindomain.dk;

    include /config/nginx/ssl.conf;
#   add_header X-Frame-Options "SAMEORIGIN" always; 
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";


    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app nextcloud;
        set $upstream_port 443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

        proxy_max_temp_file_size 2048m;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    location ^~ /.well-known {
        location = /.well-known/carddav     { return 301 /remote.php/dav/; }
        location = /.well-known/caldav      { return 301 /remote.php/dav/; }
        # Anything else is dynamically handled by Nextcloud
        location ^~ /.well-known            { return 301 /index.php$uri; }
        try_files $uri $uri/ =404;
    }
}

 

And I got it reduced to this last one:

[screenshot: only the /.well-known/webfinger warning remains]

 

To anyone finding this "webfinger" error:

The last error is related to the browser cache. In Chrome: open Dev Tools (F12), and while it is open, right-click the normal refresh button in the top left and select "Empty cache and hard reload".

And all is OK ;-)

[screenshot: the Nextcloud settings check now passes with no warnings]

 


I am trying to set my MaxMind key in the SWAG docker container by adding a variable.

 

It's not working :/ I would be most grateful for some advice.

 

Config Type: Variable

Name: Maxmind

Key: my Maxmind key

Value: MAXMINDDB_LICENSE_KEY=

Default value: -e

 

What's wrong with the above, please?

2 hours ago, Greygoose said:

I am trying to set my MaxMind key in the SWAG docker container by adding a variable. It's not working.

Config Type: Variable
Name: Maxmind
Key: my Maxmind key
Value: MAXMINDDB_LICENSE_KEY=
Default value: -e

What's wrong with the above, please?

Everything.

You have switched the key and the value, and you should also drop the '='. The default value isn't '-e' either; just leave it blank.
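In other words, the variable should end up looking roughly like this (the Name label is cosmetic; the Key must be exact):

Config Type: Variable
Name: MaxMind license key
Key: MAXMINDDB_LICENSE_KEY
Value: <your MaxMind license key>
Default value: (leave blank)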

3 hours ago, saarg said:

Everything.

You have switched the key and the value, and you should also drop the '='. The default value isn't '-e' either; just leave it blank.

 

Thank you,

 

I now have it working because of your help.

 

Much appreciated.


I'm having some problems with my reverse proxy for a particular container (Linuxserver's Airsonic).

I recently updated my containers (both Airsonic and Swag). Everything was working perfectly fine before the update.

Now I get the following error :

[screenshot of the error]

 

I am using the default subdomain.conf file provided with Swag (airsonic.subdomain.conf)

Just to make sure, I tried deleting the config file and using the new one that gets automatically downloaded. The result stays the same.

I can still access the Airsonic docker using the local address.

Also, all of my other containers using the Swag reverse proxy still work perfectly fine, so it seems isolated to Airsonic.

The Swag log does not bring up any errors, and the airsonic subdomain is shown in there.

 

Any ideas?

53 minutes ago, gustomucho said:

I'm having some problems with my reverse proxy for a particular container (Linuxserver's Airsonic). I recently updated my containers (both Airsonic and Swag) and everything was working perfectly fine before the update. [...] Any ideas?

Yes, the context path was added back to your airsonic template. Remove it and it will work again.

