[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

1 hour ago, Urbanpixels said:

my swag is refusing to run with the below error:

 

An unexpected error occurred:
AttributeError: module 'certbot.interfaces' has no attribute 'IAuthenticator'

 

Had to force version 1.32.0 to get it working again.

I just found this issue too. Forcing the previous version seems to be the only fix for now. It has been flagged on GitHub, so hopefully a fix won't take too long: https://github.com/linuxserver/docker-swag/issues/297
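For anyone unsure how to pin, here is a sketch of what "forcing a previous version" looks like in a compose file. Whether 1.32.0 maps to an image tag or to certbot itself isn't stated above, so the tag below is a placeholder; check the image's published tag list for the right one.

```yaml
# docker-compose sketch: pin the SWAG image to a known-good tag instead
# of :latest until the certbot issue is fixed. The tag is a placeholder.
services:
  swag:
    image: lscr.io/linuxserver/swag:<previous-working-tag>
    container_name: swag
```

On Unraid the same thing is done by appending the tag to the repository field in the container template.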

  • Upvote 1
Link to comment

Hi All, 
First time posting. Normally I struggle through solutions, but I have hit a bit of a wall. 

Long story short, SWAG was working normally with several dockers, including OpenVPN, up until the day my router failed. I have since purchased and set up a new router (with port forwarding), but I cannot get SWAG to work properly.

So far I have tried the following:

  • checked SWAG access via local IP - CONFIRMED
  • checked the port forward is working correctly by toggling it on and off and watching the SWAG logs - CONFIRMED
  • deleted appdata and restarted SWAG
  • created a new docker network

 

So far it looks like SWAG is running, but I cannot get the SWAG splash page or any docker container to load when using a web address. SWAG log attached.

 

Any tips, pointers, or direction are appreciated.

 

EDIT: Log now shows this error repeatedly 

nginx: [emerg] cannot load certificate "/config/keys/cert.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/cert.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)

Thanks 
John

Swag Log .txt

Edited by JM Benoit
Link to comment

I'm hoping someone can help me check my thought process and see if I'm off base, or if what I'm thinking is possible with my current SWAG configuration:

 

It's working perfectly for all my various containers on Unraid. However, I have a new use case where I want to reverse proxy to another host on the local network (completely separate from the docker network I'm using). Routing is going to be a problem, no?

 

My goal is for external traffic looking for server1.myhome.com to hit SWAG and then be proxied to server1's internal IP address and port. My guess is that the routing from the docker network to the local area network (which Unraid sits on as well) is going to be the problem?

 

Would I use the subdomain template to reverse proxy this request?  Thanks for any advice and insight!

Link to comment
12 hours ago, alturismo said:

To make it short: yes, this works. You may want to just use the LAN IP instead of the container NAME in the proxy conf.

 

Here's a sample, running AdGuard on a separate machine:

[screenshot of the proxy conf attached]

 

Thank you! I refined my search and actually found your response earlier in this thread to a similar question. It was EXACTLY what I needed and helped immensely. Thank you very much for helping!
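For anyone else landing here, the LAN-IP approach discussed above can be sketched as a subdomain proxy conf. The server name, IP, and port below are placeholders taken from the question; the stock proxy-confs subdomain template in the SWAG image provides the full version of this structure.

```nginx
# Sketch: proxying a host on the LAN rather than a container.
# server1.*, 192.168.1.50, and 8080 are placeholders.
server {
    listen 443 ssl;
    server_name server1.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app 192.168.1.50;   # LAN IP instead of a container name
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

Since SWAG (in the docker network) and the target host (on the LAN) are on different networks, this relies on the docker network being able to route out to the LAN, which is the default for bridge-type networks.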

  • Like 1
Link to comment
On 12/7/2022 at 7:46 PM, JM Benoit said:

[earlier post quoted in full; snipped - see above]

If anyone could please assist?
In short, everything worked properly from outside the network prior to getting a new router.

Currently I can access all dockers inside the network, but not from outside.

I confirmed port forwarding is working properly on the new router (My Servers is working properly).

At this point I have deleted SWAG + appdata and reinstalled several times; however, I cannot access SWAG or any docker from outside the network.

 

Here are my logs from last swag restart. 

 

Again thank you!

John

 

 

swag log 12.11.txt

Link to comment

Hi all,

Does anyone know if there is a way to add a device on a network to SWAG?

 

I'm running Home Assistant on a raspberry pi and would like to access HA remotely. Is there a YouTube video or online guide that I could follow to make this work? So far I have followed SpaceInvaderOne's video for docker containers but that only covers the UNRAID docker network.
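One note on the Home Assistant case specifically (an assumption based on HA's documented reverse-proxy requirements, not something covered in the video): besides pointing a SWAG proxy conf at the Pi's LAN IP, Home Assistant itself must be told to trust the forwarded requests, or it will reject them. A sketch of the HA side, where the proxy IP below is a hypothetical address for the SWAG host:

```yaml
# Fragment of Home Assistant's configuration.yaml - accept requests
# forwarded by a reverse proxy. 192.168.1.10 is a hypothetical
# address for the machine running SWAG; use yours.
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 192.168.1.10
```

The SWAG side is then the same LAN-IP proxy-conf pattern discussed elsewhere in this thread, with $upstream_app set to the Pi's IP and $upstream_port set to 8123 (HA's default).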

 

Thank You

Link to comment

Hi guys. I have a mystery that I cannot resolve.

 

For ages my SWAG has been performing perfectly, but I have run into an issue. I have my own domain managed by No-IP.com. Their portal currently lists the correct external IP for my domain and subdomain. In SWAG, I have my domain and subdomains.

 

Starting last week, when trying to log in at https://mydomain.com, I get a certificate warning. If I ignore the warning I get this:

 

[screenshot of the Sophos login portal attached]

 

I have no clue what this is. Names and passwords don't work, and I don't have anything Sophos installed. At one point I called No-IP about the issue; they did not know what was causing it. They said my external IP did not match something on my ISP's side and suggested I get a new external IP. I called my ISP, and they said that to change IP I needed a new cable modem from them. I got the new cable modem and it worked.

 

Five days later, I am getting the same Sophos thing again!!

 

In the SWAG docker setup, I edited my sub-domains. When relaunching SWAG, it says new LE certificates are needed (as it should) and that it will generate them. Then validation fails (see image).

[screenshot of the failed validation attached]

 

The SWAG docker is still running on my server, but I cannot get to my web site even with the internal IP address! Then in my browser I try https://mydomain.com and I get the same Sophos portal login screen. This tells me that, somehow, the domain is being directed someplace else; obviously this is why it cannot pass validation.

 

I get the same result if I try to do this from my home internet connection or when I am at work. At one point (when the problem first happened), I swapped my router and had the same result.

 

What is causing this? Google did not help. Has my domain been hijacked? How is it being re-routed?

 

Thanks,

 

H.

 

P.S. I was at least able to get my internal web URL working by restoring my appdata backup, which uses the certificates from before I edited the docker configuration. This will fail once those certificates expire.

 

I still cannot get in from the outside.

 

 

 

Edited by hernandito
Link to comment

Hi,

 

I edited 'guacamole.subdomain.conf.sample' in the terminal with "nano /mnt/user/appdata/swag/nginx/proxy-confs/guacamole.subdomain.conf.sample", saved it with Ctrl+O, removed the '.sample' from the filename at the save prompt, and confirmed with Y.

 

When I restart the SWAG container, a new 'guacamole.subdomain.conf.sample' is written, and my edited version seems to be ignored.

 

Can someone help me, please? :)
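One thing worth knowing: the container regenerating the '.sample' file on every restart is expected and harmless. What matters is that a copy named 'guacamole.subdomain.conf' (without '.sample') sits next to it, since nginx only includes the *.conf files. A sketch of the usual workflow, simulated in a temp directory here (on the server the path would be the proxy-confs folder quoted above):

```shell
# Simulate the proxy-confs folder in a temp dir; on the server this
# would be /mnt/user/appdata/swag/nginx/proxy-confs (path from the post).
tmp=$(mktemp -d)
touch "$tmp/guacamole.subdomain.conf.sample"

# Copy the sample to an active .conf; nginx loads *.conf files, and the
# container regenerating the .sample does not touch the copy.
cp "$tmp/guacamole.subdomain.conf.sample" "$tmp/guacamole.subdomain.conf"
ls "$tmp"
```

If the renamed file still appears to be ignored after a restart, checking the SWAG log for nginx startup errors would be a reasonable next step.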

Edited by Jono W
Link to comment

Hello everyone. I am trying to figure out what is not working in my setup. I have SWAG and Apache Guacamole on my Unraid server, and I cannot get fail2ban to work. In the SWAG dashboard I can see failed login attempts and that I have 2 IPs in jail, but that doesn't stop my mobile from continuously trying to log in (although it's banned).

[screenshot of the fail2ban jail status attached]

any idea what to do or how to debug this further? 

thank you. 

Link to comment

I'm trying to get the SWAG dashboard working. I have the SWAG docker ports configured to 81 and 443 respectively, and it has been working fine for my other subdomains. However, after enabling the dashboard mod and configuring an A record for 'dashboard.domain.com' pointing to the server IP that SWAG is listening on, I get an nginx 504 timeout error when trying to go to https://dashboard.mydomain.com. I'm coming from an internal LAN subnet that is within the allowed IP ranges stated in the domain.subdomain.conf file as well.

The Swag docker logs are not reporting any issues either.

Any ideas on what I may be missing or doing wrong?

Edited by guruleenyc
Link to comment

Hey everyone, I'm running into a bit of a pickle and I'm hoping someone can help with my configuration. My current setup consists of:

  • Gandi domain + Cloudflare proxy providing SSL cert => the old version of this docker container to provide the reverse proxy
  • Duckdns subdomain => the old version of this docker container (letsencrypt) to provide the reverse proxy and SSL cert

I've recently removed Cloudflare and am now running into an issue where my domain is using the cert of my duckdns subdomain.

 

I don't mind upgrading the container but would anyone be able to provide insight into how I would need to configure the parameters to accomplish this? (Or am I better off leaving the existing deprecated container and trying to somehow generate a new cert for my Gandi domain?)

Link to comment

Hi all, I followed Space Invader One's great video on installing SWAG so that I can have a Bitwarden server, and it's all working well, I think. However, my UniFi router's security log says ports 180 and 1443 (forwarded from 80 and 443) are getting hit loads of times a day from around the world. Is this normal, or have I bodged up somewhere?

 

Link to comment
1 minute ago, garethsnaim said:

Hi all, I followed Space Invader One's great video on installing SWAG so that I can have a Bitwarden server, and it's all working well, I think. However, my UniFi router's security log says ports 180 and 1443 (forwarded from 80 and 443) are getting hit loads of times a day from around the world. Is this normal, or have I bodged up somewhere?

 

This is completely normal, and one of the drawbacks of exposing services to the internet. All day, every day, all year, bots search for known vulnerabilities in internet-exposed services and try to get into your service/server. So when you do this it's important to secure your services as best you can: long, complex passwords, fail2ban, and maybe a log-monitoring service to alert you when something fishy is happening. Or you can set up something like a Cloudflare tunnel instead; then you don't need to expose the ports on your router. But then again, Cloudflare sits in the middle and you have to trust them with all your data. Your server won't be hit all day with bots trying to get into it, though, so there's that.

Link to comment

Thanks Strike, that's comforting. I have a UniFi router, so it's logging, telling me what's happening, and blocking these connections, so I guess it is what it is. Scary out there!

 

It took me hours to get this going, so I think I will leave the Cloudflare tunnel alone for now, lol. Is that a bit like ZeroTier?

Link to comment
18 minutes ago, garethsnaim said:

is that a bit like ZeroTier?

No, it's not exactly like ZeroTier. The only thing they have in common is that each creates a secure tunnel to your server. Other than that they are a bit different: a Cloudflare tunnel is more for exposing your webserver securely to the internet, while ZeroTier is for creating a tunnel so you can access your LAN remotely. ZeroTier also requires a client on each device; a Cloudflare tunnel does not.

Link to comment

I need some help getting my unifi-controller container accessible via SWAG.

 

I think it's because the unifi-controller webui uses https without a valid cert, so nginx can't connect to it and I get a 502 error from Cloudflare.

 

From my nginx unifi conf file:

include /config/nginx/proxy.conf;
include /config/nginx/resolver.conf;
set $upstream_app 192.168.75.13;
set $upstream_port 8443;
set $upstream_proto https;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;

 

I've tried changing the $upstream_port to just 443. The container has port 8443 set as the webui port.

This is definitely the url for accessing the webui locally: https://192.168.75.13:8443/

 

Swag is on my proxy network, and the unifi-controller container is on br0 with a specified IP.

 

The nginx error logs say: 

2023/01/02 15:49:42 [error] 1943#1943: *15046 connect() failed (113: Host is unreachable) while connecting to upstream, client: -publicIP-, server: unifi.*, request: "GET / HTTP/1.1", upstream: "http://192.168.75.13:8443/", host: "unifi.domain.com"

2023/01/02 15:49:45 [error] 1943#1943: *15046 connect() failed (113: Host is unreachable) while connecting to upstream, client: -publicIP-, server: unifi.*, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.75.13:8443/favicon.ico", host: "unifi.domain.com", referrer: "https://unifi.domain.com/"

 

I have a FreeIPA server being accessed the same way, specifying an IP address as opposed to the name of a container, and it works fine. The big difference is the FreeIPA server does not use https.
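One observation from the log quoted above (a thing to check, not a confirmed fix): the error lines show an http://192.168.75.13:8443 upstream even though the conf sets $upstream_proto to https, which suggests the conf nginx actually loaded is not the edited one. The "113: Host is unreachable" part may also point at routing between the proxy network and br0 rather than TLS. For reference, a minimal sketch of the location block as it should look for an HTTPS upstream, using the IP and port from the post (nginx does not verify upstream certificates by default, so UniFi's self-signed cert is not itself a blocker):

```nginx
# Sketch: location block for an HTTPS upstream (values from the post).
location / {
    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;
    set $upstream_app 192.168.75.13;
    set $upstream_port 8443;
    set $upstream_proto https;   # if this were taking effect, the error
                                 # log would show an https:// upstream
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```

Restarting SWAG after editing the conf, and confirming the active file has no leftover '.sample' twin, would rule out the stale-conf possibility.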

 

Any suggestions are appreciated!

Thanks!

Link to comment

I am working on setting up my Unraid server using hardware I received for Christmas, so I can host storage on a separate machine and have a parity drive. I created a separate docker network for my containers. I have a duckdns docker to update my IP address, and SWAG. For SWAG, my setup uses example.duckdns.org [example replaced with my subdomain] for the domain, a wildcard for subdomains, and "only subdomains" set to true. My plan is to use a separate subdomain of my duckdns subdomain for each docker service I want to access.

So far I only have jellyfin and nextcloud set up. I enabled the subdomain samples for each and made the edits to the Nextcloud configuration to add SWAG as a trusted proxy and my duckdns domain as a trusted domain.

The problem I am running into is that when I access jellyfin.example.duckdns.org, it often works. But as soon as I access nextcloud.example.duckdns.org, either after logging in via the web or attempting to connect with the Android app, I get an "ERR_CONNECTION_REFUSED" in my browser. This then occurs for jellyfin as well. The only resolution I have found is to restart SWAG or wait a while. The problem then quickly resurfaces after a short period of using Nextcloud. Because it works intermittently, with the error seemingly only triggered after using Nextcloud, I am baffled at what the problem could be. If the problem is with Nextcloud, why would it then affect jellyfin? If the problem is with SWAG, why does it sometimes work?

Any suggestions on where to look? The SWAG and Nextcloud logs show nothing after startup, and there do not appear to be any errors in the startup process. Jellyfin doesn't show anything in its logs at the time of the connection-refused error either.
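For reference, the trusted-proxy and trusted-domain edits described above normally live in Nextcloud's config/config.php. A sketch of the relevant fragment, where the network range and overwrite settings are assumptions based on a typical SWAG-in-front-of-Nextcloud setup rather than the poster's exact values:

```php
<?php
// Fragment of Nextcloud's config/config.php (not a complete file).
// The duckdns hostname matches the setup described above; the proxy
// range is a hypothetical docker-network CIDR - use your own.
$CONFIG = array (
  'trusted_domains' =>
  array (
    0 => 'nextcloud.example.duckdns.org',
  ),
  'trusted_proxies' =>
  array (
    0 => '172.18.0.0/16',  // hypothetical docker proxy-network range
  ),
  'overwritehost' => 'nextcloud.example.duckdns.org',
  'overwriteprotocol' => 'https',
);
```

If these are already in place, watching `docker logs -f swag` while reproducing the ERR_CONNECTION_REFUSED would show whether nginx is dying or requests are simply never arriving.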

Link to comment
