[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



Hi. I was wondering if somebody could help me, as I am getting a "cert does not exist" error. So far I've followed SpaceInvader One's YouTube video up to the point of setting up my DNS CNAME records and forwarding my ports (see pictures below).

I am using Google Domains as my registrar. I am not using DuckDNS at all. I have a static IP. The red boxes in the picture are all mydomain.com and the purple box is my WAN IP.


[screenshot: Google Domains DNS records]


If I try to RDP into my Windows machine using mydomain.com:3389 or www.mydomain.com:3389, which are both forwarded to my IP as "A" records, it works. If I try cloud.mydomain.com:3389, it just fails. Do I need to create an "A" record for cloud. and video. as well? I suppose there was a bit of a disconnect here, since the video guide SpaceInvader One made talks about using DuckDNS, which I'm not using, and I'm guessing his DuckDNS config somehow has that set up already for him?

Here is a picture of the port forwarding on my router: [screenshot]

 


 

Edit: I figured it out. In case anybody sees this in the future: you also need to forward your subdomains to your IP (which is basically creating A records for each, but through the forwarding options on Google Domains rather than the custom resource records), as well as creating the CNAME records in the custom resource records, which point to the main domain.

Edited by 007craft
On 2/2/2021 at 8:42 AM, ZekerPixels said:

I filtered all my activities out and am left with 158 lines in the nginx log

May I ask where you got this log data from? I'm just interested in seeing it on mine as well :)

 

My router is getting hit a lot by port scans etc., all landing on the web server (80 and 443), but sadly that's pretty normal: just bots scanning IP ranges, trying a bunch of things and then moving on unless they get a hit.

9 hours ago, brent3000 said:

May I ask where you got this log data from? I'm just interested in seeing it on mine as well :)

 

My router is getting hit a lot by port scans etc., all landing on the web server (80 and 443), but sadly that's pretty normal: just bots scanning IP ranges, trying a bunch of things and then moving on unless they get a hit.

 

Of course: it's in /appdata/swag/log/nginx/access.log

 

I understand the bots are searching for something unsecured; as long as it's secured, they don't really do anything. But I'm trying to understand what is happening, and I want to be convinced it is secured before actually using it. And I really want to have geo blocking working; it doesn't hurt to use it, and the addresses on https://www.spamhaus.org/statistics/botnet-cc/ do get blocked.

But the kerbynet request I mentioned turns out to be an old router exploit.


Can anyone help me figure this issue out?

 

Everything appears to be configured as it should. I'm using ports 180 and 4433, and both check out as open on my firewall. I created the DNS CNAME in GoDaddy. See the log file and screenshots below:

 

 

[screenshots: Docker container settings, GoDaddy domain manager, SWAG log, proxy conf, port 4433 check]

Edited by dfox1787
11 hours ago, dfox1787 said:

Can anyone help me figure this issue out?

 

Everything appears to be configured as it should. I'm using ports 180 and 4433, and both check out as open on my firewall. I created the DNS CNAME in GoDaddy. See the log file and screenshots below:

 

 


Looks like your port forwarding is missing. You need to forward ports 80 & 443 to 180 & 4433 on 192.168.1.50.

5 hours ago, saarg said:

Looks like your port forwarding is missing. You need to forward ports 80 & 443 to 180 & 4433 on 192.168.1.50.

If you look at my screenshots, I have shown the ports are open and are being forwarded using port redirection on my DrayTek firewall.

Edited by dfox1787
1 hour ago, dfox1787 said:

If you look at my screenshots, I have shown the ports are open and are being forwarded using port redirection on my DrayTek firewall.

My device's settings look totally different, but to me it looks like you are forwarding 180 to 180 and 4433 to 4433 instead of 80 to 180 and 443 to 4433. The screenshot is very cropped, so maybe it's on there, but it should state ports 80 and 443 somewhere in the port forward.

What you can do is change the IP in the forward to your PC and check whether the ports are indeed open.

17 minutes ago, ZekerPixels said:

My device's settings look totally different, but to me it looks like you are forwarding 180 to 180 and 4433 to 4433 instead of 80 to 180 and 443 to 4433. The screenshot is very cropped, so maybe it's on there, but it should state ports 80 and 443 somewhere in the port forward.

What you can do is change the IP in the forward to your PC and check whether the ports are indeed open.

I'll get a screenshot of my settings, because all I'm showing is that the port is open; however, I use the same settings for translating ports for some of my other dockers, which are working fine.

1 hour ago, dfox1787 said:

I'll get a screenshot of my settings, because all I'm showing is that the port is open; however, I use the same settings for translating ports for some of my other dockers, which are working fine.

You need to forward port 80 to 180 and port 443 to 4433. From your screenshots it looks like you only forward 180 to 180 and 4433 to 4433. That will not work.

 

5 hours ago, saarg said:

You need to forward port 80 to 180 and port 443 to 4433. From your screenshots it looks like you only forward 180 to 180 and 4433 to 4433. That will not work.

 

I rebuilt the container and it seems to be working, but I'm getting this:

 

 

[screenshot of the error]


I've tried doing some keyword searches in this topic and in the Wireguard support topic without any luck.

At this point I have SWAG set up with a half dozen subdomains and it works beautifully from within and outside the local network. The peculiar thing is that I have a WireGuard server set up in Unraid as well, and if I use a "remote tunneled access" client, all the NGINX subdomains time out and cannot be accessed.

 

If I use the individual internal IP addresses + port I can still access the dockers though. The remote tunneled access clients can also successfully connect to my home network, SMB shares, browse the internet, ping/tracert the subdomains in question, etc... so I'm fairly certain the issue lies in how SWAG is set up and not Wireguard.

I feel like I'm missing something obvious and would greatly appreciate it if someone's heard of this issue before and can point me in the right direction.

31 minutes ago, zdkxlepb7VHyjz07oopI said:

I've tried doing some keyword searches in this topic and in the Wireguard support topic without any luck.

At this point I have SWAG set up with a half dozen subdomains and it works beautifully from within and outside the local network. The peculiar thing is that I have a WireGuard server set up in Unraid as well, and if I use a "remote tunneled access" client, all the NGINX subdomains time out and cannot be accessed.

 

If I use the individual internal IP addresses + port I can still access the dockers though. The remote tunneled access clients can also successfully connect to my home network, SMB shares, browse the internet, ping/tracert the subdomains in question, etc... so I'm fairly certain the issue lies in how SWAG is set up and not Wireguard.

I feel like I'm missing something obvious and would greatly appreciate it if someone's heard of this issue before and can point me in the right direction.

Are you proxying through Cloudflare by chance? Because you can't proxy WireGuard; it has to be set to DNS only.

Edited by bigmak

Need help configuring the SWAG nginx reverse proxy to work with a docker container that uses WebSockets.

 

Hoping someone here can help me with this. I'm using SWAG for WAN access to my web applications and recently tried to set up a docker solution to host Taiga.io (kanban/scrum Trello alternative). The official Taiga docker config builds its own virtual network on which it runs 8 docker containers for the different services, including the front end, back end, database, events handler, and its own nginx reverse proxy.

 

The problem I'm encountering is that out of the box, Taiga isn't configured for SSL. If you connect with HTTPS through SWAG, the page will refuse to load, because Chrome won't let you load an HTTPS web page that includes an insecure WebSocket connection.

 

I can get the page to load by changing the configuration variable in Taiga's docker-compose file to use wss: instead of ws: for the WebSocket connection URL. However, the WebSocket connection fails, and the application won't function properly. I've tried playing around with the subdomain .conf and I haven't been able to get it to complete the handshake; my browser console is filling up with the following errors:

 

app.js:3370 WebSocket connection to 'wss://taiga.******.***/events' failed: WebSocket is closed before the connection is established.
app.js:3354 WebSocket connection to 'wss://taiga.******.***/events' failed: Error during WebSocket handshake: Unexpected response code: 200

 

Here's my taiga.subdomain.conf:

 

## Version 2020/12/09
# custom for taiga to proxy?

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name taiga.*;

    include /config/nginx/ssl.conf;

    # restrict access to authenticated users
    #auth_basic "Restricted";
    #auth_basic_user_file /config/etc/htpasswd/.htpasswd;

    #client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://10.0.0.10:9000/;
  }

  # Events
  location /events {
      proxy_pass http://10.0.0.10:9000/;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $host;
      proxy_connect_timeout 7d;
      proxy_send_timeout 7d;
      proxy_read_timeout 7d;
  }
}

 

and the taiga.conf that taiga's nginx instance is using: 

 

server {
    listen 80 default_server;

    client_max_body_size 100M;
    charset utf-8;

    # Frontend
    location / {
        proxy_pass http://taiga-front/;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # Api
    location /api {
        proxy_pass http://taiga-back:8000/api;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # Admin
    location /admin {
        proxy_pass http://taiga-back:8000/admin;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # Static
    location /static {
        root /taiga;
    }

    # Media
    location /_protected {
        internal;
        alias /taiga/media/;
        add_header Content-disposition "attachment";
    }

    # Unprotected section
    location /media/exports {
        alias /taiga/media/exports/;
        add_header Content-disposition "attachment";
    }

    location /media {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://taiga-protected:8003/;
        proxy_redirect off;
    }

    # Events
    location /events {
        proxy_pass http://taiga-events:8888/events;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
    }
}
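
A note on the /events handling above, since the handshake error mentions an unexpected 200: in nginx, when proxy_pass carries a URI part (the trailing slash in http://10.0.0.10:9000/), the matched /events prefix is replaced by that URI, so the WebSocket request reaches Taiga's internal nginx as a request for / and gets the frontend's normal 200 page instead of an upgrade. A minimal sketch of the SWAG-side location without the trailing slash (a sketch only; the IP and port are taken from the config above and may differ on other setups):

  # Events (WebSocket) - sketch only
  location /events {
      # no URI on proxy_pass, so the original /events path is passed through unchanged
      proxy_pass http://10.0.0.10:9000;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header Host $host;
      proxy_read_timeout 7d;
  }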

 

On 12/31/2020 at 3:41 AM, Spoonsy1480 said:

nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

I got this error yesterday and I cannot figure out how to fix it. Everything was working fine, though I kept getting emails saying my certificate was about to expire.

 

OK, fixed it: wiped the old config and reinstalled, and now everything is good.


Having this issue. It's been working great for months without issues and suddenly this comes up. Tried to reinstall, but same issue. Want to avoid starting from scratch. The only recent issue is that my USB flash drive was messing up, so I did a repair on a Windows PC, restarted Unraid, and everything was back.

Edited by bluesky509

I set up letsencrypt and then migrated to SWAG using the SpaceInvader One video. Access to my dockers is working fine. Two questions:

 

1. I'm running Zoneminder (for home security cameras) on a different (Ubuntu) machine. Can I set up a reverse proxy using SWAG in an Unraid docker to access Zoneminder on my Ubuntu machine from outside my home network?

2. Is there a zoneminder.subdomain.conf file? I didn't find one in the conf folder.

 

Thanks 

5 hours ago, krh1009 said:

I set up letsencrypt and then migrated to SWAG using the SpaceInvader One video. Access to my dockers is working fine. Two questions:

 

1. I'm running Zoneminder (for home security cameras) on a different (Ubuntu) machine. Can I set up a reverse proxy using SWAG in an Unraid docker to access Zoneminder on my Ubuntu machine from outside my home network?

2. Is there a zoneminder.subdomain.conf file? I didn't find one in the conf folder.

 

Thanks 

 

I guess LSIO won't make a sample config for every possible app out there ;)

 

So, to your questions: yes, of course, you can use "externals" too.

 

Maybe take a look at the folder where the samples are, and ...

 

[screenshot]

 

use the IP instead of the container_name as in the sample ...
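
As a rough sketch of what such a conf could look like (this is not one of the shipped samples; the subdomain, LAN IP, and port below are placeholders you would replace with your Zoneminder machine's details):

  ## hypothetical zoneminder.subdomain.conf - IP and port are placeholders
  server {
      listen 443 ssl;
      listen [::]:443 ssl;

      server_name zoneminder.*;

      include /config/nginx/ssl.conf;

      location / {
          include /config/nginx/proxy.conf;
          # point at the Ubuntu host's LAN IP instead of a docker container name
          set $upstream_app 192.168.1.60;
          set $upstream_port 80;
          set $upstream_proto http;
          proxy_pass $upstream_proto://$upstream_app:$upstream_port;
      }
  }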

On 2/4/2021 at 1:30 PM, dfox1787 said:

I rebuilt the container and it seems to be working, but I'm getting this:

 

 

[screenshot of the error]

 

Check the container name. Sometimes the Unraid containers are named differently from what the included SWAG configs are expecting, e.g. "bitwardenrs" vs "bitwarden".


Hi all,

 

I have set up SWAG and Nextcloud with my own domain; it gets the certificates, and it's geo-blocked with maxminddb to only my country. I'm trying to understand what is happening and what gets returned to bots etc., and of course I have some questions that I'm having difficulty finding answers to.

 

1. When I VPN to some other country and try to access my domain, it gets blocked and I'm returned a 444. I think that sounds good. So I was expecting to see the same response in the log for requests from other countries, but that is not the case. For the IPs I checked that are from outside the country, I see 200 and 301 returned, not 444. I don't get why it is different. (A sketch of this kind of geo-block is included below for reference.)

 

2. I have set fail2ban to 2 retries, a day for findtime, and a week for bantime (i.e., things should get blocked). (I have some ideas for better settings, but first I want to see it working on something that isn't me.) I was expecting some hits to land on a ban list of sorts; maybe I'm missing something, but where can I find what got blocked? Or does it get reset at reboot?
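
For reference on point 1, a country-based block with the nginx geoip2 module typically looks roughly like the following. This is only a sketch, not my exact config; the database path, variable names, and the allowed country code are placeholders:

  # sketch: geoip2 country block (paths, variable names and country code are placeholders)
  # in the http block, with the ngx_http_geoip2_module loaded:
  geoip2 /config/geoip2db/GeoLite2-Country.mmdb {
      $geoip2_country_code country iso_code;
  }

  map $geoip2_country_code $allowed_country {
      default no;
      NL yes;
  }

  # then, inside any server or location block that should be restricted:
  if ($allowed_country = no) {
      return 444;
  }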

 

Thanks,

 

Edit:

1. Still don't know.

2. fail2ban is not configured right: https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/177/?tab=comments#comment-947025

 

 

Edited by ZekerPixels
11 minutes ago, Maximus88 said:

Hi All,

 

How can we request that port 80 on the template be marked as optional?
No one in their right mind should be publishing this, and it prevents me from auto-updating.

I guess no one on the team is in their right mind.

It does not prevent you from auto-updating.

6 hours ago, Maximus88 said:

Hi All,

 

How can we request that port 80 on the template be marked as optional?
No one in their right mind should be publishing this, and it prevents me from auto-updating.

What do you mean, port 80? Specifying a port is required, but you should be able to change it, unless you mean the inbound port past the container NAT, in which case it's not exposed on the outside, so it shouldn't matter.

22 hours ago, saarg said:

I guess no one on the team is in their right mind.

It does not prevent you from auto-updating.

I came here for this as well. I already have port 80 in my template at the top. Every time I update the docker, it adds another variable for port 80, filled in as port 80, at the bottom, and the conflict makes the docker die; it won't start until I remove the added variable. No matter how many times I remove it, it adds it back.

 

I don't know if this is because I just changed the repo when it was switched to swag and left the existing template alone.

Edited by mattgob86
