[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)


Recommended Posts

58 minutes ago, saarg said:

Why do you need the IP if you can use the domain? I assume you are talking about your public IP.

Yes, I mean my public IP.

As it is set up now, it is configured as a dynamic IP. I have a static IP that I am paying for (to get my old Synology NAS to work). I just want to use that possibility as much as possible, and safely.

And I also want to learn more. :)

I have looked at buying a domain as well.

Edited by jeppe
Link to comment
49 minutes ago, jeppe said:

Yes, I mean my public IP.

As it is set up now, it is configured as a dynamic IP. I have a static IP that I am paying for (to get my old Synology NAS to work). I just want to use that possibility as much as possible, and safely.

And I also want to learn more. :)

I have looked at buying a domain as well.

I'm lost. If you have a static IP that you are paying for, then you do not have a dynamic IP.

DuckDNS gives you a free domain, so you don't need to buy one. You can't use an IP address (in the browser address bar) with a valid SSL cert.

There isn't any advantage in using the IP instead of the domain, as far as I can see.

Link to comment
28 minutes ago, saarg said:

I'm lost. If you have a static IP that you are paying for, then you do not have a dynamic IP.

DuckDNS gives you a free domain, so you don't need to buy one. You can't use an IP address (in the browser address bar) with a valid SSL cert.

There isn't any advantage in using the IP instead of the domain, as far as I can see.

Yes, that's correct. I need the static IP to be able to open the ports I need and want to use; my ISP closes all unnecessary ports on dynamic IPs.

I wanted to make sure that it worked with DuckDNS before I tried to make it work with my static IP. But as you say, it won't work.

I hope that makes more sense, and thanks for the help.

Link to comment

EDIT: This was all made moot by moving my external facing docker containers to a user defined bridge per the Unraid section of these instructions: https://github.com/linuxserver/reverse-proxy-confs#ensure-you-have-a-custom-docker-network. It is possible! Leaving the rest of the post here for posterity.

 

Hi everyone, I'm still working on getting Authelia & SWAG to be happy. Because we're talking about Unraid, I can't use the user-defined bridge network, so I'm trying to set it up with the subdomain config.

 

I now have four different files I'm working with:

  1. authelia.subdomain.conf
  2. myapp.subdomain.conf
  3. authelia-server.conf
  4. authelia-location.conf

 

For #1, I have set up the subdomain for Authelia and verified that it works: the CNAME is set correctly and I can access Authelia directly as expected.

 

For #2, myapp works correctly on its own when unprotected. To enable authelia protection, I have uncommented the two relevant lines so that myapp includes #3 and #4 in its proxy config.

 

For #3, I have it set up exactly according to the Authelia sample on GitHub, lines 1-30, with the only changes being that I put the actual server IP and Authelia port into the appropriate placeholders.

 

And lastly, #4 is changed only to remove /api/verify from the location in the LSIO default, because that is already part of #3 per Authelia's default config. So now the first line reads: auth_request /authelia; which means this file matches lines 36-42 of the GitHub example.
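(As a rough, illustrative sketch only: the pieces described in #3 and #4 boil down to an internal verify location plus an auth_request hook, roughly like the following, where 192.168.1.10, 9091 and authelia.example.com are placeholders for the actual server IP, Authelia port and Authelia subdomain:)

    # sketch of the authelia-server.conf side: internal endpoint that auth_request calls
    location ^~ /authelia {
        internal;
        proxy_pass http://192.168.1.10:9091/api/verify;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
    }

    # sketch of the trimmed authelia-location.conf side: protect the proxied app
    auth_request /authelia;
    auth_request_set $target_url $scheme://$http_host$request_uri;
    auth_request_set $user $upstream_http_remote_user;
    proxy_set_header Remote-User $user;
    # send unauthenticated users to the Authelia portal and return them afterwards
    error_page 401 =302 https://authelia.example.com/?rd=$target_url;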

 

And now I'm getting these nginx errors:

"invalid port in upstream "http://[server ip]:[authelia port]/authelia/api/verify,"
"auth request unexpected status: 500,"

Any suggestions for why it says the port is invalid? I noticed that lines 33-34 and lines 44-69 don't appear to be covered in any of these four files, but it also appears that LSIO has set up some slightly different variables than the default Authelia config, so I'm not sure.

 

TL;DR: Authelia works fine in SWAG independently, my app works fine in SWAG independently, but when I try to have my app protected by Authelia I'm getting the upstream port error above.

Edited by scud133b
Link to comment
54 minutes ago, scud133b said:

Hi everyone, I'm still working on getting Authelia & SWAG to be happy. Because we're talking about Unraid, I can't use the user-defined bridge network, so I'm trying to set it up with the subdomain config.

 

I now have four different files I'm working with:

  1. authelia.subdomain.conf
  2. myapp.subdomain.conf
  3. authelia-server.conf
  4. authelia-location.conf

 

For #1, I have set up the subdomain for Authelia and verified that it works: the CNAME is set correctly and I can access Authelia directly as expected.

 

For #2, myapp works correctly on its own when unprotected. To enable authelia protection, I have uncommented the two relevant lines so that myapp includes #3 and #4 in its proxy config.

 

For #3, I have it set up exactly according to the Authelia sample on GitHub, lines 1-30, with the only changes being that I put the actual server IP and Authelia port into the appropriate placeholders.

 

And lastly, #4 is changed only to remove /api/verify from the location in the LSIO default, because that is already part of #3 per Authelia's default config. So now the first line reads: auth_request /authelia; which means this file matches lines 36-42 of the GitHub example.

 

And now I'm getting these nginx errors:


"invalid port in upstream "http://[server ip]:[authelia port]/authelia/api/verify,"

"auth request unexpected status: 500,"

Any suggestions for why it says the port is invalid? I noticed that lines 33-34 and lines 44-69 don't appear to be covered in any of these four files, but it also appears that LSIO has set up some slightly different variables than the default Authelia config, so I'm not sure.

 

TL;DR: Authelia works fine in SWAG independently, my app works fine in SWAG independently, but when I try to have my app protected by Authelia I'm getting the upstream port error above.

Why can't you use a user defined bridge on unraid?

For help with authelia it's better if you join our discord server.

Link to comment
2 hours ago, saarg said:

Why can't you use a user defined bridge on unraid?

For help with authelia it's better if you join our discord server.

I did, and @aptalca helped me out. I didn't realize you *could* make user-defined bridges in Unraid; all my Google searches suggested it wasn't possible. The Unraid docs need updating, I think.

 

Revised my previous post.

Link to comment
11 hours ago, scud133b said:

I did, and @aptalca helped me out. I didn't realize you *could* make user-defined bridges in Unraid; all my Google searches suggested it wasn't possible. The Unraid docs need updating, I think.

 

Revised my previous post.

I think there might be some info about it in the help function in the unraid webui.

Link to comment
6 hours ago, saarg said:

I think there might be some info about it in the help function in the unraid webui.

Yep, it's under Settings > Docker > Advanced View and has a help tooltip. But I would never have known it was there without you guys nudging me to look for it. Google still turns up a bunch of older info from all the usual places (Unraid wiki, forums, Reddit, etc.) saying user-defined bridges aren't possible in Unraid. The question often comes up in the context of using docker-compose, so perhaps the older comments are more about that, but in short, my research turned up empty.

Edited by scud133b
Link to comment

nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

I got this error yesterday and I cannot figure out how to fix it. Everything was working fine, but I kept getting emails saying my certificate was about to expire.

 

OK, fixed it: I wiped the old config and reinstalled, and now everything is good.


Link to comment

Hi everyone, I'm currently testing out Unraid (haven't yet purchased a license); I'm thinking that Unraid might be what I'm looking for. The only issue I'm having is with accessing my Docker containers outside of my network with SWAG.
I'm currently running Unraid version 6.9.0-rc2.

I have been playing with the default file in the site-confs folder of SWAG, but I'm not getting anywhere. I have managed to access the /login/ section over SSL; however, when entering my credentials it doesn't forward me to the dashboard. Even entering the wrong details doesn't seem to do anything; it just reloads the page.

 

My default file contains the following:

# sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
# notice this is within the same server block as the base
# don't forget to generate the .htpasswd file as described on docker hub
#    location ^~ /cp {
#        auth_basic "Restricted";
#        auth_basic_user_file /config/nginx/.htpasswd;
#        include /config/nginx/proxy.conf;
#        proxy_pass http://192.168.1.50:5050/cp;
#    }

    location ^~ /login/ {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.222:80/login;
        #rewrite /login(.*) $1 break;
        
    }
    
#    location ^~ /deluge/ {
#        include /config/nginx/proxy.conf;
#        proxy_pass http://192.168.1.222:8112/deluge;
#        rewrite /deluge(.*) $1 break;
#    }

}


I have tried to get Deluge and Jellyfin working, however with no success; when using the template files (the subfolder ones) I get a 502 Bad Gateway error.
My Deluge file is set up the following way; note that "set $upstream_app binhex-delugevpn;" was changed to match the Docker container name, as suggested in a guide I found.

## Version 2020/12/09
# deluge does not require a base url setting

location /deluge {
    return 301 $scheme://$host/deluge/;
}

location ^~ /deluge/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /ldaplogin;

    # enable for Authelia, also enable authelia-server.conf in the default site config
    #include /config/nginx/authelia-location.conf;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app binhex-delugevpn;
    set $upstream_port 8112;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    rewrite /deluge(.*) $1 break;
    proxy_set_header X-Deluge-Base "/deluge/";
}

When running SWAG I don't get any errors in the logs with the current setup as outlined above.
Any help would be greatly appreciated. As I said, I'm close to pulling the trigger and purchasing the Plus package for Unraid, but I need to get this sorted first.

 

Thanks guys.

 

SP

Edited by SimplePete
Link to comment
8 hours ago, SimplePete said:

Hi everyone, I'm currently testing out Unraid (haven't yet purchased a license); I'm thinking that Unraid might be what I'm looking for. The only issue I'm having is with accessing my Docker containers outside of my network with SWAG.
I'm currently running Unraid version 6.9.0-rc2.

I have been playing with the default file in the site-confs folder of SWAG, but I'm not getting anywhere. I have managed to access the /login/ section over SSL; however, when entering my credentials it doesn't forward me to the dashboard. Even entering the wrong details doesn't seem to do anything; it just reloads the page.

 

My default file contains the following:


# sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
# notice this is within the same server block as the base
# don't forget to generate the .htpasswd file as described on docker hub
#    location ^~ /cp {
#        auth_basic "Restricted";
#        auth_basic_user_file /config/nginx/.htpasswd;
#        include /config/nginx/proxy.conf;
#        proxy_pass http://192.168.1.50:5050/cp;
#    }

    location ^~ /login/ {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.222:80/login;
        #rewrite /login(.*) $1 break;
        
    }
    
#    location ^~ /deluge/ {
#        include /config/nginx/proxy.conf;
#        proxy_pass http://192.168.1.222:8112/deluge;
#        rewrite /deluge(.*) $1 break;
#    }

}


I have tried to get Deluge and Jellyfin working, however with no success; when using the template files (the subfolder ones) I get a 502 Bad Gateway error.
My Deluge file is set up the following way; note that "set $upstream_app binhex-delugevpn;" was changed to match the Docker container name, as suggested in a guide I found.


## Version 2020/12/09
# deluge does not require a base url setting

location /deluge {
    return 301 $scheme://$host/deluge/;
}

location ^~ /deluge/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /ldaplogin;

    # enable for Authelia, also enable authelia-server.conf in the default site config
    #include /config/nginx/authelia-location.conf;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app binhex-delugevpn;
    set $upstream_port 8112;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    rewrite /deluge(.*) $1 break;
    proxy_set_header X-Deluge-Base "/deluge/";
}

When running SWAG I don't get any errors in the logs with the current setup as outlined above.
Any help would be greatly appreciated. As I said, I'm close to pulling the trigger and purchasing the Plus package for Unraid, but I need to get this sorted first.

 

Thanks guys.

 

SP

I don't know what you mean by the login section, but you don't need to modify anything in SWAG's default file to get a reverse proxy going. Referencing a guide isn't helpful for us either; post what you have done so we don't have to go through the guide.

If you are using the container name to resolve the containers, you need to create a custom network bridge or else it will not work. All containers talking to each other have to be on the same custom bridge. Is that done?
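(For context, the SWAG *.subdomain.conf and *.subfolder.conf templates resolve the app by container name, roughly as in the sketch below, where myapp and 8080 are placeholders. This only works when SWAG and the app share a user-defined bridge, because 127.0.0.11 is Docker's embedded DNS and it is only available on such networks.)

    # sketch of the template pattern; container name and port are placeholders
    resolver 127.0.0.11 valid=30s;
    set $upstream_app myapp;
    set $upstream_port 8080;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;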

Link to comment
8 hours ago, saarg said:

I don't know what you mean by the login section, but you don't need to modify anything in SWAG's default file to get a reverse proxy going. Referencing a guide isn't helpful for us either; post what you have done so we don't have to go through the guide.

If you are using the container name to resolve the containers, you need to create a custom network bridge or else it will not work. All containers talking to each other have to be on the same custom bridge. Is that done?

Good morning. OK, so what I mean by the login section is this:
[screenshot of the Unraid webGUI login page]

 

 

I can access the above screen over https when the default file contains the code:

    location ^~ /login/ {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.222:80/login;
        #rewrite /login(.*) $1 break;
        
    }

However, it does not allow me to log in to my server; on entering my username/password it just refreshes the screen. I don't think it's submitting the data, as entering incorrect login info doesn't produce an invalid username/password message. I can connect to the login remotely via plain HTTP and it will submit the login details, but obviously I don't wish to do this over HTTP, so this is not currently a solution.

 

 

Thank you!! I hadn't set up a custom bridge; that was the issue I was having with the 502 error. Jellyfin and Deluge are now working as expected. Now if I can only get the login screen to work, then I think I'm good with Unraid.

 

SP

Link to comment
10 minutes ago, SimplePete said:

Good morning. OK, so what I mean by the login section is this:
[screenshot of the Unraid webGUI login page]

 

 

I can access the above screen over https when the default file contains the code:


    location ^~ /login/ {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.1.222:80/login;
        #rewrite /login(.*) $1 break;
        
    }

However, it does not allow me to log in to my server; on entering my username/password it just refreshes the screen. I don't think it's submitting the data, as entering incorrect login info doesn't produce an invalid username/password message. I can connect to the login remotely via plain HTTP and it will submit the login details, but obviously I don't wish to do this over HTTP, so this is not currently a solution.

 

 

Thank you!! I hadn't set up a custom bridge; that was the issue I was having with the 502 error. Jellyfin and Deluge are now working as expected. Now if I can only get the login screen to work, then I think I'm good with Unraid.

 

SP

Please don't reverse proxy your unraid webgui. It's not a hardened system. If you absolutely need to access it remotely, do so only using a VPN.

When using a custom bridge for swag you can't talk to unraid, so it will not work.

Edited by saarg
Link to comment
3 hours ago, SimplePete said:

Hi saarg, well, why am I able to access the web GUI via HTTP then? Shouldn't it be disabled if it poses a security risk?

 

SP

You are exposing it to the whole world, that is not the same as having access on your local network.

Link to comment

I've just moved my appdata to its own cache drive (long overdue). SWAG is the only thing I can't get to move completely; I guess it's because the certs are loaded/protected?

 

I set the share to prefer the static cache drive, disabled Docker in settings, and ran the mover. If I disable Docker again and manually move these files, will I break something*? (Probably :D)

 


[screenshot of the remaining SWAG appdata files]

 

*I should say I have read the readme, and it's pretty clear:

Quote

WARNING: DO NOT MOVE OR RENAME THESE FILES!
         Certbot expects these files to remain in this location in order
         to function properly!

However, does moving the files to the same place on the cache count as moving them?

Edited by alexdodd
Link to comment
20 hours ago, alexdodd said:

I've just moved my appdata to its own cache drive (long overdue). SWAG is the only thing I can't get to move completely; I guess it's because the certs are loaded/protected?

 

I set the share to prefer the static cache drive, disabled Docker in settings, and ran the mover. If I disable Docker again and manually move these files, will I break something*? (Probably :D)

 


[screenshot of the remaining SWAG appdata files]

 

*I should say I have read the readme, and it's pretty clear:

However, does moving the files to the same place on the cache count as moving them?

 

As long as you choose the new path in the appdata field in the container, you can move the files around as you want.

If you are only using the cache drive, you should set the share to cache-only. Cache-prefer will, in some situations, move the files to the array.

Link to comment
1 hour ago, saarg said:

 

As long as you choose the new path in the appdata field in the container, you can move the files around as you want.

If you are only using the cache drive, you should set the share to cache-only. Cache-prefer will, in some situations, move the files to the array.

Yeah, I set it to cache-prefer only to move the bulk of things, and now it's set back to cache-only, and I'll manually move these files from disk1 to static. Wish me luck :D

The appdata path has never changed, just the underlying physical disk, which I'm not even convinced letsencrypt knows about, so this "move" isn't what the readme is referencing.

 

//Edit:  Sweet, nothing exploded, certs still valid, and everything still works! 

Edited by alexdodd
Link to comment

Someone with more experience in nginx might be able to help me out. Currently I have Heimdall and ruTorrent reverse proxied and working. However, I want to allow Heimdall to call the /RPC2 location in ruTorrent but deny access to /RPC2 for everything else. Does anyone know what I need to change in the rutorrent.subdomain.conf config? I've tried adding Heimdall's IP address, but it still doesn't seem to be working.

 

    location /RPC2 {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        # block rpc access by default because it is unprotected
        # you can comment out the next line to enable remote rpc calls
		
        allow 172.19.0.18; #Allow heimdall
        deny all;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app rutorrent;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

 

It works perfectly fine if I just uncomment deny all, but I want to add another layer of security and only allow Heimdall access to make RPC calls.

 

EDIT: I got it working by changing the IP to allow 172.19.0.1; #Allow heimdall
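(Side note: nginx's allow directive also accepts CIDR ranges, so if pinning a single container IP proves brittle, one option is to allow the whole custom bridge subnet; the 172.19.0.0/16 range below is only an example and should be adjusted to the actual network:)

        # example only: permit clients on the custom bridge subnet, deny everyone else
        allow 172.19.0.0/16;
        deny all;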

 

 

Edited by bobokun
Working
Link to comment

Today I noticed that nginx is running at ~60% CPU constantly.

 

I do see a lot of these messages in nginx/error.log:

 

2021/01/05 21:15:50 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:15:50 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:15:55 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:15:55 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:16:00 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:16:00 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:16:05 [error] 431#431: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:16:05 [error] 430#430: send() failed (111: Connection refused) while resolving, resolver: 127.0.0.11:53
2021/01/05 21:16:10 [error] 431#431: r3.o.lencr.org could not be resolved (110: Operation timed out) while requesting certificate status, responder: r3.o.lencr.org, certificate: "/config/keys/letsencrypt/fullchain.pem"
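(For context: 127.0.0.11 is Docker's embedded DNS, referenced by the resolver line used in the SWAG proxy confs. It only answers inside containers attached to a user-defined network, so a SWAG container running on the default bridge or host network can produce exactly this kind of refusal and keep retrying:)

    # the resolver used by the SWAG proxy confs; only reachable on a user-defined network
    resolver 127.0.0.11 valid=30s;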

 

thanks,

david

Link to comment

I'm seeing more error reports.  Ideas?

 

2021/01/06 13:27:25 [error] 431#431: *4765522 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
2021/01/06 13:27:33 [error] 430#430: *4768442 user "root" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
2021/01/06 13:27:54 [error] 431#431: *4770435 user "report" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
2021/01/06 13:28:05 [error] 431#431: *4770753 user "admin" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
2021/01/06 13:28:23 [error] 430#430: *4772463 user "admin" was not found in "/config/nginx/.htpasswd", client: 37.46.150.24, server: _, request: "GET / HTTP/1.1", host: "<myExternalIP>:443"
2021/01/06 23:03:22 [crit] 430#430: *403730 SSL_read_early_data() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 192.241.221.196, server: 0.0.0.0:443

I have no idea what client 37.46.150.24 is.

 

david

Link to comment
On 1/1/2021 at 2:26 PM, saarg said:

You are exposing it to the whole world, that is not the same as having access on your local network.

You were right; I had port-forwarded it on my router without realising it. *Doh*
 

I have another issue with setting up PhotoPrism; there isn't a sample file for that one. My setup is the following:
 

location /photos {
    return 301 $scheme://$host/photos/;
}

location ^~ /photos/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /ldaplogin;

    # enable for Authelia, also enable authelia-server.conf in the default site config
    #include /config/nginx/authelia-location.conf;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app PhotoPrism;
    set $upstream_port 2342;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;

}

This makes it accessible via my domain name; however, the site is not interactive, it just shows a large logo. There is a config displayed on the PhotoPrism site for setting it up with nginx, however I don't really understand it. How would I rewrite mine? The sample config is as follows:

 

http {
  server {
    server_name example.com;
    client_max_body_size 500M;

    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;

      proxy_pass http://photoprism:2342;

      proxy_buffering off;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }
}
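For what it's worth, a rough, untested adaptation of that sample to the SWAG subdomain pattern might look like the sketch below; the container name photoprism, the port 2342 and the photos.* server name are assumptions, and SWAG's bundled ssl.conf/proxy.conf includes are expected to supply the SSL settings and the websocket upgrade headers from the vendor sample:

# sketch of a photoprism.subdomain.conf following the SWAG template layout
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name photos.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 500M;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app photoprism;
        set $upstream_port 2342;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}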

 

Any help would be appreciated.

 

SP

Link to comment

hi!

 

Is it possible for the reverse proxy to point to Emby in a VM? If so, could someone please help me? :) I'm clueless!

Edit: if that's not possible, can I install rar2fs in an existing Emby Docker container?

 

EDIT 2: I figured it out. :) But is there a way to just type emby.my.domain and be redirected to HTTPS, instead of typing https manually?
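(Regarding EDIT 2: SWAG's default site config normally ships a plain-HTTP server block that 301-redirects everything to HTTPS, so forwarding port 80 to the container is usually all that's needed. A minimal sketch of such a redirect block:)

    # minimal HTTP-to-HTTPS redirect; SWAG's default config provides an equivalent
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        return 301 https://$host$request_uri;
    }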

 

thanks in advance

Edited by Hugh Jazz
Link to comment

Just out of curiosity:

Is there any plan to extend the SWAG container with native Authelia and an LDAP server?

It would be an all-in-one, easy solution for Unraid, with global user management and reverse proxy authentication.

 

I tried to achieve such a solution with SWAG + Authelia + OpenLDAP + phpLDAPadmin, without success so far 😅

Link to comment
