[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



48 minutes ago, cpsmith516 said:

Are there any plans to add all of the DNS validation methods that are included with ACME v2, as found in things like pfSense and OPNsense? I'd really like to be able to use my GoDaddy API to do DNS validation and get wildcard certs for my domain.

Our image doesn't use acme.sh, which is a third-party script. We use the official client, certbot.

 

Honestly, GoDaddy is not very good at DNS services. I'd take Cloudflare over any of those registrar-provided DNS services any day. Cloudflare is free, very easy to switch to, and propagates changes almost instantly.

 

You're better off using Cloudflare as your DNS provider.

 

I have domains bought from various providers like Dynadot and Namecheap, but they all point their name servers to Cloudflare, which handles DNS for me.
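
With Cloudflare, getting a wildcard cert through DNS validation in this image looks roughly like this (a minimal sketch; double-check the variable names against the README, and put your Cloudflare API credentials in /config/dns-conf/cloudflare.ini after the first start):

docker run -d \
  --name=letsencrypt \
  --cap-add=NET_ADMIN \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/London \
  -e URL=yourdomain.com \
  -e SUBDOMAINS=wildcard \
  -e VALIDATION=dns \
  -e DNSPLUGIN=cloudflare \
  -e EMAIL=you@yourdomain.com \
  -p 443:443 -p 80:80 \
  -v /mnt/user/appdata/letsencrypt:/config \
  --restart unless-stopped \
  linuxserver/letsencrypt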

Link to comment
2 hours ago, aptalca said:

If Sonarr and Radarr are on macvlan, and the others are on a custom bridge, they won't be able to connect to each other. That's a Docker security feature to prevent connections between the host (and the host's networks) and macvlan.

OK, so just to confirm: by macvlan, do you mean this VLAN (the pfsense VLAN)? :

 

[Screenshot: Docker network settings 1]

 

My custom network for Ombi and the other proxied containers is on the br.fc02.... bridge, and br0.51 is the pfSense network:

 

[Screenshot: Docker network settings 2]

 

If there is no way to do it within Unraid in my case, is there another possibility through my UniFi controller? I also have a Ubiquiti managed switch I was hoping to use in case I can't configure it within Unraid...

 

Thanks for your help aptalca! :)

Link to comment

Newbie here. Trying to get the container set up, but it is failing on all of my subdomains during the http-01 challenge verification.

 

Setup:

Domain - jesseefamily.org (subdomains have been set up for various other containers and CNAMEs have been added to the domain to point to a DDNS service; so far, everything is routing correctly)

 

DDNS - using No-IP

 

Router - ports 80 and 443 have been forwarded

[Attached: letsencryptsettings.JPG, letsencrypt.log]

Link to comment
13 hours ago, SirCaveman said:

OK, so just to confirm: by macvlan, do you mean this VLAN (the pfsense VLAN)? :

 

[Screenshot: Docker network settings 1]

 

My custom network for Ombi and the other proxied containers is on the br.fc02.... bridge, and br0.51 is the pfSense network:

 

[Screenshot: Docker network settings 2]

 

If there is no way to do it within Unraid in my case, is there another possibility through my UniFi controller? I also have a Ubiquiti managed switch I was hoping to use in case I can't configure it within Unraid...

 

Thanks for your help aptalca! :)

There are three types of networking for Docker: host, bridge and macvlan.

 

Host and bridge networks use the same IP as the host machine. Macvlan lets the container get its own IP. In this case you're using macvlan, and it has that security feature that blocks communication between the container and the host IP (and everything else that's using that IP).

 

I don't know what networks your other containers are on, so I can't comment on them.
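
If you want containers to be able to reach each other by name, put them on the same user-defined bridge. Roughly (names and the Ombi port are just examples):

# create a user-defined bridge and attach the containers to it
docker network create proxynet
docker run -d --name=letsencrypt --network=proxynet linuxserver/letsencrypt
docker run -d --name=ombi --network=proxynet linuxserver/ombi
# nginx inside the letsencrypt container can then proxy to http://ombi:3579 by container name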

Link to comment
7 hours ago, biggiesize said:

Newbie here. Trying to get the container set up, but it is failing on all of my subdomains during the http-01 challenge verification.

 

Setup:

Domain - jesseefamily.org (subdomains have been set up for various other containers and CNAMEs have been added to the domain to point to a DDNS service; so far, everything is routing correctly)

 

DDNS - using No-IP

 

Router - ports 80 and 443 have been forwarded

[Attached: letsencryptsettings.JPG, letsencrypt.log (5.68 kB)]

Either the IP in your DNS is incorrect, or your port forwarding is.

 

Try this: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/
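
As a quick sanity check before recreating the container, confirm that port 80 actually reaches nginx from outside your LAN, for example:

# run this from OUTSIDE your LAN (e.g. a phone on mobile data) against your own domain
curl -I http://jesseefamily.org/
# an HTTP response means the forward works; a timeout usually means the DNS record,
# the port forward, or an ISP block on port 80 is the problem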

Link to comment
4 hours ago, aptalca said:

There are three types of networking for Docker: host, bridge and macvlan.

 

Host and bridge networks use the same IP as the host machine. Macvlan lets the container get its own IP. In this case you're using macvlan, and it has that security feature that blocks communication between the container and the host IP (and everything else that's using that IP).

 

I don't know what networks your other containers are on, so I can't comment on them.

Hi aptalca, 

 

OK, so my setup is like this:

 

(My USG is my gateway at 192.168.2.1)

letsencrypt    172.19.0.2 ==> 192.168.2.101
sonarr         192.168.1.6
radarr         192.168.1.7
ombi           172.19.0.4 ==> 192.168.2.101
pfsense VM WAN = 192.168.2.42
pfsense VM LAN = 192.168.1.1

 

And then there are some containers on the default bridge (172.17.0.x ==> 192.168.2.101).

So am I correct to assume that Sonarr and Radarr will never be able to connect to the other containers with the current setup?

It took me quite a while to get my head around the pfSense VM and making it route traffic from specific containers (NZBGet, Deluge, Sonarr, Radarr) ONLY through OpenVPN, which is also set up in pfSense.

 

But if I have to abandon this to get Ombi to work with Sonarr and Radarr, I will have to rethink everything... Although even if I ditch pfSense and set up an OpenVPN container and try to route traffic through there, I would probably still need to use macvlan for that...

Link to comment
3 hours ago, SirCaveman said:

Hi aptalca, 

 

OK, so my setup is like this:

 

(My USG is my gateway at 192.168.2.1)

letsencrypt    172.19.0.2 ==> 192.168.2.101
sonarr         192.168.1.6
radarr         192.168.1.7
ombi           172.19.0.4 ==> 192.168.2.101
pfsense VM WAN = 192.168.2.42
pfsense VM LAN = 192.168.1.1

 

And then there are some containers on the default bridge (172.17.0.x ==> 192.168.2.101).

So am I correct to assume that Sonarr and Radarr will never be able to connect to the other containers with the current setup?

It took me quite a while to get my head around the pfSense VM and making it route traffic from specific containers (NZBGet, Deluge, Sonarr, Radarr) ONLY through OpenVPN, which is also set up in pfSense.

 

But if I have to abandon this to get Ombi to work with Sonarr and Radarr, I will have to rethink everything... Although even if I ditch pfSense and set up an OpenVPN container and try to route traffic through there, I would probably still need to use macvlan for that...

I'm assuming you're making those go through the VPN only for outgoing connections, not incoming. Then why not make your whole LAN go through the VPN?

 

I use PIA, and my whole LAN goes out through the VPN by default via pfSense. Only for streaming devices (Netflix and Amazon refuse to work through a VPN) do I have a rule that lets them bypass the VPN based on their IP. I don't use VLANs for containers or streamers (only for a guest network).

 

Then you'll have all containers on bridge or host, and no issues with them connecting to each other.

Link to comment
3 hours ago, aptalca said:

I'm assuming you're making those go through the VPN only for outgoing connections, not incoming. Then why not make your whole LAN go through the VPN?

 

I use PIA, and my whole LAN goes out through the VPN by default via pfSense. Only for streaming devices (Netflix and Amazon refuse to work through a VPN) do I have a rule that lets them bypass the VPN based on their IP. I don't use VLANs for containers or streamers (only for a guest network).

 

Then you'll have all containers on bridge or host, and no issues with them connecting to each other.

Only NZBGet and Deluge need to go out and in through the VPN, yes; Sonarr and Radarr don't need to, I think. Plex will always stay on host, and I assume my UniFi controller will need to stay on bridge because it has to work with my USG, of course.

But if I assign br0.51 (the pfSense LAN) to all containers, will my letsencrypt container still proxy the three specific containers correctly? I'm having a hard time getting my head around how the proxy sits between pfSense and my actual internal network...

I'm using Cloudflare with my own domain for DNS and Let's Encrypt, by the way.

 

Do you also have your current setup with just one NIC? The reason I set it up with a pfSense VM is that I only have one NIC in the Unraid machine... otherwise I could have routed traffic through another NIC, etc.

Link to comment
12 hours ago, aptalca said:

Either the IP in your DNS is incorrect, or your port forwarding is.

 

Try this: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/

Turns out my provider is blocking those ports. I switched to Cloudflare for DNS but am now getting the below error when starting the container.

 

nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:

Link to comment

Greetings all. Has anyone set up code-server and/or GitLab-CE with letsencrypt? I've been trying to figure this out for about two weeks with no luck. Any help would be appreciated. GitLab-CE doesn't have its own sample config, but Gitea does; I'm not sure if I could use that with some modification or not. Code-server has one, but I'm not sure what I'm doing wrong. I am using letsencrypt:0.35.1-ls36 because an update in late July or early August caused my letsencrypt to crap out and made my existing Ombi and Nextcloud unusable.

 

  I also get this warning: "nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:"

 

Any help would be greatly appreciated.  Thanks!

Link to comment
9 hours ago, Spectral Force said:

Greetings all. Has anyone set up code-server and/or GitLab-CE with letsencrypt? I've been trying to figure this out for about two weeks with no luck. Any help would be appreciated. GitLab-CE doesn't have its own sample config, but Gitea does; I'm not sure if I could use that with some modification or not. Code-server has one, but I'm not sure what I'm doing wrong. I am using letsencrypt:0.35.1-ls36 because an update in late July or early August caused my letsencrypt to crap out and made my existing Ombi and Nextcloud unusable.

 

  I also get this warning: "nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:"

 

Any help would be greatly appreciated.  Thanks!

The code-server subdomain config that comes with letsencrypt works with no issues here.
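
All it takes here is enabling the shipped sample, roughly (paths assume the default Unraid appdata location for this container):

# rename the sample so nginx picks it up, then restart the container
cp /mnt/user/appdata/letsencrypt/nginx/proxy-confs/code-server.subdomain.conf.sample \
   /mnt/user/appdata/letsencrypt/nginx/proxy-confs/code-server.subdomain.conf
docker restart letsencrypt
# the code-server container must be reachable by the upstream name used in that conf;
# adjust the container name in the conf if yours differs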

Link to comment
49 minutes ago, aptalca said:

The code-server subdomain config that comes with letsencrypt works with no issues here.

I was forced to roll back my letsencrypt to an older version because of some update, as per my thread and the Lua warning. Do you think that would have anything to do with it?

 

I changed the domain in the conf to code.*; so it matches my subdomain. I'm using duckdns.org and have that added in the docker settings. I'm just not sure why it's not working. I followed the same steps I used to set up my Ombi install.

Link to comment
1 hour ago, Spectral Force said:

I was forced to roll back my letsencrypt to an older version because of some update, as per my thread and the Lua warning. Do you think that would have anything to do with it?

 

I changed the domain in the conf to code.*; so it matches my subdomain. I'm using duckdns.org and have that added in the docker settings. I'm just not sure why it's not working. I followed the same steps I used to set up my Ombi install.

 

The Lua error is harmless and doesn't cause any problems.

 

It's not easy to say why yours is not working, as we don't have any info other than "not working". Did you only edit the server name in the proxy conf, and is the container on the same custom bridge as letsencrypt? Are you connecting from outside, using your phone?

Post the error you get when trying to access code-server, and check the nginx logs to see if there's anything related to code-server.
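
To check the logs from the host, for example (log paths assume the image's default /config layout):

docker exec -it letsencrypt tail -f /config/log/nginx/error.log /config/log/nginx/access.log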

Link to comment

OK, I got code-server working. Does anyone have a way to use GitLab-CE with letsencrypt? I don't see a proxy conf for it, but I do see one for Gitea; I'm not sure if I can modify that to work for GitLab-CE. Any suggestions? Thanks for the previous responses about code-server.
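
What I had in mind is adapting the gitea sample into something like a gitlab.subdomain.conf along these lines (a rough, untested sketch; the gitlab-ce container name and its internal port 80 are guesses on my part, and the includes mirror the shipped samples):

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name gitlab.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # container name and internal port of the GitLab-CE docker (guessed values)
        set $upstream_app gitlab-ce;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}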
 

Link to comment
1 hour ago, dalben said:

Does this docker handle assigning certs to a container that has its own IP? I have my UniFi controller on a custom network with a dedicated IP, and I'm hoping I can get LE to issue a cert. I tried the other nginx/LE docker, but it seems a different IP isn't supported.

 

How to use the cert in another container is explained in the Readme.
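
In short, it boils down to mounting the cert folder into the other container, roughly (paths and names are just examples):

# the "more secure" option from the Readme: share only the etc/letsencrypt folder
docker run -d --name=unifi-controller \
  -v /mnt/user/appdata/letsencrypt/etc/letsencrypt:/le-ssl:ro \
  linuxserver/unifi-controller
# inside that container the certs then live under /le-ssl/live/your.domain.url/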

Link to comment
1 hour ago, saarg said:

 

How to use the cert in another container is explained in the Readme.

I guess you mean this bit...

Quote

Using certs in other containers

This container includes auto-generated pfx and private-fullchain-bundle pem certs that are needed by other apps like Emby and Znc.

To use these certs in other containers, do either of the following:

(Easier) Mount the letsencrypt config folder in other containers (ie. -v /path-to-le-config:/le-ssl) and in the other containers, use the cert location /le-ssl/keys/letsencrypt/

(More secure) Mount the letsencrypt folder etc/letsencrypt that resides under /config in other containers (ie. -v /path-to-le-config/etc/letsencrypt:/le-ssl) and in the other containers, use the cert location /le-ssl/live/your.domain.url/ (This is more secure because the first method shares the entire letsencrypt config folder with other containers, including the www files, whereas the second method only shares the ssl certs)

These certs include:

cert.pem, chain.pem, fullchain.pem and privkey.pem, which are generated by letsencrypt and used by nginx and various other apps

privkey.pfx, a format supported by Microsoft and commonly used by dotnet apps such as Emby Server (no password)

priv-fullchain-bundle.pem, a pem cert that bundles the private key and the fullchain, used by apps like ZNC

All a bit over my head, and I'd need to read up a fair bit to work that out, which is fine. But I simply want to know if using it with other containers that have a different IP address is possible. The current nginx/LE docker I use doesn't support that. In the Readme above I see no mention of differing IP addresses one way or another, so I'm still unclear.

Link to comment
5 hours ago, dalben said:

I guess you mean this bit...

All a bit over my head, and I'd need to read up a fair bit to work that out, which is fine. But I simply want to know if using it with other containers that have a different IP address is possible. The current nginx/LE docker I use doesn't support that. In the Readme above I see no mention of differing IP addresses one way or another, so I'm still unclear.

Are you talking about reverse proxying containers on macvlan? Then the answer is no. It's a Docker restriction (security feature) that blocks connections between the host and containers on macvlan.

Link to comment

If this has been asked and answered, I apologize in advance. I searched, but nothing came up.

 

My situation: trying to add a second domain, which I did by creating a variable in docker called EXTRA_DOMAINS. It seems to work.

 

My problem: firstly, does the second domain leverage the SUBDOMAINS already created for the first domain? What if I wish to use other subdomains not listed? Can I create another variable for just the subdomains to be used by the second domain?

 

Thanks!

 

 

Link to comment
40 minutes ago, pimogo said:

If this has been asked and answered, I apologize in advance. I searched, but nothing came up.

 

My situation: trying to add a second domain, which I did by creating a variable in docker called EXTRA_DOMAINS. It seems to work.

 

My problem: firstly, does the second domain leverage the SUBDOMAINS already created for the first domain? What if I wish to use other subdomains not listed? Can I create another variable for just the subdomains to be used by the second domain?

 

Thanks!

 

 

EXTRA_DOMAINS takes FQDNs, so add your subdomains in there as additional FQDNs.
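
For example, in the container's environment (the domain names here are just placeholders):

-e URL=firstdomain.com \
-e SUBDOMAINS=www,ombi,nextcloud \
-e EXTRA_DOMAINS=seconddomain.net,www.seconddomain.net,ombi.seconddomain.net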

Link to comment

Is letsencrypt currently down / having problems? I'm trying to follow https://www.youtube.com/watch?v=I0lhZc25Sro to get Nextcloud working from outside of the network, but I keep getting "command failed" while trying to pull down letsencrypt, and when running the docker it fails to reach the domains from DuckDNS even with ports forwarded on the router. I've spent around 3 hours trying to get this to work; any help would be great. (I did try to use Resilio Sync instead of Nextcloud, but that had just as many problems, if not more, than this.) If anyone knows an easy way of sending pictures and files from a phone to Unraid, let me know.

 

Link to comment
4 hours ago, C_James said:

Is letsencrypt currently down / having problems? I'm trying to follow https://www.youtube.com/watch?v=I0lhZc25Sro to get Nextcloud working from outside of the network, but I keep getting "command failed" while trying to pull down letsencrypt, and when running the docker it fails to reach the domains from DuckDNS even with ports forwarded on the router. I've spent around 3 hours trying to get this to work; any help would be great. (I did try to use Resilio Sync instead of Nextcloud, but that had just as many problems, if not more, than this.) If anyone knows an easy way of sending pictures and files from a phone to Unraid, let me know.

 

Post the commands or settings you tried and the error messages you got.

Link to comment

So I set up DuckDNS and the duckdns docker, then set up letsencrypt and get this: "ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container". Port forwards have been made for the ports, but the challenge fails every time. Is there an easier way to get Nextcloud to work from outside the local network on Unraid? Or are there any docker apps that let things like pictures be sent from a phone to a folder on Unraid? The last 4 days have been spent trying to get Nextcloud / Resilio Sync working; all are from linuxserver.

Link to comment
28 minutes ago, C_James said:

So I set up DuckDNS and the duckdns docker, then set up letsencrypt and get this: "ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container". Port forwards have been made for the ports, but the challenge fails every time. Is there an easier way to get Nextcloud to work from outside the local network on Unraid? Or are there any docker apps that let things like pictures be sent from a phone to a folder on Unraid? The last 4 days have been spent trying to get Nextcloud / Resilio Sync working; all are from linuxserver.

Take it one step at a time. You have not gotten your certs yet, so there's no point in messing around with the reverse proxy.

 

See here: https://blog.linuxserver.io/2019/07/10/troubleshooting-letsencrypt-image-port-mapping-and-forwarding/
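
The usual gotcha covered there is that the router must forward to the host ports the container is mapped to, not to the container ports directly. A sketch (the host ports below are just examples, common on Unraid because the webUI already uses 80/443):

# container mapping: host 180 -> container 80, host 1443 -> container 443
docker run -d --name=letsencrypt -p 180:80 -p 1443:443 -v /mnt/user/appdata/letsencrypt:/config linuxserver/letsencrypt
# router port forwards must then be: WAN 80 -> unraid-ip:180 and WAN 443 -> unraid-ip:1443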

Link to comment
