[Support] Linuxserver.io - SWAG - Secure Web Application Gateway (Nginx/PHP/Certbot/Fail2ban)



I'm having an issue uploading large files to nextcloud, but only when using the letsencrypt reverse proxy; it works fine without letsencrypt. Even with just a 2.3 GB file, the upload completes on the client and I can see nextcloud processing and copying the file into its final location under nextcloud//files/. However, this only lasts for around a minute, then the file stops being written and the client is told the upload timed out. Watching the file being written, it gets to somewhere in the range of 800-1200 MB.
 
If I turn the reverse proxy off and revert those settings, it works fine and the "processing" step of copying into the final location runs for longer than that minute. All the guides I've seen about configuring letsencrypt talk about removing client_max_body_size, but that was already removed back on 01/21/2019. I'm on the latest nextcloud and letsencrypt dockers.
 
There were some timeout settings in letsencrypt/nginx/proxy.conf (send_timeout, proxy_*_timeout); increasing those significantly and restarting yielded the same result. Same with modifying proxy_max_temp_file_size in letsencrypt/nginx/proxy-confs/nextcloud.*.conf.
 
I'm not really seeing anything in letsencrypt's or nextcloud's log/[nginx,php]/*.log either. Is there a log level I should be changing?
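For reference, these are the sort of overrides I was experimenting with (the values are just examples, and none of them changed the behaviour for me):

# in letsencrypt/nginx/proxy.conf
send_timeout 3600s;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;

# in the nextcloud proxy conf, inside the location block
proxy_max_temp_file_size 4096m;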


Sadly I can’t offer a helpful reply but sure wish I could. More or less at this point I suspect we will continue to have this issue until “chunked file upload for iOS app” is a priority and addressed. My problems mentioned a page or two back were only an issue on iOS app.
Link to comment
1 hour ago, blaine07 said:

 


Sadly I can’t offer a helpful reply but sure wish I could. More or less at this point I suspect we will continue to have this issue until “chunked file upload for iOS app” is a priority and addressed. My problems mentioned a page or two back were only an issue on iOS app.

 

Using Android here, so it might be a different issue in my case?

Also, for me, it works without the letsencrypt reverse proxy (on LAN) and fails with the letsencrypt reverse proxy (still on LAN, same upload speed).

Edited by robobub
Link to comment
15 hours ago, saarg said:

If you turn off your server at night, the certs will not renew. The cron job runs at 2 at night.

 

Have you checked in the browser that the current cert is expiring?

So, I left the ports redirected and the container running and, indeed, the certificates renewed.

Thanks.

I have a script that starts letsencrypt (and the containers using it) at 10:00 in the morning and turns them off at 14:00, basically for when I need them. Also, I usually keep ports 80 and 443 un-redirected; I don't really want to keep them open to the internet when I don't need them.

Is there any way to configure the hour the cron job runs? (Is there a variable for the docker? I checked on GitHub but didn't find anything.)

Thanks again.

Link to comment
1 hour ago, dhstsw said:

So, I left the ports redirected and the container running and, indeed, the certificates renewed.

Thanks.

I have a script that starts letsencrypt (and the containers using it) at 10:00 in the morning and turns them off at 14:00, basically for when I need them. Also, I usually keep ports 80 and 443 un-redirected; I don't really want to keep them open to the internet when I don't need them.

Is there any way to configure the hour the cron job runs? (Is there a variable for the docker? I checked on GitHub but didn't find anything.)

Thanks again.

If you look at our blog post about modifying our containers, you can add the crontab file at each boot of the container so that it is persistent across updates. The downside of that is that you will lose any updates we make to the cron job.

 

You can modify the root file in /etc/crontabs/ and set the time to when your container runs.
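For illustration, the root crontab uses the standard five time fields followed by the command; keep whatever renewal command is already in the file and just change the hour field. This is only a sketch, and the actual command in the image may differ:

# /etc/crontabs/root
# min  hour  day  month  weekday  command
0 11 * * * <existing cert renewal command from the file>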

Link to comment
32 minutes ago, saarg said:

If you look at our blog post about modifying our containers, you can add the crontab file at each boot of the container so that it is persistent across updates. The downside of that is that you will lose any updates we make to the cron job.

 

You can modify the root file in /etc/crontabs/ and set the time to when your container runs.

 

Found it.
Thanks!

Link to comment
I am trying to expose my Octoprint page, but am having trouble finding a configuration that will work.  
 
Here are the examples that Octoprint provides: https://community.octoprint.org/t/reverse-proxy-configuration-examples/1107
 
Here's my current config:
 
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name print.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.2.13:80;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Scheme $scheme;
    }
}

I took out a few lines that were causing the docker container to throw errors.  I'm currently getting a 500 error.  If I copy a config from another container and change the IP/port/subdomain, I do actually get to see the login page, but it says it's offline and asks me to reconnect.
 
Has anyone successfully configured Octoprint in this container? If so, would you be able to share the config?

I'm rather interested in this too. If you find an answer elsewhere, can you give us an update?

Sent from my ONEPLUS A5010 using Tapatalk

Link to comment

Getting the following errors (running calibre, shinobi, and ubooquity right now):

 

Server ready
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "ocsp.int-x3.letsencrypt.org" in the certificate "/config/keys/letsencrypt/fullchain.pem"
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

Any ideas what is going on?

Link to comment

Hello there! I have just configured this awesome docker and I can access some dockers via HTTPS and DuckDNS, but I cannot manage to access my server itself this way. I did not find a "server.subdomain.conf.sample" in proxy-confs, so when I try to access it I land on "Welcome to our server". Is there something I'm missing/doing wrong? (I followed the awesome Spaceinvader One setup video.)

Link to comment
Hello there! I have just configured this awesome docker and I can access some dockers via HTTPS and DuckDNS, but I cannot manage to access my server itself this way. I did not find a "server.subdomain.conf.sample" in proxy-confs, so when I try to access it I land on "Welcome to our server". Is there something I'm missing/doing wrong? (I followed the awesome Spaceinvader One setup video.)

You 100% don’t want your Unraid server accessible directly over the web, if that’s what you’re fiddling with. Perhaps look into the OpenVPN docker for remote access to the server.
Link to comment
51 minutes ago, blaine07 said:


You 100% don’t want your Unraid server accessible directly over the web, if that’s what you’re fiddling with. Perhaps look into the OpenVPN docker for remote access to the server.

Just done that. Didn't know that exposing the server webui was that dangerous. Thanks for the tip.

Link to comment

Has anyone gotten this docker to work with the Zoneminder docker, or any other docker that uses a base URL as its landing page? Zoneminder uses a base URL (mydomain.com/zm), so using the subdomain and subfolder configs is apparently not an option. I see that the nginx/site-conf directory has a file called "default" that has an example couchpotato config using a base URL, but I'm not sure if that file even does anything.

 

 

Link to comment
On 1/25/2020 at 1:51 PM, manderso said:

Looking at page information, on the security tab in firefox, for my nextcloud page, I see

Verified by: Let's Encrypt,

Expires on: December 28, 2019.

Finally figured this out by looking in the letsencrypt.log file in /config.

Was seeing this:

Quote

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/nextcloud.xxxx.info.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Non-interactive renewal: random delay of 385.555742254033 seconds
Plugins selected: Authenticator standalone, Installer None
Attempting to renew cert (nextcloud.xxxx.info) from /etc/letsencrypt/renewal/nextcloud.xxxx.info.conf produced an unexpected error: Account at /etc/letsencrypt/accounts/acme-v01.api.letsencrypt.org/directory/9f0ac5bfe9a6bf9465c982abea8e4cf1 does not exist. Skipping.
All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/nextcloud.xxxx.info/fullchain.pem (failure)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/nextcloud.xxxx.info/fullchain.pem (failure)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1 renew failure(s), 0 parse failure(s)

I'm not quite sure how I solved it, but the problem was that the account in the mentioned directory was not there.

I originally had 2 subdomains set to get a cert, then stopped using one of them, deleted it in the settings and hit apply, which brought the container down again and, for some reason, added a new account to the directory. Letsencrypt saw it and allowed the cert to renew.

Link to comment

Hello, I had updated to Unraid 6.8, which broke the --network extra param I was using, so after a bunch of looking around and digging I finally got it to work... with most dockers. The issue I have run into is getting it to work with dockers that use the same container ports. For example, right now I am trying to get it to work with OrganizrV2, which uses port 80 and port 443. The solution I found for getting it to work with other dockers was, in the conf, to use the IP of the letsencrypt docker instead of the docker name. But if I point the Organizr conf at the letsencrypt IP I get endless redirects and it fails, because letsencrypt itself is using ports 80 and 443; this does not give me any errors in the logs. So I tried putting the container name in the conf instead of the IP, and I get an operation timed out:

2020/02/04 18:23:20 [error] 397#397: *3 organizrv2 could not be resolved (110: Operation timed out), client: 192.168.1.1, server: organizr.*, request: "GET / HTTP/2.0", host: 

This is my conf for OrganizrV2:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name organizr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_organizr organizrv2;
        proxy_pass http://$upstream_organizr:80;
        proxy_buffering off;
    }
}


 

Link to comment
On 2/2/2020 at 3:09 PM, phreeq said:

I'm rather interested in this too. If you find an answer elsewhere, can you give us an update?

Sent from my ONEPLUS A5010 using Tapatalk
 

Not yet. I tried using an emby config file and changing the ports/name/etc., and I WAS able to get to the page, but for some reason the login says that it's disconnected and to wait for reconnect. Not sure what the issue is.

Link to comment

I am trying to setup various dockers with the default subfolder confs for use with OrganizrV2. Some of the default configs are working and some aren't. 

  • ApacheGuacamole - works
  • Deluge - works
  • Jackett - This page can't be found 
  • Ombi - Appears to work but sits at a white page with the text "Loading..." (The subdomain conf for Ombi works though)
  • Plex - works
  • Radarr & Sonarr - I have forms authentication enabled and going to either of these turns the url into login?returnUrl=/radarr instead of /radarr causing it to not work
  • Sabnzbd - works
  • Tautulli - 404 not found the path '/tautulli' was not found 

I have made sure that all containers are on the same network and that the container names match what the conf is looking for. I do not see any errors for these in the error log file either. Any ideas for these issues? 

Link to comment
4 minutes ago, Chandler said:

I am trying to setup various dockers with the default subfolder confs for use with OrganizrV2. Some of the default configs are working and some aren't. 

  • ApacheGuacamole - works
  • Deluge - works
  • Jackett - This page can't be found 
  • Ombi - Appears to work but sits at a white page with the text "Loading..." (The subdomain conf for Ombi works though)
  • Plex - works
  • Radarr & Sonarr - I have forms authentication enabled and going to either of these turns the url into login?returnUrl=/radarr instead of /radarr causing it to not work
  • Sabnzbd - works
  • Tautulli - 404 not found the path '/tautulli' was not found 

I have made sure that all containers are on the same network and that the container names match what the conf is looking for. I do not see any errors for these in the error log file either. Any ideas for these issues? 

Try it without organizr. If they work (they work for us), then you can ask organizr devs why they don't work with it.

Link to comment

Noob question. Is it possible to use letsencrypt to direct port 80 to an apache docker?

 

I tried using this, but it didn't work. Also, I don't know what I'm doing; I'm just doing my best to copy something similar, I guess. I copied this from the phpmyadmin config.

server {
    server_name apachenet.*;


    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_apache apache;
        proxy_pass http://192.168.0.7:80;
    }
}

 

Link to comment
4 hours ago, uek2wooF said:

Feature request:

It would be nice to be able to specify the document root on another share outside the appdata share.  Since appdata prefers the cache and my static content would probably be fine on the array it would save me some disk space on the cache drive.

 

Thanks!

You can do that. Map an additional volume whose host side is on /mnt/user, and modify the default site conf to point the root directive at it.
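A minimal sketch of what that could look like, assuming a host share of /mnt/user/wwwroot mapped into the container as /wwwroot (both paths are just examples):

# docker template: add a path mapping, e.g. container /wwwroot -> host /mnt/user/wwwroot
# then in the default site conf, swap the root directive inside the existing server block:
#root /config/www;
root /wwwroot;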

 

EDIT: May have misread. I'm talking about the "web root".

 

If you're talking about being able to put the config folder on a share on the array, we can't do that. It's an issue/bug with the FUSE filesystem implementation Unraid uses for array shares.

Edited by aptalca
Link to comment
2 hours ago, karlpox said:

Noob question. Is it possible to use letsencrypt to direct port 80 to an apache docker?

 

I tried using this, but it didn't work. Also, I don't know what I'm doing; I'm just doing my best to copy something similar, I guess. I copied this from the phpmyadmin config.


server {
    server_name apachenet.*;


    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_apache apache;
        proxy_pass http://192.168.0.7:80;
    }
}

 

You removed crucial elements including the listen directive, as well as the inclusion of ssl.conf
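Roughly, the server block needs the same opening lines as the other proxy confs in this thread; a sketch, not a tested config:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name apachenet.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_apache apache;
        proxy_pass http://$upstream_apache:80;
    }
}

(You can keep proxy_pass pointed at the IP as in your version; the important additions are the listen lines and the ssl.conf include.)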

Link to comment
12 hours ago, aptalca said:

Try it without organizr. If they work (they work for us), then you can ask organizr devs why they don't work with it.

Sorry, I meant I was trying to set these up for use with Organizr. I have not actually put them in Organizr yet, so we can take that out of the equation. All I have done is enable the confs, make sure they are pointing to the right containers/ports, and enter mydomain.com/container, and I received all those errors in my post.

 

I fixed Tautulli. Had to add tautulli to the https root in its config. 

 

For Jackett, I have made no modifications to the subfolder conf other than renaming it to remove the sample portion. I don't get the usual 404 nginx error... 

[screenshot attached]

Fixed Jackett: I needed to redefine the base URL in its GUI. I guess the grayed-out one didn't count.

 

This leaves Ombi, Radarr, and Sonarr. I am not sure what to do with Ombi yet, but for Radarr and Sonarr I think I need to modify the confs. It looks like the requests are definitely hitting them when I go to mydomain.com/radarr, but then Radarr redirects to mydomain.com/login?returnUrl=/radarr because I have forms authentication enabled. How do I get it to not redirect there? Basically it needs to redirect to mydomain.com/radarr/login?returnUrl=/ instead.

 

Sonarr and Radarr are also now working since I added base URLs to them too. Now I just have an issue with Ombi. Heading to mydomain.com/ombi greets me with this:
[screenshot attached]

Edited by Chandler
Link to comment
8 hours ago, aptalca said:
13 hours ago, uek2wooF said:

Feature request:

It would be nice to be able to specify the document root on another share outside the appdata share.  Since appdata prefers the cache and my static content would probably be fine on the array it would save me some disk space on the cache drive.

 

Thanks!

You can do that. Map an additional volume whose host side is on /mnt/user, and modify the default site conf to point the root directive at it.

 

 

I tried

#root /config/www;
root /wwwroot;

and

root /mnt/user/wwwroot;

 

But it can't find either.  Something is mapping /config to a path and I guess it doesn't know where the real / is.

Link to comment
10 minutes ago, uek2wooF said:

 

I tried

#root /config/www;
root /wwwroot;

and

root /mnt/user/wwwroot;

 

But it can't find either.  Something is mapping /config to a path and I guess it doesn't know where the real / is.

You need to post the docker run command so we can see which folder you are mapping. With the info you posted, we can only guess.

Link to comment
