[BETA DOCKER] NGINX Proxy


smdion


Per request via PM, here is an NGINX proxy Docker container. I run an Apache reverse proxy myself, so please test and let me know how it runs; once it's working I will add it to my repo. I forked dockerfile/nginx and adapted it to our base image.

 

https://registry.hub.docker.com/u/smdion/docker-nginx/

 

Available on my beta-template-repo: https://github.com/smdion/docker-containers/tree/beta-templates

 

docker run -d -p 80:80 -p 443:443 -v /path/to/nginx/sites-enabled:/etc/nginx/sites-enabled -v /path/to/nginx/certs:/etc/nginx/certs -v /path/to/nginx/logs:/var/log/nginx --name nginx smdion/docker-nginx
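
If you are wondering what goes into the mounted sites-enabled folder: each file in there holds a server block. A minimal sketch (the filename, server_name and upstream address/port are placeholders for whatever you actually run):

    # /path/to/nginx/sites-enabled/default -- placeholder filename
    server {
        listen 80;
        server_name myserver.lan;                # placeholder hostname

        location / {
            proxy_pass http://192.168.1.10:8080; # placeholder: IP/port of the app to proxy
        }
    }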

 

 

 


I have been thinking about this for some time and mentioned it in the thread discussing why, IMHO, the unRAID web GUI should not be on port 80 or 443, and why those ports should be kept free for user-facing things. In that thread lots of clever ideas were thrown around for how to do redirects etc.

 

Where I see the nginx reverse proxy coming into play is in presenting a single jump page for arbitrary Docker web applications. There are things kicking about like Maraschino, but they are very specific. For example, why should I have to remember the port for the Polipo web GUI, or ask a Maraschino dev to support Ubooquity... wouldn't it be better to just browse to the unRAID IP and from there be presented with a jump page?

 

I know the Docker addon lists the ports etc., but that is for admins only; it is of zero use to non-admins.

 

So, ramble aside, I think what we should do here is initially present nginx on port 81 with a default setup that has a framework for reverse proxying anything anyone wants to add. By default everyone has emHTTP, so that's the one to use as an example.
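
As a rough sketch of what that default could look like (the upstream address is a placeholder for your server's IP, and this assumes emHTTP is still on its stock port 80):

    server {
        listen 81;

        # example entry: the unRAID web GUI (emHTTP)
        location /unraid/ {
            proxy_pass http://192.168.1.10:80/;  # placeholder: your unRAID IP
        }
    }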

 

There is also a load of clever stuff we could add, like extra security and a single common self-signed TLS cert, but I think it is a case of build it and they will come.
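
For the TLS side, the certs volume in the run command above is the natural home for a common self-signed cert; a minimal sketch, with placeholder filenames under /etc/nginx/certs:

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/server.crt;  # placeholder filenames
        ssl_certificate_key /etc/nginx/certs/server.key;

        location / {
            proxy_pass http://192.168.1.10:8080;          # placeholder upstream
        }
    }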

 

Nice work

I have kind of already done this with an Apache reverse proxy, but mine is a redirect based on the "/address". A jump page may be neat as well (or both). The .conf looked a little easier for me to set up than nginx. I have changed the unRAID WebGUI to port 8008 and have Apache running on 80 and 443. I use ddclient to update DynDNS with my domain. This way I can go to www.mydomain.com/nzbdrone or www.mydomain.com/nzbget and access everything. I have a free SSL cert from www.smartssl.com.

 

My config for the Apache reverse proxy: http://pastebin.com/raw.php?i=TJkcxzvh

 

Is nginx really that superior to Apache?


nginx is not better as such, and it is certainly not as feature-packed as Apache, but it's much lighter and faster at what it does.

 

I, like most, came to it from a need to reduce Apache bloat, so I moved to lighttpd, then got frustrated with that and settled on nginx.

 

For this application it probably doesn't make much difference, although lighter is always better. For that reason I would say nginx is a better long-term fit.

 

The config will seem familiar to you:

 

    location /nzbget {
        client_max_body_size 8m;
        proxy_pass http://127.0.0.1:6789;
    }

 

I prefer to bind all the daemons to localhost only and then control access via nginx reverse proxy as above.
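
Depending on the app, you may also want to pass the usual forwarding headers through; the same location with those added (standard nginx directives, upstream unchanged):

    location /nzbget {
        client_max_body_size 8m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:6789;
    }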

 

The tricky part is that, whilst a jump page and the config are not hard, I don't know how to make it completely point-and-click n00b friendly without just adding every possible daemon to the jump page.
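
The jump page itself can be as simple as a static index served from the root of the same server block, with one location per app; a rough sketch (the root path and the app list are placeholders):

    server {
        listen 81;
        root /etc/nginx/www;   # placeholder: wherever the static jump page lives
        index index.html;

        location /nzbget/ {
            proxy_pass http://127.0.0.1:6789/;
        }

        # ...one location block per daemon you want on the jump page
    }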

I wonder if we could pull from the XML files to build a jump page.


I was thinking the same thing, but it doesn't feel that elegant to me, as there would be lots of non-HTTP(S) ports in there as well.

 

Perhaps we just hack something together as a working proof of concept and see if it inspires a longer-term, more robust solution.

 

 

Maybe add a flag in a variable? A 'turn on/turn off' for the web landing page? That's a 'hackish' answer, but it may work.


I was thinking about it, and really the only way of not breaking the fundamental Docker mantra of "run it as many times as you like" is to go back to the first idea.

 

So we would mount the path to the dockerMan XML read-only and, on container start, do something like generate a local config file per dockerMan container, to allow users to do things like hide/disable entries, add comments, etc.

 

Done this way, if someone wants to run two of these it would not break anything.

 

On a second run, any new containers could be detected and new configs generated. It's not particularly elegant, but it would be reliable.


Any way to expose the nginx.conf file itself? I think editing that file is necessary to set up the shared memory zones that are needed for things like rate limiting and connection limiting.
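
For example, the sort of thing I mean, as a rough sketch (zone names, sizes and limits are just placeholders):

    # in the http {} context of nginx.conf
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=5r/s;
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;

    # then inside a server/location block
    location / {
        limit_req  zone=perip_req burst=10;
        limit_conn perip_conn 10;
        proxy_pass http://127.0.0.1:6789;   # placeholder upstream
    }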

 

Got it working. Please update and add in the location where you want nginx.conf to be created, in the /nginxconf volume.

 


  • 3 months later...

Please provide a step-by-step guide, as I am really struggling to get this working.

 

I was assuming that after installing this I would have some base files to work from, but I don't even have a blank sites-enabled file to start filling in.

 

If I need to manually create the files then fine, but please let me know what and where.

 

If it should have downloaded these files, then please provide an idiot's guide, as I am obviously one :)

  • 2 weeks later...

Still having real trouble here :(

 

I cannot even get a simple port 80 index.html file to display.

 

Here is my sites-enabled/default file:

 

server {
    listen 80;
    root /mnt/cache/docker/appdata/nginx/www;
    index index.php index.html index.htm;
    server_name mynoipserveraddress;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
    }

    location ~ /\.ht {
        deny all;
    }
}

 

Can anyone help me? It's driving me mad, as I know it's something stupid I am missing...



You can't use port 80; it is taken by unRAID to display its interface.

Use port 90 or 81 or 8080 or whatever you fancy; just not 80, effa.

 

To be complete: you leave the container port on 80, but the host port is something else. When calling the page you must use the host port, as in 127.0.0.1:90.
