[Support] Linuxserver.io - Nginx


Recommended Posts

4 hours ago, local.bin said:

Would you say this approach is production ready?

 

I am using your docker for a number of self-hosted enterprise apps, and they are not fully PHP 7.2 ready yet.

Sure, it is. By default the container already connects to PHP via an IP address. It doesn't really matter whether the php/fcgi service runs in the same container or somewhere else, as long as that address is reachable.
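
For context, the default site config ships with a PHP handler roughly like the block below (exact contents can differ between image versions); pointing it at an external php-fpm container is just a matter of changing the fastcgi_pass address:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # change this address to any reachable php-fpm instance, e.g. another container
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
}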

Link to comment
7 hours ago, aptalca said:

Sure, it is. By default the container already connects to PHP via an IP address. It doesn't really matter whether the php/fcgi service runs in the same container or somewhere else, as long as that address is reachable.

Cool, thanks, you make a fair point.

 

Tried changing just the IP and got a 404, so I guess I need to point the php docker at the same web root my nginx docker is using...

 

Pinned to your build 140 (php 7.1.17) while I figure out the paths it needs.

Link to comment
  • 1 month later...

OK, I'm lost again. I had my reverse proxy (no external access) working for custom server names, but suddenly I'm getting "connection refused" errors from everything that should go through NGINX...

 

I reinstalled the NGINX docker completely, and the test page works as expected, but my config file is either ignoring the port I'm proxying to or (more likely, since I see nothing in the logs) not being read at all.

 

I've cut my default file down to a single docker for the moment, but can anyone see why this server entry would give me "connection refused" when it was previously working? The Plex server works fine at 192.168.0.201:32400 and at plex.hda.home:32400 (the name comes from a local hosts file for now; that will be addressed once I get a new router), so SOMETHING seems to be up with NGINX, but I haven't changed anything...

server {
    listen 80;
    server_name plex.hda.home;
    
    location / {
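        # pass everything straight through to the Plex server on the LAN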
        proxy_pass http://192.168.0.201:32400/;
    }
}

PS: And of course it worked the moment I posted this, having done nothing differently... Resolved, I guess, but I'll leave this up for now in case anyone has ideas. It's a stupidly simple config, I know, but none of this is exposed externally except through a VPN, and the custom DNS is, like I said, entirely hosts-file based for now (I had dnsmasq working on my router, but at the moment I have to connect through a non-optional ISP device with no such abilities; I'm looking at solutions, but frankly the hosts file works for me).

Edited by Bureaucromancer
Link to comment
  • 2 months later...
4 hours ago, unevent said:

Is there a reason for a sync being issued on container stop?  Can it be removed?  It spins up the entire array when this container is stopped or updated.  Thanks

I don't believe that's container related. You should ask in one of the unraid threads. 

Link to comment
29 minutes ago, aptalca said:

I don't believe that's container related. You should ask in one of the unraid threads. 

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid: 99
User gid: 100
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Signal handled: Terminated.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
Signal handled: Terminated.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.

 

Link to comment
1 hour ago, j0nnymoe said:

This is due to s6, which is used within all our containers. I believe you need to look up spin-up groups within unraid to get around this.

This is on cache only, with no maps to user shares. Can you elaborate on how spin-up groups would solve the issue? Thanks

Link to comment

Syncing disks is happening inside the container. Did you map your unraid disks or protected array locations in the container? If not, it shouldn't touch the array at all. Check your container settings and make sure that /mnt is not mapped. I remember there was talk a while back that unraid was going to add /mnt mapping by default to all newly created containers. Can't remember if that ended up happening or not. 
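
If you want to double-check the mappings from the command line rather than the template, something along these lines lists them (the container name "nginx" here is just a placeholder):

# show host path -> container path for every volume mapping of the container
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ printf "\n" }}{{ end }}' nginx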

Link to comment

So, I am running this speedtest docker on my unRAID server:

https://github.com/adolfintel/speedtest/tree/docker

 

 

Everything runs fine locally, but when I try to set up a reverse proxy with nginx, I run into a very annoying problem:

 

This is my nginx config for speedtest

 

    location /speed {

        include /config/nginx/proxy.conf;

        proxy_pass http://192.168.178.22:6580/;

    }

 

I can reach the page just fine, but if I try to use it I get a 404 error for one of the .js files it needs.

 

 

The TEST button tries to call https://REVER.SE/speedtest_worker.min.js, but since everything is behind the reverse proxy the correct path would be https://REVER.SE/PROXY/speedtest_worker.min.js.

 

I tried for hours to figure out how to set up a rewrite rule in nginx to automatically change all the URLs, but without luck.

 

 

Can somebody help me out here?

Edited by Random.Name
Link to comment
14 minutes ago, Random.Name said:

So, I am running this speedtest docker on my unRAID server:

https://github.com/adolfintel/speedtest/tree/docker

 

 

Everything runs fine locally, but when I try to set up a reverse proxy with nginx, I run into a very annoying problem:

 

This is my nginx config for speedtest

 

    location /speed {

        include /config/nginx/proxy.conf;

        proxy_pass http://192.168.178.22:6580/;

    }

 

I can reach the page just fine, but if I try to use it I get a 404 error for one of the .js files it needs.

 

 

The TEST button tries to call https://REVER.SE/speedtest_worker.min.js, but since everything is behind the reverse proxy the correct path would be https://REVER.SE/PROXY/speedtest_worker.min.js.

 

I tried for hours to figure out how to set up a rewrite rule in nginx to automatically change all the URLs, but without luck.

 

 

Can somebody help me out here?

You can proxy via subdomain

Link to comment
1 hour ago, Random.Name said:

Well, since I am quite new to nginx I would love to get some help here, too ;)

Also, everything is running on a DynDNS address and I am not sure if subdomains work with ddns.net.

Not sure about ddns, but duckdns lets you have any sub-subdomain below yours. So if you put in yoursubdomain.duckdns.org as the URL in letsencrypt and put in speedtest as the SUBDOMAINS variable, your cert will cover speedtest.yoursubdomain.duckdns.org as well as yoursubdomain.duckdns.org.

 

In nginx, there is already an example for cp showing how you can reverse proxy a subdomain. Use that and it should work.

 

When you use the subfolder method, the recommended way is to match the location (speedtest here) to the application's base URL. But if there is no base URL setting for your application, this gets complicated, as you experienced. The subdomain method doesn't require a base URL.
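
A minimal subdomain server block for this case could look like the sketch below, reusing the IP, port and proxy.conf from the subfolder attempt; the server_name follows the duckdns example above, and the ssl certificate lines should be copied from your existing default site config rather than taken from this sketch:

server {
    listen 443 ssl;
    server_name speedtest.yoursubdomain.duckdns.org;

    # copy the ssl_certificate/ssl_certificate_key lines (or the include that
    # provides them) from the existing default site config here

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.178.22:6580/;
    }
}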

Link to comment
7 hours ago, aptalca said:

Syncing disks is happening inside the container. Did you map your unraid disks or protected array locations in the container? If not, it shouldn't touch the array at all. Check your container settings and make sure that /mnt is not mapped. I remember there was talk a while back that unraid was going to add /mnt mapping by default to all newly created containers. Can't remember if that ended up happening or not. 

I have /mnt/cache/.apps/calibre_library/ and /mnt/cache/.appdata/nginx/ as the only maps.

Link to comment
  • 2 months later...

Hello,

Is it possible to add the PHP module "imagick" to the container? I already managed to install "imagick" in the container; is there a way to add it permanently?

 

Is there also a docker for Redis from Linuxserver.io? I found one directly from Redis.

 

I run Nextcloud (and also a reverse proxy for my other stuff) in this docker, and it would be super cool if you or I could add this permanently.

I also know there is the Nextcloud docker from Linuxserver.io, but I don't want to reverse proxy an nginx server if I already have one...

 

Best regards from Austria,

chips

Edited by ich777
Link to comment
1 hour ago, ich777 said:

Hello,

Is it possible to add the PHP module "imagick" to the container?

Is there also a docker for Redis from Linuxserver.io?

 

I run Nextcloud (and also a reverse proxy for my other stuff) in this docker, and it would be super cool if you or I could add this permanently.

I also know there is the Nextcloud docker from Linuxserver.io, but I don't want to reverse proxy an nginx server if I already have one...

 

Best regards from Austria,

chips

 

There is no Redis container from us.

I'll have a talk with the other guys about the PHP module. Is that the one you want added permanently, or do you mean Nextcloud?

Link to comment
1 minute ago, saarg said:

 

There is no Redis container from us.

I'll have a talk with the other guys about the PHP module. Is that the one you want added permanently, or do you mean Nextcloud?

It would be nice if you added the PHP module "imagick" permanently.

Link to comment
8 minutes ago, saarg said:

I didn't find it as an Alpine package, only php7-pecl-imagick. Is that the one?

More or less; it's a bigger package.
The right one is php7-imagick, but the other one works too.

 

But I think php7-pecl-imagick will do its job fine.

Edited by ich777
Link to comment
On 3/18/2019 at 1:42 PM, ich777 said:

Thank you!

Holy moly, php7-pecl-imagick has a ton of dependencies, and the size of all installed apps jumps from 156MB to 233MB.

 

So I'm not going to add it, but you can create a file on your host called 90-config with the following contents:

#!/usr/bin/with-contenv bash
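# installed on every container start; survives recreation as long as the host file exists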

apk add --no-cache php7-pecl-imagick

and then map that file on the host to the container location /etc/cont-init.d/90-config (add a new path in the container settings).

 

That will install it during container start, and it will survive container recreation as long as the file on the host is still there.
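
On the plain docker command line that extra mapping is just another volume; a rough sketch (the host paths are placeholders, and unraid users would add the same path through the container's template settings instead):

docker run -d \
  --name=nginx \
  -e PUID=99 -e PGID=100 \
  -p 80:80 -p 443:443 \
  -v /path/to/appdata/nginx:/config \
  -v /path/to/appdata/nginx/90-config:/etc/cont-init.d/90-config \
  linuxserver/nginx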

  • Upvote 1
Link to comment
  • 3 weeks later...
5 hours ago, ich777 said:

Works fine, thank you @aptalca!

 

Is it possible to add a cron job with the startup script in /etc/cont-init.d/90-config?

If yes, how should I do that, or is there a better way of doing this?

Do you mean to run the 90-config script using cron or run a cron job inside the container in the same script? 

Link to comment
On 4/10/2019 at 6:34 PM, saarg said:

Do you mean to run the 90-config script using cron or run a cron job inside the container in the same script? 

I want to run a specific cron job without changing the host's cron jobs; is that even possible?

I think the answer is: "run a cron job inside the container in the same script" or in a standalone "cronjob" script.

 

Edit:

Found a solution in this post.
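
For anyone finding this later: the general shape of such a solution is to have the same 90-config script drop a crontab entry for the container's busybox crond. A rough sketch, assuming crond is running (or gets started) inside the container, with a placeholder schedule and script path:

#!/usr/bin/with-contenv bash

# package install from the earlier example
apk add --no-cache php7-pecl-imagick

# add a cron entry for busybox crond once (schedule and script are placeholders)
grep -q custom-job.sh /etc/crontabs/root 2>/dev/null || \
    echo "*/15 * * * * /config/custom-job.sh" >> /etc/crontabs/root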

Edited by ich777
Found solution
Link to comment
