[Support] Djoss - Nginx Proxy Manager


Djoss

Recommended Posts

This is a great docker and I am just getting started. 

I can access my server via http without issue.

But I'm having issues using my custom ssl.  

 

I am getting ERR_SSL_VERSION_OR_CIPHER_MISMATCH when I try to access the server via the secure port.

Not sure what the problem is. Entering the certificate and key is simple.

However, I notice the fields do not change when I enter the location of the certificate and key (by that I mean, the input box does not populate with the location of the cert and key, such as c://documents/ssl/key.pem). Is that to be expected?

When I add the SSL certificate, it saves with the correct expiration date but says the provider is "custom".

Also, the cert and key are PEM encoded. Is there an issue with that?

 

Thanks for your help.

 

 

Link to comment
11 hours ago, eds said:

I am getting ERR_SSL_VERSION_OR_CIPHER_MISMATCH when I try to access the server via the secure port.

Are you able to see the cert from the browser?  Is it the expected one?
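If the browser won't show it, one way to see exactly which certificate the proxy presents is to pull it straight over TLS. A minimal Python sketch (the domain below is a placeholder):

```python
import ssl

def fetch_server_cert(host: str, port: int = 443) -> str:
    """Return the PEM-encoded certificate presented by host:port."""
    # Raises ConnectionRefusedError if nothing is listening, or
    # ssl.SSLError on a protocol/cipher mismatch -- itself a clue for
    # the ERR_SSL_VERSION_OR_CIPHER_MISMATCH symptom.
    return ssl.get_server_certificate((host, port))

# Usage (placeholder domain):
#   print(fetch_server_cert("yourdomainname.com", 443))
```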

 

11 hours ago, eds said:

However I notice the fields do not change when I enter the location of the certificate and key (by that I mean, the js box does not populate with the location of crt and key such as c://documents/ssl/key.pem).    Is that to be expected?

This is a known issue: https://github.com/jc21/nginx-proxy-manager/issues/50

Link to comment
5 hours ago, Djoss said:

Are you able to see the cert from the browser?  Is it the expected one?

 

No, I only see the error.

 

I've done some fiddling and got some progress.

 

Putting the ssl issue aside for a moment...

 

I was able to open port 80 on my modem which was surprisingly easy.

 

So I port forwarded port 80 to the docker, and I am able to see the congratulations page of nginx from both VPN and my mobile device.

 

Nice!

 

Now, I am using Namecheap as my DDNS (which works great with my Asus router, so why use DuckDNS, which does not?). This could be where I am getting stuck.

I read a few pages back that when using your own domain name you need to create a subdomain using a CNAME record with host: * and value: yourdomainname.com.

That wildcard now causes every subdomain to go to the congratulations page (i.e. couchpotato.yourdomainname.com, nzbget.yourdomainname.com, etc.). Is that a good thing?

If I create an entry in Nginx Proxy Manager and a CNAME with host: couchpotato and value: yourdomainname.com, I get a "Bad Gateway" error when I type in couchpotato.yourdomainname.com.

Then I read elsewhere that you do not need a CNAME but an A record. So I also created an A record with host: couchpotato and value: the IP address of my router.

Finally, I tried removing everything in Namecheap except the wildcard CNAME entry (*.yourdomainname.com) and just adding an entry in proxy manager, and that took me to a 502 Bad Gateway.

Which of these is correct?

It seems to me the last setup should be correct, but I get a 502 error page. What is that about?

Info in nginx seems straightforward, so I cannot imagine there is a problem there.

 

Thanks for your help!

 

 

Link to comment
23 hours ago, eds said:

That wildcard now causes every subdomain to go to the congratulations page (i.e. couchpotato.yourdomainname.com, nzbget.yourdomainname.com, etc.). Is that a good thing?

I would not do that. It's better to explicitly specify which subdomains you support. Otherwise, everything is redirected to your server, which creates unnecessary traffic.

23 hours ago, eds said:

I get a "Bad Gateway" error when I type in couchpotato.yourdomainname.com.

This is usually due to an incorrect configuration of the "Forward IP" in Nginx Proxy Manager. Make sure to set it to the local-network IP of the service you are forwarding to, and make sure to use the correct scheme (http/https).
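A "Bad Gateway" can be narrowed down quickly by testing whether the forward target even accepts TCP connections. A hedged Python sketch (the IP and port are placeholders):

```python
import socket

def backend_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds.

    A 502 from Nginx Proxy Manager usually means this would return
    False for the "Forward IP"/port you configured.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder values):
#   backend_reachable("192.168.1.10", 5050)
```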

Link to comment
On 1/26/2019 at 3:28 PM, Djoss said:

I would not do that. It's better to explicitly specify which subdomains you support. Otherwise, everything is redirected to your server, which creates unnecessary traffic.

This is usually due to an incorrect configuration of the "Forward IP" in Nginx Proxy Manager. Make sure to set it to the local-network IP of the service you are forwarding to, and make sure to use the correct scheme (http/https).

@Djoss

 

Thanks man, I finally got it working!

 

For me, a few things that may help others:

 

1. This only works if the various containers are in bridge mode (which I think was previously discussed on this board, but I forgot). All my containers had their own IP addresses, so I had to change each one, then stop and restart them. Remember, once all containers using this app are in bridge mode, the IP address in proxy manager is now unRAID's IP (with different ports).

2. Also, it's worth noting you DO NOT have to port forward in your router to any of the containers. Only forward ports to this app as required, and that's it. So I had to remove all the port forwards and IPs for the various containers within my router.

3. My Namecheap subdomain setup was right on. CNAME subdomains for each app linked to the domain name seem to work. No need for a wildcard.

4. In the apps, I made sure no container referenced SSL anywhere.

5. All destination IPs in proxy manager were set to http (so http://unraidip:appport).

6. I knew I was on to something when, looking at the logs, Let's Encrypt would successfully send a cert. Still could not get my custom cert to work, but beggars cannot be choosers.

7. One annoyance: the proxy app times out when trying to secure an SSL certificate via Let's Encrypt. So it appears to fail, but when you cancel the function you are able to select the SSL cert.

8. Finally, once Let's Encrypt had the cert, I set each subdomain to force use of the cert, and it works.
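The step-5 style destinations (http://unraidip:appport) can be sanity-checked before pointing the proxy at them; if the app never answers, NPM can only return a 502. A small Python sketch -- the URL is a placeholder:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def app_responds(base_url: str, timeout: float = 5.0) -> bool:
    """True if an HTTP GET to base_url returns any HTTP response.

    An HTTP error status (401, 404, ...) still proves the app is up;
    only a connection-level failure returns False.
    """
    try:
        with urlopen(base_url, timeout=timeout):
            return True
    except HTTPError:
        return True   # got an HTTP status back -- the app is listening
    except URLError:
        return False

# Usage (placeholder): app_responds("http://192.168.1.100:5050")
```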

 

Great app!

Link to comment

Has anyone gotten this to work with Nextcloud?

 

I'm stuck since Nextcloud seems to throw errors and cannot connect to my MySQL container when on a bridge network.

So I have to assign Nextcloud its own IP address for it to work. But doing so causes gateway issues for this app.

Link to comment
On 1/27/2019 at 6:25 AM, OOmatrixOO said:

Hi.

Does anybody have this working with the Emby server over the HTTPS port?

It worked for me with the HTTP port but not over the HTTPS port.

The public HTTPS port number in Emby is 8920 and the "Secure connection mode" is managed by reverse proxy.

I have a 502 Bad Gateway issue.

 

https://abload.de/img/3moj7w.png

I have the same thing. It seems that Emby only runs on the HTTP port in this setup. But this is not an issue for me.

Link to comment
8 hours ago, eds said:

Has anyone gotten this to work with Nextcloud?

 

I'm stuck since Nextcloud seems to throw errors and cannot connect to my MySQL container when on a bridge network.

So I have to assign Nextcloud its own IP address for it to work. But doing so causes gateway issues for this app.

When a container has its own IP, it cannot communicate with other services on the same host. It's for this reason that you need to use bridge mode.

You should be able to use Nextcloud in bridge mode (along with your MySQL container). Make sure you configure Nextcloud with the correct IP/port to connect to the DB.
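Whether the DB is actually reachable at that IP/port can be probed without a MySQL client: MySQL/MariaDB send a handshake packet as soon as a client connects, so reading a few bytes off a fresh TCP connection is enough. A sketch with placeholder host/port:

```python
import socket

def mysql_greets(host: str, port: int = 3306, timeout: float = 3.0) -> bool:
    """True if something at host:port sends an initial banner on connect.

    MySQL/MariaDB send a handshake packet immediately, so a non-empty
    read strongly suggests a DB server is listening there.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return len(s.recv(128)) > 0
    except OSError:
        return False

# Usage (placeholder unRAID IP): mysql_greets("192.168.1.100", 3306)
```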

Link to comment

I can't get my cert to work with Let's Encrypt.

 

Whenever I request an SSL certificate from Let's Encrypt, I get an internal error from the IP address and port of the Nginx GUI.  When I refresh the SSL certificates tab, it shows the certificates I requested, but the expiration date and time are the exact date and time I requested them.  Not sure what could cause this.

Link to comment
2 hours ago, tvan said:

I can't get my cert to work with Let's Encrypt.

 

Whenever I request an SSL certificate from Let's Encrypt, I get an internal error from the IP address and port of the Nginx GUI.  When I refresh the SSL certificates tab, it shows the certificates I requested, but the expiration date and time are the exact date and time I requested them.  Not sure what could cause this.

In my experience it means Let's Encrypt cannot see the docker you are trying to secure from the WAN side.

Either the DDNS setup is not correct or port 80 is being blocked.
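The DDNS half can be verified by checking that the hostname actually resolves to your public IP. A minimal Python sketch (the hostname and IP are placeholders):

```python
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """True if hostname resolves (IPv4) to expected_ip."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    except socket.gaierror:
        return False
    return expected_ip in {info[4][0] for info in infos}

# Usage (placeholders): resolves_to("yourdomainname.com", "203.0.113.7")
```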

Link to comment
2 hours ago, eds said:

In my experience it means Let's Encrypt cannot see the docker you are trying to secure from the WAN side.

Either the DDNS setup is not correct or port 80 is being blocked.

OK. My DDNS is set up with my public (external) IP address. Port 80 on the firewall is open and being forwarded to the IP address of the NPM docker on port 8080. Is this correct?

Link to comment
Just now, tvan said:

OK. My DDNS is set up with my public (external) IP address.

Are you seeing the congratulations page when you check the domain name from outside your LAN? If so, then forwarding is good.

 

2 minutes ago, tvan said:

Port 80 on the firewall is open and being forwarded to the IP address of the NPM docker on port 8080. Is this correct?

This one I am not so sure about. If you are on a residential internet plan, port 80 might be blocked by the ISP (as was the case with me). Check to make sure port 80 is open by going to https://canyouseeme.org/ and checking 80.

 

If it is open and you are forwarding the DDNS correctly, Let's Encrypt should have no problem issuing a cert.

Link to comment
3 hours ago, binhex said:

Wow, how easy is this to set up?! Loving this docker image; I've wanted something like this for a VERY long time. Now if only subfolder redirection could be done it would be perfect! Off to check the feature requests on GitHub :-)

Thanks @Djoss for the image!

 

Assuming that subfolder support would mean that an app like Ubooquity would be supported (its URL string is http://<IP>:2202/ubooquity/admin)?

 

Link to comment
12 hours ago, drsparks68 said:

 

Assuming that subfolder support would mean that an app like Ubooquity would be supported (its URL string is http://<IP>:2202/ubooquity/admin)?

 

I mean, as in you can't currently do this (at least that's my understanding, please correct me if I'm wrong):

https://<public domain name>/<app name> => proxies to http(s)://<internal ip>:<correct port for the app>

But you can do this (as in, create subdomains for each app), which is what I'm currently doing:

https://<appname>.<public domain name> => proxies to http(s)://<internal ip>:<correct port for the app>

The problem with using subdomains is that unless you have a wildcard cert (very expensive, and LE doesn't allow this) the common name will not match <appname>.<public domain name>, and therefore you will get cert errors. With subfolders the cert remains valid, and you simply proxy to the correct internal app based on the subfolder name.

Tip: if you control DNS for your cert (I'm using Namecheap; not sure if this is possible with LE?) and you want to do what I did, I had to add a CNAME record as follows to get subdomains to forward correctly:

type     host    value
cname    *       <name of domain>

 

Link to comment

At home, I have this nice docker working fine for Nextcloud.

 

I tried using it at work with a Synology NAS, instead of using the built-in tools from DSM, so later I can expand this to other services, but I can't get it to work. It seemed so easy with Nextcloud.

 

Anyone using it with a Synology NAS and modern DSM (6+) ?

Link to comment
23 hours ago, binhex said:

I mean, as in you can't currently do this (at least that's my understanding, please correct me if I'm wrong):

https://<public domain name>/<app name> => proxies to http(s)://<internal ip>:<correct port for the app>

Correct.  This is currently not possible.

23 hours ago, binhex said:

But you can do this (as in, create subdomains for each app), which is what I'm currently doing:

https://<appname>.<public domain name> => proxies to http(s)://<internal ip>:<correct port for the app>

Correct.

23 hours ago, binhex said:

The problem with using subdomains is that unless you have a wildcard cert (very expensive, and LE doesn't allow this) the common name will not match <appname>.<public domain name>, and therefore you will get cert errors. With subfolders the cert remains valid, and you simply proxy to the correct internal app based on the subfolder name.

No need for a wildcard certificate.  You just need to have a DNS name for each application, like app1.example.com, app2.example.com, etc.  LE will generate a certificate for each of them.

23 hours ago, binhex said:

Tip: if you control DNS for your cert (I'm using Namecheap; not sure if this is possible with LE?) and you want to do what I did, I had to add a CNAME record as follows to get subdomains to forward correctly:

type     host    value
cname    *       <name of domain>

It may be better to just add a CNAME for each DNS name you need. Otherwise, every subdomain will resolve successfully to your machine...

Link to comment
19 hours ago, Marco L. said:

At home, I have this nice docker working fine for Nextcloud.

 

I tried using it at work with a Synology NAS, instead of using the built-in tools from DSM, so later I can expand this to other services, but I can't get it to work. It seemed so easy with Nextcloud.

 

Anyone using it with a Synology NAS and modern DSM (6+) ?

It should work the same way on Synology.  However, since Synology doesn't use templates, you may have to do more container configuration.

Link to comment
42 minutes ago, Djoss said:

It should work the same way on Synology.  However, since Synology doesn't use templates, you may have to do more container configuration.

Actually, I have pfSense both at work and at home, but at home I had something different in the DNS Resolver. Rookie mistake... Working fine now. There are still some differences: with a Synology, you have to activate your custom domain in the network parameters, import the certificates into DSM, and then assign the certificate to the service linked to this network option. So it seems every 3 months you have to reimport the certificate into DSM, which kind of diminishes the advantage of NPM's auto-management of Let's Encrypt.

If someone didn't need to import the certificates into DSM, please explain. Maybe I'll need to ditch NPM and use some package from the DSM store...

Link to comment
24 minutes ago, Marco L. said:

Actually, I have pfSense both at work and at home, but at home I had something different in the DNS Resolver. Rookie mistake... Working fine now. There are still some differences: with a Synology, you have to activate your custom domain in the network parameters, import the certificates into DSM, and then assign the certificate to the service linked to this network option. So it seems every 3 months you have to reimport the certificate into DSM, which kind of diminishes the advantage of NPM's auto-management of Let's Encrypt.

If someone didn't need to import the certificates into DSM, please explain. Maybe I'll need to ditch NPM and use some package from the DSM store...

If you want/need to use the certificate provided by your Synology, then you are right, you need to manually import it into NPM.

However, you can use your own domain and let NPM generate the certificates. You can get your own DNS names for free with DuckDNS.

Link to comment
1 hour ago, Djoss said:

If you want/need to use the certificate provided by your Synology, then you are right, you need to manually import it into NPM.

However, you can use your own domain and let NPM generate the certificates. You can get your own DNS names for free with DuckDNS.

I don't need the certificate from my Synology; my Synology DSM needs the certificate generated by NPM:

[screenshots: DSM certificate settings]

Link to comment
