[Support] Djoss - Nginx Proxy Manager



55 minutes ago, Froberg said:

Help please. 

 

I just moved everything away from cache to replace my dual 256G drives with dual 1TB drives. 

Everything seems to have transferred nicely, but nginx is broken, and as a consequence so is my Nextcloud instance.

 

nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-5/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/npm-5/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

 

That's what it's saying in the log file.

I assume I must have lost something in transit. 

I tried removing the container and re-adding it, hoping that would help resolve it. 

 

So what do I do? Transfer any necessary configs, remove the old directory, and start over? Please advise; I'm a bit out of my depth on this one.

 

Check if the correct folder is mounted (it should contain both config and certificates). Also make sure these files are readable by the specified user/group.
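If it helps, that sanity check can be scripted from the host side. A minimal sketch; the path below is an assumption based on the appdata location mentioned elsewhere in this thread, so adjust it to your setup:

```python
import os

# Hypothetical host-side path for the "npm-5" certificate from the error
# message; adjust to wherever your appdata share actually lives.
CERT_DIR = "/mnt/user/appdata/NginxProxyManager/letsencrypt/live/npm-5"

def check_cert_files(base, names=("fullchain.pem", "privkey.pem")):
    """Return, per expected certificate file, whether it exists and is readable."""
    return {
        name: os.path.isfile(os.path.join(base, name))
        and os.access(os.path.join(base, name), os.R_OK)
        for name in names
    }

print(check_cert_files(CERT_DIR))
```

If either file comes back `False`, the mount or the permissions are the problem, not NPM itself.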

 


3 hours ago, mattie112 said:

 

Check if the correct folder is mounted (it should contain both config and certificates). Also make sure these files are readable by the specified user/group.

 


Cheers mate. 

 

Reading your suggestion made me realize "hey, it wasn't that tough setting up in the first place.." so I just changed the folder and started over. Back up n' running. Thanks for the help though! 


Righto all. RE: 502 Gateway error

Just a quick PSA about something that had me scratching my head for a few days and caused me to install/reinstall this and other containers. It doesn't get mentioned in many videos on container setup with a custom proxy network, and I was left wondering why everything would work fine for hours and then break with 502 Bad Gateway errors.

 

Set your container IPs as static within the Unraid container edit page, since they are ultimately static in the proxy manager.

 

So whatever IP you pointed the proxy redirect at (in my case 172.18.0.5) is what you need to enter under the IP field on the container edit page.

 

Can't believe I was so oblivious to this when I have static IPs for all sorts of things. For some strange reason I was under the impression that Unraid would just keep giving containers the same IP. I'm sure heaps of people know this one, but I had an epiphany this morning after my dramas.
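For reference, this is the command-line equivalent of what the Unraid container edit page does; the network name, subnet, image, and address below are example values only:

```shell
# Create a user-defined bridge network with an explicit subnet
docker network create --subnet 172.18.0.0/16 proxynet

# Pin the container to the address your NPM proxy host points at
docker run -d --name myapp --network proxynet --ip 172.18.0.5 myapp:latest
```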

 

This container kicks butt btw


I've got it reinstalled now.

But it won't let me request a new SSL certificate; it says "Internal Error".
The docker log says

Quote

 

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for xxx.yyyy.zzz
Using the webroot path /config/letsencrypt-acme-challenge for all unmatched domains.
Waiting for verification...
Challenge failed for domain xxx.yyyy.zzz
http-01 challenge for xxx.yyyy.zzz
Cleaning up challenges
Some challenges have failed.

 

/appdata/NginxProxyManager/log/letsencrypt/letsencrypt.txt says to make sure I made the correct A/AAAA DNS record (I used the same one as before it broke), and I checked it again, just to be sure.

It doesn't tell me what the issue is, or I'm missing it.

 

Do you guys know what it is, and how to fix it?

45 minutes ago, Nanobug said:

I've got it reinstalled now.

But it won't let me request a new SSL certificate; it says "Internal Error".
The docker log says

 

/appdata/NginxProxyManager/log/letsencrypt/letsencrypt.txt says to make sure I made the correct A/AAAA DNS record (I used the same one as before it broke), and I checked it again, just to be sure.

It doesn't tell me what the issue is, or I'm missing it.

 

Do you guys know what it is, and how to fix it?

I have found much better success with the DNS challenge.
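For context: the DNS-01 challenge proves ownership by publishing a token in a TXT record in public DNS instead of serving a file over port 80, so it works even when the host itself is unreachable from the internet. The record name Let's Encrypt queries is derived from the domain like this (the domain is a placeholder):

```python
def dns01_record_name(domain):
    """DNS-01 validation looks up a TXT record at this name in public DNS."""
    return f"_acme-challenge.{domain}"

print(dns01_record_name("xxx.yyyy.zzz"))  # → _acme-challenge.xxx.yyyy.zzz
```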

1 hour ago, Nanobug said:

I've got it reinstalled now.

But it won't let me request a new SSL certificate; it says "Internal Error".
The docker log says

 

/appdata/NginxProxyManager/log/letsencrypt/letsencrypt.txt says to make sure I made the correct A/AAAA DNS record (I used the same one as before it broke), and I checked it again, just to be sure.

It doesn't tell me what the issue is, or I'm missing it.

 

Do you guys know what it is, and how to fix it?

 

Can you please verify/test that from the internet (e.g. through 4G) you can reach port 80 over http, and that it reaches NPM? Let's Encrypt needs UNENCRYPTED access to the "acme" challenge files.
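To illustrate why plain port 80 matters: for the HTTP-01 challenge, Let's Encrypt always fetches its token over http on the default port, regardless of which port your site itself runs on. A sketch of the URL it requests (domain and token are placeholders):

```python
def http01_challenge_url(domain, token):
    """HTTP-01 validation always fetches over plain http on port 80."""
    return f"http://{domain}/.well-known/acme-challenge/{token}"

print(http01_challenge_url("xxx.yyyy.zzz", "some-token"))
```

If that URL is not reachable from outside, the challenge fails no matter what your proxy hosts look like.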

23 hours ago, ados said:

@balder that's correct, NGINX PM only supports wildcards in SSL certificates but will work with subdomains.

If you want wildcards you would be better off with a raw NGINX docker using config files.

You should have no instances exposed to the internet without a login wall, and if you have that, you should have SSL.

 

I think maybe you're misunderstanding me, or I'm not explaining it right (which is very possible). I'm talking about adding a proxy host '*.mydomain.com' and having whatever requests arrive sent on to my web server. It used to work: I would add a host *.mydomain.com, and it would prompt me to add it like it does with any other domain name, but after I re-installed it no longer does. It will not accept an asterisk anywhere in the name.

1 hour ago, mattie112 said:

 

Can you please verify/test that from the internet (e.g. through 4G) you can reach port 80 over http, and that it reaches NPM? Let's Encrypt needs UNENCRYPTED access to the "acme" challenge files.

I'm so dumb....
My port forwarding was something like this:
80 --> 8080
443 --> 4443

I think I changed the ports last time; I can't remember why.
Now it's:
80 --> 2080
443 --> 20443

I've got 10000 - 19999 assigned for game servers so I don't have to update that range all the time.

 

It works now, thanks :)

1 hour ago, hjaltioj said:

Hi
Is it possible to export or link certificates to local hosts, so I can use them locally?

What do you want? You want to have the certificate file? Why not use NPM also for internal hosts?

 

But anyway: they are in /mnt/user/appdata/NginxProxyManager/letsencrypt.

1 minute ago, mattie112 said:

What do you want? You want to have the certificate file? Why not use NPM also for internal hosts?

 

But anyway: they are in /mnt/user/appdata/NginxProxyManager/letsencrypt.

How can I use it for local hosts?

Just now, hjaltioj said:

How can I use it for local hosts?

Again: what do you want to do? You can use the files as a regular certificate if you want to configure things manually.

 

If you want to request an SSL certificate (through Let's Encrypt) then you NEED the host to be accessible from the internet; this is how Let's Encrypt verifies it. If you have a 100% offline host you cannot use Let's Encrypt.

1 minute ago, mattie112 said:

Again: what do you want to do? You can use the files as a regular certificate if you want to configure things manually.

 

If you want to request an SSL certificate (through Let's Encrypt) then you NEED the host to be accessible from the internet; this is how Let's Encrypt verifies it. If you have a 100% offline host you cannot use Let's Encrypt.

I was thinking of linking the certs to internal services, like sonarr, radarr, etc. E.g. sonarr.domain.local?


As long as NPM is accessible from the internet that is possible, but Let's Encrypt needs to verify that you are indeed the owner by placing a small file with a 'challenge' code on the domain you are requesting SSL for. If it cannot read that code it cannot determine that you are the owner, and it won't issue a certificate. So if you are 100% offline then this is not an option. You can however buy an SSL certificate if you supply your own CSR, but not with Let's Encrypt.

 

(Well, technically you can have another host that is public request a wildcard certificate and then manually copy that certificate to your local hosts every 2-3 months, but I don't have experience with that myself.)
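If you go the manual-copy route, the 2-3 month cadence follows from Let's Encrypt's 90-day certificate lifetime; certbot's default is to renew with about 30 days of validity left. A sketch of the arithmetic:

```python
from datetime import date, timedelta

CERT_LIFETIME_DAYS = 90    # Let's Encrypt certificates are valid for 90 days
RENEW_WITH_DAYS_LEFT = 30  # certbot's default renewal threshold

def latest_copy_date(issued):
    """Latest sensible date to re-copy the renewed cert to your local hosts."""
    return issued + timedelta(days=CERT_LIFETIME_DAYS - RENEW_WITH_DAYS_LEFT)

print(latest_copy_date(date(2021, 4, 19)))  # → 2021-06-18
```

In practice a cron job that rsyncs the cert directory every 60 days or less keeps the local copies inside the renewal window.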


For those wanting to restrict public access to their subdomains or subfolders through NGINX Proxy Manager, I have created a guide for using Organizr.

 

It's a powerful SSO-based platform for accessing multiple resources, and its API can be configured to restrict access to URL resources for anyone not authenticated to Organizr.

 

Guide: 

 


I'm not sure if this belongs here or somewhere else. I need some help getting NPM to create short local URLs for various internal self-hosted services running on proxynet, using split DNS (on pfSense) and local certs. I want this for convenience, so I can type krusader.boyturtle in the address bar and not have to remember ipaddress:port, and also so I don't have to deal with the annoying non-secure browser warnings. This is all for services that are not internet-facing, so CNAME and ANAME records are not involved.

 

I have created self-signed certs and keys, added the short local domain and host to the Host Overrides in DNS Resolver in pfSense, imported said certs into NPM, and then added the host to NPM using the created/imported cert. When I click the link to launch the shortcut, the browser times out; this happens whether the cert is applied or not. When pinging krusader.boyturtle from my desktop (outside of proxynet), it resolves to the docker0 IP address but gets no further. When pinging from another docker CLI, it does not resolve at all (I get "Name or service not known").

 

What am I missing to get around this, or have I hit a feature of the Unraid docker system that doesn’t allow this traffic through in this manner?


"When I click the link to launch the shortcut, the browser times out; this happens whether the cert is applied or not"

 

What link/shortcut? You cannot access the NPM control panel? Or one of the proxied services?

 

What is proxynet? A docker network on your Unraid?

 

And are the DNS servers your Unraid uses correct? If it doesn't resolve it is probably not using your pfSense as DNS server.
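When debugging this, it helps to separate "does the name resolve at all" from "does the resolved address accept connections". A rough stdlib sketch (the hostname is the one from your example):

```python
import socket

def diagnose(host, port=80, timeout=3):
    """Separate 'name does not resolve' from 'resolves but is unreachable'."""
    try:
        addr = socket.gethostbyname(host)  # IPv4 lookup via the system resolver
    except socket.gaierror:
        return "dns-failure"
    try:
        # Resolution worked; now test whether anything answers on the port.
        with socket.create_connection((addr, port), timeout=timeout):
            return f"reachable at {addr}:{port}"
    except OSError:
        return f"resolves to {addr} but not reachable on port {port}"

print(diagnose("krusader.boyturtle"))
```

"dns-failure" points at the resolver configuration; "resolves but not reachable" points at routing or the proxy itself.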


Apologies for not making everything clearer, but I'm struggling to find the correct terminology to express my problem.

6 hours ago, mattie112 said:

"When I click the link to launch the shortcut, the browser times out; this happens whether the cert is applied or not"

 

What link/shortcut? You cannot access the NPM control panel? Or one of the proxied services?

I can access the NPM control panel, but am unable to connect to some proxied services. I am able to connect to the internet-facing services where I have created CNAME records with the DNS provider, but am unable to connect to non-internet-facing services that are also proxied.

7 hours ago, mattie112 said:

What is proxynet? A docker network on your Unraid?

Proxynet is the bridge docker network created on Unraid. It has a subnet of 172.18.0.0/16 and a gateway of 172.18.0.1. All the proxied services run on this network.

7 hours ago, mattie112 said:

And are the DNS servers your Unraid uses correct? If it doesn't resolve it is probably not using your pfSense as DNS server.

In Settings > Network Settings, DNS is pointing to pfSense. When I ping krusader.boyturtle from the desktop, it resolves to 172.18.0.1, the gateway for the docker network (proxynet), but it doesn't get any further and I have 100% packet loss.

On 4/19/2021 at 7:42 PM, mattie112 said:

As long as NPM is accessible from the internet that is possible, but Let's Encrypt needs to verify that you are indeed the owner by placing a small file with a 'challenge' code on the domain you are requesting SSL for. If it cannot read that code it cannot determine that you are the owner, and it won't issue a certificate. So if you are 100% offline then this is not an option. You can however buy an SSL certificate if you supply your own CSR, but not with Let's Encrypt.

 

(Well, technically you can have another host that is public request a wildcard certificate and then manually copy that certificate to your local hosts every 2-3 months, but I don't have experience with that myself.)

Hi again.

I didn't realize I could just use ports 80/443, and then host overrides on pfSense would work :)
But when I change the ports and set the docker to pull your image, it doesn't work. See screenshots.

 

Is there something more to do?

Thank you :)

Screenshot 2021-04-22 at 19.38.05.png

Screenshot 2021-04-22 at 19.37.27.png


Can you access 10.10.10.254:80 (and :443) internally? Or is your problem with external access? With my fork you do need to have it set to custom br0 with a fixed IP (like you have) so that should work. Or perhaps there is something in the log?


I've got a quick question:

 

I've made my Nextcloud container available for external use. Right now I'm forwarding port 443 and NPM forwards the incoming requests to the container.

 

Since 443 is very common, I wanted to at least avoid port scans of common ports. Disabling the forwarding of port 80 in my router makes NPM unreachable as far as I can tell, so that does not seem to work.

 

Is there any way to move away from forwarding ports 80 and 443 to, let's say, 20080 and 20443?


So you want to host your site/Nextcloud on some port other than 80/443? Why do you want NPM then? Only for SSL? Let's Encrypt MUST use port 80 for its authentication/verification. You can however run HTTPS on a different port, or use your own custom certificates.

12 hours ago, mattie112 said:

So you want to host your site/Nextcloud on some port other than 80/443? Why do you want NPM then? Only for SSL?

 

Well, forgive my ignorance, but I always assumed that NPM was a sort of security measure. You know, instead of exposing the docker container "as is".

 

 

12 hours ago, mattie112 said:

Let's Encrypt MUST use port 80 for its authentication/verification. You can however run HTTPS on a different port, or use your own custom certificates.

 

So I could theoretically open port 80 to get the certificate once and then close it again until the certificate expires?

10 minutes ago, Pillendreher said:

 

Well, forgive my ignorance, but I always assumed that NPM was a sort of security measure. You know, instead of exposing the docker container "as is".

 

 

 

So I could theoretically open port 80 to get the certificate once and then close it again until the certificate expires?

Well, it really is a reverse proxy, mostly used for SSL offloading. It has some access-control features you can use, like only allowing certain IPs or adding a password. But in most routers you can also allow only specific IPs, and most containers have some kind of login. So in my opinion it really is just for SSL offloading.

 

Nextcloud, for example, has its own users, so you won't be adding a password in NPM; really only the access list is left, imo :)

 

Anyway, yes, you can open and close the port, but you must open it again when you want to renew your certificate in a couple of months. A flow like that is not really the way to go with Let's Encrypt, but it should work.

