[Support] Djoss - Nginx Proxy Manager


Djoss

Recommended Posts

So, I switched over from SWAG to Nginx Proxy Manager.  I used to be able to just go to my domain and use www.domain.com/subdirectory to access other files in my "www" directory for the domain under the nginx folder.  I have moved my files to the default_www folder in the NPM docker, and have a host for domain.com / www.domain.com, which works.  When I use domain.com/subdirectory, it fails.  I've added a custom location, which then ends up sending it to www.domain.com:port/subdirectory, which also fails.

 

My domain is only used for subdomains; the base domain I simply use for easy web sharing of files, e.g. www.domain.com/files/blah.jpg or similar.  How can I allow subdirectories in this manner?  Do I have to create a new custom location for each folder I am trying to use?  And if so, how can I make that work?  A custom location with /files and IP/files on the same port fails and adds in the Docker forwarded port (e.g. :4443 or :8080), whatever I put in the custom location.  I even attempted to use the Advanced tab there and it just 404s, so I did something wrong.

 

I was able to set up my EmulatorJS with a subdirectory, but that was on an entirely different port.  I literally just want it to read the subfolder; I use an index.php so people can see the files as well.

 

My Advanced settings for my domain (which match what was in my old nginx domain conf file, except the local directory itself has been updated, so my root points at the main folder, not any of the actual subfolders such as /files):

 

root /config/nginx/default_www/;
index index.html index.htm index.php;

location / {
    try_files $uri $uri/ /index.html /index.php?$args =404;
}
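For what it's worth, nginx treats only the last `try_files` argument as the fallback; every earlier element is tested as a literal file path, so a middle element carrying a query string ("/index.php?$args") can never match, and combining it with "=404" means the PHP fallback is never reached. A sketch of the usual shape, assuming the goal is "serve static files if present, otherwise hand off to index.php" (paths taken from the config above):

```nginx
root /config/nginx/default_www/;
index index.html index.htm index.php;

location / {
    # Serve the file or directory if it exists; otherwise perform an
    # internal redirect to index.php (query args are allowed only in
    # this final, fallback position).
    try_files $uri $uri/ /index.php?$args;
}
```

With a root set this way, each subdirectory such as /files should not need its own custom location; this is a sketch of the common pattern, not a verified fix for this exact setup.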

 

Edited by PsiKoTicK
Link to comment

I recently transitioned to an OPNSense deployment on my home network.  I've followed several tutorials to get things set up, and almost everything appears to be working.  What isn't working is the Nginx Proxy Manager container from my unRAID server.  I have NPM set up according to the Ibracorp tutorial (except for Authelia).  I use duckdns.org-generated subdomains in concert with the duckdns container.  I cannot access these subdomains anymore, getting a "took too long to respond."  Troubleshooting I've done so far:

 

- there are no errors registered in the NPM log upon startup.  On the Web GUI, all the proxy hosts show as "online;"

- I've verified the duckdns API is correct;

- the containers associated with the subdomains are working and accessible through my LAN (e.g. I access calibre-web through the unraid IP and assigned port);

- I've forwarded the ports through the OPNSense NAT page according to tutorials.  Other port forwards work (e.g. unRAID Wireguard service and VM Wake-on-LAN work).  I've selected NAT reflection and corresponding firewall rules;

- I've pinged the subdomains and the pings came back with no errors; and

- I've also shut down NPM and installed SWAG.  No errors when it is up and running, but I get the same results (no access to the subdomains).

 

What I think is happening is there's something going on with Let's Encrypt SSL certificates.  During troubleshooting, when I tried to delete and re-create an SSL certificate, I got the following error:  "Communication with the API failed, is NPM running correctly?"  Then the container crashes.

 

I've not had any issues with NPM up until my recent transition to OPNSense.  I cannot get access through SWAG either, so I'm assuming something is preventing the SSL certificates from doing their thing.  Has anyone seen this before, or is there any insight that can be provided based on what I've shared?

 

I appreciate your time.

Let's Encrypt Error.png

NPM SSL crash error.png

NPM docker setup.png

NPM Status (GUI).png

OPNSense Port Forwards.png

Unbound overrides.png

Link to comment

I pulled my domain back to Google from Cloudflare, but I can't get my Nextcloud going again. Mealie had no problem and was running immediately. NC will not get a certificate and will not do HTTPS. With HTTP it doesn't load a webpage at all. What is the internal error?

 

interr.JPG

interr2.JPG

interr3.JPG

interr4.JPG

interr5.JPG

Edited by Joshwaaa
add photo
Link to comment

Here, also, is my conf if it helps. I thought it might be a validation rate limit, but I slept all night, tried again in the AM, and got the same "internal error".

 

 

# ------------------------------------------------------------
# nextcloud.xxxx.com
# ------------------------------------------------------------

server {
  set $forward_scheme http;
  set $server         "192.168.86.23";
  set $port           444;

  listen 8080;
  listen [::]:8080;

  server_name nextcloud.xxxx.com;

  # Asset Caching
  include conf.d/include/assets.conf;

  # Block Exploits
  include conf.d/include/block-exploits.conf;

  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $http_connection;
  proxy_http_version 1.1;

  access_log /data/logs/proxy-host-1_access.log proxy;
  error_log /data/logs/proxy-host-1_error.log warn;

  location / {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
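One thing worth flagging in the config above (an observation, not a confirmed diagnosis): $forward_scheme is http while $port is 444, and 444 is often used as an HTTPS port for Nextcloud behind NPM. If the backend on 192.168.86.23:444 speaks TLS, the scheme would need to match, roughly:

```nginx
  # If the backend on port 444 expects TLS, the forwarded scheme must
  # match (in the NPM UI this is the "Scheme" dropdown on the proxy host):
  set $forward_scheme https;
  set $server         "192.168.86.23";
  set $port           444;
```

If the backend really is plain HTTP on 444, ignore this; it is only a sketch of a common mismatch.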

 

Link to comment

Here are logs pulled from Dozzle. Hopefully these last 3 posts will be enough to see the problem! How do I not have root permission when running as the admin?

 

 

07/15/2023 6:06:18 AM
[app         ] [7/15/2023] [6:06:18 AM] [Nginx    ] › ℹ  info      Reloading Nginx
07/15/2023 6:06:23 AM
[app         ] [7/15/2023] [6:06:23 AM] [SSL      ] › ℹ  info      Requesting Let'sEncrypt certificates for Cert #27: nextcloud.xxxx.com
07/15/2023 6:06:23 AM
[app         ] [7/15/2023] [6:06:23 AM] [SSL      ] › ℹ  info      Command: certbot certonly --config "/etc/letsencrypt.ini" --cert-name "npm-27" --agree-tos --authenticator webroot --email "[email protected]" --preferred-challenges "dns,http" --domains "nextcloud.xxxx.com" 
07/15/2023 6:06:24 AM
[app         ] [7/15/2023] [6:06:24 AM] [Nginx    ] › ⬤  debug     Deleting file: /data/nginx/temp/letsencrypt_27.conf
07/15/2023 6:06:24 AM
[app         ] [7/15/2023] [6:06:24 AM] [Nginx    ] › ℹ  info      Reloading Nginx
07/15/2023 6:06:24 AM
[app         ] [7/15/2023] [6:06:24 AM] [Express  ] › ⚠  warning   Command failed: certbot certonly --config "/etc/letsencrypt.ini" --cert-name "npm-27" --agree-tos --authenticator webroot --email "xxxx" --preferred-challenges "dns,http" --domains "nextcloud.xxxx.com" 
07/15/2023 6:06:24 AM
[app         ] The following error was encountered:
07/15/2023 6:06:24 AM
[app         ] [Errno 13] Permission denied: '/var/log/letsencrypt/letsencrypt.log'
07/15/2023 6:06:24 AM
[app         ] Either run as root, or set --config-dir, --work-dir, and --logs-dir to writeable paths.
07/15/2023 6:06:24 AM
[app         ] Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-fa6dp8rp/log or re-run Certbot with -v for more details.
07/15/2023 6:19:56 AM
[app         ] [7/15/2023] [6:19:56 AM] [SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
07/15/2023 6:19:57 AM
[app         ] [7/15/2023] [6:19:57 AM] [SSL      ] › ✖  error     Error: Command failed: certbot renew --non-interactive --quiet --config "/etc/letsencrypt.ini" --preferred-challenges "dns,http" --disable-hook-validation  
07/15/2023 6:19:57 AM
[app         ] The following error was encountered:
07/15/2023 6:19:57 AM
[app         ] [Errno 13] Permission denied: '/var/log/letsencrypt/letsencrypt.log'
07/15/2023 6:19:57 AM
[app         ] Either run as root, or set --config-dir, --work-dir, and --logs-dir to writeable paths.
07/15/2023 6:19:57 AM
[app         ]     at ChildProcess.exithandler (node:child_process:402:12)
07/15/2023 6:19:57 AM
[app         ]     at ChildProcess.emit (node:events:513:28)
07/15/2023 6:19:57 AM
[app         ]     at maybeClose (node:internal/child_process:1100:16)
07/15/2023 6:19:57 AM
[app         ]     at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)
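The "[Errno 13] Permission denied: '/var/log/letsencrypt/letsencrypt.log'" in the log above usually means the user the app runs inside the container as cannot write that directory. A common fix (assumptions: the container is named "NginxProxyManager", and 99:100 is the unRAID-typical USER_ID/GROUP_ID; match whatever your template uses) would be something like `docker exec NginxProxyManager chown -R 99:100 /var/log/letsencrypt`. The core operation, demonstrated on a scratch directory so the sketch is runnable anywhere:

```shell
# Simulate fixing ownership of certbot's log directory; inside the real
# container the path would be /var/log/letsencrypt instead of $dir.
dir="$(mktemp -d)/letsencrypt"
mkdir -p "$dir"
chown "$(id -u):$(id -g)" "$dir"     # give the running user ownership
touch "$dir/letsencrypt.log" && echo "log writable"
```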

 

Edited by Joshwaaa
context
Link to comment

Hi there way smarter people than myself,

I have a problem to which I can't find the solution on my own.

 

Generally speaking my system works totally fine and here is how it is setup:

If I access nextcloud.mydomain.com, my domain provider gets the current IP automatically from my router and directs all requests to my router at my public IP. After that, I have port forwarding to forward all :443 traffic to my nginx docker in my custom "proxynet" network. So basically public_ip:443 -> unraid_ip:1443 (important part for later).

Nginx now gets unraid_ip:1443 -> nginx_ip:4443 and in there I have set up a proxy host which forwards the port to unraid_ip:2443 which is the portmapping for nextcloud in "proxynet" again. So unraid_ip:2443 -> nextcloud_ip:443.

Everything so far so good. I can access everything with nextcloud.mydomain.com and it is working fine, HTTPS and everything.

 

Now to the problem...

Thanks to my amazing ISP (haha...) I have been stuck without internet for almost 2 weeks now. A few days ago, I managed to get my hands on an LTE router with a SIM card. That way I do at least have internet and can work again. But...

Unfortunately the LTE router can only forward public_ip:443 -> anything:443, and I can't change it to the port mapping for nginx, which is unraid_ip:1443. unraid_ip:443 is of course already taken by unraid itself.

 

Another problem is that even though I can access the server locally via IP and everything, my phone and desktop clients won't talk to the server and don't sync files. I can't simply add a local DNS record (my DHCP/DNS is Pi-hole), because that can't change the port mapping either.

 

So my question is:

Is there any way I can get the current work-around setup to work? I did try a second nginx docker in my macvlan network, with its own IP to forward to. Basically nginx2_ip:443.

I did not get it to work yet (maybe because of the same access problem that Joshwaaa described), but I have not looked too far into that yet.

Is my idea a possible solution at all? Should it work? Is there a way simpler one that I didn't think of?

 

I'd be happy to read suggestions :)

Link to comment
11 hours ago, dreadu said:

Should it work?

Sure, but I assume your problem is related to this:

 

11 hours ago, dreadu said:

my hands on an LTE router with a sim card.

this setup usually means you no longer have a proper public IPv4 anymore (DS-Lite ...) and you can't directly access your server externally anymore ...

- so an IPv6 setup would be required (personally not the best idea, as IPv6 often isn't working in many cases)

- a so-called VPS setup (external "bridge" server) would solve this by setting up either an SSH tunnel, WireGuard, ...

 

so if you really want to play around and test ...

- you could change the already-bound unraid ports ... and then use 80 / 443 for NPM

- you could also just run them in a br0 macvlan docker with their dedicated IP ... and the ports are free then

- ...

 

But is this a temporary situation? Then before you mess up your configs, maybe wait until your ISP has solved the fallout issue ;) as I'm personally pretty sure you are behind an IPv4 NAT now and can't get your server access working properly anyway ... as described above.

 


Link to comment
11 hours ago, alturismo said:

[...]

this setup usually means you no longer have a proper public IPv4 anymore (DS-Lite ...) and you can't directly access your server externally anymore ...

- so an IPv6 setup would be required (personally not the best idea, as IPv6 often isn't working in many cases)

- a so-called VPS setup (external "bridge" server) would solve this by setting up either an SSH tunnel, WireGuard, ...

 

so if you really want to play around and test ...

- you could change the already-bound unraid ports ... and then use 80 / 443 for NPM

- you could also just run them in a br0 macvlan docker with their dedicated IP ... and the ports are free then

 

After playing around a bit I think you are fully correct...
 

I changed the default unraid ports and created a new NPM on the host network, but I can't create a new certificate, supposedly because the LTE router uses some IPv4v6 mode and doesn't really have a decent public IP :/

Also, I went with host instead of br0 thanks to @mgutt's great work here

which I stumbled upon while searching for a solution.

I guess my only way out is to await the mercy of my ISP, or hope that my extraordinary termination of the contract will go through next week so I can look for a new one...

Thanks a lot anyways and have a nice rest of the weekend :)

Link to comment

There's a forum thread about this, but I think it was in the wrong place; I stumbled on it while searching. https://forums.unraid.net/topic/142123-scrolling-out-of-memory-log-errors/

Regardless, I'm having similar issues. Any idea of the cause, and any solution? It's basically this on a loop.

 

2023/07/23 20:28:00 [error] 2916#2916: *1201991 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:00 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:01 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:01 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:01 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7048. Increase nchan_max_reserved_memory.
2023/07/23 20:28:01 [error] 2916#2916: *1202000 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:01 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:02 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:02 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:02 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7048. Increase nchan_max_reserved_memory.
2023/07/23 20:28:02 [error] 2916#2916: *1202009 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:02 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:03 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:03 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:03 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7036. Increase nchan_max_reserved_memory.
2023/07/23 20:28:03 [error] 2916#2916: *1202016 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:03 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:04 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:04 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:04 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7036. Increase nchan_max_reserved_memory.
2023/07/23 20:28:04 [error] 2916#2916: *1202027 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:04 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:05 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:05 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:05 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7039. Increase nchan_max_reserved_memory.
2023/07/23 20:28:05 [error] 2916#2916: *1202036 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:05 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:06 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:06 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:06 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7039. Increase nchan_max_reserved_memory.
2023/07/23 20:28:06 [error] 2916#2916: *1202046 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
2023/07/23 20:28:06 [error] 2916#2916: MEMSTORE:00: can't create shared message for channel /devices
2023/07/23 20:28:07 [crit] 2916#2916: ngx_slab_alloc() failed: no memory
2023/07/23 20:28:07 [error] 2916#2916: shpool alloc failed
2023/07/23 20:28:07 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7045. Increase nchan_max_reserved_memory.
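For context only: those errors come from an nginx built with the nchan module, and the directive the log asks to increase is nchan's shared-memory pool. In a plain nginx + nchan setup it lives in the http block, along these lines (illustrative sketch; unRAID generates its own nginx config, so a hand edit there may not persist across reboots):

```nginx
http {
    # Shared memory nchan reserves for queued messages; the log suggests
    # raising it when ngx_slab_alloc() starts failing.
    nchan_max_reserved_memory 64M;
}
```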

 

Link to comment
On 7/22/2023 at 7:16 PM, dreadu said:

Hi there way smarter people than myself,

[...]

Unfortunately the LTE router can only forward public_ip:443 -> anything:443, and I can't change it to the port mapping for nginx, which is unraid_ip:1443. unraid_ip:443 is of course already taken by unraid itself.

[...]

Is there any way I can get the current work-around setup to work? I did try a second nginx docker in my macvlan network, with its own IP to forward to. Basically nginx2_ip:443.

Is my idea a possible solution at all? Should it work? Is there a way simpler one that I didn't think of?

 

I have my NPM container running in network "br0" so I can use an IP from my router's range. Then I can use that IP to forward traffic to whatever port (including 80/443).

 


Edited by mattie112
Link to comment
On 7/24/2023 at 4:00 AM, Joshwaaa said:

There's a forum thread about this, but I think it was in the wrong place; I stumbled on it while searching. https://forums.unraid.net/topic/142123-scrolling-out-of-memory-log-errors/

Regardless, I'm having similar issues. Any idea of the cause, and any solution? It's basically this on a loop.

2023/07/23 20:28:01 [error] 2916#2916: nchan: Out of shared memory while allocating message of size 7048. Increase nchan_max_reserved_memory.

[...]

 


It is not clear to me (also from the original post): is this the nginx from NPM logging this, or unRAID's? If you stop the NPM docker container, does the log continue? If it does, then it has to be reported to the Unraid team.

Link to comment

I am using this docker, and when I run the security headers test it comes back with a D rating.  Here are the ones that are having issues:

 

Strict-Transport-Security

 

Content-Security-Policy

 

X-Content-Type-Options

 

Permissions-Policy

 

I did some digging on Google and found the following.

 

Workaround - Security Headers @ NGINX Proxy Manager · GitHub

https://gist.github.com/R0GGER/916183fca41f02df1471a6f455e5869f

Link to comment
1 hour ago, JM2005 said:

I am using this docker, and when I run the security headers test it comes back with a D rating.  Here are the ones that are having issues:

[...]

I did some digging on Google and found the following.

Workaround - Security Headers @ NGINX Proxy Manager · GitHub

https://gist.github.com/R0GGER/916183fca41f02df1471a6f455e5869f

 

Those headers are really application headers. Yes, your web server can send out headers, but it is really for your application to define them; your web server does not know what your application wants.

 

But yes: you can "override" them in nginx.
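A minimal sketch of such an override, as it could go in an NPM proxy host's Advanced tab (the header values here are illustrative examples, not recommendations; Content-Security-Policy in particular must be tailored per application or it will break pages):

```nginx
# Example security headers (values are placeholders; tune per application).
# "always" makes nginx add them on error responses too.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options    "nosniff" always;
add_header Permissions-Policy        "geolocation=(), microphone=()" always;
add_header Content-Security-Policy   "default-src 'self'" always;
```

Note that if the proxied application already sends one of these headers, adding it again at the proxy can result in duplicates, so check what the backend emits first.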

Link to comment
  • 2 weeks later...

Been running nginx proxy manager on unraid docker for a long time with limited issues. The only issue I had really been incurring and never figured out was that auto-renewal for certificates never worked.

 

I received an email saying a certificate was expiring shortly and went to log in to the UI to manually renew - the UI wouldn't go past the login page, it just refreshed without any sort of error message.

 

Checking the logs, I noticed repeated errors related to "certbot-dns-route53" (how my own certificates are authenticated), with no valid version available and a failure to "pip install" this package.

 

Remoting in via terminal, I manually executed the pip install command, after which login via the UI immediately worked.. however, renewing the certificate still failed (UI showing "Invalid error") until I manually killed the standard "certbot" process.

 

Obviously this is only a temporary fix given it will reset whenever the container is rebuilt.

 

I am guessing this may be a bug introduced with the recent update to v23.08.1? I didn't take screenshots at the time, but I can rebuild the container to replicate if it's not easily identifiable.

  • Like 1
Link to comment
  • 4 weeks later...

I have the exact same issue and I cannot log in.

[app         ] [9/6/2023] [1:43:57 PM] [Migrate  ] › ℹ  info      Current database version: none
[app         ] [9/6/2023] [1:43:58 PM] [Global   ] › ✖  error     Command failed: pip install --no-cache-dir certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') 
[app         ] An unexpected error occurred:
[app         ] pkg_resources.ContextualVersionConflict: (urllib3 2.0.4 (/usr/lib/python3.10/site-packages), Requirement.parse('urllib3<1.27,>=1.25.4'), {'botocore'})
[app         ] Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-ka2_ng49/log or re-run Certbot with -v for more details.
[app         ] ERROR: Could not find a version that satisfies the requirement certbot-dns-route53== (from versions: 0.15.0.dev0, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 0.18.1, 0.18.2, 0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.22.1, 0.22.2, 0.23.0, 0.24.0, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.29.1, 0.30.0, 0.30.1, 0.30.2, 0.31.0, 0.32.0, 0.33.0, 0.33.1, 0.34.0, 0.34.1, 0.34.2, 0.35.0, 0.35.1, 0.36.0, 0.37.0, 0.37.1, 0.37.2, 0.38.0, 0.39.0, 0.40.0, 0.40.1, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.32.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0)
[app         ] ERROR: No matching distribution found for certbot-dns-route53==
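The empty `certbot-dns-route53==` at the end of that log suggests the `$(certbot --version | grep ...)` substitution produced nothing: certbot itself errored out on the urllib3 conflict, so there was no version string to extract, and pip was asked to install an unversioned package. A self-contained sketch of that extraction step, assuming certbot normally prints something like `certbot 2.6.0`:

```shell
# Simulate the version extraction from the failing command. When certbot
# errors instead of printing its version, $ver ends up empty and pip is
# invoked with "certbot-dns-route53==" (no version at all).
ver="$(printf 'certbot 2.6.0\n' | grep -Eo '[0-9]+(\.[0-9]+)+')"
echo "extracted: $ver"
if [ -z "$ver" ]; then
    echo "certbot --version produced no version string; fix certbot first" >&2
fi
```

So the urllib3/botocore conflict is the thing to fix; the pip "no matching distribution" error is only a downstream symptom.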

 

 

On 8/15/2023 at 1:04 PM, Ptolemyiv said:

however renewing the certificate still failed (UI showing "Invalid error") until I manually killed the standard "certbot" process.

How did you kill the certbot process?

Edited by JCM
Link to comment

Hi,

 

I have set up a few dockers, e.g. Jellyfin, Airsonic, etc., which I expose to the internet via subdomains.


I set up an Argo Tunnel via the cloudflared docker to allow for connections. The requests then go from cloudflared to my reverse proxy. I decided on Nginx Proxy Manager as it works out of the box with my Cloudflare certificate (I had trouble getting the self-signed certificate generated by SWAG accepted by cloudflared). The connection from the Cloudflare server via the Argo tunnel to the reverse proxy should be secured via the certificate / HTTPS.

 

My question, though, is the following: most of the proxy hosts (except Nextcloud) are connected via HTTP to the reverse proxy, e.g. Piwigo to nginx. Does this still qualify as secure, since the connection is already "within the server"? Or does this break the "secure HTTPS chain" and create a vulnerability? The subdomains all start with https://..., but sometimes Chrome flags the site as "dangerous"; Safari, for instance, doesn't.

 

I read in the post above: "Nginx Proxy Manager doesn't have support for forwarding to an HTTPS backend/server." I am not sure if this is related to my question.

 

Feedback or some good links for reading would be much appreciated, happy to provide more info if necessary.

 

Many thanks

Link to comment
16 hours ago, mattie112 said:

Internally you can use HTTP just fine! Only if your NPM and the service are on physically different machines does it become a question of whether you trust your internal network.

Ok, great! No, in my case it is all on the same machine, so it should not be a problem. Thank you!

Edited by pho
Link to comment
  • 2 weeks later...
On 9/6/2023 at 12:46 PM, JCM said:

I have the exact same issue and I cannot log in.

[app         ] [9/6/2023] [1:43:58 PM] [Global   ] › ✖  error     Command failed: pip install --no-cache-dir certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') 

[app         ] ERROR: No matching distribution found for certbot-dns-route53==

[...]

 

 

How did you kill the certbot process?

I think I just went into the terminal and ran e.g.:

 

ps -ef | grep certb

 

Look for the process ID, which is the number immediately after the user, such as:

root 1234 5100 ...

 

To kill the process, type:

kill 1234
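The PID is the second column of `ps -ef` output, so it can also be picked out programmatically instead of by eye. A small sketch (the sample `line` is illustrative; on most systems `pkill -f certbot` does the lookup and kill in one step):

```shell
# Example `ps -ef` line: UID, then PID, then PPID, then the rest.
line="root      1234  5100  0 13:43 ?        00:00:01 certbot renew"

# The PID is field 2.
pid="$(echo "$line" | awk '{print $2}')"
echo "$pid"   # → 1234

# Then: kill "$pid"   (left commented here; run it against the real PID)
```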

  • Like 1
Link to comment

Hello 

Since yesterday, NPM has shut down for no apparent reason.

When I try to restart it, it stays off without any message in the unRAID WebUI.

The log displays this:

Quote

[init        ] container is starting...
[cont-env    ] loading container environment variables...
[cont-env    ] APP_NAME: executing...
[cont-env    ] APP_NAME: /etc/cont-env.d/APP_NAME: line 1: Nginx: not found
[cont-env    ] APP_NAME: terminated successfully.
[cont-env    ] APP_NAME: loading...
[cont-env    ] APP_VERSION: executing...
[cont-env    ] APP_VERSION: /etc/cont-env.d/APP_VERSION: line 1: 2.10.4: not found
[cont-env    ] APP_VERSION: terminated successfully.
[cont-env    ] APP_VERSION: loading...
[cont-env    ] DOCKER_IMAGE_PLATFORM: executing...
[cont-env    ] DOCKER_IMAGE_PLATFORM: /etc/cont-env.d/DOCKER_IMAGE_PLATFORM: line 1: linux/amd64: not found
[cont-env    ] DOCKER_IMAGE_PLATFORM: terminated successfully.
[cont-env    ] DOCKER_IMAGE_PLATFORM: loading...
[cont-env    ] DOCKER_IMAGE_VERSION: executing...
[cont-env    ] DOCKER_IMAGE_VERSION: /etc/cont-env.d/DOCKER_IMAGE_VERSION: line 1: 23.08.1: not found
[cont-env    ] DOCKER_IMAGE_VERSION: terminated successfully.
[cont-env    ] DOCKER_IMAGE_VERSION: loading...
[cont-env    ] HOME: executing...
[cont-env    ] HOME: terminated successfully.
[cont-env    ] HOME: loading...
[cont-env    ] TAKE_CONFIG_OWNERSHIP: executing...
[cont-env    ] TAKE_CONFIG_OWNERSHIP: /etc/cont-env.d/TAKE_CONFIG_OWNERSHIP: line 1: 1: not found
[cont-env    ] TAKE_CONFIG_OWNERSHIP: terminated successfully.
[cont-env    ] TAKE_CONFIG_OWNERSHIP: loading...
[cont-env    ] XDG_CACHE_HOME: executing...
[cont-env    ] XDG_CACHE_HOME: /etc/cont-env.d/XDG_CACHE_HOME: line 1: /config/xdg/cache: Permission denied
[cont-env    ] XDG_CACHE_HOME: terminated successfully.
[cont-env    ] XDG_CACHE_HOME: loading...
[cont-env    ] XDG_CONFIG_HOME: executing...
[cont-env    ] XDG_CONFIG_HOME: /etc/cont-env.d/XDG_CONFIG_HOME: line 1: /config/xdg/config: not found
[cont-env    ] XDG_CONFIG_HOME: terminated successfully.
[cont-env    ] XDG_CONFIG_HOME: loading...
[cont-env    ] XDG_DATA_HOME: executing...
[cont-env    ] XDG_DATA_HOME: /etc/cont-env.d/XDG_DATA_HOME: line 1: /config/xdg/data: not found
[cont-env    ] XDG_DATA_HOME: terminated successfully.
[cont-env    ] XDG_DATA_HOME: loading...
[cont-env    ] XDG_RUNTIME_DIR: executing...
[cont-env    ] XDG_RUNTIME_DIR: /etc/cont-env.d/XDG_RUNTIME_DIR: line 1: /tmp/run/user/app: not found
[cont-env    ] XDG_RUNTIME_DIR: terminated successfully.
[cont-env    ] XDG_RUNTIME_DIR: loading...
[cont-env    ] XDG_STATE_HOME: executing...
[cont-env    ] XDG_STATE_HOME: /etc/cont-env.d/XDG_STATE_HOME: line 1: /config/xdg/state: not found
[cont-env    ] XDG_STATE_HOME: terminated successfully.
[cont-env    ] XDG_STATE_HOME: loading...
[cont-env    ] container environment variables initialized.
[cont-secrets] loading container secrets...
[cont-secrets] container secrets loaded.
[cont-init   ] executing container initialization scripts...
[cont-init   ] 10-check-app-niceness.sh: executing...
[cont-init   ] 10-check-app-niceness.sh: terminated successfully.
[cont-init   ] 10-clean-logmonitor-states.sh: executing...
[cont-init   ] 10-clean-logmonitor-states.sh: terminated successfully.
[cont-init   ] 10-clean-tmp-dir.sh: executing...
[cont-init   ] 10-clean-tmp-dir.sh: terminated successfully.
[cont-init   ] 10-init-users.sh: executing...
[cont-init   ] 10-init-users.sh: terminated successfully.
[cont-init   ] 10-set-tmp-dir-perms.sh: executing...
[cont-init   ] 10-set-tmp-dir-perms.sh: terminated successfully.
[cont-init   ] 10-xdg-runtime-dir.sh: executing...
[cont-init   ] 10-xdg-runtime-dir.sh: mkdir: can't create directory '': No such file or directory
[cont-init   ] 10-xdg-runtime-dir.sh: terminated with error 1.

Please help me correct this.
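For reference, the fatal last line of the log is the init script handing `mkdir` an empty path. My assumption, based on the earlier "Permission denied" under /config/xdg, is that the XDG_RUNTIME_DIR value could not be read and so resolved to an empty string; the mkdir failure itself is easy to reproduce:

```shell
# If XDG_RUNTIME_DIR resolves to an empty string, mkdir gets the empty path
# and fails exactly as in the container log ("can't create directory ''").
XDG_RUNTIME_DIR=""
if ! mkdir "$XDG_RUNTIME_DIR" 2>/dev/null; then
    echo "mkdir failed, as in the container log"
fi
```

If that reading is right, the thing to check would be the ownership/permissions of the container's /config (appdata) folder, not the script itself.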

Link to comment
