cglatot

Posts posted by cglatot

  1. 4 hours ago, digiblur said:

    Yes, for some reason this container logs every single DNS request to the container log and keeps it. Stop your entire Docker service and turn on the log rotation and limits in the unRAID Docker settings. It will keep the problem at bay.

    I've already got log rotation on. Here are my Docker settings:

     

    [screenshot: unRAID Docker settings]

  2. Has anyone had any issues with PiHole taking up an insane amount of space in the Docker disk image? With PiHole enabled, it quickly eats up my Docker disk image space. It's set to 40GB, and after removing PiHole just now the usage went from 76% down to 22%. I have no idea what is causing it - I set it up following SpaceInvaderOne's guide. It works great otherwise - it just destroys the space and I need to remove it to clear it up again.

     

    Setup below, am I missing something?

     

    [screenshot: PiHole Docker container setup]

  3. On 9/2/2017 at 8:19 PM, lionceau said:

    Ok, I'm at the UPS now and did some research. You're right in that unRAID doesn't control the delay. This "Turn off UPS after shutdown" functionality doesn't work with Cyberpower UPS and should be set to "NO" or you will experience behaviour such as the scheduled hard shutdowns I experienced.

     

    There's also no way to change the default 1 hour shutdown delay/schedule, not even in the Windows software or unRAID's apctest. APC units default to 90 seconds. I'm going to do some more research on this considering the poster in the freeNAS forum seems to be able to control this via USB.

     

    There's a longer thread here:

    http://lime-technology.com/forum/index.php?topic=13411.0

    Did you ever do more research into this? I am just now discovering this issue myself :P

  4. IGNORE THIS!! I'm going to leave this here for prudence's sake in case someone else has the same issue.

     

    It turns out the template DID have a "www" subdomain, which I removed (I removed the "subdomains" variable completely). For some reason this did not work. So I remade the subdomains variable, manually removed the container, and remade the container using the Unraid GUI. This time it worked.

     

    ------------------------------------------------------------------------------

     

    I'm having some issues with getting a new cert generated by letsencrypt / certbot. Everything was working fine up until this morning.

     

    It is trying to get a cert for www.mydomain.ddns.net - but that does not exist since it is a DDNS service - there is only mydomain.ddns.net. How do I change it so that it does not try to validate www.mydomain.ddns.net? My unraid docker config does not have any subdomains listed for it.

     

    See the log below.

     

    -------------------------------------
    [linuxserver.io ASCII logo]
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donations/
    -------------------------------------
    GID/UID
    -------------------------------------
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    Sub-domains processed are: -d www.MYDOMAIN.ddns.net
    E-mail address entered: REDACTED
    Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
    usage:
    certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
    
    Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
    it will attempt to use a webserver both for obtaining and installing the
    cert.
    certbot: error: argument --cert-path: No such file or directory
    
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Obtaining a new certificate
    Performing the following challenges:
    tls-sni-01 challenge for MYDOMAIN.ddns.net
    tls-sni-01 challenge for www.MYDOMAIN.ddns.net
    Waiting for verification...
    Cleaning up challenges
    Failed authorization procedure. www.MYDOMAIN.ddns.net (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: DNS problem: NXDOMAIN looking up A for www.MYDOMAIN.ddns.net
    
    IMPORTANT NOTES:
    - The following errors were reported by the server:
    
    Domain: www.MYDOMAIN.ddns.net
    Type: connection
    Detail: DNS problem: NXDOMAIN looking up A for www.MYDOMAIN.ddns.net
    
    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A record(s) for that domain
    contain(s) the right IP address. Additionally, please check that
    your computer has a publicly routable IP address and that no
    firewalls are preventing the server from communicating with the
    client. If you're using the webroot plugin, you should also verify
    that you are serving files from the webroot path you provided.
    - Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.
    /var/run/s6/etc/cont-init.d/50-config: line 127: cd: /config/keys/letsencrypt: No such file or directory
    [cont-init.d] 50-config: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

     

  5. I'm using a template from here: html5up.net

     

    Just download one and modify the index.html

     

    The guy is super talented and these are really easy to customize

    Thanks for sharing aptalca. The webUI is showing a nice overview now :-) However I don't quite understand how to open the Apps by clicking on the nice buttons. Where does this need to be added? https://192.168.xxx.xxx:xxx/nextcloud

     

    You need to edit the HTML file(s) to include links to your apps. If you are using a reverse proxy, use your domain, not the IP (local IPs will only work from the local network / VPN).

     

    If you don't know HTML: http://www.w3schools.com/html/

  6. How can I block access to domain.com/test.txt, /folder, /sample.doc etc etc ?

     

    EDIT: I would like to know a more elegant method to do this, but in the meantime you can block multiple files / directories using this location format:

     

    location ~ /(dir1|dir2|dir3|file1.ext|file2.ext|file3.ext) {
    	deny all;
    	return 404;
    }
    

     

    I would also like to know this! I didn't even realise that they could be accessed!

     

    Where in the config file do I enter that code? I tried all the way at the bottom but it messes up my whole domain.com page.

     

    Put it in the same place as your other location directives. Make sure you don't include any directories that hold resources (CSS, images, etc.) that your HTML/PHP files need access to. The deny all is a literal deny ALL.

     

    I'm still trying to work out how to stop direct-linking to images / css files whilst still allowing the server to serve them in web-pages. Apparently it can be done with nginx referer parameters, but I couldn't get it to work.
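
    In case it helps anyone finding this later, the referer approach I was attempting looked roughly like the block below. This is only a sketch - I never got it working properly - and MYDOMAIN.ddns.net is a placeholder for your own domain:

    location ~* \.(jpg|jpeg|png|gif|css)$ {
    	# "none" allows requests with no Referer header, "blocked" allows Referers
    	# stripped by proxies/firewalls; anything else must come from the site itself
    	valid_referers none blocked MYDOMAIN.ddns.net *.MYDOMAIN.ddns.net;
    	if ($invalid_referer) {
    		return 403;
    	}
    }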

  7. How can I block access to domain.com/test.txt, /folder, /sample.doc etc etc ?

     

    EDIT: I would like to know a more elegant method to do this, but in the meantime you can block multiple files / directories using this location format:

     

    location ~ /(dir1|dir2|dir3|file1.ext|file2.ext|file3.ext) {
    	deny all;
    	return 404;
    }
    

     

    I would also like to know this! I didn't even realise that they could be accessed!

  8. @cglatot

     

    What is "include /config/nginx/proxy.conf; " for? I don't have that file inside my nginx folder. Also, once I add the code below my nginx server stops responding.

     

    location /web/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.1.148:32400/web/;
    }
    
    location /plex/ {
    proxy_pass http://127.0.0.1/web/;
    }
    

     

    I'll try the basic auth once I get this sorted out.

     

    Are you using nginx, or nginx-letsencrypt? If you're using the latter (i.e. Aptalca's entry), then proxy.conf should be in the nginx folder. It contains the following:

     

    client_max_body_size 10m;
    client_body_buffer_size 128k;
    
    #Timeout if the real server is dead
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    
    # Advanced Proxy Config
    send_timeout 5m;
    proxy_read_timeout 240;
    proxy_send_timeout 240;
    proxy_connect_timeout 240;
    
    # Basic Proxy Config
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect  http://  $scheme://;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
    proxy_buffers 32 4k;

  9. Anyone have plex working under nginx reverse proxy? I can't get it to work for the life of me >:(

     

    Yup! You need to make sure the location is /web/. Here is the location entry for my plex:

     

    location /web/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.XXX.XXX:XXXX/web/;
    }

     

    And if you want to use /plex, then you can refer the plex location back to nginx with the following:

     

    location /plex/ {
    proxy_pass http://127.0.0.1/web/;
    }

     

    So then both /plex/ and /web/ will redirect to Plex.

     

    When you have done this, can you do me a favour? Can you put basic auth on your root location like below and let me know if /plex/ (or /web/) then asks you for a username/pass? I'm having this issue and can't get a fix going. (For reference, the problem I have is detailed here with a screenshot: https://lime-technology.com/forum/index.php?topic=43696.msg502493#msg502493)

     

    location / {
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;
    ....... (Whatever else you have in your root)
    }

  10. try adding

    auth_basic off;

    to the subpages

    ie

    location /plex {
      auth_basic off;
      proxy ...
    }
    

     

    Refer to http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html for more details

     

    No joy! I was really hoping that would work. I added 'auth_basic off;' to both /plex and /web and both still ask me for auth.

     

    The strange thing is that plex begins to load before it asks for the auth credentials - whereas the other locations protected with auth will be white and not loaded until the credentials are entered.

     

    Does anyone else have Plex set up via reverse proxy? Could you test whether adding basic auth to the / location also prompts for auth on the Plex proxy location, even when that location does NOT have basic auth?
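
    For reference, this is roughly what my two Plex locations look like with the auth_basic off lines added (IPs/ports are placeholders, same as in my earlier post) - and it still prompts for auth:

    location /web/ {
    	auth_basic off;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://192.168.XXX.XXX:XXXX/web/;
    }

    location /plex/ {
    	auth_basic off;
    	proxy_pass http://127.0.0.1/web/;
    }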

  11. try adding

    auth_basic off;

    to the subpages

    ie

    location /plex {
      auth_basic off;
      proxy ...
    }
    

     

    Refer to http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html for more details

     

    No joy! I was really hoping that would work. I added 'auth_basic off;' to both /plex and /web and both still ask me for auth.

     

    The strange thing is that plex begins to load before it asks for the auth credentials - whereas the other locations protected with auth will be white and not loaded until the credentials are entered.

  12. Hi all. First I want to say thanks for creating this - it has made my life so much easier.

     

    I have everything set up and working, I am reverse proxying various services (deluge, nzbget, sonarr, couch, etc) and I have basic auth set up for them using htpasswd. All is working fine.

     

    There are currently 4 locations that I don't have auth on: /request/, /web/, /plex/ (which just proxies to /web/), and / (which displays index.html).

     

    I want to use basic auth on the / location, because I want to create a list of URLs that I can easily access in index.html (instead of having to remember them all), but I only want authenticated users to see this. The problem is, when I put basic auth on the / location, it interferes with my Plex login.

     

    Here are the relevant location entries:

     

    location / {
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;
    try_files $uri $uri/ /index.html /index.php?$args =404;
    }
    location /web/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.XXX.XXX:XXXX/web/;
    }
    location /plex/ {
    proxy_pass http://127.0.0.1/web/;
    }
    

     

    Whenever I go to example.mydomain.url/plex or example.mydomain.url/web it begins to load plex, but it will then pause the loading and ask me for the auth (see screenshot). If I put in the correct creds, it will continue loading. I can also click cancel (twice) and it will continue loading. But I don't want to have the auth dialog pop up at all. If I remove the basic auth from / then no auth dialog pops up.

     

    The other service that I am not using basic auth on is Plex Requests, but it is not affected by whether or not / has auth. It never prompts me to auth (unless I include auth in the location for /request/). Here is its entry:

     

    location /request/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.XXX.XXX:XXXX/request/;
    }
    

     

    The only difference I can see between them is that Plex uses host networking, whereas Plex Requests uses a bridged connection; but I'm not sure if that's relevant.

     

    The workaround that I thought of is to use /home and create www/home/index.html and serve that when I type example.mydomain.url/home, but that is rather inelegant, and I would like to make the page appear (with auth) when just using example.mydomain.url

     

    Any help is greatly appreciated!

     

    Your plex proxy address is incorrect. 127.0.0.1 is inside the nginx-letsencrypt container. It needs to point to the plex container. Use http://localunraidip:32400/web

     

    The /plex/ location proxies to the /web/ location (just above it in my code snippet), which in turn proxies to the Plex container, so 127.0.0.1 for that entry is correct. The reason it is there is so that I can use /plex instead of /web.

     

    Even if I navigate to /web (which proxies direct to the plex container), I still have the same problem - so you can pretty much ignore the /plex location entry.

  13. Hi all. First I want to say thanks for creating this - it has made my life so much easier.

     

    I have everything set up and working, I am reverse proxying various services (deluge, nzbget, sonarr, couch, etc) and I have basic auth set up for them using htpasswd. All is working fine.

     

    There are currently 4 locations that I don't have auth on: /request/, /web/, /plex/ (which just proxies to /web/), and / (which displays index.html).

     

    I want to use basic auth on the / location, because I want to create a list of URLs that I can easily access in index.html (instead of having to remember them all), but I only want authenticated users to see this. The problem is, when I put basic auth on the / location, it interferes with my Plex login.

     

    Here are the relevant location entries:

     

    location / {
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;
    try_files $uri $uri/ /index.html /index.php?$args =404;
    }
    location /web/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.XXX.XXX:XXXX/web/;
    }
    location /plex/ {
    proxy_pass http://127.0.0.1/web/;
    }
    

     

    Whenever I go to example.mydomain.url/plex or example.mydomain.url/web it begins to load plex, but it will then pause the loading and ask me for the auth (see screenshot). If I put in the correct creds, it will continue loading. I can also click cancel (twice) and it will continue loading. But I don't want to have the auth dialog pop up at all. If I remove the basic auth from / then no auth dialog pops up.

     

    The other service that I am not using basic auth on is Plex Requests, but it is not affected by whether or not / has auth. It never prompts me to auth (unless I include auth in the location for /request/). Here is its entry:

     

    location /request/ {
    include /config/nginx/proxy.conf;
    proxy_pass http://192.168.XXX.XXX:XXXX/request/;
    }
    

     

    The only difference I can see between them is that Plex uses host networking, whereas Plex Requests uses a bridged connection; but I'm not sure if that's relevant.

     

    The workaround that I thought of is to use /home and create www/home/index.html and serve that when I type example.mydomain.url/home, but that is rather inelegant, and I would like to make the page appear (with auth) when just using example.mydomain.url
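
    Something like the below is what I have in mind for /home - just a rough sketch, assuming the web root is /config/www (I believe that's the default for this container) and that www/home/index.html exists:

    location /home/ {
    	auth_basic "Restricted";
    	auth_basic_user_file /config/nginx/.htpasswd;
    	alias /config/www/home/;
    	index index.html;
    }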

     

    Any help is greatly appreciated!

    [screenshot: basic auth prompt appearing while Plex loads]