zyphermonkey

Members

  • Posts: 9
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


zyphermonkey's Achievements

Noob (1/14)

1 Reputation

  1. I didn't ignore it (but did ask a valid question about it) because it didn't make any sense to me at the time, and honestly it still doesn't. Why do Docker names resolve via an internal Docker DNS server on a custom network, but not on the default Unraid bridge? There's no explanation of why it's necessary to switch all your containers to a new network, and the requirement isn't listed in the first post or on the default landing pages on Docker Hub or GitHub, where almost all of the other instructions and requirements are. I only made a suggestion to help improve the experience for others who might run into the same issue I had. Everything else during setup was what I expected, but this wasn't. I wasn't trying to criticize anyone's work, and I really appreciate the container.
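     As far as I can tell, Docker's embedded DNS (127.0.0.11) only answers for user-defined networks, not the default bridge, which seems to be why the custom network is needed at all. A minimal way to see the difference (image and container names here are purely illustrative):

         # Name resolution works between containers on a user-defined network...
         docker network create testnet
         docker run -d --name web --network testnet nginx
         docker run --rm --network testnet busybox nslookup web     # resolves

         # ...but the same lookup from a container on the default bridge fails.
         docker run --rm busybox nslookup web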
  2. Gotcha. Doing that and reverting all my .confs back to their default settings worked. I feel like this should be in the first post of this thread. Everything else was straightforward for me without digging into any manuals except for this part, but that could just be me. The steps that worked for me:
     1. Create a new docker network: docker network create my-bridge
     2. Install letsencrypt-docker using the new network.
     3. Move any dockers you want to proxy onto the new network.
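     For anyone else, a rough command-line sketch of those three steps (in Unraid you would normally pick the custom network from the container template instead; names here are placeholders and the letsencrypt line leaves out all of its usual ports, paths, and environment variables):

         # 1. Create the new network
         docker network create my-bridge

         # 2. Recreate the letsencrypt container on that network
         docker run -d --name letsencrypt --network my-bridge linuxserver/letsencrypt

         # 3. Move an existing container over to the new network
         docker network disconnect bridge tautulli
         docker network connect my-bridge tautulli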
  3. So I missed the section at the bottom that mentions making a new custom network and moving all your containers over to it. Is that really necessary, or is it enough to just have them all on the same network? All my dockers are on the same internal docker network (bridge). The issue appears to be with the resolver setting. If I disable it and set the upstream statically in the .conf file, it works fine.

         location /tautulli {
             auth_basic "Restricted";
             auth_basic_user_file /config/nginx/.htpasswd;
             include /config/nginx/proxy.conf;
             # resolver 127.0.0.11 valid=30s;
             set $upstream_tautulli 172.17.0.14;
             proxy_pass http://$upstream_tautulli:8181;
             # proxy_pass http://192.168.1.10:8282;
         }

     I also can't resolve docker names from within the container, and there is nothing static in the hosts file except for local info.

         root@dcb925741e00:/$ nslookup tautulli
         nslookup: can't resolve '(null)': Name does not resolve
         nslookup: can't resolve 'tautulli': Name does not resolve
         root@dcb925741e00:/$ cat /etc/hosts
         127.0.0.1       localhost
         ::1             localhost ip6-localhost ip6-loopback
         fe00::0         ip6-localnet
         ff00::0         ip6-mcastprefix
         ff02::1         ip6-allnodes
         ff02::2         ip6-allrouters
         172.17.0.2      dcb925741e00
         root@dcb925741e00:/$ cat /etc/resolv.conf
         # Generated DNSv4 entries:
         nameserver 208.67.222.222
         nameserver 192.168.1.1
         # Generated DNSv6 entries:
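     One thing worth checking in this situation is which networks the containers are actually attached to (the container name below is just an example):

         # List the networks a given container is connected to
         docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' tautulli

         # List everything attached to the default bridge
         docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'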
  4. Okay, so I got that part fixed. I have no idea how it happened, but the "container ports" got changed to match the "host ports", and obviously nothing worked after that. Now I'm trying to set up some subfolder services, and the only way I can get them to work without a 500 error is with the following, with a lot of the default settings commented out. I don't think I should be doing this. Is there something I need to configure in proxy.conf to get the default way to work?

         # first go into tautulli settings, under "Web Interface", click on show advanced,
         # set the HTTP root to /tautulli and restart the tautulli container
         # to enable password access, uncomment the two auth_basic lines
         location /tautulli {
             # auth_basic "Restricted";
             # auth_basic_user_file /config/nginx/.htpasswd;
             include /config/nginx/proxy.conf;
             # resolver 127.0.0.11 valid=30s;
             # set $upstream_tautulli tautulli;
             # proxy_pass http://$upstream_tautulli:8181;
             proxy_pass http://192.168.1.10:8282;
         }
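     On the port mix-up: the host port can be anything, but the container port has to stay at whatever the app itself listens on. Roughly, in docker run terms (Tautulli listens on 8181; the image name is just a placeholder for however the container is actually installed):

         # host port 8282 -> container port 8181
         docker run -d --name tautulli -p 8282:8181 tautulli/tautulli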
  5. Are you supposed to be able to see the default index.html landing page even if there are errors loading certs? I have the ports forwarded on my firewall, but even if I go to the local ip:port I don't get anything like I do if I just load up a plain nginx docker; I just get the default "This site can’t be reached" page in Chrome. I also tried using a custom br0 interface so this docker would get its own IP and could use ports 80 and 443 on its own, and still no landing page. Here's the error I'm getting, but I fear it's because nginx isn't starting up correctly for some reason.

         Failed authorization procedure. zyphermonkey.strangled.net (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://zyphermonkey.strangled.net/.well-known/acme-challenge/0FLixOl9CLlYQEihDp7YvgO-I6GnyYZGjM7Jvb2Vvjg: Timeout during connect (likely firewall problem)

     and

         Domain: zyphermonkey.strangled.net
         Type: connection
         Detail: Fetching http://zyphermonkey.strangled.net/.well-known/acme-challenge/0FLixOl9CLlYQEihDp7YvgO-I6GnyYZGjM7Jvb2Vvjg: Timeout during connect (likely firewall problem)
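     A couple of quick reachability checks that would narrow down whether this is nginx not starting or the firewall/forwarding (the hostname and LAN IP here are from my setup; the challenge path just needs to reach nginx, a 404 is fine):

         # Does nginx answer locally at all?
         curl -I http://192.168.1.10:80/

         # Does the path Let's Encrypt uses work from outside?
         curl -I --max-time 10 http://zyphermonkey.strangled.net/.well-known/acme-challenge/test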
  6. If you're going to implement per-app settings, I have a few requests:
     1. I currently have configs in two different directories because I created my own appdata directory before Unraid had a default. It would be nice if the app would let me specify the config directory for each app instead of just pointing it at one root appdata directory. (I know I should probably just work on getting all appdata into one root directory.)
     2. Once you start creating separate backups for each app, it would be awesome to only shut down the container that is being backed up, instead of shutting down all containers and then backing up. This would prevent all containers from being down for hours because some containers (e.g. Plex) take forever to back up.
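     Just to illustrate what I mean by the second request, something along these lines per container rather than one global stop/backup/start (the paths and names are placeholders, not how the plugin works today):

         for name in plex sonarr tautulli; do
             docker stop "$name"
             tar -czf "/mnt/user/backups/${name}-$(date +%F).tar.gz" "/mnt/user/appdata/${name}"
             docker start "$name"
         done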
  7. I changed the /data volume back to my share after everything installed, and it came back up and is serving metadata to Headphones with no issues.
  8. I was getting the following error. I got past it by pointing the /data container volume from a share ("/mnt/user/....") to a direct disk ("/mnt/disk*/"). Once it's done, I'm going to try to move it back to a share and see if the docker runs like that.

         initialising empty databases
         *** /etc/my_init.d/40_initialise.sh failed with status 1
         *** Killing all processes...
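     For context, the only change was the /data volume mapping, roughly like this in docker run terms (the image name and disk number are placeholders for my box, not a recommendation):

         # before: -v /mnt/user/appdata/musicbrainz/data:/data   (share path, failed during init)
         # after:  point /data straight at a single disk
         docker run -d --name musicbrainz -v /mnt/disk1/appdata/musicbrainz/data:/data linuxserver/musicbrainz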