spants

[SUPPORT] pihole for unRaid - Spants repo


Thanks for creating this. After having pi-hole running for days/weeks, I get the following errors in the logs when trying to load the pi-hole GUI:

 

2016/09/23 14:57:19 [error] 242#242: *49 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 93 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?summaryRaw&getQuerySources HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
2016/09/23 14:57:19 [error] 242#242: *44 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?summary HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
2016/09/23 14:57:19 [error] 242#242: *50 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?getForwardDestinations HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
2016/09/23 14:57:19 [error] 242#242: *47 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?overTimeData HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
2016/09/23 14:57:19 [error] 242#242: *48 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?getQueryTypes HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"

 

These are the requests to the backend for the graph content, and the end result is that the graphs never load (you're left with spinning 'loading arrows').

 

It's discussed in the following GitHub issue, and you may need to upgrade the php-fpm version:

 

https://github.com/pi-hole/pi-hole/issues/375

 

I'm available to test anything you need to.  I'll also try to fix it myself if/when I have the time.

 

 

 

To respond to your message directly: is this related to the log file size (the reason for the User Scripts piece in my comment above)? I don't believe spants' container installs the cron automatically, so if you skipped (or overlooked) that step, the logs may simply be getting too big.


I moved to diginc's repo, as it is "directly" tied to pi-hole development (https://hub.docker.com/r/diginc/pi-hole/). I took spants' template, changed the repository to 'diginc/pi-hole:alpine' and the Docker Hub URL to 'https://hub.docker.com/r/diginc/pi-hole/'.

 

diginc added a parameter covering the change that originally prompted spants to create his container: disabling IPv6 support. So if you add a new container variable:

Name: Key 4

Key: IPv6

Value: False

Default Value: False

 

you should be up and running as spants designed, but using a container that follows pi-hole development a bit more closely. Mine has been running for a few weeks without an issue.
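For reference, outside of the Unraid template the same setup can be sketched as a plain docker run command. Only the image name and the IPv6 variable come from the post above; the port and volume mappings are my assumptions and will need adjusting to your setup:

```shell
# Sketch only: image and IPv6 variable are from the post above;
# the port/volume mappings are assumptions -- adjust to your setup.
docker run -d --name pihole \
  -e IPv6=False \
  -p 53:53/udp -p 53:53/tcp \
  -p 80:80 \
  -v /mnt/user/appdata/pihole:/etc/pihole \
  diginc/pi-hole:alpine
```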

 

 

One other thing I did was install the User Scripts plugin and then create the cron script in /boot/config/plugins/user.scripts/scripts/pihole/ (following the plugin structure, the script is named 'script'). Then I edited the User Scripts settings to run it at the interval of my choosing (each morning).

 

My script looks like this (just the cron file converted):

 

#!/bin/bash
DOCKER_NAME=pihole
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Pi-hole: Download any updates from the adlists
docker exec $DOCKER_NAME pihole updateGravity > /dev/null

# Pi-hole: Update the Web interface shortly after gravity runs
#          This should also update the version number if it is changed in the dashboard repo
#docker exec $DOCKER_NAME pihole updateDashboard > /dev/null

# Pi-hole: Parse the log file before it is flushed and save the stats to a database
#          This will be used for a historical view of your Pi-hole's performance
#docker exec $DOCKER_NAME dailyLog.sh > /dev/null   # note: this is outdated

# Pi-hole: Flush the log daily so it doesn't get out of control
#          Stats will be viewable in the Web interface thanks to the job above
docker exec $DOCKER_NAME pihole flush > /dev/null
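One hedged refinement (my own addition, not part of the original script): bail out early if the container isn't running, so the User Scripts log shows a clear message instead of docker exec errors. The container name is assumed to match the script above:

```shell
#!/bin/bash
# Assumed container name, matching the script above
DOCKER_NAME=pihole
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Skip the run entirely if the container isn't up
if ! docker ps --format '{{.Names}}' | grep -qx "$DOCKER_NAME"; then
    echo "Container $DOCKER_NAME is not running; skipping" >&2
    exit 1
fi

docker exec "$DOCKER_NAME" pihole updateGravity > /dev/null
docker exec "$DOCKER_NAME" pihole flush > /dev/null
```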

 

Nice changes! I will update the template so that other people can use them. I have been running "pixelserv-tls" and the AB-Solution ad blocker on my router for a while, which also works well!

 

 


To respond to your message directly: is this related to the log file size (the reason for the User Scripts piece in my comment above)? I don't believe spants' container installs the cron automatically, so if you skipped (or overlooked) that step, the logs may simply be getting too big.

 

Cron was taken from the first post on Aug 18:

 

root@Tower:/boot/config/plugins/pihole# ls -l
total 16
-rwxrwxrwx 1 root root 1519 Aug 18 23:47 pihole.cron*

root@Tower:/boot/config/plugins/pihole# cat pihole.cron
# Pi-hole: A black hole for Internet advertisements
# (c) 2015, 2016 by Jacob Salmela
# Network-wide ad blocking via your Raspberry Pi
# http://pi-hole.net
# Updates ad sources every week
#
# Pi-hole is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.

# Your container name goes here:
#DOCKER_NAME=pihole-for-unRaid
#PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Pi-hole: Update the ad sources once a week on Sunday at 01:59
#          Download any updates from the adlists
59 1    * * 7   root    docker exec pihole-for-unRaid pihole updateGravity > /dev/null

# Pi-hole: Update the Web interface shortly after gravity runs
#          This should also update the version number if it is changed in the dashboard repo
#30 2    * * 7   root    docker exec pihole-for-unRaid pihole updateDashboard > /dev/null

# Pi-hole: Parse the log file before it is flushed and save the stats to a database
#          This will be used for a historical view of your Pi-hole's performance
#50 23  * * *   root    docker exec pihole-for-unRaid dailyLog.sh # note: this is outdated > /dev/null

# Pi-hole: Flush the log daily at 11:58 so it doesn't get out of control
#          Stats will be viewable in the Web interface thanks to the cron job above
58 23   * * *   root    docker exec pihole-for-unRaid pihole flush > /dev/null
root@Tower:/boot/config/plugins/pihole#

 

Log file is currently sitting at ~87M:

 

bash-4.3# pwd
/var/log

bash-4.3# ls -l pihole.log
-rw-rw-rw-    1 root     root      86833328 Sep 24 00:00 pihole.log



Just found these in the Unraid syslogs. It looks like the cron may not be running properly:

 

root@Tower:/var/log# grep pihole syslog*
syslog:Sep 22 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog:Sep 23 10:20:56 Tower php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'stop' 'pihole-for-unRaid'
syslog:Sep 23 10:21:00 Tower php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'start' 'pihole-for-unRaid'
syslog.1:Sep 14 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 15 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 16 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 17 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 18 01:59:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole updateGravity > /dev/null
syslog.1:Sep 18 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 19 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 20 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.1:Sep 21 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep  7 23:58:16 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep  8 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep  9 23:58:11 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep 10 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep 11 01:59:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole updateGravity > /dev/null
syslog.2:Sep 11 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep 12 23:58:07 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
syslog.2:Sep 13 23:58:01 Tower crond[1512]: exit status 127 from user root root    docker exec pihole-for-unRaid pihole flush > /dev/null
root@Tower:/var/log#
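For what it's worth, exit status 127 from a shell normally means "command not found". Unraid's crond reads user crontabs, which have no per-line user field, so the `root` column in each line above is most likely being executed as the command itself. A hedged sketch of the corrected lines (schedule and container name taken from the cron file above; how the crontab gets installed is assumed):

```shell
# User-crontab format: five schedule fields, then the command -- no "root" column
59 1  * * 7  docker exec pihole-for-unRaid pihole updateGravity > /dev/null
58 23 * * *  docker exec pihole-for-unRaid pihole flush > /dev/null
```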

 

I manually ran the flush and the graphs now appear:

 

root@Tower:/var/log# docker exec pihole-for-unRaid pihole flush
::: Flushing /var/log/pihole.log ...... done!


Note that on the new template I have changed the docker name from pihole-for-unraid to just pihole. It now uses the base docker image that I modified, so my template is just a template for it (I don't need to build a docker anymore).


Sorry for the basic question. I use a reverse proxy docker to access other container interfaces from outside my network using my own domain name, all password protected. The reverse proxy uses port 80. Does this mean I cannot use pihole for unRaid? Is there a way to configure the reverse proxy on another port, and how would that impact access via my domain (for example, myowndomain.com/couchpotato)?

 

I am tempted to dust off my Raspberry Pi and try pihole.


I do the same.

 

pihole uses port 80

unraid on 81

apache (spotweb and stuff) on 82

letsencrypt and proxy on 1962



Any chance you could post a sanitized version of your reverse proxy conf file? I'm curious to see how you are assigning both the stuff on 82 and the stuff on 1962. I have also been trying to figure out how to do letsencrypt with the linuxserver Apache docker. I still use the older reverse proxy docker, and I use the Apache docker at the same time; I want to consolidate. I posted my issue with letsencrypt on the Apache docker thread: https://lime-technology.com/forum/index.php?topic=43858.msg500727#msg500727


 

Many thanks!!

 

H.



I thought Let's Encrypt cert auto-renewal checks port 443 or 80? Does it only need to see one of those ports open, regardless of what's running on it?



I just deleted and re-created with the new 'pihole' docker. Wanted to let you know that there's no longer a 'WebUI' option when it's running and I click the pihole docker icon. If I manually go to '<tower ip>/admin', the pihole web UI loads.


Strange, working fine on mine. Is it cached in your browser?



Sorry for the delay - I wasn't getting notifications.

The Apache docker is different from the nginx-letsencrypt one.

 

Apache mappings for docker are

443/tcp  192.168.1.22:444

80/tcp  192.168.1.22:82

 

Nginx-letsencrypt

443/tcp  192.168.1.22:443

80/tcp  192.168.1.22:1962

 

proxy.conf

client_max_body_size 5000m;
client_body_buffer_size 128k;

#Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;

# Basic Proxy Config
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect  http://  $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 32 4k;

 

site-confs

server {
listen 80;

listen 443 ssl default_server;

if ($scheme = http) {
	return 301 https://xxxxx.duckdns.org$request_uri;
}

root /config/www;
index index.html index.htm index.php;

server_name xxxxx.duckdns.org;

ssl_certificate /config/keys/fullchain.pem;
ssl_certificate_key /config/keys/privkey.pem;
ssl_dhparam /config/nginx/dhparams.pem;
ssl_ciphers 'snip';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

client_max_body_size 5000M;

### Add HTTP Strict Transport Security ###
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;
add_header Front-End-Https on;

location / {
	try_files $uri $uri/ /index.html /index.php?$args =404;
}

       # amazon echo-node red
location /echo {
        proxy_pass http://192.168.1.22:1880/echo;
}

location /nextcloud {
			include /config/nginx/proxy.conf;
			proxy_pass https://192.168.1.22:442/nextcloud;
}

}

 

So the port changes are done via the docker mappings.
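Following the same pattern as the /nextcloud block, adding another proxied app (the couchpotato example asked about earlier) would be a hypothetical location block like this; the backend port 5050 is an assumption on my part, not something from spants' config:

```nginx
location /couchpotato {
        include /config/nginx/proxy.conf;
        # Hypothetical backend -- match the host:port your container is mapped to
        proxy_pass http://192.168.1.22:5050/couchpotato;
}
```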



Did you install the one from the beta section? I found there are two dockers, and they are very different.


I have everything set up, including the daily cron, and it's all working... or better said, the ads are being blocked as far as I can tell.

But I am getting the following error messages when I run the cron and the dashboard is not working:

 

Script Starting Tue, 01 Nov 2016 18:39:39 +0100

 

Full logs for this script are available at /tmp/user.scripts/tmpScripts/pihole/log.txt

 

cat: can't open '/etc/pihole/list.0.raw.githubusercontent.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.1.mirror1.malwaredomains.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.2.sysctl.org.domains': No such file or directory

cat: can't open '/etc/pihole/list.3.zeustracker.abuse.ch.domains': No such file or directory

cat: can't open '/etc/pihole/list.4.s3.amazonaws.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.5.s3.amazonaws.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.6.hosts-file.net.domains': No such file or directory

cat: can't open '/etc/pihole/list.7.raw.githubusercontent.com.domains': No such file or directory

 

dnsmasq: failed to create listening socket for port 53: Address in use

Script Finished Tue, 01 Nov 2016 18:40:20 +0100

 

Full logs for this script are available at /tmp/user.scripts/tmpScripts/pihole/log.txt

 

My docker log:

 

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

::: Testing DNSmasq config: ::: Testing PHP-FPM config: ::: Testing NGINX config: ::: All config checks passed, starting ...

:::

::: Neutrino emissions detected...

:::

::: No custom adlist file detected, reading from default file... done!

:::

::: Getting raw.githubusercontent.com list... No changes detected, transport skipped!

::: Getting mirror1.malwaredomains.com list... No changes detected, transport skipped!

::: Getting sysctl.org list... No changes detected, transport skipped!

::: Getting zeustracker.abuse.ch list... No changes detected, transport skipped!

::: Getting s3.amazonaws.com list... No changes detected, transport skipped!

::: Getting s3.amazonaws.com list... No changes detected, transport skipped!

::: Getting hosts-file.net list... No changes detected, transport skipped!

::: Getting raw.githubusercontent.com list... No changes detected, transport skipped!

:::

cat: can't open '/etc/pihole/list.0.raw.githubusercontent.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.1.mirror1.malwaredomains.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.2.sysctl.org.domains': No such file or directory

cat: can't open '/etc/pihole/list.3.zeustracker.abuse.ch.domains': No such file or directory

cat: can't open '/etc/pihole/list.4.s3.amazonaws.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.5.s3.amazonaws.com.domains': No such file or directory

cat: can't open '/etc/pihole/list.6.hosts-file.net.domains': No such file or directory

cat: can't open '/etc/pihole/list.7.raw.githubusercontent.com.domains': No such file or directory

::: Aggregating list of domains... done!

::: Formatting list of domains to remove comments.... done!

::: 0 domains being pulled in by gravity...

::: Removing duplicate domains.... done!

::: 0 unique domains trapped in the event horizon.

:::

::: Adding adlist sources to the whitelist... done!

::: Whitelisting 6 domains... done!

::: BlackListing 0 domains... done!

::: Formatting domains into a HOSTS file...

:::

::: Cleaning up un-needed files... done!

:::

::: Refresh lists in dnsmasq...

::: Pi-hole blocking is Enabled

tail: can't open '/var/log/lighttpd/*.log': No such file or directory

==> /var/log/pihole.log <==

Nov 1 16:56:49 dnsmasq[173]: read /etc/hosts - 6 addresses

Nov 1 16:56:49 dnsmasq[173]: read /etc/pihole/gravity.list - 0 addresses

Nov 1 17:02:12 dnsmasq[172]: started, version 2.76 cachesize 10000

 

Nov 1 17:02:12 dnsmasq[172]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify

Nov 1 17:02:12 dnsmasq[172]: warning: interface eth0 does not currently exist

 

Nov 1 17:02:12 dnsmasq[172]: using nameserver 209.222.18.218#53

Nov 1 17:02:12 dnsmasq[172]: using nameserver 209.222.18.222#53

Nov 1 17:02:12 dnsmasq[172]: bad address at /etc/hosts line 7

Nov 1 17:02:12 dnsmasq[172]: read /etc/hosts - 6 addresses

Nov 1 17:02:12 dnsmasq[172]: read /etc/pihole/gravity.list - 0 addresses



Sorry about all the confusion

 

Make sure that you use PIHOLE and not UNRAID-HOLE from now on. Pihole was in beta and I have now removed the beta flag.


dnsmasq: failed to create listening socket for port 53: Address in use

Script Finished Tue, 01 Nov 2016 18:40:20 +0100

 

This bit is strange! Are you running two of them?

I would delete/rename the app directory for pihole and try again.
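If the port 53 clash comes back, a couple of quick checks on the host can show what's already bound there (commands assumed to be present on Unraid):

```shell
# Which process is already listening on DNS port 53 (UDP and TCP)?
netstat -lnup | grep ':53 '
netstat -lntp | grep ':53 '
# Is more than one pi-hole container running?
docker ps --filter name=pihole
```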



Is this related? It's the template on a new setup:

https://puu.sh/s3sfM/a728584fb0.png

 



Could be! Silly me, it only needs one port 53. I have updated the template...

 

Edit: mine had the same but ran OK. Attached is my config.

Screen_Shot_2016-11-02_at_00_34_37.png.1d450d035a4123ebc44f8cd0c26c1b1e.png


Changed things around a bit and the DNS port 53 remark is gone. I was using PlexConnect and had to move the ports via nginx.

That is done. However, I cannot browse when I only set my Unraid IP address as DNS. I used a similar setup as per the screenshot.

 

Log file:

 

Nov 2 08:45:04 dnsmasq[174]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify

Nov 2 08:45:04 dnsmasq[174]: warning: interface eth0 does not currently exist

 

 

Using IPv4

dnsmasq: syntax check OK.

[02-Nov-2016 08:44:59] NOTICE: configuration file /etc/php5/php-fpm.conf test is successful

 

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

::: Testing DNSmasq config: ::: Testing PHP-FPM config: ::: Testing NGINX config: ::: All config checks passed, starting ...

:::

::: Neutrino emissions detected...

:::

::: No custom adlist file detected, reading from default file... done!

:::

::: Getting raw.githubusercontent.com list... List updated, transport successful!

::: Getting mirror1.malwaredomains.com list... No changes detected, transport skipped!

::: Getting sysctl.org list... No changes detected, transport skipped!

::: Getting zeustracker.abuse.ch list... No changes detected, transport skipped!

::: Getting s3.amazonaws.com list... No changes detected, transport skipped!

::: Getting s3.amazonaws.com list... No changes detected, transport skipped!

::: Getting hosts-file.net list... No changes detected, transport skipped!

::: Getting raw.githubusercontent.com list... List updated, transport successful!

:::

::: Aggregating list of domains... done!

::: Formatting list of domains to remove comments.... done!

::: 126372 domains being pulled in by gravity...

::: Removing duplicate domains.... done!

::: 102289 unique domains trapped in the event horizon.

:::

::: Adding adlist sources to the whitelist... done!

::: Whitelisting 6 domains... done!

::: BlackListing 0 domains... done!

::: Formatting domains into a HOSTS file...

:::

::: Cleaning up un-needed files... done!

:::

::: Refresh lists in dnsmasq...

::: Pi-hole blocking is Enabled

==> /var/log/pihole.log <==

Nov 2 08:40:54 dnsmasq[180]: using nameserver 8.8.8.8#53

Nov 2 08:40:54 dnsmasq[180]: using nameserver 8.8.4.4#53

Nov 2 08:40:54 dnsmasq[180]: read /etc/hosts - 2 addresses

Nov 2 08:40:54 dnsmasq[180]: read /etc/pihole/gravity.list - 102289 addresses

Nov 2 08:45:04 dnsmasq[174]: started, version 2.76 cachesize 10000

 

Nov 2 08:45:04 dnsmasq[174]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify

Nov 2 08:45:04 dnsmasq[174]: warning: interface eth0 does not currently exist

 

Nov 2 08:45:04 dnsmasq[174]: using nameserver 8.8.8.8#53

Nov 2 08:45:04 dnsmasq[174]: using nameserver 8.8.4.4#53

Nov 2 08:45:04 dnsmasq[174]: read /etc/hosts - 2 addresses

tail: can't open '/var/log/lighttpd/*.log': No such file or directory

Nov 2 08:45:04 dnsmasq[174]: read /etc/pihole/gravity.list - 102289 addresses
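As an aside, the "Removing duplicate domains" step in that log (126372 pulled in, 102289 unique) is essentially set arithmetic over the merged adlists. The real gravity script does this with shell tools over hosts files; this is only a conceptual sketch, not Pi-hole's actual code:

```python
def gravity_dedup(lists):
    """Merge adlist blobs, strip comments and blanks, and deduplicate."""
    seen = set()
    for blob in lists:
        for line in blob.splitlines():
            # Drop anything after a '#' comment, normalise case, skip blanks.
            domain = line.split("#", 1)[0].strip().lower()
            if domain:
                seen.add(domain)
    return sorted(seen)

merged = gravity_dedup([
    "ads.example.com\n# a comment\ntracker.example.net",
    "ADS.example.com\n",
])
# merged == ["ads.example.com", "tracker.example.net"]
```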


The way it works is that you set the DNS on your machines (or via your router) to point to your unRaid box; it processes the requests, and anything that passes is resolved via the upstream DNS entries in the template (which you can change).

 

Do you only have one interface? (I do)
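A quick way to check that the container is actually answering is to send a raw DNS query straight at it. This is a minimal stdlib-only sketch of such a query (the server IP passed to `query` is whatever your unRaid box uses; `dig @<unraid-ip> somedomain` from any client does the same job):

```python
import socket
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS A-record query packet (RFC 1035)."""
    # Header: id, flags (recursion desired), 1 question, no other records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def query(server, name, timeout=2.0):
    """Send the query over UDP to server:53 and return the raw response."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(build_query(name), (server, 53))
        return s.recvfrom(512)[0]
    finally:
        s.close()

# e.g. query("192.168.1.252", "doubleclick.net") -- a blocked domain should
# come back pointing at the Pi-hole itself rather than the real address.
```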


Restarted the server and things seem to be OK now.

 

I do get an error message for a certain log file: tail: can't open '/var/log/lighttpd/*.log': No such file or directory

Also the dashboard does not show any stats.


Give it some time; they should appear after an hour of use.


All seems to be working! By the way, port 53 is passed through twice, once as TCP and once as UDP.

My remaining problem is getting PlexConnect to work on other ports. I managed to move ports 80 and 443; however, port 53 is bugging me.

 

Does anyone have a suggestion for moving PlexConnect from port 53 to port 6053 in nginx?


Thanks for the reminder as to why there were two port 53 entries. I shouldn't do fixes when I'm tired.

 

No other apps should use port 53, as it is reserved for DNS lookups. Not sure why your app is using it.

 

Edit: I see, it is used to fool the Apple TV into using Plex. Not sure how to get around that one.


I have also run into the port 53 issue:

 

docker: Error response from daemon: failed to create endpoint pihole on network bridge: Error starting userland proxy: listen tcp 0.0.0.0:53: bind: address already in use.

 

If I run 'lsof -Pni | grep 53', the only thing that comes up on port 53 is:

 

dnsmasq  15316  nobody    5u  IPv4  26575      0t0  UDP 192.168.122.1:53

dnsmasq  15316  nobody    6u  IPv4  26576      0t0  TCP 192.168.122.1:53 (LISTEN)

 

Any ideas how to fix this one?
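For what it's worth, 192.168.122.1 is the default address of libvirt's virbr0 bridge, so that dnsmasq is most likely the one started for the VM manager's virtual network rather than a stray Pi-hole. Docker's userland proxy tries to bind the wildcard address 0.0.0.0:53, and on Linux that fails if any interface already holds the port, so mapping the container's ports to a specific host IP instead (e.g. `-p 192.168.1.252:53:53/udp`, with your own LAN IP) usually avoids the clash. A small stdlib sketch of the bind test Docker is effectively doing (the IP and port arguments are purely illustrative):

```python
import socket

def can_bind(ip, port):
    """Return True if ip:port is free for a TCP bind, False if already in use."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEADDR only helps with sockets in TIME_WAIT; an active listener
    # on the same ip:port (or on the wildcard) still causes EADDRINUSE.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind((ip, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

# can_bind("0.0.0.0", 53) fails whenever ANY interface already holds :53,
# which is why 192.168.122.1:53 alone is enough to block the wildcard bind.
```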

