[SUPPORT] pihole for unRaid - Spants repo



5 minutes ago, darrenyorston said:

Is there a fix for the update issue? Every time I check my plugins for updates it tells me there is one for pihole.

That's not a pihole template problem. I've seen it on several apps before, and I think an update to Unraid fixed it (or was it a patch?).

13 minutes ago, spants said:

That's not a pihole template problem. I've seen it on several apps before, and I think an update to Unraid fixed it (or was it a patch?).

Thanks for your reply. I am on version 6.7.2. Is there a newer one available? I don't see an option to update, and it's only on this app; all the others seem to update normally without a problem.


On Unraid 6.8.0 rc3

 

I've noticed that if I start my VMs that use br0 (Pihole uses the same bridge), Pihole becomes inaccessible. The web UI for Pihole comes back as soon as I shut down my VMs.

 

EDIT: Just updated to 6.8.0 rc4 and now Pihole and my VMs are running together and playing nicely.  Not sure what the problem was on rc3...

 

EDIT 2: Spoke too soon... it still seems to be happening...

Edited by nblain1
13 hours ago, nblain1 said:

On Unraid 6.8.0 rc3

 

I've noticed that if I start my VMs that use br0 (Pihole uses the same bridge), Pihole becomes inaccessible. The web UI for Pihole comes back as soon as I shut down my VMs.

 

EDIT: Just updated to 6.8.0 rc4 and now Pihole and my VMs are running together and playing nicely.  Not sure what the problem was on rc3...

 

EDIT 2: Spoke too soon... it still seems to be happening...

could it be related to this:

 


@spants - I don't think so... the br0 interface is up and running with PiHole working, but as soon as I fire up any VM the internet cuts out, because PiHole goes unresponsive (it loses access to br0 somehow). I will paste a portion of Unraid's log below showing what happens when a VM is started and then stopped. The PiHole and VM logs show nothing out of the norm.

 

Oct 28 11:38:48 Tower avahi-daemon[9683]: Joining mDNS multicast group on interface vnet0.IPv6 with address f----identifier removed----c.
Oct 28 11:38:48 Tower avahi-daemon[9683]: New relevant interface vnet0.IPv6 for mDNS.
Oct 28 11:38:48 Tower avahi-daemon[9683]: Registering new address record for f----identifier removed----c on vnet0.*.
Oct 28 11:38:58 Tower avahi-daemon[9683]: Interface vnet0.IPv6 no longer relevant for mDNS.
Oct 28 11:38:58 Tower avahi-daemon[9683]: Leaving mDNS multicast group on interface vnet0.IPv6 with address f----identifier removed----c.
Oct 28 11:38:58 Tower kernel: br0: port 2(vnet0) entered disabled state
Oct 28 11:38:58 Tower kernel: device vnet0 left promiscuous mode
Oct 28 11:38:58 Tower kernel: br0: port 2(vnet0) entered disabled state
Oct 28 11:38:58 Tower avahi-daemon[9683]: Withdrawing address record for f----identifier removed----c on vnet0.

 

The 1st line is from the VM being started.

The 5th line is from the VM being stopped.

Edited by nblain1
  • 2 weeks later...
1 hour ago, macmanluke said:

I have had issues setting up pihole (a separate problem), but now I have ended up with two pihole dockers that I can't delete (or start).

 

If I click remove it just spins and does nothing. Any ideas how to start clean?

This probably has nothing to do with this specific docker, so I may split it into its own topic. Go to Tools → Diagnostics and attach the complete diagnostics zip file to your NEXT post.


Thought I could get this sorted out myself, but it's become quite the hassle.

 

Coming over here from this pi-hole.net post.

 

Any help is much appreciated.

 

To sum up the most recent bits of it: I use a bonded connection, and that's the only way I am able to get the template command to go through. The WebUI only works for a few hours on and off (mostly off), my mobile devices can't get an internet connection, and I am finally setting out to resolve it once and for all.

 

When I use Bridge, with everything else kept as close to the template as possible (as suggested/"required"), I get this error.
 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='pihole-template' --net='bridge' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'DNS1'='8.8.8.8' -e 'DNS2'='8.8.4.4' -e 'TZ'='Europe/London' -e 'WEBPASSWORD'='admin' -e 'INTERFACE'='br0' -e 'ServerIP'='192.168.1.199' -e 'ServerIPv6'='' -e 'IPv6'='False' -e 'DNSMASQ_LISTENING'='all' -p '53:53/tcp' -p '53:53/udp' -p '67:67/udp' -p '81:80/tcp' -p '443:443/tcp' -v '/mnt/cache/appdata/pihole/pihole/':'/etc/pihole/':'rw' -v '/mnt/cache/appdata/pihole/dnsmasq.d/':'/etc/dnsmasq.d/':'rw' --cap-add=NET_ADMIN --dns 127.0.0.1 --dns 1.1.1.1 --restart=unless-stopped 'pihole/pihole:latest'

WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
7b8f86e3ec0969ce56595ef8271838e8a737e38d1bbd54ad55566f74eaa65a1e
/usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint pihole-template (a55246178777a4ba3d645c4a2bf038f99e74fe4f48bf10cd5f0bb01acea6f6b6): Error starting userland proxy: listen udp 0.0.0.0:67: bind: address already in use.

The command failed.

As far as I can tell there aren't any other services using these ports. And if there were, why wouldn't it error when using bond0 instead of Bridge?
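For anyone hitting the same "address already in use" error on port 67, here is a hedged sketch of one way to double-check the host (ss ships with iproute2 on Unraid; lsof -i UDP:67 is an alternative):

```shell
# Check whether anything on the host is already listening on UDP port 67
# (the DHCP port that the pihole template maps with -p '67:67/udp').
if ss -uln 2>/dev/null | grep -q ':67 '; then
    echo "port 67 is already in use on the host"
else
    echo "port 67 appears free"
fi
```

If something does show up, its process name can usually be seen with ss -ulnp run as root; a DHCP server (dnsmasq, a VM's libvirt network, or the router plugin) is the usual suspect.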

On 10/24/2019 at 4:24 PM, nblain1 said:

On Unraid 6.8.0 rc3

 

I've noticed that if I start my VMs that use br0 (Pihole uses the same bridge), Pihole becomes inaccessible. The web UI for Pihole comes back as soon as I shut down my VMs.

 

EDIT: Just updated to 6.8.0 rc4 and now Pihole and my VMs are running together and playing nicely.  Not sure what the problem was on rc3...

 

EDIT 2: Spoke too soon... it still seems to be happening...

So, I still experience this problem on every release from 6.8.0-rc3 to rc6. Not sure where to go from here... I feel like I've tried everything in my book of tricks. The only option I see now is running Pihole in a VM, which will require more resources.

 

The Pihole docker works perfectly as long as no VMs are using the br0 bridge. If I set them to virbr0, then Pihole will work with the VMs running, but I lose the ability to remotely connect to the VMs, as they are on a different IP address range.

 

If anyone has any ideas, I am more than willing to try them out! Thanks in advance.
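For anyone debugging the same br0/VM interaction, a hedged diagnostic sketch (this uses standard Linux sysfs paths, nothing Unraid-specific) is to watch which interfaces are attached to the bridge when a VM starts, and correlate the VM's vnet0 tap appearing with the Pihole outage:

```shell
# List the interfaces currently enslaved to the br0 bridge. When a VM starts,
# its vnet0 tap device should appear in this directory.
ls /sys/class/net/br0/brif 2>/dev/null || echo "br0 not present on this host"
```

Running this before and after starting a VM shows whether the bridge membership itself changes when the outage begins.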

On 11/20/2019 at 5:09 PM, TheInfamousWaffles said:

Would someone be so kind as to help me with my issue, please?

@TheInfamousWaffles with a quick look over, I see that you are pointing the docker's DNS at itself... I am not saying mine is configured correctly, but I have my DNS servers set to traditional servers (i.e. 1.1.1.1 and 1.0.0.1). Here is a screenshot of my config. For clarification: the Unraid IP ends in .254 and Pihole's in .253.

 

Also, I setup with the help of this video:

 

 

Untitled.png

Edited by nblain1
10 hours ago, nblain1 said:

@TheInfamousWaffles with a quick look over, I see that you are pointing the docker's DNS at itself... I am not saying mine is configured correctly, but I have my DNS servers set to traditional servers (i.e. 1.1.1.1 and 1.0.0.1). Here is a screenshot of my config. For clarification: the Unraid IP ends in .254 and Pihole's in .253.

That would make more sense if it is asking for the server IP of the tower and not pihole. However, that didn't change anything in my case: it still shows nginx using port 80 and the WebUI times out. :/


@TheInfamousWaffles I am not running a bonded connection, but I do run a bridge, and I understand that you can bridge a bonded connection to allow VMs and dockers to communicate. I would suggest enabling bridging in your network settings, then running the Pi-Hole Docker with Network Type: br0, a custom (unused, of course) IP address, and its default port of 80.

 

This is my setup, and I have no issues running Unraid's web UI on port 80 (server IP ending in .254) and Pi-Hole's UI on port 80 (Pi-Hole's IP ending in .253).
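As a rough sketch of that setup in docker run form (the IP address, password, paths, and container name below are illustrative placeholders, not anyone's actual config):

```shell
# Pi-hole on the br0 custom network with its own LAN IP. Because the container
# gets a dedicated address, its web UI on port 80 no longer collides with the
# Unraid GUI, so no -p port remapping is needed.
docker run -d --name=pihole \
  --net=br0 --ip=192.168.1.253 \
  -e TZ="Europe/London" \
  -e ServerIP="192.168.1.253" \
  -e WEBPASSWORD="changeme" \
  -v /mnt/cache/appdata/pihole/pihole/:/etc/pihole/ \
  -v /mnt/cache/appdata/pihole/dnsmasq.d/:/etc/dnsmasq.d/ \
  --cap-add=NET_ADMIN --restart=unless-stopped \
  pihole/pihole:latest
```

In the Unraid template this corresponds to Network Type: Custom: br0 plus a Fixed IP, as described above.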

Edited by nblain1
1 hour ago, nblain1 said:

@TheInfamousWaffles I am not running a bonded connection, but I do run a bridge, and I understand that you can bridge a bonded connection to allow VMs and dockers to communicate. I would suggest enabling bridging in your network settings, then running the Pi-Hole Docker with Network Type: br0, a custom (unused, of course) IP address, and its default port of 80.

Wow, spot on in somehow guessing Bridging wasn't enabled in my settings! Was it because I said I was using bond0?

 

WebUI is finally back up! Thank you @nblain1, I've been wasting too much time over an overlooked setting. 😅

Edited by TheInfamousWaffles
  • 4 weeks later...

My ISP has changed over to IPv6. I had PiHole on unRaid working as normal on IPv4, but I am finding that it's not working properly with IPv6.

 

Do I need to set an IPv6 address in the template? Or is there some information that I need to add to Network Settings in unRaid? I did change it to IPv4 + IPv6, but I find that it breaks my DNS (1.1.1.1 / 1.0.0.1) completely, whereas with IPv4 it was as simple as adding that DNS to the unRaid network settings and to PiHole itself.

 

How do I achieve this with IPv6?

On 11/23/2019 at 10:07 PM, hypergolic said:

Any idea why pihole is trying to bind to 0.0.0.0 when I am specifying a ServerIP?

 

image.thumb.png.ee3e52cbeef2afcf1a96b18a3133d996.png

You selected the "bridge" network, which is shared with your server's network. Hence port 80 (HTTP) is already in use by the GUI and not available to pi-hole.

You'll need to set a custom network to give pi-hole its own IP address.
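A hedged sketch of how to confirm this on the host before changing networks (ss comes with iproute2 on Unraid):

```shell
# In bridge mode the container shares the host's ports, so check whether
# TCP port 80 is already bound (normally by the Unraid web GUI's nginx).
if ss -tln 2>/dev/null | grep -q ':80 '; then
    echo "port 80 is already bound on the host"
else
    echo "port 80 appears free"
fi
```

If port 80 shows as bound, giving pi-hole its own IP on a custom network (or remapping the web UI port) avoids the conflict.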

5 hours ago, z0ki said:

My ISP has changed over to IPv6. I had PiHole on unRaid working as normal on IPv4, but I am finding that it's not working properly with IPv6.

 

Do I need to set an IPv6 address in the template? Or is there some information that I need to add to Network Settings in unRaid? I did change it to IPv4 + IPv6, but I find that it breaks my DNS (1.1.1.1 / 1.0.0.1) completely, whereas with IPv4 it was as simple as adding that DNS to the unRaid network settings and to PiHole itself.

 

How do I achieve this with IPv6?

Some general advice

1. In the network configuration you need to enable both IPv4 and IPv6 for the interface(s). This allows Docker to use IPv6 as well.

2. In the Docker configuration you need to define IPv6 custom network(s), including the IPv6 subnet and gateway.

3. In pi-hole you need to define the ServerIPv6 (= IPv6) DNS address.

4. In pi-hole you need to set the IPv6 upstream DNS servers.

5. You need to configure your router to hand out the appropriate IPv4 and IPv6 DNS addresses via DHCP (or set fixed DNS on your LAN clients).
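A hedged sketch of what step 2 can look like from the command line (all subnets, gateways, and the network name below are illustrative placeholders; substitute the prefixes your ISP/router actually delegates):

```shell
# Create an IPv6-enabled custom Docker network attached to br0, with one
# subnet/gateway pair per address family. The fd00: prefix here is a ULA
# placeholder; use your real delegated prefix instead.
docker network create -d macvlan --ipv6 \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  --subnet=fd00:1234::/64 --gateway=fd00:1234::1 \
  -o parent=br0 pihole_net
```

On Unraid the equivalent is usually done through Settings → Docker rather than the CLI, but the subnet/gateway fields map to the same flags.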


Hi Folks,

Unfortunately this container is driving me mental; I cannot get it to work at all.

 

Unraid Version: 6.8.0

Server: HP Gen 8 Microserver

I am using the following container options:

Network Type: Custom: br0

 

Fixed IP / Server IP: 10.100.100.11

docker ps shows that the container is stuck at starting:


b1eef528cf2c        pihole/pihole:latest           "/s6-init"               About a minute ago   Up About a minute (health: starting)

 

I cannot access the Admin Portal, and DNS resolution doesn't work:

 

[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
WARNING Misconfigured DNS in /etc/resolv.conf: Two DNS servers are recommended, 127.0.0.1 and any backup server
WARNING Misconfigured DNS in /etc/resolv.conf: Primary DNS should be 127.0.0.1 (found 127.0.0.11)

nameserver 127.0.0.11
options ndots:0
[i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u5

[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
::: Pre existing WEBPASSWORD found
Using default DNS servers: 8.8.8.8 & 8.8.4.4
DNSMasq binding to custom interface: br0
Added ENV to php:
"PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
"ServerIP" => "10.100.100.11",
"VIRTUAL_HOST" => "10.100.100.11",
Using IPv4
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://mirror1.malwaredomains.com/files/justdomains
http://sysctl.org/cameleon/hosts
https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt
https://hosts-file.net/ad_servers.txt
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Docker start setup complete
[i] Pi-hole blocking is enabled
[✗] DNS resolution is currently unavailable
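The trailing "DNS resolution is currently unavailable" line can be probed directly from the host. A hedged sketch, assuming the container is named pihole (adjust to your container name) and that dig is present inside the image:

```shell
# Ask the container's own FTL resolver to look up Pi-hole's built-in name.
# A SERVFAIL or timeout here points at FTL/upstream DNS rather than the web UI.
docker exec pihole dig +short pi.hole @127.0.0.1
```

If that query works but clients still can't resolve, the problem is more likely the custom network routing than Pi-hole itself.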

 

Port Mappings as follows:

10.100.100.11:443/TCP → 10.100.100.11:443
10.100.100.11:53/TCP → 10.100.100.11:53
10.100.100.11:53/UDP → 10.100.100.11:53
10.100.100.11:67/UDP → 10.100.100.11:67
10.100.100.11:80/TCP → 10.100.100.11:80

 

Docker Network:

IPv4 custom network on interface br0:

Subnet: 10.100.100.0/27 Gateway: 10.100.100.30

 

I have also tried setting DNS using ENV variables in the "Extra Variables" of the container, along with a number of other things like mounting /etc/resolv.conf and manually setting the DNS.

I am really at a loss as to why the container doesn't start. I've never had any issues previously, though granted, this is the first time I have attempted to use a custom interface.

Any help is much appreciated, as I'd like to keep my kids ad-free now that they're getting tablets from Santa!

 

Cheers!


Has anyone seen this error? I have been running pi-hole with no problems for months, and all of a sudden I am getting this, yet pi-hole still runs properly and is actively filtering. I have made no settings changes; the only change I can think of since I last checked is the official Unraid 6.8 update. Thoughts?

 

image.thumb.png.37f0954af1e53a2ddb5a9d3db348929d.png


I fixed the problem by using the Krusader docker and changing the "Group" permissions from View to View and Modify, and now things are all good. Strange, because I hadn't previously made any changes to the folders or the docker container, but everything looks good now.

Annotation 2019-12-31 120114.png

Edited by gtosnipey
