[Support] Bind9



Support thread for the Bind9 Docker image from the VRx repo.

Application: Bind9 - https://www.isc.org/bind/

Docker Hub: https://hub.docker.com/r/pwa666/bind9

GitHub: https://github.com/vrx-666/bind9

This image contains a WebUI based on Webmin to easily manage the DNS configuration (Bind9).
Note: Webmin is installed with only the minimum set of modules needed to manage Bind.
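For reference, running the image manually looks roughly like the sketch below. The Webmin port (10000) and the /etc/bind mount point are assumptions based on typical Webmin/Bind setups, not taken from the template, so verify them against the repo; only port 53 is fixed, since DNS clients always query it.

```shell
# Hypothetical invocation -- verify ports and paths against the Unraid template.
# DNS must be published on 53/tcp and 53/udp; Webmin commonly listens on 10000.
docker run -d --name bind9 \
  -p 53:53/tcp -p 53:53/udp \
  -p 10000:10000 \
  -v /mnt/user/appdata/bind9:/etc/bind \
  pwa666/bind9
```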

 

[ 2022.06.08 ]

Fixed bugs with UnraidOS 6.10.x

 

[ 2022.06.24 ]

Healthcheck added.
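For context, DNS-container healthchecks typically probe the resolver itself. The Dockerfile-style sketch below is an assumption about what such a check looks like, not a copy of the one shipped in this image (see the GitHub repo for that):

```dockerfile
# Hypothetical healthcheck: query the local resolver and fail if it does
# not answer. The intervals and the exact probe command are assumptions.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD dig +time=2 +tries=1 @127.0.0.1 localhost >/dev/null || exit 1
```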

 


- Some improvements in the Webmin Bind module.
RNDC now works from the WebUI, so zones can be updated easily through the WebUI without restarting Bind or the whole container.
Just click "Apply Zone" in the top right corner after editing a zone.
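For anyone scripting this instead of clicking: the "Apply Zone" button most likely maps to an rndc reload, which is what lets named pick up zone changes without a restart. Assuming rndc is configured with its key inside the container, the equivalent commands are:

```shell
# Reload a single zone after editing it; "example.lan" is a placeholder
# zone name. Requires a working rndc key (usually /etc/bind/rndc.key).
rndc reload example.lan

# Reload all zones, or re-read named.conf to pick up newly added zones:
rndc reload
rndc reconfig
```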

- Changed the default network mode to bridge.

- Default WebUI password updated (dificult -> difficult).


*The image is lightweight because it is built from Alpine Linux; the most popular bind+webmin images on Docker Hub are based on Ubuntu (4-5 times heavier).

On 7/30/2021 at 6:22 AM, VRx said:

@Owner did you get any error message?
If you got an error like the one attached below, it is not a problem with the image, but a known problem where pointing to /mnt/user sometimes crashes. Let me know if your message is different.

error.PNG

Yes, this is it. It seems, though, that the system keeps the port bound after removal.

Just now, Owner said:

Yes, this is it. It seems, though, that the system keeps the port bound after removal.

Can I use /mnt/disk/appdata/bind9 instead? As far as I know, the reason for the error was that port 53 had to be changed to 5353, but upon removal it doesn't release the port binding. I will look into this more, into how to clear the port 5353 binding error.

5 hours ago, Owner said:

Can I use /mnt/disk/appdata/bind9 instead?

Yes, you can. This change should resolve the problem.
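In container terms that change is just the volume mapping: point the appdata mount at the direct disk path instead of the /mnt/user fuse path. The container-side path /etc/bind is an assumption here; check the template for the real one.

```shell
# Before (fuse share, known to sometimes crash):
#   -v /mnt/user/appdata/bind9:/etc/bind
# After (direct disk path, as suggested above):
docker run -d --name bind9 \
  -p 53:53/tcp -p 53:53/udp \
  -v /mnt/disk/appdata/bind9:/etc/bind \
  pwa666/bind9
```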
 

5 hours ago, Owner said:

As far as I know, the reason for the error was that port 53 had to be changed to 5353, but upon removal it doesn't release the port binding.

You cannot change the port to 5353, because the DNS service uses port 53, and all operating systems query this port by default to resolve hostnames.
Maybe you have another service already using this port, for example a pihole container.

Can you please post the error message and some logs from this container, or send me a PM with that information if you prefer?

There is a fix in progress; while searching for the solution, I found some other bugs.
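To find out whether something else already holds port 53 on the host, something along these lines usually works (run on the host, not in the container; tool availability varies by system):

```shell
# Show listeners on port 53 (tcp and udp) and the owning process:
ss -lntup | grep ':53 '
# Older systems may only have netstat:
netstat -lntup | grep ':53 '
# And list any containers already publishing port 53:
docker ps --filter "publish=53"
```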

On 8/1/2021 at 10:46 PM, VRx said:

Yes, you can. This change should resolve the problem.
 

You cannot change the port to 5353, because the DNS service uses port 53, and all operating systems query this port by default to resolve hostnames.
Maybe you have another service already using this port, for example a pihole container.

Can you please post the error message and some logs from this container, or send me a PM with that information if you prefer?

There is a fix in progress; while searching for the solution, I found some other bugs.

I will test it out; port 5353 wasn't the issue. It was starting up in general. Will give it a go and post some results.

On 8/2/2021 at 2:17 PM, VRx said:

Update:
- Fixed env variables (setting admin password should now work properly)
- Fixed bind starting script (starting/restarting bind from webUI)

The reason behind 5353 is that I use a DNS server with OPNsense. No Pi-hole. OPNsense does that kind of blocking for convenience.

On 8/1/2021 at 10:46 PM, VRx said:

Yes, you can. This change should resolve the problem.
 

You cannot change the port to 5353, because the DNS service uses port 53, and all operating systems query this port by default to resolve hostnames.
Maybe you have another service already using this port, for example a pihole container.

Can you please post the error message and some logs from this container, or send me a PM with that information if you prefer?

There is a fix in progress; while searching for the solution, I found some other bugs.

I will try to figure out the unbinding issue. Not sure what causes it. Still looking; I've been away from the forums.

On 8/17/2021 at 4:36 AM, VRx said:

From now on there will be a weekly system update, every Sunday.
But any changes to the application will, as always, be reported in this thread.

Awesome, thanks. I will look at that, but I don't think I should need a custom bridge if my first DNS sits on 192.168.75.1 and the other is on the Unraid box at 192.168.75.3. But no doubt, thanks for fixing those bugs. Much appreciated.

2021-11-19 15:01:19,854 INFO supervisord started with pid 17
2021-11-19 15:01:20,857 INFO spawned: 'bind' with pid 39
2021-11-19 15:01:20,909 INFO exited: bind (exit status 1; not expected)
2021-11-19 15:01:21,912 INFO spawned: 'bind' with pid 73
2021-11-19 15:01:21,943 INFO exited: bind (exit status 1; not expected)
2021-11-19 15:01:23,946 INFO spawned: 'bind' with pid 107
2021-11-19 15:01:23,977 INFO exited: bind (exit status 1; not expected)
2021-11-19 15:01:26,981 INFO spawned: 'bind' with pid 141
2021-11-19 15:01:27,012 INFO exited: bind (exit status 1; not expected)
2021-11-19 15:01:28,013 INFO gave up: bind entered FATAL state, too many start retries too quickly

 

Is a default configuration (named.conf) needed before boot?
It seems something is wrong with the entrypoint script/binary.
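For what it's worth, named exits with status 1 when it cannot load a valid named.conf, which would produce exactly the supervisord loop above. A minimal caching-only configuration looks roughly like the sketch below; the directory path and ACL values are common defaults, not necessarily what this image uses.

```conf
// Minimal named.conf sketch -- paths and networks are assumptions.
options {
    directory "/var/cache/bind";
    listen-on { any; };
    allow-query { localhost; 192.168.0.0/16; };
    recursion yes;
};
```

If a config file exists but is invalid, `named-checkconf /etc/bind/named.conf` will name the offending line (assuming that is where the image keeps its config).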

Edited by Kanashii
On 12/11/2021 at 9:34 PM, NeySlim said:

Hi, how can I make Bind listen on IPv6?

I don't know how to remove the -4 option or add -6, besides editing the supervisor conf in the container.

Many thanks.

 

Check for a container update and update it.
Then add a new ENV variable:

IPV6=enable

It should work from now on.
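To confirm the change took effect, you can check inside the container that named now binds v6 sockets too; the address placeholder below is just illustrative:

```shell
# Inside the container: port 53 should now show v6 listeners as well.
netstat -lntu | grep ':53'
# From a v6-capable client, query the server over IPv6:
dig -6 @<container-v6-address> localhost
```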


After the update I get this; I reinstalled and cleared configs, but it's still happening.

2022-05-31 04:09:23,794 INFO Set uid to user 0 succeeded
2022-05-31 04:09:23,798 INFO supervisord started with pid 1
2022-05-31 04:09:24,802 INFO spawned: 'bind' with pid 38
2022-05-31 04:09:25,282 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:09:26,287 INFO spawned: 'bind' with pid 152
2022-05-31 04:09:26,702 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:09:28,708 INFO spawned: 'bind' with pid 266
2022-05-31 04:09:29,156 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:09:32,163 INFO spawned: 'bind' with pid 380
2022-05-31 04:09:32,540 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:09:33,541 INFO gave up: bind entered FATAL state, too many start retries too quickly

If I go into the docker console and run "supervisord -n -c /etc/supervisord.conf" I get a similar result:

2022-05-31 04:18:20,996 INFO Set uid to user 0 succeeded
2022-05-31 04:18:21,000 INFO supervisord started with pid 536
2022-05-31 04:18:22,004 INFO spawned: 'bind' with pid 537
2022-05-31 04:18:22,389 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:18:23,393 INFO spawned: 'bind' with pid 651
2022-05-31 04:18:23,827 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:18:25,833 INFO spawned: 'bind' with pid 765
2022-05-31 04:18:26,197 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:18:29,204 INFO spawned: 'bind' with pid 879
2022-05-31 04:18:29,671 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:18:30,672 INFO gave up: bind entered FATAL state, too many start retries too quickly

And if I run it without the config file:
 

/usr/lib/python3.9/site-packages/supervisor/options.py:474: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
  self.warnings.warn(
2022-05-31 04:20:30,808 INFO Set uid to user 0 succeeded
2022-05-31 04:20:30,810 INFO supervisord started with pid 996
2022-05-31 04:20:31,815 INFO spawned: 'bind' with pid 997
2022-05-31 04:20:32,276 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:20:33,281 INFO spawned: 'bind' with pid 1111
2022-05-31 04:20:33,673 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:20:35,679 INFO spawned: 'bind' with pid 1225
2022-05-31 04:20:36,068 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:20:39,075 INFO spawned: 'bind' with pid 1339
2022-05-31 04:20:39,490 INFO exited: bind (exit status 1; not expected)
2022-05-31 04:20:40,491 INFO gave up: bind entered FATAL state, too many start retries too quickly



It seems that it's not my config; it's the image's fault.
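One way to get past supervisord's generic "exit status 1" is to run named in the foreground yourself; it prints the real startup error (bad config, missing file, port already in use) to stderr. The container name and config path below are assumptions:

```shell
# Open a shell in the container:
docker exec -it bind9 sh

# Inside it, run named in the foreground to see why it dies:
named -g -c /etc/bind/named.conf

# Or validate just the configuration:
named-checkconf /etc/bind/named.conf
```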

Edit:
I've attempted to run an older image, pwa666/bind9:v1.3.40, but the issue still persists. Something might have glitched on the server and need a restart; I will do so tonight, as I can't restart it now.

Edited by rwysocki_bones
Added more logs, attempted to run another version image
