This document outlines how we set up and maintain DNS data.
This design has been in production use at some very large sites with complex DNS requirements since the mid '90s.
At the other end of the scale, it works fine for small sites like Crufty.NET, and just works with split-view and DNSSEC.
We use an SCM system (cvs, git, hg etc.) and make to ensure reliable operation while allowing for multiple hostmasters and automated tools.
I currently use Git for my DNS data, but Mercurial works just as well. As far as I know, some of those original deployments are still using CVS.
To keep things simple, the scheme relies on rigid rules for the naming of zone files etc., but thanks to a simple script, converting from your old DNS setup to this method is quite painless.
The advantages of this setup are:
An SCM allows you to keep an audit trail of changes. This is very important, because after a DNS outage caused by human error you will be able to work out what went wrong, and arrange to prevent it happening again via the regression suite.
More importantly, it ensures that the directory the production named loads from is never used for editing. Editing is all done in a separate tree (or trees), to avoid the risk of a partially edited zone file being loaded.
The setup described here can utilize a number of different SCMs. The only real requirement is the ability to configure pre-commit checks.
The original distribution used the venerable CVS, which while it lacks many features compared to other more recent SCMs is adequate for this purpose.
Git and Mercurial (hg) are also good choices with easily configured pre-commit hooks. However, unlike CVS, the hook needs to be enabled for each clone of the repository that commits will be made from. This is easily done the first time make is run.
Git and Mercurial have an advantage in being distributed SCMs; the primary repository need not exist on any of the name servers, and indeed the clones on the various name servers provide redundancy should the main repository be lost.
The setup for Git works best with a bare repository acting as the central repo that edits are pushed to and the live data is pulled from.
The hostmaster might:
$ cd $HOME
$ git clone /share/gits/named.git named
$ cd ~/named
$ make
to obtain a clone of the repo to work with. That first make command will, among other things, set up .git/hooks/pre-commit.
As we make changes:
$ vi hosts/crufty.net.db        # make changes as desired
$ git add hosts/crufty.net.db   # stage changed files
$ git commit -m"log comment"    # commit if regression suite happy
Unlike with CVS (or SVN), we can do multiple commits without the live named picking up any of them until we:
$ git push
With Mercurial, usage is virtually identical to Git, the most noticeable difference being that there is no equivalent of the git add step.
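For example, the equivalent Mercurial session might look like the following (a sketch, assuming a clone of an hg repository set up along the same lines):

$ vi hosts/crufty.net.db        # make changes as desired
$ hg commit -m"log comment"     # commit if regression suite happy
$ hg push                       # let the live named pick up the changes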
Make is a tool used to keep files up to date with their dependencies. We use it to ensure that the zone files loaded by named are up to date and that the zone file serial numbers are updated when any of the data within the zone is.
We provide a Makefile for bmake (BSD make) and a GNUmakefile for gmake. Both of these provide setup and then include dnsmagic.mk where the actual logic is kept - it has to avoid any tricky syntax so as to remain compatible with both versions of make.
Both bmake and gmake are freely available, and run on just about any OS, so we limit our support to those.
As the maintainer of bmake for the past 30 years, that's my personal preference ;-)
To achieve our goal, the zone files referenced by named.conf or more specifically primary.zones contain nothing but the SOA record (where the serial number lives) and an appropriate $INCLUDE directive.
Since make is most conveniently driven by filename suffixes, we use the convention that the SOA file has an extension of .soa and the included zone data file an extension of .db.
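For example, under this convention hosts/crufty.net.soa would contain only the SOA record and $INCLUDE directives, with all the host data living in .db files (a sketch; the serial and timer values here are illustrative):

@ IN SOA ns.crufty.net. hostmaster.crufty.net. (
        2023061100      ; Serial
        7200            ; Refresh
        1800            ; Retry
        3600000         ; Expire
        14400 )         ; Minimum
$INCLUDE ns.list
$INCLUDE hosts/crufty.net.db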
An example always helps:
# make depend
dnsdeps -N named.conf
# touch ns.list
# make
updsoa hosts/crufty.net.soa
updsoa rev/203.12.250.soa
bouncedns
In the above, we ran make depend which uses dnsdeps to ensure that all the dependencies of all the zone files referenced by primary.zones and any others included by named.conf are recorded. We then simply touch a file that some zones are dependent on and run make, which runs updsoa to update the serial number of zones that were dependent on ns.list.
# rm hosts/crufty.net.soa
# make
updsoa hosts/crufty.net.soa
bouncedns
In the above example, we remove one of the .soa files - to simulate an accident or perhaps a new .db file. When we then run make, the .soa file is [re]created automagically.
Originally all the tools involved were shell scripts; many still are. The regression suite is all shell scripts.
For better performance and debuggability, some tools were also provided as perl scripts. These did the job well, but newer versions of Perl are making it harder to keep them working, and frankly Perl is a write-only language, which makes it difficult for others to understand. After 25 years, even I have to think about what is going on ;-)
Recently the important tools were re-implemented in Python.
This was in part prompted by a desire to add IPv6 support, but the fact that Python tends to be much cleaner and more readable code was also an incentive.
This was also an opportunity to remove support for bind-4 and other ancient cruft.
I was able to leverage a lot of existing utilities, which made it trivial to support a common framework for all the python scripts. Each of which will read ${progdir}/dns.cf and ./.dns.cf if they exist (normally I use rc as the extension for such files, but that's already taken for the shell scripts), as well as ${progdir}/${progname}.cf and ./.${progname}.cf if they exist.
The module that supports that leverages another which provides for a powerful set of string variable manipulation operations - modeled after bmake.
The python scripts thus expect to get most configuration from one of the above config files, but still support enough of the old command line options that they can be a drop-in replacement for the perl scripts.
The end result is much cleaner; e.g. the line counts of dnsdeps.py and updsoa.py are half those of the perl versions, and updrev.py is a third smaller than the perl version while doing more.
The makefile runs dnsdeps whenever the named.conf or primary.zones files are updated. The purpose is to ensure that make knows about all the files that a zone file depends on. The .depend file produced looks like:
.zones: \
        hosts/crufty.net.soa \
        rev/192.168.42.soa \
        rev/192.168.66.soa \
        rev/192.168.1.soa \
        rev/162.194.94.192.soa \
        external/crufty.net.soa

named.conf: \
        /etc/rndc.key \
        named.local \
        primary.zones \
        dynamic.zones \
        named.ca \
        external.zones

/etc/rndc.key:

named.local:

primary.zones: \
        hosts/crufty.net.soa \
        rev/192.168.42.soa \
        rev/192.168.66.soa \
        rev/192.168.1.soa \
        rev/162.194.94.192.soa

hosts/crufty.net.soa: \
        ns.list \
        hosts/crufty.net.db \
        hosts/hosts.db

ns.list:

hosts/crufty.net.db: \
        hosts/hosts.db

hosts/hosts.db:

rev/192.168.42.soa: \
        ns.list \
        rev/192.168.42.db

etc.
The .zones target is key - its dependencies are all the .soa files and the makefile will populate the file .zones with the names of any that are out-of-date. This allows us to run updsoa and bouncedns only once.
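A minimal sketch of the idea (not the real dnsmagic.mk, which is rather more involved; SOAS, standing for the list of all .soa files, is an assumed variable name):

# sketch only - not the real dnsmagic.mk
.zones: ${SOAS}
	@echo $? > .zones	# $? expands to just the out-of-date .soa files
	@updsoa `cat .zones`	# bump their serial numbers in one pass
	@bouncedns		# flag that named needs a reload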
If we were not trying to support gmake as well as bmake, the makefile could be a lot smarter, but this arrangement gets the job done.
Small sites can easily keep their in-addr.arpa zones in sync with the rest of their DNS data. For large networks, or just for bootstrapping, updrev can be used to build in-addr .db files for all the A records found in the zone files. All it takes is:
make revs
With the more recent python version we can also produce ip6.arpa zones from AAAA records.
We use getdata to extract data from a named_dump.db into a format which is much easier to parse. updrev uses that data such that existing PTR records are maintained (provided a matching A record exists), and new ones derived from A records are added.
Thus, updrev can be used to initially generate the reverse maps, and a human can then edit them to override the tool's choices; such overrides will be persistent.
The tool is reasonably efficient; 20 years ago the perl version could generate or update reverse maps at about 10,000 A records per minute (measured on a Sparc Classic - that was a large network ;-).
The Python version should be faster still, and provide better support for CIDR address allocations as described in rfc2317, as well as IPv6.
For example I have the following in .dns.cf which all the python scripts read (among other things):
# only CIDR nets matter
nets [=] 162.194.94.192/29
([=] means nets is a list), which helps set up for creating the following as per rfc2317:
zone "192/29.94.194.162.in-addr.arpa" { type primary; file "rev/162.194.94.192.soa"; };
The parent domain would need to provide CNAMEs (again per rfc2317):
$ORIGIN 94.194.162.in-addr.arpa.
192/29  IN      NS      ns.crufty.net.
193     IN      CNAME   193.192/29
194     IN      CNAME   194.192/29
and so on. I've yet to learn if my ISP will support that.
If the parent domain wants to use - rather than / to separate subnet and bits, you can use:
nets [=] 162.194.94.192-29
the zone file will be set up as desired.
Note that updrev only supports the DNS arrangement described in this document.
On a nameserver for a large network, it is not practical to reload/restart named every time a change is made. Even on a small nameserver, we want to reload named when any .soa file is updated but not as each .soa file is updated.
For this reason the bouncedns command above simply touches a flag file to indicate that a DNS restart is needed. The same command is then run regularly from cron such that if the flag file exists, named is restarted.
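The essential logic is something like the following sketch (the flag file path and option names here are assumptions, not the real script's interface):

#!/bin/sh
# bouncedns (sketch)
flag=/var/named/.bounce_dns

case "$1" in
reload) touch "$flag"; exit 0;;         # make: note that a reload is needed
-f)     touch "$flag";;                 # force: action the flag right now
esac

# from cron (or -f above): restart named if the flag is set
if [ -f "$flag" ]; then
        rm -f "$flag"
        rndc reload
fi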
Note that it is worthwhile coordinating the cron jobs on secondary servers such that the bouncedns jobs do not all run at the same time.
To update the tree that named loads from we have cron run dnsmagic on a regular basis. This script (a minimal sketch follows the list of steps):
runs upddns which does an scm-update in named's data directory, and if anything has been updated runs make depend. It then runs make to ensure serial numbers are up to date and to set the bouncedns flag if needed.
Finally, if a .rdistrc file exists, upddns will source it - this is handy for distributing the secondary.zones file to secondary servers when needed. You can make it use rsync, rdist or anything you like so long as no human intervention is needed.
runs bouncedns so as to action the bounce flag if present.
runs /etc/rc_d/named check to ensure that named is running - if for some reason it failed to restart.
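Putting those steps together, a minimal sketch of dnsmagic (the paths are assumptions):

#!/bin/sh
# dnsmagic (sketch) - run regularly from cron
cd /var/named || exit 1
upddns                  # scm update; runs make depend and make as needed
bouncedns               # action the bounce flag if present
/etc/rc_d/named check   # make sure named is actually running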
The astute reader will note that doing an automated SCM update in the live tree risks updating that tree between two related commits, possibly introducing just the sort of problem we are trying to avoid. This is really only an issue with CVS; with Git or Mercurial hostmasters should only push complete change sets.
For sites still using CVS: if a file named .noscm is present, the SCM update step is skipped. As long as administrators are aware of the issue, the .noscm file can be removed and automated updates allowed. When an extensive set of changes is to be performed, .noscm should be created in the live tree to ensure no automated updates occur until the commits are complete.
Truly rigid sites might only allow updates of the live tree to be done manually and under change management.
For many sites, the cronjob modules included with DNS Magic should prove quite useful.
The original installation instructions assumed you are using our configs tool. You can download a suitable archive of DNSMagic for unpacking within the /configs tree from https://www.crufty.net/ftp/pub/unix/
A new release of DNSMagic is now available from https://www.crufty.net/ftp/pub/sjg/DNSMagic.tar.gz
This version is much simplified in that it only contains the Python versions of key tools, which are simpler and more consistent to configure.
Just unpack the archive somewhere handy and see the README.rst for instructions.
Setup is quite simple thanks to dns_convert.sh. An example will probably suffice...
$ mkdir /tmp/named
$ cd /tmp/named
$ dns_convert.sh
$ ls
makefile        hosts/          mx/             ns.list
db.auth         named.conf      primary.zones   named.ca
rev/            secondary/
If using CVS:
$ cvs import -m"original data" named NAMED NAMED_0
$ su
# cd /var
# mv named named.old
# cvs checkout named
...
If using Git:
$ cd named
$ git init
$ git add --all
$ git commit -m"original data"
$ git clone --bare . /share/gits/named.git
$ su
# cd /var
# mv named named.old
# git clone /share/gits/named.git named
...
Then:
# cd named
# make
dnsdeps -N named.conf
updsoa hosts/crufty.net.soa
bouncedns
...
# cd /etc
# mv named.conf named.conf.old
# ln -s /var/named/named.conf .
# /etc/rc_d/bouncedns -f
Stopping named
Restarting named
# exit
$ cd $HOME
$ cvs checkout named
Thereafter, changes you make in ~/named can be committed to the repository, and simply running upddns in /var/named will sort it out.
A corrupted primary DNS zone can bring a company to its knees. For this reason, regression testing is a must for all but trivial setups. Even for simple setups where changes are made rarely, the regression suite is very handy.
The basic idea is to run named in test mode, and check that it can load the uncommitted configuration without complaint.
We can also get it to dump its database - which we convert to a more useful format using getdata, to allow other checks to be performed.
As noted earlier, we rely on the SCM's pre-commit hooks to ensure our regression suite is run.
The setup for these is trivial: since everything happens within the context of the repo you are committing to, we make the pre-commit hook simply run make regress.
The pre-commit.sh script has instructions on how to install it. The installation is done automatically when make is first run.
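The essential content of the hook is tiny; for Git it amounts to something like (a sketch - pre-commit.sh itself does a little more):

#!/bin/sh
# .git/hooks/pre-commit
# refuse the commit unless the regression suite is happy
exec make regress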
We also have the advantage with these of being able to make edits on a machine other than the one where the main repository resides.
CVS makes it simple to enforce regression testing before changes can be committed to the repository.
Simply add a line like:
^named/ /usr/local/share/dns/regress
to $CVSROOT/CVSROOT/commitinfo, and that command will be run whenever a commit is made to $CVSROOT/named.

Most systems support starting named with an alternate port and bootfile. This allows named to be started and given a chance to verify its input, without interfering with normal DNS service.
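For example, with BIND 9 something like the following starts named in the foreground on an alternate port with a test config (the port number and config file name here are arbitrary):

named -p 5300 -c named.conf.test -g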
Note that if a large number of files have been updated, CVS may fail to invoke the regression suite due to too many arguments, or rather too long a command line. This then causes the commit to fail. The only workaround is to commit the files in several batches. The exact number of files which is too many is system dependent.
An alternative is to modify CVS such that the pre-commit filter is fed its args via stdin rather than the command line. We have a patch which does this if the filter command begins with xargs or its full path. For sites with more than 200 in-addr zone files this is a good option - or just use Git or Mercurial.
dns/regress is a symlink to rc.sh, so will look for the directory dns/regress.d and perform all the checks found there (that start with an S, see rc.sh(8) for details). If all of the checks pass, then the commit proceeds.
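For example, given the modules described below, a typical regress.d might contain:

$ ls /usr/local/share/dns/regress.d
S10regress.sh   S20checklog     S20chkorigin    S40getdb
S60local.sh     S70chkcvs       S90cleanup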
The basic modules are (most of these do nothing if NO_NAMED is set in the environment - which only matters for sites using CVS):
dns/regress.d/S10regress.sh
See regress.sh(1) for details. It sets up the environment, and if this is the first call for the current cvs commit, it starts named on a different port, with a trimmed named.conf (produced by dns/Makefile) that does not contain any secondary zones. The named process is killed when dns/regress terminates.
If using CVS: for subsequent calls by the same cvs process, we skip the above by setting NO_NAMED (which subsequent tests check), and if the original tests failed we bail out immediately.
Since we rely on scanning the syslog output from named, we take great pains to verify that syslog is actually working before starting. Syslog can fail to log due to lack of space or simply due to bugs (at least one major UNIX vendor has a very unreliable syslogd).
dns/regress.d/S20checklog
This module simply checks the syslog output from named for problems. It is deliberately pedantic, but that's what we want for regression testing. If it sees anything it is looking for, the game is over.
Note that updating named to newer versions may introduce new warnings - often these are benign, in which case they need to be filtered.
dns/regress.d/S20chkorigin
With the DNS setup we are advocating, there is no need for $ORIGIN records in the zone files. Used incorrectly they can cause data to disappear mysteriously (mysterious to the victim anyway). This module complains bitterly if it finds any $ORIGIN records and suggests an alternative.
dns/regress.d/S40getdb
This module causes named to dump its cache to named_dump.db and then runs getdata, which produces a format easily searchable using grep:
SOA crufty.net ns.crufty.net hostmaster@crufty.net
SOA 250.12.203.in-addr.arpa ns.crufty.net
PTR 1.250.12.203.in-addr.arpa gate.crufty.net
PTR 130.250.12.203.in-addr.arpa gate.crufty.net
NS crufty.net ns.crufty.net
MX crufty.net gate.crufty.net 100
A ns.crufty.net 203.12.250.1
A gate.crufty.net 203.12.250.1
A gate.crufty.net 203.12.250.130

This saves us having to support a DNS client which can query named on a non-standard port. It can be omitted if no subsequent tests need to look at the data.
dns/regress.d/local.sh
This module looks for a regress.d directory within the tree being committed and if found runs the tests therein. This is a simple means for providing tests specific to a portion of your DNS data.
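Such a test can be very simple; for example (a sketch - the name of getdata's output file is an assumption):

#!/bin/sh
# regress.d/S50chkcritical (sketch)
# make sure a record we depend on has not vanished
grep -q '^A ns.crufty.net ' getdata.out || {
        echo "ERROR: no A record for ns.crufty.net" >&2
        exit 1
}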
dns/regress.d/chkwildmx
Wildcard MXs are evil. The only excuse for using them is in an external DNS which basically only provides some MX records. Note that this module is not run by default; link it to, say, dns/regress.d/S45chkwildmx, or into named/regress.d, as it needs S40getdb to have run first. It simply checks that there is at least one wildcard MX record for each domain in $WILD_MX; if not, it complains.
dns/regress.d/S70chkcvs
This module runs an SCM update or status command (depending on the SCM in use) to see which files have not been added or committed to the SCM. It then runs make .depend to get the list of files that named will need when it reloads. If any of the needed files have not been added to the SCM, it generates an error. If any needed files have been added but not yet committed, it issues a warning to that effect. The goal is to avoid committing files that rely on others which have not been committed and thus will not be available to the live named.
dns/regress.d/S90cleanup
Just as the name implies.
The simple process of feeding the DNS config into named will pick up the majority of errors. Sites with complex requirements may well find it necessary to add specific tests. Note that the numbering above is quite sparse so it is simple to instantiate new tests.
As mentioned above, if the variable NO_NAMED is set in the environment, then the above tests do very little. Presumably other tests will check the validity of the data in this case. Note that if a group of changes are to be committed individually, then loading up named each time is overkill. This is the main reason for the variable NO_NAMED: it is set by regress.sh if it detects that it is not the first child of a CVS process and that the original tests did not fail.
If the variable FORCE_COMMIT is set in the environment, then dns/regress.d/regress.sh terminates dns/regress immediately and no checking is done. Obviously, this should be used with caution.
This example was run with BIND 9.5, which generally stops after the first error, so quite a few iterations are needed.
named.conf:
include "/etc/rndc.key"; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; zone "127.in-addr.arpa" { type primary; file "named.local"; }; include "primary.zones";
primary.zones:
zone "test.it" { type primary; file "hosts/test.soa"; };
hosts/test.soa:
@ IN SOA ns.crufty.net. hostmaster.crufty.net. (
        1.2             ; Last changed by - sjg
        7200            ; Refresh 2 hour
        1800            ; Retry 1/2 hour
        3600000         ; Expire 1000 hours
        14400 )         ; Minimum
$INCLUDE n.list
$INCLUDE hosts/test.db
hosts/test.db:
cool    IN      A       192.168.168.42
        IN      MX      100 cool
foo     IN      A       192.168.168.1
        IN      A       192.168.168.2
        IN      A       192.168.168.
        IN      MX      foo
fool    IN      CNAME   foo
        IN      MX      foo
A first run through regress produces:
regress: checking start
regress: making sure dependencies are up to date
dnsdeps -N named.conf
dnsdeps: cannot open: n.list
make: ignoring stale .depend for n.list
updsoa hosts/test.soa
bouncedns reload
regress: /bin/sh /share/dns/regress.d/S20checklog start
dns_primary_load: hosts/test.soa:11: n.list: file not found
zone test.it/IN: loading from primary file hosts/test.soa failed: file not found
Since BIND-9 does not support dotted serial numbers, updsoa converted it. After fixing the other errors:
;; DO NOT EDIT THIS FILE it is maintained by magic
;; see sjg for questions...
;;
$TTL 14400
@ IN SOA ns.crufty.net. hostmaster.crufty.net. (
        2010072100      ; Last changed by - sjg
        7200            ; Refresh 2 hour
        1800            ; Retry 1/2 hour
        3600000         ; Expire 1000 hours
        14400 )         ; Minimum
$INCLUDE ns.list
$INCLUDE hosts/test.db
Now regress says:
regress: /bin/sh /share/dns/regress.d/S20checklog start
zone test.it/IN: loading from primary file hosts/test.soa failed: bad dotted quad
Ok, fixed that...
hosts/test.db:
cool    IN      A       192.168.168.42
        IN      MX      100 cool
foo     IN      A       192.168.168.1
        IN      A       192.168.168.2
        IN      A       192.168.168.3
        IN      MX      foo
fool    IN      CNAME   foo
        IN      MX      foo
and we get:
regress: /bin/sh /share/dns/regress.d/S20checklog start
dns_rdata_fromtext: hosts/test.db:11: near 'foo': not a valid number
zone test.it/IN: loading from primary file hosts/test.soa failed: not a valid number
Fix the MX records:
cool    IN      A       192.168.168.42
        IN      MX      100 cool
foo     IN      A       192.168.168.1
        IN      A       192.168.168.2
        IN      A       192.168.168.3
        IN      MX      10 foo
fool    IN      CNAME   foo
        IN      MX      10 foo
and, one last item:
regress: /bin/sh /share/dns/regress.d/S20checklog start
zone test.it/IN: loading from primary file hosts/test.soa failed: CNAME and other data
Remove the offending line and BIND (and hence regress) is happy:
regress: checking start
regress: making sure dependencies are up to date
dnsdeps -N named.conf
regress: /bin/sh /share/dns/regress.d/S20checklog start
regress: /bin/sh /share/dns/regress.d/S20chkorigin start
regress: /bin/sh /share/dns/regress.d/S40getdb start
regress: . /share/dns/regress.d/S60local.sh
regress: /bin/sh /share/dns/regress.d/S70chkcvs start
regress: /bin/sh /share/dns/regress.d/S90cleanup start
As of late 2009, pretty well all Internet sites running BIND should be using 9.5 or later. We are thus removing support for earlier versions.
We put the following into named.conf:
logging {
        // we want to know about all problems
        // so that the regression suite will pick them up
        // we only need this on the primary.
        category cname { default_syslog; };
        category lame-servers { default_syslog; };
        category insist { default_syslog; };
        // we may also want some of these
        category xfer-out { default_syslog; };
        category statistics { default_syslog; };
        category update { default_syslog; };
        category security { default_syslog; };
        category os { default_syslog; };
        category notify { default_syslog; };
        category response-checks { default_syslog; };
        category maintenance { default_syslog; };
};
BIND-9 is a complete re-write of BIND and is incompatible with earlier versions in several ways.
Cannot listen on port 0
BIND-8 allowed us to set the listen port to 0 (which gave us a random high numbered port) when running the regression suite; this is not allowed with BIND-9, so we have to revert to picking a port and hoping it is unused. This is far from ideal.
Must use rndc for dumping
To a large extent BIND-9 abandons use of signals for controlling named. So we have to detect BIND-9 and use rndc instead for many operations. We use rndc dumpdb -all, and rndc blocks until the dump is complete. So this is actually a big improvement.
BIND-9 defaults to wanting to create a session key for dynamic DNS in /var/run, which causes problems for regress. So we add the following to Makefile.inc:
CONF_TEST_SED+= -e 's,pid-file.*,session-keyfile "./s.key";,'
Secondary servers are a must. The setup is much simpler than for the primary. Apart from a copy of named.conf.sec as produced by the makefiles, secondaries need a hints file for the root zone (named.ca) and for localhost (named.local).
I have a list in my configs tree for setting up a secondary server:
# /configs/update_file -list dnsmagic-named-secondary.list
Initializing
Using: CONFIGS=/configs, SITE=NET/CRUFTY:local, DEST_HOST=bbb
        DEST_HOSTNAME=bbb.crufty.net
        TMP=/tmp/config.cqD35fmX, MD5_LIST=/tmp/config.cqD35fmX/MD5_LIST,
        PATCHES=, DESTDIR=, DIR_LIST=FreeBSD NET/CRUFTY NET local /configs
config: . /configs/config.d/S09perl_setup.sh
PERL=, PERL_INCS= usr/local/lib/perl
config: /bin/sh /configs/config.d/S20install_files show
Checking /var/named/named.ca - OK
Checking /var/named/named.local - OK
Checking /var/named/secondary.zones - OK
Checking /var/named/external-secondary.zones - OK
Checking /var/named/named.conf.sec - OK
Checking /etc/named.conf - OK
*secondary.zones and named.conf.sec are just copied from /var/named/ on the primary, and /etc/named.conf is a symlink to /var/named/named.conf.sec.
It is a good idea for secondaries to be able to take over the role of primary server if needed. Having a backup copy of the named repository would be a big help, or even just copies of the primary.zones and any keys/ (for DNSSEC).
If using Git or Mercurial any checkout of the repository is an effective backup.
On my secondaries I install the full dnsmagic.list and checkout the named repository into /share/dns/named/ just like on the primary. I can thus easily transition the secondary to primary:
# /configs/update_file -list dnsmagic.list
Initializing
Using: CONFIGS=/configs, SITE=NET/CRUFTY:local, DEST_HOST=bbb
        DEST_HOSTNAME=bbb.crufty.net
        TMP=/tmp/config.7nl0ZyAv, MD5_LIST=/tmp/config.7nl0ZyAv/MD5_LIST,
        PATCHES=, DESTDIR=, DIR_LIST=FreeBSD NET/CRUFTY NET local /configs
config: . /configs/config.d/S09perl_setup.sh
PERL=, PERL_INCS= usr/local/lib/perl
config: /bin/sh /configs/config.d/S20install_files
Checking /etc/rc.sh - OK
Checking /etc/add_path.sh - OK
Checking /etc/rc_d/atexit.sh - OK
Checking /etc/rc_d/bouncedns - OK
Checking /etc/rc_d/cfgchange - OK
..
Checking /etc/rc_d/debug.sh - OK
..
Checking /etc/rc_d/dnsfuncs.sh - OK
Checking /etc/rc_d/dnsmagic - OK
Checking /etc/rc_d/fix_homes - OK
Checking /etc/rc_d/funcs.sh - Updated [From /configs/etc/rc_d/funcs.sh]
Checking /etc/rc_d/runas.sh - OK
..
Making Directory /share/dns
Making Directory /share/dns/bin
Checking /share/bin/scm.sh - New [From /configs/share/bin/scm.sh]
Checking /share/bin/setopts.sh - New [From /configs/share/bin/setopts.sh]
Checking /share/bin/only1.sh - New [From /configs/share/bin/only1.sh]
Checking /share/dns/bin/setopts.sh New -> /share/bin/setopts.sh
..
Checking /share/dns/bin/bouncedns - New [From /configs/etc/rc_d/bouncedns]
..
Checking /share/dns/Makefile - New [From /configs/share/dns/Makefile]
Checking /share/dns/dnsmagic.mk - New [From /configs/share/dns/dnsmagic.mk]
Checking /share/dns/pre-commit.sh - New [From /configs/share/dns/pre-commit.sh]
Checking /share/dns/regress New -> /etc/rc.sh
Making Directory /share/dns/regress.d
..
#
$ cd /share/dns/
$ git clone ns.crufty.net:/repos/gits/named.git named
I run bind in split-view, that is, the answers given depend on who is asking. Internal clients get the internal view, where the answers are all rfc1918 addresses. Queries from outside the network get the external view, which is much more limited.
This complicates the setup of internal secondaries that must also support the external view. The secondary cannot use IPv4 to query the primary for the external view - it would just get the internal view. Similarly, the primary cannot use IPv4 to notify the secondary of a change to the external view.
The solution is to use IPv6 link-local addresses. The secondary fetches the external view from the primary by querying its IPv6 link-local address, which does not match the acl used to decide what is internal. Similarly, the primary can use an also-notify clause with the secondary's IPv6 link-local address to notify it of changes to the external view.
Thus, in named.conf we have:
acl internals {
        192.168.0.0/16;
        127.0.0.1;
        external-ip/29;
};

view internal {
        match-clients { internals; };

        zone "127.in-addr.arpa" {
                type primary;
                file "named.local";
        };

        include "primary.zones";
};

view external {
        match-clients { ! internals; any; };

        // every config needs these
        zone "." {
                type hint;
                file "named.ca";
        };

        zone "127.in-addr.arpa" {
                type primary;
                file "named.local";
        };

        include "external.zones";
};
The targets relating to external.zones are put into Makefile.inc.
DNSSEC just works with this arrangement, at least when using a recent BIND 9 with the associated bind-tools. Much of the info below is from ISC's DNSSEC-Guide.
Keys are generated in a sub-directory (eg keys/crufty.net); the following assumes anyone allowed to mess with the named repository is a member of group bind. We provide dnssec-key-init.sh to initially generate keys for a zone. Eg.:
dnssec-key-init.sh crufty.net
which will check that no keys for that zone exist, and if none do, run:
mkdir -p keys/crufty.net
chgrp bind keys
chmod 750 keys
dnssec-keygen -K keys/crufty.net -a RSASHA256 -b 1024 crufty.net
dnssec-keygen -K keys/crufty.net -a RSASHA256 -b 2048 -f KSK crufty.net
The first key is a zone-signing key (ZSK), which you should plan on rolling over annually. We provide a dnssec-key-rollover script that follows the recommendations in the DNSSEC-Guide.
The second key is the key-signing key (KSK), which you might roll over much less often (if ever - due to the need to coordinate with parent zone).
We add the following to each zone to be signed:
key-directory "keys/crufty.net";
inline-signing yes;
auto-dnssec maintain;
and just like that, BIND will be signing the zone data.
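To check that signing is actually happening, something like the following helps (rndc signing -list reports key state for a zone; dig with +dnssec should show RRSIG records):

$ rndc signing -list crufty.net
$ dig @ns.crufty.net crufty.net. SOA +dnssec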
Once you have tested that everything is working - including all your secondaries - you need to provide the relevant key information to your parent zone. Just how you do that depends on the admins of the parent zone. Eg. the following will output the current DS record for crufty.net:
dig @ns.crufty.net crufty.net. DNSKEY | dnssec-dsfromkey -f - crufty.net
See the DNSSEC-Guide for more details.
BIND 9.16 introduces dnssec-policy and warns that auto-dnssec is deprecated. So if you've not yet enabled DNSSEC, look at the default dnssec-policy.
The following is a simple policy that more or less matches auto-dnssec:
// this policy matches the old auto-dnssec
dnssec-policy "kskzsk" {
        keys {
                ksk lifetime unlimited algorithm rsasha256 2048;
                zsk lifetime P360D algorithm rsasha256 1024;
        };
};
but check that your private keys are v1.3.
See dnssec-key-and-signing-policy for details.
While it would be possible to keep DNSSEC keys in the SCM with the zone data, it is not necessarily a good idea.
Given the way we use separate checkouts of the zone data for editing and running the regression suite, there is no need for anyone but the production named to have access to the real zone-signing key (ZSK) private key, which needs to be readily available for operational use.
The key-signing key (KSK) private key can be kept offline except when needed to sign a new ZSK.
Hostmasters can use random keys for the purpose of the regression suite.
Per the example above, ensuring that only members of group bind can access the keys/ directory helps protect the keys which must be online.
The latest version of this page can be found at: https://www.crufty.net/help/dns/DNSMagic.htm
Author:     sjg@crufty.net /* imagine something very witty here */
Revision:   $Id: DNSMagic.txt,v 1.17 2023/06/11 23:49:01 sjg Exp $
Copyright:  1997-2023 Simon J. Gerraty