D. J. Bernstein
Internet publication
djbdns

Brad Knowles's slander

See also the related qmail page.

Patently false statements


Brad Knowles, 2002.03.25: ``You cannot set an alternative SOA contact address (other than what is hard-coded within tinydns), if you do not have a patch from a third party.''

Facts: You can trivially set an alternative SOA contact address, or any other SOA information you want. The general Z-line syntax is explained in the tinydns-data documentation, and an example is covered in detail in the upgrade documentation under Administration.
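A minimal sketch of that Z-line syntax, using a hypothetical zone example.com (the names and the contact are illustrative, not from the documentation):

```
# Zfqdn:mname:rname:ser:ref:ret:exp:min:ttl:timestamp:lo
# The rname dnsadmin.example.com publishes the SOA contact
# dnsadmin@example.com instead of the default hostmaster address.
Zexample.com:a.ns.example.com:dnsadmin.example.com
```

The trailing fields (serial, refresh, retry, expire, minimum, ttl) are optional; tinydns-data supplies defaults for any that are left empty.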

The contact address ``hard-coded within tinydns'' is simply a convenient default contact address: hostmaster at the domain. What kind of idiot would treat this helpful abbreviation as if it were a limitation?

(There is an unauthorized third-party patch that lets the user specify, in one line, a different default contact address for all subsequent zones. This is another feature that isn't available in BIND. If you're curious why I have rejected the patch: It's clear that the same feature should instead be provided by higher-level data-editing tools. Putting the feature into the data syntax would make those higher-level tools unnecessarily difficult to write.)

Tip: When Knowles says something is impossible to do, check the official documentation. Chances are that you'll find out exactly how to do it.


Brad Knowles, 2002.03.25: ``Without a third party patch, tinydns does not support standard SRV records.''

Brad Knowles, 2002.11.09: ``[djbdns] natively supports very limited set of record types: SOA, NS, A, MX, PTR, TXT, CNAME.''

Facts: tinydns allows all record types, including SRV, through a generic record syntax. The syntax is explained in the tinydns-data documentation.
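As a sketch of the generic syntax, here is a hypothetical SRV record (type number 33) for _ldap._tcp.example.com with priority 0, weight 0, port 389, target ldap.example.com; the names are invented for illustration. Non-printable rdata bytes are written as octal escapes, and the target is encoded as a wire-format DNS name:

```
# :fqdn:n:rdata:ttl:timestamp:lo   (n = numeric record type; 33 = SRV)
# rdata = priority (\000\000), weight (\000\000), port 389 (\001\205),
#         then the target as length-prefixed labels ending in \000.
:_ldap._tcp.example.com:33:\000\000\000\000\001\205\004ldap\007example\003com\000
```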

(There is an unauthorized third-party patch that adds a special syntax for SRV records. I have rejected this patch for the reasons mentioned above.)


Brad Knowles, 2001.06.11: ``TinyDNS and dsncache [sic] make it difficult to run both services on the same machine.''

Facts: It is trivial to run both services on separate IP addresses on the same machine. There are several examples of this in the documentation.
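A sketch of that setup, assuming daemontools supervision and the documentation addresses 192.0.2.1 and 192.0.2.2 (substitute two addresses on your machine):

```
# dnscache answers local clients on one address,
# tinydns publishes authoritative data on another.
dnscache-conf dnscache dnslog /etc/dnscache 192.0.2.1
tinydns-conf  tinydns  dnslog /etc/tinydns  192.0.2.2
ln -s /etc/dnscache /etc/tinydns /service
```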


Brad Knowles, 2001.06.11: ``Many sites want to have separate internal versions and external versions of their DNS data. Since you can't mix these two services in the same program, you end up having to set up separate external and internal TinyDNS servers.''

Facts: tinydns fully supports client differentiation. You can, for example, provide records only to internal clients, making them invisible to external clients. This feature is more powerful than BIND's ``views'': tinydns's client differentiation has per-line granularity, while ``views'' have per-zone granularity.
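A sketch of per-line client differentiation in the tinydns-data file, with hypothetical names and addresses; %-lines define client locations, and the trailing field on a record restricts it to one location:

```
# Clients whose address starts with 10 are location "in";
# everyone else is location "ex".
%in:10
%ex
# Same name, different answer per location:
=www.example.com:10.1.2.3:::in
=www.example.com:192.0.2.80:::ex
```

External clients never see the 10.1.2.3 record at all; this is the per-line granularity the text describes.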

Of course, separate servers are supported too.


Brad Knowles, 2002.03.25: ``Without a patch from a third party, tinydns does not listen to more than one IP address. If you have a multi-homed server, you have to apply a patch from someone other than they [sic] author, before you can get it to listen on more than one address/interface.''

Brad Knowles, 2002.11.09: ``Without third-party patch, [tinydns cannot] listen to more than one IP address.''

Facts: You can trivially tell tinydns to listen to another address on the same machine. Addresses can share configurations or have separate configurations; both setups are fully supported.
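One way to sketch this, with a hypothetical second address and service directory: create a second tinydns instance, and (optionally) point it at the first instance's compiled database so both addresses share one configuration:

```
# Second tinydns instance on a second address (hypothetical paths).
tinydns-conf tinydns dnslog /etc/tinydns2 192.0.2.3
# Optional: share the first instance's database instead of keeping
# a separate one, by overriding the ROOT environment directory.
echo /etc/tinydns/root > /etc/tinydns2/env/ROOT
ln -s /etc/tinydns2 /service
```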


Brad Knowles, 2002.03.25: ``Like tinydns, dnscache will not bind to more than one IP address without a third party patch.''

Brad Knowles, 2002.11.09: ``Without third-party patch, [dnscache cannot] listen to more than one IP address.''

Facts: As with tinydns, you can trivially tell dnscache to listen to another address on the same machine.
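The corresponding sketch for dnscache, again with a hypothetical address and directory:

```
# Second dnscache instance on a second address.
dnscache-conf dnscache dnslog /etc/dnscache2 192.0.2.4
ln -s /etc/dnscache2 /service
```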


Brad Knowles, 2002.11.09: ``[djbdns] truncates responses illegally: [it] does not set the TC bit.''

Facts: When djbdns truncates a response, it sets the TC bit, exactly as required by the protocol.


Brad Knowles, 2002.03.25, responding to the question ``What [standard] DNS protocol allows you to kick/restart the secondary bind server to tell it new zones are available?'' after he said that my recommended use of rsync was nonstandard: ``I believe that the protocol is called "NOTIFY".''

Facts: The NOTIFY protocol specifically prohibits modification of the zone structure; in particular, creation of new zones. The BIND implementation of NOTIFY has the same prohibition.
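For contrast, the rsync-based replication under discussion can be sketched as a Makefile target in the tinydns data directory (the address is hypothetical); because the whole compiled database is copied, new zones replicate exactly like any other change:

```
# Hypothetical /service/tinydns/root/Makefile additions:
remote: data.cdb
	rsync -az -e ssh data.cdb 192.0.2.5:/service/tinydns/root/data.cdb

data.cdb: data
	tinydns-data
```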


Brad Knowles, 2002.03.25, talking about dnscache's feature of removing old records from the cache to stay within a specified amount of memory: ``[With LRU] there is a significant amount of overhead that the nameserver would have to go through in order to simply perform the garbage-collection and memory flushing routines. [Without LRU] the overhead of garbage collection and memory flushing would be much, much greater -- and everything else the server does will suffer.''

Facts: dnscache doesn't ``suffer'' at all. The internal cache data structure in dnscache handles ``garbage collection and memory flushing'' for free. BIND's problems in this area are caused by BIND's naive choice of data structures, not by any inherent difficulty in the problem.


Brad Knowles, 2001.06.20, talking about the same feature: ``Thus, instead of doing swapping/paging thrashing, you'd be doing data thrashing. However, the difference is that as you re-query for that information, you'd be bottlenecked not on going to local disk, but instead by going to remote sites on the Internet. If there's *ANYTHING* that is slower than local disk latencies (which are usually measured in milliseconds), it has to be Internet latencies (which are frequently measured in tens or hundreds of milliseconds).''

Brad Knowles, 2002.03.25: ``... local disk typically has a latency measured in terms of single digit milliseconds. Contrariwise, DNS queries that have to go out to the Internet and come back are frequently measured in terms of tens, hundreds, or even thousands of milliseconds. Therefore, you will have traded a known serious problem with local swap thrashing for an unknown and quite probably much, much more serious remote data thrashing.''

Facts: Knowles is making a really stupid mistake here. The big problem with physical disk thrashing is throughput, not latency.

Yes, a 10-millisecond disk access has an order of magnitude lower latency than a 100-millisecond DNS query resolution. However, a typical machine can do only 100 disk accesses in a second, while it can easily do 1000 query resolutions! (Never mind the fact that, on a thrashing BIND system, the number of disk accesses exceeds the number of queries.)

The point is that disk access occupies a precious resource: the number of simultaneous disk accesses is at most the number of disks in the system. In contrast, a query resolution occupies a relatively small portion of RAM, so many queries can be resolved simultaneously.
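The throughput argument can be put as back-of-the-envelope arithmetic (work in flight = rate × latency):

```
disk: one seek at a time per spindle, ~10 ms each -> ~100 accesses/sec/disk
DNS:  1000 qps × 0.1 s latency = 100 resolutions in flight,
      each occupying a few kilobytes of RAM, not a disk arm
```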


Brad Knowles, 2001.06.20: ``What you're not seeing, and which is being totally hidden by the server, is the fact that the nameserver is critically short on memory.''

Facts: dnscache does not hide this information. It puts this information into statistical reports in the log. The meaning of the statistics is explained in the documentation.


Brad Knowles, 2002.03.25: ``One argument frequently used to support the use of djbdns over BIND is performance. Upon further investigation, this claim simply does not hold water. ... The best benchmarks available for tinydns indicate that it can handle at least 500 queries per second, but that is the highest number reported.''

Brad Knowles, 2002.11.09: ``Peak 500 qps, according to TinyDNS FAQ ... Personal testing: Real-world Internet demonstrated tinydns to ~250 qps. Private servers demonstrated tinydns to ~340 qps.''

Facts: In February 2002, Matt Simerson publicly reported an upgrade from BIND to tinydns. Let's look at the numbers:

     We recently converted from BIND 8 over to DJBDNS due to some major
     issues we were having with scaling our DNS system.  BIND started
     having all sorts of problems when we exceeded 125,000 zones. ...
     On the night of the cut over we updated the Server Irons, shifted
     all the traffic over to the new tinydns farm (4 servers) and sat
     back and watched in awe as each of the tinydns instances rocketed
     up to between 4,000 and 6,000 connections per second.  Our Network
     Operations team was able to report back how many connections came
     into the system and also how many were being returned.  The
     fastest machine (Dual 1GHz) was pumping out the 6,000/sec and each
     Dual 600 was cranking out a little over 4,000.  The dual 600's
     showed the tinydns process eating about 80% of one CPU and the
     dual 1GHz system showed tinydns using about 40% of one CPU.  For
     the first time in months, we were answering every valid request
     that came in.

For comparison: In September 2001, Knowles publicly reported ``Nameserver performance'' results. He pointed out a tinydns server responding in hundreds of milliseconds, and a BIND server responding much more quickly. He failed to notice that the tinydns server was on a much slower network link. Clueless.

It's amusing to note that Knowles's ``personal testing,'' as summarized in his own November 2002 graphs, showed tinydns serving the 20MB .tv zone at 300 queries per second, BIND 8 at 50 queries per second, and BIND 9 at 20 queries per second. The real lesson here is not that tinydns is faster than BIND, but that Knowles has no clue how to test performance.


Brad Knowles, 2002.11.09: dnscache is slow because Knowles's tests ``demonstrated dnscache to ~96 qps.''

Facts: Matt Simerson has publicly reported a production dnscache machine handling 1.95 billion queries in 5 days (on average, 4500 queries per second) using under half of a Xeon-550. Note that caches such as dnscache have to do much more work per query than servers such as tinydns.

The real lesson here, again, is that Knowles has no clue how to test performance.


Brad Knowles, 2002.03.25: ``A lot of the reasons the author gives for running djbdns instead of BIND are related to problems in older versions of BIND which have been fixed or are largely non-issues in later releases of BIND 9. ... While previous versions of BIND would not answer queries during startup, this is no longer a problem with BIND 9.''

Facts: This continues to be a problem with BIND 9. When BIND starts (for example, after a reboot), it has to read and parse all your zone files. BIND can't answer queries from a zone until it has parsed the relevant zone file.

In contrast, with djbdns, the preparsed data is saved on disk, so all your data is accessible as soon as tinydns starts.
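The workflow can be sketched as follows, assuming the conventional /service/tinydns/root layout:

```
cd /service/tinydns/root
vi data      # edit the human-readable source records
make         # runs tinydns-data: compiles data into data.cdb and
             # replaces the old database atomically; tinydns serves
             # from data.cdb immediately, with no parsing at startup
             # or at query time
```

If tinydns-data rejects the new data, the old data.cdb stays in place, so service is never interrupted by an editing mistake.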


Brad Knowles, 2001.04.16: ``Dan defines [djbdns] to be secure simply because he says so.''

Facts: The djbdns security blurb explains many of the reasons for djbdns's perfect security record. It includes specific features such as ``tinydns and walldns never cache information'' and metafeatures such as ``Bug-prone coding practices and libraries have been systematically identified and rejected.''


Fantasies

Knowles frequently attributes specific positions and statements to me. These attributions are simply not true.

Knowles's assertions here are false, but they are not patently false. How can I prove that I never said something? The best way to see that Knowles is lying is to demand that he back up his paraphrases with something verifiable: a complete quote and a reasonably precise reference.

As another example, Knowles responded to this page, 2002.03.27: ``Given the lies we've seen from Dan in the past, I won't bother to waste my time with this page.'' Notice the pattern here: Knowles says that I'm a liar, and doesn't have a shred of evidence; I say that Knowles is a liar, and I provide extensive evidence.

Ease-of-use omissions

Common operations take less effort under djbdns than under BIND. I think that this is one of the biggest advantages of djbdns over BIND.

Knowles has presented his own ease-of-use comparisons, showing various procedures in djbdns and procedures in BIND. He's implicitly claiming that the procedures achieve the same results. That claim is simply not true.

Knowles has also added unnecessary steps to the djbdns procedures. For example, at one point he presented a slide showing how to set up dnscache, and then another slide showing how to set up dnscache with another configuration, as if an administrator had to go through the steps on both slides. He showed only one configuration for BIND, of course.

Irrelevancies

Sometimes Knowles's comments have no relevance to the real world. The issue here isn't the accuracy of the comments themselves; the issue is Knowles's false claim that these are ``problems.'' Some of the comments are true; some of them are false; all of them are irrelevant.

Many people are under the mistaken impression that ``RFC'' means ``required protocol specification.'' In fact, it would be impossible to make an Internet computer comply with every RFC. Some RFCs contradict each other! As RFC 1796 explains, most RFCs do not specify standard protocols: they specify experiments, or proposed standards, or sometimes draft standards, or they can be purely informational. Furthermore, most standard RFCs are not required protocols, and not even recommended protocols: they are usually entirely optional, and there are often good reasons that they should not be used.

In short, if Knowles suggests that failure to comply with an RFC is a problem per se, he's trying to fool you. Tip: When Knowles starts talking about protocols, ask him ``How exactly does this matter for my users?''


Brad Knowles, 2001.03.11: ``IIRC, it doesn't hand out [root] referrals when asked questions outside of the zones it is authoritative for. This means that it violates the DNS protocols, among other things.''

Brad Knowles, 2001.06.11: ``There are a number of problems with TinyDNS. For one, it does not hand out [root] referrals to questions that are asked of zones it does not control. ... I believe that this is a violation of the RFCs, at least in spirit if not in the letter.''

Brad Knowles, 2002.03.25: ``By default, tinydns does not hand out [root] referrals to questions it is asked about zones it does not control. I believe that this violates the spirt [sic] of the RFCs, if not the letter.''

Facts: tinydns provides data as instructed by the system administrator. If the system administrator gives tinydns the root addresses, tinydns will provide root referrals. Otherwise it won't.

Knowles is making a fool of himself when he suggests that this has anything to do with DNS interoperability. DNS clients and caches, including BIND, throw away root referrals. Why? Because servers simply aren't authorized to say where the roots are. Even if a cache has been incorrectly told to ask tinydns about somebody else's domain, the cache won't be fooled into accepting a root referral.

Knowles's hero Paul Vixie wrote on 2002.11.22 that authoritative servers should not provide root referrals: ``It's not reasonable to ask an authoritative server to have to fetch anything from anybody. It should not need a root.cache file...''


Brad Knowles, 2002.11.09: ``By default, [tinydns] does not provide referrals,'' while ``Root & TLD nameservers do little else but referrals.'' (This time, Knowles is talking about referrals within zones that the server does control.)

Facts: tinydns provides referrals if it is configured to do so, just like BIND. It is used on some TLD nameservers, and provides referrals accordingly, just like BIND.

Knowles's comment is as idiotic as saying ``By default, BIND does not publish any addresses, while normal nameservers spend most of their time publishing addresses.'' That's because, by default, BIND doesn't have any data to publish; you have to give it the data.
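In-zone referrals come from ordinary delegation lines in the data file; a sketch with hypothetical names (when the nameserver field contains a dot, it is used verbatim as the NS name):

```
# &fqdn:ip:x:ttl:timestamp:lo
# Queries under child.example.com get a referral to the
# child zone's nameservers.
&child.example.com::ns1.child-dns.example.net
&child.example.com::ns2.child-dns.example.net
```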


Brad Knowles, 2002.11.09: ``By default, [tinydns] does not support zone transfers,'' and therefore ``violates RFCs.''

Facts: djbdns supports zone transfers, for the sites that need them. Zone transfers are disabled by default. This default is fully compliant with the RFCs.

RFC 1034 (part of the DNS standard), section 4.3.5, explicitly states that DNS data can be replicated through FTP or other protocols. It does not require zone transfers.


Brad Knowles, 2002.03.25: ``By default, tinydns does not support the use of TCP at all. This most definitely violates the spirt [sic] of the RFCs, as well as the letter (if a DNS query via UDP results in truncation, you're supposed to re-do the query using TCP instead).''

Brad Knowles, 2002.11.09: ``By default, [tinydns] does not support TCP,'' and therefore ``violates RFCs.''

Facts: djbdns supports TCP, for the sites that need it. TCP is disabled by default. This default is fully compliant with the RFCs.

Saying that tinydns doesn't support TCP is missing the point. There are two cooperating programs, tinydns and axfrdns, using the same database. UDP service is handled by tinydns. TCP service is handled by axfrdns, at the sites that need it.

RFC 1123 (the Host Requirements standard), section 6.1.3.2, requires UDP service. It does not require TCP service. The situations where you need TCP service are listed in the djbdns documentation.
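For sites that do need TCP service or zone transfers, the axfrdns setup can be sketched as follows, with hypothetical addresses; axfrdns shares tinydns's database and runs on the same address, on the TCP port:

```
# axfrdns beside tinydns, sharing /etc/tinydns's database.
axfrdns-conf axfrdns dnslog /etc/axfrdns /etc/tinydns 192.0.2.2
cd /etc/axfrdns
# Hypothetical access policy: allow 192.0.2.9 to transfer
# example.com; allow TCP queries, but no transfers, to others.
echo '192.0.2.9:allow,AXFR="example.com"' > tcp
echo ':allow,AXFR=""' >> tcp
make         # compiles the tcp rules into tcp.cdb
ln -s /etc/axfrdns /service
```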


Brad Knowles, 2002.03.25: ``Without a patch from a third party, tinydns does not support the standard "NOTIFY" protocol of informing secondary nameservers that the zone has been updated.''

Facts: BIND pauses before sending NOTIFY. You obtain quicker, more reliable updates by setting a fast schedule for the zone-transfer client.


Brad Knowles, 2002.03.25: ``When an IQUERY is sent to a djbdns server, it will respond with opcode set to QUERY.''

Brad Knowles, 2002.11.09: ``[djbdns] provides strange responses to query types it does not support,'' and thus ``violates the `be liberal in what you accept, conservative in what you generate' principle.''

Facts: Clients do not send IQUERY. IQUERY is obsolete. Even the BIND company admits this. The primary use of IQUERY is by attackers trying to break into pre-8.1.2-t3b versions of BIND.

As for the be-liberal-in-what-you-accept principle: See RFC 1123 (the Host Requirements standard), section 1.2.2. The principle says that programs shouldn't crash when something unusual happens:

     Software should be written to deal with every conceivable error,
     no matter how unlikely; sooner or later a packet will come in with
     that particular combination of errors and attributes, and unless
     the software is prepared, chaos can ensue.  In general, it is best
     to assume that the network is filled with malevolent entities that
     will send in packets designed to have the worst possible effect.
The principle does not say that programs should be polite to these malevolent entities.


Brad Knowles, 2002.03.25: ``DNSCACHE (the caching server) does not respond to queries with the RD bit clear in the query.''

Facts: Under the DNS protocol, queries from clients to caches set the RD bit, and queries from caches to servers clear the RD bit. The picture is quite clearly laid out in RFC 1035 (part of the DNS standard), page 6. A query to a cache without the RD bit means that the cache is being incorrectly used as a server. Queries of this type are bogus and have no relevance to DNS interoperability. BIND answers them as a cache snooping mechanism; dnscache discards them to help protect user privacy.


Brad Knowles, 2002.03.25: ``There aren't even any patches that can get djbdns to implement TSIG, Dynamic DNS, or DNSSEC, nor are they ever likely to be created (my understanding is that the author is strongly opposed to them).''

Brad Knowles, 2002.11.09: ``[djbdns] does not, and author's code will not, support new DNS features: DNSSEC, TSIG, IXFR, NOTIFY, EDNS0, IPv6, etc...''

Facts: IPSEC provides better security than TSIG. IPSEC is inherently easier to set up than TSIG: it has the big advantage of applying to all protocols, rather than being glued into the guts of one protocol. There are, similarly, superior alternatives to the DNS update protocol, IXFR, and NOTIFY.

EDNS0 currently doesn't accomplish anything. I'm not strongly opposed to it; there simply isn't any benefit for the users.

DNSSEC currently doesn't accomplish anything, even though it is falsely advertised as preventing forgeries. I'm not strongly opposed to it; there simply isn't any benefit for the users.

djbdns supports IPv6 records, just like records of any other type. However, making servers reachable through IPv6 currently doesn't accomplish anything. I'm not strongly opposed to it; there simply isn't any benefit for the users.