REDUCIBLE COMPLEXITY IN DNS

Gert Pfeifer, André Martin and Christof Fetzer
TU Dresden, 01062 Dresden, Germany
[email protected]

ABSTRACT
DNS is one of the most important components of the Internet infrastructure. Unfortunately, it is known to be difficult to implement, and available implementations are difficult to configure correctly. DNS performance and availability often suffer from poor configuration, which leads to unavailability or erroneous behavior of distributed systems that depend on DNS. The data structures of DNS have grown historically. Some are no longer needed, some have changed their semantics. Nevertheless, they have to be maintained by implementations. We measure the extent of configuration issues in DNS data and propose simplifications to DNS data types and semantics that would allow building more dependable implementations. New DNS implementations could also reduce complexity by ignoring certain functionality of DNS that is not needed or that can be implemented in other ways.

KEYWORDS
Domain Name System, Internet, Dependability, Data Mining, Configuration Issues

1 INTRODUCTION

DNS is a distributed name service that belongs to the core services of the Internet. It was designed in the 1980s to replace host-local configuration files for the naming of Internet hosts. Although this has remained the primary task of DNS ever since, DNS can do much more: it is a universal directory which can map arbitrary user-defined data to symbolic names. DNS resolvers are nowadays part of almost every distributed Internet-based system. Usually stub resolvers are directly compiled into the software. The resolvers are used to connect to remote hosts, to send emails, or just to find configuration parameters for a device. DNS is well known to have configuration issues. Jung et al. [1] found that a significantly high amount of Internet traffic is caused by misconfigured DNS servers and resolvers. These configuration issues may lead to performance degradation, crashes, and endless loops in resolvers. They may also confuse users through inappropriate error messages propagated by applications. The goal of our work is to find modifications of DNS that allow (1) reducing the complexity of DNS implementations and (2) preventing DNS configuration issues from influencing performance and dependability. To achieve this, we tried to obtain as many zones/resource records (RRs) as possible to analyze various configuration faults and characteristics of DNS. We then classify features into important ones that are still in use and obsolete ones that only exist for historical reasons. We propose simplifications whenever features that have no clear purpose are found to cause many problems. We furthermore propose stronger checks in implementations where vital features are causing problems. Our approach of proposing changes to implementations of a very common protocol like DNS and of omitting features might look odd, but it has successfully been done before. Good examples of simplified DNS server implementations are Unbound [2] and Daniel Bernstein's TinyDNS [3].
In Unbound, complexity has been reduced by implementing only those functions of DNS that are needed for a validating, recursive, and caching DNS server. It is not intended for use as an authoritative server. One of the reasons could be that BIND often caused trouble in this configuration in the past. For example, it is not recommended to use BIND 8 as a recursive resolver since it might deliver poisoned cache entries to clients. The Internet Software Consortium finally pulled the plug on support for BIND 8 due to this problem. In Section 4.3 we show that there is still a significant number of resolvers running in this configuration. Daniel Bernstein's TinyDNS also reduces complexity by offering several servers for different purposes. Complexity is reduced by separating tasks into individual components such as a caching daemon, a DNS server with a database backend, a load balancer, etc. Some functions of DNS are not implemented at all. As we will explain later, an external tool is used to transfer zones; zone transfer is therefore not part of the implementation/distribution. The rest of the paper is structured as follows: In Section 2 we discuss related work. Section 3 describes which techniques we use to collect our data. Section 4 presents results, with several subsections in which we try to find simplifications for DNS server implementations. We summarize these results in Section 5.

2 RELATED WORK

Pappas et al. [10] studied the impact of configuration errors on DNS robustness using a modified recursive resolver in conjunction with DNS traces. In this study, the authors identified the following three types of configuration errors in DNS: lame delegations (Section 4.1), diminished server redundancy (Section 4.2), and cyclic zone dependency (Section 4.9). In contrast to Pappas et al., we have identified several more classes of configuration faults that can lead to performance degradation, such as lame names and invalid TTLs. In RFC 1912 [4], David Barr lists some of the most common DNS configuration errors. The RFC divides errors into three classes: DNS data, BIND operations, and miscellaneous topics. The DNS data class contributes the most errors and is essentially the topic of this paper. In contrast to our paper, the RFC neither provides any real-world examples nor investigates how often such errors occur. In 2001, Jung et al. [1] investigated DNS performance and the effectiveness of caching based on DNS traces. The study is divided into two parts: the first presents client-perceived performance measurements such as latency, retransmission, negative responses and caching, and interaction with root servers. The second part analyzes the effectiveness of caching based on name popularity and TTL distribution. Jung et al. [1] focus primarily on client-side performance rather than identifying sources that may lead to DNS performance degradation. Contrary to our work, the study [1] is based on real-world DNS traffic, which provides a reasonable evaluation of the impact of configuration faults. The paper also reveals the amount of extra DNS traffic caused by configuration issues: over a third of all lookups are not successfully answered. 23% of all client lookups in one trace do not get answered at all. In the same trace, 13% of lookups result in an answer that indicates an error. Most of these errors indicate that the queried name does not exist. Brownlee et al. [5] monitored DNS traffic at f.root-servers.net and found that more than 14% of the query load was caused by bogus queries; their paper provides an analysis of the causes of errors. In contrast to [5], we do not look at traffic at all. Paul Vixie [6] explained that the complexity of DNS is often underestimated. He points out that DNS was specified loosely, on purpose. Behavioral differences between implementations are therefore the common case. He also believes that a stronger specification would not have been as successful and that writing one today would fail. Hence, we do not try to rewrite the specification, even with all our knowledge about configuration issues.

3 ARCHITECTURE

We are using a DNS crawler that consists of server, crawler, and client nodes, plus a distributed file system (DFS) with a Map/Reduce cluster running on top of it for data mining. The DNS crawler is currently running on 60 nodes of the PlanetLab test bed and a five-node storage cluster, gathering about 5,000 RRs per second. Crawler nodes, often called "spiders", are responsible for processing jobs, i.e., crawling DNS data through different approaches such as AXFRs, NSEC walks, etc., whereas server nodes coordinate the whole job distribution process. Client nodes act as DNS proxies to perform fast non-recursive DNS lookups based on previously crawled data. The crawled data is persistently stored in the DFS. To identify obsolete DNS records and content distribution, and for a broad variety of other data mining tasks such as the DNS statistics presented in this paper, we use Map/Reduce, a programming paradigm commonly used in functional programming and adapted for large cluster-based data processing by Google Inc. [7]. Apache Hadoop currently serves as our implementation of the DFS and Map/Reduce cluster. It follows for the most part the design and implementation of Google's GFS and Map/Reduce as described in [8] and [7]. Crawler, server, and client nodes communicate through FreePastry [9], a fault-tolerant peer-to-peer (p2p) overlay network implementation. The built-in distributed hash table (DHT) balances load and manages data replication across all participating server nodes in the network. Additionally, bulk data such as DNS records is sent through Apache MINA, a high-performance non-blocking I/O framework for Java.
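As a rough illustration of this pipeline, a statistic such as the RR type distribution of Section 4.6 can be expressed as a classic Map/Reduce job. The sketch below is plain Python with made-up record tuples, not our HADOOP job code; it only shows the shape of the map and reduce steps:

```python
from collections import defaultdict

# Map phase: emit a (key, value) pair per record -- here (RR type, 1).
def map_phase(records):
    for name, rtype, rdata in records:
        yield (rtype, 1)

# Reduce phase: sum the values for each key.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = [
    ("www.example.com.", "A", "192.0.2.1"),
    ("example.com.", "NS", "ns1.example.com."),
    ("example.com.", "NS", "ns2.example.com."),
    ("mail.example.com.", "A", "192.0.2.2"),
]
print(reduce_phase(map_phase(records)))  # {'A': 2, 'NS': 2}
```

On the real cluster, Hadoop runs many map and reduce tasks in parallel over the DFS; the per-record logic stays this simple.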

4 RESULTS

The results we present here are deduced from the data we were able to acquire in a rather short period of time. We cannot conclude that they represent the character of the global DNS data, since many zones cannot be transferred for reasons of confidentiality or because they are only for internal use. However, we expect our data to represent the characteristics of the publicly available DNS data. All issues mentioned here are visible to users and may account for certain problems in the resolution process.

4.1 Lame Delegations and Remote Lame Delegations

Lame delegations are delegations that do not provide an IP address and therefore cannot be used. Note that Pappas et al. [10] use a much wider definition: they additionally require that name servers be authoritative for the child zone if they are addressed in delegations, i.e., that they reply with the AA (authoritative answer) flag set. In addition to this phenomenon, we also looked at remote lame delegations, which are delegations that come with an A record pointing to a local address. Local addresses are not routable, and therefore the delegation is lame for any remote host, even if the NS record is obtainable. Moreover, we look at lame names in general, like CNAME or MX records pointing to DNS names without address RRs. In our data set we have found 607,566 lame delegations and 200,270 remote lame delegations among 15,367,927 delegations altogether. It is certainly not possible to simplify name server implementations by removing delegations, since delegating authority is one of the intrinsic features of DNS. However, the extra effort currently required, i.e., resolving the DNS name of the delegated name server, might be removed. This indirection introduces the obligation to deliver glue records, although usually only the IP address of a server is needed. If services are interested in the DNS name of this specific DNS server, an inverse DNS lookup can be performed. RFC 1035 explains that if a machine's address is copied into an NS record, then you have to watch for changes of the address. Indirection avoids the opportunity for inconsistency since there is only one place to apply NS RR changes. On the other hand, we will always have inconsistencies with, or even because of, this kind of indirection whenever a name is changed. We have seen this quite often for MX records pointing to non-existing names. Note that lame names are sometimes useful, e.g., for name servers in SOA records, where they prevent update attacks against the primary name server. In the cases mentioned above, however, we are not aware of any such purpose.
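The distinction between lame, remote lame, and working delegations can be sketched as a small classifier. The function name, record shapes, and glue table below are our own illustration, not part of any DNS library; the sketch treats private and loopback addresses as non-routable:

```python
import ipaddress

# Illustrative classifier: given the NS names of a delegation and a glue
# table mapping names to their A-record addresses, decide whether the
# delegation is lame (no address), remote lame (only non-routable
# addresses), or usable.
def classify_delegation(ns_names, glue):
    verdicts = {}
    for ns in ns_names:
        addrs = glue.get(ns, [])
        if not addrs:
            verdicts[ns] = "lame"         # no A record at all
        elif all(ipaddress.ip_address(a).is_private or
                 ipaddress.ip_address(a).is_loopback
                 for a in addrs):
            verdicts[ns] = "remote-lame"  # unreachable from remote hosts
        else:
            verdicts[ns] = "ok"
    return verdicts

glue = {"ns1.example.org.": ["8.8.8.8"],    # routable address: fine
        "ns2.example.org.": ["10.0.0.53"]}  # RFC 1918 address: remote lame
print(classify_delegation(
    ["ns1.example.org.", "ns2.example.org.", "ns3.example.org."], glue))
```

In practice the glue table would be filled from the additional section of referral responses; the classification step itself stays this simple.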

4.2 Diminished Server Redundancy

This phenomenon has also been discussed by Pappas et al. [10]. A zone in DNS usually provides at least two name servers so that RRs can still be served if one server is down or unavailable due to maintenance, or hardware or software failures. However, some zones provide at least one pair of name servers which are obviously behind the same router in the same subnet, which results in a single point of failure. We examined the IP addresses of DNS servers as an indication of routing structures. In our data set we found 3,775,463 zones out of 3,890,409 (97%) to have diminished server redundancy. Note that due to Classless Inter-Domain Routing (CIDR) [11] this might be an overestimate. We classified DNS servers using network addresses of up to 24 bits. If CIDR is used with longer network addresses, we get false positives, since it allows hosts with a common 24-bit prefix to reside in different LANs.
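The /24 heuristic described above can be sketched as follows. The function name and inputs are illustrative; the check deliberately ignores CIDR prefixes longer than 24 bits, which is exactly the source of false positives mentioned in the text:

```python
import ipaddress

# Redundancy heuristic: collapse each name-server address to its 24-bit
# prefix; if all servers of a zone share a single prefix, redundancy is
# considered diminished.
def diminished_redundancy(server_addrs):
    prefixes = {ipaddress.ip_network(a + "/24", strict=False)
                for a in server_addrs}
    return len(prefixes) < 2

print(diminished_redundancy(["192.0.2.1", "192.0.2.200"]))   # True: same /24
print(diminished_redundancy(["192.0.2.1", "198.51.100.1"]))  # False
```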

4.3 DNS Server Implementations

DNS server implementations are available for a great variety of platforms and operating systems. The most popular DNS server software is BIND. We used the same fingerprinting tool [12] as ISC used for their measurements [13]. The results of our survey are shown in Figure 1. As expected, BIND is still the top choice. In contrast to the ISC measurements, we found more TinyDNS instances than Microsoft DNS servers. It is also interesting to see that BIND 8 still ranks 9th, even though there is no ISC support for this software anymore. 17,398 out of 384,164 servers (4.5%) are BIND 8, 10,848 of them with recursion enabled. In other words, 2.82% of recursive resolvers are known to allow cache poisoning. Paul Mockapetris said: "If my ISP was running BIND 8 in a forwarder configuration, I would claim that they were not protecting me the way they should be, ... Running that configuration would be Internet malpractice."

[Figure 1 – DNS server implementations]

4.4 Invalid TTLs

TTLs (time-to-live) specify the time interval for which an RR may be cached before a resolver consults the name server again. The value is a 32-bit signed integer, ranging from 0 to 2,147,483,647, i.e., 2^31 non-negative values. RRs with a zero TTL value are used only once, for the current transaction, and are thus not cached. TTLs are specified in seconds and typically range from a few seconds up to several days or months, depending on the type of the RR. We define TTLs as invalid if they are out of range, i.e., greater than 2,147,483,647, which is equivalent to an interval of over 68 years. Such values can lead to crashes in resolver implementations; e.g., in Java implementations, a NumberFormatException is a RuntimeException, which is usually not caught. The number of RRs with invalid TTLs we have found is quite low (162 out of 156,639,518).
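A resolver can guard against such values with a trivial range check instead of crashing. This is a minimal sketch; the function name and fallback policy are our own assumptions, not prescribed by any RFC:

```python
# Clamp or replace out-of-range TTLs before they reach the cache, instead
# of letting an integer-parsing error propagate and crash the resolver.
MAX_TTL = 2**31 - 1  # 2,147,483,647 seconds, roughly 68 years

def safe_ttl(text, default=3600):
    try:
        ttl = int(text)
    except ValueError:       # e.g. garbage in a zone file
        return default
    if 0 <= ttl <= MAX_TTL:
        return ttl
    return default           # out of range: fall back to a sane default

print(safe_ttl("86400"))       # 86400
print(safe_ttl("4294967295"))  # 3600 (out of range)
print(safe_ttl("1d"))          # 3600 (not a number)
```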

4.5 Namespace and Labels

[Figure 2 – Label count distribution]

[Figure 3 – Label length distribution (in characters)]

To get an idea of the shape of the DNS namespace, we measured the fan-out at each zone, i.e., the number of delegations, and the length of branches as reflected by the number of labels in a name, as depicted in Figure 2. We also looked at the typical length of labels, since it is interesting for estimating the effort of brute-force attacks on the proposed NSEC3 RRs, as discussed by Rose et al. in [14]. We found some rather long labels, but they do not seem to be in use. Sometimes the names directly contain descriptions of the purpose of a web site, but in general labels are rather short, as Figure 3 shows. Long labels also contradict the purpose of DNS: to provide easy-to-remember, human-readable names.
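The brute-force effort mentioned above can be estimated from the label length alone: an LDH label draws on 26 letters, 10 digits, and the hyphen, i.e., 37 symbols. The following back-of-the-envelope sketch is our own, not taken from [14]:

```python
# NSEC3 hashes hide labels, so a brute-force attacker has to hash every
# candidate label; the search space grows as alphabet_size ** length.
def candidate_labels(length, alphabet_size=37):
    return alphabet_size ** length

print(candidate_labels(3))   # 50653 candidates: trivially enumerable
print(candidate_labels(10))  # about 4.8e15 candidates: already costly
```

Since most labels in our data are short, this suggests that many hashed labels remain guessable in practice.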

4.6 RR Type Statistics & IPv6

We evaluated how often certain RR types are used (Figure 4). An interesting result is that there is still some use of HINFO RRs, which leak details that could easily be used by hackers to gain knowledge about the system to attack, as described in RFC 1912 [4]. The high amount of RRSIG records is due to signature changes that occur with each zone update; NSEC RRs remain unchanged in those cases. We can also see that IPv6 addresses still do not play an important role in the Internet. 42,450 out of 49,939,414 DNS names with addresses were found to have both an A RR and an AAAA RR. This is an important number to estimate how well prepared the Internet is for the introduction of IPv6. IPv6 will only be deployed if Internet services are reachable through IPv4 and IPv6. Clients cannot be expected to switch to version 6 as long as this step would make some services unavailable. We expect this number to increase. NAT and classless networks helped a lot to make better use of the available addresses. However, the recent case of YouTube and Pakistan Telecom [15] has shown that CIDR makes BGP routing tables quite complex and therefore becomes a liability for the Internet.

[Figure 4 – RR type distribution (A, NS, RRSIG, PTR, MX, CNAME, SOA, Other)]
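The dual-stack count reported above boils down to finding names that own both an A and an AAAA record. A minimal sketch of that measurement, with illustrative record tuples, could look like this:

```python
from collections import defaultdict

# Collect the set of RR types per owner name, then keep the names that
# carry both an A and an AAAA record.
def dual_stack_names(records):
    types_by_name = defaultdict(set)
    for name, rtype in records:
        types_by_name[name].add(rtype)
    return [n for n, t in types_by_name.items() if {"A", "AAAA"} <= t]

records = [("a.example.", "A"), ("a.example.", "AAAA"), ("b.example.", "A")]
print(dual_stack_names(records))  # ['a.example.']
```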

4.7 SOA RRs

A SOA (start of authority) RR marks the start of a zone. It represents a tuple <NSName, email, serial, refresh, retry, expiry, TTL>, where NSName is the domain name of the primary DNS server for the zone; email is the email address (the '@' is replaced by a dot) of the person responsible for maintenance of the zone; serial is the serial number, which must be incremented with each update; refresh is the time interval after which a slave will try to refresh the zone from the master; retry is the time interval between retries if the slave (secondary) fails to contact the master; expiry defines the time after which the zone data is no longer authoritative (applies to slaves or secondary servers only); and TTL is either used as the default TTL for records that do not have a TTL or as the negative caching time, depending on the DNS server implementation.
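For illustration, the presentation format of such a record can be parsed into the tuple described above; the function and field names below are our own, not a library API:

```python
# Parse SOA RDATA in presentation format into the
# <NSName, email, serial, refresh, retry, expiry, TTL> tuple.
def parse_soa(rdata):
    fields = rdata.split()
    names = ("nsname", "email", "serial", "refresh",
             "retry", "expiry", "ttl")
    # first two fields are names, the remaining five are integers
    values = fields[:2] + [int(f) for f in fields[2:7]]
    return dict(zip(names, values))

soa = parse_soa("ns1.example.com. hostmaster.example.com. "
                "2008041501 7200 3600 1209600 86400")
print(soa["serial"], soa["refresh"])  # 2008041501 7200
```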

SOA Timings
SOA records contain different timings which are important for synchronization between name servers and for negative caching. However, those synchronization mechanisms seem to be outdated since RFC 1996 introduced a notification mechanism to inform slaves about zone updates. Furthermore, commonly used DNS server implementations, such as TinyDNS, use out-of-band synchronization, where, contrary to the notify mechanism used in BIND, zone updates are directly pushed to slaves. We have evaluated SOA timings in relation to their server implementations. The majority of zone administrators seem to neglect proper adjustment of timings altogether and stick to the default values of their implementations. We believe this is due to the usage of push and notify mechanisms, which make the timings obsolete. We have verified our assumption by comparing default and chosen timings for the zones in our data; e.g., the default expiry for TinyDNS of 1048576 [16] is most popular for servers running this implementation.

Invalid SOA Entries
Besides investigating timings in SOA RRs, we have also analyzed the format of email addresses and the existence of the name servers that are provided with each SOA RR. We have found 7,158 out of 4,514,229 SOA RRs with invalid email addresses, e.g., a '@' was used instead of a dot, or SOA RRs pointing to name servers that did not exist. Often administrators specify a wrong DNS server name in their SOA entries. The reason is that hackers might try to do dynamic updates using brute-force methods to break authentication, which creates traffic and load on name servers.

SOA serial format
The serial number in a SOA record can be given as an unsigned integer, which is incremented with each update. It can also be a timestamp for the day, concatenated with an update number of up to 99 per day. The two formats seem to have almost equal popularity (45% vs. 55%), as shown in Figure 5.

[Figure 5 – Serial formats (increment vs. time-stamp)]
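The two serial styles can be told apart with a simple heuristic. The rule below, a ten-digit value whose first eight digits form a valid date, is our own assumption about how such a classification can be done, not necessarily the exact rule we used:

```python
from datetime import datetime

# Classify a SOA serial as a time-stamp serial (YYYYMMDDnn) or a plain
# increment serial.
def serial_format(serial):
    s = str(serial)
    if len(s) == 10:
        try:
            datetime.strptime(s[:8], "%Y%m%d")  # valid YYYYMMDD prefix?
            return "timestamp"
        except ValueError:
            pass
    return "increment"

print(serial_format(2008041501))  # timestamp (2008-04-15, update 01)
print(serial_format(42))          # increment
```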

SOA small refresh and small serial
If a SOA RR has a very small refresh value, the secondary name servers will often ask the primary server for an update. If the serial is very small, there have not been many updates before. These settings seem to conflict. Note that these values might not be used, since the server might be configured to notify secondary name servers about updates. We found 627 out of 4,514,229 SOA RRs to show such a setting (serial < 5 and refresh < 900). Note that this evaluation is only possible for serials that are incremented with each update, which is the behavior defined by the protocol.

Zones without SOA or wrong SOA
We were surprised to see that there are zones without SOA RRs, e.g., 888melody.info. This strengthens our claim that one could use DNS without any SOA records. SOA RRs are dispensable since the email address can be found in an RP (responsible person) RR and the timings are no longer needed since the notification mechanism is used by default. The start of a zone can be determined through the delegation from the parent zone. As seen in the previous section, the name server field is also effectively optional.

4.8 Wildcard Usage

The wildcard mechanism of DNS is difficult to understand. Under certain conditions, a DNS server synthesizes RRs whose name equals the queried name and whose contents are taken from wildcard RRs given in the zone. These conditions differ depending on the implementation. According to RFC 1034, wildcard records are only used iff there exists no other record which would match the queried name. Note that this rule is not limited to a matching RR type. In reality, most name servers implement wildcards in a different way. A common practice is to use wildcards whenever a pattern match on the name is possible but no record of the desired type is available. MX records are the typical use case: often all hosts in a domain should be given the same mail exchanger, hence a wildcard for the host name is used. Although other records for a given host name exist, the MX record from the wildcard is expected to be returned whenever an MX record for a host is requested. This strategy is implemented, e.g., by the Microsoft DNS server. 166,524 out of 913,189 zones (18.23%) contained wildcards. This is less than we expected since, e.g., 673,137 of these zones contained a record for the name www. We only looked at zones which we transferred completely. Given the complexity of the wildcard mechanism and the differences between implementations, one could argue that it should better not be used. However, since almost 20% of the zones make use of it, we think this mechanism has its raison d'être.
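The difference between the strict RFC 1034 rule and the common lenient behavior can be sketched as follows; the zone representation and function are our own illustration only:

```python
# `zone` maps (owner, type) pairs to RDATA; the owner "*" stands for a
# wildcard record. Strict mode uses the wildcard only when no record of
# ANY type exists for the name; lenient mode uses it whenever the
# requested type is missing.
def lookup(zone, name, rtype, lenient=False):
    if (name, rtype) in zone:
        return zone[(name, rtype)]
    name_exists = any(n == name for n, _ in zone)
    wildcard = ("*", rtype)
    if wildcard in zone and (lenient or not name_exists):
        return zone[wildcard]  # synthesize the answer from the wildcard
    return None

zone = {("www", "A"): "192.0.2.1",
        ("*", "MX"): "10 mail.example.com."}
print(lookup(zone, "www", "MX"))                # None (strict RFC 1034)
print(lookup(zone, "www", "MX", lenient=True))  # 10 mail.example.com.
```

The lenient variant matches the MX expectation described above; the strict variant matches the letter of RFC 1034.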

4.9 CNAME Problems

Length  Chains  Cycles
1       40,308  0
2       1,757   64
3       184     24
4       54      0
5       26      0
6       15      0
7       15      0
8       12      0
9       12      0
10      1       0
Table 1 – CNAME chains/cycles

DNS allows mapping CNAME records to CNAME records, which results in CNAME chains of arbitrary length. However, this is considered bad practice, since recursive resolvers have to follow such chains to find the desired data, which increases traffic and overall lookup time. CNAME chains may even inadvertently form cycles, which can furthermore crash recursive resolvers. Table 1 lists the number of CNAME chains and cycles we have found in our data sets. We have found 42,384 unique chains out of 7,069,234 CNAME records. If we consider the total amount of these records in our data set, they account for 0.59% of all CNAME records. Apart from that, we have also found 3,014 identity CNAME RRs, i.e., CNAME RRs pointing directly to themselves, such as the following:

ftp.creativemedia.fr. 86400 IN CNAME ftp.creativemedia.fr.
www.bcrst.org. 3600 IN CNAME www.bcrst.org.

The longest CNAME chain that we have found so far consisted of 99 elements, and pointed from 99.bottles-of-beer.thegrebs.com down to no.bottles-of-beer.thegrebs.com.

In order to simplify implementations of (recursive) resolvers, we suggest that name servers refuse to load zone files whose configuration contains a CNAME cycle or chain. This can be detected by simply looking up names in the local zone and following the destinations of CNAME RRs.
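A load-time check of this kind needs only a walk over the CNAME mappings of the zone. The sketch below is our own, with an assumed chain-length limit; it rejects both cycles and over-long chains:

```python
# `cnames` maps each CNAME owner to its target. Walk every target that
# stays inside the zone and reject the zone file on a cycle or an
# over-long chain.
def check_cnames(cnames, max_chain=8):
    for start in cnames:
        seen, cur = set(), start
        while cur in cnames:
            if cur in seen or len(seen) > max_chain:
                raise ValueError("CNAME cycle/chain starting at " + start)
            seen.add(cur)
            cur = cnames[cur]
    return True

print(check_cnames({"a.example.": "b.example.",
                    "b.example.": "c.example."}))  # True: short chain is fine
# check_cnames({"x.example.": "x.example."})  # would raise ValueError
```

An identity CNAME such as the ftp.creativemedia.fr example above is simply a cycle of length one and is caught by the same walk.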

4.10 www Representation

Nowadays, the hostname "www" is often meaningless. Web servers are hosted on many hosts in an organization for different purposes. The "no-www" blog argues that "www" is deprecated [17]. Website makers should configure their main sites to be accessible by domain.com as well as www.domain.com. The trivial way to achieve this is to put a wildcard record into the zone and omit www. The wildcard then matches any hostname that is not defined and leads to the web site. We determined how often this is the case, as shown in Figure 6. For this figure, we left out any in-addr.arpa zones. The second obvious way to implement the desired behavior is to create a CNAME record for the www entry. We also analyzed the popularity of this approach, as depicted in Figure 7. We share the opinion of the no-www blog. Unfortunately, it does not help to simplify DNS zones. The CNAME and the wildcard approach both have their shortcomings: CNAME configuration issues can lead to loops or chains, and the wildcard mechanism is often misunderstood and implemented in different ways by different server implementations. Another approach could be to get rid of www entries in zone files completely and let clients deal with the problem. Web browsers often suggest corrections for typos and hostnames, or even use web search engines to find a web site in the case that the given URL does not resolve to a host.

[Figure 6 – Relative popularity of the www vs. no-www approach (legend: www & wildcard, www & no-wildcard, no-www & no-wildcard, no-www & wildcard; shares: 85%, 10%, 3%, 2%)]

[Figure 7 – Relative popularity of the www approach – CNAME vs. A (legend: wildcard & A, wildcard & CNAME, www & CNAME, www & A; shares: 48%, 27%, 24%, 1%)]

4.11 Open Mail Relays

Our DNS database can be used for many kinds of analysis. We focused on DNS configuration flaws; however, DNS knowledge could also be used to study configuration flaws of many other services, such as (1) SSH servers using blacklisted host keys [18], (2) databases without proper user authentication, (3) anonymous FTP servers, (4) recursive DNS resolvers, or (5) dead servers: often names are associated with certain services, like www for web, mail for smtp, imap, or pop, login for ssh, db for some database. Whenever the expected service is not found at one of those names, the configuration may be obsolete. We have extracted a list of mail servers (from MX RRs) and tested those for open mail relays: out of 208,822 mail relays, we have found 101 open ones that can easily be abused. Table 2 shows the results of our open mail relay test.

#        Success/error message
101      Success – open mail relays
24,132   Parsing errors
293      Authentication errors
109,051  Invalid address errors
Table 2 – Open mail relay test
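A real open-relay probe speaks SMTP over the network; the deterministic part is interpreting the reply to a RCPT TO command for a foreign address. The sketch below models only that interpretation step; the verdict labels and the chosen reply codes are our assumptions, not our measurement harness:

```python
# After issuing RCPT TO with a foreign recipient address, a 2xx SMTP
# reply means the server is willing to relay the message.
def relay_verdict(rcpt_reply_code):
    if 200 <= rcpt_reply_code < 300:
        return "open relay"
    if rcpt_reply_code in (550, 551, 554):
        return "relaying denied"
    return "inconclusive"

print(relay_verdict(250))  # open relay
print(relay_verdict(554))  # relaying denied
print(relay_verdict(450))  # inconclusive (temporary failure)
```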

5 CONCLUSION

Current DNS operations show that DNS is too complicated to be implemented and configured correctly. Successful implementations are very mature, like ISC BIND, or simplified and modularized, like TinyDNS.

We looked at DNS data to find out which features of DNS are actually used and which errors are common. Where errors are typically made in DNS features that have lost their meaning over time, because of protocol extensions or changes in Internet usage or user behavior, we propose simplifying implementations by omitting these features. We are aware of the fact that for a distributed application to work, all participants have to agree on a protocol. However, we observed that for DNS this is already not the case: some servers emit faulty RRs or refuse to supply certain RRs. Therefore, we expect DNS resolvers to be fault-tolerant enough to cope with a few further changes.

We propose that DNS server implementations not accept CNAME chains if they can be detected locally during startup of the server. We know that this adds some lines of code, but it should simplify operations and maintenance. DNS servers should not start up or reload when zone files contain bugs; zone administrators often have wrong expectations and might ignore warnings.

AXFR and IXFR features could be omitted in server implementations. Most servers have zone files which could be updated with other mechanisms, as shown by the TinyDNS manual [3]. This would not just remove some lines of code, it would also prevent administrators from allowing zone transfers to everyone.

Name server RRs are often redundant. Therefore, there may be inconsistencies when they are updated. We propose to store name server entries in only one place. A recursive resolver would find them in the parent zone, where they are needed for delegation. In the zone itself, additional servers might optionally be defined for internal use. An update would then also take effect atomically at a single location.

If no dots are used in a host name, which is usually the case, SOA records are no longer needed. NS records are enough to define a delegation to a zone, and SOA timings have not been needed since the introduction of the NOTIFY mechanism. The name server and e-mail contact are often not correct anyway; NS and RP records can replace them. The only field in a SOA record that is actually used is the minimum TTL. However, according to the IETF mailing list, the NSEC RR will have a TTL with the same semantics. Hence, all fields of a SOA record are obsolete or become obsolete with DNSSEC.

Furthermore, DNS servers should never propagate any invalid TTL. This may crash resolvers and is easy to prevent. For obsolete RR types such as HINFO, which can be used to leak confidential information, there should be a type blacklist. A DNS server could refuse to propagate RRs with blacklisted types. This forces the administrator to remove the RR type from the blacklist explicitly and think about its purpose once more.

Future work: The results found in our analysis will be used to detect fraud and maintain consistency in a DNS backup repository which serves DNS records in the case of DNS outages.

REFERENCES
[1] Jung, Jaeyeon, et al. DNS Performance and the Effectiveness of Caching. ACM SIGCOMM, 2001.
[2] Unbound. http://unbound.net/.
[3] djbdns. http://cr.yp.to/djbdns.html.
[4] Barr, D. RFC 1912 – Common DNS Operational and Configuration Errors. 1996. http://www.faqs.org/rfcs/rfc1912.html.
[5] Brownlee, Nevil, Claffy, kc, and Nemeth, Evi. DNS Measurements at a Root Server. Proceedings of IEEE GlobeCom, San Antonio, TX, 2001. http://www.caida.org/outreach/papers/2001/DNSMeasRoot/dmr.pdf.
[6] Vixie, Paul. DNS Complexity. ACM, NY, USA, 2007. http://doi.acm.org/10.1145/1242489.1242499.
[7] Dean, Jeffrey and Ghemawat, Sanjay. MapReduce: Simplified Data Processing on Large Clusters. 2004. http://labs.google.com/papers/mapreduce.html.
[8] Ghemawat, Sanjay, Gobioff, Howard, and Leung, Shun-Tak. The Google File System. 2003. http://labs.google.com/papers/gfs-sosp2003.pdf.
[9] FreePastry. http://freepastry.rice.edu/.
[10] Pappas, Vasileios, et al. Impact of Configuration Errors on DNS Robustness. SIGCOMM, Portland, OR, 2004. http://irl.cs.ucla.edu/papers/vp_sigcomm04.pdf.
[11] Fuller, V. RFC 4632 – Classless Inter-domain Routing (CIDR): The Internet Address Assignment and Aggregation Plan. 2006. http://www.ietf.org/rfc/rfc4632.txt.
[12] fpdns. 2005. http://code.google.com/p/fpdns/.
[13] ISC Domain Survey. http://www.isc.org/index.pl?/ops/ds.
[14] Rose, S. and Nakassis, A. Minimizing Information Leakage in the DNS. IEEE, vol. 22, no. 2, pp. 22–25, March–April 2008. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=4476060&arnumber=4476067.
[15] YouTube.com knock out. http://www.itwire.com/content/view/16832/53.
[16] djbdns – Default values for timings. http://cr.yp.to/djbdns/tinydns-data.html.
[17] No WWW blog. http://no-www.org/.
[18] OpenSSL problem (Debian DSA-1571). http://www.debian.org/security/2008/dsa-1571.