
Azure Latency Pilot Study: Part 4 – Variation in results with time

So far in this study, we have only considered the aggregated measurements and how they vary depending on the VMs used. In this post, we will consider how the RTT measurements may have changed over time.

The following 5 plots show the RTT as observed by each machine over time. The y-axis is capped at 12 ms to aid visibility of the majority of results. Using the CDF of all results from the second part of this series, we can see that these figures include around 85% of all data points.

[Figures: RTT over time as observed by each of the 5 VMs, y-axis capped at 12 ms]
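
As an aside, a plot like these could be produced with matplotlib, the library used by the analysis scripts (see part 2). This is a minimal sketch; the results.csv file and its timestamp, src, dst, average-RTT column layout are hypothetical.

import csv
from collections import defaultdict
import matplotlib.pyplot as plt

series = defaultdict(list)  # src machine -> [(time, rtt), ...]
with open("results.csv") as f:
    for ts, src, dst, rtt in csv.reader(f):
        series[src].append((float(ts), float(rtt)))

for src, points in sorted(series.items()):
    times, rtts = zip(*sorted(points))
    plt.figure()
    plt.plot(times, rtts, ".", markersize=2)
    plt.ylim(0, 12)          # cap the y-axis at 12 ms, as in the figures
    plt.xlabel("time (s)")
    plt.ylabel("RTT (ms)")
    plt.title("RTT over time, measured from VM %s" % src)
    plt.savefig("rtt_time_vm%s.png" % src)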

The most striking aspect of these figures is the two-hour gap in the results. The source of this failure is still unknown: Azure reported no machine or network failures during this time, and the system recovered and continued to take measurements without any manual intervention. Otherwise, between 2 and 5 ms the RTT measurements appear fairly randomly distributed, although as we saw last time their distribution is approximately normal (as we would expect).

Overall, I am not particularly happy with the results collected so far and hope the next round of tests will generate better results. Tomorrow, we will compare how these results differ from other studies.

Azure Latency Pilot Study: Part 3 – Machine specific results

In this post we will be looking at the results of the Azure Latency Pilot Study described last week. Yesterday, we started by looking at the aggregated results and found that the measured RTT was larger than expected. Today, we will look at how the results vary depending on which VMs the measurements were taken between. It may be the case that we can infer something about the topology between VMs, for example whether VMs are on the same physical host or in the same rack.

The table below shows the RTT between each pair of VMs. The first server in the pair, labelled src, is the one which initiated the ping. The table includes each machine pinging itself, for comparison.

src dst mean s.d. min 25th 50th 75th max (RTT in ms)
1 1 51.0 65.1 1.7 4.3 65.25 71.6 1004.0
1 2 12.1 89.7 1.7 2.9 3.6 4.3 1004.7
1 3 9.5 66.5 1.6 3.4 4.1 4.8 1004.7
1 4 7.3 52.3 1.3 3.0 3.5 4.0 1004.1
1 5 11.8 84.9 1.6 3.0 3.6 4.4 1242.2
2 1 6.2 12.7 1.6 3.0 3.7 4.425 88.4
2 2 16.7 65.8 1.1 3.0 3.7 4.6 1002.9
2 3 15.0 95.6 1.4 3.0 3.7 4.4 1004.9
2 4 9.6 62.0 1.4 2.9 3.5 4.3 1003.6
2 5 15.3 80.0 1.3 2.9 3.5 4.3 1002.5
3 1 48.5 44.9 2.0 4.5 64.2 70.6 754.9
3 2 7.0 48.9 1.4 2.9 3.6 4.4 1005.0
3 3 3.9 4.2 1.6 3.1 3.8 4.6 124.9
3 4 5.8 32.0 1.3 2.9 3.4 4.0 628.6
3 5 6.7 52.5 1.5 2.9 3.8 4.4 1229.3
4 1 48.8 48.3 1.9 4.7 62.1 67.3 1003.0
4 2 7.7 59.5 1.3 2.8 3.6 4.4 1003.8
4 3 9.0 60.1 1.6 3.3 4.1 4.8 1004.0
4 4 5.2 29.3 1.2 2.7 3.3 3.8 754.6
4 5 15.6 104.7 1.5 2.8 3.6 4.3 1035.0
5 1 50.9 69.8 2.0 4.5 63.8 69.6 1005.3
5 2 9.4 70.9 1.3 2.9 3.7 4.3 1003.8
5 3 8.6 57.6 2.0 3.3 4.1 4.8 1003.7
5 4 6.5 47.7 1.3 2.7 3.4 4.0 1003.0
5 5 6.8 49.2 1.2 2.5 3.2 4.1 1023.0

All VMs other than VM 2 have high latency to VM 1. In fact, we see a median RTT of around 65 ms from VM 1 to itself. This warrants further investigation into how hping3 is measuring latency. Removing VM 1 from the equation, we observe reasonable uniformity in the RTTs between VMs 2 to 5. Between these, the min, 25th, 50th and 75th percentiles are all similar, while the maximum varies widely, which is to be expected.
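
In outline, per-pair statistics like those in the table above can be computed with numpy (the library used by the analysis scripts). This is a sketch, assuming the same hypothetical results.csv layout as before (timestamp, src, dst, average RTT).

import csv
from collections import defaultdict
import numpy as np

pairs = defaultdict(list)
with open("results.csv") as f:
    for ts, src, dst, rtt in csv.reader(f):
        pairs[(src, dst)].append(float(rtt))

print("src dst   mean   s.d.   min  25th  50th  75th     max")
for (src, dst), rtts in sorted(pairs.items()):
    a = np.array(rtts)
    p25, p50, p75 = np.percentile(a, [25, 50, 75])
    print("%s   %s   %6.1f %6.1f %5.1f %5.1f %5.1f %5.1f %7.1f"
          % (src, dst, a.mean(), a.std(), a.min(), p25, p50, p75, a.max()))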

I would like to take a closer look at how the distribution of RTT measurements varies between VMs 2 to 5. The table below shows the RTT between each pair of these VMs at various percentile points.

src dst 80th 90th 95th 98th 99th (RTT in ms)
2 2 5.1 41.8 47.2 124.6 150.7
2 3 4.6 5.0 5.5 34.5 429.5
2 4 4.5 4.9 5.5 47.4 78.5
2 5 4.5 5.4 60.4 91.8 207.8
3 2 4.5 4.8 5.0 5.5 8.9
3 3 4.7 5.0 5.1 5.4 5.8
3 4 4.2 4.5 4.8 5.2 6.1
3 5 4.6 4.9 5.2 5.9 6.6
4 2 4.5 4.8 5.0 5.2 5.8
4 3 5.0 5.2 5.4 5.7 6.4
4 4 4.0 4.3 4.7 5.1 5.8
4 5 4.5 4.8 5.2 6.1 762.9
5 2 4.5 4.8 4.9 5.1 6.1
5 3 4.9 5.2 5.4 5.6 6.8
5 4 4.1 4.5 4.9 5.3 6.3
5 5 4.3 4.6 4.8 5.8 7.2

Yesterday, we saw that the 90th percentile for the dataset as a whole was 61.4 ms; this is not representative of the RTT between VMs 2-5. We can see this more clearly using the following 5 CDFs, each graph representing the round trip time from one machine to each of the others (and itself).

[Figures: CDFs of RTT from each of the 5 machines to every machine, including itself]

Machine 1 is a clear outlier from the perspective of machines 1, 3, 4 and 5. The observed RTT does not seem to be symmetric; again, this asymmetry warrants further investigation. The stepping in the CDFs occurs because the RTT is recorded to one decimal place.

Next time, we will look at how the observed RTT varies with time.

Azure Latency Pilot Study: Part 2 – Aggregated results

In this post we will be looking at the results of the Azure Latency Pilot Study described last week. We start by looking at the aggregate results, disregarding the time a measurement was taken and which machines it was taken between.

The 22332 data points have been processed in Python3, in particular using the matplotlib and numpy libraries. The scripts are available in the azure-measurements repository. They currently use only the average round trip time reported by hping3, averaged over 10 pings.
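
In outline, the aggregate statistics and the CDF below can be computed along these lines. This is a sketch; the rtts.txt dump file (one average RTT in ms per line) is hypothetical.

import numpy as np
import matplotlib.pyplot as plt

rtts = np.loadtxt("rtts.txt")

print("min  %6.1f ms" % rtts.min())
print("max  %6.1f ms" % rtts.max())
print("mean %6.1f ms" % rtts.mean())
print("std  %6.1f ms" % rtts.std())
for p in (25, 50, 75, 90, 95, 99):
    print("%2dth percentile: %5.1f ms" % (p, np.percentile(rtts, p)))

# Empirical CDF: sorted values against cumulative fraction.
xs = np.sort(rtts)
ys = np.arange(1, len(xs) + 1) / len(xs)
plt.plot(xs, ys)
plt.xlabel("RTT (ms)")
plt.ylabel("cumulative fraction")
plt.savefig("cdf.png")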

property RTT (ms)
min 1.1
max 1242.2
mean 15.9
std 66.5

The results in the table above are much larger than I expected. Given the large standard deviation and very large maximum value, it is likely that a few large measurements have skewed the results. Let's take a look at the cumulative distribution function (CDF) and percentile points to see if this is the case.

percentile RTT (ms)
25th 3.0
50th 3.8
75th 4.7
90th 61.4
95th 69.7
99th 87.3

[Figure: CDF of all RTT measurements]

As expected, some large measurements have skewed the results. However, the proportion of measurements which are considered large is much greater than I expected. This warrants further investigation.

Before that, let's take a closer look at how the majority of values are distributed. Given the central limit theorem and the sufficiently large sample size, we would expect to see a normal distribution. In simulators such as Raft Refloated, we simulate latency as normally distributed with given parameters, discarding values below a threshold. We can take a closer look at the probability density function (PDF) and see if this is a reasonable approximation.

[Figure: PDF of RTT measurements with a fitted normal distribution overlaid]

The green bars represent the probability of each RTT. We see an approximately normal distribution, although it is clear that this distribution doesn't have the same parameters as the data set as a whole. The red line shows a normal distribution with mean 3.6 ms and standard deviation 1 ms. This red line appears to be a reasonable approximation and could be used in simulation.
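
A figure like this can be reproduced with numpy/matplotlib. This is a sketch using the mean 3.6 ms / s.d. 1 ms parameters eyeballed above; the rtts.txt dump is the same hypothetical file as before.

import numpy as np
import matplotlib.pyplot as plt

rtts = np.loadtxt("rtts.txt")
body = rtts[rtts < 12]        # focus on the bulk of the data

plt.hist(body, bins=100, density=True, color="green", alpha=0.5)
x = np.linspace(0, 12, 200)
mu, sigma = 3.6, 1.0          # parameters eyeballed from the histogram
pdf = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
plt.plot(x, pdf, "r")
plt.xlabel("RTT (ms)")
plt.ylabel("probability density")
plt.savefig("pdf.png")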

Next post, we will look at how the measured RTT differs depending on which of the 5 machines the measurement was taken between.

Azure Latency Pilot Study: Part 1 – Experimental Setup

This post, the first in a short series, discusses a simple overnight pilot study measuring network characteristics on Microsoft Azure. This study was to be the first of many: its purpose was to test the tools and gather some initial measurements, thus informing the design of more substantial measurement studies in the future.

Motivation

Ultimately, I would like to answer the following questions about today’s cloud offerings:
1. How often do VMs fail in practice? What is the typical downtime? And to what extent are these failures correlated with each other? How does the failure rate vary with different price tiers and different cloud providers? For example, comparing normal instances to low-cost instances like Amazon EC2 Spot Instances or Google Cloud's Preemptible Instances.
2. How often do network partitions occur? What types of partitions do we see in practice? Do they isolate individual nodes or divide a cluster into a few disconnected groups? Do partial partitions, which we believe can cause issues for protocols such as Raft, occur in practice?
3. What are the latency characteristics between VMs? How about between different datacenters run by the same provider or by different providers?
4. How do today's open source fault-tolerant data stores such as LogCabin or CorfuDB perform in practice? Is this sufficient to meet application demands? How quickly can such systems heal after failures?
5. How can we use the above to configure systems such as Raft Refloated or Coracle to simulate real world deployments of fault-tolerant applications?

Experimental Setup

The experiment was run across virtual machines in Azure. Azure is Microsoft's cloud offering and a competitor to services such as Google Cloud and Amazon EC2. Azure was chosen over the competition simply because we had access to free credits. This was the first time I had used it (apart from hosting the Coracle SIGCOMM demo) and it was relatively straightforward to perform simple operations, though not without its difficulties. For the management of the virtual machines, I mostly used the Azure ASM CLI, a command line utility for managing VMs; it is written in JavaScript and is open source. This first test used 5 'small' machines in North Europe, running overnight.

The machines themselves were simple Ubuntu 14.04 instances (and yes, Azure does have Linux VMs too). One machine was manually set up, captured and cloned. Setup involved adding the public key of the data collection server, cloning the measurement scripts and running them as a service, installing a few dependencies, and running sudo waagent -deprovision before capturing the VM image.

Measurements

The measurement script simply TCP pings (sends a SYN and waits for the response) all the other machines in the test set every 20 seconds and writes the results, with the test time, to disk. It is worth noting that Azure drops ICMP traffic: whilst they acknowledge this is the case for external traffic, many people (myself included) could not get internal ICMP traffic through either. The tool used was hping3, which reports the min, max and average round trip time from 10 successive pings.
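
A minimal sketch of such a measurement loop follows. The host list is hypothetical, and the parsing of hping3's ping-style summary line ("round-trip min/avg/max = ...") is a best guess that may need adjusting; the hping3 flags used are -S (SYN), -p (port) and -c (count).

import re
import subprocess
import time

HOSTS = ["10.0.0.5", "10.0.0.6", "10.0.0.7", "10.0.0.8"]  # the other VMs
PORT, COUNT, INTERVAL = 80, 10, 20

while True:
    for host in HOSTS:
        proc = subprocess.run(
            ["hping3", "-S", "-p", str(PORT), "-c", str(COUNT), host],
            capture_output=True, text=True)
        m = re.search(r"round-trip min/avg/max = ([\d.]+)/([\d.]+)/([\d.]+)",
                      proc.stdout + proc.stderr)
        if m:
            mn, avg, mx = m.groups()
            with open("results.csv", "a") as f:
                f.write("%f,%s,%s,%s,%s\n" % (time.time(), host, mn, avg, mx))
    time.sleep(INTERVAL)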

The data collection server waited until after the end of the measurement study to collect data, to avoid interfering with the measurements. The collection script simply pulls the data from the measurement servers using scp (and the asymmetric keys established earlier). Other management jobs, such as cleaning the data files or updating measurement scripts, were done using parallel-ssh.
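
The collection step amounts to something like the following sketch; the hostnames, username and paths are hypothetical.

import subprocess

VMS = ["vm1", "vm2", "vm3", "vm4", "vm5"]

for vm in VMS:
    # Pull each VM's results file over scp, relying on the key pair
    # installed when the image was captured.
    subprocess.run(
        ["scp", "azureuser@%s:results.csv" % vm, "data/%s.csv" % vm],
        check=True)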

Results

The experiment ran between 19:00 and 08:50, across 5 virtual machines. In total, 22332 measurements were involved in the analysis, ranging from 1.1 ms to 1242 ms. The raw data is available online, as are all the scripts used.

Tomorrow, we will look at some analysis of these results.

Part 3: Running your own DNS Resolver with MirageOS

This article is the third in the "Running your own DNS Resolver with MirageOS" series. In the first part, we used the ocaml-dns library to look up the hostname corresponding to an IP address using its Dns_resolver_mirage module. In the second part, we wrote a simple DNS server which serves RRs from a zone file using the Dns_server_mirage module.

Today, in the third part, we will combine the above to write a simple DNS resolver which relays queries to another DNS resolver. Then we will compose this with our simple DNS server from last week, to build a resolver which first looks up queries in the zone file and, if unsuccessful, relays the query to another DNS resolver.

As always, the complete code for these examples is in ocaml-dns-examples.

3.1 DNS Forwarder

When writing our simple DNS server, we used a function called serve_with_zonefile in Dns_server_mirage to service incoming DNS queries. Now we are going to remove a layer of abstraction and instead use serve_with_processor:

val serve_with_processor: t -> port:int -> processor:(module PROCESSOR) -> unit Lwt.t
val serve_with_zonefile : t -> port:int -> zonefile:string -> unit Lwt.t

Now, instead of passing the function a simple string representing the filename of a zonefile, we pass a first-class module satisfying the PROCESSOR signature. We can generate such a module by writing a process and using processor_of_process:

type ip_endpoint = Ipaddr.t * int

type 'a process = src:ip_endpoint -> dst:ip_endpoint -> 'a -> Dns.Query.answer option Lwt.t

module type PROCESSOR = sig
  include Dns.Protocol.SERVER

  (** DNS responder function.
      @param src Server sockaddr
      @param dst Client sockaddr
      @param Query packet
      @return Answer packet
  *)
  val process : context process
end

type 'a processor = (module PROCESSOR with type context = 'a)

val processor_of_process : Dns.Packet.t process -> Dns.Packet.t processor

So given a Dns.Packet.t process, which is a function of type:

src:ip_endpoint -> dst:ip_endpoint -> Dns.Packet.t -> Dns.Query.answer option Lwt.t

We can now service DNS packets. If we assume that myprocess is a function of this type, we can service DNS queries with the following unikernel:

open Lwt
open V1_LWT
open Dns
open Dns_server

let port = 53

module Main (C:CONSOLE) (K:KV_RO) (S:STACKV4) = struct

  module U = S.UDPV4
  module DS = Dns_server_mirage.Make(K)(S)

  let myprocess ~src ~dst packet = ...

  let start c k s =
    let server = DS.create s k in
    let processor = ((Dns_server.processor_of_process myprocess) :> (module Dns_server.PROCESSOR)) in 
    DS.serve_with_processor server ~port ~processor
end

Now we will write an implementation of myprocess which will service DNS packets by forwarding them to another DNS resolver and then relaying the response.

Recall from part 1 that you can use the resolve function in Dns_resolver_mirage to do this. All that remains is to wrap the invocation of resolve in a function of type Dns.Packet.t process, which can be done as follows:

let process resolver ~src ~dst packet =
  let open Packet in
  match packet.questions with
  | [] -> (* we are not supporting QDCOUNT = 0 *)
      return None
  | [q] ->
      DR.resolve (module Dns.Protocol.Client) resolver
        resolver_addr resolver_port q.q_class q.q_type q.q_name
      >>= fun result ->
      return (Some (Dns.Query.answer_of_response result))
  | _ -> (* we are not supporting QDCOUNT > 1 *)
      return None

3.2 DNS Server & Forwarder

[this part requires PR 58 on ocaml-dns until it is merged in]

We will extend our DNS forwarder to first check a zonefile; this is achieved with just 3 extra lines:

...
DS.eventual_process_of_zonefiles server [zonefile]
>>= fun process ->
let processor = (processor_of_process (compose process (forwarder resolver)) :> (module Dns_server.PROCESSOR)) in
...

Here we are using compose to combine two processes: one called process, generated from the zonefile, and one called forwarder, from the forwarding code in the last section.

Next time, we will extend our DNS resolver to include a cache.


VPN providers are hijacking DNS

Are you thinking of using a VPN to bypass DNS hijacking by your ISP (as described in Redirecting DNS for Ads and Profit and Middleboxes considered harmful: DNS Edition)?

Then think again.

A new paper titled “A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients” by Vasile Claudiu Perta, Marco Valerio Barbera, Gareth Tyson, Hamed Haddadi and Alessandro Mei, demonstrates that many commercial VPN operators are at it too.

The paper will appear at The 15th Privacy Enhancing Technologies Symposium and is available online now (open access copy linked).

Paper Notes: Redirecting DNS for Ads and Profit

Redirecting DNS for Ads and Profit is one of a collection of papers from the ICSI team drawing on results from Netalyzr, their network diagnosis tool. The paper focuses on the 66K session traces in which DNS error traffic has been monetized, and calls out Paxfire for their role in this area. It concentrates on NXDOMAIN wildcarding and search engine proxying (see my past post on how middleboxes interfere with DNS for an introduction to these techniques). The authors acknowledge the unrepresentative sample of Netalyzr users and the high number of sessions using OpenDNS or Comcast DNS resolvers.

NXDOMAIN wildcarding is not encouraged by ICANN and can have serious implications for non-web-browser DNS traffic (some resolvers only rewrite lookups starting with www. to try to prevent this). In many cases, redirection servers do not simply use an HTTP 302 redirect.

The highlight of this paper was the fake NXDOMAIN opt-out offered by Paxfire, where the ad server simply served the user’s browser’s error page.

DNSSEC may provide authenticated denial of existence, but this doesn't necessarily fix the problem: for example, Xerocole offers DNS resolvers with the option to simply rewrite DNSSEC-signed NXDOMAIN responses without a signature, assuming the client will not validate DNSSEC.

OpenDNS wildcards NXDOMAIN and SERVFAIL errors, as well as directing users to the redirection server if the queried server supports only IPv6. This is provided as an option in D-Link routers.

The study observed more than 12 ISPs using Squid proxies to redirect search engine traffic. The study did not observe resolver-independent NXDOMAIN redirection, but did see NATs redirecting all DNS requests (regardless of resolver) to the configured recursive resolver, thus creating in-path NXDOMAIN rewriting if the new resolver uses NXDOMAIN wildcarding.

This paper is a fun, light read that I would recommend, though its results are a bit out of date now, as it used data from Jan 2010 to May 2011.

Part 2: Running your own DNS Resolver with MirageOS

Last time, we wrote a simple "dig-like" unikernel. Given a domain and the address of a nameserver, the unikernel resolved the domain by asking the nameserver and printed the result to the console.

Today, we will look at another way to resolve a DNS query: being a DNS server. This is useful in its own right, but it also allows us to do cool things with our local DNS resolver, such as locally overriding DNS names and resolving .local names, both of which we will add to our DNS resolver another day.

Today we use features only added to the ocaml-dns library in version 0.15 (currently PR #52), so if you do not have this version or later, update OPAM or pin the master branch on GitHub.

Building a DNS server with MirageOS is simple; look at the following code:

open Lwt
open V1_LWT
open Dns
open Dns_server

let port = 53
let zonefile = "test.zone"

module Main (C:CONSOLE) (K:KV_RO) (S:STACKV4) = struct

  module U = S.UDPV4
  module DNS = Dns_server_mirage.Make(K)(S)

  let start c k s =
    let t = DNS.create s k in
    DNS.serve_with_zonefile t ~port ~zonefile
end

The above code will serve DNS requests on port 53, responding with the resource records (RRs) in test.zone. We have provided an example zone file in the repo with the code from this guide. To use this unikernel, we also need to edit the config.ml file from yesterday.

open Mirage

let data = crunch "./data"

let handler =
  foreign "Unikernel.Main" (console @-> kv_ro @-> stackv4 @-> job)

let ip_config:ipv4_config = {
  address= Ipaddr.V4.make 192 168 1 2;
  netmask= Ipaddr.V4.make 255 255 255 0;
  gateways= [Ipaddr.V4.make 192 168 1 1];
}

let direct =
  let stack = direct_stackv4_with_static_ipv4 default_console tap0 ip_config  in
  handler $ default_console $ data $ stack

let () =
  add_to_ocamlfind_libraries ["dns.mirage";"dns.lwt-core"];
  add_to_opam_packages ["dns"];
  register "dns" [direct]

We are using crunch to access the zone file in the data directory. As explained in part 1, this config file is specific to my network setup for Xen backends and can easily be generalised.

You can now test your DNS server and see it work:

$ dig @192.168.1.2 ns0.d1.signpo.st.


Middleboxes considered harmful: DNS Edition

This article is a brief overview of how middleboxes interact with DNS traffic. In particular, I'm interested in finding answers to the following: Will middleboxes drop/modify DNS traffic, and what is the purpose of this: stopping abuse, security, buggy implementations, advertising or censorship? Does using your own stub resolver and recursive nameserver therefore free you from the above issues? Do DNS recursive nameservers with caching respect the TTL? And ultimately, how does all this affect the deployment of DNS extensions such as DNSSEC, DNSCurve, DynDNS and EDNS?

My particular interest in DNS is how research projects for naming edge network devices (e.g. HIP, UIA, UIP, MobilityFirst, CoDoNS, FERN) will actually fare in the wild, and whether using or extending DNS is a way around such issues. The title of this article is a play on the title of the paper describing the Delegation-Oriented Architecture.

Applications & Stub Resolvers

Stub resolvers are in essence the clients of the Domain Name System (DNS); they sit between applications and DNS, usually run locally by the OS and interfaced with via gethostbyname. The stub resolver is responsible for forming and parsing DNS packets for the application, offering it a simple API for resolving domain names into IP addresses. The simplicity of this API is also its downfall: for example, gethostbyname has few error codes compared to DNS's RCODEs. Proponents of DNSSEC hope that web browsers will present DNS validation failures to users in the same way that TLS failures are presented. At the moment, however, for many stub resolvers the only possible error codes (often called h_errno) are HOST_NOT_FOUND, TRY_AGAIN, NO_RECOVERY and NO_ADDRESS. The application may never get even this much information, depending on the language API, such as Unix.gethostbyname in OCaml's standard library.
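
To illustrate how little of DNS's error detail this API exposes (using Python here rather than C, purely for brevity): a failed lookup surfaces as a single exception, and the underlying DNS RCODE is not available to the caller.

import socket

try:
    print(socket.gethostbyname("no-such-host.example"))
except socket.gaierror as e:
    # e.errno is one of a handful of getaddrinfo error codes; whether the
    # failure was NXDOMAIN, SERVFAIL, a timeout, etc. is lost.
    print("lookup failed:", e)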

A common Linux default is to request AAAA records as well as A records, even if the host doesn't have an IPv6 address. Kreibich et al found that 13% of all sessions requested AAAA records: 42% of Linux sessions requested AAAA records, compared to 10% of non-Linux sessions, backing up this theory.

Some stub resolvers and client applications cache DNS responses; interestingly, some do not respect TTLs. For example, the default cache time for nscd (enabled by default on some Linux distros) is 15 mins regardless of TTL, whereas Internet Explorer caches all records for 30 mins. It is important that caches respect short TTLs, as these are increasingly utilised by content distribution networks and dynamic DNS. A quick check on my own browser (go to chrome://net-internals/#dns in Chrome) shows that the browser cache contains 73 active A/AAAA records and 263 expired records.

Weaver et al. and Kreibich et al. studied how middleboxes interact with DNS traffic using the Netalyzr tool. Weaver et al. concluded that applications wishing to use non-standard resource records (RRs), including TXT records or DNSSEC, should use their own DNS resolver and bypass the stub resolver provided by the host. It is often not possible for an application to override the stub resolver's choice of DNS resolver, which is normally a DNS resolver at the gateway, with a host of problems (see the next section). The study also concluded that host stub resolvers often lack failovers (e.g. retrying requests over TCP) for common issues such as: the gateway resolver not supporting the full DNS protocol, the gateway resolver not being trustworthy, the gateway resolver being slow, and the network gateway/middleboxes filtering UDP traffic.

In-Gateway Resolvers

The gateway resolver is a common (but not necessary) stage in DNS resolution (there may also be multiple stages of gateway resolvers). The stub resolver running on the local host will usually forward the DNS query to the resolver(s) whose address it was given by the DHCP lease when connecting to the local router. This address is normally a DNS resolver running at the gateway (at the .1 or .254 address in the local subnet, e.g. 192.168.1.1). I say "usually" as this can be overridden: for example, some people instead opt to use a public DNS server such as Google's or OpenDNS, or to run their own resolver, though this is of course rare. Furthermore, not all gateways run DNS resolvers; in this case they typically refer hosts straight to the ISP's resolvers. Gateway resolvers have the advantage that they can enable local resolution of domains such as .local, or domain names for router administration (e.g. www.routerlogin.net for Netgear devices).

Weaver et al. tested whether in-gateway resolvers correctly processed various DNS queries, finding the following success rates: AAAA lookups (96%), TXT RRs (92%), unknown RRs (91%) and EDNS0 (91%). They also found that a significant number of in-gateway resolvers are externally usable, opening the gateway to DoS attacks.

ISPs (& Other) Resolvers

The ISP's resolver is a common (but not necessary) stage in DNS resolution (there may also be multiple stages of ISP resolvers). The ISP resolver is often the first resolver responsible for performing the actual resolution, instead of just forwarding/proxying queries.

Despite their widespread deployment and dedicated management, these resolvers are not without their problems. Weaver et al found that 4% of sessions did not implement source port randomisation, only 55% of sessions exhibited EDNS0 usage, and 4% of sessions implemented 0x20 whilst 94% propagated capitalisation unmodified. Kreibich et al found that 49% of sessions used DNSSEC-enabled resolvers.

[Figure: DNSSEC-capable resolvers, by Matthäus Wander (https://www.vs.uni-due.de/wander/20121229_Secure_Name_Resolution.pdf)]

NXDOMAIN wildcarding is where resolvers replace NXDOMAIN error responses (returned, for example, when a user mistypes a domain) with valid DNS responses pointing to another site, often one serving advertising. Weaver et al observed this in 24% of the sessions surveyed. This should only be done on queries from web browsers, though this is not always the case. This may also interact with web browsers which treat NXDOMAIN errors specially, e.g. if the query fails due to NXDOMAIN, then suggest some likely alternatives. Worryingly, Weaver et al also observed a few cases of SERVFAIL wildcarding, IPv4 addresses in responses where only IPv6 was requested, and resolvers ignoring additional answer RRs. Some resolvers redirect queries for some search engines, whilst others use malware to inject advertising. Kreibich et al found that essentially all resolvers respected 0 and 1 second TTLs.

Another interesting area is the treatment of RRs from the Authority and Additional RR sets. For example, glue records are A RRs in the Additional section, added to an answer with NS RRs that place the nameservers under the domain they control; without these additional RRs we would have a circular dependency. Kreibich et al found that 61% of sessions accept glue records when the glue records refer to authoritative nameservers, 25% accept A records corresponding to CNAMEs contained in the reply, and 21% of sessions accept any glue records present in the Additional field, with the rest only doing so for records for subdomains of the authoritative server.

Other ISP-controlled middleboxes

It is clear that resolvers (stub, in-gateway and ISP/public) do not reliably handle all DNS traffic and all its extensions. Users could opt to run their own resolvers and stub resolvers; would this mean that their traffic is free from modification by middleboxes? Of course not.

ISPs have been known to hijack traffic to port 53, directing it to their own DNS resolvers or simply dropping it, blocking the use of third-party DNS resolvers. Some public resolvers support alternative ports (e.g. OpenDNS supports port 5353), but this can be difficult to configure, as it cannot easily be expressed in /etc/resolv.conf. There is also some evidence of gateways provided by ISPs redirecting traffic to port 53 to the ISP's DNS resolvers.
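
For instance, querying a resolver on a non-standard port has to be done above the stub resolver. A sketch using the third-party dnspython (2.x) library, with the OpenDNS address purely illustrative:

import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ["208.67.222.222"]          # an OpenDNS resolver
resolver.port = 5353                               # alternative DNS port

for rr in resolver.resolve("example.com", "A"):
    print(rr)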

TLDs and Root Servers

The root DNS server (or rather, the 504 servers behind 13 addresses) is the heart of the DNS. The root has supported DNSSEC since 2010 but will not support DNSCurve. Likewise, many of the TLDs support DNSSEC and will not support DNSCurve. On the whole, these seem to be fairly well managed and free of major issues.