Monthly Archives: February 2015

Middleboxes considered harmful: DNS Edition

This article is a brief overview of how middleboxes interact with DNS traffic. In particular, I’m interested in answering the following: Will middleboxes drop or modify DNS traffic, and what is the purpose of this: stopping abuse, security, buggy implementations, advertising or censorship? Does using your own stub resolver and recursive nameserver therefore free you from the above issues? Do caching DNS recursive nameservers respect the TTL? And ultimately, how does all this affect the deployment of DNS extensions such as DNSSEC, DNSCurve, DynDNS and EDNS?

My particular interest in DNS is how research projects for naming edge network devices (e.g. HIP, UIA, UIP, MobilityFirst, CoDoNS, FERN) will actually fare in the wild, and whether using or extending DNS is a way around such issues. The title of this article is a play on the title of the paper describing the Delegation-Oriented Architecture.

Applications & Stub Resolvers

Stub resolvers are in essence the clients of the Domain Name System (DNS): they sit between applications and DNS, are usually run locally by the OS and are interfaced with via gethostbyname. The stub resolver is responsible for forming and parsing DNS packets for the application, offering a simple API to applications for resolving domain names into IP addresses. The simplicity of this API is also its downfall; for example, gethostbyname has few error codes compared to DNS’s RCODEs. Proponents of DNSSEC hope that web browsers will present DNS validation failures to users in the same way that TLS failures are presented. At the moment, however, for many stub resolvers the only possible error codes (often called h_errno) are HOST_NOT_FOUND, TRY_AGAIN, NO_RECOVERY and NO_ADDRESS. The application may not even get this much information, depending on the language API, such as Unix.gethostbyname in OCaml’s standard library.
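As an illustration (a minimal sketch of my own, not code from the studies discussed below), this is roughly all the error handling available to a caller of Unix.gethostbyname in OCaml: resolution either succeeds or raises Not_found, with no way to tell an NXDOMAIN apart from a timeout or a validation failure.

let resolve_and_print name =
  try
    let entry = Unix.gethostbyname name in
    Array.iter
      (fun addr -> print_endline (Unix.string_of_inet_addr addr))
      entry.Unix.h_addr_list
  with Not_found ->
    (* NXDOMAIN? SERVFAIL? timeout? the API cannot tell us *)
    prerr_endline ("could not resolve " ^ name)

let () = resolve_and_print "example.com"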

A common Linux default is to request AAAA records as well as A records even if the host doesn’t have an IPv6 address. Kreibich et al. found that 13% of all sessions requested AAAA records: 42% of Linux sessions requested AAAA records, compared to 10% of non-Linux sessions, backing up this theory.

Some stub resolvers and client applications cache DNS responses; interestingly, some do not respect TTLs. For example, the default cache time for nscd (enabled by default on some Linux distros) is 15 minutes regardless of TTL, whereas Internet Explorer caches all records for 30 minutes. It is important that caches respect short TTLs, as these are increasingly utilised by content distribution networks and dynamic DNS. A quick check on my own browser (go to chrome://net-internals/#dns in Chrome) shows that the browser cache contains 73 active A/AAAA records and 263 expired records.

Weaver et al. and Kreibich et al. studied how middleboxes interact with DNS traffic using the Netalyzr tool. Weaver et al. concluded that applications wishing to use non-standard resource records (RRs), including TXT resources or DNSSEC, should use their own DNS resolver and bypass the stub resolver provided by the host. It is often not possible for an application to override the stub resolver’s choice of DNS resolver, which is normally a DNS resolver at the gateway, with a host of problems (see next section). The study also concluded that host stub resolvers often lack failovers (e.g. retrying requests over TCP) for common issues such as: the gateway resolver not supporting the full DNS protocol, the gateway resolver not being trustworthy, the gateway resolver being slow, and the network gateway/middleboxes filtering UDP traffic.

In-Gateway Resolvers

The gateway resolver is a common (but not necessary) stage in DNS resolution (there may also be multiple stages of gateway resolvers). The stub resolver running on the local host will usually forward the DNS query to the resolver(s) whose address it was given in the DHCP lease when connecting to the local router. This address is normally a DNS resolver running at the gateway (at the .1 or .254 address in the local subnet, e.g. 192.168.1.x). I say “usually” as this can be overridden; for example, some people instead opt to use a public DNS server such as Google’s or OpenDNS, or run their own resolver, though this is of course rare. Furthermore, not all gateways run DNS resolvers; in this case they typically refer hosts straight to the ISP’s resolvers. Gateway resolvers have the advantage that they can enable local resolution of domains such as .local or domain names for router administration (e.g. www.routerlogin.net for Netgear devices).
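For example, on a typical Unix host the DHCP-supplied gateway resolver can be overridden in /etc/resolv.conf; the addresses below are the well-known Google and OpenDNS public resolvers and are purely illustrative:

# /etc/resolv.conf -- override the DHCP-supplied gateway resolver
# with public resolvers (Google and OpenDNS respectively)
nameserver 8.8.8.8
nameserver 208.67.222.222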

Weaver et al. tested whether in-gateway resolvers correctly process various DNS queries; they found the following success rates: AAAA lookups (96%), TXT RRs (92%), unknown RRs (91%) and EDNS0 (91%). They also found that a significant number of in-gateway resolvers are externally usable, opening the gateway to DoS attacks.

ISPs (& Other) Resolvers

The ISP’s resolver is a common (but not necessary) stage in DNS resolution (there may also be multiple stages of ISP resolvers). The ISP resolver is often the resolver responsible for beginning the actual resolution instead of just forwarding/proxying queries.

Despite their widespread deployment and dedicated management, these resolvers are not without their problems. Weaver et al. found that 4% of sessions did not implement source port randomisation, only 55% of sessions exhibited EDNS0 usage, and only 4% of sessions implemented 0x20 encoding (randomising the case of letters in the query name as an extra defence against spoofed answers), whilst 94% propagated capitalisation unmodified. Kreibich et al. found that 49% of sessions used DNSSEC-enabled resolvers.

DNSSEC-capable resolvers, by Matthäus Wander (source: https://www.vs.uni-due.de/wander/20121229_Secure_Name_Resolution.pdf)

NXDOMAIN wildcarding is where resolvers replace responses carrying the NXDOMAIN error (for example, when a user mistypes a domain) with valid DNS responses pointing to another site, often with advertising. Weaver et al. observed this in 24% of the sessions surveyed. This should only be done on queries from web browsers, though this is not always the case. It may also interact badly with web browsers that treat NXDOMAIN errors specially, e.g. suggesting likely alternatives if a query fails due to NXDOMAIN. Worryingly, Weaver et al. also observed a few cases of SERVFAIL wildcarding, IPv4 addresses in responses where only IPv6 was requested, and ignoring of additional answer RRs. Some resolvers redirect queries for some search engines, whilst others have malware that injects advertising. Kreibich et al. found that essentially all resolvers respected 0 and 1 second TTLs.

Another interesting area is the treatment of RRs from the Authority and Additional RR sets. For example, glue records are A RRs in the Additional section, added to an answer containing NS RRs when the nameservers lie within the domain they are authoritative for; without these additional RRs we would have a circular dependency. Kreibich et al. found that 61% of sessions accept glue records when the glue records refer to authoritative nameservers, 25% accept A records corresponding to CNAMEs contained in the reply, and 21% of sessions accept any glue records present in the Additional field, though only for records for subdomains of the authoritative server.
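For illustration, such a delegation might look like this in the .com zone (hypothetical records, using documentation address ranges):

; referral for example.com, handed out by the .com servers
example.com.      NS   ns1.example.com.
example.com.      NS   ns2.example.com.
; glue: A records for nameservers that lie inside the delegated domain
ns1.example.com.  A    192.0.2.53
ns2.example.com.  A    198.51.100.53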

Other ISP controlled middleboxes

It is clear that resolvers (stub, in-gateway and ISP/public) do not reliably handle all DNS traffic and all its extensions. Users could opt to run their own resolvers and stub resolvers, but would this mean that their traffic is free from modification by middleboxes? Of course not.

ISPs have been known to hijack traffic to port 53 and redirect it to their own DNS resolvers, or simply drop it, blocking the use of third-party DNS resolvers. Some public resolvers support alternative ports (e.g. OpenDNS supports port 5353), but this can be difficult to configure as it cannot be easily expressed in /etc/resolv.conf. There is also some evidence of gateways provided by ISPs redirecting traffic to port 53 to the ISP’s DNS resolvers.
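A resolver listening on an alternative port can still be queried explicitly with a tool such as dig, even though most stub resolvers cannot be pointed at it; a sketch using the OpenDNS address and the port 5353 mentioned above:

# query OpenDNS directly on port 5353 instead of 53
dig -p 5353 @208.67.222.222 example.com A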

TLDs and Root Servers

The root DNS server (or actually the 504 servers behind 13 addresses) is the heart of the DNS. The root has supported DNSSEC since 2010 and will not support DNSCurve. Likewise, many of the TLDs support DNSSEC and will not support DNSCurve. On the whole, these seem to be fairly well managed and free of major issues.

DNS question: Avoiding circular dependencies without glue records?

Can someone help me understand the following:

When the authoritative name server for a domain (e.g. ns1.example.com) lies within the domain over which it has authority (e.g. example.com), a query (e.g. for example.com) to the parent domain (e.g. .com) will return both the NS RRs, which delegate authority over the domain to the nameservers, in the authority section and the corresponding A RRs in the additional section; these A RRs are known as glue records. These glue records are essential to avoid a circular dependency, yet the Netalyzr study found that only 61% of resolvers accept glue records when the glue records refer to authoritative nameservers. How do the other 39% of resolvers actually work then, given that it is very common for the authoritative name server for a domain to lie within the domain over which it has authority?

Comcast blocking NASA.gov

Today, people love to hate their ISPs; they have a public image problem. A great example of this is when Comcast apparently blocked NASA’s website in 2012. In fact, Comcast was the only major US ISP using DNSSEC-validating resolvers, and was thus the only one affected when NASA’s website failed to properly sign its DNS responses. Poor Comcast.

On January 18, 2012, the NASA.GOV domain had a DNS Security Extensions (DNSSEC) signing error that blocked access to all NASA.GOV sites when using DNS recursive resolvers performing DNSSEC validation. As one of the largest ISPs in the world utilizing DNSSEC validation, users of Comcast noticed a problem when attempting to connect to the website. This caused some people to incorrectly interpret this as Comcast purposely blocking access to NASA.GOV and recommending users switch from Comcast security-aware DNS resolvers to resolvers not performing DNSSEC validation … Instead, the administrators of the NASA.GOV domain had enabled DNSSEC signing for their domain, and the security signatures in their domain were no longer valid. The Comcast DNS resolvers correctly identified the DNSSEC signature errors and responded with a failure to Comcast customers. This is the expected result when a domain can no longer be validated, and this protects users from a potential security threat

source: http://www.internetsociety.org/deploy360/blog/2012/01/comcast-releases-detailed-analysis-of-nasa-gov-dnssec-validation-failure/

Video: An overview of secure name resolution [29c3]

Here is an excellent talk by Matthäus Wander, introducing DNSSEC, DNSCurve and a few other DNS extensions.

 

A few points of interest:

  • Stub resolvers need new APIs to report DNSSEC validation failures; browsers could then provide users with “TLS-like” failure messages
  • The AD flag is useless as there is no validation, yet Windows 7/8 still read it
  • Comcast’s name servers support DNSSEC, though this hasn’t worked out great for them
  • Some ISPs redirect NXDOMAIN responses, another reason to run your own DNS resolver or use a public one
  • The root servers and big TLDs will not deploy DNSCurve
  • DNSSEC cannot be used directly to validate the DNSCurve public key, stored in the domain name of the parent NS record, as DNSSEC does not sign the domain name.

Matthäus’s slides are online.

Part 1: Running your own DNS Resolver with MirageOS

The following is the first part in a step-by-step guide to setting up your own DNS resolver using MirageOS. I will be running this on a low-power, low-cost ARM device called the Cubieboard 2. Up-to-date code for each version of the DNS resolver is on GitHub. This guide assumes some basic experience of Lwt and MirageOS, up to the level of the Hello World Tutorial.

Feedback on this article and pull requests to the demo code are welcome.

Part 1.1 – Setting up the cubieboard with MirageOS

Plenty of information on setting up a Cubieboard with Xen and MirageOS is available elsewhere, most notably:

For debugging I am a big fan of Wireshark. I run a full Wireshark session on the machine which is connection-sharing to my Cubieboard network, to check all external traffic.

For this guide, I will always be compiling for the Xen ARM backend, with a direct network connection via br0 and a static IP for all unikernels. My test network router is configured to give out static IPs of the form 192.168.1.x to hosts with MAC addresses of the form 00:00:00:00:00:0x. As a result, my config.ml file looks like:

open Mirage

let ip_config:ipv4_config = {
  address= Ipaddr.V4.make 192 168 1 2;
  netmask= Ipaddr.V4.make 255 255 255 0;
  gateways= [Ipaddr.V4.make 192 168 1 1];
}

let client =
  foreign "Unikernel.Client" @@ console @-> stackv4 @-> job

let () =
  add_to_ocamlfind_libraries [ "dns.mirage" ];
  register "dns-client"
    [ client $ default_console $ direct_stackv4_with_static_ipv4 default_console tap0 ip_config ]

Since the IP address of the unikernel is 192.168.1.2, before launching it I do:

echo "vif = [ 'mac=00:00:00:00:00:02,bridge=br0' ]" >> dns-client.xl

I build the unikernel using the usual commands:

mirage configure --xen
make depend; make; make run
# edit file.xl
sudo xl create -c file.xl

Part 1.2 – Getting Started

The following is the complete code for a unikernel which queries a DNS server for a domain and prints the returned IP addresses to the console.

open Lwt
open V1_LWT

let domain = "google.com"
let server = Ipaddr.V4.make 8 8 8 8

module Client (C:CONSOLE) (S:STACKV4) = struct

  module U = S.UDPV4
  module DNS = Dns_resolver_mirage.Make(OS.Time)(S)

  let start c s =
    let t = DNS.create s in
    OS.Time.sleep 2.0 
    >>= fun () ->
    C.log_s c ("Resolving " ^ domain)
    >>= fun () ->
    DNS.gethostbyname t ~server domain
    >>= fun rl ->
    Lwt_list.iter_s
      (fun r ->
         C.log_s c ("Answer " ^ (Ipaddr.to_string r))
      ) rl

end

This unikernel will query the DNS server at 8.8.8.8 (Google’s public DNS resolver) for the domain google.com. Here we are using the simple function DNS.gethostbyname, with the following type signature:

  val gethostbyname : t ->
    ?server:Ipaddr.V4.t -> ?dns_port:int ->
    ?q_class:Dns.Packet.q_class ->
    ?q_type:Dns.Packet.q_type ->
    string -> Ipaddr.t list Lwt.t

This returns a list of IPs, which we then iterate over with Lwt_list.iter_s and print to the console.

Part 1.3 – Boot time parameters

Hardcoding the server and domain is far from ideal; instead we will provide them at boot time with Bootvar. The interface for Bootvar is below:

type t
(* read boot parameter line and store in assoc list - expected format is "key1=val1 key2=val2" *)
val create: unit -> t Lwt.t

(* get boot parameter *)
val get: t -> string -> string option

(* get boot parameter, raises Not_found if the key is missing *)
val get_exn: t -> string -> string

We can now use this to provide the domain and server at boot time instead of compile time:

let start c s =
    Bootvar.create () >>= fun bootvar ->
    let domain = Bootvar.get_exn bootvar "domain" in
    let server = Ipaddr.V4.of_string_exn (Bootvar.get_exn bootvar "server") in
    ...

Part 1.4 – Using Resolve

Now, a real DNS resolver will need to handle many more parameters (any DNS query) and return full DNS responses, not just IP addresses. Thus we need to move on from DNS.gethostbyname to the less abstract resolve function:

  val resolve :
    (module Dns.Protocol.CLIENT) ->
    t -> Ipaddr.V4.t -> int ->
    Dns.Packet.q_class ->
    Dns.Packet.q_type ->
    Dns.Name.domain_name ->
    Dns.Packet.t Lwt.t 

We can achieve the same result as gethostbyname as follows:

...
    DNS.resolve (module Dns.Protocol.Client) t server 53 Q_IN Q_A (string_to_domain_name domain)
    >>= fun r ->
    let ips =
      List.fold_left (fun a x ->
        match x.rdata with
        | A ip -> (Ipaddr.V4 ip) :: a
        | _ -> a) [] r.answers in
...

We are now explicit about parameters such as port, class and type. Note that we have opened the Dns.Name and Dns.Packet modules. The return value of resolve is a Dns.Packet.t; we fold over its answers to produce an Ipaddr.V4 list, as with gethostbyname. We can also use the to_string function in Packet to print the whole response.
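For example, continuing the snippet above (a sketch, assuming r is the Dns.Packet.t returned by resolve), the whole decoded response could be logged with:

    C.log_s c ("Response: " ^ Dns.Packet.to_string r)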

I’ve taken a break to do some refactoring work on the ocaml-dns library. In the next post, Part 2, we will expand our code to a DNS stub resolver.

 

Squashing git commits

To squash the last n commits (e.g. 37) into one:

git reset --soft HEAD~37 && 
git commit --edit -m"$(git log --format=%B --reverse HEAD..HEAD@{1})"
git push -f

source: http://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git/5201642#5201642, thanks david