See also my Google Scholar citations.
Content delivery networks (CDNs) commonly use DNS to map end-users to the best edge servers. A recently proposed EDNS0-Client-Subnet (ECS) extension allows recursive resolvers to include end-user subnet information in DNS queries, so that authoritative DNS servers, especially those belonging to CDNs, can use this information to improve user mapping. In this paper, we study the ECS behavior of ECS-enabled recursive resolvers from the perspectives of both sides of a DNS interaction: the authoritative DNS servers of a major CDN and a busy DNS resolution service. We find a range of erroneous (i.e., deviating from the protocol specification) and detrimental (even if compliant) behaviors that may unnecessarily erode client privacy, reduce the effectiveness of DNS caching, diminish ECS benefits, and in some cases turn ECS from a facilitator into an obstacle to authoritative DNS servers' ability to optimize user-to-edge-server mappings.
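To make the ECS mechanism concrete, the following is a minimal, stdlib-only sketch of how a resolver encodes the client-subnet option on the wire per RFC 7871 (option code 8): address family, source prefix length, a scope prefix length of 0 in queries, and the address truncated to the prefix. The helper name and example subnet are ours, for illustration only.

```python
import struct
import ipaddress

def build_ecs_option(subnet: str) -> bytes:
    """Encode an EDNS0-Client-Subnet option (RFC 7871) for inclusion
    in the OPT record of a DNS query."""
    net = ipaddress.ip_network(subnet)
    family = 1 if net.version == 4 else 2   # 1 = IPv4, 2 = IPv6
    scope_len = 0                           # queries always send SCOPE = 0
    # address bytes truncated to the minimum needed to cover the prefix
    addr = net.network_address.packed[: (net.prefixlen + 7) // 8]
    data = struct.pack("!HBB", family, net.prefixlen, scope_len) + addr
    return struct.pack("!HH", 8, len(data)) + data  # OPTION-CODE 8 = ECS

# a /24 is the granularity many resolvers send to limit privacy exposure
opt = build_ecs_option("203.0.113.0/24")
```

Note that sending a shorter prefix (e.g., /16 instead of /24) trades mapping precision for client privacy, which is exactly the tension the abstract's findings revolve around.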
Recursive resolvers in the Domain Name System play a critical role not only in DNS' primary function of mapping hostnames to IP addresses but also in the load balancing and performance of many Internet systems. Prior work has observed the existence of complex recursive resolver structures where multiple recursive resolvers collaborate in a "pool". Yet, we know little about the structure and behavior of pools. In this paper, we present a characterization and classification of resolver pools. We observe that pools are frequently dispersed in IP space, and some are even dispersed geographically. Many pools include dual-stack resolvers, and we identify methods for associating their IPv4 and IPv6 addresses. Further, the pools exhibit a wide range of behaviors, from uniformly balancing load among the resolvers within the pool to distributing load in fixed, unequal proportions per resolver.
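One hypothetical way to quantify where a pool sits on the spectrum from uniform to skewed load balancing is normalized Shannon entropy over per-resolver query counts; this metric and the example data are ours, not from the paper.

```python
import math
from collections import Counter

def load_entropy(resolver_per_query):
    """Normalized Shannon entropy of per-resolver query counts:
    1.0 means perfectly uniform load balancing across the pool,
    values near 0 mean one resolver handles nearly everything."""
    counts = Counter(resolver_per_query)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts))  # divide by max possible entropy

# a pool splitting queries evenly vs. one favoring a single resolver
uniform = load_entropy(["r1", "r2", "r1", "r2"])
skewed = load_entropy(["r1", "r1", "r1", "r2"])
```

Comparing such a score across pools gives a single number for classifying balancing behavior, at the cost of hiding which specific resolvers carry the load.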
The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable hostnames into the IP addresses the network uses to route traffic. Yet, the DNS behavior of individual clients is not well understood. In this paper, we present a characterization of DNS clients with an eye towards developing an analytical model of client interaction with the larger DNS ecosystem. While this is initial work and we do not arrive at a DNS workload model, we highlight a variety of behaviors and characteristics that enhance our mental models of how DNS operates and move us towards an analytical model of client-side DNS operation.
Version 2 of the Hypertext Transfer Protocol (HTTP/2) was finalized in May 2015 as RFC 7540. It addresses well-known problems with HTTP/1.1 (e.g., head-of-line blocking and redundant headers) and introduces new features (e.g., server push and content priority). Though HTTP/2 is designed to be the future of the web, it remains unclear whether the web will---or should---hop on board. To shed light on this question, we built a measurement platform that monitors HTTP/2 adoption and performance across the Alexa top 1 million websites on a daily basis. Our system is live and up-to-date results can be viewed at isthewebhttp2yet.com. In this paper, we report findings from an 11-month measurement campaign (November 2014 - October 2015). As of October 2015, we find 68,000 websites reporting HTTP/2 support, of which about 10,000 actually serve content with it. Unsurprisingly, popular sites are quicker to adopt HTTP/2, and 31% of the Alexa top 100 already support it. For the most part, websites do not change as they move from HTTP/1.1 to HTTP/2; current web development practices like inlining and domain sharding are still present. Contrary to previous results, we find that these practices make HTTP/2 more resilient to losses and jitter. In all, we find that 80% of websites supporting HTTP/2 experience a decrease in page load time compared with HTTP/1.1, and the decrease grows in mobile networks.
Transport Layer Security (TLS) is the de facto protocol supporting secure HTTP (HTTPS), and is being discussed as the default transport protocol for HTTP/2.0. It has seen wide adoption and is currently carrying a significant fraction of the overall HTTP traffic (Facebook, Google, and Twitter use it by default). However, TLS makes the fundamental assumption that all functionality resides solely at the endpoints, and is thus unable to utilize the many in-network services that optimize network resource usage, improve user experience, and protect clients and servers from security threats. Re-introducing such in-network functionality into secure TLS sessions today is done through hacks, in many cases weakening overall security.
In this paper we introduce multi-context TLS (mcTLS) which enhances TLS by allowing middleboxes to be fully supported participants in TLS sessions. mcTLS breaks the "all-or-nothing" security model by allowing endpoints and content providers to explicitly introduce middleboxes in secure end-to-end sessions, while deciding whether they should have read or write access, and to which specific parts of the content. mcTLS enables transparency and control for both clients and servers.
We evaluate a prototype mcTLS implementation in both controlled and "live" experiments, showing that the benefits offered come with minimal overhead. More importantly, we show that mcTLS can be incrementally deployed and requires only small changes to clients, servers, and middleboxes for a large number of use cases.
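The per-context access-control idea described above can be illustrated with a toy model (this is our own sketch, not the actual mcTLS implementation or API): each middlebox is granted read or write access to specific content contexts, and write keys imply the ability to read.

```python
READ, WRITE = "read", "write"

class Session:
    """Toy model of mcTLS-style per-context middlebox permissions.
    Endpoints grant each middlebox access to specific content
    'contexts' (e.g., HTTP headers vs. body), rather than all-or-nothing."""

    def __init__(self):
        self.grants = {}  # (middlebox, context) -> permission

    def grant(self, middlebox, context, perm):
        self.grants[(middlebox, context)] = perm

    def can(self, middlebox, context, action):
        perm = self.grants.get((middlebox, context))
        if perm == WRITE:       # holding write keys implies read access
            return True
        return perm == READ and action == READ

s = Session()
s.grant("cache", "http-headers", READ)       # cache may inspect headers only
s.grant("compressor", "http-body", WRITE)    # compressor may rewrite the body
assert s.can("cache", "http-headers", READ)
assert not s.can("cache", "http-body", READ)
```

In the real protocol the permissions are enforced cryptographically, by distributing per-context keys only to the granted middleboxes, rather than by a lookup table as in this sketch.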
The Domain Name System (DNS) is a critical component of the Internet infrastructure that has many security vulnerabilities. In particular, shared DNS resolvers are a notorious security weak spot in the system. We propose an unorthodox approach for tackling vulnerabilities in shared DNS resolvers: removing shared DNS resolvers entirely and leaving recursive resolution to the clients. We show that the two primary costs of this approach—loss of performance and an increase in system load—are modest and therefore conclude that this approach is beneficial for strengthening the DNS by reducing the attack surface.
The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable names to IP addresses. Injecting fraudulent mappings allows an attacker to divert users from intended destinations to those of an attacker's choosing. In this paper, we measure the Internet's vulnerability to DNS record injection attacks, including a new attack we uncover. We find that record injection vulnerabilities are fairly common, even years after some of them were first uncovered.
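A back-of-the-envelope calculation shows why some injection weaknesses matter: an off-path attacker blindly forging responses must match the query's 16-bit transaction ID, and, if the resolver randomizes source ports, the port as well. The numbers below are illustrative, not measurements from the paper.

```python
def spoof_success_prob(forged_packets, txid_bits=16, ports=1):
    """Probability that at least one blindly forged DNS response matches
    the pending query's transaction ID (and randomized source port)."""
    space = (2 ** txid_bits) * ports
    return 1 - (1 - 1 / space) ** forged_packets

# TXID alone (no source-port randomization): ~53% success after 50,000 forgeries
p_weak = spoof_success_prob(50_000)
# with ~64,000 randomized ports, the same burst is almost never enough
p_strong = spoof_success_prob(50_000, ports=64_000)
```

This is why resolvers that fail to randomize source ports (or that sit behind NATs which de-randomize them) remain soft targets for record injection long after the mitigations were published.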
The Domain Name System (DNS) is a critical component of the Internet infrastructure. It allows users to interact with Web sites using human-readable names and provides a foundation for transparent client request distribution among servers in Web platforms, such as content delivery networks. In this paper, we present methodologies for efficiently discovering the complex client-side DNS infrastructure. We further develop measurement techniques for isolating the behavior of the distinct actors in the infrastructure. Using these strategies, we study various aspects of the client-side DNS infrastructure and its behavior with respect to caching, both in aggregate and separately for different actors.
The Domain Name System (DNS) provides mapping of meaningful names to arbitrary data for applications and services on the Internet. Since its original design, the system has grown in complexity and our understanding of the system has lagged behind. In this dissertation, we perform measurement studies of the DNS infrastructure demonstrating the complexity of the system and showing that different parts of the infrastructure exhibit varying behaviors, some being violations of the DNS specification. The DNS also has known weaknesses to attack and we reinforce this by uncovering a new vulnerability against one component of the system. As a result, understanding and maintaining the DNS is increasingly hard. In response to these issues, we propose a modification to the DNS that simplifies the resolution path and reduces the attack surface. We observe that the potential costs of this modification can be managed and discuss ways that the cost may be mitigated.
TCP proxies have been introduced as a method to improve throughput and reduce congestion in mobile ad hoc networks. Proxies split the path into several shorter paths which have higher throughput due to reduced packet loss and round trip time. As a side effect, congestion is reduced because fewer link layer retransmissions occur. In current protocols, proxies are assigned at the start of the transfer and must be used for the duration. Due to mobility and congestion change, pinned proxies can actually reduce throughput. In this thesis, we present a second version of the DTCP protocol which includes the ability to switch proxies in the middle of a transfer. We demonstrate in the Network Simulator version 2 that the new protocol performs better than other related protocols in simulated mobile ad hoc networks with varying levels of mobility and congestion.
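The intuition for why splitting a path at a proxy raises throughput can be sketched with the classic Mathis et al. steady-state TCP approximation, rate ≈ (MSS / RTT) · (C / √p): each segment of a split path has a shorter RTT and lower loss, and the pipeline is bounded by its slower segment. The numbers below are illustrative, not results from the thesis.

```python
import math

def tcp_throughput(mss_bytes, rtt_s, loss):
    """Mathis et al. approximation of steady-state TCP throughput
    (bytes/sec), with the constant C ~= sqrt(3/2)."""
    C = math.sqrt(1.5)
    return (mss_bytes / rtt_s) * (C / math.sqrt(loss))

# end-to-end path: 200 ms RTT, 2% loss
e2e = tcp_throughput(1460, 0.200, 0.02)

# same path split by a proxy into two 100 ms / 1% segments;
# the slower of the two segments bounds the pipelined transfer
split = min(tcp_throughput(1460, 0.100, 0.01),
            tcp_throughput(1460, 0.100, 0.01))
```

The same arithmetic also shows the downside the thesis targets: if mobility degrades one segment (its RTT or loss grows past the original end-to-end values), a pinned proxy makes `split` worse than `e2e`, motivating mid-transfer proxy switching.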