Bibliography
Traditional security solutions that rely on public key infrastructure present scalability and transparency challenges when deployed in the Internet of Things (IoT). In this paper, we develop a blockchain-based authentication mechanism for IoT that can be integrated into traditional transport-layer security protocols such as Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS). Our proposed mechanism is an alternative to the traditional Certificate Authority (CA)-based Public Key Infrastructure (PKI) that relies on X.509 certificates. Specifically, the proposed solution makes the modified TLS/DTLS a viable option for resource-constrained IoT devices where minimizing memory utilization is critical. Experiments show that blockchain-based authentication can reduce dynamic memory usage by up to 20%, while only minimally increasing application image size and the execution time of the TLS/DTLS handshake.
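For illustration, the core idea (replacing X.509 chain validation during the handshake with a ledger lookup) can be sketched in a few lines. This is a generic Python sketch with assumed names (ledger, register, authenticate) and a plain dict standing in for the on-chain registry; it is not the authors' actual design:

```python
import hashlib

ledger = {}  # hypothetical on-chain registry: device ID -> public-key digest

def register(device_id: str, raw_public_key: bytes) -> None:
    # Enrollment "transaction": bind the device ID to its key digest.
    ledger[device_id] = hashlib.sha256(raw_public_key).hexdigest()

def authenticate(device_id: str, raw_public_key: bytes) -> bool:
    # Handshake-time check: accept only if the presented key matches the ledger.
    return ledger.get(device_id) == hashlib.sha256(raw_public_key).hexdigest()

register("device-42", b"placeholder-public-key-bytes")
print(authenticate("device-42", b"placeholder-public-key-bytes"))  # True
```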
The adoption of the HTTPS protocol - i.e. HTTP over TLS - by Hellenic websites is studied in this work. Since this protocol constitutes a de facto standard for secure communications on the web, our aim is to identify whether the underlying TLS protocol in popular websites in Greece is properly configured so as to avoid known vulnerabilities. To this end, a systematic approach utilizing two well-known TLS scanner tools is adopted to evaluate 241 sites of high popularity. The results illustrate that only about half of the sites seem to be at a satisfactory level; thus, there is still much room for improvement, mainly because obsolete ciphers and/or protocol versions are still supported. There is also a small portion - about 3% of the sites - that does not implement HTTPS at all, posing very high security risks for users who provide their credentials via a totally insecure channel. We also examined, using an appropriate online questionnaire, whether users are actually aware of what HTTPS means and how they check the security of websites. The outcome of this research shows that much work needs to be done to increase the knowledge and security awareness of the average Internet user.
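The kind of check such scanners automate can be approximated directly; the following is a minimal Python sketch (standard library only, hostname is a placeholder) that probes which TLS protocol versions a server still accepts, obsolete versions included. Modern OpenSSL builds may refuse to even offer TLS 1.0/1.1, in which case those probes simply fail:

```python
import socket
import ssl

VERSIONS = {
    "TLSv1.0": ssl.TLSVersion.TLSv1,
    "TLSv1.1": ssl.TLSVersion.TLSv1_1,
    "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    "TLSv1.3": ssl.TLSVersion.TLSv1_3,
}

def accepted_versions(host, port=443):
    """Return the TLS protocol versions the server completes a handshake with."""
    accepted = []
    for name, version in VERSIONS.items():
        try:
            ctx = ssl.create_default_context()
            ctx.check_hostname = False  # we probe protocol versions, not identity
            ctx.verify_mode = ssl.CERT_NONE
            ctx.minimum_version = version
            ctx.maximum_version = version
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    accepted.append(name)
        except (ssl.SSLError, OSError, ValueError):
            pass  # handshake refused, version unsupported, or network error
    return accepted

print(accepted_versions("example.com"))  # placeholder host
```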
First standardized by the IETF in the 1990s, SSL/TLS is the most widely used encryption protocol on the Internet. This makes it imperative to study its usage across different platforms and applications to ensure proper usage and robustness against attacks and vulnerabilities. While previous efforts have focused on the usage of TLS in the desktop ecosystem, there have been no studies of TLS usage by mobile apps at scale. In our study, we use anonymized data collected by the Lumen mobile measurement app to analyze TLS usage by Android apps in the wild. We analyze and fingerprint handshake messages to characterize the TLS APIs and libraries that apps use, and evaluate their weaknesses. We find that 84% of apps use the default TLS libraries provided by the operating system, and the remaining apps use other TLS libraries for various reasons, such as using TLS extensions and features that are not supported by the Android TLS libraries, some of which are also not standardized by the IETF. Our analysis reveals the strengths and weaknesses of each approach, demonstrating that the path to improving TLS security in the mobile platform is not straightforward. Based on work published as: Abbas Razaghpanah, Arian Akhavan Niaki, Narseo Vallina-Rodriguez, Srikanth Sundaresan, Johanna Amann, and Phillipa Gill. 2017. Studying TLS Usage in Android Apps. In Proceedings of CoNEXT '17. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3143361.3143400
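Handshake fingerprinting of the sort described can be illustrated with the well-known JA3 scheme (an MD5 over ClientHello fields); the paper's exact method may differ, and the field values below are placeholders:

```python
import hashlib

def ja3(version, ciphers, extensions, curves, point_formats):
    """Build a JA3 fingerprint string from ClientHello fields and hash it."""
    raw = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return raw, hashlib.md5(raw.encode()).hexdigest()

# Illustrative ClientHello summary (placeholder values).
raw, digest = ja3(771, [4865, 4866, 49195], [0, 11, 10], [29, 23], [0])
print(raw)     # 771,4865-4866-49195,0-11-10,29-23,0
print(digest)  # a stable identifier for this TLS stack configuration
```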
The application of machine learning for the detection of malicious network traffic has been well researched over the past several decades; it is particularly appealing when the traffic is encrypted because traditional pattern-matching approaches cannot be used. Unfortunately, the promise of machine learning has been slow to materialize in the network security domain. In this paper, we highlight two primary reasons why this is the case: inaccurate ground truth and a highly non-stationary data distribution. To demonstrate and understand the effect that these pitfalls have on popular machine learning algorithms, we design and carry out experiments that show how six common algorithms perform when confronted with real network data. With our experimental results, we identify the situations in which certain classes of algorithms underperform on the task of encrypted malware traffic classification. We offer concrete recommendations for practitioners given the real-world constraints outlined. From an algorithmic perspective, we find that the random forest ensemble method outperformed competing methods. More importantly, feature engineering was decisive; we found that iterating on the initial feature set, and including features suggested by domain experts, had a much greater impact on the performance of the classification system. For example, linear regression using the more expressive feature set easily outperformed the random forest method using a standard network traffic representation on all criteria considered. Our analysis is based on millions of TLS encrypted sessions collected over 12 months from a commercial malware sandbox and two geographically distinct, large enterprise networks.
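As a concrete illustration of the modeling setup, the following minimal sketch trains a random forest on synthetic per-session flow features (hypothetical feature names, scikit-learn assumed); real experiments would use labeled TLS session data as described in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features per session: [bytes_out, bytes_in, duration_s, n_cipher_suites]
X_benign = rng.normal([2e4, 8e4, 30, 15], [5e3, 2e4, 10, 3], size=(1000, 4))
X_malware = rng.normal([5e3, 1e4, 120, 6], [2e3, 4e3, 40, 2], size=(1000, 4))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = benign, 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```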
Transport Layer Security (TLS) has become the de facto standard for secure Internet communication. When used correctly, it provides secure data transfer, but used incorrectly, it can leave users vulnerable to attacks while giving them a false sense of security. Numerous efforts have studied the adoption of TLS (and its predecessor, SSL) and its use in the desktop ecosystem, as well as attacks and vulnerabilities in both desktop clients and servers. However, there is a dearth of knowledge of how TLS is used in mobile platforms. In this paper, we use data collected by Lumen, a mobile measurement platform, to analyze how 7,258 Android apps use TLS in the wild. We analyze and fingerprint handshake messages to characterize the TLS APIs and libraries that apps use, and also evaluate their weaknesses. We see that about 84% of apps use default OS APIs for TLS. Many apps use third-party TLS libraries; in some cases they are forced to do so because of restricted Android capabilities. Our analysis shows that both approaches have limitations, and that improving TLS security in mobile is not straightforward. Apps that use their own TLS configurations may have vulnerabilities due to developer inexperience, but apps that use OS defaults are vulnerable to certain attacks if the OS is out of date, even if the apps themselves are up to date. We also study certificate verification, and see low prevalence of security measures such as certificate pinning, even among high-risk apps such as those providing financial services, though we did observe major third-party tracking and advertisement services deploying certificate pinning.
Internet-of-Things devices often collect and transmit sensitive information like camera footage, health monitoring data, or whether someone is home. These devices protect data in transit with end-to-end encryption, typically using TLS connections between devices and associated cloud services. But these TLS connections also prevent device owners from observing what their own devices are saying about them. Unlike in traditional Internet applications, where the end user controls one end of a connection (e.g., their web browser) and can observe its communication, Internet-of-Things vendors typically control the software in both the device and the cloud. As a result, owners have no way to audit the behavior of their own devices, leaving them little choice but to hope that these devices are transmitting only what they should. This paper presents TLS-Rotate and Release (TLS-RaR), a system that allows device owners (e.g., consumers, security researchers, and consumer watchdogs) to authorize devices, called auditors, to decrypt and verify recent TLS traffic without compromising future traffic. Unlike prior work, TLS-RaR requires no changes to TLS's wire format or cipher suites, and it allows the device's owner to conduct a surprise inspection of recent traffic, without prior notice to the device that its communications will be audited.
Driven by CA compromises and the risk of man-in-the-middle attacks, new security features have been added to TLS, HTTPS, and the web PKI over the past five years. These include Certificate Transparency (CT), for making the CA system auditable; HSTS and HPKP headers, to harden the HTTPS posture of a domain; the DNS-based extensions CAA and TLSA, for control over certificate issuance and pinning; and SCSV, for protocol downgrade protection. This paper presents the first large-scale investigation of these improvements to the HTTPS ecosystem, explicitly accounting for their combined usage. In addition to collecting passive measurements at the Internet uplinks of large university networks on three continents, we perform the largest domain-based active Internet scan to date, covering 193M domains. Furthermore, we track the long-term deployment history of new TLS security features by leveraging passive observations dating back to 2012. We find that while deployment of new security features has picked up in general, only SCSV (49M domains) and CT (7M domains) have gained enough momentum to improve the overall security of HTTPS. Features with higher complexity, such as HPKP, are deployed scarcely and often incorrectly. Our empirical findings are placed in the context of risk, deployment effort, and benefit of these new technologies, and actionable steps for improvement are proposed. We cross-correlate use of features and find some techniques with significant correlation in deployment. We support reproducible research and publicly release data and code.
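Two of the studied features are easy to check for a single domain; the sketch below (domain is a placeholder; dnspython assumed for the DNS query) fetches the HSTS header and the CAA record set:

```python
import http.client

import dns.resolver  # third-party: pip install dnspython

def hsts_header(domain):
    """Fetch the Strict-Transport-Security response header, if the site sets one."""
    conn = http.client.HTTPSConnection(domain, timeout=5)
    conn.request("HEAD", "/")
    header = conn.getresponse().getheader("Strict-Transport-Security")
    conn.close()
    return header

def caa_records(domain):
    """Return the domain's CAA records, or an empty list if none are published."""
    try:
        return [r.to_text() for r in dns.resolver.resolve(domain, "CAA")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # placeholder
print("HSTS:", hsts_header(domain))
print("CAA :", caa_records(domain))
```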
Cloud platforms can leverage Trusted Platform Modules to help provide assurance to clients that cloud-based Web services are trustworthy and behave as expected. We discuss a variety of approaches to providing this assurance, and we implement one approach based on the concept of a trustworthy certificate authority. TaoCA, our prototype implementation, links cryptographic attestations from a cloud platform, including a Trusted Platform Module, with existing TLS-based authentication mechanisms. TaoCA is designed to enable certificate authorities, browser vendors, system administrators, and end users to define and enforce a range of trust policies for web services. Evaluation of the prototype implementation demonstrates the feasibility of the design, illustrates performance tradeoffs, and serves as an end-to-end, proof-of-concept evaluation of underlying trustworthy computing abstractions. The proposed approach can be deployed incrementally and provides new benefits while retaining compatibility with the existing public key infrastructure used for TLS.
Attribute-based methods provide authorization to parties based on whether their set of attributes (e.g., age, organization, etc.) fulfills a policy. In attribute-based encryption (ABE), authorized parties can decrypt, and in attribute-based credentials (ABCs), authorized parties can authenticate themselves. In this paper, we combine elements of ABE and ABCs together with garbled circuits to construct attribute-based key exchange (ABKE). Our focus is on an interactive solution involving a client that holds a certificate (issued by an authority) vouching for that client's attributes and a server that holds a policy computable on such a set of attributes. The goal is for the server to establish a shared key with the client but only if the client's certified attributes satisfy the policy. Our solution enjoys strong privacy guarantees for both the client and the server, including attribute privacy and unlinkability of client sessions. Our main contribution is a construction of ABKE for arbitrary circuits with high (concrete) efficiency. Specifically, we support general policies expressible as boolean circuits computed on a set of attributes. Even for policies containing hundreds of thousands of gates the performance cost is dominated by two pairing computations per policy input. Put another way, for a similar cost to prior ABE/ABC solutions, which can only support small formulas efficiently, we can support vastly richer policies. We implemented our solution and report on its performance. For policies with 100,000 gates and 200 inputs over a realistic network, the server and client spend 957 ms and 176 ms on computation, respectively. When using offline preprocessing and batch signature verification, this drops to only 243 ms and 97 ms.
TLS has the potential to provide strong protection against network-based attackers and mass surveillance, but many implementations take security shortcuts in order to reduce the costs of cryptographic computations and network round trips. We report the results of a nine-week study that measures the use and security impact of these shortcuts for HTTPS sites among Alexa Top Million domains. We find widespread deployment of DHE and ECDHE private value reuse, TLS session resumption, and TLS session tickets. These practices greatly reduce the protection afforded by forward secrecy: connections to 38% of Top Million HTTPS sites are vulnerable to decryption if the server is compromised up to 24 hours later, and 10% up to 30 days later, regardless of the selected cipher suite. We also investigate the practice of TLS secrets and session state being shared across domains, finding that in some cases, the theft of a single secret value can compromise connections to tens of thousands of sites. These results suggest that site operators need to better understand the tradeoffs between optimizing TLS performance and providing strong security, particularly when faced with nation-state attackers with a history of aggressive, large-scale surveillance.
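Session resumption, one of the shortcuts measured, can be probed directly; the sketch below (standard library only, hostname a placeholder) performs two handshakes and reuses the first session in the second. Under TLS 1.3 tickets arrive after the handshake, so this simple version is most reliable against TLS 1.2 servers:

```python
import socket
import ssl

HOST, PORT = "example.com", 443  # placeholder target
ctx = ssl.create_default_context()

def handshake(session=None):
    """Perform one TLS handshake, optionally resuming a previous session."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
            return tls.session, tls.session_reused

first_session, _ = handshake()
_, reused = handshake(session=first_session)
print("server allowed session resumption:", reused)
```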
The HTTPS ecosystem, including the SSL/TLS protocol, the X.509 public-key infrastructure, and their cryptographic libraries, is the standardized foundation of Internet security. Despite 20 years of progress and extensions, however, its practical security remains controversial, as witnessed by recent efforts to improve its design and implementations, as well as recent disclosures of attacks against its deployments. The Everest project is a collaboration between Microsoft Research, INRIA, and the community at large that aims at modelling, programming, and verifying the main HTTPS components with strong machine-checked security guarantees, down to core system and cryptographic assumptions. Although HTTPS involves a relatively small amount of code, it requires efficient low-level programming and intricate proofs of functional correctness and security. To this end, we are also improving our verification tools (F*, Dafny, Lean, Z3) and developing new ones. In my talk, I will present our project, review our experience with miTLS, a verified reference implementation of TLS coded in F*, and describe current work towards verified, secure, efficient HTTPS.
The semantics of online authentication in the web are rather straightforward: if Alice has a certificate binding Bob's name to a public key, and if a remote entity can prove knowledge of Bob's private key, then (barring key compromise) that remote entity must be Bob. However, in reality, many websites, and the majority of the most popular ones, are hosted at least in part by third parties such as Content Delivery Networks (CDNs) or web hosting providers. Put simply: administrators of websites who deal with (extremely) sensitive user data are giving their private keys to third parties. Importantly, this sharing of keys is undetectable by most users, and widely unknown even among researchers. In this paper, we perform a large-scale measurement study of key sharing in today's web. We analyze the prevalence with which websites trust third-party hosting providers with their secret keys, as well as the impact that this trust has on responsible key management practices, such as revocation. Our results reveal that key sharing is extremely common, with a small handful of hosting providers having keys from the majority of the most popular websites. We also find that hosting providers often manage their customers' keys, and that they tend to react more slowly yet more thoroughly to compromised or potentially compromised keys.
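One observable signal of such key sharing is two unrelated domains presenting the same public key; a minimal sketch (placeholder domains, `cryptography` package assumed) follows:

```python
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def server_key(host):
    """Fetch a host's leaf certificate and return its DER-encoded public key."""
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

a, b = "example.com", "example.net"  # placeholder domains
print("same public key:", server_key(a) == server_key(b))
```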
Recent studies estimated that, by the end of 2016, more than 60% of Internet traffic would run over HTTPS. In the presence of secure tunnels such as HTTPS, transparent caching solutions become futile, as the application payload is encrypted by lower-level security protocols. This paper addresses this issue and provides an alternative approach for caching contents without compromising their security. There are three parts to our proposal. First, we propose two new IP-layer primitives that allow routers to differentiate between IP and ICN flows. Second, we introduce DCAR (Dual-mode Content Aware Router), a traditional IP router enabled to understand the proposed IP primitives. Third, we propose the design of the DISCS (DCAR-based Information centric Secure Content Sharing) framework, which leverages DCAR to allow content-object caching along with security services comparable to HTTPS. Finally, we share details on realizing such a system.
The application of lightweight block ciphers in the TLS protocol is studied in this paper. More precisely, since the use of lightweight cryptographic algorithms is a prerequisite for addressing security in highly constrained environments such as the Internet of Things, we focus on the behavior of TLS performance when AES is replaced by a lightweight block cipher; to this end, the recently proposed Speck cipher is used as a case study. Experimental results show that a significant performance gain can be achieved in such constrained environments, whereas in some cases Speck with a larger key size than AES may also result in higher throughput.
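For reference, Speck's round function is small enough to state in full; below is an illustrative pure-Python Speck128/128 (64-bit words, 32 rounds), checked against the designers' published test vector. It is for exposition only, not a vetted or constant-time implementation:

```python
MASK = 2**64 - 1   # 64-bit words
ROUNDS = 32        # Speck128/128

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(k0, l0):
    """Speck128/128 key schedule: derive one 64-bit round key per round."""
    keys, l = [k0], l0
    for i in range(ROUNDS - 1):
        l = ((keys[i] + ror(l, 8)) & MASK) ^ i
        keys.append(rol(keys[i], 3) ^ l)
    return keys

def encrypt(x, y, keys):
    """Encrypt the two-word block (x, y) with the expanded round keys."""
    for k in keys:
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

# Test vector from the Speck designers.
keys = expand_key(0x0706050403020100, 0x0f0e0d0c0b0a0908)
print([hex(w) for w in encrypt(0x6c61766975716520, 0x7469206564616d20, keys)])
# expected: ['0xa65d985179783265', '0x7860fedf5c570d18']
```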
SSL and TLS are used to secure the most commonly used Internet protocols. As a result, the ecosystem of SSL certificates has been thoroughly studied, leading to a broad understanding of the strengths and weaknesses of the certificates accepted by most web browsers. Prior work has naturally focused almost exclusively on "valid" certificates (those that standard browsers accept as well-formed and trusted) and has largely disregarded certificates that are otherwise "invalid." Surprisingly, however, this leaves the majority of certificates unexamined: we find that, on average, 65% of SSL certificates advertised in each IPv4 scan that we examine are actually invalid. In this paper, we demonstrate that despite their invalidity, much can be understood from these certificates. Specifically, we show why the web's SSL ecosystem is populated by so many invalid certificates, where they originate from, and how they impact security. Using a dataset of over 80M certificates, we determine that most invalid certificates originate from a few types of end-user devices, and possess dramatically different properties than their valid counterparts. We find that many of these devices periodically reissue their (invalid) certificates, and develop new techniques that allow us to track these reissues across scans. We present evidence that this technique allows us to uniquely track over 6.7M devices. Taken together, our results open up a heretofore largely ignored portion of the SSL ecosystem to further study.
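Two of the invalidity conditions examined (self-signed issuers and expired validity periods) are straightforward to test per certificate; below is a minimal sketch assuming the `cryptography` package (version 42+ for the `_utc` accessors) and a placeholder certificate path:

```python
from datetime import datetime, timezone

from cryptography import x509

with open("cert.pem", "rb") as f:  # placeholder path to one scanned certificate
    cert = x509.load_pem_x509_certificate(f.read())

self_signed = cert.issuer == cert.subject
expired = cert.not_valid_after_utc < datetime.now(timezone.utc)

print("subject:    ", cert.subject.rfc4514_string())
print("self-signed:", self_signed)
print("expired:    ", expired)
```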