Biblio

Barthe, Gilles, Blazy, Sandrine, Hutin, Rémi, Pichardie, David.  2021.  Secure Compilation of Constant-Resource Programs. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–12.
Observational non-interference (ONI) is a generic information-flow policy for side-channel leakage. Informally, a program is ONI-secure if observing program leakage during execution does not reveal any information about secrets. Formally, ONI is parametrized by a leakage function $l$, and different instances of ONI can be recovered through different instantiations of $l$. One popular instance of ONI is the cryptographic constant-time (CCT) policy, which is widely used in cryptographic libraries to protect against timing and cache attacks. Informally, a program is CCT-secure if it does not branch on secrets and does not perform secret-dependent memory accesses. Another instance of ONI is the constant-resource (CR) policy, a relaxation of the CCT policy which is used in Amazon's s2n implementation of TLS and in several other security applications. Informally, a program is CR-secure if its cost (modelled by a tick operator over an arbitrary semi-group) does not depend on secrets. In this paper, we consider the problem of preserving ONI by compilation. Prior work on the preservation of the CCT policy develops proof techniques for showing that the main compiler optimisations preserve the CCT policy. However, these proof techniques critically rely on the fact that the semi-group used for modelling leakage satisfies the property $l_1 + l_1' = l_2 + l_2' \Rightarrow l_1 = l_2 \wedge l_1' = l_2'$. Unfortunately, this non-cancelling property fails for the CR policy, because its underlying semi-group is $(\mathbb{N}, +)$, and it is currently not known how to extend existing techniques to policies that do not satisfy non-cancellation. We propose a methodology for proving the preservation of the CR policy during a program transformation. We present an implementation of some elementary compiler passes, and apply the methodology to prove the preservation of these passes. Our results have been mechanically verified using the Coq proof assistant.
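
As a generic illustration of the CCT policy described above (not taken from the paper; function names are hypothetical), the following C sketch contrasts a comparison routine that branches on secret data with a constant-time variant whose control flow and memory accesses are independent of secrets:

    #include <stddef.h>

    /* Leaky comparison: the early exit branches on secret data, so the
       running time reveals the position of the first mismatch. This
       violates the cryptographic constant-time (CCT) policy. */
    int cmp_leaky(const unsigned char *a, const unsigned char *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant-time comparison: the loop always runs n iterations, there is
       no secret-dependent branch or memory access, and differences are only
       accumulated and tested once at the end. */
    int cmp_ct(const unsigned char *a, const unsigned char *b, size_t n) {
        unsigned char d = 0;
        for (size_t i = 0; i < n; i++)
            d |= a[i] ^ b[i];
        return d == 0;
    }

Under the CR policy, by contrast, only the cost of a run (the accumulated ticks) must be independent of secrets, so a program may branch on secrets as long as both branches consume the same resources.
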
Barthe, Gilles, Cauligi, Sunjay, Grégoire, Benjamin, Koutsos, Adrien, Liao, Kevin, Oliveira, Tiago, Priya, Swarn, Rezk, Tamara, Schwabe, Peter.  2021.  High-Assurance Cryptography in the Spectre Era. 2021 IEEE Symposium on Security and Privacy (SP). :1884–1901.
High-assurance cryptography leverages methods from program verification and cryptography engineering to deliver efficient cryptographic software with machine-checked proofs of memory safety, functional correctness, provable security, and absence of timing leaks. Traditionally, these guarantees are established under a sequential execution semantics. However, this semantics is not aligned with the behavior of modern processors that make use of speculative execution to improve performance. This mismatch, combined with the high-profile Spectre-style attacks that exploit speculative execution, naturally casts doubts on the robustness of high-assurance cryptography guarantees. In this paper, we dispel these doubts by showing that the benefits of high-assurance cryptography extend to speculative execution, costing only a modest performance overhead. We build atop the Jasmin verification framework an end-to-end approach for proving properties of cryptographic software under speculative execution, and validate our approach experimentally with efficient, functionally correct assembly implementations of ChaCha20 and Poly1305, which are secure against both traditional timing and speculative execution attacks.
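
As a generic illustration of the gap between sequential and speculative semantics (this code is not from the paper; array names and sizes are hypothetical), the following C sketch shows the classic Spectre-v1 bounds-check-bypass pattern together with an index-masking mitigation:

    #include <stddef.h>
    #include <stdint.h>

    extern uint8_t array1[16];
    extern size_t  array1_size;
    extern uint8_t array2[256 * 64];

    /* Memory-safe under sequential semantics, but if the branch is
       mispredicted the load array1[x] may execute with an out-of-bounds x,
       and the dependent access into array2 leaves a secret-dependent cache
       footprint that survives the squashed speculation. */
    uint8_t victim(size_t x) {
        if (x < array1_size)
            return array2[array1[x] * 64];
        return 0;
    }

    /* One common mitigation: mask the index so that even a mispredicted
       branch cannot read outside the array (assumes array1_size is a
       power of two); inserting a speculation barrier is an alternative. */
    uint8_t victim_masked(size_t x) {
        if (x < array1_size) {
            size_t safe = x & (array1_size - 1);
            return array2[array1[safe] * 64];
        }
        return 0;
    }
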
Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Campagna, Matthew, Cohen, Ernie, Grégoire, Benjamin, Pereira, Vitor, Portela, Bernardo, Strub, Pierre-Yves, Tasiran, Serdar.  2019.  A Machine-Checked Proof of Security for AWS Key Management Service. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. :63–78.

We present a machine-checked proof of security for the domain management protocol of Amazon Web Services' KMS (Key Management Service), a critical security service used throughout AWS and by AWS customers. Domain management is at the core of AWS KMS; it governs the top-level keys that anchor the security of encryption services at AWS. We show that the protocol securely implements an ideal distributed encryption mechanism under standard cryptographic assumptions. The proof is machine-checked in the EasyCrypt proof assistant and is the largest EasyCrypt development to date.
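
For readers unfamiliar with the proof style, the shape of the statement (given here generically, not as the paper's exact theorem) is indistinguishability from an ideal functionality: for every efficient adversary $\mathcal{A}$, $|\Pr[\mathcal{A}^{\mathsf{Real}} = 1] - \Pr[\mathcal{A}^{\mathsf{Ideal}} = 1]| \leq \varepsilon$, where $\mathsf{Real}$ runs the domain management protocol, $\mathsf{Ideal}$ runs an idealized distributed encryption mechanism, and $\varepsilon$ is bounded via reductions to standard cryptographic assumptions. EasyCrypt mechanizes exactly this kind of game-hopping argument.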

Barthe, Gilles, Fan, Xiong, Gancher, Joshua, Grégoire, Benjamin, Jacomme, Charlie, Shi, Elaine.  2018.  Symbolic Proofs for Lattice-Based Cryptography. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :538–555.

Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity. This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, Grégoire and Schmidt, CCS 2015), our approach combines a computational logic with deducibility problems, a standard tool for representing the adversary's knowledge in the Dolev-Yao model. The computational logic is used to capture (indistinguishability-based) security notions and drive the security proofs, whereas deducibility problems are used as side-conditions to check that the rules of the logic are applied correctly. We then use AutoLWE, an implementation of the logic, to deliver very short or even automatic proofs of several emblematic constructions, including CPA-PKE (Gentry et al., STOC 2008), (Hierarchical) Identity-Based Encryption (Agrawal et al., Eurocrypt 2010), Inner Product Encryption (Agrawal et al., Asiacrypt 2011), and CCA-PKE (Micciancio et al., Eurocrypt 2012). The main technical novelty beyond AutoLWE is a set of (semi-)decision procedures for deducibility problems, using extensions of Gröbner basis computations for subalgebras in the (non-)commutative setting (instead of ideals in the commutative setting). Our procedures cover the theory of matrices, which is required for lattice-based assumptions, as well as the theory of non-commutative rings, fields, and Diffie-Hellman exponentiation, in its standard, bilinear and multilinear forms. Additionally, AutoLWE supports oracle-relative assumptions, which are used specifically to apply (advanced forms of) the Leftover Hash Lemma, an information-theoretic tool widely used in lattice-based proofs.
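
As background (standard material, not specific to this paper), the decisional Learning With Errors assumption states that for $A \leftarrow \mathbb{Z}_q^{m \times n}$, secret $s \leftarrow \mathbb{Z}_q^n$, error $e \leftarrow \chi^m$ drawn from a small-noise distribution $\chi$, and uniform $u \leftarrow \mathbb{Z}_q^m$, the pairs $(A, As + e)$ and $(A, u)$ are computationally indistinguishable. Indistinguishability statements of this matrix-algebraic form are what the computational logic manipulates, and the Gröbner-basis procedures handle the associated deducibility side-conditions over matrix expressions.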

Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Dupressoir, François, Grégoire, Benjamin, Laporte, Vincent, Pereira, Vitor.  2017.  A Fast and Verified Software Stack for Secure Function Evaluation. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1989–2006.
We present a high-assurance software stack for secure function evaluation (SFE). Our stack consists of three components: i. a verified compiler (CircGen) that translates C programs into Boolean circuits; ii. a verified implementation of Yao's SFE protocol based on garbled circuits and oblivious transfer; and iii. transparent application integration and communications via FRESCO, an open-source framework for secure multiparty computation (MPC). CircGen is a general purpose tool that builds on CompCert, a verified optimizing compiler for C. It can be used in arbitrary Boolean circuit-based cryptography deployments. The security of our SFE protocol implementation is formally verified using EasyCrypt, a tool-assisted framework for building high-confidence cryptographic proofs, and it leverages a new formalization of garbled circuits based on the framework of Bellare, Hoang, and Rogaway (CCS 2012). We conduct a practical evaluation of our approach, and conclude that it is competitive with state-of-the-art (unverified) approaches. Our work provides concrete evidence of the feasibility of building efficient, verified, implementations of higher-level cryptographic systems. All our development is publicly available.
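
As background on the garbling framework cited above (the Bellare-Hoang-Rogaway syntax, recalled here in simplified form), a garbling scheme is a tuple of algorithms $(\mathsf{Gb}, \mathsf{En}, \mathsf{Ev}, \mathsf{De})$: $\mathsf{Gb}(1^k, f)$ outputs a garbled circuit $F$, encoding information $e$, and decoding information $d$; correctness requires $\mathsf{De}(d, \mathsf{Ev}(F, \mathsf{En}(e, x))) = f(x)$ for every input $x$; and privacy requires, roughly, that $F$, the encoded input $\mathsf{En}(e, x)$, and $d$ reveal nothing beyond $f(x)$ and agreed-upon side information such as the circuit topology. This is the interface the paper's new EasyCrypt formalization builds on.
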
Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Blot, Arthur, Grégoire, Benjamin, Laporte, Vincent, Oliveira, Tiago, Pacheco, Hugo, Schmidt, Benedikt, Strub, Pierre-Yves.  2017.  Jasmin: High-Assurance and High-Speed Cryptography. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1807–1823.
Jasmin is a framework for developing high-speed and high-assurance cryptographic software. The framework is structured around the Jasmin programming language and its compiler. The language is designed for enhancing portability of programs and for simplifying verification tasks. The compiler is designed to achieve predictability and efficiency of the output code (currently limited to x64 platforms), and is formally verified in the Coq proof assistant. Using the SUPERCOP framework, we evaluate the Jasmin compiler on representative cryptographic routines and conclude that the code generated by the compiler is as efficient as fast, hand-crafted implementations. Moreover, the framework includes highly automated tools for proving memory safety and constant-time security (for protecting against cache-based timing attacks). We also demonstrate the effectiveness of the verification tools on a large set of cryptographic routines.
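
As a small example of the style of code that such constant-time checkers accept (an illustrative C sketch, not Jasmin source; the function name is hypothetical), a branch-free conditional select replaces a secret-dependent if-statement with arithmetic masking:

    #include <stdint.h>

    /* Returns a when cond == 1 and b when cond == 0. There is no branch and
       no secret-dependent memory access, so leakage (in the CCT model) is
       independent of cond, a, and b. */
    uint64_t ct_select(uint64_t cond, uint64_t a, uint64_t b) {
        uint64_t mask = (uint64_t)0 - cond;  /* all ones if cond == 1, zero if cond == 0 */
        return (a & mask) | (b & ~mask);
    }
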
Barthe, Gilles, Gaboardi, Marco, Grégoire, Benjamin, Hsu, Justin, Strub, Pierre-Yves.  2016.  Proving Differential Privacy via Probabilistic Couplings. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science. :749–758.
Over the last decade, differential privacy has achieved widespread adoption within the privacy community. Moreover, it has attracted significant attention from the verification community, resulting in several successful tools for formally proving differential privacy. Although their technical approaches vary greatly, all existing tools rely on reasoning principles derived from the composition theorem of differential privacy. While this suffices to verify most common private algorithms, there are several important algorithms whose privacy analysis does not rely solely on the composition theorem. Their proofs are significantly more complex, and are currently beyond the reach of verification tools. In this paper, we develop compositional methods for formally verifying differential privacy for algorithms whose analysis goes beyond the composition theorem. Our methods are based on deep connections between differential privacy and probabilistic couplings, an established mathematical tool for reasoning about stochastic processes. Even when the composition theorem is not helpful, we can often prove privacy by a coupling argument. We demonstrate our methods on two algorithms: the Exponential mechanism and the Above Threshold algorithm, the critical component of the famous Sparse Vector algorithm. We verify these examples in apRHL+, a relational program logic that can construct approximate couplings. This logic extends the existing apRHL logic with more general rules for the Laplace mechanism and the one-sided Laplace mechanism, and new structural rules enabling pointwise reasoning about privacy; all the rules are inspired by the connection with couplings. While our paper is presented from a formal verification perspective, we believe that its main insight is of independent interest for the differential privacy community.
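
To fix notation (standard definitions, not the paper's contribution): a randomized algorithm $A$ is $(\varepsilon, \delta)$-differentially private if for all adjacent inputs $d, d'$ and all output sets $S$, $\Pr[A(d) \in S] \leq e^{\varepsilon}\Pr[A(d') \in S] + \delta$. The coupling view establishes such bounds pointwise: roughly, exhibiting a suitable approximate coupling of $A(d)$ and $A(d')$ for each possible output value suffices to conclude privacy, and apRHL+ proofs are structured around constructing these couplings.
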
Barthe, Gilles, Fong, Noémie, Gaboardi, Marco, Grégoire, Benjamin, Hsu, Justin, Strub, Pierre-Yves.  2016.  Advanced Probabilistic Couplings for Differential Privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :55–67.

Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt, and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
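
For context, the advanced composition theorem mentioned above (due to Dwork, Rothblum, and Vadhan, recalled here in its common form) states that the $k$-fold adaptive composition of $(\varepsilon, \delta)$-differentially private mechanisms is $(\varepsilon', k\delta + \delta')$-differentially private for every $\delta' > 0$, where $\varepsilon' = \varepsilon\sqrt{2k\ln(1/\delta')} + k\varepsilon(e^{\varepsilon} - 1)$. The privacy cost thus grows roughly with $\sqrt{k}$ rather than $k$, and this is the kind of bound the extended logic must be able to track.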

Barthe, Gilles, Farina, Gian Pietro, Gaboardi, Marco, Arias, Emilio Jesus Gallego, Gordon, Andy, Hsu, Justin, Strub, Pierre-Yves.  2016.  Differentially Private Bayesian Programming. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :68–79.

We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and in relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
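
One standard way to read the metric-indexed refinements (a common formulation in this line of work, not quoted from the paper) is via the asymmetric $\varepsilon$-distance on distributions, $\Delta_{\varepsilon}(\mu_1, \mu_2) = \max_{S}\bigl(\Pr_{x \leftarrow \mu_1}[x \in S] - e^{\varepsilon}\Pr_{x \leftarrow \mu_2}[x \in S]\bigr)$: a program is $(\varepsilon, \delta)$-differentially private precisely when the output distributions obtained from any two adjacent inputs are within distance $\delta$ of each other, and the refinement type system tracks this distance compositionally.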