Biblio

Filters: Author is Strub, Pierre-Yves
2020-01-27
Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Campagna, Matthew, Cohen, Ernie, Grégoire, Benjamin, Pereira, Vitor, Portela, Bernardo, Strub, Pierre-Yves, Tasiran, Serdar.  2019.  A Machine-Checked Proof of Security for AWS Key Management Service. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. :63–78.

We present a machine-checked proof of security for the domain management protocol of Amazon Web Services' KMS (Key Management Service), a critical security service used throughout AWS and by AWS customers. Domain management is at the core of AWS KMS; it governs the top-level keys that anchor the security of encryption services at AWS. We show that the protocol securely implements an ideal distributed encryption mechanism under standard cryptographic assumptions. The proof is machine-checked in the EasyCrypt proof assistant and is the largest EasyCrypt development to date.

2018-01-10
Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Blot, Arthur, Grégoire, Benjamin, Laporte, Vincent, Oliveira, Tiago, Pacheco, Hugo, Schmidt, Benedikt, Strub, Pierre-Yves.  2017.  Jasmin: High-Assurance and High-Speed Cryptography. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1807–1823.
Jasmin is a framework for developing high-speed and high-assurance cryptographic software. The framework is structured around the Jasmin programming language and its compiler. The language is designed for enhancing portability of programs and for simplifying verification tasks. The compiler is designed to achieve predictability and efficiency of the output code (currently limited to x64 platforms), and is formally verified in the Coq proof assistant. Using the SUPERCOP framework, we evaluate the Jasmin compiler on representative cryptographic routines and conclude that the code generated by the compiler is as efficient as fast, hand-crafted implementations. Moreover, the framework includes highly automated tools for proving memory safety and constant-time security (for protecting against cache-based timing attacks). We also demonstrate the effectiveness of the verification tools on a large set of cryptographic routines.
2017-09-27
Barthe, Gilles, Gaboardi, Marco, Grégoire, Benjamin, Hsu, Justin, Strub, Pierre-Yves.  2016.  Proving Differential Privacy via Probabilistic Couplings. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science. :749–758.
Over the last decade, differential privacy has achieved widespread adoption within the privacy community. Moreover, it has attracted significant attention from the verification community, resulting in several successful tools for formally proving differential privacy. Although their technical approaches vary greatly, all existing tools rely on reasoning principles derived from the composition theorem of differential privacy. While this suffices to verify most common private algorithms, there are several important algorithms whose privacy analysis does not rely solely on the composition theorem. Their proofs are significantly more complex, and are currently beyond the reach of verification tools. In this paper, we develop compositional methods for formally verifying differential privacy for algorithms whose analysis goes beyond the composition theorem. Our methods are based on deep connections between differential privacy and probabilistic couplings, an established mathematical tool for reasoning about stochastic processes. Even when the composition theorem is not helpful, we can often prove privacy by a coupling argument. We demonstrate our methods on two algorithms: the Exponential mechanism and the Above Threshold algorithm, the critical component of the famous Sparse Vector algorithm. We verify these examples in the relational program logic apRHL+, which can construct approximate couplings. This logic extends the existing apRHL logic with more general rules for the Laplace mechanism and the one-sided Laplace mechanism, and new structural rules enabling pointwise reasoning about privacy; all the rules are inspired by the connection with coupling. While our paper is presented from a formal verification perspective, we believe that its main insight is of independent interest for the differential privacy community.
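
For readers outside the privacy community, the two mechanisms named in this abstract can be sketched in a few lines of Python. The sketch below is illustrative only (it follows the standard textbook presentations, not the paper's apRHL+ formalization), and the function names and noise scales are assumptions of this sketch:

import numpy as np

def exponential_mechanism(candidates, score, database, sensitivity, epsilon, rng=None):
    # Sample a candidate with probability proportional to
    # exp(epsilon * score / (2 * sensitivity)); this release is epsilon-DP.
    rng = rng or np.random.default_rng()
    logits = np.array([epsilon * score(database, c) / (2.0 * sensitivity)
                       for c in candidates])
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return candidates[rng.choice(len(candidates), p=probs)]

def above_threshold(queries, database, threshold, epsilon, rng=None):
    # Release the index of the first sensitivity-1 query whose noisy answer
    # exceeds a noisy threshold, then halt; the whole loop is epsilon-DP
    # regardless of how many queries are inspected before halting.
    rng = rng or np.random.default_rng()
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q(database) + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i
    return None                                 # no query rose above the threshold
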
2017-05-22
Barthe, Gilles, Fong, Noémie, Gaboardi, Marco, Grégoire, Benjamin, Hsu, Justin, Strub, Pierre-Yves.  2016.  Advanced Probabilistic Couplings for Differential Privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :55–67.

Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt, and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
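
For context on the second class of algorithms above: the advanced composition theorem bounds the privacy cost of k adaptive (epsilon, delta)-DP steps by roughly epsilon * sqrt(2k * ln(1/delta')) rather than k * epsilon. The small Python sketch below is not from the paper; it simply evaluates the standard bound (Dwork and Roth), with illustrative names:

import math

def advanced_composition(epsilon, delta, k, delta_slack):
    # k-fold adaptive composition of (epsilon, delta)-DP mechanisms is
    # (eps_total, k*delta + delta_slack)-DP, with eps_total growing like sqrt(k).
    eps_total = (math.sqrt(2 * k * math.log(1.0 / delta_slack)) * epsilon
                 + k * epsilon * (math.exp(epsilon) - 1.0))
    return eps_total, k * delta + delta_slack

# Example: 100 adaptive steps of a 0.1-DP mechanism with slack 1e-6.
# Basic composition gives epsilon = 10; advanced composition gives roughly 6.3.
eps_total, delta_total = advanced_composition(0.1, 0.0, 100, 1e-6)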

2017-10-13
Barthe, Gilles, Farina, Gian Pietro, Gaboardi, Marco, Arias, Emilio Jesus Gallego, Gordon, Andy, Hsu, Justin, Strub, Pierre-Yves.  2016.  Differentially Private Bayesian Programming. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :68–79.

We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
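
PrivInfer programs are written in the paper's own functional probabilistic language, so the following is only a rough Python sketch of the kind of algorithm the framework targets: a Beta-Bernoulli posterior made differentially private by adding Laplace noise to the sufficient statistics before the conjugate update. The parameter names and the clamping step are assumptions of this sketch, not the paper's construction:

import numpy as np

def private_beta_posterior(data, epsilon, alpha=1.0, beta=1.0, rng=None):
    # data: list of 0/1 observations.  Adding or removing one record changes
    # each count by at most 1, so Laplace noise of scale 2/epsilon on each
    # count is epsilon/2-DP per count and epsilon-DP overall by composition.
    rng = rng or np.random.default_rng()
    successes = sum(data)
    failures = len(data) - successes
    noisy_successes = successes + rng.laplace(scale=2.0 / epsilon)
    noisy_failures = failures + rng.laplace(scale=2.0 / epsilon)
    # Clamp so the released posterior parameters remain valid (positive).
    return alpha + max(noisy_successes, 0.0), beta + max(noisy_failures, 0.0)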