Bibliography
A key question when characterising a system's vulnerability to timing attacks is whether it allows an adversary to aggregate information about a secret over multiple timing measurements. Existing approaches for reasoning about this aggregate information rely on strong assumptions about the adversary's measurement and computational capabilities, which is why they fall short in modelling, explaining, or synthesising real-world attacks against cryptosystems such as RSA or AES. In this paper we present a novel model for reasoning about information aggregation in timing attacks. The model is based on an abstraction of timing measurements that better captures the capabilities of real-world adversaries, and on a notion of compositionality of programs that explains attacks by divide-and-conquer. Our model thus lifts important limiting assumptions made in prior work and enables us to give the first uniform explanation of high-profile timing attacks in the language of information-flow analysis.
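The abstract does not spell out a concrete attack, but the kind of aggregation it refers to can be illustrated with a standard toy example: a byte-wise timing attack on an early-exit comparison, where the adversary combines many noisy measurements and recovers the secret by divide-and-conquer. The sketch below is an illustration under made-up assumptions (the secret value, the sleep-based timing difference, and the measurement strategy are all invented), not material from the paper.

```python
import time

SECRET = b"k3y!"  # hypothetical secret, chosen only for this illustration

def insecure_compare(guess: bytes) -> bool:
    # Early exit at the first mismatch leaks how many prefix bytes are correct.
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerated per-byte cost so the leak is measurable
    return True

def measure(guess: bytes, trials: int = 5) -> float:
    # Aggregate several noisy measurements; taking the minimum suppresses noise.
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        insecure_compare(guess)
        best = min(best, time.perf_counter() - start)
    return best

def recover_secret(length: int = len(SECRET)) -> bytes:
    # Divide-and-conquer: each byte is recovered independently, so information
    # about the secret is aggregated across many separate timing measurements.
    known = b""
    for _ in range(length):
        pad = b"\x00" * (length - len(known) - 1)
        timings = {bytes([c]): measure(known + bytes([c]) + pad)
                   for c in range(32, 127)}
        known += max(timings, key=timings.get)
    return known

print(recover_secret())  # expected to print b'k3y!' under these toy assumptions
```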
Shielding systems such as AMD's Secure Encrypted Virtualization aim to protect a virtual machine from a higher-privileged entity such as the hypervisor. A cornerstone of these systems is their ability to protect memory from unauthorized accesses. Despite this protection mechanism, previous attacks have leveraged control over memory resources to infer the control flow of applications running in a shielded system. While those works focused on specific target applications, there has been no general analysis of how the control flow of a protected application can be inferred. This paper closes this gap by providing a detailed analysis of the detectability of control flow from memory access patterns. To that end, we do not focus on a specific shielding system or a specific target application, but present a framework that can be applied to different types of shielding systems as well as different types of attackers. By training a random forest classifier on the memory accesses emitted by syscalls of a shielded entity, we show that it is possible to infer the control flow of shielded entities with a high degree of accuracy.
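As a rough illustration of the classification step this abstract mentions, the following sketch trains a scikit-learn random forest on synthetic page-access count vectors. The feature layout, the number of candidate control-flow paths, and the data itself are assumptions made for illustration; they are not the paper's framework or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(path_id: int, n_pages: int = 64) -> np.ndarray:
    # Assumption: each control-flow path touches a characteristic set of pages;
    # the feature vector counts accesses per page as observed by the hypervisor.
    base = np.zeros(n_pages)
    base[path_id::7] = rng.poisson(5, size=base[path_id::7].shape)
    return base + rng.poisson(1, size=n_pages)  # background noise

paths = rng.integers(0, 4, size=2000)  # 4 hypothetical candidate paths
X = np.stack([synthetic_trace(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, paths, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("path-inference accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```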
Machine learning (ML) models are often trained, using large amounts of computing power, on private datasets that are expensive to collect or highly sensitive. The resulting models are commonly exposed through online APIs, or embedded in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have so far not been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted-input attacks, and we propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
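The abstract does not describe its countermeasures in detail. As a generic illustration of one common timing side-channel countermeasure building block, the sketch below replaces a secret-dependent branch with a branchless, constant-time select. This is a standard technique, not the paper's proposed defence, and real countermeasures would be implemented in C or assembly rather than Python, where constant-time behaviour can actually be enforced.

```python
def select_branching(secret_bit: int, a: int, b: int) -> int:
    # Leaky pattern: which side executes depends on the secret bit, so the
    # branch (and any difference in its cost) can show up in timing.
    return a if secret_bit else b

def select_constant_time(secret_bit: int, a: int, b: int) -> int:
    # Branchless pattern: mask arithmetic touches both inputs regardless of
    # the secret, so execution does not depend on which value is chosen.
    mask = -(secret_bit & 1)          # all-ones if bit is 1, zero if bit is 0
    return (a & mask) | (b & ~mask)

assert select_constant_time(1, 7, 9) == 7
assert select_constant_time(0, 7, 9) == 9
```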