Biblio

Filters: Keyword is side-channels
2020-10-05
Rakotonirina, Itsaka, Köpf, Boris.  2019.  On Aggregation of Information in Timing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :387–400.

A key question for characterising a system's vulnerability against timing attacks is whether or not it allows an adversary to aggregate information about a secret over multiple timing measurements. Existing approaches for reasoning about this aggregate information rely on strong assumptions about the adversary's measurement and computation capabilities, which is why they fall short in modelling, explaining, or synthesising real-world attacks against cryptosystems such as RSA or AES. In this paper we present a novel model for reasoning about information aggregation in timing attacks. The model is based on a new abstraction of timing measurements that better captures the capabilities of real-world adversaries, and a notion of compositionality of programs that explains attacks by divide-and-conquer. Our model thus lifts important limiting assumptions made in prior work and enables us to give the first uniform explanation of high-profile timing attacks in the language of information-flow analysis.
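
To make the aggregation idea concrete, here is a hedged Python sketch (not the paper's model): a toy victim whose running time leaks how many leading key bits a guess reproduces, attacked by divide-and-conquer. Averaging many noisy measurements per candidate bit aggregates enough information to recover the secret bit by bit. The victim, its leakage model, and the noise level are all hypothetical.

```python
import random
import statistics

# Toy victim: processing time leaks one extra unit per correctly guessed
# leading key bit (a crude stand-in for square-and-multiply leakage).
KEY_BITS = 16
SECRET = [random.randint(0, 1) for _ in range(KEY_BITS)]

def measure(prefix_guess):
    """Simulated noisy timing measurement: cost grows with the length of
    the prefix of SECRET that the guess reproduces correctly."""
    correct = 0
    for g, s in zip(prefix_guess, SECRET):
        if g != s:
            break
        correct += 1
    return correct + random.gauss(0, 2.0)  # heavy measurement noise

def recover_key(samples_per_guess=500):
    """Divide-and-conquer: fix one bit at a time, aggregating many noisy
    measurements so the noise averages out."""
    recovered = []
    for _ in range(KEY_BITS):
        scores = {}
        for bit in (0, 1):
            guess = recovered + [bit]
            scores[bit] = statistics.mean(
                measure(guess) for _ in range(samples_per_guess))
        recovered.append(max(scores, key=scores.get))
    return recovered

print(recover_key() == SECRET)  # True with high probability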

2019-01-31
Buhren, Robert, Hetzelt, Felicitas, Pirnay, Niklas.  2018.  On the Detectability of Control Flow Using Memory Access Patterns. Proceedings of the 3rd Workshop on System Software for Trusted Execution. :48–53.

Shielding systems such as AMD's Secure Encrypted Virtualization aim to protect a virtual machine from a higher-privileged entity such as the hypervisor. A cornerstone of these systems is the ability to protect the memory from unauthorized accesses. Despite this protection mechanism, previous attacks leveraged control over memory resources to infer the control flow of applications running in a shielded system. While previous works focused on a specific target application, there has been no general analysis of how the control flow of a protected application can be inferred. This paper closes this gap by providing a detailed analysis of the detectability of control flow using memory access patterns. To that end, we do not focus on a specific shielding system or a specific target application, but present a framework which can be applied to different types of shielding systems as well as to different types of attackers. By training a random forest classifier on the memory accesses emitted by syscalls of a shielded entity, we show that it is possible to infer the control flow of shielded entities with a high degree of accuracy.
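
The classification step can be sketched as follows (a minimal stand-in, not the authors' framework): synthetic per-syscall page-access histograms take the place of real traces, and a random forest learns to map them back to control-flow paths. The page profiles, the Poisson noise model, and all constants are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for real traces: each sample is a histogram of
# page accesses observed around one syscall, and each control-flow
# path touches a characteristic subset of pages.
N_PAGES, N_PATHS, N_SAMPLES = 64, 4, 2000
path_profiles = rng.integers(0, 8, size=(N_PATHS, N_PAGES))

labels = rng.integers(0, N_PATHS, size=N_SAMPLES)
traces = rng.poisson(path_profiles[labels] + 1)  # noisy access counts

X_train, X_test, y_train, y_test = train_test_split(
    traces, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"path-inference accuracy: {clf.score(X_test, y_test):.2f}")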

2019-01-21
Isakov, M., Bu, L., Cheng, H., Kinsy, M. A.  2018.  Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators. 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :62–67.

Machine learning (ML) models are often trained, using large amounts of computing power, on private datasets that are expensive to collect or highly sensitive. The models are commonly exposed either through online APIs, or used in hardware devices deployed in the field or given to end users. This provides an incentive for adversaries to steal these ML models as a proxy for gathering datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side-channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
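
The timing countermeasure can be illustrated in software; the paper targets hardware accelerators, so the sketch below is only a software analogue of the general idea: padding every inference call to a fixed latency budget makes the observable response time approximately input-independent. The `constant_latency` decorator and the `infer` stub are hypothetical names.

```python
import time

def constant_latency(budget_s):
    """Decorator: pad the wrapped call to a fixed latency budget so the
    observed response time is (approximately) input-independent."""
    def decorate(fn):
        def wrapped(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            remaining = budget_s - (time.monotonic() - start)
            if remaining < 0:
                raise RuntimeError("budget exceeded; latency would leak")
            time.sleep(remaining)
            return result
        return wrapped
    return decorate

@constant_latency(budget_s=0.05)
def infer(x):
    # Hypothetical stand-in for an accelerator call whose raw latency
    # depends on the input and on the secret model internals.
    time.sleep(0.01 + 0.02 * (x % 2))
    return x

for x in range(4):
    start = time.monotonic()
    infer(x)
    print(f"x={x}: observed latency {time.monotonic() - start:.3f}s")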

2018-09-12
Chhetri, Sujit Rokka, Canedo, Arquimedes, Faruque, Mohammad Abdullah Al.  2017.  Confidentiality Breach Through Acoustic Side-Channel in Cyber-Physical Additive Manufacturing Systems. ACM Trans. Cyber-Phys. Syst. 2:3:1–3:25.

In cyber-physical systems, due to the tight integration of the computational, communication, and physical components, most of the information in the cyber domain manifests in terms of physical actions (such as motion, temperature change, etc.). This makes the system prone to physical-to-cyber domain attacks that affect confidentiality. Physical actions are governed by energy flows, which may be observed. Some of these observable energy flows unintentionally leak information about the cyber domain and hence are known as side-channels. Side-channels such as acoustic, thermal, and power allow attackers to acquire information without exploiting vulnerabilities in the algorithms implemented in the system. As a case study, we take a cyber-physical additive manufacturing system (a fused deposition modeling-based three-dimensional (3D) printer) to demonstrate how the acoustic side-channel can be used to breach the confidentiality of the system. In 3D printers, geometry, process, and machine information are the intellectual property, stored in the cyber domain (G-code). We have designed an attack model that consists of digital signal processing, machine learning algorithms, and context-based post-processing to steal the intellectual property in the form of geometry details by reconstructing the G-code and thus the test objects. We have successfully reconstructed various test objects with an average axis prediction accuracy of 86% and an average length prediction error of 11.11%.
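
A minimal sketch of the shape of such a pipeline, with entirely synthetic audio in place of real printer recordings and a random forest in place of the authors' attack model: per-frame FFT magnitudes serve as features, a classifier predicts which axis is moving, and the predictions are post-processed into G-code-like moves. The axis frequencies, noise level, and `G1` formatting with placeholder distances are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
FS, FRAME = 8000, 1024  # sample rate (Hz), samples per frame

# Hypothetical acoustic signatures: each axis motor dominates a
# different frequency band (a crude stand-in for real printer audio).
AXIS_FREQS = {"X": 440.0, "Y": 880.0, "Z": 1320.0}
AXES = list(AXIS_FREQS)

def synth_frame(axis):
    t = np.arange(FRAME) / FS
    tone = np.sin(2 * np.pi * AXIS_FREQS[axis] * t)
    return tone + rng.normal(0, 0.5, FRAME)  # motor tone plus noise

def features(frame):
    return np.abs(np.fft.rfft(frame))  # magnitude spectrum as features

# Profiling phase: train on labelled frames.
y = rng.integers(0, len(AXES), 600)
X = np.array([features(synth_frame(AXES[i])) for i in y])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Attack phase: classify unseen frames, then post-process the predicted
# axis sequence into G-code-like moves (distances are placeholders).
observed = ["X", "X", "Y", "Z"]
preds = [AXES[clf.predict(features(synth_frame(a)).reshape(1, -1))[0]]
         for a in observed]
print("\n".join(f"G1 {axis}1.0" for axis in preds))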