Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems
Title | Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Ganji, F., Amir, S., Tajik, S., Forte, D., Seifert, J.-P. |
Conference Name | 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE) |
Date Published | March 2020 |
Publisher | IEEE |
ISBN Number | 978-3-9819263-4-7 |
Keywords | Adversary Models, Approximation algorithms, Boolean functions, Composed Hardware, cryptanalysis attacks, cryptographic scheme, cryptography, Hardware, Human Behavior, learning (artificial intelligence), logic locking, machine learning, machine learning-based adversary model, machine learning-based attacks, Metrics, physically unclonable functions, Picture archiving and communication systems, pubcrawl, resilience, Resiliency, Root-of-trust, Scalability |
Abstract | The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in formalizing the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis attacks, adversary models for attackers who enjoy the advantages of machine learning techniques have not yet been developed thoroughly. In particular, when it comes to composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper explores adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware roots of trust, that had been claimed to be infeasible. We demonstrate, however, that this assumption is invalid, as inaccurate adversary models have been considered in the literature. |
URL | https://ieeexplore.ieee.org/document/9116316 |
DOI | 10.23919/DATE48585.2020.9116316 |
Citation Key | ganji_pitfalls_2020 |
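To make concrete the abstract's notion of a machine learning-based attack against a hardware root of trust, the following is a minimal, self-contained sketch of a classic modeling attack on a simulated arbiter PUF. It is not taken from the paper: the linear additive delay model, the parity feature map, and all parameters (64 stages, 5000 training challenge-response pairs, a plain logistic-regression learner trained by gradient descent) are standard illustrative assumptions.

```python
# Sketch of an ML "modeling attack" on a simulated arbiter PUF.
# Illustrative only; model, features, and parameters are assumptions,
# not the attack described in Ganji et al. (DATE 2020).
import numpy as np

rng = np.random.default_rng(0)

n_stages = 64   # challenge length of the simulated arbiter PUF (assumed)
n_train = 5000  # CRPs available to the adversary (assumed)
n_test = 2000

def parity_features(challenges):
    """Map 0/1 challenges to the standard arbiter-PUF parity features.

    Under the linear additive delay model the response is
    sign(w . phi(c)), where phi_i(c) = prod_{j >= i} (1 - 2 c_j).
    """
    signs = 1.0 - 2.0 * challenges                 # 0/1 -> +1/-1
    # Cumulative product from the right yields the parity features.
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    # Constant feature models the final arbiter bias.
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# Secret delay vector of the "real" PUF (unknown to the adversary).
w_true = rng.normal(size=n_stages + 1)

def puf_response(challenges):
    return (parity_features(challenges) @ w_true > 0).astype(float)

# The adversary only observes challenge-response pairs.
C_train = rng.integers(0, 2, size=(n_train, n_stages))
C_test = rng.integers(0, 2, size=(n_test, n_stages))
X_train, y_train = parity_features(C_train), puf_response(C_train)
X_test, y_test = parity_features(C_test), puf_response(C_test)

# Fit logistic regression by gradient descent on the logistic loss.
w = np.zeros(n_stages + 1)
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w)))        # predicted P(r = 1)
    w -= lr * X_train.T @ (p - y_train) / n_train   # logistic-loss gradient

accuracy = np.mean((X_test @ w > 0) == (y_test == 1))
print(f"model accuracy on unseen challenges: {accuracy:.3f}")
```

Given enough CRPs, the learned linear model predicts responses to unseen challenges with high accuracy, which is precisely the adversarial capability the paper argues must be captured by an accurate adversary model.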