Biblio

Filters: Author is Amir, S.
2021-01-28
Ganji, F., Amir, S., Tajik, S., Forte, D., Seifert, J.-P. 2020. Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems. 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE). :514–519.

The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in formalizing the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis attacks, adversary models for attackers who can leverage machine learning techniques have not yet been developed thoroughly. In particular, for composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper aims to explore adversary models from the machine learning perspective. To this end, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware roots-of-trust, for which such attacks had been claimed to be infeasible. We demonstrate that this assumption is invalid because inaccurate adversary models have been considered in the literature.