Biblio

Filters: Keyword is Arbiter-PUF
2019-06-10
Su, H., Zwolinski, M., Halak, B. 2018. A Machine Learning Attacks Resistant Two Stage Physical Unclonable Functions Design. 2018 IEEE 3rd International Verification and Security Workshop (IVSW). :52–55.

Physical Unclonable Functions (PUFs) have been designed for many security applications such as device identification, device authentication and key generation, especially for lightweight electronics. Traditional approaches to enhancing security, such as hash functions, may be expensive and resource dependent. However, modelling attacks using machine learning (ML) have shown that most PUFs are vulnerable. In this paper, a combination of a 32-bit current-mirror PUF and a 16-bit arbiter PUF in 65nm CMOS technology is proposed to improve resilience against modelling attacks. Both PUFs are individually vulnerable to machine learning attacks, with output prediction rates of 99.2% and 98.8% respectively; the proposed combination reduces the prediction rate to 60%.
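
As a rough illustration of the modelling attacks discussed in this abstract, the sketch below simulates a standalone Arbiter-PUF with the standard linear additive-delay model and trains a logistic-regression attacker on its challenge-response pairs. It is not the paper's 65nm two-stage design (the current-mirror PUF and the combined structure are not modeled); the stage count, CRP budget and scikit-learn attacker are illustrative assumptions.

    # Sketch of an ML modelling attack on a simulated Arbiter-PUF.
    # The PUF is the standard linear additive-delay model, not the
    # paper's silicon design; the attacker is plain logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    N_STAGES = 16      # 16-bit Arbiter-PUF, as in the paper's second stage
    N_CRPS = 5000      # challenge-response pairs available to the attacker

    # Random stage-delay differences define one PUF instance.
    weights = rng.normal(size=N_STAGES + 1)

    def parity_features(challenges):
        # Map {0,1} challenges to the parity feature vector of the
        # additive-delay model: phi_j = product of (1 - 2*c_i) for i >= j.
        c = 1 - 2 * challenges
        phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]
        return np.hstack([phi, np.ones((challenges.shape[0], 1))])

    challenges = rng.integers(0, 2, size=(N_CRPS, N_STAGES))
    responses = (parity_features(challenges) @ weights > 0).astype(int)

    # Train on 80% of the CRPs, test on the held-out 20%.
    split = int(0.8 * N_CRPS)
    attacker = LogisticRegression(max_iter=1000)
    attacker.fit(parity_features(challenges[:split]), responses[:split])
    accuracy = attacker.score(parity_features(challenges[split:]), responses[split:])
    print(f"prediction rate on unseen challenges: {accuracy:.1%}")  # typically well above 90%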

2018-02-06
Mispan, M. S., Halak, B., Zwolinski, M. 2017. Lightweight Obfuscation Techniques for Modeling Attacks Resistant PUFs. 2017 IEEE 2nd International Verification and Security Workshop (IVSW). :19–24.

Building lightweight security for low-cost pervasive devices is a major challenge, given the design requirements of a small footprint and low power consumption. Physical Unclonable Functions (PUFs) have emerged as a promising technology for providing low-cost authentication for such devices. By exploiting intrinsic manufacturing process variations, PUFs are able to generate unique and apparently random chip identifiers. Strong-PUFs are a variant of PUFs suggested for lightweight authentication applications. Unfortunately, many Strong-PUFs have been shown to be susceptible to modelling attacks (i.e., using machine learning techniques) in which an adversary has access to challenge-response pairs. In this study, we propose an obfuscation technique applied during post-processing of Strong-PUF responses to increase resilience against machine learning attacks. We conduct machine learning experiments using Support Vector Machines and Artificial Neural Networks on two Strong-PUFs: a 32-bit Arbiter-PUF and a 2-XOR 32-bit Arbiter-PUF. The obfuscation technique reduces the predictability of the 32-bit Arbiter-PUF to approximately 70%; combining it with the 2-XOR 32-bit Arbiter-PUF reduces the predictability further, to approximately 64%. The larger reduction for the XOR Arbiter-PUF is attributed to the good uniformity of that architecture. The obfuscation logic adds only 788 and 1080 gate equivalents of area overhead for the 32-bit Arbiter-PUF and the 2-XOR 32-bit Arbiter-PUF, respectively.
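
For context on the attack setup described above, the sketch below simulates a 2-XOR 32-bit Arbiter-PUF (two independent additive-delay chains fed the same challenge, with their responses XORed) and trains an SVM on its raw challenge-response pairs. The paper's obfuscation step is not specified in the abstract and is therefore not reproduced here; the CRP budget and the RBF kernel are illustrative assumptions.

    # Sketch of the SVM modelling-attack setup on a simulated 2-XOR
    # 32-bit Arbiter-PUF (raw responses, without the paper's obfuscation).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    N_STAGES, N_CRPS = 32, 6000

    def parity_features(challenges):
        # Standard parity transform of the Arbiter-PUF additive-delay model.
        c = 1 - 2 * challenges
        phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]
        return np.hstack([phi, np.ones((challenges.shape[0], 1))])

    def xor_arbiter_response(challenges, weight_sets):
        # XOR the responses of independent arbiter chains fed the same challenge.
        phi = parity_features(challenges)
        bits = [(phi @ w > 0).astype(int) for w in weight_sets]
        return np.bitwise_xor.reduce(bits)

    weight_sets = [rng.normal(size=N_STAGES + 1) for _ in range(2)]  # 2-XOR
    challenges = rng.integers(0, 2, size=(N_CRPS, N_STAGES))
    responses = xor_arbiter_response(challenges, weight_sets)

    # SVM attacker trained on 80% of the CRPs, evaluated on the rest.
    split = int(0.8 * N_CRPS)
    attacker = SVC(kernel="rbf")
    attacker.fit(parity_features(challenges[:split]), responses[:split])
    accuracy = attacker.score(parity_features(challenges[split:]), responses[split:])
    print(f"prediction rate on unseen challenges: {accuracy:.1%}")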