Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection

Title: Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Hou, Xiaolu; Breier, Jakub; Jap, Dirmanto; Ma, Lei; Bhasin, Shivam; Liu, Yang
Conference Name: 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA)
Keywords: Biological neural networks, Circuit faults, Deep Learning, Fault attack, Human Behavior, human factors, Laser modes, Mathematical model, Metrics, Neural networks, Neurons, pubcrawl, Scalability, semiconductor lasers, Tamper resistance, Timing
Abstract: Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented on an embedded device by using laser fault injection, a physical attack technique mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. The outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters, bringing the device into unstable conditions and resulting in faults.
DOI: 10.1109/IPFA49335.2020.9261013
Citation Key: hou_security_2020
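
Note: The abstract describes misclassification caused by faulting activation computations in hidden layers. The following is a minimal, hypothetical sketch (not the authors' experimental setup) of that fault model in software: a toy network with random weights in which a single hidden ReLU output is forced to a corrupted value, and the clean and faulted predictions are compared. The network shape, weights, and "stuck-at-zero" fault model are all illustrative assumptions.

```python
# Illustrative simulation of the fault model sketched in the abstract.
# HYPOTHETICAL: network size, weights, and fault model are assumptions,
# not the paper's laser fault injection setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 4 inputs -> 8 hidden (ReLU) -> 3 output classes.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, faulted_neuron=None):
    a = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    if faulted_neuron is not None:
        # Assumed fault model: the injected fault forces one hidden
        # activation output to zero (the neuron's output is skipped).
        a[faulted_neuron] = 0.0
    return softmax(W2 @ a + b2)

x = rng.normal(size=4)
clean = int(np.argmax(forward(x)))
for n in range(8):
    faulty = int(np.argmax(forward(x, faulted_neuron=n)))
    if faulty != clean:
        print(f"faulting hidden neuron {n}: class {clean} -> {faulty}")
```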