Evaluating Fault Resiliency of Compressed Deep Neural Networks
Title | Evaluating Fault Resiliency of Compressed Deep Neural Networks |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Sabbagh, Majid, Gongye, Cheng, Fei, Yunsi, Wang, Yanzhi |
Conference Name | 2019 IEEE International Conference on Embedded Software and Systems (ICESS) |
Date Published | June 2019 |
ISBN Number | 978-1-7281-2437-7 |
Keywords | Analytical models, compressed deep neural networks, compressed DNN models, Computational modeling, data compression, Data models, Deep Learning, Deep neural network model compression, Efficient and secure inference engines, Fault Attacks, fault resilience, Fault resiliency, Fault tolerance, fault tolerant computing, Hardware, hardware mitigation techniques, inference mechanisms, learning (artificial intelligence), LeNet-5, neural nets, Predictive models, pubcrawl, Quantization (signal), resilience, Resiliency, software mitigation techniques, storage faults, VGG16 |
Abstract | Model compression is considered an effective way to reduce the implementation cost of deep neural networks (DNNs) while maintaining inference accuracy. Many recent studies have developed efficient model compression algorithms and accelerator implementations on various devices. Protecting the integrity of DNN inference against fault attacks is important for diverse deep-learning-enabled applications. However, there has been little research investigating the fault resilience of DNNs and the impact of model compression on fault tolerance. In this work, we consider faults on different data types and develop a simulation framework for understanding the fault resiliency of compressed DNN models as compared to uncompressed models. We perform our experiments on two common DNNs, LeNet-5 and VGG16, and evaluate their fault resiliency under different types of compression. The results show that binary quantization can effectively increase the fault resilience of DNN models by 10000x for both LeNet-5 and VGG16. Finally, we propose software and hardware mitigation techniques to increase the fault resiliency of DNN models. |
URL | https://ieeexplore.ieee.org/document/8782505 |
DOI | 10.1109/ICESS.2019.8782505 |
Citation Key | sabbagh_evaluating_2019 |
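The abstract describes simulating storage faults (bit flips) in model parameters of different data types. As a minimal sketch of that idea, not the authors' framework, the snippet below flips a single bit in the IEEE-754 single-precision encoding of a weight. The function name `flip_bit` and the choice of bit positions are illustrative assumptions.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return value with one bit of its IEEE-754 single-precision
    (binary32) encoding flipped. Bit 0 is the least-significant
    mantissa bit; bit 31 is the sign bit."""
    # Reinterpret the float's 32-bit pattern as an unsigned integer.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    # XOR toggles exactly the chosen bit, then reinterpret as float.
    (faulty,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return faulty

# A fault in a low mantissa bit barely perturbs the weight...
print(flip_bit(1.0, 0))    # ≈ 1.0000001
# ...while a fault in the top exponent bit is catastrophic: 1.0 becomes +inf.
print(flip_bit(1.0, 30))   # inf
```

This contrast suggests why the paper finds binary quantization so resilient: a binarized weight occupies a single bit, so a storage fault can at worst flip its sign, bounding the per-weight error, whereas a single exponent-bit fault in a 32-bit float can change a weight by many orders of magnitude.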