Title | Evaluating Data Resilience in CNNs from an Approximate Memory Perspective |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Chen, Yuanchang, Zhu, Yizhe, Qiao, Fei, Han, Jie, Liu, Yuansheng, Yang, Huazhong |
Conference Name | Proceedings of the Great Lakes Symposium on VLSI 2017 |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-4972-7 |
Keywords | approximate memory, convolutional neural network, data resilience evaluation, Neural Network Resilience, pubcrawl, resilience, Resiliency |
Abstract | Due to the large volumes of data that need to be processed, efficient memory access and data transmission are crucial for high-performance implementations of convolutional neural networks (CNNs). Approximate memory is a promising technique for achieving efficient memory access and data transmission in CNN hardware implementations. To assess the feasibility of applying approximate memory techniques, we propose a framework for the data resilience evaluation (DRE) of CNNs and verify its effectiveness on a suite of prevalent CNNs. Simulation results show that a high degree of data resilience exists in these networks. By scaling the bit-width of the five dominant data subsets, the data volume can be reduced by 80.38% on average with a 2.69% loss in relative prediction accuracy. For approximate memory with random errors, all the synaptic weights can be stored in the approximate part when the error rate is below 10^-4, while the 3 most significant bits (MSBs) must be protected if the error rate is fixed at 10^-3. These results indicate great potential for exploiting approximate memory techniques in CNN hardware design. |
URL | http://doi.acm.org/10.1145/3060403.3060435 |
DOI | 10.1145/3060403.3060435 |
Citation Key | chen_evaluating_2017 |