Biblio

Filters: Author is Yang, Huazhong
2019-01-31
Jia, Kaige, Liu, Zheyu, Wei, Qi, Qiao, Fei, Liu, Xinjun, Yang, Yi, Fan, Hua, Yang, Huazhong.  2018.  Calibrating Process Variation at System Level with In-Situ Low-Precision Transfer Learning for Analog Neural Network Processors. Proceedings of the 55th Annual Design Automation Conference. :12:1–12:6.

Process variation (PV) may cause accuracy loss in analog neural network (ANN) processors, make them hard to scale down, and degrade their feasibility. This paper first analyses the impact of PV on the performance of ANN chips, then proposes an in-situ, system-level transfer learning method that reduces PV's influence using low-precision back-propagation. Simulation results show the proposed method increases the tolerance of operating-point drift by 50% and the tolerance of mismatch by 70%–100% with less than 1% accuracy loss on benchmarks. It also reduces memory usage by 66.7% and improves the energy efficiency of multiplication in the learning stage by about 50×, compared with a conventional full-precision (32-bit float) training system.
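A minimal Python sketch of the idea follows. It is not the paper's implementation: the toy linear layer, the variation magnitudes, and the 8-bit fixed-point update format are all illustrative assumptions. It perturbs ideal weights with multiplicative mismatch and additive operating-point drift, then calibrates the deployed model in situ using only quantized (low-precision) gradient updates.

    # Sketch only: calibrate a variation-affected analog layer with
    # low-precision updates. Layer size, noise levels, and the 8-bit
    # fixed-point step are assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(x, bits=8, scale=0.05):
        """Round to a low-precision fixed-point grid, mimicking
        reduced-precision back-propagation hardware."""
        step = scale / (2 ** (bits - 1))
        return np.clip(np.round(x / step) * step, -scale, scale)

    # Ideal (pre-trained) weights and a toy linear task y = W_ideal @ x.
    W_ideal = rng.normal(size=(4, 8))
    X = rng.normal(size=(8, 256))
    Y = W_ideal @ X

    # Process variation: multiplicative mismatch plus operating-point drift.
    mismatch = 1.0 + 0.1 * rng.normal(size=W_ideal.shape)
    drift = 0.05 * rng.normal(size=W_ideal.shape)
    W_chip = W_ideal * mismatch + drift

    # In-situ calibration: gradient descent on the deployed chip model,
    # applying only quantized weight updates.
    W = W_chip.copy()
    for _ in range(200):
        err = W @ X - Y                    # forward pass on the "chip"
        grad = err @ X.T / X.shape[1]      # gradient of the mean squared error
        W -= quantize(0.1 * grad)          # low-precision update only

    print("MSE before calibration:", np.mean((W_chip @ X - Y) ** 2))
    print("MSE after  calibration:", np.mean((W @ X - Y) ** 2))

The point of the sketch is that the calibration loop never needs full-precision storage for the updates, which is where the memory and energy savings reported in the abstract would come from.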

2018-06-07
Chen, Yuanchang, Zhu, Yizhe, Qiao, Fei, Han, Jie, Liu, Yuansheng, Yang, Huazhong.  2017.  Evaluating Data Resilience in CNNs from an Approximate Memory Perspective. Proceedings of the Great Lakes Symposium on VLSI 2017. :89–94.
Due to the large volumes of data that need to be processed, efficient memory access and data transmission are crucial for high-performance implementations of convolutional neural networks (CNNs). Approximate memory is a promising technique to achieve efficient memory access and data transmission in CNN hardware implementations. To assess the feasibility of applying approximate memory techniques, we propose a framework for the data resilience evaluation (DRE) of CNNs and verify its effectiveness on a suite of prevalent CNNs. Simulation results show that a high degree of data resilience exists in these networks. By scaling the bit-width of the first five dominant data subsets, the data volume can be reduced by 80.38% on average with a 2.69% loss in relative prediction accuracy. For approximate memory with random errors, all the synaptic weights can be stored in the approximate part when the error rate is less than 10^-4, while 3 MSBs must be protected if the error rate is fixed at 10^-3. These results indicate a great potential for exploiting approximate memory techniques in CNN hardware design.
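The Python sketch below illustrates the kind of error-injection experiment the abstract describes; it is an assumption-laden toy, not the paper's DRE framework. The fixed-point width, error model, and weight statistics are invented for illustration. It flips stored weight bits at a given error rate while optionally protecting the most significant bits.

    # Sketch only: random bit errors in fixed-point weights held in an
    # "approximate" memory, with optional MSB protection. All parameters
    # here are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def inject_bit_errors(weights, bits=16, error_rate=1e-3, protected_msbs=0):
        """Flip each stored bit independently with probability `error_rate`,
        leaving the `protected_msbs` most significant bits untouched."""
        scale = np.max(np.abs(weights))
        q = np.round(weights / scale * (2 ** (bits - 1) - 1)).astype(np.int32)
        flips = rng.random((q.size, bits)) < error_rate
        flips[:, bits - protected_msbs:] = False       # keep the MSB positions clean
        masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.int32)
        q_err = (q.reshape(-1) & ((1 << bits) - 1)) ^ masks
        # Re-interpret the lowest `bits` bits as a signed value.
        q_err = np.where(q_err >= 2 ** (bits - 1), q_err - 2 ** bits, q_err)
        return (q_err.reshape(weights.shape) / (2 ** (bits - 1) - 1)) * scale

    w = rng.normal(scale=0.1, size=(128, 64))          # stand-in synaptic weights
    for rate, prot in [(1e-4, 0), (1e-3, 3)]:
        w_err = inject_bit_errors(w, error_rate=rate, protected_msbs=prot)
        rel = np.linalg.norm(w_err - w) / np.linalg.norm(w)
        print(f"error rate {rate:g}, protected MSBs {prot}: relative weight error {rel:.4f}")

A resilience evaluation in the spirit of the abstract would feed the corrupted weights back into the network and measure the resulting drop in prediction accuracy rather than just the weight perturbation.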
2017-05-19
Xia, Lixue, Tang, Tianqi, Huangfu, Wenqin, Cheng, Ming, Yin, Xiling, Li, Boxun, Wang, Yu, Yang, Huazhong.  2016.  Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. Proceedings of the 53rd Annual Design Automation Conference. :125:1–125:6.

Convolutional neural networks (CNNs) are a powerful technique widely used in computer vision, but they demand far more computation and memory resources than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential for neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), consume most of the area and energy of an RRAM-based CNN design because of the large amount of intermediate data in CNNs. In this paper, we propose an energy-efficient structure for RRAM-based CNNs. Based on an analysis of the data distribution, a quantization method is proposed to reduce the intermediate data to 1 bit and eliminate the DACs. An energy-efficient structure that uses the input data as selection signals is proposed to reduce the ADC cost of merging the results of multiple crossbars. Experimental results show that the proposed method and structure save 80% of the area and more than 95% of the energy while maintaining the same or comparable CNN classification accuracy on MNIST.
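The sketch below is a behavioural, assumption-only illustration of the two ideas in the abstract (1-bit quantization of intermediate data, and input bits acting as selection signals); it is not the paper's circuit or simulator. The conductance values, binarization threshold, and layer sizes are invented for illustration.

    # Sketch only: a behavioural model of an RRAM crossbar layer whose
    # intermediate data are quantized to 1 bit, so each input bit simply
    # selects (or deselects) a crossbar row and no input DAC is needed.
    import numpy as np

    rng = np.random.default_rng(0)

    def binarize(x, threshold=None):
        """1-bit quantization of intermediate data; the threshold is taken
        from the data distribution (here simply the mean, as an assumption)."""
        if threshold is None:
            threshold = x.mean()
        return (x > threshold).astype(np.uint8)

    def crossbar_mvm_binary_input(conductances, in_bits):
        """Crossbar matrix-vector multiply where each 1-bit input either
        enables a row (its conductances add to the column currents) or
        leaves it idle."""
        selected = conductances[in_bits.astype(bool), :]
        return selected.sum(axis=0)        # analog column currents

    # Toy layer: 64 inputs, 16 outputs, conductances mapped from trained weights.
    conductances = np.abs(rng.normal(size=(64, 16)))
    activations = rng.normal(size=64)

    bits = binarize(activations)
    column_currents = crossbar_mvm_binary_input(conductances, bits)
    print("1-bit input pattern:", bits[:16], "...")
    print("column currents:", np.round(column_currents, 2))

Because the inputs are single bits, they can drive row-select switches directly, which is the mechanism by which the proposed structure avoids DACs and reduces the ADC cost when merging the outputs of multiple crossbars.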