Title | Minimum energy quantized neural networks |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Moons, B., Goetschalckx, K., Van Berckelaer, N., Verhelst, M. |
Conference Name | 2017 51st Asilomar Conference on Signals, Systems, and Computers |
Keywords | approximate computing, arbitrary fixed point precision, automated minimum-energy optimization, BinaryNets, complex arithmetic, Deep Learning, energy conservation, energy consumption, fixed point arithmetic, fundamental trade-off, generic hardware platform, Hardware, higher precision operators, int4 implementations, int8 networks, iso-accuracy depending, low precision weights, Memory management, Metrics, Minimum Energy, minimum energy QNN, Mobile communication, network on chip security, neural nets, Neural networks, power aware computing, pubcrawl, QNN training, Quantized Neural Network, quantized neural networks, Random access memory, resilience, Resiliency, Scalability, system-on-chip, telecommunication security, Training, wider network architectures |
Abstract | This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs) - networks using low precision weights and activations. These networks are trained from scratch at an arbitrary fixed point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher precision operators, while they require less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum energy QNN for any benchmark and hence optimize energy-efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows drawing several conclusions across different benchmarks. First, energy consumption varies by orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum energy solution, outperforming int8 networks by up to 2-10x at iso-accuracy. All code used for QNN training is available from https://github.com/BertMoons/. |
DOI | 10.1109/ACSSC.2017.8335699 |
Citation Key | moons_minimum_2017 |
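
Note: the following is a minimal, hypothetical Python sketch (not taken from the paper or its repository at https://github.com/BertMoons/) illustrating the two ideas the abstract describes: uniform fixed-point quantization of weights/activations to an arbitrary bit width, and a toy per-inference energy model in which MAC and memory-access energies scale with operand precision. The scaling exponents and normalization constants below are illustrative assumptions, not the paper's fitted hardware model.

import numpy as np

def quantize_fixed_point(x, bits):
    # Uniform symmetric fixed-point quantization of values in [-1, 1)
    # to `bits` bits; with bits == 1 this reduces to the binary (sign) case.
    # Generic illustration only; the paper's training code defines its own scheme.
    if bits == 1:
        return np.where(x >= 0, 1.0, -1.0)
    scale = 2 ** (bits - 1)
    return np.clip(np.round(x * scale) / scale, -1.0, 1.0 - 1.0 / scale)

def inference_energy(num_macs, num_mem_accesses, bits,
                     e_mac_16b=1.0, e_mem_16b=10.0):
    # Toy energy model for one inference on a generic platform:
    # MAC energy assumed to scale roughly quadratically and memory-access
    # energy roughly linearly with operand precision, normalized to 16-bit
    # costs (hypothetical values, NOT the paper's model parameters).
    e_mac = e_mac_16b * (bits / 16.0) ** 2
    e_mem = e_mem_16b * (bits / 16.0)
    return num_macs * e_mac + num_mem_accesses * e_mem

# Example: an int4 network may need more operations (deeper/wider) than an
# int8 network at iso-accuracy, yet still consume less total energy.
print(inference_energy(num_macs=1e8, num_mem_accesses=1e7, bits=8))
print(inference_energy(num_macs=2e8, num_mem_accesses=2e7, bits=4))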