Moons, B., Goetschalckx, K., Van Berckelaer, N., Verhelst, M. 2017. Minimum energy quantized neural networks. 2017 51st Asilomar Conference on Signals, Systems, and Computers. :1921–1925.
This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs) - networks using low precision weights and activations. These networks are trained from scratch at an arbitrary fixed point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher precision operators, while requiring less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum energy QNN for any benchmark and hence optimize energy-efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform. This allows drawing several conclusions across different benchmarks. First, energy consumption varies by orders of magnitude at iso-accuracy depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum-energy solution, outperforming int8 networks by up to 2-10× at iso-accuracy. All code used for QNN training is available from https://github.com/BertMoons/.
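To make the idea of "low precision weights and activations" concrete, below is a minimal sketch of uniform fixed-point quantization at a chosen bit width. It is illustrative only: the function name `quantize_uniform` and the clipping range are assumptions for this example, not the quantizers defined in the paper's released code at https://github.com/BertMoons/.

```python
import numpy as np

def quantize_uniform(x, num_bits):
    """Uniformly quantize values in [-1, 1] to a signed fixed-point grid.

    Illustrative sketch only; the paper's released code defines its own
    quantizers and training procedure.
    """
    # Number of levels on each side of zero for a signed representation.
    levels = 2 ** (num_bits - 1) - 1
    x = np.clip(x, -1.0, 1.0)
    return np.round(x * levels) / levels

# Example: the same weights represented at different precisions.
weights = np.array([0.73, -0.21, 0.05, -0.98])
for bits in (1, 4, 8):
    if bits == 1:
        # Binary case: only the sign is kept (BinaryNet-style weights).
        q = np.sign(weights)
    else:
        q = quantize_uniform(weights, bits)
    print(f"int{bits}: {q}")
```

Fewer bits shrink each weight and each multiply-accumulate, which is why the paper finds that, at iso-accuracy, the extra depth and width needed by low-precision networks can still yield a lower total energy.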