Biblio

Filters: Keyword is fixed point arithmetic
2020-09-04
Li, Chengqing, Feng, Bingbing, Li, Shujun, Kurths, Jürgen, Chen, Guanrong.  2019.  Dynamic Analysis of Digital Chaotic Maps via State-Mapping Networks. IEEE Transactions on Circuits and Systems I: Regular Papers. 66:2322–2335.
Chaotic dynamics is widely used to design pseudo-random number generators and for other applications such as secure communications and encryption. This paper studies the dynamics of discrete-time chaotic maps in the digital (i.e., finite-precision) domain. Unlike traditional approaches, which treat a digital chaotic map as a black box interpreted through tests of its output, the dynamical properties of such maps are explored here under fixed-point arithmetic, using the Logistic map and the Tent map as two representative examples, from a new perspective: the corresponding state-mapping networks (SMNs). In an SMN, every possible value in the digital domain is a node, and the mapping relationship between any pair of nodes is a directed edge. The scale-free properties of the Logistic map's SMN are proved. The analytic results are further extended to floating-point arithmetic and to other chaotic maps. Understanding the network structure of a chaotic map's SMN in digital computers can help counteract the undesirable degeneration of chaotic dynamics in finite-precision domains, and can help classify and improve the randomness of pseudo-random number sequences generated by iterating chaotic maps.
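
As a quick illustration of the SMN construction described above (a minimal sketch, not the authors' code: the function name logistic_smn, the choice r = 4, and the rounding convention are assumptions), the following Python snippet quantizes the Logistic map to b-bit fixed point and records the single outgoing edge of every node; the in-degree distribution of the resulting network is what a scale-free analysis examines.

    from collections import Counter

    def logistic_smn(b, r=4):
        """Successor list of the b-bit SMN: succ[i] is the node that i/2^b maps to."""
        scale = 1 << b                          # 2^b representable states in [0, 1]
        succ = []
        for i in range(scale):
            x = i / scale                       # decode the fixed-point value
            y = r * x * (1 - x)                 # one iteration of the Logistic map
            succ.append(min(round(y * scale), scale - 1))  # re-quantize; clamp 1.0
        return succ

    in_degree = Counter(logistic_smn(b=10))     # in-degree distribution of the SMN
    print("max in-degree:", max(in_degree.values()))
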
2020-01-20
Mansouri, Asma, Martel, Matthieu, Serea, Oana Silvia.  2019.  Fixed Point Computation by Exponentiating Linear Operators. 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT). :1096–1102.

In this article, we introduce a new method for computing fixed points of a class of iterated functions in finite time by exponentiating linear multivalued operators. To illustrate this approach and show that our method can give fast and accurate results, we have chosen two well-known applications that are difficult to handle by usual techniques. First, we apply the exponentiation of linear operators to a digital filter in order to get a fine approximation of its behavior at an arbitrary time. Second, we consider a PID controller. To get a reliable estimate of its control function, we apply the exponentiation of a bundle of linear operators. Note that our technique can be applied in a more general setting, i.e., to any multivalued linear map; the general method is also introduced in this article.
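
To make the filter example concrete (a minimal sketch under assumptions: the filter is modeled as the single-valued affine recurrence x_{k+1} = A·x_k + c, and the names state_at, A, c are illustrative), the affine map can be lifted to one linear operator on (x, 1); raising that operator to the n-th power by repeated squaring yields the state at an arbitrary time n in O(log n) matrix products, and for a stable filter the result approaches the fixed point:

    import numpy as np

    def state_at(A, c, x0, n):
        """State of x_{k+1} = A x_k + c after n steps, via operator exponentiation."""
        d = len(x0)
        M = np.eye(d + 1)
        M[:d, :d] = A                           # linear part of the lifted operator
        M[:d, d] = c                            # constant part (homogeneous coordinate)
        Mn = np.linalg.matrix_power(M, n)       # computed by repeated squaring
        return (Mn @ np.append(x0, 1.0))[:d]

    A = np.array([[1.6, -0.72], [1.0, 0.0]])    # stable 2nd-order filter section
    c = np.array([0.5, 0.0])                    # constant drive
    print(state_at(A, c, np.zeros(2), 10**6))   # effectively the fixed point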

2019-03-06
Liu, Y., Wang, Y., Lombardi, F., Han, J.  2018.  An Energy-Efficient Stochastic Computational Deep Belief Network. 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). :1175–1178.

Deep neural networks (DNNs) are effective machine learning models for a large class of recognition problems, including the classification of nonlinearly separable patterns. The applications of DNNs are, however, limited by the large size and high energy consumption of the networks. Recently, stochastic computation (SC) has been considered for implementing DNNs to reduce hardware cost, but it requires a large number of random number generators (RNGs), which lowers the energy efficiency of the network. To overcome these limitations, we propose the design of an energy-efficient deep belief network (DBN) based on stochastic computation. An approximate SC activation unit (A-SCAU) is designed to implement different types of activation functions in the neurons. The A-SCAU is immune to signal correlations, so the RNGs can be shared among all neurons in the same layer with no accuracy loss. The area and energy of the proposed design are 5.27% and 3.31% (or 26.55% and 29.89%) of a 32-bit floating-point (or an 8-bit fixed-point) implementation. The proposed SC-DBN design achieves a higher classification accuracy than the fixed-point implementation; its accuracy is only 0.12% lower than that of the floating-point design at a similar computation speed, but with significantly lower energy consumption.
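
For readers unfamiliar with the underlying representation, the sketch below shows stochastic computation in its simplest unipolar form (a generic illustration, not the paper's A-SCAU circuit; encode, decode, and the stream length are assumptions): a value in [0, 1] becomes a random bit-stream whose fraction of 1s encodes it, and a single AND gate multiplies two independent streams, which is also why naively sharing one RNG correlates the streams and biases the result.

    import random

    def encode(p, length, rng):
        """Unipolar SC encoding: Bernoulli(p) bit-stream of the given length."""
        return [1 if rng.random() < p else 0 for _ in range(length)]

    def decode(stream):
        return sum(stream) / len(stream)        # fraction of 1s estimates the value

    rng_a, rng_b = random.Random(1), random.Random(2)
    a = encode(0.8, 4096, rng_a)                # independent streams ...
    b = encode(0.5, 4096, rng_b)
    prod = [x & y for x, y in zip(a, b)]        # ... so an AND gate multiplies them
    print(decode(prod))                         # approximately 0.8 * 0.5 = 0.4
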

2018-06-11
Moons, B., Goetschalckx, K., Van Berckelaer, N., Verhelst, M.  2017.  Minimum energy quantized neural networks. 2017 51st Asilomar Conference on Signals, Systems, and Computers. :1921–1925.
This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs), networks using low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher-precision operators, but they require less complex arithmetic and fewer bits per weight. This fundamental trade-off is analyzed and quantified to find the minimum-energy QNN for any benchmark and hence optimize energy efficiency. To this end, the energy consumption of inference is modeled for a generic hardware platform, which allows several conclusions to be drawn across different benchmarks. First, at iso-accuracy, energy consumption varies by orders of magnitude depending on the number of bits used in the QNN. Second, in a typical system, BinaryNets or int4 implementations lead to the minimum-energy solution, outperforming int8 networks by up to 2–10× at iso-accuracy. All code used for QNN training is available from https://github.com/BertMoons/.
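
The precision knob this paper sweeps can be illustrated with a plain uniform fixed-point quantizer (a minimal sketch, not the authors' training code; quantize and the symmetric-range convention are assumptions): fewer bits mean a coarser weight grid and larger rounding error, which the network must absorb with a deeper or wider architecture.

    import numpy as np

    def quantize(w, n_bits):
        """Symmetric uniform n-bit quantization: returns (integer codes, scale)."""
        qmax = 2 ** (n_bits - 1) - 1            # e.g., 7 for int4, 127 for int8
        scale = np.max(np.abs(w)) / qmax
        q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(int)
        return q, scale

    w = np.random.randn(5)
    for bits in (8, 4, 2):
        q, s = quantize(w, bits)
        err = np.abs(w - q * s).mean()
        print(f"int{bits}: mean |error| = {err:.4f}")  # fewer bits, larger error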