Biblio

Filters: Keyword is stochastic gradient descent method
2020-05-08
Shen, Weiguo, Wang, Wei.  2018.  Node Identification in Wireless Network Based on Convolutional Neural Network. 2018 14th International Conference on Computational Intelligence and Security (CIS). :238–241.
Aiming at the problem of node identification in wireless networks, a node identification method based on deep learning is proposed that starts from the fine-grained radio-frequency-layer features of nodes. First, Principal Component Analysis is used to reduce the dimension of the node sample data and thereby cut down the computational complexity. Second, a convolutional neural network with two hidden layers is designed to extract local features from the preprocessed data; the stochastic gradient descent method is used to optimize the network parameters, and a softmax model determines the output label. Finally, the effectiveness of the method is verified by experiments on a practical wireless ad hoc network.
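A minimal sketch (not the authors' code) of the pipeline this abstract describes: PCA to reduce the dimensionality of the node samples, a convolutional network with two hidden layers trained by stochastic gradient descent, and a softmax output. The synthetic data, layer widths, and hyperparameters are assumptions introduced for illustration; PyTorch and scikit-learn are used here as stand-in tools.

```python
# Hypothetical sketch: PCA for dimensionality reduction, a CNN with two hidden
# convolutional layers, stochastic gradient descent, and a softmax output layer.
# Data, layer widths, and hyperparameters are placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Synthetic stand-in for radio-frequency node samples and their node labels.
x_train = np.random.randn(200, 128).astype(np.float32)
y_train = np.random.randint(0, 10, size=200)

# Step 1: reduce sample dimensionality with PCA to cut computational cost.
pca = PCA(n_components=64)
x_reduced = torch.tensor(pca.fit_transform(x_train), dtype=torch.float32)

# Step 2: CNN with two hidden convolutional layers for local feature extraction.
class NodeCNN(nn.Module):
    def __init__(self, input_len, num_nodes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * input_len, num_nodes)

    def forward(self, x):                     # x: (batch, input_len)
        h = self.features(x.unsqueeze(1))     # add a channel dimension
        return self.classifier(h.flatten(1))  # logits; softmax is applied in the loss

model = NodeCNN(input_len=64, num_nodes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
criterion = nn.CrossEntropyLoss()                         # softmax + negative log-likelihood

labels = torch.tensor(y_train, dtype=torch.long)
for epoch in range(20):
    optimizer.zero_grad()
    loss = criterion(model(x_reduced), labels)
    loss.backward()
    optimizer.step()
```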
2017-03-08
Song, D., Liu, W., Ji, R., Meyer, D. A., Smith, J. R.  2015.  Top Rank Supervised Binary Coding for Visual Search. 2015 IEEE International Conference on Computer Vision (ICCV). :1922–1930.

In recent years, binary coding techniques have become increasingly popular because of their efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques, which leverage supervised information, can significantly enhance coding quality and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions that compress data samples into binary codes. However, few methods learn coding functions that explicitly optimize precision at the top of the ranking list induced by the Hamming distances of the generated binary codes. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list so as to preserve the supervision information. The core idea is to train coding functions under which mistakes at the top of a Hamming-distance ranking list are penalized more heavily than those at the bottom. To solve for such coding functions, we relax the original discrete optimization objective with a continuous surrogate and derive a stochastic gradient descent method to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies on three benchmark image datasets demonstrate that the proposed binary coding approach achieves image search accuracy superior to state-of-the-art methods.
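As a rough illustration of the optimization recipe this abstract outlines, the sketch below relaxes the discrete sign coding functions with a tanh surrogate and minimizes a hinge-style ranking loss by stochastic gradient descent. The synthetic data, code length, and loss form are assumptions chosen for illustration; the paper's specific top-rank weighting and online learning algorithm are not reproduced here.

```python
# Illustrative sketch (under assumptions, not the authors' implementation) of the
# recipe in the abstract: relax the discrete sign(.) coding functions with tanh
# and minimize a ranking-style surrogate loss by stochastic gradient descent.
# The synthetic data, code length, and hinge loss are placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits, n = 32, 16, 300

# Synthetic features and class labels standing in for image descriptors.
X = rng.normal(size=(n, dim))
y = rng.integers(0, 5, size=n)

W = rng.normal(scale=0.1, size=(dim, n_bits))  # linear coding functions

def relaxed_codes(x, W):
    """Relaxed codes in (-1, 1); sign(x @ W) gives the final binary codes."""
    return np.tanh(x @ W)

def grad_inner(xa, xb, ba, bb):
    """Gradient of the code inner product ba . bb w.r.t. W (chain rule through tanh)."""
    return xa.T @ ((1 - ba**2) * bb) + xb.T @ ((1 - bb**2) * ba)

lr = 0.05
for step in range(2000):
    # Sample a query plus one same-class (positive) and one different-class (negative) item.
    q = rng.integers(n)
    pos = rng.choice(np.flatnonzero(y == y[q]))
    neg = rng.choice(np.flatnonzero(y != y[q]))

    bq = relaxed_codes(X[[q]], W)
    bp = relaxed_codes(X[[pos]], W)
    bn = relaxed_codes(X[[neg]], W)

    # Hinge surrogate: the positive item should have a larger code inner product
    # (i.e. smaller relaxed Hamming distance) with the query than the negative item.
    margin = (bq * bn).sum() - (bq * bp).sum() + 1.0
    if margin > 0:
        W -= lr * (grad_inner(X[[q]], X[[neg]], bq, bn)
                   - grad_inner(X[[q]], X[[pos]], bq, bp))

binary_codes = np.sign(X @ W)  # final binary codes used for Hamming-distance search
```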