Supervised Max Hashing for Similarity Image Retrieval

Title: Supervised Max Hashing for Similarity Image Retrieval
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Al Kobaisi, Ali; Wocjan, Pawel
Conference Name: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)
Keywords: approximation algorithms, argmax function, compositionality, convolutional neural networks, deep neural network architecture, feature extraction, feature vector generation, file organisation, gradient descent methods, hand-crafted feature vectors, hash algorithms, hash codes, hash functions, image retrieval, k-ary base, labeled image datasets, learning (artificial intelligence), learning to hash, mathematically differentiable approximation, nearest neighbor search, quantization (signal), rich feature vectors, semantics, similarity image retrieval, storage efficiency, supervised hashing methods, supervised max hashing, training, Winner-Take-All hash family
Abstract

The storage efficiency of hash codes and their application to fast approximate nearest neighbor search, together with the explosion in the size of available labeled image datasets, have recently generated intense interest in learning-based hash algorithms. In this paper, we present a learning-based hash algorithm that utilizes the ordinal information in feature vectors. We propose a novel, mathematically differentiable approximation of the argmax function for this hash algorithm, which enables seamless integration of the hash function into a deep neural network architecture and thereby exploits the rich feature vectors generated by convolutional neural networks. We also propose a loss function for the case where the hash code is not binary and its entries are digits in an arbitrary k-ary base. The resulting model, comprising feature-vector generation and a hashing layer, is amenable to end-to-end training with gradient descent methods. In contrast to the majority of current hashing algorithms, which are either not learning based or use hand-crafted feature vectors as input, the simultaneous training of our system's components yields better optimization. Extensive evaluations on the NUS-WIDE, CIFAR-10, and MIRFlickr benchmarks show that the proposed algorithm outperforms state-of-the-art and classical data-agnostic, unsupervised, and supervised hashing methods by 2.6% to 19.8% mean average precision under various settings.
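The abstract does not specify the exact form of the differentiable argmax approximation; a common sketch is a temperature-scaled softmax whose weights concentrate on the largest entry, combined with a Winner-Take-All-style permute-and-window step to produce one k-ary hash digit. The function names `soft_argmax` and `wta_hash_digit` below are illustrative, not from the paper:

```python
import numpy as np

def soft_argmax(x, beta=10.0):
    """Differentiable approximation of argmax: as beta grows, the
    softmax weights concentrate on the largest entry, so the weighted
    sum of indices approaches the hard argmax index."""
    w = np.exp(beta * (x - x.max()))  # shift by max for numerical stability
    w /= w.sum()
    return np.dot(w, np.arange(len(x)))

def wta_hash_digit(x, perm, k, beta=10.0):
    """One k-ary hash digit in the Winner-Take-All style: permute the
    feature vector, keep the first k entries, and take the (soft) index
    of the maximum as the digit."""
    window = x[perm][:k]
    return soft_argmax(window, beta)

rng = np.random.default_rng(0)
features = rng.standard_normal(32)  # stand-in for a CNN feature vector
perm = rng.permutation(32)          # one random permutation per digit
digit = wta_hash_digit(features, perm, k=4, beta=50.0)
print(round(digit))                 # a digit in {0, ..., 3} of a base-4 code
```

Because `soft_argmax` is a smooth function of its input, gradients can flow through the hashing layer back into the feature extractor, which is what makes end-to-end training possible; a hard argmax would have zero gradient almost everywhere.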

URL: https://ieeexplore.ieee.org/document/8614085/
DOI: 10.1109/ICMLA.2018.00060
Citation Key: al_kobaisi_supervised_2018