Biblio

Filters: Keyword is Image databases
2023-09-01
Liu, Zhiqin, Zhu, Nan, Wang, Kun.  2022.  Recaptured Image Forensics Based on Generalized Central Difference Convolution Network. 2022 IEEE 2nd International Conference on Software Engineering and Artificial Intelligence (SEAI). :59–63.
With large advancements in image display technology, recapturing high-quality images from high-fidelity LCD screens has become much easier. Such recaptured images can be used to hide image tampering traces and fool some intelligent identification systems. To close this security loophole, we propose a recaptured image detection approach based on a generalized central difference convolution (GCDC) network. Specifically, by using GCDC instead of vanilla convolution, more detailed features can be extracted from both the intensity and gradient information of an image. Meanwhile, we concatenate the feature maps from multiple GCDC modules to fuse low-, mid-, and high-level features for higher performance. Extensive experiments on three public recaptured image databases demonstrate the superiority of our proposed method over state-of-the-art approaches.
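The abstract does not spell out the GCDC formulation, so the generalized variant itself cannot be reproduced here. As a rough illustration of the underlying idea, the following is a minimal PyTorch sketch of a standard central difference convolution layer, in which a vanilla convolution (intensity term) is combined with a central-difference (gradient-like) term; the mixing weight theta and all layer sizes are assumptions, not the authors' settings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CentralDifferenceConv2d(nn.Module):
        """Standard central difference convolution: a vanilla convolution response
        combined with a central-difference term that emphasises local gradients."""

        def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, theta=0.7):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                  padding=padding, bias=False)
            self.theta = theta  # assumed weight of the gradient (difference) term

        def forward(self, x):
            out_vanilla = self.conv(x)  # intensity information
            # The central-difference term reduces to convolving x with the per-filter
            # kernel sums (a 1x1 kernel) and subtracting it from the vanilla response.
            kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
            out_centre = F.conv2d(x, kernel_sum, stride=self.conv.stride, padding=0)
            return out_vanilla - self.theta * out_centre

    # usage: y = CentralDifferenceConv2d(3, 32)(torch.randn(1, 3, 64, 64))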
2022-06-30
Kumar, Ashwani, Singh, Aditya Pratap.  2021.  Contour Based Deep Learning Engine to Solve CAPTCHA. 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS). 1:723–727.
A 'Completely Automated Public Turing test to tell Computers and Humans Apart', better known as a CAPTCHA, is an image-based test used to determine the authenticity of a user (i.e., whether the user is human or not). In today's world, almost all web services, such as online shopping sites, require users to solve CAPTCHAs that must be read and typed correctly. The challenge is that recognizing CAPTCHAs is a relatively easy task for humans but is still hard for computers. Ideally, a well-designed CAPTCHA should be solvable by humans at least 90% of the time, while programs using appropriate resources should succeed in less than 0.01% of the cases. In this paper, a deep neural network architecture is presented to extract text from CAPTCHA images on various platforms. The central theme of the paper is to develop an efficient and intelligent model that converts image-based CAPTCHAs to text. We use a convolutional neural network architecture instead of the traditional approach of CAPTCHA detection using image-processing segmentation modules. The model consists of seven layers that efficiently correlate image features to the output character sequence. We tried a wide variety of configurations, including various loss and activation functions. We generated our own image database, and the efficacy of our model is demonstrated by an accuracy of 99.7%.
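The abstract gives only the high-level design (seven layers, no segmentation step, direct image-to-text mapping). The sketch below is a hypothetical PyTorch model in that spirit: a small convolutional feature extractor followed by one classification head per character position. The input size (64x200 grayscale), number of characters (5), alphabet size (36), and layer counts are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class CaptchaCNN(nn.Module):
        """Maps a fixed-size grayscale CAPTCHA image directly to per-position
        character predictions, with no explicit segmentation step."""

        def __init__(self, num_chars=5, num_classes=36):  # hypothetical: 5 chars over [a-z0-9]
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # one classification head per character position
            self.heads = nn.ModuleList(
                nn.Linear(128 * 8 * 25, num_classes) for _ in range(num_chars)
            )

        def forward(self, x):  # x: (batch, 1, 64, 200)
            f = self.features(x).flatten(1)
            # (batch, num_chars, num_classes) logits, one row per character position
            return torch.stack([head(f) for head in self.heads], dim=1)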
2022-03-08
Razeghi, Behrooz, Ferdowsi, Sohrab, Kostadinov, Dimche, Calmon, Flavio P., Voloshynovskiy, Slava.  2021.  Privacy-Preserving Near Neighbor Search via Sparse Coding with Ambiguation. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2635–2639.
In this paper, we propose a framework for privacy-preserving approximate near neighbor search via stochastic sparsifying encoding. The core of the framework relies on the sparse coding with ambiguation (SCA) mechanism, which introduces the notion of inherent shared secrecy based on the support intersection of sparse codes. This approach is ‘fairness-aware’, in the sense that any point in the neighborhood has an equiprobable chance of being chosen. Our approach can be applied to raw data, the latent representations of autoencoders, and aggregated local descriptors. The proposed method is tested on both synthetic i.i.d. data and real image databases.
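As a minimal NumPy sketch of the SCA intuition (not the authors' exact mechanism): a data vector is linearly encoded and sparsified to its top-k components, and then 'ambiguation' values are planted on randomly chosen off-support positions so that the true support is hidden from anyone without the shared encoding. The dictionary, sparsity level, and amount of ambiguation below are placeholders.

    import numpy as np

    def sca_encode(x, D, k, rng):
        """Sparsify the encoding of x to its top-k components (the true support) and
        plant random values on k randomly chosen off-support positions (ambiguation)."""
        z = D.T @ x                                    # linear encoding with dictionary D
        support = np.argsort(np.abs(z))[-k:]           # top-k magnitudes form the support
        code = np.zeros_like(z)
        code[support] = np.sign(z[support])            # ternary {-1, 0, +1} sparse code
        off = np.setdiff1d(np.arange(z.size), support)
        fake = rng.choice(off, size=k, replace=False)  # ambiguation: fake support positions
        code[fake] = rng.choice([-1.0, 1.0], size=k)
        return code

    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 1024))  # hypothetical random dictionary
    x = rng.standard_normal(256)
    c = sca_encode(x, D, k=32, rng=rng)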
2021-02-08
Nisperos, Z. A., Gerardo, B., Hernandez, A..  2020.  Key Generation for Zero Steganography Using DNA Sequences. 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
Some of the key challenges in steganography are imperceptibility and resistance to detection by steganalysis algorithms. Zero steganography is an approach to data hiding in which the cover image is not modified. This paper focuses on the generation of the stego-key, which is an essential component of this steganographic approach. The approach utilizes DNA sequences together with shifting and flipping operations on their binary code representation. Experimental results show that the key generation algorithm has a low cracking probability and satisfies the avalanche criterion.
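A minimal sketch of the general idea described in the abstract, assuming a 2-bit nucleotide mapping and illustrative shift and flip operations; the actual mapping, shift amounts, and key length used by the authors are not given in the abstract.

    # Hypothetical 2-bit nucleotide mapping; not the authors' actual parameters.
    DNA_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def generate_key(dna_sequence: str, shift: int = 3) -> str:
        bits = "".join(DNA_BITS[base] for base in dna_sequence.upper())
        shifted = bits[shift:] + bits[:shift]        # circular shift
        flipped = "".join("1" if b == "0" else "0"   # bitwise flip
                          for b in shifted)
        return flipped

    print(generate_key("ACGTTGCA"))  # derive a short illustrative key from a DNA fragment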
2018-01-10
Schaefer, Gerald, Budnik, Mateusz, Krawczyk, Bartosz.  2017.  Immersive Browsing in an Image Sphere. Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication. :26:1–26:4.
In this paper, we present an immersive image database navigation system. Images are visualised in a spherical visualisation space and arranged on a grid by colour, so that images of similar colour are located close to each other, while access to large image sets is possible through a hierarchical browsing structure. The user wears a 3-D head-mounted display (HMD) and is immersed inside the image sphere. Navigation is performed by head movement, using a 6-degree-of-freedom tracker integrated in the HMD in conjunction with a Wiimote remote control.
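As a crude illustration of the colour-based spherical arrangement (the hierarchical browsing structure and HMD interaction are omitted), the Python sketch below orders images by hue and assigns them to cells of a latitude/longitude grid on a unit sphere; the grid resolution and hue-only ordering are assumptions, not the authors' layout algorithm.

    import colorsys
    import numpy as np

    def sphere_positions(mean_colours, n_lat=18, n_lon=36):
        """Order images by hue and return the 3-D unit-sphere position of the
        grid cell assigned to each image (keyed by image index)."""
        hues = [colorsys.rgb_to_hsv(*c)[0] for c in mean_colours]  # RGB triples in [0, 1]
        order = np.argsort(hues)
        positions = {}
        for cell, idx in enumerate(order[: n_lat * n_lon]):
            lat = (cell // n_lon + 0.5) / n_lat * np.pi       # polar angle
            lon = (cell % n_lon + 0.5) / n_lon * 2 * np.pi    # azimuth
            positions[int(idx)] = (np.sin(lat) * np.cos(lon),
                                   np.sin(lat) * np.sin(lon),
                                   np.cos(lat))
        return positions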
2017-03-08
Windisch, G., Kozlovszky, M..  2015.  Image sharpness metrics for digital microscopy. 2015 IEEE 13th International Symposium on Applied Machine Intelligence and Informatics (SAMI). :273–276.
Image sharpness measurement is an important part of many image processing applications. Multiple algorithms for measuring image sharpness have been proposed and evaluated in the past, but they were developed with out-of-focus photographs in mind and do not work as well on images taken with a digital microscope. In this article we show the differences between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all these categories to measure the differences, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
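The abstract does not list which conventional sharpness measures were compared; for reference, two widely used focus measures are sketched below (variance of the Laplacian and the Tenengrad gradient measure), applied to a 2-D grayscale image array.

    import numpy as np
    from scipy import ndimage

    def variance_of_laplacian(image):
        """Focus measure: variance of the Laplacian response of a 2-D grayscale image."""
        return float(ndimage.laplace(image.astype(np.float64)).var())

    def tenengrad(image):
        """Focus measure: mean squared Sobel gradient magnitude."""
        img = image.astype(np.float64)
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        return float(np.mean(gx ** 2 + gy ** 2))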