Biblio
The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing differ substantially from human vision. One limitation of DNNs is their vulnerability to adversarial examples: input images to which subtle, carefully designed noise has been added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans using functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns between the two types of adversarial examples (white- versus black-box) in both humans and DNNs. However, unlike DNN performance, human performance on categorical judgment is not degraded by either type of noise. These results suggest that adversarial examples may be differentially represented in the human visual system but are unable to affect the perceptual experience.
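For context, a white-box attack has full access to the classifier's gradients, whereas a black-box attack does not. The abstract does not name the attack algorithms used; the sketch below only illustrates one common white-box attack, the Fast Gradient Sign Method (FGSM), and assumes a hypothetical PyTorch classifier `model`, a normalized batched input `image`, and its true class index `label`.

```python
# Minimal sketch of a white-box adversarial attack (FGSM). Illustrative only:
# the study above does not specify which attacks were used. `model`, `image`,
# and `label` are assumed placeholders (a PyTorch classifier, a normalized
# input tensor with a batch dimension, and its ground-truth class index).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial example by perturbing pixels along the loss gradient sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```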
Steganography is the science of hiding information in order to send secret messages using a carrier object known as the stego object. Steganographic technology rests on three principles: security, robustness, and capacity. In this paper, we present a method for hiding a digital image using compressive sensing technology to increase the security of the stego image, based on features of the human visual system. The results show that our proposed method provides higher security than the other methods presented. The Bit Correction Rate between the original secret message and the extracted message is used to demonstrate the accuracy of the method.
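The Bit Correction Rate (BCR) mentioned above is commonly defined as the fraction of message bits recovered correctly after extraction; the exact formulation used in the paper is not given here, so the following is only a minimal sketch of that common definition, with hypothetical bit arrays as input.

```python
# Common Bit Correction Rate definition (assumption: the paper may use a
# different formulation). Inputs are equal-length sequences of 0/1 bits.
import numpy as np

def bit_correction_rate(original_bits, extracted_bits):
    """Fraction of secret-message bits recovered correctly (1.0 = perfect extraction)."""
    original = np.asarray(original_bits, dtype=np.uint8)
    extracted = np.asarray(extracted_bits, dtype=np.uint8)
    return float(np.mean(original == extracted))

# Example: one flipped bit in an 8-bit message gives BCR = 0.875.
print(bit_correction_rate([1, 0, 1, 1, 0, 0, 1, 0],
                          [1, 0, 1, 1, 0, 1, 1, 0]))
```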
Blind objective metrics that automatically quantify the perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. In this paper, we present a perceptual no-reference blur assessment metric developed in the frequency domain. Since blurring especially affects edges and fine image details, which correspond to the high-frequency components of an image, the main idea is to perceptually analyse the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, which relies on the Human Visual System. Comprehensive testing demonstrates that the proposed Perceptual Blind Blur Quality Metric (PBBQM) shows good consistency with subjective quality scores as well as satisfactory performance in comparison with representative state-of-the-art blind blur quality measures, both non-perceptual and perceptual.
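The PBBQM itself is not detailed in this abstract. Purely to illustrate the DCT-domain intuition (blur attenuates high-frequency coefficients), here is a minimal, hypothetical sketch that scores sharpness as the share of 2-D DCT energy outside a low-frequency corner; it omits the perceptual JNB weighting that the actual metric relies on. A blurred version of an image should yield a lower ratio than the sharp original.

```python
# Illustrative DCT-domain blur proxy (NOT the PBBQM described above).
# Assumes a 2-D grayscale image as a NumPy array.
import numpy as np
from scipy.fft import dctn

def high_frequency_energy_ratio(gray_image, cutoff=8):
    """Share of 2-D DCT energy outside the low-frequency corner (higher = sharper)."""
    coeffs = dctn(gray_image.astype(np.float64), norm='ortho')
    energy = coeffs ** 2
    total = energy.sum()
    low = energy[:cutoff, :cutoff].sum()  # low-frequency block in the top-left corner
    return float((total - low) / total) if total > 0 else 0.0
```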