Biblio

Filters: Keyword is human visual system
2021-02-08
Geetha, C. R., Basavaraju, S., Puttamadappa, C..  2013.  Variable load image steganography using multiple edge detection and minimum error replacement method. 2013 IEEE Conference on Information Communication Technologies. :53–58.

This paper proposes a steganography method for digital images, in which the data to be secured is embedded into a cover image. Studies of the human visual system show that changes at image edges are largely imperceptible to the human eye, so edge pixels are used for embedding to increase data hiding capacity: the more edge pixels available, the more data the image can hide. To increase the number of edge pixels, multiple edge detection is employed, with edges extracted by a sophisticated operator such as the Canny operator. To compensate for the drop in PSNR caused by the larger hidden payload, the Minimum Error Replacement (MER) method is used. The main goals of image steganography, namely security, high embedding capacity, and good visual quality, are thus achieved. Extraction requires the original image and the embedding ratio: multiple edge detection is performed on the original image, and the data is extracted according to the embedding ratio.
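The embedding idea described in the abstract can be sketched in a few lines. The sketch below is illustrative only: it uses a crude gradient threshold as a stand-in for the Canny operator and plain LSB replacement rather than the paper's Minimum Error Replacement; the function names and threshold value are assumptions, not the authors' code.

```python
import numpy as np

def edge_map(img, thresh=100):
    # Crude gradient-magnitude edge detector (a stand-in for Canny).
    g = img.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    return np.hypot(gx, gy) > thresh

def embed(img, bits):
    # Hide one bit in the least significant bit of each edge pixel.
    stego = img.copy()
    ys, xs = np.nonzero(edge_map(img))
    if len(bits) > len(ys):
        raise ValueError("not enough edge pixels for the payload")
    for bit, y, x in zip(bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit
    return stego

def extract(original, stego, nbits):
    # As in the paper, extraction re-detects edges on the *original* image.
    ys, xs = np.nonzero(edge_map(original))
    return [int(stego[y, x]) & 1 for y, x in zip(ys[:nbits], xs[:nbits])]
```

Because extraction re-runs edge detection on the unmodified original, the embedder and extractor agree on which pixels carry payload even though the stego image's LSBs have changed.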

2020-02-18
Han, Chihye, Yoon, Wonjun, Kwon, Gihyun, Kim, Daeshik, Nam, Seungkyu.  2019.  Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.

The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples: input images to which subtle, carefully designed noise has been added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns for the different (i.e. white- versus black-box) types of adversarial examples in both humans and DNNs. However, human performance on categorical judgment is not degraded by the noise, regardless of its type, unlike DNN performance. These results suggest that adversarial examples may be differentially represented in the human visual system, but are unable to affect the perceptual experience.

2019-09-23
Tan, L., Liu, K., Yan, X., Wan, S., Chen, J., Chang, C..  2018.  Visual Secret Sharing Scheme for Color QR Code. 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). :961–965.

In this paper, we propose a novel visual secret sharing (VSS) scheme for color QR codes (VSSCQR) with an (n, n) threshold, motivated by the high capacity, admirable visual effect, and popularity of color QR codes. By splitting and encoding a secret image into QR codes and then fusing those QR codes into color QR code shares, the scheme shares the secret among a fixed number of participants; fewer than n participants can reveal no information about the secret. The amount and position of the secret image bits embedded by the VSS stay within the error correction capacity of the QR code, so each color share remains a readable, decodable QR code and may not attract notice. On the one hand, the secret image can be reconstructed by decomposing three QR codes from each color QR code share and stacking the corresponding QR codes, using only the human visual system and no computational devices. On the other hand, by decomposing the three QR codes from each color share and XORing the corresponding codes, we can reconstruct the secret image losslessly. The experimental results demonstrate the effectiveness of our scheme.
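The all-or-nothing property behind the XOR reconstruction path can be illustrated with a minimal (n, n) XOR sharing of a byte string. This is a generic sketch of the threshold property, not the paper's QR-code construction; the function names are assumptions.

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    # n - 1 uniformly random shares, plus one share chosen so that
    # all n shares XOR back to the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares):
    # XOR of all n shares recovers the secret; any proper subset
    # is statistically indistinguishable from random bytes.
    return reduce(xor_bytes, shares)
```

In the paper's scheme the analogous share bits are carried inside the error-correction slack of each QR code rather than as raw byte strings.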

2018-11-19
Yildiz, O., Gulbahar, B..  2018.  FoVLC: Foveation Based Data Hiding in Display Transmitters for Visible Light Communications. 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC). :629–635.

Visible light communication is an emerging architecture offering unlicensed, abundant bandwidth, inherent security, and growing experimental implementations and standardization efforts. Display-based transmitters and camera-based receivers are candidates for device-to-device (D2D) and home area networking (HAN) systems, using widely available TV, tablet, and mobile phone screens as transmitters and commercially available cameras as receivers. Current architectures employing data hiding and unobtrusive steganography promise data transmission without distracting the user on the screen. However, they are limited in how much data translucency- or color-shift-based hiding can carry, since they distribute modulation uniformly across the screen while keeping eye discomfort at an acceptable level. In this article, the foveation property of the human visual system is exploited to define a novel modulation method, denoted FoVLC, which adaptively increases data hiding capacity across the screen based on the viewer's current eye focus point. Theoretical modeling of the modulation and demodulation mechanisms, which hide data in color shifts of pixel blocks, is provided, and experiments are performed for both the FoVLC method and uniform data hiding (the conventional method). As a proof of concept, experimental tests with this simple design reduce the average bit error rate (BER) to approximately half of the value obtained with the conventional method, without user distraction, motivating future work on optimizing block sizes and employing error correction codes.
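The core foveation idea, allowing stronger modulation in blocks the viewer is not looking at, can be sketched as a simple eccentricity-dependent weighting. All names and constants below are illustrative assumptions; the paper derives its actual model from HVS measurements.

```python
import math

def modulation_depth(block_center, gaze_point, base=1.0, slope=0.05, cap=8.0):
    # Peripheral vision is less sensitive to small color shifts, so blocks
    # farther from the gaze point are allotted a larger modulation depth
    # (and hence can carry more hidden data) than foveal blocks.
    eccentricity = math.dist(block_center, gaze_point)
    return min(base + slope * eccentricity, cap)
```

A transmitter would recompute this weight per block as the estimated gaze point moves, which is why the method is described as adapting to the viewer's current eye focus.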

2017-03-08
Kerouh, F., Serir, A..  2015.  A no reference perceptual blur quality metric in the DCT domain. 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT). :1–6.

A blind objective metric that automatically quantifies the perceived image quality degradation introduced by blur is highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. Since blurring especially affects edges and fine image detail, which constitute the high-frequency components of an image, the main idea is to analyse, perceptually, the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, which relies on the human visual system. Comprehensive testing demonstrates the proposed Perceptual Blind Blur Quality Metric's (PBBQM) good consistency with subjective quality scores, as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
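The underlying intuition, that blur suppresses high-frequency DCT energy, can be checked with a small sketch. The DCT matrix and the plain energy-ratio cutoff below are generic illustrations, not the PBBQM formula, which additionally weights frequencies with JNB-based perceptual thresholds.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II basis as an N x N matrix.
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def high_freq_energy_ratio(img, cutoff=0.5):
    # Share of total 2-D DCT energy above a normalized frequency cutoff;
    # this ratio falls as the image gets blurrier.
    h, w = img.shape
    D = dct_matrix(h) @ img.astype(float) @ dct_matrix(w).T
    col_f, row_f = np.meshgrid(np.arange(w) / w, np.arange(h) / h)
    energy = D ** 2
    high = (col_f + row_f) >= cutoff
    return energy[high].sum() / energy.sum()
```

Blurring a noisy image with a simple box filter visibly lowers this ratio, which is the effect the perceptual metric measures in a frequency-weighted way.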

Liu, H., Wang, W., He, Z., Tong, Q., Wang, X., Yu, W., Lv, M..  2015.  Blind image quality evaluation metrics design for UAV photographic application. 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER). :293–297.

A number of blind Image Quality Evaluation Metrics (IQEMs) for Unmanned Aerial Vehicle (UAV) photography applications are presented. The visible light camera is widely used in UAV photography because of its vivid imaging, but outdoor ambient light can strongly degrade its output. In this paper, to address this problem, we design and reuse a series of blind IQEMs to analyze imaging quality in UAV applications. Human Visual System (HVS) based IQEMs, covering image brightness level, contrast level, noise level, edge blur level, texture intensity level, jitter level, and flicker level, are all considered. Once computed, these IQEMs provide a computational reference for subsequent image processing, such as image understanding and recognition. Preliminary image enhancement experiments confirm the correctness and validity of the proposed technique.
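Several of the HVS-based IQEMs named above reduce to simple global image statistics. The sketch below gives illustrative stand-ins for the brightness, contrast, and noise metrics; these are common textbook formulations, not the exact formulas used in the paper.

```python
import numpy as np

def brightness_level(img):
    # Global brightness: mean gray level.
    return float(img.mean())

def contrast_level(img):
    # Global contrast: standard deviation of gray levels.
    return float(img.std())

def noise_level(img):
    # Crude noise proxy: mean absolute response of a discrete Laplacian
    # (edges wrap around, which is fine for a rough global statistic).
    g = img.astype(float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return float(np.abs(lap).mean())
```

Scores like these can feed a rule that decides, per frame, whether enhancement (denoising, contrast stretching) is needed before recognition, which matches the "computational reference" role described in the abstract.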