Biblio

Filters: Keyword is image restoration
2022-11-08
Javaheripi, Mojan, Samragh, Mohammad, Fields, Gregory, Javidi, Tara, Koushanfar, Farinaz.  2020.  CleaNN: Accelerated Trojan Shield for Embedded Neural Networks. 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1–9.
We propose Cleann, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications. A Trojan attack works by injecting a backdoor into the DNN during training; during inference, the Trojan can be activated by its specific backdoor trigger. What differentiates Cleann from prior work is its lightweight methodology, which recovers the ground-truth class of Trojan samples without the need for labeled data, model retraining, or prior assumptions on the trigger or the attack. We leverage dictionary learning and sparse approximation to characterize the statistical behavior of benign data and identify Trojan triggers. Cleann is devised based on algorithm/hardware co-design and is equipped with specialized hardware to enable efficient real-time execution on resource-constrained embedded platforms. Proof-of-concept evaluations of Cleann against state-of-the-art Neural Trojan attacks on visual benchmarks demonstrate its competitive advantage in terms of attack resiliency and execution overhead.
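
The abstract's core detection idea, representing benign inputs with a learned dictionary and flagging samples that the dictionary reconstructs poorly, can be illustrated with a minimal sketch. This is not the Cleann pipeline; the scikit-learn calls, patch representation, and threshold rule are assumptions for illustration only.

import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def fit_benign_dictionary(benign_patches, n_atoms=64, alpha=1.0):
    # benign_patches: (n_samples, n_features) flattened image patches from clean data
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200)
    dl.fit(benign_patches)
    return dl.components_                                   # (n_atoms, n_features)

def reconstruction_error(patches, dictionary, alpha=1.0):
    codes = sparse_encode(patches, dictionary, alpha=alpha)  # sparse coefficients
    recon = codes @ dictionary
    return np.linalg.norm(patches - recon, axis=1)

def flag_suspicious(patches, dictionary, threshold):
    # Patches the benign dictionary cannot represent well are treated as potential triggers.
    return reconstruction_error(patches, dictionary) > threshold
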
2021-06-24
Lee, Dongseop, Kim, Hyunjin, Ryou, Jaecheol.  2020.  Poisoning Attack on Show and Tell Model and Defense Using Autoencoder in Electric Factory. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :538–541.
Deep neural network technology has recently been developed and applied in various fields; image recognition models, for example, can be used for automatic safety checks at an electric factory. As deep neural networks spread, however, security becomes increasingly important. A poisoning attack is one such security problem: it degrades a model by injecting malicious data into its training data set. This paper generates adversarial data that shifts feature values toward different targets by manipulating only a small number of RGB values, uses that data to mount a poisoning attack on one of the image recognition models, the Show and Tell model, and then applies an autoencoder to defend against the adversarial data.
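
As a rough illustration of the defensive step described above, an autoencoder trained on clean images can be used to screen candidate training data by reconstruction error. This is a generic sketch, not the paper's model; the layer sizes, Keras usage, and percentile threshold are assumptions.

import numpy as np
import tensorflow as tf

def build_autoencoder(input_dim, latent_dim=64):
    inputs = tf.keras.Input(shape=(input_dim,))
    encoded = tf.keras.layers.Dense(latent_dim, activation="relu")(inputs)
    decoded = tf.keras.layers.Dense(input_dim, activation="sigmoid")(encoded)
    model = tf.keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="mse")
    return model                      # train with model.fit(clean_x, clean_x, ...)

def filter_poisoned(autoencoder, candidates, percentile=95):
    # Drop candidate samples whose reconstruction error falls in the top tail.
    recon = autoencoder.predict(candidates, verbose=0)
    errors = np.mean((candidates - recon) ** 2, axis=1)
    return candidates[errors <= np.percentile(errors, percentile)]
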
2021-04-27
Sekar, K., Devi, K. Suganya, Srinivasan, P., SenthilKumar, V. M..  2020.  Deep Wavelet Architecture for Compressive sensing Recovery. 2020 Seventh International Conference on Information Technology Trends (ITT). :185–189.
Deep learning-based compressive sensing (CS) has shown substantially improved performance and reduced run time for signal sampling and reconstruction. In most cases, however, these techniques suffer from disruptive artefacts or missing high-frequency content at low sampling ratios. The same occurs with multi-resolution sampling methods, which collect additional lower-frequency components. A promising innovation combining CS with convolutional neural networks has eliminated the sparsity constraint, yet recovery remains slow. We propose a deep wavelet-based compressive sensing framework with multi-resolution analysis that improves both reconstruction quality and run time. The proposed model demonstrates outstanding quality over previous approaches on test data.
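
For context on the role the wavelet transform plays as a sparsifying prior in CS recovery, here is a minimal classical baseline (iterative soft-thresholding in the wavelet domain), not the paper's deep architecture; PyWavelets, the measurement model y = Phi x, and all parameter values are assumptions.

import numpy as np
import pywt

def ista_wavelet_cs(y, Phi, shape, wavelet="db4", lam=0.05, n_iter=100):
    # Recover an image of `shape` from measurements y = Phi @ x.ravel().
    x = np.zeros(shape)
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2                  # gradient step size
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x.ravel() - y)                   # data-fidelity gradient
        x = (x.ravel() - step * grad).reshape(shape)
        coeffs = pywt.wavedec2(x, wavelet, level=3)            # multi-resolution decomposition
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, lam * step, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)[:shape[0], :shape[1]]  # back to image domain
    return x
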
2021-02-01
Jiang, H., Du, M., Whiteside, D., Moursy, O., Yang, Y..  2020.  An Approach to Embedding a Style Transfer Model into a Mobile APP. 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :307–316.
The prevalence of photo-processing apps indicates the demand for picture editing. As an application of convolutional neural networks, style transfer has been investigated in depth, and there are supporting materials for realizing it on the PC platform. However, few approaches address deploying a style transfer model on mobile devices and meeting the requirements of mobile users. The traditional style transfer model takes hours to run; therefore, based on a Perceptual Losses algorithm [1], we created a feed-forward neural network for each style and reduced the processing time to a few seconds. The training data were generated from a pre-trained convolutional neural network model, VGG-19. The algorithm took a thousandth of the time and generated output similar to the original. Furthermore, we optimized the model and deployed it with the TensorFlow Mobile library. We froze the model, used a bitmap to scale the inputs to 720×720, and reverted the output to the original resolution. The reverting process may introduce some blur, but this can be regarded as a feature of the art. The generated images have reliable quality, and the waiting time is independent of the content and pattern of the input images. The main factor that influences processing time is the input resolution. The average waiting time of our model on the mobile phone, a HUAWEI P20 Pro, is less than 2 seconds for 720p images and around 2.8 seconds for 1080p images, roughly ten times slower than on the PC GPU, a Tesla T40. The performance difference depends on the architecture of the model.
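
The resize-stylize-revert flow described above can be sketched as follows; `stylize` is a hypothetical placeholder for the frozen feed-forward style network, and the Pillow-based resizing is an assumption rather than the authors' TensorFlow Mobile code.

import numpy as np
from PIL import Image

def run_style_transfer(path, stylize, work_size=(720, 720)):
    original = Image.open(path).convert("RGB")
    small = original.resize(work_size, Image.BILINEAR)         # scale input to 720x720
    styled = stylize(np.asarray(small, dtype=np.float32))       # run the frozen style network
    styled_img = Image.fromarray(np.uint8(np.clip(styled, 0, 255)))
    return styled_img.resize(original.size, Image.BILINEAR)     # revert to original resolution
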
2021-01-15
Gandhi, A., Jain, S..  2020.  Adversarial Perturbations Fool Deepfake Detectors. 2020 International Joint Conference on Neural Networks (IJCNN). :1—8.
This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack in both blackbox and whitebox settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector while retaining 98% accuracy in other cases on a 100 image subsample.
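
A minimal Fast Gradient Sign Method sketch of the kind of perturbation described above (not the authors' code); the Keras-style model interface, loss choice, and epsilon value are assumptions.

import tensorflow as tf

def fgsm_perturb(model, images, labels, eps=0.01):
    # One-step FGSM: move each pixel in the direction of the sign of the loss gradient.
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, model(images))
    grad = tape.gradient(loss, images)
    return tf.clip_by_value(images + eps * tf.sign(grad), 0.0, 1.0)
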
2020-12-28
Raju, R. S., Lipasti, M..  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38—46.

Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. A malicious adversary generates adversarial examples either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples to the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a blackbox transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
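
The fixed low-pass filtering step described above can be sketched as a non-trainable depthwise convolution; this is a generic illustration, and the Gaussian kernel, kernel size, and Keras layer usage are assumptions rather than the BlurNet configuration.

import numpy as np
import tensorflow as tf

def gaussian_kernel(size=3, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).astype("float32")

def fixed_blur_layer(channels, size=3, sigma=1.0):
    # Depthwise conv that low-pass filters each feature map with the same fixed kernel.
    layer = tf.keras.layers.DepthwiseConv2D(size, padding="same",
                                            use_bias=False, trainable=False)
    layer.build((None, None, None, channels))
    k = gaussian_kernel(size, sigma)[:, :, None, None]          # (kh, kw, 1, 1)
    layer.set_weights([np.tile(k, (1, 1, channels, 1))])         # one kernel per channel
    return layer
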

2020-03-30
Li, Jian, Zhang, Zelin, Li, Shengyu, Benton, Ryan, Huang, Yulong, Kasukurthi, Mohan Vamsi, Li, Dongqi, Lin, Jingwei, Borchert, Glen M., Tan, Shaobo et al..  2019.  Reversible Data Hiding Based Key Region Protection Method in Medical Images. 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). :1526–1530.
The transmission of medical image data in an open network environment is subject to privacy issues, including patient privacy and data leakage. In the past, image encryption and information-hiding technology have been used to solve such security problems, but these methodologies generally suffered from difficulties in retrieving the original images. We present in this paper an algorithm to protect key regions in medical images. First, the coefficient of variation is used to locate the key regions, i.e., the lesion areas, of an image; other areas are then processed in blocks and analyzed for texture complexity. Next, our reversible data-hiding algorithm is used to embed the contents of the lesion areas into a high-texture area, and the Arnold transformation is performed to protect the original lesion information. In addition, we use the ciphertext of the basic information about the image and the decryption parameter to generate a Quick Response (QR) code to replace the original key regions. Consequently, only authorized customers can obtain the encryption key to extract information from encrypted images. Experimental results show that our algorithm can not only restore the original image without information loss, but also safely transfer the medical image copyright and patient-sensitive information.
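
The first step, locating high-variability blocks via the coefficient of variation, can be illustrated with a short sketch; the block size and threshold are assumptions, and the reversible embedding, Arnold transformation, and QR-code steps are not shown.

import numpy as np

def coefficient_of_variation(block):
    mean = block.mean()
    return block.std() / mean if mean > 0 else 0.0

def locate_key_regions(image, block=32, cv_thresh=0.5):
    # Return (row, col) offsets of blocks whose coefficient of variation is high.
    h, w = image.shape[:2]
    regions = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].astype(float)
            if coefficient_of_variation(patch) > cv_thresh:
                regions.append((y, x))
    return regions
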
2019-08-12
Uto, K., Mura, M. D., Chanussot, J..  2018.  Spatial Resolution Enhancement of Optical Images Based on Tensor Decomposition. IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium. :8058-8061.

There is an inevitable trade-off between spatial and spectral resolution in optical remote sensing images. A number of data fusion techniques for multimodal images with different spatial and spectral characteristics have been developed to generate optical images with both high spatial and high spectral resolution. Although some of these techniques take the spectral and spatial blurring process into account, no method attempts to retrieve an optical image with both high spatial and high spectral resolution, a spatial blurring filter, and a spectral response simultaneously. In this paper, we propose a new framework for spatial resolution enhancement by fusing multiple optical images with different characteristics based on tensor decomposition. An optical image with both high spatial and high spectral resolution, together with a spatial blurring filter and a spectral response, is generated via canonical polyadic (CP) decomposition of a set of tensors. Experimental results showed that relatively reasonable results were obtained with regularization based on nonnegativity and coupling.
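
To make the tensor machinery concrete, here is a minimal nonnegative CP decomposition and reconstruction of a single hyperspectral cube using TensorLy; the rank, the library choice, and the single-tensor setting are assumptions, and the coupled multi-image fusion of the paper is not reproduced.

import tensorly as tl
from tensorly.decomposition import non_negative_parafac

def cp_low_rank_model(hsi_cube, rank=30):
    # hsi_cube: (rows, cols, bands) array; returns its rank-constrained reconstruction.
    tensor = tl.tensor(hsi_cube.astype(float))
    cp_tensor = non_negative_parafac(tensor, rank=rank, n_iter_max=200)
    return tl.cp_to_tensor(cp_tensor)
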

2019-03-25
Li, Y., Guan, Z., Xu, C..  2018.  Digital Image Self Restoration Based on Information Hiding. 2018 37th Chinese Control Conference (CCC). :4368–4372.
With the rapid development of computer networks, multimedia information is widely used, and the security of digital media has drawn much attention. A revised photo presented as forensic evidence can distort the truth of a case, and badly tampered pictures on social networks can have a negative impact on the parties involved as well. In order to ensure the authenticity and integrity of digital media, self-recovery of digital images based on information hiding is studied in this paper. Jarvis half-toning is used to compress the digital image and obtain backup data, and the backup data are then spread to generate reference data. A hash algorithm generates hash data from the reference data and the original data. The reference data and hash data together form a digital watermark that is scattered and embedded in the low-significance bits of the digital image. When the image is maliciously tampered with, the hash bits are used to detect and locate the tampered area, and image self-recovery is performed by extracting the reference data hidden throughout the whole image. In this paper, a thorough reconstruction-quality assessment of self-recovered images is performed, and better performance than the traditional DCT (Discrete Cosine Transform) quantization truncation approach is achieved. Regardless of the quality of the tampered content, a reference authentication system designed according to the principles presented in this paper allows higher-quality reconstruction, recovering the original image with good quality even when a large area of the image has been tampered with.
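
As a simplified illustration of embedding watermark bits in the low-significance bits of an image (without the halftone compression, spreading, or Arnold scrambling steps described above), a plain least-significant-bit sketch might look like this; the helper names are hypothetical.

import numpy as np

def embed_bits_lsb(image, bits):
    # Write a bit stream into the least significant bits of a uint8 image copy.
    flat = image.copy().flatten()
    if len(bits) > flat.size:
        raise ValueError("payload larger than cover image capacity")
    payload = np.asarray(bits, dtype=np.uint8)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | payload
    return flat.reshape(image.shape)

def extract_bits_lsb(image, n_bits):
    return image.flatten()[:n_bits] & 1
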
2017-03-08
Chammas, E., Mokbel, C., Likforman-Sulem, L..  2015.  Arabic handwritten document preprocessing and recognition. 2015 13th International Conference on Document Analysis and Recognition (ICDAR). :451–455.

Arabic handwritten documents present specific challenges due to the cursive nature of the writing and the presence of diacritical marks. Moreover, one of the largest labeled databases of Arabic handwritten documents, the OpenHart-NIST database, includes specific noise, namely guidelines, that has to be addressed. We propose several approaches to process these documents. First, a guideline detection approach based on K-means has been developed that detects the documents that include guidelines. We then propose a series of preprocessing steps at the text-line level to reduce the noise effects. For text lines that include guidelines, a guideline removal preprocessing is described and existing stroke restoration approaches are assessed. In addition, we propose a preprocessing that combines noise removal and deskewing by removing line fragments from neighboring text lines while searching for the principal orientation of the text line. We provide recognition results showing the significant improvement brought by the proposed processing steps.

Kerouh, F., Serir, A..  2015.  A no reference perceptual blur quality metric in the DCT domain. 2015 3rd International Conference on Control, Engineering Information Technology (CEIT). :1–6.

Blind objective metrics that automatically quantify the perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. As blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to analyse, perceptually, the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, which relies on the Human Visual System. Comprehensive testing demonstrates that the proposed Perceptual Blind Blur Quality Metric (PBBQM) shows good consistency with subjective quality scores as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
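
A very reduced version of the underlying intuition, that blur suppresses high-frequency DCT energy, can be computed per block as below; this is a generic sharpness proxy, not the PBBQM itself, and the block size, low-frequency radius, and the absence of any JNB perceptual weighting are assumptions.

import numpy as np
from scipy.fft import dctn

def high_frequency_ratio(gray, block=8, low_radius=4):
    # Average fraction of per-block DCT energy outside the low-frequency corner;
    # lower values suggest stronger blur.
    h, w = gray.shape
    ratios = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(gray[y:y + block, x:x + block].astype(float), norm="ortho")
            energy = (c ** 2).sum()
            if energy > 0:
                low = (c[:low_radius, :low_radius] ** 2).sum()
                ratios.append(1.0 - low / energy)
    return float(np.mean(ratios)) if ratios else 0.0
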

Chauhan, A. S., Sahula, V..  2015.  High density impulsive Noise removal using decision based iterated conditional modes. 2015 International Conference on Signal Processing, Computing and Control (ISPCC). :24–29.

Salt-and-pepper noise is very common during the transmission of images through a noisy channel or due to impairment in a camera sensor module. For noise removal, methods with various two-stage cascade configurations have been proposed in the literature. These methods can remove low-density impulse noise but are not suited for high-density noise in terms of visual performance. We propose an efficient method for the removal of high- as well as low-density impulse noise. Our approach is based on a novel extension of iterated conditional modes (ICM). It is a cascade configuration of two stages: noise detection and noise removal. The noise detection process uses an iterative decision-based approach, while the noise removal process is based on iterative noisy-pixel estimation. Using the improved approach, images with up to 95% corruption have been recovered with good results, while images with 98% corruption have been recovered with quite satisfactory results. To benchmark the image quality, we have considered various metrics like PSNR (Peak Signal to Noise Ratio), MSE (Mean Square Error) and SSIM (Structural Similarity Index Measure).
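
For orientation, a basic decision-based filter of the same flavour (detect extreme-valued pixels, replace each with the median of its non-noisy neighbours, and iterate) is sketched below; it is not the ICM-based method of the paper, and the 3x3 window and iteration count are assumptions.

import numpy as np

def decision_based_median(noisy, max_iter=5):
    # Replace only pixels stuck at 0 or 255, using the median of clean neighbours.
    img = noisy.astype(float)
    for _ in range(max_iter):
        mask = (img == 0) | (img == 255)
        if not mask.any():
            break
        padded = np.pad(img, 1, mode="edge")
        pmask = np.pad(mask, 1, mode="constant", constant_values=True)
        for y, x in zip(*np.nonzero(mask)):
            window = padded[y:y + 3, x:x + 3]
            clean = window[~pmask[y:y + 3, x:x + 3]]
            if clean.size:
                img[y, x] = np.median(clean)
    return img.astype(np.uint8)
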

Kerouh, F., Serir, A..  2015.  Perceptual blur detection and assessment in the DCT domain. 2015 4th International Conference on Electrical Engineering (ICEE). :1–4.

The main emphasis of this paper is to develop an approach able to detect and blindly assess the perceptual blur degradation in images. The idea is a statistical modelling of perceptual blur degradation in the frequency domain using the discrete cosine transform (DCT) and the Just Noticeable Blur (JNB) concept. A machine learning system is then trained using the considered statistical features to detect the perceptual blur effect in the acquired image and eventually produce a quality score, denoted BBQM for Blind Blur Quality Metric. The proposed BBQM's efficiency is tested objectively by evaluating its performance against some existing metrics in terms of correlation with subjective scores.

2015-05-05
Vantigodi, S., Babu, R.V..  2014.  Entropy constrained exemplar-based image inpainting. Signal Processing and Communications (SPCOM), 2014 International Conference on. :1-5.

Image inpainting is the process of filling in an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm that takes care of false edge propagation. We use the classical exemplar-based technique to compute the priority term for each patch. To ensure that the nearest-neighbour patch found by minimizing the L2 distance between patches has matching edge content, we impose an additional constraint that the entropies of the patches be similar; the entropy of a patch acts as a good measure of edge content. Additionally, we fill the image by considering overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. The results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects and thin scratches or text written on the image. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably to those obtained by existing techniques.
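
The entropy-constrained patch selection described above can be sketched roughly as follows, comparing candidate source patches against the known pixels of the target patch; the histogram binning, entropy tolerance, and the simplification of ignoring the priority term are assumptions.

import numpy as np

def patch_entropy(patch, bins=32):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255), density=True)
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

def best_source_patch(target, known_mask, candidates, entropy_tol=0.5):
    # Among candidates with similar entropy to the target's known pixels,
    # pick the one minimizing L2 distance over those known pixels.
    t_entropy = patch_entropy(target[known_mask])
    best, best_dist = None, np.inf
    for cand in candidates:
        if abs(patch_entropy(cand) - t_entropy) > entropy_tol:
            continue
        diff = cand[known_mask].astype(float) - target[known_mask].astype(float)
        dist = float(np.sum(diff ** 2))
        if dist < best_dist:
            best, best_dist = cand, dist
    return best
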
 

2015-05-04
Hui Zeng, Tengfei Qin, Xiangui Kang, Li Liu.  2014.  Countering anti-forensics of median filtering. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :2704-2708.

The statistical fingerprints left by median filtering can be a valuable clue for image forensics. However, these fingerprints may be maliciously erased by a forger. Recently, a tricky anti-forensic method has been proposed to remove median filtering traces by restoring the pixel difference distribution of images. In this paper, we analyze the traces of this anti-forensic technique and propose a novel counter method. The experimental results show that our method can reveal this anti-forensics effectively at a low computational load. To the best of our knowledge, this is the first work on countering anti-forensics of median filtering.
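
As background on the pixel-difference statistics at stake here, a trivial median-filtering feature is the proportion of zero first-order differences, which median filtering tends to inflate and which the anti-forensic method tries to restore; the sketch below is a generic feature computation, not the countermeasure proposed in the paper.

import numpy as np

def zero_difference_ratio(gray):
    # Fraction of zero horizontal first-order pixel differences in a grayscale image.
    diff = np.diff(gray.astype(int), axis=1)
    return float((diff == 0).mean())
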