Biblio

Filters: Keyword is image denoising
2022-04-25
Khasanova, Aliia, Makhmutova, Alisa, Anikin, Igor.  2021.  Image Denoising for Video Surveillance Cameras Based on Deep Learning Techniques. 2021 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :713–718.
Nowadays, video surveillance cameras are widely used in many smart city applications for ensuring road safety. Their video data can be used to solve such tasks as traffic management, driving control, and environmental monitoring. Most of these applications are based on object recognition and tracking algorithms. However, the video image quality does not always meet the requirements of such algorithms due to the influence of different external factors. A variety of adverse weather conditions produce noise on the images, which often makes it difficult to detect objects correctly. Lately, deep learning methods have shown good results in image processing, including denoising tasks. This work is devoted to the study of using these methods for image quality enhancement in difficult weather conditions such as snow, rain, and fog. Different deep learning techniques were evaluated in terms of their impact on the quality of object detection/recognition. Finally, a system for automatic image denoising was developed.
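The paper does not include an implementation; as a rough sketch of the kind of deep denoiser such studies evaluate, the following minimal DnCNN-style residual network in PyTorch shows the basic pattern (layer count, channel width, and all hyperparameters are illustrative assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

class DnCNNSketch(nn.Module):
    """Residual denoiser: the network predicts the noise and subtracts it."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)   # subtract the predicted noise residual

model = DnCNNSketch()
noisy_frame = torch.rand(1, 3, 64, 64)    # stand-in for a noisy camera frame
denoised = model(noisy_frame)             # untrained; illustrates the API only
```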
2021-12-20
Shelke, Sandeep K., Sinha, Sanjeet K., Patel, Govind Singh.  2021.  Study of Improved Median Filtering Using Adaptive Window Architecture. 2021 International Conference on Computer Communication and Informatics (ICCCI). :1–6.
Over the past few years, computer vision has become an essential aspect of modern technology. Computer vision is mainly based on image processing, which includes three important aspects: image filtering, image compression, and image security. Image filtering can be achieved using various filtering techniques, but PSNR and operating frequency are its most challenging aspects. This paper focuses on overcoming the challenges that arise while removing salt & pepper noise with conventional median filtering, by developing an improved median filter with an adaptive moving window architecture and comparing its performance in terms of PSNR and operating frequency.
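The authors' hardware architecture is not public; as a minimal sketch of the textbook adaptive median filter, which grows the window until the local median stops being an impulse extreme, the following conveys the idea (the window sizes and the 20% noise level in the demo are assumptions):

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Textbook adaptive median filter for salt-and-pepper noise."""
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            for win in range(3, max_win + 1, 2):
                k = win // 2
                block = padded[i + pad - k:i + pad + k + 1,
                               j + pad - k:j + pad + k + 1]
                med = np.median(block)
                if block.min() < med < block.max():
                    # Median is reliable; replace the pixel only if it is
                    # itself an extreme (a likely impulse).
                    if not (block.min() < img[i, j] < block.max()):
                        out[i, j] = med
                    break
            else:
                # No reliable window found: fall back to the largest median.
                out[i, j] = med
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
mask = rng.random(img.shape) < 0.2                 # 20% salt-and-pepper noise
img[mask] = rng.choice(np.array([0, 255], np.uint8), mask.sum())
restored = adaptive_median(img)
```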
2021-02-08
Xu, P., Miao, Q., Liu, T., Chen, X..  2015.  Multi-direction Edge Detection Operator. 2015 11th International Conference on Computational Intelligence and Security (CIS). :187–190.

Due to noise in the images, the edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. In order to solve these problems, this paper proposes a multi-direction edge detection operator to detect edges in noisy images. The new operator is designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides a more flexible treatment of directions, which allows the new operator to detect edges in different directions and overcome the directional limitation of the traditional operator. On the other hand, the single-pixel edge images in different directions can be fused, so that the edge information complements itself across directions. The experimental results indicate that the new operator is superior to traditional ones in terms of the effectiveness of edge detection and the ability of noise rejection.
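As an illustration of the shear idea, not the authors' operator, the following sketch shears the image, applies a conventional Sobel operator, maps the response back, and fuses the per-direction edge maps by taking the pixel-wise maximum (the shear values and the fusion rule are assumptions):

```python
import numpy as np
from scipy import ndimage

def sheared_edges(img, shears=(-0.5, 0.0, 0.5)):
    maps = []
    for s in shears:
        fwd = np.array([[1.0, s], [0.0, 1.0]])    # shear the image
        inv = np.array([[1.0, -s], [0.0, 1.0]])   # map the response back
        sheared = ndimage.affine_transform(img, fwd)
        grad = np.hypot(ndimage.sobel(sheared, axis=0),
                        ndimage.sobel(sheared, axis=1))
        maps.append(ndimage.affine_transform(grad, inv))
    return np.maximum.reduce(maps)    # fuse the directional edge maps

rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((128, 128)), 3)   # smooth test image
edges = sheared_edges(img)
```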

2020-08-03
Xin, Le, Li, Yuanji, Shang, Shize, Li, Guangrui, Yang, Yuhao.  2019.  A Template Matching Background Filtering Method for Millimeter Wave Human Security Image. 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR). :1–6.
In order to reduce the interference of burrs, aliasing, and other noise in the background area of millimeter wave human security images with object identification, an adaptive template matching filtering method is proposed. First, the preprocessed original image is segmented by a level set algorithm, and the result is then used as a template to filter the background of the original image. Finally, the background-filtered image is used as the input to bilateral filtering. Comparative experiments based on actual millimeter wave images verify the improvement of this algorithm over traditional filtering methods, and demonstrate that the algorithm can filter the background noise of the human security image, retain the image details of the human body area, and thus aid object recognition and localization in the millimeter wave security image.
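A rough stand-in for the described pipeline, with Otsu thresholding substituting for the paper's level-set segmentation and all parameter values as assumptions:

```python
import cv2
import numpy as np

img = np.zeros((128, 128), np.uint8)
cv2.circle(img, (64, 64), 30, 200, -1)        # stand-in "body" region
noise = np.random.randint(0, 40, img.shape, dtype=np.uint8)
img = cv2.add(img, noise)                     # background clutter

# Segment the body region (Otsu here; the paper uses a level set algorithm),
# use the mask as a template to zero out the background, then smooth.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
background_filtered = cv2.bitwise_and(img, img, mask=mask)
smoothed = cv2.bilateralFilter(background_filtered, 9, 75, 75)
```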
2020-06-15
Puteaux, Pauline, Puech, William.  2018.  Noisy Encrypted Image Correction based on Shannon Entropy Measurement in Pixel Blocks of Very Small Size. 2018 26th European Signal Processing Conference (EUSIPCO). :161–165.
Many techniques have been presented to protect image content confidentiality. The owner of an image encrypts it using a key and transmits the encrypted image across a network. If the recipient is authorized to access the original content of the image, he can reconstruct it losslessly. However, if the encrypted image is corrupted by noise during transmission, some parts of the image cannot be deciphered. In order to localize and correct these errors, we propose an approach based on local Shannon entropy measurement. We first analyze this measure as a function of the block size. We then provide a full description of our blind error localization and removal process. Experimental results show that the proposed approach, based on local entropy, can be used in practice to correct noisy encrypted images, even with blocks of very small size.
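A minimal sketch of the local entropy measure on very small blocks; the 4x4 block size and the decision threshold are illustrative assumptions:

```python
import numpy as np

def block_entropy(block):
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def flag_noisy_blocks(img, block=4, threshold=3.5):
    # A 4x4 block has 16 pixels, so its entropy is capped at log2(16) = 4 bits;
    # still-scrambled blocks sit near that cap, deciphered content sits lower.
    h, w = img.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            flags[i // block, j // block] = \
                block_entropy(img[i:i + block, j:j + block]) > threshold
    return flags

rng = np.random.default_rng(1)
scrambled = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(flag_noisy_blocks(scrambled).mean())   # close to 1.0 for scrambled data
```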
2020-06-12
Gu, Feng, Zhang, Hong, Wang, Chao, Wu, Fan.  2019.  SAR Image Super-Resolution Based on Noise-Free Generative Adversarial Network. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :2575–2578.

Deep learning has been successfully applied to ordinary image super-resolution (SR). However, since synthetic aperture radar (SAR) images are often disturbed by multiplicative noise known as speckle and are more blurry than ordinary images, there are few deep learning methods for SAR image SR. In this paper, a deep generative adversarial network (DGAN) is proposed to reconstruct pseudo high-resolution (HR) SAR images. First, a generator network is constructed to remove the noise of the low-resolution SAR image and generate the HR SAR image. Second, a discriminator network is used to differentiate between the pseudo super-resolution images and realistic HR images. An adversarial objective function is introduced to make the pseudo HR SAR images closer to real SAR images. The experimental results show that our method can maintain the SAR image content with high-level noise suppression. The performance evaluation based on peak signal-to-noise ratio and structural similarity index shows the superiority of the proposed method over conventional CNN baselines.
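A bare-bones adversarial pair in the spirit of the description, not the authors' DGAN; the architectures and sizes below are placeholders:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Denoise-and-upsample a noisy low-resolution SAR patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, lr):
        return self.net(lr)

class Discriminator(nn.Module):
    """Score how realistic a (pseudo) high-resolution patch looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, img):
        return self.net(img)   # real/fake logit

g, d = Generator(), Discriminator()
lr_patch = torch.rand(1, 1, 32, 32)        # noisy low-resolution SAR patch
sr_patch = g(lr_patch)                     # pseudo high-resolution output
score = d(sr_patch)                        # adversarial feedback for training
```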

2020-04-17
Xie, Cihang, Wu, Yuxin, Maaten, Laurens van der, Yuille, Alan L., He, Kaiming.  2019.  Feature Denoising for Improving Adversarial Robustness. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :501–509.

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
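The paper's code is linked above; for orientation, here is a generic non-local feature-denoising block with a residual connection, the family of blocks the paper builds on (this simplified dot-product variant differs in detail from the published models):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoise(nn.Module):
    """Each position's features are rewritten as a weighted average over all
    spatial positions, then added back through a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.embed(x).flatten(2)                       # (n, c/2, h*w)
        attn = F.softmax(q.transpose(1, 2) @ q, dim=-1)    # position affinities
        v = x.flatten(2)                                   # (n, c, h*w)
        denoised = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.out(denoised)                      # residual connection

block = NonLocalDenoise(64)
features = torch.rand(2, 64, 16, 16)   # stand-in convolutional feature map
cleaned = block(features)
```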

2020-03-30
Abdolahi, Mahssa, Jiang, Hao, Kaminska, Bozena.  2019.  Robust data retrieval from high-security structural colour QR codes via histogram equalization and decorrelation stretching. 2019 IEEE 10th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :0340–0346.
In this work, robust readout of the data (232 English characters) stored in high-security structural colour QR codes was achieved by using multiple image processing techniques, specifically histogram equalization and decorrelation stretching. The decoded structural colour QR codes are generic diffractive RGB-pixelated periodic nanocones selectively activated by laser exposure to obtain the particular design of interest. The samples were imaged according to the criteria determined by the diffraction grating equation for the lighting and viewing angles, given the red, green, and blue periodicities of the grating. However, illumination variations across the samples, together with cross-module and cross-channel interference effects, result in images with dissimilar lighting conditions which cannot be directly read by the decoding script and need significant preprocessing. According to the intensity plots, even if the intensity values are very close (above 200) in some typical regions of images with different lighting conditions, their inconsistencies (below 100) at the pixels of one representative region may require different methods for recovering the data from the red, green, and blue channels. In many cases, a successful data readout could be achieved by downscaling the images to 300-pixel dimensions (with bilinear interpolation resampling), histogram equalization (HE), linear spatial low-pass mean filtering, and a gamma function, each used either independently or with other complementary processes. The majority of images, however, could be fully decoded using decorrelation stretching (DS), either as a standalone or combined process, to obtain a more distinctive colour definition.
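Both named operations are standard; a sketch assuming OpenCV for per-channel histogram equalization and a generic eigen-decomposition implementation of decorrelation stretching (the synthetic input and the rescaling choices are assumptions):

```python
import numpy as np
import cv2

def decorrelation_stretch(img):
    flat = img.reshape(-1, 3).astype(np.float64)
    mean = flat.mean(axis=0)
    cov = np.cov(flat, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # Whiten along the decorrelated axes, then restore per-channel scale.
    transform = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-8)) @ evecs.T
    stretched = (flat - mean) @ transform.T
    stretched = stretched * flat.std(axis=0) + mean
    return np.clip(stretched.reshape(img.shape), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(40, 200, (64, 64, 3), dtype=np.uint8)  # stand-in sample image
equalized = cv2.merge([cv2.equalizeHist(ch) for ch in cv2.split(img)])
stretched = decorrelation_stretch(img)
```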
2019-04-01
Rathour, N., Kaur, K., Bansal, S., Bhargava, C..  2018.  A Cross Correlation Approach for Breaking of Text CAPTCHA. 2018 International Conference on Intelligent Circuits and Systems (ICICS). :6–10.
Online web service providers generally protect themselves through CAPTCHAs. A CAPTCHA is a type of challenge-response test used in computing as an attempt to ensure that the response is generated by a person. CAPTCHAs are mainly presented as distorted text which the user must correctly transcribe. Numerous schemes have been proposed to date in order to prevent attacks by bots. This paper presents a cross-correlation based approach to breaking the text CAPTCHAs of a well-known service provider, PayPal.com, and of India's most visited website, IRCTC.co.in. The procedure can be broken down into three tightly coupled tasks: preprocessing, segmentation, and classification. The preprocessing of the image is performed to remove all the background noise of the image; the noise in the CAPTCHA consists of unwanted 'on' pixels in the background. The segmentation is performed by scanning the image for 'on' pixels. The classification is performed using the correlation values of the inputs and templates. Two types of templates have been used for classification: standard templates, which give a 30% success rate, and noisy templates made from the CAPTCHA images, which achieve a 100% success rate.
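A sketch of the correlation-based classification step under stated assumptions: normalized cross-correlation via OpenCV's matchTemplate, with synthetic rendered glyphs standing in for the paper's standard/noisy template sets:

```python
import cv2
import numpy as np

def glyph(ch):
    """Render a synthetic character template (stands in for real templates)."""
    canvas = np.zeros((40, 30), np.uint8)
    cv2.putText(canvas, ch, (2, 32), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)
    return canvas

def classify_segment(segment, templates):
    """Label a character segment by its best normalized cross-correlation."""
    best_label, best_score = None, -1.0
    for label, tpl in templates.items():
        tpl = cv2.resize(tpl, (segment.shape[1], segment.shape[0]))
        score = cv2.matchTemplate(segment, tpl, cv2.TM_CCORR_NORMED)[0, 0]
        if score > best_score:
            best_label, best_score = label, float(score)
    return best_label

templates = {c: glyph(c) for c in 'ABC'}
print(classify_segment(glyph('B'), templates))   # expected: B
```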
2018-02-02
Abura'ed, Nour, Khan, Faisal Shah, Bhaskar, Harish.  2017.  Advances in the Quantum Theoretical Approach to Image Processing Applications. ACM Comput. Surv.. 49:75:1–75:49.
In this article, a detailed survey of the quantum approach to image processing is presented. Recently, it has been established that existing quantum algorithms are applicable to image processing tasks, allowing quantum informational models of classical image processing. However, efforts continue in identifying the diversity of its applicability in various image processing domains. Here, in addition to reviewing some of the critical image processing applications that quantum approaches have targeted, such as denoising, edge detection, image storage, retrieval, and compression, this study also highlights the complexities in transitioning from the classical to the quantum domain. This article establishes theoretical fundamentals, analyzes performance and evaluation, draws key statistical evidence to support claims, and provides recommendations based on literature published mostly between 2010 and 2015.
2017-11-20
Aqel, S., Aarab, A., Sabri, M. A..  2016.  Shadow detection and removal for traffic sequences. 2016 International Conference on Electrical and Information Technologies (ICEIT). :168–173.

This paper addresses the problem of shadow detection and removal in traffic vision analysis. The presence of shadows in traffic sequences is unavoidable and leads to errors at the segmentation stage, where shadows are often misclassified as an object region or as a moving object. This paper presents a shadow removal method based on both color and texture features, aiming to efficiently retrieve moving objects whose detection is usually affected by cast shadows. Once shadows are detected, an automatic shadow removal model is applied based on information retrieved from the histogram shape. Additionally, in order to obtain a shadow-free foreground segmentation image, a morphological reconstruction algorithm is used to recover the foreground disturbed by shadow removal. Experimental results on a real traffic sequence are presented to test the proposed approach and to validate the algorithm's performance.
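The paper's histogram-shape model is not reproduced here; as a generic illustration of chromaticity-based shadow masking (a cast shadow darkens a region while roughly preserving its hue), with thresholds that are purely illustrative:

```python
import cv2
import numpy as np

def shadow_mask(frame_hsv, background_hsv, v_lo=0.4, v_hi=0.9, h_tol=10.0):
    """Flag pixels darker than the background but with similar hue.
    Hue wrap-around is ignored here for brevity."""
    h_f, _, v_f = cv2.split(frame_hsv.astype(np.float32))
    h_b, _, v_b = cv2.split(background_hsv.astype(np.float32))
    ratio = v_f / (v_b + 1e-6)
    shadow = (ratio > v_lo) & (ratio < v_hi) & (np.abs(h_f - h_b) < h_tol)
    return shadow.astype(np.uint8) * 255

frame = np.dstack([np.full((8, 8), 30.0), np.full((8, 8), 0.5),
                   np.full((8, 8), 120.0)]).astype(np.float32)
background = frame.copy()
background[..., 2] = 200.0                     # same hue, brighter background
print(shadow_mask(frame, background).mean())   # 255.0: whole patch flagged
```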

2017-03-08
Chammas, E., Mokbel, C., Likforman-Sulem, L..  2015.  Arabic handwritten document preprocessing and recognition. 2015 13th International Conference on Document Analysis and Recognition (ICDAR). :451–455.

Arabic handwritten documents present specific challenges due to the cursive nature of the writing and the presence of diacritical marks. Moreover, one of the largest labeled databases of Arabic handwritten documents, the OpenHart-NIST database, includes a specific type of noise, namely guidelines, that has to be addressed. We propose several approaches to process these documents. First, a guideline detection approach based on K-means has been developed that identifies the documents that include guidelines. We then propose a series of preprocessing steps at the text-line level to reduce noise effects. For text lines that include guidelines, a guideline removal preprocessing is described and existing stroke restoration approaches are assessed. In addition, we propose a preprocessing step that combines noise removal and deskewing by removing line fragments from neighboring text lines while searching for the principal orientation of the text line. We provide recognition results showing the significant improvement brought by the proposed preprocessing steps.

Moradi, M., Falahati, A., Shahbahrami, A., Zare-Hassanpour, R..  2015.  Improving visual quality in wireless capsule endoscopy images with contrast-limited adaptive histogram equalization. 2015 2nd International Conference on Pattern Recognition and Image Analysis (IPRIA). :1–5.

Wireless Capsule Endoscopy (WCE) is a noninvasive device for the detection of gastrointestinal problems, especially small bowel diseases such as polyps, which cause gastrointestinal bleeding. The quality of WCE images is very important for diagnosis. In this paper, a new method is proposed to improve the quality of WCE images using a Removing Noise and Contrast Enhancement (RNCE) algorithm. The algorithm has been implemented and tested on real images. The quality metrics used for performance evaluation of the proposed method are the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Edge Strength Similarity for Image (ESSIM). The results obtained from SSIM, PSNR, and ESSIM indicate that the implemented RNCE method improves the quality of WCE images significantly.
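The RNCE algorithm itself is not public; as a generic stand-in for its two stages, the sketch below pairs a median pre-filter (noise removal) with OpenCV's contrast-limited adaptive histogram equalization (contrast enhancement). The clip limit, grid size, and kernel size are assumptions:

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
frame = np.clip(rng.normal(90, 20, (128, 128)), 0, 255).astype(np.uint8)
# ^ dim, noisy stand-in for a WCE frame

denoised = cv2.medianBlur(frame, 3)                    # noise removal stage
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)                       # contrast enhancement
```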

Gómez-Valverde, J. J., Ortuño, J. E., Guerra, P., Hermann, B., Zabihian, B., Rubio-Guivernau, J. L., Santos, A., Drexler, W., Ledesma-Carbayo, M. J..  2015.  Evaluation of speckle reduction with denoising filtering in optical coherence tomography for dermatology. 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). :494–497.

Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D) as 2D denoising filters, together with the Wavelet Multiframe algorithm considering adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.
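Neither the Enhanced Sigma Filter nor the exact BM3D/wavelet-multiframe pipeline is reproduced here; as a dependency-light illustration of edge-preserving 2D speckle reduction on a B-scan, a non-local means filter from scikit-image is sketched (the synthetic speckle model and all parameter values are assumptions):

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 0.8, 512), (256, 1)).astype(np.float32)
bscan = np.clip(clean * (1 + 0.3 * rng.standard_normal(clean.shape)),
                0, 1).astype(np.float32)       # synthetic speckled B-scan

sigma = float(np.mean(estimate_sigma(bscan)))
filtered = denoise_nl_means(bscan, h=1.15 * sigma, patch_size=5,
                            patch_distance=6, fast_mode=True)
```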

Behjat-Jamal, S., Demirci, R., Rahkar-Farshi, T..  2015.  Hybrid bilateral filter. 2015 International Symposium on Computer Science and Software Engineering (CSSE). :1–6.

A variety of image noise reduction methods have been developed so far. Most of them successfully remove noise, but their edge-preserving capabilities are weak, so the bilateral image filter is helpful in dealing with this problem. Nevertheless, its performance depends on spatial and photometric parameters chosen by the user. Conventionally, the geometric weight is calculated from the distance of neighboring pixels and the photometric weight is calculated from the color components of neighboring pixels; the range of both weights is between zero and one. In this paper, geometric weights are estimated by fuzzy metrics and photometric weights are estimated by a fuzzy rule based system which does not require any predefined parameter. Experimental results for the conventional bilateral filter, the fuzzy bilateral filter, and the proposed approach are included.
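A sketch of the conventional bilateral filter that makes the two weight terms explicit: a geometric weight from pixel distance and a photometric weight from intensity difference. The paper's fuzzy replacements for these weights are not reproduced; the sigma values are illustrative:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    geometric = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial weight
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Photometric weight: how close each neighbour's intensity is.
            photometric = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = geometric * photometric        # both lie in (0, 1]
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

rng = np.random.default_rng(0)
img = rng.random((48, 48))          # stand-in noisy image in [0, 1]
smoothed = bilateral(img)
```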

Chauhan, A. S., Sahula, V..  2015.  High density impulsive Noise removal using decision based iterated conditional modes. 2015 International Conference on Signal Processing, Computing and Control (ISPCC). :24–29.

Salt and pepper noise is very common during the transmission of images through a noisy channel or due to impairment in the camera sensor module. For its removal, various two-stage cascade configurations have been proposed in the literature. These methods can remove low-density impulse noise but are not suited for high-density noise in terms of visual performance. We propose an efficient method for the removal of high- as well as low-density impulse noise. Our approach is based on a novel extension of iterated conditional modes (ICM). It is a cascade configuration of two stages: noise detection and noise removal. The noise detection process uses an iterative decision-based approach, while the noise removal process is based on iterative noisy pixel estimation. Using the improved approach, images up to 95% corrupted have been recovered with good results, while images up to 98% corrupted have been recovered with quite satisfactory results. To benchmark image quality, we consider metrics such as PSNR (Peak Signal to Noise Ratio), MSE (Mean Square Error), and SSIM (Structural Similarity Index Measure).
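A minimal sketch of the generic decision-based two-stage idea: detect extreme-valued pixels, then iteratively estimate each flagged pixel from its non-impulse neighbours. This is not the authors' ICM formulation, and the demo's 90% corruption level is an assumption:

```python
import numpy as np

def decision_based_restore(img, iters=3):
    # Stage 1: flag extreme-valued pixels as impulse candidates. (Pixels that
    # are legitimately 0 or 255 get flagged too; a known limitation.)
    noisy = (img == 0) | (img == 255)
    out = img.astype(np.float32)
    # Stage 2: iteratively replace flagged pixels by the median of their
    # currently-clean 3x3 neighbours.
    for _ in range(iters):
        padded = np.pad(out, 1, mode='reflect')
        clean = np.pad(~noisy, 1, mode='edge')
        for i, j in zip(*np.nonzero(noisy)):
            window = padded[i:i + 3, j:j + 3]
            mask = clean[i:i + 3, j:j + 3]
            if mask.any():
                out[i, j] = np.median(window[mask])
                noisy[i, j] = False
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
img = np.full((64, 64), 128, np.uint8)
hit = rng.random(img.shape) < 0.9                  # 90% corruption
img[hit] = rng.choice(np.array([0, 255], np.uint8), hit.sum())
restored = decision_based_restore(img)             # recovers a near-flat image
```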

Saurabh, A., Kumar, A., Anitha, U..  2015.  Performance analysis of various wavelet thresholding techniques for despeckling of sonar images. 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN). :1–7.

Image denoising is nowadays a great challenge in the field of image processing, and the discrete wavelet transform (DWT) is one of the powerful and promising approaches in this area. However, fixing an optimal threshold is the key factor that determines the performance of a DWT-based denoising algorithm. The optimal threshold can be estimated from the image statistics to obtain better denoising performance in terms of the clarity or quality of the images. In this paper we experimentally analyze various methods of denoising sonar images using different thresholding methods (VisuShrink, BayesShrink, and NeighShrink) and compare the results in terms of various image quality parameters (PSNR, MSE, SSIM, and entropy). The results of the proposed method show an improvement in the visual quality of sonar images by suppressing the speckle noise and retaining edge details.
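BayesShrink-style soft thresholding is available off the shelf; a hedged illustration of the kind of comparison the paper runs, using scikit-image's wavelet denoiser on a synthetically speckled test image (the multiplicative noise model and all parameters are assumptions):

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_wavelet, estimate_sigma
from skimage.metrics import peak_signal_noise_ratio

clean = img_as_float(data.camera())               # stand-in for a sonar image
rng = np.random.default_rng(0)
speckled = np.clip(clean * (1 + 0.2 * rng.standard_normal(clean.shape)), 0, 1)

bayes = denoise_wavelet(speckled, method='BayesShrink', mode='soft',
                        rescale_sigma=True)
visu = denoise_wavelet(speckled, method='VisuShrink', mode='soft',
                       sigma=estimate_sigma(speckled), rescale_sigma=True)
print('BayesShrink PSNR:', peak_signal_noise_ratio(clean, bayes))
print('VisuShrink  PSNR:', peak_signal_noise_ratio(clean, visu))
```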

Rubel, O., Ponomarenko, N., Lukin, V., Astola, J., Egiazarian, K..  2015.  HVS-based local analysis of denoising efficiency for DCT-based filters. 2015 Second International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S T). :189–192.

Images acquired and processed in communication and multimedia systems are often noisy, so pre-filtering is a typical stage for removing noise. At this stage, special attention has to be paid to image visual quality. This paper analyzes denoising efficiency from the viewpoint of visual quality improvement using metrics that take into account the human vision system (HVS). Specific features of the paper consist in, first, considering filters based on the discrete cosine transform (DCT) and, second, analyzing the filter performance locally. Such an analysis is possible due to the structure and peculiarities of the metric PSNR-HVS-M. It is shown that the more advanced DCT-based filter BM3D outperforms a simpler (and faster) conventional DCT-based filter in locally active regions, i.e., neighborhoods of edges and small-sized objects. This conclusion allows accelerating the BM3D filter and can be used in further improvement of the analyzed denoising techniques.
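A sketch of the simple block-DCT filter that serves as the conventional baseline BM3D is compared against: hard-threshold the DCT coefficients of each 8x8 block and invert. The 2.7·sigma threshold is a common rule of thumb, and the non-overlapping blocks are a simplification (practical DCT filters use overlapping windows):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(img, sigma, block=8):
    """Hard-threshold the DCT coefficients of each non-overlapping block.
    Image dimensions are assumed to be multiples of the block size."""
    out = np.zeros_like(img, dtype=np.float64)
    thr = 2.7 * sigma                     # common rule-of-thumb threshold
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            coeffs = dctn(img[i:i + block, j:j + block], norm='ortho')
            coeffs[np.abs(coeffs) < thr] = 0.0     # hard thresholding
            out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = dct_denoise(noisy, sigma=0.05)
```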

2017-02-21
Roy, A., Maity, S. P..  2015.  On segmentation of CS reconstructed MR images. 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR). :1–6.

This paper addresses the issue of magnetic resonance (MR) image reconstruction under the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve image reconstruction at low measurement rates, weighted linear prediction and random noise injection in the unobserved space are performed first, followed by spatial-domain denoising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise and/or missing edges, boundaries, lines, and curvatures, as well as residual noise. The curvelet transform is used for noise removal and edge enhancement through hard thresholding and suppression of the approximate sub-bands, respectively. Finally, genetic algorithm (GA) based clustering is performed for segmentation of the sharpened MR image using a weighted contribution of variance and entropy values. Extensive simulation results are shown to highlight the performance improvement of both the image reconstruction and segmentation stages.

Kiragu, H., Kamucha, G., Mwangi, E..  2015.  A fast procedure for acquisition and reconstruction of magnetic resonance images using compressive sampling. AFRICON 2015. :1–5.

This paper proposes a fast and robust procedure for sensing and reconstruction of sparse or compressible magnetic resonance images based on compressive sampling theory. The algorithm starts with incoherent undersampling of the k-space data of the image using a random matrix. The undersampled data is sparsified using the Haar transformation. The Haar transform coefficients of the k-space data are then reconstructed using the Orthogonal Matching Pursuit algorithm. The reconstructed coefficients are inverse transformed into k-space data and then into the image in the spatial domain. Finally, a median filter is used to suppress recovery noise artifacts. Experimental results show that the proposed procedure greatly reduces the image data acquisition time without significantly reducing image quality. The results also show that the error in the reconstructed image is reduced by median filtering.
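A toy sketch of the core recovery step: undersample a sparse signal with a random matrix and reconstruct it with Orthogonal Matching Pursuit (scikit-learn's implementation). A 1-D synthetic signal stands in for k-space data; the sizes and sparsity level are assumptions:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 10                  # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # incoherent undersampling

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_
print('relative error:', np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```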

2015-05-06
Sun, Bin, Li, Shutao, Sun, Jun.  2014.  Scanned Image Descreening With Image Redundancy and Adaptive Filtering. Image Processing, IEEE Transactions on. 23:3698–3710.

Currently, most electrophotographic printers use halftoning techniques to print continuous tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone images is proposed that considers both printing distortions and halftone patterns. Based on this model, an adaptive filtering based descreening method is proposed to recover high quality contone images from the scanned images. An image redundancy based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel whose parameters are automatically adjusted with the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges to recover a high quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high quality contone images from the scanned images. Compared with the state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.

Wang, Jian, Mei, Lin, Li, Yi, Li, Jian-Ye, Zhao, Kun, Yao, Yuan.  2014.  Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :857–864.

To improve the comprehensive performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbors searching process and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Derived from a theoretical model of invader occlusion, a variable window is presented for adapting the window size to the dynamic environment of each point, together with practical criteria for adaptive determination of the variable window size. Experiments on real range images of multi-line surfaces are carried out, with evaluations in terms of computational complexity and quality assessment, including comparative analysis against a few other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and remove it with strong adaptability thanks to the variable window.

2015-05-04
Dirik, A.E., Sencar, H.T., Memon, N..  2014.  Analysis of Seam-Carving-Based Anonymization of Images Against PRNU Noise Pattern-Based Source Attribution. Information Forensics and Security, IEEE Transactions on. 9:2277-2290.

The availability of sophisticated source attribution techniques raises new concerns about the privacy and anonymity of photographers, activists, and human rights defenders who need to stay anonymous while spreading their images and videos. Recently, the use of seam-carving, a content-aware resizing method, has been proposed to anonymize the source camera of images against the well-known photo-response nonuniformity (PRNU)-based source attribution technique. In this paper, we provide an analysis of the seam-carving-based source camera anonymization method by determining the limits of its performance through two adversarial models. Our analysis shows that the effectiveness of the deanonymization attacks depends on various factors, including the parameters of the seam-carving method, the strength of the PRNU noise pattern of the camera, and an adversary's ability to identify uncarved image blocks in a seam-carved image. Our results show that, for the general case, there should not be many uncarved blocks larger than 50×50 pixels for successful anonymization of the source camera.
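For orientation, a sketch of the PRNU-based source check that such anonymization must defeat: extract a noise residual from the test image and correlate it with the camera's reference pattern. Wavelet denoising stands in for the Wiener-type denoiser used in practice, and plain normalized correlation stands in for peak-to-correlation energy; both substitutions, and the synthetic fingerprint, are assumptions:

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(img):
    # Wavelet denoising as a stand-in for the denoiser used in real PRNU
    # pipelines; the residual carries the sensor's pattern noise.
    return img - denoise_wavelet(img, rescale_sigma=True)

def prnu_correlation(test_img, fingerprint):
    r = noise_residual(test_img).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.mean(r * f))     # high value suggests a camera match

rng = np.random.default_rng(0)
fingerprint = rng.standard_normal((128, 128)) * 0.01   # hypothetical camera PRNU
scene = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))
img = np.clip(scene * (1.0 + fingerprint), 0.0, 1.0)   # multiplicative PRNU model
print(prnu_correlation(img, fingerprint))
```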