Biblio

Filters: Keyword is pubcrawl170111
2017-03-08
Moradi, M., Falahati, A., Shahbahrami, A., Zare-Hassanpour, R..  2015.  Improving visual quality in wireless capsule endoscopy images with contrast-limited adaptive histogram equalization. 2015 2nd International Conference on Pattern Recognition and Image Analysis (IPRIA). :1–5.

Wireless Capsule Endoscopy (WCE) is a noninvasive device for the detection of gastrointestinal problems, especially small bowel diseases such as polyps, which cause gastrointestinal bleeding. The quality of WCE images is very important for diagnosis. In this paper, a new method is proposed to improve the quality of WCE images using a Removing Noise and Contrast Enhancement (RNCE) algorithm. The algorithm has been implemented and tested on real images. The quality metrics used for performance evaluation of the proposed method are the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Edge Strength Similarity for Image (ESSIM). The results obtained from SSIM, PSNR, and ESSIM indicate that the implemented RNCE method improves the quality of WCE images significantly.
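
As context for the kind of enhancement the paper targets, the following is an illustrative sketch only: contrast-limited adaptive histogram equalization (the CLAHE of the title) applied to the luminance channel, scored with SSIM and PSNR. It is not the authors' RNCE pipeline; the file name and CLAHE parameters are assumptions.

```python
import cv2
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

frame = cv2.imread("wce_frame.png")                       # hypothetical input image
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)              # enhance luminance only
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))
enhanced = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

# Compare the enhanced frame against the original with two of the metrics named above.
print("SSIM:", structural_similarity(frame, enhanced, channel_axis=2))
print("PSNR:", peak_signal_noise_ratio(frame, enhanced))
```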

Gómez-Valverde, J. J., Ortuño, J. E., Guerra, P., Hermann, B., Zabihian, B., Rubio-Guivernau, J. L., Santos, A., Drexler, W., Ledesma-Carbayo, M. J..  2015.  Evaluation of speckle reduction with denoising filtering in optical coherence tomography for dermatology. 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). :494–497.

Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work, we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D), used as 2D denoising filters, and the Wavelet Multiframe algorithm, which considers adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.
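
For reference, the signal-to-noise and contrast-to-noise ratios mentioned above can be estimated from manually chosen regions of a B-scan. This is a minimal sketch under assumed region coordinates and file name, not the paper's own ROI selection or evaluation code.

```python
import numpy as np
import cv2

bscan = cv2.imread("oct_bscan.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

signal = bscan[100:140, 200:260]      # region inside the tissue of interest (assumed)
background = bscan[10:50, 200:260]    # region containing only noise (assumed)

snr = signal.mean() / background.std()
cnr = abs(signal.mean() - background.mean()) / np.sqrt(
    0.5 * (signal.var() + background.var()))
print(f"SNR = {snr:.2f}, CNR = {cnr:.2f}")
```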

Marburg, A., Hayes, M. P..  2015.  SMARTPIG: Simultaneous mosaicking and resectioning through planar image graphs. 2015 IEEE International Conference on Robotics and Automation (ICRA). :5767–5774.

This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization that decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure, allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous localization and mapping (SLAM). Smartpig is demonstrated on a set of 144 high-resolution aerial images and evaluated with a number of metrics against ground control.

Mukherjee, M., Edwards, J., Kwon, H., Porta, T. F. L..  2015.  Quality of information-aware real-time traffic flow analysis and reporting. 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops). :69–74.

In this paper we present a framework for Quality of Information (QoI)-aware networking. QoI quantifies how useful a piece of information is for a given query or application. Herein, we present a general QoI model, as well as a specific example instantiation that carries through the rest of the paper. In this model, we focus on the tradeoffs between precision and accuracy. As a motivating example, we look at traffic video analysis. We present simple algorithms for deriving various traffic metrics from video, such as vehicle count and average speed. We implement these algorithms both on a desktop workstation and on a less-capable mobile device. We then show how QoI-awareness enables end devices to make intelligent decisions about how to process queries and form responses, such that large bandwidth savings are realized.
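
One plausible way to derive a per-frame vehicle count of the kind described is background subtraction plus blob counting. The sketch below is not the authors' algorithm; the video path, area threshold, and subtractor parameters are illustrative assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove small speckles before counting connected blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > 500]
    print("vehicles in frame:", len(vehicles))
cap.release()
```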

Lee, K., Kolsch, M..  2015.  Shot Boundary Detection with Graph Theory Using Keypoint Features and Color Histograms. 2015 IEEE Winter Conference on Applications of Computer Vision. :1177–1184.

The TRECVID report of 2010 [14] evaluated video shot boundary detectors as achieving "excellent performance on [hard] cuts and gradual transitions." Unfortunately, while re-evaluating the state of the art of shot boundary detection, we found that these detectors still need to be improved, because the characteristics of consumer-produced videos have changed significantly since the introduction of mobile gadgets such as smartphones, tablets, and cameras intended for outdoor activities, and because video editing software has been evolving rapidly. In this paper, we evaluate the best-known approach on a contemporary, publicly accessible corpus, and present a method that achieves better performance, particularly on soft transitions. Our method combines color histograms with keypoint feature matching to extract comprehensive frame information. Two similarity metrics, one for individual frames and one for sets of frames, are defined based on graph cuts. These metrics are formed into temporal feature vectors on which an SVM is trained to perform the final segmentation. The evaluation on said "modern" corpus of relatively short videos yields a performance of 92% recall (at 89% precision) overall, compared to 69% (91%) for the best-known method.
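
As a simplified illustration of the histogram side of such detectors (not the paper's graph-cut/SVM pipeline), hard cuts can be flagged when the frame-to-frame color-histogram correlation drops below a threshold. The video path and threshold value here are assumptions.

```python
import cv2

cap = cv2.VideoCapture("clip.mp4")
prev_hist = None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < 0.6:                 # arbitrary cut threshold
            print("possible cut at frame", frame_idx)
    prev_hist = hist
    frame_idx += 1
cap.release()
```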

Sandic-Stankovic, D., Kukolj, D., Callet, P. Le.  2015.  DIBR synthesized image quality assessment based on morphological wavelets. 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX). :1–6.

Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure, MW-PSNR, based on morphological wavelet decomposition is proposed to tackle the evaluation of DIBR synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.

Xu, R., Naman, A. T., Mathew, R., Rüfenacht, D., Taubman, D..  2015.  Motion estimation with accurate boundaries. 2015 Picture Coding Symposium (PCS). :184–188.

This paper investigates several techniques that increase the accuracy of motion boundaries in the estimated motion fields of a local dense estimation scheme. In particular, we examine two matching metrics: one is MSE in the image domain, and the other is a recently proposed multiresolution metric that has been shown to produce more accurate motion boundaries. We also examine several different edge-preserving filters. The edge-aware moving average filter, proposed in this paper, takes an input image and the result of an edge detection algorithm, and outputs an image that is smooth except at the detected edges. Compared to the adoption of edge-preserving filters, we find that matching metrics play a more important role in estimating accurate and compressible motion fields. Nevertheless, the proposed filter may provide further improvements in the accuracy of the motion boundaries. These findings can be very useful for a number of recently proposed scalable interactive video coding schemes.

Sandic-Stankovic, D., Kukolj, D., Callet, P. Le.  2015.  DIBR synthesized image quality assessment based on morphological pyramids. 2015 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). :1–4.

Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels, and there is an inherent congruence between the morphological pyramid decomposition scheme and human visual perception. In this paper, a multi-scale measure, the morphological pyramid peak signal-to-noise ratio (MP-PSNR), based on morphological pyramid decomposition is proposed for the evaluation of DIBR synthesized images. It is shown that MP-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.
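
A loose sketch of the idea, not the authors' exact decomposition: compare a reference view and a DIBR-synthesized view level by level in a small morphological (opening-based) pyramid and pool the per-level PSNR values. File names, kernel size, and the pooling rule are assumptions.

```python
import cv2
import numpy as np

def morph_pyramid(img, levels=3, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    pyr = [img]
    for _ in range(levels):
        img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)[::2, ::2]
        pyr.append(img)
    return pyr

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)      # placeholder paths
syn = cv2.imread("synthesized.png", cv2.IMREAD_GRAYSCALE)
scores = [psnr(r, s) for r, s in zip(morph_pyramid(ref), morph_pyramid(syn))]
print("per-level PSNR:", scores, "pooled:", np.mean(scores))
```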

Kerouh, F., Serir, A..  2015.  A no reference perceptual blur quality metric in the DCT domain. 2015 3rd International Conference on Control, Engineering Information Technology (CEIT). :1–6.

Blind objective metrics that automatically quantify the perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. As blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to analyse, perceptually, the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, relying on the Human Visual System. Comprehensive testing demonstrates that the proposed Perceptual Blind Blur Quality Metric (PBBQM) shows good consistency with subjective quality scores as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
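
A simple illustration of the underlying observation, not the PBBQM itself: the share of energy in high-frequency DCT coefficients drops as an image gets blurrier. The block size, frequency cut-off, and file name are arbitrary choices for this sketch.

```python
import cv2
import numpy as np

def high_freq_energy_ratio(gray, block=8, cutoff=4):
    h, w = gray.shape
    gray = np.float32(gray[: h - h % block, : w - w % block])
    total, high = 0.0, 0.0
    for y in range(0, gray.shape[0], block):
        for x in range(0, gray.shape[1], block):
            coeffs = cv2.dct(gray[y : y + block, x : x + block])
            energy = coeffs ** 2
            total += energy.sum()
            # High-frequency part: any coefficient with row or column index >= cutoff.
            high += energy[cutoff:, :].sum() + energy[:cutoff, cutoff:].sum()
    return high / total

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)    # placeholder path
print("sharp  :", high_freq_energy_ratio(img))
print("blurred:", high_freq_energy_ratio(cv2.GaussianBlur(img, (9, 9), 3)))
```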

Lokhande, S. S., Dawande, N. A..  2015.  A Survey on Document Image Binarization Techniques. 2015 International Conference on Computing Communication Control and Automation. :742–746.

Document image binarization is performed to segment foreground text from the background in badly degraded document images. In this paper, a comprehensive survey has been conducted on some state-of-the-art document image binarization techniques. After describing these techniques, their performance has been compared with the help of various evaluation metrics that are widely used for document image analysis and recognition. On the basis of this comparison, it has been found that the adaptive contrast method is the best-performing method. Accordingly, the partial results obtained for the adaptive contrast method are stated, and the mathematical model and block diagram of the adaptive contrast method are described in detail.
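
For context, a minimal sketch of binarization and one of the standard evaluation metrics mentioned (F-measure against a ground truth mask), using locally adaptive thresholding rather than any of the surveyed algorithms. File names and threshold parameters are placeholders.

```python
import cv2
import numpy as np

gray = cv2.imread("degraded_page.png", cv2.IMREAD_GRAYSCALE)
# Locally adaptive (Gaussian-weighted) threshold: block size 25, offset 10.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 25, 10)

gt = cv2.imread("ground_truth.png", cv2.IMREAD_GRAYSCALE)
pred_text = binary == 0          # text pixels are black in both images
gt_text = gt == 0
tp = np.logical_and(pred_text, gt_text).sum()
precision = tp / max(pred_text.sum(), 1)
recall = tp / max(gt_text.sum(), 1)
print("F-measure:", 2 * precision * recall / max(precision + recall, 1e-9))
```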

Behjat-Jamal, S., Demirci, R., Rahkar-Farshi, T..  2015.  Hybrid bilateral filter. 2015 International Symposium on Computer Science and Software Engineering (CSSE). :1–6.

A variety of methods for image noise reduction have been developed so far. Most of them successfully remove noise, but their edge-preserving capabilities are weak. The bilateral image filter is therefore helpful in dealing with this problem. Nevertheless, its performance depends on spatial and photometric parameters that are chosen by the user. Conventionally, the geometric weight is calculated from the distance between neighboring pixels and the photometric weight is calculated from the color components of neighboring pixels; the weights range between zero and one. In this paper, geometric weights are estimated by fuzzy metrics and photometric weights are estimated by a fuzzy rule-based system that does not require any predefined parameters. Experimental results for the conventional bilateral filter, the fuzzy bilateral filter, and the proposed approach are included.
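
The conventional weights described above can be sketched as follows (this is the classic bilateral filter, not the proposed fuzzy variant): each neighbor's weight is the product of a geometric term based on spatial distance and a photometric term based on intensity difference, both in (0, 1].

```python
import numpy as np

def bilateral_pixel(window, sigma_s=2.0, sigma_r=25.0):
    """Filter the center pixel of a square grayscale window."""
    r = window.shape[0] // 2
    y, x = np.mgrid[-r : r + 1, -r : r + 1]
    geometric = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_s ** 2))
    photometric = np.exp(-((window - window[r, r]) ** 2) / (2 * sigma_r ** 2))
    weights = geometric * photometric
    return np.sum(weights * window) / np.sum(weights)

# The bright outlier (200) barely affects the result because its photometric weight is tiny.
patch = np.array([[10, 12, 11], [13, 200, 12], [11, 10, 14]], dtype=float)
print(bilateral_pixel(patch, sigma_s=1.0, sigma_r=20.0))
```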

Kesiman, M. W. A., Prum, S., Sunarya, I. M. G., Burie, J. C., Ogier, J. M..  2015.  An analysis of ground truth binarized image variability of palm leaf manuscripts. 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA). :229–233.

As a very valuable cultural heritage, palm leaf manuscripts offer a new challenge for document analysis systems due to the specific characteristics of the physical support of the manuscript. With the aim of finding an optimal binarization method for palm leaf manuscript images, creating a new ground truth binarized image is a necessary step in the document analysis of palm leaf manuscripts. However, because of the human intervention in the ground-truthing process, an important remark about the effect of subjectivity on the construction of the ground truth binarized image has to be analysed and reported. In this paper, we present an experiment in real conditions to analyse the existence of human subjectivity in the construction of ground truth binarized images of palm leaf manuscripts and to measure the ground truth variability quantitatively with several binarization evaluation metrics.

Chauhan, A. S., Sahula, V..  2015.  High density impulsive Noise removal using decision based iterated conditional modes. 2015 International Conference on Signal Processing, Computing and Control (ISPCC). :24–29.

Salt-and-pepper noise is very common during transmission of images through a noisy channel or due to impairment in the camera sensor module. For noise removal, methods with various two-stage cascade configurations have been proposed in the literature. These methods, which can remove low-density impulse noise, are not suited for high-density noise in terms of visual performance. We propose an efficient method for removal of both high- and low-density impulse noise. Our approach is based on a novel extension of iterated conditional modes (ICM). It is a cascade configuration of two stages: noise detection and noise removal. The noise detection process is an iterative decision-based approach, while the noise removal process is based on iterative noisy pixel estimation. Using the improved approach, images with up to 95% corruption have been recovered with good results, while images with 98% corruption have been recovered with quite satisfactory results. To benchmark image quality, we have considered various metrics such as PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Square Error), and SSIM (Structural Similarity Index Measure).
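
A simplified decision-based filter is sketched below for intuition only; the paper's method extends ICM and is more sophisticated. Pixels at the extreme values 0 or 255 are treated as noise candidates and replaced by the median of the uncorrupted neighbors in a 3x3 window (for very high densities the pass would be iterated).

```python
import numpy as np

def remove_salt_pepper(img):
    out = img.astype(np.float64)
    noisy = (img == 0) | (img == 255)            # detection stage: extreme-valued pixels
    padded = np.pad(out, 1, mode="edge")
    for y, x in zip(*np.nonzero(noisy)):
        window = padded[y : y + 3, x : x + 3]
        clean = window[(window != 0) & (window != 255)]
        if clean.size:                           # otherwise leave for a later iteration
            out[y, x] = np.median(clean)         # removal stage: estimate from clean neighbors
    return out.astype(img.dtype)
```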

Mao, Y., Yang, J., Zhu, B., Yang, Y..  2015.  A new mesh simplification algorithm based on quadric error metric. 2015 IEEE 5th International Conference on Consumer Electronics - Berlin (ICCE-Berlin). :463–466.

This paper proposes an improved mesh simplification algorithm based on quadric error metrics (QEM) to efficiently process the huge amount of data in 3D image processing. The method makes full use of the geometric information around vertices to prevent model edges from being simplified away and to preserve details. Meanwhile, the differences between the simplified triangles and equilateral triangles are added as error weights to decrease the likelihood of narrow triangles and thus avoid abrupt visual changes. Experiments show that our algorithm has clear advantages in time cost and better preserves the visual characteristics of the model, making it suitable for most real-time interactive image processing problems.
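
The core QEM computation the paper builds on can be sketched as follows (the paper adds extra geometric weights on top of this standard formulation). Each plane p = (a, b, c, d) with ax + by + cz + d = 0 contributes the quadric p pᵀ, and the cost of placing a vertex at homogeneous position v is vᵀ Q v.

```python
import numpy as np

def plane_quadric(a, b, c, d):
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, vertex):
    v = np.append(np.asarray(vertex, dtype=float), 1.0)   # homogeneous coordinates
    return float(v @ Q @ v)

# Quadric of a vertex = sum of the quadrics of its adjacent triangle planes.
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(0, 1, 0, 0)   # planes z = 0 and y = 0
print(vertex_error(Q, (0.0, 0.1, 0.2)))   # cost of moving the vertex off both planes
```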

Farias, F. d S., Waldir, S. S., Filho, E. B. de Lima, Melo, W. C..  2015.  Automated content detection on TVs and computer monitors. 2015 IEEE 4th Global Conference on Consumer Electronics (GCCE). :177–178.

In the manufacturing process of systems that use screens, for example TVs, computer monitors, or notebooks, image inspection is one of the most important quality tests. Due to the increasing complexity of these systems, manual inspection has become complex and slow. Thus, automatic inspection is an attractive alternative. In this paper, we present an automatic image inspection system that uses edge and line detection algorithms, rectangle recognition, and image comparison metrics. The experiments, performed on 504 images (TVs, computer monitors, and notebooks), demonstrate that the system has good performance.

Ridel, D. A., Shinzato, P. Y., Wolf, D. F..  2015.  A Clustering-Based Obstacle Segmentation Approach for Urban Environments. 2015 12th Latin American Robotic Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR). :265–270.

The detection of obstacles is a fundamental issue in autonomous navigation, as it is the main key to collision prevention. This paper presents a method for the segmentation of general obstacles by stereo vision, with no need for dense disparity maps or assumptions about the scenario. A sparse set of points is selected according to a local spatial condition and then clustered as a function of its neighborhood, disparity values, and a cost associated with the possibility of each point being part of an obstacle. The method was evaluated on hand-labeled images from the KITTI object detection benchmark, and the precision and recall metrics were calculated. The quantitative and qualitative results were satisfactory in scenarios with different types of objects.

Cook, B., Graceffo, S..  2015.  Semi-automated land/water segmentation of multi-spectral imagery. OCEANS 2015 - MTS/IEEE Washington. :1–7.

Segmentation of land and water regions is necessary in many applications involving analysis of remote sensing imagery. Not only is manual segmentation of these regions prone to considerable subjective variability, but the large volume of imagery collected by modern platforms makes manual segmentation extremely tedious to perform, particularly in applications that require frequent re-measurement. This paper examines a robust, semi-automated approach that utilizes simple and efficient machine learning algorithms to perform supervised classification of multi-spectral image data into land and water regions. By combining the four wavelength bands widely available on imaging platforms such as IKONOS, QuickBird, and GeoEye-1 with basic texture metrics, high-quality segmentation can be achieved. An efficient workflow was created by constructing a Graphical User Interface (GUI) for these machine learning algorithms.
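
A hedged sketch of the general approach (supervised per-pixel classification on the four spectral bands plus a simple local-texture feature); it is not the paper's exact feature set, classifier, or GUI, and the tiny synthetic arrays stand in for labeled imagery.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(bands):
    """bands: (H, W, 4) array of blue/green/red/NIR reflectances."""
    texture = generic_filter(bands[..., 3], np.std, size=5)   # local NIR variability
    stack = np.dstack([bands, texture])
    return stack.reshape(-1, stack.shape[-1])

# Tiny synthetic example standing in for labeled training imagery.
rng = np.random.default_rng(0)
bands = rng.random((32, 32, 4))
labels = (bands[..., 3] > 0.5).astype(int)        # pretend 1 = land, 0 = water

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(pixel_features(bands), labels.ravel())
land_water_map = clf.predict(pixel_features(bands)).reshape(32, 32)
```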

Chang, C., Liu, F., Liu, K..  2015.  Software Structure Analysis Using Network Theory. 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC). :519–522.

Software structure analysis is crucial in software testing. Using complex network theory, we present a series of methods and build a two-layer network model for software analysis, including network metrics calculation and feature extraction. By identifying the critical functions and reused modules, we can reduce the software testing workload by nearly 80% on average. In addition, the structure network shows some interesting features that help in understanding the software more clearly.
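
An illustrative sketch, not the paper's two-layer model: treat a static call graph as a directed network and rank functions by simple centrality metrics to surface candidates for focused testing. The edge list is a made-up example.

```python
import networkx as nx

call_graph = nx.DiGraph([
    ("main", "parse_args"), ("main", "load_config"), ("main", "run"),
    ("run", "process"), ("process", "validate"), ("process", "write_output"),
    ("load_config", "validate"),
])

degree = nx.degree_centrality(call_graph)
betweenness = nx.betweenness_centrality(call_graph)
for fn in sorted(call_graph, key=betweenness.get, reverse=True):
    print(f"{fn:15s} degree={degree[fn]:.2f} betweenness={betweenness[fn]:.2f}")
```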

Kerouh, F., Serir, A..  2015.  Perceptual blur detection and assessment in the DCT domain. 2015 4th International Conference on Electrical Engineering (ICEE). :1–4.

The main emphasis of this paper is to develop an approach able to blindly detect and assess perceptual blur degradation in images. The idea is based on statistical modelling of perceptual blur degradation in the frequency domain using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept. A machine learning system is then trained using the considered statistical features to detect the perceptual blur effect in the acquired image and eventually produce a quality score, denoted BBQM for Blind Blur Quality Metric. The efficiency of the proposed BBQM is tested objectively by evaluating its performance against some existing metrics in terms of correlation with subjective scores.

Nirmala, D. E., Vaidehi, V..  2015.  Non-subsampled contourlet based feature level fusion using fuzzy logic and golden section algorithm for multisensor imaging systems. 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :110–115.

With the recent developments in the field of visual sensor technology, multiple imaging sensors are used in several applications, such as surveillance, medical imaging, and machine vision, in order to improve their capabilities. The goal of any efficient image fusion algorithm is to combine the visual information obtained from a number of disparate imaging sensors into a single fused image without the introduction of distortion or loss of information. Existing fusion algorithms employ either the mean or the choose-max fusion rule for selecting the best features for fusion. The choose-max rule distorts constant background information, whereas the mean rule blurs the edges. In this paper, two feature-level fusion schemes based on the Non-Subsampled Contourlet Transform (NSCT) are proposed and compared. In the first method, fuzzy logic is applied to determine the weights to be assigned to each segmented region using the computed salient region feature values. The second method employs the Golden Section Algorithm (GSA) to achieve the optimal fusion weights for each region based on its Petrovic metric. The regions are merged adaptively using the weights determined. Experiments show that the proposed feature-level fusion methods provide better visual quality, with clearer edge information and higher objective quality metrics, than individual multi-resolution-based methods such as the Dual Tree Complex Wavelet Transform and NSCT.

Liu, Weijian, Chen, Zeqi, Chen, Yunhua, Yao, Ruohe.  2015.  An ℓ1/2-BTV regularization algorithm for super-resolution. 2015 4th International Conference on Computer Science and Network Technology (ICCSNT). 01:1274–1281.

In this paper, we propose a novel regularization term for super-resolution by combining a bilateral total variation (BTV) regularizer and a sparsity prior model on the image. The term is composed of the weighted least-squares minimization and the bilateral filter proposed by Elad, with the addition of an ℓ1/2 regularizer, and is referred to as ℓ1/2-BTV. The proposed algorithm restores image details more precisely and eliminates image noise more effectively by introducing the sparsity of the ℓ1/2 regularizer into the traditional bilateral total variation (BTV) regularizer. Experiments were conducted on both simulated and real image sequences. The results show that the proposed algorithm generates high-resolution images of better quality, as measured by both de-noising and edge-preservation metrics, than other methods.
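
For orientation, a sketch of a BTV-style penalty that the proposed term builds on (the ℓ1/2 extension is not shown). This version sums an ℓ1 difference over all nonzero shifts up to radius P with geometric decay alpha; the exact form, P, and alpha in the paper may differ.

```python
import numpy as np

def btv(image, P=2, alpha=0.7):
    """Sum over shifts (l, m) of alpha**(|l|+|m|) * || X - shift(X, l, m) ||_1."""
    total = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(image, l, axis=0), m, axis=1)
            total += alpha ** (abs(l) + abs(m)) * np.abs(image - shifted).sum()
    return total

print(btv(np.eye(8)))
```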

Jaiswal, A., Garg, B., Kaushal, V., Sharma, G. K..  2015.  SPAA-Aware 2D Gaussian Smoothing Filter Design Using Efficient Approximation Techniques. 2015 28th International Conference on VLSI Design. :333–338.

The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. The filters employed in these devices are mainly based on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture by exploiting "nearest pixel approximation" and rounding off the Gaussian kernel coefficients. The proposed architecture significantly improves Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, namely edge detection. The simulation results show 72%, 79%, and 76% reductions in area, power, and delay, respectively, with an acceptable 0.4 dB loss in PSNR as compared to the well-known approximate 2D-GSF.

Saurabh, A., Kumar, A., Anitha, U..  2015.  Performance analysis of various wavelet thresholding techniques for despeckling of sonar images. 2015 3rd International Conference on Signal Processing, Communication and Networking (ICSCN). :1–7.

Image denoising is nowadays a great challenge in the field of image processing. The discrete wavelet transform (DWT) is one of the powerful and promising approaches in the area of image denoising, but fixing an optimal threshold is the key factor that determines the performance of a DWT-based denoising algorithm. The optimal threshold can be estimated from the image statistics to obtain better denoising performance in terms of image clarity or quality. In this paper, we experimentally analyze various methods of denoising sonar images using several thresholding methods (VisuShrink, BayesShrink, and NeighShrink) and compare the results in terms of various image quality parameters (PSNR, MSE, SSIM, and entropy). The results of the proposed method show that there is an improvement in the visual quality of sonar images by suppressing the speckle noise and retaining edge details.
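
A generic wavelet soft-thresholding denoiser with the universal (VisuShrink-style) threshold is sketched below for illustration; it is not an exact reimplementation of any of the compared methods, and the wavelet and decomposition level are arbitrary choices. It uses the PyWavelets package.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Noise scale estimated from the finest diagonal detail band (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))        # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```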

Rubel, O., Ponomarenko, N., Lukin, V., Astola, J., Egiazarian, K..  2015.  HVS-based local analysis of denoising efficiency for DCT-based filters. 2015 Second International Scientific-Practical Conference Problems of Infocommunications Science and Technology (PIC S&T). :189–192.

Images acquired and processed in communication and multimedia systems are often noisy. Thus, pre-filtering is a typical stage for noise removal, and at this stage special attention has to be paid to image visual quality. This paper analyzes denoising efficiency from the viewpoint of visual quality improvement using metrics that take the human visual system (HVS) into account. Specific features of the paper consist in, first, considering filters based on the discrete cosine transform (DCT) and, second, analyzing the filter performance locally. Such an analysis is possible due to the structure and peculiarities of the metric PSNR-HVS-M. It is shown that the more advanced DCT-based filter BM3D outperforms a simpler (and faster) conventional DCT-based filter in locally active regions, i.e., neighborhoods of edges and small-sized objects. This conclusion allows accelerating the BM3D filter and can be used for further improvement of the analyzed denoising techniques.

Windisch, G., Kozlovszky, M..  2015.  Image sharpness metrics for digital microscopy. 2015 IEEE 13th International Symposium on Applied Machine Intelligence and Informatics (SAMI). :273–276.

Image sharpness measurement is an important part of many image processing applications. Multiple algorithms have been proposed and evaluated in the past to measure image sharpness, but they were developed with out-of-focus photographs in mind and do not work as well with images taken using a digital microscope. In this article we show the difference between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all these categories to measure the difference, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
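
Two widely used focus/sharpness measures (variance of the Laplacian and Tenengrad) are sketched below as examples of the "conventional sharpness measures" discussed; the paper's full measurement protocol and microscope image set are not reproduced here, and the file path is a placeholder.

```python
import cv2
import numpy as np

def variance_of_laplacian(gray):
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.mean(gx ** 2 + gy ** 2)

img = cv2.imread("microscope_slide.png", cv2.IMREAD_GRAYSCALE)
print("Laplacian variance:", variance_of_laplacian(img))
print("Tenengrad         :", tenengrad(img))
```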