Biblio
Software structure analysis is crucial in software testing. Using complex network theory, we present a series of methods and build a two-layer network model for software analysis, including network metric calculation and feature extraction. By identifying the critical functions and reused modules, we can reduce the software testing workload by nearly 80% on average. In addition, the structure network exhibits some interesting features that help in understanding the software more clearly.
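As a rough illustration of the kind of network metrics such an analysis relies on, the following sketch (not the authors' two-layer model) ranks functions of a hypothetical call graph by standard centrality measures using networkx; the call-graph edges are made up for the example.

    # Sketch: centrality metrics on a hypothetical function-call graph.
    # Illustrates the kind of metrics used to rank "critical" functions
    # for focused testing; not the authors' two-layer model.
    import networkx as nx

    # Hypothetical call graph: an edge (a, b) means function a calls function b.
    calls = [("main", "parse"), ("main", "run"), ("run", "parse"),
             ("run", "log"), ("parse", "log"), ("run", "compute")]
    G = nx.DiGraph(calls)

    degree = dict(G.degree())                   # how connected each function is
    betweenness = nx.betweenness_centrality(G)  # how often it lies on call paths
    pagerank = nx.pagerank(G)                   # importance under random walks

    # Rank candidate critical functions by betweenness centrality.
    for f in sorted(G, key=betweenness.get, reverse=True):
        print(f, degree[f], round(betweenness[f], 3), round(pagerank[f], 3))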
The main emphasis of this paper is to develop an approach able to blindly detect and assess perceptual blur degradation in images. The idea relies on a statistical modelling of perceptual blur degradation in the frequency domain using the discrete cosine transform (DCT) and the Just Noticeable Blur (JNB) concept. A machine learning system is then trained on the resulting statistical features to detect the perceptual blur effect in the acquired image and ultimately produces a quality score denoted BBQM, for Blind Blur Quality Metric. The efficiency of the proposed BBQM is tested objectively by evaluating its performance against some existing metrics in terms of correlation with subjective scores.
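A minimal sketch of extracting DCT-domain statistics that react to blur, assuming a grayscale numpy image and scipy; the actual BBQM feature set and its JNB weighting are not reproduced here.

    # Sketch: block-DCT statistics as simple blur-sensitive features
    # (not the actual BBQM feature set or its JNB weighting).
    import numpy as np
    from scipy.fftpack import dct

    def block_dct_features(img, block=8):
        """Mean high-frequency DCT energy ratio over non-overlapping blocks."""
        h, w = img.shape
        ratios = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                b = img[y:y + block, x:x + block].astype(np.float64)
                c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
                energy = c ** 2
                total = energy.sum() + 1e-12
                low = energy[:2, :2].sum()            # DC + lowest AC terms
                ratios.append((total - low) / total)  # high-frequency share
        return float(np.mean(ratios))  # blurred images give smaller values

    img = np.random.rand(64, 64)  # placeholder grayscale image
    print(block_dct_features(img))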
With the recent developments in the field of visual sensor technology, multiple imaging sensors are used in several applications, such as surveillance, medical imaging and machine vision, in order to improve their capabilities. The goal of any efficient image fusion algorithm is to combine the visual information obtained from a number of disparate imaging sensors into a single fused image without introducing distortion or losing information. Existing fusion algorithms employ either the mean or the choose-max fusion rule for selecting the best features for fusion. The choose-max rule distorts constant background information, whereas the mean rule blurs the edges. In this paper, two feature-level fusion schemes based on the Non-Subsampled Contourlet Transform (NSCT) are proposed and compared. In the first method, fuzzy logic is applied to determine the weights assigned to each segmented region from the computed salient-region feature values. The second method employs the Golden Section Algorithm (GSA) to obtain the optimal fusion weight of each region based on its Petrovic metric. The regions are then merged adaptively using the determined weights. Experiments show that the proposed feature-level fusion methods provide better visual quality, with clearer edge information and higher objective quality metrics, than individual multi-resolution methods such as the Dual Tree Complex Wavelet Transform and the NSCT.
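A minimal sketch of the final region-wise merging step, assuming two registered source images and a precomputed segmentation; the NSCT decomposition and the fuzzy-logic/GSA weight estimation are not reproduced, so the weights below are placeholders.

    # Sketch: merging two registered source images with per-region weights.
    # The NSCT decomposition and the fuzzy/GSA weight estimation of the paper
    # are not reproduced; the weights here are illustrative placeholders.
    import numpy as np

    def fuse_regions(img_a, img_b, labels, weights_a):
        """labels: integer region map; weights_a[r] is img_a's weight in region r."""
        fused = np.zeros_like(img_a, dtype=np.float64)
        for r, w in weights_a.items():
            mask = labels == r
            fused[mask] = w * img_a[mask] + (1.0 - w) * img_b[mask]
        return fused

    a = np.random.rand(4, 4)
    b = np.random.rand(4, 4)
    labels = np.array([[0, 0, 1, 1]] * 4)        # two segmented regions
    print(fuse_regions(a, b, labels, {0: 0.7, 1: 0.3}))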
In this paper, we propose a novel regularization term for super-resolution that combines a bilateral total variation (BTV) regularizer with a sparsity prior on the image. The term is composed of the weighted least-squares minimization and the bilateral filter proposed by Elad, with an added ℓ1/2 regularizer; it is referred to as ℓ1/2-BTV. The proposed algorithm restores image details more precisely and eliminates image noise more effectively by introducing the sparsity of the ℓ1/2 regularizer into the traditional BTV regularizer. Experiments were conducted on both simulated and real image sequences. The results show that the proposed algorithm generates high-resolution images of better quality, as measured by both de-noising and edge-preservation metrics, than other methods.
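A minimal sketch of the bilateral total variation term, with an exponent p so that p = 1 gives the usual BTV penalty and p = 0.5 mimics an ℓ1/2-type penalty; this is a rough reading of the idea, not the paper's exact formulation or solver.

    # Sketch: bilateral total variation (BTV) of an image. p = 1 is the usual
    # BTV penalty; p = 0.5 mimics an l1/2-type penalty (a rough reading of
    # the paper, not its exact formulation).
    import numpy as np

    def btv(x, P=2, alpha=0.7, p=1.0):
        total = 0.0
        for l in range(-P, P + 1):
            for m in range(-P, P + 1):
                if l == 0 and m == 0:
                    continue
                shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
                total += alpha ** (abs(l) + abs(m)) * np.sum(np.abs(x - shifted) ** p)
        return total

    x = np.random.rand(32, 32)
    print(btv(x, p=1.0), btv(x, p=0.5))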
The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. Many of the filters employed in these devices are based on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture that exploits "nearest pixel approximation" and rounds off the Gaussian kernel coefficients. The proposed architecture significantly improves the Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, namely edge detection. The simulation results show 72%, 79% and 76% reductions in area, power and delay, respectively, with an acceptable 0.4 dB loss in PSNR compared to the well-known approximate 2D-GSF.
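A software stand-in (not the proposed hardware architecture) showing the effect of rounding Gaussian kernel coefficients to a coarse grid before smoothing; the kernel size, sigma and the number of quantization levels are assumptions.

    # Sketch: a 2D Gaussian smoothing kernel with coefficients rounded to a
    # coarse grid, as a software stand-in for the hardware approximation
    # described in the paper (not the proposed architecture).
    import numpy as np
    from scipy.signal import convolve2d

    def rounded_gaussian_kernel(size=5, sigma=1.0, levels=16):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        k /= k.sum()
        k = np.round(k * levels) / levels       # coarse, rounded coefficients
        return k / max(k.sum(), 1e-12)          # renormalize after rounding

    img = np.random.rand(64, 64)
    smoothed = convolve2d(img, rounded_gaussian_kernel(), mode="same", boundary="symm")
    print(smoothed.shape)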
Image denoising is nowadays a great challenge in the field of image processing. The discrete wavelet transform (DWT) is one of the most powerful and promising approaches in the area of image denoising, but choosing an optimal threshold is the key factor that determines the performance of a DWT-based denoising algorithm. The optimal threshold can be estimated from the image statistics to obtain better denoising performance in terms of image clarity and quality. In this paper we experimentally analyze various methods of denoising sonar images using different thresholding methods (VisuShrink, BayesShrink and NeighShrink) and compare the results in terms of several image quality measures (PSNR, MSE, SSIM and entropy). The results of the proposed method show an improvement in the visual quality of sonar images by suppressing the speckle noise and retaining edge details.
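A minimal sketch of DWT denoising with a VisuShrink-style universal soft threshold using pywt, as a rough illustration of threshold-based wavelet denoising; the exact VisuShrink/BayesShrink/NeighShrink variants compared in the paper are not reproduced.

    # Sketch: DWT denoising with a VisuShrink-style universal soft threshold
    # (pywt); a rough illustration only, not the paper's exact comparison.
    import numpy as np
    import pywt

    def dwt_denoise(img, wavelet="db4", level=1):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Noise estimate from the finest diagonal sub-band (median absolute deviation).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, thr, "soft") for d in detail))
        return pywt.waverec2(out, wavelet)

    noisy = np.random.rand(64, 64)
    print(dwt_denoise(noisy).shape)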
Images acquired and processed in communication and multimedia systems are often noisy; thus, pre-filtering is a typical stage used to remove noise. At this stage, special attention has to be paid to image visual quality. This paper analyzes denoising efficiency from the viewpoint of visual quality improvement using metrics that take the human visual system (HVS) into account. The specific features of the paper are, first, that it considers filters based on the discrete cosine transform (DCT) and, second, that it analyzes filter performance locally. Such an analysis is possible due to the structure and peculiarities of the PSNR-HVS-M metric. It is shown that the more advanced DCT-based filter BM3D outperforms a simpler (and faster) conventional DCT-based filter in locally active regions, i.e., neighborhoods of edges and small-sized objects. This conclusion allows the BM3D filter to be accelerated and can be used to further improve the analyzed denoising techniques.
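A minimal sketch of a block-DCT hard-thresholding denoiser, in the spirit of the "conventional DCT-based filter" used for comparison; BM3D and the PSNR-HVS-M metric are not reproduced, and the threshold factor is an assumption.

    # Sketch: a simple block-DCT hard-thresholding denoiser (the kind of
    # "conventional DCT-based filter" the paper compares against).
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
    def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    def dct_denoise(img, sigma, block=8, k=2.7):
        out = np.zeros_like(img, dtype=np.float64)
        for y in range(0, img.shape[0] - block + 1, block):
            for x in range(0, img.shape[1] - block + 1, block):
                c = dct2(img[y:y + block, x:x + block].astype(np.float64))
                c[np.abs(c) < k * sigma] = 0.0       # hard threshold
                out[y:y + block, x:x + block] = idct2(c)
        return out

    noisy = np.random.rand(64, 64)
    print(dct_denoise(noisy, sigma=0.05).shape)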
Image sharpness measurement is an important part of many image processing applications. Multiple algorithms for measuring image sharpness have been proposed and evaluated in the past, but they were developed with out-of-focus photographs in mind and do not work as well with images taken using a digital microscope. In this article we show the differences between images taken with digital cameras, images taken with a digital microscope and artificially blurred images. The conventional sharpness measures are executed on all of these categories to measure the differences, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
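A minimal sketch of two conventional sharpness measures commonly used as baselines (variance of the Laplacian and Tenengrad); the article's microscope-specific evaluation protocol is not reproduced.

    # Sketch: two conventional sharpness measures often used as baselines
    # (variance of the Laplacian and the Tenengrad gradient measure).
    import numpy as np
    from scipy.ndimage import laplace, sobel

    def variance_of_laplacian(img):
        return float(laplace(img.astype(np.float64)).var())

    def tenengrad(img):
        g = img.astype(np.float64)
        gx, gy = sobel(g, axis=1), sobel(g, axis=0)
        return float(np.mean(gx ** 2 + gy ** 2))

    img = np.random.rand(64, 64)
    print(variance_of_laplacian(img), tenengrad(img))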
A number of blind Image Quality Evaluation Metrics (IQEMs) for Unmanned Aerial Vehicle (UAV) photography applications are presented. The visible-light camera is now widely used in UAV photography because of its vivid imaging effect; unfortunately, outdoor ambient light strongly degrades its imaging output. In this paper, to address this problem, we design and reuse a series of blind IQEMs to analyze the imaging quality of UAV applications. Human Visual System (HVS) based IQEMs, including the image brightness level, the image contrast level, the image noise level, the image edge-blur level, the image texture-intensity level, the image jitter level, and the image flicker level, are all considered in our application. Once these IQEMs are calculated, they can provide a computational reference for subsequent image processing applications such as image understanding and recognition. Preliminary experiments on image enhancement have demonstrated the correctness and validity of the proposed technique.
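A minimal sketch of a few simple no-reference measures of the kind listed above (brightness, contrast and edge strength as a blur proxy); the authors' exact definitions, and the jitter/flicker measures that require video, are not reproduced.

    # Sketch: a few simple no-reference measures of the kind listed in the
    # paper (brightness, contrast, edge/blur level); not the authors' exact
    # definitions, and the jitter/flicker metrics that need video are omitted.
    import numpy as np
    from scipy.ndimage import sobel

    def brightness(img):  return float(img.mean())
    def contrast(img):    return float(img.std())
    def edge_strength(img):
        g = img.astype(np.float64)
        return float(np.mean(np.hypot(sobel(g, axis=1), sobel(g, axis=0))))

    frame = np.random.rand(64, 64)
    print(brightness(frame), contrast(frame), edge_strength(frame))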
Online social networks have emerged as an interesting area for analysis, where each user, with a personalized profile, interacts and shares information with others. Apart from analyzing structural characteristics, the detection of abnormal and anomalous activities in social networks has become the need of the hour. These anomalous activities represent rare and mischievous activities that take place in the network. The graphical structure of social networks has encouraged researchers to use various graph metrics to detect anomalous activities. One measure that proved highly beneficial for this purpose is the brokerage value, which helped detect anomalies with high accuracy. Further application of the measure to different datasets verified that the anomalous behavior detected by the proposed measure compares favorably with the measures already proposed in the Oddball algorithm.
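A minimal sketch of Oddball-style ego-network features that can be used to flag anomalous users; the brokerage measure proposed in the paper is not reproduced, and the karate-club graph stands in for a social network.

    # Sketch: Oddball-style ego-network features (egonet node and edge counts)
    # used to flag anomalous users; the paper's brokerage measure is not
    # reproduced here.
    import networkx as nx

    G = nx.karate_club_graph()                   # stand-in social graph
    for u in list(G)[:5]:
        ego = nx.ego_graph(G, u)                 # u plus its neighbours
        n, e = ego.number_of_nodes(), ego.number_of_edges()
        # Oddball compares e against n (near-star vs near-clique egonets).
        print(u, n, e, round(e / n, 2))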
Steganography is the art of hiding data in such a way that detection of the hidden information is prevented. As the need for security and privacy increases, so does the need for hiding secret data. This paper proposes an enhanced scheme combining 1-2-4 LSB steganography and RSA cryptography for grayscale and color images. For color images, the information is encrypted using the RSA technique and then embedded into the RGB components with 1-2-4 LSB; for grayscale images, the information is encrypted and embedded using LSB after detecting the edges of the gray image. In the experiments, PSNR and MSE are calculated; the peak signal-to-noise ratio is used to assess quality and brightness. This method ensures that the information has been encrypted before it is hidden in the input image. Even if the cipher text is revealed from the input image, an intermediary other than the receiver cannot access the information, as it is in encrypted form.
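A minimal sketch of plain 1-bit LSB embedding of an already-encrypted byte string into a grayscale cover image; the 1-2-4 bit allocation across RGB components and the RSA step are not reproduced.

    # Sketch: plain 1-bit LSB embedding of an already-encrypted byte string
    # into a grayscale image; the paper's 1-2-4 bit allocation and RSA step
    # are not reproduced.
    import numpy as np

    def lsb_embed(img, payload: bytes):
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = img.flatten()
        if bits.size > flat.size:
            raise ValueError("payload too large for cover image")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(img.shape)

    def lsb_extract(stego, n_bytes):
        bits = stego.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = lsb_embed(cover, b"ciphertext")      # payload assumed pre-encrypted
    print(lsb_extract(stego, len(b"ciphertext")))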
This paper addresses the issue of magnetic resonance (MR) image reconstruction in the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve image reconstruction from a low-dimensional measurement space, weighted linear prediction and random noise injection in the unobserved space are performed first, followed by spatial-domain de-noising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise and/or missing edges, boundaries, lines and curvatures, as well as residual noise. The curvelet transform is used to remove noise and enhance edges through hard thresholding and suppression of the approximation sub-bands, respectively. Finally, genetic algorithm (GA) based clustering is performed to segment the sharpened MR image using a weighted contribution of variance and entropy values. Extensive simulation results highlight the performance improvement in both the image reconstruction and the segmentation problems.
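A minimal sketch of hard thresholding of detail coefficients combined with suppression of the approximation band, using a wavelet transform as a stand-in for the curvelet transform (which requires a dedicated toolbox); the threshold and scaling factor are assumptions.

    # Sketch: hard thresholding of detail coefficients plus suppression of the
    # approximation band, with a wavelet transform standing in for the
    # curvelet transform described in the paper.
    import numpy as np
    import pywt

    def transform_sharpen(img, wavelet="db2", thr=0.05, approx_scale=0.8):
        cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
        cA = approx_scale * cA                   # suppress approximation band
        details = tuple(pywt.threshold(c, thr, "hard") for c in (cH, cV, cD))
        return pywt.idwt2((cA, details), wavelet)

    img = np.random.rand(64, 64)
    print(transform_sharpen(img).shape)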
The security of secret data has been a major concern since ancient times. Steganography and cryptography are two techniques used to reduce this security threat. Cryptography is the art of converting a secret message into a form other than human-readable text; steganography is the art of hiding the very existence of a secret message. These techniques are required to protect data from theft over rapidly growing networks. To achieve this, a system is needed that is only minimally perceptible to the human visual system. In this paper a new technique is introduced for data transmission over an insecure channel. The secret data is first compressed using the LZW algorithm, to reduce its size, before being embedded behind a cover medium. After compression, encryption is performed to increase security. Encryption uses a key, which makes it difficult to recover the secret message even if its existence is revealed. The edges of the cover image are then detected using the Canny edge detector, and the embedded secret data is stored there with the help of a hash function. The proposed technique is implemented in MATLAB; its key strengths are a large data-hiding capacity and minimal distortion of the stego image. The technique has been applied to various images, and the results show very little distortion in the altered image.
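A minimal sketch of selecting candidate embedding positions from the Canny edge map of the cover image using OpenCV; the LZW compression, the encryption step and the hash-based placement are not reproduced.

    # Sketch: selecting candidate embedding positions from the Canny edge map
    # of the cover image (OpenCV); the LZW compression, encryption, and
    # hash-based placement used in the paper are not reproduced.
    import cv2
    import numpy as np

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder cover
    edges = cv2.Canny(cover, 100, 200)           # binary edge map
    positions = np.argwhere(edges > 0)           # (row, col) edge pixels

    # Embed one payload bit per edge pixel's LSB (1-bit variant for illustration).
    payload_bits = np.random.randint(0, 2, min(32, len(positions)), dtype=np.uint8)
    stego = cover.copy()
    for (r, c), bit in zip(positions, payload_bits):
        stego[r, c] = (stego[r, c] & 0xFE) | bit
    print(len(positions), "edge pixels available")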
Currently, most electrophotographic printers use halftoning to print continuous-tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone images is proposed that considers both printing distortions and halftone patterns. Based on this model, an adaptive-filtering-based descreening method is proposed to recover high-quality continuous-tone (contone) images from the scanned images. An image-redundancy-based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel whose parameters are automatically adjusted with the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges and recover a high-quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high-quality contone images from the scanned images. Compared with state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.
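A minimal sketch of estimating the dominant screen frequency from the FFT magnitude peak and deriving a Gaussian smoothing width from it; the anisotropic, gradient-adaptive kernel and the edge-preserving post-filter are not reproduced, and the sigma heuristic is an assumption.

    # Sketch: estimate the dominant halftone screen frequency from the FFT
    # magnitude peak and pick a Gaussian smoothing width from it; not the
    # paper's anisotropic, gradient-adaptive filter.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def descreen(scan):
        f = np.abs(np.fft.fftshift(np.fft.fft2(scan - scan.mean())))
        cy, cx = np.array(f.shape) // 2
        f[cy - 2:cy + 3, cx - 2:cx + 3] = 0      # ignore the DC neighbourhood
        py, px = np.unravel_index(np.argmax(f), f.shape)
        freq = np.hypot(py - cy, px - cx) / max(scan.shape)   # cycles per pixel
        sigma = 1.0 / (2 * np.pi * max(freq, 1e-3))           # heuristic width
        return gaussian_filter(scan.astype(np.float64), sigma)

    scan = np.random.rand(128, 128)
    print(descreen(scan).shape)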
Image inpainting is the process of filling an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm that takes care of false edge propagation. We use the classical exemplar-based technique to compute the priority term for each patch. To ensure that the edge content of the nearest-neighbor patch, found by minimizing the L2 distance between patches, matches that of the target patch, we impose the additional constraint that the entropies of the patches be similar; the entropy of a patch acts as a good measure of its edge content. Additionally, we fill the image using overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. Results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects and thin scratches or text written on the image. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably with those obtained by existing techniques.
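A minimal sketch of scoring a candidate source patch by masked L2 distance while requiring similar entropy, as the edge-content constraint suggests; the full exemplar-based priority computation and filling order are not reproduced, and the entropy tolerance is an assumption.

    # Sketch: score candidate source patches by masked L2 distance and reject
    # candidates whose entropy (a proxy for edge content) differs too much.
    import numpy as np

    def patch_entropy(p, bins=32):
        hist, _ = np.histogram(p, bins=bins, range=(0.0, 1.0))
        prob = hist / max(hist.sum(), 1)
        prob = prob[prob > 0]
        return float(-(prob * np.log2(prob)).sum())

    def match_score(target, candidate, known_mask, entropy_tol=0.5):
        d = np.sum(((target - candidate) * known_mask) ** 2)   # masked L2 distance
        if abs(patch_entropy(target[known_mask > 0]) - patch_entropy(candidate)) > entropy_tol:
            return np.inf                                      # dissimilar edge content
        return float(d)

    t = np.random.rand(9, 9); c = np.random.rand(9, 9)
    mask = np.ones((9, 9)); mask[4:, 4:] = 0                   # unknown region
    print(match_score(t, c, mask))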
This paper proposes a fast human detection algorithm for video surveillance in emergencies. First, through background subtraction based on a single Gaussian model combined with frame differencing, we obtain the target mask, which is refined by Gaussian filtering and dilation. Then, interest points on the head are obtained from the target mask and edge detection. Finally, by detecting these points we can track the head and count the number of people from the frequency of moving targets at the same location. Simulation results show that the algorithm can detect moving objects quickly and accurately.
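A minimal sketch of a per-pixel single-Gaussian background model combined with frame differencing and dilation of the foreground mask (OpenCV); the head interest-point detection and people counting are not reproduced, and random frames stand in for video.

    # Sketch: per-pixel single-Gaussian background model plus frame
    # differencing, then dilation of the foreground mask; the head detection
    # and counting steps of the paper are not reproduced.
    import cv2
    import numpy as np

    mean = np.zeros((120, 160), np.float64)
    var = np.full((120, 160), 25.0, np.float64)
    prev = None
    alpha, k = 0.02, 2.5

    for _ in range(10):                          # stand-in for video frames
        frame = np.random.randint(0, 256, (120, 160)).astype(np.float64)
        gauss_fg = np.abs(frame - mean) > k * np.sqrt(var)
        frame_fg = np.abs(frame - prev) > 15 if prev is not None else np.zeros_like(gauss_fg)
        mask = (gauss_fg & frame_fg).astype(np.uint8) * 255
        mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
        # Update the background model only where the pixel looks like background.
        bg = ~gauss_fg
        mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
        var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
        prev = frame
    print(mask.shape)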
Distributed Denial of Service (DDoS) attacks are one of the most challenging network security problems to address. Existing defense mechanisms against DDoS attacks usually filter the attack traffic at the victim side. The problem is exacerbated when the attack packets carry spoofed IP addresses: even if the attacking traffic can be filtered by the victim, the attacker may still block access to the victim by consuming its computing resources or a large portion of the bandwidth to the victim. This paper proposes a Traceback-based Defense against DDoS Flooding Attacks (TDFA) approach to counter this problem. TDFA consists of three main components: Detection, Traceback, and Traffic Control. In this approach, the goal is to place packet filtering as close to the attack source as possible. In doing so, the traffic control component at the victim side sets a limit on the packet-forwarding rate to the victim. This mechanism effectively reduces the rate at which attack packets are forwarded and therefore improves the throughput of the legitimate traffic. Our results, based on real-world data sets, show that TDFA is effective in reducing the attack traffic and preserving the quality of service for legitimate traffic.
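A minimal sketch of a token-bucket rate limiter of the kind the Traffic Control component could use to cap the packet-forwarding rate toward the victim; this is an illustration, not the TDFA implementation.

    # Sketch: a token-bucket rate limiter, illustrating how a traffic-control
    # component might cap the packet-forwarding rate toward the victim; not
    # the TDFA implementation.
    import time

    class TokenBucket:
        def __init__(self, rate_pps, burst):
            self.rate, self.capacity = rate_pps, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True          # forward the packet
            return False             # drop or queue the packet

    limiter = TokenBucket(rate_pps=1000, burst=50)
    print(sum(limiter.allow() for _ in range(200)), "of 200 packets forwarded")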