Bibliography
Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact in OCT images and can limit interpretation and detection capabilities. In this work we evaluate several denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D) as 2D denoising filters, together with the Wavelet Multiframe algorithm applied to adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. These results suggest that 2D filtering followed by a wavelet-based compounding algorithm can significantly reduce speckle, increasing the signal-to-noise and contrast-to-noise ratios without the need for extra acquisitions of the same frame.
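The wavelet-based speckle reduction mentioned above can be illustrated with a minimal sketch. Since speckle is approximately multiplicative, one common approach (not necessarily the exact filters benchmarked in this abstract) is to log-transform the image, soft-threshold the detail bands of a wavelet decomposition, and map back; the threshold `t` below is an assumed tuning parameter, and the single-level Haar transform is written out by hand:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0            # vertical averages
    d = (img[0::2] - img[1::2]) / 2.0            # vertical details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(x, t):
    """Soft thresholding of wavelet detail coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def despeckle(img, t):
    """Log-transform (speckle is multiplicative), threshold details, map back."""
    ll, lh, hl, hh = haar2d(np.log1p(img))
    return np.expm1(ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t)))
```

A multiframe compounding stage, as in the abstract, would additionally average such estimates across adjacent B-scans.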
Because of noise, edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. To address this problem, this paper proposes a multi-direction edge detection operator for noisy images, designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides a more favorable treatment of directions, allowing the new operator to detect edges in different directions and overcoming the directional limitation of the traditional operator. On the other hand, the single-pixel edge images obtained in different directions can be fused, so their edge information complements each other. Experimental results indicate that the new operator is superior to traditional ones in terms of both edge detection effectiveness and noise rejection.
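One illustrative reading of the shear-based idea (a sketch, not the paper's exact operator): shear each row of the image by an integer offset, apply a standard directional kernel (Sobel is assumed here), undo the shear on the response, and fuse the directional maps by a per-pixel maximum:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv2(img, k):
    """'Same'-size 2D correlation with zero padding (no SciPy dependency)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def shear_rows(img, s):
    """Integer shear: row r is rolled horizontally by s*r pixels."""
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        out[r] = np.roll(img[r], s * r)
    return out

def multi_direction_edges(img, shears=(-1, 0, 1)):
    """Detect edges in several sheared copies and fuse by per-pixel maximum."""
    maps = []
    for s in shears:
        e = np.abs(conv2(shear_rows(img, s), SOBEL_X))
        maps.append(shear_rows(e, -s))       # undo the shear before fusing
    return np.maximum.reduce(maps)
```

The fusion by maximum is one simple rule; the paper's fusion of single-pixel edge maps may differ.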
In this paper, the principle of the kernel extreme learning machine (ELM) is analyzed. On that basis, we introduce a multi-scale wavelet kernel extreme learning machine classifier and apply it to electroencephalographic (EEG) signal feature classification. Experiments show that the classifier achieves excellent performance.
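A kernel ELM of the kind described can be sketched in a few lines. In the standard kernel-ELM formulation the output weights solve a regularized linear system, beta = (I/C + K)^-1 T; the wavelet kernel below (a Morlet-style product kernel with an assumed single scale `a`) is one common choice, though the abstract's multi-scale construction is not spelled out there:

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-style wavelet kernel: prod over dims of cos(1.75 d/a) exp(-d^2/(2 a^2))."""
    D = X[:, None, :] - Y[None, :, :]        # pairwise differences (n, m, dims)
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D ** 2 / (2 * a ** 2)), axis=2)

class KernelELM:
    """Kernel ELM: output weights from a regularized kernel system (no hidden-node tuning)."""
    def fit(self, X, T, a=1.0, C=100.0):
        self.X, self.a = X, a
        K = wavelet_kernel(X, X, a)
        self.beta = np.linalg.solve(np.eye(len(X)) / C + K, T)
        return self

    def predict(self, Xnew):
        return wavelet_kernel(Xnew, self.X, self.a) @ self.beta
```

For EEG work the rows of `X` would be feature vectors extracted per trial; class labels are one-hot columns of `T`, and the predicted class is the argmax of the output.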
Most Depth Image Based Rendering (DIBR) techniques produce synthesized images containing non-uniform geometric distortions that affect edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure (MW-PSNR), based on morphological wavelet decomposition, is proposed to tackle the evaluation of DIBR-synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.
A blind objective metric that automatically quantifies the perceived image quality degradation introduced by blur would be highly beneficial for current digital imaging systems. In this paper we present a perceptual no-reference blur assessment metric developed in the frequency domain. Since blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to analyse, perceptually, the impact of blur distortion on the high frequencies using the Discrete Cosine Transform (DCT) and the just-noticeable blur (JNB) concept, which relies on the Human Visual System. Comprehensive testing demonstrates the proposed Perceptual Blind Blur Quality Metric's (PBBQM) good consistency with subjective quality scores, as well as satisfactory performance compared with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
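The core of a frequency-domain blur measure, stripped of the perceptual JNB weighting the abstract describes, is the observation that blur drains energy from the high-frequency DCT band. A minimal sketch, with an assumed band cutoff and a hand-built orthonormal DCT-II matrix:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * k * (2 * np.arange(n)[None, :] + 1) / (2 * n))
    M[0] /= np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def high_freq_energy_ratio(img, cutoff=0.5):
    """Share of spectral energy in the high-frequency band (DC excluded).
    Blurrier images score lower."""
    h, w = img.shape
    C = dct_matrix(h) @ img @ dct_matrix(w).T    # 2D DCT coefficients
    u, v = np.meshgrid(np.arange(w) / w, np.arange(h) / h)
    E = C ** 2
    E[0, 0] = 0.0                                # ignore the DC term
    hi = E[(u + v) / 2 >= cutoff].sum()
    return hi / max(E.sum(), 1e-12)
```

The full PBBQM additionally weights these components perceptually; this ratio only captures the underlying spectral cue.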
More and more advanced persistent threat attacks have occurred since 2009. These attacks usually use more than one zero-day exploit to achieve their goal. Most of the time, the target computer executes a malicious program after the user opens an infected compound document. Original detection methods become inefficient when attackers use a zero-day exploit to structure these compound documents. Inspired by detection methods based on structural entropy, we apply wavelet analysis to a malicious-document detection system. In our research, we use wavelet analysis to extract features from the raw data; these features are then used to detect whether a compound document has malicious code embedded in it.
In view of the difficulty of selecting the wavelet basis and decomposition level for wavelet-based de-noising methods, this paper proposes an adaptive de-noising method based on Ensemble Empirical Mode Decomposition (EEMD). An autocorrelation/cross-correlation method is used to adaptively find the signal-to-noise boundary layer of the EEMD. The noise-dominant layers are then filtered directly, while the signal-dominant layers are threshold de-noised. Finally, the de-noised signal is reconstructed from the de-noised components of each layer. The method solves the mode-mixing problem of Empirical Mode Decomposition (EMD) by using EEMD and combines it with the advantages of wavelet thresholding. In this paper we focus on analyzing and verifying the correctness of the adaptive determination of the noise-dominant layers. Simulation results show that this de-noising method is efficient and has good adaptability.
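The autocorrelation criterion for locating the signal-to-noise boundary can be illustrated independently of the EEMD itself (computing actual IMFs is out of scope here, so the components below are stand-ins): a noise-dominant component has an autocorrelation that collapses immediately after lag zero, while a signal-dominant component stays correlated. The lag range and threshold are assumptions:

```python
import numpy as np

def norm_autocorr(x, max_lag=20):
    """Normalized autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    c0 = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / c0 for k in range(1, max_lag + 1)])

def noise_boundary(components, thresh=0.5):
    """Index of the first signal-dominant component, scanning from the
    high-frequency end: white-noise-like components have |autocorrelation|
    far below `thresh` at every nonzero lag."""
    for i, c in enumerate(components):
        if np.abs(norm_autocorr(c)).max() > thresh:
            return i
    return len(components)
```

In the full method, components before the boundary would be discarded or filtered directly, and those after it wavelet-thresholded before reconstruction.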
Denial-of-Service (DoS) and probe attacks are growing more sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase their threat to the availability of network services. Detecting these attacks is difficult for network operators using misuse-based IDSs, because they must anticipate attackers and upgrade their IDSs with new, accurate attack signatures. In this paper, we propose a novel signal- and image-processing-based method for detecting network probe and DoS attacks that requires no prior knowledge of attacks. The method uses a time-frequency representation technique called the S-transform, an extension of the wavelet transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time series of packet counts). First, the S-transform converts the traffic signal into a two-dimensional image describing its time-frequency behavior; frequencies that behave abnormally appear as abnormal regions in the image. Second, Otsu's method is used to detect these abnormal regions and identify the times at which attacks occur. We evaluated the effectiveness of the proposed method on several network probe and DoS attacks, including port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicate that the method is effective for detecting probe and DoS attack streams injected into real-world Internet traffic.
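The two-step pipeline can be sketched with a plain STFT spectrogram standing in for the S-transform (the S-transform's frequency-dependent Gaussian window is omitted); window and hop sizes are assumptions:

```python
import numpy as np

def spectrogram(x, win=32, hop=16):
    """STFT magnitude image (freq x time); frames are mean-removed and windowed."""
    frames = []
    for i in range(0, len(x) - win + 1, hop):
        f = x[i:i + win]
        frames.append((f - f.mean()) * np.hanning(win))
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def otsu_threshold(values, bins=64):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    mu = np.cumsum(p * centers)
    between = (mu[-1] * w0 - mu) ** 2 / np.maximum(w0 * (1 - w0), 1e-12)
    return centers[np.argmax(between[:-1])]

def attack_times(traffic, win=32, hop=16):
    """Indices of STFT frames whose spectrum contains abnormally strong components."""
    S = spectrogram(traffic, win, hop)
    mask = S > otsu_threshold(S)
    return np.where(mask.any(axis=0))[0]
```

Frame index k covers samples 16k to 16k+31 with the default hop, so flagged frames localize the attack interval in time.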
This paper proposes an enhanced method for personal authentication based on finger-knuckle prints using Kekre's wavelet transform (KWT). The finger-knuckle print (FKP) is the inherent skin pattern of the outer surface around the phalangeal joint of a finger. It is highly discriminative and unique, which makes it a promising emerging biometric identifier. Kekre's wavelet transform is constructed from Kekre's transform. The proposed system is evaluated on a prepared FKP database covering all categories of FKP, with 500 samples in total. The paper focuses on different image enhancement techniques for pre-processing the captured images. The proposed algorithm is examined on 350 training and 150 testing samples, and the results show that the quality of the database and the pre-processing techniques play an important role in recognizing individuals. The experiments report performance parameters such as the false acceptance rate (FAR), false rejection rate (FRR), true acceptance rate (TAR), and true rejection rate (TRR). The results demonstrate an improvement in the equal error rate (EER), which is very important for authentication. The experimental results using Kekre's algorithm together with image enhancement show that the finger-knuckle recognition rate is better than that of the conventional method.
E-mail messaging is one of the most popular uses of the Internet, allowing users to exchange messages within a short span of time. Although the security of e-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, PGP (Pretty Good Privacy), is used for personal security of e-mail messages, but there is an attack on the CFB-mode encryption used by OpenPGP. To overcome this attack and improve security, a new model, "Secure Mail using Visual Cryptography", is proposed. In this model the message to be transmitted is converted into a grayscale image, and (2, 2) visual cryptographic shares are generated from the grayscale image. The shares are encrypted using a chaos-based image encryption algorithm that employs the wavelet transform, and authenticated using a public-key-based image authentication method. One share is sent to a server and the other to the recipient's mailbox. Because the two shares travel over two different transmission media, a man-in-the-middle attack is not possible; an adversary who obtains only one of the two shares has absolutely no information about the message. At the receiver side the two shares are fetched, decrypted, and stacked to regenerate the grayscale image, from which the message is reconstructed.
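The (2, 2) visual cryptography step can be made concrete with the classic Naor-Shamir construction for binary images (the chaos-based encryption and authentication stages are omitted here, and grayscale images would first be halftoned to binary): each secret pixel expands into two subpixels; the shares get identical blocks for a white pixel and complementary blocks for a black one, so each share alone is uniformly random, while stacking (logical OR) darkens black pixels fully:

```python
import numpy as np

def make_shares(secret, rng):
    """(2, 2) visual secret sharing with 1x2 pixel expansion.

    secret: 2D array of 0 (white) / 1 (black).
    Returns two boolean shares, each twice as wide as the secret."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=bool)
    s2 = np.zeros((h, 2 * w), dtype=bool)
    for i in range(h):
        for j in range(w):
            flip = rng.integers(2)                      # random block orientation
            block = np.array([flip, 1 - flip], dtype=bool)
            s1[i, 2 * j:2 * j + 2] = block
            # white -> same block (OR leaves one dark subpixel);
            # black -> complementary block (OR darkens both subpixels)
            s2[i, 2 * j:2 * j + 2] = block if secret[i, j] == 0 else ~block
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies corresponds to a logical OR."""
    return s1 | s2
```

Each share block contains exactly one dark subpixel regardless of the secret pixel, which is why a single share carries no information.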
Recent advances in adaptive filter theory and in hardware for signal acquisition have led to the realization that purely linear algorithms are often inadequate in these domains: nonlinearities in the input space have become apparent in today's real-world problems, and algorithms that process the data must keep pace with advances in signal acquisition. Recently, kernel (online) adaptive filtering algorithms have been proposed that make no assumptions about the linearity of the input space. In addition, advances in wavelet-based data compression and dimension reduction have led to new algorithms suitable for a hybrid nonlinear filtering framework. In this paper we combine wavelet dimension reduction with kernel adaptive filtering. We derive algorithms in which the dimension of the data is reduced by a wavelet transform, followed by kernel adaptive filtering on the reduced-domain data to find the appropriate model parameters, demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen from the data, on the fly. In particular, we show that using a few optimal wavelet coefficients from the constructed wavelet filter, for both training and testing data sets, as input to the kernel adaptive filter yields convergence to a near-optimal learning curve (MSE). We demonstrate these algorithms on simulated data and on a real data set from food processing.
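The hybrid described here can be sketched with the simplest member of the kernel adaptive filtering family, kernel least-mean-squares (KLMS) with a Gaussian kernel, fed by a one-level Haar approximation as the dimension-reduction step. The step size, kernel width, and the fixed (rather than data-driven) wavelet choice are all simplifying assumptions relative to the paper:

```python
import numpy as np

def haar_reduce(x):
    """One-level Haar approximation coefficients: halves the input dimension."""
    return (x[0::2] + x[1::2]) / np.sqrt(2.0)

def gauss(a, b, sigma):
    """Gaussian kernel between two vectors."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel LMS: the prediction is a kernel expansion over past inputs,
    and each new sample adds a center weighted by its prediction error."""
    def __init__(self, eta=0.5, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)          # error drives the new coefficient
        self.centers.append(x)
        self.alphas.append(self.eta * e)
        return e
```

On a data stream, each raw sample would first be passed through `haar_reduce` before `update`, so the kernel expansion lives in the reduced domain.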
We propose a low-latency method for analyzing surveillance video, using low-rank and sparse decomposition (LRSD) combined with compressive sensing, to segment the background and extract moving objects. The video is acquired through compressive measurements, and those measurements are used to analyze the video via a low-rank and sparse decomposition of a matrix. The low-rank component represents the background, and the sparse component, obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes real-time processing of surveillance video possible. The low-latency method is both justified theoretically and validated experimentally.
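The low-rank plus sparse split can be illustrated with a bare-bones alternating scheme: singular-value thresholding for the low-rank (background) part and soft thresholding for the sparse (moving-object) part, jointly minimizing a nuclear-norm plus l1 objective with a quadratic coupling term. The compressive measurements and the tight wavelet frame of the abstract are omitted, and `tau` and `lam` are assumed parameters:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def lrsd(M, tau=1.0, lam=None, n_iter=200):
    """Alternate L (low-rank background) and S (sparse foreground) updates."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))    # standard RPCA-style weighting
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau)
        S = soft(M - L, lam * tau)
    return L, S
```

In the video setting each column of `M` would be one vectorized frame, so a handful of frames already gives a usable background estimate, which is the source of the low latency claimed above.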
To improve the overall performance of de-noising range images, this paper proposes an impulsive noise (IN) de-noising method with variable windows. Based on several discriminant criteria, principles for detecting both dropout IN and outlier IN are provided. A search for the nearest non-IN neighbors is then combined with an index-distance-weighted mean filter for IN de-noising. The sizes of the two windows used for outlier IN detection and IN de-noising are investigated as key factors in the adaptability of the proposed method. Starting from a theoretical model of invader occlusion, a variable window is presented that adapts the window size to the dynamic environment of each point, together with practical criteria for determining the adaptive variable window size. Experiments on real range images of multi-line surfaces are conducted, with evaluations of computational complexity and quality assessment, and with comparative analysis against several other popular methods. The results indicate that the proposed method detects impulsive noise with high accuracy and, with the help of the variable window, removes it with strong adaptability.
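The variable-window idea can be made concrete with a toy version: flag a point as impulsive when it deviates strongly from its local median, then grow a search window around each flagged point until enough non-impulse neighbours are found, and replace the point by an inverse-distance-weighted mean of those neighbours. The detection rule, threshold, and window limits are simplifications of the paper's dropout/outlier criteria:

```python
import numpy as np

def denoise_impulse(depth, thresh=5.0, max_half=5, k=4):
    """Variable-window impulsive-noise removal for a 2D range (depth) image."""
    h, w = depth.shape
    # Detection: deviation from the 3x3 local median (edge-padded).
    pad = np.pad(depth, 1, mode='edge')
    stack = np.stack([pad[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    med = np.median(stack, axis=0)
    is_in = np.abs(depth - med) > thresh

    out = depth.copy()
    for y, x in zip(*np.where(is_in)):
        for half in range(1, max_half + 1):      # grow the window adaptively
            ys = slice(max(0, y - half), min(h, y + half + 1))
            xs = slice(max(0, x - half), min(w, x + half + 1))
            valid = ~is_in[ys, xs]               # nearest non-IN neighbours
            if valid.sum() >= k:
                yy, xx = np.mgrid[ys, xs]
                d = np.hypot(yy - y, xx - x)[valid]
                out[y, x] = np.average(depth[ys, xs][valid], weights=1.0 / d)
                break
    return out
```

Clean points are left untouched, so smooth structure and edges away from impulses pass through unchanged.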
A THz image edge detection approach based on wavelets and a neural network is proposed in this paper. First, the source image is decomposed by the wavelet transform. On the coarsest level of the decomposition, edges in the low-frequency sub-image are detected with the neural network, while edges in the high-frequency sub-images are detected with the wavelet transform method; the two edge images are fused according to fusion rules to obtain the edge image of that level, which is then projected to the next level. The edge image of level L-1 is obtained by the same fusion rules, and this process is repeated until level 0 is reached, yielding the final complete and clear edge image. Experimental results show that this fusion-based approach is superior to the Canny operator and to the wavelet transform method alone.