Biblio

Filters: Keyword is wavelet transforms
2018-01-23
Hemanth, D. J., Popescu, D. E., Mittal, M., Maheswari, S. U..  2017.  Analysis of wavelet, ridgelet, curvelet and bandelet transforms for QR code based image steganography. 2017 14th International Conference on Engineering of Modern Electric Systems (EMES). :121–126.

Transform based image steganography methods are commonly used in security applications. However, the application of several recent transforms for image steganography remains unexplored. This paper presents a bit-plane based steganography method using different transforms. In this work, a bit-plane of the transform coefficients is selected to embed the secret message. The characteristics of the four transforms used in the steganography method are analyzed, and the results obtained with the four transforms are compared experimentally.
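To make the bit-plane embedding idea concrete, the sketch below hides message bits in one bit-plane of quantised 2-D Haar DWT detail coefficients using PyWavelets. The choice of the Haar wavelet, the HH subband and the bit-plane index are illustrative assumptions, not the settings compared in the paper (which also covers ridgelet, curvelet and bandelet transforms).

```python
# Illustrative bit-plane embedding in wavelet-domain coefficients (PyWavelets).
import numpy as np
import pywt

def embed_bits(cover, bits, bitplane=1, wavelet="haar"):
    """Hide a flat array of 0/1 bits in one bit-plane of the HH subband."""
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(cover, float), wavelet)
    coeffs = np.rint(cD).astype(np.int64)          # quantise so bit-planes are defined
    flat = coeffs.ravel()                          # view into coeffs
    n = min(len(bits), flat.size)
    b = np.asarray(bits[:n], dtype=np.int64)
    flat[:n] = (flat[:n] & ~(1 << bitplane)) | (b << bitplane)
    # The stego image is kept in floating point so the orthogonal Haar transform
    # returns the modified coefficients exactly on extraction; a practical scheme
    # rounding back to 8-bit pixels would need an integer-to-integer transform.
    return pywt.idwt2((cA, (cH, cV, coeffs.astype(float))), wavelet)

def extract_bits(stego, n_bits, bitplane=1, wavelet="haar"):
    _, (_, _, cD) = pywt.dwt2(np.asarray(stego, float), wavelet)
    flat = np.rint(cD).astype(np.int64).ravel()
    return (flat[:n_bits] >> bitplane) & 1
```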

2017-12-28
El-Khamy, S. E., Korany, N. O., El-Sherif, M. H..  2017.  Correlation based highly secure image hiding in audio signals using wavelet decomposition and chaotic maps hopping for 5G multimedia communications. 2017 XXXIInd General Assembly and Scientific Symposium of the International Union of Radio Science (URSI GASS). :1–3.

Audio steganography is the technique of hiding secret information behind a cover audio file without impairing its quality. Data hiding in audio signals has various applications, such as secret communications and concealing data that may influence the security and safety of governments and personnel, and has potentially important applications in 5G communication systems. This paper proposes an efficient secure steganography scheme based on the high correlation between successive audio samples. This is similar to the differential pulse code modulation (DPCM) technique, where encoding exploits the redundancy in sample values to encode the signal at a lower bit rate. The Discrete Wavelet Transform (DWT) of the audio samples is used to store hidden data in the least important coefficients of the Haar transform. The scheme exploits the small differences between successive samples of the encoded cover audio signal's wavelet coefficients to hide image data without making a noticeable change in the cover audio signal; because the actual audio samples are not altered, the audio is not perceptually degraded, and the method provides higher hiding capacity with lower distortion. To further increase the security of the image hiding process, the image to be hidden is divided into blocks and the bits of each block are XORed with a different random sequence of logistic maps using a hopping technique. The performance of the proposed algorithm has been evaluated extensively against attacks, and experimental results show that the proposed method achieves good robustness and imperceptibility.
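A rough sketch of two ingredients described in the abstract is given below: XOR-scrambling blocks of the secret image bits with hopping logistic-map keystreams, and embedding the scrambled bits in quantised Haar DWT detail coefficients of the cover audio. The logistic-map parameters, block size and quantisation step are assumptions for illustration, not the paper's values.

```python
import numpy as np
import pywt

def logistic_bits(n, x0=0.61, r=3.99):
    """Generate n pseudo-random bits from the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = 1 if x > 0.5 else 0
    return out

def scramble(image_bits, block=64, seeds=(0.3, 0.52, 0.7)):
    """XOR each block of secret bits with a keystream from a hopping seed.
    XOR is its own inverse, so the same call also unscrambles."""
    bits = np.array(image_bits, dtype=np.uint8)
    for b, start in enumerate(range(0, len(bits), block)):
        key = logistic_bits(min(block, len(bits) - start), x0=seeds[b % len(seeds)])
        bits[start:start + len(key)] ^= key
    return bits

def embed(cover_audio, secret_bits, step=1e-3):
    """Hide bits in the LSB of quantised Haar detail coefficients."""
    cA, cD = pywt.dwt(np.asarray(cover_audio, float), "haar")
    q = np.rint(cD / step).astype(np.int64)
    n = min(len(secret_bits), len(q))
    q[:n] = (q[:n] & ~1) | np.asarray(secret_bits[:n], dtype=np.int64)
    return pywt.idwt(cA, q * step, "haar")

def extract(stego_audio, n_bits, step=1e-3):
    _, cD = pywt.dwt(np.asarray(stego_audio, float), "haar")
    q = np.rint(cD / step).astype(np.int64)
    return (q[:n_bits] & 1).astype(np.uint8)
```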

2017-03-08
Gómez-Valverde, J. J., Ortuño, J. E., Guerra, P., Hermann, B., Zabihian, B., Rubio-Guivernau, J. L., Santos, A., Drexler, W., Ledesma-Carbayo, M. J..  2015.  Evaluation of speckle reduction with denoising filtering in optical coherence tomography for dermatology. 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). :494–497.

Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and the Block Matching 3-D (BM3D) filter as 2D denoising filters, together with the Wavelet Multiframe algorithm considering adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.
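For readers unfamiliar with wavelet-domain despeckling, the following is a generic log-domain wavelet soft-thresholding baseline. It is not one of the filters evaluated in the study (Enhanced Sigma, BM3D, Wavelet Multiframe); it only illustrates the kind of wavelet-domain processing being compared, and the wavelet, level and threshold rule are assumptions.

```python
import numpy as np
import pywt

def despeckle_bscan(bscan, wavelet="db4", level=3):
    """Generic despeckle baseline: log transform + universal soft threshold."""
    log_img = np.log1p(np.asarray(bscan, float))          # speckle ~ multiplicative
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    # Noise sigma from the finest diagonal subband (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(log_img.size))     # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return np.expm1(pywt.waverec2(new_coeffs, wavelet))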

Sandic-Stankovic, D., Kukolj, D., Callet, P. Le.  2015.  DIBR synthesized image quality assessment based on morphological wavelets. 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX). :1–6.

Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain nonuniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure, MW-PSNR, based on morphological wavelet decomposition is proposed to tackle the evaluation of DIBR synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.

Kerouh, F., Serir, A..  2015.  A no reference perceptual blur quality metric in the DCT domain. 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT). :1–6.

Blind objective metrics that automatically quantify perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. As blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to perceptually analyse the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, which relies on the Human Visual System. Comprehensive testing demonstrates that the proposed Perceptual Blind Blur Quality Metric (PBBQM) shows good consistency with subjective quality scores as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
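A simplified, non-perceptual version of the underlying idea can be sketched as the fraction of block-DCT energy carried by high-frequency coefficients, which drops as blur increases. The paper's PBBQM additionally applies JNB-based perceptual weighting, which is omitted here; the block size and frequency cutoff below are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def high_freq_energy_ratio(gray, block=8, cutoff=4):
    """Fraction of non-DC block-DCT energy in coefficients with u + v >= cutoff."""
    h, w = (np.array(gray.shape[:2]) // block) * block
    img = np.asarray(gray, float)[:h, :w]
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    high = (u + v) >= cutoff
    hf = total = 0.0
    for i in range(0, h, block):
        for j in range(0, w, block):
            e = dctn(img[i:i + block, j:j + block], norm="ortho") ** 2
            total += e.sum() - e[0, 0]            # exclude the DC term
            hf += e[high].sum()
    return hf / max(total, 1e-12)                 # lower ratio => more blur
```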

2017-03-07
Liu, Q., Zhao, X. G., Hou, Z. G., Liu, H. G..  2015.  Multi-scale wavelet kernel extreme learning machine for EEG feature classification. 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER). :1546–1551.

In this paper, the principle of the kernel extreme learning machine (ELM) is analyzed. Building on this, we introduce a multi-scale wavelet kernel extreme learning machine classifier and apply it to electroencephalographic (EEG) signal feature classification. Experiments show that our classifier achieves excellent performance.
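A hedged sketch of a multi-scale wavelet-kernel ELM classifier is shown below, using the standard Morlet-style wavelet kernel and the usual kernel-ELM closed-form solution. The scales, regularisation constant and feature pipeline are assumptions rather than the paper's exact configuration.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """K(x,y) = prod_i cos(1.75*(x_i-y_i)/a) * exp(-(x_i-y_i)^2 / (2 a^2))."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a * a)), axis=2)

def multiscale_kernel(X, Y, scales=(0.5, 1.0, 2.0)):
    return sum(wavelet_kernel(X, Y, a) for a in scales) / len(scales)

class WaveletKernelELM:
    """Kernel ELM: beta = (I/C + Omega)^-1 T, prediction = K(x, X) @ beta."""
    def __init__(self, C=10.0, scales=(0.5, 1.0, 2.0)):
        self.C, self.scales = C, scales

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        y = np.asarray(y, dtype=int)
        T = np.eye(y.max() + 1)[y]                       # one-hot class targets
        omega = multiscale_kernel(self.X, self.X, self.scales)
        self.beta = np.linalg.solve(np.eye(len(self.X)) / self.C + omega, T)
        return self

    def predict(self, Xnew):
        k = multiscale_kernel(np.asarray(Xnew, float), self.X, self.scales)
        return np.argmax(k @ self.beta, axis=1)
```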

2017-02-14
Gu, B., Fang, Y., Jia, P., Liu, L., Zhang, L., Wang, M..  2015.  A New Static Detection Method of Malicious Document Based on Wavelet Package Analysis. 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). :333-336.

More and more advanced persistent threat attacks have occurred since 2009. These attacks usually use more than one zero-day exploit to achieve their goal. Most of the time, the target computer executes a malicious program after the user opens an infected compound document. The original detection methods become inefficient when attackers use a zero-day exploit to structure these compound documents. Inspired by detection methods based on structural entropy, we apply wavelet analysis to a malicious document detection system. In our research, we use wavelet analysis to extract features from the raw data. These features are then used to detect whether the compound document has malicious code embedded in it.
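The feature-extraction step can be sketched as follows: treat the document's raw bytes as a 1-D signal, apply a wavelet packet decomposition, and use per-node energies as the feature vector for a downstream classifier. The wavelet, depth and log-energy normalisation are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_packet_features(path, wavelet="db1", maxlevel=4):
    """Wavelet-packet log-energy features of a file's raw byte stream."""
    with open(path, "rb") as f:
        raw = np.frombuffer(f.read(), dtype=np.uint8).astype(float)
    wp = pywt.WaveletPacket(data=raw, wavelet=wavelet, maxlevel=maxlevel)
    # Log-energy of each terminal node => 2**maxlevel features per document.
    return np.array([np.log1p(np.sum(node.data ** 2))
                     for node in wp.get_level(maxlevel, order="natural")])
```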

2015-05-06
Huang, T., Drake, B., Aalfs, D., Vidakovic, B..  2014.  Nonlinear Adaptive Filtering with Dimension Reduction in the Wavelet Domain. Data Compression Conference (DCC), 2014. :408-408.

Recent advances in adaptive filter theory and the hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent with today's real world problems. Algorithms that process the data must keep pace with the advances in signal acquisition. Recently, kernel adaptive (online) filtering algorithms have been proposed that make no assumptions regarding the linearity of the input space. Additionally, advances in wavelet data compression/dimension reduction have also led to new algorithms that are appropriate for producing a hybrid nonlinear filtering framework. In this paper we utilize a combination of wavelet dimension reduction and kernel adaptive filtering. We derive algorithms in which the dimension of the data is reduced by a wavelet transform. We follow this with kernel adaptive filtering algorithms on the reduced-domain data to find the appropriate model parameters, demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen based on the data, on-the-fly. In particular, it is shown that using a few optimal wavelet coefficients from the constructed wavelet filter, for both the training and testing data sets, as the input to the kernel adaptive filter yields convergence to a near-optimal learning curve (MSE). We demonstrate these algorithms on simulated data and on a real data set from food processing.
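A minimal sketch of the two-stage framework, under simplifying assumptions, is given below: each input vector is reduced to its k largest-magnitude wavelet coefficients and then fed to a Gaussian-kernel LMS filter. The paper's on-the-fly wavelet selection is not reproduced, and the wavelet, k, kernel width and step size are assumed values.

```python
import numpy as np
import pywt

def wavelet_reduce(x, wavelet="db2", k=8):
    """Keep the k largest-magnitude wavelet coefficients as a fixed-length input."""
    coeffs = np.concatenate(pywt.wavedec(np.asarray(x, float), wavelet))
    idx = np.sort(np.argsort(np.abs(coeffs))[-k:])
    out = np.zeros(k)
    out[:len(idx)] = coeffs[idx]
    return out

class KernelLMS:
    """Kernel least-mean-squares adaptive filter with a Gaussian kernel."""
    def __init__(self, step=0.5, width=1.0):
        self.step, self.width = step, width
        self.centers, self.weights = [], []

    def _k(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return sum(w * self._k(c, x) for c, w in zip(self.centers, self.weights))

    def update(self, x, d):
        err = d - self.predict(x)          # prediction error for desired output d
        self.centers.append(x)             # KLMS grows a dictionary of centers
        self.weights.append(self.step * err)
        return err
```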

Jian Wang, Lin Mei, Yi Li, Jian-Ye Li, Kun Zhao, Yuan Yao.  2014.  Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :857-864.

To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbor searching process and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Derived from a theoretical model of invader occlusion, a variable window is presented to adapt the window size to the dynamic environment of each point, along with practical criteria for determining the adaptive variable window size. Experiments on real range images of a multi-line surface are conducted, with evaluations in terms of computational complexity and quality assessment and comparisons against several other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and, with the help of the variable window, denoise it with strong adaptability.
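The denoising step (but not the detection criteria or the variable-window adaptation) can be sketched as follows, assuming a boolean mask of detected IN pixels is already available; the window radius and inverse-distance weighting are assumptions.

```python
import numpy as np

def idw_mean_filter(range_img, in_mask, radius=2):
    """Replace each IN pixel with an index-distance-weighted mean of non-IN neighbours."""
    out = np.asarray(range_img, float).copy()
    rows, cols = out.shape
    for r, c in zip(*np.nonzero(in_mask)):
        r0, r1 = max(r - radius, 0), min(r + radius + 1, rows)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, cols)
        win = out[r0:r1, c0:c1]
        ok = ~in_mask[r0:r1, c0:c1]                      # non-IN points only
        if not ok.any():
            continue
        rr, cc = np.meshgrid(np.arange(r0, r1), np.arange(c0, c1), indexing="ij")
        dist = np.sqrt((rr - r) ** 2 + (cc - c) ** 2)
        w = 1.0 / np.maximum(dist[ok], 1.0)              # inverse index distance
        out[r, c] = np.sum(w * win[ok]) / np.sum(w)
    return out
```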
 

2015-05-05
Ajish, S., Rajasree, R..  2014.  Secure Mail using Visual Cryptography (SMVC). Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on. :1-7.

E-mail messaging is one of the most popular uses of the Internet, allowing multiple Internet users to exchange messages within a short span of time. Although the security of E-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, called PGP (Pretty Good Privacy), is used for personal security of E-mail messages. However, there is a known attack on CFB mode encryption as used by OpenPGP. To overcome such attacks and to improve security, a new model is proposed, called "Secure Mail using Visual Cryptography". In this scheme, the message to be transmitted is converted into a gray scale image. Then (2, 2) visual cryptographic shares are generated from the gray scale image. The shares are encrypted using a chaos-based image encryption algorithm using wavelet transform and authenticated using a public key based image authentication method. One of the shares is sent to a server and the second share is sent to the recipient's mailbox. The two shares are transmitted through two different transmission media, so a man-in-the-middle attack is not possible. If an adversary has only one of the two shares, then he has absolutely no information about the message. At the receiver side, the two shares are fetched, decrypted, and stacked to regenerate the gray scale image, from which the message is reconstructed.
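A minimal sketch of (2, 2) visual-cryptography share generation for a binary (halftoned) message image is given below; the chaos-based share encryption and public-key authentication stages of the scheme are not shown. Each pixel expands to a 2x2 block: identical blocks in both shares for a white pixel, complementary blocks for a black pixel, so simply stacking the two transparencies reveals the message.

```python
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = black subpixel

def make_shares(binary_img, rng=None):
    """binary_img: 0 = white, 1 = black. Returns two 2x-expanded shares."""
    rng = rng or np.random.default_rng()
    h, w = binary_img.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # Same pattern for a white pixel, complementary pattern for a black one.
            s2[2*i:2*i+2, 2*j:2*j+2] = p if binary_img[i, j] == 0 else 1 - p
    return s1, s2

def stack(s1, s2):
    return np.maximum(s1, s2)   # black wins, as when overlaying transparencies
```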
 

Jialing Mo, Qiang He, Weiping Hu.  2014.  An adaptive threshold de-noising method based on EEMD. Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on. :209-214.

In view of the difficulty of selecting the wavelet base and decomposition level for wavelet-based de-noising methods, this paper proposes an adaptive de-noising method based on Ensemble Empirical Mode Decomposition (EEMD). An autocorrelation and cross-correlation method is used to adaptively find the signal-to-noise boundary layer of the EEMD decomposition. The noise-dominant layers are then filtered directly and the signal-dominant layers are threshold de-noised. Finally, the de-noised signal is reconstructed from the de-noised layer components. This method solves the mode-mixing problem of Empirical Mode Decomposition (EMD) by using EEMD and combines it with the advantages of wavelet thresholding. In this paper, we focus on the analysis and verification of the correctness of the adaptive determination of the noise-dominant layers. The simulation results show that this de-noising method is efficient and has good adaptability.
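Assuming the IMFs have already been computed by an EEMD implementation (for example the PyEMD package's EEMD class), the post-decomposition steps can be sketched as below. The autocorrelation criterion and the universal soft threshold are simple stand-ins for the paper's adaptive procedure.

```python
import numpy as np

def norm_autocorr(x):
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return ac / ac[0]

def find_boundary(imfs, lag=10, decay=0.2):
    """Index of the first signal-dominant IMF: noise IMFs decorrelate quickly."""
    for k, imf in enumerate(imfs):
        if abs(norm_autocorr(np.asarray(imf, float))[lag]) > decay:
            return k
    return len(imfs)

def eemd_denoise(imfs):
    """Drop noise-dominant IMFs, soft-threshold the rest, and sum them up.
    (The EEMD residual/trend component would normally be added back unchanged.)"""
    b = find_boundary(imfs)
    out = np.zeros(len(np.asarray(imfs[0])))
    for k, imf in enumerate(imfs):
        if k < b:
            continue                                    # noise-dominant layer
        imf = np.asarray(imf, float)
        sigma = np.median(np.abs(imf)) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(imf)))   # universal threshold
        out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)
    return out
```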
 

Raut, R.D., Kulkarni, S., Gharat, N.N..  2014.  Biometric Authentication Using Kekre's Wavelet Transform. Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on. :99-104.

This paper proposes an enhanced method for personal authentication based on the finger knuckle print using Kekre's wavelet transform (KWT). The finger-knuckle-print (FKP) is the inherent skin pattern of the outer surface around the phalangeal joint of a finger. It is highly discriminative and unique, which makes it a promising emerging biometric identifier. Kekre's wavelet transform is constructed from Kekre's transform. The proposed system is evaluated on a prepared FKP database that covers all categories of FKP and contains 500 samples in total. The paper also examines different image enhancement techniques for pre-processing the captured images. The proposed algorithm is evaluated on 350 training and 150 testing samples, showing that the quality of the database and the pre-processing techniques play an important role in recognizing individuals. The experimental results report performance parameters such as the false acceptance rate (FAR), false rejection rate (FRR), true acceptance rate (TAR), and true rejection rate (TRR). The results demonstrate an improvement in the equal error rate (EER), which is very important for authentication. The experimental results using Kekre's algorithm along with image enhancement show that the finger knuckle recognition rate is better than that of the conventional method.
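The verification metrics reported above can be computed from genuine and impostor matching scores as in the generic sketch below; the score model is not specific to Kekre's wavelet transform features.

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """Sweep decision thresholds over the observed scores (higher = better match)."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    eer = (far[i] + frr[i]) / 2.0          # equal error rate at the crossover point
    return far, frr, eer, thresholds[i]

# At a chosen operating threshold, TAR = 1 - FRR and TRR = 1 - FAR.
```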
 

2015-05-01
Hong Jiang, Songqing Zhao, Zuowei Shen, Wei Deng, Wilford, P.A., Haimi-Cohen, R..  2014.  Surveillance video analysis using compressive sensing with low latency. Bell Labs Technical Journal. 18:63-74.

We propose a method for analysis of surveillance video by using low rank and sparse decomposition (LRSD) with low latency combined with compressive sensing to segment the background and extract moving objects in a surveillance video. Video is acquired by compressive measurements, and the measurements are used to analyze the video by a low rank and sparse decomposition of a matrix. The low rank component represents the background, and the sparse component, which is obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed low latency method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes real-time processing of surveillance video possible. The low latency method is both justified theoretically and validated experimentally.
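For illustration, a compact robust-PCA (principal component pursuit) sketch of the low rank + sparse split is given below, with each column of D a vectorised frame: L captures the static background and S the moving objects. The paper's compressive measurements and tight wavelet frame model for the sparse part are not reproduced here.

```python
import numpy as np

def lrsd(D, lam=None, mu=None, iters=100):
    """Low rank + sparse split of a frame matrix D via a basic ADMM scheme."""
    D = np.asarray(D, float)
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: elementwise soft threshold.
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)                    # dual variable update
    return L, S
```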

Pukkawanna, S., Hazeyama, H., Kadobayashi, Y., Yamaguchi, S..  2014.  Investigating the utility of S-transform for detecting Denial-of-Service and probe attacks. Information Networking (ICOIN), 2014 International Conference on. :282-287.

Denial-of-Service (DoS) and probe attacks are growing more modern and sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase the threat to the availability of network services. Detecting these attacks is quite difficult for network operators using misuse-based IDSs, because they need to keep up with attackers and upgrade their IDSs by adding new, accurate attack signatures. In this paper, we propose a novel signal- and image-processing-based method for detecting network probe and DoS attacks that requires no prior knowledge of the attacks. The method uses a time-frequency representation technique called the S-transform, an extension of the wavelet transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time series of the number of packets). First, the S-transform converts the traffic signal into a two-dimensional image that describes the time-frequency behavior of the traffic signal. Frequencies that behave abnormally appear as abnormal regions in the image. Second, Otsu's method is used to detect these abnormal regions and identify the times at which attacks occur. We evaluated the effectiveness of the proposed method with several network probe and DoS attacks, such as port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicate that the method is effective for detecting probe and DoS attack streams that were directed at the real-world Internet.
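A rough sketch of the pipeline is given below: a traffic time series is mapped to a time-frequency image with an FFT-based discrete S-transform, and Otsu's threshold (here via scikit-image) marks high-energy regions whose time indices flag candidate attack intervals. The decision rule and parameters are assumptions, not the paper's tuned procedure.

```python
import numpy as np
from skimage.filters import threshold_otsu

def stockwell(x):
    """FFT-based discrete S-transform; rows = frequency bins, cols = time samples."""
    x = np.asarray(x, float)
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                     # signed frequency indices
    S = np.zeros((N // 2, N), dtype=complex)
    for n in range(1, N // 2):                    # skip the DC row
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return np.abs(S)

def detect_attack_times(traffic_series):
    tf_image = stockwell(traffic_series)
    mask = tf_image > threshold_otsu(tf_image)    # abnormal (high-energy) regions
    return np.nonzero(mask.any(axis=0))[0]        # time indices flagged as suspicious
```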