Biblio
Transform-based image steganography methods are commonly used in security applications. However, the application of several recent transforms to image steganography remains unexplored. This paper presents a bit-plane based steganography method using different transforms. In this work, a bit-plane of the transform coefficients is selected to embed the secret message. The characteristics of the four transforms used in the steganography are analyzed, and the results of the four transforms are compared; the experimental results confirm the analysis.
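The embedding step described above can be sketched as follows. This is a minimal illustration of bit-plane embedding only: the transforms themselves are omitted, and `coeffs` stands for any integer-quantized transform coefficient array (the function names and the choice of plane are illustrative assumptions, not the paper's actual scheme).

```python
import numpy as np

def embed_bits(coeffs, bits, plane):
    """Write message bits into bit-plane `plane` of integer coefficients."""
    flat = coeffs.astype(np.int64).ravel().copy()
    mask = np.int64(1 << plane)
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & ~mask) | (np.int64(b) << plane)
    return flat.reshape(coeffs.shape)

def extract_bits(coeffs, n_bits, plane):
    """Read back n_bits from the same bit-plane."""
    flat = coeffs.astype(np.int64).ravel()
    return [int((c >> plane) & 1) for c in flat[:n_bits]]
```

In a real pipeline the coefficients would come from a forward transform of the cover image, and the inverse transform would produce the stego image after embedding.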
Emerging communication technologies in distributed network systems require the transfer of biometric digital images with high security. Network security is characterized by changes in system behavior, which may be either dynamic or deterministic. Performance computation is complex in dynamic systems, where cryptographic techniques are not well suited. Chaotic theory solves complex problems of nonlinear deterministic systems. Several chaotic methods can be combined into a hyperchaotic system for greater security. Chaotic theory combined with DNA sequences enhances the security of biometric image encryption. The implementation shows that the encrypted image is highly chaotic and resistant to various attacks.
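The DNA-plus-chaos idea can be sketched in a few lines. The encoding rule, the logistic-map parameters, and all function names below are illustrative assumptions rather than the paper's actual scheme: bytes are mapped to DNA bases two bits at a time, then combined with a chaotic keystream via a DNA XOR rule.

```python
BASES = "ACGT"                      # assumed encoding rule: 00->A, 01->C, 10->G, 11->T

def byte_to_dna(b):
    return [BASES[(b >> s) & 0b11] for s in (6, 4, 2, 0)]

def dna_to_byte(quad):
    return sum(BASES.index(c) << s for c, s in zip(quad, (6, 4, 2, 0)))

def dna_xor(x, y):                  # XOR of the underlying 2-bit codes
    return BASES[BASES.index(x) ^ BASES.index(y)]

def logistic_keystream(x0, r, n):
    """One keystream byte per iteration of the logistic map x -> r*x*(1-x)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def dna_encrypt(data, x0=0.41, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return [[dna_xor(a, b) for a, b in zip(byte_to_dna(p), byte_to_dna(k))]
            for p, k in zip(data, ks)]

def dna_decrypt(cipher, x0=0.41, r=3.99):   # XOR is its own inverse
    ks = logistic_keystream(x0, r, len(cipher))
    return [dna_to_byte([dna_xor(c, b) for c, b in zip(quad, byte_to_dna(k))])
            for quad, k in zip(cipher, ks)]
```

The initial value `x0` and parameter `r` together act as the secret key; hyperchaotic variants replace the single logistic map with a higher-dimensional system.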
In recent years, chaos-based cryptographic algorithms have enabled new and efficient ways to develop secure image encryption techniques. In this paper, we propose a new approach to image encryption based on chaotic maps in order to meet the requirements of secure image encryption. The chaos-based image encryption technique uses simple chaotic maps, which are very sensitive to initial conditions. Using mixed chaotic maps, which work on simple substitution and transposition techniques, to encrypt the original image yields better performance with less computational complexity, which in turn gives high crypto-secrecy. The initial conditions for the chaotic maps serve as the seed, and only a receiver holding that seed can decrypt the message. The results of the experimental and statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way to encrypt images.
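The substitution-plus-transposition pattern described above can be sketched with a single logistic map (the parameter values and function names are illustrative assumptions, not the paper's exact construction): the chaotic sequence is sorted to obtain a pixel permutation (transposition) and quantized to bytes for an XOR keystream (substitution).

```python
import numpy as np

def chaotic_seq(x0, r, n):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, x0=0.3, r=3.99):
    flat = img.ravel()
    seq = chaotic_seq(x0, r, flat.size)
    perm = np.argsort(seq)                         # transposition: chaotic pixel shuffle
    key = (seq * 255).astype(np.uint8)
    return (flat[perm] ^ key).reshape(img.shape)   # substitution: XOR with keystream

def decrypt(cipher, x0=0.3, r=3.99):
    flat = cipher.ravel()
    seq = chaotic_seq(x0, r, flat.size)
    perm = np.argsort(seq)
    key = (seq * 255).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ key                       # undo XOR, then undo the shuffle
    return plain.reshape(cipher.shape)
```

Because both parties regenerate the same sequence from the shared seed `(x0, r)`, no permutation table needs to be transmitted.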
Image encryption has been used by armies and governments to support top-secret communication. Nowadays, it is frequently used to protect information in various civilian systems. Such systems perform secure image encryption by means of various chaotic maps, so that only a legitimate party holding the encryption key can decrypt the image. This reversible chaotic encryption technique makes use of Arnold's cat map, in which pixel shuffling confuses the image pixels over a number of iterations decided by the authorized image owner. This is followed by other chaotic encryption techniques, the logistic map and the tent map, which ensure secure image encryption. The simulation results show that the proposed system achieves better NPCR, UACI, MSE and PSNR.
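The Arnold's cat map shuffling stage can be sketched directly from its definition for a square N x N image (the function names are illustrative; the logistic and tent map stages are omitted here):

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Arnold's cat map shuffle: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_cat_inverse(img, iterations=1):
    """Undo the shuffle using the inverse matrix [[2, -1], [-1, 1]]."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        prev = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                prev[(2 * x - y) % n, (-x + y) % n] = out[x, y]
        out = prev
    return out
```

The iteration count plays the role of a key here; the map is also periodic, so iterating the forward map far enough would likewise restore the image.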
This paper proposes a new hybrid technique for the combined encryption of text and images based on a hyperchaotic system. Since antiquity, man has looked for ways to send messages to his correspondents in order to communicate with them safely; through successive epochs, this demanded both physical and intellectual effort to find effective and appropriate communication techniques. There is, moreover, a behavior between rigid regularity and randomness, called chaos. It is a new field of investigation, opened along with a new understanding of frequently misunderstood long-term effects. Chaotic cryptography was born by including chaos in encryption algorithms, and this article falls within that context. Its objective is to design and implement an encryption algorithm based on a hyperchaotic system. The algorithm is composed of four methods: two for encrypting images and two for encrypting texts. The user chooses the type of input (image or text) as well as the type of output. With its hybrid methods, this new algorithm constitutes a renovation in the science of cryptology, and this research opens new perspectives.
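The abstract does not specify which hyperchaotic system is used, so the sketch below uses a commonly cited four-dimensional hyperchaotic extension of the Lorenz equations, integrated with a simple Euler step, purely as an illustration of deriving a keystream usable for both text bytes and image bytes (all parameters and names are assumptions):

```python
def hyperchaotic_keystream(n, state=(1.0, 2.0, 3.0, 4.0), dt=0.001):
    """Bytes derived from a 4-D hyperchaotic Lorenz-type flow (illustrative parameters)."""
    a, b, c, r = 10.0, 8.0 / 3.0, 28.0, -1.0
    x, y, z, w = state
    out = []
    for _ in range(n):
        dx = a * (y - x) + w
        dy = c * x - y - x * z
        dz = x * y - b * z
        dw = -y * z + r * w
        x, y, z, w = x + dt * dx, y + dt * dy, z + dt * dz, w + dt * dw
        out.append(int(abs(x) * 1e6) & 0xFF)   # quantize a state variable to a byte
    return out

def xor_encrypt(data, ks):
    """Works identically for text bytes and flattened image bytes."""
    return [d ^ k for d, k in zip(data, ks)]
```

The same keystream generator serves both input types, which is what makes a text/image hybrid scheme straightforward: only the byte serialization differs.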
This paper proposes an improved mesh simplification algorithm based on quadric error metrics (QEM) to efficiently process the huge data volumes in 3D image processing. The method fully uses the geometric information around vertices to prevent model edges from being simplified away and to preserve details. Meanwhile, the differences between the simplified triangles and equilateral triangles are added as error weights to reduce the incidence of narrow triangles and thus avoid visual mutation. Experiments show that our algorithm has clear advantages in time cost and better preserves the visual characteristics of the model, making it suitable for real-time interactive image processing problems.
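The core quadric computation underlying QEM can be sketched in a few lines (function names are illustrative; the paper's edge-preservation and triangle-shape weights would be added on top of this baseline cost):

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane ax + by + cz + d = 0."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Squared-distance cost v^T Q v for the homogeneous vertex v = (x, y, z, 1)."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ Q @ vh)
```

A vertex's quadric is the sum of the quadrics of its incident triangle planes, and an edge collapse is scored by evaluating Q1 + Q2 at the candidate merged position; collapses are processed in order of increasing cost.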
Image denoising is a great challenge in the field of image processing. The discrete wavelet transform (DWT) is one of the most powerful and promising approaches to image denoising, but choosing an optimal threshold is the key factor determining the performance of a DWT-based denoising algorithm. The optimal threshold can be estimated from the image statistics to obtain better denoising performance in terms of image clarity and quality. In this paper we experimentally analyze various methods of denoising sonar images using several thresholding methods (VisuShrink, BayesShrink and NeighShrink) and compare the results in terms of various image quality parameters (PSNR, MSE, SSIM and entropy). The results of the proposed method show an improvement in the visual quality of sonar images, suppressing speckle noise while retaining edge details.
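The BayesShrink threshold selection mentioned above can be sketched as follows, assuming the standard formulation (noise estimated from the diagonal-detail subband via the median absolute deviation, threshold T = sigma_n^2 / sigma_x, then soft thresholding); the DWT itself is omitted and function names are illustrative:

```python
import numpy as np

def noise_sigma_estimate(hh):
    """Robust noise estimate from the diagonal-detail (HH) subband."""
    return np.median(np.abs(hh)) / 0.6745

def bayes_shrink_threshold(subband, noise_sigma):
    """T = sigma_n^2 / sigma_x, with sigma_x inferred from the subband variance."""
    sigma_x = np.sqrt(max(np.mean(subband ** 2) - noise_sigma ** 2, 1e-12))
    return noise_sigma ** 2 / sigma_x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; small coefficients are zeroed."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Each detail subband gets its own threshold, which is what makes BayesShrink adaptive compared with the single universal threshold of VisuShrink.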
Image sharpness measurement is an important part of many image processing applications. Multiple algorithms for measuring image sharpness have been proposed and evaluated in the past, but they were developed with out-of-focus photographs in mind and do not work well with images taken using a digital microscope. In this article we show the differences between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all these categories to quantify the difference, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.
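As a concrete example of the kind of conventional sharpness measure evaluated in such comparisons, the variance-of-Laplacian focus measure is a common baseline (this specific measure is an assumption for illustration, not necessarily one used in the article):

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response; higher values indicate a sharper image."""
    f = img.astype(float)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
           - 4.0 * f[1:-1, 1:-1])
    return float(lap.var())
```

Blurring suppresses high-frequency content, so the Laplacian response, and hence its variance, drops on defocused images, though absolute values are content-dependent, which is part of why such measures transfer poorly across image categories.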
A number of blind Image Quality Evaluation Metrics (IQEMs) for Unmanned Aerial Vehicle (UAV) photography are presented. The visible-light camera is widely used in UAV photography because of its vivid imaging effect; unfortunately, outdoor environmental light strongly degrades its imaging output. In this paper, to address this problem, we design and reuse a series of blind IQEMs to analyze the imaging quality of UAV applications. Human Visual System (HVS) based IQEMs, including image brightness level, contrast level, noise level, edge blur level, texture intensity level, jitter level, and flicker level, are all considered in our application. Once computed, these IQEMs can provide a computational reference for subsequent image processing applications, such as image understanding and recognition. Preliminary image enhancement experiments confirm the correctness and validity of the proposed technique.
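Three of the simpler HVS-based levels listed above admit direct no-reference formulations; the sketch below shows plausible minimal definitions (these exact formulas are assumptions for illustration, since the paper does not give its formulas here):

```python
import numpy as np

def brightness_level(img):
    """Global mean intensity."""
    return float(img.mean())

def contrast_level(img):
    """Global intensity standard deviation (RMS contrast)."""
    return float(img.std())

def edge_sharpness_level(img):
    """Mean gradient magnitude; low values indicate edge blur."""
    f = img.astype(float)
    return float(np.abs(np.diff(f, axis=0)).mean()
                 + np.abs(np.diff(f, axis=1)).mean())
```

All three are blind metrics: they require only the captured frame, never a pristine reference image, which is what makes them usable on live UAV footage.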
Conductor copies of musical scores are typically rich in handwritten annotations. Ongoing archival efforts to digitize orchestral conductors' scores have made scanned copies of hundreds of these annotated scores available in digital formats. The extraction of handwritten annotations from digitized printed documents is a difficult task for computer vision, with most approaches focusing on the extraction of handwritten text. However, conductors' annotation practices provide us with at least two affordances, which make the task more tractable in the musical domain. First, many conductors opt to mark their scores using colored pencils, which contrast with the black and white print of sheet music. Consequently, we show promising results when using color separation techniques alone to recover handwritten annotations from conductors' scores. We also compare annotated scores to unannotated copies and use a printed sheet music comparison tool to recover handwritten annotations as additions to the clean copy. We then investigate the use of both of these techniques in a combined method, which improves the results of the color separation technique. These techniques are demonstrated using a sample of orchestral scores annotated by professional conductors of the New York Philharmonic. Handwritten annotation extraction in musical scores has applications to the systematic investigation of score annotation practices by performers, annotator attribution, and to the interactive presentation of annotated scores, which we briefly discuss.
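The color separation idea exploits exactly the affordance described: colored-pencil marks have RGB channels that disagree, while printed sheet music is near-gray. A minimal sketch of such a separation (the threshold value and function names are illustrative assumptions, not the authors' method):

```python
import numpy as np

def colored_annotation_mask(rgb, spread_threshold=30):
    """Flag pixels whose RGB channels disagree (colored pencil) vs. near-gray print."""
    c = rgb.astype(int)
    spread = c.max(axis=2) - c.min(axis=2)
    return spread > spread_threshold

def extract_annotations(rgb, spread_threshold=30):
    """Copy only the colored pixels onto a white page background."""
    mask = colored_annotation_mask(rgb, spread_threshold)
    out = np.full_like(rgb, 255)
    out[mask] = rgb[mask]
    return out
```

Graphite or black-ink annotations are invisible to this test, which is why combining color separation with comparison against an unannotated copy improves recall.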
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These hashes find extensive applications in content authentication, image indexing for database search and watermarking. Modern robust hashing algorithms consist of feature extraction, a randomization stage to introduce non-invertibility, followed by quantization and binary encoding to produce a binary hash. This paper describes a novel algorithm for generating an image hash based on Log-Polar transform features. The Log-Polar transform is a part of the Fourier-Mellin transformation, often used in image recognition and registration techniques due to its invariance properties under geometric operations. First, we show that the proposed perceptual hash is resistant to content-preserving operations such as compression, noise addition, moderate geometric transformations and filtering. Second, we illustrate the discriminative capability of our hash in rapidly distinguishing between two perceptually different images. Third, we study the security of our method for image authentication purposes. Finally, we show that the proposed hashing method provides both excellent security and robustness.
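The Log-Polar resampling underlying these invariance properties can be sketched as a nearest-neighbour remapping (this is a generic illustration of the transform, not the paper's feature extraction; all names and grid sizes are assumptions):

```python
import numpy as np

def log_polar_sample(img, n_r=16, n_theta=16):
    """Nearest-neighbour resampling onto a log-polar grid centred on the image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))
    angles = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_r, n_theta), dtype=float)
    for i, r in enumerate(radii):
        for j, t in enumerate(angles):
            y = min(max(int(round(cy + r * np.sin(t))), 0), h - 1)
            x = min(max(int(round(cx + r * np.cos(t))), 0), w - 1)
            out[i, j] = img[y, x]
    return out
```

In this grid an image rotation becomes a circular shift along the angle axis and a scaling becomes a shift along the log-radius axis; taking Fourier magnitudes along those axes, as in the Fourier-Mellin pipeline, yields the rotation- and scale-invariant features robust hashes build on.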
E-mail messaging is one of the most popular uses of the Internet, allowing multiple Internet users to exchange messages within a short span of time. Although the security of E-mail messages is an important issue, no such security is supported by the Internet standards. One well-known scheme, PGP (Pretty Good Privacy), is used for the personal security of E-mail messages, but there is a known attack on the CFB-mode encryption used by OpenPGP. To overcome this attack and improve security, a new model is proposed: "Secure Mail using Visual Cryptography". In this model, the message to be transmitted is converted into a grayscale image, and (2, 2) visual cryptographic shares are generated from the grayscale image. The shares are encrypted using a chaos-based image encryption algorithm using the wavelet transform and authenticated using a public-key based image authentication method. One of the shares is sent to a server and the second share is sent to the recipient's mailbox. Because the two shares are transmitted through two different transmission media, a man-in-the-middle attack is not possible. An adversary who holds only one of the two shares has absolutely no information about the message. At the receiver side the two shares are fetched, decrypted and stacked to regenerate the grayscale image, from which the message is reconstructed.
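The (2, 2) visual cryptography step can be sketched with the classic 1x2 subpixel-expansion construction (function names and the choice of expansion are illustrative; the paper's chaotic encryption and authentication layers are omitted):

```python
import numpy as np

def vc_shares(secret, seed=0):
    """(2, 2) visual cryptography with 1x2 subpixel expansion (1 = black subpixel)."""
    rng = np.random.default_rng(seed)
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            p = int(rng.integers(0, 2))
            s1[i, 2 * j:2 * j + 2] = [p, 1 - p]
            if secret[i, j] == 0:                  # white: identical patterns
                s2[i, 2 * j:2 * j + 2] = [p, 1 - p]
            else:                                  # black: complementary patterns
                s2[i, 2 * j:2 * j + 2] = [1 - p, p]
    return s1, s2

def stack(s1, s2):
    """Physically stacking transparencies corresponds to pixel-wise OR."""
    return s1 | s2
```

Each share in isolation is a uniformly random pattern, which is why holding one share reveals absolutely nothing about the message; stacking renders white secret pixels half-black and black pixels fully black.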
Through-wall sensing of hidden objects is a topic receiving wide interest in several application contexts, especially in the field of security. The success of object retrieval relies on accurate scattering models as well as on reliable inversion algorithms. In this paper, a contribution to the modeling of direct scattering for Through-Wall Imaging applications is given. The approach deals with hidden scatterers that are circular cross-section metallic cylinders placed below a dielectric layer, and it is based on an analytical-numerical technique implementing the Cylindrical Wave Approach. As the burial medium of the scatterers may be a dielectric of arbitrary permittivity, general problems of scattering by hidden objects may be considered. When the burial medium is air, the technique can simulate objects concealed in a building interior; otherwise, geophysical problems of targets buried in a layered soil can be simulated. Numerical results for practical cases are reported in the paper, showing the potential of the technique for use in inversion algorithms.
Acoustic microscopy is characterized by relatively long scanning time, which is required for the motion of the transducer over the entire scanning area. This time may be reduced by using a multi-channel acoustical system which has several identical transducers arranged as an array and mounted on a mechanical scanner so that each transducer scans only a fraction of the total area. The resulting image is formed as a combination of all acquired partial data sets. The mechanical instability of the scanner, as well as differences in the parameters of the individual transducers, causes a misalignment of the image fragments. This distortion may be partially compensated for by the introduction of constant or dynamic signal leveling and data shift procedures. However, a reduction of the random instability component requires more advanced algorithms, including auto-adjustment of processing parameters. The described procedure was implemented in the prototype of an ultrasonic fingerprint reading system. The specialized cylindrical scanner provides a helical spiral lens trajectory which eliminates repeatable acceleration, reduces vibration and allows constant data flow at the maximal rate. It is equipped with an array of four spherically focused 50 MHz acoustic lenses operating in pulse-echo mode. Each transducer is connected to a separate channel including pulser, receiver and digitizer. The output 3D data volume contains interlaced B-scans coming from each channel. Afterward, data processing includes pre-determined procedures of constant layer shift to compensate for the transducer displacement, and phase shift and amplitude leveling to compensate for variation in transducer characteristics. Analysis of the statistical parameters of individual scans allows adaptive elimination of the axial misalignment and mechanical vibrations. Further 2D correlation of overlapping partial C-scans realizes an interpolative adjustment which substantially improves the output image.
Implementation of this adaptive algorithm into a data processing sequence allows us to significantly reduce misreading due to hardware noise and finger motion during scanning. The system provides a high quality acoustic image of the fingerprint including different levels of information: fingerprint pattern, sweat porous locations, internal dermis structures. These additional features can effectively facilitate fingerprint based identification. The developed principles and algorithm implementations allow improved quality, stability and reliability of acoustical data obtained with the mechanical scanner, accommodating several transducers. General principles developed during this work can be applied to other configurations of advanced ultrasonic systems designed for various biomedical and NDE applications. The data processing algorithm, developed for a specific biometric task, can be adapted for the compensation of mechanical imperfections of the other devices.
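The 2D-correlation adjustment of overlapping partial C-scans rests on estimating the relative offset between two scans; a minimal sketch using FFT-based circular cross-correlation is shown below (an illustrative integer-shift estimator, not the system's actual interpolative algorithm):

```python
import numpy as np

def estimate_shift(ref, scan):
    """Integer 2-D offset such that np.roll(scan, shift) best matches ref,
    found at the peak of the circular cross-correlation computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(scan))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts in [-n/2, n/2].
    return tuple(int(s) if s <= n // 2 else int(s) - n
                 for s, n in zip(idx, ref.shape))
```

In practice the estimate would be refined to sub-pixel precision by interpolating around the correlation peak, which is the "interpolative adjustment" role described above.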