Biblio

Filters: Keyword is Image reconstruction
2020-06-15
Puteaux, Pauline, Puech, William.  2018.  Noisy Encrypted Image Correction based on Shannon Entropy Measurement in Pixel Blocks of Very Small Size. 2018 26th European Signal Processing Conference (EUSIPCO). :161–165.
Many techniques have been presented to protect image content confidentiality. The owner of an image encrypts it using a key and transmits the encrypted image across a network. If the recipient is authorized to access the original content of the image, he can reconstruct it losslessly. However, if the encrypted image is corrupted by noise during transmission, some parts of the image cannot be deciphered. In order to localize and correct these errors, we propose an approach based on local Shannon entropy measurement. We first analyze this measure as a function of the block size. We then provide a full description of our blind error localization and removal process. Experimental results show that the proposed approach, based on local entropy, can be used in practice to correct noisy encrypted images, even with blocks of very small size.
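
As a minimal sketch of the measurement this paper builds on, the following Python computes the Shannon entropy of each small pixel block; the block size and stand-in image are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: local Shannon entropy of small pixel blocks as a noise indicator
# for encrypted images (illustrative block size, not the paper's settings).
import numpy as np

def block_entropy(block):
    """Shannon entropy (bits) of the gray-level histogram of one block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_map(image, block_size=4):
    """Entropy of each non-overlapping block_size x block_size block."""
    h, w = image.shape
    rows, cols = h // block_size, w // block_size
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = block_entropy(
                image[i*block_size:(i+1)*block_size,
                      j*block_size:(j+1)*block_size])
    return out

# Blocks that decrypt correctly follow natural-image statistics (lower
# entropy); blocks hit by transmission noise stay close to random data.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
print(entropy_map(img, block_size=4).mean())
```
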
2020-06-12
Gu, Feng, Zhang, Hong, Wang, Chao, Wu, Fan.  2019.  SAR Image Super-Resolution Based on Noise-Free Generative Adversarial Network. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. :2575–2578.

Deep learning has been successfully applied to ordinary image super-resolution (SR). However, since synthetic aperture radar (SAR) images are often disturbed by multiplicative noise known as speckle and are more blurred than ordinary images, there are few deep learning methods for SAR image SR. In this paper, a deep generative adversarial network (DGAN) is proposed to reconstruct pseudo high-resolution (HR) SAR images. First, a generator network is constructed to remove the noise from the low-resolution SAR image and generate the HR SAR image. Second, a discriminator network is used to differentiate between the pseudo super-resolution images and realistic HR images. The adversarial objective function is introduced to make the pseudo HR SAR images closer to real SAR images. The experimental results show that our method can maintain the SAR image content with high-level noise suppression. The performance evaluation based on peak signal-to-noise ratio and the structural similarity index shows the superiority of the proposed method over conventional CNN baselines.
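The paper's network is not reproduced here, but one of its evaluation metrics is easy to state exactly. A small sketch of PSNR, with a gamma-distributed multiplicative speckle model as a common stand-in (not taken from the paper):

```python
# Sketch: PSNR, one of the two quality metrics the authors report, plus a
# standard multiplicative-speckle model for illustration.
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.rand(128, 128) * 255
# Gamma noise with unit mean (shape L=4) is a common speckle stand-in.
speckled = ref * np.random.gamma(shape=4.0, scale=0.25, size=ref.shape)
print(psnr(ref, speckled))
```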

2019-12-10
Ponuma, R., Amutha, R., Haritha, B..  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1–5.

A 2D compressive sensing and hyper-chaos-based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos-based measurement matrix construction is employed. The construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos-based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
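A minimal sketch of the keyed-measurement idea, using a logistic map as a stand-in for the paper's chaotic system; the map parameters, sign quantization, and sizes are assumptions for illustration.

```python
# Sketch: chaos-keyed measurement matrix and compressive sampling y = Phi x.
import numpy as np

def logistic_sequence(x0, r, n, discard=1000):
    x = x0
    for _ in range(discard):           # discard the transient
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def chaotic_matrix(m, n, x0=0.37, r=3.99):
    # quantize the chaotic orbit to a +/-1 matrix; (x0, r) act as the key
    seq = logistic_sequence(x0, r, m * n)
    return np.where(seq > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)

n, m = 256, 64                          # 4:1 compression
x = np.zeros(n); x[[10, 50, 200]] = [1.0, -2.0, 0.5]  # sparse test signal
y = chaotic_matrix(m, n) @ x            # compressed, keyed measurements
print(y.shape)
```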

2019-09-23
Tan, L., Liu, K., Yan, X., Wan, S., Chen, J., Chang, C..  2018.  Visual Secret Sharing Scheme for Color QR Code. 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). :961–965.

In this paper, we propose a novel visual secret sharing (VSS) scheme for color QR codes (VSSCQR) with an (n, n) threshold, based on the high capacity, admirable visual effects and popularity of color QR codes. By splitting and encoding a secret image into QR codes and then fusing the QR codes to generate color QR code shares, the scheme can share the secret among a certain number of participants. However, fewer than n participants cannot reveal any information about the secret. The embedding amount and position of the secret image bits generated by VSS are within the range of the error correction capacity of the QR code. Each color share is readable and can be decoded, and thus may not attract notice. On one hand, the secret image can be reconstructed by first decomposing three QR codes from each color QR code share and then stacking the corresponding QR codes, based only on the human visual system, without computational devices. On the other hand, by decomposing three QR codes from each color QR code share and then XORing the three QR codes respectively, we can reconstruct the secret image losslessly. The experimental results demonstrate the effectiveness of our scheme.
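The XOR half of the reconstruction is simple to sketch. A minimal (n, n) XOR split of a binary secret, assuming the QR-code fusion and visual stacking are handled elsewhere:

```python
# Sketch: (n, n) XOR secret sharing -- all n shares XOR back to the secret,
# and any n-1 shares are uniformly random, revealing nothing.
import numpy as np

def xor_split(secret_bits, n, seed=0):
    rng = np.random.default_rng(seed)
    shares = [rng.integers(0, 2, secret_bits.shape, dtype=np.uint8)
              for _ in range(n - 1)]
    last = secret_bits.copy()
    for s in shares:
        last ^= s                       # last share completes the XOR chain
    return shares + [last]

secret = np.random.randint(0, 2, (21, 21), dtype=np.uint8)  # toy QR-size grid
shares = xor_split(secret, n=3)
assert np.array_equal(np.bitwise_xor.reduce(shares), secret)  # lossless
```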

2019-07-01
Rasin, A., Wagner, J., Heart, K., Grier, J..  2018.  Establishing Independent Audit Mechanisms for Database Management Systems. 2018 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.

The pervasive use of databases for the storage of critical and sensitive information in many organizations has led to an increase in the rate at which databases are exploited in computer crimes. While there are several techniques and tools available for database forensic analysis, such tools usually assume a priori database preparation, such as relying on tamper-detection software to already be in place and on the use of detailed logging. Further, such tools are built-in and thus can be compromised or corrupted along with the database itself. In practice, investigators need forensic and security audit tools that work on poorly configured systems and make no assumptions about the extent of damage or malicious hacking in a database. In this paper, we present our database forensics methods, which are capable of examining database content from a storage (disk or RAM) image without using any log or file system metadata. We describe how these methods can be used to detect security breaches in an untrusted environment where the security threat arose from a privileged user (or someone who has obtained such privileges). Finally, we argue that a comprehensive and independent audit framework is necessary in order to detect and counteract threats in an environment where the security breach originates from an administrator (either at the database or operating system level).
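For flavor only, a toy sketch of metadata-free carving; the paper's page-level database reconstruction is far more involved than this printable-string scan.

```python
# Toy sketch of metadata-free carving: pull printable-ASCII runs straight
# out of raw bytes, ignoring file system and log structure entirely.
def carve_strings(raw: bytes, min_len=6):
    runs, current = [], bytearray()
    for b in raw:
        if 32 <= b < 127:
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append(current.decode('ascii'))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode('ascii'))
    return runs

raw = b'\x00\x17user=admin;\xff\x10SELECT name FROM accounts\x00\x00'
print(carve_strings(raw))   # ['user=admin;', 'SELECT name FROM accounts']
```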

2019-03-25
Li, Y., Guan, Z., Xu, C..  2018.  Digital Image Self Restoration Based on Information Hiding. 2018 37th Chinese Control Conference (CCC). :4368–4372.
With the rapid development of computer networks, multimedia information is widely used, and the security of digital media has drawn much attention. A revised photo used as forensic evidence can badly distort the truth of a case, and tampered pictures on social networks can have a negative impact on the parties involved as well. In order to ensure the authenticity and integrity of digital media, self-recovery of digital images based on information hiding is studied in this paper. Jarvis half-toning is used to compress the digital image and obtain the backup data, and the backup data is then spread to generate the reference data. A hash algorithm generates hash data from the reference data and the original data. The reference data and hash data together, as a digital watermark, are scattered and embedded in the low-order bits of the digital image. When the image is maliciously tampered with, the hash bits are used to detect and locate the tampered area, and image self-recovery is performed by extracting the reference data hidden throughout the whole image. In this paper, a thorough rebuild-quality assessment of self-recovered images is performed, and better performance than the traditional DCT (Discrete Cosine Transform) quantization-truncation approach is achieved. Regardless of the quality of the tampered content, a reference authentication system designed according to the principles presented in this paper allows higher-quality reconstruction, recovering the original image with good quality even when a large area of the image is tampered with.
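
A minimal sketch of the embedding primitive such schemes rely on: scattering watermark bits into the low-order bit plane. The keyed permutation and sizes are illustrative, and the hash/reference generation is not modeled.

```python
# Sketch: scatter watermark bits into the LSB plane and read them back.
import numpy as np

def embed_lsb(image, bits, key=42):
    flat = image.ravel().copy()
    pos = np.random.default_rng(key).permutation(flat.size)[:bits.size]
    flat[pos] = (flat[pos] & 0xFE) | bits   # overwrite bit 0 at keyed spots
    return flat.reshape(image.shape), pos

def extract_lsb(image, pos):
    return image.ravel()[pos] & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = np.random.randint(0, 2, 512).astype(np.uint8)
marked, pos = embed_lsb(img, bits)
assert np.array_equal(extract_lsb(marked, pos), bits)
```
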
2019-01-21
Kos, J., Fischer, I., Song, D..  2018.  Adversarial Examples for Generative Models. 2018 IEEE Security and Privacy Workshops (SPW). :36–42.

We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
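
A minimal sketch in the spirit of the third attack class: optimize a perturbation so the encoder maps the adversarial input near a chosen target's latent code. The two-layer "encoder" here is a random stand-in, not a trained VAE.

```python
# Sketch: latent-matching attack against a (toy, untrained) encoder.
import torch

torch.manual_seed(0)
encoder = torch.nn.Sequential(            # stand-in for a trained VAE encoder
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 16),
)

source = torch.rand(1, 1, 28, 28)
target = torch.rand(1, 1, 28, 28)
z_target = encoder(target).detach()       # latent code to imitate

delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    z = encoder((source + delta).clamp(0, 1))
    loss = ((z - z_target) ** 2).mean() + 1e-3 * delta.abs().mean()
    loss.backward()
    opt.step()
# Decoding source + delta would now approximate the target's reconstruction
# even though the perturbed input still resembles the source.
print(loss.item())
```
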

2019-01-16
Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J..  2018.  Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. :1778–1787.
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. Standard denoisers suffer from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed other models by a large margin.
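
The guiding loss is easy to state. A sketch, with target_model and denoiser as placeholders for trained networks and the layer/norm choice left as an assumption (the paper defines the exact high-level representation used):

```python
# Sketch: HGD-style training signal -- distance between the target model's
# high-level outputs on the clean image and on the denoised adversarial one.
import torch

def hgd_loss(target_model, denoiser, x_clean, x_adv):
    feat_clean = target_model(x_clean).detach()       # guidance, no gradient
    feat_denoised = target_model(denoiser(x_adv))
    return (feat_clean - feat_denoised).abs().mean()  # an L1-type distance
```
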
2018-11-19
Chen, Y., Lai, Y., Liu, Y..  2017.  Transforming Photos to Comics Using Convolutional Neural Networks. 2017 IEEE International Conference on Image Processing (ICIP). :2010–2014.

In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos into comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as painting from a style image to a content image, for more minimalist styles such as comics the method often fails to produce satisfactory results. To address this, we further introduce a dedicated comic style CNN, which is trained for classifying comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics. We develop a modified optimization framework such that a grayscale image is guaranteed to be synthesized. To avoid converging to poor local minima, we further initialize the output image using the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.

2018-08-23
Xu, W., Yan, Z., Tian, Y., Cui, Y., Lin, J..  2017.  Detection with compressive measurements corrupted by sparse errors. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–5.

Compressed sensing can represent a sparse signal with a small number of measurements compared to Nyquist-rate samples. Considering the high complexity of reconstruction algorithms in CS, compressive detection has recently been proposed, which performs detection directly in the compressive domain without reconstruction. Different from existing work that generally considers measurements corrupted by dense noises, this paper studies the compressive detection problem when the measurements are corrupted by both dense noises and sparse errors. Sparse errors exist in many practical systems, such as those affected by impulse noise or narrowband interference. We derive the theoretical performance of compressive detection when the sparse error is either deterministic or random. The theoretical results are further verified by simulations.
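For intuition, a sketch of detection directly on compressive measurements using a plain energy detector; the sparse-error handling that is the paper's actual contribution is not reproduced, and all sizes are arbitrary.

```python
# Sketch: reconstruction-free detection on y = Phi s + noise via an
# energy statistic (the paper additionally handles sparse errors).
import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 128
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

s = np.zeros(n); s[rng.choice(n, 8, replace=False)] = 3.0  # sparse signal
noise = 0.1 * rng.standard_normal(m)

y_h1 = Phi @ s + noise            # hypothesis H1: signal present
y_h0 = noise                      # hypothesis H0: signal absent
energy = lambda y: y @ y          # statistic, compared against a threshold
print(energy(y_h1), energy(y_h0))
```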

2018-04-04
Bao, D., Yang, F., Jiang, Q., Li, S., He, X..  2017.  Block RLS algorithm for surveillance video processing based on image sparse representation. 2017 29th Chinese Control And Decision Conference (CCDC). :2195–2200.

A block recursive least squares (BRLS) algorithm for dictionary learning in a compressed sensing system is developed for surveillance video processing. The new method uses image blocks directly and iteratively to train dictionaries via the BRLS algorithm, which differs from classical methods that require transforming blocks into columns first and then supplying all training blocks at one time. Since the background in surveillance video is almost fixed, the residual of the foreground can be represented sparsely and reconstructed with background subtraction directly. The new method and framework are applied to real image and surveillance video processing. Simulation results show that the new method achieves better representation performance than classical ones in both image and surveillance video processing.

2017-11-20
Aqel, S., Aarab, A., Sabri, M. A..  2016.  Shadow detection and removal for traffic sequences. 2016 International Conference on Electrical and Information Technologies (ICEIT). :168–173.

This paper addresses the problem of shadow detection and removal in traffic vision analysis. The presence of shadows in traffic sequences is essentially unavoidable and leads to errors at the segmentation stage, with shadows often misclassified as an object region or as a moving object. This paper presents a shadow removal method, based on both color and texture features, aiming to efficiently retrieve moving objects whose detection is usually affected by cast shadows. Additionally, in order to get a shadow-free foreground segmentation image, a morphological reconstruction algorithm is used to recover the foreground disturbed by shadow removal. Once shadows are detected, an automatic shadow removal model is proposed based on the information retrieved from the histogram shape. Experimental results on a real traffic sequence are presented to test the proposed approach and to validate the algorithm's performance.

2017-03-08
Liu, Weijian, Chen, Zeqi, Chen, Yunhua, Yao, Ruohe.  2015.  An ℓ1/2-BTV regularization algorithm for super-resolution. 2015 4th International Conference on Computer Science and Network Technology (ICCSNT). 01:1274–1281.

In this paper, we propose a novel regularization term for super-resolution by combining a bilateral total variation (BTV) regularizer and a sparsity prior model on the image. The term is composed of the weighted least squares minimization and the bilateral filter proposed by Elad, but adds an ℓ1/2 regularizer. It is referred to as ℓ1/2-BTV. The proposed algorithm serves to restore image details more precisely and eliminate image noise more effectively by introducing the sparsity of the ℓ1/2 regularizer into the traditional bilateral total variation (BTV) regularizer. Experiments were conducted on both simulated and real image sequences. The results show that the proposed algorithm generates high-resolution images of better quality, as measured by both de-noising and edge-preservation metrics, than other methods.
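For reference, a sketch of the BTV term the ℓ1/2 penalty is combined with, BTV(x) = Σ_{l,m} α^{|l|+|m|} ‖x − S_{l,m} x‖₁; circular shifts stand in for proper boundary handling, and the window size and decay are illustrative defaults.

```python
# Sketch: bilateral total variation of an image (np.roll approximates the
# shift operators; real implementations handle boundaries explicitly).
import numpy as np

def btv(x, p=2, alpha=0.7):
    total = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            total += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return total

print(btv(np.random.rand(32, 32)))
```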

Boykov, Y., Isack, H., Olsson, C., Ayed, I. B..  2015.  Volumetric Bias in Segmentation and Reconstruction: Secrets and Solutions. 2015 IEEE International Conference on Computer Vision (ICCV). :1769–1777.

Many standard optimization methods for segmentation and reconstruction compute ML model estimates for appearance or geometry of segments, e.g. Zhu-Yuille [23], Torr [20], Chan-Vese [6], GrabCut [18], Delong et al. [8]. We observe that the standard likelihood term in these formulations corresponds to a generalized probabilistic K-means energy. In learning it is well known that this energy has a strong bias to clusters of equal size [11], which we express as a penalty for KL divergence from a uniform distribution of cardinalities. However, this volumetric bias has been mostly ignored in computer vision. We demonstrate significant artifacts in standard segmentation and reconstruction methods due to this bias. Moreover, we propose binary and multi-label optimization techniques that either (a) remove this bias or (b) replace it by a KL divergence term for any given target volume distribution. Our general ideas apply to continuous or discrete energy formulations in segmentation, stereo, and other reconstruction problems.
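The bias admits a short worked example: the KL divergence of the empirical segment-volume distribution from uniform is zero exactly when segments have equal size and grows as volumes become unequal. The sizes below are made up for illustration.

```python
# Sketch: volumetric bias expressed as KL(segment volumes || uniform).
import numpy as np

def volumetric_bias(segment_sizes):
    p = np.asarray(segment_sizes, dtype=float)
    p /= p.sum()
    u = np.full_like(p, 1.0 / p.size)
    return float(np.sum(p * np.log(p / u)))

print(volumetric_bias([100, 100, 100]))   # ~0: equal sizes, no penalty
print(volumetric_bias([280, 10, 10]))     # > 0: unequal volumes penalized
```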

Kerl, C., Stückler, J., Cremers, D..  2015.  Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras. 2015 IEEE International Conference on Computer Vision (ICCV). :2264–2272.

We propose a dense continuous-time tracking and mapping method for RGB-D cameras. We parametrize the camera trajectory using continuous B-splines and optimize the trajectory through dense, direct image alignment. Our method also directly models rolling shutter in both RGB and depth images within the optimization, which improves tracking and reconstruction quality for low-cost CMOS sensors. Using a continuous trajectory representation has a number of advantages over a discrete-time representation (e.g. camera poses at the frame interval). With splines, fewer variables need to be optimized than with a discrete representation, since the trajectory can be represented with fewer control points than frames. Splines also naturally include smoothness constraints on derivatives of the trajectory estimate. Finally, the continuous trajectory representation makes it possible to compensate for rolling shutter effects, since a pose estimate is available at any exposure time of an image. Our approach demonstrates superior quality in tracking and reconstruction compared to approaches with discrete-time or global shutter assumptions.

2017-02-21
S. Lohit, K. Kulkarni, P. Turaga, J. Wang, A. C. Sankaranarayanan.  2015.  "Reconstruction-free inference on compressive measurements". 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :16–24.

Spatial-multiplexing cameras have emerged as a promising alternative to classical imaging devices, often enabling acquisition of 'more for less'. One popular architecture for spatial multiplexing is the single-pixel camera (SPC), which acquires coded measurements of the scene with pseudo-random spatial masks. Significant theoretical developments over the past few years provide a means for reconstruction of the original imagery from coded measurements at sub-Nyquist sampling rates. Yet, accurate reconstruction generally requires high measurement rates and high signal-to-noise ratios. In this paper, we enquire whether one can perform high-level visual inference problems (e.g. face recognition or action recognition) from compressive cameras without the need for image reconstruction. This is an interesting question since in many practical scenarios, our goals extend beyond image reconstruction. However, most inference tasks often require non-linear features and it is not clear how to extract such features directly from compressed measurements. In this paper, we show that one can extract nontrivial correlational features directly without reconstruction of the imagery. As a specific example, we consider the problem of face recognition beyond the visible spectrum, e.g. in the short-wave infrared (SWIR) region, where pixels are expensive. We base our framework on smashed filters, which suggest that inner products between high-dimensional signals can be computed in the compressive domain to a high degree of accuracy. We collect a new face image dataset of 30 subjects, obtained using an SPC. Using face recognition as an example, we show that one can indeed perform reconstruction-free inference with a very small loss of accuracy at very high compression ratios of 100 and more.
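The property smashed filters rest on can be checked in a few lines: random compressive projections approximately preserve inner products, so correlational features survive compression. The sizes below are arbitrary.

```python
# Sketch: inner products are approximately preserved under random projection.
import numpy as np

rng = np.random.default_rng(7)
n, m = 4096, 400                        # roughly 10:1 compression
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)
y = rng.standard_normal(n)
print(np.dot(x, y), np.dot(Phi @ x, Phi @ y))   # close in expectation
```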

A. Roy, S. P. Maity.  2015.  "On segmentation of CS reconstructed MR images". 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR). :1–6.

This paper addresses the issue of magnetic resonance (MR) image reconstruction in the compressive sampling (or compressed sensing) paradigm, followed by its segmentation. To improve image reconstruction from a small measurement space, weighted linear prediction and random noise injection at the unobserved space are done first, followed by spatial-domain de-noising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise and/or missing edges, boundaries, lines, curvatures, etc., and residual noise. The curvelet transform is purposely used for removal of noise and edge enhancement, through hard thresholding and suppression of approximate sub-bands, respectively. Finally, genetic algorithm (GA)-based clustering is performed for segmentation of the sharpened MR image using the weighted contribution of variance and entropy values. Extensive simulation results are shown to highlight the performance improvement of both the image reconstruction and segmentation problems.

A. Pramanik, S. P. Maity.  2015.  "DPCM-quantized block-based compressed sensing of images using Robbins Monro approach". 2015 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE). :18–21.

Compressed sensing or compressive sampling is the process of signal reconstruction from samples obtained at a rate far below the Nyquist rate. In this work, Differential Pulse Coded Modulation (DPCM) is coupled with block-based compressed sensing (CS) reconstruction with the Robbins-Monro (RM) approach. RM is a non-parametric iterative CS reconstruction technique. In this work, extensive simulations are reported, showing that RM gives better performance than the existing DPCM block-based Smoothed Projected Landweber (SPL) reconstruction technique. The noise seen in the block SPL algorithm is much less evident in this non-parametric approach. To achieve further compression of the data, the Lempel-Ziv-Welch coding technique is proposed.

H. Kiragu, G. Kamucha, E. Mwangi.  2015.  "A fast procedure for acquisition and reconstruction of magnetic resonance images using compressive sampling". AFRICON 2015. :1–5.

This paper proposes a fast and robust procedure for sensing and reconstruction of sparse or compressible magnetic resonance images based on compressive sampling theory. The algorithm starts with incoherent undersampling of the k-space data of the image using a random matrix. The undersampled data is sparsified using the Haar transform. The Haar transform coefficients of the k-space data are then reconstructed using the orthogonal matching pursuit algorithm. The reconstructed coefficients are inverse transformed into k-space data and then into the image in the spatial domain. Finally, a median filter is used to suppress the recovery noise artifacts. Experimental results show that the proposed procedure greatly reduces the image data acquisition time without significantly reducing the image quality. The results also show that the error in the reconstructed image is reduced by median filtering.
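A sketch of the pipeline's outer steps on a toy phantom; to stay short, zero-filled recovery stands in for the paper's Haar-domain orthogonal matching pursuit step, and the sampling fraction is an arbitrary choice.

```python
# Sketch: random k-space undersampling, zero-filled recovery, median filter.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # toy phantom

kspace = np.fft.fft2(img)
mask = rng.random(img.shape) < 0.3                  # keep ~30% of samples
recon = np.abs(np.fft.ifft2(kspace * mask))         # zero-filled recovery
recon = median_filter(recon, size=3)                # suppress artifacts
print(float(recon.max()))
```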

Liang Zhongyin, Huang Jianjun, Huang Jingxiong.  2015.  "Sub-sampled IFFT based compressive sampling". TENCON 2015 - 2015 IEEE Region 10 Conference. :1–4.

In this paper, a new approach based on the Sub-sampled Inverse Fast Fourier Transform (SSIFFT) for efficiently acquiring compressive measurements is proposed, motivated by the random-filter-based method and the sub-sampled FFT. In our approach, we first multiply the FFT of the input signal and that of a random-tap FIR filter in the frequency domain, and then utilize the SSIFFT to obtain compressive measurements in the time domain. It requires less data storage and computation than existing methods based on random filters. Moreover, it is suitable for both one-dimensional and two-dimensional signals. Experimental results show that the proposed approach is effective and efficient.
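A sketch of the measurement path, with a full IFFT plus subsampling standing in for the pruned SSIFFT butterfly (the pruning changes the computational cost, not the kept outputs); sizes and filter length are arbitrary.

```python
# Sketch: frequency-domain random filtering, then a subsampled time-domain
# output as the compressive measurements.
import numpy as np

rng = np.random.default_rng(5)
n, m = 1024, 128
x = rng.standard_normal(n)              # input signal
h = rng.standard_normal(64)             # random-tap FIR filter

X = np.fft.fft(x)
H = np.fft.fft(h, n)                    # zero-padded filter spectrum
filtered = np.fft.ifft(X * H).real      # circular convolution via FFT
y = filtered[::n // m]                  # keep m samples as measurements
print(y.shape)
```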

S. R. Islam, S. P. Maity, A. K. Ray.  2015.  "On compressed sensing image reconstruction using linear prediction in adaptive filtering". 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :2317–2323.

Compressed sensing (CS) or compressive sampling deals with reconstruction of signals from limited observations/measurements, far below the Nyquist-rate requirement. This is essential in many practical imaging systems, as sampling at the Nyquist rate may not always be possible due to limited storage, a slow sampling rate, or extremely expensive measurements, e.g. in magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. Distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by random noise addition to act as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to diminish the noise and to reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over the existing work.
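The named denoising step has a direct library counterpart. A minimal sketch using SciPy's adaptive Wiener filter; the window size is an arbitrary choice, and the linear-prediction stage is not modeled.

```python
# Sketch: spatial-domain adaptive Wiener filtering of a noisy estimate.
import numpy as np
from scipy.signal import wiener

noisy = np.random.rand(64, 64) + 0.2 * np.random.randn(64, 64)
denoised = wiener(noisy, mysize=5)      # locally adaptive Wiener filter
print(denoised.shape)
```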

2017-02-14
V. Mishra, K. Choudhary, S. Maheshwari.  2015.  "Video Streaming Using Dual-Channel Dual-Path Routing to Prevent Packet Copy Attack". 2015 IEEE International Conference on Computational Intelligence & Communication Technology. :645–650.

Video streaming between a sender and a receiver involves multiple unsecured hops where the video data can be illegally copied if the nodes run malicious forwarding logic. This paper introduces a novel method to stream video data through dual channels using dual data paths. The frames' pixels are also scrambled. The video frames are divided into two frame streams. At the receiver side, the video is reconstructed and played for a limited time period. As soon as a small chunk of the merged video has been played, it is deleted from the video buffer. An attempt has been made to formalize the approach, and an initial simulation has been done in MATLAB. Preliminary results are optimistic, and a refined approach may lead to the formal design of a network-layer routing protocol with corrections at the transport layer.
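A sketch of the sender side under two assumptions not pinned down by the abstract: frames alternate between the two paths, and scrambling is a keyed pixel permutation.

```python
# Sketch: keyed pixel scrambling plus an even/odd frame split across paths.
import numpy as np

def scramble(frame, key=2024):
    perm = np.random.default_rng(key).permutation(frame.size)
    return frame.ravel()[perm].reshape(frame.shape)

frames = [np.random.randint(0, 256, (48, 64), dtype=np.uint8)
          for _ in range(10)]
scrambled = [scramble(f) for f in frames]
channel_a = scrambled[0::2]             # even-indexed frames, path 1
channel_b = scrambled[1::2]             # odd-indexed frames, path 2
# A node on either path sees only half the frames, each pixel-permuted.
print(len(channel_a), len(channel_b))
```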

2015-05-06
Malik, O.A., Arosha Senanayake, S.M.N., Zaheer, D..  2015.  An Intelligent Recovery Progress Evaluation System for ACL Reconstructed Subjects Using Integrated 3-D Kinematics and EMG Features. IEEE Journal of Biomedical and Health Informatics. 19:453–463.

An intelligent recovery evaluation system is presented for objective assessment and performance monitoring of anterior cruciate ligament reconstructed (ACL-R) subjects. The system acquires 3-D kinematics of tibiofemoral joint and electromyography (EMG) data from surrounding muscles during various ambulatory and balance testing activities through wireless body-mounted inertial and EMG sensors, respectively. An integrated feature set is generated based on different features extracted from data collected for each activity. The fuzzy clustering and adaptive neuro-fuzzy inference techniques are applied to these integrated feature sets in order to provide different recovery progress assessment indicators (e.g., current stage of recovery, percentage of recovery progress as compared to healthy group, etc.) for ACL-R subjects. The system was trained and tested on data collected from a group of healthy and ACL-R subjects. For recovery stage identification, the average testing accuracy of the system was found above 95% (95-99%) for ambulatory activities and above 80% (80-84%) for balance testing activities. The overall recovery evaluation performed by the proposed system was found consistent with the assessment made by the physiotherapists using standard subjective/objective scores. The validated system can potentially be used as a decision supporting tool by physiatrists, physiotherapists, and clinicians for quantitative rehabilitation analysis of ACL-R subjects in conjunction with the existing recovery monitoring systems.

2015-05-05
Vantigodi, S., Babu, R.V..  2014.  Entropy constrained exemplar-based image inpainting. 2014 International Conference on Signal Processing and Communications (SPCOM). :1–5.

Image inpainting is the process of filling in an unwanted region of an image marked by the user. It is used for restoring old paintings and photographs, removing red eyes from pictures, etc. In this paper, we propose an efficient inpainting algorithm which takes care of false edge propagation. We use the classical exemplar-based technique to find the priority term for each patch. To ensure that the nearest-neighbor patch, found by minimizing the L2 distance between patches, has similar edge content, we impose an additional constraint that the entropies of the patches be similar; the entropy of a patch acts as a good measure of its edge content. Additionally, we fill the image by considering overlapping patches to ensure smoothness in the output. We use the structural similarity index as the measure of similarity between the ground truth and the inpainted image. The results of the proposed approach on a number of real and synthetic images show the effectiveness of our algorithm in removing objects and thin scratches or text written on images. It is also shown that the proposed approach is robust to the shape of the manually selected target. Our results compare favorably to those obtained by existing techniques.
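A sketch of the matching rule: L2 nearest-neighbor search over candidate patches, restricted to candidates whose entropy is close to the target's. The tolerance and patch size are illustrative choices, not the paper's.

```python
# Sketch: entropy-constrained nearest-neighbor patch matching.
import numpy as np

def entropy(patch):
    hist = np.bincount(patch.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_match(target, candidates, ent_tol=0.5):
    t_ent = entropy(target)
    best, best_d = None, np.inf
    for c in candidates:
        if abs(entropy(c) - t_ent) > ent_tol:    # entropy constraint
            continue
        d = np.sum((c.astype(float) - target.astype(float)) ** 2)  # L2
        if d < best_d:
            best, best_d = c, d
    return best

cands = [np.random.randint(0, 256, (9, 9), dtype=np.uint8) for _ in range(50)]
print(best_match(cands[0], cands) is not None)
```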

2015-05-04
Chitnis, P.V., Lloyd, H., Silverman, R.H..  2014.  An adaptive interferometric sensor for all-optical photoacoustic microscopy. 2014 IEEE International Ultrasonics Symposium (IUS). :353–356.

Conventional photoacoustic microscopy (PAM) involves detection of optically induced thermo-elastic waves using ultrasound transducers. This approach requires acoustic coupling, and the spatial resolution is limited by the focusing properties of the transducer. We present an all-optical PAM approach that involves detection of the photoacoustically induced surface displacements using an adaptive, two-wave mixing interferometer. The interferometer consisted of a 532-nm CW laser and a Bismuth Silicon Oxide photorefractive crystal (PRC) that was 5×5×5 mm³. The laser beam was expanded to 3 mm and split into two paths, a reference beam that passed directly through the PRC and a signal beam that was focused at the surface through a 100X infinity-corrected objective and returned to the PRC. The PRC matched the wave front of the reference beam to that of the signal beam for optimal interference. The interference of the two beams produced optical-intensity modulations that were correlated with surface displacements. A GHz-bandwidth photoreceiver, a low-noise 20-dB amplifier, and a 12-bit digitizer were employed for time-resolved detection of the surface-displacement signals. In combination with a 5-ns, 532-nm pump laser, the interferometric probe was employed for imaging ink patterns, such as a fingerprint, on a glass slide. The signal beam was focused at a reflective cover slip that was separated from the fingerprint by 5 mm of acoustic-coupling gel. A 3×5 mm² area of the coverslip was raster-scanned with 100-μm steps, and the surface-displacement signals at each location were averaged 20 times. Image reconstruction based on time reversal of the PA-induced displacement signals produced the photoacoustic image of the ink patterns. The reconstructed image of the fingerprint was consistent with its photograph, which demonstrated the ability of our system to resolve micron-scaled features at a depth of 5 mm.