Biblio

Filters: Keyword is compressive sensing
2017-12-28
Shafee, S., Rajaei, B..  2017.  A secure steganography algorithm using compressive sensing based on HVS feature. 2017 Seventh International Conference on Emerging Security Technologies (EST). :74–78.

Steganography is the science of hiding information in a carrier object, known as the stego object, in order to send secret messages. Steganographic technology rests on three principles: security, robustness, and capacity. In this paper, we present a method that hides a digital image using compressive sensing, based on human visual system features, to increase the security of the stego image. The results show that our proposed method provides higher security than the other presented methods. The Bit Correction Rate between the original secret message and the extracted message is used to show the accuracy of this method.
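The Bit Correction Rate mentioned above is a simple accuracy metric: the fraction of secret-message bits recovered correctly. A minimal sketch of how it could be computed follows; the function name and bit-array inputs are illustrative assumptions, not from the paper.

    import numpy as np

    def bit_correction_rate(original_bits, extracted_bits):
        """Fraction of secret-message bits recovered correctly (1.0 = perfect extraction)."""
        original = np.asarray(original_bits, dtype=np.uint8)
        extracted = np.asarray(extracted_bits, dtype=np.uint8)
        return float(np.mean(original == extracted))

    # Example: a 9-bit secret message with one flipped bit -> BCR ~ 0.89
    print(bit_correction_rate([1, 0, 1, 1, 0, 0, 1, 0, 1],
                              [1, 0, 1, 1, 0, 1, 1, 0, 1]))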

2017-12-12
Gilbert, Anna C., Li, Yi, Porat, Ely, Strauss, Martin J..  2017.  For-All Sparse Recovery in Near-Optimal Time. ACM Trans. Algorithms. 13:32:1–32:26.

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖1 ≤ (1+ε)‖x − x_k‖1. We consider the “for all” model, in which a single matrix Φ, possibly “constructed” non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^(−3) k log(N/k)) measurements and runs in time O(k^(1−α) N^α) for any constant α > 0. In this article, we improve the number of measurements to O(ε^(−2) k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^(1+β) poly(log N, 1/ε)), with a modest restriction that k ≤ N^(1−α) and ε ≤ (log k/log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(N, 1/ε)). With no restrictions on ε, we have an approximation recovery system with m = O(k/ε · log(N/k) · ((log N/log k)^γ + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which is then encoded to a longer message m′_i (in contrast to Porat and Strauss [2012], in which the encoding is simply the identity). Portions of the message m′_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding or recovery algorithm consists of recovering the portions of the longer messages m′_i and then decoding to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to list recovery, introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
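As a rough illustration of the ℓ1/ℓ1 “for all” guarantee stated above (and not of the paper's hashing- and expander-based algorithm itself), the sketch below simply evaluates both sides of the inequality ‖x̂ − x‖1 ≤ (1+ε)‖x − x_k‖1 for a candidate recovery x̂; the function name is an assumption for illustration.

    import numpy as np

    def l1_l1_guarantee_holds(x, x_hat, k, eps):
        """Check the criterion ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1,
        where x_k keeps the k largest-magnitude entries of x (its best k-term approximation)."""
        x_k = np.zeros_like(x)
        top_k = np.argsort(np.abs(x))[-k:]      # indices of the k largest entries of x
        x_k[top_k] = x[top_k]
        lhs = np.linalg.norm(x_hat - x, 1)
        rhs = (1 + eps) * np.linalg.norm(x - x_k, 1)
        return lhs <= rhs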

2017-09-15
Shi, Tianlin, Agostinelli, Forest, Staib, Matthew, Wipf, David, Moscibroda, Thomas.  2016.  Improving Survey Aggregation with Sparsely Represented Signals. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1845–1854.

In this paper, we develop a new aggregation technique to reduce the cost of surveying. Our method aims to jointly estimate a vector of target quantities, such as public opinion or voter intent, across time, and to maintain good estimates when using only a fraction of the data. Inspired by the James-Stein estimator, we resolve this challenge by shrinking the estimates toward a global mean which is assumed to have a sparse representation in some known basis. This assumption has led to two different methods for estimating the global mean: orthogonal matching pursuit and deep learning. Both significantly reduce the number of samples needed to achieve good estimates of the true means of the data and, in the case of presidential elections, can estimate the outcome of the 2012 United States elections while saving hundreds of thousands of samples and maintaining accuracy.
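A minimal sketch of the shrinkage idea described above, assuming a DCT basis for the sparse representation of the global mean and using scikit-learn's orthogonal matching pursuit solver; the basis, shrinkage weight, and sparsity level are illustrative assumptions rather than the paper's exact estimator.

    import numpy as np
    from scipy.fftpack import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def shrink_survey_estimates(y, n_nonzero=5, weight=0.5):
        """y: noisy per-period survey estimates. Fit a global mean that is sparse
        in the DCT basis via OMP, then shrink the raw estimates toward it."""
        n = len(y)
        basis = idct(np.eye(n), axis=0, norm='ortho')      # columns = DCT basis vectors
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(basis, y)
        global_mean = basis @ omp.coef_
        return weight * global_mean + (1 - weight) * y     # James-Stein-style shrinkage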

Li, Zheng, Xia, Yuli, Ye, Ruiqi, Zhao, Junsuo.  2016.  Compressive Sensing for Space Image Compressing. Proceedings of the 2016 International Conference on Intelligent Information Processing. :23:1–23:5.

Compressive sensing is a new technique by which sparse signals are sampled and recovered from only a few measurements. To address the disadvantages of traditional space image compression methods, a completely new compression scheme under the compressive sensing framework is developed in this paper. First, in the coding stage, a simple binary measurement matrix is constructed to obtain signal measurements. Second, the input image is divided into small blocks, and these blocks are used as training sets to learn a dictionary basis for sparse representation. Finally, a sparse reconstruction algorithm is used to recover the original input image. Experimental results show that both the compression rate and the quality of the recovered image are high. Moreover, since the computational cost of the sampling stage is very low, the method is suitable for on-board applications in astronomy.
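A rough sketch of the block-based pipeline described above: a random binary measurement matrix for sampling, an off-the-shelf dictionary learner for the sparse-representation basis, and orthogonal matching pursuit for reconstruction. The block size, number of measurements, and library choices are assumptions for illustration; the paper's exact matrices and learning algorithm may differ.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.linear_model import orthogonal_mp

    block, m, atoms = 8, 32, 128                     # 8x8 blocks, 32 measurements, 128 atoms
    rng = np.random.default_rng(0)
    Phi = rng.integers(0, 2, size=(m, block * block)).astype(float)   # binary measurement matrix

    # Learn a dictionary from training blocks (rows = vectorized blocks; placeholder data here).
    train_blocks = rng.random((1000, block * block))
    D = MiniBatchDictionaryLearning(n_components=atoms, random_state=0).fit(train_blocks).components_.T

    # Encode one block compressively, then recover its sparse code and the block itself.
    x = train_blocks[0]
    y = Phi @ x                                      # compressive measurements of the block
    coeffs = orthogonal_mp(Phi @ D, y, n_nonzero_coefs=10)
    x_rec = D @ coeffs                               # reconstructed block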

2017-02-21
Chen Bai, S. Xu, B. Jing, Miao Yang, M. Wan.  2015.  "Compressive adaptive beamforming in 2D and 3D ultrafast active cavitation imaging". 2015 IEEE International Ultrasonics Symposium (IUS). :1-4.

Ultrafast active cavitation imaging (UACI) based on plane waves can be implemented at a high frame rate, and adaptive beamforming techniques have been introduced to enhance the resolution and signal-to-noise ratio (SNR) of the images. However, regular adaptive beamforming continuously updates the spatial filter for each sample point, which requires a huge amount of computation, especially at high sampling rates and, even more so, in 3D imaging. In order to perform UACI rapidly with satisfactory resolution and SNR, this paper proposes an adaptive beamforming method based on compressive sensing (CS), which retains the quality of adaptive beamforming while substantially reducing the amount of computation. The results of simulations and experiments showed that, compared with regular adaptive beamforming, the new method achieved roughly an eightfold reduction in computation time.
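For context, the sketch below shows the standard minimum-variance (Capon) weight computation that regular adaptive beamforming repeats for every sample point; it is this per-point cost that the CS-based method above is designed to reduce, and the sketch is not the paper's compressive formulation. Parameter names and the diagonal-loading choice are assumptions.

    import numpy as np

    def mvdr_weights(snapshots, steering, loading=1e-3):
        """Minimum-variance (Capon) beamforming weights w = R^{-1} a / (a^H R^{-1} a).
        snapshots: channels x samples data matrix; steering: channel steering vector a."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]                  # sample covariance
        R = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])     # diagonal loading
        Ri_a = np.linalg.solve(R, steering)
        return Ri_a / (steering.conj() @ Ri_a)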

A. Dutta, R. K. Mangang.  2015.  "Analog to information converter based on random demodulation". 2015 International Conference on Electronic Design, Computer Networks Automated Verification (EDCAV). :105-109.

With the increase in signal bandwidths, conventional analog-to-digital converters (ADCs), which operate on the basis of the Shannon/Nyquist theorem, are forced to work at very high rates, leading to low dynamic range and high power consumption. This paper describes an analog-to-information converter based on compressive sensing techniques. The high sampling rate, which is the main drawback of conventional ADCs, is successfully reduced to four times lower than the conventional rate. The system also has the advantage of low power dissipation.
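A toy discrete-time sketch of random demodulation, the principle behind such analog-to-information converters: the input is mixed with a pseudorandom ±1 chipping sequence and then integrated and dumped, so measurements emerge at a fraction of the Nyquist rate (a factor of 4 below, matching the reduction reported above). The function and its parameters are illustrative assumptions, not the hardware described in the paper.

    import numpy as np

    def random_demodulator(x_nyquist, downsample=4, seed=0):
        """Mix the Nyquist-rate signal with a pseudorandom +/-1 sequence, then
        integrate-and-dump over windows of `downsample` samples, yielding
        compressive measurements at 1/downsample of the Nyquist rate."""
        rng = np.random.default_rng(seed)
        chips = rng.choice([-1.0, 1.0], size=len(x_nyquist))
        mixed = x_nyquist * chips
        n = (len(mixed) // downsample) * downsample
        return mixed[:n].reshape(-1, downsample).sum(axis=1)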

Z. Zhu, M. B. Wakin.  2015.  "Wall clutter mitigation and target detection using Discrete Prolate Spheroidal Sequences". 2015 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa). :41-45.

We present a new method for mitigating wall return and a new greedy algorithm for detecting stationary targets after wall clutter has been cancelled. Given limited measurements of a stepped-frequency radar signal consisting of both wall and target return, our objective is to detect and localize the potential targets. Modulated Discrete Prolate Spheroidal Sequences (DPSSs) form an efficient basis for sampled bandpass signals. We mitigate the wall clutter efficiently within the compressive measurements through the use of a bandpass-modulated DPSS basis. Then, in each step of an iterative algorithm for detecting the target positions, we use a modulated DPSS basis to cancel nearly all of the target return corresponding to previously selected targets. With this basis, we improve upon the target detection sensitivity of a Fourier-based technique.
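A minimal sketch of the wall-clutter step described above, assuming the wall return is concentrated in a known narrow band: build a few DPSS vectors with SciPy, modulate them to that band, and project the measured signal onto the orthogonal complement of the resulting subspace. The parameter names and the choice of subspace dimension are assumptions; the paper's bandpass-modulated DPSS construction and greedy target detector are more elaborate.

    import numpy as np
    from scipy.signal.windows import dpss

    def remove_wall_clutter(y, f_wall, half_bw, n_seq=4):
        """y: sampled stepped-frequency signal; f_wall: normalized center frequency of the
        wall return; half_bw: normalized half-bandwidth of that return."""
        M = len(y)
        base = dpss(M, half_bw * M, n_seq)               # n_seq baseband DPSS vectors, shape (n_seq, M)
        mod = np.exp(2j * np.pi * f_wall * np.arange(M)) # shift the DPSS band to the wall's band
        Q, _ = np.linalg.qr((base * mod).T)              # orthonormal basis for the wall subspace
        return y - Q @ (Q.conj().T @ y)                  # project out the wall-clutter component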

I. Ilhan, A. C. Gurbuz, O. Arikan.  2015.  "Sparsity based robust Stretch Processing". 2015 IEEE International Conference on Digital Signal Processing (DSP). :95-99.

Stretch Processing (SP) is a radar signal processing technique that provides high range resolution by processing large-bandwidth signals with lower-rate analog-to-digital converters (ADCs). The range resolution of the large-bandwidth signal is obtained by looking into a limited range window with low-rate ADC samples. The target space in the observed range window is sparse, and compressive sensing (CS) is an important tool to further decrease the number of measurements and sparsely reconstruct the target space for sparse scenes with a known basis, which in the typical application of SP is the Fourier basis. Although classical CS techniques can be applied directly to SP, reconstruction performance degrades because of off-grid targets. In this paper, the applicability of the compressive sensing framework and its sparse signal recovery techniques to stretch processing is studied with off-grid cases taken into account. For sparsity-based robust SP, the Perturbed Parameter Orthogonal Matching Pursuit (PPOMP) algorithm is proposed. PPOMP is an iterative technique that estimates off-grid target parameters through gradient descent. The Earth Mover's Distance (EMD) is used to compute the error between the actual and reconstructed parameters. The performance of the proposed algorithm is compared with classical CS and SP techniques.
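As a point of reference for the off-grid problem discussed above, the sketch below implements plain on-grid orthogonal matching pursuit over a Fourier dictionary, which is the classical CS baseline for SP; PPOMP additionally perturbs each selected grid frequency via gradient descent, a refinement this sketch omits. Function and variable names are illustrative assumptions.

    import numpy as np

    def sp_fourier_omp(y, n_targets):
        """Recover on-grid target frequencies from the dechirped SP signal y, which is
        modeled as a sparse sum of complex exponentials in a Fourier dictionary."""
        N = len(y)
        grid = np.arange(N) / N                                    # normalized frequency grid
        F = np.exp(2j * np.pi * np.outer(np.arange(N), grid)) / np.sqrt(N)
        residual, support = y.astype(complex), []
        for _ in range(n_targets):
            support.append(int(np.argmax(np.abs(F.conj().T @ residual))))  # best-matching atom
            A = F[:, support]
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)         # re-fit on the current support
            residual = y - A @ coeffs
        return grid[sorted(support)]                               # estimated target frequencies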

2015-05-01
Hong Jiang, Songqing Zhao, Zuowei Shen, Wei Deng, Wilford, P.A., Haimi-Cohen, R..  2014.  Surveillance video analysis using compressive sensing with low latency. Bell Labs Technical Journal. 18:63-74.

We propose a low-latency method for analyzing surveillance video that combines low-rank and sparse decomposition (LRSD) with compressive sensing to segment the background and extract moving objects. The video is acquired through compressive measurements, and these measurements are used to analyze the video by a low-rank and sparse decomposition of a matrix. The low-rank component represents the background, and the sparse component, which is obtained in a tight wavelet frame domain, is used to identify moving objects in the surveillance video. An important feature of the proposed low-latency method is that the decomposition can be performed with a small number of video frames, which reduces latency in the reconstruction and makes real-time processing of surveillance video possible. The low-latency method is both justified theoretically and validated experimentally.
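A minimal low-rank plus sparse decomposition sketch in the spirit of the LRSD step above, written as a standard robust-PCA solver (inexact augmented Lagrangian) on fully observed frames; the paper's formulation additionally works from compressive measurements and a tight wavelet frame, which this sketch does not attempt. Parameter defaults are common choices, not values from the paper.

    import numpy as np

    def lrsd(M, lam=None, n_iter=100, rho=1.5):
        """Split M (columns = vectorized video frames) into L + S, where the low-rank
        L captures the static background and the sparse S the moving objects."""
        lam = lam or 1.0 / np.sqrt(max(M.shape))
        S = np.zeros_like(M); Y = np.zeros_like(M)
        mu = 1.25 / np.linalg.norm(M, 2)                 # spectral norm of M
        for _ in range(n_iter):
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt                 # singular-value thresholding
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)         # entrywise soft-thresholding
            Y = Y + mu * (M - L - S)                                     # dual variable update
            mu *= rho
        return L, S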