Biblio

Filters: Keyword is compressive sampling
2019-12-10
Huang, Xuping.  2018.  Mechanism and Implementation of Watermarked Sample Scanning Method for Speech Data Tampering Detection. Proceedings of the 2nd International Workshop on Multimedia Privacy and Security. :54-60.

The integrity and reliability of speech data are important issues for probative use. Watermarking technology supplies an alternative to digital signatures for guaranteeing the authenticity of multimedia data. This work proposes a novel digital watermarking scheme based on a reversible compression algorithm with sample scanning to detect tampering in the time domain. To detect tampering precisely, the digital speech data is divided into fixed-length frames, and the content-based hash information of each frame is calculated and embedded into the speech data for verification. The Huffman compression algorithm is applied to the four least significant bits of each sample after pulse-code modulation processing to achieve low distortion and high capacity for the hidden payload. Experiments on audio quality, detection precision, and robustness against attacks were conducted; the results show effective tampering detection, with an error of around 0.032 s for a 10 s speech clip. Distortion is imperceptible, with an average signal-to-noise ratio of 22.068 dB for the Huffman-based method and 24.139 dB for the intDCT-based method, and an average MOS of 3.478 and 4.378, respectively. The bit error rate (BER) between stego data and attacked stego data in both the time and frequency domains is approximately 28.6% on average, which indicates the robustness of the proposed hiding method.
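
To make the frame-wise mechanism concrete, below is a minimal sketch of embedding a per-frame content hash into the sample LSB plane and using it to localize tampering. The frame length, the SHA-256 hash, and single-LSB embedding are illustrative assumptions; the paper embeds into four LSBs with Huffman compression, which is not reproduced here.

```python
# Illustrative sketch only: per-frame hash in the LSB plane of int16 PCM.
import hashlib
import numpy as np

FRAME_LEN = 1024  # illustrative frame length in samples (not the paper's value)

def _frame_hash_bits(frame: np.ndarray) -> np.ndarray:
    # Hash the content bits (everything except the LSB we overwrite),
    # then tile the 256 digest bits to cover the whole frame.
    digest = hashlib.sha256((frame >> 1).astype(np.int16).tobytes()).digest()
    return np.resize(np.unpackbits(np.frombuffer(digest, dtype=np.uint8)), len(frame))

def embed_frame_hashes(samples: np.ndarray) -> np.ndarray:
    """Embed each frame's content hash into its LSB plane."""
    stego = samples.copy()
    for start in range(0, len(stego) - FRAME_LEN + 1, FRAME_LEN):
        frame = stego[start:start + FRAME_LEN]
        stego[start:start + FRAME_LEN] = (frame & ~1) | _frame_hash_bits(frame)
    return stego

def tampered_frames(samples: np.ndarray) -> list:
    """Return indices of frames whose embedded hash no longer matches."""
    bad = []
    for i, start in enumerate(range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN)):
        frame = samples[start:start + FRAME_LEN]
        if not np.array_equal(frame & 1, _frame_hash_bits(frame)):
            bad.append(i)
    return bad

pcm = np.random.default_rng(0).integers(-2**15, 2**15, 10 * FRAME_LEN).astype(np.int16)
stego = embed_frame_hashes(pcm)
stego[3 * FRAME_LEN + 7] ^= 2        # flip a content bit in frame 3 (tampering)
print(tampered_frames(stego))        # -> [3]
```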

2019-01-16
Shi, T., Shi, W., Wang, C., Wang, Z..  2018.  Compressed Sensing based Intrusion Detection System for Hybrid Wireless Mesh Networks. 2018 International Conference on Computing, Networking and Communications (ICNC). :11–15.
As wireless mesh networks (WMNs) develop rapidly, security issues become increasingly important. An Intrusion Detection System (IDS) is one of the crucial ways to detect attacks. However, IDSs in wireless networks, including WMNs, incur high detection overhead, which degrades network performance. In this paper, we apply compressed sensing (CS) theory to intrusion detection and propose a CS-based IDS for hybrid WMNs. Since CS can reconstruct a sparse signal from compressive samples, we process the detected data and construct sparse original signals. Through a reconstruction algorithm, the compressively sampled data can be reconstructed and used for detecting intrusions, which reduces the detection overhead. We also propose the Active State Metric (ASM) as an attack metric, which measures PHY-layer activity and the energy consumption of each node. Intensive simulations show that under 50% attack density, the proposed IDS ensures a 95% detection rate while reducing detection overhead by about 40% on average.
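
As a rough illustration of the reconstruction step such a CS-based IDS relies on, the sketch below recovers a sparse vector from compressive measurements y = Phi x using orthogonal matching pursuit. Dimensions, sparsity, and the Gaussian sensing matrix are illustrative; the paper's detection pipeline and ASM metric are not modeled.

```python
# Minimal orthogonal matching pursuit (OMP) for sparse recovery from y = Phi @ x.
import numpy as np

def omp(Phi: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Recover a k-sparse x from compressive measurements y = Phi @ x."""
    m, n = Phi.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                              # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = Phi @ x_true                                  # compressive sampling
print("max reconstruction error:", np.abs(omp(Phi, y, k) - x_true).max())
```
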
2018-08-23
Bailer, Werner.  2017.  Efficient Approximate Medoids of Temporal Sequences. Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing. :3:1–3:6.
In order to compactly represent a set of data, its medoid (the element with minimum summed distance to all other elements) is a useful choice. This has applications in clustering, compression and visualisation of data. In multimedia data, the set of data is often sampled as a sequence in time or space, such as a video shot or views of a scene. The exact calculation of the medoid may be costly, especially if the distance function between elements is not trivial. While approximation methods for medoid selection exist, we show in this work that they do not perform well on sequences of images. We thus propose a novel algorithm for efficiently selecting an approximate medoid of a temporal sequence and assess its performance on two large-scale video data sets.
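
For reference, the exact medoid the paper approximates can be computed as below; the quadratic number of distance evaluations is precisely the cost the approximation avoids. The Euclidean distance here stands in for an arbitrary (possibly expensive) distance function.

```python
# Exact medoid of a set of feature vectors: O(n^2) distance evaluations.
import numpy as np

def exact_medoid(X: np.ndarray) -> int:
    """Index of the row with minimum summed distance to all other rows."""
    # Pairwise Euclidean distances via the Gram-matrix identity.
    sq = np.sum(X**2, axis=1)
    d = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0))
    return int(np.argmin(d.sum(axis=1)))

frames = np.random.default_rng(1).standard_normal((500, 128))  # e.g. frame features
print("medoid index:", exact_medoid(frames))
```
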
Tian, Sen, Ye, Songtao, Iqbal, Muhammad Faisal Buland, Zhang, Jin.  2017.  A New Approach to the Block-based Compressive Sensing. Proceedings of the 2017 International Conference on Computer Graphics and Digital Image Processing. :21:1–21:5.
The traditional block-based compressive sensing (BCS) approach considers the image to be segmented. However, little literature is available on how many blocks or segments per image would be the best choice for the compression and recovery methods. In this article, we propose a BCS method to find the optimal way of image retrieval and the number of blocks into which an image should be divided. In the theoretical analysis, we analyze the effect of noise from a compression perspective and derive the range of the error probability. Experimental results show that the number of blocks of an image has a strong correlation with the image recovery process. As the sampling rate M/N increases, the appropriate number of image blocks can be found by comparing each line.
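
A minimal sketch of the block-based sampling stage under discussion: the image is split into B x B blocks and each block is measured with the same random Gaussian matrix. The block size and sampling rate M/N below are illustrative, not the paper's recommended choices.

```python
# Block-based compressive sampling: y = Phi x per image block.
import numpy as np

def bcs_sample(image: np.ndarray, block: int, rate: float, rng):
    n = block * block
    m = max(1, int(rate * n))                     # measurements per block (M/N = rate)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    h, w = image.shape
    measurements = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            x = image[r:r + block, c:c + block].reshape(n)
            measurements.append(Phi @ x)          # compressive measurement of one block
    return Phi, np.stack(measurements)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
Phi, Y = bcs_sample(img, block=16, rate=0.25, rng=rng)
print(Y.shape)   # (number of blocks, measurements per block)
```
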
Yu, Chenhan D., Levitt, James, Reiz, Severin, Biros, George.  2017.  Geometry-oblivious FMM for Compressing Dense SPD Matrices. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :53:1–53:14.
We present GOFMM (geometry-oblivious FMM), a novel method that creates a hierarchical low-rank approximation, or "compression," of an arbitrary dense symmetric positive definite (SPD) matrix. For many applications, GOFMM enables an approximate matrix-vector multiplication in N log N or even N time, where N is the matrix size. Compression requires N log N storage and work. In general, our scheme belongs to the family of hierarchical matrix approximation methods. In particular, it generalizes the fast multipole method (FMM) to a purely algebraic setting by only requiring the ability to sample matrix entries. Neither geometric information (i.e., point coordinates) nor knowledge of how the matrix entries have been generated is required, thus the term "geometry-oblivious." Also, we introduce a shared-memory parallel scheme for hierarchical matrix computations that reduces synchronization barriers. We present results on the Intel Knights Landing and Haswell architectures, and on the NVIDIA Pascal architecture for a variety of matrices.
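
GOFMM builds a hierarchical approximation, but its key property, needing only sampled matrix entries and no geometry, can be illustrated with a much simpler flat Nystrom approximation, sketched below under the assumption of an SPD kernel matrix accessed entry-wise. This is a stand-in for intuition, not the GOFMM algorithm.

```python
# Flat Nystrom approximation from sampled columns: K ~= C pinv(W) C.T.
import numpy as np

def nystrom_matvec(entry, n: int, idx: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Approximate K @ x using only sampled entries of an SPD matrix K."""
    C = np.array([[entry(i, j) for j in idx] for i in range(n)])  # n x s sampled columns
    W = C[idx, :]                                                 # s x s core block
    return C @ (np.linalg.pinv(W) @ (C.T @ x))

rng = np.random.default_rng(0)
pts = rng.standard_normal((300, 3))
entry = lambda i, j: float(np.exp(-np.sum((pts[i] - pts[j])**2)))  # SPD Gaussian kernel
idx = rng.choice(300, 40, replace=False)
x = rng.standard_normal(300)
y = nystrom_matvec(entry, 300, idx, x)

K = np.exp(-np.sum((pts[:, None, :] - pts[None, :, :])**2, axis=2))  # exact, for checking
print(np.linalg.norm(K @ x - y) / np.linalg.norm(K @ x))  # relative matvec error
```
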
Zheng, Yan, Phillips, Jeff M..  2017.  Coresets for Kernel Regression. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :645–654.
Kernel regression is an essential and ubiquitous tool for non-parametric data analysis, particularly popular for time series and spatial data. However, the central operation, which is performed many times, evaluating a kernel on the data set, takes linear time. This is impractical for modern large data sets. In this paper we describe coresets for kernel regression: compressed data sets which can be used as a proxy for the original data and have provably bounded worst-case error. The size of the coresets is independent of the raw number of data points; it depends only on the error guarantee, and in some cases the size of the domain and the amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that they incur negligible error, can be constructed extremely efficiently, and allow for great computational gains.
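
For context, the linear-time operation the coresets accelerate is ordinary kernel (Nadaraya-Watson) regression, sketched below with a Gaussian kernel; a coreset would replace (X, y) with a much smaller weighted set fed to the same estimator. The bandwidth and data are illustrative.

```python
# Nadaraya-Watson kernel regression: each query touches every data point.
import numpy as np

def nw_regress(Xq, X, y, h=0.5):
    """Gaussian-kernel regression estimate at the query points Xq."""
    # Kernel weights between each query and every data point: O(|Xq| * |X|).
    W = np.exp(-((Xq[:, None] - X[None, :])**2) / (2 * h * h))
    return (W @ y) / W.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 5000)
y = np.sin(X) + 0.1 * rng.standard_normal(5000)
Xq = np.linspace(0, 10, 5)
print(nw_regress(Xq, X, y))
```
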
Zhang, Kai, Liu, Chuanren, Zhang, Jie, Xiong, Hui, Xing, Eric, Ye, Jieping.  2017.  Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :615–623.
Matrix sketching aims at finding compact representations of a matrix while simultaneously preserving most of its properties, and is a fundamental building block of modern scientific computing. Randomized algorithms represent the state of the art and have attracted huge interest from machine learning, data mining, and theoretical computer science. However, they still require the entire input matrix to produce the desired factorizations, which can be a major computational and memory bottleneck in truly large problems. In this paper, we uncover an interesting theoretical connection between matrix low-rank decomposition and lossy signal compression, based on which a cascaded compression sampling framework is devised to approximate an m-by-n matrix in only O(m+n) time and space. Indeed, the proposed method accesses only a small number of matrix rows and columns, which significantly improves the memory footprint. Meanwhile, by sequentially teaming two rounds of approximation procedures and upgrading the sampling strategy from a uniform probability to more sophisticated, encoding-oriented sampling, significant algorithmic boosting is achieved, uncovering more granular structures in the data. Empirical results on a wide spectrum of real-world, large-scale matrices show that, taking only linear time and space, the accuracy of our method rivals that of state-of-the-art randomized algorithms consuming a quadratic, O(mn), amount of resources.
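
The O(m+n) access pattern described above can be illustrated with a plain CUR-style decomposition built from uniformly sampled rows and columns, as sketched below; the paper's second, encoding-oriented sampling round is not reproduced.

```python
# CUR-style sketch: approximate A from a few sampled rows and columns.
import numpy as np

def cur(A, c, r, rng):
    cols = rng.choice(A.shape[1], c, replace=False)
    rows = rng.choice(A.shape[0], r, replace=False)
    C, R = A[:, cols], A[rows, :]                 # sampled columns and rows
    U = np.linalg.pinv(A[np.ix_(rows, cols)])     # core block linking them
    return C, U, R                                # A ~= C @ U @ R

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 400))  # exactly low rank
C, U, R = cur(A, 30, 30, rng)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # near zero for low-rank A
```
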
Birch, G. C., Woo, B. L., LaCasse, C. F., Stubbs, J. J., Dagel, A. L..  2017.  Computational optical physical unclonable functions. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Physical unclonable functions (PUFs) are devices which are easily probed but difficult to predict. Optical PUFs have been discussed in the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to keep aligned in practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing to address these challenges. This work describes the design, simulation, and prototyping of this computational optical PUF (COPUF), which utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including compressive sensing. The sensitivity of the COPUF system is also explored, along with non-traditional PUF configurations enabled by the COPUF architecture. The double COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, such as low-cost tags and seals, and potentially as authenticating and communicating devices.

Li, Q., Xu, B., Li, S., Liu, Y., Cui, D..  2017.  Reconstruction of measurements in state estimation strategy against cyber attacks for cyber physical systems. 2017 36th Chinese Control Conference (CCC). :7571–7576.

To improve the resilience of state estimation against cyber attacks, compressive sensing (CS) is applied to the reconstruction of incomplete measurements for cyber physical systems. First, observability analysis is used to decide when to run the reconstruction and to assess the damage level from attacks. In particular, dictionary learning is proposed to form an over-complete dictionary via K-Singular Value Decomposition (K-SVD). Besides, due to the irregularity of the incomplete measurements, a sampling matrix is designed as the measurement matrix. Finally, simulation experiments on a 6-bus power system illustrate that the proposed method reconstructs the incomplete measurements perfectly, outperforming the joint dictionary. When only 29% of the measurements are available, the proposed method still generalizes across four kinds of recovery algorithms.

Lagunas, E., Rugini, L..  2017.  Performance of compressive sensing based energy detection. 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–5.

This paper investigates closed-form expressions to evaluate the performance of the Compressive Sensing (CS) based Energy Detector (ED). The conventional way to approximate the probability density function of the ED test statistic invokes the central limit theorem and treats the decision variable as Gaussian. This approach, however, provides a good approximation only if the number of samples is large enough, which is not usually the case in the CS framework, where the goal is to keep the sample size low. Moreover, working with a reduced number of measurements is of practical interest for general spectrum sensing in cognitive radio applications, where the sensing time should be sufficiently short, since any time spent sensing cannot be used for data transmission on the detected idle channels. In this paper, we make use of low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. More precisely, this paper provides new closed-form expressions for accurate evaluation of CS-based ED performance as a function of the compressive ratio and the Signal-to-Noise Ratio (SNR). Simulation results demonstrate the increased accuracy of the proposed expressions compared to existing works.
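
The conventional CLT baseline the paper improves upon can be written down directly: treating the energy statistic T as Gaussian gives the false-alarm and detection probabilities in terms of the Q-function, with the compressive ratio entering through the number of measurements m. The sketch below computes this baseline; all parameter values are illustrative.

```python
# CLT approximation of the energy detector over m compressive measurements:
# under H0, T ~ N(m*s0, 2*m*s0^2); under H1 the variance is s1 = s0*(1 + SNR).
import numpy as np
from scipy.special import erfc, erfcinv

Q = lambda x: 0.5 * erfc(x / np.sqrt(2.0))        # Gaussian tail Q(x)
Qinv = lambda p: np.sqrt(2.0) * erfcinv(2.0 * p)  # inverse Q-function

def pd_clt(m: int, snr: float, pfa: float, sigma2: float = 1.0) -> float:
    """CLT-approximate P_d of the CS-based energy detector at fixed P_fa."""
    # Threshold chosen so the Gaussian approximation meets the target P_fa.
    tau = sigma2 * (m + Qinv(pfa) * np.sqrt(2.0 * m))
    s1 = sigma2 * (1.0 + snr)                     # per-measurement variance under H1
    return float(Q((tau - m * s1) / (s1 * np.sqrt(2.0 * m))))

for m in (16, 32, 64):                            # m = compressive ratio * N
    print(m, round(pd_clt(m, snr=1.0, pfa=0.05), 3))
```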

Xu, W., Yan, Z., Tian, Y., Cui, Y., Lin, J..  2017.  Detection with compressive measurements corrupted by sparse errors. 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP). :1–5.

Compressed sensing (CS) can represent a sparse signal with far fewer measurements than Nyquist-rate sampling requires. Considering the high complexity of CS reconstruction algorithms, compressive detection has recently been proposed, which performs detection directly in the compressive domain without reconstruction. Unlike existing work, which generally considers measurements corrupted by dense noise, this paper studies the compressive detection problem when the measurements are corrupted by both dense noise and sparse errors. Sparse errors exist in many practical systems, such as those affected by impulse noise or narrowband interference. We derive the theoretical performance of compressive detection when the sparse error is either deterministic or random. The theoretical results are further verified by simulations.

Ming, X., Shu, T., Xianzhong, X..  2017.  An energy-efficient wireless image transmission method based on adaptive block compressive sensing and softcast. 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC). :712–717.

With the rapid and radical evolution of information and communication technology, energy consumption for wireless communication is growing at a staggering rate, especially for wireless multimedia communication. Recently, reducing energy consumption in wireless multimedia communication has attracted increasing attention. In this paper, we propose an energy-efficient wireless image transmission scheme based on adaptive block compressive sensing (ABCS) and SoftCast, called ABCS-SoftCast. In ABCS-SoftCast, compression distortion and transmission distortion are considered jointly, and an energy-distortion model is formulated for each image block. Then, the sampling rate (SR) and power allocation factors of each image block are optimized simultaneously. Compared with the conventional SoftCast scheme, experimental results demonstrate that energy consumption can be greatly reduced even when the received image qualities are approximately the same.

2017-09-15
Ballester-Ripoll, R., Paredes, E. G., Pajarola, R..  2016.  A Surrogate Visualization Model Using the Tensor Train Format. SIGGRAPH ASIA 2016 Symposium on Visualization. :13:1–13:8.

Complex simulations and numerical experiments typically rely on a number of parameters and have an associated score function, e.g. with the goal of maximizing accuracy or minimizing computation time. However, the influence of each individual parameter is often poorly understood a priori and the joint parameter space can be difficult to explore, visualize and optimize. We model this space as an N-dimensional black-box tensor and apply a cross approximation strategy to sample it. Upon learning and compactly expressing this space as a surrogate visualization model, informative subspaces are interactively reconstructed and navigated in the form of charts, images, surface plots, etc. By exploiting efficient operations in the tensor train format, we are able to produce diagrams such as parallel coordinates, bivariate projections and dimensional stacking out of highly-compressed parameter spaces. We demonstrate the proposed framework with several scientific simulations that contain up to 6 parameters and billions of tensor grid points.

Yang, Lei, Li, Yao, Lin, Qiongzheng, Li, Xiang-Yang, Liu, Yunhao.  2016.  Making Sense of Mechanical Vibration Period with Sub-millisecond Accuracy Using Backscatter Signals. Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking. :16–28.

Traditional vibration inspection systems, equipped with separate sensing and communication modules, are either very expensive (e.g., hundreds of dollars) or suffer from occlusion and a narrow field of view (e.g., laser). In this work, we present an RFID-based solution, Tagbeat, to inspect mechanical vibration using COTS RFID tags and readers. Making sense of micro and high-frequency vibration using random, low-frequency tag readings has been a daunting task, especially when sub-millisecond period accuracy is required. Our system achieves these goals by discerning the change pattern of the backscatter signal returned by the tag, which is attached to the vibrating surface and displaced by the vibration within a small range. This work introduces three main innovations. First, it shows how one can utilize COTS RFID to sense mechanical vibration and accurately discover its period from a few periods of short and noisy samples. Second, a new digital microscope is designed to amplify the micro-vibration-induced weak signals. Third, Tagbeat introduces compressive reading to inspect high-frequency vibration with a relatively low RFID read rate. We implement Tagbeat using a COTS RFID device and evaluate it with a commercial centrifugal machine. Empirical benchmarks with a prototype show that Tagbeat can inspect the vibration period with a mean accuracy of 0.36 ms and a relative error rate of 0.03%. We also study three cases to demonstrate how to associate our inspection solution with specific domain requirements.

Wang, Aosen, Jin, Zhanpeng, Xu, Wenyao.  2016.  A Programmable Analog-to-Information Converter for Agile Biosensing. Proceedings of the 2016 International Symposium on Low Power Electronics and Design. :206–211.

In recent years, the analog-to-information converter (AIC), based on the compressed sensing (CS) paradigm, has emerged as a promising solution to overcome the performance and energy-efficiency limitations of traditional analog-to-digital converters (ADCs). In particular, AICs can enable sub-Nyquist signal sampling proportional to the intrinsic information in biomedical applications. However, the legacy AIC structure is tailored toward specific applications, which lacks flexibility and prevents universality. In this paper, we introduce a novel programmable AIC architecture, Pro-AIC, to enable effective configurability and reduce energy overhead by integrating an efficient multiplexing hardware design. To improve the quality and time-efficiency of Pro-AIC configuration, we also develop a rapid configuration algorithm, called RapSpiral, to quickly find a near-optimal parameter configuration in the Pro-AIC architecture. Specifically, we present a design metric, trade-off penalty, to quantitatively evaluate the performance-energy trade-off. RapSpiral controls a penalty-driven shrinking triangle to progressively approximate the optimal trade-off. RapSpiral has log(n) complexity yet high accuracy, without pretraining or a complex parameter-tuning procedure, and is also likely to avoid local-minimum pitfalls. Experimental results indicate that RapSpiral achieves more than 30x speedup compared with the brute-force algorithm, with only about a 3% trade-off compromise relative to the optimum in Pro-AIC. Furthermore, its scalability is verified on larger benchmarks.

Shi, Tianlin, Agostinelli, Forest, Staib, Matthew, Wipf, David, Moscibroda, Thomas.  2016.  Improving Survey Aggregation with Sparsely Represented Signals. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1845–1854.

In this paper, we develop a new aggregation technique to reduce the cost of surveying. Our method aims to jointly estimate a vector of target quantities, such as public opinion or voter intent, across time, and to maintain good estimates when using only a fraction of the data. Inspired by the James-Stein estimator, we resolve this challenge by shrinking the estimates toward a global mean which is assumed to have a sparse representation in some known basis. This assumption has led to two different methods for estimating the global mean: orthogonal matching pursuit and deep learning, both of which significantly reduce the number of samples needed to achieve good estimates of the true means of the data and, in the case of presidential elections, can estimate the outcome of the 2012 United States election while saving hundreds of thousands of samples and maintaining accuracy.
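
The shrinkage idea can be illustrated with the classical positive-part James-Stein estimator, sketched below with the global mean assumed known; the paper's contribution is recovering that mean from a sparse representation, which is not shown here.

```python
# Positive-part James-Stein shrinkage of noisy estimates toward a global mean.
import numpy as np

def james_stein(x: np.ndarray, sigma2: float, mu: np.ndarray) -> np.ndarray:
    """Shrink observations x ~ N(theta, sigma2*I) toward the vector mu."""
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum((x - mu)**2))
    return mu + shrink * (x - mu)

rng = np.random.default_rng(0)
theta = rng.standard_normal(50)               # true quantities (e.g. voter intent)
x = theta + 0.5 * rng.standard_normal(50)     # noisy survey estimates
mu = np.full(50, theta.mean())                # global mean (assumed known here)
print("raw MSE:", np.mean((x - theta)**2))                       # ~sigma2
print("JS  MSE:", np.mean((james_stein(x, 0.25, mu) - theta)**2))  # typically lower
```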

Yang, Bo, He, Suining, Chan, S.-H. Gary.  2016.  Updating Wireless Signal Map with Bayesian Compressive Sensing. Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. :310–317.

In a wireless system, a signal map shows the signal strength at different locations termed reference points (RPs). As access points (APs) and their transmission power may change over time, keeping an updated signal map is important for applications such as Wi-Fi optimization and indoor localization. Traditionally, the signal map is obtained by a full site survey, which is time-consuming and costly. We address in this paper how to efficiently update a signal map given sparse samples randomly crowdsourced in the space (e.g., by signal monitors, explicit human input, or implicit user participation). We propose Compressive Signal Reconstruction (CSR), a novel learning system employing Bayesian compressive sensing (BCS) for online signal map update. CSR does not rely on any path loss model or line of sight, and is generic enough to serve as a plug-in of any wireless system. Besides signal map update, CSR also computes the estimation error of signals in terms of confidence interval. CSR models the signal correlation with a kernel function. Using it, CSR constructs a sensing matrix based on the newly sampled signals. The sensing matrix is then used to compute the signal change at all the RPs with any BCS algorithm. We have conducted extensive experiments on CSR in our university campus. Our results show that CSR outperforms other state-of-the-art algorithms by a wide margin (reducing signal error by about 30% and sampling points by 20%).
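
A structural sketch of the update step described above: the signal change at all RPs is modeled through a kernel (correlation) matrix, and a sparse coefficient vector is recovered from the few crowdsourced samples. A Lasso solve stands in for the paper's Bayesian compressive sensing solver (so no confidence intervals are produced), and the geometry and kernel width below are made up.

```python
# Signal-map update from sparse crowdsourced samples, BCS replaced by Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
rp = rng.uniform(0, 50, (200, 2))                   # reference-point coordinates (m)
K = np.exp(-np.sum((rp[:, None] - rp[None, :])**2, axis=2) / (2 * 5.0**2))

w_true = np.zeros(200)
w_true[rng.choice(200, 3, replace=False)] = rng.normal(0, 5, 3)
delta_true = K @ w_true                             # true signal change at all RPs (dB)

sampled = rng.choice(200, 40, replace=False)        # sparse crowdsourced samples
y = delta_true[sampled] + 0.1 * rng.standard_normal(40)

w_hat = Lasso(alpha=0.01, max_iter=10000).fit(K[sampled], y).coef_
delta_hat = K @ w_hat                               # updated map at every RP
print("mean abs map error:", np.mean(np.abs(delta_hat - delta_true)))
```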

Gu, Zhaoyu, Wang, Wei, Wang, Guoyu.  2016.  HRRP Reconstruction of Sub-Nyquist Sampled Chirp Signals with CS-based Dechirping. Proceedings of the 8th International Conference on Signal Processing Systems. :123–126.

Benefiting from their large time-bandwidth product, chirp signals are frequently adopted in modern radars. In this paper, the influence of sub-Nyquist sampling on high-resolution range profile (HRRP) reconstruction of chirp waveforms is investigated, where compressive sensing (CS) based dechirping algorithms are applied to achieve range compression of the sub-Nyquist sampled chirp signals. The conditions under which the HRRP can be recovered from sub-Nyquist sampled chirp signals via CS-based dechirping are addressed. Simulated echoes, formed by sub-Nyquist sampled chirp signals scattered by moving targets, are collected by radars to yield the HRRP, validating the correctness of the analyses.

Qi, Jie, Cao, Zheng, Sun, Haixin.  2016.  An Effective Method for Underwater Target Radiation Signal Detecting and Reconstructing. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :48:1–48:2.

Using the sparsity of the signal, compressed sensing theory can sample and compress data at a rate lower than the Nyquist sampling rate; the signal must, however, have a sparse representation in some matrix. Based on this theory, this article puts forward a sparsity-adaptive algorithm for the detection and reconstruction of underwater target radiation signals. The received underwater target radiation signal first passes through a stochastic resonance system, which transfers noise energy into signal energy; then, based on the Gerschgorin disk criterion, the number of underwater target radiation signals is determined in order to set the optimal sparsity level for compressed sensing; finally, the detection and reconstruction of the original signal are realized using the compressed sensing technique. The simulation results show that this method can effectively detect underwater target radiation signals, even under a low signal-to-noise ratio (SNR).
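
The Gerschgorin-disk step mentioned above can be sketched as follows: the sample covariance is unitarily transformed using eigenvectors of its leading block, and the number of large Gerschgorin radii indicates the number of sources. The simple mean-based threshold below is an illustrative simplification of the published criterion, and the array data are synthetic.

```python
# Simplified Gerschgorin disk estimate of the number of sources.
import numpy as np

def gerschgorin_source_count(R: np.ndarray, factor: float = 0.5) -> int:
    """Count sources from Gerschgorin radii of a transformed covariance."""
    M = R.shape[0]
    # Unitary transform built from eigenvectors of the leading (M-1) block.
    _, U = np.linalg.eigh(R[:M - 1, :M - 1])
    T = np.eye(M, dtype=complex)
    T[:M - 1, :M - 1] = U
    Rt = T.conj().T @ R @ T
    radii = np.abs(Rt[:M - 1, M - 1])       # Gerschgorin radii of the first M-1 rows
    return int(np.sum(radii > factor * radii.mean()))  # simplified decision rule

rng = np.random.default_rng(0)
M, d, snap = 8, 2, 2000                     # sensors, sources, snapshots
A = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, d)))          # random steering matrix
S = 3 * (rng.standard_normal((d, snap)) + 1j * rng.standard_normal((d, snap)))
N = (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap))) / np.sqrt(2)
R = (A @ S + N) @ (A @ S + N).conj().T / snap               # sample covariance
print(gerschgorin_source_count(R))          # expect 2
```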

Bortolotti, D., Bartolini, A., Benini, L., Pamula, V. Rajesh, Van Helleputte, N., Van Hoof, C., Verhelst, M., Gemmeke, T., Lopez, R. Braojos, Ansaloni, G. et al..  2016.  PHIDIAS: Ultra-low-power Holistic Design for Smart Bio-signals Computing Platforms. Proceedings of the ACM International Conference on Computing Frontiers. :309–314.

Emerging and future healthcare policies are fueling an application-driven shift toward long-term monitoring of biosignals by means of embedded ultra-low-power Wireless Body Sensor Networks (WBSNs). To break out, these applications need new technologies that allow the development of extremely power-efficient bio-sensing nodes. The PHIDIAS project aims at unlocking the development of ultra-low-power bio-sensing WBSNs by tackling multiple interlocking technological breakthroughs: (i) the development of new signal processing models and methods based on the recently proposed Compressive Sampling paradigm, which allows the design of energy-minimal computational architectures and analog front-ends, (ii) the efficient hardware implementation of components, both analog and digital, building upon an innovative ultra-low-power signal processing front-end, (iii) the evaluation of the global power reduction using a system-wide integration of hardware and software components focused on compressed-sensing-based bio-signal analysis. PHIDIAS brought together a mixed consortium of academic and industrial research partners representing pan-European excellence in different fields impacting the energy-aware optimization of WBSNs, including experts in signal processing and digital/analog IC design. In this way, PHIDIAS pioneered a unique holistic approach, ensuring that key breakthroughs were worked out cooperatively toward the global objective of the project.

Li, Zheng, Xia, Yuli, Ye, Ruiqi, Zhao, Junsuo.  2016.  Compressive Sensing for Space Image Compressing. Proceedings of the 2016 International Conference on Intelligent Information Processing. :23:1–23:5.

Compressive sensing is a new technique by which sparse signals are sampled and recovered from a few measurements. To address the disadvantages of traditional space image compression methods, a completely new compression scheme under the compressive sensing framework is developed in this paper. First, in the coding stage, a simple binary measurement matrix is constructed to obtain the signal measurements. Second, the input image is divided into small blocks, which are then used as training sets to learn a dictionary basis for sparse representation. Finally, a sparse reconstruction algorithm is used to recover the original input image. Experimental results show that both the compression rate and the image recovery quality of the proposed method are high. Besides, as the computational cost of the sampling stage is very low, the method is suitable for on-board applications in astronomy.

Silva, Rodrigo M., Gomes, Guilherme C.M., Alvim, Mário S., Gonçalves, Marcos A..  2016.  Compression-Based Selective Sampling for Learning to Rank. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :247–256.

Learning to rank (L2R) algorithms use a labeled training set to generate a ranking model that can be later used to rank new query results. These training sets are very costly and laborious to produce, requiring human annotators to assess the relevance or order of the documents in relation to a query. Active learning (AL) algorithms are able to reduce the labeling effort by actively sampling an unlabeled set and choosing data instances that maximize the effectiveness of a learning function. But AL methods require constant supervision, as documents have to be labeled at each round of the process. In this paper, we propose that certain characteristics of unlabeled L2R datasets allow for an unsupervised, compression-based selection process to be used to create small and yet highly informative and effective initial sets that can later be labeled and used to bootstrap a L2R system. We implement our ideas through a novel unsupervised selective sampling method, which we call Cover, that has several advantages over AL methods tailored to L2R. First, it does not need an initial labeled seed set and can select documents from scratch. Second, selected documents do not need to be labeled as the iterations of the method progress since it is unsupervised (i.e., no learning model needs to be updated). Thus, an arbitrarily sized training set can be selected without human intervention depending on the available budget. Third, the method is efficient and can be run on unlabeled collections containing millions of query-document instances. We run various experiments with two important L2R benchmarking collections to show that the proposed method allows for the creation of small, yet very effective training sets. It achieves full training-like performance with less than 10% of the original sets selected, outperforming the baselines in both effectiveness and scalability.

2017-02-21
A. Roy, S. P. Maity.  2015.  "On segmentation of CS reconstructed MR images". 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR). :1-6.

This paper addresses the issue of magnetic resonance (MR) image reconstruction under the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve reconstruction at low measurement rates, weighted linear prediction and random noise injection in the unobserved space are performed first, followed by spatial-domain denoising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise or missing edges, boundaries, lines, and curvatures, as well as residual noise. The curvelet transform is used for noise removal and edge enhancement through hard thresholding and suppression of the approximation sub-bands, respectively. Finally, genetic algorithm (GA) based clustering is performed to segment the sharpened MR image using a weighted contribution of variance and entropy values. Extensive simulation results highlight the performance improvement in both the image reconstruction and the segmentation.

A. Pramanik, S. P. Maity.  2015.  "DPCM-quantized block-based compressed sensing of images using Robbins Monro approach". 2015 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE). :18-21.

Compressed sensing, or compressive sampling, is the process of reconstructing a signal from samples obtained at a rate far below the Nyquist rate. In this work, Differential Pulse Coded Modulation (DPCM) is coupled with block-based compressed sensing (CS) reconstruction using the Robbins-Monro (RM) approach, an iterative CS reconstruction technique. Extensive simulations show that RM gives better performance than the existing DPCM block-based Smoothed Projected Landweber (SPL) reconstruction technique, and the noise seen in the block SPL algorithm is much less evident in this approach. To achieve further compression of the data, the Lempel-Ziv-Welch (LZW) compression technique is proposed.
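
For readers unfamiliar with it, the Robbins-Monro approach named above is classical stochastic approximation: find a root of an expected function from noisy evaluations using diminishing step sizes. The toy sketch below estimates a mean this way; it is not the paper's DPCM-quantized CS reconstruction.

```python
# Generic Robbins-Monro iteration: solve E[g(x)] = 0 from noisy samples of g,
# with step sizes a_n satisfying sum(a_n) = inf and sum(a_n^2) < inf.
import numpy as np

rng = np.random.default_rng(0)
target = 3.7                     # unknown root: E[sample - x] = 0 at x = target
x = 0.0
for n in range(1, 20001):
    noisy_g = (target + rng.standard_normal()) - x   # noisy observation of g(x)
    x += (1.0 / n) * noisy_g                         # RM update with a_n = 1/n
print(x)   # converges to ~3.7
```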

S. Chen, F. Xi, Z. Liu, B. Bao.  2015.  "Quadrature compressive sampling of multiband radar signals at sub-Landau rate". 2015 IEEE International Conference on Digital Signal Processing (DSP). :234-238.

Sampling multiband radar signals is an essential issue for multiband/multifunction radar. This paper proposes a multiband quadrature compressive sampling (MQCS) system to perform the sampling at a sub-Landau rate. The MQCS system randomly projects the multiband signal into a compressive multiband one by modulating each subband signal with a low-pass signal, and then samples the compressive multiband signal at the Landau rate, outputting compressive measurements. The compressive in-phase and quadrature (I/Q) components of each subband are extracted from the compressive measurements and exploited to recover the baseband I/Q components. As the effective bandwidth of the compressive multiband signal is much less than that of the received multiband signal, the sampling rate is much less than the Landau rate of the received signal. Simulation results validate that the proposed MQCS system can effectively acquire and reconstruct the baseband I/Q components of multiband signals.