Bibliography
Drinking water availability is a crucial problem that must be addressed in order to improve the quality of life of individuals living in developing nations. Improving water supply availability is important for public health, as it is the third-highest risk factor for poor health in developing nations with high mortality rates. This project researched drinking water filtration for areas of Sub-Saharan Africa near existing bodies of water, where the populations rely entirely on collecting from surface water sources, the most contaminated type of water source. Water filtration methods that can be created entirely by the consumer would alleviate dependence on aid organizations in developing nations, put consumers in control, and improve public health. Filtration processes pass water through a medium that catches contaminants through physical entrapment or adsorption, yielding a cleaner effluent. When exploring different materials for filtration, contaminant removal and hydraulic conductivity are the two most important considerations: the method must not only treat the water but do so quickly enough to produce potable water at a rate that keeps up with everyday needs. Cement is easily accessible in Sub-Saharan regions. Most concrete mixtures are not meant to be pervious, since concrete is a construction material used for its compressive strength; however, reducing the water content of a cement mixture gives it higher permeability. Several concrete samples of varying thicknesses and water concentrations were created, and bacterial count tests were performed on both pre-filtered and filtered water samples. Concrete filtration does remove bacteria from drinking water; however, the method can still be improved upon.
Spatial-multiplexing cameras have emerged as a promising alternative to classical imaging devices, often enabling acquisition of 'more for less'. One popular architecture for spatial multiplexing is the single-pixel camera (SPC), which acquires coded measurements of the scene with pseudo-random spatial masks. Significant theoretical developments over the past few years provide a means for reconstructing the original imagery from coded measurements at sub-Nyquist sampling rates. Yet accurate reconstruction generally requires high measurement rates and high signal-to-noise ratios. In this paper, we ask whether one can perform high-level visual inference (e.g., face recognition or action recognition) from compressive cameras without the need for image reconstruction. This is an interesting question since, in many practical scenarios, our goals extend beyond image reconstruction. However, most inference tasks require non-linear features, and it is not clear how to extract such features directly from compressed measurements. In this paper, we show that one can extract nontrivial correlational features directly, without reconstructing the imagery. As a specific example, we consider face recognition beyond the visible spectrum, e.g., in the short-wave infrared (SWIR) region, where pixels are expensive. We base our framework on smashed filters, which show that inner products between high-dimensional signals can be computed in the compressive domain to a high degree of accuracy. We collect a new face image dataset of 30 subjects, obtained using an SPC. Using face recognition as an example, we show that one can indeed perform reconstruction-free inference with a very small loss of accuracy at very high compression ratios of 100 and more.
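As a loose illustration of the smashed-filter principle this abstract relies on (not the paper's pipeline), the numpy sketch below shows that inner products are approximately preserved when two signals are projected by the same pseudo-random measurement matrix; all names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4096, 64                       # ambient dimension, number of measurements

x = rng.standard_normal(n)            # scene (e.g., a vectorized face image)
t = rng.standard_normal(n)            # matched-filter template

# Both signals are measured with the same pseudo-random masks (rows of Phi),
# as in an SPC; scaling by 1/sqrt(m) keeps inner products comparable.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

full = x @ t                          # inner product in the ambient domain
smashed = (Phi @ x) @ (Phi @ t)       # inner product in the compressive domain
print(full, smashed)                  # close, up to Johnson-Lindenstrauss distortion
```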
Quadrature compressive sampling (QuadCS) is a recently introduced sub-Nyquist sampling scheme for acquiring the inphase and quadrature components of radio-frequency signals. This paper develops a target detection scheme for pulsed radars in the presence of digital radio frequency memory (DRFM) repeat jammers, with the radar echoes sampled by the QuadCS system. For diversified pulse waveforms, the proposed scheme first separates the target echoes from the DRFM repeat jammers using CS recovery algorithms and then removes the jammers before performing target detection. Because of this separation processing, the jammer leakage through range sidelobe variation that affects classical matched-filter processing does not appear. Simulation results confirm our findings: the proposed scheme, using data at one eighth the Nyquist rate, outperforms classical processing with Nyquist samples in the presence of DRFM repeat jammers.
Passive radar, also known as green radar, exploits available commercial communication signals and is useful for target detection and tracking in general. Recent communication standards frequently employ Orthogonal Frequency Division Multiplexing (OFDM) waveforms and wide bandwidths for broadcasting. This paper focuses on recent developments in target detection algorithms within the OFDM passive radar framework, where channel estimates are derived with a matched filter using knowledge of the transmitted signals. The MUSIC algorithm, modified to solve this two-dimensional delay-Doppler detection problem, is first reviewed. Since the target detection problem can be represented with sparse signals, this paper employs compressive sensing and compares it with the detection capability of the 2-D MUSIC algorithm. It is found that the previously proposed single-time-sample compressive sensing cannot significantly reduce the leakage from the direct signal component. This paper therefore proposes a compressive sensing method utilizing multiple time samples, namely l1-SVD, for the detection of multiple targets. Comparing MUSIC and compressive sensing, the results show that l1-SVD can decrease the direct signal leakage, but its computational cost remains a major issue. This paper also presents the detection performance of the two algorithms for closely spaced targets.
Plane-wave-based ultrafast active cavitation imaging (UACI) can be implemented at high frame rates, and adaptive beamforming has been introduced to enhance image resolution and signal-to-noise ratio (SNR). However, regular adaptive beamforming continuously updates the spatial filter for each sample point, which requires a huge amount of computation, especially at high sampling rates and, moreover, in 3D imaging. To achieve UACI rapidly with satisfactory resolution and SNR, this paper proposes an adaptive beamforming method based on compressive sensing (CS) that retains the quality of adaptive beamforming while substantially reducing the computational load. Simulation and experimental results showed that, compared with regular adaptive beamforming, the new method achieved roughly an eightfold reduction in computation time.
This paper addresses magnetic resonance (MR) image reconstruction under the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve reconstruction from few measurements, weighted linear prediction and random noise injection at the unobserved samples are performed first, followed by spatial-domain denoising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise or missing edges, boundaries, lines, and curvatures, as well as residual noise. The curvelet transform is used for noise removal and edge enhancement through hard thresholding and suppression of the approximate sub-bands, respectively. Finally, genetic algorithm (GA) based clustering, using a weighted contribution of variance and entropy values, segments the sharpened MR image. Extensive simulation results highlight performance improvements in both the reconstruction and segmentation problems.
With the increase in signal bandwidths, conventional analog-to-digital converters (ADCs), operating on the basis of the Shannon/Nyquist theorem, are forced to work at very high rates, leading to low dynamic range and high power consumption. This paper describes an analog-to-information converter developed using compressive sensing techniques. The high sampling rate, the main drawback of conventional ADCs, is successfully reduced to four times lower than the conventional rate, and the system also offers the advantage of low power dissipation.
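The abstract does not detail the converter's architecture, but a common analog-to-information design is the random demodulator. The sketch below is a generic numpy simulation of that idea under assumed parameters (one frame, 4x rate reduction), not the specific hardware described in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512                # Nyquist-rate samples in one frame
m = n // 4             # low-rate output samples: 4x rate reduction

# Sparse multitone input (3 active tones out of n frequency bins).
f = np.zeros(n)
f[rng.choice(n, 3, replace=False)] = 1.0
x = np.real(np.fft.ifft(f) * n)

# Random demodulator: multiply by a +/-1 chipping sequence, then
# integrate-and-dump over blocks of n // m Nyquist-rate samples.
chips = rng.choice([-1.0, 1.0], n)
y = (x * chips).reshape(m, -1).sum(axis=1)   # m compressive measurements
print(y.shape)   # (128,); CS recovery would use the known chip sequence
```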
We present a new method for mitigating wall return and a new greedy algorithm for detecting stationary targets after the wall clutter has been cancelled. Given limited measurements of a stepped-frequency radar signal consisting of both wall and target return, our objective is to detect and localize the potential targets. Modulated discrete prolate spheroidal sequences (DPSSs) form an efficient basis for sampled bandpass signals. We mitigate the wall clutter efficiently within the compressive measurements through the use of a bandpass-modulated DPSS basis. Then, in each step of an iterative algorithm for detecting the target positions, we use a modulated DPSS basis to cancel nearly all of the target return corresponding to previously selected targets. With this basis, we improve upon the target detection sensitivity of a Fourier-based technique.
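To make the modulated-DPSS idea concrete, the sketch below builds a small modulated DPSS basis with scipy and projects a bandpass "clutter" component out of a signal. For clarity it works in the full sample domain, whereas the paper performs the cancellation within the compressive measurements; the frequencies and sizes are made up for illustration.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(2)
n, k = 256, 8                        # signal length, number of DPSS vectors
fc, half_bw = 0.20, 0.02             # clutter center frequency, half-bandwidth

# Baseband DPSS vectors concentrated in [-half_bw, half_bw], shifted to fc.
base = dpss(n, n * half_bw, k)                      # shape (k, n)
carrier = np.exp(2j * np.pi * fc * np.arange(n))
B = (base * carrier).T                              # modulated DPSS basis, n x k
Q, _ = np.linalg.qr(B)                              # orthonormalize the basis

# Bandpass "wall clutter" plus a weaker out-of-band target component.
clutter = (rng.standard_normal(k) @ base) * carrier
target = np.exp(2j * np.pi * 0.41 * np.arange(n))
x = clutter + 0.1 * target

x_clean = x - Q @ (Q.conj().T @ x)   # project out the clutter subspace
print(np.linalg.norm(clutter), np.linalg.norm(x_clean - 0.1 * target))
```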
Stretch processing (SP) is a radar signal processing technique that provides high range resolution by processing large-bandwidth signals with lower-rate analog-to-digital converters (ADCs). The range resolution of the large-bandwidth signal is obtained by examining a limited range window with low-rate ADC samples. The target space in the observed range window is sparse, and compressive sensing (CS) is an important tool to further decrease the number of measurements and sparsely reconstruct the target space for sparse scenes with a known basis, which is the Fourier basis in the general application of SP. Although classical CS techniques might be directly applied to SP, reconstruction performance degrades due to off-grid targets. In this paper, the applicability of the compressive sensing framework and its sparse signal recovery techniques to stretch processing is studied with off-grid cases in mind. For sparsity-based robust SP, the Perturbed Parameter Orthogonal Matching Pursuit (PPOMP) algorithm is proposed. PPOMP is an iterative technique that estimates off-grid target parameters through gradient descent. To compute the error between actual and reconstructed parameters, the Earth Mover's Distance (EMD) is used. The performance of the proposed algorithm is compared with classical CS and SP techniques.
Compressive sensing (CS) is a novel technology for sparse signal acquisition at a sub-Nyquist sampling rate but with relatively high resolution. Photonics-assisted CS has attracted much attention recently due to the wide bandwidth provided by photonics. This paper discusses approaches to realizing photonics-assisted CS.
A method for improving the transmission capacity of an optical wireless orthogonal frequency division multiplexing (OFDM) link based on a visible light-emitting diode (LED) is proposed in this paper. An original OFDM signal, encoded with various multilevel digital modulations such as quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), is converted into a sparse signal and then compressed using adaptive sampling with an inverse discrete cosine transform, while its error-free reconstruction is implemented using L1-minimization based on Bayesian compressive sensing (CS). For QPSK symbols, the transmission capacity of the optical wireless OFDM link was increased from 31.12 Mb/s to 51.87 Mb/s at a compression ratio of 40%, while for 16-QAM symbols it was improved from 62.5 Mb/s to 78.13 Mb/s at a compression ratio of 20%, in error-free wireless transmission (forward error correction limit: bit error rate of 10^-3).
Compressed sensing or compressive sampling (CS) is the process of signal reconstruction from samples obtained at a rate far below the Nyquist rate. In this work, Differential Pulse Code Modulation (DPCM) is coupled with block-based CS reconstruction using the Robbins-Monro (RM) approach, a non-parametric iterative CS reconstruction technique. Extensive simulation shows that RM gives better performance than the existing DPCM block-based Smoothed Projected Landweber (SPL) reconstruction technique; the noise seen in the block SPL algorithm is much less evident in this non-parametric approach. To achieve further compression of the data, Lempel-Ziv-Welch channel coding is proposed.
Sampling multiband radar signals is an essential issue in multiband/multifunction radar. This paper proposes a multiband quadrature compressive sampling (MQCS) system to perform the sampling at a sub-Landau rate. The MQCS system randomly projects the multiband signal into a compressive multiband one by modulating each subband signal with a low-pass signal, and then samples the compressive multiband signal at its Landau rate to output compressive measurements. The compressive inphase and quadrature (I/Q) components of each subband are extracted from the compressive measurements and exploited to recover the baseband I/Q components. As the effective bandwidth of the compressive multiband signal is much less than that of the received multiband signal, the sampling rate is much less than the Landau rate of the received signal. Simulation results validate that the proposed MQCS system can effectively acquire and reconstruct the baseband I/Q components of multiband signals.
This paper proposes a fast and robust procedure for sensing and reconstruction of sparse or compressible magnetic resonance images based on compressive sampling theory. The algorithm starts with incoherent undersampling of the k-space data of the image using a random matrix. The undersampled data are sparsified using the Haar transform. The Haar transform coefficients of the k-space data are then reconstructed using the orthogonal matching pursuit (OMP) algorithm. The reconstructed coefficients are inverse-transformed into k-space data and then into the image in the spatial domain. Finally, a median filter is used to suppress recovery noise artifacts. Experimental results show that the proposed procedure greatly reduces the image data acquisition time without significantly reducing image quality. The results also show that the error in the reconstructed image is reduced by the median filtering.
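A minimal 1D analogue of this pipeline is sketched below (the paper works on 2D k-space data): random partial Fourier measurements, a hand-rolled orthonormal Haar basis, a greedy OMP loop, and a final median filter. Every size, seed, and sparsity level is an assumption made for illustration.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(3)
n, m, s = 128, 64, 8                 # signal length, measurements, sparsity

def haar(n):
    """Orthonormal Haar matrix (rows are basis functions), n a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    h = haar(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

# Haar-sparse test signal standing in for an MR image line.
H = haar(n)
coef = np.zeros(n)
coef[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x = H.T @ coef

# Incoherent undersampling: keep m random rows of the DFT ("k-space").
rows = rng.choice(n, m, replace=False)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rows] @ H.T                    # measurement matrix on Haar coefficients
y = F[rows] @ x

# Orthogonal matching pursuit over the Haar coefficients.
r, supp = y.copy(), []
for _ in range(s):
    supp.append(int(np.argmax(np.abs(A.conj().T @ r))))
    sol, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
    r = y - A[:, supp] @ sol
c = np.zeros(n, complex)
c[supp] = sol
x_rec = medfilt(np.real(H.T @ c), 3)   # median filter suppresses artifacts
print(np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```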
In this paper, a new approach based on the Sub-Sampled Inverse Fast Fourier Transform (SSIFFT) for efficiently acquiring compressive measurements is proposed, motivated by the random-filter-based method and the sub-sampled FFT. In our approach, we first multiply the FFT of the input signal by that of a random-tap FIR filter in the frequency domain, and then use the SSIFFT to obtain compressive measurements in the time domain. This requires less data storage and computation than existing methods based on random filtering. Moreover, it is suitable for both one-dimensional and two-dimensional signals. Experimental results show that the proposed approach is effective and efficient.
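The measurement model is simple enough to show in a few lines. The numpy sketch below multiplies the spectra and then subsamples the inverse transform; for clarity it computes the full IFFT and discards samples, whereas an actual SSIFFT would evaluate only the m needed outputs. Sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 1024, 128                     # input length, number of measurements

x = rng.standard_normal(n)           # input signal
h = rng.standard_normal(n)           # random-tap FIR filter (zero-padded to n)

# Multiply spectra, invert, and keep every (n // m)-th time-domain sample.
X, Hf = np.fft.fft(x), np.fft.fft(h)
y = np.real(np.fft.ifft(X * Hf))[:: n // m]   # m compressive measurements
print(y.shape)                        # (128,)
```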
Compressive sampling and sparse reconstruction theory are applied to a linearly frequency-modulated continuous-wave hybrid lidar/radar system. The goal is to show that high-resolution time-of-flight measurements to underwater targets can be obtained using far fewer samples than dictated by Nyquist sampling theory. Traditional mixing/down-conversion and matched-filter signal processing methods are reviewed and compared to the compressive sampling and sparse reconstruction methods. Simulated evidence is provided to show the possible sampling rate reductions, and experiments are used to observe the effects that turbid underwater environments have on recovery. Results show that by using compressive sensing theory and sparse reconstruction, it is possible to achieve significant sample rate reductions while maintaining centimeter range resolution.
Non-intrusive load monitoring (NILM) extracts information about how energy is being used in a building from electricity measurements collected at a single location. Obtaining measurements at only one location is attractive because it is inexpensive and convenient, but it can result in large amounts of data from high-frequency electrical measurements. Different ways to compress or selectively measure this data are therefore required for practical implementations of NILM. We explore the use of random filtering and random demodulation, techniques that are closely related to compressed sensing, to offer a computationally simple way of compressing the electrical data. We show how these techniques allow one to reduce the sampling rate of the electricity measurements while requiring only one sampling channel and maintaining accurate NILM performance. Our tests are performed using real measurements of electrical signals from a public data set, thus demonstrating their effectiveness on real appliances and allowing for reproducibility and comparison with other data management strategies for NILM.
A robust appearance model is usually required in visual tracking to handle pose variation, illumination variation, occlusion, and many other interferences occurring in video. So far, a number of tracking algorithms use image samples from previous frames to update appearance models. That approach has several limitations: 1) at the beginning of tracking, there is not yet a sufficient amount of data for online updates, because these adaptive models are data-dependent; and 2) in many challenging situations, robustly updating the appearance models is difficult, which often results in drift. In this paper, we propose a tracking algorithm based on compressive sensing theory and the particle filter framework. Features are extracted by random projection with a data-independent basis. A particle filter is employed to estimate the target location more accurately and to make full use of the updated classifier. The robustness and effectiveness of our tracker are demonstrated in several experiments.
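The abstract does not specify the projection matrix, but trackers in this family commonly use a sparse data-independent random matrix (Achlioptas-style), which makes feature extraction a handful of pixel sums and differences. The sketch below shows that feature step alone, with made-up patch and feature sizes; the classifier and particle filter are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
d, k = 32 * 32, 50                   # patch dimension, compressive feature length

# Sparse random projection (data-independent): most entries are zero, the
# rest +/-1, so each feature is a cheap sum/difference of pixel values.
probs = [1 / 6, 2 / 3, 1 / 6]        # P(+1), P(0), P(-1)
R = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=probs) * np.sqrt(3.0)

patch = rng.random(d)                # stand-in for a candidate image patch
feature = R @ patch                  # k-dimensional compressive feature
print(feature.shape)                 # (50,), fed to the online classifier
```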
Compressed sensing (CS) or compressive sampling deals with the reconstruction of signals from limited observations/measurements, far below the Nyquist rate requirement. This is essential in many practical imaging systems, as sampling at the Nyquist rate may not always be possible due to limited storage, slow sampling rates, or extremely expensive measurements, e.g., magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by random noise addition to act as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to diminish the noise and reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over existing work.
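A very loose 1D sketch of the predict, inject noise, then Wiener-filter stages is given below; it uses nearest-observed-sample prediction in place of the paper's distance-based linear prediction and omits the RM iteration itself, so treat every choice here as an assumption.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(6)
n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * t / 32)            # ground-truth 1D signal

# Observe a random subset of samples (the known observations).
mask = rng.random(n) < 0.5
idx = np.flatnonzero(mask)

# Predict unobserved samples from the nearest observed sample, then add
# small noise at the unobserved positions to act as the residual.
nearest = idx[np.argmin(np.abs(t[:, None] - idx[None, :]), axis=1)]
pred = x[nearest] + (~mask) * 0.05 * rng.standard_normal(n)

x_hat = wiener(pred, mysize=5)            # adaptive Wiener-filter denoising
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```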
This paper reviews some existing speech enhancement techniques and proposes a new method for enhancing speech by combining compressed sensing and Kalman filtering. The approach reconstructs the noisy speech signal using the Compressive Sampling Matching Pursuit (CoSaMP) algorithm and further enhances it with a Kalman filter. The performance of the proposed method is evaluated and compared with that of existing techniques in terms of speech intelligibility and quality measures. The proposed algorithm shows improved performance compared to spectral subtraction, MMSE, Wiener filtering, signal subspace, and Kalman filtering in terms of WSS, LLR, SegSNR, SNRloss, PESQ, and overall quality.
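For reference, a minimal generic CoSaMP implementation is sketched below on a synthetic sparse signal; in the speech setting the signal would instead be sparse in some transform basis (e.g., DCT), a detail the abstract does not specify.

```python
import numpy as np

def cosamp(A, y, s, iters=20):
    """Minimal CoSaMP: recover an s-sparse x from y = A @ x."""
    n = A.shape[1]
    x, r = np.zeros(n), y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]        # 2s largest correlations
        supp = np.union1d(omega, np.flatnonzero(x)).astype(int)
        sol, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
        b = np.zeros(n)
        b[supp] = sol
        keep = np.argsort(np.abs(b))[-s:]                 # prune to s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                     # update residual
        if np.linalg.norm(r) < 1e-10:
            break
    return x

rng = np.random.default_rng(7)
m, n, s = 80, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(x_true - cosamp(A, A @ x_true, s)))
```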
This paper presents a model calibration algorithm for the modulated wideband converter (MWC) with a non-ideal analog lowpass filter (LPF). The presented technique uses a test signal to estimate the finite impulse response (FIR) of the practical non-ideal LPF, and a digital compensation filter is then designed to calibrate the approximated FIR filter in the digital domain. At the cost of a moderate oversampling rate, the calibrated filter performs as an ideal LPF. The calibrated model uses the MWC system with the non-ideal LPF to capture samples of the underlying signal, which are then filtered by the digital compensation filter. Experimental results indicate that, without any changes to the MWC architecture, the proposed algorithm obtains samples equivalent to those of a standard MWC with an ideal LPF, and the signal can be reconstructed with overwhelming probability.
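To illustrate the compensation step only (not the paper's estimation procedure or its particular filter design), the sketch below takes hypothetical estimated LPF taps and designs a least-squares inverse FIR that equalizes them to a delayed impulse; the taps, length, and delay are all assumptions.

```python
import numpy as np
from scipy.linalg import lstsq, toeplitz

# Hypothetical non-ideal lowpass FIR, as if estimated from a test signal.
h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])

# Design a least-squares inverse FIR g so that h * g ~ a delayed delta.
L, delay = 31, 17                        # compensation length, modeling delay
n = L + len(h) - 1
H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.zeros(L))   # convolution matrix
d = np.zeros(n)
d[delay] = 1.0
g, *_ = lstsq(H, d)

# Filtering the branch samples with g approximates the ideal-LPF samples.
print(np.abs(np.convolve(h, g) - d).max())   # small residual -> good calibration
```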