Bibliography
Global Navigation Satellite System (GNSS) jamming is an evolving technology in which new modulations are progressively introduced to reduce the impact of interference mitigation techniques such as Adaptive Notch Filters (ANFs). The Standardisation of GNSS Threat reporting and Receiver testing through International Knowledge Exchange, Experimentation and Exploitation (STRIKE3) project recently described a new class of jamming signals, called tick signals, where a basic frequency tick is hopped over a large frequency range. In this way, discontinuities are introduced in the instantaneous frequency of the jamming signals. These discontinuities reduce the effectiveness of ANFs, which become unable to track the jamming signal. This paper analyses the effectiveness of interference mitigation techniques with respect to frequency-hopped tick jamming signals. ANFs and Robust Interference Mitigation (RIM) techniques are analysed. From the analysis, it emerges that, despite the presence of frequency discontinuities, ANFs provide some margin against tick signals. However, frequency discontinuities prevent ANFs from removing all the jamming components, and receiver operations are denied for moderate Jamming to Noise power ratio (J/N) values. RIM techniques are not affected by the presence of frequency discontinuities, and significantly higher jamming powers can be sustained by the receiver when this type of technique is adopted.
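To illustrate the tracking behaviour described above, the following sketch (with invented signal parameters, not the STRIKE3 waveforms) runs a first-order complex LMS predictor, a minimal ANF-style tracker whose coefficient angle estimates the jammer frequency, against a tone that hops to a new frequency every 1000 samples; the residual power right after each hop shows the re-convergence penalty caused by the frequency discontinuities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frequency-hopped "tick" jammer: a tone that jumps to a new
# normalized frequency every hop_len samples (illustrative values only).
hop_freqs = rng.uniform(-0.4, 0.4, size=8) * np.pi    # hop targets (rad/sample)
hop_len = 1000
inst_freq = np.repeat(hop_freqs, hop_len)
n_samples = inst_freq.size
jammer = np.exp(1j * np.cumsum(inst_freq))
noise = 0.05 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))
x = jammer + noise

# First-order complex LMS predictor: x_hat[n] = w * x[n-1].
# For a single tone, w converges to exp(j*w0), so angle(w) tracks the jammer
# frequency and the prediction error e[n] acts as the notch output.
mu = 0.05
w = 0 + 0j
est_freq = np.zeros(n_samples)
notch_out = np.zeros(n_samples, dtype=complex)
for k in range(1, n_samples):
    e = x[k] - w * x[k - 1]
    w += mu * np.conj(x[k - 1]) * e       # complex LMS update
    est_freq[k] = np.angle(w)             # tracked jammer frequency
    notch_out[k] = e

# Right after each hop the estimate lags the true frequency, so jammer energy
# leaks through until the filter re-converges.
for h in range(len(hop_freqs)):
    seg = notch_out[h * hop_len:(h + 1) * hop_len]
    print(f"hop {h}: residual power, first 100 samples {np.mean(np.abs(seg[:100])**2):.3f}, "
          f"last 100 samples {np.mean(np.abs(seg[-100:])**2):.3f}")
```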
Publicly available blacklists are popular tools to capture and spread information about misbehaving entities on the Internet. In some cases, their straightforward utilization leads to many false positives. In this work, we propose a system that combines blacklists with network flow data while introducing automated evaluation techniques to avoid reporting unreliable alerts. The core of the system is formed by an Adaptive Filter together with an Evaluator module. The assessment of the system was performed on data obtained from a national backbone network. The results show the contribution of such a system to the reduction of unreliable alerts.
The paper considers the efficiency of an adaptive non-recursive filter that adjusts its weighting coefficients by taking into account the constant envelope of the desired signal when receiving multi-position phase-shift-keyed signals against a background of noise and non-fluctuating interference. Two types of such interference are considered: harmonic and retransmitted. The optimal filter parameters (adaptation coefficient and length) are determined by simulation; the effect of the filter on the noise immunity of a quadrature coherent receiver of multi-position phase-shift-keyed signals is estimated for different combinations of interference and their intensities. It is shown that such an adaptive filter can successfully suppress the most dangerous case, aimed (spot) harmonic interference.
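The abstract does not give the exact coefficient-adjustment rule, so the sketch below uses a common way of exploiting a constant envelope, a constant-modulus (CMA-type) stochastic-gradient update on an adaptive FIR filter; the signal model, interference level, filter length and adaptation coefficient are all invented and serve only as a rough stand-in for the algorithm described.

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK symbols (constant envelope) corrupted by a harmonic interferer and noise
# (all amplitudes, lengths and step sizes below are illustrative).
N = 5000
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
t = np.arange(N)
x = symbols + 0.5 * np.exp(1j * 0.9 * t) \
    + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Constant-modulus (CMA 2-2) adaptive FIR filter: the weight update penalizes
# deviations of |y|^2 from the known squared envelope R of the desired signal.
L = 16          # filter length
mu = 1e-3       # adaptation coefficient
R = 1.0         # squared envelope of the unit-modulus PSK constellation
w = np.zeros(L, dtype=complex)
w[0] = 1.0      # centre-spike initialization
y = np.zeros(N, dtype=complex)
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]          # most recent L samples
    y[n] = np.vdot(w, u)                  # y = w^H u
    err = y[n] * (np.abs(y[n])**2 - R)    # CMA error term
    w -= mu * np.conj(err) * u            # stochastic-gradient update

print("envelope spread at input :", np.std(np.abs(x[-500:])))
print("envelope spread at output:", np.std(np.abs(y[-500:])))
```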
Intentional interference presents a major threat to the operation of Global Navigation Satellite Systems. Adaptive notch filtering provides an effective countermeasure against narrowband interference. This paper presents a comparative performance analysis of two adaptive notch filtering algorithms for GPS-specific applications, based on direct-form second-order and lattice-based notch filter structures. The performance of each algorithm is evaluated in terms of the ratio of jamming power to noise density against the effective signal-to-noise ratio at the output of the correlator. A fully adaptive lattice notch filter is proposed, which is able to simultaneously adapt its coefficients to alter the notch frequency along with the bandwidth of the notch. The filter demonstrates superior tracking performance and convergence rate in comparison with an existing algorithm taken from the literature. Moreover, the paper also describes the complete GPS modelling platform implemented in Simulink.
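As a hedged illustration of the direct-form second-order structure mentioned above, the sketch below adapts a constrained-poles-and-zeros notch filter with a simplified LMS gradient so that the notch frequency follows a swept CW jammer; the jammer, noise, pole-contraction factor and step size are invented, and the lattice variant proposed in the paper is not reproduced.

```python
import numpy as np

# Direct-form second-order notch with constrained poles and zeros:
#   H(z) = (1 - a z^-1 + z^-2) / (1 - rho*a z^-1 + rho^2 z^-2),  a = 2*cos(w0)
# The single coefficient `a` is adapted with a simplified LMS gradient so the
# notch frequency w0 tracks a swept CW jammer (parameters are illustrative).
rng = np.random.default_rng(2)
N = 20000
w_true = np.linspace(0.5, 1.5, N)                    # slowly swept jammer frequency
jam = 5.0 * np.cos(np.cumsum(w_true))
x = jam + rng.standard_normal(N)                     # jammer + unit-variance noise

rho = 0.95          # pole contraction factor: controls notch bandwidth
mu = 1e-5           # adaptation step size
a = 2 * np.cos(1.0) # initial guess of 2*cos(w0)
s1 = s2 = 0.0       # direct-form II state
w_hat = np.zeros(N)
e = np.zeros(N)
for n in range(N):
    s0 = x[n] + rho * a * s1 - rho**2 * s2           # recursive (pole) part
    e[n] = s0 - a * s1 + s2                          # notch (zero) part
    a += mu * e[n] * s1                              # simplified gradient update
    a = np.clip(a, -2.0, 2.0)                        # keep 2*cos(w0) valid
    s2, s1 = s1, s0
    w_hat[n] = np.arccos(np.clip(a / 2.0, -1.0, 1.0))

print("final frequency estimate:", w_hat[-1], "true:", w_true[-1])
```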
This paper addresses the problem of magnetic resonance (MR) image reconstruction under the compressive sampling (compressed sensing) paradigm, followed by segmentation. To improve reconstruction from a low-dimensional measurement space, weighted linear prediction and random noise injection at the unobserved locations are performed first, followed by spatial-domain de-noising through adaptive recursive filtering. The reconstructed image, however, suffers from imprecise and/or missing edges, boundaries, lines and curvatures, as well as residual noise. The curvelet transform is then used to remove noise and enhance edges through hard thresholding and suppression of the approximation sub-bands, respectively. Finally, genetic algorithm (GA) based clustering is performed to segment the sharpened MR image using a weighted contribution of variance and entropy values. Extensive simulation results are shown to highlight the performance improvement in both the reconstruction and segmentation problems.
Compressed sensing (CS), or compressive sampling, deals with the reconstruction of signals from limited observations/measurements, far below the Nyquist-rate requirement. This is essential in many practical imaging systems, where sampling at the Nyquist rate may not be possible due to limited storage, slow sampling rates, or extremely expensive measurements, e.g. magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by random noise addition to act as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to attenuate the noise and to reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over existing work.
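A minimal sketch of the prediction/noise-injection/Wiener-filtering stage, assuming a toy image, a random sampling mask, an 8-neighbour inverse-distance prediction and an invented noise level; it uses scipy.signal.wiener for the spatial-domain adaptive filter and does not reproduce the Robbins-Monro formulation itself.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(3)

# Toy "image" and a random measurement mask standing in for sub-Nyquist
# sampling (values and mask density are illustrative, not the paper's setup).
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(xx / 6.0) + np.cos(yy / 9.0)
mask = rng.random(img.shape) < 0.4          # ~40% of pixels observed

# Distance-based linear prediction of each unobserved pixel from its nearest
# observed neighbours (inverse-distance weights), then small random noise is
# injected at the predicted locations to play the role of the residual.
obs_idx = np.argwhere(mask)
obs_val = img[mask]
recon = img * mask
for (r, c) in np.argwhere(~mask):
    d2 = (obs_idx[:, 0] - r) ** 2 + (obs_idx[:, 1] - c) ** 2
    near = np.argsort(d2)[:8]                       # 8 nearest observed pixels
    w = 1.0 / np.sqrt(d2[near] + 1e-9)
    recon[r, c] = np.sum(w * obs_val[near]) / np.sum(w)
recon[~mask] += 0.05 * rng.standard_normal(np.count_nonzero(~mask))

# Spatial-domain adaptive Wiener filtering to attenuate the injected noise.
recon_filtered = wiener(recon, mysize=5)

print("RMSE before filtering:", np.sqrt(np.mean((recon - img) ** 2)))
print("RMSE after  filtering:", np.sqrt(np.mean((recon_filtered - img) ** 2)))
```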
A novel short-time Fourier transform (STFT) domain adaptive filtering scheme is proposed that can easily be combined with nonlinear post-filters, such as residual echo or noise reduction, in acoustic echo cancellation. Unlike conventional STFT subband adaptive filters, which suffer from aliasing artifacts due to their poor prototype filter, our scheme achieves good accuracy by exploiting the relationship between the linear convolution and that prototype filter, i.e., the STFT window function. The effectiveness of our scheme was confirmed through simulations comparing it with conventional methods.
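For orientation, the sketch below implements a plain per-bin STFT-domain NLMS echo canceller on a synthetic echo path; it deliberately omits the linear-convolution correction that is the contribution of the paper, and the window, frame, hop, tap count and step size are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Far-end signal, a short synthetic echo path, and the resulting microphone signal.
N = 16000
far = rng.standard_normal(N)
h = np.array([0.6, 0.3, -0.2, 0.1, 0.05])
mic = np.convolve(far, h)[:N] + 0.01 * rng.standard_normal(N)

# STFT parameters: 75%-overlap square-root Hann analysis window.
frame, hop = 256, 64
win = np.sqrt(np.hanning(frame))

def stft_frames(sig):
    n_frames = (len(sig) - frame) // hop + 1
    return np.stack([np.fft.rfft(win * sig[i*hop:i*hop+frame]) for i in range(n_frames)])

X = stft_frames(far)     # far-end STFT, shape (frames, bins)
D = stft_frames(mic)     # microphone STFT

# A few NLMS taps per frequency bin spanning several past frames.
taps = 4
W = np.zeros((taps, X.shape[1]), dtype=complex)
mu, eps = 0.5, 1e-6
E = np.zeros_like(D)
for m in range(taps, X.shape[0]):
    U = X[m-taps+1:m+1][::-1]                    # taps x bins regressor
    Y = np.sum(np.conj(W) * U, axis=0)           # per-bin echo estimate
    E[m] = D[m] - Y                              # error (echo-cancelled) frame
    norm = np.sum(np.abs(U)**2, axis=0) + eps
    W += mu * U * np.conj(E[m]) / norm           # per-bin NLMS update

erle = 10 * np.log10(np.mean(np.abs(D[taps:])**2) / np.mean(np.abs(E[taps:])**2))
print(f"approximate ERLE: {erle:.1f} dB")
```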
This brief presents a methodology to develop recursive filters in reproducing kernel Hilbert spaces. Unlike previous approaches that exploit the kernel trick on filtered and then mapped samples, we explicitly define the model recursivity in the Hilbert space. For that, we exploit some properties of functional analysis and recursive computation of dot products without the need of preimaging or a training dataset. We illustrate the feasibility of the methodology in the particular case of the γ-filter, which is an infinite impulse response filter with controlled stability and memory depth. Different algorithmic formulations emerge from the signal model. Experiments in chaotic and electroencephalographic time series prediction, complex nonlinear system identification, and adaptive antenna array processing demonstrate the potential of the approach for scenarios where recursivity and nonlinearity have to be readily combined.
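For reference, the underlying linear γ-filter (before kernelization) is a cascade of identical leaky delay stages whose single feedback parameter controls stability and memory depth; a minimal LMS-adapted sketch on a toy system-identification problem follows, with invented plant, order and step-size values, and without the RKHS recursion that is the contribution of the brief.

```python
import numpy as np

rng = np.random.default_rng(5)

# System-identification toy problem: the unknown plant is a low-pass IIR system,
# so a recursive (infinite-memory) adaptive structure is a natural fit.
N = 10000
u = rng.standard_normal(N)
d = np.zeros(N)
for n in range(1, N):
    d[n] = 0.8 * d[n-1] + 0.5 * u[n] + 0.01 * rng.standard_normal()

# Linear gamma-filter: a cascade of identical leaky delay stages
#   g_k[n] = (1 - lam) * g_k[n-1] + lam * g_{k-1}[n-1],   g_0[n] = u[n]
# lam in (0, 2) fixes stability and memory depth; the output weights w_k are
# adapted with LMS (all parameter values are illustrative).
K, lam, mu = 6, 0.5, 0.01
g = np.zeros(K + 1)        # tap states g_0 ... g_K
w = np.zeros(K + 1)
e = np.zeros(N)
for n in range(N):
    g_prev = g.copy()
    g[0] = u[n]
    for k in range(1, K + 1):
        g[k] = (1 - lam) * g_prev[k] + lam * g_prev[k - 1]
    y = w @ g
    e[n] = d[n] - y
    w += mu * e[n] * g     # LMS update of the output weights

print("steady-state MSE:", np.mean(e[-1000:] ** 2))
```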
Misalignment angle estimation for a strapdown inertial navigation system (INS) using global positioning system (GPS) data is highly affected by measurement noise, especially noise with time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in the INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional minimum mean square error (MSE) criterion-based adaptive filtering algorithms, MCC-based algorithms show better robustness against impulsive interference. However, their major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter, as in the conventional convex combination scheme, the method of maximizing the correntropy is introduced to make the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight transfer method to further improve tracking performance. Good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios that include impulsive interference and abrupt changes.
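A minimal sketch of the combination scheme, with invented plant, noise and step-size values: two MCC-type filters with different step sizes are mixed through a sigmoidal parameter that is adapted by gradient ascent on the correntropy of the combined error; the weight transfer mechanism of the letter is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Plant identification with impulsive measurement noise, the setting where
# MCC-type updates are preferred over plain LMS (all parameters illustrative).
N, L = 20000, 8
h = rng.standard_normal(L)
u = rng.standard_normal(N)
d = np.convolve(u, h)[:N]
impulses = (rng.random(N) < 0.01) * 50.0 * rng.standard_normal(N)
d += 0.01 * rng.standard_normal(N) + impulses

sigma = 2.0                        # correntropy kernel width
mu_fast, mu_slow, mu_a = 0.05, 0.005, 0.5
w1 = np.zeros(L); w2 = np.zeros(L) # fast and slow component filters
a = 0.0                            # mixing parameter, lambda = sigmoid(a)
err = np.zeros(N)
for n in range(L, N):
    x = u[n-L+1:n+1][::-1]
    y1, y2 = w1 @ x, w2 @ x
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1 - lam) * y2
    e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
    k1 = np.exp(-e1**2 / (2 * sigma**2))          # Gaussian correntropy weights:
    k2 = np.exp(-e2**2 / (2 * sigma**2))          # impulsive errors get tiny weight
    w1 += mu_fast * k1 * e1 * x                   # MCC (maximum correntropy) updates
    w2 += mu_slow * k2 * e2 * x
    # Mixing parameter: gradient ASCENT on the correntropy of the combined error.
    a += mu_a * np.exp(-e**2 / (2 * sigma**2)) * e * (y1 - y2) * lam * (1 - lam)
    a = np.clip(a, -4.0, 4.0)                     # keep lambda away from hard 0/1
    err[n] = e

print("steady-state MSE (excluding impulses):",
      np.mean(err[-2000:][np.abs(err[-2000:]) < 5] ** 2))
```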
In adaptive processing applications, the design of the adaptive filter requires estimation of the unknown interference-plus-noise covariance matrix from secondary training data. The presence of outliers in the training data can severely degrade the performance of adaptive processing. By exploiting the sparse prior of the outliers, a Bayesian framework to develop a computationally efficient outlier-resistant adaptive filter based on sparse Bayesian learning (SBL) is proposed. The expectation-maximisation (EM) algorithm is used therein to obtain a maximum a posteriori (MAP) estimate of the interference-plus-noise covariance matrix. Numerical simulations demonstrate the superiority of the proposed method over existing methods.
In this paper, the use of some of the most popular adaptive filtering algorithms for the purpose of linearizing power amplifiers by the well-known digital predistortion (DPD) technique is investigated. First, an introduction to the problem of power amplifier linearization is given, followed by a discussion of the model used for this purpose. Next, a variety of adaptive algorithms are used to construct the digital predistorter function for a highly nonlinear power amplifier and their performance is comparatively analyzed. Based on the simulations presented in this paper, conclusions regarding the choice of algorithm are derived.
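As a hedged illustration of adaptive DPD, the sketch below identifies a memoryless polynomial post-inverse of a toy amplifier model with complex NLMS and copies it as the predistorter (indirect learning); the amplifier model, basis functions and parameters are invented, and none of the specific algorithms compared in the paper are singled out.

```python
import numpy as np

rng = np.random.default_rng(7)

def pa(x):
    """Toy memoryless power-amplifier model with odd-order nonlinearity."""
    return x - 0.15 * x * np.abs(x)**2 + 0.02 * x * np.abs(x)**4

# Indirect-learning DPD: an adaptive post-inverse is identified from the PA
# output and then copied in front of the PA as the predistorter.
N, K = 20000, 3                       # samples, number of odd-order basis terms
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.4   # baseband drive
y = pa(x)                             # PA output (normalized gain of 1 assumed)

def basis(v):
    # Memoryless polynomial basis: v, v|v|^2, v|v|^4
    return np.stack([v * np.abs(v)**(2 * k) for k in range(K)])

# Complex NLMS identification of the post-inverse coefficients c: x_hat = c^H B(y).
c = np.zeros(K, dtype=complex)
mu, eps = 0.5, 1e-6
for n in range(N):
    b = basis(y[n:n+1])[:, 0]
    e = x[n] - np.vdot(c, b)
    c += mu * b * np.conj(e) / (np.vdot(b, b).real + eps)

# Copy the post-inverse as predistorter and compare residual nonlinear error.
x_pd = np.sum(np.conj(c)[:, None] * basis(x), axis=0)
lin_err = np.mean(np.abs(pa(x) - x)**2)
dpd_err = np.mean(np.abs(pa(x_pd) - x)**2)
print(f"mean-square deviation from linear: without DPD {lin_err:.2e}, with DPD {dpd_err:.2e}")
```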
This letter presents an adaptive filtering approach for synthetic aperture radar (SAR) image time series based on the analysis of their temporal evolution. First, change detection matrices (CDMs) containing information on changed and unchanged pixels are constructed for each spatial position over the time series by implementing coefficient of variation (CV) cross tests. Afterward, the CDM provides, for each pixel in each image, an adaptive spatiotemporal neighborhood, which is used to derive the filtered value. The proposed approach is illustrated on a time series of 25 ascending TerraSAR-X images acquired from November 6, 2009 to September 25, 2011 over the Chamonix-Mont-Blanc test site, which includes different kinds of change, such as parking occupation, glacier surface evolution, etc.
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds for its step size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with simulation experiments.
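The update being analysed is stochastic gradient descent on the instantaneous TLS cost e^2/(1+||w||^2); a minimal sketch on an errors-in-variables toy problem follows, with invented step size and noise levels, and without the stability bounds and mean-square-deviation expressions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Errors-in-variables setup: both the regressors and the outputs are noisy,
# which is the scenario total least squares is meant for (values illustrative).
N, L = 50000, 4
w_true = np.array([0.5, -0.3, 0.2, 0.1])
u_clean = rng.standard_normal((N, L))
d = u_clean @ w_true + 0.05 * rng.standard_normal(N)        # noisy output
u = u_clean + 0.05 * rng.standard_normal((N, L))            # noisy input

# GD-TLS: stochastic gradient descent on the instantaneous TLS cost
#   J(w) = e^2 / (1 + ||w||^2),   e = d - u^T w
mu = 0.01
w = np.zeros(L)
for n in range(N):
    e = d[n] - u[n] @ w
    denom = 1.0 + w @ w
    grad = (-2 * e * u[n] * denom - 2 * e**2 * w) / denom**2
    w -= mu * grad

print("estimated weights:", np.round(w, 3), " true:", w_true)
```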
Currently, most electrophotographic printers use halftoning techniques to print continuous-tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone images is proposed that considers both printing distortions and halftone patterns. Based on this model, an adaptive filtering based descreening method is proposed to recover high-quality contone images from the scanned images. An image-redundancy-based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel, whose parameters are automatically adjusted with the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges to recover a high-quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high-quality contone images. Compared with state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.
A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
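A minimal sketch contrasting the classical affine-projection update, which inverts a projection-order-sized matrix every iteration, with an affine-projection-like update that simply drops the inversion; this is only meant to show where the computational saving comes from, with invented plant, input colouring and step sizes, and is not the exact algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# System identification with coloured input, where APA-type algorithms shine.
N, L, P = 30000, 16, 4          # samples, filter length, projection order
h = rng.standard_normal(L) * np.exp(-0.3 * np.arange(L))
white = rng.standard_normal(N)
u = np.convolve(white, [1, 0.8, 0.5], mode="same")          # coloured input
d = np.convolve(u, h)[:N] + 0.01 * rng.standard_normal(N)

def run(update, mu):
    w = np.zeros(L)
    e_hist = np.zeros(N)
    for n in range(L + P, N):
        A = np.stack([u[n-k-L+1:n-k+1][::-1] for k in range(P)])   # P x L regressors
        e = d[n-P+1:n+1][::-1] - A @ w                             # a priori error vector
        w = update(w, A, e, mu)
        e_hist[n] = e[0]
    return np.mean(e_hist[-2000:]**2)

def apa(w, A, e, mu, delta=1e-3):
    # Classical affine projection: requires inverting a P x P matrix each step.
    return w + mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(len(e)), e)

def apl(w, A, e, mu):
    # Affine-projection-like update: the matrix inversion is dropped.
    return w + mu * A.T @ e

print("steady-state MSE, APA:", run(apa, 0.2))
print("steady-state MSE, APL-like:", run(apl, 0.002))
```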
This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of exponentially decaying dc offset. The discrete Fourier transform cannot obtain an accurate phasor of the fundamental-frequency component in digital protective relays under dynamic system fault conditions because the characteristics of the exponentially decaying dc offset are not consistent. EMD is a fully data-driven, not model-based, adaptive filtering procedure for extracting signal components. However, the original EMD technique has high computational complexity and requires a long data series. In this paper, a short-data-series-based EMD filtering procedure is proposed, and an optimum Hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and fast convergence, and is well suited to relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance using computer-simulated signals, PSCAD/EMTDC-generated signals, and real power system fault signals.
Recently, there has been a pronounced increase of interest in the field of renewable energy. In this area, power inverters are crucial building blocks among energy converters, since they convert direct current (DC) to alternating current (AC). Grid-connected power inverters should operate in synchronism with the grid voltage. In this paper, the structure of a power system based on adaptive filtering is described. The main purpose of the adaptive filter is to adapt the output signal of the inverter to the corresponding load and/or grid signal. By involving adaptive filtering, the response time decreases and the quality of power delivered to the load or grid increases. A comparative analysis of power system operation with and without adaptive filtering is given. In addition, the impact of variable load impedance on the quality of delivered power is considered. Results relating to the total harmonic distortion (THD) factor are obtained with Matlab/Simulink software.
Recent advances in adaptive filter theory and in hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent with today's real-world problems. Algorithms that process the data must keep pace with the advances in signal acquisition. Recently, kernel adaptive (online) filtering algorithms have been proposed that make no assumptions regarding the linearity of the input space. Additionally, advances in wavelet data compression/dimension reduction have led to new algorithms that are appropriate for producing a hybrid nonlinear filtering framework. In this paper we utilize a combination of wavelet dimension reduction and kernel adaptive filtering. We derive algorithms in which the dimension of the data is reduced by a wavelet transform, and then apply kernel adaptive filtering to the reduced-dimension data to find the appropriate model parameters, demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen on-the-fly, based on the data. In particular, it is shown that using a few optimal wavelet coefficients from the constructed wavelet filter, for both training and testing data sets, as the input to the kernel adaptive filter yields convergence to a near-optimal learning curve (MSE). We demonstrate these algorithms on simulated data and on a real data set from food processing.
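A minimal sketch of the hybrid framework, assuming a one-level Haar approximation as the dimension-reduction step and a Gaussian-kernel KLMS filter as the kernel adaptive algorithm; the on-the-fly wavelet selection and the food-processing data of the paper are not reproduced, and the embedding dimension, kernel width and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Nonlinear time-series prediction toy problem.
N = 3000
t = np.arange(N)
s = np.sin(0.05 * t) + 0.5 * np.sin(0.013 * t) ** 3 + 0.05 * rng.standard_normal(N)

D = 16                      # raw embedding dimension (16 past samples)
def embed(n):
    return s[n - D:n]

def haar_approx(v):
    # One-level Haar analysis: keep only the approximation (pairwise averages),
    # halving the input dimension before it reaches the kernel filter.
    return (v[0::2] + v[1::2]) / np.sqrt(2.0)

# Kernel LMS with a Gaussian kernel: the model is a growing sum of kernels
# centred at past (reduced) inputs, with coefficients eta * prediction error.
eta, gamma = 0.2, 0.5
centres, alphas = [], []
def klms_predict(z):
    if not centres:
        return 0.0
    C = np.asarray(centres)
    return float(np.asarray(alphas) @ np.exp(-gamma * np.sum((C - z) ** 2, axis=1)))

err = np.zeros(N)
for n in range(D, N):
    z = haar_approx(embed(n))        # wavelet-reduced input vector
    y = klms_predict(z)
    e = s[n] - y                     # one-step prediction error
    centres.append(z)                # new kernel centre
    alphas.append(eta * e)           # its coefficient (KLMS update)
    err[n] = e

print("MSE, first 500 steps:", np.mean(err[D:D+500]**2),
      " last 500 steps:", np.mean(err[-500:]**2))
```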