Biblio
As email plays an increasingly important role on the Internet, compromised email accounts pose ever more severe challenges, especially for the administrators of institutional email service providers. Inspired by previous work on spam filtering and compromised-account detection, we propose several criteria, such as Success Outdegree Proportion, Reverse PageRank, Recipient Clustering Coefficient, and Legitimate Recipient Proportion, for detecting compromised email accounts from the perspective of graph topology. Specifically, several widely used social network analysis metrics are adapted to the characteristics of mail log analysis. We evaluate our methods on a dataset constructed by mining one month (30 days) of mail logs from a university with 118,617 local users and 11,460,399 mail log entries. The experimental results demonstrate that our methods achieve very positive performance, and we also show that they can be applied efficiently to even larger datasets.
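As a sketch of one of the named metrics, the local clustering coefficient of a sender's recipient set can be computed as follows. This is a minimal illustration on a toy mail graph; the paper's exact definition of the Recipient Clustering Coefficient may differ in how it handles edge direction.

```python
from itertools import combinations

def recipient_clustering_coefficient(graph, sender):
    """Fraction of pairs among `sender`'s recipients that also mail
    each other (edges treated as undirected for the pair check)."""
    recipients = graph.get(sender, set())
    k = len(recipients)
    if k < 2:
        return 0.0
    links = sum(
        1 for a, b in combinations(sorted(recipients), 2)
        if b in graph.get(a, set()) or a in graph.get(b, set())
    )
    return 2.0 * links / (k * (k - 1))

# Toy mail graph: account -> set of accounts it has mailed.
mail_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"carol"},
}
cc = recipient_clustering_coefficient(mail_graph, "alice")  # 1 of 3 pairs linked
```

A spam bot that mails many unrelated accounts tends to produce a low coefficient, which is the intuition behind using it as a detection feature.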
After being widely studied in theory, physical layer security schemes are moving closer to the consumer market. Still, a thorough practical analysis of their resilience against attacks is missing. In this work, we use software-defined radios to implement such a physical layer security scheme, namely orthogonal blinding. To this end, we use orthogonal frequency-division multiplexing (OFDM) as the physical layer, similarly to WiFi. In orthogonal blinding, a multi-antenna transmitter overlays the data it transmits with noise in such a way that every node except the intended receiver is disturbed by the noise. Still, our known-plaintext attack can extract the data signal at an eavesdropper by means of an adaptive filter trained on a few known data symbols. Our demonstrator illustrates the iterative training process at the symbol level, thus showing the practicability of the attack.
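The core mechanism of such an attack can be sketched as follows: a two-antenna eavesdropper trains a linear combiner with LMS on a few known symbols, then uses it to separate the data from the blinding noise. This is an illustrative toy model, not the authors' implementation; the channel vectors and parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test = 200, 200
h = np.array([1.0, 0.5])    # assumed data channel to the eavesdropper
g = np.array([0.5, -1.0])   # assumed channel of the blinding noise

data = rng.choice([-1.0, 1.0], size=n_train + n_test)   # BPSK-like symbols
noise = rng.standard_normal(n_train + n_test)           # blinding signal
rx = np.outer(data, h) + np.outer(noise, g)             # 2-antenna observations

# LMS training on the known-plaintext portion of the stream.
w, mu = np.zeros(2), 0.1
for i in range(n_train):
    e = data[i] - w @ rx[i]      # error against the known symbol
    w += mu * e * rx[i]          # LMS weight update

# Apply the trained filter to the unknown symbols and demodulate.
est = np.sign(rx[n_train:] @ w)
ber = np.mean(est != data[n_train:])
```

After convergence the combiner approximately nulls the noise subspace, so the bit error rate on the held-out symbols is near zero despite the blinding.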
The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving their effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of feature selection technique influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features outperforms several state-of-the-art baselines, such as collaborative filtering and matrix factorization, confirming the effectiveness of the proposed approach.
Compressed sensing (CS), or compressive sampling, deals with the reconstruction of signals from limited observations/measurements, far below the Nyquist-rate requirement. This is essential in many practical imaging systems, where sampling at the Nyquist rate may not be possible because storage is limited, the sampling rate is slow, or the measurements are extremely expensive, e.g., in magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution that comprises unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by the addition of random noise to act as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to diminish the noise and reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over existing work.
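The Robbins-Monro iteration at the heart of this approach can be sketched on a toy root-finding problem. This is purely illustrative; the paper applies the scheme to CS reconstruction, not to this scalar example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_measurement(x):
    # Noisy observation of M(x) = x - 3, whose root is x* = 3 (toy choice).
    return (x - 3.0) + 0.1 * rng.standard_normal()

# Robbins-Monro: x_{n+1} = x_n - a_n * Y_n with steps a_n = 1/n,
# which satisfy sum a_n = inf and sum a_n^2 < inf, ensuring convergence.
x = 0.0
for n in range(1, 5001):
    x -= (1.0 / n) * noisy_measurement(x)
```

The iterate converges to the root even though each measurement is corrupted by noise, which is exactly the property exploited when the "measurements" are CS observations.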
This brief presents a methodology to develop recursive filters in reproducing kernel Hilbert spaces. Unlike previous approaches that exploit the kernel trick on filtered and then mapped samples, we explicitly define the model recursivity in the Hilbert space. For that, we exploit some properties of functional analysis and recursive computation of dot products without the need for preimaging or a training dataset. We illustrate the feasibility of the methodology in the particular case of the γ-filter, which is an infinite-impulse-response filter with controlled stability and memory depth. Different algorithmic formulations emerge from the signal model. Experiments in chaotic and electroencephalographic time-series prediction, complex nonlinear system identification, and adaptive antenna array processing demonstrate the potential of the approach for scenarios where recursivity and nonlinearity have to be readily combined.
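For reference, the classical linear γ-filter that the kernel version generalizes can be sketched as follows. This is a minimal illustration of the tap recursion only; the paper's contribution is its formulation in a reproducing kernel Hilbert space.

```python
import numpy as np

def gamma_filter(x, weights, mu):
    """Classical linear gamma filter: a cascade of leaky integrators.
    Tap recursion: x_0(n) = x(n),
                   x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1).
    Output y(n) = sum_k w_k * x_k(n); mu trades resolution for memory
    depth (mean memory K/mu), and 0 < mu < 2 keeps the filter stable.
    """
    K = len(weights)
    taps = np.zeros(K)
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        # Update deepest taps first so each reads the previous-step value.
        for k in range(K - 1, 0, -1):
            taps[k] = (1 - mu) * taps[k] + mu * taps[k - 1]
        taps[0] = xn
        y[n] = weights @ taps
    return y
```

With mu = 1 the recursion collapses to an ordinary FIR tapped delay line, which is a handy sanity check: tap k then holds x(n-k) exactly.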
Misalignment angle estimation for a strapdown inertial navigation system (INS) using global positioning system (GPS) data is strongly affected by measurement noise, especially noise with time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional minimum mean square error (MSE) criterion-based adaptive filtering algorithms, the MCC-based algorithm is more robust against impulsive interference. However, its major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter, as in the conventional convex combination scheme, we maximize the correntropy, which makes the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight transfer method that further improves tracking performance. Good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios with impulsive interference and abrupt changes.
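The MCC-based weight update that underlies each component filter can be sketched in a toy plant-identification run. This shows only the single-filter MCC update, not the letter's full convex-combination scheme; the plant, kernel width, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, sigma = 5000, 0.05, 1.0
w_true = np.array([0.6, -0.3, 0.1])          # unknown toy plant

x = rng.standard_normal(n)
background = 0.01 * rng.standard_normal(n)   # small Gaussian noise
impulses = (rng.random(n) < 0.01) * 10.0 * rng.standard_normal(n)
d = np.convolve(x, w_true)[:n] + background + impulses

w = np.zeros(3)
for i in range(3, n):
    u = x[i - 2:i + 1][::-1]                 # regressor [x_i, x_{i-1}, x_{i-2}]
    e = d[i] - w @ u
    # MCC update: the Gaussian kernel exp(-e^2 / 2*sigma^2) shrinks the
    # step for impulsive (large-error) samples, unlike plain LMS, which
    # would take its largest steps exactly on the outliers.
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
```

Despite 1% of the samples being hit by large impulses, the weights converge close to the true plant, which illustrates the robustness the abstract claims for the MCC criterion.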
In this paper, the use of some of the most popular adaptive filtering algorithms for the purpose of linearizing power amplifiers by the well-known digital predistortion (DPD) technique is investigated. First, an introduction to the problem of power amplifier linearization is given, followed by a discussion of the model used for this purpose. Next, a variety of adaptive algorithms are used to construct the digital predistorter function for a highly nonlinear power amplifier and their performance is comparatively analyzed. Based on the simulations presented in this paper, conclusions regarding the choice of algorithm are derived.
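As an illustration of the adaptive-DPD idea, an LMS-trained polynomial predistorter for a toy memoryless power amplifier model might look like this. The PA model, the two-term basis, and all parameters are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def pa(u):
    # Toy memoryless PA with 3rd-order compression (assumed model).
    return u - 0.2 * u**3

n, mu = 20000, 0.05
u = 0.3 * rng.standard_normal(n)            # training drive signal
y = pa(u)

# Indirect learning: LMS fits a polynomial postdistorter p(y) ~ u,
# which is then copied in front of the PA as the predistorter.
c = np.zeros(2)                              # coefficients of [y, y^3]
for i in range(n):
    phi = np.array([y[i], y[i]**3])
    e = u[i] - c @ phi
    c += mu * e * phi

x = 0.3 * rng.standard_normal(2000)          # test signal
err_no_dpd = np.mean((pa(x) - x)**2)
err_dpd = np.mean((pa(c[0] * x + c[1] * x**3) - x)**2)
```

Cascading the learned predistorter with the PA largely cancels the cubic compression, so the residual error is well below that of the unlinearized amplifier; the paper compares how different adaptive algorithms perform at exactly this coefficient-estimation step.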
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
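A minimal sketch of a GD-TLS recursion on a toy errors-in-variables problem follows. The update is derived here from the instantaneous gradient of the standard TLS cost e^2 / (1 + ||w||^2); the paper's exact formulation and normalization may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu = 20000, 0.01
w_true = np.array([0.8, -0.4])

x_clean = rng.standard_normal((n, 2))
d_clean = x_clean @ w_true
# TLS setting: both input and output are observed with (equal) noise.
x = x_clean + 0.1 * rng.standard_normal((n, 2))
d = d_clean + 0.1 * rng.standard_normal(n)

w = np.zeros(2)
for i in range(n):
    e = d[i] - w @ x[i]
    # Stochastic gradient step on the Rayleigh-quotient-like TLS cost
    # e^2 / (1 + ||w||^2); the second term corrects the bias that a
    # plain LMS update would suffer from the noisy input.
    w += mu * (e * x[i] + (e**2) * w / (1 + w @ w)) / (1 + w @ w)
```

Because the cost penalizes errors in both input and output, the recursion settles near the true coefficients even though the regressors themselves are noisy; a plain LMS run on the same data would converge to a shrunken, biased estimate.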
Currently, most electrophotographic printers use halftoning techniques to print continuous-tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone images is proposed that considers both printing distortions and halftone patterns. Based on this model, an adaptive filtering based descreening method is proposed to recover high-quality contone images from the scanned images. An image-redundancy-based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel, whose parameters are automatically adjusted using the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges to recover a high-quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high-quality contone images from the scanned images. Compared with the state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.
This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of exponentially decaying dc offset. The discrete Fourier transform cannot obtain an accurate phasor of the fundamental frequency component in digital protective relays under dynamic system fault conditions, because the characteristic of the exponentially decaying dc offset is not consistent. EMD is a fully data-driven, not model-based, adaptive filtering procedure for extracting signal components. However, the original EMD technique has high computational complexity and requires a long data series. In this paper, a short-data-series-based EMD filtering procedure is proposed, and an optimum Hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and convergence speed, and is well suited for relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance using computer-simulated signals, PSCAD/EMTDC-generated signals, and real power system fault signals.
Recently, there has been a pronounced increase of interest in the field of renewable energy. In this area, power inverters are crucial building blocks among energy converters, since they change direct current (DC) to alternating current (AC). Grid-connected power inverters should operate in synchronism with the grid voltage. In this paper, the structure of a power system based on adaptive filtering is described. The main purpose of the adaptive filter is to adapt the output signal of the inverter to the corresponding load and/or grid signal. Adaptive filtering decreases the response time and increases the quality of the power delivered to the load or grid. A comparative analysis of power system operation with and without adaptive filtering is given. In addition, the impact of variable load impedance on the quality of delivered power is considered. Results relating to the total harmonic distortion (THD) factor are obtained with Matlab/Simulink software.