Bibliography
The paper proposes a novel EEG-based Brain-Computer Interface (BCI) system for user authentication on personal devices. The scheme enables a human user to lock and unlock any personal device using a mentally generated password. Security verification proceeds in two stages. In the first stage, a 3 × 3 spatial matrix of flickering circles appears on the screen; its rows blink in random order, and the user mentally selects the row containing the desired circle. A P300 response is elicited when the desired row blinks. Successful row selection is followed by selection of a flickering circle within that row: gazing at a particular flickering circle generates an SSVEP brain pattern, which is decoded to identify the mentally selected circle. The user stores a mentally uttered number in the selected circle; the number, together with its spatial position, later serves as the password for the unlocking phase. During unlocking, the user wears a headphone through which the numbers zero to nine are spoken in random order. The spoken number matching the mentally uttered one elicits an auditory P300 in the subject's brain, and the user's choice of number is detected by identifying this auditory P300. A novel weight-update algorithm for a Recurrent Neural Network (RNN), based on the Extended Kalman Filter and the Particle Filter, is used to classify the brain patterns. The proposed classifier achieves best classification accuracies of 95.6%, 86.5% and 83.5% for SSVEP, visual P300 and auditory P300, respectively.
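The abstract does not spell out the EKF/PF weight-update scheme, so the following is only a minimal sketch of the general idea of particle-filter-based weight estimation for a tiny recurrent classifier. It is not the authors' algorithm; the network sizes, noise levels, and the stand-in EEG feature windows are all illustrative assumptions.

```python
# Sketch: treat RNN weights as particles, jitter them, reweight by the
# classification likelihood of each EEG epoch, and resample (generic SMC
# training, NOT the paper's exact EKF/PF hybrid).
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_IN, N_HID = 50, 8, 4          # assumed sizes for illustration
DIM = N_IN * N_HID + N_HID * N_HID + N_HID   # total weight count

def unpack(theta):
    """Split a flat parameter vector into the RNN weight matrices."""
    w_ih = theta[:N_IN * N_HID].reshape(N_IN, N_HID)
    w_hh = theta[N_IN * N_HID:N_IN * N_HID + N_HID * N_HID].reshape(N_HID, N_HID)
    w_out = theta[N_IN * N_HID + N_HID * N_HID:]
    return w_ih, w_hh, w_out

def forward(theta, x_seq):
    """Run the RNN over a feature sequence; return P(target response)."""
    w_ih, w_hh, w_out = unpack(theta)
    h = np.zeros(N_HID)
    for x in x_seq:
        h = np.tanh(x @ w_ih + h @ w_hh)
    return 1.0 / (1.0 + np.exp(-(h @ w_out)))        # sigmoid output

# Particles are candidate weight vectors; importance weights start uniform.
particles = rng.normal(0.0, 0.1, size=(N_PARTICLES, DIM))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def pf_update(x_seq, y):
    """One sequential update: jitter, reweight by likelihood, resample."""
    global particles, weights
    particles += rng.normal(0.0, 0.01, size=particles.shape)  # process noise
    probs = np.array([forward(p, x_seq) for p in particles])
    lik = np.where(y == 1, probs, 1.0 - probs) + 1e-9         # Bernoulli
    weights *= lik
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:          # low ESS
        idx = rng.choice(N_PARTICLES, N_PARTICLES, p=weights)
        particles = particles[idx]
        weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

# Toy usage: one simulated target epoch (sequence of 10 feature vectors).
pf_update(rng.normal(size=(10, N_IN)), y=1)
estimate = weights @ particles   # posterior-mean weight vector
```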
Visual tracking methods based on the particle filter framework frequently use the state-space information of the target object to compute the observation model. However, this often yields poor estimates under unexpected motion, cluttered backgrounds, or illumination changes, because the model explores the state space without any additional information about the current state. To avoid tracking failure, this paper addresses particle-filter-based visual tracking in which the target appearance model is represented through an adaptive combination of a color histogram and a space-based appearance model, together with velocity parameters; the appearance model is then estimated using particles whose weights are incrementally updated for dynamic adaptation of the cue parametrization.
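To make the tracking loop concrete, here is a minimal sketch of a histogram-cue particle filter with velocity in the state, assuming grayscale frames as numpy arrays. It covers only the histogram cue, not the paper's full adaptive multi-cue combination; patch size, noise levels, and bin count are illustrative assumptions.

```python
# Sketch: constant-velocity particle filter weighted by histogram similarity.
import numpy as np

rng = np.random.default_rng(1)
N_P, BINS, HALF = 200, 16, 15          # particles, histogram bins, half patch

def patch_hist(frame, cx, cy):
    """Normalized intensity histogram of the patch centered at (cx, cy)."""
    h, w = frame.shape
    x0, x1 = max(0, cx - HALF), min(w, cx + HALF)
    y0, y1 = max(0, cy - HALF), min(h, cy + HALF)
    hist, _ = np.histogram(frame[y0:y1, x0:x1], bins=BINS, range=(0, 256))
    return hist / max(hist.sum(), 1)

def track(frames, init_xy):
    """Track a target through frames, starting from init_xy = (x, y)."""
    ref = patch_hist(frames[0], *init_xy)          # reference appearance
    s = np.tile([*init_xy, 0.0, 0.0], (N_P, 1))    # state: [x, y, vx, vy]
    out = [init_xy]
    for frame in frames[1:]:
        s[:, :2] += s[:, 2:]                       # constant-velocity step
        s[:, :2] += rng.normal(0, 4, (N_P, 2))     # position noise
        s[:, 2:] += rng.normal(0, 1, (N_P, 2))     # velocity noise
        w = np.empty(N_P)
        for i, (x, y, _, _) in enumerate(s):
            cand = patch_hist(frame, int(x), int(y))
            bc = np.sum(np.sqrt(ref * cand))       # Bhattacharyya coefficient
            w[i] = np.exp(-20.0 * (1.0 - bc))      # likelihood from distance
        w /= w.sum()
        est = w @ s                                # weighted mean state
        out.append((est[0], est[1]))
        s = s[rng.choice(N_P, N_P, p=w)]           # resample
    return out
```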
A robust appearance model is usually required in visual tracking to handle pose variation, illumination variation, occlusion, and many other interferences occurring in video. So far, a number of tracking algorithms use image samples from previous frames to update appearance models. That approach has two main limitations: 1) at the beginning of tracking, there is not a sufficient amount of data for online updating, because these adaptive models are data-dependent; and 2) in many challenging situations, robustly updating the appearance models is difficult, which often results in drift. In this paper, we propose a tracking algorithm based on compressive sensing theory and the particle filter framework. Features are extracted by random projection with a data-independent basis. A particle filter is employed to obtain a more accurate estimate of the target location and to make full use of the updated classifier. The robustness and effectiveness of our tracker are demonstrated in several experiments.
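A minimal sketch of the data-independent random-projection feature step follows, assuming flattened image patches as inputs. The sparse Achlioptas-style matrix below is one common compressive-sensing choice, not necessarily the authors' exact construction, and the patch and feature dimensions are illustrative.

```python
# Sketch: fixed sparse random projection as a data-independent feature basis.
import numpy as np

rng = np.random.default_rng(2)
D, K = 32 * 32, 50                     # patch dimension, compressed dimension

def sparse_projection(d, k, s=3.0):
    """Sparse random matrix: entries are +/-sqrt(s) w.p. 1/(2s) each, else 0."""
    r = rng.random((k, d))
    m = np.zeros((k, d))
    m[r < 1 / (2 * s)] = np.sqrt(s)
    m[r > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return m

R = sparse_projection(D, K)            # fixed once for the whole sequence

def compress(patch):
    """Project a flattened patch into the low-dimensional feature space."""
    return R @ patch.ravel()

# Toy usage: features for a candidate patch drawn by the particle filter.
v = compress(rng.random((32, 32)))     # v has K entries; feed to the classifier
```

Because R is generated once and never depends on the observed frames, the same basis serves every frame, which is what removes the data-dependence limitation discussed above.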