Biblio

Filters: Keyword is electroencephalogram signals
2020-08-13
Sadeghi, Koosha, Banerjee, Ayan, Gupta, Sandeep K. S.  2019.  An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :111-118.
Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can cause the applications to fail. For each dataset, the parameters of an ML algorithm should be tuned separately to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time-consuming and does not guarantee an optimal result. To this end, some research suggests an analytical approach to tune the ML parameters for maximum accuracy. However, none of these works consider the ML performance under attack in their tuning process. This paper proposes an analytical framework for tuning ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function, which takes into account the test results of both ML accuracy and its security against attacks. To validate the framework, an AI application is implemented to recognize whether a subject's eyes are open or closed, by applying the k-Nearest Neighbors (kNN) algorithm to the subject's Electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application. An exhaustive search approach is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, which leads to 83.75% classification accuracy and reduces the attack success rate to 5.21%.
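
The paper's objective function is not reproduced in the abstract; the sketch below only illustrates the general shape of the exhaustive search over (k, distance metric) with a security-aware score. The accuracy-minus-attack-success-rate objective, the Gaussian-noise perturbation, and the helper names (attack_success_rate, tune_knn) are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of a security-aware kNN parameter search.
# Assumptions (not from the paper): objective = accuracy - attack success rate,
# and the perturbation attack is modeled as additive Gaussian noise on test inputs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def attack_success_rate(model, X_test, y_test, noise_scale=0.1):
    """Input-data perturbation attack: add Gaussian noise to the test samples
    and count how many previously correct predictions get flipped."""
    clean_pred = model.predict(X_test)
    correct = clean_pred == y_test
    X_perturbed = X_test + np.random.normal(0.0, noise_scale, X_test.shape)
    attacked_pred = model.predict(X_perturbed)
    flipped = correct & (attacked_pred != y_test)
    return flipped.sum() / max(correct.sum(), 1)

def tune_knn(X, y, ks=range(1, 60, 2), metrics=("euclidean", "cosine", "manhattan")):
    """Exhaustive search over (k, metric); each configuration is scored by the
    assumed combined objective: accuracy minus attack success rate."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    best = None
    for k in ks:
        for metric in metrics:
            model = KNeighborsClassifier(n_neighbors=k, metric=metric,
                                         algorithm="brute").fit(X_train, y_train)
            acc = model.score(X_test, y_test)
            asr = attack_success_rate(model, X_test, y_test)
            objective = acc - asr
            if best is None or objective > best[0]:
                best = (objective, k, metric, acc, asr)
    return best  # (objective, k, metric, accuracy, attack success rate)
```

With the eye-state EEG features loaded as a matrix X and labels y, tune_knn(X, y) would return whichever (k, metric) pair maximizes the assumed trade-off; the paper's actual objective and attack model may differ.
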
2019-11-26
Shukla, Anjali, Rakshit, Arnab, Konar, Amit, Ghosh, Lidia, Nagar, Atulya K.  2018.  Decoding of Mind-Generated Pattern Locks for Security Checking Using Type-2 Fuzzy Classifier. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :1976-1981.

Brain Computer Interface (BCI) aims at providing a better quality of life to people suffering from neuromuscular disability. This paper establishes a BCI paradigm to provide a biometric security option, used for locking and unlocking personal computers or mobile phones. Although it is primarily meant for people with neurological disorders, its application can safely be extended to normal people. The proposed scheme decodes the electroencephalogram signals liberated by the brain of the subjects when they are engaged in selecting a sequence of dots in a (6×6) 2-dimensional array, representing a pattern lock. The subject, while selecting the right dot in a row, would yield a P300 signal, which is decoded later by the brain-computer interface system to understand the subject's intention. If the right dots in all 6 rows are correctly selected, the subject yields P300 signals six times, which, on being decoded by the BCI system, allows the subject to access the system. Because of intra-subjective variation in the amplitude and wave-shape of the P300 signal, a type-2 fuzzy classifier has been employed to classify the presence/absence of the P300 signal in the desired window. A comparison of the performance of the proposed classifier with others is also included. The functionality of the proposed system has been validated using the training instances generated for 30 subjects. Experimental results confirm that the classification accuracy of the present scheme is above 90% irrespective of the subject.
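
The row-by-row decoding flow can be pictured roughly as below. This is a sketch only: the interval type-2 fuzzy classifier itself is not reproduced, and the score_p300 interface, the epoch array shape, and the function names are hypothetical placeholders rather than the authors' implementation.

```python
# Illustrative sketch of P300-based pattern-lock decoding on a 6x6 dot grid.
# Assumption: a trained P300 classifier exposing a hypothetical score_p300(epoch)
# method that returns a higher score when a P300 response is present.
import numpy as np

GRID_ROWS, GRID_COLS = 6, 6

def decode_pattern(eeg_epochs, p300_classifier):
    """eeg_epochs: array of shape (GRID_ROWS, GRID_COLS, n_channels, n_samples),
    one EEG epoch per flashed dot. For each row, the column whose epoch the
    classifier scores highest for a P300 is taken as the selected dot."""
    chosen = []
    for row in range(GRID_ROWS):
        scores = [p300_classifier.score_p300(eeg_epochs[row, col])
                  for col in range(GRID_COLS)]
        chosen.append(int(np.argmax(scores)))
    return chosen

def unlock(eeg_epochs, p300_classifier, stored_pattern):
    """Grant access only if the decoded dot in every one of the 6 rows
    matches the stored pattern lock."""
    return decode_pattern(eeg_epochs, p300_classifier) == list(stored_pattern)
```

In this sketch the classifier stands in for the paper's type-2 fuzzy P300 detector; any binary P300-versus-non-P300 model with a scoring interface could be substituted without changing the unlock logic.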