Biblio

Filters: Keyword is otoacoustic emissions
Liu, Y., Hatzinakos, D.  2014.  Human acoustic fingerprints: A novel biometric modality for mobile security. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3784-3788.

The demand for more robust protection against unauthorized use of mobile devices has been growing rapidly. This paper presents a novel biometric modality, transient evoked otoacoustic emission (TEOAE), for mobile security. Prior work has investigated TEOAE for biometrics in a setting where an individual is identified against a pre-enrolled identity gallery. However, this limits applicability to the mobile environment, where most attacks come from impostors unknown to the system beforehand. We therefore employ an unsupervised learning approach based on an autoencoder neural network to tackle this blind recognition problem: the model is trained on a generic dataset and then used to verify an individual drawn from a random population. We also introduce a framework for a practical mobile biometric system. Experiments show the merits of the proposed method, and system performance is further evaluated by cross-validation, achieving an average equal error rate (EER) of 2.41%.
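The verification scheme the abstract describes (unsupervised training on a generic dataset, then verification of users never seen during training) can be illustrated with a short sketch. The following Python example is hypothetical, not the authors' implementation: the signal length, layer sizes, cosine-similarity decision rule, and acceptance threshold are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): train an
# autoencoder on *generic* TEOAE recordings, then verify a claimed identity
# by comparing latent codes of enrollment and probe signals.
import torch
import torch.nn as nn

SIG_LEN = 512  # assumed number of samples per TEOAE recording

class TEOAEAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(SIG_LEN, 128), nn.Tanh(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.Tanh(),
            nn.Linear(128, SIG_LEN),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train(model, generic_signals, epochs: int = 50, lr: float = 1e-3):
    """Unsupervised training on a generic dataset (no identity labels)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(generic_signals)
        loss = loss_fn(recon, generic_signals)
        opt.zero_grad()
        loss.backward()
        opt.step()

@torch.no_grad()
def verify(model, enrolled, probe, threshold: float = 0.9) -> bool:
    """Accept the identity claim if latent codes are sufficiently similar."""
    _, z_enrolled = model(enrolled)
    _, z_probe = model(probe)
    sim = torch.cosine_similarity(z_enrolled, z_probe, dim=-1)
    return bool(sim.item() >= threshold)

if __name__ == "__main__":
    torch.manual_seed(0)
    generic = torch.randn(200, SIG_LEN)  # stand-in for generic TEOAE data
    model = TEOAEAutoencoder()
    train(model, generic)
    enrolled = torch.randn(1, SIG_LEN)
    probe = enrolled + 0.01 * torch.randn(1, SIG_LEN)  # same "subject"
    print(verify(model, enrolled, probe))
```

In this sketch the autoencoder only learns a generic TEOAE representation; verification reduces to comparing latent codes of an enrolled template and a probe, so no impostor data and no per-user retraining are needed, matching the blind-recognition setting the abstract motivates.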

Liu, Y., Hatzinakos, D.  2014.  Earprint: Transient Evoked Otoacoustic Emission for Biometrics. IEEE Transactions on Information Forensics and Security. 9:2291-2301.

Biometrics is attracting increasing attention in privacy- and security-sensitive applications such as access control and remote financial transactions. However, advanced forgery and spoofing techniques threaten the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality, transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily observed or captured, TEOAE is naturally immune to replay and falsification attacks because it is a physiological outcome of the human auditory system. In this paper, we use wavelet analysis to derive a time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. Linear discriminant analysis (LDA) is subsequently applied to reduce intrasubject variability and capture intersubject differentiating features. With practical application in mind, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE dataset collected in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears.
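As a rough illustration of the pipeline this abstract outlines (a wavelet time-frequency representation followed by LDA), here is a hypothetical Python sketch using PyWavelets and scikit-learn on synthetic data. The Morlet wavelet, scale range, subject counts, and noise levels are assumptions for demonstration, not values from the paper.

```python
# Sketch of the described chain: |CWT| time-frequency features per TEOAE
# recording, then LDA to emphasize intersubject differences. All data
# below is synthetic; shapes and parameters are illustrative assumptions.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def teoae_features(signal: np.ndarray, scales=np.arange(1, 33)) -> np.ndarray:
    """Flattened continuous-wavelet-transform magnitudes as features."""
    coeffs, _freqs = pywt.cwt(signal, scales, "morl")  # Morlet wavelet
    return np.abs(coeffs).ravel()

# Synthetic stand-in data: each subject has a fixed "template" signal plus
# recording-to-recording (intrasubject) noise.
rng = np.random.default_rng(0)
n_subjects, n_rec, n_samples = 5, 8, 256
X, y = [], []
for subj in range(n_subjects):
    template = rng.normal(size=n_samples)
    for _ in range(n_rec):
        rec = template + 0.3 * rng.normal(size=n_samples)
        X.append(teoae_features(rec))
        y.append(subj)
X, y = np.asarray(X), np.asarray(y)

# LDA projects features onto directions that maximize between-subject
# scatter relative to within-subject scatter; its class predictions then
# serve as a simple identification-mode classifier.
lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1)
z = lda.fit_transform(X, y)
print("projected shape:", z.shape)            # (40, 4)
print("training accuracy:", lda.score(X, y))  # sanity check only
```

The same projected features could drive a verification mode by thresholding the distance between an enrolled template and a probe, and the paper's reported fusion of both ears would amount to combining scores or features from left- and right-ear recordings.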