Earprint: Transient Evoked Otoacoustic Emission for Biometrics
| Field | Value |
| --- | --- |
| Title | Earprint: Transient Evoked Otoacoustic Emission for Biometrics |
| Publication Type | Journal Article |
| Year of Publication | 2014 |
| Authors | Liu, Yuxi; Hatzinakos, D. |
| Journal | IEEE Transactions on Information Forensics and Security |
| Volume | 9 |
| Pagination | 2291-2301 |
| Date Published | Dec |
| ISSN | 1556-6013 |
| Keywords | Access Control, acoustic response, advance forgery methods, advanced forgery techniques, advanced spoofing techniques, Auditory system, authorisation, biometric fusion, biometrics, biometrics (access control), click stimulus, cochlea, data privacy, Data security, earprint, falsification attacks, feature extraction, financial transaction, human auditory system, information fusion, learning (artificial intelligence), Linear discriminant analysis, machine learning technique linear discriminant analysis, nonstationary signal time-frequency representation, otoacoustic emissions, replay attacks, Robust Biometric Modality, sensor fusion, signal representation, TEOAE, Time-frequency Analysis, transient evoked otoacoustic emission, wavelet analysis |
| Abstract | Biometrics is attracting increasing attention in privacy- and security-sensitive applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques are threatening the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality, the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE, as a physiological outcome of the human auditory system, is naturally immune to replay and falsification attacks. In this paper, we resort to wavelet analysis to derive the time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently utilized to reduce intrasubject variability and capture intersubject differentiation features. Considering practical applications, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set collected in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears. |
| URL | https://ieeexplore.ieee.org/document/6914592 |
| DOI | 10.1109/TIFS.2014.2361205 |
| Citation Key | 6914592 |
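
The abstract describes a pipeline of a wavelet-based time-frequency representation followed by linear discriminant analysis. The snippet below is a minimal illustrative sketch of such a pipeline, not the authors' implementation: the sampling rate, wavelet choice, CWT scales, LDA dimensionality, and the function names (`teoae_tf_features`, `fit_lda`) are assumptions made for this example.

```python
# Illustrative sketch (not the paper's code): wavelet time-frequency features
# of TEOAE recordings, projected with LDA to suppress intra-subject variability.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 16000                 # assumed sampling rate of the TEOAE recordings (Hz)
SCALES = np.arange(1, 65)  # assumed CWT scales

def teoae_tf_features(signal):
    """Continuous wavelet transform of one TEOAE recording, flattened
    into a time-frequency feature vector (Morlet wavelet is an assumption)."""
    coeffs, _ = pywt.cwt(signal, SCALES, "morl", sampling_period=1.0 / FS)
    return np.abs(coeffs).ravel()

def fit_lda(recordings, subject_ids, n_components=20):
    """Learn discriminant directions that emphasize inter-subject differences.
    Assumes equal-length recordings and n_components < number of subjects."""
    X = np.vstack([teoae_tf_features(r) for r in recordings])
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda, lda.fit_transform(X, subject_ids)
```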
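
The abstract also reports verification and identification modes, with performance further improved by fusing information from both ears. The sketch below shows one plausible score-level fusion rule for verification only; the cosine-similarity matcher, equal fusion weights, decision threshold, and function names are assumptions and may differ from the paper's actual method.

```python
# Illustrative sketch: score-level fusion of left- and right-ear TEOAE
# feature vectors (e.g., LDA projections) for identity verification.
import numpy as np

def cosine_score(probe, template):
    """Similarity between a probe feature vector and an enrolled template."""
    return float(np.dot(probe, template) /
                 (np.linalg.norm(probe) * np.linalg.norm(template) + 1e-12))

def verify(probe_left, probe_right, tmpl_left, tmpl_right, threshold=0.8):
    """Accept the claimed identity when the equally weighted fused score of
    both ears exceeds a decision threshold (threshold value is illustrative)."""
    fused = 0.5 * cosine_score(probe_left, tmpl_left) + \
            0.5 * cosine_score(probe_right, tmpl_right)
    return fused >= threshold, fused
```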