LipPass: Lip Reading-based User Authentication on Smartphones Leveraging Acoustic Signals
Title | LipPass: Lip Reading-based User Authentication on Smartphones Leveraging Acoustic Signals |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Lu, L., Yu, J., Chen, Y., Liu, H., Zhu, Y., Liu, Y., Li, M. |
Conference Name | IEEE INFOCOM 2018 - IEEE Conference on Computer Communications |
Date Published | April 2018 |
Publisher | IEEE |
ISBN Number | 978-1-5386-4128-6 |
Keywords | Acoustic Fingerprints, Acoustic signal processing, acoustic signals, Acoustics, audio signal processing, authentication, authorisation, binary classifiers, binary tree-based authentication, biometric-based authentication, built-in audio devices, composability, data protection, deep learning-based method, Doppler effect, Doppler profiles, feature extraction, Human Behavior, learning (artificial intelligence), lip movement patterns, lip reading-based user authentication system, LipPass, Lips, message authentication, mobile computing, pattern classification, privacy protection, pubcrawl, replay attacks, Resiliency, smart phones, smartphones, spoofer detectors, support vector machine, Support vector machines |
Abstract | To protect users' privacy, more and more mobile devices employ biometric-based authentication approaches, such as fingerprint, face recognition, and voiceprint authentication, to enhance privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions utilize liveness verification to combat such attacks, existing approaches are sensitive to ambient environments, such as ambient light and surrounding audible noise. To this end, we explore liveness verification for user authentication leveraging users' lip movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users' speaking lips for user authentication, leveraging the built-in audio devices on smartphones. We first investigate Doppler profiles of acoustic signals caused by users' speaking lips and find that different individuals exhibit unique lip movement patterns. To characterize the lip movements, we propose a deep learning-based method to extract efficient features from Doppler profiles, and employ Support Vector Machine and Support Vector Domain Description to construct binary classifiers and spoofer detectors for user identification and spoofer detection, respectively. Afterwards, we develop a binary tree-based authentication approach to accurately identify each individual, leveraging these binary classifiers and spoofer detectors with respect to registered users. Through extensive experiments involving 48 volunteers in four real environments, LipPass achieves 90.21% accuracy in user identification and 93.1% accuracy in spoofer detection. |
URL | https://ieeexplore.ieee.org/document/8486283 |
DOI | 10.1109/INFOCOM.2018.8486283 |
Citation Key | lu_lippass:_2018 |
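
The abstract outlines a concrete sensing-and-classification pipeline: the phone's built-in speaker emits an acoustic signal, the microphone captures its reflection, lip movements leave Doppler shifts around the carrier, features are extracted from the resulting Doppler profile, and per-user binary SVM classifiers plus SVDD-based spoofer detectors make the decision. The following is a minimal, hypothetical Python sketch of that flow, not the authors' implementation: the 20 kHz carrier, 48 kHz sampling rate, 200 Hz analysis band, the flattened-profile features (standing in for the paper's deep feature extractor), and the one-class SVM (standing in for Support Vector Domain Description) are all illustrative assumptions.

```python
# Hypothetical illustration of the LipPass-style pipeline; all parameters are assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC, OneClassSVM

FS = 48_000          # assumed microphone sampling rate (Hz)
CARRIER_HZ = 20_000  # assumed near-inaudible tone emitted by the phone's speaker
BAND_HZ = 200        # assumed band around the carrier inspected for Doppler shifts

def doppler_profile(recording: np.ndarray) -> np.ndarray:
    """Energy over time in a narrow band around the carrier (freq bins x frames)."""
    freqs, _, spec = stft(recording, fs=FS, nperseg=2048, noverlap=1536)
    band = (freqs >= CARRIER_HZ - BAND_HZ) & (freqs <= CARRIER_HZ + BAND_HZ)
    profile = np.abs(spec[band, :])
    return profile / (profile.max() + 1e-12)   # normalize across recordings

def features(recording: np.ndarray, frames: int = 60) -> np.ndarray:
    """Crude fixed-length feature vector; the paper learns features with a deep model."""
    return doppler_profile(recording)[:, :frames].flatten()

def fake_recording(depth_hz: float, seconds: float = 1.5) -> np.ndarray:
    """Synthetic stand-in for a real capture: carrier with a slow frequency wobble."""
    t = np.arange(0, seconds, 1.0 / FS)
    return np.cos(2 * np.pi * (CARRIER_HZ + depth_hz * np.sin(2 * np.pi * 3 * t)) * t)

rng = np.random.default_rng(0)
user_a = np.array([features(fake_recording(20 + rng.normal(0, 2))) for _ in range(10)])
user_b = np.array([features(fake_recording(60 + rng.normal(0, 2))) for _ in range(10)])

# One node of the binary identification tree: distinguish user A from user B.
node_ab = SVC(kernel="rbf", gamma="scale")
node_ab.fit(np.vstack([user_a, user_b]), np.array([0] * 10 + [1] * 10))

# Per-user spoofer detector; a one-class SVM stands in for SVDD here.
spoof_a = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(user_a)

probe = features(fake_recording(22))
label = "A" if node_ab.predict([probe])[0] == 0 else "B"
genuine = spoof_a.predict([probe])[0] == 1     # +1 means inside user A's boundary
print(label, "genuine" if genuine else "spoofer")
```

In the paper's design, one such binary classifier sits at each node of the binary tree-based authentication approach, and a per-user spoofer detector rejects probes that match no registered user.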