Bibliography
Pattern lock is widely used for authentication to protect user privacy on mobile devices (e.g., smartphones and tablets). Several attacks have been constructed to crack the lock. However, these approaches require the attacker either to be physically close to the target device or to manipulate the network facilities (e.g., Wi-Fi hotspots) used by the victim. The effectiveness of such attacks is therefore highly sensitive to the environment in which victims use their mobile devices, and the attacks do not scale, since they cannot easily infer the patterns of a large number of users. Motivated by the observation that fingertip motion on a device's screen can be captured by analyzing the surrounding acoustic signals, we propose PatternListener, a novel acoustic attack that cracks pattern lock by analyzing imperceptible acoustic signals reflected by the fingertip. PatternListener leverages the speakers and microphones of the victim's own device to play imperceptible audio and record the signals reflected from the fingertip. In particular, it infers each unlock pattern by analyzing the individual lines, i.e., the fingertip trajectories that compose the pattern. We propose several algorithms to construct signal segments for each line and to infer candidates for each line from those segments. Finally, we build a tree that maps all line candidates onto grid patterns and thereby obtain candidates for the entire unlock pattern. We implement a PatternListener prototype on off-the-shelf smartphones and thoroughly evaluate it using 130 unique patterns. Experimental results demonstrate that PatternListener successfully cracks over 90% of the patterns within five attempts.
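To make the final tree-mapping step concrete, below is a minimal Python sketch of how per-line candidates could be enumerated into full pattern candidates on the 3x3 grid. The function name `enumerate_patterns` and the displacement-set input format are illustrative assumptions, not the paper's implementation: PatternListener derives line candidates from reflected acoustic signals, and Android's rule about passing over intermediate nodes is omitted here for brevity.

```python
# A minimal sketch, assuming each line candidate has been reduced to a
# relative fingertip displacement (dx, dy) in grid units; this input
# format is hypothetical, standing in for the acoustic analysis stage.
COORDS = {n: (n % 3, n // 3) for n in range(9)}  # node -> (col, row)

def enumerate_patterns(segment_candidates):
    """segment_candidates: one set of (dx, dy) candidates per line of
    the pattern. Returns every grid path consistent with some choice
    of candidate for every segment (a tree rooted at each start node)."""
    results = []

    def dfs(node, depth, path):
        if depth == len(segment_candidates):
            results.append(tuple(path))
            return
        x, y = COORDS[node]
        for dx, dy in segment_candidates[depth]:
            if 0 <= x + dx < 3 and 0 <= y + dy < 3:
                nxt = (y + dy) * 3 + (x + dx)
                if nxt not in path:  # each grid node may be used once
                    dfs(nxt, depth + 1, path + [nxt])

    for start in range(9):  # branch the tree from every possible start
        dfs(start, 0, [start])
    return results

# Two lines, each with two acoustic candidates -> all consistent patterns.
print(enumerate_patterns([{(1, 0), (1, 1)}, {(0, 1), (-1, 1)}]))
```

Each root-to-leaf path in such a tree is one candidate pattern, which an attacker could then try in order of likelihood within the allowed number of attempts.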
To prevent leakage of user privacy, more and more mobile devices employ biometric authentication, such as fingerprint, face, and voiceprint recognition, to strengthen privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions use liveness verification to combat such attacks, existing approaches are sensitive to the ambient environment, e.g., lighting conditions and surrounding audible noise. To this end, we explore liveness verification for user authentication based on users' lip movements, which are robust to noisy environments. In this paper, we propose LipPass, a lip-reading-based user authentication system that extracts unique behavioral characteristics of users' speaking lips using the built-in audio devices of smartphones. We first investigate the Doppler profiles of acoustic signals caused by users' speaking lips and find that lip movement patterns are unique to each individual. To characterize the lip movements, we propose a deep learning-based method to extract effective features from the Doppler profiles, and we employ Support Vector Machines (SVMs) and Support Vector Domain Description (SVDD) to construct binary classifiers for user identification and spoofer detectors for spoofer detection, respectively. We then develop a binary tree-based authentication approach that combines these binary classifiers and spoofer detectors to accurately identify each registered user. Extensive experiments involving 48 volunteers in four real environments show that LipPass achieves 90.21% accuracy in user identification and 93.1% accuracy in spoofer detection.
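As an illustration of the classifier stage, here is a minimal sketch assuming scikit-learn: a binary tree of SVM classifiers routes a Doppler-profile feature vector to one registered user, and a per-user one-class model (scikit-learn's OneClassSVM, a stand-in for the paper's SVDD detectors) rejects spoofers. The names `build_tree` and `identify`, the tree-splitting rule, and the toy features are all hypothetical; the paper's deep learning-based feature extraction from raw audio is out of scope here.

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM

class Node:
    def __init__(self, users):
        self.users = users
        self.clf = None          # SVC splitting users into two halves
        self.left = self.right = None

def build_tree(users, X, y):
    """Recursively split the registered users with binary SVMs."""
    node = Node(users)
    if len(users) == 1:
        return node
    mid = len(users) // 2
    left_u, right_u = users[:mid], users[mid:]
    mask = np.isin(y, users)
    node.clf = SVC(kernel="rbf").fit(X[mask], np.isin(y[mask], right_u))
    node.left, node.right = build_tree(left_u, X, y), build_tree(right_u, X, y)
    return node

def identify(node, x, detectors):
    """Route x down the tree, then apply the per-user spoofer detector."""
    while node.clf is not None:
        node = node.right if node.clf.predict([x])[0] else node.left
    user = node.users[0]
    # OneClassSVM returns 1 for inliers; anything else is treated as a spoofer.
    return user if detectors[user].predict([x])[0] == 1 else None

# Toy data: 3 users, 32-dim "Doppler" feature vectors (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 32)) + np.repeat(np.arange(3), 30)[:, None]
y = np.repeat(np.arange(3), 30)
tree = build_tree([0, 1, 2], X, y)
detectors = {u: OneClassSVM(nu=0.1).fit(X[y == u]) for u in range(3)}
print(identify(tree, X[0], detectors))  # -> 0 (or None if rejected)
```

The tree structure keeps identification to O(log n) binary decisions per sample, while the per-user one-class models catch inputs that fall outside every registered user's profile.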