Biblio
Pattern lock has been widely used for authentication to protect user privacy on mobile devices (e.g., smartphones and tablets). Several attacks have been constructed to crack the lock; however, these approaches require the attacker to be either physically close to the target device or able to manipulate the network facilities (e.g., Wi-Fi hotspots) used by the victim. Their effectiveness is therefore highly sensitive to the environment in which the device is used, and they do not scale, since they cannot easily infer the patterns of a large number of users. Motivated by the observation that fingertip motions on the screen of a mobile device can be captured by analyzing the acoustic signals reflected around it, we propose PatternListener, a novel acoustic attack that cracks pattern lock by emitting and analyzing imperceptible acoustic signals reflected by the fingertip. It uses the speakers and microphones of the victim's own device to play imperceptible audio and record the signals reflected from the fingertip. In particular, it infers each unlock pattern by analyzing the individual lines, i.e., the fingertip trajectories that compose the pattern. We propose several algorithms to construct signal segments for each line and to infer possible candidates for each individual line from those segments. Finally, we build a tree that maps all line candidates onto grid patterns and thereby obtain candidates for the entire unlock pattern. We implement a PatternListener prototype on off-the-shelf smartphones and thoroughly evaluate it using 130 unique patterns. Experimental results demonstrate that PatternListener successfully cracks over 90% of the patterns within five attempts.
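The following is a minimal sketch (not the authors' implementation) of the final step described above: expanding per-line candidates into full-pattern candidates on a simplified 3x3 grid via depth-first traversal of a candidate tree. The `line_candidates` input, here hypothetical (direction, length) guesses, would in practice come from the acoustic signal analysis, and the sketch ignores pattern-lock rules such as automatically including crossed-over points.

```python
# Sketch: combine per-line candidates into full-pattern candidates on a 3x3 grid.
def apply_line(start, direction, length):
    """Return the grid point reached from `start` by a line with the given
    (dr, dc) unit direction and length, or None if it leaves the 3x3 grid."""
    r, c = divmod(start, 3)
    r2, c2 = r + direction[0] * length, c + direction[1] * length
    if 0 <= r2 < 3 and 0 <= c2 < 3:
        return r2 * 3 + c2
    return None

def enumerate_patterns(start_points, line_candidates):
    """Depth-first expansion of the candidate tree: every path that stays on
    the grid and never revisits a point becomes a full-pattern candidate."""
    patterns = []

    def dfs(path, remaining):
        if not remaining:
            patterns.append(tuple(path))
            return
        for direction, length in remaining[0]:
            nxt = apply_line(path[-1], direction, length)
            if nxt is not None and nxt not in path:
                dfs(path + [nxt], remaining[1:])

    for s in start_points:
        dfs([s], line_candidates)
    return patterns

# Hypothetical example: two lines, each with two candidate (direction, length) guesses.
candidates = [
    [((0, 1), 2), ((1, 1), 1)],   # candidates for the first line
    [((1, 0), 2), ((1, -1), 2)],  # candidates for the second line
]
print(enumerate_patterns(range(9), candidates))
```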
The silicon Physical Unclonable Function (PUF) is arguably the most promising hardware security primitive. In particular, PUFs capable of generating a large number of challenge-response pairs (CRPs) can be used in many security applications. However, these CRPs can also be exploited by machine learning attacks to model the PUF and predict its responses. In this paper, we first show that, based on data in the public domain, two popular CRP-generating PUFs (the arbiter PUF and the reconfigurable ring oscillator (RO) PUF) can be broken by a simple logistic regression (LR) attack with about 99% accuracy. We then propose a feedback structure that XORs the PUF response with the challenge and challenges the PUF again to generate the final response. Results show that this reduces LR's learning accuracy to the low 50% range, but an artificial neural network (ANN) attack still achieves an 80% success rate. We therefore propose a configurable ring-oscillator-based dual-mode PUF that works with both an odd number of inverters (like the reconfigurable RO PUF) and an even number of inverters (like a bistable ring (BR) PUF). Since there are currently no known attacks that can model both RO PUFs and BR PUFs, the dual-mode PUF resists modeling attacks as long as its working mode is hidden from attackers, which we achieve with two practical methods. Finally, we implement the proposed dual-mode PUF on Nexys 4 FPGA boards and collect real measurements to show that it reduces the learning accuracy of LR and ANN to the mid-50% and low 60% ranges, respectively. In addition, it meets the PUF requirements of uniqueness, randomness, and robustness.
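The sketch below illustrates why a simple LR attack can model an arbiter PUF, using the standard additive-delay model rather than the authors' code or data: simulated responses are sign(w · phi(c)), where phi is the usual parity feature transform, and logistic regression is trained on simulated CRPs. The stage count, CRP count, and random delay vector are illustrative assumptions.

```python
# Sketch: logistic-regression modeling of a simulated (linear-delay-model) arbiter PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000

def parity_features(challenges):
    """Map 0/1 challenge bits to the +/-1 parity feature vectors used in
    linear arbiter-PUF models (feature i = product of transformed bits i..n-1)."""
    phi = 1 - 2 * challenges                      # 0/1 -> +1/-1
    return np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]

# Simulated PUF: a random delay-difference vector w defines the responses.
w = rng.normal(size=n_stages)
C = rng.integers(0, 2, size=(n_crps, n_stages))
X = parity_features(C)
y = (X @ w > 0).astype(int)

# Train on 15,000 CRPs, test on the remaining 5,000; the problem is linearly
# separable in the parity features, so accuracy approaches 100%.
clf = LogisticRegression(max_iter=1000).fit(X[:15000], y[:15000])
print("model accuracy:", clf.score(X[15000:], y[15000:]))
```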
The abundant multimedia data generated in daily life has enabled a variety of important and useful real-world applications such as object detection and recognition. Alongside these applications, many popular feature descriptors have been developed, e.g., SIFT, SURF, and HOG. Processing massive multimedia data locally, however, is a storage- and computation-intensive task, especially for resource-constrained clients. In this work, we explore how to securely outsource the well-known feature extraction algorithm, Histogram of Oriented Gradients (HOG), to untrusted cloud servers without revealing the data owner's private information. For the first time, we investigate this secure outsourcing problem under two different models and accordingly propose two novel privacy-preserving HOG outsourcing protocols: we efficiently encrypt image data with somewhat homomorphic encryption (SHE) integrated with single-instruction multiple-data (SIMD) batching, design a new batched secure comparison protocol, and carefully redesign every step of HOG to adapt it to the ciphertext domain. Explicit security and effectiveness analyses show that our protocols are practically secure and closely approximate the performance of the original HOG executed in the plaintext domain. Extensive experimental evaluations further demonstrate that our solutions achieve high efficiency and perform comparably to the original HOG when applied to human detection.
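For orientation, here is a minimal plaintext sketch (not the protocol itself) of the HOG steps the abstract says must be redesigned for the ciphertext domain: gradient computation, orientation binning, and per-cell histograms. In the outsourced setting, each of these arithmetic and comparison steps would instead operate on SHE ciphertexts, with SIMD packing batching many pixel values per ciphertext; the cell size and bin count below are conventional defaults, not protocol parameters.

```python
# Sketch: plaintext HOG cell histograms (the steps recast into the ciphertext domain).
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Unsigned-gradient HOG cell histograms for a grayscale image."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]       # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation in [0, 180)
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)

    h_cells, w_cells = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((h_cells, w_cells, bins))
    for i in range(h_cells):
        for j in range(w_cells):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()  # magnitude-weighted voting
    return hist

# Example: a random 64x64 image yields an 8x8 grid of 9-bin cell histograms.
print(hog_cell_histograms(np.random.randint(0, 256, (64, 64))).shape)
```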