Boosting the Guessing Attack Performance on Android Lock Patterns with Smudge Attacks

Title: Boosting the Guessing Attack Performance on Android Lock Patterns with Smudge Attacks
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Cha, Seunghun, Kwag, Sungsu, Kim, Hyoungshick, Huh, Jun Ho
Conference Name: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-4944-4
Keywords: guessing attack, Human Behavior, pattern lock, pattern locks, pubcrawl, resilience, Resiliency, Scalability, smudge attack
Abstract: Android allows 20 consecutive failed attempts at unlocking a device. This makes it difficult for pure guessing attacks to crack user patterns on a stolen device before it permanently locks itself. We investigate the effectiveness of combining Markov model-based guessing attacks with smudge attacks on unlocking Android devices within 20 attempts. Detected smudges are used to pre-compute all the possible segments and patterns, significantly reducing the pattern space that needs to be brute-forced. Our Markov model was trained using 70% of a real-world pattern dataset that consists of 312 patterns. We recruited 12 participants to draw the remaining 30% on a Samsung Galaxy S4, and used the smudges they left behind to analyze the performance of the combined attack. Our results show that this combined method can significantly improve the performance of pure guessing attacks, cracking 74.17% of patterns compared to just 13.33% when the Markov model-based guessing attack was performed alone--those results were collected from a naive usage scenario where the participants were merely asked to unlock a given device. Even under a more complex scenario that asked the participants to use the Facebook app for a few minutes--obscuring smudges were added as a result--our combined attack, at 31.94%, still outperformed the pure guessing attack at 13.33%. Obscuring smudges can significantly affect the performance of smudge-based attacks. Based on this finding, we recommend that a mitigation technique should be designed to help users add obscurity, e.g., by asking users to draw a second random pattern upon unlocking a device.
URL: http://doi.acm.org/10.1145/3052973.3052989
DOI: 10.1145/3052973.3052989
Citation Key: cha_boosting_2017
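
The combined attack described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the bigram Markov scoring, the toy training patterns, and the function names (train_bigram_model, ranked_guesses) are assumptions made here to show the general idea of restricting the candidate space to smudge-detected nodes and ranking the remaining patterns by a Markov model before spending the 20 allowed attempts.

```python
# Minimal sketch of a smudge-restricted, Markov-ranked guessing attack.
# Grid nodes are numbered 0-8 on the 3x3 Android pattern grid. The training
# patterns and smudge nodes below are made-up illustrative values; the real
# Android pattern rules (e.g., a segment passing over an unvisited node must
# include it) are omitted for brevity.
from collections import defaultdict
from itertools import permutations

def train_bigram_model(patterns):
    """Estimate P(next node | current node) with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for p in patterns:
        for a, b in zip(p, p[1:]):
            counts[a][b] += 1
    model = {}
    for a in range(9):
        total = sum(counts[a].values()) + 9  # add-one smoothing over 9 nodes
        model[a] = {b: (counts[a][b] + 1) / total for b in range(9)}
    return model

def score(pattern, model):
    """Product of bigram probabilities; higher means a more likely pattern."""
    prob = 1.0
    for a, b in zip(pattern, pattern[1:]):
        prob *= model[a][b]
    return prob

def ranked_guesses(smudge_nodes, model, max_guesses=20):
    """Enumerate patterns over smudge-detected nodes only, rank by Markov score."""
    nodes = sorted(smudge_nodes)
    candidates = []
    for length in range(4, len(nodes) + 1):  # Android patterns use 4-9 nodes
        candidates.extend(permutations(nodes, length))
    candidates.sort(key=lambda p: score(p, model), reverse=True)
    return candidates[:max_guesses]

if __name__ == "__main__":
    training = [(0, 1, 2, 5, 8), (0, 3, 6, 7, 8), (2, 4, 6, 7)]  # toy data
    model = train_bigram_model(training)
    for guess in ranked_guesses({0, 1, 2, 5, 8}, model):
        print(guess)
```

The sketch reflects the paper's core observation: once smudges constrain which nodes were touched, the brute-force space shrinks dramatically, and a Markov model trained on real-world patterns orders the remaining candidates so that the most human-likely ones are tried within the 20-attempt budget.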