Biblio

Filters: Author is Sharif, Mahmood
Sharif, Mahmood, Urakawa, Jumpei, Christin, Nicolas, Kubota, Ayumu, Yamada, Akira.  2018.  Predicting Impending Exposure to Malicious Content from User Behavior. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1487–1501.
Many computer-security defenses are reactive: they operate only when security incidents take place, or immediately thereafter. Recent efforts have attempted to predict security incidents before they occur, to enable defenders to proactively protect their devices and networks. These efforts have primarily focused on long-term predictions. We propose a system that enables proactive defenses at the level of a single browsing session. By observing a user's behavior, it can predict whether the user will be exposed to malicious content on the web seconds before the moment of exposure, thus opening a window of opportunity for proactive defenses. We evaluate our system using three months' worth of HTTP traffic generated by 20,645 users of a large cellular provider in 2017 and show that it can be helpful, even when only very low false-positive rates are acceptable, and despite the difficulty of making "on-the-fly" predictions. We also engage directly with the users through surveys asking them demographic and security-related questions, to evaluate the utility of self-reported data for predicting exposure to malicious content. We find that self-reported data can help forecast exposure risk over long periods of time. However, even over the long term, self-reported data is not as crucial as behavioral measurements for accurately predicting exposure.
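At its core, the setting this abstract describes can be framed as a binary classifier over per-session behavioral features whose operating point is pinned to a very low false-positive rate. The sketch below is only an illustration of that framing, not the paper's pipeline; the simulated data, feature set, model choice, and 0.5% false-positive budget are all assumptions.

```python
# Hypothetical sketch: per-session exposure classifier with a strict false-positive budget.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve

# X: placeholder per-session behavioral features (e.g., request rate, categories visited);
# y: 1 if the session ended in exposure to malicious content (simulated here).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (rng.random(5000) < 0.05).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Choose the most permissive decision threshold whose false-positive rate stays under 0.5%.
fpr, tpr, thresholds = roc_curve(y, scores)
ok = fpr <= 0.005
threshold = thresholds[ok][-1]
print(f"FPR={fpr[ok][-1]:.4f}, TPR={tpr[ok][-1]:.4f}, threshold={threshold:.3f}")

# At run time, a session is flagged as "impending exposure" only above this threshold.
flags = scores >= threshold
```

In practice the threshold would be chosen on held-out data rather than the training set; the point of the sketch is simply how a low-false-positive operating point constrains such a predictor.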
Sawaya, Yukiko, Sharif, Mahmood, Christin, Nicolas, Kubota, Ayumu, Nakarai, Akihiro, Yamada, Akira.  2017.  Self-Confidence Trumps Knowledge: A Cross-Cultural Study of Security Behavior. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :2202–2214.
Computer security tools usually provide universal solutions without taking user characteristics (e.g., origin, income level) into account. In this paper, we test the validity of using such universal security defenses, with a particular focus on culture. We apply the previously proposed Security Behavior Intentions Scale (SeBIS) to 3,500 participants from seven countries. We first translate the scale into seven languages while preserving its reliability and structural validity. We then build a regression model to study which factors affect participants' security behavior. We find that participants from different countries exhibit different behavior. For instance, participants from Asian countries, and especially Japan, tend to exhibit less secure behavior. To our surprise, we also find that actual knowledge influences user behavior much less than users' self-confidence in their computer-security knowledge. Stated differently, what people think they know affects their security behavior more than what they do know.
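The comparison the abstract draws between self-confidence and actual knowledge is the kind of question a simple regression can express. The sketch below is a hypothetical illustration, not the authors' analysis code: the variable names, country labels, and simulated data are placeholders chosen only to show the model structure.

```python
# Hypothetical sketch: regress a SeBIS-style behavior score on country,
# self-confidence, and actual knowledge, then compare the coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3500
df = pd.DataFrame({
    "country":    rng.choice(list("ABCDEFG"), size=n),  # placeholder country codes
    "confidence": rng.normal(size=n),                   # self-rated security knowledge
    "knowledge":  rng.normal(size=n),                   # score on factual security questions
})
# Simulated outcome in which confidence matters more than knowledge.
df["sebis"] = 0.5 * df["confidence"] + 0.1 * df["knowledge"] + rng.normal(size=n)

model = smf.ols("sebis ~ C(country) + confidence + knowledge", data=df).fit()
print(model.summary())  # compare the 'confidence' and 'knowledge' coefficients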
Sharif, Mahmood, Bhagavatula, Sruti, Bauer, Lujo, Reiter, Michael K..  2016.  Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1528–1540.
Machine learning is enabling myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and that allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized by printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.
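The white-box attack the abstract describes can be pictured as gradient-based optimization of a perturbation confined to an eyeglass-frame-shaped region of the image. The sketch below is a minimal, assumed illustration of that idea and omits the paper's physical-realizability constraints (printability, smoothness); `model`, `face`, `mask`, and the hyperparameters are placeholders.

```python
# Hypothetical sketch: optimize only the eyeglass-frame pixels so a face
# classifier no longer predicts the wearer's true identity (a "dodging" attack).
import torch
import torch.nn.functional as F

def dodge_with_glasses(model, face, mask, true_label, steps=300, lr=0.01):
    """Perturb only the masked (frame) pixels of `face` to evade `true_label`."""
    perturbation = torch.zeros_like(face, requires_grad=True)
    optimizer = torch.optim.Adam([perturbation], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(face + perturbation * mask, 0.0, 1.0)  # only glasses change
        logits = model(adv.unsqueeze(0))
        loss = -F.cross_entropy(logits, true_label.unsqueeze(0))  # push away from true class
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(face + perturbation.detach() * mask, 0.0, 1.0)

# Usage (all placeholders): face is a 3xHxW tensor in [0,1], mask is 1 on frame
# pixels and 0 elsewhere, model is any differentiable face classifier.
# adv_face = dodge_with_glasses(model, face, mask, torch.tensor(true_class))
```

An impersonation variant would instead minimize the loss toward a chosen target identity; the mask is what keeps the perturbation inconspicuous and physically printable as eyewear.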