Biblio

Filters: Author is Mittal, Prateek
2021-11-29
Sun, Yixin, Jee, Kangkook, Sivakorn, Suphannee, Li, Zhichun, Lumezanu, Cristian, Korts-Parn, Lauri, Wu, Zhenyu, Rhee, Junghwan, Kim, Chung Hwan, Chiang, Mung et al.  2020.  Detecting Malware Injection with Program-DNS Behavior. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :552–568.
Analyzing the DNS traffic of Internet hosts has been a successful technique to counter cyberattacks and identify connections to malicious domains. However, recent stealthy attacks hide malicious activities within seemingly legitimate connections to popular web services made by benign programs. Traditional DNS monitoring and signature-based detection techniques are ineffective against such attacks. To tackle this challenge, we present a new program-level approach that can effectively detect such stealthy attacks. Our method builds a fine-grained Program-DNS profile for each benign program that characterizes the program's “expected” DNS behavior. We find that malware-injected processes have DNS activities that significantly deviate from the Program-DNS profile of the benign program. We then develop six novel features based on the Program-DNS profile, and evaluate the features on a dataset of over 130 million DNS requests collected from a real-world enterprise and 8 million requests from malware samples executed in a sandbox environment. We compare our detection results with those of previously proposed features and demonstrate that our new features successfully detect 190 malware-injected processes that previously proposed features fail to detect. Overall, our study demonstrates that fine-grained Program-DNS profiles can provide meaningful and effective features in building detectors for attack campaigns that bypass existing detection systems.
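To make the profiling idea concrete, here is a minimal Python sketch of per-program DNS profiling with deviation flagging. It is purely illustrative: the ProgramDNSProfile class, the single set-membership check, and the deviation_threshold parameter are assumptions made for this example, not the paper's six features or its detection pipeline.

```python
# Toy sketch of per-program DNS profiling (not the paper's implementation):
# learn the set of domains a benign program normally resolves, then flag a
# process whose DNS requests fall mostly outside that learned profile.
from collections import defaultdict

class ProgramDNSProfile:
    def __init__(self, deviation_threshold=0.5):
        self.known_domains = defaultdict(set)   # program name -> domains seen while profiling
        self.deviation_threshold = deviation_threshold

    def learn(self, program, domain):
        """Record a DNS lookup observed during a known-benign run of `program`."""
        self.known_domains[program].add(domain.lower())

    def is_suspicious(self, program, observed_domains):
        """Flag a process if too large a fraction of its lookups deviate from the profile."""
        profile = self.known_domains.get(program, set())
        if not observed_domains:
            return False
        unseen = [d for d in observed_domains if d.lower() not in profile]
        return len(unseen) / len(observed_domains) > self.deviation_threshold

# Example: a profiled updater process suddenly resolving an unrelated domain.
profiles = ProgramDNSProfile()
for d in ["update.example-vendor.com", "cdn.example-vendor.com"]:
    profiles.learn("updater.exe", d)
print(profiles.is_suspicious("updater.exe",
                             ["update.example-vendor.com", "evil-c2.example.net"]))
```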
2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50–56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with the target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged within a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.
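As a toy illustration of the kind of membership inference attack being measured, the sketch below implements a simple confidence-thresholding attack on synthetic confidence scores: samples on which the model is unusually confident are guessed to be training members, which works best on models that overfit. The threshold, the beta-distributed confidences, and the function names are hypothetical; the paper's actual attacks and target models are more involved.

```python
# Minimal confidence-thresholding membership inference sketch
# (illustrative only, not the paper's exact attack).
import numpy as np

def membership_guess(confidences, threshold):
    """Guess 'member' (True) when the model's confidence on the true label exceeds the threshold."""
    return confidences >= threshold

def attack_accuracy(member_conf, nonmember_conf, threshold):
    """Balanced accuracy of the guesses over known members and non-members."""
    tpr = np.mean(membership_guess(member_conf, threshold))
    tnr = np.mean(~membership_guess(nonmember_conf, threshold))
    return (tpr + tnr) / 2

# Synthetic illustration: an overfit model is more confident on its training
# data, so the attack does better than random guessing (0.5).
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=1000)      # hypothetical confidences on training samples
nonmember_conf = rng.beta(5, 3, size=1000)   # hypothetical confidences on held-out samples
print(attack_accuracy(member_conf, nonmember_conf, threshold=0.8))
```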
2019-12-30
Ahn, Surin, Gorlatova, Maria, Naghizadeh, Parinaz, Chiang, Mung, Mittal, Prateek.  2018.  Adaptive Fog-Based Output Security for Augmented Reality. Proceedings of the 2018 Morning Workshop on Virtual Reality and Augmented Reality Network. :1–6.
Augmented reality (AR) technologies are rapidly being adopted across multiple sectors, but little work has been done to ensure the security of such systems against potentially harmful or distracting visual output produced by malicious or bug-ridden applications. Past research has proposed to incorporate manually specified policies into AR devices to constrain their visual output. However, these policies can be cumbersome to specify and implement, and may not generalize well to complex and unpredictable environmental conditions. We propose a method for generating adaptive policies to secure visual output in AR systems using deep reinforcement learning. This approach utilizes a local fog computing node, which runs training simulations to automatically learn an appropriate policy for filtering potentially malicious or distracting content produced by an application. Through empirical evaluations, we show that these policies are able to intelligently displace AR content to reduce obstruction of real-world objects, while maintaining a favorable user experience.
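The sketch below illustrates only the objective such an output-security policy optimizes: keep virtual content close to where the application requested it while penalizing occlusion of real-world objects. A greedy search stands in for the deep reinforcement learning agent the paper trains on the fog node; the Box class, the weights, and the search grid are assumptions made for this example.

```python
# Greedy stand-in for a learned AR output-security policy (illustrative only).
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned box in screen coordinates."""
    x: float
    y: float
    w: float
    h: float

def overlap_area(a, b):
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0) * max(dy, 0)

def score(candidate, requested, real_objects, occlusion_weight=10.0):
    """Lower is better: occlusion of real objects plus displacement from the requested position."""
    occlusion = sum(overlap_area(candidate, obj) for obj in real_objects)
    displacement = abs(candidate.x - requested.x) + abs(candidate.y - requested.y)
    return occlusion_weight * occlusion + displacement

def place(requested, real_objects, step=20, search=200):
    """Greedy search over nearby placements for the least-obstructive one."""
    candidates = [Box(requested.x + dx, requested.y + dy, requested.w, requested.h)
                  for dx in range(-search, search + 1, step)
                  for dy in range(-search, search + 1, step)]
    return min(candidates, key=lambda c: score(c, requested, real_objects))

# Example: an annotation requested on top of a pedestrian gets displaced.
pedestrian = Box(100, 100, 50, 120)
requested = Box(110, 120, 80, 40)
print(place(requested, [pedestrian]))
```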
2018-02-27
Song, Liwei, Mittal, Prateek.  2017.  POSTER: Inaudible Voice Commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2583–2585.
Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also provide new attack opportunities for adversaries. Previous papers attack voice assistants with obfuscated voice commands by leveraging the gap between speech recognition systems and human voice perception. The limitation is that these obfuscated commands are audible and thus conspicuous to device owners. In this poster, we propose a novel mechanism to directly attack the microphone used for sensing voice data with inaudible voice commands. We show that the adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasound to cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2–3 meters.
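A simplified signal-model sketch of the attack idea follows (not the paper's hardware setup): a voice command is amplitude-modulated onto an ultrasonic carrier, which is inaudible to humans, and the microphone's quadratic non-linearity demodulates it back into the audible band. The sample rate, carrier frequency, non-linearity coefficients, and the 400 Hz tone standing in for a voice command are all assumptions made for this example.

```python
# Simplified model of ultrasonic injection via microphone non-linearity.
import numpy as np

fs = 192_000                                    # sample rate high enough to represent ultrasound
t = np.arange(0, 0.05, 1 / fs)                  # 50 ms of signal
baseband = np.sin(2 * np.pi * 400 * t)          # stand-in for a voice command
carrier = np.sin(2 * np.pi * 30_000 * t)        # 30 kHz ultrasonic carrier

transmitted = (1 + 0.8 * baseband) * carrier    # AM signal: no audible energy

# A non-ideal microphone responds roughly as y = a*x + b*x^2; the squared
# term of the AM signal contains a copy of the baseband command.
recorded = 1.0 * transmitted + 0.2 * transmitted ** 2

# A crude moving-average low-pass stands in for the microphone's own filtering;
# the recovered signal correlates strongly with the original command.
kernel = np.ones(96) / 96
recovered = np.convolve(recorded - recorded.mean(), kernel, mode="same")
print(f"correlation with original command: {np.corrcoef(recovered, baseband)[0, 1]:.2f}")
```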