Biblio

Filters: Author is McDaniel, Patrick
Papernot, Nicolas, McDaniel, Patrick, Goodfellow, Ian, Jha, Somesh, Celik, Z. Berkay, Swami, Ananthram.  2017.  Practical Black-Box Attacks Against Machine Learning. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :506–519.

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
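A minimal sketch of the substitute-training loop this abstract describes, assuming only NumPy: the remote DNN is stood in for by a hypothetical oracle_label function that returns labels for queried inputs, the substitute is plain logistic regression rather than a DNN, and the augmentation step only approximates the paper's Jacobian-based technique.

    import numpy as np

    rng = np.random.default_rng(0)

    def oracle_label(x):
        # Hypothetical stand-in for querying the remote model: the attacker only
        # observes output labels, never this hidden decision rule.
        secret_w = np.array([1.5, -2.0, 0.7, 0.3])
        return (x @ secret_w > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_substitute(X, y, epochs=200, lr=0.5):
        # Logistic-regression substitute fit on oracle-labeled data.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
        return w

    # Seed with a small synthetic set, then alternate: label via the oracle,
    # fit the substitute, and augment along the substitute's gradient signs.
    X = rng.normal(size=(20, 4))
    for _ in range(3):
        y = oracle_label(X)
        w = train_substitute(X, y)
        step = np.sign(np.outer(2 * y - 1, w))   # per-label gradient-sign direction
        X = np.vstack([X, X + 0.1 * step])

    # Craft FGSM adversarial examples on the substitute and check how many
    # transfer, i.e. are misclassified by the black-box oracle.
    y = oracle_label(X)
    x_adv = X + 0.5 * np.sign(np.outer(sigmoid(X @ w) - y, w))
    print("transfer rate:", np.mean(oracle_label(x_adv) != y))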

Achleitner, Stefan, La Porta, Thomas, Jaeger, Trent, McDaniel, Patrick.  2017.  Adversarial Network Forensics in Software Defined Networking: Demo. Proceedings of the Symposium on SDN Research. :177–178.
An essential part of an SDN-based network is the set of flow rules that enable network elements to steer and control traffic and to deploy policy enforcement points at fine granularity at any entry point in a network. Such applications, implemented with OpenFlow rules, are already integral components of widely used SDN controllers such as Floodlight or OpenDaylight. The implementation details of network policies are reflected in the composition of flow rules, and leakage of this information gives adversaries a significant attack advantage, such as bypassing Access Control Lists (ACL), reconstructing the resource distribution of Load Balancers, or revealing Moving Target Defense techniques. In this demo [4, 5] we present our open-source scanner SDNMap and demonstrate the findings discussed in the paper "Adversarial Network Forensics in Software Defined Networking" [6]. On two real-world examples, Floodlight's Access Control Lists (ACL) and Floodlight's Load Balancer (LBaaS), we show that severe security issues arise from the ability to reconstruct the details of OpenFlow rules on the data plane.
Celik, Z. Berkay, McDaniel, Patrick, Izmailov, Rauf.  2017.  Feature Cultivation in Privileged Information-Augmented Detection. Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics. :73–80.

Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, sensor outputs collected at run time are compared to the model to identify attacks (or their absence). While this approach to detection has proven effective in many environments, it is limited to training only on features that can be reliably collected at detection time. Hence, such systems fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems do not train on (and thus do not learn from) features that are unavailable or too costly to collect at run time. Recent work proposed an alternate model-construction approach that integrates forensic "privileged" information (features reliably available at training time, but not at run time) to improve the accuracy and resilience of detection systems. In this paper, we further evaluate two previously proposed techniques for model training with privileged information: knowledge transfer and model influence. We explore the cultivation of privileged features, the efficiency of those processes, and their influence on detection accuracy. We observe that improved integration of privileged features makes the resulting detection models more accurate. Our evaluation shows that the use of privileged information leads to a relative decrease in detection error of up to 8.2% for fast-flux bot detection over a system with no privileged information, and of 5.5% for malware classification.
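A minimal sketch of the knowledge-transfer idea named in this abstract, assuming scikit-learn and synthetic data; the feature layout, model choices, and names are illustrative assumptions rather than the paper's construction. Privileged features exist only in the training set, so a regressor learns to estimate them from run-time features, and the detector consumes both.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    X_std = rng.normal(size=(n, 5))          # features collectable at detection time
    X_priv = X_std @ rng.normal(size=(5, 2)) + 0.3 * rng.normal(size=(n, 2))
    y = (X_std[:, 0] + X_priv[:, 0] > 0).astype(int)   # label leans on privileged info

    # Training time: privileged (forensic) features are available, so fit a
    # transfer model that estimates them from standard features, and train the
    # detector on standard plus estimated-privileged features.
    transfer = LinearRegression().fit(X_std, X_priv)
    detector = LogisticRegression().fit(np.hstack([X_std, transfer.predict(X_std)]), y)

    # Detection time: only standard features arrive; the privileged ones are
    # reconstructed by the transfer model before scoring.
    x_run = rng.normal(size=(3, 5))
    print(detector.predict(np.hstack([x_run, transfer.predict(x_run)])))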

Achleitner, Stefan, La Porta, Thomas, Jaeger, Trent, McDaniel, Patrick.  2017.  Adversarial Network Forensics in Software Defined Networking. Proceedings of the Symposium on SDN Research. :8–20.
Software Defined Networking (SDN) and its popular implementation, OpenFlow, represent the foundation for the design and implementation of modern networks. An essential part of an SDN-based network is the set of flow rules that enable network elements to steer and control traffic and to deploy policy enforcement points at fine granularity at any entry point in a network. Such applications, implemented with OpenFlow rules, are already integral components of widely used SDN controllers such as Floodlight or OpenDaylight. The implementation details of network policies are reflected in the composition of flow rules, and leakage of this information gives adversaries a significant attack advantage, such as bypassing Access Control Lists (ACL), reconstructing the resource distribution of Load Balancers, or revealing Moving Target Defense techniques. In this paper we introduce a new attack vector on SDN by showing how the detailed composition of flow rules can be reconstructed by network users without any prior knowledge of the SDN controller or its architecture. To the best of our knowledge, such reconnaissance techniques have not previously been considered in SDN. We introduce SDNMap, an open-source scanner that is able to accurately reconstruct the detailed composition of flow rules by performing active probing and listening to the network traffic. We demonstrate in a number of real-world SDN applications that this ability provides adversaries with a significant attack advantage and discuss ways to prevent the introduced reconnaissance techniques. Our SDNMap scanner is able to reconstruct flow rules between network endpoints with an accuracy of over 96%.
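A minimal sketch of the active-probing idea behind a scanner like SDNMap, with the data plane replaced by a simulated send_probe function; the hidden rule, field names, and inference logic are illustrative assumptions, not the SDNMap implementation (which crafts real packets and listens on the wire).

    # Hidden flow rule on the simulated switch (unknown to the scanner below).
    HIDDEN_RULE = {"ip_proto": "tcp", "tp_dst": 80}

    def send_probe(ip_proto, tp_dst):
        # Stand-in for sending a crafted packet and observing whether a reply
        # arrives: traffic passes only if it matches the installed rule.
        return ip_proto == HIDDEN_RULE["ip_proto"] and tp_dst == HIDDEN_RULE["tp_dst"]

    def reconstruct_rule():
        # 1. Actively probe combinations of protocol and destination port and
        #    keep the combinations that elicited a reply.
        reachable = [(proto, port)
                     for proto in ("tcp", "udp", "icmp")
                     for port in (22, 53, 80, 443)
                     if send_probe(proto, port)]
        if not reachable:
            return None
        # 2. Infer the match fields: a field is part of the rule if only one
        #    probed value for it was reachable; otherwise treat it as wildcarded.
        protos = {p for p, _ in reachable}
        ports = {t for _, t in reachable}
        rule = {}
        if len(protos) == 1:
            rule["ip_proto"] = protos.pop()
        if len(ports) == 1:
            rule["tp_dst"] = ports.pop()
        return rule

    print(reconstruct_rule())   # prints {'ip_proto': 'tcp', 'tp_dst': 80}
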
Achleitner, Stefan, La Porta, Thomas, McDaniel, Patrick, Sugrim, Shridatt, Krishnamurthy, Srikanth V., Chadha, Ritu.  2016.  Cyber Deception: Virtual Networks to Defend Insider Reconnaissance. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :57–68.

Advanced targeted cyber attacks often rely on reconnaissance missions to gather information about potential targets and their location in a networked environment, in order to identify vulnerabilities that can be exploited for further attack maneuvers. Advanced network scanning techniques are often used for this purpose and are automatically executed by malware-infected hosts. In this paper we formally define network deception as a defense against reconnaissance and develop RDS (Reconnaissance Deception System), which is based on SDN (Software Defined Networking), to achieve deception by simulating virtual network topologies. Our system thwarts network reconnaissance by delaying adversaries' scanning techniques and invalidating their collected information, while minimizing the performance impact on benign network traffic. We introduce approaches to defend against malicious network discovery and reconnaissance in computer networks, which are required for targeted cyber attacks such as Advanced Persistent Threats (APT). We show that our system is able to invalidate an attacker's information, delay the process of finding vulnerable hosts, and identify the source of adversarial reconnaissance within a network, while causing only a minuscule performance overhead of 0.2 milliseconds per packet flow on average.
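A minimal sketch of the deception idea described in this abstract, simulating a controller-side probe handler that answers scans from a virtual topology; the subnet, decoy count, delay range, and function names are illustrative assumptions rather than the RDS design.

    import ipaddress
    import random

    random.seed(7)

    REAL_HOSTS = {"10.0.0.5", "10.0.0.9"}            # what actually exists
    SUBNET = ipaddress.ip_network("10.0.0.0/24")

    # Build a virtual view: real hosts are hidden among many simulated decoys.
    VIRTUAL_HOSTS = set(random.sample([str(h) for h in SUBNET.hosts()], 50)) | REAL_HOSTS

    def handle_probe(dst_ip):
        """Return (responds, artificial_delay_s) for a scanner's probe."""
        if dst_ip in VIRTUAL_HOSTS:
            # Decoys and real hosts answer alike; the scanner cannot tell them apart,
            # and every answer carries some added latency to slow the sweep.
            return True, random.uniform(0.05, 0.5)
        return False, 0.0

    # A scanner sweeping the subnet now records ~50 "hosts", most of them fake,
    # and its collected topology information is largely invalid.
    found = [str(h) for h in SUBNET.hosts() if handle_probe(str(h))[0]]
    print(f"scanner found {len(found)} hosts; real hosts among them: "
          f"{len(REAL_HOSTS & set(found))}")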