Biblio

Filters: Author is Wu, Xi
2020-10-05
Zhao, Yongxin, Wu, Xi, Liu, Jing, Yang, Yilong.  2018.  Formal Modeling and Security Analysis for OpenFlow-Based Networks. 2018 23rd International Conference on Engineering of Complex Computer Systems (ICECCS). :201–204.
We present a formal OpenFlow-based network programming language (OF) that includes various flow rules and can not only describe the behavior of an individual switch but also model a network of switches connected in a point-to-point topology. In addition, we develop a topology-oriented operational semantics for the proposed language that specifies how packets are processed and delivered in OpenFlow-based networks. Based on this formal framework, we also propose an approach to detect potential security threats caused by conflicts among the dynamic flow rules imposed by OpenFlow applications.
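To make the flow-rule-conflict idea concrete, here is a minimal sketch (not from the paper, which uses a formal language and operational semantics): a simplified rule representation with wildcardable match fields, where two rules are flagged as conflicting when they can match the same packet at the same priority but prescribe different actions. The rule fields, the priority handling, and the notion of conflict used here are assumptions for illustration.

```python
# Minimal sketch of flow-rule conflict detection over a simplified
# OpenFlow-style rule set; fields and conflict criterion are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRule:
    priority: int
    src: Optional[str]   # source address to match, None = wildcard
    dst: Optional[str]   # destination address to match, None = wildcard
    action: str          # e.g. "forward:2" or "drop"

def fields_overlap(a: Optional[str], b: Optional[str]) -> bool:
    """Two match fields overlap if either is a wildcard or they are equal."""
    return a is None or b is None or a == b

def conflicting(r1: FlowRule, r2: FlowRule) -> bool:
    """Rules conflict when they can match the same packet at the same
    priority but prescribe different actions."""
    same_packet = fields_overlap(r1.src, r2.src) and fields_overlap(r1.dst, r2.dst)
    return same_packet and r1.priority == r2.priority and r1.action != r2.action

rules = [
    FlowRule(10, "10.0.0.1", None, "forward:2"),
    FlowRule(10, None, "10.0.0.9", "drop"),
]
print([(i, j) for i, r1 in enumerate(rules)
       for j, r2 in enumerate(rules) if i < j and conflicting(r1, r2)])
```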
2020-02-18
Chen, Jiefeng, Wu, Xi, Rastogi, Vaibhav, Liang, Yingyu, Jha, Somesh.  2019.  Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :480–495.

Wide adoption of artificial neural networks in various domains has led to increasing interest in defending them against adversarial attacks. Preprocessing defenses such as pixel discretization are particularly attractive in practice due to their simplicity, low computational overhead, and applicability to various systems. Such methods have been observed to work well on simple datasets like MNIST but to break on more complicated ones like ImageNet under recently proposed strong white-box attacks. To understand the conditions for success and the potential for improvement, we study the pixel discretization defense, including more sophisticated variants that take into account the properties of the dataset being discretized. Our results again show poor resistance against the strong attacks. We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest datasets. Furthermore, our arguments offer insight into why some other preprocessing defenses may be insecure.
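For readers unfamiliar with the defense being analyzed, the following is a minimal sketch of pixel discretization as a preprocessing step (not the authors' code): each pixel is snapped to the nearest element of a small codeword set before classification. The codeword set, image size, and the synthetic "perturbation" below are illustrative assumptions.

```python
# Minimal sketch of pixel discretization as a preprocessing defense;
# the codeword set and toy perturbation are illustrative assumptions.
import numpy as np

def discretize(image: np.ndarray, codewords=(0.0, 0.5, 1.0)) -> np.ndarray:
    """Map each pixel in [0, 1] to its nearest codeword."""
    codes = np.asarray(codewords)                       # shape (k,)
    idx = np.abs(image[..., None] - codes).argmin(-1)   # nearest codeword index
    return codes[idx]

x = np.random.rand(28, 28)                               # stand-in for an MNIST-like image
x_adv = np.clip(x + 0.05 * np.sign(np.random.randn(28, 28)), 0, 1)  # small perturbation
print(np.mean(discretize(x) == discretize(x_adv)))       # fraction of pixels unchanged
```

The intuition is that small perturbations are often absorbed by the snapping step; the paper's analysis examines when this intuition fails on richer datasets.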

2018-09-12
Jang, Uyeong, Wu, Xi, Jha, Somesh.  2017.  Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning. Proceedings of the 33rd Annual Computer Security Applications Conference. :262–277.
Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms are being used in diverse domains where security is a concern, such as automotive systems, finance, health care, computer vision, speech recognition, natural-language processing, and malware detection. Of particular concern is the use of ML in cyber-physical systems, such as driverless cars and aviation, where the presence of an adversary can have serious consequences. In this paper we focus on attacks based on adversarial samples: inputs crafted by adding small, often imperceptible, perturbations that force an ML model to misclassify. We present a simple gradient-descent-based algorithm for finding adversarial samples that performs well in comparison to existing algorithms. The second issue this paper tackles is that of metrics. We present a novel metric, based on a few computer-vision algorithms, for measuring the quality of adversarial samples.
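As a rough illustration of the gradient-based idea (not the paper's actual algorithm or metric), here is a sketch of iteratively perturbing an input against a toy logistic-regression model so that its predicted class flips, while keeping the perturbation inside a small L-infinity ball. The model weights, step size, budget, and iteration count are all illustrative assumptions.

```python
# Minimal sketch of iterative gradient-based adversarial perturbation
# against a toy logistic-regression model; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1               # toy linear classifier

def prob(x):                                   # P(y = 1 | x)
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial(x, step=0.01, eps=0.3, iters=50):
    """Take signed gradient steps that push the prediction toward the
    opposite class, staying within an L-infinity ball of radius eps."""
    push_down = prob(x) >= 0.5                 # direction that flips the class
    x_adv = x.copy()
    for _ in range(iters):
        grad = prob(x_adv) * (1 - prob(x_adv)) * w   # d prob / d x
        x_adv += (-step if push_down else step) * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x = rng.normal(size=20)
print(prob(x) >= 0.5, prob(adversarial(x)) >= 0.5)   # original vs. perturbed class
```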
2018-06-07
Wu, Xi, Li, Fengan, Kumar, Arun, Chaudhuri, Kamalika, Jha, Somesh, Naughton, Jeffrey.  2017.  Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics. Proceedings of the 2017 ACM International Conference on Management of Data. :1307–1322.

While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and on private SGD, none of the major scalable analytics frameworks has incorporated differentially private SGD. There are two interrelated reasons for this disconnect between research and practice: (1) low model accuracy due to the noise added to guarantee privacy, and (2) the high development and runtime overhead of the private algorithms. This paper takes a first step toward remedying this disconnect and proposes a private SGD algorithm that addresses both issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of output perturbation to devise a novel “bolt-on” approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge with a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD algorithms, into Bismarck, a popular scalable SGD-based analytics system built on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and, most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.
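To convey what "bolt-on" output perturbation means operationally, here is a minimal sketch (not the Bismarck integration and not the paper's exact mechanism): SGD runs unmodified, and noise is added only to the released model. The sensitivity constant, the Gaussian noise distribution, and the toy data are placeholders; the paper's sensitivity analysis and noise calibration differ.

```python
# Minimal sketch of output perturbation: train with ordinary SGD, then add
# noise to the final model. Sensitivity, noise scale, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

def sgd_logistic(X, y, lr=0.1, epochs=5):
    """Plain (non-private) logistic-regression SGD."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= lr * (p - y[i]) * X[i]
    return w

def private_release(w, sensitivity=0.05, epsilon=1.0):
    """Add Gaussian noise scaled by an assumed L2-sensitivity / epsilon;
    this is a stand-in for the paper's calibrated mechanism."""
    return w + rng.normal(scale=sensitivity / epsilon, size=w.shape)

w = sgd_logistic(X, y)
print(w)                   # non-private model
print(private_release(w))  # noisy model released under the assumed budget
```

Because training itself is untouched, this style of defense is easy to drop into an existing SGD pipeline, which is the practical point the abstract emphasizes.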