Biblio
Presented at the NSA Science of Security Quarterly Meeting, October 2014.
Presented as part of the Illinois Science of Security Lablet Bi-Weekly Meeting, September 2014.
In this paper, we develop a new framework to analyze the stability and stabilizability of Linear Switched Systems (LSS) and to compute their gains. Our approach combines state-space operator descriptions with the Youla parametrization and provides a unified way to analyze and synthesize LSS, and in fact Linear Time Varying (LTV) systems, in any lp induced-norm sense. By specializing to the l∞ case, we show how Linear Programming (LP) can be used to test stability and stabilizability and to synthesize stabilizing controllers that guarantee a near-optimal closed-loop gain.
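As a toy illustration of the abstract's l∞ theme (and not the paper's LP framework; the matrices below are invented for this example), one classical sufficient condition is that a discrete-time switched system x[k+1] = A_σ(k) x[k] is stable under arbitrary switching whenever every mode's induced l∞ norm is below 1, since the l∞ norm of the state then contracts at each step regardless of the switching signal:

```python
# Sufficient (far from necessary) stability check for a switched linear
# system: if every mode A_i satisfies ||A_i||_inf < 1, the state's
# l-infinity norm contracts at every step under arbitrary switching.

def linf_induced_norm(A):
    """Induced l-infinity norm of a matrix: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def stable_under_arbitrary_switching(modes):
    """True if all modes contract in the l-infinity norm (sufficient only)."""
    return all(linf_induced_norm(A) < 1.0 for A in modes)

A1 = [[0.5, 0.3], [0.0, 0.4]]   # ||A1||_inf = 0.8
A2 = [[0.2, -0.2], [0.3, 0.3]]  # ||A2||_inf = 0.6
print(stable_under_arbitrary_switching([A1, A2]))  # True
```

The paper's LP-based tests are far sharper than this single-norm check; the sketch only conveys why l∞ reasoning reduces naturally to linear (row-sum) constraints.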
In this work we are interested in the stability and L2-gain of hybrid systems with linear flow dynamics, periodic time-triggered jumps, and nonlinear, possibly set-valued jump maps. This class of hybrid systems includes various interesting applications such as periodic event-triggered control. In this paper we also show that sampled-data systems with arbitrarily switching controllers can be captured in this framework by requiring the jump map to be set-valued. We provide novel conditions for the internal stability and L2-gain analysis of these systems, adopting a lifting-based approach. In particular, we establish that internal stability and contractivity (in the sense of an L2-gain smaller than 1) are equivalent to the internal stability and contractivity of a particular discrete-time set-valued nonlinear system. In contrast to earlier works in this direction, these characterisations are the first necessary and sufficient conditions for the stability and contractivity of this class of hybrid systems. The results are illustrated through multiple new examples.
Presented at the NSA Science of Security Quarterly Meeting, November 2016.
Presented at the NSA Science of Security Quarterly Meeting, November 2016.
Presented at the NSA Science of Security Quarterly Meeting, November 2016.
Presented at the NSA Science of Security Quarterly Meeting, November 2016.
Presented at the NSA Science of Security Quarterly Meeting, July 2016.
Presented at the Science of Security Quarterly Meeting, July 2016.
Presented at the NSA Science of Security Quarterly Meeting, July 2016.
Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2016), April 2016.
Maintaining the security and privacy hygiene of mobile apps is a critical challenge. Unfortunately, no program analysis algorithm can determine that an application is “secure” or “malware-free.” For example, if an application records audio during a phone call, it may be malware. However, the user may want to use such an application to record phone calls for archival and benign purposes. A key challenge for automated program analysis tools is determining whether or not that behavior is actually desired by the user (i.e., user expectation). This talk presents recent research progress in exploring user expectations in mobile app security.
Presented at the ITI Joint Trust and Security/Science of Security Seminar, January 26, 2016.
Presented at the NSA Science of Security Quarterly Meeting, July 2015.
Since computers are machines, it's tempting to think of computer security as purely a technical problem. However, computing systems are created, used, and maintained by humans, and exist to serve the goals of human and institutional stakeholders. Consequently, effectively addressing the security problem requires understanding this human dimension.
In this tutorial, we discuss this challenge and survey principal research approaches to it.
Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2015), April 2015, Urbana, IL.
Presented at the Illinois SoS Lablet Bi-Weekly Meeting, February 2016.
Presentation at the Illinois SoS Lablet Bi-Weekly Meeting, January 2015.
Given the ever-increasing number of research tools to automatically generate inputs to test Android applications (or simply apps), researchers recently asked the question "Are we there yet?" (in terms of the practicality of the tools). By conducting an empirical study of the various tools, the researchers found that Monkey (the most widely used tool of this category in industrial settings) outperformed all of the research tools in the study. In this paper, we present two significant extensions of that study. First, we conduct the first industrial case study of applying Monkey against WeChat, a popular messenger app with over 762 million monthly active users, and report the empirical findings on Monkey's limitations in an industrial setting. Second, we develop a new approach to address major limitations of Monkey and accomplish substantial code-coverage improvements over Monkey. We conclude the paper with empirical insights for future enhancements to both Monkey and our approach.
Code Hunt (https://www.codehunt.com/) from Microsoft Research is a web-based serious gaming platform widely used for various programming contests. In this paper, we present a preliminary statistical analysis of a Code Hunt data set that contains the programs written by students (only) worldwide during a 48-hour contest. There are 259 users, 24 puzzles (organized into 4 sectors), and about 13,000 programs submitted by these users. Our analysis results can help improve the creation of puzzles in a future contest.
Healthcare technology, sometimes called "healthtech" or "healthsec," is enmeshed with security and privacy via usability, performance, and cost-effectiveness issues. It is multidisciplinary, distributed, and complex, and it also involves many competing stakeholders and interests. To address the problems that arise in such a multifaceted field, comprising physicians, IT professionals, management information specialists, computer scientists, medical informaticists, and epidemiologists, to name a few, the Healthtech Declaration was initiated at the most recent USENIX Summit on Information Technologies for Health (Healthtech 2015), held in Washington, DC. This Healthtech Declaration includes an easy-to-use and easy-to-cite checklist of key issues that anyone proposing a solution must consider (see "The Healthtech Declaration Checklist" sidebar). In this article, we provide the context and motivation for the declaration.
The prevalence of smart devices has promoted the popularity of mobile applications (a.k.a. apps) in recent years. A number of interesting and important questions remain unanswered, such as why a user likes/dislikes an app, how an app becomes popular or eventually perishes, how a user selects apps to install and interacts with them, how frequently an app is used and how much traffic it generates, etc. This paper presents an empirical analysis of app usage behaviors collected from millions of users of Wandoujia, a leading Android app marketplace in China. The dataset covers two types of user behaviors involving over 0.2 million Android apps: (1) app management activities (i.e., installation, updating, and uninstallation) of over 0.8 million unique users and (2) app network traffic from over 2 million unique users. We explore multiple aspects of such behavior data and present interesting patterns of app usage. The results provide many useful implications for the developers, users, and disseminators of mobile apps.
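As a minimal sketch of the kind of aggregation behind such app-management statistics (the event schema and app names below are hypothetical, not Wandoujia's actual data format), one can count installation, updating, and uninstallation events per app from a flat event log:

```python
# Aggregate per-app management activity from a flat (user, app, action)
# event log, then derive a simple churn metric: the fraction of installs
# that were later uninstalled.
from collections import Counter, defaultdict

events = [  # (user_id, app_id, action) -- invented sample records
    ("u1", "com.example.chat", "install"),
    ("u2", "com.example.chat", "install"),
    ("u1", "com.example.chat", "update"),
    ("u2", "com.example.chat", "uninstall"),
    ("u3", "com.example.game", "install"),
]

per_app = defaultdict(Counter)
for user, app, action in events:
    per_app[app][action] += 1

for app, counts in sorted(per_app.items()):
    churn = counts["uninstall"] / max(counts["install"], 1)
    print(app, dict(counts), f"churn={churn:.2f}")
```

At the paper's scale the same grouping would run over a distributed log store rather than an in-memory list, but the per-app roll-up is the same shape.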
Presented at the Illinois Information Trust Institute Assured Cloud Computing Weekly Research Seminar, September 28, 2016.
Presented at NSA Science of Security Quarterly Lablet Meeting, July 2016.
Using stolen or weak credentials to bypass authentication is one of the top 10 network threats, as shown in recent studies. Disguised as legitimate users, attackers use stealthy techniques such as rootkits and covert channels to gain persistent access to a target system. However, such attacks are often detected only after the system misuse stage, i.e., after the attackers have already executed attack payloads such as: i) stealing secrets, ii) tampering with system services, and iii) disrupting the availability of production services.
In this talk, we analyze a real-world credential-stealing attack observed at the National Center for Supercomputing Applications. We show the disadvantages of traditional detection techniques, such as signature-based and anomaly-based detection, for such attacks. Our approach complements existing detection techniques. We investigate the use of probabilistic graphical models, specifically factor graphs, to integrate security logs from multiple sources for more accurate detection. Finally, we propose a security testbed architecture to: i) simulate variants of known attacks that may happen in the future, ii) replay such attack variants in an isolated environment, and iii) collect and share security logs of such replays with the security research community.
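To illustrate the factor-graph idea in miniature (the factor names, observations, and likelihood weights below are invented for this sketch and are not the talk's trained model), one can fuse evidence from several log sources about a single hidden binary variable, "is this user compromised?", by multiplying per-source factors and normalizing:

```python
# Exact inference over one binary variable ("compromised?") in a tiny
# factor graph: each log source contributes a factor mapping its observed
# value to (benign, compromised) likelihoods; multiplying the factors and
# normalizing yields the posterior probability of compromise.

def fuse_evidence(observations, factors):
    """Return P(compromised | observations) for a single binary variable."""
    score = [1.0, 1.0]  # unnormalized [benign, compromised]
    for name, observed in observations.items():
        benign_lik, compromised_lik = factors[name][observed]
        score[0] *= benign_lik
        score[1] *= compromised_lik
    return score[1] / (score[0] + score[1])

# Hypothetical factors: observation value -> (benign, compromised) likelihoods.
factors = {
    "login_from_new_country": {True: (0.05, 0.60), False: (0.95, 0.40)},
    "rootkit_signature":      {True: (0.01, 0.50), False: (0.99, 0.50)},
    "off_hours_activity":     {True: (0.20, 0.70), False: (0.80, 0.30)},
}

obs = {"login_from_new_country": True,
       "rootkit_signature": False,
       "off_hours_activity": True}
p = fuse_evidence(obs, factors)
print(f"P(compromised | logs) = {p:.3f}")
```

The appeal of the factor-graph formulation is exactly this composability: each monitor (network IDS, syslog, authentication logs) contributes its own factor, and evidence that is individually inconclusive can still combine into a confident detection.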
Presented at the Illinois Information Trust Institute Joint Trust and Security/Science of Security Seminar, May 3, 2016.
This talk will explore a scalable data analytics pipeline for real-time attack detection through the use of customized honeypots at the National Center for Supercomputing Applications (NCSA). Attack detection tools are common and are constantly improving, but validating these tools is challenging. You must: (i) identify data (e.g., system-level events) that is essential for detecting attacks, (ii) extract this data from multiple data logs collected by runtime monitors, and (iii) present the data to the attack detection tools. On top of this, such an approach must scale with an ever-increasing amount of data, while allowing integration of new monitors and attack detection tools. All of these require an infrastructure to host and validate the developed tools before deployment into a production environment.
We will present a generalized architecture that aims for a real-time, scalable, and extensible pipeline that can be deployed in diverse infrastructures to validate arbitrary attack detection tools. To motivate our approach, we will show an example deployment of our pipeline based on open-source tools. The example deployment uses as its data sources: (i) a customized honeypot environment at NCSA and (ii) a container-based testbed infrastructure for interactive attack replay. Each of these data sources is equipped with network- and host-based monitoring tools, such as Bro (a network-based intrusion detection system) and OSSEC (a host-based intrusion detection system), to allow for the runtime collection of data on system/user behavior. Finally, we will present an attack detection tool that we developed and aim to validate through our pipeline. In conclusion, the talk will discuss the challenges of transitioning attack detection from theory to practice and how the proposed data analytics pipeline can help with that transition.
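The pipeline shape described above (normalize events from multiple monitors into one stream, then fan them out to pluggable detection tools) can be sketched as follows; the record format and the sample detector are invented for illustration, and real Bro/OSSEC log parsing is omitted:

```python
# Toy event pipeline: tag each raw event with its source, merge the
# streams, and run every registered detector over the unified stream.

def normalize(source_name, raw_events):
    """Tag each raw event dict with the monitor it came from."""
    for e in raw_events:
        yield {"source": source_name, **e}

def merged(*streams):
    """Concatenate the normalized event streams into one."""
    for s in streams:
        yield from s

def failed_login_detector(event):
    """Hypothetical detector: flag unsuccessful authentication events."""
    return event.get("type") == "auth" and not event.get("success", True)

bro_events   = [{"type": "conn", "dst_port": 22}]              # invented records
ossec_events = [{"type": "auth", "user": "alice", "success": False}]

detectors = [failed_login_detector]
alerts = [e for e in merged(normalize("bro", bro_events),
                            normalize("ossec", ossec_events))
          if any(d(e) for d in detectors)]
print(alerts)
```

In a production deployment the merge step would be a streaming broker rather than an in-process generator, but the extensibility argument is the same: new monitors add normalized streams, and new detection tools add predicates, without changing the pipeline core.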
Presented at the Illinois Information Trust Institute Joint Trust and Security/Science of Security Seminar, October 6, 2016.
Presented at the NSA SoS Quarterly Lablet Meeting, October 2015.