Biblio
A proactive approach to security can be adopted by organizations through the use of deception technology. Deception technology allows organizations to reduce attacker dwell time, detect attackers quickly, and lessen false positives. Modern deception platforms use machine learning and AI to remain scalable and easy to manage.
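As a minimal illustration of why decoys yield low-false-positive alerts (the port choice and logging below are assumptions, not any particular product's behavior): no legitimate workflow ever touches a decoy, so any connection to it is inherently suspicious.

```python
import socket
import datetime

# Minimal decoy service: nothing legitimate should ever connect here,
# so any connection attempt is a high-fidelity alert.
DECOY_PORT = 2222  # illustrative; a real decoy would mimic a plausible service

def run_decoy(port: int = DECOY_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            with conn:
                # In practice this would feed a SIEM; here we just log it.
                print(f"{datetime.datetime.now().isoformat()} "
                      f"ALERT: decoy touched by {addr}:{src_port}")

if __name__ == "__main__":
    run_decoy()
```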
Despite corporate cyber intrusions attracting all the attention, privacy breaches that we, as ordinary users, should be worried about occur every day without any scrutiny. Smartphones, a household item, have inadvertently become a major enabler of privacy breaches. Smartphone platforms use permission systems to regulate access to sensitive resources. These permission systems, however, lack the ability to understand users' privacy expectations, leaving a significant gap between how permission models behave and how users would want the platform to protect their sensitive data. This dissertation provides an in-depth analysis of how users make privacy decisions in the context of smartphones and how platforms can accommodate users' privacy requirements systematically. We first performed a 36-person field study to quantify how often applications access protected resources when users are not expecting it. We found that when the application requesting a permission is running invisibly, users are more likely to deny it access to protected resources; at least 80% of our participants would have preferred to prevent at least one permission request. To explore the feasibility of predicting users' privacy decisions based on their past decisions, we performed a longitudinal 131-person field study. Based on the data, we built a classifier that makes privacy decisions on the user's behalf by detecting when the context has changed and inferring privacy preferences from the user's past decisions. We showed that our approach can accurately predict users' privacy decisions 96.8% of the time, an 80% reduction in error rate compared to current systems. Based on these findings, we developed a custom Android version with a contextually aware permission model that guards resources based on the user's past decisions under similar contextual circumstances. We performed a 38-person field study to measure the effectiveness and usability of the new permission model. Based on exit interviews and 5 million data points, we found that the new system reduces potential privacy violations by 75%. Despite being significantly more restrictive than the default permission system, participants did not find the new model to cause any usability issues in terms of application functionality.
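A rough sketch of this kind of contextual classifier follows; the feature names, training data, and model choice are illustrative assumptions, not the dissertation's actual design. The idea is to vectorize the request context and learn from the user's past allow/deny decisions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Illustrative records: each is one permission request plus the user's
# decision. Feature names are assumptions, not the study's exact set.
requests = [
    {"permission": "LOCATION", "app_visible": False, "past_deny_rate": 0.9},
    {"permission": "LOCATION", "app_visible": True,  "past_deny_rate": 0.1},
    {"permission": "CONTACTS", "app_visible": False, "past_deny_rate": 0.7},
    {"permission": "CAMERA",   "app_visible": True,  "past_deny_rate": 0.0},
]
decisions = ["deny", "allow", "deny", "allow"]  # the user's historical choices

# Vectorize the mixed categorical/numeric context and fit a classifier
# that can answer future requests on the user's behalf.
model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=50, random_state=0))
model.fit(requests, decisions)

# A new request arriving while the app runs invisibly in the background:
new_request = {"permission": "LOCATION", "app_visible": False, "past_deny_rate": 0.9}
print(model.predict([new_request])[0])  # likely "deny"
```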
Software patterns are created with the goal of capturing expert knowledge so it can be efficiently and effectively shared with the software development community. However, patterns in practice may or may not achieve these goals. Empirical studies of the use of software patterns can help provide deeper insight into whether these goals have been met. The objective of this paper is to aid researchers in designing empirical studies of software patterns by summarizing the study designs available in the literature. The important components of these study designs include the evaluation criteria and how the patterns are presented to study participants. We select and analyze 19 distinct empirical studies and identify 17 independent variables in three categories (participant demographics, pattern presentation, and problem presentation). We also extract 10 evaluation criteria with 23 associated observable measures. Additionally, by synthesizing the reported observations, we identify challenges faced during study execution. Providing multiple domain-specific examples of pattern application, along with tool support for pattern selection, helps study participants understand and complete the study task. Capturing data on participants' cognitive processes can provide insight into the findings of the study.
Multi-module Cyber-Physical Systems (CPSs), such as satellite clusters, swarms of Unmanned Aerial Vehicles (UAV), and fleets of Unmanned Underwater Vehicles (UUV), are examples of managed distributed real-time systems where mission-critical applications, such as sensor fusion or coordinated flight control, are hosted. These systems are dynamic and reconfigurable, and provide a "CPS cluster-as-a-service" for mission-specific scientific applications that can benefit from the elasticity of the cluster membership and the heterogeneity of the cluster members. The distributed and remote nature of these systems often necessitates the use of Deployment and Configuration (D&C) services to manage the lifecycle of software applications. Fluctuating resources, volatile cluster membership, and changing environmental conditions require resilience. However, due to the dynamic nature of the system, human intervention is often infeasible. This necessitates a self-adaptive D&C infrastructure that supports autonomous resilience. Such an infrastructure must have the ability to adapt existing applications on the fly in order to provide application resilience, and must itself be able to adapt to account for changes in the system as well as tolerate failures.
This paper describes the design and architectural considerations involved in realizing a self-adaptive D&C infrastructure for CPSs. Previous efforts in this area have resulted in D&C infrastructures that support application adaptation via dynamic re-deployment and re-configuration mechanisms. Our work, presented in this paper, improves upon these past efforts by implementing a self-adaptive D&C infrastructure that is itself resilient. The paper concludes with experimental results that demonstrate the autonomous resilience capabilities of our new D&C infrastructure.
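As a rough illustration of the adaptation loop such an infrastructure runs (component names, the failure detector, and the single-pass placement policy below are simplifying assumptions, not the paper's actual mechanisms):

```python
from dataclasses import dataclass

# Hypothetical sketch of the core loop in a self-adaptive D&C service:
# watch cluster membership and, when a node hosting application
# components fails, re-deploy those components onto a surviving node.

@dataclass
class Cluster:
    nodes: set         # currently live nodes (from a membership service)
    deployments: dict  # component name -> node hosting it

def adapt(cluster: Cluster) -> None:
    """One pass of the self-adaptation loop: heal failed deployments."""
    for component, node in list(cluster.deployments.items()):
        if node not in cluster.nodes:            # failure detected
            target = next(iter(cluster.nodes))   # naive placement policy
            print(f"{component}: {node} failed; re-deploying to {target}")
            cluster.deployments[component] = target

cluster = Cluster(nodes={"uav-2", "uav-3"},
                  deployments={"sensor-fusion": "uav-1",
                               "flight-ctl": "uav-2"})
adapt(cluster)  # uav-1 is gone, so sensor-fusion is moved to a live node
print(cluster.deployments)
```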
The human factor is often regarded as the weakest link in cybersecurity systems. Investigations of several security breaches reveal that human error plays an important role in exposing security vulnerabilities. Although security researchers have long observed the impact of human behavior, few improvements have been made in designing secure systems that are resilient to the uncertainties of the human element.
In this talk, we discuss several psychological theories that attempt to understand and influence human behavior in the cyber world. Our goal is to use such theories to build predictive cybersecurity models that include the behavior of typical users as well as system administrators. We then illustrate the importance of our approach by presenting a case study that incorporates models of human users. We analyze our preliminary results, discuss their challenges, and describe our approaches to addressing them in the future.
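As a toy example of folding a behavioral parameter into a system-level predictive model (all probabilities below are invented for illustration, not taken from the talk's case study):

```python
# Fold per-user behavior (probability of falling for phishing) into a
# system-level breach estimate, assuming independent users.

def breach_probability(click_probs, p_exploit_given_click=0.5):
    """P(at least one user's click leads to compromise)."""
    p_safe = 1.0
    for p_click in click_probs:
        p_safe *= 1.0 - p_click * p_exploit_given_click
    return 1.0 - p_safe

# e.g., a trained admin, a casual user, and a security engineer
users = [0.05, 0.20, 0.02]
print(f"{breach_probability(users):.3f}")  # risk rises quickly with risky users
```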
Presented at the ITI Joint Trust and Security/Science of Security Seminar, October 20, 2016.
Stealthy attackers often disable or tamper with system monitors to hide their tracks and evade detection. In this poster, we present a data-driven technique to detect such monitor compromise using evidential reasoning. Leveraging the fact that it is difficult for an attacker to hide from multiple, redundant monitors, we combine alerts from different sets of monitors using Dempster-Shafer theory and compare the results to find outliers that indicate potential monitor compromise. We describe our ongoing work in this area.
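A minimal sketch of the alert-fusion step, assuming a two-hypothesis frame (Compromised vs. Benign) and invented mass values; in the actual technique, the mass functions would be derived from the deployed monitor sets.

```python
from itertools import product

# Dempster's rule of combination over the frame {"C", "B"}
# (host Compromised vs. Benign). Each monitor set yields a mass function;
# strong divergence between the fused beliefs of different monitor sets
# flags possible monitor compromise.

def combine(m1, m2):
    """Fuse two mass functions whose hypotheses are frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory hypotheses
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

C, B, CB = frozenset("C"), frozenset("B"), frozenset("CB")
host_ids = {C: 0.6, B: 0.1, CB: 0.3}  # host monitors lean "compromised"
net_ids  = {C: 0.1, B: 0.7, CB: 0.2}  # network monitors lean "benign"

fused = combine(host_ids, net_ids)
print(fused)  # high conflict between monitor sets marks this host an outlier
```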
Presented at the Illinois SoS Bi-weekly Meeting, April 2015.
Presented at the Illinois SoS Bi-weekly Meeting, February 2015.
Presented at the NSA SoS Quarterly Meeting, July 2016 and November 2016.
Presented at NSA Science of Security Quarterly Meeting, July 2014.
Presented at the Illinois Science of Security Bi-weekly Meeting, April 2015.