Biblio

Found 164 results

Filters: Keyword is CMU
2016-04-25
Michael Maass.  2016.  A Theory and Tools for Applying Sandboxes Effectively. Institute for Software Research, School of Computer Science. PhD Philosophy: 166.

It is more expensive and time-consuming to build modern software without extensive supply chains. Supply chains decrease these development risks, but typically at the cost of increased security risk. In particular, it is often difficult to understand or verify what a software component delivered by a third party does or could do. Such a component could contain unwanted behaviors, vulnerabilities, or malicious code, many of which become incorporated in applications utilizing the component. Sandboxes provide relief by encapsulating a component and imposing a security policy on it. This limits the operations the component can perform without as much need to trust or verify the component. Instead, a component user must trust or verify the relatively simple sandbox. Given this appealing prospect, researchers have spent the last few decades developing new sandboxing techniques and sandboxes. However, while sandboxes have been adopted in practice, they are not as pervasive as they could be. Why are sandboxes not achieving ubiquity at the same rate as extensive supply chains? This thesis advances our understanding of and overcomes some barriers to sandbox adoption. We systematically analyze ten years (2004–2014) of sandboxing research from top-tier security and systems conferences. We uncover two barriers: (1) sandboxes are often validated using relatively subjective techniques and (2) usability for sandbox deployers is often ignored by the studied community. We then focus on the Java sandbox to empirically study its use within the open source community. We find features in the sandbox that benign applications do not use, which have promoted a thriving exploit landscape. We develop runtime monitors for the Java Virtual Machine (JVM) to turn off these features, stopping all known sandbox-escaping JVM exploits without breaking benign applications. Furthermore, we find that the sandbox contains a high degree of complexity that benign applications need but that hampers sandbox use. When studying the sandbox's use, we did not find a single application that successfully deployed the sandbox for security purposes, which motivated us to overcome benignly-used complexity via tooling. We develop and evaluate a series of tools to automate the most complex tasks, which currently require error-prone manual effort. Our tools help users derive, express, and refine a security policy and impose it on targeted Java application JARs and classes. This tooling is evaluated through case studies with industrial collaborators where we sandbox components that were previously difficult to sandbox securely. Finally, we observe that design and implementation complexity causes sandbox developers to accidentally create vulnerable sandboxes. Thus, we develop and evaluate a sandboxing technique that leverages existing cloud computing environments to execute untrusted computations. Malicious outcomes produced by the computations are contained by ephemeral virtual machines. We describe a field trial using this technique with Adobe Reader and compare the new sandbox to existing sandboxes using a qualitative framework we developed.
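
For readers unfamiliar with the Java sandbox discussed above, the following is a minimal illustrative sketch, not the thesis tooling: it shows how a security policy, enforced by the JVM's security manager, confines a component so that the component user only has to trust the policy and the sandbox rather than the component itself. The policy file contents, paths, and JAR name are hypothetical.

    // Minimal Java sandbox sketch (illustrative only; not the thesis tools).
    // Assumes the JVM is started with:
    //   java -Djava.security.manager -Djava.security.policy=component.policy SandboxDemo
    // and a hypothetical component.policy such as:
    //   grant codeBase "file:/opt/app/lib/untrusted-component.jar" {
    //       permission java.io.FilePermission "/opt/app/data/-", "read";
    //   };
    import java.io.FileInputStream;

    public class SandboxDemo {
        public static void main(String[] args) {
            if (System.getSecurityManager() == null) {
                System.err.println("Run with -Djava.security.manager to enable the sandbox.");
                return;
            }
            // Any permission the policy does not grant is denied at runtime,
            // no matter what the confined component's code tries to do.
            try {
                new FileInputStream("/etc/passwd").close();
            } catch (SecurityException e) {
                System.out.println("Denied by sandbox policy: " + e.getMessage());
            } catch (Exception e) {
                System.out.println("I/O error: " + e.getMessage());
            }
        }
    }

The thesis's tooling targets exactly the hard part this sketch glosses over: deriving, expressing, and refining a policy like the one above for a real application.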

Junjie Qian, Witawas Srisa-an, Hong Jiang, Sharad Seth, Du Li, Pan Yi.  2016.  Exploiting FIFO Scheduler to Improve Parallel Garbage Collection Performance. VEE '16 12th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments.

Recent studies have found that parallel garbage collection performs worse with more CPUs and more collector threads. As part of this work, we further investigate this phenomenon and find that poor scalability is worst in highly scalable Java applications. Our investigation to find the causes clearly reveals that efficient multi-threading in an application can prolong the average object lifespan, which results in less effective garbage collection. We also find that prolonging lifespan is the direct result of Linux's Completely Fair Scheduler due to its round-robin-like behavior that can increase the heap contention between the application threads. Instead, if we use pseudo first-in-first-out to schedule application threads in large multicore systems, the garbage collection scalability is significantly improved while the time spent in garbage collection is reduced by as much as 21%. The average execution time of the 24 Java applications used in our study is also reduced by 11%. Based on this observation, we propose two approaches to optimally select scheduling policies based on application scalability profile. Our first approach uses the profile information from one execution to tune the subsequent executions. Our second approach dynamically collects profile information and performs policy selection during execution.

Momin Malik, Jurgen Pfeffer, Gabriel Ferreira, Christian Kästner.  2016.  Visualizing the variational callgraph of the Linux Kernel: An approach for reasoning about dependencies. HotSoS '16 Proceedings of the Symposium and Bootcamp on the Science of Security.

Software developers use #ifdef statements to support code configurability, allowing software product diversification. But because functions can be in many execution paths that depend on complex combinations of configuration options, the introduction of an #ifdef for a given purpose (such as adding a new feature to a program) can enable unintended function calls, which can be a source of vulnerabilities. Part of the difficulty lies in maintaining mental models of all dependencies. We propose analytic visualizations of the variational callgraph to capture dependencies across configurations and create visualizations to demonstrate how it would help developers visually reason through the implications of diversification, for example by visually performing change impact analysis.

James Herbsleb, Christian Kästner, Christopher Bogart.  2015.  Intelligently Transparent Software Ecosystems. IEEE Software. 33(1)

Today's social-coding tools foreshadow a transformation of the software industry, as it relies increasingly on open libraries, frameworks, and code fragments. Our vision calls for new intelligently transparent services that support rapid development of innovative products while helping developers manage risk and issuing them early warnings of looming failures. Intelligent transparency is enabled by an infrastructure that applies analytics to data from all phases of the life cycle of open source projects, from development to deployment. Such an infrastructure brings stakeholders the information they need when they need it.

2016-02-15
Flavio Medeiros, Christian Kästner, Marcio Ribeiro, Sarah Nadi, Rohit Gheyi.  2015.  The Love/Hate Relationship with The C Preprocessor: An Interview Study. European Conference on Object-Oriented Programming (ECOOP).

The C preprocessor has received strong criticism in academia, among others regarding separation of concerns, error proneness, and code obfuscation, but is widely used in practice. Many (mostly academic) alternatives to the preprocessor exist, but have not been adopted in practice. Since developers continue to use the preprocessor despite all criticism and research, we ask how practitioners perceive the C preprocessor. We performed interviews with 40 developers, used grounded theory to analyze the data, and cross-validated the results with data from a survey among 202 developers, repository mining, and results from previous studies. In particular, we investigated four research questions related to why the preprocessor is still widely used in practice, common problems, alternatives, and the impact of undisciplined annotations. Our study shows that developers are aware of the criticism the C preprocessor receives, but use it nonetheless, mainly for portability and variability. Many developers indicate that they regularly face preprocessor-related problems and preprocessor-related bugs. The majority of our interviewees do not see any current C-native technologies that can entirely replace the C preprocessor. However, developers tend to mitigate problems with guidelines, but those guidelines are not enforced consistently. We report the key insights gained from our study and discuss implications for practitioners and researchers on how to better use the C preprocessor to minimize its negative impact.

Sarah Nadi, Thorsten Berger, Christian Kästner, Krzysztof Czarnecki.  2015.  Where Do Configuration Constraints Stem From? An Extraction Approach and an Empirical Study. IEEE Transactions on Software Engineering. 41(8)

Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it on four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93% and 77% respectively) and that we can recover 28% of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users' configuration experience, and prevent corner cases. While the majority of constraints is extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.

Shurui Zhou, Jafar Al-Kofahi, Tien Nguyen, Christian Kästner, Sarah Nadi.  2015.  Extracting configuration knowledge from build files with symbolic analysis. RELENG '15 Proceedings of the Third International Workshop on Release Engineering.

Build systems contain a lot of configuration knowledge about a software system, such as under which conditions specific files are compiled. Extracting such configuration knowledge is important for many tools analyzing highly-configurable systems, but very challenging due to the complex nature of build systems. We design an approach, based on SYMake, that symbolically evaluates Makefiles and extracts configuration knowledge in terms of file presence conditions and conditional parameters. We implement an initial prototype and demonstrate feasibility on small examples.

Gabriel Ferreira, Christian Kästner, Jurgen Pfeffer, Sven Apel.  2015.  Characterizing complexity of highly-configurable systems with variational call graphs: analyzing configuration options interactions complexity in function calls. HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security.

Security has consistently been the focus of attention in many highly-configurable software systems. Several vulnerabilities on widely-used systems, such as the Linux kernel and OpenSSL, are reported every day in the National Vulnerability Database (NVD). The configurability of these systems enables the rapid generation of customized products, but also creates security challenges in the development and maintenance processes. For instance, interactions caused by configurations may create serious security threats and make generated products more susceptible to attacks [6], but the causes of these problems may be harder to detect because they occur only in specific configurations.

Ghita Mezzour, Kathleen Carley, L. Richard Carley.  2015.  An empirical study of global malware encounters. HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security.

The number of trojans, worms, and viruses that computers encounter varies greatly across countries. Empirically identifying factors behind such variation can provide a scientific, empirical basis for policy actions to reduce malware encounters in the most affected countries. However, our understanding of these factors is currently mainly based on expert opinions, not empirical evidence.

In this paper, we empirically test alternative hypotheses about factors behind international variation in the number of trojan, worm, and virus encounters. We use the Symantec Anti-Virus (AV) telemetry data collected from more than 10 million Symantec customer computers worldwide that we accessed through the Symantec Worldwide Intelligence Environment (WINE) platform. We use regression analysis to test for the effect of computing and monetary resources, web browsing behavior, computer piracy, cyber security expertise, and international relations on international variation in malware encounters.

We find that trojans, worms, and viruses are most prevalent in Sub-Saharan African countries. Many Asian countries also encounter substantial quantities of malware. Our regression analysis reveals that the main factor that explains the high malware exposure of these countries is widespread computer piracy, especially when combined with poverty. Our regression analysis also reveals that, surprisingly, web browsing behavior, cyber security expertise, and international relations have no significant effect.

Ghita Mezzour.  2015.  Assessing the Global Cyber and Biological Threat. Electrical and Computer Engineering Department and Institute for Software Research. Doctor of Philosophy

In today's inter-connected world, threats from anywhere in the world can have serious global repercussions. In particular, two types of threats have a global impact: 1) cyber crime and 2) cyber and biological weapons. If a country's environment is conducive to cyber criminal activities, cyber criminals will use that country as a basis to attack end-users around the world. Cyber weapons and biological weapons can now allow a small actor to inflict major damage on a major military power. If cyber and biological weapons are used in combination, the damage can be amplified significantly. Given that the cyber and biological threat is global, it is important to identify countries that pose the greatest threat and design action plans to reduce the threat from these countries. However, prior work on cyber crime lacks empirical substantiation for reasons why some countries' environments are conducive to cyber crime. Prior work on cyber and biological weapon capabilities mainly consists of case studies which only focus on select countries and thus are not generalizable. To sum up, assessing the global cyber and biological threat currently lacks a systematic empirical approach. In this thesis, I take an empirical and systematic approach towards assessing the global cyber and biological threat. The first part of the thesis focuses on cyber crime. I examine international variation in cyber crime infrastructure hosting and cyber crime exposure. I also empirically test hypotheses about factors behind such variation. In that work, I use Symantec's telemetry data, collected from 10 million Symantec customer computers worldwide and accessed through Symantec's Worldwide Intelligence Network Environment (WINE). I find that addressing corruption in Eastern Europe or computer piracy in Sub-Saharan Africa has the potential to reduce global cyber crime. The second part of the thesis focuses on cyber and biological weapon capabilities. I develop two computational methodologies: one to assess countries' biological capabilities and one to assess countries' cyber capabilities. The methodologies examine all countries in the world and can be used by non-experts who only have access to publicly available data. I validate the biological weapon assessment methodology by comparing the methodology's assessment to historical data. This work has the potential to proactively reduce the global cyber and biological weapon threat.

Jonathan Shahen, Jianwei Niu, Mahesh Tripunitara.  2015.  Mohawk+T: Efficient Analysis of Administrative Temporal Role-Based Access Control (ATRBAC) Policies. SACMAT '15 Proceedings of the 20th ACM Symposium on Access Control Models and Technologies.

Safety analysis is recognized as a fundamental problem in access control. It has been studied for various access control schemes in the literature. Recent work has proposed an administrative model for Temporal Role-Based Access Control (TRBAC) policies called Administrative TRBAC (ATRBAC). We address ATRBAC-safety. We first identify that the problem is PSPACE-Complete. This is a much tighter identification of the computational complexity of the problem than prior work, which shows only that the problem is decidable. With this result as the basis, we propose an approach that leverages an existing open-source software tool called Mohawk to address ATRBAC-safety. Our approach is to efficiently reduce ATRBAC-safety to ARBAC-safety, and then use Mohawk. We have conducted a thorough empirical assessment. In the course of our assessment, we came up with a "reduction toolkit," which allows us to reduce Mohawk+T input instances to instances that existing tools support. Our results suggest that there are some input classes for which Mohawk+T outperforms existing tools, and others for which existing tools outperform Mohawk+T. The source code for Mohawk+T is available for public download.

Hanan Hibshi, Travis Breaux, Stephen Broomell.  2015.  Assessment of Risk Perception in Security Requirements Composition. IEEE 23rd International Requirements Engineering Conference (RE'15).

Security requirements analysis depends on how well-trained analysts perceive security risk, understand the impact of various vulnerabilities, and mitigate threats. When systems are composed of multiple machines, configurations, and software components that interact with each other, risk perception must account for the composition of security requirements. In this paper, we report on how changes to security requirements affect analysts' risk perceptions and their decisions about how to modify the requirements to reach adequate security levels. We conducted two user surveys of 174 participants wherein participants assess security levels across 64 factorial vignettes. We analyzed the survey results using multi-level modeling to test for the effect of security requirements composition on participants' overall security adequacy ratings and on their ratings of individual requirements. We accompanied this analysis with grounded analysis of elicited requirements aimed at lowering the security risk. Our results suggest that requirements composition affects experts' adequacy ratings on security requirements. In addition, we identified three categories of requirements modifications, called refinements, replacements, and reinforcements, and we measured how these categories compare with overall perceived security risk. Finally, we discuss the future impact of our work in security requirements assessment practice.

Travis Breaux, Daniel Smullen, Hanan Hibshi.  2015.  Detecting Repurposing and Over-collection in Multi-Party Privacy Requirements Specifications. IEEE 23rd International Requirements Engineering Conference (RE'15).

Mobile and web applications increasingly leverage service-oriented architectures in which developers integrate third-party services into end user applications. This includes identity management, mapping and navigation, cloud storage, and advertising services, among others. While service reuse reduces development time, it introduces new privacy and security risks due to data repurposing and over-collection as data is shared among multiple parties who lack transparency into third-party data practices. To address this challenge, we propose new techniques based on Description Logic (DL) for modeling multi-party data flow requirements and verifying the purpose specification and collection and use limitation principles, which are prominent privacy properties found in international standards and guidelines. We evaluate our techniques in an empirical case study that examines the data practices of the Waze mobile application and three of their service providers: Facebook Login, Amazon Web Services (a cloud storage provider), and Flurry.com (a popular mobile analytics and advertising platform). The study results include detected conflicts and violations of the principles as well as two patterns for balancing privacy and data use flexibility in requirements specifications. Analysis of automated reasoning over the DL models shows that reasoning over complex compositions of multi-party systems is feasible within exponential asymptotic timeframes proportional to the policy size, the number of expressed data, and orthogonal to the number of conflicts found.

Naeem Esfahani, Eric Yuan, Kyle Canavera, Sam Malek.  2016.  Inferring Software Component Interaction Dependencies for Adaptation Support. ACM Transactions on Autonomous and Adaptive Systems (TAAS). 10(4)

A self-managing software system should be able to monitor and analyze its runtime behavior and make adaptation decisions accordingly to meet certain desirable objectives. Traditional software adaptation techniques and recent “models@runtime” approaches usually require an a priori model for a system’s dynamic behavior. Oftentimes the model is difficult to define and labor-intensive to maintain, and tends to get out of date due to adaptation and architecture decay. We propose an alternative approach that does not require defining the system’s behavior model beforehand, but instead involves mining software component interactions from system execution traces to build a probabilistic usage model, which is in turn used to analyze, plan, and execute adaptations. In this article, we demonstrate how such an approach can be realized and effectively used to address a variety of adaptation concerns. In particular, we describe the details of one application of this approach for safely applying dynamic changes to a running software system without creating inconsistencies. We also provide an overview of two other applications of the approach, identifying potentially malicious (abnormal) behavior for self-protection, and improving deployment of software components in a distributed setting for performance self-optimization. Finally, we report on our experiments with engineering self-management features in an emergency deployment system using the proposed mining approach.
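
As a rough illustration of the mining idea described above, here is a minimal sketch, not the authors' implementation, that turns observed component interactions from an execution trace into a simple probabilistic usage model (estimated transition probabilities between components); all component names are invented.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: estimate P(caller -> callee) from interaction events in a trace.
    public class UsageModelMiner {
        private final Map<String, Map<String, Integer>> counts = new HashMap<>();

        public void observe(String caller, String callee) {
            counts.computeIfAbsent(caller, k -> new HashMap<>())
                  .merge(callee, 1, Integer::sum);
        }

        public double probability(String caller, String callee) {
            Map<String, Integer> out = counts.getOrDefault(caller, Map.of());
            int total = out.values().stream().mapToInt(Integer::intValue).sum();
            return total == 0 ? 0.0 : (double) out.getOrDefault(callee, 0) / total;
        }

        public static void main(String[] args) {
            UsageModelMiner miner = new UsageModelMiner();
            miner.observe("Dispatcher", "ResourceManager");
            miner.observe("Dispatcher", "ResourceManager");
            miner.observe("Dispatcher", "Logger");
            // An adaptation planner could consult such estimates, e.g., before
            // swapping out a component, to avoid disrupting likely interactions.
            System.out.printf("P(Dispatcher -> ResourceManager) = %.2f%n",
                    miner.probability("Dispatcher", "ResourceManager"));
        }
    }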

Michael Maass, Adam Sales, Benjamin Chung, Joshua Sunshine.  2016.  A systematic analysis of the science of sandboxing. PeerJ Computer Science. 2

Sandboxes are increasingly important building materials for secure software systems. In recognition of their potential to improve the security posture of many systems at various points in the development lifecycle, researchers have spent the last several decades developing, improving, and evaluating sandboxing techniques. What has been done in this space? Where are the barriers to advancement? What are the gaps in these efforts? We systematically analyze a decade of sandbox research from five top-tier security and systems conferences using qualitative content analysis, statistical clustering, and graph-based metrics to answer these questions and more. We find that the term “sandbox” currently has no widely accepted or acceptable definition. We use our broad scope to propose the first concise and comprehensive definition for “sandbox” that consistently encompasses research sandboxes. We learn that the sandboxing landscape covers a range of deployment options and policy enforcement techniques collectively capable of defending diverse sets of components while mitigating a wide range of vulnerabilities. Researchers consistently make security, performance, and applicability claims about their sandboxes and tend to narrowly define the claims to ensure they can be evaluated. Those claims are validated using multi-faceted strategies spanning proof, analytical analysis, benchmark suites, case studies, and argumentation. However, we find two cases for improvement: (1) the arguments researchers present are often ad hoc and (2) sandbox usability is mostly uncharted territory. We propose ways to structure arguments to ensure they fully support their corresponding claims and suggest lightweight means of evaluating sandbox usability.

Ivan Ruchkin, Ashwini Rao, Dio De Niz, Sagar Chaki, David Garlan.  2015.  Eliminating Inter-Domain Vulnerabilities in Cyber-Physical Systems: An Analysis Contracts Approach. CPS-SPC '15 Proceedings of the First ACM Workshop on Cyber-Physical Systems-Security and/or PrivaCy.

Designing secure cyber-physical systems (CPS) is a particularly difficult task since security vulnerabilities stem not only from traditional cybersecurity concerns, but also physical ones. Many of the standard methods for CPS design make strong and unverified assumptions about the trustworthiness of physical devices, such as sensors. When these assumptions are violated, subtle inter-domain vulnerabilities are introduced into the system model. In this paper we use formal specification of analysis contracts to expose security assumptions and guarantees of analyses from reliability, control, and sensor security domains. We show that this specification allows us to determine where these assumptions are violated, opening the door to malicious attacks. We demonstrate how this approach can help discover and prevent vulnerabilities using a self-driving car example.

Hamid Bagheri, Alireza Sadeghi, Sam Malek, Joshua Garcia.  2015.  COVERT: Compositional Analysis of Android Inter-App Permission Leakage. IEEE Transactions on Software Engineering. 41(9)

Android is the most popular platform for mobile devices. It facilitates sharing of data and services among applications using a rich inter-app communication system. While access to resources can be controlled by the Android permission system, enforcing permissions is not sufficient to prevent security violations, as permissions may be mismanaged, intentionally or unintentionally. Android's enforcement of the permissions is at the level of individual apps, allowing multiple malicious apps to collude and combine their permissions or to trick vulnerable apps to perform actions on their behalf that are beyond their individual privileges. In this paper, we present COVERT, a tool for compositional analysis of Android inter-app vulnerabilities. COVERT's analysis is modular to enable incremental analysis of applications as they are installed, updated, and removed. It statically analyzes the reverse engineered source code of each individual app, and extracts relevant security specifications in a format suitable for formal verification. Given a collection of specifications extracted in this way, a formal analysis engine (e.g., model checker) is then used to verify whether it is safe for a combination of applications (holding certain permissions and potentially interacting with each other) to be installed together. Our experience with using COVERT to examine over 500 real-world apps corroborates its ability to find inter-app vulnerabilities in bundles of some of the most popular apps on the market.
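
To make this class of inter-app vulnerability concrete, here is a minimal hypothetical Android sketch (invented for illustration; it is not taken from the paper or its subject apps): an app holding the SEND_SMS permission exposes an exported service that performs the privileged action for any caller, so a co-installed app with no permissions can escalate through it.

    // Hypothetical vulnerable app component (declared exported in its manifest).
    // A zero-permission app can send an Intent here and have SMS sent on its behalf.
    public class SmsRelayService extends android.app.Service {
        @Override
        public int onStartCommand(android.content.Intent intent, int flags, int startId) {
            if (intent != null) {
                String number = intent.getStringExtra("number"); // attacker-controlled
                String body = intent.getStringExtra("body");     // attacker-controlled
                // No check on the caller's identity or permissions before acting.
                android.telephony.SmsManager.getDefault()
                        .sendTextMessage(number, null, body, null, null);
            }
            return START_NOT_STICKY;
        }

        @Override
        public android.os.IBinder onBind(android.content.Intent intent) {
            return null;
        }
    }

COVERT's compositional analysis targets exactly such combinations: each app's permissions and inter-app communication are extracted separately, and the formal analysis then checks whether the apps are safe to install together.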

Nariman Mirzaei, Hamid Bagheri, Riyadh Mahmood, Sam Malek.  2015.  SIG-Droid: Automated System Input Generation for Android Applications. 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE).

Pervasiveness of smartphones and the vast number of corresponding apps have underlined the need for applicable automated software testing techniques. A wealth of research has been focused on either unit or GUI testing of smartphone apps, but little on automated support for end-to-end system testing. This paper presents SIG-Droid, a framework for system testing of Android apps, backed with automated program analysis to extract app models and symbolic execution of source code guided by such models for obtaining test inputs that ensure covering each reachable branch in the program. SIG-Droid leverages two automatically extracted models: Interface Model and Behavior Model. The Interface Model is used to find values that an app can receive through its interfaces. Those values are then exchanged with symbolic values to deal with constraints with the help of a symbolic execution engine. The Behavior Model is used to drive the apps for symbolic execution and generate sequences of events. We provide an efficient implementation of SIG-Droid based in part on Symbolic PathFinder, extended in this work to support automatic testing of Android apps. Our experiments show SIG-Droid is able to achieve significantly higher code coverage than existing automated testing tools targeted for Android.

Gabriel Moreno, Javier Camara, David Garlan, Bradley Schmerl.  2015.  Proactive Self-Adaptation under Uncertainty: a Probabilistic Model Checking Approach. ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering.

Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, when adaptations have latency, and take some time to produce their effect, they have to be started with sufficient lead time so that they complete by the time their effect is needed. Proactive latency-aware adaptation addresses these issues by making adaptation decisions with a look-ahead horizon and taking adaptation latency into account. In this paper we present an approach for proactive latency-aware adaptation under uncertainty that uses probabilistic model checking for adaptation decisions. The key idea is to use a formal model of the adaptive system in which the adaptation decision is left underspecified through nondeterminism, and have the model checker resolve the nondeterministic choices so that the accumulated utility over the horizon is maximized. The adaptation decision is optimal over the horizon, and takes into account the inherent uncertainty of the environment predictions needed for looking ahead. Our results show that the decision based on a look-ahead horizon, and the factoring of both tactic latency and environment uncertainty, considerably improve the effectiveness of adaptation decisions.
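
The look-ahead idea can be illustrated with a deliberately simplified sketch; it omits the paper's probabilistic model checking and environment uncertainty, and the tactic names, latencies, and utilities are invented. The planner resolves the choice of which tactic to start, and when, by maximizing utility accumulated over a finite horizon while charging each tactic its latency before its benefit takes effect.

    import java.util.List;

    // Simplified proactive, latency-aware planner: start at most one tactic
    // somewhere in the horizon (or none) so that accumulated utility is maximal.
    public class LookaheadPlanner {

        record Tactic(String name, int latency, double utilityGain) {}

        // Utility accumulated from 'step' to 'horizon' given per-step utility 'base'.
        static double best(int step, int horizon, double base, List<Tactic> tactics) {
            if (step >= horizon) return 0;
            // Option 1: start nothing this step; decide again next step.
            double bestValue = base + best(step + 1, horizon, base, tactics);
            // Option 2: start a tactic now; its gain only applies after its latency.
            for (Tactic t : tactics) {
                double value = 0;
                for (int s = step; s < horizon; s++) {
                    value += (s - step >= t.latency()) ? base + t.utilityGain() : base;
                }
                bestValue = Math.max(bestValue, value);
            }
            return bestValue;
        }

        public static void main(String[] args) {
            List<Tactic> tactics = List.of(
                    new Tactic("addServer", 3, 2.0),      // slow, large payoff
                    new Tactic("lowerFidelity", 0, 0.5)); // instant, small payoff
            System.out.println("horizon 2:  " + best(0, 2, 1.0, tactics));
            System.out.println("horizon 10: " + best(0, 10, 1.0, tactics));
        }
    }

With a short horizon the instant tactic wins; with a longer horizon the slower tactic's larger payoff dominates. That trade-off is what a reactive, myopic adaptation strategy misses and what the latency- and uncertainty-aware formulation in the paper captures rigorously.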

Waqar Ahmad, Joshua Sunshine, Christian Kästner, Adam Wynne.  2015.  Enforcing Fine-Grained Security and Privacy Policies in an Ecosystem within an Ecosystem. Systems, Programming, Languages and Applications: Software for Humanity (SPLASH).

Smart home automation and IoT promise to bring many advantages, but they also expose their users to certain security and privacy vulnerabilities. For example, leaking information about the absence of a person from home or the medicine somebody is taking may have serious security and privacy consequences for home users and potential legal implications for providers of home automation and IoT platforms. We envision that a new ecosystem within an existing smartphone ecosystem will be a suitable platform for distribution of apps for smart home and IoT devices. Android is increasingly becoming a popular platform for smart home and IoT devices and applications. Built-in security mechanisms in ecosystems such as Android have limitations that can be exploited by malicious apps to leak users' sensitive data to unintended recipients. For instance, Android enforces that an app requires the Internet permission in order to access a web server, but it does not control which servers the app talks to or what data it shares with other apps. Therefore, sub-ecosystems that enforce additional fine-grained custom policies on top of existing policies of the smartphone ecosystems are necessary for smart home or IoT platforms. To this end, we have built a tool that enforces additional policies on inter-app interactions and permissions of Android apps. We have done preliminary testing of our tool on three proprietary apps developed by a future provider of a home automation platform. Our initial evaluation demonstrates that it is possible to develop mechanisms that allow definition and enforcement of custom security policies appropriate for ecosystems such as smart home automation and IoT.
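
As a sketch of what such an additional fine-grained policy layer might look like (an assumed design for illustration only, not the authors' tool; package names and data labels are hypothetical), consider a check that pins which co-installed apps may receive a sensitive piece of smart-home data on top of Android's stock permission check:

    import java.util.Set;

    // Sketch of a custom inter-app policy layer: stock Android only checks that a
    // receiver holds a permission, while this layer also pins the concrete
    // packages allowed to receive data carrying a sensitive label.
    public class InterAppPolicy {
        // Hypothetical policy: only this package may learn home-occupancy status.
        private static final Set<String> OCCUPANCY_RECEIVERS =
                Set.of("com.example.trusted.dashboard");

        public static boolean mayDeliver(String receiverPkg, String dataLabel) {
            if ("home.occupancy".equals(dataLabel)) {
                return OCCUPANCY_RECEIVERS.contains(receiverPkg);
            }
            return true; // unlabeled data falls back to the platform's own checks
        }

        public static void main(String[] args) {
            System.out.println(mayDeliver("com.example.trusted.dashboard", "home.occupancy")); // true
            System.out.println(mayDeliver("com.ads.collector", "home.occupancy"));             // false
        }
    }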

Hamid Bagheri, Eunsuk Kang, Sam Malek, Daniel Jackson.  2015.  Detection of Design Flaws in the Android Permission Protocol Through Bounded Verification. 20th International Symposium on Formal Methods.

The ever-increasing expansion of mobile applications into nearly every aspect of modern life, from banking to healthcare systems, is making their security more important than ever. Modern smartphone operating systems (OS) rely substantially on the permission-based security model to enforce restrictions on the operations that each application can perform. In this paper, we perform an analysis of the permission protocol implemented in Android, a popular OS for smartphones. We propose a formal model of the Android permission protocol in Alloy, and describe a fully automatic analysis that identifies potential flaws in the protocol. A study of real-world Android applications corroborates our finding that the flaws in the Android permission protocol can have severe security implications, in some cases allowing the attacker to bypass the permission checks entirely.

Javier Camara, Gabriel Moreno, David Garlan.  2015.  Reasoning about Human Participation in Self-Adaptive Systems. SEAMS '15 Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems.

Self-adaptive systems overcome many of the limitations of human supervision in complex software-intensive systems by endowing them with the ability to automatically adapt their structure and behavior in the presence of runtime changes. However, adaptation in some classes of systems (e.g., safety-critical) can benefit by receiving information from humans (e.g., acting as sophisticated sensors, decision-makers), or by involving them as system-level effectors to execute adaptations (e.g., when automation is not possible, or as a fallback mechanism). However, human participants are influenced by factors external to the system (e.g., training level, fatigue) that affect the likelihood of success when they perform a task, its duration, or even if they are willing to perform it in the first place. Without careful consideration of these factors, it is unclear how to decide when to involve humans in adaptation, and in which way. In this paper, we investigate how the explicit modeling of human participants can provide a better insight into the trade-offs of involving humans in adaptation. We contribute a formal framework to reason about human involvement in self-adaptation, focusing on the role of human participants as actors (i.e., effectors) during the execution stage of adaptation. The approach consists of: (i) a language to express adaptation models that capture factors affecting human behavior and its interactions with the system, and (ii) a formalization of these adaptation models as stochastic multiplayer games (SMGs) that can be used to analyze human-system-environment interactions. We illustrate our approach in an adaptive industrial middleware used to monitor and manage sensor networks in renewable energy production plants.

Bradley Schmerl, Jeff Gennari, David Garlan.  2015.  An Architecture Style for Android Security Analysis. HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security.

Modern frameworks are required to be extendable as well as secure. However, these two qualities are often at odds. In this poster we describe an approach that uses a combination of static analysis and run-time management, based on software architecture models, that can improve security while maintaining framework extendability.

Alireza Sadeghi, Hamid Bagheri, Sam Malek.  2015.  Analysis of Android Inter-App Security Vulnerabilities Using COVERT. ICSE '15 Proceedings of the 37th International Conference on Software Engineering. 2

The state of the art in securing mobile software systems is substantially intended to detect and mitigate vulnerabilities in a single app, but fails to identify vulnerabilities that arise due to the interaction of multiple apps, such as collusion attacks and privilege escalation chaining, shown to be quite common in the apps on the market. This paper demonstrates COVERT, a novel approach and accompanying tool-suite that relies on a hybrid static analysis and lightweight formal analysis technique to enable compositional security assessment of complex software. Through static analysis of Android application packages, it extracts relevant security specifications in an analyzable formal specification language, and checks them as a whole for inter-app vulnerabilities. To our knowledge, COVERT is the first formally-precise analysis tool for automated compositional analysis of Android apps. Our study of hundreds of Android apps revealed dozens of inter-app vulnerabilities, many of which were previously unknown. A video highlighting the main features of the tool can be found at: http://youtu.be/bMKk7OW7dGg.

2016-02-11
Limin Jia, Shayak Sen, Deepak Garg, Anupam Datta.  2015.  A Logic of Programs with Interface-Confined Code. 2015 IEEE 28th Computer Security Foundations Symposium (CSF).

Interface-confinement is a common mechanism that secures untrusted code by executing it inside a sandbox. The sandbox limits (confines) the code's interaction with key system resources to a restricted set of interfaces. This practice is seen in web browsers, hypervisors, and other security-critical systems. Motivated by these systems, we present a program logic, called System M, for modeling and proving safety properties of systems that execute adversary-supplied code via interface-confinement. In addition to using computation types to specify effects of computations, System M includes a novel invariant type to specify the properties of interface-confined code. The interpretation of invariant type includes terms whose effects satisfy an invariant. We construct a step-indexed model built over traces and prove the soundness of System M relative to the model. System M is the first program logic that allows proofs of safety for programs that execute adversary-supplied code without forcing the adversarial code to be available for deep static analysis. System M can be used to model and verify protocols as well as system designs. We demonstrate the reasoning principles of System M by verifying the state integrity property of the design of Memoir, a previously proposed trusted computing system.