Biblio
Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, when adaptations have latency, that is, they take some time to produce their effect, they must be started with sufficient lead time so that they complete by the time their effect is needed. Proactive latency-aware adaptation addresses these issues by making adaptation decisions with a look-ahead horizon and taking adaptation latency into account. In this paper we present an approach for proactive latency-aware adaptation under uncertainty that uses probabilistic model checking for adaptation decisions. The key idea is to use a formal model of the adaptive system in which the adaptation decision is left underspecified through nondeterminism, and have the model checker resolve the nondeterministic choices so that the accumulated utility over the horizon is maximized. The adaptation decision is optimal over the horizon and takes into account the inherent uncertainty of the environment predictions needed for looking ahead. Our results show that deciding over a look-ahead horizon, and factoring in both tactic latency and environment uncertainty, considerably improve the effectiveness of adaptation decisions.
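To make the trade-off concrete, the following Java sketch (an illustration only, not the authors' probabilistic-model-checking formulation) compares the expected accumulated utility over a look-ahead horizon of starting a latency-bearing tactic now versus not adapting; the utility function, capacity model, and parameter names are assumptions made for the example.

// Minimal sketch of the decision the abstract describes: is it worth starting
// a tactic now, given that its effect only arrives after `latency` steps?
final class LookaheadDecision {

    // Assumed per-step utility: reward for served load, small charge for capacity.
    static double utility(double capacity, double load) {
        return Math.min(capacity, load) - 0.1 * capacity;
    }

    // Accumulated utility over the horizon if added capacity arrives after `latency` steps.
    static double accumulatedUtility(double[] predictedLoad, double baseCapacity,
                                     double addedCapacity, int latency) {
        double total = 0.0;
        for (int t = 0; t < predictedLoad.length; t++) {
            double capacity = (t >= latency) ? baseCapacity + addedCapacity : baseCapacity;
            total += utility(capacity, predictedLoad[t]);
        }
        return total;
    }

    // Start the tactic now only if doing so improves utility over the horizon.
    // The paper instead lets a probabilistic model checker resolve this choice
    // while accounting for uncertainty in the environment predictions.
    static boolean shouldStartNow(double[] predictedLoad, double baseCapacity,
                                  double addedCapacity, int latency) {
        return accumulatedUtility(predictedLoad, baseCapacity, addedCapacity, latency)
             > accumulatedUtility(predictedLoad, baseCapacity, 0.0, 0);
    }
}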
Smart home automation and IoT promise to bring many advantages, but they also expose their users to certain security and privacy vulnerabilities. For example, leaking information about a person's absence from home or the medication somebody is taking may have serious security and privacy consequences for home users and potential legal implications for providers of home automation and IoT platforms. We envision that a new ecosystem within an existing smartphone ecosystem will be a suitable platform for distribution of apps for smart home and IoT devices. Android is increasingly becoming a popular platform for smart home and IoT devices and applications. Built-in security mechanisms in ecosystems such as Android have limitations that can be exploited by malicious apps to leak users' sensitive data to unintended recipients. For instance, Android enforces that an app requires the Internet permission in order to access a web server, but it does not control which servers the app talks to or what data it shares with other apps. Therefore, sub-ecosystems that enforce additional fine-grained custom policies on top of the existing policies of smartphone ecosystems are necessary for smart home or IoT platforms. To this end, we have built a tool that enforces additional policies on inter-app interactions and permissions of Android apps. We have done preliminary testing of our tool on three proprietary apps developed by a future provider of a home automation platform. Our initial evaluation demonstrates that it is possible to develop mechanisms that allow definition and enforcement of custom security policies appropriate for ecosystems such as smart home automation and IoT.
The ever-increasing expansion of mobile applications into nearly every aspect of modern life, from banking to healthcare systems, is making their security more important than ever. Modern smartphone operating systems (OS) rely substantially on the permission-based security model to enforce restrictions on the operations that each application can perform. In this paper, we perform an analysis of the permission protocol implemented in Android, a popular OS for smartphones. We propose a formal model of the Android permission protocol in Alloy, and describe a fully automatic analysis that identifies potential flaws in the protocol. A study of real-world Android applications corroborates our finding that the flaws in the Android permission protocol can have severe security implications, in some cases allowing the attacker to bypass the permission checks entirely.
Self-adaptive systems overcome many of the limitations of human supervision in complex software-intensive systems by endowing them with the ability to automatically adapt their structure and behavior in the presence of runtime changes. However, adaptation in some classes of systems (e.g., safety-critical) can benefit from receiving information from humans (e.g., acting as sophisticated sensors or decision-makers), or from involving them as system-level effectors to execute adaptations (e.g., when automation is not possible, or as a fallback mechanism). Yet human participants are influenced by factors external to the system (e.g., training level, fatigue) that affect the likelihood of success when they perform a task, its duration, or even whether they are willing to perform it in the first place. Without careful consideration of these factors, it is unclear how to decide when to involve humans in adaptation, and in which way. In this paper, we investigate how the explicit modeling of human participants can provide better insight into the trade-offs of involving humans in adaptation. We contribute a formal framework to reason about human involvement in self-adaptation, focusing on the role of human participants as actors (i.e., effectors) during the execution stage of adaptation. The approach consists of: (i) a language to express adaptation models that capture factors affecting human behavior and its interactions with the system, and (ii) a formalization of these adaptation models as stochastic multiplayer games (SMGs) that can be used to analyze human-system-environment interactions. We illustrate our approach in an adaptive industrial middleware used to monitor and manage sensor networks in renewable energy production plants.
Modern frameworks are required to be extensible as well as secure. However, these two qualities are often at odds. In this poster we describe an approach that uses a combination of static analysis and run-time management, based on software architecture models, to improve security while maintaining framework extensibility.
The state of the art in securing mobile software systems is substantially aimed at detecting and mitigating vulnerabilities in a single app, but fails to identify vulnerabilities that arise due to the interaction of multiple apps, such as collusion attacks and privilege-escalation chaining, which have been shown to be quite common in apps on the market. This paper demonstrates COVERT, a novel approach and accompanying tool-suite that relies on a hybrid static analysis and lightweight formal analysis technique to enable compositional security assessment of complex software. Through static analysis of Android application packages, it extracts relevant security specifications in an analyzable formal specification language, and checks them as a whole for inter-app vulnerabilities. To our knowledge, COVERT is the first formally-precise analysis tool for automated compositional analysis of Android apps. Our study of hundreds of Android apps revealed dozens of inter-app vulnerabilities, many of which were previously unknown. A video highlighting the main features of the tool can be found at: http://youtu.be/bMKk7OW7dGg.
Interface-confinement is a common mechanism that secures untrusted code by executing it inside a sandbox. The sandbox limits (confines) the code's interaction with key system resources to a restricted set of interfaces. This practice is seen in web browsers, hypervisors, and other security-critical systems. Motivated by these systems, we present a program logic, called System M, for modeling and proving safety properties of systems that execute adversary-supplied code via interface-confinement. In addition to using computation types to specify the effects of computations, System M includes a novel invariant type to specify the properties of interface-confined code. The interpretation of the invariant type includes terms whose effects satisfy an invariant. We construct a step-indexed model built over traces and prove the soundness of System M relative to the model. System M is the first program logic that allows proofs of safety for programs that execute adversary-supplied code without forcing the adversarial code to be available for deep static analysis. System M can be used to model and verify protocols as well as system designs. We demonstrate the reasoning principles of System M by verifying the state integrity property of the design of Memoir, a previously proposed trusted computing system.
Modern software systems are often compositions of entities that increasingly use self-adaptive capabilities to improve their behavior to achieve systemic quality goals. Self-adaptive managers for each component system attempt to provide locally optimal results, but if they cooperated and potentially coordinated their efforts, it might be possible to obtain more globally optimal results. The emergent properties that result from such composition and cooperation of self-adaptive systems are not well understood, are difficult to reason about, and present a key challenge in the evolution of modern software systems. For example, the effects of coordination patterns and protocols on emergent properties, such as the resiliency of the collectives, need to be understood when designing these systems. In this paper we propose that probabilistic model checking of stochastic multiplayer games (SMGs) provides a promising approach to analyze, understand, and reason about emergent properties in collectives of adaptive systems (CAS). Probabilistic model checking of SMGs is a technique particularly suited to analyzing emergent properties in CAS since SMG models capture: (i) the uncertainty and variability intrinsic to a CAS and its execution environment in the form of probabilistic and nondeterministic choices, and (ii) the competitive/cooperative aspects of the interplay among the constituent systems of the CAS. Analysis of SMGs allows us to reason about properties such as worst-case scenarios, which constitutes a new contribution to understanding emergent properties in CAS. We investigate the use of SMGs to show how they can be useful in analyzing the impact of communication topology for collections of fully cooperative systems defending against an external attack.
Designing secure cyber-physical systems (CPS) is a particularly difficult task since security vulnerabilities stem not only from traditional cybersecurity concerns, but also from physical ones. Many of the standard methods for CPS design make strong and unverified assumptions about the trustworthiness of physical devices, such as sensors. When these assumptions are violated, subtle inter-domain vulnerabilities are introduced into the system model. In this paper we use formal specification of analysis contracts to expose security assumptions and guarantees of analyses from the reliability, control, and sensor security domains. We show that this specification allows us to determine where these assumptions are violated, opening the door to malicious attacks. We demonstrate how this approach can help discover and prevent vulnerabilities using a self-driving car example.
Insider threats are a well-known problem, and previous studies have shown that they have a huge impact across a wide range of sectors, including financial services, government, critical infrastructure services, and telecommunications. Users, while interacting with any software system, leave a trace of what nodes they accessed and in what sequence. We propose to translate these sequences of observed activities into paths on the graph of the underlying software architectural model. We propose a clustering algorithm to find anomalies in the data, which can be combined with contextual information to confirm an insider threat.
Parallel garbage collection has been used to speed up the collection process on multicore architectures. Similar to other parallel techniques, balancing the workload among threads is critical to ensuring good overall collection performance. To this end, work stealing is employed by the current state-of-the-art Java Virtual Machine, OpenJDK, to keep GC threads from idling during a collection process. However, we found that the current algorithm is not efficient. Its usage can often cause GC performance to be worse than when work stealing is not used. In this paper, we identify three factors that affect work stealing efficiency: which tasks can benefit from stealing, how frequently to attempt stealing, and the performance impact of failed stealing attempts. Based on this analysis, we propose SmartStealing, a new algorithm that can automatically decide whether to attempt stealing at a particular point during execution. If stealing is attempted, it can efficiently identify a task to steal from. We then compare collection performance when (i) the default work stealing algorithm is used, (ii) work stealing is not used at all, and (iii) the SmartStealing approach is used. Without modifying the rest of the garbage collection system, our evaluation shows that SmartStealing can reduce the parallel GC execution time for 19 of the 21 benchmarks. The average reduction is 50.4% and the highest reduction is 78.7%. We also investigate the performance of SmartStealing on NUMA and UMA architectures.
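The following Java sketch illustrates the kind of decision SmartStealing makes (it is not the OpenJDK implementation, which lives in the JVM's C++ GC code): a worker attempts a steal only when its recent attempts suggest the steal is likely to pay off. The window size, success threshold, and queue representation are assumptions made for the example.

import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of a steal heuristic: skip stealing when recent attempts mostly failed.
final class StealHeuristic<T> {
    private static final int WINDOW = 16;            // assumed history window
    private static final double MIN_SUCCESS = 0.25;  // assumed success-rate threshold
    private int attempts;
    private int successes;

    boolean shouldAttemptSteal() {
        if (attempts < WINDOW) {
            return true;                              // not enough history yet
        }
        double rate = (double) successes / attempts;
        // Occasionally probe even when history looks bad, so the heuristic
        // can recover once other queues fill up again.
        return rate >= MIN_SUCCESS || ThreadLocalRandom.current().nextInt(WINDOW) == 0;
    }

    // Try to take a task from a randomly chosen victim's queue.
    T trySteal(List<ConcurrentLinkedDeque<T>> queues, int selfIndex) {
        if (!shouldAttemptSteal()) {
            return null;
        }
        int victim = ThreadLocalRandom.current().nextInt(queues.size());
        if (victim == selfIndex) {
            return null;                              // never steal from ourselves
        }
        attempts++;
        T task = queues.get(victim).pollLast();       // steal from the opposite end
        if (task != null) {
            successes++;
        }
        return task;
    }
}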
Cilk Plus and OpenMP are parallel language extensions for the C and C++ programming languages. The CPLEX Study Group of the ISO/IEC C Standards Committee is developing a proposal for a parallel programming extension to C that combines ideas from Cilk Plus and OpenMP. We conducted a preliminary comparison of Cilk Plus and OpenMP in a master's-level course on security to evaluate the design tradeoffs in the usability and security of these two approaches. The eventual goal is to inform decision making within the committee. We found several usability problems worthy of further investigation based on student performance, including declaring and using reductions, multi-line compiler directives, and the understandability of task assignment to threads.
Syntax extension mechanisms are powerful, but reasoning about syntax extensions can be difficult. Recent work on type-specific languages (TSLs) addressed reasoning about composition, hygiene and typing for extensions introducing new literal forms. We supplement TSLs with typed syntax macros (TSMs), which, unlike TSLs, are explicitly invoked to give meaning to delimited segments of arbitrary syntax. To maintain a typing discipline, we describe two flavors of term-level TSMs: synthetic TSMs specify the type of term that they generate, while analytic TSMs can generate terms of arbitrary type, but can only be used in positions where the type is otherwise known. At the level of types, we describe a third flavor of TSM that generates a type of a specified kind along with its TSL, and show interesting use cases where the two mechanisms operate in concert.
Application Programming Interfaces (APIs) often define protocols --- restrictions on the order of client calls to API methods. API protocols are common and difficult to use, which has generated tremendous research effort in alternative specification, implementation, and verification techniques. However, little is understood about the barriers programmers face when using these APIs, and therefore the research effort may be misdirected.
To understand these barriers better, we perform a two-part qualitative study. First, we study developer forums to identify problems that developers have with protocols. Second, we perform a think-aloud observational study, in which we systematically observe professional programmers struggle with these same problems to get more detail on the nature of their struggles and how they use available resources. In our observations, programmer time was spent primarily on four types of searches of the protocol state space. These observations suggest protocol-targeted tools, languages, and verification techniques will be most effective if they enable programmers to efficiently perform state search.
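As a concrete example of an API protocol in the sense used above, consider the call-order restriction on java.util.Iterator (a standard Java illustration, not one of the study's tasks):

import java.util.Iterator;
import java.util.List;

// An API protocol: the order of client calls matters. Iterator expects
// hasNext() to be consulted before each next(); calling next() past the
// end violates the protocol and throws NoSuchElementException.
class ProtocolExample {
    static int sum(List<Integer> xs) {
        int total = 0;
        Iterator<Integer> it = xs.iterator();
        while (it.hasNext()) {      // protocol: check state before advancing
            total += it.next();
        }
        // Calling it.next() here would violate the protocol.
        return total;
    }
}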
Foundational models of object-oriented constructs typically model objects as records with a structural type. However, many object-oriented languages are class-based; statically-typed formal models of these languages tend to sacrifice the foundational nature of the record-based models, and in addition cannot express dynamic class loading or creation. In this paper, we explore how to model statically-typed object-oriented languages that support dynamic class creation using foundational constructs of type theory. We start with an extensible tag construct motivated by type theory, and adapt it to support static reasoning about class hierarchy and the tags supported by each object. The result is a model that better explains the relationship between object-oriented and functional programming paradigms, suggests a useful enhancement to functional programming languages, and paves the way for more expressive statically typed object-oriented languages. In that vein, we describe the design and implementation of the Wyvern language, which leverages our theory.
For several decades, inheritance and delegation have been widely adopted for code reuse in object-oriented languages. Though extensive research has explored the expressiveness of these techniques, little is known about how the choice between them affects formal reasoning. In this paper, we explore this question by describing two core languages that are identical except for the use of inheritance and delegation, respectively. We add support for formal reasoning about typestate to both languages, and evaluate the complexity of the formal semantics and compare the example specifications. Our study suggests that our variant of delegation can substantially simplify typestate reasoning, while inheritance makes code more succinct in the case where open recursion is used.
The ubiquitously-installed Java Runtime Environment (JRE) provides a complex, flexible set of mechanisms that support the execution of untrusted code inside a secure sandbox. However, many recent exploits have successfully escaped the sandbox, allowing attackers to infect numerous Java hosts. We hypothesize that the Java security model affords developers more flexibility than they need or use in practice, and thus its complexity compromises security without improving practical functionality. We describe an empirical study of the ways benign open-source Java applications use and interact with the Java security manager. We found that developers regularly misunderstand or misuse Java security mechanisms, that benign programs do not use all of the vast flexibility afforded by the Java security model, and that there are clear differences between the ways benign and exploit programs interact with the security manager. We validate these results by deriving two rules on application behavior that restrict (1) security manager modifications and (2) privilege escalation. We demonstrate that enforcing these rules at runtime stops a representative proportion of modern Java 7 exploits without breaking backwards compatibility with benign applications. These practical rules should be enforced in the JRE to fortify the Java sandbox.
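As a rough illustration of the first rule (restricting security manager modifications), a custom SecurityManager can deny any later attempt to replace it. This is a minimal sketch of the idea, not the enforcement mechanism the authors built:

import java.security.Permission;

// Once installed, this manager blocks further security manager replacement,
// mirroring the spirit of rule (1); everything else falls back to the
// standard policy checks.
public class LockedSecurityManager extends SecurityManager {
    @Override
    public void checkPermission(Permission perm) {
        if (perm instanceof RuntimePermission
                && "setSecurityManager".equals(perm.getName())) {
            throw new SecurityException("security manager modification blocked");
        }
        super.checkPermission(perm);  // normal policy-based checks
    }

    public static void install() {
        System.setSecurityManager(new LockedSecurityManager());
    }
}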
The overarching vision of social machines is to facilitate social processes by having computers provide administrative support. We conceive of a social machine as a sociotechnical system (STS): a software-supported system in which autonomous principals such as humans and organizations interact to exchange information and services. Existing approaches for social machines emphasize the technical aspects and inadequately support the meanings of social processes, leaving them informally realized in human interactions. We posit that a fundamental rethinking is needed to incorporate accountability, essential for addressing the openness of the Web and the autonomy of its principals.
We introduce Interaction-Oriented Software Engineering (IOSE) as a paradigm expressly suited to capturing the social basis of STSs. Motivated by promoting openness and autonomy, IOSE focuses not on implementation but on social protocols, which specify how social relationships, characterizing the accountability of the concerned parties, progress as those parties interact. Motivated by providing computational support, IOSE adopts the accountability representation to capture the meaning of a social machine's states and transitions.
We demonstrate IOSE via examples drawn from healthcare. We reinterpret the classical software engineering (SE) principles for the STS setting and show how IOSE is better suited than traditional software engineering for supporting social processes. The contribution of this paper is a new paradigm for STSs, evaluated via conceptual analysis.
Detecting and preventing attacks before they compromise a system can be done using acceptance testing, redundancy-based mechanisms, and external consistency checking such as external monitoring and watchdog processes. Diversity-based adjudication is a step towards an oracle that uses the knowable behavior of a healthy system. Under the best circumstances, that approach is able to detect even zero-day attacks. In this approach we use functionally equivalent but diverse components, and we compare their output vectors and reactions for a given input vector. This paper discusses the practical relevance of this approach in the context
Automated cyber attacks tend to be schedule- and resource-limited. The primary progress metric is often “coverage” of pre-determined “known” vulnerabilities that may not have been patched, along with possible zero-day exploits (if such exist). We present and discuss a hypergeometric process model that describes such attack patterns. We used web request signatures from the logs of a production web server to assess the applicability of the model.
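For reference, the standard hypergeometric probability underlying such models (the paper's exact parameterization is not given in this abstract) is

P(X = k) = \frac{\binom{K}{k}\,\binom{N-K}{n-k}}{\binom{N}{n}},

where, in one plausible reading, N is the number of candidate vulnerabilities in the attacker's list, K the number actually present and unpatched on the target, n the number of probes made, and k the number of successful hits.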
We have been studying the extension of the classical Software Reliability Engineering (SRE) methodology into the security space. We combine “classical” reliability modeling, applied to reported vulnerabilities found under “normal” operational profile conditions, with safety-oriented fault management processes. We illustrate this with open source Fedora software.
Our initial results appear to indicate that generating a repeatable automated test strategy that explicitly covers the “top 25” security problems may help considerably, eliminating perhaps as much as 50% of the field-observable problems. However, genuinely aleatoric flaws, as well as more process-oriented flaws arising from incomplete analysis and design, remain. While we have made some progress in identifying focus areas, a number of questions remain, and we continue working on them.
The process of software development and evolution has proven difficult to improve. For example, well-documented security issues such as SQL injection (SQLi) still top most vulnerability lists after more than a decade. Quantitative security process and quality metrics are often neglected due to lack of time and resources. Security problems are hard to quantify and even harder to predict or relate to any process improvement activity. The goal of this thesis is to assess the usefulness of “classical” software reliability engineering (SRE) models in the context of open source software security, the conditions under which they may be useful, and the information that they can provide with respect to the security quality of a software product. We start with security problem reports for the open source Fedora series of software releases. We illustrate how one can learn from the normal operational profile about the non-operational processes related to security problems. One aspect is classification of security problems based on the human traits that contribute to the injection of problems into code, whether due to poor practices or limited knowledge (epistemic errors), or due to random accidental events (aleatoric errors). Knowing the distribution aids in the development of an attack profile. In the case of Fedora, the distribution of security problems found post-release was consistent across four different releases of the software. The security problem discovery rate appears to be roughly constant but much lower than the initial non-security problem discovery rate. Previous work has shown that non-operational testing can help accelerate and focus the problem discovery rate and that it can be successfully modeled. We find that some classical reliability models can be used with success to estimate the residual number of security problems, and through that provide a measure of the security characteristics of the software. We propose an agile software testing process that combines operational and non-operational (or attack-related) testing with the intent of finding more security problems faster.
Can software reliability models be used to assess software security? One of the issues is that security problems are relatively rare under “normal” operational profiles, while “classical” reliability models may not be suitable for use under attack conditions. We investigated a range of Fedora open source software security problems to see if some of the basic assumptions behind software reliability growth models hold for the discovery of security problems in non-attack situations. We find that in some cases, under “normal” operational use, the security problem detection process may be described as a Poisson process. In those cases, we can use appropriate classical software reliability growth models to assess the “security reliability” of that software in non-attack situations. We analyzed the security problem discovery rate for Red Hat Fedora. We find that security problems are relatively rare, and their rate of discovery appears to be relatively constant under “normal” (non-attack) conditions. The discovery process often appears to satisfy the Poisson assumption, opening the door to the use of classical reliability models. Using a Yamada S-shaped model fit to v15, we illustrate that in some cases such models may be effective in predicting the number of remaining security problems, and thus may offer a way of assessing the security “quality” of the software product (although not necessarily its behavior under an attack).
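For reference, the mean value function of the Yamada (delayed) S-shaped reliability growth model mentioned above has the standard form

m(t) = a \left( 1 - (1 + b t)\, e^{-b t} \right),

where a is the expected total number of faults (here, security problems) and b is the fault detection rate; the parameter values fitted to v15 are not given in this abstract.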
In our previous work we showed that for Fedora, under normal operational conditions, security problem discovery appears to be a random process. While in the case of Fedora, and a number of other open source products, classical reliability models can be adapted to estimate the number of residual security problems under “normal” operational usage (not attacks), the predictive ability of the models is lower for security faults due to the rarity of security events and because there appears to be no real security reliability growth. The ratio of security to non-security faults is an indicator that the process needs improving, but it also may be leveraged to assess the vulnerability profile of a release and possibly guide testing of its next version. We manually analyzed randomly sampled problems for four different versions of Fedora and classified them into security vulnerability categories. We also analyzed the distribution of these problems over the software’s lifespan, and we found that they exhibit a symmetry which can perhaps be used in estimating the number of residual security problems in the software. Based on our work, we believe that an approach to vulnerability elimination based on a combination of “classical” and some non-operational “bounded” high-assurance testing along the lines discussed in our previous work may yield good vulnerability elimination results, as well as a way of estimating the vulnerability level of a release. Classical SRE methods, metrics and models can be used to track both non-security and security problem detection under a normal operational profile. We can then model the reliability growth, if any, and estimate the number of residual faults by estimating the lower and upper bounds on the total number of faults of a certain type. In production, there may be a simpler alternative: just count the vulnerabilities and project over the next period assuming a constant vulnerability discovery rate. In the testing phase, to accelerate the process, one might leverage collected vulnerability information to generate non-operational test cases aimed at vulnerability categories. The observed distributions of security problems reported under normal “operational” usage appear to support the above approach – i.e., what is learned, say, in the first X weeks can then be leveraged in selecting test cases in the next stage. Similarly, what is learned about a product Y weeks after its release may be very indicative of its vulnerability profile for the rest of its life, given the assumption of a constant vulnerability discovery rate.
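As a worked form of the constant-rate projection suggested above (notation assumed for illustration): if N_obs vulnerabilities have been reported over the first t_elapsed weeks, the estimated discovery rate and the projection for the next Δt weeks are

\hat{\lambda} = \frac{N_{\mathrm{obs}}}{t_{\mathrm{elapsed}}}, \qquad \hat{N}_{\Delta t} \approx \hat{\lambda}\, \Delta t.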