Bibliography
Self-adaptive systems overcome many of the limitations of human supervision in complex software-intensive systems by endowing them with the ability to automatically adapt their structure and behavior in the presence of runtime changes. However, adaptation in some classes of systems (e.g., safety-critical) can benefit from receiving information from humans (e.g., acting as sophisticated sensors or decision-makers), or from involving them as system-level effectors to execute adaptations (e.g., when automation is not possible, or as a fallback mechanism). Yet human participants are influenced by factors external to the system (e.g., training level, fatigue) that affect the likelihood of success when they perform a task, its duration, or even whether they are willing to perform it in the first place. Without careful consideration of these factors, it is unclear how to decide when to involve humans in adaptation, and in which way. In this paper, we investigate how the explicit modeling of human participants can provide better insight into the trade-offs of involving humans in adaptation. We contribute a formal framework to reason about human involvement in self-adaptation, focusing on the role of human participants as actors (i.e., effectors) during the execution stage of adaptation. The approach consists of: (i) a language to express adaptation models that capture factors affecting human behavior and its interactions with the system, and (ii) a formalization of these adaptation models as stochastic multiplayer games (SMGs) that can be used to analyze human-system-environment interactions. We illustrate our approach in an adaptive industrial middleware used to monitor and manage sensor networks in renewable energy production plants.
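The trade-off described here can be pictured with a toy expected-utility comparison (not the paper's SMG formalization): whether to delegate an adaptation to a human actor whose success probability and timing depend on factors such as training and fatigue, or to adapt automatically. All names and numbers below are hypothetical.

```python
# Illustrative sketch only: a crude model of when delegating an adaptation
# to a human beats adapting automatically. Factor weights are invented.
from dataclasses import dataclass

@dataclass
class HumanActor:
    training: float   # 0..1, higher means better trained
    fatigue: float    # 0..1, higher means more fatigued

    def success_prob(self) -> float:
        # training raises the chance of success, fatigue lowers it
        return max(0.0, min(1.0, 0.5 + 0.4 * self.training - 0.3 * self.fatigue))

    def expected_duration(self, base: float) -> float:
        return base * (1.0 + self.fatigue)

def expected_utility(p_success: float, reward: float,
                     duration: float, time_penalty: float) -> float:
    return p_success * reward - time_penalty * duration

human = HumanActor(training=0.8, fatigue=0.6)
u_human = expected_utility(human.success_prob(), reward=10.0,
                           duration=human.expected_duration(5.0), time_penalty=0.5)
u_auto = expected_utility(0.7, reward=10.0, duration=2.0, time_penalty=0.5)
print("delegate to human" if u_human > u_auto else "adapt automatically")
```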
To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies’ post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine—based on real-world password data from 249 participants—whether and how constructively participants changed their passwords after a breach announcement. Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords, and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants’ other passwords, and participants rarely changed passwords on other sites even when these were the same as or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.
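As a small illustration of the strength comparison mentioned above, the ratio is taken over log10-transformed strength estimates (e.g., estimated guess counts from a guessability model such as zxcvbn); the figures below are invented for the example.

```python
# Toy illustration of a "1.3x stronger" comparison on a log10 scale.
import math

old_guesses = 10**6      # estimated guesses needed to crack the old password
new_guesses = 10**7.8    # estimated guesses needed to crack the new password

ratio = math.log10(new_guesses) / math.log10(old_guesses)
print(f"new password is {ratio:.1f}x stronger (log10-transformed)")  # 1.3x
```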
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop – i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanation is sometimes helpful to allow the human to understand why the system is making certain decisions and calibrate confidence from the human perspective. However, explanations come with costs in terms of delayed actions and the possibility that a human may make a bad judgement. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when the explanation should be used in order to improve overall system utility.
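A minimal sketch of the kind of decision the framework is meant to support, with hypothetical utilities: explaining delays action and risks a poor human judgement, but it can improve interventions on questionable decisions, so an explanation is shown only when its expected utility is higher.

```python
# Illustrative only: compare expected utility with and without an explanation.
# All utility values, costs, and probabilities are made up for the sketch.

def utility_without_explanation(p_bad_decision: float) -> float:
    return 10.0 * (1.0 - p_bad_decision)

def utility_with_explanation(p_bad_decision: float, delay_cost: float,
                             correction_gain: float) -> float:
    # the operator may catch and correct a questionable adaptation decision,
    # at the price of delaying the action
    return (10.0 * (1.0 - p_bad_decision)
            + correction_gain * p_bad_decision
            - delay_cost)

p_bad = 0.3   # probability the current adaptation decision is questionable
if utility_with_explanation(p_bad, delay_cost=1.0, correction_gain=8.0) > \
        utility_without_explanation(p_bad):
    print("provide explanation to the operator")
else:
    print("proceed without explanation")
```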
In this work, we develop a novel Bayesian neural network methodology to achieve strong adversarial robustness without the need for online adversarial training. Unlike previous efforts in this direction, we do not rely solely on the stochasticity of network weights obtained by minimizing the divergence between the learned parameter distribution and a prior. Instead, we additionally require that the model maintain some expected uncertainty with respect to all input covariates. We demonstrate that by encouraging the network to distribute its uncertainty evenly across inputs, the network becomes less susceptible to localized, brittle features, which imparts a natural robustness to targeted perturbations. We show empirical robustness on several benchmark datasets.
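One plausible reading of this idea, sketched below under my own assumptions (this is not the authors' implementation), is an extra penalty that discourages the model from concentrating its input sensitivity on a few covariates rather than spreading it evenly across all of them.

```python
# Illustrative PyTorch fragment: penalize uneven per-feature input sensitivity.
# This is a sketch of the "spread across inputs" intuition, not the paper's method.
import torch
import torch.nn.functional as F

def spread_penalty(model, x, y):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    mag = grad.abs().flatten(1)                        # per-feature sensitivity
    mag = mag / (mag.sum(dim=1, keepdim=True) + 1e-8)  # normalize to a distribution
    uniform = torch.full_like(mag, 1.0 / mag.shape[1])
    return ((mag - uniform) ** 2).sum(dim=1).mean()    # zero when evenly spread

# Per-batch objective (kl_term would come from the variational/Bayesian layers):
# total_loss = F.cross_entropy(model(x), y) + beta * kl_term + lam * spread_penalty(model, x, y)
```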
End-users’ trust in automated agents is important as automated decision-making and planning are increasingly used in many aspects of people’s lives. In real-world applications of planning, multiple optimization objectives are often involved. Thus, planning agents’ decisions can involve complex tradeoffs among competing objectives. It can be difficult for end-users to understand why an agent decides on a particular planning solution on the basis of its objective values. As a result, the users may not know whether the agent is making the right decisions, and may lack trust in it. In this work, we contribute an approach, based on contrastive explanation, that enables a multi-objective MDP planning agent to explain its decisions in a way that communicates its tradeoff rationale in terms of domain-level concepts. We conduct a human subjects experiment to evaluate the effectiveness of our explanation approach in a mobile robot navigation domain. The results show that our approach significantly improves the users’ understanding, and confidence in their understanding, of the tradeoff rationale of the planning agent.
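A contrastive tradeoff statement of the kind described here might look like the following sketch, with made-up objective names and values for a robot navigation domain: the chosen plan is compared against a queried alternative, objective by objective.

```python
# Illustrative only: a contrastive comparison over domain-level objectives.
# Objective names, values, and the closing rationale are invented.
chosen      = {"travel_time_s": 120, "collision_risk": 0.02, "intrusiveness": 0.10}
alternative = {"travel_time_s": 95,  "collision_risk": 0.08, "intrusiveness": 0.10}

for objective in chosen:
    delta = alternative[objective] - chosen[objective]
    if delta < 0:
        print(f"The alternative improves {objective} by {-delta:g},")
    elif delta > 0:
        print(f"but it worsens {objective} by {delta:g}.")
print("The chosen plan was preferred because the lower collision_risk "
      "outweighs the extra travel time under the agent's cost weights.")
```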
Programming languages exist to enable people to create and maintain software as effectively as possible. They are subject to two very different sets of requirements: first, the need to provide strong safety guarantees to protect against bugs; and second, the need for users to effectively and efficiently write software that meets their functional and quality requirements. This thesis argues that fusing formal methods for reasoning about programming languages with user-centered design methods is a practical approach to designing languages that make programmers more effective. By doing so, designers can create safer languages that are more effective for programmers than existing languages. The thesis is substantiated by the introduction of PLIERS: Programming Language Iterative Evaluation and Refinement System. PLIERS is a process for designing programming languages that integrates formal methods with user-centered design. The hypothesis that PLIERS is beneficial is supported by two language design projects. In these projects, I show how PLIERS benefits the programming language design process. Glacier is an extension to Java that enforces transitive class immutability, which is a much stronger property than that provided by languages that are in use today. Although there are hundreds of possible ways of restricting changes to state in programming languages, Glacier occupies a point in the design space that was justified by interview studies with professional software engineers. I evaluated Glacier in case studies, showing that it is expressive enough for some real-world applications. I also evaluated Glacier in lab studies and compared it to Java’s final keyword, finding both that Glacier is more effective at expressing immutability than final and that Glacier detects bugs that users are likely to insert in code. Blockchains are distributed computing platforms that aim to enable safe computation among users who do not necessarily trust each other. To improve safety relative to existing languages, in which programmers have repeatedly deployed software with serious bugs, I designed Obsidian, a new programming language for blockchain application development. From observations about typical blockchain applications, I derived two features that motivated the design of Obsidian. First, blockchain applications typically implement state machines, which support different operations in different states. Obsidian uses typestate, which lifts dynamic state into static types, to provide static guarantees regarding object state. Second, blockchain applications frequently manipulate resources, such as virtual currency. Obsidian provides a notion of ownership, which distinguishes one reference from all others. Obsidian supports resources via linear types, which ensure that owning references to resources are not accidentally lost. The combination of resources and typestate results in a novel set of design problems for the type system; although each idea has been explored individually, combining them requires unifying different perspectives on linearity. Furthermore, no language with either one of these features has been designed in a user-centered way or evaluated with users. Typical typestate systems have a complex set of permissions that provides safety properties, but these systems have focused on expressiveness rather than on usability.
Obsidian integrates typestate and resources in a novel way, resulting in a new type system design with a focus on simplicity and usability while retaining the desired safety properties. Obsidian is based on a core calculus I designed, Silica, for which I proved type soundness. In order to make Obsidian as usable as possible, I based its design on the results of formative studies with programmers. I evaluated Obsidian with two case studies, showing that Obsidian can be used to implement relevant programs. I also conducted a randomized controlled trial comparing Obsidian to Solidity, a popular language for writing smart contracts. I found that most of the Obsidian participants learned Obsidian and completed programming tasks after only a short training period; further, in one task, 70% of the participants who used Solidity accidentally inserted bugs that Obsidian’s compiler would have detected. Finally, Obsidian participants completed significantly more tasks correctly than did Solidity participants.
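To illustrate the property Glacier enforces (using Python purely for exposition), transitive class immutability freezes an object's entire reachable state, whereas Java's final only prevents reassignment of the field itself; the classes below are invented.

```python
# Exposition-only sketch of why `final` alone is weaker than transitive
# class immutability as enforced by Glacier.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

class Statement:
    def __init__(self, account):
        self._account = account   # "final"-like: the field itself is never reassigned

stmt = Statement(Account("alice", 100))
stmt._account.balance = 0         # still allowed: `final` does not protect reachable state
# A Glacier-style @Immutable annotation on Statement would reject this mutation,
# and any mutation of state reachable from it, at compile time.
```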
Some blockchain programs (smart contracts) have included serious security vulnerabilities. Obsidian is a new typestate-oriented programming language that uses a strong type system to rule out some of these vulnerabilities. Although Obsidian was designed to promote usability to make it as easy as possible to write programs, strong type systems can cause a language to be difficult to use. In particular, ownership, typestate, and assets, which Obsidian uses to provide safety guarantees, have not seen broad adoption together in popular languages and result in significant usability challenges. We performed an empirical study with 20 participants comparing Obsidian to Solidity, which is the language most commonly used for writing smart contracts today. We observed that Obsidian participants were able to successfully complete more of the programming tasks than the Solidity participants. We also found that the Solidity participants commonly inserted asset-related bugs, which Obsidian detects at compile time.
Blockchain platforms are coming into use for processing critical transactions among participants who have not established mutual trust. Many blockchains are programmable, supporting smart contracts, which maintain persistent state and support transactions that transform the state. Unfortunately, bugs in many smart contracts have been exploited by hackers. Obsidian is a novel programming language with a type system that enables static detection of bugs that are common in smart contracts today. Obsidian is based on a core calculus, Silica, for which we proved type soundness. Obsidian uses typestate to detect improper state manipulation and uses linear types to detect abuse of assets. We integrated a permissions system that encodes a notion of ownership to allow for safe, flexible aliasing. We describe two case studies that evaluate Obsidian’s applicability to the domains of parametric insurance and supply chain management, finding that Obsidian’s type system facilitates reasoning about high-level states and ownership of resources. We compared our Obsidian implementation to a Solidity implementation, observing that the Solidity implementation requires substantial boilerplate to check and track state, whereas Obsidian does this work statically.
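The discipline Obsidian enforces statically can be sketched, purely for illustration, as the following Python emulation of a typestate contract: operations are legal only in particular states, and an owned asset must be explicitly transferred rather than silently dropped. Obsidian rejects violations at compile time; in this sketch they surface as runtime assertion failures.

```python
# Illustrative runtime emulation (not Obsidian code) of typestate plus asset ownership.

class Policy:                      # a toy parametric-insurance contract
    def __init__(self):
        self.state = "Offered"
        self.payout = 100          # an "asset" owned by the contract

    def activate(self):
        assert self.state == "Offered", "activate is only legal in Offered"
        self.state = "Active"

    def claim(self):
        assert self.state == "Active", "claim is only legal in Active"
        amount, self.payout = self.payout, 0   # explicitly transfer the asset out
        self.state = "Closed"
        return amount

p = Policy()
p.activate()
print(p.claim())   # calling claim() before activate() would fail the state check
```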
Blockchains host smart contracts for voting, tokens, and other purposes. Vulnerabilities in contracts are common, often leading to the loss of money. Psamathe is a new language we are designing around a new flow abstraction, which aims to reduce asset bugs and make contracts more concise than in existing languages. We present an overview of Psamathe and discuss two example contracts in Psamathe and Solidity.
This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers. Inspired by $n$-version programming, $n$-ML trains an ensemble of $n$ classifiers, and inputs are classified by a vote of the classifiers in the ensemble. Unlike prior such approaches, however, the classifiers in the ensemble are trained specifically to classify adversarial examples differently, rendering it very difficult for an adversarial example to obtain enough votes to be misclassified. We show that $n$-ML roughly retains the benign classification accuracies of state-of-the-art models on the MNIST, CIFAR10, and GTSRB datasets, while simultaneously defending against adversarial examples with better resilience than the best defenses known to date and, in most cases, with lower classification-time overhead.
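The voting step can be sketched as follows, assuming the n classifiers (and their specially constructed training) already exist; the threshold-based rejection is one plausible way to handle inputs that fail to win enough votes.

```python
# Illustrative sketch of ensemble voting at classification time.
# The classifiers and how they were trained are assumed to exist elsewhere.
from collections import Counter

def n_ml_predict(classifiers, x, threshold):
    votes = Counter(clf.predict(x) for clf in classifiers)
    label, count = votes.most_common(1)[0]
    if count >= threshold:   # enough agreement among the n classifiers
        return label
    return None              # otherwise reject/flag the input as suspicious
```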
Motivated by the transformative impact of deep neural networks (DNNs) on different areas (e.g., image and speech recognition), researchers and anti-virus vendors are proposing end-to-end DNNs for malware detection from raw bytes that do not require manual feature engineering. Given the security sensitivity of the task that these DNNs aim to solve, it is important to assess their susceptibility to evasion.
In this work, we propose an attack that guides binary-diversification tools via optimization to mislead DNNs for malware detection while preserving the functionality of binaries. Unlike previous attacks on such DNNs, ours manipulates instructions that are a functional part of the binary, which makes it particularly challenging to defend against. We evaluated our attack against three DNNs in white-box and black-box settings, and found that it can often achieve success rates near 100%. Moreover, we found that our attack can fool some commercial anti-viruses, in certain cases with a success rate of 85%. We explored several defenses, both new and old, and identified some that can successfully prevent over 80% of our evasion attempts. However, these defenses may still be susceptible to evasion by adaptive attackers, and so we advocate for augmenting malware-detection systems with methods that do not rely on machine learning.
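At a high level, the attack can be pictured as a greedy search over functionality-preserving transformations; the sketch below uses hypothetical helper names and is not the authors' actual optimization procedure.

```python
# Illustrative greedy loop: apply functionality-preserving binary transformations
# and keep only those that lower the detector's maliciousness score.
import random

def evade(binary, transforms, score_fn, budget=1000, target=0.5):
    best = score_fn(binary)
    for _ in range(budget):
        candidate = random.choice(transforms)(binary)   # preserves functionality
        s = score_fn(candidate)
        if s < best:                                    # greedy acceptance
            binary, best = candidate, s
        if best < target:                               # below the detection threshold
            break
    return binary, best
```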
As Cyber-Physical Systems (CPSs) become more autonomous, it becomes harder for humans who interact with the CPSs to understand the behavior of the systems. Particularly for CPSs that must perform tasks while optimizing for multiple quality objectives and acting under uncertainty, it can be difficult for humans to understand the system behavior generated by an automated planner. This work-in-progress presents an approach to clarifying system behavior through interactive explanation, allowing end-users to ask Why and Why-Not questions about specific behaviors of the system and providing answers in the form of contrastive explanations.