CMU SoS Lablet Quarterly Executive Summary - July 2020

A. Fundamental Research
A high-level report of results or partial results that helped move security science forward. In most cases it should point to a "hard problem." These are the most important research accomplishments of the Lablet in the previous quarter.

Jonathan Aldrich

Obsidian: A Language for Secure-by-Construction Blockchain Programs

Highlights: Blockchains have been proposed to support transactions on distributed, shared state, but hackers have exploited security vulnerabilities in existing programs. We are working with the World Bank to develop a parametric insurance platform on the blockchain, using Obsidian, to address the need for stable insurance markets that can respond to severe weather events such as floods or droughts.


Lujo Bauer

Securing Safety-Critical Machine Learning Algorithms

Highlights: We developed a new approach to training ensembles of classifiers to better resist attempts to create malicious inputs that would be misclassified. Similar to n-version programming, this approach relies on the assumption that each classifier makes mistakes independently of the others. This assumption does not generally hold for ML classifiers trained on the same data; the innovation in our work is in how we train classifiers to be purposefully diverse, particularly under adversarial conditions.
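The value of the independence assumption can be illustrated with a toy majority-vote calculation (a minimal sketch, not the Lablet's actual training method: it assumes fully independent errors, which is exactly the property the work above tries to approximate):

```python
from math import comb

def majority_error(n, p):
    """Probability that a majority of n independent classifiers,
    each with error rate p, are wrong simultaneously (n odd)."""
    need = n // 2 + 1  # wrong votes needed to flip the majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

single = 0.20
ensemble = majority_error(3, single)  # 0.2^3 + 3 * 0.2^2 * 0.8 = 0.104
print(f"single classifier error:   {single:.3f}")
print(f"3-way majority vote error: {ensemble:.3f}")
```

If errors were perfectly correlated instead, the ensemble error would stay at 0.20; the gap between 0.104 and 0.20 is what diversity-aware training tries to preserve under adversarial inputs.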


Lorrie Cranor

Characterizing user behavior and anticipating its effects on computer security with a Security Behavior Observatory

Highlights: We are working on two papers related to user behavior after breaches. In one study, we used the Security Behavior Observatory (SBO) dataset to examine, for a set of password breaches, how often people actually change their passwords in the aftermath of a breach and how constructive those changes are. In another study, we used the SBO dataset to examine how people learn about breaches online and what actions they take afterward.


David Garlan

Model-Based Explanation For Human-in-the-Loop Security

Highlights: In this work, we contribute an explainable planning approach to agent-based decision-making, based on contrastive explanation. It enables the agent to communicate its preferences over the different planning objectives and helps the user understand whether the agent's decision is optimal with respect to the user's own preferences, despite a potential value misalignment. We conducted a human-subject experiment to evaluate the effectiveness of our explanation approach in the mobile robot navigation domain. The results show that our approach significantly improves users' ability to determine whether the agent's decisions align with their preferences, as well as their confidence in those determinations.
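The shape of a contrastive explanation can be sketched as follows (a hypothetical illustration only: the plan names, objectives, weights, and message format are invented for this sketch and are not the project's actual system):

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    costs: dict  # objective name -> cost on that objective

def weighted_cost(plan, weights):
    """Total cost of a plan under the agent's objective weights."""
    return sum(weights[obj] * c for obj, c in plan.costs.items())

def contrastive_explanation(chosen, alternative, weights):
    """Explain why `chosen` was preferred over `alternative`,
    objective by objective, exposing the agent's preference weights."""
    lines = [f"I chose {chosen.name} over {alternative.name} because:"]
    for obj in weights:
        delta = alternative.costs[obj] - chosen.costs[obj]
        verb = "saves" if delta > 0 else "costs an extra"
        lines.append(f"  - on '{obj}' (weight {weights[obj]}), "
                     f"{chosen.name} {verb} {abs(delta)}")
    gap = weighted_cost(alternative, weights) - weighted_cost(chosen, weights)
    lines.append(f"Overall, {chosen.name} is better by {gap:.1f} weighted cost.")
    return "\n".join(lines)

# Illustrative mobile-robot scenario: fast-but-risky vs. slow-but-safe route.
weights = {"travel_time": 0.7, "collision_risk": 0.3}
fast = Plan("the fast route", {"travel_time": 10, "collision_risk": 8})
safe = Plan("the safe route", {"travel_time": 16, "collision_risk": 2})
print(contrastive_explanation(fast, safe, weights))
```

By surfacing the per-objective trade-offs and the weights behind them, the user can check whether the agent's weighting matches their own preferences; a disagreement on the weights (rather than on the plans) signals the value misalignment the work addresses.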