SoS Quarterly Summary Report - NCSU - July 2015

Lablet Summary Report
Purpose: To highlight progress. Information is generally at a high level that is accessible to the interested public.

A). Fundamental Research
High-level report of results or partial results that helped move security science forward. In most cases these should point to a "hard problem".

  • For the humans hard problem, we conducted a study (expanding on our previous study) that reveals quantifiable networks of mental models that are not visible through qualitative techniques such as interviews. Specifically, we observed that experts construct significantly richer mental models (with more links among concepts) than novices. This finding could inform research on applying mental models to determine phishing vulnerability and the effectiveness of phishing training. In addition, we evaluated our browser phishing warning tool in a field experiment in which we attempted to phish participants via a fake website during their normal browsing. Preliminary results confirm the effectiveness of our warning tool in helping users resist phishing.
  • For the resilience hard problem, we investigated dataflow-guided scientific workflows operating in cloud environments. We identified three security properties (input validation, remote access validation, and data integrity) as essential for improving workflow security. We are investigating the value of representing information provenance in provisioning workflows with respect to the desired security properties. We formalized a reusable framework for specifying, reasoning about, verifying, and certifying a broad range of system properties, including security resilience.
  • For the policy-governed secure collaboration hard problem, we developed a formal model for norms that provides a mathematical basis for defining the consistency of a set of norms. This formalization is an essential component of dealing with conflicts among the norms that apply to a particular principal in a particular context. In addition, we conducted a human-subject study on comprehension of firewall policies presented in different forms. We found that our proposed modular language for expressing firewall policies yields better comprehension than traditional representations.
  • For the security metrics hard problem, we collected data about Designed Defenses, helping us understand how developers interact with parts of the system designed to aid defensive coding. In addition, we made further progress on our collaborative systematic literature review of intrusion detection metrics.
  • In addition to the above, we made progress on the instrumentation and data preparation needed for conducting additional scientific investigations on the hard problems. Specifically, we developed a visualization of the Typing Game that helps us gain insight into users' cognitive processes. This tool supports interactive exploration and visualization of Typing Game data, and provides labels of estimated cognitive phenomena. Importantly, we annotated the entire Android API, which is essential to collecting attack surface metrics from Android apps.

B). Community Interaction
Work to explain or extend scientific rigor in the community culture. Workshops, Seminars, Competitions, etc.

  • We have identified a seed list of publication venues where Science of Security research appears. We are now engaging the community (at the other lablets) in refining and ranking the list of venues.
  • On June 23-24, we hosted an invitation-only planning workshop for an upcoming NSA workshop on Science of Privacy. This gave us an opportunity to discuss Science of Security with visitors and to present posters on Lablet research.
  • We have scheduled our Community Day for October 29, 2015, where we will present our research and hold discussions with local industry and government colleagues.


C). Educational
Any changes to curriculum at your school or elsewhere that indicate increased training or rigor in security research.

  • We continued to collect and organize feedback for student presenters during our regular seminars. We further refined our feedback instruments with a view to guiding presenters and the audience toward best practices in the science of security.
  • On May 27-28, 2015, we conducted a two-day summer workshop for the purpose of increasing our collective knowledge and experience with scientific research in security.
    • Our research methods team had previously published a template of components of a well-structured scientific research paper. Participants provided feedback on the template, customized the template for each hard problem, and evaluated previously published security research for the components in the template.
    • Two tutorials were given on statistical methods and bibliometrics.
    • In addition to lablet faculty and students, the workshop was attended by an industry researcher and some NSA personnel (primarily from the NCSU Laboratory for Analytic Sciences).