NCSU SoS Annual Summary Report - April 2015

Lablet Summary Report
Purpose: To highlight progress. Information is presented at a high level accessible to the interested public.

A. Fundamental Research
High-level report of results or partial results that helped move security science forward; in most cases it should point to a "hard problem".

  • For the resilience hard problem, we have created a classification scheme of existing isolation techniques. The purpose of the scheme is to enable the identification of underlying design principles and the tradeoffs between them. Discovering these principles will aid in the design of the next generation of smart isolation techniques to support resilient architectures. Similarly, we have created a taxonomy of resiliency metrics. The purpose of the metrics is to estimate the resiliency level that a system exhibits given a specific attack model or scenario, and the implied recommended actions to improve resiliency.
  • For the security metrics hard problem, we have results related to attack surface metrics and vulnerabilities. Many organizations prioritize security efforts around the general idea of attack surface (the entry and exit points of a software program), considering areas of the code not reachable by an attacker to be a lower priority. However, neither practical processes for identifying which parts of the code are on the attack surface nor specific attack surface metrics have been validated. We have results (weakly) associating our attack surface metrics with vulnerabilities. Additionally, our analysis of crash dumps at Microsoft indicates that the code identified in crash dumps accounts for most (94.6%) of the vulnerabilities, suggesting that crash dumps may be used to indicate whether a piece of code is on the attack surface.
  • For the humans hard problem, focusing on end users, we have developed a cognitive model of users based on the well-known ACT-R framework. We have developed an understanding of observable human behaviors that indicate the level of thought a user puts into an action (as a measure of the naturalness of the action). These contributions provide some of the bases of the science underlying the humans hard problem by leading us to an understanding of (1) how users process information and make security-relevant mistakes, and the bases on which we may identify such mistakes; (2) how to distinguish potentially deceptive user behaviors through largely unobtrusive observations of users; and (3) how to generate cognitively and ecologically relevant warnings to users to assist them in their security-relevant decision making.
  • For the policy-governed secure collaboration problem, we have developed metrics of policy complexity that capture how difficult a set of policies is for people to comprehend, which are indicative of configuration errors that lead to vulnerabilities. We have identified such errors in practical enterprise policies. We have developed and evaluated an argumentation-based approach for capturing firewall requirements that reduces errors and improves comprehensibility over traditional methods. We are studying policy errors in software and how to ameliorate such errors based on principles pertaining to software analytics. We have developed a normative formulation of accountability that captures its essential features separately from traceability; we have additionally developed a (partial) approach that relates normative relationships to data representations as a basis for logging and analytics. We have developed a simulation framework in which to study the robustness and resilience of norms that modulate the security-relevant behaviors and interactions of users, as a basis for understanding and exploring potential norms.
  • Our collective efforts have advanced the science of security by bringing best practices and research methodologies to bear on the hard problems in security.
    • We are developing an agent-based simulation methodology for secure collaboration. Agent-based simulation is an established methodology for understanding complex systems formed from the interactions of autonomous parties, where analytical solutions are not expected to be found. It has been used in areas such as epidemiology and, given the complexity of systems in connection with cybersecurity, should be a component of security research.
    • We are advancing the nature and rigor of empirical studies of end users via lab experiments and surveys.
    • We are advancing the empirical study of system administration via tool-based evaluations of artifacts such as software and policies, as well as through experimental studies in which participants apply competing approaches to create specifications (thereby helping us address insidious security errors arising from mistakes in modeling or configuration).
    • Through example, we are advancing the systematic development of survey papers in the science of security.
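To illustrate the agent-based simulation methodology mentioned above, the following is a minimal sketch of how one might simulate the robustness of a security norm (e.g., "report suspicious emails") under peer sanctioning. All class names, parameters, and rates here are hypothetical, chosen only to show the shape of such a study, not to represent the lablet's actual simulation framework.

```python
import random

class Agent:
    """A hypothetical agent with a probability of complying with a norm."""
    def __init__(self, compliance):
        self.compliance = compliance  # probability of following the norm

    def act(self):
        # Returns True if the agent complies on this step.
        return random.random() < self.compliance

    def sanction(self, strength=0.1):
        # Peer sanctioning nudges a violator toward future compliance.
        self.compliance = min(1.0, self.compliance + strength)

def run(num_agents=100, steps=50, sanction_prob=0.5, seed=42):
    """Simulate the population and return the compliance rate per step."""
    random.seed(seed)
    agents = [Agent(random.random()) for _ in range(num_agents)]
    history = []
    for _ in range(steps):
        violators = [a for a in agents if not a.act()]
        for a in violators:
            # Each violation is observed and sanctioned with some probability.
            if random.random() < sanction_prob:
                a.sanction()
        history.append(1 - len(violators) / num_agents)
    return history

history = run()
```

Even this toy model exhibits the kind of question such simulations address: how quickly, and under what sanctioning regimes, a norm stabilizes across a population of autonomous agents.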

B. Community Interaction
Work to explain or extend scientific rigor in the community culture: workshops, seminars, competitions, etc.

  • In April 2014, we organized the first HotSoS in Raleigh.
  • In Summer 2014, we conducted a two-day research workshop for lablet participants on best practices for the science of security. This workshop included sessions conducted by a statistician on experimental design; by a "science of science" methodologist on the ways in which we can collectively advance the science of security; and by a computer scientist on conducting empirical software engineering research. A similar workshop is planned for late May 2015.
  • In October 2014, we conducted an Industry Day workshop that included Pecha Kucha presentations by all lablet projects, presentations by invited industry speakers, and a poster session by students. We use this workshop as a way to engage more closely with industry colleagues, both in advancing cybersecurity and in promoting the science of security.
  • We are developing a rubric for reviewing research publications in a way that seeks to bring out and evaluate their core scientific claims and findings. We offered a workshop based on this rubric at the January 2015 quarterly meeting. The idea is that the rubric would sensitize researchers to the scientific aspects of security research, leading to papers and peer reviews that are more clearly scientific and thus to improved scientific research overall.
  • We have taken numerous opportunities to give keynote and other invited lectures on the science of security to broader computer science communities as a way to bring them into the fold.

C. Educational
Any changes to curriculum at your school or elsewhere that indicate increased training or rigor in security research.

  • We have conducted a seminar series for students throughout the year consisting of two main kinds of discussions. The series is attended by NCSU participants as well as remote participants at our collaborating institutions.
    • In research design seminars, students present the design of a proposed study, including not only the motivation and existing theoretical frameworks, but also details of the theoretical or empirical investigations they plan to carry out. Our motivation for discussing research designs is, first, to reflect on the nature of an investigation before launching into the effort and, second, to vet the proposed design in consultation with peers in the lablet. The intended benefit is in strengthening the scientific basis of the research by improving the clarity of the hypotheses and metrics underlying the research, as well as ensuring the design would help evaluate those hypotheses.
    • In manuscript review seminars, students make a presentation about a manuscript they are preparing for submission for peer review. The intended benefit is in strengthening the positioning of the research with respect to the literature and in discussing the robustness of the claimed evaluation of the hypotheses.