HotSoS 2015 Research Presentations

These papers were presented at HotSoS 2015. They covered a range of scientific issues related to the five hard problems of cybersecurity: scalability and composability, measurement, policy-governed secure collaboration, resilient architectures, and human behavior. The individual presentations are described below and will be published in an upcoming ACM conference publication. The HotSoS conference page is available at: http://cps-vo.org/group/hotsos

 

“Integrity Assurance in Resource-Bounded Systems through Stochastic Message Authentication”
Aron Laszka, Yevgeniy Vorobeychik, and Xenofon Koutsoukos

Assuring communication integrity is a central problem in security. The presenters propose a formal game-theoretic framework for optimal stochastic message authentication, providing provable integrity guarantees for resource-bounded systems based on an existing MAC scheme. They use this framework to investigate attacker deterrence and the optimal design of stochastic message authentication schemes, and they provide experimental results on the computational performance of their framework in practice.
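As a minimal illustration of the underlying idea (not the authors' construction), the sketch below verifies a standard HMAC tag on only a randomly chosen fraction of messages, so that a resource-bounded receiver saves computation while an attacker who injects many forged messages is still caught with high probability. The sampling probability p and the key handling are hypothetical.

    import hmac, hashlib, secrets

    class StochasticAuthenticator:
        """Illustrative sketch: MAC-verify each incoming message with probability p."""
        def __init__(self, key: bytes, p: float):
            self.key = key
            self.p = p  # fraction of messages actually verified (hypothetical parameter)

        def tag(self, message: bytes) -> bytes:
            # The sender always attaches a standard HMAC tag.
            return hmac.new(self.key, message, hashlib.sha256).digest()

        def verify(self, message: bytes, tag: bytes) -> bool:
            # The receiver checks only a random subset of messages; an attacker who
            # injects k forged messages evades detection with probability (1 - p)**k.
            if secrets.randbelow(10**6) >= self.p * 10**6:
                return True  # check skipped: message accepted without verification
            expected = hmac.new(self.key, message, hashlib.sha256).digest()
            return hmac.compare_digest(expected, tag)

The game-theoretic part of the work concerns choosing the verification probability optimally against a strategic attacker; the sketch only shows the verification mechanics.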

 

“Active Cyber Defense Dynamics Exhibiting Rich Phenomena”
Ren Zheng, Wenlian Lu, and Shouhuai Xu

The authors explore the rich phenomena that can arise when the defender employs active defense to combat cyber attacks. The study shows that active cyber defense dynamics (or, more generally, cybersecurity dynamics) can exhibit bifurcation and chaos, with two implications for cybersecurity measurement and prediction: first, it is infeasible (or even impossible) to accurately measure and predict cyber security under certain circumstances; second, the defender must manipulate the dynamics to avoid such unmanageable situations in real-life defense operations.
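As a loose, stand-in illustration of how such dynamics can become chaotic (this is not the authors' model), the logistic-map iteration below tracks a compromise fraction whose long-run behavior shifts from a stable fixed point to oscillation to chaos as a single parameter grows.

    def iterate(r: float, x0: float = 0.3, burn_in: int = 500, keep: int = 8):
        """Iterate x_{t+1} = r * x_t * (1 - x_t), a classic map exhibiting bifurcation
        and chaos; here x_t stands in for the fraction of compromised nodes."""
        x = x0
        for _ in range(burn_in):
            x = r * x * (1 - x)
        tail = []
        for _ in range(keep):
            x = r * x * (1 - x)
            tail.append(round(x, 4))
        return tail

    for r in (2.8, 3.2, 3.5, 3.9):   # stable point, 2-cycle, 4-cycle, chaotic regime
        print(r, iterate(r))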

 

“Towards a Science of Trust”
Dusko Pavlovic

This paper explores the idea that security is not just a suitable subject for science, but that the process of security is also similar to the process of science. This similarity arises from the fact that both science and security depend on the methods of inductive inference. Because of this dependency, a scientific theory can never be definitively proved, but only disproved by new evidence and improved into a better theory. Because of the same dependency, every security claim and method has a lifetime and always eventually needs to be improved.

 

“Challenges with Applying Vulnerability Prediction Models”
Patrick Morrison, Kim Herzig, Brendan Murphy, and Laurie Williams

The authors address vulnerability prediction models (VPMs) as a basis for software engineers to prioritize precious verification resources when searching for vulnerabilities. The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. They define "actionable" in terms of the inspection effort required to evaluate model results, and they conclude that VPMs must be refined to achieve actionable performance, possibly through security-specific metrics.
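A hedged sketch of the kind of model the study evaluates: a classifier trained on per-file software metrics flags files for inspection, and "actionable" is judged by how much code an engineer must inspect per vulnerability found. The metric names, data, and thresholds below are illustrative, not those used by the authors.

    # Illustrative vulnerability prediction model (VPM); features and data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_files = 1000
    X = rng.gamma(shape=2.0, scale=1.0, size=(n_files, 3))  # churn, complexity, authors
    y = (rng.random(n_files) < 0.05).astype(int)            # ~5% of files vulnerable
    loc = rng.integers(50, 2000, size=n_files)               # lines of code per file

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    flagged = model.predict(X).astype(bool)

    # "Actionable" here means: how much code must be inspected per true positive?
    inspected_loc = loc[flagged].sum()
    found = int((y[flagged] == 1).sum())
    print(f"inspect {inspected_loc} LOC to find {found} vulnerable files")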

 

“Preemptive Intrusion Detection: Theoretical Framework and Real-World Measurements”
Phuong Cao, Eric Badger, Zbigniew Kalbarczyk, Ravishankar Iyer, and Adam Slagell

This paper presents a framework for highly accurate and preemptive detection of attacks, i.e., before system misuse. The authors evaluated their framework using security logs of real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA); the data consisted of security incidents that were identified only after the fact by security analysts. The framework detected 74 percent of attacks, and the majority of them were detected before system misuse. In addition, six hidden attacks were uncovered that had not been detected by intrusion detection systems during the incidents or by security analysts in post-incident forensic analyses.
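As a loose illustration of preemptive detection (not the authors' framework), the sketch below scores a per-user stream of log events and raises an alert once accumulated suspicion crosses a threshold, ideally before the final misuse event appears. The event names, weights, and threshold are hypothetical.

    # Hypothetical event weights; "misuse" marks the point at which the attack succeeds.
    WEIGHTS = {"login_unusual_geo": 2, "download_exploit": 3, "new_ssh_key": 2,
               "privilege_escalation": 4, "misuse": 0}
    THRESHOLD = 5

    def preemptive_alert(events: list[str]) -> str:
        score = 0
        for i, event in enumerate(events):
            if event == "misuse":
                return f"missed: misuse at event {i} before any alert"
            score += WEIGHTS.get(event, 0)
            if score >= THRESHOLD:
                return f"alert at event {i} ({event}), before misuse"
        return "no alert, no misuse"

    print(preemptive_alert(["login_unusual_geo", "download_exploit",
                            "privilege_escalation", "misuse"]))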

 

“Enabling Forensics by Proposing Heuristics to Identify Mandatory Log Events”
Jason King, Rahul Pandita, and Laurie Williams

Software engineers often implement logging mechanisms to debug software and diagnose faults. These logging mechanisms also need to capture detailed traces of user activity to enable forensics and hold users accountable, yet techniques for identifying which events to log are often subjective and produce inconsistent results. This study helps software engineers strengthen forensic-ability and user accountability by systematically identifying mandatory log events through processing of unconstrained natural language software artifacts and by proposing empirically derived heuristics to help determine whether an event must be logged.
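A simple keyword-heuristic sketch of the idea: scan natural-language requirement sentences and flag those describing events that likely must be logged. The verb and noun lists below are hypothetical, not the authors' empirically derived heuristics.

    import re

    # Hypothetical heuristic: sentences describing security-relevant user actions on
    # protected resources are candidate mandatory log events.
    ACTION_VERBS = {"view", "views", "update", "updates", "delete", "deletes",
                    "access", "accesses", "transmit", "transmits"}
    PROTECTED_NOUNS = {"record", "records", "account", "prescription", "result", "results"}

    def is_mandatory_log_event(sentence: str) -> bool:
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        return bool(words & ACTION_VERBS) and bool(words & PROTECTED_NOUNS)

    requirements = [
        "The nurse updates the patient's medication record.",
        "The system displays a help page to the user.",
    ]
    for s in requirements:
        print(is_mandatory_log_event(s), "-", s)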

 

“Modelling User Availability in Workflow Resiliency Analysis”
John C. Mace, Charles Morisset, and Aad van Moorsel

Workflows capture complex operational processes and include security constraints that limit which users can perform which tasks. An improper security policy may prevent certain tasks from being assigned or may force a policy violation, so tools are needed that allow automatic evaluation of workflow resiliency. User availability can be modelled in multiple ways for the same workflow, and choosing the right model is a complex concern with a major impact on the calculated resiliency. The authors describe a number of user availability models and their encoding in the model checker PRISM, which is used to evaluate resiliency. They also show how the choice of model affects the resiliency computation in terms of the computed value, memory usage, and CPU time.
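The paper encodes user-availability models in the PRISM probabilistic model checker; as a rough, non-PRISM illustration, the Monte Carlo sketch below estimates the probability that a small workflow can be completed when each user is independently available with some probability and the policy restricts who may perform each task. The workflow, availability probabilities, and separation-of-duty constraint are invented for illustration.

    import random

    # Hypothetical 3-task workflow: task -> set of users authorized by the policy.
    AUTHORIZED = {"t1": {"alice", "bob"}, "t2": {"bob", "carol"}, "t3": {"carol"}}
    AVAILABILITY = {"alice": 0.9, "bob": 0.7, "carol": 0.8}  # independent per-run availability

    def completes(sod=("t1", "t3")) -> bool:
        """One simulated run; `sod` is a separation-of-duty pair (must use different users)."""
        present = {u for u, p in AVAILABILITY.items() if random.random() < p}
        assignment = {}
        for task, allowed in AUTHORIZED.items():
            candidates = allowed & present
            if task in sod:
                candidates -= {assignment.get(t) for t in sod if t in assignment}
            if not candidates:
                return False
            assignment[task] = min(candidates)  # any deterministic choice will do
        return True

    runs = 100_000
    print("estimated resiliency:", sum(completes() for _ in range(runs)) / runs)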

 

“An Empirical Study of Global Malware Encounters”
Ghita Mezzour, Kathleen M. Carley, and L. Richard Carley

The authors empirically test alternative hypotheses about the factors behind international variation in the number of trojan, worm, and virus encounters, using Symantec Anti-Virus (AV) telemetry data collected from more than 10 million Symantec customer computers worldwide. They used regression analysis to test for the effects of computing and monetary resources, web browsing behavior, computer piracy, cybersecurity expertise, and international relations on international variation in malware encounters, and found that trojans, worms, and viruses are most prevalent in Sub-Saharan African and Asian countries. The main factor explaining the high malware exposure of these countries is widespread computer piracy, especially when combined with poverty.
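A hedged sketch of the regression setup: per-country malware encounter rates regressed on candidate explanatory factors via ordinary least squares. The covariate names and synthetic data are illustrative; the study itself uses Symantec telemetry and its own country-level measures.

    import numpy as np

    rng = np.random.default_rng(1)
    n_countries = 120
    # Hypothetical standardized country-level covariates:
    # GDP per capita, piracy rate, web browsing intensity, security expertise.
    X = rng.normal(size=(n_countries, 4))
    beta_true = np.array([-0.4, 0.9, 0.2, -0.3])            # synthetic "ground truth"
    encounters = X @ beta_true + rng.normal(scale=0.5, size=n_countries)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(n_countries), X])
    coef, *_ = np.linalg.lstsq(A, encounters, rcond=None)
    for name, b in zip(["intercept", "gdp", "piracy", "browsing", "expertise"], coef):
        print(f"{name:>9}: {b:+.3f}")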

 

“An Integrated Computer-Aided Cognitive Task Analysis Method for Tracing Cyber-Attack Analysis Processes”
Chen Zhong, John Yen, Peng Liu, Rob Erbacher, Renee Etoty, and Christopher Garneau

Cyber-attack analysts must process large amounts of network data and reason under uncertainty to detect cyber attacks. Capturing and studying analysts' fine-grained cognitive processes helps researchers gain a deep understanding of how analysts conduct analytical reasoning, and helps elicit their procedural knowledge and experience in order to further improve their performance. To conduct cognitive task analysis (CTA) studies in cyber-attack analysis, the authors propose an integrated computer-aided data collection method with three building elements: a trace representation of the fine-grained cyber-attack analysis process, a computer tool supporting process tracing, and a laboratory experiment for collecting traces of analysts' cognitive processes while they conduct a cyber-attack analysis task.

 

“All Signals Go: Investigating How Individual Differences Affect Performance on a Medical Diagnosis Task Designed to Parallel a Signals Intelligence Analyst Task”
Allaire K. Welk and Christopher B. Mayhorn

Signals intelligence analysts perform complex decision-making tasks that involve gathering, sorting, and analyzing information. This study aimed to evaluate how individual differences influence performance in an Internet search-based medical diagnosis task designed to simulate a signals analyst task. The individual differences examined included working memory capacity and previous experience with elements of the task, such as prior experience using the Internet and conducting Internet searches. Results indicated that working memory capacity significantly predicted performance on the medical diagnosis task, while the other factors were not significant predictors. These results provide additional evidence that working memory capacity strongly influences performance on cognitively complex decision-making tasks, whereas experience with elements of the task may not, and they suggest that working memory capacity should be considered when screening individuals for signals intelligence analyst positions.

 

“Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics”
Ignacio X. Domínguez, Alok Goel, David L. Roberts, and Robert St. Amant

This paper presents a method for detecting patterns in computer mouse usage that can give insights into a user's cognitive processes. The authors conducted a study using a computer version of the Memory game (also known as the Concentration game) in which some participants were allowed to reveal the content of the tiles, with the expectation that their low-level mouse interaction patterns would deviate from those of normal players without access to this information. They then trained models to detect these differences using task-independent input device features. The models detected cheating with 98.73% accuracy for players who cheated or did not cheat consistently for entire rounds of the game, and with 89.18% accuracy for cases in which players enabled and then disabled cheating within rounds.
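A rough sketch of the detection pipeline: extract task-independent mouse features per round (e.g., speed, pauses, path straightness) and train a classifier to separate cheating from normal play. The features, synthetic data, and model choice below are illustrative; the reported accuracies come from the authors' own features and data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    def synthetic_round(cheating: bool) -> list[float]:
        # Hypothetical per-round features: mean cursor speed, mean pause length,
        # path straightness; cheaters move a bit more directly and pause less.
        shift = 0.6 if cheating else 0.0
        return [rng.normal(1.0 + shift),
                rng.normal(0.5 - 0.3 * shift),
                rng.normal(0.7 + 0.2 * shift)]

    X = np.array([synthetic_round(c) for c in ([True] * 200 + [False] * 200)])
    y = np.array([1] * 200 + [0] * 200)

    clf = LogisticRegression()
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())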

 

“Understanding Sanction under Variable Observability in a Secure, Collaborative Environment”
Hongying Du, Bennett Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, and Munindar P. Singh

Many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment, using a simulation in which agents maintain "compliance" with enforced security norms while remaining "motivated" as researchers. The authors tested whether delayed observability of the environment would lead to greater motivation of agents to complete research tasks than immediate observability, and whether sanctioning a group for a violation would lead to greater compliance with security norms than sanctioning an individual. They found that only the latter hypothesis is supported.

 

“Measuring the Security Impacts of Password Policies Using Cognitive Behavioral Agent-Based Modeling”
Vijay Kothari, Jim Blythe, Sean W. Smith, and Ross Koppel

Agent-based modeling can serve as a valuable asset to security personnel who wish to better understand the security landscape within their organization, especially as it relates to user behavior and circumvention. The authors argue in favor of cognitive behavioral agent-based modeling for usable security, report on their work developing an agent-based model for a password management scenario, and perform a sensitivity analysis that provides valuable insights into improving security as well as directions for future work.
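A toy sketch of the modeling idea: agents with limited memory respond to a password-composition policy, and stricter policies push more of them toward insecure coping behaviors such as reusing or writing down passwords. The parameters and behavioral rule are invented; the authors' cognitive behavioral model is far richer.

    import random

    def simulate(policy_strictness: float, n_agents: int = 10_000, seed: int = 0) -> float:
        """Return the fraction of agents who circumvent the policy (e.g., reuse or
        write down passwords). Circumvention occurs when the policy's memory burden
        exceeds the agent's capacity -- a hypothetical rule, not the paper's model."""
        rng = random.Random(seed)
        circumventions = 0
        for _ in range(n_agents):
            memory_capacity = rng.uniform(0.2, 1.0)
            burden = policy_strictness * rng.uniform(0.8, 1.2)
            if burden > memory_capacity:
                circumventions += 1
        return circumventions / n_agents

    for strictness in (0.3, 0.6, 0.9):
        print(strictness, simulate(strictness))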


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.