Biblio
To improve cyber defense, researchers have developed algorithms to allocate limited defense resources optimally. Through signaling theory, we have learned that it is possible to trick the human mind by using deceptive signals. The present work is an initial step toward developing a psychological theory of cyber deception. We use simulations to investigate how humans might make decisions under various conditions of deceptive signals in cyber-attack scenarios. We created an Instance-Based Learning (IBL) model of attacker decisions using the ACT-R cognitive architecture. We ran simulations against the optimal deceptive signaling algorithm and against four alternative deceptive signaling schemes. Our results show that the optimal deceptive algorithm is more effective at reducing the probability of attack and protecting assets than the other signaling conditions, but it is not perfect. These results shed some light on the expected effectiveness of deceptive signals for defense. The implications of these findings are discussed.
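To make the setup above concrete, the following minimal sketch pits a crude experience-weighted attacker (a stand-in for the IBL model, whose memory mechanisms are sketched under the IBLT entry below) against a few illustrative signaling schemes. The scheme names, payoff values, and probabilities are assumptions for illustration only, not the parameters used in the study.

```python
import random

# Hypothetical signaling schemes: each maps the true coverage state of the
# chosen target to the probability of showing a "this node is monitored" signal.
SIGNAL_SCHEMES = {
    "always_truthful": lambda covered: 1.0 if covered else 0.0,
    "never_signal":    lambda covered: 0.0,
    "always_signal":   lambda covered: 1.0,
    "random_bluff":    lambda covered: 1.0 if covered else 0.5,
}

def simulate(scheme, rounds=500, p_covered=0.3, seed=0):
    """Simulate a naive experience-weighted attacker (a crude stand-in for the
    IBL model) who tracks the observed payoff of attacking after each signal."""
    rng = random.Random(seed)
    signal_fn = SIGNAL_SCHEMES[scheme]
    est = {True: 5.0, False: 5.0}   # optimistic priors encourage early exploration
    n = {True: 1, False: 1}
    attacks = caught = 0
    for _ in range(rounds):
        covered = rng.random() < p_covered
        signaled = rng.random() < signal_fn(covered)
        if est[signaled] > 0:                 # attack only if it looks profitable
            attacks += 1
            payoff = -5 if covered else 5     # illustrative win/loss values
            caught += payoff < 0
            n[signaled] += 1
            est[signaled] += (payoff - est[signaled]) / n[signaled]
    return attacks / rounds, caught / max(attacks, 1)

for scheme in SIGNAL_SCHEMES:
    attack_rate, caught_rate = simulate(scheme)
    print(f"{scheme:16s} attack rate {attack_rate:.2f}  caught rate {caught_rate:.2f}")
```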
Understanding cybersecurity in an environment is uniquely challenging due to highly dynamic and potentially adversarial activity. At the same time, the stakes are high for performance on these tasks: failures to reason about the environment and make decisions can let attacks go unnoticed or worsen their effects. Opportunities exist to address these challenges by more tightly integrating computer agents with human operators. In this paper, we consider implications for this integration during three stages that contribute to cyber analysts developing insights and conclusions about their environment: data organization and interaction, toolsmithing and analytic interaction, and human-centered assessment that leads to insights and conclusions. In each area, we discuss current challenges and opportunities for improved human-machine teaming. Finally, we present a roadmap of research goals for advanced human-machine teaming in cybersecurity operations.
Cyber situational awareness (SA) is described as the current and predictive knowledge of cyberspace in relation to the Network, Missions, and Threats across friendly, neutral, and adversary forces. While this model provides a good high-level understanding of Cyber SA, it does not contain actionable information to help inform the development of capabilities to improve SA. In this paper, we present a systematic, human-centered process that uses a card sort methodology to understand and conceptualize Senior Leader Cyber SA requirements. From the data collected, we were able to build a hierarchy of high- and low-priority Cyber SA information, as well as uncover items that represent high levels of disagreement within and across organizations. The findings of this study serve as a first step in developing a better understanding of what Cyber SA means to Senior Leaders, and can inform the development of future capabilities to improve their SA and Mission Performance.
Technology’s role in the fight against malicious cyber-attacks is critical to the increasingly networked world of today. Yet, technology does not exist in isolation: the human factor is an aspect of cyber-defense operations with increasingly recognized importance. Thus, the human factors community has a unique responsibility to help create and validate cyber defense systems according to basic principles and design philosophy. Concurrently, the collective science must advance. These goals are not mutually exclusive pursuits: therefore, toward both these ends, this research provides cyber-cognitive links between cyber defense challenges and major human factors and ergonomics (HFE) research areas that offer solutions and instructive paths forward. In each area, there exist cyber research opportunities and realms of core HFE science for exploration. We raise the cyber defense domain up to the HFE community at large as a sprawling area for scientific discovery and contribution.
Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender's critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker's perception of the network environment, and lead attackers to false beliefs about which systems contain critical information or are critical to a defender's computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker using feedback through payoffs to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
We report on whether cyber attacker behaviors contain decision making biases. Data from a prior experiment were analyzed in an exploratory fashion, making use of think-aloud responses from a small group of red teamers. The analysis provided new observational evidence of traditional decision-making biases in red team behaviors (confirmation bias, anchoring, and take-the-best heuristic use). These biases may disrupt red team decisions and goals, and simultaneously increase their risk of detection. Interestingly, at least part of the bias induction may be related to the use of cyber deception. Future directions include the development of behavioral measurement techniques for these and additional cognitive biases in cyber operators, examining the role of attacker traits, and identifying the conditions where biases can be induced successfully in experimental conditions.
Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.
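A minimal sketch of the ToMnet idea, assuming PyTorch: a character net embeds an agent's past behaviour into a "character" vector, and a prediction net combines that embedding with the current observation to predict the agent's next action; both are meta-trained across a population of agents. The published ToMnet also includes a mental-state net and convolutional encoders for gridworld observations, which are omitted here, and the dimensions and random placeholder data below are purely illustrative.

```python
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Embed an agent's past trajectories into a 'character' vector e_char."""
    def __init__(self, obs_dim, n_actions, embed_dim=8):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + n_actions, embed_dim, batch_first=True)

    def forward(self, past_obs, past_actions):
        # past_obs: (batch, T, obs_dim); past_actions: (batch, T, n_actions) one-hot
        x = torch.cat([past_obs, past_actions], dim=-1)
        _, h = self.rnn(x)
        return h.squeeze(0)                     # (batch, embed_dim)

class PredictionNet(nn.Module):
    """Predict the agent's next action from the current observation and e_char."""
    def __init__(self, obs_dim, n_actions, embed_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, current_obs, e_char):
        return self.mlp(torch.cat([current_obs, e_char], dim=-1))  # action logits

# Meta-training step (sketch): observe a batch of agents' past episodes, then
# minimise cross-entropy on their next action in a held-out situation.
obs_dim, n_actions = 25, 5                      # e.g. a flattened 5x5 gridworld
char_net, pred_net = CharacterNet(obs_dim, n_actions), PredictionNet(obs_dim, n_actions)
opt = torch.optim.Adam(list(char_net.parameters()) + list(pred_net.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

past_obs = torch.randn(16, 10, obs_dim)         # placeholder behavioural observations
past_act = torch.eye(n_actions)[torch.randint(n_actions, (16, 10))]
query_obs = torch.randn(16, obs_dim)
true_next = torch.randint(n_actions, (16,))

logits = pred_net(query_obs, char_net(past_obs, past_act))
loss = loss_fn(logits, true_next)
opt.zero_grad(); loss.backward(); opt.step()
```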
With the growing complexity of environments in which systems are expected to operate, adaptive human-machine teaming (HMT) has emerged as a key area of research. While human teams have been extensively studied in the psychological and training literature, and agent teams have been investigated in the artificial intelligence research community, the commitment to research in HMT is relatively new and fueled by several technological advances such as electrophysiological sensors, cognitive modeling, machine learning, and adaptive/adaptable human-machine systems. This paper presents an architectural framework for investigating HMT options in various simulated operational contexts including responding to systemic failures and external disruptions. The paper specifically discusses new and novel roles for machines made possible by new technology and offers key insights into adaptive human-machine teams. Landed aircraft perimeter security is used as an illustrative example of an adaptive cyber-physical-human system (CPHS). This example is used to illuminate the use of the HMT framework in identifying the different human and machine roles involved in this scenario. The framework is domain-independent and can be applied to both defense and civilian adaptive HMT. The paper concludes with recommendations for advancing the state-of-the-art in HMT.
This paper presents a learning theory pertinent to dynamic decision making (DDM) called instance-based learning theory (IBLT). IBLT proposes five learning mechanisms in the context of a decision-making process: instance-based knowledge, recognition-based retrieval, adaptive strategies, necessity-based choice, and feedback updates. IBLT suggests that in DDM people learn through the accumulation and refinement of instances, each containing the decision-making situation, the action taken, and the utility of the decision. As decision makers interact with a dynamic task, they recognize a situation according to its similarity to past instances, adapt their judgment strategies from heuristic-based to instance-based, and refine the accumulated knowledge according to feedback on the results of their actions. IBLT's learning mechanisms have been implemented in an ACT-R cognitive model. Through a series of experiments, this paper shows how IBLT's learning mechanisms closely approximate the relative trend magnitude and performance of human data. Although the cognitive model is bounded within the context of a dynamic task, IBLT is a general theory of decision making applicable to other dynamic environments.
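The retrieval and blending machinery that IBLT inherits from ACT-R can be sketched compactly. The sketch below uses the standard base-level activation and Boltzmann retrieval-probability forms with conventional default parameter values; it illustrates the mechanism and is not the fitted model from the paper.

```python
import math, random

class Instance:
    """One stored experience: the situation, the action taken, its utility,
    and the times at which it was created or reinforced."""
    def __init__(self, situation, action, utility, timestamp):
        self.situation, self.action, self.utility = situation, action, utility
        self.timestamps = [timestamp]

def activation(inst, t_now, decay=0.5, noise_sd=0.25, rng=random):
    """ACT-R base-level activation plus transient noise (conventional defaults)."""
    base = math.log(sum((t_now - t) ** (-decay) for t in inst.timestamps))
    return base + rng.gauss(0.0, noise_sd)

def blended_value(memory, situation, action, t_now, temperature=0.25):
    """Blend stored utilities of matching instances, weighted by their
    retrieval probability (Boltzmann distribution over activations)."""
    matches = [i for i in memory if i.situation == situation and i.action == action]
    if not matches:
        return 0.0
    weights = [math.exp(activation(i, t_now) / temperature) for i in matches]
    total = sum(weights)
    return sum(w / total * i.utility for w, i in zip(weights, matches))

memory = [Instance("signal_shown", "attack", -5, 1),
          Instance("signal_shown", "attack", 5, 3),
          Instance("signal_shown", "withdraw", 0, 2)]
for action in ("attack", "withdraw"):
    print(action, round(blended_value(memory, "signal_shown", action, t_now=10), 2))
```

At decision time the model computes a blended value for each candidate action in the current situation and selects the highest; after feedback, the resulting (situation, action, utility) instance is stored or reinforced, which is what drives the shift from heuristic-based to instance-based choice described above.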
Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. While effective, the algorithms are optimized against perfectly rational adversaries. In a laboratory experiment, we pit humans against the defense algorithm in an online game designed to simulate an insider attack scenario. Humans attack far more often than predicted under perfect rationality. Optimizing against human bounded rationality is vitally important. We propose a cognitive model based on instance-based learning theory and built in ACT-R that accurately predicts human performance and biases in the game. We show that the algorithm does not defend well, largely due to its static nature and lack of adaptation to the particular individual’s actions. Thus, we propose an adaptive method of signaling that uses the cognitive model to trace an individual’s experience in real time, in order to optimize defenses. We discuss the results and implications of personalized defense.
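The adaptive, personalized signaling idea can be sketched as follows: the defender keeps a cognitive model traced on the individual's own round-by-round experience and bluffs on an uncovered target only when the model predicts the warning lowers expected loss. The `TracedModel` class, its `p_attack`/`record` interface, and the payoff numbers are hypothetical placeholders, not the paper's implementation.

```python
from collections import defaultdict

class TracedModel:
    """Hypothetical stand-in for an IBL model traced on one individual's
    experiences: it simply tallies observed behaviour per signal condition."""
    def __init__(self):
        self.history = defaultdict(list)      # signal -> list of (attacked, utility)

    def record(self, signal, attacked, utility):
        self.history[signal].append((attacked, utility))

    def p_attack(self, signal):
        obs = self.history[signal]
        if not obs:
            return 0.5                        # uninformed prior
        return sum(a for a, _ in obs) / len(obs)

def choose_signal(model, target_covered, loss_if_attacked=5.0, cost_of_bluff=1.0):
    """Personalised signaling sketch: bluff on an uncovered target only when the
    traced model predicts the warning lowers the defender's expected loss."""
    if target_covered:
        return True                                   # truthful warning costs nothing
    loss_if_signal = model.p_attack(signal=True) * loss_if_attacked + cost_of_bluff
    loss_if_silent = model.p_attack(signal=False) * loss_if_attacked
    return loss_if_signal < loss_if_silent

# After every round the defender replays the attacker's observed experience into
# the model so its memory matches what this individual has actually seen.
model = TracedModel()
model.record(signal=True, attacked=1, utility=-5)   # attacked despite the warning, was caught
model.record(signal=False, attacked=1, utility=5)
print(choose_signal(model, target_covered=False))
```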
A significant amount of work is invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring the system performance of an HMT is crucial for moving the design of these systems forward. Metrics are the enabling tools for devising a benchmark in any system and serve as an evaluation platform for assessing a system's performance, as well as for its verification and validation. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems. Therefore, identification and classification of common metrics are imperative to create a benchmark in the HMT field. The key focus of this review is to conduct a detailed survey aimed at identifying the metrics employed in different segments of HMT and determining the common metrics that can be used in the future to benchmark HMTs. We have organized this review as follows: identification of metrics used in HMTs to date, and classification based on functionality and measurement techniques. Additionally, we have attempted to analyze all the identified metrics in detail, classifying them as theoretical, applied, real-time, non-real-time, measurable, and observable metrics. We conclude this review with a detailed analysis of the identified common metrics along with their usage to benchmark HMTs.
The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
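A minimal sketch of the metric-comparison idea, assuming a much-simplified surface model: each surface is a set of interactions with illustrative exposure and cost scores plus a flag for mission necessity, so two surfaces can be compared and interactions not required for mission execution can be flagged for removal. The interaction names and numbers are invented for illustration and are not the prototype's semantic models.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    name: str
    exposure: float          # relative contribution to the attack surface
    cost: float              # operational/performance cost of keeping it
    mission_required: bool   # is it needed for successful mission execution?

def surface_metrics(interactions):
    """Aggregate illustrative cost and exposure metrics for one surface model."""
    return {
        "exposure": sum(i.exposure for i in interactions),
        "cost": sum(i.cost for i in interactions),
        "removable": [i.name for i in interactions if not i.mission_required],
    }

baseline = [
    Interaction("mission service API", 3.0, 1.0, True),
    Interaction("legacy admin tool", 4.0, 0.5, False),        # e.g. an unsecured admin tool
]
with_defense = baseline + [
    Interaction("address-shuffling proxy", 1.5, 2.0, False),  # a newly added proactive defense
]

# Flag cases where adding a defense raised total exposure or cost (cyber friendly fire).
for label, surface in [("baseline", baseline), ("with new defense", with_defense)]:
    print(label, surface_metrics(surface))
```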
With the explosion of Automation, Autonomy, and AI technology development today, amid encouragement to put humans at the center of AI, systems engineers and user story/requirements developers need research-based guidance on how to design for human-machine teaming (HMT). Insights from more than two decades of human-automation interaction research, applied in the systems engineering process, provide building blocks for designing automation, autonomy, and AI-based systems that are effective teammates for people.
The HMT Systems Engineering Guide provides this guidance based on a 2016-17 literature search and analysis of applied research. The guide provides a framework organizing HMT research, along with methodology for engaging with users of a system to elicit user stories and/or requirements that reflect applied research findings. The framework uses organizing themes of Observability, Predictability, Directing Attention, Exploring the Solution Space, Directability, Adaptability, Common Ground, Calibrated Trust, Design Process, and Information Presentation.
The guide includes practice-oriented resources that can be used to bridge the gap between research and design, including a tailorable HMT Knowledge Audit interview methodology, step-by-step instructions for planning and conducting data collection sessions, and a set of general cognitive interface requirements that can be adapted to specific applications based upon domain-specific data collected.
As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are not only damaging in their own right, but also by their continued propagation. This article busts such myths and gives reasons why each of these myths should be called out and cast aside.
Defensive deception holds promise in rebalancing the asymmetry of cybersecurity. It makes an attacker's job harder because it does more than just block access; it impacts the attacker's decision making, causing them to waste time and effort as well as expose their presence in the network. Pilot studies conducted by NSA research demonstrated the plausibility of, and necessity for, metrics of success, including difficulty attacking the system, behavioral changes caused, cognitive and emotional reactions aroused, and attacker strategy changes due to deception. Designing reliable and valid measures of effectiveness is a worthy (though often overlooked) goal for industry and government alike.
Security-critical systems demand multiple well-balanced mechanisms to detect ill-intentioned actions and protect valuable assets from damage while keeping costs at acceptable levels. The use of deception to enhance security has been studied for more than two decades. However, deception is still included in the software development process in an ad-hoc fashion, typically realized as single tools or entire solutions repackaged as honeypot machines. We propose a multi-paradigm modeling approach to specify deception tactics during the software development process so that conflicts and risks can be found in the initial phases of development, reducing the costs of ill-planned decisions. We describe a metamodel containing deception concepts that integrates other models, such as a goal-oriented model, feature model, and behavioral UML models, to specify static and dynamic aspects of a deception operation. The outcome of this process is a set of deception tactics that is realized by a set of deception components integrated with the system components. The feasibility of this multi-paradigm approach is shown by designing deception defense strategies for a students’ presence control system for the Faculty of Science and Technology of Universidade NOVA de Lisboa.
The rapid increase in cybercrime, causing a reported annual economic loss of $600 billion [20], has prompted a critical need for effective cyber defense. Strategic criminals conduct network reconnaissance prior to executing attacks to avoid detection and to establish situational awareness via scanning and fingerprinting tools. Cyber deception attempts to foil these reconnaissance efforts by disguising network and system attributes, among several other techniques. Cyber Deception Games (CDGs) are a game-theoretic model for optimizing strategic deception, and can apply to various deception methods. The recently introduced initial model for CDGs assumes zero-sum payoffs, implying directly conflicting attacker motives, and perfect defender knowledge of attacker preferences. These unrealistic assumptions are fundamental limitations of the initial zero-sum model, which we address by proposing a general-sum model that can also handle uncertainty in the defender’s knowledge.
The Tularosa study was designed to understand how defensive deception, both cyber and psychological, affects cyber attackers. Over 130 red teamers participated in a network penetration test over two days in which we controlled both the presence of and explicit mention of deceptive defensive techniques. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The study design included a battery of questionnaires (e.g., experience, personality) and cognitive tasks (e.g., fluid intelligence, working memory), allowing for the characterization of a "typical" red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate) to be correlated with the cyber events. This paper focuses on the design, implementation, population characteristics, lessons learned, and planned analyses.
Existing approaches to cyber defense have been inadequate at defending the targets from advanced persistent threats (APTs). APTs are stealthy and orchestrated attacks, which target both corporations and governments to exfiltrate important data. In this paper, we present a novel comprehensibility manipulation framework (CMF) to generate a haystack of hard to comprehend fake documents, which can be used for deceiving attackers and increasing the cost of data exfiltration by wasting their time and resources. CMF requires an original document as input and generates fake documents that are both believable and readable for the attacker, possess no important information, and are hard to comprehend. To evaluate CMF, we experimented with college aptitude tests and compared the performance of many readers on separate reading comprehension exercises with fake and original content. Our results showed a statistically significant difference in the correct responses to the same questions across the fake and original exercises, thus validating the effectiveness of CMF operations to mislead.
Deception is a technique to mislead human or computer systems by manipulating beliefs and information. Successful deception is characterized by the information-asymmetric, dynamic, and strategic behaviors of the deceiver and the deceivee. This paper proposes a game-theoretic framework to capture these features of deception, in which the deceiver sends strategically manipulated information to the deceivee while the deceivee makes best-effort decisions based on the information received and their beliefs. In particular, we consider the case when the deceivee adopts hypothesis testing to make binary decisions and the asymmetric information is modeled using a signaling game where the deceiver is a privately informed player called the sender and the deceivee is an uninformed player called the receiver. We characterize the perfect Bayesian Nash equilibrium (PBNE) solutions of the game and study its deceivability. Our results show that the hypothesis testing game admits pooling and partially-separating-pooling equilibria. In pooling equilibria, the deceivability depends on the true types, while in partially-separating-pooling equilibria, the deceivability depends on the deceiver's cost. We introduce the receiver operating characteristic curve to visualize the equilibrium behavior of the deceiver and the performance of the decision making, thereby characterizing the deceivability of the hypothesis testing game.
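The receiver's side of such a game can be sketched with a textbook Bayesian hypothesis test, which also shows how a pooling sender strategy degrades the receiver's operating point on the ROC. The threshold rule and the example sender strategies below are illustrative assumptions, not the equilibrium characterization from the paper.

```python
def receiver_decision(prior_h1, p_msg_given_h1, p_msg_given_h0, cost_ratio=1.0):
    """Bayesian hypothesis test: declare H1 when the likelihood ratio times the
    prior odds exceeds the cost threshold (a textbook rule, not the paper's)."""
    lr = p_msg_given_h1 / max(p_msg_given_h0, 1e-9)
    prior_odds = prior_h1 / (1 - prior_h1)
    return lr * prior_odds > cost_ratio

def roc_point(sender_strategy, prior_h1=0.5):
    """sender_strategy[type] = P(send message '1' | type). Returns the receiver's
    (false-alarm, detection) rates under the test above: one ROC point per strategy."""
    p1_h1, p1_h0 = sender_strategy[1], sender_strategy[0]
    rates = {"fa": 0.0, "det": 0.0}
    for msg, (q1, q0) in {1: (p1_h1, p1_h0), 0: (1 - p1_h1, 1 - p1_h0)}.items():
        if receiver_decision(prior_h1, q1, q0):
            rates["det"] += q1      # probability of this message under H1
            rates["fa"] += q0       # probability of this message under H0
    return rates

print("separating:", roc_point({0: 0.1, 1: 0.9}))  # informative signals: high detection, low false alarm
print("pooling:   ", roc_point({0: 0.7, 1: 0.7}))  # deceiver hides its type: messages carry no information
```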
With the increased use of cyber weapons for Internet-based cyber espionage, the need for cyber counterintelligence has become apparent, but counterintelligence remains more art than science because of its focus on tricking human nature: the way people think, feel, and behave. Nevertheless, counterintelligence theory and practice have been extended to domains such as industry and finance, and can be applied to cyber security and active cyber defense. Even so, there are relatively few explicit counterintelligence applications to cyber security reported in the open literature. This chapter describes the mechanisms of cyber denial and deception operations, using a cyber deception methods matrix and a cyber deception chain to build a tailored active cyber defense system for cyber counterintelligence. Cyber counterintelligence with cyber deception can mitigate cyber spy actions within the cyber espionage “kill chain.” The chapter describes how defenders can apply cyber denial and deception in their cyber counterintelligence operations to mitigate a cyber espionage threat and thwart cyber spies. The chapter provides a hypothetical case based on real cyber espionage operations by a state actor.
Deception, both offensive and defensive, is a fundamental tactic in warfare and a well-studied topic in biology. Living organisms use a variety of deception tools, including mimicry, camouflage, and nocturnality. Evolutionary biologists have published a variety of formal models for deception in nature. Deception in these models is fundamentally based on misclassification of signals between the entities of the system, represented as a tripartite relation between two signal senders, the “model” and the “mimic”, and a signal receiver, called the “dupe”. Examples of relations between entities include attraction, repulsion, and expected advantage gained or lost from the interaction. Using this representation, a multitude of deception systems can be described. Some deception systems in cybersecurity are well known. Consider, for example, all of the many different varieties of “honey-things” used to ensnare attackers. The study of deception in cybersecurity is limited compared to the richness found in biology. While multiple ontologies of deception in cyber environments exist, these are primarily lists of terms without a greater organizing structure. This is both a lost opportunity and potentially quite dangerous: a lost opportunity because defenders may be missing useful defensive deception strategies; dangerous because defenders may be oblivious to ongoing attacks using previously unidentified types of offensive deception. In this paper, we extend deception models from biology to present a framework for identifying relations in the cyber realm analogous to those found in nature. We show how modifications of these relations can create, enhance, or conversely prevent deception. From these relations, we develop a framework of cyber-deception types, with examples, and a general model for cyber deception. The signals used in cyber systems, which are not directly tied to the “Natural” world, differ significantly from those used in biological mimicry systems. However, similar concepts supporting identity exist and are discussed in brief.
Today, major corporations and government organizations must face the reality that they will be hacked by malicious actors. In this paper, we consider the case of defending enterprises that have been successfully hacked by imposing additional a posteriori costs on the attacker. Our idea is simple: for every real document d, we develop methods to automatically generate a set Fake(d) of fake documents that are very similar to d. The attacker who steals documents must wade through a large number of documents in detail in order to separate the real one from the fakes. Our FORGE system focuses on technical documents (e.g. engineering/design documents) and involves three major innovations. First, we represent the semantic content of documents via multi-layer graphs (MLGs). Second, we propose a novel concept of “meta-centrality” for multi-layer graphs. Our third innovation is to show that the problem of generating the set Fake(d) of fakes can be viewed as an optimization problem. We prove that this problem is NP-complete and then develop efficient heuristics to solve it in practice. We ran detailed experiments with a panel of 20 human subjects and show that FORGE generates highly believable fakes.
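A rough sketch of the fake-generation pipeline, assuming networkx: concepts are ranked on a single-layer co-occurrence graph, with ordinary betweenness centrality used as a stand-in for FORGE's meta-centrality on multi-layer graphs, and the most central concepts are then swapped for plausible substitutes. The greedy enumeration stands in for the paper's heuristics for the NP-complete selection problem, and every term, edge, and substitute below is invented for illustration.

```python
import itertools
import networkx as nx

def concept_graph(terms, cooccurrence):
    """Single-layer concept graph (a simplification of FORGE's multi-layer graphs):
    nodes are technical terms, edges are co-occurrences within the document."""
    g = nx.Graph()
    g.add_nodes_from(terms)
    g.add_edges_from(cooccurrence)
    return g

def generate_fakes(text, graph, replacements, n_fakes=3, k_concepts=2):
    """Swap the k most central concepts for substitutes, yielding believable but
    misleading variants of the original text."""
    scores = nx.betweenness_centrality(graph)
    central = sorted(scores, key=scores.get, reverse=True)[:k_concepts]
    candidate_swaps = itertools.product(*(replacements[c] for c in central))
    fakes = []
    for combo in itertools.islice(candidate_swaps, n_fakes):
        fake = text
        for original, substitute in zip(central, combo):
            fake = fake.replace(original, substitute)
        fakes.append(fake)
    return fakes

terms = ["alloy", "tensile strength", "annealing", "tolerance"]
edges = [("alloy", "tensile strength"), ("alloy", "annealing"), ("annealing", "tolerance")]
swaps = {t: [t + " (variant A)", t + " (variant B)"] for t in terms}   # hypothetical substitutes
doc = "The alloy retains tensile strength after annealing within the stated tolerance."
for fake in generate_fakes(doc, concept_graph(terms, edges), swaps):
    print(fake)
```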