Biblio

Filters: Keyword is C3E 2019
2019-09-13
Neil C. Rabinowitz, Frank Perbet, H. Francis Song, Chiyuan Zhang, S. M. Ali Eslami, Matthew Botvinick.  2018.  Machine Theory of Mind. CoRR. abs/1802.07740

Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.

Madni, Azad, Madni, Carla.  2018.  Architectural Framework for Exploring Adaptive Human-Machine Teaming Options in Simulated Dynamic Environments. Systems. 6:44.

With the growing complexity of environments in which systems are expected to operate, adaptive human-machine teaming (HMT) has emerged as a key area of research. While human teams have been extensively studied in the psychological and training literature, and agent teams have been investigated in the artificial intelligence research community, the commitment to research in HMT is relatively new and fueled by several technological advances such as electrophysiological sensors, cognitive modeling, machine learning, and adaptive/adaptable human-machine systems. This paper presents an architectural framework for investigating HMT options in various simulated operational contexts including responding to systemic failures and external disruptions. The paper specifically discusses new and novel roles for machines made possible by new technology and offers key insights into adaptive human-machine teams. Landed aircraft perimeter security is used as an illustrative example of an adaptive cyber-physical-human system (CPHS). This example is used to illuminate the use of the HMT framework in identifying the different human and machine roles involved in this scenario. The framework is domain-independent and can be applied to both defense and civilian adaptive HMT. The paper concludes with recommendations for advancing the state-of-the-art in HMT. 

Gonzalez, Cleotilde, Lerch, Javier F, Lebiere, Christian.  2003.  Instance-based learning in dynamic decision making. Cognitive Science. 27:591–635.

This paper presents a learning theory pertinent to dynamic decision making (DDM) called instance-based learning theory (IBLT). IBLT proposes five learning mechanisms in the context of a decision-making process: instance-based knowledge, recognition-based retrieval, adaptive strategies, necessity-based choice, and feedback updates. IBLT suggests that, in DDM, people learn through the accumulation and refinement of instances containing the decision-making situation, action, and utility of decisions. As decision makers interact with a dynamic task, they recognize a situation according to its similarity to past instances, adapt their judgment strategies from heuristic-based to instance-based, and refine the accumulated knowledge according to feedback on the results of their actions. IBLT's learning mechanisms have been implemented in an ACT-R cognitive model. Through a series of experiments, this paper shows how IBLT's learning mechanisms closely approximate the relative trend magnitude and performance of human data. Although the cognitive model is bounded within the context of a dynamic task, IBLT is a general theory of decision making applicable to other dynamic environments.
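The interplay of IBLT's mechanisms can be made concrete with a small sketch. This is a hypothetical toy implementation in Python, not the authors' ACT-R model; the similarity function, numeric situation features, and class names are all illustrative assumptions:

```python
# Toy sketch of IBLT's core loop (illustrative only, not the paper's ACT-R code):
# instances store (situation, action, utility); choice blends the utilities of
# similar past instances, and feedback refines the stored knowledge.

class IBLAgent:
    def __init__(self):
        self.instances = []  # list of (situation, action, utility) tuples

    def similarity(self, s1, s2):
        # toy similarity: inverse distance between numeric situation features
        return 1.0 / (1.0 + abs(s1 - s2))

    def blended_value(self, situation, action):
        # similarity-weighted average utility over matching past instances
        weighted, total = 0.0, 0.0
        for s, a, u in self.instances:
            if a == action:
                w = self.similarity(situation, s)
                weighted += w * u
                total += w
        return weighted / total if total else 0.0

    def choose(self, situation, actions):
        # necessity-based choice: with no experience yet, fall back to a
        # default heuristic; otherwise pick the best blended value
        if not self.instances:
            return actions[0]
        return max(actions, key=lambda a: self.blended_value(situation, a))

    def feedback(self, situation, action, utility):
        # feedback update: store the experienced outcome as a new instance
        self.instances.append((situation, action, utility))

agent = IBLAgent()
agent.feedback(1.0, "defend", 5.0)
agent.feedback(2.0, "attack", 1.0)
print(agent.choose(1.1, ["defend", "attack"]))  # prefers the similar past success
```

The shift from heuristic-based to instance-based judgment appears here as the fallback branch in `choose`: once instances accumulate, recognition-based retrieval takes over.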

Cranford, Edward A, Gonzalez, Cleotilde, Aggarwal, Palvi, Lebiere, Christian.  2019.  Towards Personalized Deceptive Signaling for Cyber Defense Using Cognitive Models.

Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. While effective, the algorithms are optimized against perfectly rational adversaries. In a laboratory experiment, we pit humans against the defense algorithm in an online game designed to simulate an insider attack scenario. Humans attack far more often than predicted under perfect rationality, so optimizing against human bounded rationality is vitally important. We propose a cognitive model based on instance-based learning theory and built in ACT-R that accurately predicts human performance and biases in the game. We show that the algorithm does not defend well, largely due to its static nature and lack of adaptation to the particular individual's actions. Thus, we propose an adaptive method of signaling that uses the cognitive model to trace an individual's experience in real time, in order to optimize defenses. We discuss the results and implications of personalized defense.

P. Damacharla, A. Y. Javaid, J. J. Gallimore, V. K. Devabhaktuni.  2018.  Common Metrics to Benchmark Human-Machine Teams (HMT): A Review. IEEE Access. 6:38637-38655.

A significant amount of work is invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring system performance of an HMT is crucial for moving the design of these systems forward. Metrics are the enabling tools to devise a benchmark in any system and serve as an evaluation platform for assessing the performance, along with the verification and validation, of a system. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems. Therefore, identification and classification of common metrics are imperative to create a benchmark in the HMT field. The key focus of this review is to conduct a detailed survey aimed at identification of metrics employed in different segments of HMT and to determine the common metrics that can be used in the future to benchmark HMTs. We have organized this review as follows: identification of metrics used in HMTs until now, and classification based on functionality and measuring techniques. Additionally, we have also attempted to analyze all the identified metrics in detail while classifying them as theoretical, applied, real-time, non-real-time, measurable, and observable metrics. We conclude this review with a detailed analysis of the identified common metrics along with their usage to benchmark HMTs.

N. Soule, B. Simidchieva, F. Yaman, R. Watro, J. Loyall, M. Atighetchi, M. Carvalho, D. Last, D. Myers, B. Flatley.  2015.  Quantifying & minimizing attack surfaces containing moving target defenses. 2015 Resilience Week (RWS). :1-6.

The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.

2019-09-12
Patricia L. McDermott, Cynthia O. Dominguez, Nicholas Kasdaglis, Matthew H. Ryan, Isabel M. Trahan, Alexander Nelson.  2018.  Human-Machine Teaming Systems Engineering Guide.

With the explosion of Automation, Autonomy, and AI technology development today, amid encouragement to put humans at the center of AI, systems engineers and user story/requirements developers need research-based guidance on how to design for human-machine teaming (HMT). Insights from more than two decades of human-automation interaction research, applied in the systems engineering process, provide building blocks for designing automation, autonomy, and AI-based systems that are effective teammates for people.

The HMT Systems Engineering Guide provides this guidance based on a 2016-17 literature search and analysis of applied research. The guide provides a framework organizing HMT research, along with methodology for engaging with users of a system to elicit user stories and/or requirements that reflect applied research findings. The framework uses organizing themes of Observability, Predictability, Directing Attention, Exploring the Solution Space, Directability, Adaptability, Common Ground, Calibrated Trust, Design Process, and Information Presentation.

The guide includes practice-oriented resources that can be used to bridge the gap between research and design, including a tailorable HMT Knowledge Audit interview methodology, step-by-step instructions for planning and conducting data collection sessions, and a set of general cognitive interface requirements that can be adapted to specific applications based upon domain-specific data collected. 

Bradshaw, Jeffrey M, Hoffman, Robert R, Woods, David D, Johnson, Matthew.  2013.  The seven deadly myths of "autonomous systems". IEEE Intelligent Systems. 28:54–61.

As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are damaging not only in their own right but also through their continued propagation. This article busts these myths and explains why each should be called out and cast aside.

Kimberly Ferguson-Walter, D. S. LaFon, T. B. Shade.  2017.  Friend or Faux: Deception for Cyber Defense. Journal of Information Warfare. 16(2):28-42.

Defensive deception shows promise in rebalancing the asymmetry of cybersecurity. It makes an attacker's job harder because it does more than just block access; it affects the attacker's decision making, causing him or her to waste time and effort as well as to expose his or her presence in the network. Pilot studies conducted by NSA research demonstrated the plausibility of the approach and the necessity of metrics of success, including difficulty attacking the system, behavioral changes caused, cognitive and emotional reactions aroused, and attacker strategy changes due to deception. Designing reliable and valid measures of effectiveness is a worthy (though often overlooked) goal for industry and government alike.

Cristiano De Faveri, Ana Moreira, Vasco Amaral.  2018.  Multi-paradigm deception modeling for cyber defense. Journal of Systems and Software. 141:32-51.

Security-critical systems demand multiple well-balanced mechanisms to detect ill-intentioned actions and protect valuable assets from damage while keeping costs in acceptable levels. The use of deception to enhance security has been studied for more than two decades. However, deception is still included in the software development process in an ad-hoc fashion, typically realized as single tools or entire solutions repackaged as honeypot machines. We propose a multi-paradigm modeling approach to specify deception tactics during the software development process so that conflicts and risks can be found in the initial phases of the development, reducing costs of ill-planned decisions. We describe a metamodel containing deception concepts that integrates other models, such as a goal-oriented model, feature model, and behavioral UML models to specify static and dynamic aspects of a deception operation. The outcome of this process is a set of deception tactics that is realized by a set of deception components integrated with the system components. The feasibility of this multi-paradigm approach is shown by designing deception defense strategies for a students’ presence control system for the Faculty of Science and Technology of Universidade NOVA de Lisboa.

Omkar Thakoor, Milind Tambe, Phebe Vayanos, Haifeng Xu, Christopher Kiekintveld.  2019.  General-Sum Cyber Deception Games under Partial Attacker Valuation Information. CAIS, USC.

The rapid increase in cybercrime, causing a reported annual economic loss of $600 billion [20], has prompted a critical need for effective cyber defense. Strategic criminals conduct network reconnaissance prior to executing attacks to avoid detection and to establish situational awareness via scanning and fingerprinting tools. Cyber deception attempts to foil these reconnaissance efforts by disguising network and system attributes, among several other techniques. Cyber Deception Games (CDGs) are a game-theoretic model for optimizing strategic deception that can apply to various deception methods. The recently introduced initial model for CDGs assumes zero-sum payoffs, implying directly conflicting attacker motives, and perfect defender knowledge of attacker preferences. These unrealistic assumptions are fundamental limitations of the initial zero-sum model, which we address by proposing a general-sum model that can also handle uncertainty in the defender's knowledge.

Kimberly Ferguson-Walter, Temmie Shade, Andrew Rogers, Michael Trumbo, Kevin Nauer, Kristin Divis, Aaron Jones, Angela Combs, Robert Abbott.  2018.  The Tularosa Study: An Experimental Design and Implementation to Quantify the Effectiveness of Cyber Deception. Proposed for presentation at the Hawaii International Conference on System Sciences.

The Tularosa study was designed to understand how defensive deception—including both cyber and psychological—affects cyber attackers. Over 130 red teamers participated in a network penetration test over two days in which we controlled both the presence of and explicit mention of deceptive defensive techniques. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The design was conducted with a battery of questionnaires (e.g., experience, personality, etc.) and cognitive tasks (e.g., fluid intelligence, working memory, etc.), allowing for the characterization of a "typical" red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate, etc.) to be correlated with the cyber events. This paper focuses on the design, implementation, population characteristics, lessons learned, and planned analyses.

Prakruthi Karuna, Hemant Purohit, Rajesh Ganesan, Sushil Jajodia.  2018.  Generating Hard to Comprehend Fake Documents for Defensive Cyber Deception. IEEE Intelligent Systems. 33(5):16-25.

Existing approaches to cyber defense have been inadequate at defending the targets from advanced persistent threats (APTs). APTs are stealthy and orchestrated attacks, which target both corporations and governments to exfiltrate important data. In this paper, we present a novel comprehensibility manipulation framework (CMF) to generate a haystack of hard to comprehend fake documents, which can be used for deceiving attackers and increasing the cost of data exfiltration by wasting their time and resources. CMF requires an original document as input and generates fake documents that are both believable and readable for the attacker, possess no important information, and are hard to comprehend. To evaluate CMF, we experimented with college aptitude tests and compared the performance of many readers on separate reading comprehension exercises with fake and original content. Our results showed a statistically significant difference in the correct responses to the same questions across the fake and original exercises, thus validating the effectiveness of CMF operations to mislead.

Tao Zhang, Quanyan Zhu.  2018.  Hypothesis Testing Game for Cyber Deception. Lecture Notes in Computer Science. 11199

Deception is a technique to mislead human or computer systems by manipulating beliefs and information. Successful deception is characterized by the information-asymmetric, dynamic, and strategic behaviors of the deceiver and the deceivee. This paper proposes a game-theoretic framework to capture these features of deception in which the deceiver sends the strategically manipulated information to the deceivee while the deceivee makes the best-effort decisions based on the information received and his belief. In particular, we consider the case when the deceivee adopts hypothesis testing to make binary decisions and the asymmetric information is modeled using a signaling game where the deceiver is a privately-informed player called sender and the deceivee is an uninformed player called receiver. We characterize perfect Bayesian Nash equilibrium (PBNE) solution of the game and study the deceivability of the game. Our results show that the hypothesis testing game admits pooling and partially-separating-pooling equilibria. In pooling equilibria, the deceivability depends on the true types, while in partially-separating-pooling equilibria, the deceivability depends on the cost of the deceiver. We introduce the receiver operating characteristic curve to visualize the equilibrium behavior of the deceiver and the performance of the decision making, thereby characterizing the deceivability of the hypothesis testing game.
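The ROC-curve view of deceivability can be illustrated with a toy hypothesis test. This is our own illustration, not the paper's model: we assume the deceivee observes a scalar signal distributed N(0, 1) under the honest hypothesis H0 and N(mu, 1) under the deceptive hypothesis H1, and decides via a threshold test; sweeping the threshold traces the ROC curve.

```python
import math

def gaussian_tail(x):
    # P(Z > x) for a standard normal variable Z
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def roc_point(threshold, mu):
    # false alarm: deciding "deceptive" when the sender was honest (H0)
    false_alarm = gaussian_tail(threshold)
    # detection: deciding "deceptive" when the sender manipulated (H1)
    detection = gaussian_tail(threshold - mu)
    return false_alarm, detection

# A stronger manipulation (larger mu) is easier to detect at every
# threshold, so a strategic deceiver prefers subtle shifts.
for mu in (0.5, 2.0):
    fa, det = roc_point(1.0, mu)
    print(f"mu={mu}: false alarm={fa:.3f}, detection={det:.3f}")
```

The deceiver's strategic choice amounts to picking how far to shift the signal, trading off the benefit of the manipulation against where it lands the receiver on this curve.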

Frank Stech, Kristin Heckman.  2018.  Human Nature and Cyber Weaponry: Use of Denial and Deception in Cyber Counterintelligence. Cyber Weaponry. :13-27.

With the increased use of cyber weapons for Internet-based cyber espionage, the need for cyber counterintelligence has become apparent, but counterintelligence remains more art than science because of its focus on tricking human nature—the way people think, feel, and behave. Nevertheless, counterintelligence theory and practice have been extended to domains such as industry and finance, and can be applied to cyber security and active cyber defense. Nonetheless, there are relatively few explicit counterintelligence applications to cyber security reported in the open literature. This chapter describes the mechanisms of cyber denial and deception operations, using a cyber deception methods matrix and a cyber deception chain to build a tailored active cyber defense system for cyber counterintelligence. Cyber counterintelligence with cyber deception can mitigate cyber spy actions within the cyber espionage “kill chain.” The chapter describes how defenders can apply cyber denial and deception in their cyber counterintelligence operations to mitigate a cyber espionage threat and thwart cyber spies. The chapter provides a hypothetical case, based on real cyber espionage operations by a state actor.

Steven Templeton, Matt Bishop, Karl Levitt, Mark Heckman.  2019.  A Biological Framework for Characterizing Mimicry in Cyber-Deception. ProQuest. :508-517.

Deception, both offensive and defensive, is a fundamental tactic in warfare and a well-studied topic in biology. Living organisms use a variety of deception tools, including mimicry, camouflage, and nocturnality. Evolutionary biologists have published a variety of formal models for deception in nature. Deception in these models is fundamentally based on misclassification of signals between the entities of the system, represented as a tripartite relation between two signal senders, the “model” and the “mimic”, and a signal receiver, called the “dupe”. Examples of relations between entities include attraction, repulsion, and expected advantage gained or lost from the interaction. Using this representation, a multitude of deception systems can be described. Some deception systems in cybersecurity are well-known; consider, for example, the many varieties of “honey-things” used to ensnare attackers. The study of deception in cybersecurity is limited compared to the richness found in biology. While multiple ontologies of deception in cyber-environments exist, these are primarily lists of terms without a greater organizing structure. This is both a lost opportunity and potentially quite dangerous: a lost opportunity because defenders may be missing useful defensive deception strategies; dangerous because defenders may be oblivious to ongoing attacks using previously unidentified types of offensive deception. In this paper, we extend deception models from biology to present a framework for identifying relations in the cyber-realm analogous to those found in nature. We show how modifications of these relations can create, enhance, or, conversely, prevent deception. From these relations, we develop a framework of cyber-deception types, with examples, and a general model for cyber-deception. The signals used in cyber-systems, which are not directly tied to the natural world, differ significantly from those used in biological mimicry systems; however, similar concepts supporting identity exist and are discussed in brief.

Tanmoy Chakraborty, Sushil Jajodia, Jonathan Katz, Antonio Picariello, Giancarlo Sperli, V. S. Subrahmanian.  2019.  FORGE: A Fake Online Repository Generation Engine for Cyber Deception. IEEE.

Today, major corporations and government organizations must face the reality that they will be hacked by malicious actors. In this paper, we consider the case of defending enterprises that have been successfully hacked by imposing additional a posteriori costs on the attacker. Our idea is simple: for every real document d, we develop methods to automatically generate a set Fake(d) of fake documents that are very similar to d. The attacker who steals documents must wade through a large number of documents in detail in order to separate the real one from the fakes. Our FORGE system focuses on technical documents (e.g. engineering/design documents) and involves three major innovations. First, we represent the semantic content of documents via multi-layer graphs (MLGs). Second, we propose a novel concept of “meta-centrality” for multi-layer graphs. Our third innovation is to show that the problem of generating the set Fake(d) of fakes can be viewed as an optimization problem. We prove that this problem is NP-complete and then develop efficient heuristics to solve it in practice. We ran detailed experiments with a panel of 20 human subjects and show that FORGE generates highly believable fakes.
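FORGE's multi-layer graphs and "meta-centrality" are beyond a short sketch, but the underlying intuition, that swapping out the most central concepts in a document's concept graph yields fakes that are structurally similar yet misleading, can be shown with plain degree centrality on a single-layer graph. This is our simplification for illustration, not FORGE's actual algorithm, and the concept names and decoy table are invented:

```python
from collections import defaultdict

# Hypothetical concept graph extracted from a technical document:
# nodes are domain concepts, edges are co-occurrence relations.
edges = [("alloy", "temperature"), ("alloy", "pressure"),
         ("temperature", "sensor"), ("alloy", "coating")]

# Degree centrality as a stand-in for the paper's meta-centrality.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Replace the most central concept with a plausible decoy from the
# same domain, preserving the graph's structure in the fake.
central = max(degree, key=degree.get)
decoys = {"alloy": "polymer"}  # invented substitution table
fake_edges = [(decoys.get(a, a), decoys.get(b, b)) for a, b in edges]
print(central, fake_edges)
```

The real system casts the choice of which concepts to replace as an NP-complete optimization balancing believability against information loss; this sketch only conveys why centrality is the natural lever.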

Kimberly Ferguson-Walter, Sunny Fugate, Justin Mauger, Maxine Major.  2019.  Game Theory for Adaptive Defensive Cyber Deception. ACM Digital Library.

As infamous hacker Kevin Mitnick describes in his book The Art of Deception, "the human factor is truly security's weakest link". Deception has been widely successful when used by hackers for social engineering and by military strategists in kinetic warfare [26]. Deception affects the human's beliefs, decisions, and behaviors. Similarly, deception is a powerful tool that cyber defenders should employ to protect our systems against humans who wish to penetrate, attack, and harm them.

Sarah Cooney, Phebe Vayanos, Thanh H. Nguyen, Cleotilde Gonzalez, Christian Lebiere, Edward A. Cranford, Milind Tambe.  2019.  Warning Time: Optimizing Strategic Signaling for Security Against Boundedly Rational Adversaries. Team Core USC.

Defender-attacker Stackelberg security games (SSGs) have been applied for solving many real-world security problems. Recent work in SSGs has incorporated a deceptive signaling scheme into the SSG model, where the defender strategically reveals information about her defensive strategy to the attacker, in order to influence the attacker’s decision making for the defender’s own benefit. In this work, we study the problem of signaling in security games against a boundedly rational attacker. 
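The signaling idea can be sketched with a toy example (our own illustrative numbers, and, for simplicity, a perfectly rational attacker, whereas the paper's contribution is handling bounded rationality). Assume the defender covers a target with probability p and always shows a warning when it is covered; she may also bluff a warning on an uncovered target with probability q. The attacker updates his belief on seeing the warning:

```python
def attacker_ev_given_warning(p, q, reward, penalty):
    # Bayesian update: P(covered | warning), given covered targets
    # always warn and uncovered targets warn (bluff) with probability q.
    p_warn = p + (1 - p) * q
    p_covered = p / p_warn
    # attacker's expected value of attacking after seeing the warning
    return p_covered * penalty + (1 - p_covered) * reward

# With illustrative payoffs, a small bluffing rate keeps the warning
# credible (attacking is unprofitable); bluffing too often dilutes it.
p, reward, penalty = 0.3, 10.0, -6.0
for q in (0.0, 0.25, 0.5):
    print(f"q={q}: EV of attacking = {attacker_ev_given_warning(p, q, reward, penalty):.2f}")
```

The defender's optimization is to push q as high as possible, saving real coverage, while keeping the attacker's expected value of attacking at or below zero; against boundedly rational humans that indifference point shifts, which is what the cognitive-model-informed schemes account for.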

Shari Lawrence Pfleeger, Deanna Caputo.  2012.  Leveraging behavioral science to mitigate cyber security risk. Computers & Security. 31(4):597-611.

Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use. 

2019-09-11
[Anonymous].  2019.  El Paso and Dayton Tragedy-Related Scams and Malware Campaigns. CISA.

In the wake of the recent shootings in El Paso, TX, and Dayton, OH, the Cybersecurity and Infrastructure Security Agency (CISA) advises users to watch out for possible malicious cyber activity seeking to capitalize on these tragic events. Users should exercise caution in handling emails related to the shootings, even if they appear to originate from trusted sources, as hackers commonly attempt to exploit tragic events to carry out phishing attacks.

Lucas Ropek.  2019.  Social Engineering Attack Nets $1.7M in Government Funds. Government Technology.

Social engineering is the act of manipulating someone into a specific action through online deception. According to Norton, social engineering attempts typically take one of several forms, including phishing, impersonation, and various types of baiting. Social engineering attacks are on the rise, according to the FBI, which reportedly received some 20,373 complaints in 2018 alone. Those complaints amount to $1.2 billion in overall losses.

[Anonymous].  2019.  Millions of fake businesses list on Google Maps. WARC News.

Google handles more than 90% of the world's online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.

Chris Bing.  2018.  Winter Olympics hack shows how advanced groups can fake attribution. Cyber Scoop.

A malware attack that disrupted the opening ceremony of the 2018 Winter Olympics highlights the problem of false flag operations. The malware, called "Olympic Destroyer," contained code derived from other well-known attacks launched by different hacking groups. This led different cybersecurity companies to accuse Russia, North Korea, Iran, or China.