Biblio

Filters: Keyword is Articles of Interest
2019-09-13
Cleotilde Gonzalez, Javier F. Lerch, Christian Lebiere.  2003.  Instance-based learning in dynamic decision making. Cognitive Science. 27:591–635.

This paper presents a learning theory pertinent to dynamic decision making (DDM) called instance-based learning theory (IBLT). IBLT proposes five learning mechanisms in the context of a decision-making process: instance-based knowledge, recognition-based retrieval, adaptive strategies, necessity-based choice, and feedback updates. IBLT suggests that in DDM people learn through the accumulation and refinement of instances, each containing the decision-making situation, the action taken, and the utility of the decision. As decision makers interact with a dynamic task, they recognize a situation according to its similarity to past instances, adapt their judgment strategies from heuristic-based to instance-based, and refine the accumulated knowledge according to feedback on the results of their actions. IBLT's learning mechanisms have been implemented in an ACT-R cognitive model. Through a series of experiments, this paper shows how IBLT's learning mechanisms closely approximate the relative trend magnitude and performance of human data. Although the cognitive model is bounded within the context of a dynamic task, IBLT is a general theory of decision making applicable to other dynamic environments.
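
The decision loop IBLT describes (recognize a situation by its similarity to stored instances, choose the action with the best remembered utility, refine the store from feedback) is compact enough to sketch in code. The following Python sketch is a minimal illustration under assumed encodings, not the paper's ACT-R implementation; the attribute-matching similarity function and the neutral default utility are both assumptions.

class IBLAgent:
    """Minimal instance-based learner; instances are (situation, action, utility)."""

    def __init__(self, default_utility=0.0):
        self.instances = []              # accumulated experience
        self.default = default_utility   # used before any relevant experience

    def similarity(self, s1, s2):
        # Assumed similarity: fraction of attributes on which two
        # situations (dicts) agree.
        keys = set(s1) | set(s2)
        return sum(s1.get(k) == s2.get(k) for k in keys) / len(keys) if keys else 1.0

    def value(self, situation, action):
        # Similarity-weighted mean utility of past instances of this action.
        matches = [(self.similarity(situation, s), u)
                   for s, a, u in self.instances if a == action]
        total = sum(w for w, _ in matches)
        if total == 0:
            return self.default
        return sum(w * u for w, u in matches) / total

    def choose(self, situation, actions):
        # Recognition-based retrieval plus instance-based choice.
        return max(actions, key=lambda a: self.value(situation, a))

    def feedback(self, situation, action, utility):
        # Feedback update: store the observed outcome as a new instance.
        self.instances.append((situation, action, utility))

After agent.feedback({"load": "high"}, "throttle", 1.0), a later agent.choose({"load": "high"}, ["throttle", "hold"]) prefers "throttle"; with no relevant experience the choice falls back to the default utility, mirroring the heuristic-to-instance-based shift the theory describes.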

Edward A. Cranford, Cleotilde Gonzalez, Palvi Aggarwal, Christian Lebiere.  2019.  Towards Personalized Deceptive Signaling for Cyber Defense Using Cognitive Models.

Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. While effective, these algorithms are optimized against perfectly rational adversaries. In a laboratory experiment, we pit humans against the defense algorithm in an online game designed to simulate an insider attack scenario. Humans attack far more often than predicted under perfect rationality, so optimizing against human bounded rationality is vitally important. We propose a cognitive model, based on instance-based learning theory and built in ACT-R, that accurately predicts human performance and biases in the game. We show that the algorithm does not defend well, largely due to its static nature and lack of adaptation to the particular individual's actions. Thus, we propose an adaptive method of signaling that uses the cognitive model to trace an individual's experience in real time, in order to optimize defenses. We discuss the results and implications of personalized defense.
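
How a signaling scheme might "trace an individual's experience in real time" can be illustrated with a toy bookkeeping loop. This is a sketch of the idea only, with invented names and thresholds; the paper traces the attacker with the full cognitive model rather than a simple compliance counter.

class AdaptiveSignaler:
    """Toy per-attacker signaling policy driven by observed compliance."""

    def __init__(self):
        self.signals_shown = 0
        self.signals_heeded = 0

    def comply_rate(self):
        # Prior-smoothed estimate of how often this attacker backs off
        # when warned (0.5 before any observations).
        return (self.signals_heeded + 1) / (self.signals_shown + 2)

    def should_signal(self, target_is_covered, threshold=0.4):
        # Warn truthfully on covered targets; bluff a warning on an
        # uncovered target only while this attacker still tends to comply.
        return target_is_covered or self.comply_rate() > threshold

    def observe(self, signaled, attacked):
        # Real-time trace of the individual's response to each warning.
        if signaled:
            self.signals_shown += 1
            if not attacked:
                self.signals_heeded += 1

The point of the adaptation is visible in should_signal: once an individual's trace shows that warnings have stopped working, the defender stops spending bluffs on that individual, which is exactly the flexibility a static scheme lacks.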

2019-09-12
Kimberly Ferguson-Walter, D. S. LaFon, T. B. Shade.  2017.  Friend or Faux: Deception for Cyber Defense. Journal of Information Warfare. 16(2):28-42.

Defensive deception shows promise in rebalancing the asymmetry of cybersecurity. It makes an attacker's job harder because it does more than just block access: it impacts the attacker's decision making, causing him or her to waste time and effort as well as to expose his or her presence in the network. Pilot studies conducted by NSA research demonstrated the plausibility of the approach and the necessity for metrics of success, including the difficulty of attacking the system, the behavioral changes caused, the cognitive and emotional reactions aroused, and the attacker strategy changes due to deception. Designing reliable and valid measures of effectiveness is a worthy (though often overlooked) goal for industry and government alike.

Cristiano De Faveri, Ana Moreira, Vasco Amaral.  2018.  Multi-paradigm deception modeling for cyber defense. Science Direct. 141:32-51.

Security-critical systems demand multiple well-balanced mechanisms to detect ill-intentioned actions and protect valuable assets from damage while keeping costs at acceptable levels. The use of deception to enhance security has been studied for more than two decades. However, deception is still included in the software development process in an ad hoc fashion, typically realized as single tools or as entire solutions repackaged as honeypot machines. We propose a multi-paradigm modeling approach to specify deception tactics during the software development process so that conflicts and risks can be found in the initial phases of development, reducing the costs of ill-planned decisions. We describe a metamodel containing deception concepts that integrates other models, such as a goal-oriented model, a feature model, and behavioral UML models, to specify the static and dynamic aspects of a deception operation. The outcome of this process is a set of deception tactics realized by a set of deception components integrated with the system components. The feasibility of this multi-paradigm approach is shown by designing deception defense strategies for a student presence-control system for the Faculty of Science and Technology of Universidade NOVA de Lisboa.
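
As a rough illustration of what "deception tactics realized by deception components" could look like as a concrete modeling artifact, here is a tiny Python data model. All class and field names are invented; the paper's metamodel is richer and is integrated with goal, feature, and UML models.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    name: str                    # what the tactic protects

@dataclass
class DeceptionComponent:
    name: str                    # concrete mechanism deployed alongside the system

@dataclass
class DeceptionTactic:
    goal: str                    # drawn from the goal-oriented model
    protects: List[Asset]
    realized_by: List[DeceptionComponent]
    triggers: List[str] = field(default_factory=list)  # behavioral events

# A hypothetical tactic for the presence-control case study:
tactic = DeceptionTactic(
    goal="Detect bulk harvesting of attendance records",
    protects=[Asset("attendance database")],
    realized_by=[DeceptionComponent("honey student records")],
    triggers=["bulk read of records"],
)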

Omkar Thakoor, Milind Tambe, Phebe Vayanos, Haifeng Xu, Christopher Kiekintveld.  2019.  General-Sum Cyber Deception Games under Partial Attacker Valuation Information. CAIS USC.

The rapid increase in cybercrime, causing a reported annual economic loss of $600 billion [20], has prompted a critical need for effective cyber defense. Strategic criminals conduct network reconnaissance prior to executing attacks in order to avoid detection and establish situational awareness via scanning and fingerprinting tools. Cyber deception attempts to foil these reconnaissance efforts by disguising network and system attributes, among several other techniques. Cyber Deception Games (CDGs) are a game-theoretic model for optimizing strategic deception that can apply to various deception methods. The recently introduced initial model of CDGs assumes zero-sum payoffs, implying directly conflicting attacker motives, and perfect defender knowledge of attacker preferences. These unrealistic assumptions are fundamental limitations of the initial zero-sum model, which we address by proposing a general-sum model that can also handle uncertainty in the defender's knowledge.
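
The zero-sum versus general-sum distinction is easy to see in a toy instance. In the sketch below, the machine names, values, and the attacker's decision rule are all invented for illustration; the defender's loss on a machine deliberately differs from the attacker's gain, making the game general-sum, and masking observable attributes redirects the attack.

# True stakes per machine: (defender_loss, attacker_gain). These differ,
# so the game is general-sum rather than zero-sum.
true_value = {"db-server": (10, 6), "workstation": (2, 3)}

def attacker_pick(observed):
    # Reconnaissance-driven attacker: strike whatever *looks* richest.
    return max(observed, key=observed.get)

def payoffs(masking):
    # masking: machine -> fake attractiveness score the defender displays.
    target = attacker_pick(masking)
    d_loss, a_gain = true_value[target]
    return -d_loss, a_gain

# Disguising the database as a low-value box diverts the attack.
print(payoffs({"db-server": 1, "workstation": 5}))   # (-2, 3)
print(payoffs({"db-server": 5, "workstation": 1}))   # (-10, 6)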

Kimberly Ferguson-Walter, Temmie Shade, Andrew Rogers, Michael Trumbo, Kevin Nauer, Kristin Divis, Aaron Jones, Angela Combs, Robert Abbott.  2018.  The Tularosa Study: An Experimental Design and Implementation to Quantify the Effectiveness of Cyber Deception. Proposed for presentation at the Hawaii International Conference on System Sciences.

The Tularosa study was designed to understand how defensive deception, both cyber and psychological, affects cyber attackers. Over 130 red teamers participated in a two-day network penetration test in which we controlled both the presence of deceptive defensive techniques and the explicit mention of them. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The design included a battery of questionnaires (e.g., experience, personality) and cognitive tasks (e.g., fluid intelligence, working memory), allowing for the characterization of a "typical" red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate) to be correlated with the cyber events. This paper focuses on the design, implementation, population characteristics, lessons learned, and planned analyses.

Prakruthi Karuna, Hemant Purohit, Rajesh Ganesan, Sushil Jajodia.  2018.  Generating Hard to Comprehend Fake Documents for Defensive Cyber Deception. IEEE Xplore Digital Library. 33(5):16-25.

Existing approaches to cyber defense have been inadequate at defending targets from advanced persistent threats (APTs). APTs are stealthy, orchestrated attacks which target both corporations and governments to exfiltrate important data. In this paper, we present a novel comprehensibility manipulation framework (CMF) to generate a haystack of hard-to-comprehend fake documents, which can be used to deceive attackers and increase the cost of data exfiltration by wasting their time and resources. CMF requires an original document as input and generates fake documents that are both believable and readable for the attacker, possess no important information, and are hard to comprehend. To evaluate CMF, we experimented with college aptitude tests and compared the performance of many readers on separate reading comprehension exercises with fake and original content. Our results showed a statistically significant difference in the correct responses to the same questions across the fake and original exercises, thus validating the effectiveness of CMF operations to mislead.
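
One ingredient of such a framework, removing the load-bearing specifics of a text while leaving its surface readability intact, can be sketched as a simple substitution pass. Everything below (the token list, the decoys, the sample sentence) is invented for illustration; CMF's actual selection and generation of replacements is far more principled.

import re

# Hypothetical mapping from important concepts to plausible decoys.
SWAPS = {
    "titanium": "aluminium",
    "600": "450",
}

def make_fake(text, swaps=SWAPS):
    # Replace each important token with its decoy, whole words only,
    # leaving the sentence structure (and hence readability) untouched.
    for original, decoy in swaps.items():
        text = re.sub(rf"\b{re.escape(original)}\b", decoy, text)
    return text

print(make_fake("Heat-treat the titanium spar at 600 degrees."))
# Heat-treat the aluminium spar at 450 degrees.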

Tao Zhang, Quanyan Zhu.  2018.  Hypothesis Testing Game for Cyber Deception. Springer Link. 11199.

Deception is a technique to mislead human or computer systems by manipulating beliefs and information. Successful deception is characterized by the information-asymmetric, dynamic, and strategic behaviors of the deceiver and the deceivee. This paper proposes a game-theoretic framework to capture these features of deception, in which the deceiver sends strategically manipulated information to the deceivee, while the deceivee makes best-effort decisions based on the information received and his belief. In particular, we consider the case where the deceivee adopts hypothesis testing to make binary decisions, and the asymmetric information is modeled using a signaling game in which the deceiver is a privately informed player called the sender and the deceivee is an uninformed player called the receiver. We characterize the perfect Bayesian Nash equilibrium (PBNE) solutions of the game and study its deceivability. Our results show that the hypothesis testing game admits pooling and partially-separating-pooling equilibria. In pooling equilibria, the deceivability depends on the true types, while in partially-separating-pooling equilibria, the deceivability depends on the cost to the deceiver. We introduce the receiver operating characteristic curve to visualize the equilibrium behavior of the deceiver and the performance of the decision making, thereby characterizing the deceivability of the hypothesis testing game.
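
The receiver's decision step in such a game is classical binary hypothesis testing. For orientation, a standard likelihood-ratio formulation and the ROC quantities it induces are given below; the notation is the textbook one and is assumed rather than taken from the paper.

% The receiver decides between H_0 (e.g., no deception) and H_1 by
% comparing the likelihood ratio of the received message x to a threshold:
\[
\Lambda(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau .
\]
% Sweeping the threshold traces the ROC curve through the detection
% and false-alarm probabilities:
\[
P_D(\tau) = \Pr\!\left[\Lambda(x) \ge \tau \mid H_1\right], \qquad
P_F(\tau) = \Pr\!\left[\Lambda(x) \ge \tau \mid H_0\right].
\]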

Frank Stech, Kristin Heckman.  2018.  Human Nature and Cyber Weaponry: Use of Denial and Deception in Cyber Counterintelligence. Springer Link. :13-27.

With the increased use of cyber weapons for Internet-based cyber espionage, the need for cyber counterintelligence has become apparent, but counterintelligence remains more art than science because of its focus on tricking human nature: the way people think, feel, and behave. Counterintelligence theory and practice have been extended to domains such as industry and finance and can be applied to cyber security and active cyber defense, yet relatively few explicit counterintelligence applications to cyber security are reported in the open literature. This chapter describes the mechanisms of cyber denial and deception operations, using a cyber deception methods matrix and a cyber deception chain to build a tailored active cyber defense system for cyber counterintelligence. Cyber counterintelligence with cyber deception can mitigate cyber spy actions within the cyber espionage "kill chain." The chapter describes how defenders can apply cyber denial and deception in their cyber counterintelligence operations to mitigate a cyber espionage threat and thwart cyber spies, and it provides a hypothetical case based on real cyber espionage operations by a state actor.

Steven Templeton, Matt Bishop, Karl Levitt, Mark Heckman.  2019.  A Biological Framework for Characterizing Mimicry in Cyber-Deception. ProQuest. :508-517.

Deception, both offensive and defensive, is a fundamental tactic in warfare and a well-studied topic in biology. Living organisms use a variety of deception tools, including mimicry, camouflage, and nocturnality. Evolutionary biologists have published a variety of formal models for deception in nature. Deception in these models is fundamentally based on the misclassification of signals between the entities of the system, represented as a tripartite relation between two signal senders, the "model" and the "mimic", and a signal receiver, called the "dupe". Examples of relations between entities include attraction, repulsion, and the expected advantage gained or lost from the interaction. Using this representation, a multitude of deception systems can be described. Some deception systems in cybersecurity are well known; consider, for example, the many varieties of "honey-things" used to ensnare attackers. Still, the study of deception in cybersecurity is limited compared to the richness found in biology. While multiple ontologies of deception in cyber environments exist, these are primarily lists of terms without a greater organizing structure. This is both a lost opportunity and potentially quite dangerous: a lost opportunity because defenders may be missing useful defensive deception strategies, and dangerous because defenders may be oblivious to ongoing attacks using previously unidentified types of offensive deception. In this paper, we extend deception models from biology to present a framework for identifying relations in the cyber realm analogous to those found in nature. We show how modifications of these relations can create, enhance, or, on the contrary, prevent deception. From these relations, we develop a framework of cyber-deception types, with examples, and a general model for cyber deception. The signals used in cyber systems, which are not directly tied to the natural world, differ significantly from those used in biological mimicry systems; however, similar concepts supporting identity exist and are discussed in brief.
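
The tripartite relation at the heart of these biological models is straightforward to encode. The sketch below is illustrative only; the field names and the defensive honeypot reading are assumptions, not the paper's framework.

from dataclasses import dataclass

@dataclass
class MimicrySystem:
    model: str            # entity whose signal is copied
    mimic: str            # entity emitting the copied signal
    dupe: str             # receiver that misclassifies the signal
    dupe_response: str    # e.g., "attracted" or "repelled" by the signal
    mimic_advantage: str  # what the mimic gains from the misclassification

# Defensive cyber reading: a honeypot mimics a production server.
honeypot = MimicrySystem(
    model="production database server",
    mimic="honeypot",
    dupe="network intruder",
    dupe_response="attracted",
    mimic_advantage="defender detects and studies the intruder",
)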

Tanmoy Chakraborty, Sushil Jajodia, Jonathan Katz, Antonio Picariello, Giancarlo Sperli, V. S. Subrahmanian.  2019.  FORGE: A Fake Online Repository Generation Engine for Cyber Deception. IEEE.

Today, major corporations and government organizations must face the reality that they will be hacked by malicious actors. In this paper, we consider the case of defending enterprises that have been successfully hacked by imposing additional a posteriori costs on the attacker. Our idea is simple: for every real document d, we develop methods to automatically generate a set Fake(d) of fake documents that are very similar to d. The attacker who steals documents must wade through a large number of documents in detail in order to separate the real one from the fakes. Our FORGE system focuses on technical documents (e.g. engineering/design documents) and involves three major innovations. First, we represent the semantic content of documents via multi-layer graphs (MLGs). Second, we propose a novel concept of “meta-centrality” for multi-layer graphs. Our third innovation is to show that the problem of generating the set Fake(d) of fakes can be viewed as an optimization problem. We prove that this problem is NP-complete and then develop efficient heuristics to solve it in practice. We ran detailed experiments with a panel of 20 human subjects and show that FORGE generates highly believable fakes.
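
The shape of the pipeline (concepts as a multi-layer graph, a centrality score over the concepts, perturbation of the most central ones in each fake) can be sketched in a few lines. The graph, the degree-based score standing in for meta-centrality, and the decoy dictionary are all invented here; FORGE's actual optimization problem is NP-complete and solved with heuristics.

from collections import defaultdict

# Hypothetical multi-layer graph: layer name -> concept co-occurrence edges.
layers = {
    "components": [("alloy", "wing-spar"), ("wing-spar", "rivet")],
    "parameters": [("alloy", "temper-cycle"), ("temper-cycle", "oven")],
}

def centrality(layers):
    # Stand-in for meta-centrality: degree summed across layers.
    score = defaultdict(int)
    for edges in layers.values():
        for a, b in edges:
            score[a] += 1
            score[b] += 1
    return score

def make_fake(doc, layers, decoys, k=1):
    # Replace the k most central concepts with plausible decoys.
    score = centrality(layers)
    for concept in sorted(score, key=score.get, reverse=True)[:k]:
        doc = doc.replace(concept, decoys.get(concept, concept))
    return doc

print(make_fake("Heat the alloy before fitting the wing-spar.",
                layers, {"alloy": "composite"}))
# Heat the composite before fitting the wing-spar.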

Kimberly Ferguson-Walter, Sunny Fugate, Justin Mauger, Maxine Major.  2019.  Game Theory for Adaptive Defensive Cyber Deception. ACM Digital Library.

As infamous hacker Kevin Mitnick describes in his book The Art of Deception, "the human factor is truly security's weakest link". Deception has been widely successful when used by hackers for social engineering and by military strategists in kinetic warfare [26]. Deception affects a human's beliefs, decisions, and behaviors. By the same token, deception is a powerful tool for cyber defenders, one that should be employed to protect our systems against the humans who wish to penetrate, attack, and harm them.

Sarah Cooney, Phebe Vayanos, Thanh H. Nguyen, Cleotilde Gonzalez, Christian Lebiere, Edward A. Cranford, Milind Tambe.  2019.  Warning Time: Optimizing Strategic Signaling for Security Against Boundedly Rational Adversaries. Team Core USC.

Defender-attacker Stackelberg security games (SSGs) have been applied for solving many real-world security problems. Recent work in SSGs has incorporated a deceptive signaling scheme into the SSG model, where the defender strategically reveals information about her defensive strategy to the attacker, in order to influence the attacker’s decision making for the defender’s own benefit. In this work, we study the problem of signaling in security games against a boundedly rational attacker. 
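
A common way to model a boundedly rational attacker in this line of work is a quantal response rule, under which attack probabilities are smooth in expected utility rather than concentrated on the single best target. The standard form is given below for orientation; the abstract does not state which behavioral model the paper adopts, so treat this as background rather than the paper's formulation.

% Quantal response: the attacker strikes target t with probability
% proportional to the exponentiated expected utility U_t. The parameter
% lambda controls rationality: lambda -> infinity recovers the perfectly
% rational best response, while lambda = 0 yields uniform randomness.
\[
q_t \;=\; \frac{e^{\lambda U_t}}{\sum_{t'} e^{\lambda U_{t'}}} .
\]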

Shari Lawrence Pfleeger, Deanna Caputo.  2012.  Leveraging behavioral science to mitigate cyber security risk. Science Direct. 31(4):597-611.

Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use. 

2019-09-11
[Anonymous].  2019.  El Paso and Dayton Tragedy-Related Scams and Malware Campaigns. CISA.

In the wake of the recent shootings in El Paso, TX, and Dayton, OH, the Cybersecurity and Infrastructure Security Agency (CISA) advises users to watch out for possible malicious cyber activity seeking to capitalize on these tragic events. Users should exercise caution in handling emails related to the shootings, even if they appear to originate from trusted sources, as it is common for hackers to exploit tragedies to carry out phishing attacks.

Lucas Ropek.  2019.  Social Engineering Attack Nets $1.7M in Government Funds. Government Technology.

Social engineering is the act of manipulating someone into a specific action through online deception. According to Norton, social engineering attempts typically take one of several forms, including phishing, impersonation, and various types of baiting. Social engineering attacks are on the rise, according to the FBI, which reportedly received some 20,373 complaints in 2018 alone; those complaints amount to $1.2 billion in overall losses.

[Anonymous].  2019.  Millions of fake businesses listed on Google Maps. WARC News.

Google handles more than 90% of the world's online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.

Chris Bing.  2018.  Winter Olympics hack shows how advanced groups can fake attribution. Cyber Scoop.

A malware attack that disrupted the opening ceremony of the 2018 Winter Olympics highlights the use of false flag operations. The malware, called "Olympic Destroyer," contained code derived from other well-known attacks launched by different hacking groups, which led different cybersecurity companies to accuse Russia, North Korea, Iran, or China.

James Sanders.  2018.  Attackers are using cloud services to mask attack origin and build false trust. Tech Republic.

According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security as cloud hosting services are being used by attackers to host malware droppers. The use of this tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.

Nicole Lee.  2019.  Google’s new curriculum teaches kids how to detect disinformation. Engadget.

The curriculum includes "Don't Fall for Fake" activities centered on teaching children critical thinking skills, so that they will know the difference between credible and non-credible news sources.

Devin Coldewey.  2019.  To Detect Fake News, This AI First Learned to Write it. Tech Crunch.

Naturally, Grover is best at detecting its own fake articles, since in a way the model knows its own processes. But it can also detect those made by other models, such as OpenAI's GPT-2, with high accuracy.

Caleb Townsend.  2019.  Deepfake Technology: Implications for the Future. U.S. Cybersecurity Magazine.

Deepfakes' most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.

[Anonymous].  2019.  Researchers develop app to detect Twitter bots in any language. Help Net Security.

Language scholars and machine learning specialists collaborated to create a new application that can detect Twitter bots regardless of the language used. Detecting bots will help decrease the spread of fake news.

Clint Watts.  2019.  The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'. Foreign Policy Research Institute.

The spread of deepfakes via social media platforms leads to disinformation and misinformation. There are ways in which the government and social media companies can work to prevent the spread of deepfakes.

2019-09-10
[Anonymous].  2019.  What is digital ad fraud and how does it work? Cyware.

Ad fraud is becoming more common among websites. It allows fraudsters to generate revenue for themselves through fake traffic, fake clicks, and fake installs, and it can also help cybercriminals deploy malware on users' computers.