Biblio

Found 107 results

Filters: Keyword is Cognitive Security in Cyber
2019-09-25
Carolyn Crandall.  2019.  You’ve Been Deceived about Deception Technology. Cyber Defense Magazine.

There are three misconceptions about deception technology in regard to its value, complexity, and application. Deception technology is valuable in that it provides accurate detection of attacks. Deceptions are organized, deployed, and managed by modern deception technology through the use of machine learning. Organizations of all sizes and types can apply deception in their cybersecurity strategies.

[Anonymous].  2019.  How to use deception to gain the advantage over cyber-attackers. Tiess Information Security Series.

Deception has mainly been used by attackers to deceive victims into sharing their personal information or downloading malware. However, deception has become the key to tricking adversaries into revealing their attack strategies and vulnerabilities.  In order for defensive cyber deception to be effective, a deception decoy fabric must be generated throughout a network. 

[Anonymous].  2018.  Deception As a Strategy for Cyber Security. Taslet Security.

Deception has been a key tactic in warfare since the ancient days. The growing frequency and complexity of cyberattacks has created the potential for cyber warfare.  Deception has become an important tactic in cyber defense as it allows security teams to learn more about the techniques and tools used by attackers as well as the weaknesses of organizations’ defense approaches. 

Abdul Rahman.  2019.  Tricking attackers through the art of deception. Help Net Security.

The purpose of using deception technology in cybersecurity is to misdirect or lure attackers away from valuable technology assets once they have successfully infiltrated a network, using traps or decoys. Deception technology can also be used to further learn about the motives and tactics of attackers. Several components are required for the effective performance of deception. 

Kelly Shortridge.  2017.  Disrupting the attack lifecycle: how do attackers behave?

In the realm of cybersecurity, the fact that hackers are human is often forgotten. It is important to examine the biases and behavior of attackers. Kelly Shortridge, detection project manager at BAE Systems Applied Intelligence, has highlighted five key points in regard to attacker biases, which include the avoidance of hard targets, the preference for repeatable or repackageable attacks, risk aversion, and more. Shortridge also identifies the ways in which these biases can be leveraged by defenders.

Kelly Sheridan.  2019.  Cognitive Bias Can Hamper Security Decisions. Dark Reading.

A report published by Forcepoint, titled Thinking About Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science, highlights availability bias as one of the biases held by security and business teams. Availability bias occurs when a person lets the frequency with which they receive information influence their decisions. For example, if there are more headlines about nation-state attacks, such attacks may take on greater priority for the decision-makers shaping the development of, and spending on, cybersecurity solutions.

George Hulme.  2016.  9 biases killing your security program. CSO Online.

George V. Hulme at CSO Online highlighted nine cognitive biases often held by security professionals that may be affecting the success of security programs. These biases include the availability heuristic, confirmation bias, information bias, the ostrich effect, and more. It is important to reduce these biases as they could lead to inaccurate judgments in the defense against cyberattacks. 

Tony Cole.  2018.  Deception technology: An approach that is beginning to gain traction. Federal News Network.

Organizations are encouraged to embrace deception technology in order to safely study cyber adversaries. The use of deception technology could allow security teams to further understand the motives of attackers and improve upon their defense methods.  This technology could also reduce dwell time, which is the amount of time attackers go undetected in a system or the time it takes for an organization to become aware of an incident.

2019-09-24
Sarah Garcia.  2019.  Cognitive Bias is the Threat Actor you may never detect. The Security Ledger.

Implicit biases held by security professionals could lead to the misinterpretation of critical data and bad decision-making, thus leaving organizations vulnerable to being attacked. It has been highlighted that biases, including aggregate bias, confirmation bias, anchoring bias, and more, can also affect cybersecurity policies and procedures. Organizations are encouraged to develop a structured decision-making plan for security professionals at the security operations levels and the executive levels in order to mitigate these biases. 

M Mitchell Waldrop.  2016.  How to hack the hackers: The human side of cybercrime. Nature.

Psychologists, economists, and human-factors researchers, in addition to computer scientists, need to be working on improving cybersecurity as the frequency and sophistication of cyberattacks grow. Cybersecurity professionals call for the exploration of behavioral science and economics in regard to cybercriminals and victims. The discovery of weaknesses in user behavior could lead to the discovery of vulnerabilities among cybercriminals.

[Anonymous].  2017.  What is Deception Technology? Forcepoint.

Deception technology involves the generation of traps or deception decoys. The use of deception technology can help fool hackers into thinking that they have gained access to assets such as workstations, servers, applications, and more, in a real environment.  Security teams can observe and monitor the operations, navigation, and tools of the hackers without the concern that any damage will occur on real assets. It is possible to detect breaches early, reduce false positives, and more, using deception technology.

[Anonymous].  2017.  HADES misleads hackers by creating an alternate reality. Homeland Security News Wire.

Cyber researchers at Sandia National Laboratories are applying deceptive strategies in defending systems against hackers. These strategies are delivered through a recently patented alternate-reality environment named HADES (High-fidelity Adaptive Deception & Emulation System). Instead of obstructing or removing a hacker upon infiltration into a system, HADES leads them into a simulated reality that presents cloned virtual hard drives, data sets, and memory that have been inconspicuously altered. The goal is to introduce doubt to adversaries.

Mike Elgan.  2018.  How to Overcome Cognitive Biases That Threaten Data Security. Security Intelligence.

Cognitive biases are considered to be logical errors in thinking. Such biases pose a significant threat to the security of enterprises in that they increase the success of social engineering attacks in which users are tricked into exposing sensitive information that could be used by attackers to infiltrate protected systems. Different types of bias, including anchoring bias, the availability heuristic, and the Dunning-Kruger effect, could also affect responses to cyber incidents. It is essential to understand biases to reduce human error. 

Doron Kolton.  2018.  5 ways deception tech is disrupting cybersecurity. The Next Web.

Deception is a tactic that can be used in cybersecurity to turn the tables on adversaries. Deception technology goes beyond the honeypot concept in that it can actively lure and bait attackers into an environment in which deception is applied. Organizations can use deception technology to reduce false positives, trigger early threat hunting operations, and more.

Carolyn Crandall.  2017.  Advanced Deception: How It Works & Why Attackers Hate It. Dark Reading.

The growing complexity and frequency of cyberattacks call for advanced methods to enhance the detection and prevention of such attacks. Deception is a cyber defense technique that is drawing more attention from organizations. This technique could be used to detect, deceive, and lure attackers away from sensitive data upon infiltration into a system. It is important to look at the most common features of distributed deception platforms such as high-interaction deception, adaptive deception, and more. 

Drew Robb.  2017.  Deceiving the Deceivers: Deception Technology Emerges as an IT Security Defense Strategy. eSecurity Planet.

Deception has always been a key strategy in war, politics, and commerce, but now this technique is being utilized in the battle of cybersecurity. Cybercriminals have applied this technique through the development and launch of cyberattacks such as phishing. Deception technology is now emerging as a security defense method for enterprises.  The implementation of this technology could help lure hackers away from sensitive assets once they have successfully infiltrated an organization's network. 

Raef Meeuwisse.  2019.  How to Hack a Human. How to Hack a Human: Cybersecurity for the Mind.

Raef Meeuwisse, CISM, CISA, ISACA expert speaker, and author of Cybersecurity for Beginners, has explored the different ways in which the human mind can be hacked as well as the effectiveness of these techniques. One of the techniques involves the manipulation of cognitive biases. Meeuwisse also examined how cybersecurity techniques could be used to analyze and defend against tactics used to hack the human mind. 

Rachael Flores.  2018.  Consistent Deception vs. a Malicious Hacker. Bing U News.

Computer scientists at Binghamton University are working to increase the effectiveness of cyber deception tools against malicious hackers. Cyber deception is a security defense method that can be used to detect, deceive, and lure attackers away from sensitive data once they have infiltrated a system. Researchers want to improve the consistency of deception. The goal is to reduce the use of ‘bad lies’ in cyber deception. 

Mohammad Sujan Miah, Marcus Gutierrez, Oscar Veliz, Omkar Thakoor, Christopher Kiekintveld.  2019.  Concealing Cyber-Decoys using Two-Sided Feature Deception Games. 10th International Workshop on Optimization in Multi-agent Systems 2019.

An increasingly important tool for securing computer networks is the use of deceptive decoy objects (e.g., fake hosts, accounts, or files) to detect, confuse, and distract attackers. One of the well-known challenges in using decoys is that it can be difficult to design effective decoys that are hard to distinguish from real objects, especially against sophisticated attackers who may be aware of the use of decoys. A key issue is that both real and decoy objects may have observable features that may give the attacker the ability to distinguish one from the other. However, a defender deploying decoys may be able to modify some features of either the real or decoy objects (at some cost), making the decoys more effective. We present a game-theoretic model of two-sided deception that models this scenario. We present an empirical analysis of this model to show strategies for effectively concealing decoys, as well as some limitations of decoys for cyber security.
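As a concrete illustration of the feature-modification idea, here is a toy sketch (not the paper's game model): the defender flips observable decoy features, each at a cost and under a budget, so the decoy matches the profile an attacker would use to tell it apart from the real host. All feature names, values, and costs here are invented for the example.

```python
# Toy sketch of one side of two-sided feature deception (illustration
# only): harden a decoy by flipping its cheapest distinguishing features.
REAL  = {"open_445": 1, "uptime_high": 1, "banner_iis": 1, "slow_rtt": 0}
DECOY = {"open_445": 1, "uptime_high": 0, "banner_iis": 0, "slow_rtt": 1}
COST  = {"open_445": 1, "uptime_high": 3, "banner_iis": 1, "slow_rtt": 2}

def mismatches(a, b):
    """Features an attacker could use to distinguish object a from object b."""
    return [f for f in a if a[f] != b[f]]

def harden_decoy(budget):
    """Greedily flip the cheapest mismatched decoy features until the
    budget runs out, shrinking the attacker's distinguishing signal."""
    decoy = dict(DECOY)
    for f in sorted(mismatches(REAL, decoy), key=COST.get):
        if COST[f] <= budget:
            budget -= COST[f]
            decoy[f] = REAL[f]
    return decoy

print(len(mismatches(REAL, DECOY)), "distinguishing features before")
print(len(mismatches(REAL, harden_decoy(budget=4))), "after spending 4")
```

The paper also considers modifying features of the real objects; the greedy budget spend above only gestures at the cost trade-off the game-theoretic analysis resolves.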

Aaron Schlenker, Omkar Thakoor, Haifeng Xu, Fei Fang, Milind Tambe, Long Tran-Thanh, Phebe Vayanos, Yevgeniy Vorobeychik.  2018.  Deceiving Cyber Adversaries: A Game Theoretic Approach. Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. :892–900.

An important way cyber adversaries find vulnerabilities in modern networks is through reconnaissance, in which they attempt to identify configuration specifics of network hosts. To increase uncertainty of adversarial reconnaissance, the network administrator (henceforth, defender) can introduce deception into responses to network scans, such as obscuring certain system characteristics. We introduce a novel game-theoretic model of deceptive interactions of this kind between a defender and a cyber attacker, which we call the Cyber Deception Game. We consider both a powerful (rational) attacker, who is aware of the defender’s exact deception strategy, and a naive attacker who is not. We show that computing the optimal deception strategy is NP-hard for both types of attackers. For the case with a powerful attacker, we provide a mixed-integer linear program solution as well as a fast and effective greedy algorithm. Similarly, we provide complexity results and propose exact and heuristic approaches when the attacker is naive. Our extensive experimental analysis demonstrates the effectiveness of our approaches.
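The core idea of masking scan responses can be sketched with a brute-force toy version. This is an illustrative reconstruction, not the paper's MILP or greedy algorithm: the host values, the "generic" mask, and the naive-attacker rule (attack the observed label with the highest average true value) are all invented for the example.

```python
# Toy sketch in the spirit of the Cyber Deception Game: pick, for each
# host, which observable label to emit so a naive attacker's expected
# payoff is minimized. Exhaustive search is fine at this size; the paper
# shows the general problem is NP-hard, motivating its MILP and greedy.
from itertools import product

HOSTS = {"web": 2.0, "db": 9.0, "mail": 4.0}   # true value to the attacker
MASKS = {"web": ["web", "generic"],            # observable responses the
         "db": ["db", "generic"],              # defender may emit per host
         "mail": ["mail", "generic"]}

def attacker_payoff(assignment):
    """Naive attacker targets the observed label with the highest average true value."""
    by_label = {}
    for host, label in assignment.items():
        by_label.setdefault(label, []).append(HOSTS[host])
    return max(sum(vals) / len(vals) for vals in by_label.values())

def best_masking():
    """Enumerate every masking and keep the one the attacker likes least."""
    best = None
    for labels in product(*(MASKS[h] for h in HOSTS)):
        assignment = dict(zip(HOSTS, labels))
        payoff = attacker_payoff(assignment)
        if best is None or payoff < best[1]:
            best = (assignment, payoff)
    return best

plan, payoff = best_masking()
print(plan, payoff)
```

In this toy instance the optimum masks every host as "generic", diluting the high-value database host among low-value neighbors rather than advertising it.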

Herbert Lin, Jaclynn Kerr.  2019.  On Cyber-Enabled Information Warfare and Information Operations. Forthcoming, Oxford Handbook of Cybersecurity. :29 pages.

The United States has no peer competitors in conventional military power. But its adversaries are increasingly turning to asymmetric methods for engaging in conflict. Much has been written about cyber warfare as a domain that offers many adversaries ways to counter the U.S. conventional military advantages, but for the most part, U.S. capabilities for prosecuting cyber warfare are as potent as those of any other nation. This paper advances the idea of cyber-enabled information warfare and influence operations (IWIO) as a form of conflict or confrontation to which the United States (and liberal democracies more generally) are particularly vulnerable and are not particularly potent compared to the adversaries who specialize in this form of conflict. IWIO is the deliberate use of information against an adversary to confuse, mislead, and perhaps to influence the choices and decisions that the adversary makes. IWIO is a hostile activity, or at least an activity that is conducted between two parties whose interests are not well-aligned, but it does not constitute warfare in the sense that international law or domestic institutions construe it. Cyber-enabled IWIO exploits modern communications technologies to obtain benefits afforded by high connectivity, low latency, high degrees of anonymity, insensitivity to distance and national borders, democratized access to publishing capabilities, and inexpensive production and consumption of information content. Some approaches to counter IWIO show some promise of having some modest but valuable defensive effect. But on the whole, there are no good solutions for large-scale countering of IWIO in free and democratic societies. Development of new tactics and responses is therefore needed.

Federico Pistono, Roman V. Yampolskiy.  2016.  Unethical Research: How to Create a Malevolent Artificial Intelligence.

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).

Alexander Kott, Norbou Buchler, Kristin E. Schaefer.  2014.  Kinetic and Cyber. Cyber Defense and Situational Awareness. 62:29–45.

Although a fairly new topic in the context of cyber security, situation awareness (SA) has a far longer history of study and applications in such areas as control of complex enterprises and in conventional warfare. Far more is known about the SA in conventional military conflicts, or adversarial engagements, than in cyber ones. By exploring what is known about SA in conventional (also commonly referred to as kinetic) battles, we may gain insights and research directions relevant to cyber conflicts. For this reason, having outlined the foundations and challenges on CSA in the previous chapter, we proceed to discuss the nature of SA in conventional (often called kinetic) conflict, review what is known about this kinetic SA (KSA), and then offer a comparison with what is currently understood regarding the cyber SA (CSA). We find that challenges and opportunities of KSA and CSA are similar or at least parallel in several important ways. With respect to similarities, in both kinetic and cyber worlds, SA strongly impacts the outcome of the mission. Also similarly, cognitive biases are found in both KSA and CSA. As an example of differences, KSA often relies on a commonly accepted, widely used organizing representation: a map of the physical terrain of the battlefield. No such common representation has emerged in CSA yet.

Edward A. Cranford, Christian Lebiere, Cleotilde Gonzalez, Sarah Cooney, Phebe Vayanos, Milind Tambe.  2018.  Learning about Cyber Deception through Simulations: Predictions of Human Decision Making with Deceptive Signals in Stackelberg Security Games. CogSci.

To improve cyber defense, researchers have developed algorithms to allocate limited defense resources optimally. Through signaling theory, we have learned that it is possible to trick the human mind when using deceptive signals. The present work is an initial step towards developing a psychological theory of cyber deception. We use simulations to investigate how humans might make decisions under various conditions of deceptive signals in cyber-attack scenarios. We created an Instance-Based Learning (IBL) model of the attacker decisions using the ACT-R cognitive architecture. We ran simulations against the optimal deceptive signaling algorithm and against four alternative deceptive signal schemes. Our results show that the optimal deceptive algorithm is more effective at reducing the probability of attack and protecting assets compared to other signaling conditions, but it is not perfect. These results shed some light on the expected effectiveness of deceptive signals for defense. The implications of these findings are discussed.
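The instance-based learning idea (choose the action whose past instances, weighted by recency, blend to the best outcome) can be sketched minimally. This is a simplification for illustration only: the activation term below is plain recency weighting rather than the ACT-R mechanism, and the payoffs, signal scheme, and parameters are invented, not taken from the paper.

```python
# Minimal IBL-style sketch of an attacker facing a deceptive "this target
# is protected" signal that is truthful only most of the time. Simplified
# and invented for illustration; not the paper's ACT-R model.
import random

DECAY = 0.5  # memory decay exponent; older instances weigh less

def play(trials=500, p_truthful=0.8, seed=1):
    rng = random.Random(seed)
    instances = []                                 # (step, action, outcome) memories

    def blended_value(action, now):
        """Recency-weighted mean outcome of past instances of this action."""
        num = den = 0.0
        for step, act, outcome in instances:
            if act == action:
                w = (now - step + 1) ** -DECAY     # recency weight
                den += w
                num += w * outcome
        return num / den if den else 1.0           # optimistic default drives exploration

    attacks = 0
    for t in range(trials):
        protected = rng.random() < p_truthful      # signal always claims "protected"
        action = max(("attack", "withdraw"), key=lambda a: blended_value(a, t))
        outcome = 0.0 if action == "withdraw" else (-5.0 if protected else 10.0)
        instances.append((t, action, outcome))
        attacks += action == "attack"
    return attacks / trials

print(f"attack rate against a mostly-truthful 'protected' signal: {play():.3f}")
```

With a mostly truthful signal the simulated attacker quickly learns to withdraw, echoing the paper's finding that well-designed signaling reduces the probability of attack without eliminating it.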

Steven R. Gomez, Vincent Mancuso, Diane Staheli.  2019.  Considerations for Human-Machine Teaming in Cybersecurity. Augmented Cognition. :153–168.

Understanding cybersecurity in an environment is uniquely challenging due to highly dynamic and potentially-adversarial activity. At the same time, the stakes are high for performance during these tasks: failures to reason about the environment and make decisions can let attacks go unnoticed or worsen the effects of attacks. Opportunities exist to address these challenges by more tightly integrating computer agents with human operators. In this paper, we consider implications for this integration during three stages that contribute to cyber analysts developing insights and conclusions about their environment: data organization and interaction, toolsmithing and analytic interaction, and human-centered assessment that leads to insights and conclusions. In each area, we discuss current challenges and opportunities for improved human-machine teaming. Finally, we present a roadmap of research goals for advanced human-machine teaming in cybersecurity operations.