Annotated Bibliography
The underlying psychological elements of social engineering attacks must be further explored by security researchers to help develop better strategies for protecting end users from such attacks. Hackers often try to evoke emotions or behavioral tendencies such as fear, obedience, greed, and helpfulness when launching social engineering attacks.
A proactive approach to security can be adopted by organizations through the use of deception technology. The application of deception technology allows organizations to reduce dwell time, quickly detect attackers, and lessen false positives. Modern deception platforms use machine learning and AI to be scalable and easy to manage.
There are three misconceptions about deception technology in regard to its value, complexity, and application. Deception technology is valuable in that it provides accurate detection of attacks. Deceptions are organized, deployed, and managed by modern deception technology through the use of machine learning. Organizations of all sizes and types can apply deception in their cybersecurity strategies.
Deception has mainly been used by attackers to deceive victims into sharing their personal information or downloading malware. However, deception has also become a key defensive tool for tricking adversaries into revealing their attack strategies and the vulnerabilities they exploit. In order for defensive cyber deception to be effective, a deception decoy fabric must be generated throughout a network.
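As a rough illustration of what generating such a fabric could involve, here is a minimal Python sketch; the subnets, service banners, and naming scheme are hypothetical, and a real deception platform would profile the production environment to produce decoys that blend in:

```python
import ipaddress
import random

# Hypothetical service banners a decoy might advertise; a real platform
# would derive these by profiling the production environment.
SERVICES = {22: "OpenSSH_8.2", 80: "Apache/2.4.41", 445: "Samba 4.11"}

def generate_decoy_fabric(subnets, decoys_per_subnet=3, seed=None):
    """Spread plausible-looking decoy hosts across every subnet so that
    lateral movement from any foothold is likely to touch a trap."""
    rng = random.Random(seed)
    fabric = []
    for cidr in subnets:
        hosts = list(ipaddress.ip_network(cidr).hosts())
        for ip in rng.sample(hosts, min(decoys_per_subnet, len(hosts))):
            ports = rng.sample(sorted(SERVICES), k=2)
            fabric.append({
                "ip": str(ip),
                "hostname": f"srv-{rng.randrange(100, 999)}",  # blend with real naming
                "services": {p: SERVICES[p] for p in ports},
            })
    return fabric

if __name__ == "__main__":
    for decoy in generate_decoy_fabric(["10.0.1.0/28", "10.0.2.0/28"], seed=7):
        print(decoy)
```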
Deception has been a key tactic in warfare since ancient times. The growing frequency and complexity of cyberattacks have created the potential for cyber warfare. Deception has become an important tactic in cyber defense, as it allows security teams to learn more about the techniques and tools used by attackers as well as the weaknesses of organizations’ defense approaches.
The purpose of using deception technology in cybersecurity is to use traps or decoys to misdirect or lure attackers away from valuable technology assets once they have successfully infiltrated a network. Deception technology can also be used to learn more about the motives and tactics of attackers. Several components are required for deception to perform effectively.
In the realm of cybersecurity, the fact that hackers are human is often forgotten, so it is important to examine the biases and behavior of attackers. Kelly Shortridge, detection project manager at BAE Systems Applied Intelligence, has highlighted five key attacker biases, including the avoidance of hard targets, a preference for repeatable or repackageable attacks, and risk aversion, among others. Shortridge also identifies ways in which defenders can leverage these biases.
A report published by Forcepoint, titled Thinking About Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science, highlights availability bias as one of the biases held by security and business teams. Availability bias occurs when a person allows the frequency with which they encounter information to influence their decisions. For example, if headlines about nation-state attacks become more common, such attacks may take on outsized priority for decision-makers developing and funding cybersecurity solutions.
George V. Hulme at CSO Online highlighted nine cognitive biases often held by security professionals that may be affecting the success of security programs. These biases include the availability heuristic, confirmation bias, information bias, the ostrich effect, and more. It is important to reduce these biases as they could lead to inaccurate judgments in the defense against cyberattacks.
Organizations are encouraged to embrace deception technology in order to safely study cyber adversaries. The use of deception technology could allow security teams to further understand the motives of attackers and improve upon their defense methods. This technology could also reduce dwell time, which is the amount of time attackers go undetected in a system or the time it takes for an organization to become aware of an incident.
Implicit biases held by security professionals could lead to the misinterpretation of critical data and poor decision-making, leaving organizations vulnerable to attack. Biases including aggregate bias, confirmation bias, and anchoring bias, among others, can also affect cybersecurity policies and procedures. Organizations are encouraged to develop a structured decision-making plan for security professionals at both the security operations and executive levels in order to mitigate these biases.
Psychologists, economists, and human-factors specialists, in addition to computer scientists, need to work on improving cybersecurity as the frequency and sophistication of cyberattacks grow. Cybersecurity professionals call for the exploration of behavioral science and economics in regard to both cybercriminals and victims. The discovery of weaknesses in user behavior could lead to the discovery of analogous vulnerabilities among cybercriminals.
Deception technology involves the generation of traps or deception decoys. The use of deception technology can help fool hackers into thinking that they have gained access to assets such as workstations, servers, and applications in a real environment. Security teams can observe and monitor the operations, navigation, and tools of the hackers without the concern that any damage will occur to real assets. Deception technology makes it possible to detect breaches early and reduce false positives, among other benefits.
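The false-positive reduction follows from a simple property: legitimate users have no reason to touch a decoy, so any interaction with one is a high-fidelity alert. Below is a minimal sketch of this idea in Python; the port, banner, and log format are illustrative assumptions rather than any particular product's behavior:

```python
import socket
from datetime import datetime, timezone

def run_decoy_listener(host="0.0.0.0", port=2222,
                       banner=b"SSH-2.0-OpenSSH_8.2\r\n"):
    """Minimal decoy service: since no legitimate user should ever connect,
    every accepted connection is treated as a high-confidence alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, (peer_ip, peer_port) = srv.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"[ALERT {stamp}] decoy touched by {peer_ip}:{peer_port}")
        try:
            conn.sendall(banner)       # keep the attacker engaged briefly
            data = conn.recv(1024)     # capture their first bytes for analysis
            if data:
                print(f"  observed: {data[:80]!r}")
        finally:
            conn.close()

if __name__ == "__main__":
    run_decoy_listener()  # blocks; stop with Ctrl-C
```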
Cyber researchers at Sandia National Laboratories are applying deceptive strategies to defend systems against hackers. These strategies are delivered through a recently patented alternative reality named HADES (High-fidelity Adaptive Deception & Emulation System). Instead of obstructing or removing a hacker upon infiltration of a system, HADES leads them into a simulated reality that presents cloned virtual hard drives, data sets, and memory that have been inconspicuously altered. The goal is to introduce doubt into adversaries' minds.
Cognitive biases are systematic errors in thinking. Such biases pose a significant threat to enterprise security in that they increase the success of social engineering attacks, in which users are tricked into exposing sensitive information that attackers can use to infiltrate protected systems. Different types of bias, including anchoring bias, the availability heuristic, and the Dunning-Kruger effect, can also affect responses to cyber incidents. Understanding these biases is essential to reducing human error.
Deception is a tactic that defenders can use against cyber adversaries. Deception technology goes beyond the honeypot concept in that it can actively lure and bait attackers into an environment where deception is applied. Organizations can use deception technology to reduce false positives and trigger early threat-hunting operations, among other benefits.
The growing complexity and frequency of cyberattacks call for advanced methods to enhance the detection and prevention of such attacks. Deception is a cyber defense technique that is drawing more attention from organizations. This technique could be used to detect, deceive, and lure attackers away from sensitive data upon infiltration into a system. It is important to look at the most common features of distributed deception platforms, such as high-interaction deception and adaptive deception.
Deception has always been a key strategy in war, politics, and commerce, and the technique is now being employed in the cybersecurity arena. Cybercriminals have applied it through the development and launch of cyberattacks such as phishing. Deception technology is emerging as a security defense method for enterprises. Implementing this technology could help lure hackers away from sensitive assets once they have successfully infiltrated an organization's network.
Raef Meeuwisse, CISM, CISA, ISACA expert speaker, and author of Cybersecurity for Beginners, has explored the different ways in which the human mind can be hacked as well as the effectiveness of these techniques. One of the techniques involves the manipulation of cognitive biases. Meeuwisse also examined how cybersecurity techniques could be used to analyze and defend against tactics used to hack the human mind.
Computer scientists at Binghamton University are working to increase the effectiveness of cyber deception tools against malicious hackers. Cyber deception is a security defense method that can be used to detect, deceive, and lure attackers away from sensitive data once they have infiltrated a system. The researchers aim to improve the consistency of deception, with the goal of reducing the use of ‘bad lies’ in cyber deception.
An increasingly important tool for securing computer networks is the use of deceptive decoy objects (e.g., fake hosts, accounts, or files) to detect, confuse, and distract attackers. One of the well-known challenges in using decoys is that it can be difficult to design effective decoys that are hard to distinguish from real objects, especially against sophisticated attackers who may be aware of the use of decoys. A key issue is that both real and decoy objects may have observable features that may give the attacker the ability to distinguish one from the other. However, a defender deploying decoys may be able to modify some features of either the real or decoy objects (at some cost), making the decoys more effective. We present a game-theoretic model of two-sided deception that models this scenario. We present an empirical analysis of this model to show strategies for effectively concealing decoys, as well as some limitations of decoys for cyber security.
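A toy numerical instance may help make the two-sided aspect concrete: the defender can flip an observable feature on real objects as well as on decoys, at a cost, and a strategy-aware attacker then attacks wherever real objects appear most likely. The object counts, payoffs, and flip cost below are invented for illustration and are not taken from the paper:

```python
from itertools import product

# Each object has one binary observable feature: real objects naturally
# show 1, decoys naturally show 0. The defender may flip any object's
# feature at a cost; a sophisticated attacker who knows the defender's
# strategy then attacks where real objects appear most concentrated.
REAL, DECOY = "real", "decoy"
OBJECTS = [REAL, REAL, DECOY, DECOY]
NATURAL = {REAL: 1, DECOY: 0}
FLIP_COST = 0.3    # cost of disguising one object's feature (assumed)
HIT_REAL = -1.0    # defender loss if a real object is compromised
HIT_DECOY = 1.0    # defender gain if the attacker lands in a decoy

def defender_payoff(flips):
    shown = [NATURAL[o] ^ f for o, f in zip(OBJECTS, flips)]
    # For each observable value, the attacker computes the fraction of
    # objects showing it that are real, then attacks a uniformly random
    # object with the most promising value.
    best_frac, best_idx = -1.0, None
    for v in (0, 1):
        idx = [i for i, s in enumerate(shown) if s == v]
        if idx:
            frac = sum(OBJECTS[i] == REAL for i in idx) / len(idx)
            if frac > best_frac:
                best_frac, best_idx = frac, idx
    hit = sum(HIT_REAL if OBJECTS[i] == REAL else HIT_DECOY
              for i in best_idx) / len(best_idx)
    return hit - FLIP_COST * sum(flips)

# Enumerate every pure defender strategy and report the best.
best = max(product((0, 1), repeat=len(OBJECTS)), key=defender_payoff)
print("best flips:", best, "payoff:", round(defender_payoff(best), 2))
```

In this instance the attacker's full knowledge of the strategy means the best the defender can do is pay to make real objects and decoys indistinguishable, which echoes the limitations the authors note.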
An important way cyber adversaries find vulnerabilities in modern networks is through reconnaissance, in which they attempt to identify configuration specifics of network hosts. To increase uncertainty of adversarial reconnaissance, the network administrator (henceforth, defender) can introduce deception into responses to network scans, such as obscuring certain system characteristics. We introduce a novel game-theoretic model of deceptive interactions of this kind between a defender and a cyber attacker, which we call the Cyber Deception Game. We consider both a powerful (rational) attacker, who is aware of the defender’s exact deception strategy, and a naive attacker who is not. We show that computing the optimal deception strategy is NP-hard for both types of attackers. For the case with a powerful attacker, we provide a mixed-integer linear program solution as well as a fast and effective greedy algorithm. Similarly, we provide complexity results and propose exact and heuristic approaches when the attacker is naive. Our extensive experimental analysis demonstrates the effectiveness of our approaches.
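To give a flavor of the greedy approach, here is a toy Python sketch that chooses which hosts to disguise in scan responses under a masking budget; the host names, perceived values, and costs are invented assumptions, and the paper's actual game model and algorithm are considerably richer:

```python
# True configuration of each host (names are hypothetical).
HOSTS = {
    "web-dmz": "apache-2.4",
    "db-prod": "mysql-5.7",
    "dc-01":   "win-ad",
    "dev-box": "ubuntu-dev",
}
# Observable configurations the defender can present in scan responses:
# (perceived value to a naive attacker, cost of presenting the mask).
MASKS = {
    "apache-2.4": (2.0, 0.0),   # showing the true config costs nothing
    "mysql-5.7":  (9.0, 0.0),
    "win-ad":     (10.0, 0.0),
    "ubuntu-dev": (1.0, 0.0),
    "printer":    (0.5, 3.0),   # pretending to be a printer takes effort
    "kiosk":      (0.8, 2.0),
}
BUDGET = 5.0

def greedy_masking(hosts, masks, budget):
    """Greedily disguise the hosts where masking buys the largest drop in
    perceived value per unit cost, until the budget is exhausted."""
    plan = dict(hosts)                      # start by answering truthfully
    candidates = []
    for h, true_cfg in hosts.items():
        true_val = masks[true_cfg][0]
        for m, (val, cost) in masks.items():
            if cost > 0 and val < true_val:
                candidates.append(((true_val - val) / cost, cost, h, m))
    for _, cost, h, m in sorted(candidates, reverse=True):
        if cost <= budget and plan[h] == hosts[h]:   # mask each host once
            plan[h], budget = m, budget - cost
    return plan

print(greedy_masking(HOSTS, MASKS, BUDGET))
# With these numbers the high-value hosts (dc-01, db-prod) get masked
# as kiosks and the budget runs out before the low-value hosts.
```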
The United States has no peer competitors in conventional military power. But its adversaries are increasingly turning to asymmetric methods for engaging in conflict. Much has been written about cyber warfare as a domain that offers many adversaries ways to counter the U.S. conventional military advantages, but for the most part, U.S. capabilities for prosecuting cyber warfare are as potent as those of any other nation. This paper advances the idea of cyber-enabled information warfare and influence operations (IWIO) as a form of conflict or confrontation to which the United States (and liberal democracies more generally) are particularly vulnerable and are not particularly potent compared to the adversaries who specialize in this form of conflict. IWIO is the deliberate use of information against an adversary to confuse, mislead, and perhaps to influence the choices and decisions that the adversary makes. IWIO is a hostile activity, or at least an activity that is conducted between two parties whose interests are not well-aligned, but it does not constitute warfare in the sense that international law or domestic institutions construe it. Cyber-enabled IWIO exploits modern communications technologies to obtain benefits afforded by high connectivity, low latency, high degrees of anonymity, insensitivity to distance and national borders, democratized access to publishing capabilities, and inexpensive production and consumption of information content. Some approaches to counter IWIO show some promise of having some modest but valuable defensive effect. But on the whole, there are no good solutions for large-scale countering of IWIO in free and democratic societies. Development of new tactics and responses is therefore needed.
Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).
Although a fairly new topic in the context of cyber security, situation awareness (SA) has a far longer history of study and applications in such areas as control of complex enterprises and in conventional warfare. Far more is known about SA in conventional military conflicts, or adversarial engagements, than in cyber ones. By exploring what is known about SA in conventional (also commonly referred to as kinetic) battles, we may gain insights and research directions relevant to cyber conflicts. For this reason, having outlined the foundations and challenges of CSA in the previous chapter, we proceed to discuss the nature of SA in conventional (often called kinetic) conflict, review what is known about this kinetic SA (KSA), and then offer a comparison with what is currently understood regarding cyber SA (CSA). We find that the challenges and opportunities of KSA and CSA are similar, or at least parallel, in several important ways. With respect to similarities, in both the kinetic and cyber worlds, SA strongly impacts the outcome of the mission. Also similarly, cognitive biases are found in both KSA and CSA. As an example of differences, KSA often relies on a commonly accepted, widely used organizing representation: the map of the physical terrain of the battlefield. No such common representation has yet emerged in CSA.