
Filters: Keyword is deception
2023-01-13
Cabral, Warren Z., Sikos, Leslie F., Valli, Craig.  2022.  Shodan Indicators Used to Detect Standard Conpot Implementations and Their Improvement Through Sophisticated Customization. 2022 IEEE Conference on Dependable and Secure Computing (DSC). :1–7.
Conpot is a low-interaction SCADA honeypot system that mimics a Siemens S7-200 proprietary device in default deployments. Honeypots operating with standard configurations can be easily detected by adversaries using scanning tools such as Shodan. This study focuses on the capabilities of the Conpot honeypot and how these capabilities can be used to lure attackers. In addition, the presented research establishes a framework that enables customized configuration, enhancing the honeypot's functionality to achieve a high degree of deceptiveness and realism when presented to Shodan scanners. A comparison between the default and customized deployments is then conducted to demonstrate the modified deployments' effectiveness. The resulting annotations can help cybersecurity personnel better understand the effectiveness of the honeypot's artifacts and how they can be used deceptively. Lastly, it informs and educates cybersecurity audiences on the importance of deploying honeypots with advanced deceptive configurations to bait cybercriminals.
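The indicator matching a Shodan-style scanner applies to a default Conpot deployment can be sketched roughly as follows. The indicator strings are illustrative examples of commonly cited Conpot template defaults, not a definitive list; a real scanner would use a curated and regularly updated set.

```python
# Sketch: flagging a default Conpot deployment from a captured service
# banner. The strings below are commonly cited defaults from Conpot's
# S7-200 template (illustrative, not exhaustive).

DEFAULT_CONPOT_INDICATORS = [
    "Technodrome",   # default system name in the S7-200 template
    "Mouse Factory", # default plant identification
    "88111222",      # default serial number
]

def looks_like_default_conpot(banner: str) -> bool:
    """Return True if the banner contains any known default artifact."""
    return any(indicator in banner for indicator in DEFAULT_CONPOT_INDICATORS)

banner = "Module: Siemens, SIMATIC, S7-200  PLC name: Technodrome  Serial: 88111222"
print(looks_like_default_conpot(banner))  # prints True
```

A customized deployment, as the paper proposes, would replace these template values with plausible site-specific ones so that such checks fail.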
2022-09-29
Ferguson-Walter, Kimberly J., Gutzwiller, Robert S., Scott, Dakota D., Johnson, Craig J..  2021.  Oppositional Human Factors in Cybersecurity: A Preliminary Analysis of Affective States. 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW). :153–158.
The need for cyber defense research is growing as more cyber-attacks are directed at critical infrastructure and other sensitive networks. Traditionally, the focus has been on hardening system defenses. However, other techniques are being explored including cyber and psychological deception which aim to negatively impact the cognitive and emotional state of cyber attackers directly through the manipulation of network characteristics. In this study, we present a preliminary analysis of survey data collected following a controlled experiment in which over 130 professional red teamers participated in a network penetration task that included cyber deception and psychological deception manipulations [7]. Thematic and inductive analysis of previously un-analyzed open-ended survey responses revealed factors associated with affective states. These preliminary results are a first step in our analysis efforts and show that there are potentially several distinct dimensions of cyber-behavior that induce negative affective states in cyber attackers, which may serve as potential avenues for supplementing traditional cyber defense strategies.
2022-06-09
Obaidat, Muath, Brown, Joseph, Alnusair, Awny.  2021.  Blind Attack Flaws in Adaptive Honeypot Strategies. 2021 IEEE World AI IoT Congress (AIIoT). :0491–0496.
Adaptive honeypots are being widely proposed as a more powerful alternative to the traditional honeypot model. Just as with typical honeypots, however, one of the most important concerns of an adaptive honeypot is environment deception, to ensure an adversary cannot fingerprint the honeypot. The threat of fingerprinting hints at a greater underlying concern: honeypots are only effective because an adversary does not know that the environment on which they are operating is a honeypot. What has not been widely discussed in the context of adaptive honeypots is that they have an inherently increased susceptibility to this threat. Honeypots not only bear increased risks when an adversary knows they are a honeypot rather than a native system; they are only effective as adaptable entities if one does not know that the honeypot environment is adaptive as well. Thus, if adaptive honeypots become commonplace, or if attackers even suspect that an adaptive honeypot may exist on a given network, a new attack could develop: a "blind confusion attack", a form of connection that simply assumes all environments are adaptive honeypots and, instead of attempting a malicious strike on a given entity, performs non-malicious behavior in specified and/or random patterns to confuse an adaptive network's learning.
2021-08-12
Weissman, David.  2020.  IoT Security Using Deception – Measuring Improved Risk Posture. 2020 IEEE 6th World Forum on Internet of Things (WF-IoT). :1–2.
Deception technology is a useful approach to improve the security posture of IoT systems. The deployment of replication techniques as a deception tactic is presented with a summary of our research progress towards quantifying the defensive improvement as part of overall risk management considerations.
2021-03-29
Das, T., Eldosouky, A. R., Sengupta, S..  2020.  Think Smart, Play Dumb: Analyzing Deception in Hardware Trojan Detection Using Game Theory. 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.
In recent years, integrated circuits (ICs) have become significant for various industries and their security has been given greater priority, specifically in the supply chain. Budgetary constraints have compelled IC designers to offshore manufacturing to third-party companies. When the designer gets the manufactured ICs back, it is imperative to test for potential threats like hardware trojans (HT). In this paper, a novel multi-level game-theoretic framework is introduced to analyze the interactions between a malicious IC manufacturer and the tester. In particular, the game is formulated as a non-cooperative, zero-sum, repeated game using prospect theory (PT) that captures different players' rationalities under uncertainty. The repeated game is separated into a learning stage, in which the defender learns about the attacker's tendencies, and an actual game stage, where this learning is used. Experiments show great incentive for the attacker to deceive the defender about their actual rationality by "playing dumb" in the learning stage (deception). This scenario is captured using hypergame theory to model the attacker's view of the game. The optimal deception rationality of the attacker is analytically derived to maximize utility gain. For the defender, a first-step deception mitigation process is proposed to thwart the effects of deception. Simulation results show that the attacker can profit from the deception as it can successfully insert HTs in the manufactured ICs without being detected.
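The effect of prospect-theoretic probability weighting on the attacker's insertion decision can be illustrated with a toy calculation. The payoffs, detection probability, and weighting parameter below are invented for illustration and are not taken from the paper; they merely show why an attacker has an incentive to misrepresent its rationality parameter during the learning stage.

```python
# Toy sketch of a zero-sum tester/attacker interaction in which prospect
# theory (PT) distorts how the attacker weighs the detection probability.
# All numbers are illustrative assumptions, not the paper's model.

def pt_weight(p: float, gamma: float) -> float:
    """Tversky-Kahneman probability weighting function; gamma=1 is fully rational."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def attacker_utility(insert: bool, p_detect: float, gamma: float) -> float:
    # Zero-sum stakes: +1 for an undetected trojan, -1 if caught, 0 if none inserted.
    if not insert:
        return 0.0
    w = pt_weight(p_detect, gamma)
    return (1 - w) * 1.0 + w * (-1.0)

p_detect = 0.6  # assumed detection probability under the defender's tests
rational = attacker_utility(True, p_detect, gamma=1.0)   # w(p) = p
distorted = attacker_utility(True, p_detect, gamma=0.5)  # PT-distorted weighting
print(rational, distorted)  # prints -0.2 and a positive value
```

With these numbers, a fully rational attacker would not insert a trojan (negative expected utility), while a PT-distorted attacker would, which is why a defender who learns the wrong rationality parameter in the learning stage can be exploited.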
2020-09-21
Kovach, Nicholas S., Lamont, Gary B..  2019.  Trust and Deception in Hypergame Theory. 2019 IEEE National Aerospace and Electronics Conference (NAECON). :262–268.
Hypergame theory has been used to model advantages in decision making. This research provides a formal representation of deception to further extend the hypergame model. In order to extend the model, we propose a hypergame-theoretic framework based on temporal logic to model decision making under the potential for trust and deception. Using the temporal hypergame model, the concept of trust is defined within the constraints of the model. With a formal definition of trust in hypergame theory, the concepts of distrust, mistrust, misperception, and deception are then constructed. These formal definitions are then applied to an Attacker-Defender hypergame to show how deception within the game can be formally modeled, and the resulting model is presented. This demonstrates how hypergame theory can be used to formally model trust, mistrust, misperception, and deception.
2020-07-13
Mahmood, Shah.  2019.  The Anti-Data-Mining (ADM) Framework - Better Privacy on Online Social Networks and Beyond. 2019 IEEE International Conference on Big Data (Big Data). :5780–5788.
The unprecedented and enormous growth of cloud computing, especially online social networks, has resulted in numerous incidents of the loss of users' privacy. In this paper, we provide a framework, based on our anti-data-mining (ADM) principle, to enhance users' privacy against adversaries including: online social networks; search engines; financial terminal providers; ad networks; eavesdropping governments; and other parties who can monitor users' content from the point where the content leaves users' computers to within the data centers of these information accumulators. To achieve this goal, our framework proactively uses the principles of suppression of sensitive data and disinformation. Moreover, we use social bots in a novel way for enhanced privacy and provide users with plausible deniability for their photos, audio, and video content uploaded online.
2020-01-27
Gröndahl, Tommi, Asokan, N..  2019.  Text Analysis in Adversarial Settings: Does Deception Leave a Stylistic Trace? ACM Computing Surveys (CSUR). 52:45:1-45:36.

Textual deception constitutes a major problem for online security. Many studies have argued that deceptiveness leaves traces in writing style, which could be detected using text classification techniques. By conducting an extensive literature review of existing empirical work, we demonstrate that while certain linguistic features have been indicative of deception in certain corpora, they fail to generalize across divergent semantic domains. We suggest that deceptiveness as such leaves no content-invariant stylistic trace, and textual similarity measures provide a superior means of classifying texts as potentially deceptive. Additionally, we discuss forms of deception beyond semantic content, focusing on hiding author identity by writing style obfuscation. Surveying the literature on both author identification and obfuscation techniques, we conclude that current style transformation methods fail to achieve reliable obfuscation while simultaneously ensuring semantic faithfulness to the original text. We propose that future work in style transformation should pay particular attention to disallowing semantically drastic changes.

2019-09-09
Fraunholz, Daniel, Krohmer, Daniel, Duque Anton, Simon, Schotten, Hans Dieter.  2018.  Catch Me If You Can: Dynamic Concealment of Network Entities. Proceedings of the 5th ACM Workshop on Moving Target Defense. :31–39.
In this paper, a framework for Moving Target Defense is introduced. This framework rests on three pillars: network address mutation, communication stack randomization and the dynamic deployment of decoys. The network address mutation is based on the concept of domain generation algorithms, with different features included to fulfill the system requirements: time dependency, unpredictability and determinism. Communication stack randomization is applied additionally to increase the complexity of reconnaissance activity. With communication stack randomization, previously fingerprinted systems differ not only in their network address but also in their communication pattern behavior. Finally, decoys are integrated into the proposed framework to detect attackers that have breached the perimeter; interacting with the decoy systems can also bind an attacker's resources. Additionally, the framework can be extended with more advanced Moving Target Defense methods, such as obscuring the port numbers of services.
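The first pillar, DGA-style address mutation satisfying time dependency, determinism and unpredictability, can be sketched with a keyed hash over a time epoch. The key, subnet, and epoch length below are assumptions for illustration; the paper's actual generation algorithm may differ.

```python
import hmac
import hashlib
import ipaddress
import time

# Sketch of time-dependent, deterministic, outsider-unpredictable address
# mutation. SECRET_KEY, SUBNET and EPOCH_SECONDS are illustrative values.

SECRET_KEY = b"shared-mtd-key"                 # known only to legitimate peers
SUBNET = ipaddress.ip_network("10.0.0.0/16")   # assumed mutation range
EPOCH_SECONDS = 300                            # address rotates every 5 minutes

def current_address(timestamp=None):
    """Derive this epoch's address; peers holding the key compute the same one."""
    if timestamp is None:
        timestamp = time.time()
    epoch = int(timestamp // EPOCH_SECONDS)    # time dependency
    digest = hmac.new(SECRET_KEY, str(epoch).encode(), hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % SUBNET.num_addresses
    return SUBNET[offset]                      # deterministic for key holders

print(current_address(0) == current_address(299))  # same epoch: prints True
print(current_address(0), current_address(300))    # consecutive epochs: almost surely differ
```

Without the key, an outside observer cannot predict the next address, while both endpoints derive the same address for any given epoch.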
2019-08-26
Araujo, F., Taylor, T., Zhang, J., Stoecklin, M..  2018.  Cross-Stack Threat Sensing for Cyber Security and Resilience. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :18-21.

We propose a novel cross-stack sensor framework for realizing lightweight, context-aware, high-interaction network and endpoint deceptions for attacker disinformation, misdirection, monitoring, and analysis. In contrast to perimeter-based honeypots, the proposed method arms production workloads with deceptive attack-response capabilities via injection of booby-traps at the network, endpoint, operating system, and application layers. This provides defenders with new, potent tools for more effectively harvesting rich cyber-threat data from the myriad of attacks launched by adversaries whose identities and methodologies can be better discerned through direct engagement rather than purely passive observations of probe attempts. Our research provides new tactical deception capabilities for cyber operations, including new visibility into both enterprise and national interest networks, while equipping applications and endpoints with attack awareness and active mitigation capabilities.

2019-07-01
Urias, V. E., Stout, William M. S., Leeuwen, B. V..  2018.  On the Feasibility of Generating Deception Environments for Industrial Control Systems. 2018 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.

The cyber threat landscape is a constantly morphing surface; the need for cyber defenders to develop and create proactive threat intelligence is on the rise, especially on critical infrastructure environments. It is commonly voiced that Supervisory Control and Data Acquisition (SCADA) systems and Industrial Control Systems (ICS) are vulnerable to the same classes of threats as other networked computer systems. However, cyber defense in operational ICS is difficult, often introducing unacceptable risks of disruption to critical physical processes. This is exacerbated by the notion that hardware used in ICS is often expensive, making full-scale mock-up systems for testing and/or cyber defense impractical. New paradigms in cyber security have focused heavily on using deception to not only protect assets, but also gather insight into adversary motives and tools. Much of the work that we see in today's literature is focused on creating deception environments for traditional IT enterprise networks; however, leveraging our prior work in the domain, we explore the opportunities, challenges and feasibility of doing deception in ICS networks.

2019-04-05
Tsinganos, Nikolaos, Sakellariou, Georgios, Fouliras, Panagiotis, Mavridis, Ioannis.  2018.  Towards an Automated Recognition System for Chat-Based Social Engineering Attacks in Enterprise Environments. Proceedings of the 13th International Conference on Availability, Reliability and Security. :53:1-53:10.

The increase in usage of electronic communication tools (email, IM, Skype, etc.) in enterprise environments has created new attack vectors for social engineers. Billions of people now use electronic equipment in their everyday workflow, which means billions of potential victims of Social Engineering (SE) attacks. Humans are considered the weakest link in the cybersecurity chain, and breaking this defense is nowadays the most accessible route for malicious internal and external users. While several methods of protection have already been proposed and applied, none of them focuses on chat-based SE attacks, and automation in the field is still missing. Social engineering is a complex phenomenon that requires interdisciplinary research combining technology, psychology, and linguistics. Attackers treat human personality traits as vulnerabilities and use language as their weapon to deceive, persuade and finally manipulate victims as they wish. Hence, a holistic approach is required to build a reliable SE attack recognition system. In this paper we present the current state of the art in SE attack recognition systems, dissect a SE attack to recognize its different stages, forms, and attributes, and isolate the critical enablers that can influence a SE attack. Finally, we present our approach for an automated recognition system for chat-based SE attacks that is based on Personality Recognition, Influence Recognition, Deception Recognition, Speech Act and Chat History.

2019-02-22
Anderson, Ross.  2018.  Covert and Deniable Communications. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :1-1.

At the first Information Hiding Workshop in 1996 we tried to clarify the models and assumptions behind information hiding. We agreed the terminology of cover text and stego text against a background of the game proposed by our keynote speaker Gus Simmons: that Alice and Bob are in jail and wish to hatch an escape plan without the fact of their communication coming to the attention of the warden, Willie. Since then there have been significant strides in developing technical mechanisms for steganography and steganalysis, with new techniques from machine learning providing ever more powerful tools for the analyst, such as the ensemble classifier. There have also been a number of conceptual advances, such as the square root law and effective key length. But there always remains the question whether we are using the right security metrics for the application. In this talk I plan to take a step backwards and look at the systems context. When can stegosystems actually be used? The deployment history is patchy, with one example being TrueCrypt's hidden volumes, inspired by the steganographic file system. Image forensics also finds some use, and may be helpful against some adversarial machine learning attacks (or at least help us understand them). But there are other contexts in which patterns of activity have to be hidden for that activity to be effective. I will discuss a number of examples starting with deception mechanisms such as honeypots, Tor bridges and pluggable transports, which merely have to evade detection for a while; then moving on to the more challenging task of designing deniability mechanisms, from leaking secrets to a newspaper through bitcoin mixes, which have to withstand forensic examination once the participants come under suspicion. We already know that, at the system level, anonymity is hard. However the increasing quantity and richness of the data available to opponents may move a number of applications from the deception category to that of deniability.
To pick up on our model of 20 years ago, Willie might not just put Alice and Bob in solitary confinement if he finds them communicating, but torture them or even execute them. Changing threat models are historically one of the great disruptive forces in security engineering. This leads me to suspect that a useful research area may be the intersection of deception and forensics, and how information hiding systems can be designed in anticipation of richer and more complex threat models. The ever-more-aggressive censorship systems deployed in some parts of the world also raise the possibility of using information hiding techniques in censorship circumvention. As an example of recent practical work, I will discuss Covertmark, a toolkit for testing pluggable transports that was partly inspired by Stirmark, a tool we presented at the second Information Hiding Workshop twenty years ago.

2019-02-13
Fraunholz, Daniel, Reti, Daniel, Duque Anton, Simon, Schotten, Hans Dieter.  2018.  Cloxy: A Context-aware Deception-as-a-Service Reverse Proxy for Web Services. Proceedings of the 5th ACM Workshop on Moving Target Defense. :40–47.

Legacy software, outdated applications and fast-changing technologies pose a serious threat to information security. Several domains, such as long-life industrial control systems and Internet of Things devices, suffer from it. In many cases, system updates and new acquisitions are not an option. In this paper, a framework that combines a reverse proxy with various deception-based defense mechanisms is presented. It is designed to autonomously provide deception methods to web applications. Context-awareness and minimal configuration overhead make it well suited to work as a service. The framework is built modularly to provide flexibility and adaptability to the application use case. It is evaluated with common web-based applications, such as content management systems, and several frequent attack vectors against them. Furthermore, the security and performance implications of the additional security layer are quantified and discussed. It is found that, given a sound implementation, no further attack vectors are introduced to the web application. The prototypical framework increases the delay of communication with the underlying web application; this delay is within tolerable boundaries and can be further reduced by a more efficient implementation.
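The idea of layering deception in front of an unmodified web application can be sketched as WSGI middleware. The forged server banner and the decoy path below are invented examples in the spirit of the paper, not Cloxy's actual mechanisms.

```python
# Hypothetical sketch of a deception layer placed in front of an
# unmodified web application, in the spirit of a deception-as-a-service
# reverse proxy. Decoy path and forged banner are invented examples.

def deception_middleware(app):
    """Wrap a WSGI app with two deceptions: a forged Server header and a
    decoy admin path that never reaches the real application."""
    def wrapped(environ, start_response):
        if environ.get("PATH_INFO", "") == "/phpmyadmin":
            # Decoy endpoint: serve a fake page instead of a 404, so probes
            # for common admin panels can be observed and logged.
            start_response("200 OK", [("Content-Type", "text/html")])
            return [b"<html><head><title>phpMyAdmin</title></head></html>"]

        def deceptive_start(status, headers, exc_info=None):
            # Replace the application's real server banner with a misleading one.
            headers = [h for h in headers if h[0].lower() != "server"]
            headers.append(("Server", "Apache/2.2.3 (CentOS)"))
            return start_response(status, headers, exc_info)

        return app(environ, deceptive_start)
    return wrapped
```

Because the deception lives entirely in the proxy layer, the underlying application needs no changes, which matches the paper's motivation for legacy systems that cannot be updated.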

2018-11-19
Pal, Partha, Soule, Nathaniel, Lageman, Nate, Clark, Shane S., Carvalho, Marco, Granados, Adrian, Alves, Anthony.  2017.  Adaptive Resource Management Enabling Deception (ARMED). Proceedings of the 12th International Conference on Availability, Reliability and Security. :52:1–52:8.
Distributed Denial of Service (DDoS) attacks routinely disrupt access to critical services. Mitigation of these attacks often relies on planned over-provisioning or elastic provisioning of resources, and third-party monitoring, analysis, and scrubbing of network traffic. While volumetric attacks which saturate a victim's network are most common, non-volumetric, low and slow, DDoS attacks can achieve their goals without requiring high traffic volume by targeting vulnerable network protocols or protocol implementations. Non-volumetric attacks, unlike their noisy counterparts, require more sophisticated detection mechanisms, and typically have only post-facto and targeted protocol/application mitigations. In this paper, we introduce our work under the Adaptive Resource Management Enabling Deception (ARMED) effort, which is developing a network-level approach to automatically mitigate sophisticated DDoS attacks through deception-focused adaptive maneuvering. We describe the concept, implementation, and initial evaluation of the ARMED Network Actors (ANAs) that facilitate transparent interception, sensing, analysis, and mounting of adaptive responses that can disrupt the adversary's decision process.
Kedrowitsch, Alexander, Yao, Danfeng(Daphne), Wang, Gang, Cameron, Kirk.  2017.  A First Look: Using Linux Containers for Deceptive Honeypots. Proceedings of the 2017 Workshop on Automated Decision Making for Active Cyber Defense. :15–22.

The ever-increasing sophistication of malware has made malicious binary collection and analysis an absolute necessity for proactive defenses. Meanwhile, malware authors seek to harden their binaries against analysis by incorporating environment detection techniques, in order to identify if the binary is executing within a virtual environment or in the presence of monitoring tools. For security researchers, it is still an open question regarding how to remove the artifacts from virtual machines to effectively build deceptive "honeypots" for malware collection and analysis. In this paper, we explore a completely different and yet promising approach by using Linux containers. Linux containers, in theory, have minimal virtualization artifacts and are easily deployable on low-power devices. Our work performs the first controlled experiments to compare Linux containers with bare metal and 5 major types of virtual machines. We seek to measure the deception capabilities offered by Linux containers to defeat mainstream virtual environment detection techniques. In addition, we empirically explore the potential weaknesses in Linux containers to help defenders to make more informed design decisions.
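The environment-detection checks the paper measures against can be illustrated with a few public indicators malware is known to probe. The paths below are common, publicly documented container artifacts, not the paper's actual test suite.

```python
import os

# Sketch of simple artifact checks malware uses to decide whether it is
# running in a Linux container. These are well-known public indicators,
# offered as illustration only.

def container_artifacts():
    """Return a list of artifact indicators suggesting a Linux container."""
    found = []
    if os.path.exists("/.dockerenv"):  # created by Docker inside containers
        found.append("/.dockerenv")
    try:
        with open("/proc/1/cgroup") as f:
            cgroup = f.read()
        if "docker" in cgroup or "lxc" in cgroup:  # container runtime traces
            found.append("/proc/1/cgroup mentions docker/lxc")
    except OSError:
        pass  # /proc may be unavailable on non-Linux systems
    return found

print(container_artifacts())  # empty on bare metal or a well-scrubbed container
```

A deceptive honeypot built on containers would aim to make such checks return nothing, which is the deception capability the paper's controlled experiments quantify.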

2018-01-16
Takabi, Hassan, Jafarian, J. Haadi.  2017.  Insider Threat Mitigation Using Moving Target Defense and Deception. Proceedings of the 2017 International Workshop on Managing Insider Security Threats. :93–96.

The insider threat has been the subject of extensive study, and many approaches, from technical to behavioral and psychological perspectives, have been proposed to detect or mitigate it. However, it still remains one of the most difficult security issues to combat. In this paper, we propose an ongoing effort to develop a systematic framework that addresses insider threat challenges by laying a scientific foundation for defensive deception, leveraging moving target defense (MTD), an emerging technique for providing proactive security measures, and integrating deception and MTD into attribute-based access control (ABAC).

2017-12-12
Reinerman-Jones, L., Matthews, G., Wohleber, R., Ortiz, E..  2017.  Scenarios using situation awareness in a simulation environment for eliciting insider threat behavior. 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–3.

An important topic in cybersecurity is validating Active Indicators (AI), which are stimuli that can be implemented in systems to trigger responses from individuals who might or might not be Insider Threats (ITs). The way in which a person responds to the AI is being validated for distinguishing a potential threat from a non-threat. In order to execute this validation process, it is important to create a paradigm that allows manipulation of AIs for measuring response. The scenarios are posed in a manner that requires participants to be situationally aware that they are being monitored and have to act deceptively. In particular, manipulations in the environment should produce no differences between conditions in immersion and ease of use; the narrative should be the driving force behind non-deceptive and IT responses. The success of the narrative and the simulation environment in inducing such behaviors is determined by immersion, usability, and stress-response questionnaires, and by performance. Initial results on the feasibility of using a narrative reliant upon situation awareness of monitoring and evasion are discussed.

2015-04-30
Ormrod, D..  2014.  The Coordination of Cyber and Kinetic Deception for Operational Effect: Attacking the C4ISR Interface. Military Communications Conference (MILCOM), 2014 IEEE. :117-122.

Modern military forces are enabled by networked command and control systems, which provide an important interface between the cyber environment, electronic sensors and decision makers. However, these systems are vulnerable to cyber attack. A successful cyber attack could compromise data within the system, leading to incorrect information being utilized for decisions with potentially catastrophic results on the battlefield. Degrading the utility of a system, or the trust a decision maker has in their virtual display, may not be the most effective means of employing offensive cyber effects. The coordination of cyber and kinetic effects is proposed as the optimal strategy for neutralizing an adversary's C4ISR advantage. However, such an approach carries opportunity costs and is resource intensive. The adversary's cyber dependence can be leveraged as a means of gaining tactical and operational advantage in combat, if a military force is sufficiently trained and prepared to attack the entire information network. This paper proposes a research approach intended to broaden the understanding of the relationship between command and control systems and the human decision maker, as an interface for both cyber and kinetic deception activity.