Bibliography
Textual deception constitutes a major problem for online security. Many studies have argued that deceptiveness leaves traces in writing style, which could be detected using text classification techniques. By conducting an extensive literature review of existing empirical work, we demonstrate that while certain linguistic features have been indicative of deception in particular corpora, they fail to generalize across divergent semantic domains. We suggest that deceptiveness as such leaves no content-invariant stylistic trace, and that textual similarity measures provide a superior means of classifying texts as potentially deceptive. Additionally, we discuss forms of deception beyond semantic content, focusing on hiding author identity through writing style obfuscation. Surveying the literature on both author identification and obfuscation techniques, we conclude that current style transformation methods fail to achieve reliable obfuscation while simultaneously ensuring semantic faithfulness to the original text. We propose that future work in style transformation should pay particular attention to disallowing semantically drastic changes.
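A minimal sketch of the kind of similarity-based screening this survey favors, using TF-IDF cosine similarity against a reference set of known-deceptive texts; the reference corpus, function name, and threshold here are illustrative choices, not taken from the survey itself:

```python
# Similarity-based deception screening sketch: score a candidate text by its
# closest match in a (hypothetical) reference corpus of known-deceptive texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_deceptive = [  # illustrative reference set, swapped per domain
    "This product changed my life, absolutely perfect in every way!",
    "Best hotel ever, everything was flawless, I will return every year!",
]

def deception_similarity(candidate: str) -> float:
    """Return the highest cosine similarity between the candidate
    and any known-deceptive reference text."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(known_deceptive + [candidate])
    scores = cosine_similarity(matrix[-1], matrix[:-1])
    return float(scores.max())

if __name__ == "__main__":
    print(deception_similarity("Best hotel ever, truly flawless service!"))
```

Texts scoring above a chosen threshold would be flagged for review; unlike fixed stylistic features, the reference set can be replaced per semantic domain, which is the generalization advantage the survey argues for.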
We propose a novel cross-stack sensor framework for realizing lightweight, context-aware, high-interaction network and endpoint deceptions for attacker disinformation, misdirection, monitoring, and analysis. In contrast to perimeter-based honeypots, the proposed method arms production workloads with deceptive attack-response capabilities via injection of booby traps at the network, endpoint, operating system, and application layers. This provides defenders with new, potent tools for more effectively harvesting rich cyber-threat data from the myriad of attacks launched by adversaries whose identities and methodologies can be better discerned through direct engagement rather than through purely passive observation of probe attempts. Our research provides new tactical deception capabilities for cyber operations, including new visibility into both enterprise and national interest networks, while equipping applications and endpoints with attack awareness and active mitigation capabilities.
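As one illustration of an endpoint-layer booby trap of the sort described, the sketch below watches a planted decoy credential file and alerts when anything reads it. The decoy path and polling approach are our assumptions rather than the paper's design, and the check presumes the filesystem records access times:

```python
# Endpoint booby-trap sketch: alert when a planted decoy file is read.
import os
import time

DECOY = "/home/svc/.aws/credentials.bak"  # planted decoy, never used legitimately

def watch(decoy: str, interval: float = 5.0) -> None:
    """Alert when anything reads the decoy file (its access time changes)."""
    baseline = os.stat(decoy).st_atime
    while True:
        time.sleep(interval)
        atime = os.stat(decoy).st_atime
        if atime != baseline:
            print(f"ALERT: decoy {decoy} accessed at {time.ctime(atime)}")
            baseline = atime

watch(DECOY)
```

Because no legitimate workflow touches the decoy, any access is high-confidence evidence of reconnaissance, which is what makes such traps suitable for injection into production workloads.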
The cyber threat landscape is a constantly morphing surface, and the need for cyber defenders to develop and create proactive threat intelligence is on the rise, especially in critical infrastructure environments. It is commonly voiced that Supervisory Control and Data Acquisition (SCADA) systems and Industrial Control Systems (ICS) are vulnerable to the same classes of threats as other networked computer systems. However, cyber defense in operational ICS is difficult, often introducing unacceptable risks of disruption to critical physical processes. This is exacerbated by the fact that hardware used in ICS is often expensive, making full-scale mock-up systems for testing and/or cyber defense impractical. New paradigms in cyber security have focused heavily on using deception not only to protect assets, but also to gather insight into adversary motives and tools. Much of the work in today's literature is focused on creating deception environments for traditional IT enterprise networks; however, leveraging our prior work in the domain, we explore the opportunities, challenges, and feasibility of deploying deception in ICS networks.
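To make the feasibility question concrete: a low-interaction ICS decoy can be as simple as a listener on the Modbus/TCP port that records probes without touching real process hardware. This sketch is our illustration, not the authors' system:

```python
# Low-interaction ICS decoy sketch: log connection attempts on Modbus/TCP.
import socket

def modbus_decoy(host: str = "0.0.0.0", port: int = 502) -> None:
    # Binding to the well-known Modbus port 502 usually requires privileges.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(260)  # max Modbus/TCP ADU is 260 bytes
                # Threat-intel telemetry: who probed, and with what request.
                print(f"probe from {addr[0]}: {data[:12].hex()}")
                # A fuller decoy would parse the MBAP header and reply with
                # plausible register values to sustain attacker engagement.

modbus_decoy()
```

Since the decoy never touches a live controller, it sidesteps the disruption risk that makes active defense in operational ICS so difficult.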
The increasing use of electronic communication tools (email, IM, Skype, etc.) in enterprise environments has created new attack vectors for social engineers. Billions of people now use electronic equipment in their everyday workflow, which means billions of potential victims of Social Engineering (SE) attacks. Humans are considered the weakest link in the cybersecurity chain, and breaking this defense is now the most accessible route for malicious internal and external users. While several methods of protection have already been proposed and applied, none of them focuses on chat-based SE attacks, and automation in the field is still missing. Social engineering is a complex phenomenon that requires interdisciplinary research combining technology, psychology, and linguistics. Attackers treat human personality traits as vulnerabilities and use language as their weapon to deceive, persuade, and ultimately manipulate victims as they wish. Hence, a holistic approach is required to build a reliable SE attack recognition system. In this paper we present the current state of the art in SE attack recognition systems, dissect an SE attack to identify its different stages, forms, and attributes, and isolate the critical enablers that influence whether an SE attack succeeds. Finally, we present our approach for an automated recognition system for chat-based SE attacks, based on Personality Recognition, Influence Recognition, Deception Recognition, Speech Acts, and Chat History.
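A hypothetical sketch of the fusion step such a recognizer might use, with the component recognizers stubbed out; the class names, cue lists, and weights are illustrative and not taken from the paper:

```python
# Fusion sketch for a chat-based SE recognizer: combine per-turn component
# scores (influence, deception, ...) into a running attack-risk estimate.
from dataclasses import dataclass

@dataclass
class ChatTurn:
    speaker: str
    text: str

def influence_score(turn: ChatTurn) -> float:
    # Stub: flag common persuasion cues (urgency, authority).
    cues = ("urgent", "immediately", "your manager asked", "verify your password")
    return float(any(c in turn.text.lower() for c in cues))

def deception_score(turn: ChatTurn) -> float:
    # Stub: placeholder for a trained deception classifier.
    return 0.0

def se_risk(history: list[ChatTurn]) -> float:
    """Combine per-turn component scores over the chat history."""
    weights = {"influence": 0.6, "deception": 0.4}  # illustrative weights
    per_turn = [
        weights["influence"] * influence_score(t)
        + weights["deception"] * deception_score(t)
        for t in history
    ]
    return max(per_turn, default=0.0)

history = [ChatTurn("stranger", "This is urgent, verify your password now")]
print(se_risk(history))  # 0.6, above an alerting threshold of, say, 0.5
```

In a real system each stub would be a trained model (personality, influence, deception, speech act), with chat history providing the context window; the holistic claim is that no single component suffices on its own.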
At the first Information Hiding Workshop in 1996 we tried to clarify the models and assumptions behind information hiding. We agreed the terminology of cover text and stego text against a background of the game proposed by our keynote speaker Gus Simmons: that Alice and Bob are in jail and wish to hatch an escape plan without the fact of their communication coming to the attention of the warden, Willie. Since then there have been significant strides in developing technical mechanisms for steganography and steganalysis, with new techniques from machine learning providing ever more powerful tools for the analyst, such as the ensemble classifier. There have also been a number of conceptual advances, such as the square root law and effective key length. But there always remains the question of whether we are using the right security metrics for the application. In this talk I plan to take a step backwards and look at the systems context. When can stegosystems actually be used? The deployment history is patchy, one example being TrueCrypt's hidden volumes, which were inspired by the steganographic file system. Image forensics also finds some use, and may be helpful against some adversarial machine learning attacks (or at least help us understand them). But there are other contexts in which patterns of activity have to be hidden for that activity to be effective. I will discuss a number of examples, starting with deception mechanisms such as honeypots, Tor bridges and pluggable transports, which merely have to evade detection for a while; then moving on to the more challenging task of designing deniability mechanisms, from leaking secrets to a newspaper through bitcoin mixes, which have to withstand forensic examination once the participants come under suspicion. We already know that, at the system level, anonymity is hard. However, the increasing quantity and richness of the data available to opponents may move a number of applications from the deception category to that of deniability. To pick up on our model of 20 years ago, Willie might not just put Alice and Bob in solitary confinement if he finds them communicating, but torture them or even execute them. Historically, changing threat models have been one of the great disruptive forces in security engineering. This leads me to suspect that a useful research area may be the intersection of deception and forensics, and how information hiding systems can be designed in anticipation of richer and more complex threat models. The ever-more-aggressive censorship systems deployed in some parts of the world also raise the possibility of using information hiding techniques in censorship circumvention. As an example of recent practical work, I will discuss Covertmark, a toolkit for testing pluggable transports that was partly inspired by StirMark, a tool we presented at the second Information Hiding Workshop twenty years ago.
Legacy software, outdated applications, and fast-changing technologies pose a serious threat to information security. Several domains, such as long-life industrial control systems and Internet of Things devices, suffer from this problem. In many cases, system updates and new acquisitions are not an option. In this paper, a framework that combines a reverse proxy with various deception-based defense mechanisms is presented. It is designed to autonomously provide deception methods to web applications. Context awareness and minimal configuration overhead make it well suited to operate as a service. The framework is built modularly to provide flexibility and adaptability to the application use case. It is evaluated with common web-based applications, such as content management systems, against several frequent attack vectors. Furthermore, the security and performance implications of the additional security layer are quantified and discussed. It is found that, given a sound implementation, no further attack vectors are introduced to the web application. The prototypical framework does increase the latency of communication with the underlying web application; this delay is within tolerable bounds and can be further reduced by a more efficient implementation.
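A minimal sketch of the reverse-proxy idea, assuming Flask and requests; the trap path, upstream address, ports, and alerting are our illustrations, not the paper's implementation:

```python
# Deception-injecting reverse proxy sketch: serve a booby-trapped path in
# front of an unmodified legacy web application, forwarding everything else.
from flask import Flask, Response, request
import requests

UPSTREAM = "http://127.0.0.1:8080"  # the legacy web application
app = Flask(__name__)

@app.route("/wp-admin.bak")
def honeypath():
    # Booby trap: a plausible-looking path no legitimate user ever visits.
    app.logger.warning("deception triggered by %s", request.remote_addr)
    return Response("<html><title>Admin backup</title></html>", status=200)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path):
    # All other traffic is forwarded transparently to the real application.
    upstream = requests.get(f"{UPSTREAM}/{path}", params=request.args)
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8000)
```

A production version would forward all HTTP methods and headers; the design point is that the deception layer sits entirely in front of the legacy application, which needs no modification, and the extra hop is where the measured latency overhead comes from.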
The ever-increasing sophistication of malware has made malicious binary collection and analysis an absolute necessity for proactive defenses. Meanwhile, malware authors seek to harden their binaries against analysis by incorporating environment detection techniques, in order to identify whether the binary is executing within a virtual environment or in the presence of monitoring tools. For security researchers, it remains an open question how to remove the artifacts of virtual machines to effectively build deceptive "honeypots" for malware collection and analysis. In this paper, we explore a completely different and yet promising approach using Linux containers. Linux containers, in theory, exhibit minimal virtualization artifacts and are easily deployable on low-power devices. Our work performs the first controlled experiments comparing Linux containers with bare metal and five major types of virtual machines. We seek to measure the deception capabilities offered by Linux containers in defeating mainstream virtual-environment detection techniques. In addition, we empirically explore the potential weaknesses of Linux containers to help defenders make more informed design decisions.
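For context, a typical virtualization probe of the kind such detection techniques employ checks DMI vendor strings and the CPUID hypervisor flag, both of which a container inherits unchanged from its host; the vendor list below is illustrative:

```python
# Virtualization-artifact probe sketch (Linux): the checks malware commonly
# runs, which a container passes identically to its bare-metal host.
from pathlib import Path

VM_VENDORS = ("VMware", "VirtualBox", "QEMU", "KVM", "Xen",
              "Microsoft Corporation")  # illustrative vendor strings

def looks_virtualized() -> bool:
    """Heuristic check for common VM artifacts."""
    # DMI firmware strings name the (virtual) hardware vendor.
    dmi = Path("/sys/class/dmi/id/sys_vendor")
    if dmi.exists() and any(v in dmi.read_text() for v in VM_VENDORS):
        return True
    # The CPUID hypervisor bit surfaces as a flag in /proc/cpuinfo.
    return "hypervisor" in Path("/proc/cpuinfo").read_text()

print("virtual machine suspected" if looks_virtualized()
      else "no VM artifacts found")
```

Because containers share the host kernel, both probes report the host's values, which is precisely the "minimal artifacts" property the paper sets out to measure.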
The insider threat has been the subject of extensive study, and many approaches spanning technical, behavioral, and psychological perspectives have been proposed to detect or mitigate it. However, it remains one of the most difficult security issues to combat. In this paper, we describe an ongoing effort to develop a systematic framework that addresses insider threat challenges by laying a scientific foundation for defensive deception, leveraging moving target defense (MTD), an emerging technique for providing proactive security measures, and integrating deception and MTD into attribute-based access control (ABAC).
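A hypothetical sketch of folding deception into an ABAC decision point, where a flagged subject still receives a permit but is served monitored decoy content; the attribute names, risk threshold, and decoy store are our illustration, not the paper's framework:

```python
# ABAC-with-deception sketch: the decision point routes high-risk subjects
# to decoys rather than issuing a hard deny that would tip them off.
POLICY = {("analyst", "finance-report"): "permit"}  # (role, resource) -> effect
DECOYS = {"finance-report": "decoy/finance-report-honeytoken.xlsx"}

def decide(subject: dict, resource: str) -> tuple[str, str]:
    """Return (effect, resource actually served)."""
    effect = POLICY.get((subject["role"], resource), "deny")
    if effect == "permit" and subject.get("risk_score", 0.0) > 0.8:
        # MTD-style twist: the suspected insider sees no change in access,
        # but receives monitored decoy content instead of the real asset.
        return "permit", DECOYS.get(resource, resource)
    return effect, resource

print(decide({"role": "analyst", "risk_score": 0.9}, "finance-report"))
```

Serving a decoy instead of denying access preserves the insider's belief that they remain undetected, buying defenders time to observe and attribute the activity.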
An important topic in cybersecurity is validating Active Indicators (AIs), which are stimuli that can be implemented in systems to trigger responses from individuals who may or may not be Insider Threats (ITs). The way in which a person responds to an AI is validated as a means of distinguishing a potential threat from a non-threat. To execute this validation process, it is important to create a paradigm that allows AIs to be manipulated while responses are measured. The scenarios are posed in a manner that requires participants to be situationally aware that they are being monitored and to act deceptively. In particular, manipulations in the environment should produce no differences between conditions with respect to immersion and ease of use; the narrative should be the driving force behind non-deceptive and IT responses. The success of the narrative and the simulation environment in inducing such behaviors is determined by immersion, usability, and stress-response questionnaires, as well as by performance. Initial results on the feasibility of using a narrative reliant upon situational awareness of monitoring and evasion are discussed.
Modern military forces are enabled by networked command and control systems, which provide an important interface between the cyber environment, electronic sensors, and decision makers. However, these systems are vulnerable to cyber attack. A successful cyber attack could compromise data within the system, leading to incorrect information being used for decisions, with potentially catastrophic results on the battlefield. Degrading the utility of a system, or the trust a decision maker has in their virtual display, may not be the most effective means of employing offensive cyber effects. The coordination of cyber and kinetic effects is proposed as the optimal strategy for neutralizing an adversary's C4ISR advantage; however, such an approach carries opportunity costs and is resource intensive. The adversary's cyber dependence can be leveraged as a means of gaining tactical and operational advantage in combat if a military force is sufficiently trained and prepared to attack the entire information network. This paper proposes a research approach intended to broaden the understanding of the relationship between command and control systems and the human decision maker, as an interface for both cyber and kinetic deception activity.