Bibliography
Although a fairly new topic in the context of cyber security, situation awareness (SA) has a far longer history of study and application in areas such as the control of complex enterprises and conventional warfare. Far more is known about SA in conventional military conflicts, or adversarial engagements, than in cyber ones. By exploring what is known about SA in conventional, commonly referred to as kinetic, battles, we may gain insights and research directions relevant to cyber conflicts. For this reason, having outlined the foundations and challenges of CSA in the previous chapter, we proceed to discuss the nature of SA in conventional (kinetic) conflict, review what is known about this kinetic SA (KSA), and then offer a comparison with what is currently understood about cyber SA (CSA). We find that the challenges and opportunities of KSA and CSA are similar, or at least parallel, in several important ways. With respect to similarities, in both the kinetic and cyber worlds, SA strongly impacts the outcome of the mission. Also similarly, cognitive biases are found in both KSA and CSA. As an example of differences, KSA often relies on a commonly accepted, widely used organizing representation: the map of the physical terrain of the battlefield. No such common representation has yet emerged in CSA.
Cyber SA is described as the current and predictive knowledge of cyberspace in relation to the Network, Missions, and Threats across friendly, neutral, and adversary forces. While this model provides a good high-level understanding of Cyber SA, it does not contain actionable information to help inform the development of capabilities to improve SA. In this paper, we present a systematic, human-centered process that uses a card sort methodology to understand and conceptualize Senior Leader Cyber SA requirements. From the data collected, we were able to build a hierarchy of high- and low-priority Cyber SA information, as well as uncover items that represent high levels of disagreement within and across organizations. The findings of this study serve as a first step toward a better understanding of what Cyber SA means to Senior Leaders, and can inform the development of future capabilities to improve their SA and Mission Performance.
Defensive deception shows promise in rebalancing the asymmetry of cybersecurity. It makes an attacker's job harder because it does more than just block access; it impacts the attacker's decision making, causing him or her to waste time and effort as well as to expose his or her presence in the network. Pilot studies conducted by NSA research demonstrated the plausibility of, and necessity for, metrics of success, including the difficulty of attacking the system, the behavioral changes caused, the cognitive and emotional reactions aroused, and the attacker strategy changes due to deception. Designing reliable and valid measures of effectiveness is a worthy (though often overlooked) goal for industry and government alike.
Security-critical systems demand multiple well-balanced mechanisms to detect ill-intentioned actions and protect valuable assets from damage while keeping costs at acceptable levels. The use of deception to enhance security has been studied for more than two decades. However, deception is still included in the software development process in an ad hoc fashion, typically realized as single tools or as entire solutions repackaged as honeypot machines. We propose a multi-paradigm modeling approach to specify deception tactics during the software development process so that conflicts and risks can be found in the initial phases of development, reducing the costs of ill-planned decisions. We describe a metamodel containing deception concepts that integrates other models, such as a goal-oriented model, a feature model, and behavioral UML models, to specify the static and dynamic aspects of a deception operation. The outcome of this process is a set of deception tactics realized by a set of deception components integrated with the system components. The feasibility of this multi-paradigm approach is shown by designing deception defense strategies for a students' presence control system for the Faculty of Science and Technology of Universidade NOVA de Lisboa.