Biblio

Filters: Keyword is Track Materials
2019-09-13
Madni, Azad, Madni, Carla. 2018. Architectural Framework for Exploring Adaptive Human-Machine Teaming Options in Simulated Dynamic Environments. Systems. 6:44.

With the growing complexity of environments in which systems are expected to operate, adaptive human-machine teaming (HMT) has emerged as a key area of research. While human teams have been extensively studied in the psychological and training literature, and agent teams have been investigated in the artificial intelligence research community, the commitment to research in HMT is relatively new and fueled by several technological advances such as electrophysiological sensors, cognitive modeling, machine learning, and adaptive/adaptable human-machine systems. This paper presents an architectural framework for investigating HMT options in various simulated operational contexts, including responding to systemic failures and external disruptions. The paper specifically discusses novel roles for machines made possible by new technology and offers key insights into adaptive human-machine teams. Landed aircraft perimeter security is used as an illustrative example of an adaptive cyber-physical-human system (CPHS). This example illuminates how the HMT framework identifies the different human and machine roles involved in the scenario. The framework is domain-independent and can be applied to both defense and civilian adaptive HMT. The paper concludes with recommendations for advancing the state of the art in HMT.

Cranford, Edward A., Gonzalez, Cleotilde, Aggarwal, Palvi, Lebiere, Christian. 2019. Towards Personalized Deceptive Signaling for Cyber Defense Using Cognitive Models.

Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. While effective, the algorithms are optimized against perfectly rational adversaries. In a laboratory experiment, we pit humans against the defense algorithm in an online game designed to simulate an insider attack scenario. Humans attack far more often than predicted under perfect rationality, so optimizing against human bounded rationality is vitally important. We propose a cognitive model based on instance-based learning theory and built in ACT-R that accurately predicts human performance and biases in the game. We show that the algorithm does not defend well, largely due to its static nature and lack of adaptation to the particular individual's actions. Thus, we propose an adaptive method of signaling that uses the cognitive model to trace an individual's experience in real time, in order to optimize defenses. We discuss the results and implications of personalized defense.
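The core mechanism the abstract invokes, instance-based learning, decides by blending the outcomes of remembered experiences, with each experience weighted by a memory activation that decays over time and carries noise. The sketch below is a rough, self-contained illustration of that idea, not the authors' ACT-R implementation; the class name, parameter values, and simplified activation equation are all assumptions for illustration.

```python
import math
import random


class IBLAgent:
    """Minimal instance-based learning sketch (hypothetical, not the
    paper's ACT-R model): choose an action by blending past outcomes,
    weighted by noisy, decaying memory activation."""

    def __init__(self, decay=0.5, noise=0.25, temperature=1.0, seed=0):
        self.decay = decay            # d: power-law forgetting rate
        self.noise = noise            # sigma of Gaussian activation noise
        self.temperature = temperature
        self.rng = random.Random(seed)
        self.t = 0                    # discrete decision clock
        self.instances = []           # list of (action, outcome, [timestamps])

    def _activation(self, timestamps):
        # Simplified base-level activation: log of summed power-law decay,
        # plus noise (the noise is what produces human-like variability).
        base = math.log(sum((self.t - tj) ** -self.decay for tj in timestamps))
        return base + self.rng.gauss(0.0, self.noise)

    def blended_value(self, action):
        matches = [(outcome, self._activation(ts))
                   for a, outcome, ts in self.instances if a == action]
        if not matches:
            return 0.0                # neutral default for unseen actions
        # Boltzmann retrieval probabilities over the matching instances.
        mx = max(act for _, act in matches)
        weights = [math.exp((act - mx) / self.temperature) for _, act in matches]
        total = sum(weights)
        return sum(o * w / total for (o, _), w in zip(matches, weights))

    def choose(self, actions):
        self.t += 1
        return max(actions, key=self.blended_value)

    def learn(self, action, outcome):
        # Merge repeats of the same (action, outcome) into one instance
        # with multiple timestamps, as repeated rehearsals.
        for a, o, ts in self.instances:
            if a == action and o == outcome:
                ts.append(self.t)
                return
        self.instances.append((action, outcome, [self.t]))
```

For example, an agent that has repeatedly been penalized for attacking signaled targets will blend a negative value for "attack" and shift toward "withdraw"; a defender tracing these instances in real time, as the paper proposes, could then tailor its signals to that individual's learned expectations.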