Biblio

Filters: Keyword is psychology
2023-07-21
Eze, Emmanuel O., Keates, Simeon, Pedram, Kamran, Esfahani, Alireza, Odih, Uchenna.  2022.  A Context-Based Decision-Making Trust Scheme for Malicious Detection in Connected and Autonomous Vehicles. 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE). :31–36.
Fast-evolving Intelligent Transportation Systems (ITS) are crucial in the 21st century, promising answers to the congestion and accidents that trouble people worldwide. ITS applications such as Connected and Autonomous Vehicles (CAVs) update and broadcast road incident event messages, which requires significant data to be transmitted between vehicles for decisions to be made in real time. However, broadcasting trusted incident messages, such as accident alerts, between vehicles poses a challenge for CAVs. Most existing trust solutions are based on the vehicle's direct interaction-based reputation and on psychological approaches to evaluating the trustworthiness of received messages. This paper provides a scheme for improving trust in received incident alert messages for real-time decision-making, detecting malicious alerts between CAVs using direct and indirect interactions. It applies artificial intelligence and statistical data classification for decision-making on the received messages. The model is trained on data from the US Department of Transportation's Safety Pilot Model Deployment (SPMD). An Autonomous Decision-making Trust Scheme (ADmTS) that incorporates a machine learning algorithm and a local trust manager for decision-making has been developed. The experiment showed that the trained model could make correct predictions, achieving 98% accuracy with a 0.55% standard deviation in predicting false alerts on the 25% malicious data.
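As a hedged illustration of the classification step described in this abstract, the sketch below trains a standard classifier to flag malicious alerts from hypothetical trust features; the feature set, synthetic labels, and model choice are assumptions, not the paper's SPMD-trained ADmTS.

```python
# Sketch of a malicious-alert classifier over assumed trust features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-alert features: direct-interaction reputation of the
# sender, aggregated indirect (recommended) trust, and event plausibility.
direct_trust = rng.uniform(0, 1, n)
indirect_trust = rng.uniform(0, 1, n)
plausibility = rng.uniform(0, 1, n)
X = np.column_stack([direct_trust, indirect_trust, plausibility])
# Synthetic ground truth for illustration only: an alert counts as
# malicious when the combined trust evidence is weak.
y = ((0.5 * direct_trust + 0.3 * indirect_trust + 0.2 * plausibility
      + rng.normal(0, 0.05, n)) < 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```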
2023-05-12
Kostis, Ioannis-Aris, Karamitsios, Konstantinos, Kotrotsios, Konstantinos, Tsolaki, Magda, Tsolaki, Anthoula.  2022.  AI-Enabled Conversational Agents in Service of Mild Cognitive Impairment Patients. 2022 International Conference on Electrical and Information Technology (IEIT). :69–74.
Over the past two decades, several forms of non-intrusive technology have been deployed in cooperation with medical specialists to aid patients diagnosed with some form of mental, cognitive, or psychological condition. With the availability and accessibility of applications offered by mobile devices, as well as advancements in Artificial Intelligence and Natural Language Processing, Conversational Agents have been developed to aid medical specialists in detecting those conditions in their early stages, monitoring their symptoms and effects on the cognitive state of the patient, and supporting the patient in mitigating those symptoms. Building on recent advances in the scientific field of machine and deep learning, we aim to explore the degree of applicability of such technologies to cognitive health support Conversational Agents, and their impact on the acceptability of such applications by their end users. We therefore conduct a systematic literature review, following a transparent and thorough process to search and analyze the literature of the past five years, focused on the implementation of Conversational Agents, supported by Artificial Intelligence technologies, in service of patients diagnosed with Mild Cognitive Impairment and its variants.
2023-03-31
Ming, Lan.  2022.  The Application of Dynamic Random Network Structure in the Modeling of the Combination of Core Values and Network Education in the Propagation Algorithm. 2022 4th International Conference on Inventive Research in Computing Applications (ICIRCA). :455–458.
The topological structure of the network relationship is described by a network diagram, and the formation and evolution of the network are analyzed using a cost-benefit method. Assuming that self-interested network member nodes can form or break connections, a network topology model is established based on a dynamic random pairing evolution network model, and the static structure of the network is studied. Respecting the laws of college students' psychological cognition and innovating the model for cultivating core values can reverse young people's difficulty in identifying with those values, and in turn create a good political environment for their normal, healthy, civilized, and orderly network participation. In this context, an automatic learning algorithm for Bayesian network structure that effectively integrates expert knowledge with data-driven methods is realized.
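A minimal sketch of the cost-benefit, random-pairing dynamic the abstract describes, under assumed parameters (link cost, per-node benefits); the paper's actual model is not reproduced here.

```python
# Toy simulation: self-interested nodes, paired at random, keep a link
# only while it pays off for both endpoints. All values are illustrative.
import random

random.seed(0)
N, STEPS, COST = 50, 5000, 0.3
benefit = {i: random.uniform(0, 1) for i in range(N)}  # value of linking to node i
edges = set()

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)      # random pairing
    e = (min(i, j), max(i, j))
    if e in edges:
        # Break the link if it no longer benefits either endpoint.
        if benefit[i] < COST or benefit[j] < COST:
            edges.remove(e)
    else:
        # Connect only when both endpoints profit from the link.
        if benefit[i] >= COST and benefit[j] >= COST:
            edges.add(e)

print(f"{len(edges)} links among {N} nodes after {STEPS} pairings")
```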
2023-03-06
Grebenyuk, Konstantin A..  2021.  Motivation Generator: An Empirical Model of Intrinsic Motivation for Learning. 2021 IEEE International Conference on Engineering, Technology & Education (TALE). :1001–1005.
In the present research, an empirical model for building and maintaining students' intrinsic motivation to learn is proposed. Unlike many other models of motivation, this model is not based on psychological theories but is derived directly from empirical observations made by experienced learners and educators. Thanks to the empirical nature of the proposed model, its application to educational practice may be more straightforward than that of assumption-based motivation theories. Interestingly, the structure of the proposed model resembles, to some extent, the structure of an oscillator circuit containing an amplifier and a positive feedback loop.
ISSN: 2470-6698
2023-02-17
Babel, Franziska, Baumann, Martin.  2022.  Designing Psychological Conflict Resolution Strategies for Autonomous Service Robots. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :1146–1148.
As autonomous service robots become increasingly ubiquitous in our daily lives, human-robot conflicts will become more likely when humans and robots share the same spaces and resources. This thesis investigates the conflict resolution of robots and humans in everyday conflicts in domestic and public contexts, evaluating the acceptability, trustworthiness, and effectiveness of verbal and non-verbal strategies by which the robot can resolve the conflict in its favor. Based on the assumption of the Media Equation and the CASA paradigm that people interact with computers as social actors, robot conflict resolution strategies were derived from social psychology and human-machine interaction. The effectiveness, acceptability, and trustworthiness of those strategies were evaluated in online, virtual reality, and laboratory experiments. Future work includes determining the psychological processes of human-robot conflict resolution in further experimental studies.
2022-09-29
López-Aguilar, Pablo, Solanas, Agusti.  2021.  Human Susceptibility to Phishing Attacks Based on Personality Traits: The Role of Neuroticism. 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). :1363–1368.
The COVID-19 pandemic has opened a wide range of opportunities for cyber-criminals, who take advantage of the anxiety generated and the time spent on the Internet to undertake massive phishing campaigns. Although companies are adopting protective measures, the psychological traits of the victims are still considered from a very generic perspective. In particular, the current literature suggests that the Big-Five personality traits (i.e., Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) might play an important role in human behaviour to counter cybercrime. However, results are not unanimous regarding the correlation between phishing susceptibility and neuroticism. To understand this lack of consensus, this article provides a comprehensive literature review of papers extracted from relevant databases (IEEE Xplore, Scopus, ACM Digital Library, and Web of Science). Our results show that there is no well-established psychological theory explaining the role of neuroticism in the phishing context. We contend that non-representative samples and the lack of homogeneity amongst the studies might be the culprits behind this lack of consensus on the role of neuroticism in phishing susceptibility.
Ferguson-Walter, Kimberly J., Gutzwiller, Robert S., Scott, Dakota D., Johnson, Craig J..  2021.  Oppositional Human Factors in Cybersecurity: A Preliminary Analysis of Affective States. 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW). :153–158.
The need for cyber defense research is growing as more cyber-attacks are directed at critical infrastructure and other sensitive networks. Traditionally, the focus has been on hardening system defenses, but other techniques are being explored, including cyber and psychological deception, which aim to negatively impact the cognitive and emotional state of cyber attackers directly through the manipulation of network characteristics. In this study, we present a preliminary analysis of survey data collected following a controlled experiment in which over 130 professional red teamers participated in a network penetration task that included cyber deception and psychological deception manipulations [7]. Thematic and inductive analysis of previously unanalyzed open-ended survey responses revealed factors associated with affective states. These preliminary results are a first step in our analysis efforts and show that there are potentially several distinct dimensions of cyber-behavior that induce negative affective states in cyber attackers, which may serve as potential avenues for supplementing traditional cyber defense strategies.
2022-06-06
Uchida, Hikaru, Matsubara, Masaki, Wakabayashi, Kei, Morishima, Atsuyuki.  2020.  Human-in-the-loop Approach towards Dual Process AI Decisions. 2020 IEEE International Conference on Big Data (Big Data). :3096–3098.
How to develop AI systems that can explain how they made decisions is one of today's important and active research topics. Inspired by the dual-process theory in psychology, this paper proposes a human-in-the-loop approach to developing System-2 AI that makes inferences logically and outputs interpretable explanations. Our proposed method first asks crowd workers to propose understandable features of objects of multiple classes, then collects training data from the Internet to generate classifiers for those features. Logical decision rules over the set of generated classifiers can explain why each object belongs to a particular class. In a preliminary experiment, we applied our method to image classification of Asian national flags and examined the effectiveness and issues of the method. In future studies, we plan to combine the System-2 AI with System-1 AI (e.g., neural networks) to output decisions efficiently.
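The rule-based explanation idea can be made concrete with a toy sketch: stand-in feature classifiers feed a logical rule whose satisfied conjuncts explain the decision. The flag features and rule below are illustrative assumptions keyed to the paper's flag-classification example, not the crowd-generated classifiers themselves.

```python
# Stand-ins for crowd-defined feature classifiers; a real system would
# back each predicate with a trained model over web-collected images.
def has_red_circle(img):
    return img.get("red_circle", False)

def has_white_background(img):
    return img.get("white_bg", False)

# A logical decision rule: the class holds when all its features fire.
RULES = {
    "Japan": [has_red_circle, has_white_background],
}

def classify(img):
    for label, feats in RULES.items():
        if all(f(img) for f in feats):
            fired = [f.__name__ for f in feats]
            return label, f"classified as {label} because {' and '.join(fired)}"
    return None, "no rule fired"

print(classify({"red_circle": True, "white_bg": True}))
```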
2022-05-23
Zhang, Zuyao, Gao, Jing.  2021.  Design of Immersive Interactive Experience of Intangible Cultural Heritage based on Flow Theory. 2021 13th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). :146–149.
At present, the limitation of intangible cultural heritage experiences lies in the lack of long-term immersive cultural experience for users. To address this problem, this study divides the process from the perspective of Freudian psychology and combines theoretical research on intangible cultural heritage and flow experience to establish a preliminary research direction. Then, based on existing interactive experience cases of intangible cultural heritage, a method model for the immersive interactive experience of intangible cultural heritage based on flow theory is derived through user interviews. Finally, the model is verified against data. In addition, this study offers important insights into the differences between primary users and experienced users, and proposes specific guiding suggestions for future immersive interactive experience design of intangible cultural heritage based on flow theory.
2021-09-07
Lessio, Nadine, Morris, Alexis.  2020.  Toward Design Archetypes for Conversational Agent Personality. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3221–3228.
Conversational agents (CAs), often referred to as chatbots, are being widely deployed within existing commercial frameworks and online service websites. As society moves further into incorporating data-rich systems, like the Internet of Things (IoT), into daily life, conversational agents are expected to take on an increasingly important role in helping users manage these complex systems. Here, the concept of personality is becoming increasingly important as we seek more human-friendly ways to interact with these CAs. In this work, a conceptual framework is proposed that considers how existing standard psychological and persona models could be mapped to different kinds of CA functionality beyond dialogue alone. As CAs become more diverse in their abilities and more integrated with different kinds of systems, it is important to consider how function can be affected by the design of agent personality, whether intentionally designed or not. Based on this framework, derived archetype classes of CAs are presented as starting points that can aid designers, developers, and the curious in thinking about how to work toward better CA personality development.
Choi, Ho-Jin, Lee, Young-Jun.  2020.  Deep Learning Based Response Generation using Emotion Feature Extraction. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :255–262.
Neural response generation aims to generate human-like responses to human utterances using deep learning. Previous studies have shown that expressing emotion in response generation improves user performance, engagement, and satisfaction, and allows conversational agents to communicate with users at a human level. However, previous emotional response generation models cannot capture the subtle aspects of emotion, because they represent the desired emotion of the response as a single token. Moreover, such models struggle to generate natural responses related to the input utterance at the content level, since the information in the input utterance can be biased toward the emotion token. To overcome these limitations, we propose an emotional response generation model that generates emotional and natural responses using emotion feature extraction. Our model consists of two parts: an extraction part and a generation part. The extraction part extracts the emotion of the input utterance as a vector using a pre-trained LSTM-based classification model. The generation part generates an emotional and natural response to the input utterance by reflecting both the emotion vector from the extraction part and the thought vector from the encoder. We evaluate our model on an emotion-labeled dialogue dataset, DailyDialog, using quantitative and qualitative analyses: emotion classification, response generation modeling, and a comparative study. Overall, the experiments show that the proposed model can generate emotional and natural responses.
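A minimal sketch of the two-part wiring the abstract describes, with assumed dimensions: an LSTM classifier yields an emotion vector, which is concatenated with the encoder's thought vector to condition a decoder. This is an interpretation of the abstract, not the authors' released architecture.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, N_EMO = 1000, 64, 128, 7   # assumed sizes

class EmotionExtractor(nn.Module):           # "extraction part"
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, N_EMO)
    def forward(self, tokens):
        _, (h, _) = self.lstm(self.emb(tokens))
        return torch.softmax(self.head(h[-1]), dim=-1)   # emotion vector

class Encoder(nn.Module):                    # produces the "thought vector"
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
    def forward(self, tokens):
        _, (h, _) = self.lstm(self.emb(tokens))
        return h[-1]

utterance = torch.randint(0, VOCAB, (2, 10))        # batch of token ids
emotion = EmotionExtractor()(utterance)              # (2, 7)
thought = Encoder()(utterance)                       # (2, 128)
decoder_init = torch.cat([thought, emotion], dim=-1) # conditions the decoder
print(decoder_init.shape)                            # torch.Size([2, 135])
```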
2021-06-01
Plager, Trenton, Zhu, Ying, Blackmon, Douglas A..  2020.  Creating a VR Experience of Solitary Confinement. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :692–693.
The goal of this project is to create a realistic VR experience of solitary confinement and study its impact on users. Although there have been active debates and studies on this subject, very few people have personal experience of solitary confinement. Our first aim is to create such an experience in VR to raise the awareness of solitary confinement. We also want to conduct user studies to compare the VR solitary confinement experience with other types of media experiences, such as films or personal narrations. Finally, we want to study people’s sense of time in such a VR environment.
2021-03-29
Schiliro, F., Moustafa, N., Beheshti, A..  2020.  Cognitive Privacy: AI-enabled Privacy using EEG Signals in the Internet of Things. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :73–79.

With the advent of Industry 4.0, the Internet of Things (IoT), and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against unconsented intrusion by third parties into their brain data, as well as against the unauthorized collection of those data. This has caused growing concern among users and industry experts that laws are needed to protect the rights to cognitive liberty, mental privacy, mental integrity, and psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect data and to classify users and their tasks from EEG data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (a multi-class problem) and their tasks (eyes open versus eyes closed, a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated on the PhysioNet BCI EEG dataset, and the results reveal its high performance in classifying users and their tasks while achieving high data privacy.
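A hedged sketch of the classification side only: an LSTM over EEG windows with a multi-class subject head and a binary eyes-open/eyes-closed task head. Channel count, window length, and layer sizes are assumptions, and the normalized-correlation privacy step is omitted.

```python
import torch
import torch.nn as nn

CHANNELS, WINDOW, HID, N_SUBJECTS = 64, 160, 128, 10   # assumed sizes

class EEGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(CHANNELS, HID, batch_first=True)
        self.subject_head = nn.Linear(HID, N_SUBJECTS)  # multi-class: who is this?
        self.task_head = nn.Linear(HID, 2)              # binary: eyes open/closed
    def forward(self, x):                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.subject_head(h[-1]), self.task_head(h[-1])

x = torch.randn(4, WINDOW, CHANNELS)      # four synthetic EEG windows
subj_logits, task_logits = EEGNet()(x)
print(subj_logits.shape, task_logits.shape)   # (4, 10) (4, 2)
```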

Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58–63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact in human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and experiment with different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness, and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), a relatively new, interesting, and very challenging dataset, as the main dataset for our study.
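For orientation, here is a minimal seven-class CNN sketch in the spirit of the architectures the paper experiments with; the grayscale input size and layer configuration are assumptions, not the networks tuned in the study.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, n_classes),     # one logit per emotion
        )
    def forward(self, x):                  # x: (batch, 1, 48, 48) face crops
        return self.classifier(self.features(x))

logits = EmotionCNN()(torch.randn(8, 1, 48, 48))
print(logits.shape)                        # torch.Size([8, 7])
```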

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1–5.

As a part of body language, facial expression reflects the current emotional and psychological state of a person. Recognizing facial expressions can help us understand others and enhance communication with them. In this paper, we propose a facial expression recognition method based on ensemble learning of convolutional neural networks. Our model is composed of three sub-networks and uses an SVM classifier to integrate their outputs to obtain the final result. The model's expression recognition accuracy on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
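A sketch of the ensemble step under stated assumptions: the three sub-networks' class-probability outputs are simulated, stacked, and integrated by an SVM, mirroring the fusion described in the abstract; training the three real CNNs is omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 7
y = rng.integers(0, n_classes, n)

def fake_cnn_probs(y, noise):
    """Simulate one sub-network's softmax output, correlated with the label."""
    p = rng.dirichlet(np.ones(n_classes), n) * noise
    p[np.arange(n), y] += 1.0
    return p / p.sum(axis=1, keepdims=True)

# Concatenate the three sub-networks' probability vectors per sample,
# then let an SVM produce the final expression label.
X = np.hstack([fake_cnn_probs(y, s) for s in (0.5, 1.0, 2.0)])
svm = SVC(kernel="rbf").fit(X[:200], y[:200])
print("held-out accuracy:", svm.score(X[200:], y[200:]))
```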

2021-02-22
Eftimie, S., Moinescu, R., Rǎcuciu, C..  2020.  Insider Threat Detection Using Natural Language Processing and Personality Profiles. 2020 13th International Conference on Communications (COMM). :325–330.
This work represents an interdisciplinary effort to proactively identify insider threats using natural language processing and personality profiles. Profiles were developed for the relevant insider threat types using the five-factor model of personality and were used in a proof-of-concept detection system. The system employs a third-party cloud service that uses natural language processing to analyze personality profiles based on personal content. Finally, an assessment was made of the feasibility of the system using a public dataset.
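A hedged sketch of how such profile matching might look: a stub stands in for the third-party personality API, and an inferred Big Five vector is compared against insider-threat profiles by cosine similarity. The profile names and values are hypothetical, not the paper's.

```python
import numpy as np

# Five-factor order: openness, conscientiousness, extraversion,
# agreeableness, neuroticism (scores in [0, 1]). Values are illustrative.
THREAT_PROFILES = {
    "disgruntled_leaker": np.array([0.6, 0.3, 0.4, 0.2, 0.8]),
    "careless_insider":   np.array([0.5, 0.2, 0.6, 0.5, 0.5]),
}

def infer_big_five(text: str) -> np.ndarray:
    """Stand-in for the cloud NLP personality service used in the paper."""
    return np.array([0.55, 0.25, 0.45, 0.25, 0.75])    # dummy scores

def threat_scores(text: str) -> dict:
    v = infer_big_five(text)
    # Cosine similarity between the user's profile and each threat profile.
    return {name: float(np.dot(v, p) / (np.linalg.norm(v) * np.linalg.norm(p)))
            for name, p in THREAT_PROFILES.items()}

print(threat_scores("sample of the employee's written content"))
```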
2021-02-03
Mou, W., Ruocco, M., Zanatto, D., Cangelosi, A..  2020.  When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :956–962.

Trust is a critical issue in human-robot interaction (HRI), as it is at the core of humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others, which may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration while studying trust in HRI. In this paper, we investigated whether exposure to a robot's ToM abilities could affect humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, participants were asked to accept price evaluations of common objects presented by the robot. The participants' willingness to change their own price judgement of the objects (i.e., accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.

Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130—137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often set on ethical and safe design forestalling unintentional failures, but cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is responsible to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

2021-02-01
Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), and examines the psychophysiological correlates of users' trust through users' electrodermal activity responses. Eighteen (18) carefully selected participants embarked on a hypothetical trip playing an autonomous vehicle driving game. In order to stay safe throughout the driving experience under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety and integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by evaluating the AV to be highly or less trustworthy and consequently relying on the artificial intelligence or on the joystick to control the vehicle. The results of the experiment show that users' trust and their delegation of controls to the AV increase significantly as risk decreases, and vice versa. In addition, there was a significant difference between users' initial trust before and after interacting with AVs under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower trust levels towards AVs. The implications of these results and future research opportunities are discussed.
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of Artificial Intelligence technology in smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. In this research, we report a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), Subjective Mental Effort Questionnaire (SMEQ), and NASA-TLX; and a behavioral measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these measures, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
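As a small, well-grounded example of one physiological measure from the list above, the snippet below computes RMSSD, a standard time-domain HRV metric, from synthetic RR intervals; the paper's actual processing pipeline is not shown.

```python
import numpy as np

# Synthetic RR intervals (milliseconds between successive heartbeats).
rr_ms = np.array([812, 798, 830, 815, 790, 805, 820])
diffs = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diffs ** 2))   # root mean square of successive differences
print(f"RMSSD = {rmssd:.1f} ms")
```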
2020-12-07
Xia, H., Xiao, F., Zhang, S., Hu, C., Cheng, X..  2019.  Trustworthiness Inference Framework in the Social Internet of Things: A Context-Aware Approach. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. :838–846.
The concept of social networking is integrated into the Internet of Things (IoT) to socialize smart objects by mimicking human behaviors, leading to a new paradigm, the Social Internet of Things (SIoT). A crucial problem that needs to be solved is how to establish reliable relationships autonomously among objects, i.e., building trust. This paper focuses on exploring an efficient context-aware trustworthiness inference framework to address this issue. Based on the sociological and psychological principles of trust generation between human beings, the proposed framework divides trust into two types: familiarity trust and similarity trust. The familiarity trust can be calculated from direct trust and recommendation trust, while the similarity trust can be calculated from external similarity trust and internal similarity trust. We subsequently present concrete methods for the calculation of the different trust elements. In particular, we design a kernel-based nonlinear multivariate grey prediction model to predict the direct trust of a specific object, which acts as the core module of the entire framework. In addition, considering the fuzziness and uncertainty in the concept of trust, we introduce the fuzzy logic method to synthesize these trust elements. The experimental results verify the validity of the core module and the framework's resistance to attacks.
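A minimal sketch of the framework's trust decomposition with assumed linear weights; the paper itself derives the inputs with a kernel-based grey prediction model and synthesizes them with fuzzy logic rather than fixed weights.

```python
# Familiarity trust combines direct and recommendation trust; similarity
# trust combines external and internal similarity. Weights are assumptions.
def familiarity_trust(direct, recommendation, alpha=0.7):
    return alpha * direct + (1 - alpha) * recommendation

def similarity_trust(external, internal, beta=0.5):
    return beta * external + (1 - beta) * internal

def overall_trust(direct, recommendation, external, internal, w=0.6):
    return (w * familiarity_trust(direct, recommendation)
            + (1 - w) * similarity_trust(external, internal))

print(overall_trust(direct=0.8, recommendation=0.6,
                    external=0.7, internal=0.5))
```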
2020-12-01
Geiskkovitch, D. Y., Thiessen, R., Young, J. E., Glenwright, M. R..  2019.  What? That's Not a Chair!: How Robot Informational Errors Affect Children's Trust Towards Robots. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :48–56.

Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information, or have mechanical malfunctions, we know little of how these robot errors are perceived by children, and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. We however failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.

Ogawa, R., Park, S., Umemuro, H..  2019.  How Humans Develop Trust in Communication Robots: A Phased Model Based on Interpersonal Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :606–607.

The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in categories of trust as well as in self-esteem were measured. Results showed that general and category trust developed in the early phase, and that self-esteem increased over the course of interactions with the robot.

2020-11-17
Hu, Y., Sanjab, A., Saad, W..  2019.  Dynamic Psychological Game Theory for Secure Internet of Battlefield Things (IoBT) Systems. IEEE Internet of Things Journal. 6:3712–3726.

In this paper, a novel anti-jamming mechanism is proposed to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, the problem is formulated as a dynamic psychological game between a soldier and an attacker. In this game, the soldier seeks to accomplish a time-critical mission by traversing a battlefield within a certain amount of time while maintaining connectivity with an IoBT network. The attacker, on the other hand, seeks to find the optimal opportunity to compromise the IoBT network and maximize the delay of the soldier's IoBT transmission link. The soldier's and the attacker's psychological behavior are captured using tools from psychological game theory, in which each player's intention to harm the other is considered in their utilities. To solve this game, a novel learning algorithm based on Bayesian updating is proposed to find an ε-like psychological self-confirming equilibrium of the game.
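To make the Bayesian-updating idea concrete, here is a toy sketch in which the soldier's belief that the attacker intends to jam is updated from observed link delays; the likelihood values are assumptions, not the paper's, and the full equilibrium computation is omitted.

```python
def bayes_update(prior_jam, observed_delay_high):
    # Assumed likelihoods: P(high delay | jamming) and P(high delay | no jamming).
    p_obs_jam = 0.8 if observed_delay_high else 0.2
    p_obs_no = 0.3 if observed_delay_high else 0.7
    evidence = p_obs_jam * prior_jam + p_obs_no * (1 - prior_jam)
    return p_obs_jam * prior_jam / evidence       # posterior P(jamming | obs)

belief = 0.5                                      # uninformative prior
for high_delay in [True, True, False, True]:      # synthetic observations
    belief = bayes_update(belief, high_delay)
    print(f"P(attacker jamming) = {belief:.3f}")
```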

2020-10-12
Ferguson-Walter, Kimberly, Major, Maxine, Van Bruggen, Dirk, Fugate, Sunny, Gutzwiller, Robert.  2019.  The World (of CTF) is Not Enough Data: Lessons Learned from a Cyber Deception Experiment. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :346–353.
The human side of cyber is fundamentally important to understanding and improving cyber operations. With the exception of Capture the Flag (CTF) exercises, cyber testing and experimentation tend to ignore the human attacker. While traditional CTF events include a deeply rooted human component, they rarely aim to measure human performance, cognition, or psychology. We argue that CTF is not sufficient for measuring these aspects of the human; instead, we examine the value of performing red team behavioral and cognitive testing in a large-scale, controlled human-subject experiment. In this paper we describe the pros and cons of performing this type of experimentation and provide a detailed exposition of the data collection and experimental controls used during a recent cyber deception experiment, the Tularosa Study. Finally, we discuss lessons learned and how our experiences can inform best practices in future cyber operations studies of human behavior and cognition.