Biblio

Filters: Keyword is conflict
2021-02-03
Bellas, A., Perrin, S., Malone, B., Rogers, K., Lucas, G., Phillips, E., Tossell, C., de Visser, E.  2020.  Rapport Building with Social Robots as a Method for Improving Mission Debriefing in Human-Robot Teams. 2020 Systems and Information Engineering Design Symposium (SIEDS). :160–163.

Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. Some have proposed that social robots equipped with social abilities, such as emotion regulation through rapport building, may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy that simulated a military mission (i.e., Gravity and Strike). Measures of participants' connection with the agent, sense of trust, and overall likeability revealed that early rapport building can be beneficial for military missions.

2020-09-21
Kovach, Nicholas S., Lamont, Gary B.  2019.  Trust and Deception in Hypergame Theory. 2019 IEEE National Aerospace and Electronics Conference (NAECON). :262–268.
Hypergame theory has been used to model advantages in decision making. This research extends the hypergame model with a formal representation of deception: we propose a hypergame-theoretic framework based on temporal logic to model decision making under the potential for trust and deception. Using the temporal hypergame model, trust is defined within the constraints of the model, and the concepts of distrust, mistrust, misperception, and deception are then constructed from that definition. These formal definitions are applied to an Attacker-Defender hypergame to show how deception within the game can be formally modeled, demonstrating that hypergame theory can capture trust, mistrust, misperception, and deception in a single formal framework.
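The core hypergame idea referenced in the abstract, that each player optimizes in the game it *perceives* while the outcome is scored in the true game, can be illustrated with a minimal sketch. This is not the paper's temporal-logic formalism; the payoff numbers, move names, and the deliberately naive decision rule are all invented for illustration.

```python
# Minimal one-shot Attacker-Defender hypergame sketch: each player
# best-responds in its own perceived game, and the joint move is then
# evaluated in the true game. All payoffs are made-up illustrations.

# True payoffs as (attacker, defender) for (attacker_move, defender_move).
true_game = {
    ("exploit", "patch"):  (-1,  2),
    ("exploit", "ignore"): ( 4, -4),
    ("recon",   "patch"):  ( 0,  0),
    ("recon",   "ignore"): ( 1, -1),
}

# Deception: the attacker has convinced the defender that no exploit is
# coming, so in the defender's perceived game patching looks costly and
# ignoring looks harmless. The attacker perceives the true game.
defender_view = dict(true_game)
defender_view[("exploit", "patch")]  = (-1, -1)
defender_view[("exploit", "ignore")] = ( 4,  0)
defender_view[("recon",   "ignore")] = ( 1,  0)

def best_move(game, player):
    """Best move against a uniformly random opponent (a deliberately
    naive decision rule, used only to keep the sketch short)."""
    idx = 0 if player == "attacker" else 1
    moves = sorted({m[idx] for m in game})
    def expected(move):
        payoffs = [p[idx] for m, p in game.items() if m[idx] == move]
        return sum(payoffs) / len(payoffs)
    return max(moves, key=expected)

a = best_move(true_game, "attacker")      # attacker sees reality
d = best_move(defender_view, "defender")  # defender is deceived
print("moves:", a, d, "-> true payoffs:", true_game[(a, d)])
```

Here the deceived defender ignores the threat and suffers the worst true-game payoff, whereas a defender perceiving the true game would patch; the gap between the two outcomes is what a formal definition of deception in a hypergame has to capture.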

2020-02-17
Goncharov, Nikita, Dushkin, Alexander, Goncharov, Igor.  2019.  Mathematical Modeling of the Security Management Process of an Information System in Conditions of Unauthorized External Influences. 2019 1st International Conference on Control Systems, Mathematical Modelling, Automation and Energy Efficiency (SUMMA). :77–82.

In this paper, we consider an approach to studying the characteristics of an information system that is subject to various external influences, and to managing those characteristics using neural networks and wavelet transforms, based on determining the relationship between the modified state of the information system and the possibility of dynamically analyzing the effects. The process of influencing the information system includes the following components: impact on the components providing the functions of the information system; determination of the result of the exposure; analysis of that result; and response to it. The characteristics of the influencing means are taken as the input signal. The system includes an adaptive response unit whose input receives signals about the prerequisites for changes, and whose output generates signals for engaging appropriate means to eliminate or compensate for these prerequisites, or for the resulting changes in the information system directly.
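The detection-and-response loop described above can be sketched in miniature: a single-level Haar wavelet transform flags abrupt changes in a monitored metric, and an "adaptive response unit" maps each flagged change to a compensating action. The signal values, threshold, and action name are invented for illustration, and the sketch omits the neural-network component the paper combines with the wavelet analysis.

```python
# Sketch: Haar detail coefficients as a change detector feeding a
# simple adaptive response unit. All values are illustrative.

def haar_details(signal):
    """Single-level Haar detail coefficients over adjacent sample
    pairs; large magnitudes mark abrupt jumps within a pair."""
    return [(signal[i] - signal[i + 1]) / 2
            for i in range(0, len(signal) - 1, 2)]

def adaptive_response(signal, threshold=5.0):
    """Return (pair_index, action) for every detected abrupt change."""
    actions = []
    for i, d in enumerate(haar_details(signal)):
        if abs(d) > threshold:
            # A real system would select a specific compensation means
            # here; this placeholder action is purely illustrative.
            actions.append((i, "enable_compensation"))
    return actions

# Steady metric (e.g. request latency) with an abrupt spike at sample 6,
# modeling an unauthorized external influence.
metric = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 30.0, 10.0]
print(adaptive_response(metric))  # -> [(3, 'enable_compensation')]
```

Only the pair containing the spike exceeds the threshold, so the response unit fires for that pair alone; in the scheme the abstract describes, such a trigger would engage the means that eliminate or compensate for the detected influence.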