Biblio

Filters: Keyword is Embodied Conversational Agents
2020-07-16
McNeely-White, David G., Ortega, Francisco R., Beveridge, J. Ross, Draper, Bruce A., Bangar, Rahul, Patil, Dhruva, Pustejovsky, James, Krishnaswamy, Nikhil, Rim, Kyeongmin, Ruiz, Jaime et al.  2019.  User-Aware Shared Perception for Embodied Agents. 2019 IEEE International Conference on Humanized Computing and Communication (HCC). :46–51.

We present Diana, an embodied agent who is aware of her own virtual space and the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze and (soon) facial expressions as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric and shared perception, in the sense that Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than a program.

Biancardi, Beatrice, Wang, Chen, Mancini, Maurizio, Cafaro, Angelo, Chanel, Guillaume, Pelachaud, Catherine.  2019.  A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :1–7.

This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can affect users' perception of the agent and their willingness to continue the interaction with it. The model aims at detecting the user's impression of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. The user's impressions are recognized using a machine learning approach based on facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time, using a reinforcement learning algorithm that takes the user's impressions as reward to select the most appropriate combination of verbal and nonverbal behaviour to perform. A user study to test the model in a contextualized interaction with users is also presented. Our hypothesis is that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when the agent does not adapt its behaviour to users' reactions (i.e., when it selects its behaviours at random). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori attitudes toward virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
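The adaptation loop the abstract describes, detected user impressions serving as reward for selecting the next verbal/nonverbal behaviour combination, can be sketched as a simple bandit-style update. This is a minimal illustration under assumed names and parameters (`BehaviourSelector`, `epsilon`, the behaviour labels), not the paper's actual algorithm:

```python
import random

class BehaviourSelector:
    """Epsilon-greedy selection over verbal/nonverbal behaviour combinations,
    rewarded by the user's detected impression (e.g. a warmth score in [0, 1])."""

    def __init__(self, behaviours, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {b: 0.0 for b in behaviours}  # estimated reward per behaviour
        self.counts = {b: 0 for b in behaviours}    # times each behaviour was tried

    def select(self):
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise exploit best-known

    def update(self, behaviour, reward):
        # incremental mean of the impression rewards observed for this behaviour
        self.counts[behaviour] += 1
        self.values[behaviour] += (reward - self.values[behaviour]) / self.counts[behaviour]

# Hypothetical usage: each behaviour pairs a verbal style with a nonverbal cue.
selector = BehaviourSelector([("warm-speech", "smile"), ("neutral-speech", "nod")])
choice = selector.select()
selector.update(choice, reward=0.8)  # reward from the impression detector
```

The random condition in the study corresponds to ignoring the learned values and always sampling behaviours uniformly.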

2017-10-18
Rayon, Alex, Gonzalez, Timothy, Novick, David.  2016.  Analysis of Gesture Frequency and Amplitude As a Function of Personality in Virtual Agents. Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. :3–9.

Embodied conversational agents (ECAs) are changing the way humans interact with technology. In order to develop humanlike ECAs, they need to be able to perform the natural gestures used in day-to-day conversation. Gestures can give insight into an ECA's personality trait of extraversion, but what factors into this is still being explored. Our study focuses on two aspects of gesture: amplitude and frequency. Our goal is to find out whether agents should use specific gestures more frequently than others depending on the personality type they have been designed with. We also quantify gesture amplitude and compare it to a previous study on the perceived naturalness of an agent's gestures. Our results showed some indication that introverts and extraverts judge the agent's naturalness similarly. The larger the amplitude our agent used, the more natural its gestures were perceived to be. The frequency of gestures between extraverts and introverts seems to show hardly any difference, even in terms of the types of gesture used.

Valstar, Michel, Baur, Tobias, Cafaro, Angelo, Ghitulescu, Alexandru, Potard, Blaise, Wagner, Johannes, André, Elisabeth, Durieu, Laurent, Aylett, Matthew, Dermouche, Soumia et al.  2016.  Ask Alice: An Artificial Retrieval of Information Agent. Proceedings of the 18th ACM International Conference on Multimodal Interaction. :419–420.

We present a demonstration of the ARIA framework, a modular approach for rapid development of virtual humans for information retrieval that have linguistic, emotional, and social skills and a strong personality. We demonstrate the framework's capabilities in a scenario where 'Alice in Wonderland', a popular English literature book, is embodied by a virtual human representing Alice. The user can engage in an information exchange dialogue, where Alice acts as the expert on the book, and the user as an interested novice. Besides speech recognition, sophisticated audio-visual behaviour analysis is used to inform the core agent dialogue module about the user's state and intentions, so that it can go beyond simple chat-bot dialogue. The behaviour generation module features a unique new capability of being able to deal gracefully with interruptions of the agent.