Biblio

Filters: Keyword is virtual agent
2023-05-12
Jain, Raghav, Saha, Tulika, Chakraborty, Souhitya, Saha, Sriparna.  2022.  Domain Infused Conversational Response Generation for Tutoring based Virtual Agent. 2022 International Joint Conference on Neural Networks (IJCNN). :1–8.
Recent advances in deep learning, particularly the introduction of transformer-based models, have brought massive improvements and success in many Natural Language Processing (NLP) tasks. One area that has benefited immensely is conversational agents, or chatbots, in both open-ended (chit-chat) and task-specific (such as medical or legal dialogue bots) domains. However, in the era of automation, there is still a dearth of work focused on one of the most relevant use cases, i.e., tutoring dialogue systems that can help students learn new subjects or topics of interest. Most previous works in this domain are either rule-based systems, which require much manual effort, or are based on multiple-choice factual questions. In this paper, we propose EDICA (Educational Domain Infused Conversational Agent), a language-tutoring Virtual Agent (VA). EDICA employs two mechanisms to converse fluently with a student/user over a question and assist them in learning a language: (i) a Student/Tutor Intent Classification (SIC-TIC) framework to identify the intent of the student and decide the action of the VA, respectively, in the ongoing conversation and (ii) a Tutor Response Generation (TRG) framework to generate domain-infused and intent/action-conditioned tutor responses at every step of the conversation. The VA is able to provide hints, ask questions and correct the student's reply by generating an appropriate, informative and relevant tutor response. We establish the superiority of our proposed approach on various evaluation metrics over other baselines and state-of-the-art models.
ISSN: 2161-4407
2022-08-26
Anastasia, Nadya, Harlili, Yulianti, Lenny Putri.  2021.  Designing Embodied Virtual Agent in E-commerce System Recommendations using Conversational Design Interaction. 2021 8th International Conference on Advanced Informatics: Concepts, Theory and Applications (ICAICTA). :1–6.
Recommender systems are currently on the rise: more and more e-commerce platforms rely on this feature to give more privilege to their users. However, recommender systems still face many problems that can lead to their downfall. For instance, the cold-start problem and the lack of privacy for users' data will degrade the quality of these systems. Moreover, e-commerce also faces another significant issue: the lack of social presence. Compared to offline shopping, online shopping in e-commerce may be seen as lacking human presence and sociability, as it is more impersonal, cold, automated, and generally devoid of face-to-face interactions. All of these issues may erode users' trust in e-commerce itself. This study focuses on addressing these problems using conversational design interaction in the form of a Virtual Agent. This Virtual Agent can help e-commerce platforms gather user preferences, give clear and direct information regarding the use of users' data, and help users find the products, promotions, or similar items they seek. The final result of this solution is a high-fidelity prototype designed using User-Centered Design Methodology and the Natural Conversational Framework. The solution is implemented in the Shopee e-commerce platform by modifying its product recommendation system. The prototype was evaluated using usability testing against the usability goal of efficiency of use and the user experience goal of helpfulness.
2018-05-30
Ali, Mohammad Rafayet, Hoque, Ehsan.  2017.  Social Skills Training with Virtual Assistant and Real-Time Feedback. Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. :325–329.
Nonverbal cues are considered the most important part of social communication. Many people desire to improve their social skills, but due to stigma and the unavailability of resources, they are unable to practice them. In this work, we envision a virtual assistant that can give individuals real-time feedback on their smiles, eye contact, body language and volume modulation, available anytime, anywhere using a web browser. To instantiate our idea, we set up a Wizard-of-Oz study in the context of speed dating with 47 individuals. We collected videos of the participants having a conversation with a virtual agent before and after a speed-dating session. This study revealed that participants who used our system improved their gestures in a face-to-face conversation. Our next goal is to explore different machine learning techniques on the facial and prosodic features to automatically generate feedback on nonverbal cues. In addition, we want to explore different strategies for conveying real-time feedback that is non-threatening, repeatable, objective and more likely to transfer to a real-world conversation.
2017-10-18
Emmerich, Katharina, Masuch, Maic.  2016.  The Influence of Virtual Agents on Player Experience and Performance. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. :10–21.

This paper contributes a systematic research approach as well as findings of an empirical study conducted to investigate the effect of virtual agents on task performance and player experience in digital games. As virtual agents are supposed to evoke social effects similar to real humans under certain conditions, the basic social phenomenon of social facilitation is examined in a testbed game that was specifically developed to enable systematic variation of single impact factors of social facilitation. Independent variables were the presence of a virtual agent (present vs. not present) and the output device (ordinary monitor vs. head-mounted display). Results indicate social inhibition effects, but only for players using a head-mounted display. Additional potential impact factors and future research directions are discussed.

Dermouche, Soumia, Pelachaud, Catherine.  2016.  Sequence-based Multimodal Behavior Modeling for Social Agents. Proceedings of the 18th ACM International Conference on Multimodal Interaction. :29–36.

The goal of this work is to model a virtual character able to converse with different interpersonal attitudes. To build our model, we rely on the analysis of multimodal corpora of nonverbal behaviors. The interpretation of these behaviors depends on how they are sequenced (order) and distributed over time. To capture the dynamics of nonverbal signals across both modalities and time, we make use of temporal sequence mining. Specifically, we propose a new algorithm for temporal sequence extraction. We apply our algorithm to extract temporal patterns of nonverbal behaviors expressing interpersonal attitudes from a corpus of job interviews. We demonstrate the efficiency of our algorithm in terms of significant accuracy improvement over state-of-the-art algorithms.

2017-06-27
Ravenet, Brian, Bevacqua, Elisabetta, Cafaro, Angelo, Ochs, Magalie, Pelachaud, Catherine.  2016.  Perceiving Attitudes Expressed Through Nonverbal Behaviors in Immersive Virtual Environments. Proceedings of the 9th International Conference on Motion in Games. :175–180.

Virtual Reality and immersive experiences, which allow players to share the same virtual environment as the characters of a virtual world, have recently gained increasing interest. In conceiving these immersive virtual worlds, one of the challenges is giving the characters that populate them the ability to express behaviors that support the immersion. In this work, we propose a model capable of controlling and simulating a conversational group of social agents in an immersive environment. We describe this model, which was previously validated using a regular screen setting, and we present a study measuring whether users recognized the attitudes expressed by virtual agents through real-time generated animations of nonverbal behavior in an immersive setting. Results mirrored those of the regular screen setting, thus providing further insights for improving players' experiences by integrating them into immersive simulated group conversations with characters that express different interpersonal attitudes.