
Title: Conversational Agent Learning Natural Gaze and Motion of Multi-Party Conversation from Example
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Zou, Shuai, Kuzushima, Kento, Mitake, Hironori, Hasegawa, Shoichi
Conference Name: Proceedings of the 5th International Conference on Human Agent Interaction
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5113-3
Keywords: conversational agent, conversational agents, gaze, HMM, Human Behavior, machine learning, Metrics, pubcrawl, Scalability
Abstract

Recent developments in robotics and virtual reality (VR) are making embodied agents familiar, and the social behaviors of embodied conversational agents are essential to creating mindful daily lives with such agents. In particular, natural nonverbal behaviors such as gaze and gesture movements are required. We propose a novel method to create an agent with human-like gaze as a listener in multi-party conversation, using a Hidden Markov Model (HMM) to learn the behavior from real conversation examples. The model generates gaze reactions according to the users' gaze and utterances. We implemented an agent with the proposed method and created a VR environment in which users can interact with it. The proposed agent reproduced several features of the gaze behavior observed in the example conversations. An impression survey showed that at least one group of participants felt the proposed agent was similar to a human and better than agents built with conventional methods.
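
The abstract describes learning listener gaze with an HMM and generating gaze reactions from the users' gaze and utterances. As a rough illustration only, not the authors' implementation, the sketch below shows a discrete HMM whose hidden states stand for hypothetical listener attention modes: observations encode who is speaking and where the user is looking, the state belief is updated by forward filtering, and an agent gaze target is sampled from per-state gaze distributions. All state counts and probability tables are invented placeholders.

```python
import numpy as np

# Minimal sketch of HMM-driven listener gaze (assumed structure, not the paper's code).
rng = np.random.default_rng(0)

N_STATES = 3    # hypothetical hidden attention modes
N_OBS = 4       # encoded (current speaker, user gaze direction) symbols
N_TARGETS = 3   # agent gaze targets: 0=speaker, 1=other participant, 2=away

A = np.array([[0.80, 0.15, 0.05],      # state transition probabilities
              [0.10, 0.80, 0.10],
              [0.10, 0.20, 0.70]])
B = np.array([[0.6, 0.2, 0.1, 0.1],    # observation likelihoods per state
              [0.1, 0.6, 0.2, 0.1],
              [0.2, 0.1, 0.2, 0.5]])
G = np.array([[0.8, 0.1, 0.1],         # gaze-target distribution per state
              [0.2, 0.7, 0.1],
              [0.2, 0.2, 0.6]])
belief = np.full(N_STATES, 1.0 / N_STATES)   # initial state belief


def step(obs_symbol, belief):
    """Forward-filter one observation and sample an agent gaze target."""
    belief = (belief @ A) * B[:, obs_symbol]   # predict, then weight by likelihood
    belief /= belief.sum()                     # normalise the state belief
    target = rng.choice(N_TARGETS, p=belief @ G)   # mix per-state gaze distributions
    return target, belief


for obs in [0, 0, 1, 3, 2]:                    # toy observation sequence
    target, belief = step(obs, belief)
    print(f"obs={obs} -> gaze target {target}, belief={np.round(belief, 2)}")
```

In the paper, the HMM parameters are learned from recorded multi-party conversations rather than hand-set as above, and the generated gaze drives an embodied agent in VR.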

URL: https://dl.acm.org/doi/10.1145/3125739.3132607
DOI: 10.1145/3125739.3132607
Citation Key: zou_conversational_2017