Bibliography
Conversational agents complement traditional teaching and learning instruments by enabling new designs for knowledge creation and learning analysis across organizational environments. Means of building a common educational background in both industry and academia have become of interest for ensuring educational effectiveness and consistency. Such a context requires transferable practices and forms the basis for Agile adoption in Higher Education, at both the curriculum and operational levels. The current work proposes a model for delivering Agile Scrum training through an assistive web-based conversational service, where analytics are collected to provide an overview of learners' knowledge paths. Beyond its applicability in the Software Engineering (SE) industry, the model is intended to support the academic SE curriculum. A user-acceptance test was carried out with 200 undergraduate students, and patterns of interaction were identified for two conversational strategies.
The advances in natural language processing and the wide use of social networks have boosted the proliferation of chatbots. These are software services, typically embedded within a social network, that can be addressed through natural-language conversation. Many chatbots exist for different purposes, e.g., to book all kinds of services, to automate software engineering tasks, or for customer support. In previous work, we proposed the use of chatbots for domain-specific modelling within social networks. In this short paper, we report on the flexible-modelling needs that arise when modelling through conversation. In particular, we propose a process of meta-model relaxation to make modelling more flexible, followed by correction steps to make the model conform to its meta-model. The paper shows how this process is integrated within our conversational modelling framework and illustrates the approach with an example.
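The abstract gives no implementation details; the following is only a minimal sketch of the relax-then-correct idea, assuming a toy meta-model where classes simply declare mandatory features (all names and data structures below are hypothetical, not the paper's framework):

```python
from dataclasses import dataclass, field

@dataclass
class MetaClass:
    """Hypothetical minimal meta-model: a class with mandatory features."""
    name: str
    mandatory: set = field(default_factory=set)

@dataclass
class ModelObject:
    meta: MetaClass
    slots: dict = field(default_factory=dict)

def relax(meta: MetaClass) -> MetaClass:
    """Relaxation: drop mandatory-feature constraints so that partial
    objects produced mid-conversation are accepted."""
    return MetaClass(meta.name, mandatory=set())

def correction_steps(obj: ModelObject, strict_meta: MetaClass):
    """Correction: list what must be filled in before the model
    conforms to the original (strict) meta-model again."""
    missing = strict_meta.mandatory - obj.slots.keys()
    return [f"set '{feat}' on {strict_meta.name}" for feat in sorted(missing)]

# Example: a 'Task' meta-class whose 'title' and 'deadline' are mandatory.
task_meta = MetaClass("Task", {"title", "deadline"})
partial = ModelObject(relax(task_meta), {"title": "buy milk"})
print(correction_steps(partial, task_meta))  # ["set 'deadline' on Task"]
```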
A conversational agent for detecting anomalous traffic in consumer IoT networks is presented. The agent accepts two inputs: user speech received by Amazon Alexa-enabled devices, and classified IDS logs stored in a DynamoDB table. Aural analysis is used to query the database of network traffic and respond accordingly. In doing so, this paper presents a solution to the problem of making consumers situationally aware when their IoT devices are infected and anomalous traffic has been detected. The proposed conversational agent addresses the issue of how to present network information to non-technical users for better comprehension, and improves awareness of threats derived from the Mirai botnet malware.
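As a rough sketch of the query path, assuming boto3 and an illustrative table schema (the table name, key, and `label` attribute are assumptions; the Alexa request/response JSON envelope is omitted for brevity):

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

# The paper only states that classified IDS logs live in a DynamoDB table;
# this table name and schema are illustrative assumptions.
table = boto3.resource("dynamodb").Table("ClassifiedIdsLogs")

def anomaly_report(device_id: str) -> str:
    """Build a spoken answer for 'Alexa, is my <device> infected?'."""
    result = table.query(
        KeyConditionExpression=Key("device_id").eq(device_id),
        FilterExpression=Attr("label").eq("anomalous"),  # assumed attribute
    )
    hits = result.get("Count", 0)
    if hits == 0:
        return f"I found no anomalous traffic from {device_id}."
    return (f"Warning: {hits} anomalous flows were logged for {device_id}; "
            "it may be part of a Mirai-style botnet.")
```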
Building natural and conversational virtual humans is a task of formidable complexity. We believe that, especially when building agents that affectively interact with biological humans in real time, a cognitive-science-based, multilayered sensing and artificial intelligence (AI) systems approach is needed. For this demo, we show a working version (through human interaction with it) of our modular system: a natural, conversational 3D virtual human built from AI and sensing layers. These include sensing the human user via facial emotion recognition, voice stress, semantic meaning of the words, eye gaze, heart rate, and galvanic skin response. These inputs are combined with AI sensing and recognition of the environment using deep-learning natural language captioning or dense captioning. All of these are processed by our AI avatar system, allowing for an affective and empathetic conversation using NLP topic-based dialogue, with the avatar using facial expressions, gestures, breath, eye gaze, and voice in two-way, back-and-forth conversations with a sensed human. Our lab has been building these systems in stages over the years.
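The fusion mechanism is not detailed in the abstract; below is a purely hypothetical sketch of combining the listed sensing channels into a single affect estimate that could steer the avatar's response (channel names and weights are illustrative assumptions):

```python
def fuse_affect(channels: dict) -> float:
    """Combine pre-normalized [0, 1] sensing channels into one arousal
    score; the weights are illustrative, not the authors' values."""
    weights = {"face_emotion": 0.3, "voice_stress": 0.25,
               "heart_rate": 0.2, "gsr": 0.15, "gaze_engagement": 0.1}
    return sum(weights[k] * channels.get(k, 0.0) for k in weights)

reading = {"face_emotion": 0.8, "voice_stress": 0.6,
           "heart_rate": 0.5, "gsr": 0.4, "gaze_engagement": 0.9}
if fuse_affect(reading) > 0.5:  # hypothetical threshold
    print("avatar: switch to an empathetic topic and soften expression")
```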
There are many challenges when it comes to deploying robots remotely, including lack of operator situation awareness and decreased trust. Here, we present a conversational agent embodied in a Furhat robot that can help with the deployment of such remote robots by facilitating teaming with varying levels of operator control.
The purpose of this paper is to examine the influence of lexical entrainment when communicating with a conversational agent. We consider two types of cognitive information processing: top-down processing, which depends on prior knowledge, and bottom-up processing, which depends on one's partner's behavior. The two work in a mutually complementary way in interpersonal cognition. We hypothesized that the agent's behavior would dissociate the two modes of processing. We designed a word-choice task in which participants and the agent alternately described and selected pictures, and we controlled two factors: first, the expectation about the agent's intelligence, set through the experimenter's instructions, as top-down processing; second, the agent's behavior, manipulated in the degree of intellectual impression it conveyed, as bottom-up processing. The results show that people select words differently depending on the diversity of the agent's expressed behavior, supporting our hypothesis. The findings obtained in this study could provide new guidelines for human-to-agent language interfaces.
Our operational context is a task-oriented dialog system in which no single module satisfactorily addresses the range of conversational queries from humans. Such systems must be equipped with a range of technologies to address semantic, factual, task-oriented, and open-domain conversations using rule-based, semantic-web, traditional machine learning, and deep learning approaches. This raises two key challenges. First, the modules need to be managed and selected appropriately. Second, the complexity of troubleshooting such systems is high. We address these challenges with a mixed-initiative model that controls conversational logic through hierarchical classification. We also developed an interface to increase interpretability for operators and to aggregate module performance.
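A minimal sketch of hierarchical classification over dialog modules, assuming toy keyword classifiers (the classifiers and module names below are hypothetical stand-ins for the paper's rule-based, semantic-web, and neural components):

```python
def top_level(utterance: str) -> str:
    """Level 1: coarse conversation type (task / factual / open-domain)."""
    text = utterance.lower()
    if text.startswith(("set", "turn", "schedule")):
        return "task"
    if text.split()[0] in {"who", "what", "when", "where", "why", "how"}:
        return "factual"
    return "open_domain"

# Level 2: the module selected within each branch (illustrative stubs).
MODULES = {
    "task": lambda u: f"[rule-based task module] executing: {u}",
    "factual": lambda u: f"[semantic-web QA module] answering: {u}",
    "open_domain": lambda u: f"[neural chitchat module] replying to: {u}",
}

def route(utterance: str) -> str:
    branch = top_level(utterance)      # classify conversation type first
    return MODULES[branch](utterance)  # then select the module in that branch

for u in ["set an alarm for 7", "who wrote Hamlet", "I had a rough day"]:
    print(route(u))
```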
Recent developments in robotics and virtual reality (VR) are making embodied agents familiar, and the social behaviors of embodied conversational agents are essential to creating mindful daily lives with such agents. In particular, natural nonverbal behaviors, such as gaze and gesture movement, are required. We propose a novel method to create an agent with human-like gaze as a listener in multi-party conversation, using a Hidden Markov Model (HMM) to learn the behavior from real conversation examples. The model can generate gaze reactions according to users' gaze and utterances. We implemented an agent with the proposed method and created a VR environment in which to interact with it. The proposed agent reproduced several features of the gaze behavior in the example conversations. An impression survey showed that at least one group of participants felt that the proposed agent was similar to a human and better than conventional methods.
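A toy generative sketch of the idea, with hand-set gaze states and transition probabilities standing in for the parameters the paper learns from real conversations (conditioning on the user's gaze and utterances is omitted):

```python
import numpy as np

# Hidden gaze states of the listener agent; states and probabilities
# below are illustrative assumptions, not the learned model.
STATES = ["look_at_speaker", "look_at_other", "avert_gaze"]
# Transition matrix A[i][j] = P(next state j | current state i).
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def generate_gaze(steps: int, rng=np.random.default_rng(0)):
    """Sample a gaze-state sequence from the Markov chain, i.e. the
    generative half of the HMM once its parameters have been learned."""
    state = 0  # start by looking at the current speaker
    sequence = []
    for _ in range(steps):
        sequence.append(STATES[state])
        state = rng.choice(len(STATES), p=A[state])
    return sequence

print(generate_gaze(8))
```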
Building conversational agents is becoming easier thanks to the profusion of dedicated platforms. Integrating emotional intelligence into such agents contributes to positive user satisfaction. Currently, this integration is implemented using calls to an emotion analysis service. In this demonstration we present EHCTool, which aims to detect problematic conversation states, where emotions are likely to be expressed by the user, and notify the conversation designer about them. Using its exploration view, the tool assists the designer in managing and defining appropriate responses for these cases.
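EHCTool's internals are not described in the abstract; the sketch below only illustrates the general pattern of flagging conversation states via an emotion-analysis call, with a hypothetical lexicon scorer standing in for the actual service:

```python
# Hypothetical stand-in for an emotion-analysis service.
NEGATIVE = {"angry", "annoyed", "useless", "hate", "frustrated", "broken"}

def emotion_score(text: str) -> float:
    """Fraction of words carrying negative emotion (toy heuristic)."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in NEGATIVE for w in words) / max(len(words), 1)

# Conversation states paired with sample user utterances (illustrative data).
states = {
    "refund_denied": ["this is useless, I hate this bot"],
    "greeting": ["hi there"],
}

for state, samples in states.items():
    if max(map(emotion_score, samples)) > 0.2:  # assumed threshold
        print(f"flag '{state}': emotion likely; define an empathetic response")
```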