Biblio

Filters: Keyword is conversational agents
2021-09-07
Kuchlous, Sahil, Kadaba, Madhura.  2020.  Short Text Intent Classification for Conversational Agents. 2020 IEEE 17th India Council International Conference (INDICON). :1–4.
Intent classification is an important and relevant area of research in artificial intelligence and machine learning, with applications ranging from marketing and product design to intelligent communication. This paper explores the performance of various models and techniques for short text intent classification in the context of chatbots. The problem was explored for use within the mental wellness and therapy chatbot application, Wysa, to give improved responses to free-text user input. The authors looked at classifying text samples into four categories: assertions, refutations, clarifiers and transitions. For this, the suitability of the following techniques was evaluated: count vectors, TF-IDF, sentence embeddings and n-grams, as well as modifications of the same. Each technique was used to train a number of state-of-the-art classifiers, and the results have been compiled and presented. This is the first documented implementation of Arora's modification to sentence embeddings for real-world use. It also introduces a technique to generate custom stop words that gave a significant gain in performance (10 percentage points). The best pipeline, using these techniques together, gave an accuracy of 95 percent.
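As a rough illustration of the count-vector approach evaluated in the abstract, the self-contained sketch below classifies short texts into the four intent categories using a nearest-centroid classifier over word counts. The training phrases and the classifier choice are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the authors' exact pipeline): count-vector
# features with a nearest-centroid classifier for the four intent classes.
import math
from collections import Counter

# Tiny hypothetical training corpus; the real work used Wysa user inputs.
TRAIN = [
    ("yes that is true", "assertion"),
    ("exactly right", "assertion"),
    ("no that is wrong", "refutation"),
    ("not at all", "refutation"),
    ("what do you mean", "clarifier"),
    ("can you explain that", "clarifier"),
    ("ok let us move on", "transition"),
    ("anyway next topic", "transition"),
]

def vectorize(text):
    """Bag-of-words count vector for a short text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One centroid (summed count vector) per intent class.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(vectorize(text))

def classify(text):
    vec = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(vec, centroids[lbl]))

print(classify("yes exactly true"))  # assertion
```

A real system would swap the count vectors for TF-IDF or sentence embeddings and the centroid rule for a trained classifier, as the paper's comparison does.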
Lessio, Nadine, Morris, Alexis.  2020.  Toward Design Archetypes for Conversational Agent Personality. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3221–3228.
Conversational agents (CAs), often referred to as chatbots, are being widely deployed within existing commercial frameworks and online service websites. As society moves further into incorporating data-rich systems, like the Internet of Things (IoT), into daily life, it is expected that conversational agents will take on an increasingly important role in helping users manage these complex systems. In this, the concept of personality is becoming increasingly important as we seek more human-friendly ways to interact with these CAs. In this work, a conceptual framework is proposed that considers how existing standard psychological and persona models could be mapped to different kinds of CA functionality beyond strictly dialogue. As CAs become more diverse in their abilities, and more integrated with different kinds of systems, it is important to consider how function can be impacted by the design of agent personality, whether intentionally designed or not. Based on this framework, derived archetype classes of CAs are presented as starting points that can aid designers, developers, and the curious in thinking about how to work toward better CA personality development.
Kuttal, Sandeep Kaur, Myers, Jarow, Gurka, Sam, Magar, David, Piorkowski, David, Bellamy, Rachel.  2020.  Towards Designing Conversational Agents for Pair Programming: Accounting for Creativity Strategies and Conversational Styles. 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). :1–11.
Established research on pair programming reveals benefits, including increasing communication, creativity, self-efficacy, and promoting gender inclusivity. However, research has reported limitations such as finding a compatible partner, scheduling sessions between partners, and resistance to pairing. Further, pairings can be affected by predispositions to negative stereotypes. These problems can be addressed by replacing one human member of the pair with a conversational agent. To investigate the design space of such a conversational agent, we conducted a controlled remote pair programming study. Our analysis found various creative problem-solving strategies and differences in conversational styles. We further analyzed the transferable strategies from human-human collaboration to human-agent collaboration by conducting a Wizard of Oz study. The findings from the two studies helped us gain insights regarding design of a programmer conversational agent. We make recommendations for researchers and practitioners for designing pair programming conversational agent tools.
Simud, Thikamporn, Ruengittinun, Somchoke, Surasvadi, Navaporn, Sanglerdsinlapachai, Nuttapong, Plangprasopchok, Anon.  2020.  A Conversational Agent for Database Query: A Use Case for Thai People Map and Analytics Platform. 2020 15th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP). :1–6.
Since 2018, the Thai People Map and Analytics Platform (TPMAP) has been developed with the aim of supporting government officials and policy makers with integrated household and community data to analyze strategic plans and implement policies and decisions to alleviate poverty. However, to acquire complex information from the platform, non-technical users with no database background have to ask a programmer or a data scientist to query data for them. Such a process is time-consuming and might result in inaccurate information being retrieved due to miscommunication between non-technical and technical users. In this paper, we have developed a Thai conversational agent on top of TPMAP to support self-service data analytics on complex queries. Users can simply use natural language to fetch information from our chatbot, and the query results are presented to users in easy-to-use formats such as statistics and charts. The proposed conversational agent retrieves and transforms natural language queries into query representations with relevant entities, query intentions, and output formats. We employ Rasa, an open-source conversational AI engine, for agent development. The results show that our system yields an F1-score of 0.9747 for intent classification and 0.7163 for entity extraction. The obtained intents and entities are then used to query target information from a graph database. Finally, our system achieves end-to-end performance with accuracies ranging from 57.5% to 80.0%, depending on query message complexity. The generated answers are then returned to users through a messaging channel.
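To illustrate the stage described after NLU, the hedged sketch below maps an extracted intent and its entities onto a parameterized graph-database query. The intent name, entity key, and Cypher-like template are all invented for illustration; the actual TPMAP schema is not given in the abstract.

```python
# Hypothetical sketch of the stage after NLU: turning an intent plus
# extracted entities into a parameterized graph query. All names invented.
QUERY_TEMPLATES = {
    "count_poor_households": (
        "MATCH (h:Household {province: $province}) "
        "WHERE h.poverty_flag = true RETURN count(h)"
    ),
}

def build_query(nlu_result):
    """Select a query template by intent and bind entity values as params."""
    template = QUERY_TEMPLATES[nlu_result["intent"]]
    params = {e["entity"]: e["value"] for e in nlu_result["entities"]}
    return template, params

# Example NLU output, shaped loosely like Rasa's intent/entity results.
query, params = build_query({
    "intent": "count_poor_households",
    "entities": [{"entity": "province", "value": "Chiang Mai"}],
})
print(params["province"])  # Chiang Mai
```

Keeping the templates parameterized, rather than splicing user text into the query string, is what lets a single intent cover many concrete questions.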
Choi, Ho-Jin, Lee, Young-Jun.  2020.  Deep Learning Based Response Generation using Emotion Feature Extraction. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :255–262.
Neural response generation aims to generate a human-like response to a human utterance using deep learning. Previous studies have shown that expressing emotion in response generation improves user performance, user engagement, and user satisfaction, and that such conversational agents can communicate with users at the human level. However, previous emotional response generation models cannot capture the subtle parts of emotions, because they use the desired emotion of the response in token form. Moreover, such a model struggles to generate natural responses related to the input utterance at the content level, since the information of the input utterance can be biased toward the emotion token. To overcome these limitations, we propose an emotional response generation model which generates emotional and natural responses using emotion feature extraction. Our model consists of two parts: an extraction part and a generation part. The extraction part extracts the emotion of the input utterance in vector form using a pre-trained LSTM-based classification model. The generation part generates an emotional and natural response to the input utterance by combining the emotion vector from the extraction part with the thought vector from the encoder. We evaluate our model on the emotion-labeled dialogue dataset DailyDialog, through both quantitative and qualitative analysis: emotion classification, response generation modeling, and a comparative study. In general, experiments show that the proposed model can generate emotional and natural responses.
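The core conditioning idea, replacing a discrete emotion token with a continuous emotion vector combined with the encoder's thought vector, can be sketched as follows. All names and dimensions are hypothetical, and the emotion classifier here is a dummy stand-in for the paper's pre-trained LSTM.

```python
# Minimal sketch of the abstract's conditioning scheme (names and
# dimensions are hypothetical, not the authors' implementation).
def emotion_vector(utterance, emotions=("joy", "sadness", "anger")):
    """Stand-in for the pre-trained LSTM classifier: returns a soft
    distribution over emotion classes (here a uniform dummy)."""
    p = 1.0 / len(emotions)
    return [p] * len(emotions)

def decoder_input(thought_vec, emo_vec):
    """Conditioning signal fed to the response decoder: the encoder's
    thought vector concatenated with the continuous emotion vector."""
    return thought_vec + emo_vec

combined = decoder_input([0.1, -0.2, 0.3], emotion_vector("I passed my exam!"))
print(len(combined))  # 6
```

Because the emotion enters as a soft vector rather than a single token, graded or mixed emotions can, in principle, shape the response without crowding out the content of the input utterance.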
Ahmed, Faruk, Mahmud, Md Sultan, Yeasin, Mohammed.  2020.  Assistive System for Navigating Complex Realistic Simulated World Using Reinforcement Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Finding a free path without obstacles, or a situation that poses minimal risk, is critical for safe navigation. Both sighted people and people who are blind or visually impaired require navigation safety while walking on a sidewalk. In this paper we develop assistive navigation on a sidewalk by integrating sensory inputs using reinforcement learning. We train the reinforcement learning model in a simulated robotic environment, where it is used to avoid sidewalk obstacles. A conversational agent is built by training on real conversation data. The reinforcement learning model, combined with the conversational agent, improved the obstacle avoidance experience by about 2.5% over the base case of 78.75%.
2020-07-16
McNeely-White, David G., Ortega, Francisco R., Beveridge, J. Ross, Draper, Bruce A., Bangar, Rahul, Patil, Dhruva, Pustejovsky, James, Krishnaswamy, Nikhil, Rim, Kyeongmin, Ruiz, Jaime et al.  2019.  User-Aware Shared Perception for Embodied Agents. 2019 IEEE International Conference on Humanized Computing and Communication (HCC). :46–51.

We present Diana, an embodied agent who is aware of her own virtual space and the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze and (soon) facial expressions as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric and shared perception, in the sense that Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than a program.

Velmovitsky, Pedro Elkind, Viana, Marx, Cirilo, Elder, Milidiu, Ruy Luiz, Pelegrini Morita, Plinio, Lucena, Carlos José Pereira de.  2019.  Promoting Reusability and Extensibility in the Engineering of Domain-Specific Conversational Systems. 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). :473—478.

Conversational systems are computer programs that interact with users using natural language. Considering the complexity and interaction of the different components involved in building intelligent conversational systems that can perform diverse tasks, a promising approach to facilitate their development is to use multiagent systems (MAS). This paper reviews the main concepts and history of conversational systems and introduces an architecture based on MAS. This architecture was designed to support the development of conversational systems in the domain chosen by the developer while also providing reusable built-in dialogue control. We present a practical application in the healthcare domain. We observed that the architecture can help developers create conversational systems in different domains while providing reusable and centralized dialogue control. We also present lessons learned that can help steer future research on engineering domain-specific conversational systems.

Biancardi, Beatrice, Wang, Chen, Mancini, Maurizio, Cafaro, Angelo, Chanel, Guillaume, Pelachaud, Catherine.  2019.  A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :1—7.

This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can impact users' perception of the agent and their willingness to continue the interaction with the agent. The model aims at detecting the user's impression of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. The user's impressions are recognized using a machine learning approach with facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time, with a reinforcement learning algorithm that takes the user's impressions as a reward to select the most appropriate combination of verbal and nonverbal behaviours to perform. A user study to test the model in a contextualized interaction with users is also presented. Our hypotheses are that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when the agent does not adapt its behaviour to users' reactions (i.e., when it randomly selects its behaviours). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori beliefs about virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.

Ciupe, Aurelia, Mititica, Doru Florin, Meza, Serban, Orza, Bogdan.  2019.  Learning Agile with Intelligent Conversational Agents. 2019 IEEE Global Engineering Education Conference (EDUCON). :1100—1107.

Conversational agents assist traditional teaching-learning instruments in proposing new designs for knowledge creation and learning analysis across organizational environments. Means of building a common educational background in both industry and academia become of interest for ensuring educational effectiveness and consistency. Such a context requires transferable practices and becomes the basis for Agile adoption in Higher Education, at both the curriculum and operational levels. The current work proposes a model for delivering Agile Scrum training through an assistive web-based conversational service, where analytics are collected to provide an overview of learners' knowledge paths. Besides its specific applicability in the Software Engineering (SE) industry, the model is intended to assist the academic SE curriculum. A user-acceptance test has been carried out among 200 undergraduate students, and patterns of interaction have been depicted for two conversational strategies.

Yousef, Muhammad, Torad, Mohamed A..  2019.  A Treatise On Conversational AI Agents: Learning From Humans’ Behaviour As A Design Outlook. 2019 International Conference on Electrical and Computing Technologies and Applications (ICECTA). :1—4.

Engineering a successful conversational AI agent is a tough process that requires effective communication between its various endpoints. In this paper, we present our perspective on designing an efficient conversational agent, guided by our belief that a centralized learning module, capable of analyzing and understanding humans' behaviour from day one and acting upon that behaviour, is a must.

Pérez-Soler, Sara, Guerra, Esther, de Lara, Juan.  2019.  Flexible Modelling using Conversational Agents. 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). :478—482.

The advances in natural language processing and the wide use of social networks have boosted the proliferation of chatbots. These are software services typically embedded within a social network, which can be addressed through conversation in natural language. Many chatbots exist for different purposes, e.g., to book all kinds of services, to automate software engineering tasks, or for customer support. In previous work, we proposed the use of chatbots for domain-specific modelling within social networks. In this short paper, we report on the needs for flexible modelling that arise when modelling through conversation. In particular, we propose a process of meta-model relaxation to make modelling more flexible, followed by correction steps to make the model conform to its meta-model. The paper shows how this process is integrated within our conversational modelling framework and illustrates the approach with an example.

2019-12-16
McDermott, Christopher D., Jeannelle, Bastien, Isaacs, John P..  2019.  Towards a Conversational Agent for Threat Detection in the Internet of Things. 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

A conversational agent to detect anomalous traffic in consumer IoT networks is presented. The agent accepts two inputs in the form of user speech received by Amazon Alexa enabled devices, and classified IDS logs stored in a DynamoDB table. Aural analysis is used to query the database of network traffic and respond accordingly. In doing so, this paper presents a solution to the problem of making consumers situationally aware when their IoT devices are infected and anomalous traffic has been detected. The proposed conversational agent addresses the issue of how to present network information to non-technical users for better comprehension, and improves awareness of threats derived from the Mirai botnet malware.
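A hedged sketch of the kind of response generation such an agent might perform: summarizing classified IDS log entries into a spoken answer suitable for a non-technical user. Table layout and field names are invented; the paper's actual DynamoDB schema is not given in the abstract.

```python
# Illustrative sketch (field names hypothetical): turning classified IDS
# log records into a plain-language summary the voice agent could speak.
IDS_LOGS = [
    {"device": "smart-camera", "classification": "mirai-scan", "count": 12},
    {"device": "smart-plug", "classification": "benign", "count": 340},
]

def summarize_threats(logs):
    """Filter out benign traffic and phrase the rest for a non-expert."""
    threats = [l for l in logs if l["classification"] != "benign"]
    if not threats:
        return "No anomalous traffic detected on your network."
    parts = [f"{l['device']} ({l['count']} {l['classification']} events)"
             for l in threats]
    return "Anomalous traffic detected from: " + ", ".join(parts) + "."

print(summarize_threats(IDS_LOGS))
```

The design choice the paper motivates is exactly this translation step: raw IDS classifications stay in the database, and only a comprehension-friendly summary reaches the user's ear.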

DiPaola, Steve, Yalçin, Özge Nilay.  2019.  A multi-layer artificial intelligence and sensing based affective conversational embodied agent. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :91–92.

Building natural and conversational virtual humans is a task of formidable complexity. We believe that, especially when building agents that affectively interact with biological humans in real time, a cognitive-science-based, multilayered sensing and artificial intelligence (AI) systems approach is needed. For this demo, we show a working version (through human interaction with it) of our modular system: a natural, conversational 3D virtual human built from AI and sensing layers. These include sensing the human user via facial emotion recognition, voice stress, the semantic meaning of their words, eye gaze, heart rate, and galvanic skin response. These inputs are combined with AI sensing and recognition of the environment using deep-learning natural language captioning or dense captioning. All of these are processed by our AI avatar system, enabling an affective and empathetic conversation through an NLP topic-based dialogue that uses facial expressions, gestures, breath, eye gaze, and voice in two-way, back-and-forth conversation with the sensed human. Our lab has been building these systems in stages over the years.

Lopes, José, Robb, David A., Ahmad, Muneeb, Liu, Xingkun, Lohan, Katrin, Hastie, Helen.  2019.  Towards a Conversational Agent for Remote Robot-Human Teaming. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :548–549.

There are many challenges when it comes to deploying robots remotely including lack of operator situation awareness and decreased trust. Here, we present a conversational agent embodied in a Furhat robot that can help with the deployment of such remote robots by facilitating teaming with varying levels of operator control.

Park, Chan Mi, Lee, Jung Yeon, Baek, Hyoung Woo, Lee, Hae-Sung, Lee, JeeHang, Kim, Jinwoo.  2019.  Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :646–647.
Humans' direct supervision of a robot's erroneous behavior is crucial to enhancing robot intelligence for 'flawless' human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate such strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual, and social competence aspects of human developmental stages. We implemented the interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means of supervising robots through a real-time data labeling process for long-term usage in human-robot interaction.
Sannon, Shruti, Stoll, Brett, DiFranzo, Dominic, Jung, Malte, Bazarova, Natalya N..  2018.  How Personification and Interactivity Influence Stress-Related Disclosures to Conversational Agents. Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing. :285–288.
In this exploratory study, we examine how personification and interactivity may influence people's disclosures around sensitive topics, such as psychological stressors. Participants (N=441) shared a recent stressful experience with one of three agent interfaces: 1) a non-interactive, non-personified survey, 2) an interactive, non-personified chatbot, and 3) an interactive, personified chatbot. We coded these responses to examine how agent type influenced the nature of the stressor disclosed, and the intimacy and amount of disclosure. Participants discussed fewer homelife related stressors, but more finance-related stressors and more chronic stressors overall with the personified chatbot than the other two agents. The personified chatbot was also twice as likely as the other agents to receive disclosures that contained very little detail. We discuss the role played by personification and interactivity in interactions with conversational agents, and implications for design.
Alam, Mehreen.  2018.  Neural Encoder-Decoder based Urdu Conversational Agent. 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :901–905.
Conversational agents have very much become part of our lives since the renaissance of neural-network-based "neural conversational agents". The manually annotated and rule-based methods used previously lacked the scalability and generalization capabilities of neural conversational agents. A neural conversational agent has two parts: at one end, an encoder understands the question, while at the other end, a decoder prepares and outputs the corresponding answer. Both parts are typically designed using recurrent neural networks and their variants, and trained in an end-to-end fashion. Although conversational agents have been developed for other languages, Urdu has seen very little progress in the building of conversational agents; in particular, recent state-of-the-art neural techniques have not yet been explored. In this paper, we design an attention-driven, deep encoder-decoder-based neural conversational agent for Urdu. Overall, we make the following contributions: we (i) create a dataset of 5000 question-answer pairs, and (ii) present a new deep encoder-decoder-based conversational agent for Urdu. For our work, we limit the knowledge base of our agent to general knowledge regarding Pakistan. Our best model has a BLEU score of 58 and gives syntactically and semantically correct answers in the majority of cases.
Fast, Ethan, Chen, Binbin, Mendelsohn, Julia, Bassen, Jonathan, Bernstein, Michael S..  2018.  Iris: A Conversational Agent for Complex Tasks. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. :473:1–473:12.
Today, most conversational agents are limited to simple tasks supported by standalone commands, such as getting directions or scheduling an appointment. To support more complex tasks, agents must be able to generalize from and combine the commands they already understand. This paper presents a new approach to designing conversational agents inspired by linguistic theory, where agents can execute complex requests interactively by combining commands through nested conversations. We demonstrate this approach in Iris, an agent that can perform open-ended data science tasks such as lexical analysis and predictive modeling. To power Iris, we have created a domain-specific language that transforms Python functions into combinable automata and regulates their combinations through a type system. Running a user study to examine the strengths and limitations of our approach, we find that data scientists completed a modeling task 2.6 times faster with Iris than with Jupyter Notebook.
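The paper's idea of combining commands through a type system can be sketched roughly as follows: each command declares argument and return types, and nesting one command inside another is accepted only when the types line up. The command names and toy type system below are illustrative, not Iris's actual DSL.

```python
# Hedged sketch of type-checked command composition in the spirit of Iris
# (command names and the string-based "types" are invented for illustration).
class Command:
    def __init__(self, name, arg_type, ret_type, fn):
        self.name, self.arg_type = name, arg_type
        self.ret_type, self.fn = ret_type, fn

    def run(self, arg):
        return self.fn(arg)

def compose(outer, inner):
    """Nest `inner` inside `outer`, rejecting type-incompatible pairs."""
    if inner.ret_type != outer.arg_type:
        raise TypeError(f"{inner.name} returns {inner.ret_type}, "
                        f"but {outer.name} expects {outer.arg_type}")
    return Command(f"{outer.name}({inner.name})", inner.arg_type,
                   outer.ret_type, lambda x: outer.run(inner.run(x)))

tokenize = Command("tokenize", "text", "tokens", lambda s: s.split())
count = Command("count", "tokens", "int", len)

word_count = compose(count, tokenize)  # tokens flow into count: accepted
print(word_count.run("a conversational agent"))  # 3
```

The type check is what lets an agent safely offer "nested conversations": any command whose output type matches the slot being filled is a valid candidate, and everything else can be ruled out before execution.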
2018-11-28
Porcheron, Martin, Fischer, Joel E., McGregor, Moira, Brown, Barry, Luger, Ewa, Candello, Heloisa, O'Hara, Kenton.  2017.  Talking with Conversational Agents in Collaborative Action. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. :431–436.

This one-day workshop intends to bring together both academics and industry practitioners to explore collaborative challenges in speech interaction. Recent improvements in speech recognition and computing power have led to conversational interfaces being introduced to many of the devices we use every day, such as smartphones, watches, and even televisions. These interfaces allow us to get things done, often by just speaking commands, relying on a reasonably well-understood single-user model. While research on speech recognition is well established, the social implications of these interfaces remain underexplored, such as how we socialise, work, and play around such technologies, and how these might be better designed to support collaborative collocated talk-in-action. Moreover, the advent of new products such as the Amazon Echo and Google Home, which are positioned as supporting multi-user interaction in collocated environments such as the home, makes exploring the social and collaborative challenges around these products a timely topic. In the workshop, we will review current practices and reflect upon prior work on studying talk-in-action and collocated interaction. We wish to begin a dialogue that takes on the renewed interest in research on spoken interaction with devices, grounded in the existing practices of the CSCW community.

Hoshida, Masahiro, Tamura, Masahiko, Hayashi, Yugo.  2017.  Lexical Entrainment Toward Conversational Agents: An Experimental Study on Top-down Processing and Bottom-up Processing. Proceedings of the 5th International Conference on Human Agent Interaction. :189–194.

The purpose of this paper is to examine the influence of lexical entrainment while communicating with a conversational agent. We consider two types of cognitive information processing: top-down processing, which depends on prior knowledge, and bottom-up processing, which depends on one's partner's behavior. The two work in a mutually complementary way in interpersonal cognition. It was hypothesized that people would separate the two methods of processing depending on the agent's behavior. We designed a word choice task in which participants and the agent alternately described and selected pictures, and we controlled two factors: first, the expectation about the agent's intelligence, set by the experimenter's instructions, as top-down processing; second, the agent's behavior, manipulating the degree of intellectual impression, as bottom-up processing. The results show that people select words differently because of the diversity of expressed behavior, thus supporting our hypothesis. The findings obtained in this study could lead to new guidelines for a human-to-agent language interface.

Suzanna, Sia Xin Yun, Anthony, Li Lianjie.  2017.  Hierarchical Module Classification in Mixed-Initiative Conversational Agent System. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. :2535–2538.

Our operational context is a task-oriented dialog system where no single module satisfactorily addresses the range of conversational queries from humans. Such systems must be equipped with a range of technologies to address semantic, factual, task-oriented, open domain conversations using rule-based, semantic-web, traditional machine learning and deep learning. This raises two key challenges. First, the modules need to be managed and selected appropriately. Second, the complexity of troubleshooting on such systems is high. We address these challenges with a mixed-initiative model that controls conversational logic through hierarchical classification. We also developed an interface to increase interpretability for operators and to aggregate module performance.

Zou, Shuai, Kuzushima, Kento, Mitake, Hironori, Hasegawa, Shoichi.  2017.  Conversational Agent Learning Natural Gaze and Motion of Multi-Party Conversation from Example. Proceedings of the 5th International Conference on Human Agent Interaction. :405–409.

Recent developments in robotics and virtual reality (VR) are making embodied agents familiar, and the social behaviors of embodied conversational agents are essential to creating mindful daily lives with conversational agents. In particular, natural nonverbal behaviors, such as gaze and gesture movement, are required. We propose a novel method to create an agent with human-like gaze as a listener in multi-party conversation, using a Hidden Markov Model (HMM) to learn the behavior from real conversation examples. The model can generate gaze reactions according to users' gaze and utterances. We implemented an agent with the proposed method and created a VR environment in which to interact with the agent. The proposed agent reproduced several features of gaze behavior found in the example conversations. Results of an impression survey showed that there is at least one group of users who felt the proposed agent was human-like and better than conventional methods.
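A toy sketch of the modeling idea, reduced from an HMM to a plain Markov chain over gaze targets: transition probabilities of the kind that could be estimated from example conversations are used to sample the listener agent's next gaze. The states and all probability values below are invented for illustration.

```python
# Toy sketch (states and probabilities invented): a Markov chain over
# listener gaze targets, sampled to produce a gaze reaction sequence.
import random

STATES = ["speaker", "other_listener", "averted"]

# Row-stochastic transition matrix, as might be estimated from examples.
TRANS = {
    "speaker":        {"speaker": 0.7, "other_listener": 0.2, "averted": 0.1},
    "other_listener": {"speaker": 0.5, "other_listener": 0.3, "averted": 0.2},
    "averted":        {"speaker": 0.6, "other_listener": 0.2, "averted": 0.2},
}

def next_gaze(state, rng=random):
    """Sample the next gaze target from the current state's transition row."""
    targets, probs = zip(*TRANS[state].items())
    return rng.choices(targets, weights=probs)[0]

random.seed(0)  # reproducible demo run
sequence = ["speaker"]
for _ in range(5):
    sequence.append(next_gaze(sequence[-1]))
print(sequence)
```

The full HMM of the paper additionally conditions on observations (the users' gaze and utterances), so the hidden gaze state is inferred and generated jointly rather than sampled blindly as here.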

Ghelani, Nimesh, Mohammed, Salman, Wang, Shine, Lin, Jimmy.  2017.  Event Detection on Curated Tweet Streams. Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. :1325–1328.

We present a system for identifying interesting social media posts on Twitter and delivering them to users' mobile devices in real time as push notifications. In our problem formulation, users are interested in broad topics such as politics, sports, and entertainment: our system processes tweets in real time to identify relevant, novel, and salient content. There are three interesting aspects to our work: First, instead of attempting to tame the cacophony of unfiltered tweets, we exploit a smaller, but still sizeable, collection of curated tweet streams corresponding to the Twitter accounts of different media outlets. Second, we apply distant supervision to extract topic labels from curated streams that have a specific focus, which can then be leveraged to build high-quality topic classifiers essentially "for free". Finally, our system delivers content via Twitter direct messages, supporting in situ interactions modeled after conversations with intelligent agents. These ideas are demonstrated in an end-to-end working prototype.
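The distant-supervision step described above can be sketched simply: tweets published by a topically focused curated account inherit that account's topic as a (noisy) training label, yielding labeled classifier data "for free". The account names below are hypothetical.

```python
# Sketch of distant supervision from curated streams: each curated account
# has a known topical focus, which labels every tweet it publishes.
# Account names are invented for illustration.
ACCOUNT_TOPICS = {
    "ExampleSportsDesk": "sports",
    "ExamplePoliticsDesk": "politics",
}

def label_tweets(tweets):
    """Attach the curating account's topic to each tweet it published;
    tweets from non-curated accounts are dropped."""
    return [(t["text"], ACCOUNT_TOPICS[t["account"]])
            for t in tweets if t["account"] in ACCOUNT_TOPICS]

data = label_tweets([
    {"account": "ExampleSportsDesk", "text": "Final score: 2-1"},
    {"account": "ExamplePoliticsDesk", "text": "New bill passes"},
    {"account": "random_user", "text": "hello world"},
])
print(data[0][1])  # sports
```

The resulting (text, topic) pairs would then train the high-quality topic classifiers the abstract mentions, with the label noise bounded by how topically focused each curated stream really is.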

Vaziri, Mandana, Mandel, Louis, Shinnar, Avraham, Siméon, Jérôme, Hirzel, Martin.  2017.  Generating Chat Bots from Web API Specifications. Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. :44–57.

Companies want to offer chat bots to their customers and employees that can answer questions, enable self-service, and showcase their products and services. Implementing and maintaining chat bots by hand costs time and money. Companies typically have web APIs for their services, which are often documented with an API specification. This paper presents a compiler that takes a web API specification written in Swagger and automatically generates a chat bot that helps the user make API calls. The generated bot is self-documenting, using descriptions from the API specification to answer help requests. Unfortunately, Swagger specifications are not always good enough to generate high-quality chat bots. This paper addresses this problem via a novel in-dialogue curation approach: the power user can improve the generated chat bot by interacting with it. The result is then saved back as an API specification. This paper reports on the design and implementation of the chat bot compiler, the in-dialogue curation, and working case studies.
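The compiler's core step can be sketched roughly as follows: for each operation in a Swagger/OpenAPI-style spec, emit a bot "command" whose help text comes from the spec's description and whose required parameters the bot must elicit from the user in dialogue. The spec fragment and field handling below are simplified and hypothetical, not the paper's actual compiler.

```python
# Hedged sketch of spec-to-bot compilation (simplified OpenAPI-like spec;
# the real compiler handles far more of Swagger than shown here).
SPEC = {
    "paths": {
        "/weather": {
            "get": {
                "summary": "Get the current weather for a city",
                "parameters": [{"name": "city", "required": True}],
            }
        }
    }
}

def generate_commands(spec):
    """One bot command per (method, path) operation: a help string drawn
    from the spec, plus the required parameters to ask the user for."""
    commands = {}
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            commands[f"{method} {path}"] = {
                "help": op.get("summary", "(no description)"),
                "ask_for": [p["name"] for p in op.get("parameters", [])
                            if p.get("required")],
            }
    return commands

bot = generate_commands(SPEC)
print(bot["get /weather"]["ask_for"])  # ['city']
```

This also illustrates why the paper's in-dialogue curation matters: when `summary` or parameter descriptions are missing or poor, the generated help strings are too, and the only fix is to improve the spec, here by talking to the bot itself.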