A Review of Evaluation Techniques for Social Dialogue Systems
Title | A Review of Evaluation Techniques for Social Dialogue Systems |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Curry, Amanda Cercas, Hastie, Helen, Rieser, Verena |
Conference Name | Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-5558-2 |
Keywords | automatic evaluation, conversational agents, evaluation metrics, human behavior, human factors, pubcrawl, scalability, social agents, social dialogue systems |
Abstract | In contrast with goal-oriented dialogue, social dialogue has no clear measure of task success. Consequently, evaluating these systems is notoriously hard. In this paper, we review current evaluation methods, focusing on automatic metrics. We conclude that turn-based metrics often ignore the context and do not account for the fact that several replies are valid, while end-of-dialogue rewards are mainly hand-crafted. Both lack grounding in human perceptions. |
URL | https://dl.acm.org/doi/10.1145/3139491.3139504 |
DOI | 10.1145/3139491.3139504 |
Citation Key | curry_review_2017 |