Biblio

Filters: Author is Gratch, Jonathan
2022-08-26
Chawla, Kushal, Clever, Rene, Ramirez, Jaysa, Lucas, Gale, Gratch, Jonathan.  2021.  Towards Emotion-Aware Agents For Negotiation Dialogues. 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII). :1–8.
Negotiation is a complex social interaction that encapsulates emotional encounters in human decision-making. Virtual agents that can negotiate with humans are useful in pedagogy and conversational AI. To advance the development of such agents, we explore the prediction of two important subjective goals in a negotiation – outcome satisfaction and partner perception. Specifically, we analyze the extent to which emotion attributes extracted from the negotiation help in the prediction, above and beyond individual difference variables. We focus on a recent dataset of chat-based negotiations, grounded in a realistic camping scenario. We study emotion at three levels of granularity – emoticons, lexical, and contextual – by leveraging affective lexicons and a state-of-the-art deep learning architecture. Our insights will be helpful in designing adaptive negotiation agents that interact through realistic communication interfaces.
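As an illustration of the three-tier emotion analysis this abstract describes, here is a minimal Python sketch (not the authors' code). The emoticon map and valence lexicon are toy stand-ins for real affective lexicons, and the contextual tier is indicated only in comments:

# Illustrative sketch: three tiers of emotion features from a chat utterance.
# EMOTICONS and VALENCE_LEXICON below are hypothetical toy resources.
import re

EMOTICONS = {":)": 1.0, ":(": -1.0, ":D": 1.0, ":/": -0.5}
VALENCE_LEXICON = {"happy": 0.9, "great": 0.8, "unfair": -0.7, "angry": -0.9}

def emoticon_score(utterance):
    # Tier 1: summed valence of emoticons appearing in the utterance.
    return sum(v for emo, v in EMOTICONS.items() if emo in utterance)

def lexical_score(utterance):
    # Tier 2: mean valence of words found in an affective lexicon.
    words = re.findall(r"[a-z']+", utterance.lower())
    hits = [VALENCE_LEXICON[w] for w in words if w in VALENCE_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Tier 3 (contextual) would come from a pretrained deep model, e.g. a
# sentiment pipeline from the Hugging Face transformers library:
#   from transformers import pipeline
#   contextual = pipeline("sentiment-analysis")(utterance)

if __name__ == "__main__":
    msg = "That split seems unfair :( but I'm happy to trade firewood"
    print(emoticon_score(msg), lexical_score(msg))  # -1.0 0.1

Such per-utterance scores could then feed a predictor of outcome satisfaction or partner perception alongside individual difference variables.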
2019-02-25
Lucas, Gale M., Krämer, Nicole, Peters, Clara, Taesch, Lisa-Sophie, Mell, Johnathan, Gratch, Jonathan.  2018.  Effects of Perceived Agency and Message Tone in Responding to a Virtual Personal Trainer. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :247-254.
Research has demonstrated promising benefits of applying virtual trainers to promote physical fitness. The current study investigated the value of virtual agents in the context of personal fitness, compared to trainers with greater levels of perceived agency (avatar or live human). We also explored the possibility that the effectiveness of the virtual trainer might depend on the affective tone it uses when trying to motivate users. Accordingly, participants received either positively or negatively valenced motivational messages from a virtual human they believed to be either an agent or an avatar, or they received the messages from a human instructor via Skype. Both self-report and physiological data were collected. Like in-person coaches, the live human trainer who used negatively valenced messages was well-regarded; however, when the agent or avatar used negatively valenced messages, participants responded more poorly than when it used positively valenced ones. Perceived agency also affected rapport: compared to the agent, users felt more rapport with the live human trainer or the avatar. Regardless of trainer type, they also felt more rapport - and said they put in more effort - with trainers that used positively valenced messages than with those that used negatively valenced ones. However, in reality, they put in more physical effort (as measured by heart rate) when trainers employed the more negatively valenced affective tone. We discuss implications for human–computer interaction.
Lucas, Gale M., Boberg, Jill, Traum, David, Artstein, Ron, Gratch, Jonathan, Gainer, Alesia, Johnson, Emmanuel, Leuski, Anton, Nakano, Mikio.  2018.  Culture, Errors, and Rapport-Building Dialogue in Social Agents. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :51-58.
This work explores whether culture impacts the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when agents make conversational errors. Our study uses an agent designed to persuade users to agree with its rankings on two tasks. Participants from the U.S. and Japan completed our study. We performed two manipulations: (1) the presence of conversational errors – the agent either exhibited errors in the second task or did not; (2) the presence of social dialogue – between the two tasks, users either engaged in a social dialogue with the agent or completed a control task. Replicating previous research, conversational errors reduced the agent's influence. However, we found that culture matters: there was a marginally significant three-way interaction among culture, presence of social dialogue, and presence of errors. The pattern of results suggests that, for American participants, social dialogue backfired when followed by errors, presumably because it extends the period of good performance and thereby creates a stronger contrast with the subsequent errors. For Japanese participants, however, social dialogue if anything mitigated the detrimental effect of errors; the negative effect of errors was seen only in the absence of social dialogue. Agent designers should therefore take the culture of the intended users into account when deciding whether to use social dialogue to bolster agents against conversational errors.
2017-05-16
Lucas, Gale, Stratou, Giota, Lieblich, Shari, Gratch, Jonathan.  2016.  Trust Me: Multimodal Signals of Trustworthiness. Proceedings of the 18th ACM International Conference on Multimodal Interaction. :5–12.
This paper builds on prior psychological studies that identify signals of trustworthiness between two human negotiators. Unlike prior work, the current work tracks such signals automatically and fuses them into computational models that predict trustworthiness. To achieve this goal, we apply automatic trackers to recordings of human dyads negotiating in a multi-issue bargaining task. We identify behavioral indicators in different modalities (facial expressions, gestures, gaze, and conversational features) that are predictive of trustworthiness. We predict both objective trustworthiness (i.e., whether a negotiator is honest) and perceived trustworthiness (i.e., whether they seem honest to their interaction partner). Our experiments show that people are poor judges of objective trustworthiness: objective and perceived trustworthiness are predicted by different indicators. Multimodal approaches better predict objective trustworthiness, whereas people rely too heavily on facial expressions when judging the honesty of their partner. Moreover, domain knowledge (from the literature and prior analysis of behaviors) facilitates the model development process.
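To make the multimodal fusion idea concrete, here is a minimal Python sketch (not the paper's model). The feature names are hypothetical placeholders for the kinds of cues described above, and the data and labels are synthetic:

# Illustrative sketch: early fusion of per-modality behavioral indicators
# into a single trustworthiness classifier. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # pretend we observed 120 negotiation dyads

# Columns stand in for tracked cues: [smile_rate, gesture_freq,
# gaze_aversion, speaking_turns] - hypothetical feature names.
X = rng.normal(size=(n, 4))
# Synthetic honesty labels driven by a made-up weighting of the cues.
y = ((X @ np.array([0.2, 0.8, -0.9, 0.5])
      + rng.normal(scale=0.5, size=n)) > 0).astype(int)

# Early fusion: concatenate all modality features and fit one model.
model = LogisticRegression()
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

Fitting separate models per modality and comparing them against the fused model would mirror the paper's finding that multimodal approaches predict objective trustworthiness better than any single channel.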