Biblio

Filters: Keyword is experiment
2020-02-10
Schneeberger, Tanja, Scholtes, Mirella, Hilpert, Bernhard, Langer, Markus, Gebhard, Patrick.  2019.  Can Social Agents elicit Shame as Humans do? 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :164–170.
This paper presents a study that examines whether social agents can elicit the social emotion shame as humans do. For that, we use job interviews, which are highly evaluative situations per se. We vary the interview style (shame-eliciting vs. neutral) and the job interviewer (human vs. social agent). Our dependent variables include observational data regarding the social signals of shame and shame regulation, as well as self-assessment questionnaires regarding felt uneasiness and discomfort in the situation. Our results indicate that social agents can elicit shame to the same extent as humans do. This gives insight into the impact of social agents on users and the emotional connection between them.
2018-02-27
Soleymani, Mohammad, Riegler, Michael, Halvorsen, Pål.  2017.  Multimodal Analysis of Image Search Intent: Intent Recognition in Image Search from User Behavior and Visual Content. Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. :251–259.

Users search for multimedia content with different underlying motivations or intentions. The study of user search intentions is an emerging topic in information retrieval, since understanding why a user is searching for content is crucial for satisfying the user's needs. In this paper, we aimed at automatically recognizing a user's intent for image search in the early stage of a search session. We designed seven different search scenarios under the intent conditions of finding items, re-finding items, and entertainment. We collected facial expressions, physiological responses, eye gaze, and implicit user interactions from 51 participants who performed seven different search tasks on a custom-built image retrieval platform. We analyzed the users' spontaneous and explicit reactions under different intent conditions. Finally, we trained machine learning models to predict users' search intentions from the visual content of the visited images, the user interactions, and the spontaneous responses. After fusing the visual and user interaction features, our system achieved an F1 score of 0.722 for classifying three classes in a user-independent cross-validation. We found that eye gaze and implicit user interactions, including mouse movements and keystrokes, are the most informative features. Given that the most promising results are obtained from modalities that can be captured unobtrusively and online, the results demonstrate the feasibility of deploying such methods for improving multimedia retrieval platforms.
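
For readers who want a concrete picture of the pipeline this abstract outlines, the following is a minimal sketch rather than the authors' implementation: it assumes scikit-learn, an RBF-kernel SVM, randomly generated placeholder matrices in place of the real visual-content and interaction features, macro-averaged F1 as the three-class score, and GroupKFold splits keyed on participant id to reproduce the user-independent cross-validation described above.

# Hypothetical sketch: early fusion of visual and interaction features with
# user-independent cross-validation. Feature shapes, the classifier, and the
# scoring choice are assumptions, not details taken from the paper.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_samples = 51 * 7                                      # 51 participants x 7 search tasks (assumed layout)
visual_feats = rng.normal(size=(n_samples, 128))        # placeholder visual-content features
interaction_feats = rng.normal(size=(n_samples, 32))    # placeholder gaze/mouse/keystroke features
labels = rng.integers(0, 3, size=n_samples)             # three intent classes
participants = np.repeat(np.arange(51), 7)              # group id per sample

# Early fusion: concatenate the two feature blocks into one vector per sample.
fused = np.hstack([visual_feats, interaction_feats])

# User-independent evaluation: no participant appears in both train and test folds.
cv = GroupKFold(n_splits=5)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, fused, labels, groups=participants,
                         scoring="f1_macro", cv=cv)
print("macro F1 per fold:", scores.round(3))

The design choice the sketch tries to reflect is that folds never mix samples from the same participant between training and test, which is what makes the reported score user-independent rather than user-specific.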