Biblio
Filters: Keyword is Artificial agents
2019. Deliberative and Affective Reasoning: A Bayesian Dual-Process Model. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :388–394.
The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving toward a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents to reason about affect is a key stepping stone toward a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members and is instrumental in solidifying collaborations as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision-theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual-process theories and advances BayesAct as a plausible, computationally tractable model grounded in social-psychological and sociological theory.
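The trade-off the abstract describes can be pictured with a minimal sketch. This is not the paper's implementation (BayesAct is formulated as a POMDP); the blending rule, function names, and the toy numbers below are all illustrative assumptions. The sketch scores candidate actions by a mix of affective alignment (low deflection) and decision-theoretic value, shifting weight toward alignment as the situation becomes less predictable:

```python
def choose_action(actions, deflection, expected_utility, uncertainty):
    """Hypothetical BayesAct-style trade-off (a sketch, not the paper's rule).

    deflection(a): distance between the affective impression an action creates
        and culturally shared sentiments; lower means better alignment.
    expected_utility(a): decision-theoretic value of the action.
    uncertainty: in [0, 1]; higher shifts weight toward affective alignment.
    """
    w = min(max(uncertainty, 0.0), 1.0)  # clamp the assumed blending weight

    def score(action):
        # Negate deflection so that better alignment raises the score.
        return w * -deflection(action) + (1.0 - w) * expected_utility(action)

    return max(actions, key=score)


# Toy usage with hand-picked, hypothetical numbers:
actions = ["comfort", "negotiate"]
best = choose_action(
    actions,
    deflection={"comfort": 0.5, "negotiate": 2.0}.get,
    expected_utility={"comfort": 0.2, "negotiate": 1.0}.get,
    uncertainty=0.8,  # highly unpredictable: affective alignment dominates
)
print(best)  # -> "comfort"
```

With high uncertainty the affectively aligned action wins even though the other action has higher expected utility; with low uncertainty the ranking flips, which is the qualitative behavior the abstract attributes to BayesAct agents.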
2017. How Do Artificial Agents Think? Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents. :1–1.
Anthropomorphic artificial agents, whether computed characters or humanoid robots, can be used to investigate human cognition. They are intrinsically ambivalent: they appear and act like humans, and hence we tend to consider them human, yet we know they are machines designed by humans, and so should not consider them human. A review of behavioral and neurophysiological studies provides insight into social mechanisms that are primarily influenced by the appearance of the agent, in particular its resemblance to humans, and other mechanisms that are influenced by our knowledge of the agent's artificial nature. A significant finding is that, as expected, humans do not naturally adopt an intentional stance when interacting with artificial agents.