Biblio

Filters: Keyword is influence
Atkinson, Simon Reay, Walker, David, Beaulne, Kevin, Hossain, Liaquat.  2012.  Cyber – Transparencies, Assurance and Deterrence. 2012 International Conference on Cyber Security. :119–126.
Cyber has often been considered a coordination-and-control medium, as opposed to a medium of collaborative influence. This conceptual-design paper uniquely builds upon a number of entangled, cross-disciplinary research strands – integrating engineering and conflict studies – and a detailed literature review to propose a new paradigm of assurance and deterrence models. We consider an ontology for Cyber-sûréte, which combines both the social trusts necessary for [knowledge & information] assurance, such as collaboration by social influence (CSI), and the technological controls and rules for secure information management, referred to as coordination by rule and control (CRC). We posit Cyber-sûréte as enabling a 'safe-to-fail' ecology (in which learning, testing and adaptation can take place) within a fail-safe supervisory control and data acquisition (SCADA-type) system, e.g. in a nuclear power plant. Building upon traditional state-based threat analysis, we consider Warning Time and the Threat equation in relation to policies for managing Cyber-Deterrence. We examine how the goods of Cyber might be galvanised so as to encourage virtuous behaviour and deter and/or dissuade ne'er-do-wells through multiple transparencies. We consider how the Deterrence escalator may be managed by identifying both weak influence and strong control signals so as to create a more benign and responsive cyber-ecology, in which strengths can be exploited and weaknesses identified. Finally, we consider declaratory/mutual transparencies as opposed to legalistic/controlled transparency.
Lucas, Gale M., Boberg, Jill, Traum, David, Artstein, Ron, Gratch, Jonathan, Gainer, Alesia, Johnson, Emmanuel, Leuski, Anton, Nakano, Mikio.  2018.  Culture, Errors, and Rapport-Building Dialogue in Social Agents. Proceedings of the 18th International Conference on Intelligent Virtual Agents. :51-58.
This work explores whether culture impacts the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when agents make conversational errors. Our study uses an agent designed to persuade users to agree with its rankings on two tasks. Participants from the U.S. and Japan completed our study. We performed two manipulations: (1) the presence of conversational errors – the agent either exhibited errors in the second task or did not; (2) the presence of social dialogue – between the two tasks, users either engaged in a social dialogue with the agent or completed a control task. Replicating previous research, conversational errors reduced the agent's influence. However, we found that culture matters: there was a marginally significant three-way interaction between culture, presence of social dialogue, and presence of errors. The pattern of results suggests that, for American participants, social dialogue backfires if it is followed by errors, presumably because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. For Japanese participants, however, social dialogue if anything mitigates the detrimental effect of errors; the negative effect of errors is seen only in the absence of social dialogue. Agent design should therefore take the culture of the intended users into account when considering the use of social dialogue to bolster agents against conversational errors.