Biblio
Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often placed on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instances of intentional malice, including unethical, malevolent AI design. Recently, an analogous emphasis on malicious actors has emerged in work on security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is responsible to anticipate future malicious AIVR design from the outset, given its potential socio-psycho-technological impacts. As a simplified illustration, this paper analyzes a conceivable use case: generative AI (here, deepfake techniques) employed for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily, from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.
In rapid, continuous software development, time- and cost-effective prototyping techniques are beneficial because they enable software designers to quickly explore and evaluate different design concepts. Regarding low-fidelity prototyping for augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of first design concepts but are laborious when it comes to capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software creates a particular need to apply end-user feedback to software refinement. In this paper, we present the concept of a tool for rapid digital prototyping of augmented reality applications, enabling software designers to rapidly design AR prototypes without requiring programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interactions with physical objects, as well as on conducting user-based studies to integrate valuable end-user feedback into the refinement of software aspects.
The Business Intelligence (BI) paradigm is challenged by emerging use cases such as news and social media analytics, in which the source data are unstructured, the analysis metrics are unspecified, and the appropriate visual representations are unsupported by mainstream tools. This case study documents the work undertaken at Microsoft Research to enable these use cases in the Microsoft Power BI product. Our approach comprises: (a) back-end pipelines that use AI to infer navigable data structures from streams of unstructured text, media, and metadata; and (b) front-end representations of these structures grounded in the Visual Analytics literature. Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent to users. We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.
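As a loose illustration of the kind of back-end step this abstract alludes to (not Microsoft's actual pipeline), the sketch below derives a small navigable structure, keyword facets with document counts and a crude "support" score reflecting how well-evidenced each facet is, from a stream of text snippets. The keyword-counting heuristic, the stopword list, and the support score are assumptions made purely for illustration.

```python
# Loose illustration only: inferring a small navigable structure (keyword facets
# with document counts) from unstructured text, plus a crude "support" score to
# surface how well-evidenced each inferred facet is. This is a stand-in
# heuristic, not the AI pipeline used in Power BI.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "to", "is", "are", "as", "over"}

def infer_facets(documents, top_k=5):
    counts = Counter()
    for doc in documents:
        tokens = {t for t in re.findall(r"[a-z]+", doc.lower()) if t not in STOPWORDS}
        counts.update(tokens)                       # one count per document per term
    total = len(documents)
    return [
        {"facet": term, "documents": n, "support": n / total}
        for term, n in counts.most_common(top_k)
    ]

docs = [
    "Protests in the capital over new transit fares",
    "Transit fares rise again as protests continue",
    "City council debates transit budget",
]
for facet in infer_facets(docs):
    print(facet)  # e.g. {'facet': 'transit', 'documents': 3, 'support': 1.0}
```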
Small, local groups who share protected resources (e.g., families, work teams, student organizations) have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions (e.g., biometrics), do not identify individuals (e.g., shared passwords), do not equitably distribute security responsibility (e.g., individual passwords), or make it difficult to share or revoke access (e.g., physical keys). To explore an alternative, we designed Thumprint: inclusive group authentication with a shared secret knock. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept through two user studies with 30 participants. Our results suggest (1) that individuals who enter the same shared thumprint are distinguishable from one another, (2) that people can enter thumprints consistently over time, and (3) that thumprints are resilient to casual adversaries.
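A minimal sketch of how individual expressions of one shared knock might be told apart: each entry is reduced to a feature vector of inter-tap intervals and tap intensities and matched against per-member templates with a nearest-centroid rule. The feature choice, the matching rule, and the sample data are illustrative assumptions, not the classifier or measurements from the Thumprint studies.

```python
# Illustrative sketch only: distinguishing group members who enter the same
# shared knock, assuming each entry is summarized as inter-tap intervals (ms)
# and tap intensities. Feature choice and nearest-centroid matching are
# assumptions for illustration, not the method from the Thumprint paper.
from statistics import mean
from math import dist

def features(entry):
    """entry: list of (timestamp_ms, intensity) tuples for one knock."""
    times = [t for t, _ in entry]
    intervals = [b - a for a, b in zip(times, times[1:])]
    intensities = [i for _, i in entry]
    return intervals + intensities

def build_templates(training_entries):
    """training_entries: {member: [entry, ...]} -> per-member mean feature vector."""
    return {
        member: [mean(col) for col in zip(*(features(e) for e in entries))]
        for member, entries in training_entries.items()
    }

def identify(entry, templates):
    """Return the member whose template is closest to this entry's features."""
    vec = features(entry)
    return min(templates, key=lambda m: dist(vec, templates[m]))

# Example: two members entering the same three-tap secret with personal timing.
train = {
    "alice": [[(0, 0.6), (180, 0.7), (420, 0.5)], [(0, 0.6), (190, 0.7), (430, 0.6)]],
    "bob":   [[(0, 0.9), (250, 0.8), (600, 0.9)], [(0, 0.8), (240, 0.9), (590, 0.8)]],
}
templates = build_templates(train)
print(identify([(0, 0.85), (245, 0.85), (595, 0.85)], templates))  # -> "bob"
```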
The emphasis on exhaustive, passive capture of images using wearable cameras such as the Autographer, a practice often known as lifelogging, has brought to the foreground the challenge of preserving privacy, in addition to that of presenting the vast number of images in a meaningful way. In this paper, we present a user study to understand the importance of an array of factors that are likely to influence lifeloggers to share their lifelog images within their online circles. The findings are a step forward in the emerging area at the intersection of HCI and privacy, helping to explore design directions for privacy-mediating techniques in lifelogging applications.
The design of systems that have independent agency to act on the environment, or that can act as persuasive agents, requires consideration not only of the technical aspects of design but also of the psychological, sociological, and philosophical aspects. Creating usable, safe, and ethical systems will require research into human-computer communication in order to design systems that can create and maintain a relationship with users, explain their workings, and act in the best interests of both users and the larger society.
To date, the majority of existing instruments for measuring trustworthiness in an online context are based on Likert scaling [1,3,11]. These, however, are somewhat restricted in applicability. Statements formed in Likert scaling typically address one specific website; adjusting these statements for other websites can therefore be accompanied by a loss of validity. To address these limitations, we propose using a semantic differential. Research has shown that semantic differentials are appropriate for measuring multidimensional constructs [8,12] such as trust. Our novel approach to measuring trustworthiness exceeds Likert-based scaling in its effortless application to different online contexts and its better translatability. After one pre-study and two online studies with a total of 554 participants, we developed a nine-item questionnaire that is comparable to other existing questionnaires in terms of reliability and internal consistency, while overcoming the limitations of Likert-scale-based questionnaires.
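To make the reliability claim concrete, here is a small sketch of scoring a nine-item semantic differential questionnaire (bipolar 1-7 ratings) and computing Cronbach's alpha for internal consistency. The ratings are hypothetical placeholders and the nine items are not reproduced from the questionnaire described above; only the alpha formula itself is standard.

```python
# Sketch: scoring a nine-item semantic differential (bipolar 1-7 ratings) and
# checking internal consistency with Cronbach's alpha. The ratings below are
# hypothetical placeholders, not data from the studies described above.
from statistics import pvariance, mean

def cronbach_alpha(responses):
    """responses: list of per-participant lists, one rating per item."""
    k = len(responses[0])                              # number of items
    item_vars = [pvariance(col) for col in zip(*responses)]
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def trust_score(row):
    """Per-participant trustworthiness score: mean across the nine items."""
    return mean(row)

# Three hypothetical participants rating a website on nine bipolar items.
responses = [
    [6, 5, 6, 7, 6, 5, 6, 6, 5],
    [3, 2, 3, 3, 4, 2, 3, 3, 2],
    [5, 5, 4, 6, 5, 5, 5, 4, 5],
]
print([round(trust_score(r), 2) for r in responses])
print(round(cronbach_alpha(responses), 2))  # alpha close to 1 indicates high consistency
```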
Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively within a technico-social framework, few efforts have been made to link the dynamics of trust in a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust-acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", and instead found that people consistently make a very specific error in trust assignment to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.
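For readers unfamiliar with blocking, a brief sketch of the classic Rescorla-Wagner account of the effect may help: after cue A alone is trained to predict an outcome, a subsequent A+B compound phase leaves little to be learned about B. The learning rate and trial counts below are arbitrary, and the simulation illustrates the textbook phenomenon rather than the paper's traffic-algorithm task.

```python
# Illustrative Rescorla-Wagner simulation of "blocking": cue A is first trained
# alone to predict an outcome, then A and B are presented together. Because A
# already predicts the outcome, the prediction error is near zero and B gains
# almost no associative strength. Learning rate and trial counts are arbitrary.
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of sets of present cues; the outcome occurs on every trial."""
    v = {"A": 0.0, "B": 0.0}                 # associative strengths
    for cues in trials:
        prediction = sum(v[c] for c in cues)
        error = lam - prediction             # shared prediction error for this trial
        for c in cues:
            v[c] += alpha * error            # the same error term updates each present cue
    return v

# Phase 1: A alone (20 trials). Phase 2: A+B compound (20 trials).
strengths = rescorla_wagner([{"A"}] * 20 + [{"A", "B"}] * 20)
print(strengths)  # blocking: v["A"] ends near 1.0 while v["B"] stays near 0.0
```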