Biblio

Filters: Keyword is HCI
2022-09-29
Suresh, V., Ramesh, M.K., Shadruddin, Sheikh, Paul, Tapobrata, Bhattacharya, Anirban, Ahmad, Abrar.  2021.  Design and Application of Converged Infrastructure through Virtualization Technology in Grid Operation Control Center in North Eastern Region of India. 2020 3rd International Conference on Energy, Power and Environment: Towards Clean Energy Technologies. :1–5.
Modern grid operation requires multiple interlinked applications and many automated processes at the control center for monitoring and operating the grid. Information technology integrated with operational technology plays a critical role in grid operation. The computing resource requirements of these software applications vary widely, spanning high-processing applications, Input/Output (I/O)-intensive applications, and applications with low resource requirements. Present-day grid operation control centers use various applications for load despatch schedule management, real-time analytics and optimization, and post-despatch analysis and reporting. These applications are integrated with Operational Technology (OT) such as Supervisory Control and Data Acquisition / Energy Management Systems (SCADA/EMS) and Wide Area Measurement Systems (WAMS). This paper discusses the design considerations and implementation of a converged infrastructure built through virtualization technology, consolidating servers and storage in a multi-cluster approach to meet the applications' high-availability requirements and the objectives of the grid control center of the north eastern region of India. The process involves weighing the benefits of different architectural solutions, grouping application hosts, forming multiple clusters with reliability and security in mind, and designing suitable infrastructure to meet all end objectives. Reliability, enhanced resource utilization, economic factors, storage and physical node selection, integration issues with OT systems, and cost optimization are the prime design considerations. The modalities adopted to minimize downtime of grid-critical systems during migration from the existing infrastructure, and the integration with the OT systems of the North Eastern Regional Load Despatch Center, are also elaborated in this paper.
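
As a rough illustration of the host-grouping step the abstract describes, the following Python sketch assigns application hosts to virtualization clusters by criticality and dominant resource profile. The host names, profiles, and grouping rule are hypothetical assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch: group application hosts into virtualization
# clusters by criticality and dominant resource demand. Names and
# profiles are illustrative, not from the paper.
from collections import defaultdict

hosts = [
    {"name": "scada-ems-1", "critical": True,  "profile": "io"},
    {"name": "wams-hist-1", "critical": True,  "profile": "io"},
    {"name": "analytics-1", "critical": False, "profile": "cpu"},
    {"name": "report-web-1", "critical": False, "profile": "low"},
]

def cluster_key(host):
    # Critical OT-facing hosts get dedicated clusters so that failover
    # capacity and security zoning are not shared with IT workloads.
    zone = "ot-critical" if host["critical"] else "it-general"
    return f"{zone}/{host['profile']}"

clusters = defaultdict(list)
for h in hosts:
    clusters[cluster_key(h)].append(h["name"])

for name, members in sorted(clusters.items()):
    print(name, members)
```
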
2021-02-03
Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130–137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often set on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice – including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure for generating defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

2020-04-13
Dechand, Sergej, Naiakshina, Alena, Danilova, Anastasia, Smith, Matthew.  2019.  In Encryption We Don’t Trust: The Effect of End-to-End Encryption to the Masses on User Perception. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :401–415.
With WhatsApp's adoption of the Signal Protocol as its default, end-to-end encryption for the masses happened almost overnight. Unlike iMessage, WhatsApp notifies users that encryption is enabled, explicitly informing them about improved privacy. This rare feature gives us an opportunity to study people's understanding and perception of secure messaging pre- and post-mass messenger encryption (pre/post-MME). To study changes in perceptions, we compared the results of two mental-models studies: one conducted in 2015 pre-MME and one in 2017 post-MME. Our primary finding is that users do not trust encryption as currently offered. When asked about encryption in the study, most stated that they had heard of encryption, but only a few understood the implications, even at a high level. Their consensus view was that no technical solution exists to stop skilled attackers from getting their data. Even with a major development, such as WhatsApp rolling out end-to-end encryption, people still do not feel well protected by their technology. Surprisingly, despite WhatsApp's end-to-end security info messages and the high media attention, the majority of participants were not even aware of the encryption. Most participants had an almost correct threat model but did not believe that any technical solution could stop knowledgeable attackers from reading their messages. Using technology made them feel vulnerable.
2019-02-22
Börsting, Ingo, Gruhn, Volker.  2018.  Towards Rapid Digital Prototyping for Augmented Reality Applications. Proceedings of the 4th International Workshop on Rapid Continuous Software Engineering. :12–15.

In rapid continuous software development, time- and cost-effective prototyping techniques are beneficial because they enable software designers to quickly explore and evaluate different design concepts. For low-fidelity prototyping of augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of initial design concepts but are laborious when it comes to capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software creates a particular need to apply end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping of AR applications, enabling software designers to build AR prototypes without programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interaction with physical objects, as well as on performing user-based studies to integrate valuable end-user feedback into the refinement of software aspects.

2019-01-16
Lowens, Byron M..  2018.  Toward Privacy Enhanced Solutions For Granular Control Over Health Data Collected by Wearable Devices. Proceedings of the 2018 Workshop on MobiSys 2018 Ph.D. Forum. :5–6.
The advent of wearable technologies has engendered novel ways to understand human behavior as it relates to personalized healthcare and health management. As these technologies proliferate among users, concerns have been raised about threats to data privacy, specifically regarding the collection and dissemination of data from wearable devices. These factors point to the urgency of better understanding user sharing preferences in order to formulate personalized solutions that give users granular control over the data collected by their wearable devices. The goal of my dissertation is to design and build human-centered solutions that address the need for granular privacy control over data generated by wearable devices.
2018-12-10
Edge, Darren, Larson, Jonathan, White, Christopher.  2018.  Bringing AI to BI: Enabling Visual Analytics of Unstructured Data in a Modern Business Intelligence Platform. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :CS02:1–CS02:9.

The Business Intelligence (BI) paradigm is challenged by emerging use cases such as news and social media analytics in which the source data are unstructured, the analysis metrics are unspecified, and the appropriate visual representations are unsupported by mainstream tools. This case study documents the work undertaken at Microsoft Research to enable these use cases in the Microsoft Power BI product. Our approach comprises: (a) back-end pipelines that use AI to infer navigable data structures from streams of unstructured text, media and metadata; and (b) front-end representations of these structures grounded in the Visual Analytics literature. Through our creation of multiple end-to-end data applications, we learned that representing the varying quality of inferred data structures was crucial for making the use and limitations of AI transparent to users. We conclude with reflections on BI in the age of AI, big data, and democratized access to data analytics.
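
As a loose illustration of the back-end idea of inferring navigable structure from unstructured text, here is a minimal Python sketch; it is not Power BI or Microsoft Research code, and the documents and confidence heuristic are invented for illustration. It builds an entity co-occurrence structure and attaches a crude quality score, echoing the paper's point that inference quality should be surfaced to users.

```python
# Minimal sketch (not Power BI code): infer a navigable entity
# co-occurrence structure from unstructured text and attach a crude
# confidence score, illustrating why surfacing inference quality matters.
import re
from collections import Counter
from itertools import combinations

docs = [
    "Contoso acquires Fabrikam in record deal",
    "Fabrikam shares rise after Contoso announcement",
    "Northwind unrelated to Contoso story",
]

edges = Counter()
for doc in docs:
    # Naive entity heuristic: capitalized words stand in for a real
    # named-entity-recognition stage.
    entities = sorted(set(re.findall(r"\b[A-Z][a-z]+\b", doc)))
    for a, b in combinations(entities, 2):
        edges[(a, b)] += 1

for (a, b), count in edges.most_common():
    confidence = count / len(docs)   # crude proxy for inference quality
    print(f"{a} -- {b}: seen {count}x, confidence {confidence:.2f}")
```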

2018-07-18
Das, Sauvik, Laput, Gierad, Harrison, Chris, Hong, Jason I..  2017.  Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :3764–3774.

Small, local groups who share protected resources (e.g., families, work teams, student organizations) have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions (e.g., biometrics), do not identify individuals (e.g., shared passwords), do not equitably distribute security responsibility (e.g., individual passwords), or make it difficult to share or revoke access (e.g., physical keys). To explore an alternative, we designed Thumprint: inclusive group authentication with a shared secret knock. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept through two user studies with 30 participants. Our results suggest (1) that individuals who enter the same shared thumprint are distinguishable from one another, (2) that people can enter thumprints consistently over time, and (3) that thumprints are resilient to casual adversaries.
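
A minimal sketch of the core idea, assuming knock timing is the distinguishing signal: one shared rhythm authenticates the group, while small per-member timing differences identify the individual. The template, profiles, and threshold below are hypothetical and are not the paper's actual classifier.

```python
# Hedged sketch of shared secret knocks: match a knock attempt against a
# group rhythm, then against per-member profiles. All values illustrative.
def intervals(taps):
    # Convert absolute tap times into the rhythm between taps.
    return [b - a for a, b in zip(taps, taps[1:])]

def distance(a, b):
    # Mean absolute difference between two rhythms.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

GROUP_TEMPLATE = [0.30, 0.15, 0.15, 0.45]          # shared rhythm (seconds)
MEMBER_PROFILES = {
    "alice": [0.28, 0.14, 0.16, 0.44],
    "bob":   [0.34, 0.17, 0.15, 0.50],
}

def authenticate(taps, accept=0.05):
    iv = intervals(taps)
    if len(iv) != len(GROUP_TEMPLATE) or distance(iv, GROUP_TEMPLATE) > accept:
        return None                                 # wrong secret knock
    # Same secret, but individual expressions remain distinguishable.
    return min(MEMBER_PROFILES, key=lambda m: distance(iv, MEMBER_PROFILES[m]))

print(authenticate([0.0, 0.29, 0.43, 0.59, 1.03]))  # likely "alice"
```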

2017-10-25
Chowdhury, Soumyadeb, Ferdous, Md Sadek, Jose, Joemon M.  2016.  Exploring Lifelog Sharing and Privacy. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. :553–558.

The emphasis on exhaustive passive capture of images using wearable cameras like Autographer, often known as lifelogging, has brought to the foreground the challenge of preserving privacy, in addition to that of presenting the vast number of images in a meaningful way. In this paper, we present a user study to understand the importance of an array of factors that are likely to influence lifeloggers to share their lifelog images within their online circles. The findings are a step forward in the emerging area intersecting HCI and privacy, helping to explore design directions for privacy-mediating techniques in lifelogging applications.

2017-10-18
Miller, David.  2016.  AgentSmith: Exploring Agentic Systems. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :234–238.

The design of systems that have independent agency to act on the environment, or that can act as persuasive agents, requires consideration not only of the technical aspects of design but also of the psychological, sociological, and philosophical aspects. Creating usable, safe, and ethical systems will require research into human-computer communication in order to design systems that can create and maintain a relationship with users, explain their workings, and act in the best interests of both users and the larger society.

2017-05-16
Rieser, Denise Christine, Bernhard, Orlando.  2016.  Measuring Trust: The Simpler the Better? Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :2940–2946.

To date, the majority of existing instruments for measuring trustworthiness in an online context are based on Likert scaling [1,3,11]. These, however, are somewhat restricted in applicability: statements formed in Likert scaling typically address one specific website, so adjusting them for other websites can be accompanied by a loss of validity. To address these limitations, we propose using a semantic differential. Research has shown that semantic differentials are appropriate for measuring multidimensional constructs [8,12] such as trust. Our novel approach to measuring trustworthiness exceeds Likert-based scaling in its effortless application to different online contexts and its better translatability. After one pre-study and two online studies with a total of 554 participants, we developed a nine-item questionnaire that is comparable to other existing questionnaires in terms of reliability and internal consistency, but overcomes the limitations of Likert-scale-based questionnaires.
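
Since the abstract reports reliability and internal consistency for the nine-item scale, a minimal sketch of how internal consistency is commonly checked (Cronbach's alpha) may help. The responses below are fabricated for illustration and are not the study's data.

```python
# Sketch: Cronbach's alpha for a nine-item semantic-differential scale.
# Responses are fabricated for illustration only.
def cronbach_alpha(rows):
    k = len(rows[0])
    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [  # participants x 9 items, e.g. 1..7 semantic differential
    [6, 5, 6, 7, 6, 5, 6, 6, 7],
    [4, 4, 5, 4, 3, 4, 4, 5, 4],
    [7, 6, 7, 7, 6, 7, 6, 7, 7],
    [3, 3, 2, 3, 4, 3, 3, 2, 3],
]

print(f"alpha = {cronbach_alpha(responses):.2f}")
```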

Conway, Dan, Chen, Fang, Yu, Kun, Zhou, Jianlong, Morris, Richard.  2016.  Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :3035–3041.

Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", finding instead that people consistently make a very specific error when assigning trust to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.
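
For readers unfamiliar with "blocking", a minimal Rescorla-Wagner simulation (a standard textbook learning model, not the authors' code) shows the effect the study tried and failed to reproduce: a cue trained first absorbs the prediction, so a cue added later gains almost no associative strength.

```python
# Rescorla-Wagner sketch of "blocking" (illustrative, not from the paper).
# Phase 1: cue A alone predicts the outcome; Phase 2: A and X together.
# Because A already predicts the outcome, X gains little strength.
alpha, lam = 0.3, 1.0   # learning rate, outcome magnitude
V = {"A": 0.0, "X": 0.0}

def trial(cues):
    prediction = sum(V[c] for c in cues)
    error = lam - prediction
    for c in cues:
        V[c] += alpha * error

for _ in range(30):
    trial(["A"])            # Phase 1: A -> outcome
for _ in range(30):
    trial(["A", "X"])       # Phase 2: AX -> outcome

print(f"V(A) = {V['A']:.2f}, V(X) = {V['X']:.2f}")  # V(X) stays near 0
```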