Biblio
In augmented reality systems, labelling is a useful assistive technique for browsing and understanding unfamiliar objects or environments: virtual labels of words or pictures superimposed on the real scene provide convenient information to viewers, extend recognition of areas of interest, and promote interaction with the real scene. How to lay out labels in the user's field of view, keep the virtual information legible, and balance the ratio of virtual to real-scene information is a key problem in view management. This paper presents empirical results from an experiment on users' visual perception of label layouts, reflecting subjective preferences among the factors that influence labelling. Statistical analysis of the results captures the intuitive visual judgements made by the subjects. A quantitative measurement of clutter indicates the change that labels induce on the real scene, informing future label design for view management.
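The clutter metric the paper uses is not specified in this abstract; as a hedged illustration of how a scene-level clutter score can be compared before and after labels are drawn, the sketch below uses mean gradient magnitude (edge density) as a simple proxy, assuming grayscale images given as 2D NumPy arrays:

```python
import numpy as np

def edge_density_clutter(image: np.ndarray) -> float:
    """Simple clutter proxy: mean gradient magnitude of a grayscale image.

    `image` is assumed to be a 2D array of floats in [0, 1].
    """
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# The change induced by labels is then the difference between the scores
# of the scene with and without the superimposed labels:
# delta = edge_density_clutter(with_labels) - edge_density_clutter(scene)
```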
Head-mounted augmented reality (AR) enables embodied in situ drawing in three dimensions (3D). We explore 3D drawing interactions based on uninstrumented, unencumbered (bare) hands that preserve the user's ability to freely navigate and interact with the physical environment. We derive three alternative interaction techniques supporting bare-handed drawing in AR from the literature and by analysing several envisaged use cases. The three interaction techniques are evaluated in a controlled user study examining three distinct drawing tasks: planar drawing, path description, and 3D object reconstruction. The results indicate that continuous freehand drawing supports faster line creation than the control-point-based alternatives, although with reduced accuracy. User preferences for the different techniques are mixed and vary considerably between the different tasks, highlighting the value of diverse and flexible interactions. The combined effectiveness of these three drawing techniques is illustrated in an example application of 3D AR drawing.
Our prototype app, Pocket Penjing, built using Unity3D, takes its name from the Chinese "Penjing." These tray plantings of miniature trees pre-date bonsai, often including miniature benches or figures to allude to people's relationship to the tree. App users choose a species, then create and name their tree. Swiping rotates a 3D globe showing flagged locations. Each flag represents a live online air-quality monitoring station whose data stream the app can scrape. Data is pulled in from the selected station and the AR window loads. The AR tree grows in real-time 3D. Its L-system form is determined by the selected live air quality data. We used this prototype as the basis of a two-part formative participatory design workshop with 63 participants.
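The abstract does not give Pocket Penjing's actual production rules or data mapping; a minimal sketch of the underlying idea, with an assumed bracketed rule set and a hypothetical AQI-to-iterations mapping, might look like this:

```python
# Classic bracketed L-system rule; the app's real rule set is not published
# in the abstract, so this rule and the AQI mapping below are assumptions.
RULES = {"F": "FF+[+F-F-F]-[-F+F+F]"}

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Rewrite the axiom `iterations` times using the production rules."""
    for _ in range(iterations):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

def iterations_from_aqi(aqi: float) -> int:
    """Hypothetical mapping: cleaner air (lower AQI) yields a fuller tree."""
    return max(1, 5 - int(aqi // 50))

tree_string = expand("F", RULES, iterations_from_aqi(42.0))
# `tree_string` would then drive turtle-style geometry generation in Unity3D.
```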
In rapid, continuous software development, time- and cost-effective prototyping techniques are valuable because they let software designers quickly explore and evaluate different design concepts. For low-fidelity prototyping of augmented reality (AR) applications, software designers have so far been restricted to non-digital prototypes, which enable the visualization of first design concepts but are laborious for capturing interactivity. The lack of empirical values and standards for designing user interactions in AR software creates a particular need to apply end-user feedback to software refinement. In this paper we present the concept of a tool for rapid digital prototyping of augmented reality applications, enabling software designers to build AR prototypes without programming skills. The prototyping tool focuses on modeling multimodal interactions, especially interaction with physical objects, and on conducting user-based studies to integrate valuable end-user feedback into the refinement of the software.
Network analysts have long used two-dimensional security visualizations to make sense of overwhelming amounts of network data. As networks grow larger and more complex, two-dimensional displays can become convoluted, compromising user cyber-threat perspective. Using augmented reality to display data with cyber-physical context creates a naturally intuitive interface that helps restore the perspective and comprehension sacrificed by complicated two-dimensional visualizations. We introduce Mobile Augmented Reality for Cybersecurity, or MARCS, as a platform to visualize a diverse array of data in real time and space to improve user perspective and threat response. Early work centers on CovARVT and ConnectAR, two proof-of-concept prototype applications designed to visualize intrusion detection and wireless association data, respectively.
Augmented reality (AR) technologies, such as Microsoft's HoloLens head-mounted display and AR-enabled car windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user's surroundings and overlaying virtual output on the user's perception of the real world. These applications enable users to interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output.
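Arya's actual policy framework is defined in the paper; purely as an illustration of the kind of check an output-control policy implies, here is a hypothetical Python sketch in which the platform rejects draw requests that would overlap safety-critical regions (e.g., an oncoming vehicle):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned 2D region in the user's view."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def allow_draw(app_output: Box, safety_regions: list) -> bool:
    """Permit the app's draw call only if it obscures no safety-critical
    region; Arya's real policies are richer than this single predicate."""
    return not any(app_output.overlaps(region) for region in safety_regions)
```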
The evolution of cloud gaming systems has substantially raised the security requirements for computer games. Although online game development often draws on artificial intelligence and human-computer interaction, game developers and providers often pay little attention to security techniques. In cloud gaming, location-based games are augmented reality games that take the original principles of a game and apply them to the real world; in other words, the real world shapes the game experience. Because the execution of such games is distributed across cloud computing, users cannot be certain where their input and output data are managed. This opens the possibility of injecting incorrect data into the exchange between the gamer's terminal and the gaming platform. In this context, we propose a new gaming concept for augmented reality and location-based games that addresses this cheating scenario. The merit of our approach is to establish an accurate and verifiable proof that the gamer reached the goal or found the target. The major novelty of our method is that it allows the gamer to submit an authenticated proof related to the game result without compromising the privacy of positioning data.
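The paper's proof construction is not reproduced in this abstract; as a hedged sketch of the general idea, an authenticated claim that commits to a coarse goal zone rather than raw coordinates could use a MAC over a zone identifier and timestamp. The key name and message format below are assumptions:

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"key-agreed-with-gaming-platform"  # assumed session key

def make_proof(zone_id: str) -> dict:
    """Authenticated claim that the gamer reached `zone_id`, disclosing
    only a coarse zone identifier rather than exact positioning data."""
    payload = json.dumps({"zone": zone_id, "ts": int(time.time())})
    tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

def verify_proof(proof: dict) -> bool:
    expected = hmac.new(SHARED_KEY, proof["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["mac"])
```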
Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. We propose to demonstrate that augmented reality can be used to enable human-robot cooperative search, where the robot can both share search results and assist the human teammate in navigating to a search target.
Immersive augmented reality (AR) technologies are becoming a reality. Prior works have identified security and privacy risks raised by these technologies, primarily considering individual users or AR devices. However, we make two key observations: (1) users will not always use AR in isolation, but also in ecosystems of other users, and (2) since immersive AR devices have only recently become available, the risks of AR have been largely hypothetical to date. To provide a foundation for understanding and addressing the security and privacy challenges of emerging AR technologies, grounded in the experiences of real users, we conduct a qualitative lab study with an immersive AR headset, the Microsoft HoloLens. We conduct our study in pairs (22 participants across 11 pairs), wherein participants engage in paired and individual (but physically co-located) HoloLens activities. Through semi-structured interviews, we explore participants' security, privacy, and other concerns, surfacing key findings. For example, we find that despite the HoloLens's limitations, participants were easily immersed, treating virtual objects as real (e.g., stepping around them for fear of tripping). We also uncover numerous security, privacy, and safety concerns unique to AR (e.g., deceptive virtual objects misleading users about the real world), and a need for access control among users to manage shared physical spaces and the virtual content embedded in those spaces. Our findings give us the opportunity to identify broader lessons and key challenges to inform the design of emerging single- and multi-user AR technologies.
In the context of Industry 4.0, Augmented Reality (AR) is frequently mentioned as the upcoming interface technology for human-machine communication and collaboration. Many prototypes have already arisen in both the consumer market and the industrial sector. According to numerous experts, it will take only a few years until AR reaches the maturity to be deployed in production applications. Especially for industrial use, the security risks and challenges this new technology entails must be assessed. We focus on plant operators, Original Equipment Manufacturers (OEMs), and component vendors as stakeholders. Starting from several industrial AR use cases and the structure of contemporary AR applications, in this paper we identify security assets worthy of protection and derive the corresponding security goals. We then elaborate the threats to which industrial AR applications are exposed and develop an edge computing architecture for future AR applications that encompasses various measures to reduce security risks for our stakeholders.
This paper explores the potential of enabling SDN security and monitoring services by piggybacking on SDN reactive routing. As a case study, we implement and evaluate a piggybacking based intrusion prevention system called SDN-Defense. Our study of university WiFi traffic traces reveals that up to 73% of malicious flows can be detected by inspecting just the first three packets of a flow, and 90% of malicious flows from the first four packets. Using such empirical insights, we propose to forward the first K packets of each new flow to an augmented SDN controller for security inspection, where K is a dynamically configurable parameter. We characterize the cost-benefit trade-offs of SDN-Defense using real wireless traces and discuss potential scalability issues. Finally, we discuss other applications which can be enhanced by using our proposed piggybacking approach.
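As a rough sketch of the piggybacking idea (not SDN-Defense's actual implementation), a switch-side handler could count packets per flow and divert only the first K to the controller; the `controller` and `fast_path` objects below are hypothetical interfaces:

```python
from collections import defaultdict

K = 4  # dynamically configurable in SDN-Defense; fixed here for illustration
packet_counts = defaultdict(int)  # flow 5-tuple -> packets seen so far

def handle_packet(flow_id, packet, controller, fast_path):
    """Divert the first K packets of each new flow to the controller for
    security inspection, then install a rule so later packets bypass it."""
    packet_counts[flow_id] += 1
    if packet_counts[flow_id] <= K:
        controller.inspect(packet)           # IDS sees the early packets
        if packet_counts[flow_id] == K:
            fast_path.install_rule(flow_id)  # subsequent packets skip the hop
    fast_path.forward(packet)
```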
Software permeates every aspect of our world, from our homes to the infrastructure that provides mission-critical services. As the size and complexity of software systems increase, the number and sophistication of software security flaws increase as well. The analysis of these flaws began as a manual approach, but it soon became apparent that a manual approach alone cannot scale, and that tools were necessary to assist human experts in this task, resulting in a number of techniques and approaches that automated certain aspects of the vulnerability analysis process. Recently, DARPA carried out the Cyber Grand Challenge, a competition among autonomous vulnerability analysis systems designed to push the tool-assisted human-centered paradigm into the territory of complete automation, with the hope that, by removing the human factor, the analysis would be able to scale to new heights. However, when the autonomous systems were pitted against human experts it became clear that certain tasks, albeit simple, could not be carried out by an autonomous system, as they require an understanding of the logic of the application under analysis. Based on this observation, we propose a shift in the vulnerability analysis paradigm, from tool-assisted human-centered to human-assisted tool-centered. In this paradigm, the automated system orchestrates the vulnerability analysis process, and leverages humans (with different levels of expertise) to perform well-defined sub-tasks, whose results are integrated in the analysis. As a result, it is possible to scale the analysis to a larger number of programs, and, at the same time, optimize the use of expensive human resources. In this paper, we detail our design for a human-assisted automated vulnerability analysis system, describe its implementation atop an open-sourced autonomous vulnerability analysis system that participated in the Cyber Grand Challenge, and evaluate and discuss the significant improvements that non-expert human assistance can offer to automated analysis approaches.
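As a hedged sketch of the proposed human-assisted, tool-centered loop (the `analysis` and `humans` interfaces are hypothetical illustrations, not the paper's API):

```python
import queue

def orchestrate(analysis, humans):
    """The automated system drives the analysis, emits well-defined
    sub-tasks where it gets stuck, and folds human answers back in."""
    tasks = queue.Queue()
    for subtask in analysis.stuck_points():  # e.g., "label this input field"
        tasks.put(subtask)
    while not tasks.empty():
        subtask = tasks.get()
        answer = humans.assign(subtask)      # routed by required expertise
        analysis.integrate(answer)           # resume automated exploration
```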
Many mobile apps, including augmented-reality games, bar-code readers, and document scanners, digitize information from the physical world by applying computer-vision algorithms to live camera data. However, because camera permissions for existing mobile operating systems are coarse (i.e., an app may access a camera's entire view or none of it), users are vulnerable to visual privacy leaks. An app violates visual privacy if it extracts information from camera data in unexpected ways. For example, a user might be surprised to find that an augmented-reality makeup app extracts text from the camera's view in addition to detecting faces. This paper presents results from the first large-scale study of visual privacy leaks in the wild. We build CamForensics to identify the kind of information that apps extract from camera data. Our extensive user surveys determine what kind of information users expected an app to extract. Finally, our results show that camera apps frequently defy users' expectations based on their descriptions.
With the increasing popularity of augmented reality (AR) services, providing seamless human-computer interactions in the AR setting has received notable attention in the industry. Gesture control devices have recently emerged as the next great gadgets for AR due to their unique ability to enable computer interaction with day-to-day gestures. While these AR devices are bringing revolutions to our interaction with the cyber world, it is also important to consider potential privacy leakages from these always-on wearable devices. Specifically, the coarse access control on current AR systems could lead to possible abuse of sensor data. Although the always-on gesture sensors are frequently quoted as a privacy concern, there has not been any study on information leakage of these devices. In this article, we present our study on side-channel information leakage of the most popular gesture control device, Myo. Using signals recorded from the electromyography (EMG) sensor and accelerometers on Myo, we can recover sensitive information such as passwords typed on a keyboard and PIN sequences entered through a touchscreen. The EMG signal records the subtle electric currents of muscle contractions. We design novel algorithms based on dynamic cumulative sum and wavelet transform to determine the exact time of finger movements. Furthermore, we adopt the Hudgins feature set in a support vector machine to classify recorded signal segments into individual fingers or numbers. We also apply coordinate transformation techniques to recover fine-grained spatial information with low-fidelity outputs from the sensor in keystroke recovery. We evaluated the information leakage using data collected from a group of volunteers. Our results show that there is severe privacy leakage from these commodity wearable sensors. Our system recovers complex passwords constructed with lowercase letters, uppercase letters, numbers, and symbols with a mean success rate of 91%.
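The paper's detector combines dynamic cumulative sum with a wavelet transform; a simplified, one-sided CUSUM sketch of the onset-detection step (parameters are illustrative, not the paper's values) could read:

```python
import numpy as np

def cusum_onsets(signal: np.ndarray, drift: float, threshold: float) -> list:
    """Flag indices where the cumulative positive deviation from the mean
    exceeds `threshold`, i.e. candidate finger-movement onsets in an EMG
    trace. A one-sided simplification of the paper's dynamic cumulative sum."""
    deviation = np.abs(signal - signal.mean())
    s, onsets = 0.0, []
    for i, v in enumerate(deviation):
        s = max(0.0, s + v - drift)
        if s > threshold:
            onsets.append(i)
            s = 0.0  # reset and look for the next movement
    return onsets
```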
Augmented Reality (AR) devices continuously scan their environment in order to naturally overlay virtual objects onto the user's view of the physical world. In contrast to Virtual Reality, where one's environment is fully replaced with a virtual one, one of AR's "killer features" is co-located collaboration, in which multiple users interact with the same combination of virtual and real objects. Microsoft recently released HoloLens, the first consumer-ready augmented reality headset that needs no outside markers to achieve precise inside-out spatial mapping, which allows centimeter-scale hologram positioning. However, despite the many applications published on the Windows Mixed Reality platform that rely on direct communication between AR devices, there currently exists no implementation or achievable proposal for secure direct pairing of two unassociated headsets. As augmented reality goes mainstream, this omission exposes current and future users to a range of avoidable attacks. In order to close this real-world gap in both theory and engineering practice, in this paper we design and evaluate HoloPair, a system for secure and usable pairing of two AR headsets. We propose a pairing protocol and build a working prototype to experimentally evaluate its security guarantees, usability, and system performance. By running a user study with a total of 22 participants, we show that the system achieves high rates of attack detection, short pairing times, and a high average usability score. Moreover, in order to make an immediate impact on the wider developer community, we have published the full implementation and source code of our prototype, which is currently under consideration for inclusion in the official HoloLens development toolkit.
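HoloPair's protocol is specified in the paper; as a hedged illustration of the comparison step typical of such pairing schemes, both headsets could derive a short authentication string from the exchanged public keys and render it visually for the users to compare. The truncation length below is an assumption, not HoloPair's published parameter:

```python
import hashlib

def short_auth_string(pk_a: bytes, pk_b: bytes, bits: int = 20) -> int:
    """Derive a short checksum from both public keys; each headset renders
    it (e.g., as a shared hologram) and the users confirm the two match."""
    digest = hashlib.sha256(pk_a + pk_b).digest()
    return int.from_bytes(digest[:4], "big") % (1 << bits)
```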
Augmented reality is poised to become a dominant computing paradigm over the next decade. With promises of three-dimensional graphics and interactive interfaces, augmented reality experiences will rival the very best science fiction novels. This breakthrough also brings unique challenges in how users can authenticate one another to share rich content between augmented reality headsets. Traditional authentication protocols fall short when there is no common central entity or when access to a central authentication server is not available or desirable. Looks Good To Me (LGTM) is an authentication protocol that leverages the unique hardware and context provided by augmented reality headsets to bring innate human trust mechanisms into the digital world, solving authentication in a usable and secure way. LGTM works over point-to-point wireless communication, so users can authenticate one another in a variety of circumstances, and it is designed with usability at its core, requiring users to perform only two actions: one to initiate and one to confirm. Users intuitively authenticate one another using, seemingly, only each other's faces, but under the hood LGTM uses a combination of facial recognition and wireless localization to bootstrap trust from a wireless signal, to a location, to a face, for secure and usable authentication.
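Abstracting from the description above, the final LGTM decision fuses two signals: does the recognized face match, and does the wireless localization of the peer coincide with where that face is seen? A hypothetical sketch of that conjunction (thresholds and positions are assumed, not the paper's parameters):

```python
def authenticate(face_score: float, face_pos: tuple, radio_pos: tuple,
                 min_score: float = 0.9, max_gap_m: float = 0.5) -> bool:
    """Accept only if facial recognition is confident AND the wireless
    localization places the peer's device at the recognized face's position."""
    gap = sum((a - b) ** 2 for a, b in zip(face_pos, radio_pos)) ** 0.5
    return face_score >= min_score and gap <= max_gap_m
```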
Augmented reality (AR) technologies, such as those in head-mounted displays like Microsoft HoloLens or in automotive windshields, are poised to change how people interact with their devices and the physical world. Though researchers have begun considering the security, privacy, and safety issues raised by these technologies, to date such efforts have focused on input, i.e., how to limit the amount of private information to which AR applications receive access. In this work, we focus on the challenge of output management: how can an AR operating system allow multiple concurrently running applications to safely augment the user's view of the world? That is, how can the OS prevent apps from (for example) interfering with content displayed by other apps or the user's perception of critical real-world context, while still allowing them sufficient flexibility to implement rich, immersive AR scenarios? We explore the design space for the management of visual AR output, propose a design that balances OS control with application flexibility, and lay out the research directions raised and enabled by this proposal.
The latest advances in head-mounted displays (HMDs) for augmented reality (AR) and mixed reality (MR) have produced commercialized devices that are gradually being accepted by the public. These HMDs are generally equipped with head tracking, which provides an excellent input for exploring immersive visualization and interaction techniques for various AR/MR applications. This paper explores the head-tracking function on the latest Microsoft HoloLens, where gaze is defined as the ray starting at the head location and pointing forward. We present a gaze-directed visualization approach to study ensembles of 2D oil spill simulations in mixed reality. Our approach allows users to place an ensemble as an image stack in a real environment and explore the ensemble with gaze tracking. The prototype system demonstrates the challenges and promising effects of gaze-based interaction in state-of-the-art mixed reality.
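Following the paper's definition of gaze as a ray from the head position along the forward vector, a minimal sketch of intersecting that ray with one layer of the image stack (modeled here as a plane of constant z; the coordinate layout is an assumption):

```python
import numpy as np

def gaze_hit(head_pos: np.ndarray, forward: np.ndarray, layer_z: float):
    """Intersect the gaze ray with the plane z = layer_z; returns the hit
    point, or None if the layer is behind the user or parallel to the ray."""
    if abs(forward[2]) < 1e-9:
        return None
    t = (layer_z - head_pos[2]) / forward[2]
    return head_pos + t * forward if t > 0 else None

hit = gaze_hit(np.array([0.0, 1.6, 0.0]),   # head ~1.6 m above the floor
               np.array([0.0, -0.2, 1.0]),  # looking slightly downward
               layer_z=1.0)
```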
Beyond other domains, the field of immersive analytics makes use of Augmented Reality techniques to successfully support users in analyzing data. When displaying ubiquitous data integrated into everyday life, spatial immersion issues such as depth perception, data localization, and object relations become relevant. Although a variety of techniques exist to deal with these issues, they are difficult to apply if the examined data or the reference space is large and abstract. In this work, we discuss problems observed in such immersive analytics systems and the applicability of current countermeasures, in order to identify needs for action.
Recent advances in vehicle automation have led to excitement and discourse in academia, industry, the media, and the public. Human factors such as trust and user experience are critical in terms of safety and customer acceptance. One of the main challenges in partial and conditional automation is related to drivers' situational awareness, or a lack thereof. In this paper, we critically analyse state-of-the-art implementations in this arena and present a proactive approach to increasing situational awareness. We propose to make use of augmented reality to carefully design applications aimed at constructs such as amplification and voluntary attention. Finally, we showcase an example application, Pokémon DRIVE, that illustrates the utility of our proposed approach.