Biblio

Filters: Keyword is augmented reality
2020-12-11
Vasiliu, V., Sörös, G..  2019.  Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer. 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). :66—73.

Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.
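
The abstract describes a single forward pass that applies the camera frame as style to the rendered frame as content. The cited paper's actual network is not reproduced here; as a rough illustration of that idea, a minimal AdaIN-style encoder-decoder sketch in PyTorch could look like the following, where the layer sizes and the normalization helper are illustrative assumptions.

```python
# Minimal sketch of single-pass style transfer between a rendered frame (content)
# and the original camera frame (style). Layer sizes and the AdaIN step are
# illustrative assumptions, not the architecture from the cited paper.
import torch
import torch.nn as nn

def adain(content_feat, style_feat, eps=1e-5):
    """Re-normalize content features to match the style features' channel statistics."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

class ShallowStyleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rendered_frame, camera_frame):
        content = self.encoder(rendered_frame)   # rendered overlay = content
        style = self.encoder(camera_frame)       # real camera frame = style
        return self.decoder(adain(content, style))

# One forward pass per frame pair, as in a per-frame post-processing step.
net = ShallowStyleNet()
rendered = torch.rand(1, 3, 128, 128)
camera = torch.rand(1, 3, 128, 128)
blended = net(rendered, camera)  # (1, 3, 128, 128)
```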

2020-08-28
Kolomeets, Maxim, Chechulin, Andrey, Zhernova, Ksenia, Kotenko, Igor, Gaifulina, Diana.  2020.  Augmented reality for visualizing security data for cybernetic and cyberphysical systems. 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :421—428.
The paper discusses the use of virtual (VR) and augmented (AR) reality for visual analytics in information security. The paper answers two questions: “In which areas of information security visualization can VR/AR be useful?” and “How do VR/AR differ from similar visualization methods at the level of information perception?”. The first answer is based on an investigation of information security areas and visualization models that can be used in VR/AR security visualization. The second answer is based on experiments that evaluate the perception of visual components in VR.
Huang, Bai-Ruei, Lin, Chang Hong, Lee, Chia-Han.  2012.  Mobile augmented reality based on cloud computing. 2012 International Conference on Anti-counterfeiting, Security, and Identification. :1—5.
In this paper, we implemented a mobile augmented reality system based on cloud computing. This system uses a mobile device with a camera to capture images of book spines and sends processed features to the cloud. In the cloud, the features are compared with the database and the information of the best-matched book is sent back to the mobile device. The information is then rendered on the display via augmented reality. In order to reduce the transmission cost, the mobile device performs most of the image processing tasks, such as preprocessing, resizing, corner detection, and augmented reality rendering. On the other hand, the cloud is used to perform routine but large-scale feature comparisons. Using the cloud as the database also makes future extension much easier. For our prototype system, we use an Android smartphone as our mobile device and Chunghwa Telecom's hicloud as the cloud.
Knierim, Pascal, Kiss, Francisco, Schmidt, Albrecht.  2018.  Look Inside: Understanding Thermal Flux Through Augmented Reality. 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :170—171.
The transition from high school to university is an exciting time for students that brings many new challenges. Particularly in the fields of science, technology, engineering, and mathematics, the university dropout rate may reach up to 40%. The study of physics relies on many abstract concepts and quantities that are not directly visible, such as energy or heat. We developed a mixed reality application for education that augments the thermal conduction of metal by overlaying a representation of temperature as a false-color visualization directly onto the object. This real-time augmentation avoids split attention and bridges the perception gap by extending what the human eye can see. Augmented and virtual reality environments also allow students to perform experiments that would be impossible to conduct for security or financial reasons. With the application, we try to foster a deeper understanding of the learning material and higher engagement during the studies.
Kommera, Nikitha, Kaleem, Faisal, Shah Harooni, Syed Mubashir.  2016.  Smart augmented reality glasses in cybersecurity and forensic education. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). :279—281.
Augmented reality is changing the way its users see the world. Smart augmented-reality glasses, with a high-resolution optical head-mounted display, supplement views of the real world with video, audio, or graphics projected in front of the user's eye. The area of smart glasses and heads-up display devices is not a new one; however, in the last few years it has seen extensive growth in various fields, including education. Our work takes advantage of students' ability to adapt to new enabling technologies to investigate improvements in teaching techniques in STEM areas and to enhance the effectiveness and efficiency of teaching new course content. In this paper, we propose to focus on the application of smart augmented-reality glasses in cybersecurity education to attract and retain students in STEM. In addition, creative ways to learn cybersecurity via smart glasses will be explored using a Discovery Learning approach. This mode of delivery will allow students to interact with cybersecurity theories in an innovative, interactive and effective way, enhancing their overall experience and experiential learning. With the help of collected data and in-depth analysis of existing smart glasses, the ongoing work will lay the groundwork for developing augmented reality applications that enhance the learning experiences of students. Ultimately, research conducted with the glasses and applications may help to identify the unique skill sets of cybersecurity analysts, learning gaps and learning solutions.
Ferreira, P.M.F.M., Orvalho, J.M., Boavida, F..  2005.  Large Scale Mobile and Pervasive Augmented Reality Games. EUROCON 2005 - The International Conference on "Computer as a Tool". 2:1775—1778.
Ubiquitous or pervasive computing is a new kind of computing, where specialized elements of hardware and software will have such a high level of deployment that their use will be fully integrated with the environment. Augmented reality extends reality with virtual elements but tries to place the computer in a relatively unobtrusive, assistive role. To our knowledge, there is no specialized network middleware solution for large-scale mobile and pervasive augmented reality games. We present work that focuses on the creation of such network middleware for mobile and pervasive entertainment, applied to the area of large-scale augmented reality games. In this context, mechanisms are being studied, proposed and evaluated to deal with issues such as scalability, multimedia data heterogeneity, data distribution and replication, consistency, security, geospatial location and orientation, mobility, quality of service, management of networks and services, discovery, ad-hoc networking and dynamic configuration.
Ferreira, Pedro, Orvalho, Joao, Boavida, Fernando.  2007.  Security and privacy in a middleware for large scale mobile and pervasive augmented reality. 2007 15th International Conference on Software, Telecommunications and Computer Networks. :1—5.
Ubiquitous or pervasive computing is a new kind of computing, where specialized elements of hardware and software will have such a high level of deployment that their use will be fully integrated with the environment. Augmented reality extends reality with virtual elements but tries to place the computer in a relatively unobtrusive, assistive role. In this paper we propose, test and analyse a security and privacy architecture for a previously proposed middleware architecture for mobile and pervasive large-scale augmented reality games, which is the main contribution of this paper. The results show that the security features proposed in the scope of this work do not affect the overall performance of the system.
Dauenhauer, Ralf, Müller, Tobias.  2016.  An Evaluation of Information Connection in Augmented Reality for 3D Scenes with Occlusion. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). :235—237.
Most augmented reality applications connect virtual information to anchors, i.e. physical places or objects, by using spatial overlays or proximity. However, for industrial use cases this is not always feasible, because specific parts must remain fully visible in order to meet work or security requirements. In these situations virtual information must be displayed at alternative positions, while its connections to anchors must still be clearly recognizable. In our previous research we were the first to show that for simple scenes connection lines are most suitable for this. To extend these results to more complex environments, we conducted an experiment on the effects of visual interruptions in connection lines and incorrect occlusion. Completion time and subjective mental effort for search tasks were used as measures. Our findings confirm that connection lines are also preferable for connecting virtual information with anchors in 3D scenes with partial occlusion when an assignment via overlay or close proximity is not feasible. The results further imply that neither incorrectly used depth cues nor missing parts of connection lines make a significant difference in completion time or subjective mental effort. For designers of industrial augmented reality applications, this means that they can choose either visualization based on their needs.
Brinkman, Bo.  2012.  Willing to be fooled: Security and autoamputation in augmented reality. 2012 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities (ISMAR-AMH). :89—90.

What does it mean to trust, or not trust, an augmented reality system? From a computer security point of view, trust in augmented reality represents a real threat to real people. The fact that augmented reality allows the programmer to tinker with the user's senses creates many opportunities for malfeasance. It might be natural to think that if we warn users to be careful it will lower their trust in the system, greatly reducing risk.

2020-06-04
Almeida, L., Lopes, E., Yalçinkaya, B., Martins, R., Lopes, A., Menezes, P., Pires, G..  2019.  Towards natural interaction in immersive reality with a cyber-glove. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2653—2658.

Over the past few years, virtual and mixed reality systems have evolved significantly, yielding highly immersive experiences. Most of the metaphors used for interaction with the virtual environment do not provide the meaningful feedback that users are used to in the real world. This paper proposes a cyber-glove to improve the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. In particular, we are proposing a cyber-glove system that tracks wrist movements, hand orientation and finger movements. It provides a decoupled position of the wrist and hand, which can contribute to a better embodiment in interaction and manipulation tasks. Additionally, detecting the curvature of the fingers aims to make the proprioceptive perception of grasping/releasing gestures more consistent with visual feedback. The cyber-glove system is being developed for VR applications related to real estate promotion, where users walk through the rooms of a house and interact with objects and furniture. This work aims to assess whether glove-based systems can contribute to a higher sense of immersion, embodiment and usability when compared to standard VR hand controller devices (typically button-based). Twenty-two participants tested the cyber-glove system against the HTC Vive controller in a 3D manipulation task, specifically the opening of a virtual door. Metric results showed that 83% of the users performed faster door pushes and described shorter hand paths while wearing the cyber-glove. Subjective results showed that all participants rated the cyber-glove-based interactions as equally or more natural, and 90% of users experienced an equal or significantly increased sense of embodiment.

Bang, Junseong, Lee, Youngho, Lee, Yong-Tae, Park, Wonjoo.  2019.  AR/VR Based Smart Policing For Fast Response to Crimes in Safe City. 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). :470—475.

With advances in information and communication technologies, cities are getting smarter, enhancing the quality of human life. In smart cities, safety (including security) is an essential issue. In this paper, by reviewing several safe city projects, smart city facilities for safety are presented. Based on these facilities, a design for a crime intelligence system is introduced. Then, smart policing with augmented reality (AR) and virtual reality (VR) is explained, concentrating on how immersive technologies can support police activities (i.e., emergency call reception, patrol, investigation, and arrest) in order to reduce the crime rate and respond quickly to emergencies in the safe city.

Asiri, Somayah, Alzahrani, Ahmad A..  2019.  The Effectiveness of Mixed Reality Environment-Based Hand Gestures in Distributed Collaboration. 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). :1—6.

Mixed reality (MR) technologies are widely used in distributed collaborative learning scenarios and have made learning and training more flexible and intuitive. However, there are many challenges in the use of MR due to the difficulty of creating a physical presence, particularly when a physical task is being performed collaboratively. We therefore developed a novel MR system to overcome these limitations and enhance the distributed collaboration user experience. The primary objective of this paper is to explore the potential of an MR-based hand gesture system to enhance the conceptual architecture of MR in terms of both visualization and interaction in distributed collaboration. We propose a synchronous prototype named MRCollab as an immersive collaborative approach that allows two or more users to communicate with a peer based on the integration of several technologies such as video, audio, and hand gestures.

Shang, Jiacheng, Wu, Jie.  2019.  Enabling Secure Voice Input on Augmented Reality Headsets using Internal Body Voice. 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). :1—9.

Voice-based input is usually used as the primary input method for augmented reality (AR) headsets due to the immersive AR experience and good recognition performance. However, recent research has shown that an attacker can inject inaudible voice commands into devices that lack voice verification. Even if we secure voice input with voice verification techniques, an attacker can easily steal the victim's voice using low-cost handy recorders and replay it to voice-based applications. To defend against voice-spoofing attacks, AR headsets should be able to determine whether the voice comes from the person who is wearing the headset. Existing voice-spoofing defense systems are designed for smartphone platforms. Due to the special locations of microphones and loudspeakers on AR headsets, existing solutions are hard to implement on AR headsets. To address this challenge, in this paper, we propose a voice-spoofing defense system for AR headsets that leverages both the internal body propagation and the air propagation of human voices. Experimental results show that our system can successfully accept normal users with an average accuracy of 97% and defend against two types of attacks with an average accuracy of at least 98%.
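
The abstract does not detail the features or classifier used, so the following is only a loose illustration of the underlying intuition: a live wearer's command should show strong coherence between a body-conduction channel and the air microphone, while a replayed recording reaches only the air channel. The frequency band and decision threshold below are invented values, not parameters from the paper.

```python
# Illustrative sketch only: decide whether a voice command comes from the headset
# wearer by checking coherence between a body-conduction channel and the air
# microphone. The 100 Hz - 4 kHz band and the 0.6 threshold are assumed values,
# not parameters from the cited paper.
import numpy as np
from scipy.signal import coherence

def is_live_wearer(body_signal, air_signal, fs=16000, threshold=0.6):
    f, cxy = coherence(body_signal, air_signal, fs=fs, nperseg=1024)
    band = (f >= 100) & (f <= 4000)          # speech-dominant band (assumption)
    return float(np.mean(cxy[band])) >= threshold

# Toy usage: a shared source drives both channels (live) vs. independent noise (replay).
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
live_body = speech + 0.1 * rng.standard_normal(16000)
live_air = speech + 0.1 * rng.standard_normal(16000)
replay_air = rng.standard_normal(16000)      # replayed audio reaches only the air mic

print(is_live_wearer(live_body, live_air))   # True  (high coherence)
print(is_live_wearer(live_body, replay_air)) # False (low coherence)
```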

Cong, Huy Phi, Tran, Ha Huu, Trinh, Anh Vu, Vu, Thang X..  2019.  Modeling a Virtual Reality System with Caching and Computing Capabilities at Mobile User's Device. 2019 6th NAFOSTED Conference on Information and Computer Science (NICS). :393—397.

Virtual reality (VR) is currently a promising technique in both industry and academia due to its potential applications in immersive experiences, including websites, games, tourism, and museums. VR provides amazing 3-dimensional (3D) experiences but requires a very large amount of data, such as images, texture, depth, and focal length. However, in order to bring VR to various devices, especially mobile devices, the required ultra-high transmission rate and extremely low latency are a big challenge. Considering this problem, this paper proposes a novel combination model that transforms the computing capability of the VR device into an equivalent caching amount while maintaining low latency and fast transmission. In addition, classic caching models are applied to the computing and caching capabilities, which extend easily to multi-user models.

2020-06-01
Parikh, Sarang, Sanjay, H A, Shastry, K. Aditya, Amith, K K.  2019.  Multimodal Data Security Framework Using Steganography Approaches. 2019 International Conference on Communication and Electronics Systems (ICCES). :1997–2002.
Information or data is a very crucial resource; hence, securing the information becomes a critical task. The transfer and communication media via which we send this information do not provide data security natively. Therefore, methods for data security have to be devised to protect the information from third parties and unauthorized users. Information-hiding strategies like steganography provide techniques for hiding data so that unauthorized users cannot read it. This work is aimed at creating a novel method of Augmented Reality Steganography (ARSteg). ARSteg uses the cloud for image and key storage and does not alter any attributes of an image, such as its size or colour scheme. Unlike traditional algorithms such as Least Significant Bit (LSB) embedding, which change the attributes of images, our approach uses a well-established encryption algorithm, the Advanced Encryption Standard (AES), for encryption and decryption. The system is further secured by alternative means such as honeypotting, tracking and heuristic intrusion detection, which ensure that the transmitted messages are completely secure and no intrusions are allowed. Intrusions are prevented by detecting them immediately and neutralizing them.
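
The abstract names AES as the encryption step applied before hiding the message. A minimal sketch of that step with the Python cryptography package is shown below; AES-GCM is an assumed mode (the abstract does not specify one), and the cloud key storage and the AR steganographic embedding itself are out of scope.

```python
# Minimal sketch of the AES encrypt/decrypt step described in the abstract.
# AES-GCM is an assumption (the abstract does not name a mode); key handling via
# a cloud store and the AR steganographic embedding itself are not shown.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                        # 96-bit nonce recommended for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                     # payload to be hidden in the image

def decrypt_message(key: bytes, payload: bytes) -> bytes:
    nonce, ciphertext = payload[:12], payload[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)         # in ARSteg the key would live in the cloud
hidden_payload = encrypt_message(key, b"meet at 10:00")
print(decrypt_message(key, hidden_payload))       # b'meet at 10:00'
```
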
2020-03-02
Ullah, Rehmat, Ur Rehman, Muhammad Atif, Kim, Byung-Seo, Sonkoly, Balázs, Tapolcai, János.  2019.  On Pending Interest Table in Named Data Networking based Edge Computing: The Case of Mobile Augmented Reality. 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN). :263–265.
Future networks require fast information response times, scalable content distribution, security and mobility. In order to enable the future Internet, many key enabling technologies have been proposed, such as edge computing (EC) and Named Data Networking (NDN). In EC, substantial compute and storage resources are placed at the edge of the network, in close proximity to end users. Similarly, NDN provides an alternative to the traditional host-centric IP architecture and seems a perfect candidate for distributed computation. Although NDN with EC seems a promising approach to enabling the future Internet, it raises various challenges, such as the expiry time of the Pending Interest Table (PIT) and non-trivial computation on the edge node. In this paper we discuss the expiry time and non-trivial computation in NDN-based EC. We argue that if NDN is integrated into EC, then the PIT expiry time will be affected by the processing time on the edge node. Our analysis shows that integrating NDN into EC without considering the PIT expiry time may degrade network performance in terms of Interest Satisfaction Rate.
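
As a back-of-the-envelope illustration of the argument, the sketch below treats an Interest as satisfied only when the network round trip plus the edge processing time fits within the PIT entry lifetime; all numbers are invented, not measurements from the paper.

```python
# Toy illustration of the PIT-expiry argument: an Interest is satisfied only if the
# edge node finishes computing before the PIT entry times out. All numbers are
# invented for illustration and are not measurements from the cited paper.
import random

def interest_satisfaction_rate(pit_lifetime_ms, rtt_ms, proc_times_ms):
    satisfied = sum(1 for p in proc_times_ms if rtt_ms + p <= pit_lifetime_ms)
    return satisfied / len(proc_times_ms)

random.seed(1)
rtt_ms = 20
# Mobile-AR style tasks with variable compute demand at the edge node.
proc_times_ms = [random.uniform(50, 400) for _ in range(10000)]

for lifetime in (100, 250, 500):
    rate = interest_satisfaction_rate(lifetime, rtt_ms, proc_times_ms)
    print(f"PIT lifetime {lifetime} ms -> Interest Satisfaction Rate {rate:.2%}")
```
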
2020-02-17
Kim, Joonsoo, Kim, Kyeongho, Jang, Moonsu.  2019.  Cyber-Physical Battlefield Platform for Large-Scale Cybersecurity Exercises. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1–19.
In this study, we propose a platform upon which a cyber security exercise environment can be built efficiently for national critical infrastructure protection, i.e. a cyber-physical battlefield (CPB), to simulate actual ICS/SCADA systems in operation. Among various design considerations, this paper mainly discusses scalability, mobility, reality, extensibility, consideration of domain or vendor specificities, and the visualization of physical facilities and the damage caused to them by cyber attacks. The main purpose of the study was to develop a platform that maximizes coverage of these design considerations. We discuss the construction of the platform through the final design choices. The features we attempt to achieve in the platform are closely related to the target cyber exercise format. Design choices were made considering the construction of a realistic ICS/SCADA exercise environment that meets the goals and matches the characteristics of the Cyber Conflict Exercise (CCE), an annual national exercise organized by the National Security Research Institute (NSR) of South Korea. CCE is a real-time attack-defense battlefield drill between 10 red teams, who try to penetrate a multi-level organization network, and 16 blue teams, who try to defend the network. The exercise platform provides scalability and a significant degree of freedom in the design of a very large-scale CCE environment. It also allowed us to fuse techniques such as 3D printing and augmented reality (AR) to achieve the exercise goals. The CPB platform can also be utilized in various ways for different types of cybersecurity exercise. Its successful application in Locked Shields 2018 (LS18) is strong evidence of this; it showed the platform's great potential to integrate high-level strategic or operational exercises effectively with low-level technical exercises. This paper also discusses several possible improvements of the platform for better integration, as well as the various exercise environments that can be constructed given the scalability and extensibility of the platform.
2019-12-30
Ahn, Surin, Gorlatova, Maria, Naghizadeh, Parinaz, Chiang, Mung, Mittal, Prateek.  2018.  Adaptive Fog-Based Output Security for Augmented Reality. Proceedings of the 2018 Morning Workshop on Virtual Reality and Augmented Reality Network. :1–6.
Augmented reality (AR) technologies are rapidly being adopted across multiple sectors, but little work has been done to ensure the security of such systems against potentially harmful or distracting visual output produced by malicious or bug-ridden applications. Past research has proposed to incorporate manually specified policies into AR devices to constrain their visual output. However, these policies can be cumbersome to specify and implement, and may not generalize well to complex and unpredictable environmental conditions. We propose a method for generating adaptive policies to secure visual output in AR systems using deep reinforcement learning. This approach utilizes a local fog computing node, which runs training simulations to automatically learn an appropriate policy for filtering potentially malicious or distracting content produced by an application. Through empirical evaluations, we show that these policies are able to intelligently displace AR content to reduce obstruction of real-world objects, while maintaining a favorable user experience.
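
The deep reinforcement learning training performed on the fog node is not reproduced here. As a simplified stand-in for the behavior such a learned policy exhibits, the sketch below greedily displaces an AR overlay to the position that obstructs the least important real-world content; the importance grid, stride and candidate positions are illustrative assumptions.

```python
# Simplified stand-in for the behavior such a policy learns: given an importance map
# of real-world content, place (displace) an AR overlay where it obstructs the least.
# The fog node in the paper learns this with deep RL; this greedy scorer, the grid,
# and the weights are illustrative assumptions only.
import numpy as np

def obstruction_cost(importance_map, top, left, h, w):
    region = importance_map[top:top + h, left:left + w]
    return float(region.sum())

def place_overlay(importance_map, overlay_shape, stride=8):
    h, w = overlay_shape
    H, W = importance_map.shape
    best = None
    for top in range(0, H - h + 1, stride):
        for left in range(0, W - w + 1, stride):
            cost = obstruction_cost(importance_map, top, left, h, w)
            if best is None or cost < best[0]:
                best = (cost, top, left)
    return best  # (cost, top, left) of the least-obstructive placement

# Toy scene: a high-importance region (e.g. a pedestrian) in the middle of the view.
importance = np.zeros((120, 160))
importance[40:80, 60:100] = 1.0
print(place_overlay(importance, overlay_shape=(30, 40)))
```
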
2019-09-23
Zhang, Caixia, Bai, Gang.  2018.  Using Hybrid Features of QR Code to Locate and Track in Augmented Reality. Proceedings of the 2018 International Conference on Information Science and System. :273–279.
Augmented Reality (AR) is a technique which seamlessly integrates virtual 3D models into images of the real scene in real time. Using the QR code as the identification mark, an algorithm is proposed to extract the virtual straight lines of the QR code and to locate and track the camera based on these hybrid features, thus avoiding the failures that can occur when locating and tracking with feature points alone. The experimental results show that the method of combining straight lines with feature points is better than using only straight lines or only feature points. Further, an AR system is developed.
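
The paper's hybrid combination of straight lines and feature points is not reproduced here; as a point-only illustration of recovering camera pose from QR-code corners with OpenCV, a sketch could look like the following, where the code size and camera intrinsics are placeholders.

```python
# Point-only illustration of camera pose from QR-code corners with OpenCV.
# The cited paper additionally exploits the code's straight-line (edge) features;
# that hybrid step is not shown. Code size and intrinsics below are placeholders.
import cv2
import numpy as np

QR_SIZE = 0.05  # physical edge length of the printed QR code in metres (assumed)

# 3D corner coordinates in the QR code's own coordinate frame (z = 0 plane).
object_points = np.array([
    [0, 0, 0], [QR_SIZE, 0, 0], [QR_SIZE, QR_SIZE, 0], [0, QR_SIZE, 0],
], dtype=np.float32)

camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float32)   # placeholder intrinsics
dist_coeffs = np.zeros(5, dtype=np.float32)

def estimate_pose(frame):
    found, corners = cv2.QRCodeDetector().detect(frame)
    if not found:
        return None
    image_points = corners.reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None  # camera pose relative to the QR code

# Usage: pose = estimate_pose(cv2.imread("frame_with_qr.png"))
```
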
2019-02-22
Roberts, Jasmine.  2018.  Using Affective Computing for Proxemic Interactions in Mixed-Reality. Proceedings of the Symposium on Spatial User Interaction. :176-176.

Immersive technologies have been touted as empathetic mediums. This capability has yet to be fully explored through machine learning integration. Our demo seeks to explore proxemics in mixed-reality (MR) human-human interactions. The author developed a system in which spatial features can be manipulated in real time by identifying emotions corresponding to unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One is used as the interactive interface, the first commercial spatial-computing head-mounted (virtual retinal) display (HMD). A novel spatial user interface visualization element is prototyped that leverages the affordances of mixed reality by introducing both a spatial and an affective component to interfaces.

Hartmann, Jeremy, Vogel, Daniel.  2018.  An Evaluation of Mobile Phone Pointing in Spatial Augmented Reality. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :LBW122:1-LBW122:6.

We investigate mobile phone pointing in Spatial Augmented Reality (SAR). Three pointing methods are compared: raycasting, viewport, and tangible (i.e. direct contact), using a five-projector "full" SAR environment with targets distributed on varying surfaces. Participants were permitted free movement in the environment to create realistic variations in target occlusion and target incident angle. Our results show raycasting is fastest for high and distant targets, tangible is fastest for targets in close proximity to the user, and viewport performance lies in between.

Zhou, Bing, Guven, Sinem, Tao, Shu, Ye, Fan.  2018.  Pose-Assisted Active Visual Recognition in Mobile Augmented Reality. Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. :756-758.

While existing visual recognition approaches, which rely on 2D images to train their underlying models, work well for object classification, recognizing the changing state of a 3D object requires addressing several additional challenges. This paper proposes an active visual recognition approach to this problem, leveraging camera pose data available on mobile devices. With this approach, the state of a 3D object, which captures its appearance changes, can be recognized in real time. Our novel approach selects informative video frames filtered by 6-DOF camera poses to train a deep learning model to recognize object state. We validate our approach through a prototype for Augmented Reality-assisted hardware maintenance.
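
The abstract does not spell out the selection criterion, so the sketch below illustrates one plausible reading of pose-filtered frame selection: a frame is kept for training only when its 6-DOF pose differs sufficiently from every frame already kept. The distance and angle thresholds are assumptions.

```python
# Illustrative sketch of pose-driven frame selection: keep a frame for training only
# when its 6-DOF camera pose differs enough from already-kept frames, so the object
# state is captured from diverse viewpoints. Thresholds are assumptions, not values
# from the cited paper.
import numpy as np

def select_keyframes(poses, min_trans=0.10, min_rot_deg=15.0):
    """poses: list of (position_xyz, rotation_axis_angle_vector) tuples."""
    kept = []
    for idx, (pos, rot) in enumerate(poses):
        pos, rot = np.asarray(pos, float), np.asarray(rot, float)
        novel = True
        for kept_idx in kept:
            kpos, krot = map(np.asarray, poses[kept_idx])
            trans = np.linalg.norm(pos - kpos)
            rot_deg = np.degrees(np.linalg.norm(rot - krot))  # coarse rotation change
            if trans < min_trans and rot_deg < min_rot_deg:
                novel = False
                break
        if novel:
            kept.append(idx)
    return kept  # indices of frames worth feeding to the recognizer

# Toy usage: small jitter around one viewpoint vs. a genuinely new viewpoint.
poses = [([0, 0, 1.0], [0, 0, 0]),
         ([0.01, 0, 1.0], [0, 0, 0.01]),
         ([0.5, 0.2, 1.2], [0, 0.5, 0])]
print(select_keyframes(poses))  # [0, 2]
```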

Pointon, Grant, Thompson, Chelsey, Creem-Regehr, Sarah, Stefanucci, Jeanine, Joshi, Miti, Paris, Richard, Bodenheimer, Bobby.  2018.  Judging Action Capabilities in Augmented Reality. Proceedings of the 15th ACM Symposium on Applied Perception. :6:1-6:8.

The utility of mediated environments increases when environmental scale (size and distance) is perceived accurately. We present the use of perceived affordances (judgments of action capabilities) as an objective way to assess space perception in an augmented reality (AR) environment. The current study extends the previous use of this methodology in virtual reality (VR) to AR. We tested two locomotion-based affordance tasks. In the first experiment, observers judged whether they could pass through a virtual aperture presented at different widths and distances, and also judged the distance to the aperture. In the second experiment, observers judged whether they could step over a virtual gap on the ground. In both experiments, the virtual objects were displayed with the HoloLens in a real laboratory environment. We demonstrate that affordances for passing through and perceived distance to the aperture are similar in AR to those measured in the real world, but that judgments of gap-crossing in AR were underestimated. These differences across the two affordances may result from the different spatial characteristics of the virtual objects (on the ground versus extending off the ground).

Rustagi, Taru, Yoo, Kyungjin.  2018.  AR Navigation Solution Using Vector Tiles. Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology. :71:1-71:2.

This study discusses the results and findings of an augmented reality navigation app created by uploading vector data to an online mapping service for indoor navigation. The main motivation for this research is that indoor navigation solutions relying on GPS signals are problematic, as these signals are sparse inside buildings. The data was uploaded in the form of GeoJSON files to MapBox, which relayed the data to the app through an API in the form of tilesets. The application converted the tilesets into a miniaturized map, calculated the navigation path, and then overlaid that navigation line onto the floor via the camera (see the sketch below). Once the project setup was completed, multiple navigation paths were tested numerous times between different sync points and destination rooms. Their accuracy, ease of access and several other factors, along with their issues, were recorded. The testing revealed that the navigation system was not only accurate despite the lack of GPS signal, but also detected device motion precisely. Furthermore, the navigation system did not take much time to generate the navigation path, as the app processed the data tile by tile. The application was also able to accurately measure the ground plane along with the walls, overlaying the navigation line precisely. However, a few observations indicated factors that affected the accuracy of the navigation, and testing revealed areas where major improvements could be made to both accuracy and ease of access.
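
As an illustration of the path-computation step mentioned above (not the app's actual algorithm), the sketch below runs a breadth-first search over a walkable grid that could be derived from the building's vector tiles, producing the sequence of cells to overlay as the navigation line.

```python
# Illustrative sketch only: once the building's vector tiles are turned into a walkable
# grid, the navigation line can be computed with a simple breadth-first search. The
# grid, start/goal, and 4-connected movement are assumptions, not the app's algorithm.
from collections import deque

def shortest_path(walkable, start, goal):
    """walkable: 2D list of booleans; start/goal: (row, col) cells."""
    rows, cols = len(walkable), len(walkable[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route between sync point and destination

# Toy floor: a corridor with one wall gap; the returned cells become the overlaid line.
floor = [[True, True, True, True],
         [False, False, True, False],
         [True, True, True, True]]
print(shortest_path(floor, start=(0, 0), goal=(2, 0)))
```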

Aftab, Muhammad, Chau, Sid Chi-Kin, Khonji, Majid.  2018.  Enabling Self-Aware Smart Buildings by Augmented Reality. Proceedings of the Ninth International Conference on Future Energy Systems. :261-265.

Conventional HVAC control systems are usually incognizant of the physical structures and materials of buildings. These systems merely follow pre-set HVAC control logic based on abstract building thermal response models, which are rough approximations to true physical models, ignoring dynamic spatial variations in built environments. To enable more accurate and responsive HVAC control, this paper introduces the notion of self-aware smart buildings, such that buildings are able to explicitly construct physical models of themselves (e.g., incorporating building structures and materials, and thermal flow dynamics). The question is how to enable self-aware buildings that automatically acquire dynamic knowledge of themselves. This paper presents a novel approach using augmented reality. The extensive user-environment interactions in augmented reality not only can provide intuitive user interfaces for building systems, but also can capture the physical structures and possibly materials of buildings accurately to enable real-time building simulation and control. This paper presents a building system prototype incorporating augmented reality, and discusses its applications.