2018-01-10
Frieslaar, Ibraheem, Irwin, Barry.  2017.  Investigating the Effects Various Compilers Have on the Electromagnetic Signature of a Cryptographic Executable. Proceedings of the South African Institute of Computer Scientists and Information Technologists. :15:1–15:10.

This research investigates changes in the electromagnetic (EM) signatures of a cryptographic binary executable based on compile-time parameters to the GNU and clang compilers. The source code was compiled and executed on a Raspberry Pi 2, which utilizes the ARMv7 CPU. Various optimization flags were enabled at compile-time and the binary executable's EM signatures were captured at run-time. It is demonstrated that the GNU and clang compilers produced different EM signatures on program execution. The results indicated that the EM signature of the program changes when the O3 optimization flag is used. Additionally, the g++ compiler required fewer instructions to run the executable, which translated into fewer leaked EM emissions. The EM data from the various compilers under different optimization levels was used as input to a correlation power analysis attack. The results indicated that recovery of partial AES-128 encryption keys was possible. The fewest subkeys were recovered when the clang compiler was used with level O2 optimization. Finally, the research was able to recover 15 of the 16 AES-128 subkeys from the Pi.
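The correlation power analysis step this abstract describes can be sketched in a few lines. The sketch below is a deliberately simplified, single-sample model assuming first-round Hamming-weight leakage of plaintext XOR key (real attacks typically target the S-box output over full traces); the function names and parameters are illustrative, not the paper's code.

```python
import numpy as np

# Hamming weight of every possible byte value, used as the leakage model.
HW = np.array([bin(b).count("1") for b in range(256)])

def cpa_recover_key_byte(traces, plaintexts):
    """Recover one key byte by correlating hypothetical leakage
    (HW of plaintext XOR guess) against measured EM/power samples."""
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        hyp = HW[plaintexts ^ guess].astype(float)
        corr = abs(np.corrcoef(hyp, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess
```

With enough traces, the correct guess produces the highest correlation peak, which is how partial AES-128 subkeys can be recovered from a device's emissions.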

Zinzindohoué, Jean-Karim, Bhargavan, Karthikeyan, Protzenko, Jonathan, Beurdouche, Benjamin.  2017.  HACL*: A Verified Modern Cryptographic Library. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1789–1806.
HACL* is a verified portable C cryptographic library that implements modern cryptographic primitives such as the ChaCha20 and Salsa20 encryption algorithms, Poly1305 and HMAC message authentication, SHA-256 and SHA-512 hash functions, the Curve25519 elliptic curve, and Ed25519 signatures. HACL* is written in the F* programming language and then compiled to readable C code. The F* source code for each cryptographic primitive is verified for memory safety, mitigations against timing side-channels, and functional correctness with respect to a succinct high-level specification of the primitive derived from its published standard. The translation from F* to C preserves these properties and the generated C code can itself be compiled via the CompCert verified C compiler or mainstream compilers like GCC or CLANG. When compiled with GCC on 64-bit platforms, our primitives are as fast as the fastest pure C implementations in OpenSSL and libsodium, significantly faster than the reference C code in TweetNaCl, and between 1.1x-5.7x slower than the fastest hand-optimized vectorized assembly code in SUPERCOP. HACL* implements the NaCl cryptographic API and can be used as a drop-in replacement for NaCl libraries like libsodium and TweetNaCl. HACL* provides the cryptographic components for a new mandatory ciphersuite in TLS 1.3 and is being developed as the main cryptographic provider for the miTLS verified implementation. Primitives from HACL* are also being integrated within Mozilla's NSS cryptographic library. Our results show that writing fast, verified, and usable C cryptographic libraries is now practical.
Proy, Julien, Heydemann, Karine, Berzati, Alexandre, Cohen, Albert.  2017.  Compiler-Assisted Loop Hardening Against Fault Attacks. ACM Trans. Archit. Code Optim.. 14:36:1–36:25.
Secure elements widely used in smartphones, digital consumer electronics, and payment systems are subject to fault attacks. To thwart such attacks, software protections are manually inserted requiring experts and time. The explosion of the Internet of Things (IoT) in home, business, and public spaces motivates the hardening of a wider class of applications and the need to offer security solutions to non-experts. This article addresses the automated protection of loops at compilation time, covering the widest range of control- and data-flow patterns, in both shape and complexity. The security property we consider is that a sensitive loop must always perform the expected number of iterations; otherwise, an attack must be reported. We propose a generic compile-time loop hardening scheme based on the duplication of termination conditions and of the computations involved in the evaluation of such conditions. We also investigate how to preserve the security property along the compilation flow while enabling aggressive optimizations. We implemented this algorithm in LLVM 4.0 at the Intermediate Representation (IR) level in the backend. On average, the compiler automatically hardens 95% of the sensitive loops of typical security benchmarks, and 98% of these loops are shown to be robust to simulated faults. Performance and code size overhead remain quite affordable, at 12.5% and 14%, respectively.
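The core idea of duplicating termination conditions can be conveyed with a small sketch. The paper implements the transformation inside LLVM at the IR level; the Python stand-in below only illustrates the principle, and the names and fault-reporting behaviour are assumptions.

```python
def hardened_sum(values):
    """Loop with a duplicated counter and termination check: a fault
    that skips or corrupts one copy of the counter is detected because
    the two copies (or the final iteration count) no longer agree."""
    i, i_dup, total = 0, 0, 0
    n = len(values)
    while i < n:
        if i != i_dup:                      # redundant counter check
            raise RuntimeError("fault detected: counters diverged")
        total += values[i]
        i += 1
        i_dup += 1
    if i != n or i_dup != n:                # duplicated termination check
        raise RuntimeError("fault detected: wrong iteration count")
    return total
```

The security property is exactly the one stated in the abstract: the loop either performs the expected number of iterations or an attack is reported.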
Almeida, José Bacelar, Barbosa, Manuel, Barthe, Gilles, Dupressoir, François, Grégoire, Benjamin, Laporte, Vincent, Pereira, Vitor.  2017.  A Fast and Verified Software Stack for Secure Function Evaluation. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1989–2006.
We present a high-assurance software stack for secure function evaluation (SFE). Our stack consists of three components: i. a verified compiler (CircGen) that translates C programs into Boolean circuits; ii. a verified implementation of Yao's SFE protocol based on garbled circuits and oblivious transfer; and iii. transparent application integration and communications via FRESCO, an open-source framework for secure multiparty computation (MPC). CircGen is a general purpose tool that builds on CompCert, a verified optimizing compiler for C. It can be used in arbitrary Boolean circuit-based cryptography deployments. The security of our SFE protocol implementation is formally verified using EasyCrypt, a tool-assisted framework for building high-confidence cryptographic proofs, and it leverages a new formalization of garbled circuits based on the framework of Bellare, Hoang, and Rogaway (CCS 2012). We conduct a practical evaluation of our approach, and conclude that it is competitive with state-of-the-art (unverified) approaches. Our work provides concrete evidence of the feasibility of building efficient, verified, implementations of higher-level cryptographic systems. All our development is publicly available.
Wang, S., Yan, Q., Chen, Z., Yang, B., Zhao, C., Conti, M..  2017.  TextDroid: Semantics-based detection of mobile malware using network flows. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :18–23.

Widespread mobile malware has become a serious issue in increasingly popular mobile networks. Most mobile malware relies on the network interface to coordinate operations, steal users' private information, and launch attack activities. In this paper, we propose TextDroid, an effective and automated malware detection method combining natural language processing and machine learning. TextDroid extracts distinguishable features (n-gram sequences) to characterize malware samples. A malware detection model is then developed to detect mobile malware using a Support Vector Machine (SVM) classifier. The trained SVM model presents superior performance on two different data sets, with the malware detection rate reaching 96.36% on the test set and 76.99% on an app set captured in the wild. In addition, we design a flow header visualization method to visualize the highlighted texts generated during the apps' network interactions, which assists security researchers in understanding the apps' complex network activities.
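The n-gram feature idea can be sketched as follows; for brevity a nearest-centroid scorer stands in for the paper's SVM classifier, and the character-level tokenization is an assumption (TextDroid's actual pipeline operates on network-flow text).

```python
import math
from collections import Counter

def ngram_features(text, n=3):
    """Character n-gram counts of a flow's textual payload."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def centroid(samples, n=3):
    """Sum the n-gram counts of all samples of one class."""
    total = Counter()
    for s in samples:
        total.update(ngram_features(s, n))
    return total

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, centroids):
    """Assign the label whose class centroid is most similar."""
    return max(centroids, key=lambda lbl: cosine(ngram_features(text), centroids[lbl]))
```

An SVM trained on the same n-gram count vectors would replace the centroid comparison in a faithful reproduction.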

Buber, E., Dırı, B., Sahingoz, O. K..  2017.  Detecting phishing attacks from URL by using NLP techniques. 2017 International Conference on Computer Science and Engineering (UBMK). :337–342.

Nowadays, cyber attacks affect many institutions and individuals, resulting in serious financial losses for them. The phishing attack is one of the most common types of cyber attacks, aimed at exploiting people's weaknesses to obtain confidential information about them. This type of cyber attack threatens almost all internet users and institutions. Reducing the financial losses caused by such attacks requires both user awareness and applications able to detect them. In the last quarter of 2016, Turkey ranked second behind China, with an impact rate of approximately 43%, among the 45 countries in the Phishing Attack Analysis report. In this study, the characteristics of this type of attack are first explained, and then a machine-learning-based system is proposed to detect them. In the proposed system, features were extracted using Natural Language Processing (NLP) techniques, and URLs used in phishing attacks were examined, using these features, before being opened. Many tests were applied to the system, and the best-performing algorithm among those tested was Random Forest, with a success rate of 89.9%.
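A sketch of the kind of lexical, word-based URL features such a system might extract is shown below; the feature names, the suspicious-word list, and the token split are illustrative assumptions rather than the paper's actual NLP feature set, and a classifier such as Random Forest would then be trained on these vectors.

```python
import re
from urllib.parse import urlparse

# Hypothetical list of words often abused in phishing URLs.
SUSPICIOUS = {"login", "secure", "verify", "account", "update", "bank"}

def url_features(url):
    """Turn a URL into a small numeric feature vector."""
    host = urlparse(url).netloc
    words = [w for w in re.split(r"[\W_]+", url.lower()) if w]
    return {
        "url_len": len(url),
        "num_digits": sum(c.isdigit() for c in url),
        "subdomain_depth": host.count("."),
        "num_hyphens": host.count("-"),
        "suspicious_words": sum(w in SUSPICIOUS for w in words),
    }
```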

Thaler, S., Menkovski, V., Petkovic, M..  2017.  Towards a neural language model for signature extraction from forensic logs. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Signature extraction is a critical preprocessing step in forensic log analysis because it enables sophisticated analysis techniques to be applied to logs. Currently, most signature extraction frameworks use either rule-based approaches or hand-crafted algorithms. Rule-based systems are error-prone and require high maintenance effort. Hand-crafted algorithms use heuristics and tend to work well only for specialized use cases. In this paper we present a novel approach to extracting signatures from forensic logs that is based on a neural language model. This language model learns to identify mutable and non-mutable parts in a log message, and we use this information to extract signatures. Neural language models have been shown to work extremely well for learning complex relationships in natural language text. We experimentally demonstrate that our model can detect which parts are mutable with an accuracy of 86.4%. We also show how the extracted signatures can be used to cluster log lines.
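The mutable/non-mutable distinction at the heart of signature extraction can be illustrated with a simple frequency-based stand-in. The paper uses a neural language model for this tagging; the sketch below only mimics the kind of output such a model produces.

```python
def extract_signature(log_lines):
    """Mark token positions that vary across similar log lines as
    mutable ('*'); constant positions form the signature."""
    token_rows = [line.split() for line in log_lines]
    length = min(len(row) for row in token_rows)
    signature = []
    for i in range(length):
        values = {row[i] for row in token_rows}
        signature.append(values.pop() if len(values) == 1 else "*")
    return " ".join(signature)
```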
Barreira, R., Pinheiro, V., Furtado, V..  2017.  A framework for digital forensics analysis based on semantic role labeling. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :66–71.
This article describes a framework for semantic annotation of texts submitted for forensic analysis, based on Frame Semantics and a knowledge base of Forensic Frames (FrameFOR). We demonstrate through experimental evaluations that applying Semantic Role Labeling (SRL) and Natural Language Processing (NLP) techniques in digital forensics increases the performance of forensic experts in terms of agility, precision and recall.
Devyatkin, D., Smirnov, I., Ananyeva, M., Kobozeva, M., Chepovskiy, A., Solovyev, F..  2017.  Exploring linguistic features for extremist texts detection (on the material of Russian-speaking illegal texts). 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :188–190.

In this paper we present the results of research on automatic extremist text detection. For this purpose, an experimental dataset in the Russian language was created; under Russian legislation we cannot make it publicly available. We compared various classification methods (multinomial naive Bayes, logistic regression, linear SVM, random forest, and gradient boosting) and evaluated the contribution of differentiating features (lexical, semantic and psycholinguistic) to classification quality. The results of the experiments show that psycholinguistic and semantic features are promising for extremist text detection.

Bhattacharjee, S. Das, Talukder, A., Al-Shaer, E., Doshi, P..  2017.  Prioritized active learning for malicious URL detection using weighted text-based features. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :107–112.

Data analytics is being increasingly used in cyber-security problems and has been found useful in cases where data volumes and heterogeneity make manual assessment by security experts cumbersome. In practical cyber-security scenarios involving data-driven analytics, obtaining data with annotations (i.e. ground-truth labels) is a challenging and well-known limiting factor for many supervised security analytics tasks. Significant portions of large datasets typically remain unlabelled, as annotation is extensively manual and requires a huge amount of expert intervention. In this paper, we propose an effective active learning approach that can efficiently address this limitation in a practical cyber-security problem, phishing categorization, whereby we use a human-machine collaborative approach to design a semi-supervised solution. An initial classifier is learnt on a small amount of annotated data and is then gradually updated in an iterative manner by shortlisting, from the large pool of unlabelled data, only those samples most likely to quickly influence classifier performance. Prioritized active learning shows significant promise for faster convergence of classification performance in a batch learning framework, thus requiring even less effort for human annotation. A useful feature-weight update technique combined with active learning shows promising classification performance for categorizing phishing/malicious URLs without requiring a large amount of annotated training samples during training. In experiments with several collections of PhishMonger's Targeted Brand dataset, the proposed method improves over the baseline by as much as 12%.
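The iterative update described above follows the usual active-learning loop. The sketch below shows a generic uncertainty-driven version under stated assumptions; the paper's prioritization and feature-weight updates are richer, and the function names here are hypothetical.

```python
def prioritized_active_learning(train, unlabeled, fit, uncertainty, oracle,
                                batch=2, rounds=3):
    """Generic active-learning loop: repeatedly train, pick the most
    uncertain unlabeled samples, query the human oracle, and retrain.

    fit(labeled)          -> model
    uncertainty(model, x) -> higher means more informative
    oracle(x)             -> ground-truth label from the annotator
    """
    labeled = list(train)
    pool = list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        pool.sort(key=lambda x: uncertainty(model, x), reverse=True)
        picked, pool = pool[:batch], pool[batch:]
        labeled += [(x, oracle(x)) for x in picked]
        if not pool:
            break
    return fit(labeled)
```

Because only the samples nearest the decision boundary are sent to the annotator, the classifier converges with far fewer labels than labelling the whole pool.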

Zheng, Y., Shi, Y., Guo, K., Li, W., Zhu, L..  2017.  Enhanced word embedding with multiple prototypes. 2017 4th International Conference on Industrial Economics System and Industrial Security Engineering (IEIS). :1–5.

Word representation is one of the basic methods in natural language processing; it maps a word into a dense real-valued vector space based on the hypothesis that words with similar contexts have similar meanings. Models like NNLM, C&W, CBOW, and Skip-gram have been designed for learning word embeddings and are widely used in many NLP tasks. However, these models assume that a word has only one meaning, which is contrary to real language use. In this paper we propose a new word unit with multiple meanings and an algorithm to distinguish them by context. This new unit can be embedded in most language models, yielding a series of efficient representations by learning variable embeddings. We evaluate a new model, MCBOW, that integrates CBOW with our word unit on a word similarity evaluation task and some downstream experiments; the results indicate that our new model can learn different meanings of a word and achieves better results on some other tasks.
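The "one vector per word" limitation the authors address is commonly handled by clustering a word's occurrence contexts into several sense prototypes. The sketch below is a toy k-means stand-in for that step, assuming context vectors are already available; it is not the paper's MCBOW training procedure.

```python
import numpy as np

def learn_prototypes(contexts, k=2, iters=10, seed=0):
    """Cluster one word's context vectors into k sense prototypes."""
    rng = np.random.default_rng(seed)
    centers = contexts[rng.choice(len(contexts), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(contexts[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = contexts[assign == j].mean(axis=0)
    return centers

def sense_of(context, centers):
    """Pick the prototype (word sense) closest to a new context."""
    return int(np.linalg.norm(centers - context, axis=1).argmin())
```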

Meltsov, V. Y., Lesnikov, V. A., Dolzhenkova, M. L..  2017.  Intelligent system of knowledge control with the natural language user interface. 2017 International Conference "Quality Management,Transport and Information Security, Information Technologies" (IT QM IS). :671–675.
The paper considers the possibility and necessity of using, in modern control and training systems with a natural language interface, methods and mechanisms characteristic of knowledge processing systems. This symbiosis assumes the introduction of specialized inference machines into the testing systems. For the effective operation of such an intelligent interpreter, it is necessary to translate the user's answers into one of the known forms of knowledge representation, for example, into expressions (rules) of first-order predicate calculus. A lexical processor performing morphological, syntactic and semantic analysis solves this task. To simplify further work with the rules, the Skolem transformation is used, which eliminates quantifiers and presents semantic structures in the form of sequents (clauses, disjuncts). The basic principles of operation of the inference machine, the main component of the developed intellectual subsystem, are described. To improve the machine's performance, one of the fastest methods was chosen: a parallel method of deductive inference based on the division of clauses. The parallelism inherent in the method, together with the use of a dataflow architecture, allows parallel computations in the inference machine to be implemented without additional effort on the part of the programmer. All this makes it possible to reduce the time for comparing the sequents stored in the knowledge base several-fold compared to traditional inference mechanisms that implement various versions of the resolution principle. Formulas and features of the technique for numerically estimating the user's answers are given.
In general, developing human-computer dialogue capabilities in test systems through a specialized knowledge-processing module will increase the intelligence of such systems, allow the semantics of sentences to be considered directly, determine more accurately the relevance of the user's response to the reference knowledge and, ultimately, overcome the skeptical attitude of many managers toward machine testing systems.
Alzhrani, K., Rudd, E. M., Chow, C. E., Boult, T. E..  2017.  Automated U.S diplomatic cables security classification: Topic model pruning vs. classification based on clusters. 2017 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.
The U.S. Government has been the target of cyberattacks from all over the world. Just recently, former President Obama accused the Russian government of leaking emails to Wikileaks and declared that the U.S. might be forced to respond. While Russia denied involvement, it is clear that the U.S. has to take some defensive measures to protect its data infrastructure. Insider threats have been the cause of other sensitive information leaks too, including the infamous Edward Snowden incident. Most of the recent leaks were in the form of text. Due to the nature of text data, security classifications are assigned manually. In an adversarial environment, insiders can leak texts through e-mail, printers, or other untrusted channels. The optimal defense is to automatically detect the security class of unstructured text and enforce the appropriate protection mechanism without degrading services or daily tasks. Unfortunately, existing Data Leak Prevention (DLP) systems are not well suited to detecting unstructured texts. In this paper, we compare two recent approaches in the literature for text security classification, evaluating them on actual sensitive text data from the WikiLeaks dataset.
Gupta, P., Goswami, A., Koul, S., Sartape, K..  2017.  IQS-intelligent querying system using natural language processing. 2017 International conference of Electronics, Communication and Aerospace Technology (ICECA). 2:410–413.
Modern databases contain an enormous amount of information stored in a structured format. This information is processed to acquire knowledge. However, the process of information extraction from a Database System is cumbersome for non-expert users as it requires an extensive knowledge of DBMS languages. Therefore, an inevitable need arises to bridge the gap between user requirements and the provision of a simple information retrieval system whereby the role of a specialized Database Administrator is annulled. In this paper, we propose a methodology for building an Intelligent Querying System (IQS) by which a user can fire queries in his own (natural) language. The system first parses the input sentences and then generates SQL queries from the natural language expressions of the input. These queries are in turn mapped with the desired information to generate the required output. Hence, it makes the information retrieval process simple, effective and reliable.
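A single-template illustration of the parse-then-generate step is below; the IQS paper handles far more general sentences, and this regex pattern, the table/column names, and the quoting are assumptions for demonstration only.

```python
import re

def nl_to_sql(question):
    """Translate one fixed question shape into SQL:
    'show <column> of <table> where <field> is <value>'."""
    m = re.match(r"show (\w+) of (\w+) where (\w+) is (\w+)", question.lower())
    if not m:
        raise ValueError("question does not match the supported template")
    column, table, field, value = m.groups()
    return f"SELECT {column} FROM {table} WHERE {field} = '{value}'"
```

A full system would replace the single regex with proper parsing and map recognized phrases onto the database schema.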
Bönsch, Andrea, Trisnadi, Robert, Wendt, Jonathan, Vierjahn, Tom, Kuhlen, Torsten W..  2017.  Score-based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-agent Systems. Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. :74:1–74:2.
Controlling user-agent-interactions by means of an external operator includes selecting the virtual interaction partners fast and faultlessly. However, especially in immersive scenes with a large number of potential partners, this task is non-trivial. Thus, we present a score-based recommendation system supporting an operator in the selection task. Agents are recommended as potential partners based on two parameters: the user's distance to the agents and the user's gazing direction. An additional graphical user interface (GUI) provides elements for configuring the system and for applying actions to those agents which the operator has confirmed as interaction partners.
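A minimal version of such a two-parameter score might combine a normalized distance term with the cosine of the gaze angle; the weights and normalization below are assumptions, not the paper's actual formula.

```python
import math

def agent_score(user_pos, user_dir, agent_pos,
                w_dist=0.5, w_gaze=0.5, max_dist=10.0):
    """Score an agent as a recommendation candidate from the user's
    distance to it and how close it lies to the gazing direction (2-D)."""
    dx, dy = agent_pos[0] - user_pos[0], agent_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    dist_score = max(0.0, 1.0 - dist / max_dist)      # 1 = right next to user
    if dist == 0:
        gaze_score = 1.0                              # agent at user position
    else:
        cos_angle = (user_dir[0] * dx + user_dir[1] * dy) / (math.hypot(*user_dir) * dist)
        gaze_score = (cos_angle + 1.0) / 2.0          # map [-1, 1] to [0, 1]
    return w_dist * dist_score + w_gaze * gaze_score
```

The operator's GUI would then rank agents by this score and present the top candidates for confirmation.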
Vellingiri, Shanthi, Balakrishnan, Prabhakaran.  2017.  Modeling User Quality of Experience (QoE) through Position Discrepancy in Multi-Sensorial, Immersive, Collaborative Environments. Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys'17). :296–307.

Users' QoE (Quality of Experience) in Multi-sensorial, Immersive, Collaborative Environments (MICE) applications is mostly measured by psychometric studies. These studies provide a subjective insight into the performance of such applications. In this paper, we hypothesize that the spatial coherence, or lack thereof, of the embedded virtual objects among users correlates with QoE in MICE. We use Position Discrepancy (PD) to model this lack of spatial coherence in MICE. Based on that, we propose a Hierarchical Position Discrepancy Model (HPDM) that computes PD at multiple levels to derive the application/system-level PD as a measure of performance. Experimental results on an example task in MICE show that HPDM can objectively quantify application performance and correlates with psychometric study-based QoE measurements. We envisage that HPDM can provide more insight into MICE applications without the need for extensive user studies.
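The hierarchical roll-up idea behind HPDM can be sketched as nested averaging of per-object discrepancies; the exact levels and weighting in the paper differ, so the structure below is an illustrative assumption.

```python
import math

def object_pd(pos_a, pos_b):
    """Position discrepancy of one shared virtual object between two users."""
    return math.dist(pos_a, pos_b)

def system_pd(views):
    """Roll object-level PDs up to user-pair level, then to system level.
    views: {user: {object_id: (x, y, z)}}"""
    users = sorted(views)
    pair_pds = []
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            shared = views[a].keys() & views[b].keys()
            if shared:
                pair_pds.append(sum(object_pd(views[a][o], views[b][o])
                                    for o in shared) / len(shared))
    return sum(pair_pds) / len(pair_pds) if pair_pds else 0.0
```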

Schaefer, Gerald, Budnik, Mateusz, Krawczyk, Bartosz.  2017.  Immersive Browsing in an Image Sphere. Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication. :26:1–26:4.
In this paper, we present an immersive image database navigation system. Images are visualised in a spherical visualisation space and arranged, on a grid, by colour so that images of similar colour are located close to each other, while access to large image sets is possible through a hierarchical browsing structure. The user is wearing a 3-D head mounted display (HMD) and is immersed inside the image sphere. Navigation is performed by head movement using a 6-degree-of-freedom tracker integrated in the HMD in conjunction with a wiimote remote control.
Hosseini, S., Swash, M. R., Sadka, A..  2017.  Immersive 360 Holoscopic 3D system design. 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN). :325–329.
3D imaging has been a hot research topic recently due to high demand from applications in security, health, autonomous vehicles and robotics. Yet stereoscopic 3D imaging is limited by its principle of mimicking the human-eye technique, so the camera separation baseline defines the amount of 3D depth that can be captured. Holoscopic 3D (H3D) imaging is based on the “fly's eye” technique, which uses coherent replication of light to record a spatial image of a real scene using a microlens array (MLA) that gives complete 3D parallax. H3D imaging is considered a promising 3D imaging technique because it pursues a simple form of 3D acquisition using a single-aperture camera, making it well suited for scalable digitization, security and autonomous applications. This paper proposes a 360-degree holoscopic 3D imaging system design for immersive 3D acquisition and stitching.
Vincur, J., Navrat, P., Polasek, I..  2017.  VR City: Software Analysis in Virtual Reality Environment. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :509–516.
This paper presents a software visualization tool that utilizes a modified city metaphor to represent a software system and related analysis data in a virtual reality environment. To better address all three kinds of software aspects, we propose a new layout algorithm that provides a higher level of detail and positions the buildings according to the coupling between the classes they represent. The resulting layout allows us to visualize software metrics and source code modifications at the granularity of methods, to visualize method invocations involved in program execution, and to support remodularization analysis. To further reduce cognitive load and increase the efficiency of 3D visualization, we allow users to observe and interact with our city in an immersive virtual reality environment that also provides a source code browsing feature. We demonstrate the use of our approach on two open-source systems.
Holdsworth, J., Apeh, E..  2017.  An Effective Immersive Cyber Security Awareness Learning Platform for Businesses in the Hospitality Sector. 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW). :111–117.
The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector. Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform.
It used a series of over 40 interviews and a threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program which encourages engagement through a cycle of reward and recognition. In particular, the need to use gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring employees' progress through the learning management system while monitoring the levels of engagement and the positive impact the training is having on the business.
Boos, Kevin, Chu, David, Cuervo, Eduardo.  2017.  FlashBack: Immersive Virtual Reality on Mobile Devices via Rendering Memoization. GetMobile: Mobile Comp. and Comm.. 20:23–27.
Driven by recent advances in the mobile computing hardware ecosystem, wearable Virtual Reality (VR) is experiencing a boom in popularity, with many offerings becoming available. Modern VR head-mounted displays (HMDs) fall into two device classes: (i) Tethered HMDs: HMDs tethered to powerful, expensive gaming desktops, such as the Oculus Rift, HTC Vive, and Sony PlayStation VR; (ii) Mobile-rendered HMDs: self-contained, untethered HMDs that run on mobile phones slotted into head mounts, e.g., Google Cardboard and Samsung Gear VR.
Chu, Jacqueline, Bryan, Chris, Shih, Min, Ferrer, Leonardo, Ma, Kwan-Liu.  2017.  Navigable Videos for Presenting Scientific Data on Affordable Head-Mounted Displays. Proceedings of the 8th ACM on Multimedia Systems Conference. :250–260.
Immersive, stereoscopic visualization enables scientists to better analyze structural and physical phenomena compared to traditional display mediums. Unfortunately, current head-mounted displays (HMDs) with the high rendering quality necessary for these complex datasets are prohibitively expensive, especially in educational settings where their high cost makes it impractical to buy several devices. To address this problem, we develop two tools: (1) An authoring tool allows domain scientists to generate a set of connected, 360° video paths for traversing between dimensional keyframes in the dataset. (2) A corresponding navigational interface is a video selection and playback tool that can be paired with a low-cost HMD to enable an interactive, non-linear, storytelling experience. We demonstrate the authoring tool's utility by conducting several case studies and assess the navigational interface with a usability study. Results show the potential of our approach in effectively expanding the accessibility of high-quality, immersive visualization to a wider audience using affordable HMDs.
Cheng, Lung-Pan, Marwecki, Sebastian, Baudisch, Patrick.  2017.  Mutual Human Actuation. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. :797–805.
Human actuation is the idea of using people to provide large-scale force feedback to users. The Haptic Turk system, for example, used four human actuators to lift and push a virtual reality user; TurkDeck used ten human actuators to place and animate props for a single user. While the experience of human actuators was decent, it was still inferior to the experience these people could have had, had they participated as a user. In this paper, we address this issue by making everyone a user. We introduce mutual human actuation, a version of human actuation that works without dedicated human actuators. The key idea is to run pairs of users at the same time and have them provide human actuation to each other. Our system, Mutual Turk, achieves this by (1) offering shared props through which users can exchange forces while obscuring the fact that there is a human on the other side, and (2) synchronizing the two users' timelines such that their way of manipulating the shared props is consistent across both virtual worlds. We demonstrate mutual human actuation with an example experience in which users pilot kites through storms, tug fish out of ponds, are pummeled by hail, battle monsters, hop across chasms, push loaded carts, and ride in moving vehicles.
Valkov, Dimitar, Flagge, Steffen.  2017.  Smooth Immersion: The Benefits of Making the Transition to Virtual Environments a Continuous Process. Proceedings of the 5th Symposium on Spatial User Interaction. :12–19.
In this paper we discuss the benefits and the limitations, as well as different implementation options for smooth immersion into a HMD-based IVE. We evaluated our concept in a preliminary user study, in which we have tested users' awareness, reality judgment and experience in the IVE, when using different transition techniques to enter it. Our results show that a smooth transition to the IVE improves the awareness of the user and may increase the perceived interactivity of the system.
Cordeil, Maxime, Cunningham, Andrew, Dwyer, Tim, Thomas, Bruce H., Marriott, Kim.  2017.  ImAxes: Immersive Axes As Embodied Affordances for Interactive Multivariate Data Visualisation. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. :71–83.
We introduce ImAxes, an immersive system for exploring multivariate data using fluid, modeless interaction. The basic interface element is an embodied data axis. The user can manipulate these axes like physical objects in the immersive environment and combine them into sophisticated visualisations. The type of visualisation that appears depends on the proximity and relative orientation of the axes with respect to one another, which we describe with a formal grammar. This straightforward composability leads to a number of emergent visualisations and interactions, which we review and then demonstrate with a detailed multivariate data analysis use case.