Bibliography
Augmented Reality (AR) devices continuously scan their environment in order to naturally overlay virtual objects onto the user's view of the physical world. In contrast to Virtual Reality, where one's surroundings are fully replaced with a virtual environment, one of AR's "killer features" is co-located collaboration, in which multiple users interact with the same combination of virtual and real objects. Microsoft recently released HoloLens, the first consumer-ready augmented reality headset that requires no external markers to achieve precise inside-out spatial mapping, allowing centimeter-scale hologram positioning. However, despite the many applications published on the Windows Mixed Reality platform that rely on direct communication between AR devices, there currently exists no implementation or practical proposal for secure direct pairing of two unassociated headsets. As augmented reality moves into the mainstream, this omission exposes current and future users to a range of avoidable attacks. To close this real-world gap in both theory and engineering practice, we design and evaluate HoloPair, a system for secure and usable pairing of two AR headsets. We propose a pairing protocol and build a working prototype to experimentally evaluate its security guarantees, usability, and system performance. Through a user study with a total of 22 participants, we show that the system achieves high rates of attack detection, short pairing times, and a high average usability score. Moreover, to make an immediate impact on the wider developer community, we have published the full implementation and source code of our prototype, which is currently under consideration for inclusion in the official HoloLens development toolkit.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Accordingly, the most commonly reported metrics used to evaluate such systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers report only the mean of these measures, with little attention paid to their distribution. This is problematic, as systematic errors allow attackers to perpetually escape detection, whereas random errors are less severe. Using 16 biometric datasets, we show that such systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen input) exhibit more even error distributions. Our results also show that including certain distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blindly optimizing the mean Equal Error Rate (EER), through feature engineering or selection, can sometimes lead to lower security. Following this result, we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both for comparing different systems and for guiding researchers during feature selection. Beyond the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples are the selection of training data and the attacker model used to construct the negative class. Of the 25 papers we analyzed, 13 either include impostor data in the negative class or randomly sample training data from the entire dataset, and a further 6 give no information about the methodology used. Using real-world data, we show that these two decisions lead to significant underestimation of error rates, by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
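A minimal sketch, assuming per-user error rates are available as a list (the values and function name below are illustrative, not the paper's code), of how the Gini Coefficient separates an even error distribution from a systematic one with the same mean:

```python
import numpy as np

def gini_coefficient(error_rates):
    """Gini Coefficient of per-user error rates.

    Values near 0 mean errors are spread evenly across users; values
    near 1 mean a few users account for almost all errors (systematic).
    """
    x = np.sort(np.asarray(error_rates, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * x) / (n * x.sum())) - (n + 1.0) / n

# Hypothetical example: identical mean FRR, very different distributions.
even = [0.05] * 10            # every user is rejected equally often
skewed = [0.0] * 9 + [0.5]    # one user absorbs all the errors
print(gini_coefficient(even))    # ~0.0
print(gini_coefficient(skewed))  # ~0.9
```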
We present a technique for performing secure location verification of position claims by measuring the time difference of arrival (TDoA) between a fixed receiver node and a mobile one. The mobile node moves randomly, which substantially increases the difficulty for an attacker trying to make false messages appear genuine. We explore the performance and requirements of such a system in the context of verifying aircraft position claims made over the Automatic Dependent Surveillance-Broadcast (ADS-B) system through simulation, and find that it correctly detects false claims with a peak accuracy of over 97% for the most complex attack modelled, requiring only 75 m of deviation between the reported position and the actual position for a false claim to be detected. We then report on our design for a mobile receiver and our construction of a prototype using low-cost commercial off-the-shelf (COTS) equipment. We discuss some additional benefits of incorporating a mobile node, examine the difficulties to be overcome, and explore the applicability of the approach to other location verification use cases.
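To illustrate the underlying check, the following sketch (a simplified toy, not the paper's system; the geometry, the 250 ns tolerance, and the function names are assumptions) compares the TDoA actually observed by the two receivers with the TDoA expected for the claimed position at several points along the mobile receiver's track, and rejects a claim that is inconsistent at any of them:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tdoa(tx_pos, fixed_rx, mobile_rx):
    """Time-difference of arrival (s) between the fixed and mobile
    receivers for a signal transmitted from tx_pos."""
    tx_pos, fixed_rx, mobile_rx = map(np.asarray, (tx_pos, fixed_rx, mobile_rx))
    return (np.linalg.norm(tx_pos - fixed_rx) - np.linalg.norm(tx_pos - mobile_rx)) / C

def claim_rejected(true_tx, claimed_pos, fixed_rx, mobile_track, tolerance_s=250e-9):
    """Reject the claim if, at any point of the mobile receiver's track,
    the measured TDoA (driven by the true transmitter position) deviates
    from the TDoA expected for the claimed position by more than the
    tolerance (illustrative value; a real system would derive it from
    receiver timing accuracy)."""
    for mobile_rx in mobile_track:
        measured = tdoa(true_tx, fixed_rx, mobile_rx)      # what is observed
        expected = tdoa(claimed_pos, fixed_rx, mobile_rx)  # what the claim implies
        if abs(measured - expected) > tolerance_s:
            return True
    return False

# Hypothetical geometry (metres, local frame): a ground-based attacker at
# true_tx spoofs an ADS-B message claiming an aircraft at claimed_pos.
fixed_rx = np.array([0.0, 0.0, 0.0])
mobile_track = [np.array(p) for p in ([5000.0, 0, 0], [0, 5000.0, 0],
                                      [-5000.0, 0, 0], [0, -5000.0, 0])]
true_tx = np.array([20_000.0, 0.0, 0.0])
claimed_pos = np.array([20_000.0, 0.0, 10_000.0])

print(claim_rejected(claimed_pos, claimed_pos, fixed_rx, mobile_track))  # genuine claim -> False
print(claim_rejected(true_tx, claimed_pos, fixed_rx, mobile_track))      # spoofed claim -> True
```

Note that the spoofed claim is consistent with some receiver geometries but not others, which is the intuition behind keeping one receiver in motion.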
Growing numbers of ubiquitous electronic devices and services motivate the need for effortless user authentication and identification. While biometrics are a natural means of achieving these goals, their use poses privacy risks, due mainly to the difficulty of preventing theft and abuse of biometric data. One way to minimize information leakage is to derive biometric keys from users' raw biometric measurements. Such keys can be used in subsequent security protocols and ensure that no sensitive biometric data needs to be transmitted or permanently stored. This paper is the first attempt to explore the use of human body impedance as a biometric trait for deriving secret keys. Building upon Randomized Biometric Templates as a key generation scheme, we devise a mechanism that supports consistent regeneration of unique keys from users' impedance measurements. The underlying set of biometric features is found using a feature learning technique based on Siamese networks. Compared to prior feature extraction methods, the proposed technique offers significantly improved recognition rates in the context of key generation. Besides computing experimental error rates, we tailor a known key guessing approach specifically to the key generation scheme used and assess the security provided by the resulting keys. We give a very conservative estimate of the number of guesses an adversary must make to find a correct key. The results show that the proposed key generation approach produces keys comparable to those obtained by similar methods based on other biometrics.
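As a rough sketch of the feature learning step, the example below (illustrative only; the paper's network architecture, loss parameters, and training setup are not specified here) shows the contrastive loss commonly used to train Siamese embeddings, pulling measurements of the same user together and pushing different users apart by at least a margin:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_user, margin=1.0):
    """Contrastive loss for a Siamese pair: minimize distance for
    same-user pairs, enforce at least `margin` separation otherwise."""
    d = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))
    if same_user:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Hypothetical embeddings produced by the twin networks for two
# body-impedance measurements.
emb_a = np.array([0.12, -0.40, 0.88])
emb_b = np.array([0.10, -0.35, 0.91])
print(contrastive_loss(emb_a, emb_b, same_user=True))   # small: pair is already close
print(contrastive_loss(emb_a, emb_b, same_user=False))  # larger: pair should be pushed apart
```

Features learned this way are then quantized by the key generation scheme into bit strings that can be regenerated consistently from fresh measurements of the same user.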
We study the trade-off between the benefits obtained through communication and the risks that arise from exposing the transmitter's location. To study this problem, we introduce a game between two teams of mobile agents, the P-bots team and the E-bots team. The E-bots attempt to eavesdrop and collect information while evading the P-bots; the P-bots attempt to prevent this by patrolling and pursuing. The game models a typical use case of micro-robots, namely their use for (industrial) espionage. We evaluate strategies for both teams using analysis and simulations.
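A toy grid-world version of the game (purely illustrative; the paper's model, team sizes, sensing ranges, and strategies are not reproduced here), in which a single E-bot wanders and eavesdrops near a transmitter while a single P-bot patrols and greedily pursues it once sensed:

```python
import random

GRID, STEPS, SENSE_RANGE = 20, 200, 3  # illustrative parameters

def wander(pos):
    """Random walk by one cell, clipped to the grid."""
    x, y = pos
    x = min(GRID - 1, max(0, x + random.choice((-1, 0, 1))))
    y = min(GRID - 1, max(0, y + random.choice((-1, 0, 1))))
    return x, y

def chase(p, target):
    """Greedy pursuit: move one cell toward the target."""
    return (p[0] + (target[0] > p[0]) - (target[0] < p[0]),
            p[1] + (target[1] > p[1]) - (target[1] < p[1]))

random.seed(0)
transmitter = (GRID // 2, GRID // 2)
e_bot, p_bot = (0, 0), (GRID - 1, GRID - 1)
collected = 0

for _ in range(STEPS):
    e_bot = wander(e_bot)  # E-bot roams and tries to eavesdrop
    sensed = (abs(p_bot[0] - e_bot[0]) <= SENSE_RANGE and
              abs(p_bot[1] - e_bot[1]) <= SENSE_RANGE)
    p_bot = chase(p_bot, e_bot) if sensed else wander(p_bot)  # patrol, pursue when sensed
    if p_bot == e_bot:
        break  # E-bot caught
    if abs(e_bot[0] - transmitter[0]) + abs(e_bot[1] - transmitter[1]) <= SENSE_RANGE:
        collected += 1  # information leaked this step

print(f"information collected before capture: {collected}")
```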