Biblio
We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art for several natural kinds of computations. For arithmetic circuits over the integers, our construction results in garbled circuits with free addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an exponential improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in. Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under assumptions similar to those of the Free-XOR garbling scheme (Kolesnikov & Schneider, ICALP 2008). We give an extensive comparison between our scheme and state-of-the-art garbling schemes applied to boolean circuits.
The latest advances in head-mounted displays (HMDs) for augmented reality (AR) and mixed reality (MR) have produced commercialized devices that are gradually being accepted by the public. These HMDs are generally equipped with head tracking, which provides an excellent input for exploring immersive visualization and interaction techniques in various AR/MR applications. This paper explores the head-tracking function of the latest Microsoft HoloLens, where gaze is defined as the ray starting at the head location and pointing forward. We present a gaze-directed visualization approach for studying ensembles of 2D oil spill simulations in mixed reality. Our approach allows users to place an ensemble as an image stack in a real environment and explore the ensemble with gaze tracking. The prototype system demonstrates the challenges and promising effects of gaze-based interaction in state-of-the-art mixed reality.
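A minimal sketch of the gaze definition above, assuming the head pose is available as a position and a forward unit vector (the variable names and the ray/plane intersection routine are illustrative, not the HoloLens API): the gaze ray is intersected with the plane of an image slice to determine where in the ensemble stack the user is looking.

    import numpy as np

    def gaze_ray_plane_hit(head_pos, head_forward, plane_point, plane_normal):
        """Intersect the gaze ray (head position plus forward direction) with
        the plane of an image slice. Returns the hit point, or None if the
        gaze is parallel to the plane or the plane lies behind the viewer.
        All arguments are 3-element numpy arrays; head_forward and
        plane_normal are assumed to be unit vectors."""
        denom = np.dot(plane_normal, head_forward)
        if abs(denom) < 1e-6:        # gaze parallel to the slice
            return None
        t = np.dot(plane_normal, plane_point - head_pos) / denom
        if t < 0:                    # slice lies behind the viewer
            return None
        return head_pos + t * head_forward

    # Example: a user at the origin looks straight ahead (+z) at a slice
    # of the oil-spill ensemble placed 2 m in front of them.
    hit = gaze_ray_plane_hit(np.array([0.0, 1.6, 0.0]),    # head position
                             np.array([0.0, 0.0, 1.0]),    # gaze direction
                             np.array([0.0, 1.5, 2.0]),    # point on slice
                             np.array([0.0, 0.0, -1.0]))   # slice normal
    print(hit)                       # -> [0.  1.6 2. ]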
When people use social applications and services, their privacy faces a potentially serious threat. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement that takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing this measurement on real datasets, we find that some data can potentially be de-anonymized accurately, while the rest can be de-anonymized only at a coarse granularity. Utilizing this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By working adaptively on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we evaluate the presented de-anonymization attack on three well-known mobility traces, St Andrews, Infocom06, and Smallblue, and three social datasets, ArnetMiner, Google+, and Facebook. The experimental results demonstrate that the presented de-anonymization framework is very effective and robust to noise. The source code and employed datasets are publicly available at SecGraph [2015].
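A minimal sketch of the iterative, similarity-driven matching idea (a toy score, not the paper's exact US formula): the score below combines degree similarity with the fraction of a node's neighbors whose matches are consistent with the mapping found so far, pairs whose best score clears a threshold are accepted as accurate matches, and the process repeats until no new pair is added.

    def degree(graph, v):
        return len(graph[v])

    def score(anon_g, aux_g, a, u, mapping):
        """Toy unified-similarity score: a structural term (degree closeness)
        plus a feedback term rewarding consistency with matches made so far."""
        d_a, d_u = degree(anon_g, a), degree(aux_g, u)
        structural = 1.0 - abs(d_a - d_u) / max(d_a, d_u, 1)
        consistent = sum(1 for n in anon_g[a]
                         if n in mapping and mapping[n] in aux_g[u])
        feedback = consistent / max(d_a, 1)
        return 0.5 * structural + 0.5 * feedback

    def iterative_deanonymize(anon_g, aux_g, seeds, threshold=0.6):
        """Greedy iterative matching: each pass accepts only pairs whose best
        score clears the threshold, then re-scores with the grown mapping."""
        mapping = dict(seeds)                 # known seeds: anon node -> aux node
        changed = True
        while changed:
            changed = False
            for a in anon_g:
                if a in mapping:
                    continue
                candidates = [u for u in aux_g if u not in mapping.values()]
                if not candidates:
                    break
                best = max(candidates,
                           key=lambda u: score(anon_g, aux_g, a, u, mapping))
                if score(anon_g, aux_g, a, best, mapping) >= threshold:
                    mapping[a] = best
                    changed = True
        return mapping

    # Tiny example: two copies of a 4-node path graph and one seed pair.
    anon = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    aux = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
    print(iterative_deanonymize(anon, aux, seeds={1: 'a'}))   # {1: 'a', 2: 'b', 3: 'c', 4: 'd'}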
Proxy Re-Encryption (PRE) is a favorable primitive for realizing a cryptographic cloud with a secure and flexible data sharing mechanism. A number of PRE schemes with versatile capabilities have been proposed for different applications. Secure data sharing can be achieved within each PRE scheme, but no previous work guarantees secure data sharing among different PRE schemes in a general manner. Moreover, solving this problem is challenging because of the large differences among existing PRE schemes in their algebraic systems and public-key types. To solve this problem more generally, this paper unifies the definitions of the existing PRE and Public Key Encryption (PKE) schemes, and further unifies their security definitions. Then, taking any uniformly defined PRE scheme and any uniformly defined PKE scheme as two building blocks, this paper constructs a Generally Hybrid Proxy Re-Encryption (GHPRE) scheme, using the idea of temporary public and private keys, to achieve secure data sharing between the two underlying schemes. Since PKE is a more general notion than PRE, the proposed GHPRE scheme is also workable between any two PRE schemes. Moreover, the proposed GHPRE scheme can be deployed transparently, even if the underlying PRE schemes are already in operation.
Growing numbers of ubiquitous electronic devices and services motivate the need for effortless user authentication and identification. While biometrics are a natural means of achieving these goals, their use poses privacy risks, due mainly to the difficulty of preventing theft and abuse of biometric data. One way to minimize information leakage is to derive biometric keys from users' raw biometric measurements. Such keys can be used in subsequent security protocols and ensure that no sensitive biometric data needs to be transmitted or permanently stored. This paper is the first attempt to explore the use of human body impedance as a biometric trait for deriving secret keys. Building upon Randomized Biometric Templates as a key generation scheme, we devise a mechanism that supports consistent regeneration of unique keys from users' impedance measurements. The underlying set of biometric features is found using a feature learning technique based on Siamese networks. Compared to prior feature extraction methods, the proposed technique offers significantly improved recognition rates in the context of key generation. Besides computing experimental error rates, we tailor a known key guessing approach specifically to the key generation scheme we use and assess the security provided by the resulting keys. We give a very conservative estimate of the number of guesses an adversary must make to find a correct key. Results show that the proposed key generation approach produces keys comparable to those obtained by similar methods based on other biometrics.
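A minimal PyTorch sketch of the Siamese feature-learning step (the network size, margin, and input dimension are illustrative assumptions, not the paper's configuration): two impedance measurements pass through a shared encoder, and a contrastive loss pulls same-user pairs together while pushing different-user pairs apart.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImpedanceEncoder(nn.Module):
        """Shared encoder mapping a raw impedance measurement to a feature vector."""
        def __init__(self, in_dim=64, feat_dim=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, feat_dim))

        def forward(self, x):
            return self.net(x)

    def contrastive_loss(f1, f2, same_user, margin=1.0):
        """Pull features of the same user together, push different users apart."""
        d = F.pairwise_distance(f1, f2)
        return torch.mean(same_user * d.pow(2) +
                          (1 - same_user) * torch.clamp(margin - d, min=0).pow(2))

    encoder = ImpedanceEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # One training step on a random toy batch of measurement pairs.
    x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
    same_user = torch.randint(0, 2, (8,)).float()    # 1 = same user, 0 = different
    loss = contrastive_loss(encoder(x1), encoder(x2), same_user)
    opt.zero_grad(); loss.backward(); opt.step()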
The main goal of this work is to create a trust model that can serve as a reference for developing applications oriented toward collaborative annotation. Such a model includes design parameters inferred from online communities built around collaborative content. This study aims to create a static model, although it could be made dynamic, or split into more than one model, depending on the context of an application. An analysis of Genius as a peer production community was conducted to understand user behaviors. The study characterizes user interactions based on the distinction between Lightweight Peer Production (LWPP) and Heavyweight Peer Production (HWPP). It was found that more LWPP-interactions take place at the lower levels of the role system; as the level in the role system increases, there are more HWPP-interactions. This can be explained by the fact that LWPP-interactions are straightforward, while HWPP-interactions demand more agility from the user; the latter provide more opportunities and therefore attract other users for further interaction.
Recently, various protocols have been proposed for securely outsourcing database storage to a third-party server, ranging from systems with "full-fledged" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged showing that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and communication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources. We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks work in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about N^4 queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way. Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.
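A toy illustration of the access-pattern attack idea (a deliberately simplified sketch, not the paper's reconstruction algorithm or its N^4 analysis): with range queries whose endpoints are drawn uniformly from {1..N}, the probability that a record is touched depends only on its value, so observing which record identifiers match each query reveals every value up to reflection.

    import random
    from collections import Counter

    def p_hit(v, N):
        """Probability that value v lies inside [min(a, b), max(a, b)] when
        a and b are drawn independently and uniformly from {1..N}."""
        return 1 - ((N - v) ** 2 + (v - 1) ** 2) / N ** 2

    def reconstruct_from_access_pattern(N, secret_values, num_queries=50000):
        """Observe only WHICH record ids match each range query (the access
        pattern) and recover each record's value up to reflection v <-> N+1-v."""
        hits = Counter()
        for _ in range(num_queries):
            a, b = random.randint(1, N), random.randint(1, N)
            lo, hi = min(a, b), max(a, b)
            for rid, v in enumerate(secret_values):
                if lo <= v <= hi:
                    hits[rid] += 1
        estimates = {}
        for rid in range(len(secret_values)):
            freq = hits[rid] / num_queries
            v_hat = min(range(1, N + 1), key=lambda v: abs(p_hit(v, N) - freq))
            estimates[rid] = tuple(sorted({v_hat, N + 1 - v_hat}))
        return estimates

    N = 20
    secret = [3, 7, 7, 12, 19]      # hidden attribute of each record
    print(reconstruct_from_access_pattern(N, secret))
    # record id -> reflection pair, e.g. record 0 -> (3, 18), record 3 -> (9, 12)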
There is an increasing trend for data owners to store their data on a third-party cloud server and buy the service from the cloud server to provide information to other users. To ensure confidentiality, the data is usually encrypted; therefore, an encrypted data searching scheme with privacy preservation is of paramount importance. Predicate encryption (PE) is one of the attractive solutions due to its attribute-hiding merit. However, as the cloud is not always trusted, verifying the searched results is also crucial. First, a generic construction of a Publicly Verifiable Predicate Encryption (PVPE) scheme is proposed to provide verification for PE, and we reduce the security of PVPE to the security of PE. However, from a practical point of view, to decrease the communication and computation overhead, an improved PVPE scheme is proposed, with the trade-off of a small probability of error.
Together with its great advantages, cloud storage has brought many interesting security issues to our attention. Since 2007, when the first efficient storage integrity protocols appeared, the Proofs of Retrievability (PoR) of Juels and Kaliski and the Provable Data Possession (PDP) of Ateniese et al., many researchers have worked on such protocols.
The differences between the PDP and PoR models have been greatly debated. The first dynamic PDP (DPDP) scheme was shown by Erway et al. in 2009, while the first dynamic PoR (DPoR) scheme was created by Cash et al. in 2013. We show how to obtain DPoR from DPDP, PDP, and erasure codes, making us realize that, even though we did not know it, we could have had a DPoR solution in 2009.
We propose a general framework for constructing DPoR schemes that encapsulates known DPoR schemes as special cases. We show practical and interesting optimizations that enable better performance than the constructions of Chandran et al. and Shi et al. For the first time, we show how to obtain constant audit bandwidth for DPoR, independent of the data size, and how the client can greatly speed up updates with O(λ√n) local storage (where n is the number of blocks and λ is the security parameter), which corresponds to roughly 3 MB for 10 GB of outsourced data and can easily be accommodated on today's smartphones, let alone computers.
A new dataset is presented, composed of music identification matches from Gracenote, a leading global music metadata company. Matches from January 1, 2014 to December 31, 2014 have been curated and made available as a public dataset called Gracenote Music Identification 2014, or GNMID14, at the following address: https://developer.gracenote.com/mid2014. This collection is the first significant music identification dataset and one of the largest music-related datasets available, containing more than 110M matches in 224 countries for 3M unique tracks and 509K unique artists. It features geotemporal information (i.e., country and match date) as well as genre and mood metadata. In this paper, we characterize the dataset and demonstrate its utility for Information Retrieval (IR) research.
In this paper, we propose a new approach to diagnosing problems in complex distributed systems. Our approach is based on the insight that many of the trickiest problems are anomalies. For instance, in a network, problems often affect only a small fraction of the traffic (e.g., perhaps a certain subnet), or they only manifest infrequently. Thus, it is quite common for the operator to have “examples” of both working and non-working traffic readily available – perhaps a packet that was misrouted, and a similar packet that was routed correctly. In this case, the cause of the problem is likely to be wherever the two packets were treated differently by the network. We present the design of a debugger that can leverage this information using a novel concept that we call differential provenance. Differential provenance tracks the causal connections between network states and state changes, just like classical provenance, but it can additionally perform root-cause analysis by reasoning about the differences between two provenance trees. We have built a diagnostic tool that is based on differential provenance, and we have used our tool to debug a number of complex, realistic problems in two scenarios: software-defined networks and MapReduce jobs. Our results show that differential provenance can be maintained at relatively low cost, and that it can deliver very precise diagnostic information; in many cases, it can even identify the precise root cause of the problem.
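A minimal sketch of the tree-diffing idea behind differential provenance (the data structures and pruning rule are simplified assumptions, not the system's actual algorithm): walking a working and a non-working provenance tree in parallel and pruning subtrees that agree leaves only the earliest points of divergence, which are the candidate root causes.

    from dataclasses import dataclass, field

    @dataclass
    class ProvNode:
        """A provenance vertex: a derived state plus the states it was derived from."""
        state: str
        children: list = field(default_factory=list)

    def diff_provenance(good, bad):
        """Return the highest node pairs where the two trees diverge, pruning
        identical prefixes along the way (children are assumed to be aligned)."""
        if good.state != bad.state:
            return [(good, bad)]              # divergence starts here
        diffs = []
        for g_child, b_child in zip(good.children, bad.children):
            diffs += diff_provenance(g_child, b_child)
        return diffs

    # Toy example: the same packet is handled by switch s1 in both runs, but
    # only the non-working run derives its decision from a mis-installed rule.
    good = ProvNode("pkt(dst=h2) forwarded by s1",
                    [ProvNode("matched flow table of s1",
                              [ProvNode("rule: dst=h2 -> port 2")])])
    bad = ProvNode("pkt(dst=h2) forwarded by s1",
                   [ProvNode("matched flow table of s1",
                             [ProvNode("rule: dst=h2 -> port 3")])])
    for g, b in diff_provenance(good, bad):
        print("diverges at:", g.state, "vs", b.state)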
The Google Identity Platform is a system that allows a user to sign in to applications and other services by using a Google account. Google Sign-In is one such method for providing one's identity to the Google Identity Platform. Google Sign-In is available for Android and iOS applications, as well as for websites and other devices. Users of Google Sign-In find that it integrates well with the Android platform, but iOS users (iPhone, iPad, etc.) do not have the same experience. The user experience when logging in to a Google account on an iOS application is not only more tedious than the Android experience, but it also conditions users to engage in behaviors that put the information in their Google accounts at risk.
Most cyber network attacks begin with an adversary gaining a foothold within the network and proceed with lateral movement until a desired goal is achieved. The mechanism by which lateral movement occurs varies, but the basic signature of hopping between hosts by exploiting vulnerabilities is the same. Because of the nature of the vulnerabilities typically exploited, lateral movement is very difficult to detect and defend against. In this paper we define a dynamic reachability graph model of the network to discover the possible paths that an adversary could take by exploiting different vulnerabilities, and to track how those paths evolve over time. We use this reachability graph to develop dynamic machine-level and network-level impact scores. Lateral movement mitigation strategies that make use of our impact scores are also discussed, and we detail an example using a freely available data set.
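A minimal sketch of the reachability-graph idea (the time windows and scoring formulas are illustrative choices, not the paper's definitions): edges record that one host can reach another through a vulnerability during an exploitability window, the machine-level impact of a foothold counts the hosts reachable from it by lateral movement, and the network-level score aggregates over all hosts.

    from collections import defaultdict, deque

    def build_reachability(edges, now):
        """edges: (src_host, dst_host, vuln_id, start, end) tuples; keep only
        edges whose exploitability window contains the current time."""
        g = defaultdict(set)
        for src, dst, vuln, start, end in edges:
            if start <= now <= end:
                g[src].add(dst)
        return g

    def machine_impact(g, foothold):
        """Machine-level impact: number of hosts reachable by lateral movement
        from a given foothold (BFS over the reachability graph)."""
        seen, queue = {foothold}, deque([foothold])
        while queue:
            host = queue.popleft()
            for nxt in g[host]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return len(seen) - 1                  # exclude the foothold itself

    def network_impact(g, hosts):
        """Network-level impact: average machine-level impact over all hosts."""
        return sum(machine_impact(g, h) for h in hosts) / len(hosts)

    # Toy network: one vulnerability is patched (its window closes) at time 5.
    edges = [("web", "app", "CVE-A", 0, 10),
             ("app", "db", "CVE-B", 0, 5)]
    hosts = ["web", "app", "db"]
    early, late = build_reachability(edges, now=3), build_reachability(edges, now=8)
    print(machine_impact(early, "web"), machine_impact(late, "web"))   # 2 1
    print(network_impact(early, hosts), network_impact(late, hosts))   # 1.0 0.33...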
This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data, called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph and then counts graphlets, small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified and, furthermore, individual IPs exhibiting abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both.
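A minimal sketch of the GraphPrints pipeline for undirected graphs (the graphlet set, window construction, and the median/MAD outlier rule are simplifying assumptions): each time window of flows becomes a graph, a few small graphlets are counted, and windows whose counts deviate sharply from the typical window are flagged.

    from itertools import combinations

    def graphlet_counts(edges):
        """Count three small undirected graphlets in one time window:
        edges, open wedges (length-2 paths), and triangles."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        wedges = sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())
        triangles = sum(1 for u, v, w in combinations(adj, 3)
                        if v in adj[u] and w in adj[u] and w in adj[v])
        return [len(edges), wedges - 3 * triangles, triangles]

    def flag_anomalies(windows, k=3.0):
        """Flag windows whose graphlet-count vector is far (per feature,
        beyond k median absolute deviations) from the median window."""
        counts = [graphlet_counts(w) for w in windows]
        flagged = set()
        for dim in range(3):
            vals = sorted(c[dim] for c in counts)
            med = vals[len(vals) // 2]
            mad = sorted(abs(v - med) for v in vals)[len(vals) // 2] or 1.0
            flagged.update(i for i, c in enumerate(counts)
                           if abs(c[dim] - med) > k * mad)
        return sorted(flagged)

    # Nine quiet windows plus one window containing a fan-out scanning burst.
    quiet = [[(1, 2), (2, 3), (3, 4)] for _ in range(9)]
    burst = [[(9, i) for i in range(10, 30)] + [(1, 2), (2, 3)]]
    print(flag_anomalies(quiet + burst))      # -> [9]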
This paper describes an approach in which group testing helps enforce security and privacy in identification. We detail a particular scheme based on embedding and group testing. We add a second layer of defense, group vectors, where each group vector represents a set of dataset vectors. Whereas the selected embedding poorly protects the data when used alone, the group testing approach makes it much harder to reconstruct the data when combined with the embedding. Even when a curious server and user collude to disclose the secret parameters, they cannot accurately recover the data. Another byproduct of our approach is that it reduces the complexity of the search and the required storage space. We demonstrate the value of our approach on a benchmark biometrics dataset, where we verify our theoretical analysis with real data.
Recent attacks show that threats to cyber infrastructure are not only increasing in volume but are also getting more sophisticated. The attacks may comprise multiple actions that are hard to differentiate from benign activity, so common detection techniques have to deal with high false positive rates. Because of the imperfect performance of automated detection techniques, responses to such attacks depend heavily on human-driven decision-making processes. While game theory has been applied to many problems that require rational decision-making, we find limitations in applying such methods to security games. In this work, we propose Q-Learning to react automatically to the adversarial behavior of a suspicious user and secure the system. This work compares variations of Q-Learning with a traditional stochastic game. Simulation results show the feasibility of Naive Q-Learning despite restricted information about opponents.
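A minimal tabular Q-Learning sketch for the defender's response problem (the states, actions, rewards, and attacker behavior below are toy assumptions, not the paper's game model): the defender learns from simulated interaction when to keep monitoring a suspicious user and when to block, without a full model of the opponent.

    import random
    from collections import defaultdict

    STATES = ["quiet", "suspicious", "compromised"]
    ACTIONS = ["monitor", "block"]

    def environment_step(state, action):
        """Toy attacker model: blocking resets the user (with a small cost if
        the user looked benign); monitoring lets the attack escalate."""
        if action == "block":
            return "quiet", (-1 if state == "quiet" else 5)
        if state == "quiet":
            return ("suspicious" if random.random() < 0.3 else "quiet"), 0
        if state == "suspicious":
            return ("compromised" if random.random() < 0.5 else "suspicious"), 0
        return "compromised", -10             # ongoing breach penalty

    def q_learn(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
        Q = defaultdict(float)                # (state, action) -> value
        for _ in range(episodes):
            s = "quiet"
            for _ in range(20):               # bounded episode length
                a = (random.choice(ACTIONS) if random.random() < eps
                     else max(ACTIONS, key=lambda x: Q[(s, x)]))
                s2, r = environment_step(s, a)
                best_next = max(Q[(s2, x)] for x in ACTIONS)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s2
        return Q

    Q = q_learn()
    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
    # Typically learns: monitor while quiet, block once the user looks suspicious.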
Modern Internet applications are being disaggregated into a microservice-based architecture, with services being updated and deployed hundreds of times a day. The accelerated software life cycle and the heterogeneity of language runtimes in a single application necessitate a new approach for testing the resiliency of these applications in production infrastructures. We present Gremlin, a framework for systematically testing the failure-handling capabilities of microservices. Gremlin is based on the observation that microservices are loosely coupled and thus rely on standard message-exchange patterns over the network. Gremlin allows the operator to easily design tests and executes them by manipulating inter-service messages at the network layer. We show how to use Gremlin to express common failure scenarios and how developers of an enterprise application were able to discover previously unknown bugs in their failure-handling code without modifying the application.
Security analysis requires specialized knowledge to align threats and vulnerabilities in information technology. To identify mitigations, analysts need to understand how threats, vulnerabilities, and mitigations are composed together to yield security requirements. Despite abundant guidance in the form of checklists and controls about how to secure systems, evidence suggests that security experts do not apply these checklists; instead, they rely on their prior knowledge and experience to identify security vulnerabilities. To better understand the different effects of checklists, design analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security analysis exercises. Participants were asked to analyze three kinds of artifacts (source code, data flow diagrams, and network diagrams) for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness, a theory about human perception that was used to elicit interviewee responses. The responses were then analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats against a system, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We highlight the role of expertise level and requirements composition in affecting security decision-making, and we discuss how our method produced new hypotheses about security analysis and decision-making.
Automatic detection of TV advertisements is of paramount importance for various media monitoring agencies. Existing works in this domain have mostly focused on news channels, using news-specific features. Most commercial products use near-copy detection algorithms instead of generic advertisement classification. A generic detector needs to handle the inter-class and intra-class imbalances present in the data due to variability in content aired across channels and frequent repetition of advertisements. Imbalances present in the data make classifiers biased towards one of the classes and thus require special treatment. We propose to use a tree of perceptrons to solve this problem. The training data available at each perceptron node is balanced using cluster-based over-sampling and Tomek link cleaning as we traverse the tree downwards. The trained perceptron node then passes the original unbalanced data to its children. This process is repeated recursively until we reach the leaf nodes. We call this new algorithm the "Progressively Balanced Perceptron Tree". We have also contributed a TV advertisements dataset consisting of 250 hours of videos recorded from five non-news TV channels of different genres. Experiments on this dataset show that the proposed approach has comparatively superior and balanced performance with respect to six baseline methods. Our proposal generalizes well across channels and training data sizes, and achieves a top F1-score of 97% in detecting advertisements.
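A minimal per-node sketch of the balancing step described above, assuming scikit-learn and imbalanced-learn are available; SMOTE stands in for the paper's cluster-based over-sampling, and the tree recursion (how a node routes the original data to its children) is omitted because the abstract does not specify it.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import Perceptron
    from imblearn.over_sampling import SMOTE          # stand-in for cluster-based over-sampling
    from imblearn.under_sampling import TomekLinks

    def train_balanced_node(X, y, random_state=0):
        """One node of the tree: balance only this node's training view
        (over-sample the minority class, then remove Tomek links), fit a
        perceptron, and hand the ORIGINAL unbalanced data to the children."""
        X_bal, y_bal = SMOTE(random_state=random_state).fit_resample(X, y)
        X_bal, y_bal = TomekLinks().fit_resample(X_bal, y_bal)
        node = Perceptron(random_state=random_state).fit(X_bal, y_bal)
        return node, (X, y)                           # children receive original data

    # Toy stream with a 9:1 non-ad/ad imbalance.
    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=[0.9, 0.1], random_state=0)
    node, (X_child, y_child) = train_balanced_node(X, y)
    print("node accuracy on the original data:", node.score(X, y))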