Bibliography
A full-duplex radio can transmit and receive simultaneously and is therefore a natural fit for realizing an in-band relay system. Most existing full-duplex relay designs, however, simply forward an amplified version of the received signal without decoding it, and thereby also amplify the noise at the relay, offsetting the throughput gains of full-duplex relaying. To overcome this issue, we explore an alternative: demodulate-and-forward. This paper presents the design and implementation of DelayForward (DF), a practical system that fully extracts the relay gains of the full-duplex demodulate-and-forward mechanism. DF allows a relay to remove its noise from the received signal via demodulation and to forward the clean signal to the destination with a small delay. While this delay-and-forward mechanism avoids forwarding the relay's noise, the half-duplex destination now receives the combination of the direct signal from the source and the delayed signal from the relay. Unlike previous theoretical work, which mainly focuses on deriving the capacity of demodulate-and-forward relaying, we observe that such combined signals have a structure similar to a convolutional code and hence propose a novel Viterbi-type decoder to recover data from them in practice. Another challenge is that the performance of a full-duplex relay is inherently bounded by the minimum of the relay's SNR and the destination's SNR. To break this limitation, we further develop a power allocation scheme to optimize the capacity of DF. We have built a prototype of DF using USRP software radios. Experimental results show that our power-adaptive DF delivers a throughput gain of 1.25×, on average, over the state-of-the-art full-duplex relay design. The gain is as high as 2.03× for the more challenged clients.
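Because the destination hears the direct symbol plus a one-symbol-delayed relayed copy, the received stream looks like the output of a two-tap channel, which a memory-1 trellis can recover. The sketch below is a minimal illustration of such a Viterbi-type detector, assuming BPSK modulation and illustrative tap gains h0 (direct path) and h1 (relayed path); it is not the DF implementation itself:

```python
def viterbi_two_tap(y, h0=1.0, h1=0.6):
    """Maximum-likelihood detection of BPSK symbols x[t] in {-1, +1}
    observed as y[t] = h0*x[t] + h1*x[t-1] + noise, the structure formed
    by the direct signal plus the relay's one-symbol-delayed copy. The
    trellis state is the previous symbol, as in a memory-1 convolutional
    code."""
    symbols = (-1.0, 1.0)
    cost = [0.0, 0.0]              # best path metric per state
    back = []                      # backpointers for traceback
    for obs in y:
        new_cost, ptr = [float("inf"), float("inf")], [0, 0]
        for cur in range(2):
            for prev in range(2):
                expected = h0 * symbols[cur] + h1 * symbols[prev]
                metric = cost[prev] + (obs - expected) ** 2
                if metric < new_cost[cur]:
                    new_cost[cur], ptr[cur] = metric, prev
        cost = new_cost
        back.append(ptr)
    state = 0 if cost[0] <= cost[1] else 1
    decoded = []
    for ptr in reversed(back):
        decoded.append(symbols[state])
        state = ptr[state]
    return decoded[::-1]

# Noiseless example: x = (+1, -1, +1) observed through the two taps.
print(viterbi_two_tap([0.4, -0.4, 0.4]))  # -> [1.0, -1.0, 1.0]
```

The squared-distance branch metric makes this maximum-likelihood under Gaussian noise, which is exactly the convolutional-code analogy the abstract draws.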
We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives. In particular, this opens the door to obfuscated computations that are sublinear in the length of their inputs. The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.
Availability is one of the most important requirements in production systems. Maintaining high availability in Infrastructure-as-a-Service (IaaS) cloud computing is a challenging task because of the complexity of service provisioning. By definition, availability can be maintained by using fault tolerance approaches. Many fault tolerance methods have been developed recently, but few of them focus on the fault detection aspect. In this paper, after a rigorous analysis of the nature of failures, we introduce a technique to identify the failures occurring in an IaaS system. By using a fuzzy logic algorithm, the proposed technique can provide better performance in terms of accuracy and detection speed, both of which are critical for cloud systems.
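As a rough illustration of the flavor of such a detector (not the paper's algorithm), the sketch below fuses two hypothetical VM health signals through hand-written membership functions and two toy rules; all thresholds are made up:

```python
def ramp(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def failure_score(cpu_util, resp_ms):
    """Toy fuzzy inference over two hypothetical VM health signals,
    returning a failure likelihood in [0, 1]."""
    high_cpu = ramp(cpu_util, 0.7, 0.95)      # "CPU is high"
    slow_resp = ramp(resp_ms, 200.0, 800.0)   # "response is slow"
    rule1 = min(high_cpu, slow_resp)  # IF high CPU AND slow response -> failing
    rule2 = 0.5 * slow_resp           # IF slow response -> possibly failing
    return max(rule1, rule2)          # max-aggregate the fired rules

# Example: a VM at 90% CPU answering in 650 ms.
print(failure_score(0.90, 650.0))  # -> 0.75
```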
Decoy routing is a promising new approach for censorship circumvention that relies on traffic re-direction by volunteer autonomous systems. Decoy routing is subject to a fundamental censorship attack, called routing around decoys (RAD), in which the censors re-route their clients' Internet traffic in order to evade decoy routing autonomous systems. Recently, there has been a heated debate in the community on the real-world feasibility of decoy routing in the presence of the RAD attack. Unfortunately, previous studies base their analyses on heuristic-based mechanisms for decoy placement as well as ad hoc strategies for the implementation of the RAD attack by the censors. In this paper, we perform the first systematic analysis of decoy routing in the presence of the RAD attack. We use game theory to model the interactions between decoy router deployers and the censors in various settings. Our game-theoretic analysis finds the optimal decoy placement strategies (as opposed to heuristic-based placements) in the presence of RAD censors who take their optimal censorship actions (as opposed to some ad hoc implementation of RAD). That is, we investigate the best decoy placement given the best RAD censorship. We consider two business models for the real-world deployment of decoy routers: a central deployment that resembles that of Tor, and a distributed deployment where autonomous systems individually decide on decoy deployment based on their economic interests. Through extensive simulation of Internet routes, we derive the optimal strategies in the two models for various censoring countries and under different assumptions about the budget and preferences of the censors and decoy deployers. We believe that our study is a significant step forward in understanding the practicality of the decoy routing circumvention approach.
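As a toy illustration of the game being solved (with made-up route data, far simpler than the paper's models), the sketch below computes the deployer's best placement against a censor that always routes around decoys whenever a decoy-free path exists:

```python
import itertools

def optimal_decoys(paths_per_client, candidate_ases, budget):
    """Brute-force minimax sketch of the decoy placement game: each
    client has several candidate AS-paths; a RAD censor re-routes a
    client along any decoy-free path if one exists, so a client is
    'covered' only when every candidate path hits a decoy."""
    def covered(decoys):
        return sum(all(set(p) & decoys for p in paths)
                   for paths in paths_per_client.values())
    return max((frozenset(d) for d in
                itertools.combinations(candidate_ases, budget)),
               key=covered)

# Two clients, three candidate decoy ASes (toy data).
paths = {"c1": [("as1", "as2"), ("as3",)],
         "c2": [("as2", "as3")]}
print(optimal_decoys(paths, ["as1", "as2", "as3"], budget=1))
```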
When people use social applications and services, their privacy faces a potentially serious threat. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement that takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing this measurement on real datasets, we find that some data can potentially be de-anonymized accurately, while the rest can be de-anonymized only at a coarse granularity. Utilizing this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By working on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we evaluate the presented de-anonymization attack on three well-known mobility traces: St Andrews, Infocom06, and Smallblue, and three social datasets: ArnetMiner, Google+, and Facebook. The experimental results demonstrate that the presented de-anonymization framework is very effective and robust to noise. The source code and employed datasets are publicly available at SecGraph [2015].
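A minimal sketch of the iterative, similarity-driven matching loop at the heart of such a framework; this greedy version is simplified, whereas the actual DA/ADA algorithms recompute similarities as the mapping grows and provide accuracy guarantees:

```python
def iterative_deanonymize(candidates, threshold):
    """Greedy matcher sketch: `candidates` maps pairs of
    (anonymized_node, auxiliary_node) to a unified-similarity score.
    Pairs above `threshold` are accepted highest-score first; each
    node on either side is matched at most once."""
    mapping, used_aux = {}, set()
    for (anon, aux), score in sorted(candidates.items(),
                                     key=lambda kv: kv[1], reverse=True):
        if score < threshold:
            break  # remaining pairs can only be de-anonymized coarsely
        if anon not in mapping and aux not in used_aux:
            mapping[anon] = aux
            used_aux.add(aux)
    return mapping
```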
There is an increasing trend for data owners to store their data on a third-party cloud server and to buy services from the cloud server that provide information to other users. To ensure confidentiality, the data is usually encrypted. Therefore, a privacy-preserving scheme for searching over encrypted data is of paramount importance. Predicate encryption (PE) is one of the attractive solutions due to its attribute-hiding merit. However, as the cloud is not always trusted, verifying the search results is also crucial. First, a generic construction of a Publicly Verifiable Predicate Encryption (PVPE) scheme is proposed to provide verification for PE; we reduce the security of PVPE to the security of PE. Then, from a practical point of view, to decrease the communication and computation overhead, an improved PVPE is proposed, at the price of a small probability of error.
In this paper, we propose a new approach to diagnosing problems in complex distributed systems. Our approach is based on the insight that many of the trickiest problems are anomalies. For instance, in a network, problems often affect only a small fraction of the traffic (e.g., perhaps a certain subnet), or they only manifest infrequently. Thus, it is quite common for the operator to have “examples” of both working and non-working traffic readily available – perhaps a packet that was misrouted, and a similar packet that was routed correctly. In this case, the cause of the problem is likely to be wherever the two packets were treated differently by the network. We present the design of a debugger that can leverage this information using a novel concept that we call differential provenance. Differential provenance tracks the causal connections between network states and state changes, just like classical provenance, but it can additionally perform root-cause analysis by reasoning about the differences between two provenance trees. We have built a diagnostic tool that is based on differential provenance, and we have used our tool to debug a number of complex, realistic problems in two scenarios: software-defined networks and MapReduce jobs. Our results show that differential provenance can be maintained at relatively low cost, and that it can deliver very precise diagnostic information; in many cases, it can even identify the precise root cause of the problem.
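The core operation can be pictured as a lockstep walk over the two provenance trees that reports the highest points of divergence. A simplified sketch, assuming trees encoded as nested dicts whose keys are events and whose leaves are empty dicts (the real system reasons over richer provenance graphs):

```python
def diff_provenance(good, bad, path=()):
    """Walk two provenance trees in lockstep and return the paths at
    which they first diverge, the candidate root causes."""
    diffs = []
    for key in set(good) | set(bad):
        if key not in good or key not in bad:
            diffs.append(path + (key,))        # subtree on one side only
        elif good[key] != bad[key]:
            # Same event on both sides but different history beneath it.
            diffs.extend(diff_provenance(good[key], bad[key], path + (key,)))
    return diffs

good = {"ingress": {"route_A": {}}}
bad = {"ingress": {"route_B": {}}}
print(diff_provenance(good, bad))  # divergence under 'ingress': route_A vs route_B
```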
This paper introduces GraphPrints, a novel graph-analytic approach for detecting anomalies in network flow data. Building on foundational network-mining techniques, our method represents each time slice of traffic as a graph and then counts graphlets, the small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified; furthermore, individual IPs exhibiting abnormal behavior are singled out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84% at the time-interval level and 0.05% at the IP level, with 100% true positive rates at both levels.
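To illustrate the pipeline shape (not the paper's code), the sketch below builds a tiny per-window feature vector from edge, wedge, and triangle counts with networkx, then flags windows via a median/MAD outlier test; GraphPrints itself counts richer induced graphlets and uses its own detector:

```python
import numpy as np
import networkx as nx

def graphlet_vector(edges):
    """Tiny stand-in for graphlet counting: edges, wedges (2-paths,
    not induced), and triangles of one traffic-window graph."""
    g = nx.Graph(edges)
    wedges = sum(d * (d - 1) // 2 for _, d in g.degree())
    triangles = sum(nx.triangles(g).values()) // 3
    return np.array([g.number_of_edges(), wedges, triangles], dtype=float)

def anomalous_windows(windows, k=3.0):
    """Flag windows whose count vector deviates from the median by more
    than k median-absolute-deviations in any coordinate."""
    x = np.array([graphlet_vector(w) for w in windows])
    med = np.median(x, axis=0)
    mad = np.median(np.abs(x - med), axis=0) + 1e-9  # avoid divide-by-zero
    return [i for i, row in enumerate(x)
            if np.any(np.abs(row - med) / mad > k)]
```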
The orthodox paradigm for defending against automated social-engineering attacks in large-scale socio-technical systems is reactive and victim-agnostic: defenses generally focus on identifying the attacks or attackers (e.g., phishing emails, social-bot infiltrations, malware offered for download). To change the status quo, we propose to identify, even if imperfectly, the vulnerable user population, that is, the users who are likely to fall victim to such attacks. Once identified, information about the vulnerable population can be used in two ways. First, the vulnerable population can be influenced by the defender through several means, including education, a specialized user experience, extra protection layers, and watchdogs. In the same vein, information about the vulnerable population can be used to fine-tune and re-prioritize defense mechanisms to offer differentiated protection, possibly at the cost of additional friction generated by the defense mechanism. Second, information about the user population can be used to identify an attack (or compromised users) based on differences between the general and the vulnerable populations. This paper considers the implications of the proposed paradigm for existing defenses in three areas (phishing of user credentials, malware distribution, and social-bot infiltration) and discusses how knowledge of the vulnerable population can enable more robust defenses.
Proofs of Data Possession/Retrievability (PoDP/PoR) schemes are essential to cloud storage services, since they increase clients' confidence in the integrity and availability of their data. The majority of PoDP/PoR schemes are constructed from homomorphic linear authentication (HLA) schemes, which reduce the communication cost between the client and the server. In this paper, a new subclass of authentication codes, named ε-authentication codes, is proposed, and a modular construction of HLA schemes from ε-authentication codes is presented. We prove that the security notions of HLA schemes are closely related to the size of the authenticator/tag space and the success probability of impersonation attacks (with non-zero source states) of the underlying ε-authentication codes. We show that most HLA schemes used in PoDP/PoR schemes are instantiations of our modular construction from some ε-authentication code. Following this line, an algebraic-curves-based ε-authentication code yields a new HLA scheme.
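For reference, the defining homomorphic property of an HLA scheme can be written as follows (a standard formulation; the paper's exact syntax and notation may differ):

```latex
% Tags combine linearly under the same coefficients as the messages.
\[
  \sigma_i = \mathsf{Tag}(sk, m_i)\ \text{for all } i
  \quad\Longrightarrow\quad
  \mathsf{Verify}\Bigl(pk,\ \sum_i c_i\, m_i,\ \sum_i c_i\, \sigma_i\Bigr) = 1 .
\]
```

This linearity is what lets the server answer a random challenge (c_1, ..., c_k) with a single aggregated message-tag pair, which is the communication saving the abstract refers to.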
In this demo, we present an immersive virtual reality (VR) system that integrates multimodal interaction sensors (i.e., a smartphone, Kinect v2, and Myo armband) with streaming technology to improve the VR experience. The integrated system solves two common problems in most VR systems: (1) the very limited playing area due to the transmission cable between the computer and the display/interaction devices, and (2) the non-intuitive way of controlling virtual objects. We use Unreal Engine 4 to develop an immersive VR game with 6 interactive levels to demonstrate the feasibility of our system. In the game, the user can not only walk freely within a large playing area surrounded by multiple Kinect sensors but also select virtual objects to grab and throw with the Myo armband. The experiment shows that our approach is workable for the VR experience.
Bluetooth-reliant devices are increasingly proliferating into various industry and consumer sectors as part of a burgeoning wearable market that adds convenience and awareness to everyday life. Relying primarily on a constantly changing hop pattern to reduce data sniffing during transmission, wearable devices routinely disconnect and reconnect with their base station (typically a cell phone), causing a connection repair each time. These connection repairs allow an adversary to determine which local wearable devices are communicating with which base stations. In addition, data transmitted to a base station as part of a wearable app may be forwarded onward to an awaiting web API even if the base station is in an insecure environment (e.g., on public Wi-Fi). In this paper, we introduce an approach to increase the security and privacy of wearable devices by imposing transmission changes given situational awareness of the base station. These changes are asserted via policy rules based on the sensor information collected and aggregated by the base station from the wearable devices. The rules are housed in an application on the base station that adapts the base station to a state in which it prevents data from being transmitted by the wearable devices without disconnecting them. The policies can be updated manually or through an over-the-air update, as determined by the user.
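A sketch of what such situational policy rules might look like on the base station; the rule conditions, context keys, and action names here are hypothetical, not the paper's schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """If `condition` holds for the context sensed by the base station,
    apply `action` (e.g., hold wearable data locally instead of
    forwarding it to the web API)."""
    condition: Callable[[dict], bool]
    action: str

RULES = [
    PolicyRule(lambda ctx: ctx.get("wifi") == "public", "hold_uploads"),
    PolicyRule(lambda ctx: ctx.get("location") == "home", "forward_all"),
]

def decide(ctx: dict) -> str:
    """First matching rule wins; forward by default."""
    for rule in RULES:
        if rule.condition(ctx):
            return rule.action
    return "forward_all"

print(decide({"wifi": "public"}))  # -> hold_uploads
```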
The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, preciseness, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
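For reference, the reported metrics follow the standard definitions over true positives, false positives, and false negatives, so any invalid assumption about what the test suite marks as a flaw shifts all three:

```python
def precision_recall_f(tp, fp, fn):
    """Standard tool-evaluation metrics: tp/fp/fn are counts of correct
    findings, spurious findings, and missed flaws. Denominators are
    assumed nonzero."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Example: 80 true findings, 20 false alarms, 40 missed flaws.
print(precision_recall_f(80, 20, 40))  # -> (0.8, 0.666..., 0.727...)
```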
Geo-distributed situation awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency sensitivity in converting sensed data to actionable knowledge; and elastic, bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency of the sense-process-actuate cycle in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application-generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the fog nodes. Algorithms are presented for launching application components and for migrating these components between fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 s, and that joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.
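As a rough, hypothetical sketch of the kind of spatio-temporal put/get abstraction described (this is not the actual Foglets API; class and method names are made up):

```python
import time

class SpatioTemporalStore:
    """Values are keyed by (region, key) and stamped with insertion
    time, so a fog node can retrieve recent data for its coverage
    area, in the spirit of the abstraction the abstract describes."""
    def __init__(self):
        self._data = {}

    def put(self, region, key, value):
        self._data.setdefault((region, key), []).append((time.time(), value))

    def get_recent(self, region, key, max_age_s):
        now = time.time()
        return [v for t, v in self._data.get((region, key), [])
                if now - t <= max_age_s]

store = SpatioTemporalStore()
store.put("downtown", "cam42", b"frame-bytes")
print(store.get_recent("downtown", "cam42", max_age_s=5.0))
```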
The moving target defense (MTD) approach to network security aims to dynamically change attack surfaces, making it more difficult for attackers to strike. One such capability is to change the IP addresses of subnetworks in unpredictable ways in an attempt to disrupt an attacker's ability to collect the reconnaissance information needed to launch successful attacks. In particular, denial of service (DoS) attacks and worms are examples of distributed attacks that can propagate through networks very quickly but could be disrupted by MTD. Conversely, MTD is also disruptive to regular users; for example, when IP addresses are changed dynamically, DNS caches are no longer effective for resolving addresses before communication can be performed. In this work we take another approach: we note that the deployment of MTD could be triggered by lightweight intrusion detection. We demonstrate that the neuro-evolution of augmenting topologies (NEAT) algorithm can construct detectors that operate on packet data and produce sparse topologies, and hence operate in real time. Benchmarking on examples of DoS and worm attacks indicates that NEAT detectors can be constructed from relatively small amounts of data and detect attacks with approximately 90% accuracy. Additional experiments with the open-ended evolution of code modules through genetic programming teams achieved detection rates approaching 100%. We believe that such an approach gives MTD a more specific deployment strategy that is less invasive to legitimate users while still disrupting the actions of malicious users.
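A minimal sketch of evolving such a detector with the neat-python library; the random stand-in features, the fitness function, and the "neat_detector.cfg" config file are assumptions, not the paper's setup:

```python
import random
import neat  # pip install neat-python

# Toy stand-ins for packet-derived feature vectors and attack labels;
# a real detector would be trained on captured DoS/worm traffic.
FEATURES = [[random.random() for _ in range(4)] for _ in range(200)]
LABELS = [int(sum(x) > 2.0) for x in FEATURES]

def eval_genomes(genomes, config):
    """Fitness = classification accuracy of each evolved network."""
    for _, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        hits = sum((net.activate(x)[0] > 0.5) == bool(y)
                   for x, y in zip(FEATURES, LABELS))
        genome.fitness = hits / len(LABELS)

# "neat_detector.cfg" is a hypothetical config file that must declare
# num_inputs=4 and num_outputs=1 among the usual NEAT parameters.
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_detector.cfg")
winner = neat.Population(config).run(eval_genomes, 50)
```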
Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the blockchain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even in non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.
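The variance observation can be checked directly: if block inter-arrival times are exponential with mean μ, a fee-only reward f·X has variance f²μ², so its standard deviation equals its mean, unlike a constant block subsidy. A quick Monte Carlo sketch with illustrative parameters:

```python
import random

def miner_reward_variance(fee_rate, n_blocks=100_000, mean_interval=600.0):
    """Fee-only block rewards accrue at `fee_rate` per second, so a block
    found after an exponentially distributed interval carries reward
    fee_rate * interval; empirically, variance ~= (fee_rate * mean)**2."""
    rewards = [fee_rate * random.expovariate(1.0 / mean_interval)
               for _ in range(n_blocks)]
    mean = sum(rewards) / n_blocks
    var = sum((r - mean) ** 2 for r in rewards) / n_blocks
    return mean, var

mean, var = miner_reward_variance(fee_rate=0.01)
print(mean, var)  # mean ~ 6, variance ~ 36: std. dev. equals the mean
```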