
SoS Newsletter - Advanced Book Block

 

 

Computing Theory and Security Resilience

2014


The works cited here combine research into computing theory with research into security resilience. All were presented in 2014.


Praks, P.; Kopustinskas, V., “Monte-Carlo Based Reliability Modelling of a Gas Network Using Graph Theory Approach,” 2014 Ninth International Conference on Availability, Reliability and Security (ARES), pp. 380-386, 8-12 Sept. 2014. doi:10.1109/ARES.2014.57
Abstract: The aim of the study is to develop a probabilistic model of the European gas transmission system to analyse, in a single computer model, the reliability and capacity constraints of a gas transmission network. We describe our approach to modelling the reliability and capacity constraints of network elements, for example gas storages and compressor stations, by a multi-state system. The paper presents our experience with the computer implementation of a gas transmission network probabilistic prototype model based on a generalization of the maximum flow problem for a stochastic-flow network in which elements can randomly fail with known failure probabilities. The paper includes a test-case benchmark study based on a real gas transmission network. Monte-Carlo simulations are used to estimate the probability that less than the demanded volume of the commodity (for example, gas) is available in the selected network nodes. Simulated results are presented and analysed in depth by statistical methods.
Keywords: Monte Carlo methods; compressors; gas industry; graph theory; probability; reliability; stochastic processes; European gas transmission system probabilistic model; Monte-Carlo based reliability modelling; Monte-Carlo simulations; capacity constraints; compressor stations; computer model; gas network; gas storages; gas transmission network probabilistic prototype model; graph theory approach; known failure probabilities; maximum flow problem; multistate system; network elements; network nodes; probability estimation; reliability constraints; statistical methods; stochastic-flow network; test-case benchmark study; Computational modeling; Computer network reliability; Liquefied natural gas; Monte Carlo methods; Pipelines; Probabilistic logic; Reliability; Monte-Carlo methods; gas transmission network modelling; network reliability; network resilience (ID#: 15-5807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980306&isnumber=6980232
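
To illustrate the Monte-Carlo flow-reliability estimation described in this abstract, here is a minimal sketch in Python using networkx: elements fail independently with known probabilities, a max-flow computation checks whether the demanded volume still reaches the sink, and the shortfall probability is estimated over many trials. The toy graph, capacities, failure probabilities and demand below are invented for illustration and are not taken from the paper.

# Monte-Carlo estimate of the probability that a stochastic-flow network
# delivers less than the demanded volume at a sink node (illustrative sketch).
import random
import networkx as nx

def build_network():
    # Toy 4-node network; capacities and failure probabilities are made up.
    g = nx.DiGraph()
    g.add_edge("source", "a", capacity=60, p_fail=0.02)
    g.add_edge("source", "b", capacity=50, p_fail=0.05)
    g.add_edge("a", "sink", capacity=40, p_fail=0.03)
    g.add_edge("b", "sink", capacity=45, p_fail=0.04)
    g.add_edge("a", "b", capacity=20, p_fail=0.01)
    return g

def shortfall_probability(demand, trials=20_000, seed=1):
    random.seed(seed)
    base = build_network()
    shortfalls = 0
    for _ in range(trials):
        sample = base.copy()
        # Each element fails independently with its known failure probability.
        failed = [(u, v) for u, v, d in sample.edges(data=True)
                  if random.random() < d["p_fail"]]
        sample.remove_edges_from(failed)
        if nx.maximum_flow_value(sample, "source", "sink") < demand:
            shortfalls += 1
    return shortfalls / trials

if __name__ == "__main__":
    print("P(delivered < demand):", shortfall_probability(demand=70))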

 

T. Stepanova, D. Zegzhda. “Applying Large-scale Adaptive Graphs to Modeling Internet of Things Security.” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 479. doi:10.1145/2659651.2659696
Abstract: Many upcoming IT trends are based on the concept of heterogeneous networks; the Internet of Things is among them. Modern heterogeneous networks are characterized by hardly predictable behavior, hundreds of parameters of network nodes and connections, and the lack of a single basis for the development of control methods and algorithms. To overcome the listed problems, one needs to implement topological modeling of dynamically changing structures. In this paper the authors propose a basic theoretical framework that will allow estimation of controllability, resiliency, scalability and other determinant parameters of complex heterogeneous networks.
Keywords: internet of things, large-scale adaptive graph, security modeling, sustainability (ID#: 15-5808)
URL: http://doi.acm.org/10.1145/2659651.2659696

 

Xing Chen, Wei Yu, David Griffith, Nada Golmie, Guobin Xu. “On Cascading Failures and Countermeasures Based on Energy Storage in the Smart Grid.” RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 291-296. doi:10.1145/2663761.2663770
Abstract: Recently, there have been growing concerns about electric power grid security and resilience. The performance of the power grid may suffer from component failures or targeted attacks. A sophisticated adversary may target critical components in the grid, leading to cascading failures and large blackouts. To this end, this paper begins with identifying the most critical components that lead to cascading failures in the grid and then presents a defensive mechanism using energy storage to defend against cascading failures. Based on the optimal power flow control on the standard IEEE power system test cases, we systematically assess component significance, simulate attacks against power grid components, and evaluate the consequences. We also conduct extensive simulations to investigate the effectiveness of deploying Energy Storage Systems (ESSs), in terms of storage capacity and deployment locations, to mitigate cascading failures. Through extensive simulations, our data shows that integrating energy storage systems into the smart grid can efficiently mitigate cascading failures.
Keywords: cascading failure, cascading mitigation, energy storage, smart grid (ID#: 15-5809)
URL: http://doi.acm.org/10.1145/2663761.2663770

 

Gokce Gorbil, Omer H. Abdelrahman, Erol Gelenbe. “Storms in Mobile Networks.” Q2SWinet '14 Proceedings of the 10th ACM Symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 119-126. doi:10.1145/2642687.2642688
Abstract: Mobile networks are vulnerable to signalling attacks and storms caused by traffic that overloads the control plane through excessive signalling, which can be introduced via malware and mobile botnets. With the advent of machine-to-machine (M2M) communications over mobile networks, the potential for signalling storms increases due to the normally periodic nature of M2M traffic and the sheer number of communicating nodes. Several mobile network operators have also experienced signalling storms caused by poorly designed applications that result in service outages. The radio resource control (RRC) protocol is particularly susceptible to such attacks, motivating this work within the EU FP7 NEMESYS project, which presents simulations that clarify the temporal dynamics of user behavior and signalling, allowing us to suggest how such attacks can be detected and mitigated.
Keywords: 3G to 5G, malware, network attacks, network simulation, performance analysis, signalling storms, umts networks (ID#: 15-5810)
URL: http://doi.acm.org/10.1145/2642687.2642688

 

Lina Perelman, Saurabh Amin. “A Network Interdiction Model for Analyzing the Vulnerability of Water Distribution Systems.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 135-144. doi:10.1145/2566468.2566480
Abstract: This article presents a network interdiction model to assess the vulnerabilities of a class of physical flow networks. A flow network is modeled by a potential function defined over the nodes and a flow function defined over arcs (links). In particular, the difference in potential between two nodes is characterized by a nonlinear flux function of the flow on the link between the two nodes. To assess the vulnerability of the network to adversarial attack, the problem is formulated as an attacker-defender network interdiction model. The attacker's objective is to interdict the most valuable links of the network given his resource constraints. The defender's objective is to minimize power loss and the unmet demand in the network. A bi-level approach is explored to identify the most critical links for network interdiction. The applicability of the proposed approach is demonstrated on a reference water distribution network, and its utility toward developing mitigation plans is discussed.
Keywords: cyber-physical systems, network flow analysis, network interdiction, vulnerability assessment, water distribution systems (ID#: 15-5811)
URL: http://doi.acm.org/10.1145/2566468.2566480
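
The attacker-defender structure described above can be written schematically as a bi-level (max-min) interdiction program. The LaTeX sketch below uses our own generic notation (interdiction variables y_e, budget B, flows q_e, shortfalls s_i) and a linear capacity model; the paper's physical-flow formulation additionally couples flows to nodal potentials through a nonlinear flux function, which this sketch omits.

% Schematic attacker-defender interdiction model; notation is ours, not the paper's.
\max_{\substack{y \in \{0,1\}^{E} \\ \sum_{e \in E} c_e y_e \le B}} \;
\min_{q \ge 0,\ s \ge 0} \; \sum_{i \in N} w_i \, s_i
\quad \text{s.t.} \quad
\sum_{e \in \delta^{-}(i)} q_e - \sum_{e \in \delta^{+}(i)} q_e + s_i = d_i \;\; \forall i \in N,
\qquad
q_e \le (1 - y_e)\, u_e \;\; \forall e \in E.

Here y_e = 1 if the attacker removes link e at cost c_e within budget B, q_e is the flow on link e with capacity u_e, d_i is the demand at node i, and s_i is the unmet demand that the defender, weighting by w_i, tries to minimize.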

 

Radoslav Ivanov, Miroslav Pajic, Insup Lee. “Resilient Multidimensional Sensor Fusion Using Measurement History.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 1-10. doi:10.1145/2566468.2566475
Abstract: This work considers the problem of performing resilient sensor fusion using past sensor measurements. In particular, we consider a system with n sensors measuring the same physical variable where some sensors might be attacked or faulty. We consider a setup in which each sensor provides the controller with a set of possible values for the true value. Here, more precise sensors provide smaller sets. Since many modern sensors provide multidimensional measurements (e.g. position in three dimensions), the sets considered in this work are multidimensional polyhedra. Given the assumption that some sensors can be attacked or faulty, the paper provides a sensor fusion algorithm that obtains a fusion polyhedron which is guaranteed to contain the true value and is minimal in size. A bound on the volume of the fusion polyhedron is also proved based on the number of faulty or attacked sensors. In addition, we incorporate system dynamics in order to utilize past measurements and further reduce the size of the fusion polyhedron. We describe several ways of mapping previous measurements to current time and compare them, under different assumptions, using the volume of the fusion polyhedron. Finally, we illustrate the implementation of the best of these methods and show its effectiveness using a case study with sensor values from a real robot.
Keywords: cps security, fault-tolerance, fault-tolerant algorithms, sensor fusion (ID#: 15-5812)
URL: http://doi.acm.org/10.1145/2566468.2566475
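
As a much-simplified illustration of the fusion guarantee, the sketch below works with one-dimensional intervals instead of the paper's multidimensional polyhedra and ignores the measurement-history step: with at most f faulty sensors, the true value is at least the (f+1)-th largest lower bound and at most the (f+1)-th smallest upper bound. The readings are invented.

# 1-D fault-tolerant interval fusion sketch: with at most f faulty sensors,
# the true value is >= the (f+1)-th largest lower bound and
#                   <= the (f+1)-th smallest upper bound.
def fuse_intervals(intervals, f):
    lowers = sorted((lo for lo, hi in intervals), reverse=True)
    uppers = sorted(hi for lo, hi in intervals)
    lo, hi = lowers[f], uppers[f]
    if lo > hi:
        raise ValueError("inconsistent inputs: more than f sensors must be faulty")
    return lo, hi

if __name__ == "__main__":
    readings = [(2.9, 3.1), (2.8, 3.2), (3.0, 3.1), (10.0, 10.4)]  # last one faulty
    print(fuse_intervals(readings, f=1))   # -> (3.0, 3.1)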

 

Marina Krotofil, Alvaro A. Cárdenas, Bradley Manning, Jason Larsen. “CPS: Driving Cyber-Physical Systems to Unsafe Operating Conditions by Timing DoS Attacks on Sensor Signals.” ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 146-155. doi:10.1145/2664243.2664290
Abstract: DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state. Understanding the timing parameters of the physical processes not only allows an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use the Tennessee Eastman challenge process to study an attacker that has to identify (in real time) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams require to respond to attacks before they cause major damage.
Keywords: CUSUM, DoS attacks, Tennessee Eastman process, cyber-physical systems, optimal stopping problems (ID#: 15-5813)
URL: http://doi.acm.org/10.1145/2664243.2664290
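
The optimal-stopping flavour of the timing problem can be illustrated with the classical 1/e rule for the Best Choice (Secretary) Problem cited in the abstract: observe roughly the first n/e candidate windows without acting, then act on the first window that beats everything seen so far. The damage scores below are random placeholders, not the Tennessee Eastman quantities analysed in the paper.

# Classical 1/e "best choice" stopping rule applied to a stream of scored
# attack windows (higher score = more damaging moment to start a DoS attack).
import math
import random

def secretary_stop(scores):
    n = len(scores)
    lookahead = max(1, round(n / math.e))      # observe-only phase
    benchmark = max(scores[:lookahead])
    for t in range(lookahead, n):
        if scores[t] > benchmark:
            return t                            # first window beating the benchmark
    return n - 1                                # forced to act at the last window

if __name__ == "__main__":
    random.seed(0)
    windows = [random.random() for _ in range(50)]   # placeholder damage scores
    t = secretary_stop(windows)
    print(f"attack launched at window {t}, score {windows[t]:.3f}, best {max(windows):.3f}")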

 

Ran Gelles, Amit Sahai, Akshay Wadia. “Private Interactive Communication Across an Adversarial Channel.” ITCS '14 Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, January 2014, Pages 135-144. doi:10.1145/2554797.2554812
Abstract: Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information-theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question of whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors. The same impossibility holds also for sub-constant noise rates, e.g., when c is exponentially small (as a function of the input size).
Keywords: adversarial noise, coding, information-theoretic security, interactive communication, private function evaluation (ID#: 15-5814)
URL: http://doi.acm.org/10.1145/2554797.2554812

 

Saleh Soltan, Dorian Mazauric, Gil Zussman. “Cascading Failures in Power Grids: Analysis and Algorithms.” e-Energy '14 Proceedings of the 5th International Conference on Future Energy Systems, June 2014, Pages 195-206. doi:10.1145/2602044.2602066
Abstract: This paper focuses on cascading line failures in the transmission system of the power grid. Recent large-scale power outages demonstrated the limitations of percolation- and epidemic-based tools in modeling cascades. Hence, we study cascades by using computational tools and a linearized power flow model. We first obtain results regarding the Moore-Penrose pseudo-inverse of the power grid admittance matrix. Based on these results, we study the impact of a single line failure on the flows on other lines. We also illustrate via simulation the impact of the distance and resistance distance on the flow increase following a failure, and discuss the difference from the epidemic models. We use the pseudo-inverse of admittance matrix to develop an efficient algorithm to identify the cascading failure evolution, which can be a building block for cascade mitigation. Finally, we show that finding the set of lines whose removal results in the minimum yield (the fraction of demand satisfied after the cascade) is NP-Hard and introduce a simple heuristic for finding such a set. Overall, the results demonstrate that using the resistance distance and the pseudo-inverse of admittance matrix provides important insights and can support the development of efficient algorithms.
Keywords: algorithms, cascading failures, power grid, pseudo-inverse (ID#: 15-5815)
URL: http://doi.acm.org/10.1145/2602044.2602066
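
The linearized power-flow computation that underlies this analysis can be sketched with the Moore-Penrose pseudo-inverse of the grid's admittance (weighted Laplacian) matrix: phase angles follow from θ = A⁺p and line flows from angle differences over line reactances. The 4-bus example below is invented; the paper's algorithmic use of the pseudo-inverse to track cascade evolution goes well beyond this.

# Linearized (DC) power flow via the Moore-Penrose pseudo-inverse of the
# admittance (weighted Laplacian) matrix; toy 4-bus example.
import numpy as np

lines = {(0, 1): 0.1, (1, 2): 0.2, (0, 3): 0.1, (3, 2): 0.2}   # line: reactance
p = np.array([1.0, 0.0, -1.0, 0.0])                            # injections, sum to 0

n = 4
A = np.zeros((n, n))
for (i, j), x in lines.items():
    b = 1.0 / x
    A[i, i] += b; A[j, j] += b
    A[i, j] -= b; A[j, i] -= b

theta = np.linalg.pinv(A) @ p                                   # phase angles
flows = {(i, j): (theta[i] - theta[j]) / x for (i, j), x in lines.items()}

for line, f in flows.items():
    print(f"flow on {line}: {f:+.3f}")

# A single line failure can be re-evaluated by rebuilding A without that line
# and recomputing the flows; detecting the resulting overloads is the basic
# step a cascade simulation loop would repeat.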

 

Mahdi Zamani, Mahnush Movahedi. “Secure Location Sharing.” FOMC '14 Proceedings of the 10th ACM International Workshop on Foundations of Mobile Computing, August 2014, Pages 1-10. doi:10.1145/2634274.2634281
Abstract: In the last decade, the number of location-aware mobile devices has mushroomed. As location-based services grow rapidly, they raise many questions and challenges when it comes to privacy. For example, who owns the location data, and for what purpose is the data used? To answer these questions, we need new tools for location privacy. In this paper, we focus on the problem of secure location sharing, where a group of n clients want to collaborate with each other to anonymously share their location data with a location database server and execute queries based on them. To be more realistic, we assume that up to a certain fraction of the clients are controlled arbitrarily by an active and computationally unbounded adversary. A relaxed version of this problem has already been studied in the literature assuming either a trusted third party or a weaker adversarial model. We alternatively propose a scalable fully-decentralized protocol for secure location sharing that tolerates up to n/6 statically-chosen malicious clients and does not require any trusted third party. We show that, unlike most other location-based services, our protocol is secure against traffic-analysis attacks. We also show that our protocol requires each client to send a polylogarithmic number of bits and compute a polylogarithmic number of operations (with respect to n) to query a point of interest based on its location.
Keywords: distributed algorithms, fault-tolerance, location-based services (ID#: 15-5816)
URL: http://doi.acm.org/10.1145/2634274.2634281

 

Zain Shamsi, Ankur Nandwani, Derek Leonard, Dmitri Loguinov. “Hershel: Single-Packet OS Fingerprinting.” ACM SIGMETRICS Performance Evaluation Review, Volume 42, Issue 1, June 2014, Pages 195-206. doi:10.1145/2637364.2591972
Abstract: Traditional TCP/IP fingerprinting tools (e.g., nmap) are poorly suited for Internet-wide use due to the large amount of traffic and intrusive nature of the probes. This can be overcome by approaches that rely on a single SYN packet to elicit a vector of features from the remote server; however, these methods face difficult classification problems due to the high volatility of the features and severely limited amounts of information contained therein. Since these techniques have not been studied before, we first pioneer stochastic theory of single-packet OS fingerprinting, build a database of 116 OSes, design a classifier based on our models, evaluate its accuracy in simulations, and then perform OS classification of 37.8M hosts from an Internet-wide scan.
Keywords: internet measurement, os classification, os fingerprinting (ID#: 15-5817)
URL: http://doi.acm.org/10.1145/2637364.2591972
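
Single-packet fingerprinting matches the feature vector elicited by one SYN probe (for example initial TTL, TCP window size, and SYN-ACK retransmission timeouts) against a signature database. The sketch below performs only a naive nearest-signature match over a made-up three-entry database; Hershel's contribution is a stochastic matching model that copes with delay jitter and feature volatility, which this sketch does not attempt.

# Naive nearest-signature matching over single-SYN features (toy database).
# Feature vector: (initial TTL, TCP window, SYN-ACK retransmission timeouts in s).
SIGNATURES = {                       # values are illustrative, not a real database
    "linux-ish":   (64, 14600, (1.0, 2.0, 4.0, 8.0, 16.0)),
    "windows-ish": (128, 8192, (3.0, 6.0, 12.0)),
    "bsd-ish":     (64, 65535, (3.0, 6.0, 12.0, 24.0)),
}

def distance(observed, signature):
    ttl_o, win_o, rto_o = observed
    ttl_s, win_s, rto_s = signature
    d = abs(ttl_o - ttl_s) / 255 + abs(win_o - win_s) / 65535
    # Compare retransmission timeouts pairwise, padding the shorter vector.
    k = max(len(rto_o), len(rto_s))
    pad = lambda v: list(v) + [0.0] * (k - len(v))
    d += sum(abs(a - b) for a, b in zip(pad(rto_o), pad(rto_s))) / k
    return d

def classify(observed):
    return min(SIGNATURES, key=lambda name: distance(observed, SIGNATURES[name]))

if __name__ == "__main__":
    probe = (63, 14480, (1.1, 2.1, 4.3, 8.2))    # one hop of TTL decay, some jitter
    print(classify(probe))                        # -> linux-ish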

 

Heath J. LeBlanc, Firas Hassan. “Resilient Distributed Parameter Estimation in Heterogeneous Time-Varying Networks.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 19-28. doi:10.1145/2566468.2566476
Abstract: In this paper, we study a lightweight algorithm for distributed parameter estimation in a heterogeneous network in the presence of adversary nodes. All nodes interact under a local broadcast model of communication in a time-varying network comprised of many inexpensive normal nodes, along with several more expensive, reliable nodes. Either the normal or reliable nodes may be tampered with and overtaken by an adversary, thus becoming an adversary node. The reliable nodes have an accurate estimate of their true parameters, whereas the inexpensive normal nodes communicate and take difference measurements with neighbors in the network in order to better estimate their parameters. The normal nodes are unsure, a priori, about which of their neighbors are normal, reliable, or adversary nodes. However, by sharing information on their local estimates with neighbors, we prove that the resilient iterative distributed estimation (RIDE) algorithm, which utilizes redundancy by removing extreme information, is able to drive the local estimates to their true parameters as long as each normal node is able to interact with a sufficient number of reliable nodes often enough and is not directly influenced by too many adversary nodes.
Keywords: adversary, clock synchronization, distributed algorithm, distributed parameter estimation, localization, resilient systems (ID#: 15-5818)
URL: http://doi.acm.org/10.1145/2566468.2566476
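
The "remove extreme information" idea can be illustrated with a trimmed-mean update in the spirit of resilient consensus rules: at each step a normal node discards the F largest and F smallest values it heard and averages the rest with its own estimate. This is not the paper's exact RIDE algorithm, which additionally exploits reliable anchor nodes and pairwise difference measurements; the node values and adversary behaviour below are invented.

# Trimmed-mean resilient averaging sketch: each normal node drops the F highest
# and F lowest neighbour estimates before averaging (illustrative, not RIDE itself).
def trimmed_update(own_value, neighbour_values, F):
    values = sorted(neighbour_values)
    kept = values[F:len(values) - F] if len(values) > 2 * F else []
    pool = kept + [own_value]
    return sum(pool) / len(pool)

def simulate(rounds=30, F=1):
    estimates = {"n1": 0.0, "n2": 5.0, "n3": 9.0}    # normal nodes, wrong initial guesses
    reliable = {"r1": 3.0, "r2": 3.0}                # reliable nodes hold the true parameter
    adversary = lambda t: 1000.0 + t                 # adversary broadcasts arbitrary values
    for t in range(rounds):
        broadcast = {**estimates, **reliable, "adv": adversary(t)}
        estimates = {
            node: trimmed_update(value, [v for k, v in broadcast.items() if k != node], F)
            for node, value in estimates.items()
        }
    return estimates

if __name__ == "__main__":
    print(simulate())    # normal estimates converge toward the reliable value 3.0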

 

Benoît Libert, Marc Joye, Moti Yung. “Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 303-312. doi:10.1145/2611462.2611498
Abstract: Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: They have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes — where the servers can compute their partial signatures without communication with other servers — with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a hard-to-assure and hard-to-administer property in actual systems). To the best of our knowledge, such a fully distributed highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born — although it is well-known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of faulty players).
Keywords: adaptive security, availability, distributed key generation, efficiency, erasure-free schemes, fault tolerance, fully distributed systems, non-interactivity, threshold signature schemes (ID#: 15-5819)
URL: http://doi.acm.org/10.1145/2611462.2611498
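
Threshold schemes rest on t-of-n sharing of a private key. As background for this entry, here is a minimal Shamir secret-sharing sketch over a prime field (share generation plus Lagrange reconstruction); it only illustrates the share structure and is neither the paper's distributed key generation nor its signing protocol, in which the key is never reconstructed in one place.

# Minimal Shamir t-of-n secret sharing over a prime field (background sketch only;
# real threshold signature schemes work in much larger groups and never gather
# the shares to rebuild the key).
import random

P = 2**127 - 1          # a Mersenne prime, large enough for this toy example

def make_shares(secret, t, n):
    """Split `secret` so that any t shares reconstruct it; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * (-xm)) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=123456789, t=3, n=5)
    print(reconstruct(shares[:3]))        # any 3 of the 5 shares -> 123456789
    print(reconstruct(shares[1:4]))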

 

Nathaniel Husted, Steven Myers. “Emergent Properties & Security: The Complexity of Security as a Science.” NSPW '14 Proceedings of the 2014 New Security Paradigms Workshop, September 2014, Pages 1-14. doi:10.1145/2683467.2683468
Abstract: The notion of emergent properties is becoming common place in the physical and social sciences, with applications in physics, chemistry, biology, medicine, economics, and sociology. Unfortunately, little attention has been given to the discussion of emergence in the realm of computer security, from either the attack or defense perspectives, despite there being examples of such attacks and defenses. We review the concept of emergence, discuss it in the context of computer security, argue that understanding such concepts is essential for securing our current and future systems, give examples of current attacks and defenses that make use of such concepts, and discuss the tools currently available to understand this field. We conclude by arguing that more focus needs to be given to the emergent perspective in information security, especially as we move forward to the Internet of Things and a world full of cyber-physical systems, as we believe many future attacks will make use of such ideas and defenses will require such insights.
Keywords: complex systems, information security, ubiquitous computing (ID#: 15-5820)
URL: http://doi.acm.org/10.1145/2683467.2683468

 

Minzhe Guo, Prabir Bhattacharya. “Diverse Virtual Replicas for Improving Intrusion Tolerance in Cloud.” CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 41-44. doi:10.1145/2602087.2602116
Abstract: Intrusion tolerance is important for services in the cloud to continue functioning while under attack. Byzantine fault-tolerant replication is considered a fundamental component of intrusion-tolerant systems. However, a monoculture of replicas can render the theoretical properties of a Byzantine fault-tolerant system ineffective, even when proactive recovery techniques are employed. This paper exploits the design diversity available from off-the-shelf operating system products and studies how to diversify the configurations of virtual replicas to improve the resilience of the service in the presence of attacks. A game-theoretic model is proposed for studying the optimal diversification strategy for the system defender, and an efficient algorithm is designed to approximate the optimal defense strategies in large games.
Keywords: diversity, intrusion tolerance, virtual replica (ID#: 15-5821)
URL: http://doi.acm.org/10.1145/2602087.2602116
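
The optimal randomized diversification idea can be illustrated with a toy zero-sum matrix game solved by linear programming: rows are replica OS configurations, columns are attacker exploit choices, and entries are an assumed defender payoff (say, the probability the replica survives). The payoff matrix below is invented, and the paper's game model and approximation algorithm are considerably richer than this sketch.

# Defender's optimal mixed strategy in a toy zero-sum diversification game,
# solved as a linear program (maximize the guaranteed payoff v).
import numpy as np
from scipy.optimize import linprog

# payoff[i][j] = defender payoff when deploying OS config i against exploit j
payoff = np.array([
    [0.9, 0.2, 0.7],    # config A
    [0.3, 0.8, 0.6],    # config B
    [0.5, 0.5, 0.4],    # config C
])
n_configs, n_exploits = payoff.shape

# Variables: x_1..x_n (mixing probabilities) and v (guaranteed payoff).
# Maximize v  <=>  minimize -v, subject to payoff^T x >= v, sum(x) = 1, x >= 0.
c = np.concatenate([np.zeros(n_configs), [-1.0]])
A_ub = np.hstack([-payoff.T, np.ones((n_exploits, 1))])   # v - (payoff^T x)_j <= 0
b_ub = np.zeros(n_exploits)
A_eq = np.array([[1.0] * n_configs + [0.0]])
b_eq = np.array([1.0])
bounds = [(0, None)] * n_configs + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n_configs], res.x[-1]
print("defender mix:", np.round(x, 3), "guaranteed payoff:", round(v, 3))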

 

Stjepan Picek, Bariş Ege, Lejla Batina, Domagoj Jakobovic, Łukasz Chmielewski, Marin Golub. “On Using Genetic Algorithms for Intrinsic Side-Channel Resistance: The Case of AES S-Box.” CS2 '14 Proceedings of the First Workshop on Cryptography and Security in Computing Systems, January 2014, Pages 13-18. doi:10.1145/2556315.2556319
Abstract: Finding balanced S-boxes with high nonlinearity and low transparency order is a difficult problem. The property of transparency order is important since it specifies the resilience of an S-box against differential power analysis. Better values for transparency order and hence improved side-channel security often imply less in terms of nonlinearity. Therefore, it is impossible to find an S-box with all optimal values. Currently, there are no algebraic procedures that can give the preferred and complete set of properties for an S-box. In this paper, we employ evolutionary algorithms to find S-boxes with desired cryptographic properties. Specifically, we conduct experiments for the 8×8 S-box case as used in the AES standard. The results of our experiments proved the feasibility of finding S-boxes with the desired properties in the case of AES. In addition, we show preliminary results of side-channel experiments on different versions of "improved" S-boxes.
Keywords: S-box, block ciphers, genetic algorithms, side-channel analysis, transparency order (ID#: 15-5822)
URL: http://doi.acm.org/10.1145/2556315.2556319

 

Tua A. Tamba, M. D. Lemmon. “Forecasting the Resilience of Networked Dynamical Systems under Environmental Perturbation.” HiCoNS '14 Proceedings of the 3rd International Conference on High Confidence Networked Systems, April 2014, Pages 61-62. doi:10.1145/2566468.2576848
Abstract: (not provided)
Keywords: distance-to-bifurcation, resilience, sum-of-square relaxation (ID#: 15-5823)
URL: http://doi.acm.org/10.1145/2566468.2576848

 

Marica Amadeo, Claudia Campolo, Antonella Molinaro. “Multi-source Data Retrieval in IoT via Named Data Networking.” ICN '14 Proceedings of the 1st International Conference on Information-Centric Networking, September 2014, Pages 67-76. doi:10.1145/2660129.2660148
Abstract: The new era of the Internet of Things (IoT) is driving a revolution in computing and communication technologies spanning every aspect of our lives. Thanks to its innovative concepts, such as named content, name-based routing and in-network caching, Named Data Networking (NDN) appears as a key enabling paradigm for IoT. Despite its potential, the support of IoT applications often requires some modifications to the NDN engine for a more efficient and effective exchange of packets. In this paper, we propose a baseline NDN framework to support reliable retrieval of data from different wireless producers that can answer the same Interest packet (e.g., a monitoring application collecting environmental data from sensors in a target area). The solution is evaluated through simulations in ndnSIM, and the results show that, by leveraging the Exclude field concept and ad hoc schemes for Data suppression and collision avoidance, it leads to improved performance in terms of data collection time and network overhead.
Keywords: data retrieval, internet of things, named data networking, naming, transport (ID#: 15-5824)
URL: http://doi.acm.org/10.1145/2660129.2660148

 

Yibo Zhu, Xia Zhou, Zengbin Zhang, Lin Zhou, Amin Vahdat, Ben Y. Zhao, Haitao Zheng. “Cutting the Cord: A Robust Wireless Facilities Network for Data Centers.” MobiCom '14 Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, September 2014, Pages 581-592. doi:10.1145/2639108.2639140
Abstract: Today's network control and management traffic is limited by its reliance on existing data networks. Fate sharing in this context is highly undesirable, since control traffic has very different availability and traffic delivery requirements. In this paper, we explore the feasibility of building a dedicated wireless facilities network for data centers. We propose Angora, a low-latency facilities network using low-cost, 60GHz beamforming radios that provides robust paths decoupled from the wired network, and flexibility to adapt to workloads and network dynamics. We describe our solutions to address challenges in link coordination, link interference and network failures. Our testbed measurements and simulation results show that Angora enables a large number of low-latency control paths to run concurrently, while providing low-latency end-to-end message delivery with high tolerance for radio and rack failures.
Keywords: 60ghz wireless, data centers, wireless beamforming (ID#: 15-5825)
URL: http://doi.acm.org/10.1145/2639108.2639140

 

Anupam Das, Nikita Borisov, Prateek Mittal, Matthew Caesar. “Re3: Relay Reliability Reputation for Anonymity Systems.” ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 63-74. doi:10.1145/2590296.2590338
Abstract: To conceal user identities, Tor, a popular anonymity system, forwards traffic through multiple relays. These relays, however, are often unreliable, leading to a degraded user experience. Worse yet, malicious relays may strategically introduce deliberate failures to increase their chance of compromising anonymity. In this paper we propose a reputation system that profiles the reliability of relays in an anonymity system based on users' past experience. A particular challenge is that an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. This enables an attack where malicious relays can target a set of honest relays in order to drive down their reputation. Our system defends against this attack in two ways. Firstly, we use an adaptive exponentially-weighted moving average (EWMA) that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time. Secondly, we propose a filtering scheme based on the evaluated reputation score that can effectively discard relays involved in such attacks. We use probabilistic analysis, simulations, and real-world experiments to validate our reputation system. We show that the dominant strategy for an attacker is to not perform deliberate failures, but rather maintain a high quality of service. Our reputation system also significantly improves the reliability of path construction even in the absence of attacks. Finally, we show that the benefits of our reputation system can be realized with a moderate number of observations, making it feasible for individual clients to perform their own profiling, rather than relying on an external entity.
Keywords: DOS attack, anonymity, reputation systems, tor network (ID#: 15-5826)
URL: http://doi.acm.org/10.1145/2590296.2590338
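
A minimal sketch of the EWMA-based reputation idea, assuming simple per-circuit success/failure observations: the score is an exponentially weighted moving average that rises slowly on success and drops quickly on failure, and relays below a threshold are filtered out of path selection. The adaptive gain schedule and the failure-attribution machinery of Re3 are not reproduced here; the relay names and parameters are invented.

# EWMA reputation sketch: reward successes slowly, punish failures quickly,
# and filter low-reputation relays out of path selection (illustrative only).
class RelayReputation:
    def __init__(self, alpha_up=0.05, alpha_down=0.3, initial=0.5):
        self.alpha_up = alpha_up        # slow gain on success
        self.alpha_down = alpha_down    # fast decay on failure
        self.score = initial

    def observe(self, success: bool):
        alpha = self.alpha_up if success else self.alpha_down
        target = 1.0 if success else 0.0
        self.score = (1 - alpha) * self.score + alpha * target

def usable(relays, threshold=0.4):
    return [name for name, rep in relays.items() if rep.score >= threshold]

if __name__ == "__main__":
    relays = {"good": RelayReputation(), "flaky": RelayReputation()}
    for i in range(50):
        relays["good"].observe(True)
        relays["flaky"].observe(i % 3 != 0)      # fails every third circuit
    print({n: round(r.score, 2) for n, r in relays.items()})
    print("usable relays:", usable(relays))       # the flaky relay is filtered out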

 

Paulo Casanova, David Garlan, Bradley Schmerl, Rui Abreu. “Diagnosing Unobserved Components in Self-Adaptive Systems.” SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 75-84. doi:10.1145/2593929.2593946
Abstract: Availability is an increasingly important quality for today's software-based systems and it has been successfully addressed by the use of closed-loop control systems in self-adaptive systems. Probes are inserted into a running system to obtain information and the information is fed to a controller that, through provided interfaces, acts on the system to alter its behavior. When a failure is detected, pinpointing the source of the failure is a critical step for a repair action. However, information obtained from a running system is commonly incomplete due to probing costs or unavailability of probes. In this paper we address the problem of fault localization in the presence of incomplete system monitoring. We may not be able to directly observe a component but we may be able to infer its health state. We provide formal criteria to determine when health states of unobservable components can be inferred and establish formal theoretical bounds for accuracy when using any spectrum-based fault localization algorithm.
Keywords: Diagnostics, Monitoring, Self-adaptive systems (ID#: 15-5827)
URL: http://doi.acm.org/10.1145/2593929.2593946
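
The family of spectrum-based fault localization algorithms to which the paper's bounds apply can be illustrated with the widely used Ochiai ranking: components are scored by how strongly their involvement correlates with failing runs. The tiny spectrum below is invented, and the paper's actual contribution, inferring the health of components that cannot be observed directly, is not shown.

# Spectrum-based fault localization sketch using the Ochiai coefficient:
#   score(c) = ef / sqrt((ef + nf) * (ef + ep))
# where ef/ep = failing/passing runs that involved component c,
#       nf    = failing runs that did not involve c.
import math

# (components involved in the run, run failed?)  -- invented example spectrum
runs = [
    ({"A", "B"},      True),
    ({"A", "C"},      True),
    ({"B", "C"},      False),
    ({"A", "B", "C"}, False),
    ({"C"},           False),
]

def ochiai_scores(runs):
    components = set().union(*(involved for involved, _ in runs))
    total_failures = sum(1 for _, failed in runs if failed)
    scores = {}
    for c in components:
        ef = sum(1 for involved, failed in runs if failed and c in involved)
        ep = sum(1 for involved, failed in runs if not failed and c in involved)
        nf = total_failures - ef
        denom = math.sqrt((ef + nf) * (ef + ep))
        scores[c] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for component, score in ochiai_scores(runs):
        print(f"{component}: {score:.2f}")       # component "A" ranks highest here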

 

Michael Backes, Fabian Bendun, Ashish Choudhury, Aniket Kate. “Asynchronous MPC with a Strict Honest Majority Using Non-Equivocation.” PODC '14 Proceedings of the 2014 ACM Symposium on Principles of Distributed Computing, July 2014, Pages 10-19. doi:10.1145/2611462.2611490
Abstract: Multiparty computation (MPC) among n parties can tolerate up to t < n/2 active corruptions in a synchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t < n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation. Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allow a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t < n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t < n/2 by Beerliová-Trubíniová, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of Θ(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliová-Trubíniová et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t < n/2, which is of independent interest to threshold cryptography.
Keywords: asynchrony, multiparty computation (MPC), non-equivocation, reduced assumptions, resiliency, verifiable secret sharing (VSS) (ID#: 15-5828)
URL: http://doi.acm.org/10.1145/2611462.2611490

 

Lannan Luo, Jiang Ming, Dinghao Wu, Peng Liu, Sencun Zhu. “Semantics-Based Obfuscation-Resilient Binary Code Similarity Comparison with Applications to Software Plagiarism Detection.” FSE 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 389-400. doi:10.1145/2635868.2635900
Abstract: Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.
Keywords: Software plagiarism detection, binary code similarity comparison, obfuscation, symbolic execution, theorem proving (ID#: 15-5829)
URL: http://doi.acm.org/10.1145/2635868.2635900
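
The core comparison described above, a longest common subsequence over basic blocks with a semantic rather than syntactic equality test, can be sketched as a standard LCS dynamic program parameterized by an equivalence predicate. The blocks_equivalent stub below merely normalizes mnemonics and stands in for the paper's theorem-prover check of symbolic input-output formulas.

# LCS over basic blocks with a pluggable equivalence check (sketch).
# In the paper, equivalence of two blocks is decided by a theorem prover on
# symbolic input-output formulas; here a trivial normalizing stub stands in.
def blocks_equivalent(block_a, block_b):
    normalize = lambda block: [insn.split()[0].lower() for insn in block]
    return normalize(block_a) == normalize(block_b)

def lcs_length(path_a, path_b, equivalent=blocks_equivalent):
    n, m = len(path_a), len(path_b)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if equivalent(path_a[i - 1], path_b[j - 1]):
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]

def path_similarity(path_a, path_b):
    return lcs_length(path_a, path_b) / max(len(path_a), len(path_b), 1)

if __name__ == "__main__":
    original  = [["mov eax, 1"], ["add eax, ebx"], ["ret"]]
    suspected = [["MOV ecx, 1"], ["nop"], ["ADD ecx, edx"], ["RET"]]
    print(path_similarity(original, suspected))    # -> 0.75 with the stub check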

 

Divesh Aggarwal, Yevgeniy Dodis, Shachar Lovett. “Non-malleable Codes from Additive Combinatorics.” STOC '14 Proceedings of the 46th Annual ACM Symposium on Theory of Computing, May 2014, Pages 774-783. doi:10.1145/2591796.2591804
Abstract: Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error correction (and even error detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so-called split-state model. Here the message m is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with L and R individually. Split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model. Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were constructed either (1) in the random oracle model [16], or (2) relying on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [26], or (3) encoding only 1-bit messages [14]. As our main result, we build the first efficient, multi-bit, information-theoretically-secure non-malleable code in the split-state model. The heart of our construction uses the following new property of the inner-product function ⟨L, R⟩ over the vector space F_p^n (for a prime p and large enough dimension n): if L and R are uniformly random over F_p^n, and f, g : F_p^n → F_p^n are two arbitrary functions on L and R, then the joint distribution (⟨L, R⟩, ⟨f(L), g(R)⟩) is "close" to a convex combination of "affine distributions" {(U, aU + b) | a, b ∈ F_p}, where U is uniformly random in F_p. In turn, the proof of this surprising property of the inner-product function critically relies on some results from additive combinatorics, including the so-called Quasi-polynomial Freiman-Ruzsa Theorem, which was recently established by Sanders [29] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture.
Keywords: (not provided) (ID#: 15-5830)
URL: http://doi.acm.org/10.1145/2591796.2591804


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.