Hard Problems: Scalability and Composability (2014 Year in Review)

SoS Newsletter - Advanced Book Block


 

 

Scalability and Composability is a hard problem in the Science of Security. A survey of the IEEE and ACM Digital Libraries found a long list of scholarly articles about research into scalability and composability published in 2014. This series of bibliographical citations covers those published by IEEE or ACM; a separate listing of works on these topics published elsewhere appears under the heading "Citations for Hard Problems."


 

Yves Lhuillier, Maroun Ojail, Alexandre Guerre, Jean-Marc Philippe, Karim Ben Chehida, Farhat Thabet, Caaliph Andriamisaina, Chafic Jaber, Raphaël David; “HARS: A Hardware-Assisted Runtime Software For Embedded Many-Core Architectures,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Design Challenges for Many-Core Processors, Special Section on ESTIMedia'13 and Regular Papers, Volume 13 Issue 3s, March 2014, Article No. 102. Doi: 10.1145/2517311 The current trend in embedded computing consists in increasing the number of processing resources on a chip. Following this paradigm, cluster-based many-core accelerators with a shared hierarchical memory have emerged. Handling synchronizations on these architectures is critical, since the speed-ups of parallel implementations of embedded applications strongly depend on the ability to exploit the largest possible number of cores while limiting task-management overhead. This article presents HARS (Hardware-Assisted Runtime Software), the combination of a low-overhead complete runtime software and a flexible hardware accelerator for synchronizations. Experiments on a multicore test chip showed that the hardware accelerator for synchronizations has less than 1% area overhead compared to a cluster of the chip while reducing synchronization latencies (by up to 2.8 times compared to a test-and-set implementation) and contentions. The runtime software part offers basic features like memory management but also optimized execution engines that allow the easy and efficient extraction of parallelism in applications with multiple programming models. By using the hardware acceleration as well as a very low-overhead task-scheduling software technique, we show that HARS outperforms an optimized state-of-the-art task scheduler by 13% for the execution of a parallel application.
Keywords: Embedded runtime software, hardware acceleration, many-core architectures, multicore architectures (ID#: 15-4378)
URL: http://doi.acm.org/10.1145/2517311
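The test-and-set baseline that HARS is measured against can be sketched in software. This is an illustrative model only: the class name, thread counts, and the inner Lock standing in for hardware atomicity are all assumptions of the sketch, not the paper's design.

```python
import threading

class TestAndSetLock:
    """Software model of a test-and-set spinlock, the baseline the
    HARS hardware accelerator is compared against. Python has no
    user-level atomic TAS instruction, so an inner Lock emulates the
    atomicity a single hardware read-modify-write would provide."""

    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # emulates hardware atomicity

    def _test_and_set(self):
        with self._atomic:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        # Spin until the previous value was False (lock was free);
        # this busy-wait contention is what hardware-assisted
        # synchronization aims to reduce.
        while self._test_and_set():
            pass

    def release(self):
        self._flag = False

counter = 0
lock = TestAndSetLock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200
```

On real hardware the spin loop would execute a single atomic instruction; the paper's point is that a small hardware unit can cut exactly this kind of busy-wait latency and contention.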

 

Rupak Majumdar, Sai Deep Tetali, Zilong Wang; “Kuai: A Model Checker for Software-defined Networks,” FMCAD '14 Proceedings of the 14th Conference on Formal Methods in Computer-Aided Design, October 2014, Pages 163-170. Doi: (none provided) ISBN: 978-0-9835678-4-4 In software-defined networking (SDN), a software controller manages a distributed collection of switches by installing and uninstalling packet-forwarding rules in the switches. SDNs allow flexible implementations for expressive and sophisticated network management policies.  We consider the problem of verifying that an SDN satisfies a given safety property. We describe Kuai, a distributed enumerative model checker for SDNs. Kuai takes as input a controller implementation written in Murphi, a description of the network topology (switches and connections), and a safety property, and performs a distributed enumerative reachability analysis on a cluster of machines. Kuai uses a set of partial order reduction techniques specific to the SDN domain that help reduce the state space dramatically. In addition, Kuai performs an automatic abstraction to handle unboundedly many packets traversing the network at a given time and unboundedly many control messages between the controller and the switches. We demonstrate the scalability and coverage of Kuai on standard SDN benchmarks. We show that our set of partial order reduction techniques significantly reduces the state spaces of these benchmarks by many orders of magnitude. In addition, Kuai exploits large-scale distribution to quickly search the reduced state space.
Keywords:  (not provided) (ID#: 15-4379)
URL: http://dl.acm.org/citation.cfm?id=2682923.2682953
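The enumerative reachability analysis at the core of a checker like Kuai can be sketched on a single machine. The function, the toy transition relation, and the omission of partial order reduction, abstraction, and distribution are all simplifications of this sketch:

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Enumerative reachability: explore every state reachable from
    the initial state and return a counterexample path to a bad state,
    or None if all reachable states satisfy the safety property.
    A distributed checker partitions this frontier across machines."""
    seen = {initial}
    frontier = deque([(initial, [initial])])
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path  # safety violated: witness path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # all reachable states are safe

# Toy system: a counter that advances by 2 modulo 10.
step = lambda s: [(s + 2) % 10]
print(check_safety(0, step, lambda s: s == 5))  # None: 5 is unreachable
print(check_safety(0, step, lambda s: s == 4))  # [0, 2, 4]
```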

 

Mingxin Zhang, Alexander Verbraeck; “A Composable PRS-Based Agent Meta-Model For Multi-Agent Simulation Using the DEVS Framework,” ADS '14 Proceedings of the 2014 Symposium on Agent Directed Simulation, April 2014, Article No. 1.  Doi: (none provided) This paper presents a composable cognitive agent meta-model for multi-agent simulation based on the DEVS (Discrete Event System Specification) framework. We describe an attempt to compose a PRS-based cognitive agent by merely combining "plug and play" DEVS components, show how this DEVS-based cognitive agent meta-model is extensible to serve as a higher-level component for M&S of multi-agent systems, and how the agent meta-model components are reusable to ease cognitive agent modelling development. In addition to an overview of our agent meta-model, we also describe the components of the model specification and services in detail. To test the feasibility of our design, we constructed a simulation based on a Rock-Paper-Scissors game scenario. We also provide comparisons between this agent meta-model and other cognitive agent models. Our agent meta-model is novel in that both the agent and its components are abstracted using the DEVS formalism. Because different implementations of agent model components are based on the same meta-model components, all developed agent model components can be reused in the development of other agent models, which increases the composability of the agent model; moreover, the whole cognitive agent model can be treated as a coupled model in the DEVS model hierarchy, which supports multi-hierarchy modelling.
Keywords: DEVS, PRS, agent model, cognitive architecture, composability (ID#: 15-4380)
URL: http://dl.acm.org/citation.cfm?id=2665049.2665050

 

Miloš Panić, Eduardo Quiñones, Pavel G. Zaykov, Carles Hernandez, Jaume Abella, Francisco J. Cazorla; “Parallel Many-Core Avionics Systems,” EMSOFT '14 Proceedings of the 14th International Conference on Embedded Software, October 2014, Article No. 26.  Doi: 10.1145/2656045.2656063 Integrated Modular Avionics (IMA) enables incremental qualification by encapsulating avionics applications into software partitions (SWPs), as defined by the ARINC 653 standard. SWPs, when running on top of single-core processors, provide robust time partitioning as a means to isolate SWPs timing behavior from each other. However, when moving towards parallel execution in many-core processors, the simultaneous accesses to shared hardware and software resources influence the timing behavior of SWPs, defying the purpose of time partitioning to provide isolation among applications. In this paper, we extend the concept of SWP by introducing parallel software partitions (pSWP) specification that describes the behavior of SWPs required when running in a many-core to enable incremental qualification. pSWP are supported by a new hardware feature called guaranteed resource partition (GRP) that defines an execution environment in which SWPs run and that controls interferences in the accesses to shared hardware resources among SWPs such that time composability can be guaranteed.
Keywords:  (not provided) (ID#: 15-4381)
URL: http://doi.acm.org/10.1145/2656045.2656063

 

Andreas Weichslgartner, Deepak Gangadharan, Stefan Wildermann, Michael Glaß, Jürgen Teich; “DAARM: Design-Time Application Analysis And Run-Time Mapping For Predictable Execution In Many-Core Systems,” CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 34. Doi: 10.1145/2656075.2656083  Future many-core systems are envisaged to support the concurrent execution of varying mixes of different applications. Because of the vast number of binding options for such mixes on heterogeneous resources, enabling predictable application execution is far from trivial. Hybrid application mapping is an efficient way of achieving run-time predictability by combining design-time analysis of application mappings with run-time management. Existing hybrid mapping strategies focus on computation resources and either ignore communication details or make significantly simplifying assumptions like unlimited bandwidth or exclusive usage. But, actual many-core systems consist of constrained and shared computation and communication resources where the run-time decision of whether a feasible application binding on a set of preoccupied resources exists or not is an NP-complete problem. As a remedy, we present a novel hybrid application mapping approach that considers constrained shared communication and computation resources. Here, (a) a design space exploration coupled with a formal performance analysis delivers several resource reservation configurations with verified real-time guarantees for each individual application. The configurations are then transformed to (b) a novel efficient intermediate representation that is passed to the run-time management where we (c) formulate run-time resource reservation and application binding as a constraint satisfaction problem and present an adequate solving mechanism. 
Our experimental evaluation shows that existing approaches may produce infeasible outcomes and are thus not applicable for predictable application execution, while the proposed approach enables predictable and efficient run-time management of dynamic application mixes.
Keywords: dse, hybrid mapping, many-core, networks-on-chip, predictability (ID#: 15-4382)
URL: http://doi.acm.org/10.1145/2656075.2656083

 

Hanane Becha, Daniel Amyot; “Consumer-Centric Non-Functional Properties Of SOA-Based Services,” PESOS 2014 Proceedings of the 6th International Workshop on Principles of Engineering Service-Oriented and Cloud Systems, May 2014, Pages 18-27. Doi:  10.1145/2593793.2593796 An effective SOA service development approach requires the identification, specification, implementation, aggregation, management and monitoring of service-related Non-Functional Properties (NFPs). In practice, however, NFPs are often ignored or handled only partially, in ad hoc, proprietary ways. In this paper, we focus on providing formal NFP descriptions of SOA-based services to be published along with their functional descriptions. Such descriptions empower service consumers to determine whether a given service is the most appropriate one for their needs and enable them to predict the NFPs of composed services based on the NFPs of the underlying services being composed. Our contributions are an externally validated collection of NFPs with a concrete syntax and composition algorithms ready to be used for defining, selecting and composing NFP-driven services.
Keywords: Non-Functional Properties (NFPs), QoS, SOA, Service composition, Service description (ID#: 15-4383)
URL: http://doi.acm.org/10.1145/2593793.2593796

 

Jan Nowotsch, Michael Paulitsch, Arne Henrichsen, Werner Pongratz, Andreas Schacht; “Monitoring and WCET Analysis In COTS Multi-Core-SoC-Based Mixed-Criticality Systems,” DATE '14 Proceedings of the conference on Design, Automation & Test in Europe,  March 2014, Article No. 67.  Doi:  (none provided) The performance and power efficiency of multi-core processors are attractive features for safety-critical applications, for example in avionics. But the inherent use of shared resources complicates timing analysability. In this paper we discuss a novel approach to compute the Worst-Case Execution Time (WCET) of multiple hard real-time applications scheduled on a Commercial Off-The-Shelf (COTS) multi-core processor. The analysis is closely coupled with mechanisms for temporal partitioning as, for instance, required in ARINC 653-based systems. Based on a discussion of the challenges for temporal partitioning and timing analysis in multi-core systems, we deduce a generic architecture model. Considering the requirements for re-usability and incremental development and certification, we use this model to describe our integrated analysis approach.
Keywords: WCET, multi-core, safety-critical real-time systems, temporal partitioning (ID#: 15-4384)
URL: http://dl.acm.org/citation.cfm?id=2616606.2616689

 

Gervin Thomas, Ahmed Elhossini, Ben Juurlink; “A Generic Implementation of a Quantified Predictor on FPGAs,” GLSVLSI '14 Proceedings of the 24th Edition Of The Great Lakes Symposium on VLSI, May 2014, Pages 255-260.  Doi: 10.1145/2591513.2591517 Predictors are used in many fields of computer architecture to enhance performance. With good estimations of future system behaviour, policies can be developed to improve system performance or reduce power consumption. These policies become more effective if the predictors are implemented in hardware and can provide quantified forecasts, not only binary ones. In this paper, we present and evaluate a generic predictor implemented in VHDL running on an FPGA which produces quantified forecasts. Moreover, a complete scalability analysis is presented which shows that our implementation has a maximum device utilization of less than 5%. Furthermore, we analyse the power consumption of the predictor running on an FPGA. Additionally, we show that this implementation can be clocked at over 210 MHz. Finally, we evaluate a power-saving policy based on our hardware predictor. Based on predicted idle periods, this power-saving policy uses power-saving modes and is able to reduce memory power consumption by 14.3%.
Keywords: RTL, VHDL, power consumption, predictor (ID#: 15-4385)
URL: http://doi.acm.org/10.1145/2591513.2591517
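The abstract does not specify the predictor's internals, so the following is only a generic software analogue of a quantified (magnitude-producing, rather than binary) predictor: an exponentially weighted moving average over observed idle-period lengths. The class and its parameters are illustrative assumptions, not the paper's RTL design.

```python
class EWMAPredictor:
    """Quantified predictor sketch: forecasts the magnitude of the
    next observation (e.g., an idle-period length) rather than a
    binary taken/not-taken outcome."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = 0.0

    def predict(self):
        return self.estimate

    def update(self, observed):
        # Exponentially weighted moving average of observations.
        self.estimate = (self.alpha * observed
                         + (1 - self.alpha) * self.estimate)

p = EWMAPredictor(alpha=0.5)
for idle_cycles in [100, 100, 100]:
    p.update(idle_cycles)
print(round(p.predict(), 1))  # 87.5
```

A power-saving policy of the kind evaluated in the paper would compare such a forecast against the break-even time of a low-power mode before entering it.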

 

Michael Eichberg, Ben Hermann; “A Software Product Line For Static Analyses: The OPAL Framework,” SOAP '14 Proceedings of the 3rd ACM SIGPLAN International Workshop on the State of the Art in Java Program Analysis, June 2014, Pages 1-6. Doi: 10.1145/2614628.2614630  Implementations of static analyses are usually tailored toward a single goal to be efficient, hampering reusability and adaptability of the components of an analysis. To solve these issues, we propose to implement static analyses as highly-configurable software product lines (SPLs). Furthermore, we also discuss an implementation of an SPL for static analyses -- called OPAL -- that uses advanced language features offered by the Scala programming language to get an easily adaptable and (type-)safe software product line.  OPAL is a general purpose library for static analysis of Java Bytecode that is already successfully used. We present OPAL and show how a design based on software product line engineering benefits the implementation of static analyses with the framework.
Keywords: abstract interpretation, design, program analysis, software product line engineering, static analysis (ID#: 15-4386)
URL: http://doi.acm.org/10.1145/2614628.2614630

 

Christoph Bösch, Pieter Hartel, Willem Jonker, Andreas Peter; “A Survey of Provably Secure Searchable Encryption,” ACM Computing Surveys (CSUR), Volume 47 Issue 2, November 2014, Article No. 18. Doi:  10.1145/2636328  We survey the notion of provably secure searchable encryption (SE) by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS). Since the pioneering work of Song, Wagner, and Perrig (IEEE S&P '00), the field of provably secure SE has expanded to the point where we felt that taking stock would provide benefit to the community.  The survey has been written primarily for the nonspecialist who has a basic information security background. Thus, we sacrifice full details and proofs of individual constructions in favor of an overview of the underlying key techniques. We categorize and compare the different SE schemes in terms of their security, efficiency, and functionality. For the experienced researcher, we point out connections between the many approaches to SE and identify open research problems. Two major conclusions can be drawn from our work. While the so-called IND-CKA2 security notion has become prevalent in the literature and efficient (sublinear) SE schemes meeting this notion exist in the symmetric setting, achieving this strong form of security efficiently in the asymmetric setting remains an open problem. We observe that in multirecipient SE schemes, regardless of their efficiency drawbacks, there is a noticeable lack of query expressiveness that hinders deployment in practice.
Keywords: Secure data management, keyword search on encrypted data, privacy, provably secure, searchable encryption (ID#: 15-4387)
URL: http://doi.acm.org/10.1145/2636328
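A toy index in the spirit of the symmetric setting the survey covers may help fix ideas. This sketch uses deterministic HMAC keyword tokens; it is illustrative only (the key, names, and documents are assumptions), and its leakage of search patterns is among the trade-offs that formal security notions such as IND-CKA2 are designed to capture.

```python
import hmac
import hashlib

# Toy searchable symmetric encryption index: the server stores only
# keyed keyword tokens, and the client reveals a per-keyword
# "trapdoor" to search without revealing the keyword itself.

KEY = b"client-secret-key"  # held by the client, never by the server

def token(keyword: str) -> str:
    """Deterministic keyed token (trapdoor) for a keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(docs: dict) -> dict:
    """Client side: map keyword tokens to document ids."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(w), []).append(doc_id)
    return index

def search(index: dict, trapdoor: str) -> list:
    """Server side: look up by trapdoor without learning the keyword."""
    return index.get(trapdoor, [])

index = build_index({"d1": ["cloud", "crypto"], "d2": ["crypto"]})
print(search(index, token("crypto")))  # ['d1', 'd2']
```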

 

Yanqi Zhou, David Wentzlaff; “The Sharing Architecture: Sub-Core Configurability For IaaS Clouds,” ASPLOS '14 Proceedings of the 19th International Conference On Architectural Support For Programming Languages And Operating Systems, February 2014, pages 559-574. Doi: 10.1145/2644865.2541950 Businesses and academics are increasingly turning to Infrastructure as a Service (IaaS) clouds such as Amazon's Elastic Compute Cloud (EC2) to fulfill their computing needs. Unfortunately, current IaaS systems provide a severely restricted palette of rentable computing options which do not optimally fit the workloads that they are executing. We address this challenge by proposing and evaluating a manycore architecture, called the Sharing Architecture, specifically optimized for IaaS systems by being reconfigurable on a sub-core basis. The Sharing Architecture enables better matching of workload to micro-architecture resources by replacing static cores with Virtual Cores which can be dynamically reconfigured to have different numbers of ALUs and amount of Cache. This reconfigurability enables many of the same benefits of heterogeneous multicores, but in a homogeneous fabric, and enables the reuse and resale of resources on a per ALU or per KB of cache basis. The Sharing Architecture leverages Distributed ILP techniques, but is designed in a way to be independent of recompilation. In addition, we introduce an economic model which is enabled by the Sharing Architecture and show how different users who have varying needs can be better served by such a flexible architecture. We evaluate the Sharing Architecture across a benchmark suite of Apache, SPECint, and parts of PARSEC, and find that it can achieve up to a 5x more economically efficient market when compared to static architecture multicores. We implemented the Sharing Architecture in Verilog and present area overhead results.
Keywords: cache, cache banks, infrastructure as a service (iaas), market efficiency, slice, utility, virtual core (vcore), virtual machine (vm) (ID#: 15-4388)
URL: http://doi.acm.org/10.1145/2644865.2541950

 

Clemens Dubslaff, Sascha Klüppelholz, Christel Baier; “Probabilistic Model Checking For Energy Analysis In Software Product Lines,” MODULARITY '14 Proceedings of the 13th International Conference on Modularity, April 2014, Pages 169-180. Doi:  10.1145/2577080.2577095 In a software product line (SPL), a collection of software products is defined by their commonalities in terms of features rather than explicitly specifying all products one-by-one. Several verification techniques have been adapted to establish temporal properties of SPLs. Symbolic and family-based model checking have proven successful in tackling the combinatorial blow-up that arises when reasoning about several feature combinations. However, most formal verification approaches for SPLs presented in the literature focus on static SPLs, where the features of a product are fixed and cannot be changed during runtime. This is in contrast to dynamic SPLs, which allow the feature combinations of a product to be adapted dynamically after deployment.  The main contribution of the paper is a compositional modeling framework for dynamic SPLs, which supports probabilistic and nondeterministic choices and allows for quantitative analysis. We specify the feature changes during runtime within an automata-based coordination component, enabling reasoning about strategies for triggering dynamic feature changes that optimize various quantitative objectives, e.g., energy or monetary costs and reliability. For our framework there is a natural and conceptually simple translation into the input language of the prominent probabilistic model checker PRISM. This facilitates the application of PRISM's powerful symbolic engine to the operational behavior of dynamic SPLs and their family-based analysis against various quantitative queries. We demonstrate the feasibility of our approach with a case study of an energy-aware bonding network device.
Keywords: dynamic features, energy analysis, probabilistic model checking, software product lines (ID#: 15-4389)
URL: http://doi.acm.org/10.1145/2577080.2577095 

 

Zaur Molotnikov, Markus Völter, Daniel Ratiu; “Automated Domain-specific C Verification with mbeddr,” ASE '14 Proceedings of the 29th ACM/IEEE International Conference On Automated Software Engineering, September 2014, Pages 539-550. Doi:  10.1145/2642937.2642938 When verifying C code, two major problems must be addressed. One is the specification of the verified system's properties; the other is the construction of the verification environment. Neither C itself, nor existing C verification tools, offer the means to efficiently specify application domain-level properties and environments for verification. These two shortcomings hamper the usability of C verification, and limit its adoption in practice. In this paper we introduce an approach that addresses both problems and results in user-friendly and practically usable C verification. The novelty of the approach is the combination of domain-specific language engineering and C verification. We apply the approach in the domain of state-based software, using mbeddr and CBMC. We validate the implementation with an example from the Pacemaker Challenge, developing a functionally verified, lightweight, and deployable cardiac pulse generator. The approach itself is domain-independent.
Keywords: cbmc, domain-specific languages, mbeddr, verification (ID#: 15-4390)
URL: http://doi.acm.org/10.1145/2642937.2642938 

 

Jun Zhang, Graham Cormode, Cecilia M. Procopiuc, Divesh Srivastava, Xiaokui Xiao; “PrivBayes: Private Data Release via Bayesian Networks,” SIGMOD '14 Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, June 2014, Pages 1423-1434.  Doi: 10.1145/2588555.2588573 Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art goal for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless.  To address the deficiency of the existing methods, this paper presents PrivBayes, a differentially private method for releasing high-dimensional data. Given a dataset D, PrivBayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PrivBayes injects noise into each marginal in P to ensure differential privacy, and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, PrivBayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, PrivBayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. 
Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate PrivBayes on real data, and demonstrate that it significantly outperforms existing solutions in terms of accuracy.
Keywords: bayesian network, differential privacy, synthetic data generation (ID#: 15-4391)
URL: http://doi.acm.org/10.1145/2588555.2588573
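The marginal-perturbation step that lets PrivBayes sidestep the curse of dimensionality can be sketched as follows. This shows only one noisy low-dimensional marginal and the sampling step; the Bayesian-network construction is omitted, and all names and the toy data are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(0)

def noisy_marginal(rows, epsilon):
    """Laplace-noised marginal over a small set of attributes: the
    low-dimensional primitive PrivBayes perturbs instead of the full
    high-dimensional joint table."""
    counts = Counter(rows)
    noisy = {}
    for value, c in counts.items():
        # Laplace(scale=1/epsilon) noise as the difference of two
        # exponential draws with rate epsilon.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        noisy[value] = max(c + noise, 0.0)
    total = sum(noisy.values()) or 1.0
    return {v: w / total for v, w in noisy.items()}

def sample_synthetic(marginal, n):
    """Sample synthetic rows from the noisy marginal distribution."""
    values = list(marginal)
    weights = [marginal[v] for v in values]
    return random.choices(values, weights=weights, k=n)

# Toy 2-attribute dataset; a real PrivBayes run chains many such
# marginals along the edges of a learned Bayesian network.
rows = [(0, 1), (0, 1), (1, 0), (1, 1)]
marginal = noisy_marginal(rows, epsilon=1.0)
synthetic = sample_synthetic(marginal, n=100)
```

Because the noise is injected into small marginals rather than the full table, the signal-to-noise ratio stays usable even when the original data has many attributes.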

 

Zhen Ye, Athman Bouguettaya, Xiaofang Zhou; “Economic Model-Driven Cloud Service Composition,” ACM Transactions on Internet Technology (TOIT) - Special Issue on Pricing and Incentives in Networks and Systems and Regular Papers, Volume 14 Issue 2-3, October 2014, Article No. 20.  Doi: 10.1145/2651420 This article considers cloud service composition from a decision analysis perspective. Traditional QoS-aware composition techniques usually consider the qualities available at the time of the composition because compositions are usually immediately consumed. This is fundamentally different in the cloud environment, where a cloud service composition typically lasts for a relatively long period of time. The two most important drivers when composing cloud services are the long-term nature of the composition and the economic motivation for outsourcing tasks to the cloud. We propose an economic model, which we represent as a Bayesian network, to select and compose cloud services. We then leverage influence diagrams to model the cloud service composition. We further extend the traditional influence diagram problem to a hybrid one and adopt an extended Shenoy-Shafer architecture to solve such hybrid influence diagrams that include deterministic chance nodes. In addition, analytical and simulation results are presented to show the performance of the proposed composition approach.
Keywords: Cloud service, economic model, service composition  (ID#: 15-4392)
URL: http://doi.acm.org/10.1145/2651420 

 

Manohar Jonnalagedda, Thierry Coppey, Sandro Stucki, Tiark Rompf, Martin Odersky; “Staged Parser Combinators For Efficient Data Processing,” OOPSLA '14 Proceedings of the 2014 ACM International Conference on Object Oriented Programming Systems Languages & Applications, October 2014, Pages 637-653. Doi:  10.1145/2660193.2660241  Parsers are ubiquitous in computing, and many applications depend on their performance for decoding data efficiently. Parser combinators are an intuitive tool for writing parsers: tight integration with the host language enables grammar specifications to be interleaved with processing of parse results. Unfortunately, parser combinators are typically slow due to the high overhead of the host language abstraction mechanisms that enable composition.  We present a technique for eliminating such overhead. We use staging, a form of runtime code generation, to dissociate input parsing from parser composition, and eliminate intermediate data structures and computations associated with parser composition at staging time. A key challenge is to maintain support for input dependent grammars, which have no clear stage distinction.  Our approach applies to top-down recursive-descent parsers as well as bottom-up non-deterministic parsers with key applications in dynamic programming on sequences, where we auto-generate code for parallel hardware. We achieve performance comparable to specialized, hand-written parsers.
Keywords: algebraic dynamic programming, multi-stage programming, parser combinators (ID#: 15-4393)
URL: http://doi.acm.org/10.1145/2660193.2660241
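A minimal, unstaged parser-combinator library illustrates the abstraction overhead the paper targets: every `seq` and `alt` call allocates closures that are re-traversed on each parse, which staging compiles away into specialized code. The combinators below are a generic sketch, not the paper's Scala implementation.

```python
# Each parser is a function: input -> (result, remaining_input) or None.

def char(c):
    """Parser accepting a single expected character."""
    def p(s):
        return (c, s[1:]) if s.startswith(c) else None
    return p

def seq(p1, p2):
    """Sequencing: run p1, then p2 on the leftover input."""
    def p(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return p

def alt(p1, p2):
    """Alternation: try p1, fall back to p2 on failure."""
    return lambda s: p1(s) or p2(s)

# Grammar: 'a' followed by ('b' | 'c').
ab_or_ac = seq(char("a"), alt(char("b"), char("c")))
print(ab_or_ac("ac!"))  # (('a', 'c'), '!')
```

Each layer of `seq`/`alt` here adds a Python function call and a tuple allocation per input position; staging removes exactly this interpretive layer at code-generation time.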

 

Sebastian Erdweg, Vlad Vergu, Mira Mezini, Eelco Visser; “Modular Specification And Dynamic Enforcement Of Syntactic Language Constraints When Generating Code,” MODULARITY '14 Proceedings of the 13th International Conference on Modularity, April 2014, Pages 241-252. Doi:  10.1145/2577080.2577089  A key problem in metaprogramming and specifically in generative programming is to guarantee that generated code is well-formed with respect to the context-free and context-sensitive constraints of the target language. We propose typesmart constructors as a dynamic approach to enforcing the well-formedness of generated code. A typesmart constructor is a function that is used in place of a regular constructor to create values, but it may reject the creation of values if the given data violates some language-specific constraint. While typesmart constructors can be implemented individually, we demonstrate how to derive them automatically from a grammar, so that the grammar remains the sole specification of a language's syntax and is not duplicated. We have integrated support for typesmart constructors into the run-time system of Stratego to enforce usage of typesmart constructors implicitly whenever a regular constructor is called. We evaluate the applicability, performance, and usefulness of typesmart constructors for syntactic constraints in a compiler for MiniJava developed with Spoofax and in various language extensions of Java and Haskell implemented with SugarJ and SugarHaskell.
Keywords: abstract syntax tree, dynamic analysis, generative programming, program transformation, spoofax, sugarj, typesmart constructors, well-formedness checks (ID#: 15-4394)
URL: http://doi.acm.org/10.1145/2577080.2577089
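The idea of a typesmart constructor can be illustrated with a small AST: a function used in place of the plain constructor that rejects ill-formed values at creation time. The node types and the checked constraint below are illustrative assumptions, not the Stratego/Spoofax machinery described in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    pass

@dataclass(frozen=True)
class IntLit(Expr):
    value: int

@dataclass(frozen=True)
class Add(Expr):
    left: Expr
    right: Expr

def smart_add(left, right):
    """Typesmart constructor for Add: reject ill-formed children at
    creation time, a check the plain Add constructor does not make."""
    if not isinstance(left, Expr) or not isinstance(right, Expr):
        raise TypeError("Add requires Expr operands")
    return Add(left, right)

ok = smart_add(IntLit(1), IntLit(2))   # well-formed: accepted
# smart_add(IntLit(1), "2")            # ill-formed: raises TypeError
```

In the paper's setting such checks are derived automatically from the grammar and enforced implicitly whenever a regular constructor is called, so the grammar remains the sole syntax specification.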

 

Mahdi Zamani, Mahnush Movahedi; “Secure Location Sharing,” FOMC '14 Proceedings of the 10th ACM International Workshop On Foundations Of Mobile Computing; August 2014, Pages 1-10. Doi:  10.1145/2634274.2634281 In the last decade, the number of location-aware mobile devices has mushroomed. As location-based services grow quickly, they raise many privacy questions and challenges. For example, who owns the location data, and for what purpose is the data used? To answer these questions, we need new tools for location privacy. In this paper, we focus on the problem of secure location sharing, where a group of n clients want to collaborate with each other to anonymously share their location data with a location database server and execute queries based on them. To be realistic, we assume up to a certain fraction of the clients are controlled arbitrarily by an active and computationally unbounded adversary. A relaxed version of this problem has already been studied in the literature assuming either a trusted third party or a weaker adversarial model. We instead propose a scalable, fully-decentralized protocol for secure location sharing that tolerates up to n/6 statically-chosen malicious clients and does not require any trusted third party. We show that, unlike most other location-based services, our protocol is secure against traffic-analysis attacks. We also show that our protocol requires each client to send a polylogarithmic number of bits and compute a polylogarithmic number of operations (with respect to n) to query a point of interest based on its location.
Keywords: distributed algorithms, fault-tolerance, location-based services (ID#: 15-4395)
URL: http://doi.acm.org/10.1145/2634274.2634281

 

Jamie Garside, Neil C. Audsley; “WCET Preserving Hardware Prefetch for Many-Core Real-Time Systems,” RTNS '14 Proceedings of the 22nd International Conference on Real-Time Networks and Systems, October 2014, Page 193.  Doi: 10.1145/2659787.2659824  There is an obvious bus bottleneck when multiple CPUs within a many-core architecture share the same physical off-chip memory (e.g., DDR/DRAM). Worst-Case Execution Time (WCET) analysis of application tasks will inevitably include the effects of sharing the memory bus amongst CPUs; likewise, average-case execution times will include the effects of individual memory accesses being slowed by interference from other CPUs' memory requests. One approach for mitigating this is to use a hardware prefetcher to move instructions and data from memory to the CPU cache before a cache miss instigates a memory request. However, in a real-time system there is a trade-off between issuing prefetch requests to off-chip memory, which reduces the bandwidth available for serving CPU cache misses, and the gain from the fact that some CPU cache misses are avoided by the prefetch, so the memory system sees fewer demand requests.  In this paper we propose, analyse and show the implementation of a hardware prefetcher designed so that the WCET of application tasks is not affected by the run-time behaviour of the prefetcher, i.e., it utilises spare time within the memory system to issue prefetch requests and forward them to the appropriate CPU. As well as not affecting WCET, the prefetcher enables a significant reduction in the average-case execution times of application tasks, showing the efficacy of the approach.
Keywords: (not provided) (ID#: 15-4396)
URL: http://doi.acm.org/10.1145/2659787.2659824
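The WCET-preserving idea is that prefetches may only use slack in the memory system, never a slot a demand miss could have used. A minimal sketch of such an arbitration policy (an illustration of the principle, not the paper's hardware design):

```python
from collections import deque

class SlackPrefetchArbiter:
    """Demand misses always win arbitration; a prefetch is issued
    only when the demand queue is empty, so prefetching never delays
    a demand access (the WCET-preserving property)."""
    def __init__(self):
        self.demand = deque()    # outstanding cache-miss requests
        self.prefetch = deque()  # speculative prefetch requests

    def next_request(self):
        if self.demand:
            return ('demand', self.demand.popleft())
        if self.prefetch:
            return ('prefetch', self.prefetch.popleft())
        return None  # memory system idle this cycle

arb = SlackPrefetchArbiter()
arb.prefetch.append(0x2000)
arb.demand.append(0x1000)
assert arb.next_request() == ('demand', 0x1000)    # demand served first
assert arb.next_request() == ('prefetch', 0x2000)  # spare slot used
```

Because WCET analysis already assumes every access is a demand miss, a policy that only fills otherwise-idle slots leaves the analysed worst case untouched while improving the average case.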

 

Edward Z. Yang, David Mazières; “Dynamic Space Limits for Haskell,” PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 588-598. Doi: 10.1145/2594291.2594341 We describe the semantics and implementation of a space limits system for Haskell, which allows programmers to create resource containers that enforce bounded resident memory usage at runtime. Our system is distinguished by a clear allocator-pays semantics drawn from previous experience with profiling in Haskell and an implementation strategy which uses a block-structured heap to organize containers, allowing us to enforce limits with high accuracy. To deal with the problem of deallocating data in a garbage collected heap, we propose a novel taint-based mechanism that unifies the existing practices of revocable pointers and killing threads in order to reclaim memory. Our system is implemented in GHC, a production-strength compiler for Haskell.
Keywords: Haskell, fault tolerance, profiling, resource limits (ID#: 15-4397)
URL: http://doi.acm.org/10.1145/2594291.2594341
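The core "allocator-pays" accounting can be illustrated independently of GHC's block-structured heap: every allocation is charged to the container of the allocating task, and exceeding the bound raises an exception in that task. A toy sketch (the class and exception names are invented for the example):

```python
class SpaceLimitExceeded(Exception):
    pass

class ResourceContainer:
    """Charges each allocation to the allocating task's container
    ('allocator-pays') and raises once the limit would be exceeded."""
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def allocate(self, nbytes):
        if self.used + nbytes > self.limit:
            raise SpaceLimitExceeded(f"{self.used + nbytes} > {self.limit}")
        self.used += nbytes

    def free(self, nbytes):
        self.used = max(0, self.used - nbytes)

rc = ResourceContainer(limit_bytes=1024)
rc.allocate(800)
try:
    rc.allocate(400)  # would exceed the 1 KiB bound
    overflowed = False
except SpaceLimitExceeded:
    overflowed = True
assert overflowed and rc.used == 800  # failed allocation not charged
```

The hard part the paper addresses, which this sketch omits, is making `free` meaningful in a garbage-collected heap, where reachable data cannot simply be deallocated; hence the taint-based revocation mechanism.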

 

Mikael Åsberg, Thomas Nolte, Shinpei Kato; “Towards Partitioned Hierarchical Real-Time Scheduling On Multi-Core Processors,” ACM SIGBED Review; Volume 11 Issue 2, June 2014, Pages 13-18. Doi: 10.1145/2668138.2668140 This paper extends previous work on hierarchical scheduling to multi-core systems. We have implemented partitioned multi-core scheduling of servers in the Linux kernel, using the scheduling framework ExSched. Neither ExSched nor the presented scheduler requires any modifications to the Linux kernel, which makes installation and kernel-version updates easier. We also present a user-space simulator which can be used when developing new multi-core hierarchical schedulers (plug-ins) for ExSched.  We evaluate the overhead of our new multi-core hierarchical scheduler and compare it to a single-core hierarchical scheduler. Our results can be useful for developers who want to minimize scheduler overhead when using partitioned hierarchical multi-core scheduling.
Keywords: hierarchical scheduling, implementation, partitioned multi-core scheduling, real-time systems (ID#: 15-4398)
URL: http://doi.acm.org/10.1145/2668138.2668140
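In hierarchical scheduling, each server is a budget/period abstraction: tasks inside a server may only run while the server has budget left in the current period, and in a partitioned design each core hosts its own set of servers. A toy single-server sketch (a simplified model, not the ExSched implementation):

```python
class Server:
    """A periodic budget server: grants one time unit per tick while
    budget remains, replenishing at each period boundary."""
    def __init__(self, budget, period):
        self.budget, self.period = budget, period
        self.remaining = budget

    def tick(self, t):
        if t % self.period == 0:        # replenish at period start
            self.remaining = self.budget
        can_run = self.remaining > 0
        if can_run:
            self.remaining -= 1
        return can_run

# One server supplying 2 time units out of every 5 on its core;
# partitioned scheduling would give each core such a server set.
s = Server(budget=2, period=5)
schedule = [s.tick(t) for t in range(10)]
assert schedule == [True, True, False, False, False] * 2
```

A user-space simulator like the one the authors describe would drive many such servers per core and measure how often the hierarchical scheduler itself consumes CPU time.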

 

Rafael Maestrelli, Luis Almeida, Daniel Coutinho, Ubirajara Moreno; “Dynamic Bandwidth Management in Networked Control Systems Using Quantization,” ACM SIGBED Review - Special Issue on the 6th Workshop on Adaptive and Reconfigurable Embedded Systems; Volume 11 Issue 3, October 2014, Pages 58-61. Doi: 10.1145/2692385.2692396 Modern networked control systems integrate multiple independent feedback control loops that require guaranteed bandwidth for timely operation. However, planning distributed control systems around worst-case requirements leads to expensive and inefficient designs. This motivated the development of dynamic rate adaptation as a technique to support higher integration in these systems while providing efficient use of the network bandwidth. This bandwidth can also be managed by varying the quantization used in each loop but, surprisingly, to the best of the authors' knowledge, this approach has not been explored yet. In this work-in-progress paper, we propose managing the network bandwidth by varying the number of bits used to represent the transmitted variables (sensor readings and actuation values) while keeping the loop rates constant. We present the basics of quantization-based bandwidth management as well as a qualitative discussion of the pros and cons of this method.
Keywords:  (not provided) (ID#: 15-4399)
URL: http://doi.acm.org/10.1145/2692385.2692396
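The trade the paper proposes is easy to see numerically: at a fixed loop rate, halving the word length halves the payload bandwidth, at the cost of coarser quantization. A minimal sketch (the signal range and rates are invented for the example):

```python
def quantize(value, bits, lo=-1.0, hi=1.0):
    """Map a real value in [lo, hi] onto a bits-wide integer code."""
    levels = (1 << bits) - 1
    code = round((value - lo) / (hi - lo) * levels)
    return max(0, min(levels, code))

def bandwidth(rate_hz, bits_per_sample):
    """Payload bits per second at a fixed loop rate."""
    return rate_hz * bits_per_sample

# Dropping from 16-bit to 8-bit samples halves the payload bandwidth
# while the 100 Hz loop rate stays constant.
assert bandwidth(100, 16) == 1600
assert bandwidth(100, 8) == 800
assert quantize(1.0, 4) == 15   # full-scale code at 4 bits
```

A bandwidth manager in this scheme would assign each loop a bit width such that the total payload fits the network capacity, shrinking the word length of loops that can tolerate coarser measurements.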

 

Shahab Mokarizadeh, Peep Küngas, Mihhail Matskin; “A Framework for Evaluating Semantic Annotations of Web Services: A Network Theory Based Approach For Measuring Annotation Quality,” Web Intelligence and Agent Systems, Volume 12 Issue 1, January 2014, Pages 15-34. Doi: (none provided) In recent years, various methods have been developed that require semantic annotations of Web services as an input. Such methods typically leverage discovery, match-making, composition and execution of Web services in dynamic settings. At the same time, a number of automated Web service annotation approaches have been proposed for enabling application of these methods in settings where it is not feasible to provide the annotations manually. However, the lack of effective automated evaluation frameworks has seriously limited proper assessment of the constructed annotations in settings where the overall annotation quality of large quantities of Web services needs to be evaluated. This paper describes an evaluation framework for measuring the quality of semantic annotations for a large number of real-world Web services from heterogeneous application domains. The evaluation framework is generally based on analyzing properties of Web service networks constructed from semantic annotations of the Web services. More specifically, we measure scale-free, small-world and correlation degree properties of the networks to evaluate the overall quality of annotations. The evaluation is demonstrated using annotations constructed semi-automatically for a set of publicly available WSDL documents containing descriptions of about 200 000 Web service operations.
Keywords: Evaluation of Annotation Quality, Network Theory, Semantic Web Services, Web Service Annotation, Web Service Networks (ID#: 15-4400)
URL: http://dl.acm.org/citation.cfm?id=2590104.2590106
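One of the small-world properties such frameworks measure is the local clustering coefficient: the fraction of a node's neighbour pairs that are themselves connected. A toy sketch on a hand-built graph (the tiny "service network" here is invented for the example):

```python
from collections import defaultdict

def clustering(adj, v):
    """Local clustering coefficient of node v in an undirected graph
    given as an adjacency map of sets."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

# Toy service network: nodes are operations, edges link operations
# whose annotated outputs match another operation's inputs.
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

assert clustering(adj, 1) == 1.0  # neighbours 2 and 3 are connected
assert clustering(adj, 3) < 1.0   # node 4 breaks the triangle
```

Averaging this coefficient over all nodes, together with the degree distribution's fit to a power law, gives the small-world and scale-free indicators the paper uses as proxies for annotation quality.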

 

Backes, Michael; Manoharan, Praveen; Mohammadi, Esfandiar, "TUC: Time-Sensitive and Modular Analysis of Anonymous Communication," Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 383-397, 19-22 July 2014. doi: 10.1109/CSF.2014.34 The anonymous communication protocol Tor constitutes the most widely deployed technology for providing anonymity for user communication over the Internet. Several frameworks have been proposed that show strong anonymity guarantees; none of these, however, is capable of modeling the class of traffic-related timing attacks against Tor, such as traffic correlation and website fingerprinting. In this work, we present TUC: the first framework that allows for establishing strong anonymity guarantees in the presence of time-sensitive adversaries that mount traffic-related timing attacks. TUC incorporates a comprehensive notion of time in an asynchronous communication model with sequential activation, while offering strong compositionality properties for security proofs. We apply TUC to evaluate a novel countermeasure for Tor against website fingerprinting attacks. Our analysis relies on a formalization of the onion routing protocol that underlies Tor and proves rigorous anonymity guarantees in the presence of traffic-related timing attacks.
Keywords: Analytical models; Clocks; Frequency modulation; Routing protocols; Security; Timing; Anonymity Guarantees; Anonymous Communication Protocols; Timing Attacks; Tor (ID#: 15-4401)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957124&isnumber=6957090
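The traffic-correlation attacks TUC models exploit the fact that packet timing survives onion encryption: an adversary watching both ends of a circuit can match flows by correlating inter-packet gaps. A toy sketch of that intuition using Pearson correlation (the timing traces are invented for the example):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Inter-packet gaps (ms) seen at a circuit's entry and at two exits:
entry = [12, 45, 30, 80, 22, 60]
exit_a = [13, 44, 31, 82, 21, 61]  # same flow, slightly perturbed
exit_b = [50, 10, 70, 15, 90, 25]  # unrelated flow

assert pearson(entry, exit_a) > 0.95  # flows linkable by timing alone
assert pearson(entry, exit_b) < 0.5
```

Countermeasures of the kind TUC is used to evaluate work by reshaping these timing patterns (padding, delaying) so that such correlations no longer distinguish flows.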

 

Rafnsson, Willard; Sabelfeld, Andrei, "Compositional Information-Flow Security for Interactive Systems," Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 277-292, 19-22 July 2014. doi: 10.1109/CSF.2014.27 To achieve end-to-end security in a system built from parts, it is important to ensure that the composition of secure components is itself secure. This work investigates the compositionality of two popular conditions of possibilistic noninterference. The first condition, progress-insensitive noninterference (PINI), is the security condition enforced by practical tools like JSFlow, Paragon, sequential LIO, Jif, Flow Caml, and SPARK Examiner. We show that this condition is not preserved under fair parallel composition: composing a PINI system fairly with another PINI system can yield an insecure system. We explore constraints that allow recovering compositionality for PINI. Further, we develop a theory of compositional reasoning. In contrast to PINI, we show that PSNI (progress-sensitive noninterference) behaves well under composition, with and without fairness assumptions. Our work is performed within a general framework for nondeterministic interactive systems.
Keywords: Cognition; Computational modeling; Interactive systems; Security; Semantics; Sensitivity; Synchronization (ID#: 15-4402)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957117&isnumber=6957090


 

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.