Biblio

Found 188 results

Filters: Keyword is Computing Theory
2017-10-13
Crockett, Eric, Peikert, Chris.  2016.  Λολ: Functional Lattice Cryptography. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :993–1005.

This work describes the design, implementation, and evaluation of Λολ, a general-purpose software framework for lattice-based cryptography. The Λολ framework has several novel properties that distinguish it from prior implementations of lattice cryptosystems, including the following. Generality, modularity, concision: Λολ defines a collection of general, highly composable interfaces for mathematical operations used across lattice cryptography, allowing for a wide variety of schemes to be expressed very naturally and at a high level of abstraction. For example, we implement an advanced fully homomorphic encryption (FHE) scheme in as few as 2–5 lines of code per feature, via code that very closely matches the scheme's mathematical definition. Theory affinity: Λολ is designed from the ground up around the specialized ring representations, fast algorithms, and worst-case hardness proofs that have been developed for the Ring-LWE problem and its cryptographic applications. In particular, it implements fast algorithms for sampling from theory-recommended error distributions over arbitrary cyclotomic rings, and provides tools for maintaining tight control of error growth in cryptographic schemes. Safety: Λολ has several facilities for reducing code complexity and programming errors, thereby aiding the correct implementation of lattice cryptosystems. In particular, it uses strong typing to statically enforce—i.e., at compile time—a wide variety of constraints among the various parameters. Advanced features: Λολ exposes the rich hierarchy of cyclotomic rings to cryptographic applications. We use this to give the first-ever implementation of a collection of FHE operations known as "ring switching," and also define and analyze a more efficient variant that we call "ring tunneling." Lastly, this work defines and analyzes a variety of mathematical objects and algorithms for the recommended usage of Ring-LWE in cyclotomic rings, which we believe will serve as a useful knowledge base for future implementations.
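
As a rough illustration of the kind of scheme Λολ is built to express, here is a toy symmetric Ring-LWE encryption sketch in Python. The parameters, the naive negacyclic multiplication, and the ternary "error" sampler are illustrative stand-ins only; Λολ itself is a Haskell framework with theory-recommended samplers and statically checked parameter constraints.

```python
import random

# Toy Ring-LWE symmetric encryption in R_q = Z_q[x]/(x^n + 1), the kind of
# cyclotomic ring the framework's interfaces revolve around. Not secure.
n, q = 16, 7681

def poly_mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1) (negacyclic convolution, O(n^2))."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:                       # reduce using x^n = -1
                res[i + j - n] = (res[i + j - n] - ai * bj) % q
    return res

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def small():
    # crude ternary stand-in for a theory-recommended error distribution
    return [random.randint(-1, 1) % q for _ in range(n)]

s = small()                             # secret key

def encrypt(bits):                      # bits: list of n values in {0, 1}
    a = [random.randrange(q) for _ in range(n)]
    b = poly_add(poly_add(poly_mul(a, s), small()),
                 [(q // 2) * m for m in bits])
    return a, b

def decrypt(a, b):
    d = [(x - y) % q for x, y in zip(b, poly_mul(a, s))]
    return [1 if q // 4 < x < 3 * q // 4 else 0 for x in d]

msg = [random.randint(0, 1) for _ in range(n)]
assert decrypt(*encrypt(msg)) == msg
```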

Lesani, Mohsen, Bell, Christian J., Chlipala, Adam.  2016.  Chapar: Certified Causally Consistent Distributed Key-value Stores. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :357–370.

Today’s Internet services are often expected to stay available and remain highly responsive even in the face of site crashes and network partitions. Theoretical results state that causal consistency is one of the strongest consistency guarantees that is possible under these requirements, and many practical systems provide causally consistent key-value stores. In this paper, we present a framework called Chapar for modular verification of causal consistency for replicated key-value store implementations and their client programs. Specifically, we formulate separate correctness conditions for key-value store implementations and for their clients. The interface between the two is a novel operational semantics for causal consistency. We have verified the causal consistency of two key-value store implementations from the literature using a novel proof technique. We have also implemented a simple automatic model checker for the correctness of client programs. The two independently verified results for the implementations and clients can be composed to conclude the correctness of any of the programs when executed with any of the implementations. We have developed and checked our framework in Coq, extracted it to OCaml, and built executable stores.
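
A minimal Python sketch of one standard way to realize causal consistency, version vectors with a causal delivery condition, may help fix intuitions; it is illustrative only and not Chapar's verified Coq development (the class and field names are invented).

```python
# Each replica delays a remote update until all of its causal dependencies,
# as recorded in the sender's version vector, have been applied locally.
class Replica:
    def __init__(self, rid, n):
        self.rid, self.clock = rid, [0] * n     # version vector
        self.store, self.pending = {}, []

    def put(self, key, val):
        self.clock[self.rid] += 1
        self.store[key] = val
        return (key, val, self.rid, list(self.clock))   # update message

    def deliver(self, msg):
        self.pending.append(msg)
        progress = True
        while progress:                 # apply every update that became ready
            progress = False
            for m in list(self.pending):
                key, val, sender, clk = m
                if self._ready(clk, sender):
                    self.store[key] = val
                    self.clock[sender] = clk[sender]
                    self.pending.remove(m)
                    progress = True

    def _ready(self, clk, sender):
        # next update from that sender, with no missing third-party deps
        return (clk[sender] == self.clock[sender] + 1 and
                all(clk[i] <= self.clock[i]
                    for i in range(len(clk)) if i != sender))

r0, r1 = Replica(0, 2), Replica(1, 2)
m1, m2 = r0.put('x', 1), r0.put('y', 2)
r1.deliver(m2); r1.deliver(m1)          # out of order: y=2 waits for x=1
assert r1.store == {'x': 1, 'y': 2}
```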

Denysyuk, Oksana, Woelfel, Philipp.  2016.  Are Shared Objects Composable Under an Oblivious Adversary? Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing. :335–344.

Linearizability [5] of a concurrent object ensures that operations on that object appear to execute atomically. It is well known that linearizable implementations are composable: in an algorithm designed to work with atomic objects, replacing any atomic object with a linearizable implementation preserves the correctness of the original algorithm. However, replacing atomic objects with linearizable ones in a randomized algorithm can break the original probabilistic guarantees [3]. With an adaptive adversary, this problem is solved by using strongly linearizable [3] objects in the composition. How about with an oblivious adversary? In this paper, we ask the fundamental question of what property makes implementations composable under an oblivious adversary. It turns out that the property depends on the entire collection of objects used in the algorithm. We show that the composition of every randomized algorithm with a collection of linearizable objects OL is sound if and only if OL satisfies a property called library homogeneity. Roughly, this property says that, for each process, every operation on OL has the same length and linearization point. This result has several important implications. First, for an oblivious adversary, there is nothing analogous to linearizability to ensure that the atomic objects of an algorithm can be replaced with their implementations. Second, in general, algorithms cannot use implemented objects alongside atomic objects provided by the system, such as registers. These results show that, with an oblivious adversary, it is much harder to implement reusable object types than previously believed.

Binsbergen, L. Thomas van, Sculthorpe, Neil, Mosses, Peter D..  2016.  Tool Support for Component-based Semantics. Companion Proceedings of the 15th International Conference on Modularity. :8–11.

The developers of a programming language need to document its intended syntax and semantics, and to update the documentation when the language evolves. They use formal grammars to define context-free syntax, but usually give only an informal description of semantics. Use of formal semantics could greatly increase the consistency and completeness of language documentation, support rapid prototyping, and allow empirical validation. Modularity of semantics is essential for practicality when scaling up to definitions of larger languages. Component-based semantics takes modularity to the highest possible level. In this approach, the semantics of a language is defined by equations translating its constructs (compositionally) to combinations of so-called fundamental constructs, or `funcons'. The definition of each funcon is a small, highly reusable component. The PLanCompS project has defined a substantial library of funcons, and shown their reusability in several case studies. We have designed a meta-language called CBS for component-based semantics, and an IDE to support development, rapid prototyping, and validation of definitions in CBS. After introducing and motivating CBS, we demonstrate how the IDE can be used to browse and edit the CBS definition of a toy language, to generate a prototype implementation of the language, and to parse and run programs.
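
To convey the flavor of the approach, here is a cartoon in Python: each "funcon" is a small reusable component, and a source-language construct is defined once by translating it into a composition of funcons. The funcon names are invented for this sketch; CBS definitions are written in a meta-language from which prototype interpreters are generated.

```python
# Funcons: tiny, reusable semantic components.
def lit(v):            return lambda env: v
def var(x):            return lambda env: env[x]
def apply_op(f, *args):
    return lambda env: f(*(a(env) for a in args))
def if_true(c, t, e):
    return lambda env: t(env) if c(env) else e(env)

# A 'source language' construct, defined compositionally by translation.
def max_expr(a, b):
    return if_true(apply_op(lambda x, y: x >= y, a, b), a, b)

print(max_expr(lit(3), var('n'))({'n': 7}))   # prints 7
```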

Schäfer, Steven, Schneider, Sigurd, Smolka, Gert.  2016.  Axiomatic Semantics for Compiler Verification. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :188–196.

Based on constructive type theory, we study two idealized imperative languages GC and IC and verify the correctness of a compiler from GC to IC. GC is a guarded command language with underspecified execution order defined with an axiomatic semantics. IC is a deterministic low-level language with linear sequential composition and lexically scoped gotos defined with a small-step semantics. We characterize IC with an axiomatic semantics and prove that the compiler from GC to IC preserves specifications. The axiomatic semantics we consider model total correctness and map programs to continuous predicate transformers. We define the axiomatic semantics of GC and IC with elementary inductive predicates and show that the predicate transformer described by a program can be obtained compositionally by recursion on the syntax of the program using a fixed point operator for loops and continuations. We also show that two IC programs are contextually equivalent if and only if their predicate transformers are equivalent.
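
A small Python sketch of the predicate-transformer view: a program denotes a map from postconditions to preconditions, with guarded choice treated demonically for total correctness. This only illustrates the general idea, not the paper's type-theoretic development; all names are invented.

```python
# Predicates are functions from states to booleans; a command is a
# weakest-precondition transformer from postconditions to preconditions.
def skip(post):
    return post

def assign(var, expr):
    # wp(x := e, Q) = Q[e/x]
    return lambda post: (lambda state: post({**state, var: expr(state)}))

def seq(c1, c2):
    # wp(c1; c2, Q) = wp(c1, wp(c2, Q))
    return lambda post: c1(c2(post))

def guarded_choice(branches):
    # underspecified execution order: some guard must hold, and every
    # enabled branch must establish the postcondition (total correctness)
    def wp(post):
        def pre(state):
            enabled = [cmd for guard, cmd in branches if guard(state)]
            return bool(enabled) and all(cmd(post)(state) for cmd in enabled)
        return pre
    return wp

prog = seq(assign('x', lambda s: s['x'] + 1),
           guarded_choice([(lambda s: s['x'] > 0, assign('y', lambda s: 1)),
                           (lambda s: s['x'] <= 0, assign('y', lambda s: -1))]))
post = lambda s: s['y'] == 1
print(prog(post)({'x': 0, 'y': 0}))     # True: x becomes 1, so y := 1 runs
```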

Kozen, Dexter.  2016.  Kolmogorov Extension, Martingale Convergence, and Compositionality of Processes. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science. :692–699.

We show that the Kolmogorov extension theorem and the Doob martingale convergence theorem are two aspects of a common generalization, namely a colimit-like construction in a category of Radon spaces and reversible Markov kernels. The construction provides a compositional denotational semantics for lossless iteration in probabilistic programming languages, even in the absence of a natural partial order.

Mehregan, Pooya, Fong, Philip W.L..  2016.  Policy Negotiation for Co-owned Resources in Relationship-Based Access Control. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :125–136.

The collaborative nature of content development has given rise to the novel problem of multiple ownership in access control, such that a shared resource is administered simultaneously by co-owners who may have conflicting privacy preferences and/or sharing needs. Prior work has focused on the design of unsupervised conflict resolution mechanisms. Driven by the need for human consent in organizational settings, this paper explores interactive policy negotiation, an approach complementary to that of prior work. Specifically, we propose an extension of Relationship-Based Access Control (ReBAC) to support multiple ownership, in which a policy negotiation protocol is in place for co-owners to come up with and give consent to an access control policy in a structured manner. During negotiation, the draft policy is assessed against formally defined availability criteria; verifying these criteria is computationally hard, reaching the second level of the polynomial hierarchy. We devised two algorithms for verifying policy satisfiability, both employing a modern SAT solver for solving subproblems. The performance is found to be adequate for mid-sized organizations.
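
To make the satisfiability check concrete, here is a hedged miniature: each co-owner's requirement becomes a constraint over candidate access sets, and a policy exists exactly when some assignment satisfies all constraints. The paper employs a modern SAT solver; the brute-force search and the example constraints below are invented stand-ins.

```python
from itertools import product

users = ['alice', 'bob', 'carol', 'dave']

# Each co-owner contributes a constraint on who may access the resource.
constraints = [
    lambda acc: 'alice' in acc,          # owner 1: I must retain access
    lambda acc: 'dave' not in acc,       # owner 2: exclude dave
    lambda acc: len(acc) >= 2,           # availability: at least 2 viewers
]

def satisfiable():
    # enumerate all access sets (a SAT solver replaces this in practice)
    for bits in product([0, 1], repeat=len(users)):
        acc = {u for u, b in zip(users, bits) if b}
        if all(c(acc) for c in constraints):
            return acc
    return None

print(satisfiable())                     # e.g. {'alice', 'carol'}
```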

Usman, Aminu Bello, Gutierrez, Jairo.  2016.  A Reliability-Based Trust Model for Efficient Collaborative Routing in Wireless Networks. Proceedings of the 11th International Conference on Queueing Theory and Network Applications. :15:1–15:7.

Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative protocols of interaction among peers, yet most of the surveyed protocols provide little detail on how the peers can take the peers' reliability into consideration for improving routing efficiency in collaborative networks. Previous research has shown that in most of the trust and reputation evaluation schemes, the peers' rating behaviour can be improved to include the peers' attributes for understanding peers' reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between the peers in P2P networks for collaborative routing. Since the peers' routing attributes vary dynamically, our proposed model also accommodates the dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in the trust and reputation routing protocols. A simulation-based comparison between the reliability-based and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.

Wang, Yi, Redmiles, David.  2016.  Exploring Trust and Cooperation Development with Agent-Based Simulation in A Pseudo Scale-free Network. Proceedings of the 19th International Conference on Supporting Group Work. :121–130.

Globally distributed collaboration requires cooperation and trust among team members. Current research suggests that informal, non-work-related communication plays a positive role in developing cooperation and trust. However, the way in which teams connect, i.e. via a social network, greatly influences cooperation and trust development. The study described in this paper employs agent-based modeling and simulation to investigate cooperation and trust development in the presence of informal, non-work-related communication in networked teams. Leveraging game theory, we present a model of how an individual makes strategic decisions when interacting with her social network neighbors. The results of simulation on a pseudo scale-free network reveal the conditions under which informal communication has an impact, how different network degree distributions affect efficient trust and cooperation development, and how it is possible to "seed" trust and cooperation development amongst individuals in specific network positions. This study is the first to use agent-based modeling and simulation to examine the relationships between scale-free networks' topological features (degree distribution), cooperation and trust development, and informal communication.
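
A minimal sketch of the ingredients, assuming a Barabasi-Albert graph as the "pseudo scale-free" network and a Prisoner's Dilemma with imitate-the-best updating; the paper's actual model adds informal communication and trust dynamics that are not captured here.

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(100, 2, seed=1)     # scale-free-ish topology
strat = {v: random.random() < 0.5 for v in G}    # True = cooperate
R, S, T, P = 3, 0, 5, 1                          # standard PD payoffs

def payoff(me, other):
    return (R if other else S) if me else (T if other else P)

for _ in range(50):
    score = {v: sum(payoff(strat[v], strat[u]) for u in G[v]) for v in G}
    # each agent imitates the best-scoring member of its neighborhood
    strat = {v: strat[max(list(G[v]) + [v], key=score.get)] for v in G}

print('cooperators:', sum(strat.values()), 'of', len(G))
```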

Yu, Kun, Berkovsky, Shlomo, Conway, Dan, Taib, Ronnie, Zhou, Jianlong, Chen, Fang.  2016.  Trust and Reliance Based on System Accuracy. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :223–227.

Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs for the users' decision making processes. In this work, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system, as a function of the system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly.

Kassem, Mohamed, Hasan, Cengis, Marina, Mahesh.  2016.  Decoupled Uplink/Downlink User Association in HetNets: A Matching with Contracts Approach. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :19–28.

In light of the prevalent trend towards dense HetNets, the conventional coupled user association, where a mobile device uses the same base station (BS) for both uplink and downlink traffic, is being questioned and the alternative and more general downlink/uplink decoupling paradigm is emerging. We focus on designing an effective user association mechanism for HetNets with downlink/uplink decoupling, which has started to receive more attention. We use a combination of matching theory and stochastic geometry. We model the problem as a matching with contracts game by drawing an analogy with the hospital-doctor matching problem. In our model, we use stochastic geometry to derive a closed-form expression for the matching utility function. Our model captures different objectives between users in the uplink/downlink directions and also from the perspective of BSs. Based on this game model, we present a matching algorithm for decoupled uplink/downlink user association that results in a stable allocation. Simulation results demonstrate that our approach provides close-to-optimal performance, and significant gains over alternative approaches for user association in the decoupled context as well as the traditional coupled user association; these gains are a result of the holistic nature of our approach, which accounts for the additional cost associated with decoupling and the inter-dependence between uplink and downlink associations. Our work is also the first in the wireless communications domain to employ the matching with contracts approach.
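
For the matching core, here is a Python sketch of many-to-one deferred acceptance in the hospital-doctor style the paper builds on; matching with contracts additionally attaches contract terms (for instance, which direction a BS serves) to each match, which this simplified sketch omits. The preference data are invented.

```python
def deferred_acceptance(user_pref, bs_pref, capacity):
    free = list(user_pref)                   # users still proposing
    nxt = {u: 0 for u in user_pref}          # next BS each user will try
    held = {b: [] for b in bs_pref}          # tentatively accepted users
    while free:
        u = free.pop()
        if nxt[u] >= len(user_pref[u]):
            continue                         # u exhausted its list
        b = user_pref[u][nxt[u]]
        nxt[u] += 1
        held[b].append(u)
        held[b].sort(key=bs_pref[b].index)   # BS keeps its preferred users
        if len(held[b]) > capacity[b]:
            free.append(held[b].pop())       # reject the least preferred
    return held                              # stable allocation

users = {'u1': ['macro', 'small'], 'u2': ['small', 'macro'],
         'u3': ['small', 'macro']}
bss = {'macro': ['u1', 'u2', 'u3'], 'small': ['u3', 'u2', 'u1']}
print(deferred_acceptance(users, bss, {'macro': 2, 'small': 1}))
# {'macro': ['u1', 'u2'], 'small': ['u3']}
```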

Gao, Peixin, Miao, Hui, Baras, John S., Golbeck, Jennifer.  2016.  STAR: Semiring Trust Inference for Trust-Aware Social Recommenders. Proceedings of the 10th ACM Conference on Recommender Systems. :301–308.

Social recommendation takes advantage of the influence of social relationships in decision making and the ready availability of social data through social networking systems. Trust relationships in particular can be exploited in such systems for rating prediction and recommendation, which has been shown to have the potential for improving the quality of the recommender and alleviating the issues of data sparsity, cold start, and adversarial attacks. An appropriate trust inference mechanism is necessary in extending the knowledge base of trust opinions and tackling the issue of limited trust information due to connection sparsity of social networks. In this work, we offer a new solution to trust inference in social networks to provide a better knowledge base for trust-aware recommender systems. We propose using a semiring framework as a nonlinear way to combine trust evidence for inferring trust, where a trust relationship is modeled as a 2-D vector containing both trust and certainty information. The trust propagation and aggregation rules, as the building blocks of our trust inference scheme, are based upon the properties of trust relationships. In our approach, both trust and distrust (i.e., positive and negative trust) are considered, and opinion conflict resolution is supported. We evaluate the proposed approach on real-world datasets, and show that our trust inference framework has high accuracy, and is capable of handling trust relationships in large networks. The inferred trust relationships can enlarge the knowledge base for trust information and improve the quality of trust-aware recommendation.
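
A hedged sketch of the semiring idea: opinions are (trust, certainty) pairs, one operator concatenates opinions along a path, and another aggregates across parallel paths. The concrete operators and the example graph below are simple placeholders, not STAR's actual definitions.

```python
import networkx as nx

def otimes(a, b):      # propagate along a path: discount trust and certainty
    return (a[0] * b[0], a[1] * b[1])

def oplus(a, b):       # aggregate parallel paths: keep the more certain one
    return a if a[1] >= b[1] else b

def infer(G, src, dst, max_len=4):
    best = (0.0, 0.0)
    for path in nx.all_simple_paths(G, src, dst, cutoff=max_len):
        opinion = (1.0, 1.0)
        for u, v in zip(path, path[1:]):
            opinion = otimes(opinion, G[u][v]['opinion'])
        best = oplus(best, opinion)
    return best

G = nx.DiGraph()
G.add_edge('a', 'b', opinion=(0.9, 0.8))
G.add_edge('b', 'c', opinion=(0.7, 0.9))
G.add_edge('a', 'd', opinion=(0.5, 0.4))
G.add_edge('d', 'c', opinion=(0.9, 0.5))
print(infer(G, 'a', 'c'))    # (0.63, 0.72): the a-b-c path is more certain
```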

2017-10-03
Braverman, Mark, Efremenko, Klim, Gelles, Ran, Haeupler, Bernhard.  2016.  Constant-rate Coding for Multiparty Interactive Communication is Impossible. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :999–1010.

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.

Chattopadhyay, Eshan, Goyal, Vipul, Li, Xin.  2016.  Non-malleable Extractors and Codes, with Their Many Tampered Extensions. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :285–298.

Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper resilient cryptography. These are seeded non-malleable extractors, introduced by Dodis and Wichs; seedless non-malleable extractors, introduced by Cheraghchi and Guruswami; and non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs. Besides being interesting on their own, they also have important applications in cryptography, e.g., privacy amplification with an active adversary, explicit non-malleable codes, etc., and often have unexpected connections to their non-tampered analogues. However, the known constructions are far behind their non-tampered counterparts. Indeed, the best known seeded non-malleable extractor requires min-entropy rate at least 0.49, while explicit constructions of non-malleable two-source extractors were not known even if both sources have full min-entropy; this was left as an open problem by Cheraghchi and Guruswami. In this paper we make progress towards solving the above problems and other related generalizations. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for polylogarithmic min-entropy. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result. In fact, we construct more general seeded non-malleable extractors (that can handle multiple adversaries) which were used in the recent construction of explicit two-source extractors for polylogarithmic min-entropy. (2) We construct the first explicit non-malleable two-source extractor for almost full min-entropy, thus resolving the open question posed by Cheraghchi and Guruswami. (3) We motivate and initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. By using the connection found by Cheraghchi and Guruswami and providing efficient sampling algorithms, we obtain the first explicit non-malleable codes with tampering degree t, with near-optimal rate and error. We call these stronger notions one-many and many-many non-malleable codes. This provides a stronger information theoretic analogue of a primitive known as continuous non-malleable codes. Our basic technique used in all of our constructions can be seen as inspired, in part, by the techniques previously used to construct cryptographic non-malleable commitments.

Applebaum, Benny, Lovett, Shachar.  2016.  Algebraic Attacks Against Random Local Functions and Their Countermeasures. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1087–1100.

Suppose that you have n truly random bits x=(x1,…,xn) and you wish to use them to generate m ≫ n pseudorandom bits y=(y1,…,ym) using a local mapping, i.e., each yi should depend on at most d=O(1) bits of x. In the polynomial regime of m = n^s, s > 1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute yi by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs (xi1,…,xid). Our goal in this paper is to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against F2-linear adversaries (i.e., the distribution y has low bias) is achieved if the predicate is (a) k=Ω(s)-resilient, i.e., uncorrelated with any k-subset of its inputs, and (b) has algebraic degree of Ω(s) even after fixing Ω(s) of its inputs. We also show that these requirements are necessary, and so they form a tight characterization (up to constants) of security against linear attacks. Our positive result shows that a d-local low-bias generator can have output length of n^Ω(d), answering an open question of Mossel, Shpilka and Trevisan (FOCS, 2003). Our negative result shows that a candidate for pseudorandom generator proposed by the first author (computational complexity, 2015) and by O’Donnell and Witmer (CCC 2014) is insecure. We use similar techniques to refute a conjecture of Feldman, Perkins and Vempala (STOC 2015) regarding the hardness of planted constraint satisfaction problems. (2) Motivated by the cryptanalysis literature, we consider security against algebraic attacks. We provide the first theoretical treatment of such attacks by formalizing a general notion of algebraic inversion and distinguishing attacks based on the Polynomial Calculus proof system. We show that algebraic attacks succeed if and only if there exists a degree e=O(s) non-zero polynomial Q whose roots cover the roots of P or cover the roots of P’s complement. As a corollary, we obtain the first example of a predicate P for which the generated sequence y passes all linear tests but fails to pass some polynomial-time computable test, answering an open question posed by the first author (Question 4.9, computational complexity 2015).
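
The generator itself is easy to state in code. Below is a sketch using the well-studied XOR-AND predicate of Mossel, Shpilka and Trevisan as the example P; the tuples are public randomness, and all parameters are illustrative.

```python
import random

def local_prg(x, m, seed=0):
    """Random Local Function generator: y_i = P(x restricted to a random
    public 5-tuple). The predicate below is the MST XOR-AND predicate."""
    rng = random.Random(seed)                    # public randomness
    tuples = [rng.sample(range(len(x)), 5) for _ in range(m)]
    P = lambda b: b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])
    return [P([x[i] for i in t]) for t in tuples]

n = 64
x = [random.randint(0, 1) for _ in range(n)]     # n truly random bits
y = local_prg(x, m=n ** 2 // 8)                  # stretch to m >> n bits
print(sum(y), 'ones out of', len(y))
```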

Rizzi, Francesco, Morris, Karla, Sargsyan, Khachik, Mycek, Paul, Safta, Cosmin, Debusschere, Bert, LeMaitre, Olivier, Knio, Omar.  2016.  ULFM-MPI Implementation of a Resilient Task-Based Partial Differential Equations Preconditioner. Proceedings of the ACM Workshop on Fault-Tolerance for HPC at Extreme Scale. :19–26.

We present a task-based domain-decomposition preconditioner for partial differential equations (PDEs) resilient to silent data corruption (SDC) and hard faults. The algorithm exploits a reformulation of the PDE as a sampling problem, followed by a regression-based solution update that is resilient to SDC. We adopt a server-client model implemented using the User Level Fault Mitigation MPI (MPI-ULFM). All state information is held by the servers, while clients only serve as computational units. The task-based nature of the algorithm and the capabilities of ULFM are complemented at the algorithm level to support missing tasks, making the application resilient to hard faults affecting the clients. Weak and strong scaling tests up to ~115k cores show excellent performance of the application with efficiencies above 90%, demonstrating its suitability to run at large scale. We demonstrate the resilience of the application for a 2D elliptic PDE by injecting SDC using a random single bit-flip model, and hard faults in the form of clients crashing. We show that in all cases, the application converges to the right solution. We analyze the overhead caused by the faults, and show that, for the test problem considered, the overhead incurred due to SDC is minimal compared to that from the hard faults.
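
For concreteness, a small Python sketch of the random single bit-flip model used in the SDC experiments: flip one uniformly chosen bit of an IEEE-754 double and observe the effect. The function name is invented; in the paper such faults are injected into the running solver's state.

```python
import random
import struct

def flip_random_bit(value, rng=random):
    """Corrupt one uniformly chosen bit of a 64-bit float."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', value))
    bits ^= 1 << rng.randrange(64)
    (corrupted,) = struct.unpack('<d', struct.pack('<Q', bits))
    return corrupted

x = 3.141592653589793
print(flip_random_bit(x))   # anything from a tiny perturbation to inf/nan
```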

2017-09-19
Huo, Jing, Gao, Yang, Shi, Yinghuan, Yang, Wanqi, Yin, Hujun.  2016.  Ensemble of Sparse Cross-Modal Metrics for Heterogeneous Face Recognition. Proceedings of the 2016 ACM on Multimedia Conference. :1405–1414.

Heterogeneous face recognition aims to identify or verify person identity by matching facial images of different modalities. In practice, it is known that its performance is highly influenced by modality inconsistency, appearance occlusions, illumination variations and expressions. In this paper, a new method named Ensemble of Sparse Cross-Modal Metrics is proposed for tackling these challenging issues. In particular, a weak sparse cross-modal metric learning method is first developed to measure distances between samples of two modalities. It learns to adjust rank-one cross-modal metrics to satisfy two sets of triplet-based cross-modal distance constraints in a compact form. Meanwhile, a group-based feature selection is performed to enforce that features in the same position of two modalities are selected simultaneously. By neglecting features attributable to "noise" in the face regions (eye glasses, expressions and so on), the performance of the learned weak metrics can be markedly improved. Finally, an ensemble framework is incorporated to combine the results of the differently learned sparse metrics into a strong one. Extensive experiments on various face datasets demonstrate the benefit of such feature selection, especially when heavy occlusions exist. The proposed ensemble metric learning has shown superiority over several state-of-the-art methods in heterogeneous face recognition.

2017-09-15
Naghmouchi, M. Yassine, Perrot, Nancy, Kheir, Nizar, Mahjoub, A. Ridha, Wary, Jean-Philippe.  2016.  A New Risk Assessment Framework Using Graph Theory for Complex ICT Systems. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :97–100.

In this paper, we propose a new risk analysis framework that enables the supervision of risks in complex and distributed systems. Our contribution is twofold. First, we provide the Risk Assessment Graphs (RAGs) as a model of risk analysis. This graph-based model is adaptable to system changes over time. We also introduce the potentiality and the accessibility functions which, during each time slot, evaluate respectively the chance of exploiting the RAG's nodes, and the connection time between these nodes. In addition, we provide a worst-case risk evaluation approach, based on the assumption that intruder threats usually aim at maximising their benefits by inflicting the maximum damage to the target system (i.e. choosing the most likely paths in the RAG). We then introduce three security metrics: the propagated risk, the node risk and the global risk. We illustrate the use of our framework through the simple example of an enterprise email service. Our framework achieves both flexibility and generality requirements; it can be used to assess external threats as well as insider ones, and it applies to a wide set of applications.
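
An illustrative miniature of a RAG in Python: nodes carry a potentiality (chance of exploitation), edges an accessibility, and the worst-case propagated risk toward a target follows the most likely attack path. The node names, weights, and the max-product rule are invented for the example.

```python
import networkx as nx

G = nx.DiGraph()
for node, pot in [('internet', 1.0), ('mail_gw', 0.6),
                  ('mail_server', 0.4), ('user_db', 0.2)]:
    G.add_node(node, pot=pot)                # potentiality per node
for u, v, acc in [('internet', 'mail_gw', 0.9),
                  ('mail_gw', 'mail_server', 0.8),
                  ('mail_server', 'user_db', 0.5),
                  ('internet', 'user_db', 0.1)]:
    G.add_edge(u, v, acc=acc)                # accessibility per edge

def propagated_risk(G, src, dst):
    best = 0.0
    for path in nx.all_simple_paths(G, src, dst):
        p = 1.0
        for u, v in zip(path, path[1:]):
            p *= G[u][v]['acc'] * G.nodes[v]['pot']
        best = max(best, p)                  # worst case = likeliest path
    return best

print(propagated_risk(G, 'internet', 'user_db'))
```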

Barthelemy, Lucas, Eyrolles, Ninon, Renault, Guenaël, Roblin, Raphaël.  2016.  Binary Permutation Polynomial Inversion and Application to Obfuscation Techniques. Proceedings of the 2016 ACM Workshop on Software PROtection. :51–59.

Whether it is for conditional statement, constant, opaque predicate or equation obfuscation, the Mixed Boolean-Arithmetic (MBA) technique is a powerful tool providing concrete ways to achieve obfuscation. Recent papers ([22,1]) presented ways to mix such tools with permutation polynomials modulo 2^n in order to make them more resilient to SMT solvers. However, because of limitations regarding the inversion of such permutations, the set of permutation polynomials presented suffers from some restrictions. Such restrictions give rise to several methods of arithmetic simplification, decreasing their effectiveness at hiding information. In this work, we present general methods for permutation polynomial inversion. Those methods allow us to remove some of the restrictions presented in the literature, making simplification methods less effective. We discuss the complexity and limits of these methods, and conclude that not only may current simplification methods not be as effective as we thought, but there are still many uses of polynomial permutations in obfuscation that are yet to be explored.
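
Two of the building blocks are easy to demonstrate: a classic MBA identity, and an affine permutation polynomial mod 2^n together with its inverse. The constants are arbitrary, and affine polynomials are exactly the easy case; the paper's contribution concerns inverting higher-degree binary permutation polynomials.

```python
N = 2 ** 32                # arithmetic over Z/2^32

def mba_add(x, y):
    # MBA identity: x + y == (x ^ y) + 2*(x & y)
    return ((x ^ y) + 2 * (x & y)) % N

def p(x):                  # affine permutation polynomial (odd coefficient)
    return (0x2545F491 * x + 0x1337) % N

def p_inv(y):              # invert via the modular inverse of the coefficient
    inv = pow(0x2545F491, -1, N)     # exists because 0x2545F491 is odd
    return (inv * (y - 0x1337)) % N

x, y = 0xDEADBEEF, 0x1234
assert mba_add(x, y) == (x + y) % N
assert p_inv(p(x)) == x
```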

Dziembowski, Stefan, Faust, Sebastian, Standaert, François-Xavier.  2016.  Private Circuits III: Hardware Trojan-Resilience via Testing Amplification. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :142–153.

Security against hardware trojans is currently becoming an essential ingredient to ensure trust in information systems. A variety of solutions have been introduced to reach this goal, ranging from reactive (i.e., detection-based) to preventive (i.e., trying to make the insertion of a trojan more difficult for the adversary). In this paper, we show how testing (which is a typical detection tool) can be used to state concrete security guarantees for preventive approaches to trojan-resilience. For this purpose, we build on and formalize two important previous works which introduced "input scrambling" and "split manufacturing" as countermeasures to hardware trojans. Using these ingredients, we present a generic compiler that can transform any circuit into a trojan-resilient one, for which we can state quantitative security guarantees on the number of correct executions of the circuit thanks to a new tool denoted as "testing amplification". Compared to previous works, our threat model covers an extended range of hardware trojans while we stick with the goal of minimizing the number of honest elements in our transformed circuits. Since transformed circuits essentially correspond to redundant multiparty computations of the target functionality, they also allow reasonably efficient implementations, which can be further optimized if specialized to certain cryptographic primitives and security goals.

Multari, Nicholas J., Singhal, Anoop, Manz, David O..  2016.  SafeConfig'16: Testing and Evaluation for Active and Resilient Cyber Systems. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1871–1872.

The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessment are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective for this workshop is the exploration and discussion of scientifically sound testing regimen(s) that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification and, potentially, wholesale new developments to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners.

Crampton, Jason, Gutin, Gregory, Watrigant, Rémi.  2016.  Resiliency Policies in Access Control Revisited. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :101–111.

Resiliency is a relatively new topic in the context of access control. Informally, it refers to the extent to which a multi-user computer system, subject to an authorization policy, is able to continue functioning if a number of authorized users are unavailable. Several interesting problems connected to resiliency were introduced by Li, Wang and Tripunitara [13], many of which were found to be intractable. In this paper, we show that these resiliency problems have unexpected connections with the workflow satisfiability problem (WSP). In particular, we show that an instance of the resiliency checking problem (RCP) may be reduced to an instance of WSP. We then demonstrate that recent advances in our understanding of WSP enable us to develop fixed-parameter tractable algorithms for RCP. Moreover, these algorithms are likely to be useful in practice, given recent experimental work demonstrating the advantages of bespoke algorithms to solve WSP. We also generalize RCP in several different ways, showing in each case how to adapt the reduction to WSP. Li et al. also showed that the coexistence of resiliency policies and static separation-of-duty policies gives rise to further interesting questions. We show how our reduction of RCP to WSP may be extended to solve these problems as well and establish that they are also fixed-parameter tractable.

2017-06-27
Jordan, Michael I..  2016.  On Computational Thinking, Inferential Thinking and Data Science. Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures. :47–47.

The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the inferential and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in "Big Data" is apparent from their sharply divergent nature at an elementary level: in computer science, the growth of the number of data points is a source of "complexity" that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of "simplicity" in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as "runtime" in core statistical theory and the lack of a role for statistical concepts such as "risk" in core computational theory. I present several research vignettes aimed at bridging computation and statistics, including the problem of inference under privacy and communication constraints, and ways to exploit parallelism so as to trade off the speed and accuracy of inference.

2017-06-05
Chattopadhyay, Eshan, Zuckerman, David.  2016.  Explicit Two-source Extractors and Resilient Functions. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :670–683.

We explicitly construct an extractor for two independent sources on n bits, each with polylogarithmic min-entropy. Our extractor outputs one bit and has polynomially small error. The best previous extractor, by Bourgain, required each source to have min-entropy .499n. A key ingredient in our construction is an explicit construction of monotone, almost-balanced Boolean functions that are resilient to coalitions. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q quadratically smaller than our result. Our explicit two-source extractor directly implies improved constructions of a K-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.

2017-05-22
Suzuki, Kenichi, Kiselyov, Oleg, Kameyama, Yukiyoshi.  2016.  Finally, Safely-extensible and Efficient Language-integrated Query. Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation. :37–48.

Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from the embarrassing (security) problems. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL, by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible. We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language and the transformation rules needed to generate efficient SQL are safely user-extensible, to account for the many variations in SQL back-ends, as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines. With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL, including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
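
A hedged Python miniature of the core idea: queries are first-class composable values, and well-formed SQL is produced only at the end rather than by string concatenation along the way. None of the typing guarantees, user-extensible rewrite rules, or SQL coverage of the actual OCaml system is captured here; all names are invented.

```python
class Query:
    """A composable query value; SQL is generated only by to_sql()."""
    def __init__(self, table, wheres=(), cols=('*',)):
        self.table, self.wheres, self.cols = table, tuple(wheres), tuple(cols)

    def where(self, cond):
        return Query(self.table, self.wheres + (cond,), self.cols)

    def select(self, *cols):
        return Query(self.table, self.wheres, cols)

    def to_sql(self):
        sql = f"SELECT {', '.join(self.cols)} FROM {self.table}"
        if self.wheres:
            sql += " WHERE " + " AND ".join(self.wheres)
        return sql

adults = Query('people').where('age >= 18')      # reusable query fragment
print(adults.select('name').where("country = 'NZ'").to_sql())
# SELECT name FROM people WHERE age >= 18 AND country = 'NZ'
```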