Biblio
2017-05-22
Kurilova, Darya, Potanin, Alex, Aldrich, Jonathan.  2016.  Modules in Wyvern: Advanced Control over Security and Privacy. Proceedings of the Symposium and Bootcamp on the Science of Security. :68–68.

In today's systems, restricting the authority of untrusted code is difficult because, by default, code has the same authority as the user running it. Object capabilities are a promising way to implement the principle of least authority, but being too low-level and fine-grained, take away many conveniences provided by module systems. We present a module system design that is capability-safe, yet preserves most of the convenience of conventional module systems. We demonstrate how to ensure key security and privacy properties of a program as a mode of use of our module system. Our authority safety result formally captures the role of mutable state in capability-based systems and uses a novel non-transitive notion of authority, which allows us to reason about authority restriction: the encapsulation of a stronger capability inside a weaker one.
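
The attenuation idea in the last sentence, encapsulating a stronger capability inside a weaker one, can be sketched outside Wyvern. The Python below is an illustrative analogue with invented names, not the paper's module system: an untrusted module receives only the capability it is granted, and a weaker capability wraps a stronger one without re-exporting it.

```python
# Illustrative object-capability attenuation (hypothetical names, not Wyvern).

class FileCap:
    """Full capability: read and write a backing store."""
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

class ReadOnlyCap:
    """Attenuated capability: wraps a FileCap, exposes only read."""
    def __init__(self, strong):
        self._strong = strong          # hidden; write is not re-exported
    def read(self, key):
        return self._strong.read(key)

def untrusted_module(cap):
    # The module's authority is exactly the capability it was handed.
    return cap.read("secret")

full = FileCap()
full.write("secret", 42)
result = untrusted_module(ReadOnlyCap(full))   # can read, cannot write
```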

Suzuki, Kenichi, Kiselyov, Oleg, Kameyama, Yukiyoshi.  2016.  Finally, Safely-extensible and Efficient Language-integrated Query. Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation. :37–48.

Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from the embarrassing (security) problems. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL, by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible. We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language as well as the transformation rules needed to generate efficient SQL are safely user-extensible, to account for many variations in the SQL back-ends, as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines. With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
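
The paper's framework is a tagless-final OCaml embedding; as a language-neutral illustration of why composing query values beats concatenating raw strings, here is a minimal Python sketch with invented names. Fragments are first-class values that can be reused and extended before any SQL text exists.

```python
# Minimal composable-query sketch (illustrative names, not the paper's API).

class Query:
    def __init__(self, table, columns=("*",), predicates=()):
        self.table = table
        self.columns = tuple(columns)
        self.predicates = tuple(predicates)

    def select(self, *columns):
        # Returns a new query value; fragments stay immutable and reusable.
        return Query(self.table, columns, self.predicates)

    def where(self, predicate):
        return Query(self.table, self.columns, self.predicates + (predicate,))

    def to_sql(self):
        # SQL is generated once, from a well-formed structure.
        sql = "SELECT %s FROM %s" % (", ".join(self.columns), self.table)
        if self.predicates:
            sql += " WHERE " + " AND ".join(self.predicates)
        return sql

# A fragment is defined once and reused across queries:
adults = Query("users").where("age >= 18")
q = adults.select("name").where("country = 'NL'")
```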

Zhu, Suwen, Lu, Long, Singh, Kapil.  2016.  CASE: Comprehensive Application Security Enforcement on COTS Mobile Devices. Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. :375–386.

Without violating existing app security enforcement, malicious modules inside apps, such as a library or an external class, can steal private data and abuse sensitive capabilities meant for other modules inside the same apps. These so-called "module-level attacks" are quickly emerging, fueled by the pervasive use of third-party code in apps and the lack of module-level security enforcement on mobile platforms. To systematically thwart the threats, we build CASE, an automatic app patching tool used by app developers to enable module-level security in their apps built for COTS Android devices. During runtime, patched apps enforce developer-supplied security policies that regulate interactions among modules at the granularity of a Java class. Requiring no changes or special support from the Android OS, the enforcement is complete in covering inter-module crossings in apps and is robust against malicious Java and native app modules. We evaluate CASE with 420 popular apps and a set of Android's unit tests. The results show that CASE is fully compatible with the tested apps and incurs an average performance overhead of 4.9%.

Sheff, Isaac, Magrino, Tom, Liu, Jed, Myers, Andrew C., van Renesse, Robbert.  2016.  Safe Serializable Secure Scheduling: Transactions and the Trade-Off Between Security and Consistency. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :229–241.

Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.

Howe, J., Moore, C., O'Neill, M., Regazzoni, F., Güneysu, T., Beeden, K..  2016.  Lattice-based Encryption Over Standard Lattices In Hardware. Proceedings of the 53rd Annual Design Automation Conference. :162:1–162:6.

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum-resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations. Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and also is the first to investigate the cost savings of implementing with λ/2-bits of precision. Performance results are promising compared to the hardware designs of the equivalent ring-LWE scheme: in addition to resting on stronger security assumptions, the design generates 1272 encryptions per second and 4395 decryptions per second.
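
For intuition about what such a core computes, here is a toy Regev-style LWE bit encryption in Python. The parameters are deliberately tiny and insecure, and a trivial ±1 error stands in for the discrete Gaussian; the paper's actual contribution, the hardware architecture and sampler, is not modeled.

```python
# Toy standard-LWE (Regev-style) bit encryption. Illustrative only: parameters
# are far too small to be secure, and the error is +-1 instead of Gaussian.
import random

n, m, q = 32, 64, 32768            # dimension, samples, modulus (toy sizes)
rng = random.Random(0)

def keygen():
    s = [rng.randrange(q) for _ in range(n)]                       # secret
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.choice([-1, 0, 1]) for _ in range(m)]                 # small error
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = [rng.choice([0, 1]) for _ in range(m)]     # random subset of samples
    u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
    v = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    # v - <s, u> = bit*q/2 + small noise; round to the nearest of {0, q/2}.
    d = (v - sum(sk[j] * u[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

pk, sk = keygen()
m_out = decrypt(sk, encrypt(pk, 1))
```

The accumulated noise is at most m = 64, far below the decision threshold q/4 = 8192, so decryption here is always correct.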

Santoso, Bagus.  2016.  Refining Identification Scheme Based on Isomorphism of Polynomials with Two Secrets: A New Theoretical and Practical Analysis. Proceedings of the 3rd ACM International Workshop on ASIA Public-Key Cryptography. :31–38.

The isomorphism of polynomials with two secrets (IP2S) problem is one candidate computational assumption for post-quantum cryptography. The only identification scheme based on IP2S was introduced in 1996 by Patarin. However, the security of the scheme has never been formally proven, and we discover that the originally proposed parameters are no longer secure in light of the most recent research. In this paper, we present the first formal security proof of the IP2S-based identification scheme against impersonation under passive attack, sequential active attack, and concurrent active attack. We propose new secure parameters and methods to reduce the implementation cost. Using the proposed methods, we cut the storage cost and average communication cost so drastically that the scheme is implementable even on the lightweight devices in the current market.

O'Neill, Maire, O'Sullivan, Elizabeth, McWilliams, Gavin, Saarinen, Markku-Juhani, Moore, Ciara, Khalid, Ayesha, Howe, James, del Pino, Rafael, Abdalla, Michel, Regazzoni, Francesco et al..  2016.  Secure Architectures of Future Emerging Cryptography SAFEcrypto. Proceedings of the ACM International Conference on Computing Frontiers. :315–322.

Funded under the European Union's Horizon 2020 research and innovation programme, SAFEcrypto will provide a new generation of practical, robust and physically secure post-quantum cryptographic solutions that ensure long-term security for future ICT systems, services and applications. The project will focus on the remarkably versatile field of Lattice-based cryptography as the source of computational hardness, and will deliver optimised public key security primitives for digital signatures and authentication, as well as identity-based encryption (IBE) and attribute-based encryption (ABE). This will involve algorithmic and design optimisations, and implementations of lattice-based cryptographic schemes addressing cost, energy consumption, performance and physical robustness. As the National Institute of Standards and Technology (NIST) prepares for the transition to a post-quantum cryptographic suite B, urging organisations that build systems and infrastructures requiring long-term security to consider this transition in architectural designs, the SAFEcrypto project will provide proof-of-concept demonstrators of schemes for three practical real-world case studies with long-term security requirements, in the application areas of satellite communications, network security and cloud. The goal is to affirm Lattice-based cryptography as an effective replacement for traditional number-theoretic public-key cryptography, by demonstrating that it can address the needs of resource-constrained embedded applications, such as mobile and battery-operated devices, and of real-time high performance applications for cloud and network management infrastructures.

Azarderakhsh, Reza, Jao, David, Kalach, Kassem, Koziel, Brian, Leonardi, Christopher.  2016.  Key Compression for Isogeny-Based Cryptosystems. Proceedings of the 3rd ACM International Workshop on ASIA Public-Key Cryptography. :1–10.

We present a method for key compression in quantum-resistant isogeny-based cryptosystems, which allows a reduction in the storage and transmission costs of per-party public information by a factor of two, with no effect on security. We achieve this reduction by associating a canonical choice of elliptic curve to each j-invariant, and representing elements on the curve as linear combinations with respect to a canonical choice of basis. This method of compressing public information can be applied to numerous isogeny-based protocols, such as key exchange, zero-knowledge identification, and public-key encryption. We performed personal computer and ARM implementations of the key exchange with compression and decompression in C and provided timing results, showing the computational cost of key compression and decompression at various security levels. Our results show that isogeny-based cryptosystems achieve by far the smallest possible key sizes among all existing families of post-quantum cryptosystems at practical security levels; e.g. 3073-bit public keys at the quantum 128-bit security level, comparable to (non-quantum) RSA key sizes.
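
The paper's compression is specific to isogeny public keys, but the underlying "transmit less, recompute the rest from a canonical choice" idea is the same one behind classic elliptic-curve point compression, sketched below. The prime and curve are invented toy values (p ≡ 3 mod 4 so square roots are a single exponentiation), not the isogeny setting.

```python
# Classic point compression as an analogy: send x plus one parity bit of y,
# and recover y from the curve equation. Toy prime and curve, not secure.

p = 10007                      # p % 4 == 3, so sqrt(a) = a^((p+1)//4) mod p
A, B = 2, 3                    # toy curve y^2 = x^3 + A*x + B over GF(p)

def compress(x, y):
    return x, y & 1            # full point -> x-coordinate plus parity bit

def decompress(x, parity):
    rhs = (x * x * x + A * x + B) % p
    y = pow(rhs, (p + 1) // 4, p)          # a square root when rhs is a QR
    if y * y % p != rhs:
        raise ValueError("x is not on the curve")
    if y & 1 != parity:
        y = p - y              # choose the root matching the sent parity
    return x, y

def sample_point():
    # Find some point on the curve by scanning x until x^3+Ax+B is a square.
    for x in range(p):
        rhs = (x * x * x + A * x + B) % p
        y = pow(rhs, (p + 1) // 4, p)
        if y * y % p == rhs:
            return x, y

x0, y0 = sample_point()
cx, bit = compress(x0, y0)
```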

Rabie, Asmaa.  2016.  The RSA Trap. XRDS. 23:65–65.

Tan, Chuting, Jiang, Zoe L., Wang, Xuan, Yiu, S.M., Fang, Junbin, Li, Jin, Jin, Yabin, Huang, Jiajun.  2016.  Generic Construction of Publicly Verifiable Predicate Encryption. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :889–894.

There is an increasing trend for data owners to store their data in a third-party cloud server and buy the service from the cloud server to provide information to other users. To ensure confidentiality, the data is usually encrypted. Therefore, an encrypted data searching scheme with privacy preservation is of paramount importance. Predicate encryption (PE) is one of the attractive solutions due to its attribute-hiding merit. However, as the cloud is not always trusted, verifying the searched results is also crucial. First, a generic construction of a Publicly Verifiable Predicate Encryption (PVPE) scheme is proposed to provide verification for PE; we reduce the security of PVPE to the security of PE. Then, from a practical point of view, to decrease the communication and computation overhead, an improved PVPE is proposed, with the trade-off of a small probability of error.

Bos, Joppe, Costello, Craig, Ducas, Leo, Mironov, Ilya, Naehrig, Michael, Nikolaenko, Valeria, Raghunathan, Ananth, Stebila, Douglas.  2016.  Frodo: Take off the Ring! Practical, Quantum-Secure Key Exchange from LWE. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1006–1018.

Lattice-based cryptography offers some of the most attractive primitives believed to be resistant to quantum computers. Following increasing interest from both companies and government agencies in building quantum computers, a number of works have proposed instantiations of practical post-quantum key exchange protocols based on hard problems in ideal lattices, mainly based on the Ring Learning With Errors (R-LWE) problem. While ideal lattices facilitate major efficiency and storage benefits over their non-ideal counterparts, the additional ring structure that enables these advantages also raises concerns about the assumed difficulty of the underlying problems. Thus, a question of significant interest to cryptographers, and especially to those currently placing bets on primitives that will withstand quantum adversaries, is how much of an advantage the additional ring structure actually gives in practice. Despite conventional wisdom that generic lattices might be too slow and unwieldy, we demonstrate that LWE-based key exchange is quite practical: our constant time implementation requires around 1.3ms computation time for each party; compared to the recent NewHope R-LWE scheme, communication sizes increase by a factor of 4.7x, but remain under 12 KiB in each direction. Our protocol is competitive when used for serving web pages over TLS; when partnered with ECDSA signatures, latencies increase by less than a factor of 1.6x, and (even under heavy load) server throughput only decreases by factors of 1.5x and 1.2x when serving typical 1 KiB and 100 KiB pages, respectively. To achieve these practical results, our protocol takes advantage of several innovations. These include techniques to optimize communication bandwidth, dynamic generation of public parameters (which also offers additional security against backdoors), carefully chosen error distributions, and tight security parameters.
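
The shape of such an LWE key agreement can be sketched in a few lines of Python. This is an illustrative toy with invented parameters, not Frodo: both sides end up with values that agree up to a small bounded error, which a real protocol then rounds and reconciles into identical key bits.

```python
# Toy unauthenticated LWE key agreement over generic (non-ring) lattices.
# Illustrative parameters only; wildly insecure at these sizes.
import random

n, q = 64, 32768
rng = random.Random(1)
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]  # shared matrix

def small():
    return [rng.choice([-1, 0, 1]) for _ in range(n)]         # small vector

s_a, e_a = small(), small()       # Alice's secret and error
s_b, e_b = small(), small()       # Bob's secret and error

# Alice sends b_a = A*s_a + e_a; Bob sends b_b = A^T*s_b + e_b.
b_a = [(sum(A[i][j] * s_a[j] for j in range(n)) + e_a[i]) % q for i in range(n)]
b_b = [(sum(A[i][j] * s_b[i] for i in range(n)) + e_b[j]) % q for j in range(n)]

# Each side computes s_b^T * A * s_a plus its own small cross-error term.
k_b = sum(b_a[i] * s_b[i] for i in range(n)) % q   # = core + <s_b, e_a>
k_a = sum(b_b[j] * s_a[j] for j in range(n)) % q   # = core + <e_b, s_a>
```

The two values differ by <e_b, s_a> - <s_b, e_a>, bounded here by 2n = 128, which is tiny relative to q; rounding to the high bits would yield matching keys with overwhelming probability.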

Daemen, Joan.  2016.  On Non-uniformity in Threshold Sharings. Proceedings of the 2016 ACM Workshop on Theory of Implementation Security. :41–41.

In threshold schemes one represents each sensitive variable by a number n of shares such that their (usually) bitwise sum equals that variable. These shares are initially generated in such a way that any subset of n-1 shares gives no information about the sensitive variable. Functions (S-boxes, mixing layers, round functions, etc.) are computed on the shares of the inputs resulting in the output as a number of shares. An essential property of a threshold implementation of a function is that each output share is computed from at most n-1 input shares. This is called incompleteness and guarantees that the computation cannot leak information about sensitive variables. The resulting output is then typically subject to some further computation, again in the form of separate, incomplete, computation on shares. For these subsequent computations to not leak information about the sensitive variables, the output of the previous stage must still be uniform. Hence, in an iterative cryptographic primitive such as a block cipher, we need a threshold implementation of the round function that yields a uniformly shared output if its input is uniformly shared. This property of the threshold implementation is called uniformity. Threshold schemes form a good protection mechanism against differential power analysis (DPA). In particular, using it allows building cryptographic hardware that is guaranteed to be unattackable with first-order DPA, assuming certain leakage models of the cryptographic hardware at hand and for a plausible definition of "first order". Constructing an incomplete threshold implementation of a non-linear function is rather straightforward. To offer resistance against first-order DPA, the number of shares equals the algebraic degree of the function plus one. However, constructing one that is at the same time incomplete and uniform may present a challenge.
For instance, for the Keccak non-linear layer, incomplete 3-share threshold implementations are easy to generate but no uniform one is known. Exhaustive investigations have been performed on all small S-boxes (3 to 5 bits) and there are many S-boxes for which it is not known how to build uniform threshold implementations with d+1 shares if their algebraic degree is d. Uniformity of a threshold implementation is essential in its information-theoretical proof of resistance against first-order DPA. However, given a non-uniform threshold implementation, it is not immediate how to exploit its non-uniformity in an attack. In my talk I discuss the local and global effects of non-uniformity in iterated functions and their significance on the resistance against DPA. I treat methods to quantitatively limit the amount of non-uniformity and to keep it away from where it may be harmful. These techniques are relatively cheap and can reduce non-uniformity to such a low level that it would require an astronomical amount of samples to measure it.
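
The definitions above can be made concrete with the standard incomplete 3-share threshold implementation of a single AND gate (degree 2, so d+1 = 3 shares). Each output share omits one input-share index, so no single share computation touches all shares of a sensitive bit; as the abstract stresses, this direct construction achieves incompleteness easily, while uniformity is the hard part. Helper names are invented.

```python
# Incomplete 3-share threshold implementation of AND (illustrative sketch).
import random

def share(bit, rng):
    """Split a bit into 3 XOR-shares; any 2 shares reveal nothing."""
    x1, x2 = rng.randrange(2), rng.randrange(2)
    return (x1, x2, bit ^ x1 ^ x2)

def unshare(shares):
    return shares[0] ^ shares[1] ^ shares[2]

def ti_and(a, b):
    """Shared AND: output share i never uses input shares with index i."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    y1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # independent of shares a1, b1
    y2 = (a3 & b3) ^ (a3 & b1) ^ (a1 & b3)   # independent of shares a2, b2
    y3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # independent of shares a3, b3
    return (y1, y2, y3)
```

XORing the three output shares gives (a1^a2^a3)(b1^b2^b3) = a·b, so correctness holds for every sharing; the output sharing, however, is in general not uniform, which is precisely the issue the talk addresses.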

de Chérisey, Eloi, Guilley, Sylvain, Rioul, Olivier, Jayasinghe, Darshana.  2016.  Template Attacks with Partial Profiles and Dirichlet Priors: Application to Timing Attacks. Proceedings of the Hardware and Architectural Support for Security and Privacy 2016. :7:1–7:8.

In order to retrieve the secret key in a side-channel attack, the attacker computes distinguisher values using all the available data. A profiling stage is very useful to provide some a priori information about the leakage model. However, profiling is essentially empirical and may not be exhaustive. Therefore, during the attack, the attacker may come up on previously unseen data, which can be troublesome. A lazy workaround is to ignore all such novel observations altogether. In this paper, we show that this is not optimal and can be avoided. Our proposed techniques eventually improve the performance of classical information-theoretic distinguishers in terms of success rate.
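
The "previously unseen data" problem and the Dirichlet-prior remedy can be shown in a few lines: add-alpha smoothing, which is the posterior predictive of a Dirichlet prior over a categorical leakage model, keeps never-profiled observations at non-zero probability instead of forcing the attacker to discard them. Names, data, and alpha below are illustrative.

```python
# Dirichlet (add-alpha) smoothing of an empirical leakage template.
from collections import Counter

def template(profile_samples, support, alpha=1.0):
    """Posterior predictive under a Dir(alpha) prior: (count+alpha)/(N+alpha*K)."""
    counts = Counter(profile_samples)
    total = len(profile_samples) + alpha * len(support)
    return {v: (counts[v] + alpha) / total for v in support}

# Profiling only ever observed values 0..3; value 4 shows up at attack time.
model = template([0, 0, 1, 2, 2, 2, 3], support=range(5))
```

With a pure maximum-likelihood template the unseen value 4 would get probability 0 and break the likelihood computation; here it gets a small positive mass while frequently seen values keep proportionally higher probability.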

Liu, Jiayang, Bi, Jingguo.  2016.  Cryptanalysis of a Fast Private Information Retrieval Protocol. Proceedings of the 3rd ACM International Workshop on ASIA Public-Key Cryptography. :56–60.

A private information retrieval (abbreviated as PIR) protocol deals with schemes that allow a user to privately retrieve an element of a non-replicated database. The security goal of a PIR protocol is that the user retrieves information from a database without the database learning which information has been retrieved. This is widely applied to medical files, video or song databases, and even stock-exchange share prices. At ISIT 2008, Carlos Aguilar Melchor and Philippe Gaborit presented a lattice-based PIR protocol whose security is based on problems close to coding-theory problems known to be NP-complete. In this paper, we present a practical attack on this PIR protocol when the number of elements in the database is not large. More specifically, we first uncover the hidden linear relationship between the public matrices and noisy matrices, and then propose an efficient dimension-reduced attack to locate the index of the element which the user retrieved.
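
For readers unfamiliar with the PIR functionality being attacked, the textbook two-server XOR scheme (not the lattice protocol analyzed in this paper) shows in a few lines how retrieval can hide the queried index: each server sees a uniformly random subset of indices, and the two answers XOR to the wanted bit.

```python
# Classic two-server XOR-based PIR (illustrative; not the attacked protocol).
import random

def pir_retrieve(db, i, rng):
    n = len(db)
    q1 = {j for j in range(n) if rng.randrange(2)}   # uniformly random subset
    q2 = q1 ^ {i}                                    # differs only at index i
    a1 = 0
    for j in q1:
        a1 ^= db[j]                                  # server 1's answer
    a2 = 0
    for j in q2:
        a2 ^= db[j]                                  # server 2's answer
    return a1 ^ a2                                   # all j != i cancel: db[i]

db = [1, 0, 1, 1, 0, 0, 1, 0]
rng = random.Random(3)
```

Each query in isolation is a uniformly random subset regardless of i, so neither (non-colluding) server learns which bit was fetched.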

Duan, Jia, Zhou, Jiantao, Li, Yuanman.  2016.  Secure and Verifiable Outsourcing of Nonnegative Matrix Factorization (NMF). Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :63–68.

Cloud computing platforms are becoming increasingly prevalent and readily available nowadays, providing resource-constrained clients with alternative and economical services for performing large-scale computation. In this work, we address the problem of secure outsourcing of large-scale nonnegative matrix factorization (NMF) to a cloud in a way that lets the client verify the correctness of the results with small overhead. The input matrix is protected by a lightweight, permutation-based encryption mechanism. By exploiting the iterative nature of NMF computation, we propose a single-round verification strategy, which can be proved to be effective. Both theoretical and experimental results are given to demonstrate the superior performance of our scheme.
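
The permutation idea can be illustrated independently of NMF itself: the client hides a matrix X as P1·X·P2 with secret row and column permutations, and, because permutation matrices are orthogonal, undoes them with the transposes on whatever the cloud returns. A minimal sketch with toy sizes (not the paper's full protocol):

```python
# Permutation-based matrix hiding: cloud sees C = P1 @ X @ P2, client inverts.
import random

def perm_matrix(order):
    m = [[0] * len(order) for _ in order]
    for i, j in enumerate(order):
        m[i][j] = 1
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

rng = random.Random(5)
X = [[rng.randrange(10) for _ in range(4)] for _ in range(3)]  # client's data
rows = list(range(3)); rng.shuffle(rows)                       # secret keys
cols = list(range(4)); rng.shuffle(cols)
P1, P2 = perm_matrix(rows), perm_matrix(cols)

C = matmul(matmul(P1, X), P2)                                  # sent to cloud
X_back = matmul(matmul(transpose(P1), C), transpose(P2))       # client undoes
```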

Dörre, Felix, Klebanov, Vladimir.  2016.  Practical Detection of Entropy Loss in Pseudo-Random Number Generators. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :678–689.

Pseudo-random number generators (PRNGs) are a critical infrastructure for cryptography and security of many computer applications. At the same time, PRNGs are surprisingly difficult to design, implement, and debug. This paper presents the first static analysis technique specifically for quality assurance of cryptographic PRNG implementations. The analysis targets a particular kind of implementation defect, the entropy loss. Entropy loss occurs when the entropy contained in the PRNG seed is not utilized to the full extent for generating the pseudo-random output stream. The Debian OpenSSL disaster, probably the most prominent PRNG-related security incident, was one but not the only manifestation of such a defect. Together with the static analysis technique, we present its implementation, a tool named Entroposcope. The tool offers a high degree of automation and practicality. We have applied the tool to five real-world PRNGs of different designs and show that it effectively detects both known and previously unknown instances of entropy loss.
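
Entroposcope finds this defect statically; purely as a dynamic illustration of what entropy loss *is*, the sketch below (invented function names) shows a seed-mixing step whose bug lets only one byte of a 16-bit seed reach the output, so exhausting all seeds yields far fewer distinct output streams than it should.

```python
# Demonstrating entropy loss: a mixing step that silently drops seed bits.
import hashlib

def good_mix(seed):
    # Correct: the whole 16-bit seed reaches the hash.
    return hashlib.sha256(seed.to_bytes(2, "big")).hexdigest()

def lossy_mix(seed):
    # Bug: only the low byte of the seed reaches the hash (entropy loss).
    return hashlib.sha256((seed & 0xFF).to_bytes(1, "big")).hexdigest()

good = {good_mix(s) for s in range(2 ** 16)}    # 65536 distinct outputs
lossy = {lossy_mix(s) for s in range(2 ** 16)}  # collapses to 256 outputs
```

The Debian OpenSSL incident mentioned above was this kind of defect at scale: seed entropy that never reached the output stream, shrinking the effective key space.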

Keller, Marcel, Orsini, Emmanuela, Scholl, Peter.  2016.  MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :830–842.

We consider the task of secure multi-party computation of arithmetic circuits over a finite field. Unlike Boolean circuits, arithmetic circuits allow natural computations on integers to be expressed easily and efficiently. In the strongest setting of malicious security with a dishonest majority, where any number of parties may deviate arbitrarily from the protocol, most existing protocols require expensive public-key cryptography for each multiplication in the preprocessing stage of the protocol, which leads to a high total cost. We present a new protocol that overcomes this limitation by using oblivious transfer to perform secure multiplications in general finite fields with reduced communication and computation. Our protocol is based on an arithmetic view of oblivious transfer, with careful consistency checks and other techniques to obtain malicious security at a cost of less than 6 times that of semi-honest security. We describe a highly optimized implementation together with experimental results for up to five parties. By making extensive use of parallelism and SSE instructions, we improve upon previous runtimes for MPC over arithmetic circuits by more than 200 times.
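
The core trick that turns oblivious transfers into secure multiplication (Gilboa's construction, which underlies this "arithmetic view of OT") can be sketched with the 1-out-of-2 OT idealized as a local function: one OT per bit of a converts a's bits and b into additive shares of a·b, with neither party learning the other's input.

```python
# Sketch of OT-based multiplication (Gilboa's trick), OT idealized locally.
import random

q = 2 ** 16
rng = random.Random(9)

def ot(m0, m1, choice):
    """Idealized 1-of-2 oblivious transfer: receiver learns only m_choice."""
    return m1 if choice else m0

def ot_multiply(a, b, k=16):
    """Receiver holds a, sender holds b; returns additive shares of a*b mod q."""
    share_a, share_b = 0, 0
    for i in range(k):
        t = rng.randrange(q)                  # sender's fresh mask per OT
        bit = (a >> i) & 1                    # receiver's choice bit
        received = ot(t, (t + (b << i)) % q, bit)   # t  or  t + 2^i * b
        share_a = (share_a + received) % q
        share_b = (share_b - t) % q
    return share_a, share_b
```

Summing the shares cancels every mask t and leaves the sum of chosen 2^i·b terms, i.e. a·b mod q; MASCOT's actual protocol adds the consistency checks needed to survive malicious behavior.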

Halevi, Shai, Ishai, Yuval, Jain, Abhishek, Kushilevitz, Eyal, Rabin, Tal.  2016.  Secure Multiparty Computation with General Interaction Patterns. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :157–168.

We present a unified framework for studying secure multiparty computation (MPC) with arbitrarily restricted interaction patterns such as a chain, a star, a directed tree, or a directed graph. Our study generalizes both standard MPC and recent models for MPC with specific restricted interaction patterns, such as those studied by Halevi et al. (Crypto 2011), Goldwasser et al. (Eurocrypt 2014), and Beimel et al. (Crypto 2014). Since restricted interaction patterns cannot always yield full security for MPC, we start by formalizing the notion of "best possible security" for any interaction pattern. We then obtain the following main results: Completeness theorem. We prove that the star interaction pattern is complete for the problem of MPC with general interaction patterns. Positive results. We present both information-theoretic and computationally secure protocols for computing arbitrary functions with general interaction patterns. We also present more efficient protocols for computing symmetric functions, both in the computational and in the information-theoretic setting. Our computationally secure protocols for general functions necessarily rely on indistinguishability obfuscation while the ones for computing symmetric functions make simple use of multilinear maps. Negative results. We show that, in many cases, the complexity of our information-theoretic protocols is essentially the best that can be achieved. All of our protocols rely on a correlated randomness setup, which is necessary in our setting (for computing general functions). In the computational case, we also present a generic procedure to make any correlated randomness setup reusable, in the common random string model. Although most of our information-theoretic protocols have exponential complexity, they may be practical for functions on small domains (e.g., {0,1}^20), where they are concretely faster than their computational counterparts.

Davidson, Alex, Fenn, Gregory, Cid, Carlos.  2016.  A Model for Secure and Mutually Beneficial Software Vulnerability Sharing. Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security. :3–14.

In this work we propose a model for conducting efficient and mutually beneficial information sharing between two competing entities, focusing specifically on software vulnerability sharing. We extend the two-stage game-theoretic model proposed by Khouzani et al. [18] for bug sharing, addressing two key features: we allow security information to be associated with different categories and severities, but also remove a large proportion of player homogeneity assumptions the previous work makes. We then analyse how these added degrees of realism affect the trading dynamics of the game. Secondly, we develop a new private set operation (PSO) protocol that enables the removal of the trusted mediation requirement. The PSO functionality allows for bilateral trading between the two entities up to a mutually agreed threshold on the value of information shared, keeping all other input information secret. The protocol scales linearly with set sizes and we give an implementation that establishes the practicality of the design for varying input parameters. The resulting model and protocol provide a framework for practical and secure information sharing between competing entities.

Khaledi, Mojgan, Khaledi, Mehrdad, Kasera, Sneha Kumar.  2016.  Profitable Task Allocation in Mobile Cloud Computing. Proceedings of the 12th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :9–17.

We propose a game theoretic framework for task allocation in mobile cloud computing that corresponds to offloading of compute tasks to a group of nearby mobile devices. Specifically, in our framework, a distributor node holds a multidimensional auction for allocating the tasks of a job among nearby mobile nodes based on their computational capabilities and also the cost of computation at these nodes, with the goal of reducing the overall job completion time. Our proposed auction also has the desired incentive compatibility property that ensures that mobile devices truthfully reveal their capabilities and costs and that those devices benefit from the task allocation. To deal with node mobility, we perform multiple auctions over adaptive time intervals. We develop a heuristic approach to dynamically find the best time intervals between auctions to minimize unnecessary auctions and the accompanying overheads. We evaluate our framework and methods using both real world and synthetic mobility traces. Our evaluation results show that our game theoretic framework improves the job completion time by a factor of 2-5 in comparison to the time taken for executing the job locally, while minimizing the number of auctions and the accompanying overheads. Our approach is also profitable for the nearby nodes that execute the distributor's tasks with these nodes receiving a compensation higher than their actual costs.
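
The incentive-compatibility property the abstract claims comes from second-price-style payments. A single-task Vickrey auction, a drastic simplification of the paper's multidimensional auction (device names invented), shows the mechanism: the cheapest device wins but is paid the second-lowest bid, so bidding one's true cost is a dominant strategy and the winner's payment covers its cost.

```python
# Single-task Vickrey (second-price) auction: a simplified sketch of the
# incentive-compatibility ingredient in the paper's task-allocation game.

def vickrey(bids):
    """bids: {device: claimed cost}. Returns (winner, payment to winner)."""
    ranked = sorted(bids, key=bids.get)   # devices by ascending cost
    winner = ranked[0]
    payment = bids[ranked[1]]             # winner is paid the 2nd-lowest bid
    return winner, payment

winner, payment = vickrey({"phone_a": 4.0, "phone_b": 2.5, "phone_c": 3.0})
```

Because the payment does not depend on the winner's own bid, understating cost risks winning at a loss and overstating it risks losing a profitable task; truthfulness maximizes each device's expected profit.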

Kantarcioglu, Murat, Xi, Bowei.  2016.  Adversarial Data Mining: Big Data Meets Cyber Security. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1866–1867.

As more and more cyber security incident data ranging from systems logs to vulnerability scan results are collected, manually analyzing these collected data to detect important cyber security events becomes impossible. Hence, data mining techniques are becoming an essential tool for real-world cyber security applications. For example, a report from Gartner [gartner12] claims that "Information security is becoming a big data analytics problem, where massive amounts of data will be correlated, analyzed and mined for meaningful patterns". Of course, data mining/analytics is a means to an end where the ultimate goal is to provide cyber security analysts with prioritized actionable insights derived from big data. This raises the question, can we directly apply existing techniques to cyber security applications? One of the most important differences between data mining for cyber security and many other data mining applications is the existence of malicious adversaries that continuously adapt their behavior to hide their actions and to make the data mining models ineffective. Unfortunately, traditional data mining techniques are insufficient to handle such adversarial problems directly. The adversaries adapt to the data miner's reactions, and data mining algorithms constructed based on a training dataset degrade quickly. To address these concerns, over the last couple of years, new data mining techniques that are more resilient to such adversarial behavior have been developed in the machine learning and data mining communities. We believe that lessons learned as a part of this research direction would be beneficial for cyber security researchers who are increasingly applying machine learning and data mining techniques in practice. To give an overview of recent developments in adversarial data mining, in this three-hour tutorial, we introduce the foundations, the techniques, and the applications of adversarial data mining to cyber security applications.
We first introduce various approaches proposed in the past to defend against active adversaries, such as a minimax approach to minimize the worst case error through a zero-sum game. We then discuss a game theoretic framework to model the sequential actions of the adversary and the data miner, while both parties try to maximize their utilities. We also introduce a modified support vector machine method and a relevance vector machine method to defend against active adversaries. Intrusion detection and malware detection are two important application areas for adversarial data mining models that will be discussed in detail during the tutorial. Finally, we discuss some practical guidelines on how to use adversarial data mining ideas in generic cyber security applications and how to leverage existing big data management tools for building data mining algorithms for cyber security.

Russu, Paolo, Demontis, Ambra, Biggio, Battista, Fumera, Giorgio, Roli, Fabio.  2016.  Secure Kernel Machines Against Evasion Attacks. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. :59–69.

Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues, in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines, and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
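The connection between regularization and evasion robustness admits a simple worked example. For a linear classifier f(x) = w·x + b, an attacker constrained to perturbations with ||δ||∞ ≤ ε can lower the score by at most ε·||w||₁ (the dual norm), so among weight vectors with the same clean score, the one with the smaller ℓ₁ norm is harder to evade. A minimal sketch with hypothetical weights (not the paper's experiments):

```python
# Sketch: worst-case score of a linear detector under an l_inf-bounded
# evasion attack. The attacker moves each feature by at most eps in the
# direction that most decreases the score, so the score drops by eps*||w||_1.
def worst_case_score(w, b, x, eps):
    """Minimum achievable score over all perturbations with ||delta||_inf <= eps."""
    clean = sum(wi * xi for wi, xi in zip(w, x)) + b
    return clean - eps * sum(abs(wi) for wi in w)

x = [1.0, 1.0, 1.0, 1.0]             # a malicious sample; score > 0 means "detected"
w_reg   = [0.25, 0.25, 0.25, 0.25]   # evenly spread weights,  ||w||_1 = 1.0
w_unreg = [2.0, -1.0, 1.0, -1.0]     # large, uneven weights,  ||w||_1 = 5.0

# Both classifiers give the same clean score of 1.0 on x, but under an
# attack with eps = 0.3 only the well-regularized one still detects it.
print(worst_case_score(w_reg,   0.0, x, 0.3))   # 1.0 - 0.3*1.0 =  0.7 (detected)
print(worst_case_score(w_unreg, 0.0, x, 0.3))   # 1.0 - 0.3*5.0 = -0.5 (evaded)
```

This is the intuition behind picking the regularizer to match the attack: penalizing the norm dual to the attacker's constraint directly bounds the worst-case score drop.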

Carlsten, Miles, Kalodner, Harry, Weinberg, S. Matthew, Narayanan, Arvind.  2016.  On the Instability of Bitcoin Without the Block Reward. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :154–167.

Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.
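The variance argument at the heart of the abstract can be illustrated with a toy Monte Carlo simulation (a simplified sketch, not the authors' mining simulator; all parameters are illustrative). If fees accrue at a constant rate and the next block collects everything accrued since the previous one, the per-block fee reward inherits the exponential distribution of block inter-arrival times, whose standard deviation equals its mean:

```python
import random
import statistics

# Toy comparison of per-block reward variability: fixed block subsidy vs.
# a pure transaction-fee regime where fees accrue at a constant rate and
# go entirely to the next block found.
random.seed(7)
MEAN_INTERVAL = 600.0                 # seconds between blocks (exponentially distributed)
FEE_RATE = 1.0 / MEAN_INTERVAL        # coins of fees accruing per second (illustrative)

intervals = [random.expovariate(1.0 / MEAN_INTERVAL) for _ in range(100_000)]

subsidy_rewards = [1.0 for _ in intervals]        # fixed reward: zero variance
fee_rewards = [FEE_RATE * t for t in intervals]   # fees accrued since the last block

print("mean reward, fee regime:   ", round(statistics.mean(fee_rewards), 3))
print("stdev, subsidy regime:", statistics.pstdev(subsidy_rewards))
print("stdev, fee regime:    ", round(statistics.pstdev(fee_rewards), 3))
```

Both regimes pay one coin per block on average, but in the fee-only regime the standard deviation is as large as the mean, which is what makes a just-found "wealthy" block an attractive forking target.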

Nasr, Milad, Houmansadr, Amir.  2016.  GAME OF DECOYS: Optimal Decoy Routing Through Game Theory. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1727–1738.

Decoy routing is a promising new approach for censorship circumvention that relies on traffic re-direction by volunteer autonomous systems. Decoy routing is subject to a fundamental censorship attack, called routing around decoys (RAD), in which the censors re-route their clients' Internet traffic in order to evade decoy routing autonomous systems. Recently, there has been a heated debate in the community on the real-world feasibility of decoy routing in the presence of the RAD attack. Unfortunately, previous studies base their analysis on heuristic mechanisms for decoy placement as well as ad hoc strategies for the implementation of the RAD attack by the censors. In this paper, we perform the first systematic analysis of decoy routing in the presence of the RAD attack. We use game theory to model the interactions between decoy router deployers and the censors in various settings. Our game-theoretic analysis finds the optimal decoy placement strategies (as opposed to heuristic-based placements) in the presence of RAD censors who take their optimal censorship actions (as opposed to some ad hoc implementation of RAD). That is, we investigate the best decoy placement given the best RAD censorship. We consider two business models for the real-world deployment of decoy routers: a central deployment that resembles that of Tor, and a distributed deployment where autonomous systems individually decide on decoy deployment based on their economic interests. Through extensive simulation of Internet routes, we derive the optimal strategies in the two models for various censoring countries and under different assumptions about the budget and preferences of the censors and decoy deployers. We believe that our study is a significant step forward in understanding the practicality of the decoy routing circumvention approach.
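The placement-versus-RAD game can be illustrated on a toy topology (a hypothetical mini-example, far simpler than the paper's Internet-scale simulations). Each client has a few alternative routes; the censor routes around decoys whenever some route avoids them, so a client stays covered only if every alternative route traverses a decoy AS. The deployer brute-forces the placement that covers the most clients against this optimal RAD censor:

```python
from itertools import combinations

# Hypothetical mini-topology: each client has alternative routes, each given
# as the set of transit ASes that the route traverses.
routes = {
    "c1": [{"A", "B"}, {"A", "C"}],
    "c2": [{"B", "D"}, {"C", "D"}],
    "c3": [{"A", "D"}, {"B", "D"}],
}
ASES = {"A", "B", "C", "D"}

def covered(placement, client_routes):
    """A client stays covered iff every alternative route hits a decoy AS,
    i.e. the censor cannot route around the deployment (RAD fails)."""
    return all(route & placement for route in client_routes)

def best_placement(k):
    """Brute-force the deployer's optimal k-AS placement against an optimal
    RAD censor (only feasible on toy instances like this one)."""
    return max((set(p) for p in combinations(sorted(ASES), k)),
               key=lambda p: sum(covered(p, r) for r in routes.values()))

p = best_placement(2)
print("optimal 2-AS decoy placement:", sorted(p))
print("clients still covered:", [c for c, r in routes.items() if covered(p, r)])
```

On this instance the unique optimum is deploying decoys at ASes A and D, which covers all three clients, while every other 2-AS placement leaves at least one client with a decoy-free route; the paper's contribution is finding such optima at Internet scale under realistic routing and economic models.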

Saab, Farah, Elhajj, Imad, Kayssi, Ayman, Chehab, Ali.  2016.  A Crowdsourcing Game-theoretic Intrusion Detection and Rating System. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :622–625.

One of the main concerns for smartphone users is the quality of the apps they download. Before installing any app from the market, users first check its rating and reviews. However, these ratings are not computed by experts and, in most cases, are not associated with malicious behavior. In this work, we present an IDS/rating system based on a game-theoretic model with crowdsourcing. Our results show that, with minor control over the error in categorizing users and the fraction of experts in the crowd, our system provides proper ratings while flagging all malicious apps.