Bibliography
Blockchain-based cryptocurrencies secure a decentralized consensus protocol through incentives. The protocol participants, called miners, generate (mine) a series of blocks, each containing monetary transactions created by system users. As an incentive for participation, miners receive newly minted currency and transaction fees paid by transaction creators. Blockchain bandwidth limits lead users to pay increasing fees in order to prioritize their transactions. However, most prior work focused on models where fees are negligible. In a notable exception, Carlsten et al. [17] postulated that if incentives come only from fees then a mining gap would form: miners would avoid mining when the available fees are insufficient. In this work, we analyze cryptocurrency security in realistic settings, taking into account all elements of expenses and rewards. To study when gaps form, we analyze the system as a game we call the gap game. We analyze the game with a combination of symbolic and numeric analysis tools in a wide range of scenarios. Our analysis confirms Carlsten et al.'s postulate; indeed, we show that gaps form well before fees are the only incentive, and we analyze the implications for security. Perhaps surprisingly, we show that different miners choose different gap sizes to optimize their utility, even when their operating costs are identical. Alarmingly, we see that the system incentivizes large miner coalitions, reducing system decentralization. We describe the conditions required to avoid this incentive misalignment, providing guidelines for future cryptocurrency design.
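To make the gap intuition concrete, here is a toy Monte Carlo sketch of a miner's trade-off. This is our illustration, not the paper's model: block intervals are assumed exponential, fees are assumed to accrue linearly at rate fee_rate, mining is assumed to cost cost per second, and a miner is assumed to win a block in proportion to the fraction of the interval it spends mining. All parameter names and values are hypothetical.

```python
import random

def expected_utility(gap, power=0.2, cost=0.01, base_reward=1.0,
                     fee_rate=0.05, mean_interval=600.0, trials=100_000):
    """Toy estimate of a miner's expected utility for a given gap size.

    Assumptions (ours, not the paper's model): exponential block
    intervals, linear fee accrual, constant per-second operating cost,
    and a win probability proportional to the share of the interval this
    miner spends mining with hash-power share `power`.
    """
    total = 0.0
    for _ in range(trials):
        interval = random.expovariate(1.0 / mean_interval)
        if interval <= 0:
            continue
        active = max(0.0, interval - gap)       # seconds this miner mines
        win_prob = power * (active / interval)  # crude proportional model
        reward = base_reward + fee_rate * interval  # minted coins + fees
        total += win_prob * reward - cost * active
    return total / trials

# Sweep gap sizes: with nonzero operating costs, waiting for fees to
# accumulate before switching the rigs on can beat mining from the start.
for g in (0, 100, 300, 600, 1200):
    print(g, round(expected_utility(g), 4))
```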
Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich's barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank $\leq b$. The minimum quantity $b$ for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound $b$, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank $b$, unless coNP $\subseteq$ $\exists$ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class $\exists$ BPP is known to be a subset of the better-known complexity class MA. Thus $\exists$ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial size natural proofs for the variety of matrices with permanent zero, unless $\mathrm{P}^{\#\mathrm{P}}$ $\subseteq$ $\exists$ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
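For intuition, here is a small illustrative example of completion rank (ours, not from the paper):

```latex
% Illustrative example: a 2x2 matrix with affine linear forms as entries.
\[
M(x) = \begin{pmatrix} x & 1 \\ 1 & x \end{pmatrix},
\qquad
\operatorname{rank} M(x) =
\begin{cases}
1 & \text{if } x = \pm 1,\\
2 & \text{otherwise,}
\end{cases}
\]
% Since det M(x) = x^2 - 1 vanishes exactly at x = +1 and x = -1, and
% M(x) is never the zero matrix, the completion rank of M is 1.
```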
The Software Defined Networking paradigm has enabled dynamic configuration and control of large networks. Although the division of the control and data planes has led to dynamic reconfigurability of large networks, finding the minimal and optimal set of controllers that can adapt to changes in the network has proven to be a challenging problem. Recent research tends to favor small solution sets with a focus on either propagation latency or controller load distribution, and struggles to find large balanced solution sets. In this paper, we propose a multi-objective genetic algorithm based approach to the controller placement problem that minimizes inter-controller latency, load imbalance, and the number of controllers, using fitness sharing to maintain solution diversity. We demonstrate that the proposed approach provides diverse and adaptive solutions for real network architectures such as the United States backbone and Japanese backbone networks. We further discuss the relevance and application of a diversity-focused genetic algorithm for a moving target defense security model.
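A minimal sketch of a fitness-sharing genetic algorithm for controller placement follows. This is our illustration only: the paper's encoding, objective weights, genetic operators, and latency data are all assumptions here, and a full multi-objective treatment would use Pareto ranking rather than the weighted sum below.

```python
import random

# Toy fitness-sharing GA for controller placement. A candidate is a bit
# vector over switch sites: 1 = place a controller at that site.
N_SITES, POP, GENS, SIGMA = 20, 40, 60, 5.0
random.seed(1)
# Hypothetical symmetric latency matrix between sites (line topology).
LAT = [[abs(i - j) for j in range(N_SITES)] for i in range(N_SITES)]

def objectives(ind):
    ctrls = [i for i, b in enumerate(ind) if b]
    if not ctrls:
        return float("inf")                      # no controllers: invalid
    # Worst pairwise latency among controllers.
    inter = max(LAT[a][b] for a in ctrls for b in ctrls)
    # Load imbalance: spread of switches per nearest controller.
    loads = [0] * len(ctrls)
    for s in range(N_SITES):
        loads[min(range(len(ctrls)), key=lambda k: LAT[s][ctrls[k]])] += 1
    return inter + (max(loads) - min(loads)) + len(ctrls)  # equal weights

def shared_fitness(ind, population):
    # Fitness sharing: penalize individuals sitting in crowded niches so
    # the population keeps diverse placements alive.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    niche = sum(max(0.0, 1 - hamming(ind, o) / SIGMA) for o in population)
    return objectives(ind) * max(niche, 1.0)     # lower is better

pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(POP)]
for _ in range(GENS):
    snapshot = list(pop)                         # stable view for sharing
    pop.sort(key=lambda ind: shared_fitness(ind, snapshot))
    survivors, children = pop[: POP // 2], []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_SITES)       # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                # bit-flip mutation
            child[random.randrange(N_SITES)] ^= 1
        children.append(child)
    pop = survivors + children

best = min(pop, key=objectives)
print("controllers at sites:", [i for i, b in enumerate(best) if b])
```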
The problem of fast item retrieval from a fixed collection arises in most areas of computer science, from operating system components to databases and user interfaces. We present an approach based on hash tables that minimizes both the number of comparisons performed during the search and the total collection size. The standard open-addressing double-hashing approach is improved with a non-linear transformation that can be parametrized in order to ensure a uniform distribution of the data in the hash table. The optimal parameter is determined using a genetic algorithm. Our results show that near-perfect hashing is faster than binary search yet uses less memory than perfect hashing, making it a good choice for memory-constrained applications where search time is also critical.
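A minimal sketch of the lookup scheme as we understand it: double hashing where the secondary hash is passed through a parameterized non-linear transform chosen offline for a fixed key set. The paper's exact transform and its GA search are not reproduced; the transform, the constant, and the table size below are our assumptions.

```python
# Near-perfect hash lookup via open-addressing double hashing with a
# parameterized non-linear transform (illustrative sketch).
TABLE_SIZE = 769                      # prime table size (assumption)
PARAM = 0x9E3779B1                    # transform parameter, found offline
                                      # (e.g., by a GA); value hypothetical

def h1(key: int) -> int:
    return key % TABLE_SIZE

def h2(key: int, param: int = PARAM) -> int:
    # Non-linear transform of the secondary hash; must never return 0 so
    # the probe sequence always advances.
    x = ((key ^ (key >> 7)) * param) & 0xFFFFFFFF
    return 1 + x % (TABLE_SIZE - 1)

def build(keys):
    table = [None] * TABLE_SIZE
    for k in keys:
        i, step = h1(k), h2(k)
        while table[i] is not None:   # open-addressing probe sequence
            i = (i + step) % TABLE_SIZE
        table[i] = k
    return table

def lookup(table, k):
    i, step = h1(k), h2(k)
    while table[i] is not None:
        if table[i] == k:
            return i                  # found: slot index
        i = (i + step) % TABLE_SIZE
    return -1                         # hit an empty slot: absent

table = build(range(0, 5000, 10))     # 500 keys, load factor ~0.65
assert lookup(table, 4990) != -1 and lookup(table, 3) == -1
```

Because the table size is prime, every step value is coprime to it and the probe sequence visits all slots, so construction and lookup always terminate.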
In this paper we show how genetic algorithms can be effectively applied to study the security of arbitrary quantum key distribution (QKD) protocols when faced with adversaries limited to current-day technology. We compare two approaches, both of which take into account practical limitations on the quantum power of an adversary (which can be specified by the user). Our system can be used to determine upper bounds on the noise tolerance of novel QKD protocols in this scenario, making it a useful tool for researchers. We compare our algorithm's results with currently known numerical results, and also evaluate it on newer, more complex protocols for which no results are currently known.
Motivation: The security of any system is a direct consequence of stakeholders' decisions regarding security requirements. Such decisions are taken with varying degrees of expertise, and little is currently understood about how various demographics (security experts, general computer scientists, managers) approach security decisions and the strategies that underpin those decisions. What are the typical decision patterns, the consequences of such patterns, and their impact on the security of the system in question? Nor is there any substantial understanding of how the strategies and decision patterns of these different groups contrast. Is security expertise necessarily an advantage when making security decisions in a given context? Answers to these questions are key to understanding the "how" and "why" behind security decision processes.
The Game: In this talk, we present a tabletop game, Decisions and Disruptions (D-D), that tasks a group of players with managing the security of a small utility company while facing a variety of threats. The game is kept short (2 hours) and simple enough to be played without prior training. A cyber-physical infrastructure, depicted through a Lego® board, makes the game easy to understand and accessible to players from varying backgrounds and levels of security expertise, without being too trivial a setting for security experts.
Key insights: We played D-D with 43 players divided into homogeneous groups: 4 groups of security experts, 4 groups of nontechnical managers, and 4 groups of general computer scientists.
• Strategies: Security experts had a strong interest in advanced technological solutions and tended to neglect intelligence gathering, to their own detriment. Managers, too, were technology-driven and focused on data protection while neglecting human factors more than the other groups. Computer scientists tended to balance human factors and intelligence gathering with technical solutions, and achieved the best results of the three demographics.
• Decision Processes: Technical experience significantly changes the way players think. Teams with little technical experience had shallow, intuition-driven discussions with few concrete arguments. Technical teams, and the most experienced in particular, had much richer debates, driven by concrete scenarios, anecdotes from experience, and procedural thinking. Security experts showed high confidence in their decisions, despite some of them having bad consequences, while the other groups tended to doubt their own skills, even when they were playing good games.
• Patterns: A number of characteristic plays were identified, some good (balance between priorities, open-mindedness, and adapting strategies based on inputs that challenge one's preconceptions), some bad (excessive focus on particular issues, confidence in charismatic leaders), some ugly ("tunnel vision" syndrome by over-confident players). These patterns are documented in the full paper, showing the virtue of the positive ones, discouraging the negative ones, and inviting readers to do their own introspection.
Conclusion: Beyond the analysis of the security decisions of the three demographics, there is a definite educational and awareness-raising aspect to D-D (as noted consistently by players in all our subject groups). Game boxes will be brought to the conference for demonstration purposes, and the audience will be invited to experiment with D-D themselves, make their own decisions, and reflect on their own perception of security.
In this paper we propose a new algorithm to detect Advanced Persistent Threats (APTs) that relies on a graph model of HTTP traffic. We also implement a complete detection system with a web interface that allows the data to be analyzed interactively. We perform a complete parameter study and experimental evaluation using data collected on a real network. The results show that the performance of our system is comparable to currently available antiviruses, although antiviruses use signatures to detect known malware while our algorithm uses behavior analysis alone to detect new, undocumented attacks.
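As a toy illustration of how a graph model of HTTP traffic can expose APT-like behavior (our sketch; the paper's actual graph model and detection criteria are not reproduced): link internal clients to the domains they contact and flag rarely shared, regularly polled domains, a pattern associated with command-and-control beaconing. All log records below are fabricated examples.

```python
from collections import defaultdict
from statistics import pstdev

# (client_ip, domain, unix_timestamp) records, e.g. parsed from proxy
# logs; assumed to be time-ordered per (client, domain) pair.
requests = [
    ("10.0.0.5", "cdn.example.com", 1000),
    ("10.0.0.5", "c2.evil.test", 1000),
    ("10.0.0.5", "c2.evil.test", 1600),
    ("10.0.0.5", "c2.evil.test", 2200),
    ("10.0.0.7", "cdn.example.com", 1100),
]

edges = defaultdict(list)                 # (client, domain) -> timestamps
for client, domain, ts in requests:
    edges[(client, domain)].append(ts)

clients_per_domain = defaultdict(set)     # domain -> set of clients
for client, domain in edges:
    clients_per_domain[domain].add(client)

for (client, domain), times in edges.items():
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0
    # Suspicious: a domain seen by a single client, polled at very
    # regular intervals (low variance relative to the mean gap).
    if len(clients_per_domain[domain]) == 1 and len(gaps) >= 2 \
            and pstdev(gaps) < 0.1 * mean_gap:
        print("suspicious beaconing:", client, "->", domain)
```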
Smart industrial control systems (e.g., smart grid, oil and gas systems, transportation systems) are connected to the internet, and have the capability to collect and transmit data; as such, they are part of the IoT. The data collected can be used to improve services; however, there are serious privacy risks. This concern is usually addressed by means of privacy policies, but it is often difficult to understand the scope and consequences of such policies. Better tools to visualise and analyse data collection policies are needed. Graph-based modelling tools have been used to analyse complex systems in other domains. In this paper, we apply this technique to IoT data-collection policy analysis and visualisation. We describe graphical representations of category-based data collection policies and show that a graph-based policy language is a powerful tool not only to specify and visualise the policy, but also to analyse policy properties. We illustrate the approach with a simple example in the context of a chemical plant with a truck monitoring system. We also consider policy administration: we propose a classification of queries to help administrators analyse policies, and we show how the queries can be answered using our technique.
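As a toy illustration of how a graph-based policy supports administrator queries (our sketch; the categories, parties, and encoding below are hypothetical and much simpler than the paper's category-based formalism):

```python
# A data-collection policy as a directed graph: an edge means "data in
# this category may flow to this party", and onward edges model sharing.
POLICY = {                     # adjacency list (all names hypothetical)
    "truck_location": ["plant_operator"],
    "engine_telemetry": ["plant_operator", "maintenance_vendor"],
    "plant_operator": ["safety_regulator"],
    "maintenance_vendor": [],
    "safety_regulator": [],
}

def reachable(graph, start):
    # Depth-first reachability: every node that can eventually receive
    # data originating in `start`.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Administrator query: who can ultimately obtain truck location data,
# directly or via onward sharing?
print(reachable(POLICY, "truck_location"))
# -> {'plant_operator', 'safety_regulator'}
```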
In this paper, we propose a graph-based algorithmic technique for malware detection, utilizing the System-call Dependency Graphs (ScDG) obtained through taint analysis traces. We leverage the grouping of system-calls into system-call groups with respect to their functionality to merge disjoint vertices of ScDG graphs, transforming them into Group Relation Graphs (GrG); note that GrG graphs represent the malware's behavior and are hence more resilient to probable mutations of its structure. More precisely, we extend the use of GrG graphs by mapping their vertices onto the plane, utilizing the degrees and the vertex-weights of a specific underlying graph of the GrG graph to compute domination relations. Furthermore, we investigate how the activity of each system-call group can be utilized to distinguish graph representations of malware and benign software. The domination relations among the vertices of GrG graphs result in a new graph representation that we call the Coverage Graph of the GrG graph. Finally, we evaluate the potential of our detection model using graph similarity between Coverage Graphs of known malicious and benign software samples of various types.
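The following sketch illustrates the vertex-merging step as we read it from the abstract; the system-call-to-group mapping and the example edges are hypothetical.

```python
# Collapse a system-call dependency graph (ScDG) into a group relation
# graph (GrG) by mapping each system call to its functional group.
SYSCALL_GROUP = {               # hypothetical call -> group mapping
    "open": "FILE", "read": "FILE", "write": "FILE",
    "socket": "NET", "send": "NET", "recv": "NET",
    "fork": "PROC", "execve": "PROC",
}

# ScDG edges: (system call, dependent system call), e.g. from a trace.
scdg = [("open", "read"), ("read", "send"), ("socket", "send"),
        ("fork", "execve"), ("execve", "open")]

grg = set()
for u, v in scdg:
    gu, gv = SYSCALL_GROUP[u], SYSCALL_GROUP[v]
    if gu != gv:                # intra-group edges disappear in the merge
        grg.add((gu, gv))

print(sorted(grg))              # [('FILE', 'NET'), ('PROC', 'FILE')]
```

Swapping `read` for `recv`-style equivalents changes the ScDG but not the GrG, which is why the merged representation is more robust to call-level mutations.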
A Mobile ad hoc network (MANET) is a set of nodes that communicate cooperatively over the wireless medium, without any central administration. Due to its inherently open nature and the lack of infrastructure, security is a more complicated issue than in other networks: these networks are vulnerable to a wide range of attacks at different network layers. At the network level, malicious nodes can perform attacks ranging from passive eavesdropping to active interfering. The wormhole is an example of a severe attack that has attracted much attention recently. It redirects traffic between two end-nodes through a wormhole tunnel, and manipulates the routing algorithm to give the illusion that nodes located far from each other are neighbors. To address this issue, we propose a novel detection model that allows a node to check whether a presumed shortest path contains a wormhole tunnel. Our approach is based on the fact that a wormhole tunnel significantly reduces the length of the paths passing through it.
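As a toy illustration of the path-length principle (our sketch, not the paper's detection model): if each radio hop spans at most a known range, a route whose hop count beats the geometric lower bound must contain a tunnel. The range and positions below are assumptions.

```python
import math

RADIO_RANGE = 100.0                      # assumed max hop range in meters

def min_plausible_hops(pos_a, pos_b):
    # No genuine route can cover the distance in fewer hops than this.
    return math.ceil(math.dist(pos_a, pos_b) / RADIO_RANGE)

def looks_like_wormhole(path_hops, pos_a, pos_b):
    # A path shorter than the geometric lower bound is suspicious.
    return path_hops < min_plausible_hops(pos_a, pos_b)

# Two nodes 750 m apart claim a 2-hop route between them: at 100 m per
# hop, at least 8 hops are needed, so the route is flagged.
print(looks_like_wormhole(2, (0.0, 0.0), (750.0, 0.0)))   # True
```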
Pre-silicon hardware Trojan detection has been studied for years. The most popular benchmark circuits are from the Trust-Hub. Their common feature is that the probability of activating their hardware Trojans is very low. This has led to a series of machine learning based hardware Trojan detection methods that try to find nets whose probability of carrying a 0 or a 1 is very low. On the other hand, if the probability of activating a hardware Trojan is high, the Trojan can easily be found through behavioral simulation or functional testing. This paper explores the "grey zone" between these two opposite scenarios: if the activation probability of a hardware Trojan is not low enough for machine learning to detect it, yet not high enough for behavioral simulation or functional testing to find it, it can escape detection. Experiments show the existence of such hardware Trojans, and this paper suggests a new set of hardware Trojan benchmark circuits for future study.
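A minimal Monte Carlo sketch (ours, not from the paper) of how activation probability separates the three regimes; the trigger condition and its probability are hypothetical.

```python
import random

def trigger(inputs):
    # Hypothetical trigger: fires when a specific 8-bit pattern appears
    # on one input byte, i.e. p = 1/256 per random vector. Too frequent
    # for an ML rare-net detector to flag, yet rare enough to slip past
    # short behavioral simulations: the "grey zone".
    return inputs[0] == 0xA5

def activation_probability(trigger_fn, n_inputs=4, trials=1_000_000):
    # Estimate how often the trigger fires under uniform random stimuli.
    hits = 0
    for _ in range(trials):
        vec = [random.randrange(256) for _ in range(n_inputs)]
        hits += trigger_fn(vec)
    return hits / trials

print(activation_probability(trigger))   # ~0.0039
```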
We propose a Centralized Tree-based Diffie-Hellman (CTDH) protocol for wireless mesh networks, which takes into account the characteristics of mesh network operations, wireless routers, and mobile devices. Performance analysis shows that CTDH is more efficient than the Tree-Based Group Diffie-Hellman Protocol (TGDH).
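A minimal sketch of the tree-based group Diffie-Hellman idea that CTDH builds on: leaves hold member secrets, each internal node's secret is the DH key of its two children, and the root becomes the group key. This is our illustration with toy parameters; a real deployment uses standardized large groups or elliptic curves, and the paper's centralized message flow is not modeled here.

```python
# Toy parameters: a Mersenne prime modulus and a small generator.
P, G = 2**127 - 1, 3

def dh_node_secret(left_secret, right_secret):
    # Internal node secret g^(kl*kr) mod p, computed as (g^kl)^kr mod p.
    return pow(pow(G, left_secret, P), right_secret, P)

# Four members with toy leaf secrets; internal nodes pair them up.
k1, k2, k3, k4 = 11, 22, 33, 44
left = dh_node_secret(k1, k2)            # parent of members 1 and 2
right = dh_node_secret(k3, k4)           # parent of members 3 and 4
group_key = dh_node_secret(left, right)  # root = shared group key

# Member 1 knows `left` (derivable from its own secret and member 2's
# blinded value g^k2) and needs only the blinded value of the right
# subtree, broadcast by the central controller, to derive the root.
blinded_right = pow(G, right, P)
assert pow(blinded_right, left, P) == group_key
print(hex(group_key)[:18] + "...")
```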
Users' behavioral biometrics are promising authentication factors, particularly if sufficient accuracy can be guaranteed. They can be used to augment security in combination with other authentication factors. A gesture-based pattern lock system is a good example of such multi-factor authentication, using touch dynamics in a smartphone. However, the performance and accuracy of touch dynamics can be significantly affected by the shape of gestures, and our concern is that user-chosen patterns are unlikely to produce well-shaped gestures. In this poster, we raise this problem and present our experimental study conducted in this regard. We investigate whether there is a reproducible correlation between shape and accuracy, and whether we can derive effective attribute values for user guidance, based on the gesture-based pattern lock system. More generally, we discuss a guided approach to behavioral authentication.
Cyber-physical systems (CPS) research leverages the expertise of researchers from multiple domains to engineer complex systems of interacting physical and computational components. An approach called co-simulation is often used in CPS conceptual design to integrate the specialized tools and simulators from each of these domains into a joint simulation for the evaluation of design decisions. Many co-simulation platforms are being developed to expedite CPS conceptualization and realization, but most use intrusive modeling and communication libraries that require researchers to either abandon their existing models or spend considerable effort to integrate them into the platform. A significant number of these co-simulation platforms use the High Level Architecture (HLA) standard that provides a rich set of services to facilitate distributed simulation. This paper introduces a simple gateway that can be readily implemented without co-simulation expertise to adapt existing models and research infrastructure for use in HLA. An open-source implementation of the gateway has been developed for the National Institute of Standards and Technology (NIST) co-simulation platform called the Universal CPS Environment for Federation (UCEF).
Device-to-device communication is widely used for mobile devices and the Internet of Things. Authentication and key agreement are critical to building a secure channel between two devices. However, existing approaches often rely on a pre-built fingerprint database and suffer from low key generation rates. We present GeneWave, a fast device authentication and key agreement protocol for commodity mobile devices. GeneWave first achieves bidirectional initial authentication based on the physical response interval between two devices. To preserve the accuracy of interval estimation, we eliminate time uncertainty on commodity devices through fast signal detection and redundancy time cancellation. We then derive the initial acoustic channel response for device authentication. We design a novel coding scheme for efficient key agreement while ensuring security. Therefore, two devices can authenticate each other and securely agree on a symmetric key. GeneWave requires neither special hardware nor a pre-built fingerprint database, and thus is easy to use on commercial mobile devices. We implement GeneWave on mobile devices (i.e., Nexus 5X and Nexus 6P) and evaluate its performance through extensive experiments. Experimental results show that GeneWave efficiently accomplishes secure key agreement on commodity smartphones with a key generation rate 10× faster than the state-of-the-art approach.
The papers in this special section explore recent advancements in parallel graph processing. In the sphere of modern data science and data-driven applications, graph algorithms have achieved a pivotal place in advancing the state of scientific discovery and knowledge. Nearly three centuries of ideas have made graph theory and its applications a mature area in computational sciences. Yet, today we find ourselves at a crossroads between theory and application. Spurred by the digital revolution, data from a diverse range of high-throughput channels and devices, from across internet-scale applications, are starting to mark a new era in data-driven computing and discovery. Building robust graph models and implementing scalable graph application frameworks in the context of this new era are proving to be significant challenges. Concomitant to the digital revolution, we have also experienced an explosion in computing architectures, with a broad range of multicores, manycores, heterogeneous platforms, and hardware accelerators (CPUs, GPUs) being actively developed and deployed within servers and multinode clusters. Recent advances have started to show that, in more than one way, these two fields (graph theory and parallel architectures) are capable of benefiting from, and in fact spurring, new research directions in one another. This special section is aimed at introducing some of the new avenues of cutting-edge research happening at the intersection of graph algorithm design and implementation on advanced parallel architectures.
Program authorship attribution has implications for the privacy of programmers who wish to contribute code anonymously. While previous work has shown that individually authored complete files can be attributed, these efforts have focused on idealized data sets such as the Google Code Jam data. We explore the problem of attribution "in the wild," examining source code obtained from open source version control systems, and investigate if and how such contributions can be attributed to their authors, either individually or on a per-account basis. In this work we show that accounts belonging to open source contributors can be effectively attributed even when their contributions consist of short, incomplete, and typically uncompilable fragments.
The advent of HTML5 has revived cross-site scripting (XSS) attacks on the web. Cross Document Messaging, Local Storage, Attribute Abuse, Input Validation, Inline Multimedia, and SVG emerge as likely targets for serious threats. The various new tags and attributes HTML5 introduces can potentially be manipulated to exploit the data on a dynamic website. The XSS attack has retained a spot in every OWASP Top 10 list of security risks released over the past decade, and is placed seventh in the OWASP Top 10 of 2017. XSS executes scripts carrying untrusted data, without proper validation, in the victim's browser, where they can hijack user sessions, deface websites, or redirect the user to a malicious site. This paper focuses on the development of a browser extension for the popular Chromium browser that keeps track of various attack vectors. These vectors primarily include tags and attributes of HTML5 that may be used maliciously. The developed plugin alerts users whenever a possible XSS attack is discovered as they access a particular website.
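The following sketch illustrates the kind of attack-vector scan such an extension performs. It is our illustration, written in Python for readability: a real Chromium extension would implement this as a JavaScript content script, and the tag and attribute lists below are assumptions, not the plugin's actual rules.

```python
from html.parser import HTMLParser

RISKY_TAGS = {"svg", "video", "audio", "embed", "math"}  # HTML5 vectors
RISKY_ATTR_PREFIX = "on"                  # event handlers: onerror, onload...

class XSSVectorScanner(HTMLParser):
    """Collects tags and attributes commonly abused for HTML5-era XSS."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        for name, value in attrs:
            if name.startswith(RISKY_ATTR_PREFIX):
                self.findings.append((tag, name, value))
        if tag in RISKY_TAGS:
            self.findings.append((tag, None, None))

page = '<svg onload="alert(1)"></svg><p>hello</p>'
scanner = XSSVectorScanner()
scanner.feed(page)
for tag, attr, val in scanner.findings:
    print("possible XSS vector:", tag, attr or "", val or "")
```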
Internet of Battlefield Things (IoBT) devices such as actuators, sensors, wearable devices, robots, drones, and autonomous vehicles support Intelligence, Surveillance and Reconnaissance (ISR), Command and Control, and other battlefield services. IoBT devices have the ability to collect operational field data, to compute on the data, and to upload the resulting information to the network. Securing the IoBT presents additional challenges compared with traditional information technology (IT) systems. First, IoBT devices are mass-produced rapidly as low-cost commodity items without security protection in their original design. Second, IoBT devices are highly dynamic, mobile, and heterogeneous, without common standards. Third, it is imperative to understand the natural world, the physical process(es) under IoBT control, and how these real-world processes can be compromised before recommending any relevant security countermeasure. Moreover, unprotected IoBT devices can be used as “stepping stones” by attackers to launch more sophisticated attacks such as advanced persistent threats (APTs). As a result of these challenges, IoBT systems are frequent targets of sophisticated cyber attacks that aim to disrupt mission effectiveness.