Security protocols are designed to provide security properties (goals), which they achieve using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation; other security goals are defined by the user in systems with specific security requirements. Analysis of such user-defined properties is possible with general-purpose tools such as Coloured Petri Nets (CPN). This research analyses two security properties defined in a protocol based on the Trusted Platform Module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets to open only one of two submitted secrets to a recipient.
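As an illustration of the property being analysed, the following toy model (all names are invented; this is not Delaune's protocol, only a sketch of its goal) simulates a TPM that seals two secrets and whose internal state guarantees that at most one of them can ever be unsealed:

```python
# A minimal toy model (not the Delaune protocol itself) of the
# "open only one of two secrets" goal: a simulated TPM seals two
# secrets, and its internal state guarantees at most one is released.
# All names here (ToyTPM, seal ids, unseal_one) are illustrative.

class ToyTPM:
    def __init__(self, secret_a: bytes, secret_b: bytes):
        self._sealed = {"a": secret_a, "b": secret_b}
        self._opened = None  # records which secret was released

    def unseal_one(self, choice: str) -> bytes:
        if choice not in self._sealed:
            raise ValueError("unknown secret id")
        if self._opened is not None:
            # Once one secret is opened, the other stays sealed forever.
            raise PermissionError("a secret was already opened")
        self._opened = choice
        return self._sealed[choice]

tpm = ToyTPM(b"secret-A", b"secret-B")
print(tpm.unseal_one("a"))   # succeeds: b'secret-A'
try:
    tpm.unseal_one("b")      # refused: only one opening allowed
except PermissionError as e:
    print(e)
```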
Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS) and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Numerous security solutions have been proposed to remove these vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them has presented or developed a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.
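The abstract does not specify how the three sets of metrics are combined into a comparison; the sketch below assumes, purely for illustration, a weighted aggregation of normalized static, dynamic and network scores (all metric names and weights are invented):

```python
# Hypothetical sketch: the test-bed provides metrics for static,
# dynamic, and network testing; how they are combined is not stated,
# so a simple weighted aggregate is assumed here for illustration.

def security_score(static_m: dict, dynamic_m: dict, network_m: dict,
                   weights=(0.4, 0.3, 0.3)) -> float:
    """Each metrics dict maps metric name -> normalized score in [0, 1]."""
    def mean(m):
        return sum(m.values()) / len(m) if m else 0.0
    ws, wd, wn = weights
    return ws * mean(static_m) + wd * mean(dynamic_m) + wn * mean(network_m)

app_a = security_score(
    {"unsafe_calls": 0.8, "taint_paths": 0.6},   # static testing
    {"fuzz_crashes": 0.9},                        # dynamic testing
    {"open_ports": 0.7, "weak_protocols": 0.5},   # network testing
)
app_b = security_score({"unsafe_calls": 0.5}, {"fuzz_crashes": 0.6},
                       {"open_ports": 0.4})
print("A" if app_a >= app_b else "B", "scores higher")
```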
Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
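SimRT's concrete algorithms are not given here; the following sketch illustrates the general shape of coverage-based regression test selection and prioritization focused on race-related program elements (the names and the selection policy are assumptions, not the paper's method):

```python
# Illustrative sketch of race-aware regression test selection and
# prioritization in the spirit of SimRT; the concrete algorithms are
# not given in the abstract, so this is a generic coverage-based idea.

def select_tests(tests: dict, changed_elems: set) -> list:
    """tests maps test name -> set of race-related program elements
    (e.g., shared-variable accesses) it covers; keep only the tests
    that touch an element affected by the code change."""
    return [t for t, cov in tests.items() if cov & changed_elems]

def prioritize(selected: list, tests: dict, changed_elems: set) -> list:
    """Run first the tests covering the most changed race-related
    elements, to raise the rate at which new races are detected."""
    return sorted(selected, key=lambda t: len(tests[t] & changed_elems),
                  reverse=True)

tests = {"t1": {"x", "y"}, "t2": {"z"}, "t3": {"x", "z", "w"}}
changed = {"x", "w"}
order = prioritize(select_tests(tests, changed), tests, changed)
print(order)  # ['t3', 't1'] -- t2 is skipped, t3 runs first
```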
Presented at the NSA Science of Security Quarterly Meeting, October 2014.
Application Programming Interfaces (APIs) often define object protocols. Objects with protocols have a finite number of states, and in each state a different set of method calls is valid. Many researchers have developed protocol verification tools because protocols are notoriously difficult to follow correctly. However, recent research suggests that a major challenge for API protocol programmers is effectively searching the state space. Verification is an ineffective guide for this kind of search. In this paper we instead propose Plaiddoc, which is like Javadoc except it organizes methods by state instead of by class and it includes explicit state transitions, state-based type specifications, and rich state relationships. We compare Plaiddoc to a Javadoc control in a between-subjects laboratory experiment. We find that Plaiddoc participants complete state search tasks in significantly less time and with significantly fewer errors than Javadoc participants.
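As a rough illustration of the state-based organization Plaiddoc advocates (the FileHandle protocol and all names below are invented, not taken from the paper), documentation indexed by state answers the programmer's question "what can I call now?" directly:

```python
# Hypothetical illustration of state-organized API documentation:
# methods are grouped by protocol state rather than by class, and
# each entry records its explicit state transition.

PROTOCOL_DOC = {
    "Closed": {
        "open":  "transitions to Open",
    },
    "Open": {
        "read":  "valid only in Open; stays Open",
        "write": "valid only in Open; stays Open",
        "close": "transitions to Closed",
    },
}

def valid_calls(state: str) -> list:
    """A state-indexed view lists exactly the methods callable now."""
    return sorted(PROTOCOL_DOC[state])

print(valid_calls("Closed"))  # ['open']
print(valid_calls("Open"))    # ['close', 'read', 'write']
```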
The threshold secret sharing technique has been used extensively in cryptography. This technique splits secrets into shares and distributes the shares in a network to provide protection against attacks and to reduce the possibility of loss of information. In this paper, a new approach is introduced to enhance communication security among the nodes in a network based on the threshold secret sharing technique and traditional symmetric key management. The proposed scheme aims to enhance the security of symmetric key distribution in a network. In the proposed scheme, key distribution is online, which means key management is conducted whenever a message needs to be communicated. The basic idea is to encrypt a message with a key (the secret) at the sender, then split the key into shares and send the shares along different paths to the destination. Furthermore, a Pre-Distributed Shared Key scheme is utilized for more secure transmission of the secret's shares. The proposed scheme, with the exception of some offline management by the network controller, is distributed, i.e., the symmetric key setups and the determination of the communication paths are performed in the nodes. This approach enhances communication security among the nodes in a network that operates in hostile environments. The cost and security analyses of the proposed scheme are provided.
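A minimal sketch of the underlying (t, n) threshold technique, assuming Shamir's classic scheme over a prime field (the paper's exact construction is not specified here): a symmetric key is split into n shares, any t of which recover it, so the shares can travel along different network paths:

```python
# Minimal sketch of (t, n) threshold secret sharing, assuming
# Shamir's scheme: the key is f(0) for a random degree-(t-1)
# polynomial over GF(P); shares are points (i, f(i)).
import random

P = 2**127 - 1  # a Mersenne prime; the field for the arithmetic

def split(secret: int, t: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

key = random.randrange(P)               # the message-encryption key
shares = split(key, t=3, n=5)           # 5 shares, any 3 suffice
assert reconstruct(shares[:3]) == key   # first 3 shares recover the key
assert reconstruct(shares[2:]) == key   # any other 3 also work
```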
Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.
In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.
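For contrast, a standard decomposition suitable for variant unification is the exclusive-or theory, split into axioms B (associativity and commutativity) and convergent rewrite rules R that have the finite variant property; below it is the homomorphism equation that resists such treatment (this example is illustrative, not taken verbatim from the paper):

```latex
% Exclusive-or: a decomposition (B, R) with the finite variant property.
\[
B = \{\, x \oplus y = y \oplus x,\;\; (x \oplus y) \oplus z = x \oplus (y \oplus z) \,\}
\]
\[
R = \{\, x \oplus x \rightarrow 0,\;\; x \oplus 0 \rightarrow x,\;\; x \oplus x \oplus y \rightarrow y \,\}
\]
% The problematic theory: encryption homomorphic over an Abelian group,
\[
e(x \cdot y, k) = e(x, k) \cdot e(y, k),
\]
% which, oriented as a rewrite rule modulo the Abelian group axioms,
% lacks the finite variant property -- hence the paper's hierarchy of
% increasingly richer approximating theories.
```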
We consider a three-stage, three-player complete information Colonel Blotto game in this paper, in which the first two players fight against a common adversary. Each player is endowed with a certain amount of resources at the beginning of the game, and the number of battlefields on which a player and the adversary fight is specified. The first two players are allowed to form a coalition if it improves their payoffs. In the first stage, the first two players may add battlefields and incur costs. In the second stage, the first two players may transfer resources between each other. The adversary observes this transfer and decides on the allocation of its resources to the two battles with the players. In the third stage, the adversary and the other two players fight on the updated number of battlefields and receive payoffs. We characterize the subgame-perfect Nash equilibrium (SPNE) of the game in various parameter regions. In particular, we show that there are parameter regions in which, if the players act according to the SPNE strategies, (i) one of the first two players adds battlefields and transfers resources to the other player (a coalition is formed), and other regions in which (ii) there is no addition of battlefields and no transfer of resources (no coalition is formed). We discuss the implications of the results on resource allocation for securing cyber-physical systems.
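To make the payoff structure concrete, here is a sketch of a discrete one-shot Blotto evaluation (the paper's three-player, three-stage game, battlefield costs, and transfers are not modeled; the budgets and the tie rule below are assumptions):

```python
# Illustrative sketch of a discrete Colonel Blotto payoff: each side
# splits an integer budget across battlefields and wins a battlefield
# where it allocates strictly more (ties go to the adversary here).
from itertools import combinations_with_replacement

def payoff(alloc_p: tuple, alloc_adv: tuple) -> int:
    """Number of battlefields the player wins against the adversary."""
    return sum(1 for a, b in zip(alloc_p, alloc_adv) if a > b)

def allocations(budget: int, fields: int):
    """All ways to split an integer budget across the battlefields."""
    for cuts in combinations_with_replacement(range(budget + 1), fields - 1):
        yield tuple([cuts[0]]
                    + [b - a for a, b in zip(cuts, cuts[1:])]
                    + [budget - cuts[-1]])

# Best response of a 5-unit player to a fixed 5-unit adversary
# allocation on 3 battlefields:
adv = (3, 1, 1)
best = max(allocations(5, 3), key=lambda a: payoff(a, adv))
print(best, payoff(best, adv))  # (0, 2, 3) wins 2 of 3 fields
```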
Multi-module Cyber-Physical Systems (CPSs), such as satellite clusters, swarms of Unmanned Aerial Vehicles (UAVs), and fleets of Unmanned Underwater Vehicles (UUVs), are examples of managed distributed real-time systems where mission-critical applications, such as sensor fusion or coordinated flight control, are hosted. These systems are dynamic and reconfigurable, and provide a “CPS cluster-as-a-service” for mission-specific scientific applications that can benefit from the elasticity of the cluster membership and the heterogeneity of the cluster members. The distributed and remote nature of these systems often necessitates the use of Deployment and Configuration (D&C) services to manage the lifecycle of software applications. Fluctuating resources, volatile cluster membership and changing environmental conditions require resilience. However, due to the dynamic nature of the system, human intervention is often infeasible. This necessitates a self-adaptive D&C infrastructure that supports autonomous resilience. Such an infrastructure must have the ability to adapt existing applications on the fly in order to provide application resilience and must itself be able to adapt to account for changes in the system as well as tolerate failures.
This paper describes the design and architectural considerations to realize a self-adaptive D&C infrastructure for CPSs. Previous efforts in this area have resulted in D&C infrastructures that support application adaptation via dynamic re-deployment and re-configuration mechanisms. Our work, presented in this paper, improves upon these past efforts by implementing a self-adaptive D&C infrastructure which itself is resilient. The paper concludes with experimental results that demonstrate the autonomous resilience capabilities of our new D&C infrastructure.
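A highly simplified sketch of the self-adaptive loop such an infrastructure runs (all names and the placement policy are invented; the actual D&C infrastructure is far richer): monitor heartbeats and re-deploy applications off failed nodes:

```python
# Hypothetical sketch of a self-adaptive D&C loop: detect failed
# cluster members via heartbeats and re-deploy their applications
# onto surviving nodes. All names and the policy are invented.
import time

def heartbeat_ok(node: str, last_seen: dict, timeout: float = 3.0) -> bool:
    return time.time() - last_seen.get(node, 0.0) < timeout

def adapt(deployment: dict, last_seen: dict, nodes: list) -> dict:
    """deployment maps app -> node; move apps off failed nodes."""
    alive = [n for n in nodes if heartbeat_ok(n, last_seen)]
    for app, node in list(deployment.items()):
        if node not in alive and alive:
            # pick the least-loaded surviving node (simple policy)
            target = min(alive, key=lambda n: sum(
                1 for v in deployment.values() if v == n))
            deployment[app] = target  # re-deploy / re-configure
    return deployment

nodes = ["uav-1", "uav-2", "uav-3"]
last_seen = {"uav-1": time.time(), "uav-3": time.time()}  # uav-2 failed
dep = {"sensor-fusion": "uav-2", "flight-ctl": "uav-1"}
print(adapt(dep, last_seen, nodes))  # sensor-fusion moved off uav-2
```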
Presented as part of the Illinois SoS Bi-weekly Meeting, October 2014.
Distributed Denial of Service attacks are a growing threat to organizations and, as defense mechanisms are becoming more advanced, hackers are aiming at the application layer. For example, application layer Low and Slow Distributed Denial of Service attacks are becoming a serious issue because, due to their low resource consumption, they are hard to detect. In this position paper, we propose a reference architecture that mitigates Low and Slow Distributed Denial of Service attacks by utilizing Software Defined Infrastructure capabilities. We also propose two concrete architectures based on the reference architecture: one based on performance models and one based on off-the-shelf components. We introduce the Shark Tank concept: a cluster under detailed monitoring that has full application capabilities and to which suspicious requests are redirected for further filtering.
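A toy sketch of the Shark Tank routing decision (the heuristic and thresholds are invented for illustration; real detection would be performance-model or component based, as the two concrete architectures propose):

```python
# Illustrative sketch of the Shark Tank idea: requests flagged as
# suspicious (here, by a crude low-and-slow heuristic on connection
# age vs. bytes sent) are redirected to a heavily monitored cluster
# rather than dropped, so false positives are still served.

SHARK_TANK = "shark-tank-cluster"
PRODUCTION = "production-cluster"

def route(conn_age_s: float, bytes_received: int) -> str:
    """Low and Slow attacks hold connections open while sending very
    little data; redirect long-lived, low-throughput connections."""
    if conn_age_s > 30 and bytes_received / max(conn_age_s, 1) < 10:
        return SHARK_TANK
    return PRODUCTION

print(route(conn_age_s=120, bytes_received=200))   # shark-tank-cluster
print(route(conn_age_s=2, bytes_received=4096))    # production-cluster
```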
Today, companies across the world are adopting cloud services for efficient and cost-effective resource management. However, cloud computing is still at a developing stage, with many research problems yet to be solved. One such area is security, which addresses issues like privacy, identity management, and trust management, among other things. As of now, there exists no standard identity management system for a cloud environment, and the aspect of trust propagation still needs to be tackled. This research work proposes a trusted security architecture for cloud identity management that can dynamically federate user identities. The proposed trust architecture uses Bayesian inference and a roulette wheel selection technique to evaluate trust scores. Using the proposed trust model, dynamic trust relationships are formed across multiple cloud service providers and identity providers, thereby eliminating fragmentation of user identities. The trust model was implemented and tested in Google App Engine, and the performance of the trust measures was analysed.
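The abstract names Bayesian inference and roulette wheel selection without giving formulas; the sketch below shows one plausible instantiation, assuming a Beta-distribution trust update (a common Bayesian trust model) and trust-proportional selection among identity providers (names and priors are assumptions):

```python
# One plausible instantiation of the two named techniques: a
# Beta(1+s, 1+f) posterior-mean trust score, and roulette wheel
# selection weighted by trust. The paper's exact formulas may differ.
import random

def trust_score(successes: int, failures: int) -> float:
    """Posterior mean of a Beta(1+s, 1+f) distribution."""
    return (successes + 1) / (successes + failures + 2)

def roulette_select(providers: dict) -> str:
    """Pick a provider with probability proportional to its trust."""
    total = sum(providers.values())
    r = random.uniform(0, total)
    acc = 0.0
    for name, score in providers.items():
        acc += score
        if r <= acc:
            return name
    return name  # fallback for floating-point edge cases

providers = {
    "idp-A": trust_score(successes=40, failures=2),
    "idp-B": trust_score(successes=10, failures=10),
}
print(providers)
print(roulette_select(providers))  # idp-A is chosen more often
```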
Presented as part of the Illinois Science of Security Lablet Bi-Weekly Meeting, September 2014.
Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real-world applications, past vulnerability data is often not available, or at least not complete. Hence, to address situations where labeled past data is fully available as well as those where it is not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross-validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross-site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average of 24 percent higher recall and 3 percent lower probability of false alarm, suggesting that semi-supervised learning may be a preferable solution for many real-world applications where vulnerability data is missing.
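As a sketch of the two learning settings compared (using scikit-learn; the features here are synthetic stand-ins for the paper's hybrid code attributes, not its data or models):

```python
# Illustrative sketch of supervised vs. semi-supervised vulnerability
# prediction. X stands in for hybrid (static+dynamic) code attributes;
# the synthetic labels are not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # hybrid code attributes
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # 1 = vulnerable file

# Supervised: assumes every training file's label is known.
clf = RandomForestClassifier(random_state=0).fit(X[:100], y[:100])
print("supervised acc:", clf.score(X[100:], y[100:]))

# Semi-supervised: only a small fraction of labels is available
# (scikit-learn marks unlabeled samples with -1).
y_partial = y[:100].copy()
y_partial[20:] = -1                            # keep just 20 labels
semi = LabelSpreading().fit(X[:100], y_partial)
print("semi-supervised acc:", semi.score(X[100:], y[100:]))
```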