Bibliography
Cloud computing has emerged as a new computing paradigm that aims to provide reliable, customized, quality-of-service-guaranteed computation environments for cloud users. Applications and databases are moved to large centralized data centers, called the cloud. Due to resource virtualization, global replication and migration, and the physical absence of data and machines from the users' premises, the data stored in the cloud and the computation results may not be well managed or fully trusted by cloud users. Most previous work on cloud security focuses on storage security rather than considering computation security as well. In this paper, we propose a privacy cheating discouragement and secure computation auditing protocol, SecCloud, which is the first protocol to bridge secure storage and secure computation auditing in the cloud, achieving privacy cheating discouragement via designated verifier signatures, batch verification, and probabilistic sampling techniques. A detailed analysis is given to obtain the optimal sampling size that minimizes cost. Another major contribution of this paper is a practical security-aware cloud computing experimental environment, SecHDFS, built as a test bed to implement SecCloud. Experimental results demonstrate the effectiveness and efficiency of the proposed SecCloud.
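The sampling-size question admits a quick back-of-the-envelope treatment. The sketch below computes the generic detection bound for uniform audit sampling; it is only an illustration of the idea, not SecCloud's actual cost model or parameter choices.

```python
# Generic detection bound for uniform audit sampling: if a fraction f of
# results are forged, checking s random results catches at least one with
# probability 1 - (1 - f)**s. Illustration only; SecCloud's optimal
# sampling size comes from its own cost model.
import math

def sample_size(cheat_fraction: float, detect_prob: float) -> int:
    """Smallest s with 1 - (1 - cheat_fraction)**s >= detect_prob."""
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - cheat_fraction))

# If 1% of results are forged, 459 samples give 99% detection probability,
# independently of the total number of results stored.
print(sample_size(0.01, 0.99))  # 459
```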
Security protocols are designed to provide security properties (goals). They achieve these goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals or not. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation. Other security goals are defined by the user in systems with particular security requirements; analysing such properties is possible with general-purpose analysis tools such as Coloured Petri Nets (CPNs). This research analyses two security properties defined for a protocol based on the Trusted Platform Module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets in order to open only one of two submitted secrets to a recipient.
Large and complex models can be difficult to analyze using static analysis results from current tools, including the Möbius modeling framework, which provides a powerful, formalism-independent, discrete-event simulator that outputs static results such as execution traces. The Möbius Simulation Debugger and Visualization (MSDV) feature adds user interaction to running simulations to provide a more transparent view into the dynamics of the models under consideration. This thesis discusses the details of the design and implementation of this feature in the Möbius modeling environment. Also, a case study is presented to demonstrate the new capabilities provided by the feature.
A slow-paced persistent attack, such as a slow worm or bot, can bewilder the detection system by slowing down its activity. Detecting such attacks with traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame the problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate it by conducting experiments on both synthesized and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
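The spectral intuition is easy to demonstrate. In the sketch below, a weak slow-periodic component (standing in for persistent probe activity) is hard to spot in the raw packet counts but yields a clear low-frequency peak in the power spectrum; all parameters are illustrative, not the paper's detector.

```python
# A slow periodic probe buried in noisy legitimate traffic shows up as a
# low-frequency spectral peak even when invisible in the time domain.
import numpy as np

rng = np.random.default_rng(0)
n = 4096                                           # one count sample per second
t = np.arange(n)
legit = rng.poisson(lam=20, size=n).astype(float)  # noisy background traffic
probe = 2.0 * np.sin(2 * np.pi * t / 256.0)        # slow ~4-minute cycle
series = legit + probe

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)

peak = freqs[1:][np.argmax(power[1:])]             # skip the DC bin
print(f"dominant period: {1 / peak:.0f} s")        # 256 s, the probe period
```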
This paper is concerned with mean-square stabilization of single-input Markovian jump linear systems (MJLSs) with logarithmically quantized state feedback. We introduce the concepts and provide explicit constructions of stabilizing mode-dependent logarithmic quantizers together with associated controllers, and a semi-convex approach to determining the optimal (coarsest) stabilizing quantization density. An example application is presented as a special case of the developed framework: feedback stabilization of a linear time-invariant (LTI) system over a log-quantized erasure channel. A hardware implementation of this application on an inverted pendulum testbed is provided using a finite word-length approximation.
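For background, the mode-dependent constructions refine the classical single-mode logarithmic quantizer, which can be written as follows (stated here as standard context, not as the paper's construction):

```latex
q(v) =
\begin{cases}
  \rho^{i} u_0, & \dfrac{\rho^{i} u_0}{1+\delta} < v \le \dfrac{\rho^{i} u_0}{1-\delta},\; i \in \mathbb{Z}, \\[6pt]
  0, & v = 0, \\
  -q(-v), & v < 0,
\end{cases}
\qquad
\delta = \frac{1-\rho}{1+\rho}, \quad \rho \in (0,1).
```

The resulting sector bound $|q(v) - v| \le \delta |v|$ lets quantization error be treated as bounded multiplicative uncertainty, which is what makes the search for the coarsest stabilizing density tractable.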
The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VMs) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should run in its own independently managed VM, isolated from the others. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads: by leveraging the competition for shared resources, an adversary could intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and network bandwidth, which are critical for data-intensive applications. We design and implement Swiper, a framework which uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrate that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.
The threshold secret sharing technique has been used extensively in cryptography. It splits secrets into shares and distributes the shares across a network to provide protection against attacks and to reduce the possibility of information loss. In this paper, a new approach is introduced to enhance communication security among the nodes of a network, based on the threshold secret sharing technique and traditional symmetric key management. The proposed scheme aims to enhance the security of symmetric key distribution in a network. In the proposed scheme, key distribution is online, meaning that key management is conducted whenever a message needs to be communicated. The basic idea is to encrypt a message with a key (the secret) at the sender, then split the key into shares and send the shares along different paths to the destination. Furthermore, a Pre-Distributed Shared Key scheme is utilized for more secure transmission of the secret's shares. The proposed scheme, with the exception of some offline management by the network controller, is distributed: the symmetric key setups and the determination of the communication paths are performed by the nodes. This approach enhances communication security among the nodes of a network that operates in hostile environments. Cost and security analyses of the proposed scheme are provided.
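The splitting step can be illustrated with the textbook Shamir (t, n) construction; the paper's scheme additionally layers path selection and pre-distributed shared keys on top, which this sketch omits.

```python
# Textbook Shamir (t, n) secret sharing over a prime field: encrypt the
# message with a symmetric key, split the key, send shares on different
# paths. Sketch only; a deployment would use a vetted library.
import secrets

PRIME = 2**521 - 1  # Mersenne prime, comfortably larger than a 256-bit key

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(2**256)         # per-message symmetric key (the secret)
shares = split(key, t=3, n=5)           # one share per network path
assert reconstruct(shares[:3]) == key   # any 3 shares suffice; 2 reveal nothing
```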
Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.
In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.
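One standard presentation of the troublesome theory makes the difficulty concrete: the Abelian group axioms can serve as the background theory B, but the homomorphism equation resists orientation into a set of rewrite rules R with the finite variant property.

```latex
% Abelian group axioms on the message space (the candidate B):
x * y = y * x, \qquad (x * y) * z = x * (y * z), \qquad x * 1 = x, \qquad x * x^{-1} = 1
% Encryption homomorphic over the group operation (the problematic axiom):
\mathit{enc}(x, k) * \mathit{enc}(y, k) = \mathit{enc}(x * y, k)
```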
We consider a three-stage, three-player, complete-information Colonel Blotto game, in which the first two players fight against a common adversary. Each player is endowed with a certain amount of resources at the beginning of the game, and the number of battlefields on which a player and the adversary fight is specified. The first two players are allowed to form a coalition if it improves their payoffs. In the first stage, the first two players may add battlefields, incurring costs. In the second stage, the first two players may transfer resources between each other. The adversary observes this transfer and decides on the allocation of its resources to the two battles with the players. In the third stage, the adversary and the other two players fight on the updated number of battlefields and receive payoffs. We characterize the subgame-perfect Nash equilibrium (SPNE) of the game in various parameter regions. In particular, we show that there are parameter regions in which, if the players act according to the SPNE strategies, either (i) one of the first two players adds battlefields and transfers resources to the other player (a coalition is formed), or (ii) there is no addition of battlefields and no transfer of resources (no coalition is formed). We discuss the implications of the results on resource allocation for securing cyber-physical systems.
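In the Blotto stage game played at the third stage, a common payoff normalization (tie-breaking conventions vary; this is general background, not necessarily the paper's exact specification) is that a player wins each battlefield to which it allocates strictly more than the adversary:

```latex
\pi(x, y) = \sum_{j=1}^{n} \mathbf{1}\{x_j > y_j\},
\qquad \sum_{j=1}^{n} x_j \le X, \qquad \sum_{j=1}^{n} y_j \le Y,
```

with the SPNE then obtained by backward induction from this stage through the transfer and battlefield-addition stages.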
This session reports on a workshop convened by the ACM Education Board with funding from the US National Science Foundation, and invites discussion from the community on the workshop findings. The topic, curricular directions for cybersecurity, is one that resonates in many departments considering how best to prepare graduates to face the challenges of security issues in employment and future research. The session will include presentation of the workshop context and conclusions, but will be open to participant discussion. This will be the first public presentation of the results of the workshop and the first opportunity for significant response.
Multi-module Cyber-Physical Systems (CPSs), such as satellite clusters, swarms of Unmanned Aerial Vehicles (UAVs), and fleets of Unmanned Underwater Vehicles (UUVs), are examples of managed distributed real-time systems where mission-critical applications, such as sensor fusion or coordinated flight control, are hosted. These systems are dynamic and reconfigurable, and provide a “CPS cluster-as-a-service” for mission-specific scientific applications that can benefit from the elasticity of the cluster membership and the heterogeneity of the cluster members. The distributed and remote nature of these systems often necessitates the use of Deployment and Configuration (D&C) services to manage the lifecycle of software applications. Fluctuating resources, volatile cluster membership, and changing environmental conditions require resilience. However, due to the dynamic nature of the system, human intervention is often infeasible. This necessitates a self-adaptive D&C infrastructure that supports autonomous resilience. Such an infrastructure must be able to adapt existing applications on the fly in order to provide application resilience, and must itself be able to adapt to account for changes in the system as well as tolerate failures.
This paper describes the design and architectural considerations needed to realize a self-adaptive D&C infrastructure for CPSs. Previous efforts in this area have resulted in D&C infrastructures that support application adaptation via dynamic re-deployment and re-configuration mechanisms. Our work, presented in this paper, improves upon these past efforts by implementing a self-adaptive D&C infrastructure which is itself resilient. The paper concludes with experimental results that demonstrate the autonomous resilience capabilities of our new D&C infrastructure.
Injection vulnerabilities have topped rankings of the most critical web application vulnerabilities for several years [1, 2]. They can occur wherever user input may be erroneously executed as code. The injected input is typically aimed at gaining unauthorized access to the system or to private information within it, corrupting the system's data, or disturbing system availability. Injection vulnerabilities are tedious and difficult to prevent.
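A minimal SQL example shows the failure mode and the standard parameterized-query defense (Python with sqlite3; the table and data are invented for illustration):

```python
# The injection failure mode in miniature, plus the standard defense.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the query string and runs as SQL.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'").fetchall()
print(rows)  # [('s3cret',)] -- every row leaks

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []
```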
This paper studies a distributed event-triggered communication and control strategy that solves the multi-agent average consensus problem. The proposed strategy does not rely on the continuous or periodic availability of information to an agent about the state of its neighbors, but instead prescribes isolated event times at which both communication and controller updates occur. In addition, all parameters required for its implementation can be locally determined by the agents. We show that the resulting network executions are guaranteed to converge to the average of the agents' initial states, establish that events cannot be triggered an infinite number of times in any finite time period (i.e., absence of Zeno behavior), and characterize the exponential rate of convergence. We also provide sufficient conditions for convergence in scenarios with time-varying communication topologies. Simulations illustrate our results.
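The flavor of such a scheme can be conveyed by a small simulation: each agent applies a consensus control computed from the last broadcast states and re-broadcasts only when its measurement error exceeds a threshold. The trigger rule and gains below are illustrative stand-ins, not the paper's design (which in particular is engineered to rule out Zeno behavior).

```python
# Simulation sketch of event-triggered average consensus on a 4-agent ring.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])                 # undirected ring adjacency
x = np.array([1.0, 3.0, -2.0, 6.0])          # initial states, average = 2.0
xb = x.copy()                                # last *broadcast* states
dt, sigma, eps = 0.01, 0.1, 1e-4

for _ in range(3000):
    # control uses only broadcast data: u_i = sum_j a_ij (xb_j - xb_i)
    u = A @ xb - A.sum(axis=1) * xb
    x = x + dt * u
    # event rule: re-broadcast when the measurement error grows too large
    stale = np.abs(x - xb) > sigma * np.abs(u) + eps
    xb[stale] = x[stale]

print(x)  # all entries close to the preserved initial average, 2.0
```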
In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have only local effects. Instead of analyzing a system such as Linux or SELinux for every combination of configuration settings one by one (>10^2000 configurations, even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs that separate interacting configuration options in a core system and isolate orthogonal and less trusted configuration options from this core.
Software architects design systems to achieve quality attributes like security, reliability, and performance. Key to achieving these quality attributes are design constraints governing how components of the system are configured, communicate and access resources. Unfortunately, identifying, specifying, communicating and enforcing important design constraints – achieving architectural control – can be difficult, particularly in large software systems. We argue for the development of architectural frameworks, built to leverage language mechanisms that provide for domain-specific syntax, editor services and explicit control over capabilities, that help increase architectural control. In particular, we argue for concise, centralized architectural descriptions which are responsible for specifying constraints and passing a minimal set of capabilities to downstream system components, or explicitly entrusting them to individuals playing defined roles within a team. By integrating these architectural descriptions directly into the language, the type system can help enforce technical constraints and editor services can help enforce social constraints. We sketch our approach in the context of distributed systems.
Programming languages often include specialized syntax for common datatypes (e.g., lists), and some also build in support for specific specialized datatypes (e.g., regular expressions), but user-defined types must use general-purpose syntax. Frustration with this causes developers to use strings, rather than structured data, with alarming frequency, leading to correctness, performance, security, and usability issues. Allowing library providers to modularly extend a language with new syntax could help address these issues. Unfortunately, prior mechanisms either limit expressiveness or are not safely composable: individually unambiguous extensions can still cause ambiguities when used together. We introduce type-specific languages (TSLs): logic associated with a type that determines how the bodies of generic literals, which can contain arbitrary syntax, are parsed and elaborated, hygienically. The TSL for a type is invoked only when a literal appears where a term of that type is expected, guaranteeing non-interference. We give evidence supporting the applicability of this approach and formally specify it with a bidirectionally typed elaboration semantics for the Wyvern programming language.
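The dispatch idea can be mimicked dynamically in plain Python (all names below are hypothetical, and Wyvern performs this step statically during typed elaboration rather than at run time):

```python
# Rough dynamic analogue of TSL dispatch: the *expected type* supplies
# the parser for a generic literal body.
import json
import re
from dataclasses import dataclass

@dataclass
class Regex:
    pattern: "re.Pattern[str]"
    @staticmethod
    def parse_literal(body: str) -> "Regex":
        return Regex(re.compile(body))

@dataclass
class Json:
    value: object
    @staticmethod
    def parse_literal(body: str) -> "Json":
        return Json(json.loads(body))

def literal(expected, body: str):
    # In Wyvern, the compiler invokes the TSL of the expected type when a
    # generic literal appears; here the dispatch is an explicit call.
    return expected.parse_literal(body)

r = literal(Regex, r"a+b*")             # body parsed as a regular expression
j = literal(Json, '{"key": [1, 2]}')    # same literal form, different TSL
```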
Complex software systems are becoming increasingly prevalent in aerospace applications, in particular to accomplish critical tasks. Ensuring the safety of these systems is crucial, as they can have subtly different behaviors under slight variations in operating conditions. This paper advocates the use of formal verification techniques, and in particular theorem proving, for hybrid software-intensive systems as a well-founded complementary approach to classical aerospace verification and validation techniques such as testing or simulation. As an illustration of these techniques, a novel lateral midair collision-avoidance maneuver is studied in an ideal setting, without accounting for the uncertainties of the physical reality. The challenges that naturally arise when applying such technology to industrial-scale applications are then detailed, and proposals are given on how to address these issues.
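A typical statement proved in this setting has the shape of a hybrid-program safety property in the style of differential dynamic logic; schematically (this is the general shape, not the paper's exact theorem):

```latex
\underbrace{\lVert p_1 - p_2 \rVert \ge d_{\mathrm{safe}}}_{\text{initially separated}}
\;\rightarrow\;
[\,\alpha_{\mathrm{maneuver}}\,]\,
\underbrace{\lVert p_1 - p_2 \rVert \ge d_{\mathrm{safe}}}_{\text{separated after every run}}
```

where $[\alpha]\varphi$ asserts that $\varphi$ holds after every execution of the hybrid program $\alpha$, which mixes the discrete maneuver decisions with the continuous flight dynamics.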
Web applications must ultimately command systems like web browsers and database engines using strings. Strings derived from improperly sanitized user input can as a result be a vector for command injection attacks. In this paper, we introduce regular string types, which classify strings constrained statically to be in a regular language specified by a regular expression. Regular strings support standard string operations like concatenation and substitution, as well as safe coercions, so they can be used to implement, in an essentially conventional manner, the pieces of a web application or framework that handle strings arising from user input. Simple type annotations at function interfaces can be used to statically verify that sanitization has been performed correctly without introducing redundant run-time checks. We specify this type system first as a minimal typed lambda calculus, lambdaRS. To be practical, adopting a specialized type system like this should not require the adoption of a new programming language. Instead, we advocate for extensible type systems: new type system fragments like this should be implemented as libraries atop a mechanism that guarantees that they can be safely composed. We support this with two contributions. First, we specify a translation from lambdaRS to a calculus with only standard strings and regular expressions. Then, taking Python as a language with these constructs, we implement the type system together with the translation as a library using typy, an extensible static type system for Python.
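A dynamic Python analogue conveys the intent; lambdaRS enforces these checks statically at function interfaces, whereas the hypothetical RegularString class below checks at run time.

```python
# Run-time analogue of regular string types: a string tagged with the
# regular language it inhabits, checked on construction and preserved by
# concatenation. RegularString and SAFE are illustrative names only.
import re

class RegularString:
    def __init__(self, lang: str, value: str):
        if re.fullmatch(lang, value) is None:
            raise TypeError(f"{value!r} is not in the language {lang!r}")
        self.lang, self.value = lang, value

    def concat(self, other: "RegularString") -> "RegularString":
        # Concatenation of values lives in the concatenation of languages.
        return RegularString(f"(?:{self.lang})(?:{other.lang})",
                             self.value + other.value)

SAFE = r"[A-Za-z0-9_]*"  # an illustrative "sanitized" language

def sanitize(raw: str) -> RegularString:
    return RegularString(SAFE, re.sub(r"\W", "", raw))

name = sanitize("alice'; DROP TABLE users;--")
print(name.value)  # aliceDROPTABLEusers -- safe to splice as an identifier
```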