Bibliography
There has been a great deal of work on learning new robot skills, but very little consideration of how these newly acquired skills can be integrated into an overall intelligent system. A key aspect of such a system is compositionality: newly learned abilities have to be characterized in a form that will allow them to be flexibly combined with existing abilities, affording a (good!) combinatorial explosion in the robot's abilities. In this paper, we focus on learning models of the preconditions and effects of new parameterized skills, in a form that allows those actions to be combined with existing abilities by a generative planning and execution system.
Service composition is currently done by (hierarchical) orchestration and choreography. However, these approaches support neither explicit control flow nor total compositionality, both of which are crucial for the scalability of service-oriented systems. In this paper, we propose exogenous connectors for service composition. These connectors support both explicit control flow and total compositionality in hierarchical service composition. To validate and evaluate our proposal, we present a case study based on the popular MusicCorp example.
Information flow security is a critical requirement for complex component-based software. Recent efforts on compositional information flow analysis have been limited in the expressiveness of their security lattices and the efficiency of compositional enforcement. Extending these approaches to support more general security lattices is usually nontrivial because the compositionality of information flow security properties must be properly treated. In this work, we present a new extension of interface automata. On this interface structure, we propose two refinement-based security properties, adaptable to any finite security lattice. For each property, we present and prove the security condition that ensures the property is preserved under composition. Furthermore, we implement the refinement algorithms and the security condition decision procedure. We demonstrate the usability and efficiency of our approach with in-depth case studies. The evaluation results show that our compositional enforcement can effectively reduce the verification cost compared with global verification on the composite system.
There are several works on the formalization of security protocols and proofs of their security in Isabelle/HOL, and there have also been tools for automatically generating such proofs. This is attractive because a proof in Isabelle gives higher assurance of correctness than a pen-and-paper proof or the positive output of a verification tool. However, several of these works have used a typed model, where the intruder is restricted to "well-typed" attacks. Several other works show that this is actually not a restriction for a large class of protocols, but all these results so far are again pen-and-paper proofs. In this work we present a formalization of such a typing result in Isabelle/HOL. We formalize a constraint-based approach that is used in the proof argument of such typing results, and prove its soundness, completeness, and termination. We then formalize and prove the typing result itself in Isabelle. Finally, to illustrate real-world feasibility, we prove that the standard Transport Layer Security (TLS) handshake satisfies the main condition of the typing result.
Side channel attacks have been used to extract critical data such as encryption keys and confidential user data in a variety of adversarial settings. In practice, this threat is addressed by adhering to a constant-time programming discipline, which imposes strict constraints on the way in which programs are written. This introduces an additional hurdle for programmers faced with the already difficult task of writing secure code, highlighting the need for solutions that give the same source-level guarantees while supporting more natural programming models. We propose a novel type system for verifying that programs correctly implement constant-resource behavior. Our type system extends recent work on automatic amortized resource analysis (AARA), a set of techniques that automatically derive provable upper bounds on the resource consumption of programs. We devise new techniques that build on the potential method to achieve compositionality, precision, and automation. A strict global requirement that a program always maintains constant resource usage is too restrictive for most practical applications. It is sufficient to require that the program's resource behavior remain constant with respect to an attacker who is only allowed to observe part of the program's state and behavior. To account for this, our type system incorporates information flow tracking into its resource analysis. This allows our system to certify programs that need to violate the constant-time requirement in certain cases, as long as doing so does not leak confidential information to attackers. We formalize this guarantee by defining a new notion of resource-aware noninterference, and prove that our system enforces it. Finally, we show how our type inference algorithm can be used to synthesize a constant-time implementation from one that cannot be verified as secure, effectively repairing insecure programs automatically. We also show how a second novel AARA system that computes lower bounds on resource usage can be used to derive quantitative bounds on the amount of information that a program leaks through its resource use. We implemented each of these systems in Resource Aware ML, and show that they can be applied to verify constant-time behavior in a number of applications including encryption and decryption routines, database queries, and other resource-aware functionality.
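To make the constant-time discipline concrete, here is a minimal Python sketch (ours, not from the paper) contrasting an early-exit comparison, whose running time leaks the position of the first mismatch, with a constant-time one:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time depends on the first differing byte,
    # leaking information about the secret through a timing channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Runs in time independent of where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)
```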
With the progressive development of network applications and software dependency, we need more advanced methods for protecting our systems. Every industry is affected; whether we consider the vulnerability of a government, an individual household, or a company, we have to find a sophisticated and secure way to defend our systems. The starting point is a reliable intrusion detection mechanism that helps identify an attack at a very early stage; otherwise the intrusion can affect the system negatively, with enormous consequences for the system's privacy, security, or financial stability. This paper proposes a concise and easy-to-use statistical learning procedure, abbreviated NASCA, a four-stage intrusion detection method that can successfully detect unwanted intrusions into our systems. The model is static, but it can be adapted to a dynamic setup.
The vision of cyber-physical systems (CPSs) considers the Internet as the future communication network for such systems. A challenge in this regard is to provide high communication reliability, especially for CPS applications in critical infrastructures. Examples include smart grid applications with reliability requirements between 99% and 99.9999% [2]. Even though the Internet is a cost-effective solution for such applications, the reliability of its end-to-end (e2e) paths is inadequate (often less than 99%). In this paper, we propose a Reliable Multipath Communication Approach for Internet-based CPSs (RC4CPS). RC4CPS is an e2e approach that utilizes the inherent redundancy of the Internet and the concept of multipath (MP) transport protocols to improve reliability measured in terms of availability. It provides online monitoring and MP selection in order to fulfill the application-specific reliability requirement. In addition, our MP selection considers e2e path dependency and unavailability prediction to maximize the reliability gains of MP communication. Our results show that RC4CPS's dynamic MP selection satisfies the reliability requirement while selecting e2e paths with low dependency and unavailability probability.
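As background for how multipath redundancy raises availability, here is a minimal Python sketch under the simplifying assumption of independent path failures; RC4CPS itself additionally models path dependency and predicts unavailability, which this sketch does not:

```python
from itertools import combinations

def multipath_availability(avails):
    # Availability of a set of parallel, *independent* e2e paths:
    # the connection is down only if every path is down simultaneously.
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

def select_paths(avails, requirement):
    # Pick the smallest path set whose combined availability meets
    # the application-specific reliability requirement.
    for k in range(1, len(avails) + 1):
        for subset in combinations(range(len(avails)), k):
            if multipath_availability([avails[i] for i in subset]) >= requirement:
                return subset
    return None

# Two 98%-available paths already exceed a 99.9% requirement:
print(multipath_availability([0.98, 0.98]))        # 0.9996
print(select_paths([0.98, 0.97, 0.99], 0.999))     # (0, 1)
```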
Cyber-induced dependent failures must be considered in composite system reliability evaluation. Because of the complexity and dimensionality involved, Monte Carlo simulation is a preferred method for composite system reliability evaluation. Non-sequential Monte Carlo, or sampling, generally requires fewer computational and storage resources than sequential techniques and is generally preferred for large systems where components are independent or only limited dependency exists. However, cyber-induced events involve dependent failures, making it difficult to use sampling methods. The difficulties of using sampling with dependent failures are discussed and a solution is proposed. The basic idea is to generate a representative state space from which states can be sampled. The probabilities of the representative state space provide an approximation of the joint distribution; they are generated by a sequential simulation in this paper, but it may be possible to find alternative means of achieving this objective. The proposed method preserves the dependent features of cyber-induced events and also improves efficiency. Although motivated by cyber-induced failures, the technique can be used for other types of dependent failures as well. A comparative study between a purely sequential methodology and the proposed method is presented on an extended Roy Billinton Test System.
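A minimal Python sketch of the representative-state-space idea, with hypothetical names: a sequential simulation yields joint component states whose empirical frequencies approximate the joint distribution, after which states can be sampled non-sequentially while the failure dependencies are preserved:

```python
import random
from collections import Counter

def build_representative_states(sequential_sim, horizon):
    # sequential_sim (hypothetical) yields joint component states, e.g.
    # tuples of up/down flags, one per simulated time step. Normalized
    # visit counts approximate the joint distribution, preserving the
    # dependencies between cyber-induced component failures.
    counts = Counter(sequential_sim(horizon))
    total = sum(counts.values())
    states = list(counts)
    weights = [counts[s] / total for s in states]
    return states, weights

def sample_states(states, weights, n):
    # Non-sequential (i.i.d.) sampling from the representative state space.
    return random.choices(states, weights=weights, k=n)
```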
Large-scale infrastructures are critical to economic and social development, and hence their continued performance and security are of high national importance. Such an infrastructure is often a system of systems, and its functionality critically depends on the inherent robustness of its constituent systems and on its defense strategy for countering attacks. Additionally, interdependencies between the systems play another critical role in determining the infrastructure robustness, specified by its survival probability. In this paper, we develop game-theoretic models between a defender and an attacker for a generic system of systems, using inherent parameters and conditional survival probabilities that characterize the interdependencies. We derive Nash equilibrium conditions for the cases of interdependent and independent systems of systems under sum-form utility functions. We derive expressions for the infrastructure survival probability that capture its dependence on cost and system parameters, and also on dependencies specified by conditional probabilities. We apply the results to cyber-physical systems, showing the effects of defense and attack intensities, inherent robustness, unit cost, target valuation, and interdependencies on system survival probability.
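Schematically, and not in the paper's exact notation, the interdependency enters through conditional survival probabilities, and a sum-form utility weighs target valuations against defense costs; for two systems:

```latex
% Schematic only (not the paper's exact expressions). Interdependency
% enters via conditional survival probabilities; a sum-form defender
% utility weighs target valuations v_i against unit defense costs c_i
% for defense investments x_i.
\[
  P(S_1 \cap S_2) \;=\; P(S_1)\, P(S_2 \mid S_1),
  \qquad
  U_D \;=\; \sum_i v_i\, P(S_i) \;-\; \sum_i c_i x_i .
\]
```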
Business and military missions are supported by hardware and software systems. Unanticipated cyber activities occurring in supporting systems can impact such missions. In order to quantify such impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies; the middle layer reconstructs attack scenarios from available evidence and their inter-relationships; and, in cases where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the three layers of graphs constructed in these steps, we present a method to compute the impacts of attack activities on missions. We use Common Vulnerability Scoring System (CVSS) scores from NIST's National Vulnerability Database (NVD), or forensic investigators' estimates, in our impact computations. We present a case study to show the utility of our model.
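The paper's exact aggregation is not reproduced here; the following hypothetical Python sketch shows one plausible way CVSS-derived step impacts could be rolled up through task dependencies to a mission-level impact score:

```python
def step_impact(cvss_base_score):
    # Normalize a CVSS base score (0-10) into an impact factor in [0, 1].
    return cvss_base_score / 10.0

def task_impact(step_scores):
    # Assume a task is impacted if any attack step affecting it succeeds;
    # combine per-step impacts like independent event probabilities.
    unaffected = 1.0
    for s in step_scores:
        unaffected *= (1.0 - step_impact(s))
    return 1.0 - unaffected

def mission_impact(task_weights, task_step_scores):
    # Weighted aggregate over the tasks that constitute the mission.
    return sum(w * task_impact(task_step_scores[t])
               for t, w in task_weights.items())

# Hypothetical mission with two tasks; scores would come from NVD's CVSS
# data or from forensic investigators' estimates.
weights = {"db_service": 0.6, "web_portal": 0.4}
scores = {"db_service": [9.8], "web_portal": [5.3, 6.1]}
print(mission_impact(weights, scores))  # ~0.91
```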
In this paper, we present a decentralized nonlinear robust controller to enhance the transient stability margin of synchronous generators. Although the trend in power system control is shifting towards centralized or distributed controller approaches, the remote-data dependency of these schemes fuels cyber-physical security issues. Since excessive delays or the loss of remote data severely affect the operation of such controllers, the designed controller emerges as an alternative for the stabilization of smart grids when remote data are unavailable and in the presence of plant parametric uncertainties. The proposed controller actuates distributed storage systems such as flywheels in order to reduce stabilization time, and it implements a novel input time-delay compensation technique. Lyapunov stability analysis proves that all the tracking error signals are globally uniformly ultimately bounded. Furthermore, simulation results demonstrate that the proposed controller outperforms traditional local power system controllers such as power system stabilizers.
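For reference, the stability notion invoked above has a standard definition (cf. Khalil, Nonlinear Systems): the tracking error e(t) is globally uniformly ultimately bounded if there exists a bound b > 0 such that, for every a > 0, there is a time T(a, b) >= 0 with

```latex
\[
  \|e(t_0)\| \le a
  \;\Longrightarrow\;
  \|e(t)\| \le b \quad \text{for all } t \ge t_0 + T(a, b).
\]
```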
Enhancing the security and resilience of interdependent infrastructures is crucial. In this paper, we establish a theoretical framework based on Markov decision processes (MDPs) to design optimal resiliency mechanisms for interdependent infrastructures. We use MDPs to capture the dynamics of the failure of an infrastructure's constituent components and their cyber-physical dependencies. Factored MDPs and approximate linear programming are adopted to cope with the exponentially growing dimension of both the state and action spaces. Under our approximation scheme, the optimal distributed policy is equivalent to the centralized one. Finally, case studies on a large-scale interdependent system demonstrate the effectiveness of the control strategy in enhancing network resilience to cascading failures.
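To illustrate what an MDP over failing, interdependent components looks like, here is a toy flat-MDP value iteration in Python (illustrative only; the paper's contribution is precisely to avoid this flat enumeration via factored MDPs and approximate linear programming):

```python
import itertools

# Toy interdependent-failure MDP (illustrative only). Two components,
# each up (1) or down (0); component B fails more often when A is down,
# encoding a cyber-physical dependency. The action chooses which
# component to repair this step.
P_FAIL_A = 0.05
P_FAIL_B = {1: 0.05, 0: 0.30}  # B's failure probability depends on A
P_REPAIR = 0.8
GAMMA = 0.95

STATES = list(itertools.product((0, 1), repeat=2))
ACTIONS = ("repair_A", "repair_B")

def up_prob(is_up, p_fail, repaired):
    # Probability the component is up at the next step.
    if is_up:
        return 1.0 - p_fail
    return P_REPAIR if repaired else 0.0

def transition(state, action):
    a, b = state
    pa = up_prob(a, P_FAIL_A, action == "repair_A")
    pb = up_prob(b, P_FAIL_B[a], action == "repair_B")
    return {(na, nb): (pa if na else 1 - pa) * (pb if nb else 1 - pb)
            for na in (0, 1) for nb in (0, 1)}

def reward(state):
    return sum(state)  # reward: number of components still up

# Value iteration on the flat MDP; the flat state space grows
# exponentially in the number of components, which is what factored
# MDPs and approximate LP are meant to sidestep.
V = {s: 0.0 for s in STATES}
for _ in range(200):
    V = {s: max(reward(s) + GAMMA * sum(p * V[ns]
                for ns, p in transition(s, act).items())
                for act in ACTIONS)
         for s in STATES}
print({s: round(v, 2) for s, v in V.items()})
```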
Developers of applications relying on cryptographic libraries can easily make mistakes in their use. Popular dynamic languages such as JavaScript make testing or verifying such applications particularly challenging. In this paper, we present our ongoing work toward a methodology for automatically checking security properties in JavaScript code. Our main idea is to attach security annotations to values that encode properties of interest. We illustrate our idea using examples and, as an initial step in our line of work, we present a formalization of security annotations in a statically typed lambda calculus. As next steps, we will translate our annotations to a dynamically typed formalization of JavaScript such as λJS and implement a runtime-checked type extension using code instrumentation for full JavaScript.
Private record linkage (PRL) is the problem of identifying pairs of records that are similar as per an input matching rule from databases held by two parties that do not trust one another. We identify three key desiderata that a PRL solution must ensure: (1) perfect precision and high recall of matching pairs, (2) a proof of end-to-end privacy, and (3) communication and computational costs that scale subquadratically in the number of input records. We show that all of the existing solutions for PRL, including secure 2-party computation (S2PC) and its variants that use non-private or differentially private (DP) blocking to ensure subquadratic cost, violate at least one of the three desiderata. In particular, S2PC techniques guarantee end-to-end privacy but have either low recall or quadratic cost. In contrast, no end-to-end privacy guarantee has been formalized for solutions that achieve subquadratic cost. This is true even for solutions that compose DP and S2PC: DP does not permit the release of any exact information about the databases, while S2PC algorithms for PRL allow the release of matching records. In light of this deficiency, we propose a novel privacy model, called output constrained differential privacy, that shares the strong privacy protection of DP but allows for the truthful release of the output of a certain function applied to the data. We apply this to PRL, and show that protocols satisfying this privacy model permit the disclosure of the true matching records, but their execution is insensitive to the presence or absence of a single non-matching record. We find that prior works that combine DP and S2PC techniques fail even to satisfy this end-to-end privacy model. Hence, we develop novel protocols that provably achieve this end-to-end privacy guarantee, together with the other two desiderata of PRL. Our empirical evaluation also shows that our protocols obtain high recall, scale near-linearly in the size of the input databases and the output set of matching pairs, and have communication and computational costs that are at least two orders of magnitude smaller than those of S2PC baselines.
Privacy preservation in data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy-preserving data publication is the trade-off between privacy and utility for a single, independent data set. However, recent research has shown that even an advanced privacy mechanism such as differential privacy is vulnerable when multiple data sets are correlated. In this case, the trade-off problem between privacy and utility evolves into a game problem, in which the payoff of each player depends not only on his own privacy parameter but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set as influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash equilibrium and establish sufficient conditions for it in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash equilibrium.
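As a toy illustration of such a game, the following Python sketch runs iterated best responses under a hypothetical payoff in which each player's utility grows with its own privacy budget ε_i while its effective privacy loss also grows with its neighbors' budgets through a correlation coefficient; the payoff form is our assumption, not the paper's:

```python
import math

# Hypothetical payoff (our assumption, not the paper's model): utility
# grows with one's own privacy budget eps_i (less noise, more utility),
# while the quadratic privacy cost grows with the *effective* budget,
# which includes correlated neighbors' budgets discounted by RHO.
RHO, COST = 0.5, 0.3
GRID = [k / 100 for k in range(1, 301)]  # candidate eps values in (0, 3]

def payoff(eps_i, neighbor_sum):
    effective = eps_i + RHO * neighbor_sum  # correlated privacy leakage
    return math.log(1 + eps_i) - COST * effective ** 2

def best_response(neighbor_sum):
    return max(GRID, key=lambda e: payoff(e, neighbor_sum))

# Iterated (simultaneous) best responses for 3 symmetric players; the
# fixed point they converge to is a pure Nash equilibrium of this toy game.
eps = [1.0, 1.0, 1.0]
for _ in range(50):
    eps = [best_response(sum(eps) - eps[i]) for i in range(3)]
print([round(e, 2) for e in eps])
```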
Cloud computing is a developing IT paradigm that faces issues slowing down its evolution and adoption by users across the world. The lack of security has been the main concern. Organizations and entities need to ensure, inter alia, the integrity and confidentiality of their outsourced sensitive data on a cloud provider's servers. Solutions have been examined to strengthen security models (strong authentication, encryption and fragmentation before storing, access control policies, and so on). In particular, data remanence is undoubtedly a major threat: how can we be sure that data are truly and appropriately deleted from remote servers when deletion is requested? In this paper, we survey this subject and address the problem of residual data in a cloud-computing environment, which is characterized by the use of virtual machines instantiated on remote servers owned by a third party.
With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others for effective assessment of the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, making the decision to share information is a challenging task and requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system using a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best-response analysis, the simulation results demonstrate the efficiency of our proposed framework.
In recent years, many companies have been shifting towards the cloud to expand their business profit with the least additional cost. Cloud computing is a growing technology that has emerged from the development of grid computing, virtualization, and utility computing. It is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. There was a huge data loss during the Chennai floods of December 2015; had these data been stored at distributed data centers, great loss could have been prevented. Though such natural calamities tempt many users to shift towards cloud storage, security threats inhibit them from doing so. Many solutions have been proposed for these security issues, but they do not give guaranteed security, by which we mean confidentiality, integrity, and availability. Some of the existing techniques for providing security are cryptographic protocols, data sanitization, predicate logic, access control mechanisms, honeypots, sandboxing, erasure coding, RAID (Redundant Arrays of Independent Disks), homomorphic encryption, and split-key encryption. All these techniques either cannot work alone or add computational and time complexity. An alternate scheme that combines encryption and channel coding in one go is proposed to increase the level of security. A hybrid encryption scheme is proposed for use in the interleaver block of a Turbo coder to avoid burst errors. Hybrid encryption avoids sharing the secret key over an unsecured channel. This provides both security and reliability by reducing the error propagation effect, with small additional cost and computational overhead. Time complexity can be reduced when encryption and encoding are done as a single process.
In recent times, there has been a huge increase in large data repositories held by corporations, governments, and healthcare organizations. These repositories provide opportunities to design and improve decision-making systems by mining trends and patterns from the data (which can provide credible information) to improve customer service (e.g., in healthcare). As a result, while data sharing is essential, it is an obligation to maintain the privacy of the data donors, as data custodians have legal and ethical responsibilities to secure confidentiality. This research proposes a 2-layer privacy-preserving (2-LPP) data sanitization algorithm that satisfies ε-differential privacy for publishing sanitized data. The proposed algorithm also reduces the re-identification risk of the sanitized data. The proposed algorithm has been implemented and tested with two different data sets. Compared to other existing works, the results obtained from the proposed algorithm show promising performance.
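The 2-LPP algorithm itself is not reproduced here; as background, the standard building block for an ε-differential-privacy guarantee on numeric queries is the Laplace mechanism, sketched below in Python:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Standard Laplace mechanism: adding Laplace(0, sensitivity/epsilon)
    # noise to a numeric query answer satisfies epsilon-differential privacy.
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1 (adding or removing one
# record changes the count by at most 1), so this release is
# 0.5-differentially private.
true_count = 1234
print(laplace_mechanism(true_count, sensitivity=1, epsilon=0.5))
```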
Data security is a primary concern for every communication system. Communication is an essential tool for business, education, defense services, and more, and it is essential to transfer data safely and securely. At present, various cryptographic algorithms have been proposed and implemented; these algorithms are classified into symmetric and asymmetric algorithms based on the number of keys used. Even though several algorithms are used for data security, each can be compromised after a certain period. The idea here is to combine several secure algorithms to provide a highly secure environment for data transmission. The algorithms to be combined are the AES symmetric cryptographic algorithm, the RSA asymmetric algorithm, and the MD5 hashing algorithm. With these three algorithms, we can ensure three cryptographic primitives: confidentiality, authentication, and integrity of data.
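A minimal Python sketch of this kind of AES+RSA+MD5 combination, using the third-party cryptography package (our illustration, not the paper's implementation; MD5 is cryptographically broken and appears only to mirror the abstract):

```python
import os, hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's RSA key pair (asymmetric part: protects the session key).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer data safe and secure"

# AES (symmetric part): encrypt the bulk data with a fresh session key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# RSA-OAEP: wrap the session key so it never crosses the channel in clear.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# MD5 digest of the plaintext as the integrity tag the abstract names
# (broken for security purposes; shown only to match the paper's choice).
digest = hashlib.md5(message).hexdigest()

# Receiver side: unwrap the key, decrypt, and verify the digest.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert hashlib.md5(plaintext).hexdigest() == digest
```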
In this paper, the correctness of the routing algorithm for a distributed key-value store based on order-preserving linear hashing and Skip Graph is proved. In this system, data are partitioned by linear hashing, and a Skip Graph is used as the overlay network. The routing tables of this system are very uniform, so short detours can exist in the forwarding route. By using these detours, the number of hops for query forwarding is reduced.
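The paper's order-preserving variant is not spelled out in the abstract; as background, standard linear hashing computes bucket addresses as in this Python sketch, growing the table one bucket at a time via a split pointer:

```python
def bucket_address(key_hash, level, split_pointer, n0):
    # Textbook linear hashing address computation (the paper uses an
    # order-preserving variant; this sketch shows the standard scheme).
    # n0 is the initial number of buckets; buckets below the split
    # pointer have already been split and use the next-level function.
    addr = key_hash % (n0 * 2 ** level)
    if addr < split_pointer:
        addr = key_hash % (n0 * 2 ** (level + 1))
    return addr

# With n0 = 4, level = 0 and split_pointer = 2, keys mapping to buckets
# 0 or 1 are rehashed with mod 8, so the table grows incrementally.
for k in (0, 1, 2, 5, 9):
    print(k, "->", bucket_address(k, level=0, split_pointer=2, n0=4))
```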