Bibliography
A two-server password-based authentication (2PA) protocol is a special kind of authentication primitive that provides additional protection for the user's password. Through a 2PA protocol, a user can distribute his low-entropy password between two authentication servers in the initialization phase and authenticate himself merely via a matching password in the login phase. No single server can learn any information about the user's password, nor impersonate the legitimate user to authenticate to the honest server. In this paper, we first formulate and realize the security definition of two-server password-based authentication in the well-known universal composability (UC) framework, which thus provides desirable properties such as composable security. We show that our construction is suitable for the asymmetric communication model in which one server acts as the front-end server interacting directly with the user and the other stays backstage. Then, we show that our protocol can easily be extended to more complicated password-based cryptographic protocols, such as two-server password-authenticated key exchange (2PAKE) and two-server password-authenticated secret sharing (2PASS), which enjoy stronger security guarantees and better efficiency than existing schemes.
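To make the splitting idea concrete, the following minimal Python sketch shows 2-out-of-2 XOR secret sharing of a password digest between two servers, so that neither share alone reveals anything about the password. It is only an illustration under simplifying assumptions, not the UC-secure protocol constructed in the paper, and all names are ours; in a real 2PA protocol the digest would never be reconstructed at a single party.

```python
import os
import hashlib

def split_password(password: str):
    """Split a password digest into two XOR shares (2-out-of-2 secret sharing).

    Each share alone is a uniformly random string and reveals nothing.
    """
    digest = hashlib.sha256(password.encode()).digest()
    share1 = os.urandom(len(digest))                        # server 1's share
    share2 = bytes(a ^ b for a, b in zip(digest, share1))   # server 2's share
    return share1, share2

def verify(password: str, share1: bytes, share2: bytes) -> bool:
    """Both shares are required to reconstruct and check the digest."""
    digest = bytes(a ^ b for a, b in zip(share1, share2))
    return digest == hashlib.sha256(password.encode()).digest()

if __name__ == "__main__":
    s1, s2 = split_password("correct horse battery staple")
    assert verify("correct horse battery staple", s1, s2)
    assert not verify("wrong password", s1, s2)
```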
Compositional theories and technologies facilitate the decomposition of a complex system into components, as well as their integration via interfaces. Component interfaces hide the internal details of the components, thereby reducing integration complexity. A system is said to be composable if the properties established and validated for components in isolation hold once the components are integrated to form the system. This raises the question: "Is compositionality related to composability?" This paper answers this question in the affirmative; it considers a previously known interface for compositionality and shows that it can be used for composability. It also presents a run-time policing mechanism for this interface.
Computational soundness results show that under certain conditions it is possible to conclude computational security whenever symbolic security holds. Unfortunately, each soundness result is usually established for some set of cryptographic primitives, and extending the result to encompass new primitives typically requires redoing most of the work. In this paper we suggest a way of getting around this problem. We propose a notion of computational soundness that we term deduction soundness. As for other soundness notions, our definition captures the idea that a computational adversary does not have any more power than a symbolic adversary. However, a key aspect of deduction soundness is that it considers, intrinsically, the use of the primitives in the presence of functions specified by the adversary. As a consequence, the resulting notion is amenable to modular extensions. We prove that a deduction sound implementation of some arbitrary primitives can be extended to include asymmetric encryption and public data structures (e.g., pairs or lists) without repeating the original proof effort. Furthermore, our notion of soundness concerns cryptographic primitives in a way that is independent of any protocol specification language. Nonetheless, we show that deduction soundness leads to computational soundness for languages (or protocols) that satisfy a so-called commutation property.
Many solutions for composability and compositionality rely on specifying the interface for a component using bandwidth. Some previous works specify period (P) and budget (Q) as the interface for a component: Q/P provides a bandwidth (the share of a processor that this component may request), and P specifies the time granularity of the allocation of this processing capacity. Other works add a further parameter, deadline, which can help provide tighter bounds on how this processing capacity is distributed. Yet other works use the parameters α and Δ, where α is the bandwidth and Δ specifies how smoothly this bandwidth is distributed. It is known [4] that such bandwidth-like interfaces carry a cost: there are tasksets that could be guaranteed schedulable if tasks were scheduled directly on the processor, but that no bandwidth-like interface can guarantee schedulable. It is also known that this penalty can be unbounded: for any k, one can construct a taskset for which a bandwidth-like interface requires a processor that is k times faster. This raises the question: "What is the average-case performance penalty of bandwidth-like interfaces?" This paper addresses the question by randomly generating tasksets and, for each of these tasksets, computing a lower bound on how much faster a processor needs to be when a bandwidth-like scheme is used. We do not consider any specific bandwidth-like scheme; instead, we derive an expression that states a lower bound on how much faster a processor needs to be for any bandwidth-like scheme. For the distributions considered in this paper, we find that (i) the experimental results depend on the experimental setup, (ii) this lower bound on the penalty was never larger than 4.0, (iii) for one experimental setup, for each taskset, it was greater than 2.4, (iv) the histogram of this penalty appears to be unimodal, and (v) for implicit-deadline sporadic tasks, this lower bound on the penalty was exactly 1.
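For concreteness, the following Python sketch illustrates the (P, Q) and (α, Δ) interfaces described above: a (P, Q) interface induces a bandwidth of Q/P, while an (α, Δ) interface guarantees, in any window of length t, at least the processing time given by the standard bounded-delay supply bound function. The parameter values are illustrative only; this is not the paper's lower-bound derivation.

```python
def bandwidth(Q: float, P: float) -> float:
    """Bandwidth of a (P, Q) interface: the processor share the component may request."""
    return Q / P

def sbf_bounded_delay(t: float, alpha: float, delta: float) -> float:
    """Supply bound function of the (alpha, delta) bounded-delay model:
    the minimum processing time guaranteed in any interval of length t."""
    return max(0.0, alpha * (t - delta))

if __name__ == "__main__":
    # A component given budget Q=2 every period P=5 has bandwidth 0.4 ...
    alpha = bandwidth(Q=2, P=5)
    # ... and, modeled with delay delta=3, is guaranteed at least
    # 0.4 * (10 - 3) = 2.8 time units in any window of length 10.
    print(alpha, sbf_bounded_delay(10, alpha, delta=3))
```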
This paper introduces typy, a statically typed programming language embedded by reflection into Python. typy features a fragmentary semantics, i.e., it delegates semantic control over each term, drawn from Python's fixed concrete and abstract syntax, to some contextually relevant user-defined semantic fragment. The delegated fragment programmatically 1) typechecks the term (following a bidirectional protocol) and 2) assigns dynamic meaning to the term by computing a translation to Python. We argue that this design is expressive with examples of fragments that express the static and dynamic semantics of 1) functional records; 2) labeled sums (with nested pattern matching à la ML); 3) a variation on JavaScript's prototypal object system; and 4) typed foreign interfaces to Python and OpenCL. These semantic structures are, or would need to be, defined primitively in conventionally structured languages. We further argue that this design is compositionally well-behaved: it avoids the expression problem and the problems of grammar composition because the syntax is fixed, and programs are semantically stable under fragment composition (i.e., defining a new fragment will not change the meaning of existing program components).
As it becomes increasingly difficult to trick end users into installing and running executable files from unknown sources, attackers resort to stealthier techniques such as manipulation of DLL (Dynamic Link Library) files to compromise user computers. In this paper, we propose mechanisms that allow the hypervisor to conduct lightweight examination of DLL files and their running environment in guest virtual machines. Unlike approaches that focus on static analysis of DLL API call graphs, our mechanisms continuously examine their running states. In this way, malicious manipulations of DLL files that happen after they are loaded into memory can also be detected. To keep monitoring non-intrusive and reduce the impact on VM performance, we avoid examining the complete DLL file contents and instead focus on parameters such as the relative virtual addresses (RVAs) of their functions. We have implemented our approach in Xen and conducted experiments with more than 100 malware samples of different types. The results show that our approach can effectively detect the malware with very low overhead on guest VMs.
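As an illustration of the RVA-based checking idea (run outside a hypervisor, and not the paper's Xen-based mechanism), the following Python sketch uses the third-party pefile library to record the exported-function RVAs of a DLL and to flag exports whose RVAs later deviate from that baseline; the file path is a hypothetical example.

```python
import pefile  # third-party: pip install pefile

def export_rvas(dll_path: str) -> dict:
    """Map exported function names to their relative virtual addresses (RVAs)."""
    pe = pefile.PE(dll_path)
    return {
        exp.name.decode(): exp.address
        for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols
        if exp.name is not None
    }

def check_against_baseline(dll_path: str, baseline: dict) -> list:
    """Return names of exports whose RVA differs from the trusted baseline."""
    current = export_rvas(dll_path)
    return [name for name, rva in baseline.items() if current.get(name) != rva]

if __name__ == "__main__":
    # Hypothetical path: record a baseline from a known-good copy, then re-check later.
    baseline = export_rvas("C:/Windows/System32/example.dll")
    suspicious = check_against_baseline("C:/Windows/System32/example.dll", baseline)
    print("suspicious exports:", suspicious)
```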
Cloud providers typically implement abstractions for network virtualization on the server, within the operating system that hosts the tenant virtual machines or containers. Despite being flexible and convenient, this approach has fundamental problems: incompatibility with bare-metal support, unnecessary performance overhead, and susceptibility to hypervisor breakouts. To address these problems, we propose offloading the implementation of network-virtualization abstractions to the top-of-rack switch (ToR). To show that this is feasible and beneficial, we present VNToR, a ToR that takes over the implementation of the security-group abstraction. Our prototype combines commodity switching hardware with a custom software stack and is integrated in OpenStack Neutron. We show that VNToR can store tens of thousands of access rules, adapts to traffic-pattern changes in less than a millisecond, and significantly outperforms the state of the art.
Cloud computing allows clients to upload data and computation to untrusted servers, which leads to potential violations of the confidentiality of client data. We propose JCrypt, a static program analysis which transforms a Java program into an equivalent one that performs computation over encrypted data and preserves data confidentiality, while minimizing the amount of computation over encrypted data. JCrypt consists of two stages. The first stage is a type-based information flow analysis which partitions the program so that only the sensitive parts need to be encrypted. The second stage is an inter-procedural data-flow analysis, similar to the classical Available Expressions analysis, which deduces the appropriate encryption scheme for each sensitive variable. We implemented JCrypt for Java and showed, using five benchmark suites, that our analysis is effective and practical. JCrypt encrypts a significantly larger percentage of benchmarks compared to MrCrypt, the closest related work.
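As a rough illustration of what "deducing the appropriate encryption scheme" can mean in practice, the following Python sketch maps the operations observed on a sensitive variable to a partially homomorphic scheme that still supports them. The scheme names and rules below are our assumptions for illustration, not JCrypt's actual inference rules.

```python
# Illustrative mapping from operations performed on a sensitive variable to an
# encryption scheme that still supports them; names and rules are assumptions.
SCHEME_FOR_OP = {
    "+": "AH",    # additively homomorphic encryption (e.g., Paillier) supports +
    "==": "DET",  # deterministic encryption supports equality tests
    "<": "OPE",   # order-preserving encryption supports comparisons
}

def deduce_scheme(ops_on_variable):
    """Pick one scheme supporting all recorded operations, or report a conflict."""
    needed = {SCHEME_FOR_OP[op] for op in ops_on_variable}
    if not needed:
        return "RAND"  # data never computed on can stay randomized (strongest)
    if len(needed) > 1:
        return "CONFLICT: re-encryption or conversion needed"
    return needed.pop()

if __name__ == "__main__":
    print(deduce_scheme(["+", "+"]))   # AH
    print(deduce_scheme(["=="]))       # DET
    print(deduce_scheme(["+", "<"]))   # CONFLICT: ...
```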
Cloud computing is an evolving and dynamic platform that makes use of virtualization technology. In a cloud computing environment, virtualization abstracts the hardware system resources in software so that each application can run in an isolated environment called a virtual machine, and the hypervisor allocates virtual machines to the different users hosted on the same server. Although it provides many benefits such as resource sharing, cost efficiency, high-performance computing, and reduced hardware cost, it also introduces a number of security threats. These threats can target Virtual Machines (VMs) directly, or the hypervisor indirectly through the virtual machines hosted on it. This paper presents a review of possible security threats and their countermeasures using game-theoretic approaches. Game theory can serve as a defensive measure because cloud users make independent, strategic, and rational decisions, with each player competing for the best possible outcome.
Anti-reverse engineering is one of the core technologies of software intellectual property protection; the prevailing techniques are static and dynamic obfuscation. Static obfuscation mutates code before execution by compressing, encrypting, and obfuscating it, and can therefore only hinder static analysis. Dynamic obfuscation, which changes code while it is being executed, can hinder both static and dynamic analysis. Popular dynamic obfuscation techniques include self-modifying code and virtual machine protection. Despite its stronger protection, dynamic obfuscation has problems: 1) code remains exposed in plaintext for a long time; 2) the control flow can be exposed; and 3) the time and space overheads are large. This paper presents a binary protection scheme named dynFCHO that uses dynamic fine-grained code hiding and obfuscation. In this scheme, basic blocks to be protected are hidden in the original code and restored while being executed; code obfuscation is additionally applied to enhance safety. Experiments show that dynFCHO can effectively resist static and dynamic analysis without breaking the original software's functionality. It can be used on most binary programs compiled by standard compilers and can be widely deployed, with the advantages of strong protection, lightweight implementation, and good extensibility.
Network flow classification is fundamental to network management and network security. However, it is challenging to classify network flows at very high line rates while simultaneously preserving user privacy. Machine learning based classification techniques utilize only the meta-information of a flow and have been shown to be effective in identifying network flows. We analyze a group of widely used machine learning classifiers and observe that the effectiveness of different classification models depends highly upon the protocol types as well as the flow features collected from network data. We propose vTC, a design of virtual network functions to flexibly select and apply the most suitable machine learning classifier at run time. The experimental results show that the proposed NFV for flow classification can improve classification accuracy by up to 13%.
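A minimal sketch of the classifier-selection step, using scikit-learn and synthetic flow meta-features; the feature set, labels, and candidate models are illustrative assumptions, not vTC's actual implementation. In vTC's setting, such a selection would be repeated per protocol type.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Candidate models to choose among for a given protocol's flows.
CANDIDATES = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

def best_classifier(X, y):
    """Pick the candidate with the highest cross-validated accuracy."""
    scores = {name: cross_val_score(model, X, y, cv=5).mean()
              for name, model in CANDIDATES.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic flow meta-features: e.g., packet count, mean packet size, duration.
    X = rng.normal(size=(300, 3))
    y = (X[:, 1] > 0).astype(int)  # synthetic application-class labels
    name, scores = best_classifier(X, y)
    print("selected:", name, scores)
```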
We demonstrate the infrastructure used in the TREC 2015 Total Recall track to facilitate controlled simulation of "assessor in the loop" high-recall retrieval experimentation. The implementation and corresponding design decisions are presented for this platform, including the considerations necessary to ensure that experiments are privacy-preserving when using test collections that cannot be distributed. Furthermore, we describe the use of virtual machines as a means of system submission in order to promote replicable experiments while also ensuring the security of system developers and data providers.
The record-and-replay approach to software testing is important and valuable for developers designing mobile applications. However, existing solutions for recording and replaying Android applications are far from perfect. Considering the richness of mobile phones' input capabilities, including the touch screen, sensors, GPS, etc., existing approaches either fall short of covering all these input types or require elevated privileges that are not easily attained and can be dangerous. In this paper, we present a novel system, called MobiPlay, which aims to improve record-and-replay testing. Through collaboration between a mobile phone and a server, we are the first to capture all possible inputs by doing so at the application layer, instead of at the Android framework layer or the Linux kernel layer, which would be infeasible without a server. MobiPlay runs the to-be-tested application on the server under exactly the same environment as the mobile phone and displays the application's GUI in real time on a thin client installed on the mobile phone, so that from the user's perspective the application appears to be local. We have implemented our system and evaluated it with tens of popular mobile applications, showing that MobiPlay is efficient, flexible, and comprehensive: it can record all input data, including sensor data, touchscreen gestures, and GPS; it can record and replay on both the mobile phone and the server; and it is suitable for both white-box and black-box testing.
Virtual machine live migration technology, an important enabler of cloud computing, has become a central research topic in recent years. A virtual machine's runtime environment is migrated from the original physical server to another physical server while the virtual machine keeps running, which enables load balancing among servers and helps ensure quality of service. However, the security of virtual machine migration cannot be ignored, as the technology is still immature. In this paper, we analyze the security threats of virtual machine migration and compare the currently proposed protection measures, which either rely on hardware or lack adequate security and extensibility. We then propose a security model for live virtual machine migration based on security policy transfer and encryption, named SPLM (Security Protection of Live Migration), and analyze its security and reliability, showing that SPLM is better than existing approaches. The security study of live virtual machine migration in this paper provides a reference for research on virtualization security.
We investigate the resiliency of wireless sensor networks against sensor capture attacks when the network uses the random pairwise key distribution scheme of Chan et al. We present conditions on the model parameters so that the network is: 1) unassailable and 2) unsplittable, both with high probability, as the number n of sensor nodes becomes large. Both notions are defined against an adversary who has unlimited computing resources and full knowledge of the network topology, but can only capture a negligible fraction o(n) of sensors. We also show that the number of cryptographic keys needed to ensure unassailability and unsplittability under the pairwise key predistribution scheme is an order of magnitude smaller than it is under the key predistribution scheme of Eschenauer and Gligor.
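The following Python sketch illustrates the random pairwise key predistribution idea: each node is matched with m randomly chosen partners, and each matched pair receives a unique key known only to its two endpoints, so a captured node exposes no key used between other pairs. The parameters are illustrative, and this is not the exact parameterization analyzed in the paper.

```python
import os
import random

def random_pairwise_predistribution(n: int, m: int, seed: int = 0):
    """Give each of n nodes a unique pairwise key with m randomly chosen partners.

    Every key is known to exactly two nodes, so capturing one node never
    reveals keys used between any other pair of nodes.
    """
    rng = random.Random(seed)
    keyrings = {node: {} for node in range(n)}
    for node in range(n):
        partners = rng.sample([v for v in range(n) if v != node], m)
        for partner in partners:
            if partner not in keyrings[node]:       # this pair not yet keyed
                key = os.urandom(16)                # fresh 128-bit pairwise key
                keyrings[node][partner] = key
                keyrings[partner][node] = key
    return keyrings

if __name__ == "__main__":
    rings = random_pairwise_predistribution(n=1000, m=5)
    # Average keyring size is roughly 2*m, far below the n-1 keys of full pairwise keying.
    print(sum(len(r) for r in rings.values()) / 1000)
```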
This paper presents IPAS, an instruction duplication technique that protects scientific applications from silent data corruption (SDC) in their output. The motivation for IPAS is that, due to natural error masking, only a subset of SDC errors actually affects the output of scientific codes—we call these errors silent output corruption (SOC) errors. Thus applications require duplication only on code that, when affected by a fault, yields SOC. We use machine learning to learn code instructions that must be protected to avoid SOC, and, using a compiler, we protect only those vulnerable instructions by duplication, thus significantly reducing the overhead that is introduced by instruction duplication. In our experiments with five workloads, IPAS reduces the percentage of SOC by up to 90% with a slowdown that ranges between 1.04x and 1.35x, which corresponds to as much as 47% less slowdown than state-of-the-art instruction duplication techniques.
Fault tolerance is a key challenge to building the first exascale system. To understand the potential impacts of failures on next-generation systems, significant effort has been devoted to collecting, characterizing and analyzing failures on current systems. These studies require large volumes of data and complex analysis. Because the occurrence of failures in large-scale systems is unpredictable, failures are commonly modeled as a stochastic process. Failure data from current systems is examined in an attempt to identify the underlying probability distribution and its statistical properties. In this paper, we use modeling to examine the impact of failure distributions on the time-to-solution and the optimal checkpoint interval of applications that use coordinated checkpoint/restart. Using this approach, we show that as failures become more frequent, the failure distribution has a larger influence on application performance. We also show that as failure times are less tightly grouped (i.e., as the standard deviation increases) the underlying probability distribution has a greater impact on application performance. Finally, we show that computing the checkpoint interval based on the assumption that failures are exponentially distributed has a modest impact on application performance even when failures are drawn from a different distribution. Our work provides critical analysis and guidance to the process of analyzing failure data in the context of coordinated checkpoint/restart. Specifically, the data presented in this paper helps to distinguish cases where the failure distribution has a strong influence on application performance from those cases when the failure distribution has relatively little impact.
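For reference, the exponential-failure baseline referred to above is typically computed with the classical first-order (Young) approximation of the optimal checkpoint interval, where δ is the checkpoint cost and M is the system mean time between failures; the notation here is ours, not the paper's.

```latex
% Young's first-order approximation of the optimal checkpoint interval,
% derived under the assumption of exponentially distributed failures;
% Daly's refinement adds higher-order correction terms in \delta / M.
\tau_{\mathrm{opt}} \approx \sqrt{2\,\delta\, M}
```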
The deterministic nature of existing routing protocols has resulted in an ossified Internet with static and predictable network routes. This gives persistent attackers (e.g., eavesdroppers and DDoS attackers) plenty of time to study the network and identify the vulnerable (critical) links to plan devastating and stealthy attacks. Recently, Moving Target Defense (MTD) based approaches have been proposed to defend against DoS attacks. However, MTD-based approaches for route mutation are oriented towards re-configuring parameters in Local Area Networks (LANs) and do not provide any protection against infrastructure-level attacks, which inherently limits their use for mission-critical services over the Internet infrastructure. To cope with these issues, we extend the current routing architecture to consider end-hosts as routing elements and present a formal-method-based agile defense mechanism to embed resiliency in the existing cyber infrastructure. The major contributions of this paper are: (1) formalization of the efficient and resilient End-to-End (E2E) reachability problem as a constraint satisfaction problem, which identifies the potential end-hosts through which to reach a destination while satisfying resilience and QoS constraints; (2) design and implementation of a novel decentralized End Point Route Mutation (EPRM) protocol; and (3) design and implementation of a planning algorithm that minimizes the overlap between multiple flows, in order to maximize the agility of the system. Our PlanetLab-based implementation and evaluation validates the correctness, effectiveness, and scalability of the proposed approach.
In-depth consideration and evaluation of security and resilience is necessary for developing the scientific foundations and technology of Cyber-Physical Systems (CPS). In this demonstration, we present SURE [1], a CPS experimentation and evaluation testbed for security and resilience focusing on transportation networks. The testbed includes (1) a heterogeneous modeling and simulation integration platform, (2) a Web-based tool for modeling CPS in adversarial environments, and (3) a framework for evaluating resilience using attacker-defender games. Users such as CPS designers and operators can interact with the testbed to evaluate monitoring and control schemes that include sensor placement and traffic signal configuration.
At a very high level of abstraction, the Internet of Things (IoT) can be modeled as a hyper-scale, hyper-complex cyber-physical system. Studying the resilience of IoT systems is the first step toward engineering future IoT ecosystems, and exploration of this domain is a highly promising avenue for many aspiring Ph.D. and M.Sc. students.
We present a first-of-its-kind framework which overcomes a major challenge in the design of digital systems that are resilient to reliability failures: achieving desired resilience targets at minimal cost (energy, power, execution time, area) by combining resilience techniques across various layers of the system stack (circuit, logic, architecture, software, algorithm). This is also referred to as cross-layer resilience. In this paper, we focus on radiation-induced soft errors in processor cores. We address both single-event upsets (SEUs) and single-event multiple upsets (SEMUs) in terrestrial environments. Our framework automatically and systematically explores the large space of comprehensive resilience techniques and their combinations across various layers of the system stack (798 cross-layer combinations in this paper), derives cost-effective solutions that achieve resilience targets at minimal costs, and provides guidelines for the design of new resilience techniques. We demonstrate the practicality and effectiveness of our framework using two diverse designs: a simple, in-order processor core and a complex, out-of-order processor core. Our results demonstrate that a carefully optimized combination of circuit-level hardening, logic-level parity checking, and micro-architectural recovery provides a highly cost-effective soft error resilience solution for general-purpose processor cores. For example, a 50× improvement in silent data corruption rate is achieved at only 2.1% energy cost for an out-of-order core (6.1% for an in-order core) with no speed impact. However, selective circuit-level hardening alone, guided by a thorough analysis of the effects of soft errors on application benchmarks, also provides a cost-effective soft error resilience solution (with approximately 1% additional energy cost for a 50× improvement in silent data corruption rate).
Security isolation is a foundation of computing systems that enables resilience to different forms of attacks. This article seeks to understand existing security isolation techniques by systematically classifying different approaches and analyzing their properties. We provide a hierarchical classification structure for grouping different security isolation techniques. At the top level, we consider two principal aspects: mechanism and policy. Each aspect is broken down into salient dimensions that describe key properties: we break the mechanism aspect into two dimensions, enforcement location and isolation granularity, and the policy aspect into three dimensions, policy generation, policy configurability, and policy lifetime. We apply our classification to a set of representative articles that cover a breadth of security isolation techniques and discuss tradeoffs among different design choices and limitations of existing approaches.
The premise of the SafeConfig'16 Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification of existing approaches, and potentially wholesale new developments, to ensure that resilience- and agility-aware security testing is available to the research community. All testing, validation, and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable, and meaningful to both researchers and practitioners. The workshop will convene a panel of experts to explore this concept from three different perspectives. The first perspective is that of the practitioner: we will explore whether active and resilient technologies are deployed or planned for deployment, and whether the verification methodology affects that decision. The second perspective is that of the research community: we will address the shortcomings of current approaches and the research directions needed to address the practitioners' concerns. The third perspective is that of the policy community: specifically, we will explore the dynamics between technology, verification, and policy.
The emerging trends of volatile distributed energy resources and micro-grids are putting pressure on electrical power system infrastructure. This pressure is motivating the integration of digital technology and advanced power-industry practices to improve the management of distributed electricity generation, transmission, and distribution, thereby creating a web of systems. Unlike legacy power system infrastructure, however, this emerging next-generation smart grid should be context-aware and adaptive to enable the creation of applications needed to enhance grid robustness and efficiency. This paper describes key factors that are driving the architecture of smart grids and the orchestration middleware needed to make the infrastructure resilient. We use adaptive protection logic in smart grid substations as a use case to motivate the need for context-awareness and adaptivity.
We are confronted with the threat of theft of user-ID/password information through phishing attacks; authentication using a user ID and password alone is no longer safe. PKI-based authentication offers a safer mechanism. At our university, the Japan Advanced Institute of Science and Technology (JAIST), we deployed an On Demand Digital Certificate Issuing System for our users and employ PKI-based client certificates for logging on to web applications, connecting to wireless networks (including eduroam), using the VPN service, and signing email. In addition, the National Institute of Informatics (NII), which provides common ICT infrastructure services for Japanese universities and institutes, started a service to issue client certificates this year, so the use of electronic certificates should become more widespread in Japan within a few years. However, there are still few cases of certificate-based authentication being deployed in university infrastructure, and many operational tips and issues remain. In this paper, we introduce the use case of electronic certificates at JAIST, discuss the challenges and issues, and consider future prospects.