Biblio
Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice, and more challenging for formal protocol verification, to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model in which the intruder can consume at most a specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and that specify the service's resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows a finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as Slow, Asymmetric, and Amplification DoS attacks, which exhaust different types of target resources, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and is PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in an automated fashion using Rewriting Modulo SMT.
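To make the resource-bounded intruder constraint concrete, the following is a minimal sketch (not the paper's Maude model) of checking that a trace of intruder actions never consumes more than a budget within any sliding time window; the trace format, budget, and window values are illustrative assumptions.

# Sketch: verify a resource-bounded intruder constraint on a trace.
# Each event is (timestamp, cost); the intruder may consume at most
# `budget` resource units within any window of length `window`.
# The encoding is illustrative, not the paper's Maude theory.

def within_budget(trace, budget, window):
    """Return True iff every time window of length `window`
    contains events whose total cost is at most `budget`."""
    trace = sorted(trace)                      # order by timestamp
    start = 0
    spent = 0
    for t, cost in trace:
        spent += cost
        # Drop events that fall outside the window ending at time t.
        while trace[start][0] <= t - window:
            spent -= trace[start][1]
            start += 1
        if spent > budget:
            return False                       # budget exceeded in some window
    return True

# Example: three requests of cost 4 within 2 time units exceed a budget of 10.
attack = [(0, 4), (1, 4), (2, 4)]
print(within_budget(attack, budget=10, window=2.5))  # False
print(within_budget(attack, budget=12, window=2.5))  # True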
The increasing digitalization of various industrial domains is leading developers towards the design and implementation of more and more complex networked control systems (NCS) supported by Wireless Sensor Networks (WSN). This naturally raises new challenges for current WSN technology, namely improved guarantees of technical aspects such as real-time communication together with safe and secure transmissions. Notably, concerning security, several cryptographic protocols have been proposed. Since the design of these protocols is usually error-prone, security breaches can still be exposed and maliciously exploited unless the protocols are rigorously analyzed and verified. In this paper we formally verify, using ProVerif, three cryptographic protocols used in WSN, with respect to the security properties of secrecy and authenticity. The security analysis performed in this paper is more robust than those performed in related work. Our contributions involve analyzing protocols modeled with an unbounded number of participants and actions, and the use of a hierarchical system to classify the authenticity results. Our verification shows that the three analyzed protocols guarantee secrecy, but can only provide authenticity in specific scenarios.
In this paper, we show how practical the little theorem of witness functions is in detecting security flaws in some categories of cryptographic protocols. We conduct a formal analysis of the Needham-Schroeder symmetric-key protocol in the theory of witness functions. We show how the theorem warns of a security vulnerability in a given step of this protocol, where the security value of a sensitive ticket in a sent message unexpectedly decreases compared with its value when received. This vulnerability may be exploited by an intruder to mount a replay attack, as described by Denning and Sacco.
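As a rough illustration of the kind of check described above (the actual witness-function machinery is more involved), one can compare the security level assigned to a sensitive component when an agent receives it with its level when the agent re-sends it; a decrease flags a potentially vulnerable step. The step names and levels below are hypothetical, not the actual Needham-Schroeder analysis.

# Illustrative only: the criterion requires that the security level of a
# sensitive component never decreases between the message in which it is
# received and the message in which it is sent out again.
# Steps and levels are hypothetical.

def flags_decrease(steps):
    """steps: list of (step_name, level_received, level_sent).
    Returns the steps where the component's protection level drops."""
    return [name for name, recv, sent in steps if sent < recv]

protocol = [
    ("step2: S -> A", 3, 3),   # ticket protected as well as when received
    ("step3: A -> B", 3, 1),   # ticket's protection drops: vulnerable step
]
print(flags_decrease(protocol))  # ['step3: A -> B']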
Accountability is a recent paradigm in security protocol design which aims to eliminate traditional trust assumptions on parties and hold them accountable for their misbehavior. It is meant to establish trust in the first place and to recognize and react if this trust is violated. In this work, we discuss a protocol-agnostic definition of accountability: a protocol provides accountability (w.r.t. some security property) if it can identify all misbehaving parties, where misbehavior is defined as a deviation from the protocol that causes a security violation. We provide a mechanized method for the verification of accountability and demonstrate its use for verification and attack finding on various examples from the accountability and causality literature, including Certificate Transparency and Kroll's Accountable Algorithms protocol. We reach a high degree of automation by expressing accountability in terms of a set of trace properties and showing their soundness and completeness.
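A minimal sketch of the trace-based view of accountability used above: a verdict function must blame exactly the deviating parties on every trace that violates the security property. The trace encoding and verdict function here are illustrative assumptions, not the paper's formalism.

# Sketch: accountability as a trace property. A protocol is accountable
# (w.r.t. a property) if, on every trace that violates the property,
# the verdict names exactly the parties whose deviation caused it.

def accountable(traces, verdict):
    """traces: list of dicts with keys
         'violated'  : bool, did the security property fail?
         'deviators' : set of parties whose misbehavior caused the failure
       verdict: function trace -> set of blamed parties."""
    for trace in traces:
        blamed = verdict(trace)
        if trace["violated"] and blamed != trace["deviators"]:
            return False   # missed a misbehaving party, or blamed a wrong one
        if not trace["violated"] and blamed:
            return False   # blamed someone although no violation occurred
    return True

traces = [
    {"violated": False, "deviators": set()},
    {"violated": True,  "deviators": {"logserver"}},
]
verdict = lambda tr: set(tr["deviators"])    # an ideal verdict for this toy set
print(accountable(traces, verdict))          # True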
A growing amount of research on IoT and its implications for security, privacy, the economy, and society has been carried out to inform policies and design. However, ordinary people, the citizens and users of these emerging technologies, have rarely been involved in the processes that inform these policies, governance mechanisms, and designs, due to institutionalised processes that prioritise objective knowledge over subjective knowledge. People's subjective experiences are often discarded. This priority is likely to further widen the gap between people, technology policies, and design as technologies advance towards delegated human agency, which reduces the human interfaces in technology-mediated relationships with objects, systems, services, trade, and other (often) unknown third-party beneficiaries. Such a disconnection can have serious implications for policy implementation, especially when it involves human limitations. To address this disconnection, we argue that a space needs to be constructed for people to meaningfully contribute their subjective knowledge, that is, their experience, to the complex technology policies that, in turn, shape their experience and well-being. To this end, our paper contributes the design and pilot implementation of a method to reconnect and involve people in IoT security policymaking and development.
Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge on composition of the microservices. Related approaches to preventing such leaks do not take into account characteristics of cyber-physical systems such as real-time behavior or message-passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check whether the real-time message passing of the microservices is secure. In combination, our contributions prevent information leaks from emerging on composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
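One way to picture the well-formedness requirement: composing the micro-level policies must not enable an information flow that the macro-level policy forbids, including transitive flows through intermediate microservices. The sketch below checks this on a toy flow graph; the policy encoding is an illustrative assumption, not the paper's rule set.

# Sketch: refinement check for information-flow policies.
# Macro policy: allowed end-to-end flows between domains.
# Micro policies: allowed direct message flows between microservices.
# An ill-formed refinement lets composition create a transitive flow
# that the macro policy forbids.

from itertools import product

def transitive_flows(direct):
    """Transitive closure of the direct micro-level flows."""
    flows = set(direct)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(flows), repeat=2):
            if b == c and (a, d) not in flows:
                flows.add((a, d))
                changed = True
    return flows

macro_allowed = {("sensor", "controller"), ("controller", "actuator"),
                 ("sensor", "actuator")}
micro_direct  = {("sensor", "controller"), ("controller", "actuator"),
                 ("controller", "logger")}   # logger flow not in macro policy

leaks = {f for f in transitive_flows(micro_direct) if f not in macro_allowed}
print(leaks)  # flows emerging on composition that violate the macro policy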
In this article, the combination of secret sharing schemes with the requirements of a discretionary security policy is considered. The secret sharing schemes of Shamir and Blakley are investigated. Conditions on the scheme parameters that give rise to forbidden information channels are derived, and ways to conceal these forbidden channels are suggested. Three modifications of Shamir's scheme and two modifications of Blakley's scheme are proposed. A transition from polynomials to exponential functions for forming the shares of a secret is carried out. The problem of masking the presence of the forbidden information channels is solved, and several approaches with complete and partial concealment are suggested.
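For reference, a minimal sketch of Shamir's original polynomial scheme, the baseline the article's modifications start from: the dealer hides the secret in the constant term of a random degree-(t-1) polynomial over a prime field and distributes point evaluations as shares; any t shares reconstruct the secret by Lagrange interpolation. The prime and parameter values below are toy choices.

# Minimal sketch of Shamir's (t, n) threshold scheme over GF(p).
import random

P = 2**127 - 1   # a Mersenne prime; any sufficiently large prime works

def make_shares(secret, t, n, p=P):
    """Hide `secret` as f(0) of a random degree-(t-1) polynomial;
    share i is the point (i, f(i) mod p)."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(secret=42, t=3, n=5)
print(reconstruct(shares[:3]))  # 42, recoverable from any 3 of the 5 shares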
A method is proposed for assessing the degree of compliance of the divisions of a complex distributed corporate information system with a number of information security indicators. Applying the methodology yields a comparative assessment of how well each division complies with the corporate information security policy requirements. This assessment may then be used by the corporation's management to decide on measures that minimize the risks arising from the possible realization of information security threats.
Despite the wide range of research and technologies that deal with the problem of routing in computer networks, there remains a gap between the level of network hardware administration and the level of business requirements and constraints. Little has been accomplished in the literature toward directly enforcing such requirements on the network. This paper presents a new solution for specifying and directly enforcing security policies to control the routing configuration in a software-defined network, using Row-Level Security checks that enable fine-grained security policies on individual rows of database tables. We show, as a first step, how a specific class of such policies, namely multilevel security policies, can be enforced on a database-defined network, which presents an abstraction of a network's configuration as a set of database tables. We show that such policies can be used to control the flow of data in the network in either an upward or a downward manner.
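To make the mechanism concrete, here is a hedged sketch of the multilevel check that a Row-Level Security predicate could express on a hypothetical flow table of a database-defined network; the table, column, and setting names are illustrative assumptions, not the paper's schema.

# In PostgreSQL-style RLS the "no read up" rule could look like
# (names illustrative):
#   ALTER TABLE flows ENABLE ROW LEVEL SECURITY;
#   CREATE POLICY no_read_up ON flows
#     USING (row_level <= current_setting('app.subject_level')::int);
# Below, the same Bell-LaPadula-style rules in plain Python.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def can_read(subject, row):
    # "no read up": a subject only sees rows at or below its level
    return LEVELS[subject] >= LEVELS[row]

def can_write(subject, row):
    # "no write down": a subject only writes rows at or above its level
    return LEVELS[subject] <= LEVELS[row]

flows = [  # hypothetical rows of the network's flow table
    {"src": "h1", "dst": "h2", "level": "unclassified"},
    {"src": "h3", "dst": "h4", "level": "secret"},
]
visible = [f for f in flows if can_read("confidential", f["level"])]
print(visible)                                # only the unclassified flow
print(can_write("confidential", "secret"))    # True: writing upward allowed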
Security policy is widely used in network management systems to ensure network security, so it is necessary to detect and resolve conflicts in security policies. This paper analyzes the shortcomings of existing security policy conflict detection methods and proposes a B+ tree-based conflict detection method. First, the security policy is dimensioned so that each attribute corresponds to one dimension. Then, a layer of B+ tree index is constructed for each dimension, and each rule is uniquely mapped through multiple layers of nested indexes. This method greatly improves the efficiency of conflict detection. The experimental results show that the method has very stable performance, effectively prevents conflicts, and detects the type of policy conflict quickly and accurately.
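A minimal sketch of the per-dimension indexing idea: each rule attribute becomes a dimension with its own sorted index (here bisect over sorted interval starts stands in for a B+ tree layer), and candidate conflicts are the rules whose intervals overlap in every dimension. The rule format is an illustrative assumption.

# Sketch: per-dimension index lookup for policy-conflict detection.
# Two rules can conflict only if their attribute intervals overlap in
# every dimension; a sorted-start index stands in for each B+ tree layer.
import bisect

rules = {
    "r1": {"src": (10, 20), "dst": (30, 40), "port": (80, 80)},
    "r2": {"src": (15, 25), "dst": (35, 45), "port": (80, 80)},
    "r3": {"src": (50, 60), "dst": (30, 40), "port": (22, 22)},
}

def build_index(rules, dim):
    """Sorted (interval_start, rule_name) pairs for one dimension."""
    return sorted((r[dim][0], name) for name, r in rules.items())

def overlapping(rules, dim, interval, index):
    """Rules whose `dim` interval overlaps `interval`."""
    lo, hi = interval
    starts = [s for s, _ in index]
    cut = bisect.bisect_right(starts, hi)         # prune rules starting after hi
    return {name for s, name in index[:cut] if rules[name][dim][1] >= lo}

def conflict_candidates(rules, query):
    """Intersect per-dimension candidate sets, as nested index layers would."""
    cands = None
    for dim, interval in query.items():
        hits = overlapping(rules, dim, interval, build_index(rules, dim))
        cands = hits if cands is None else cands & hits
    return cands

print(conflict_candidates(rules, {"src": (18, 22), "dst": (38, 42),
                                  "port": (80, 80)}))  # {'r1', 'r2'}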
A zero-day attack exploits an undiscovered vulnerability in order to affect or damage networks or programs. The term "zero-day" refers to the number of days available to the software or hardware vendor to issue a patch for this new vulnerability. Currently, the best-known defense mechanism against zero-day attacks focuses on detection and response, as a prevention effort, which typically fails against unknown or new vulnerabilities. To the best of our knowledge, this attack has not been widely investigated for Software-Defined Networks (SDNs). Therefore, in this work we develop a new zero-day attack detection and prevention mechanism, designed and implemented for SDN using a modified sandbox tool named Cuckoo. Our experimental results, obtained on a UNIX system, show that our proposed design successfully stops zero-day malware by isolating the infected client, and thus prevents the malware from infecting other clients.
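A conceptual sketch of the isolation step described above: when the sandbox flags a sample from a client as malicious, the controller installs a drop rule for that client. The verdict format and the flow-rule representation are illustrative assumptions; no real Cuckoo or SDN controller API is called here.

# Conceptual detect-then-isolate loop: a sandbox verdict on a sample
# triggers a drop rule for the infected client.

flow_table = []   # stand-in for rules pushed to the switch

def install_drop_rule(client_ip):
    rule = {"match": {"src_ip": client_ip}, "action": "DROP",
            "priority": 100}
    flow_table.append(rule)
    return rule

def handle_sandbox_verdict(verdict):
    """verdict: {'client': ip, 'score': float, 'malicious': bool},
    e.g. as parsed from a sandbox report."""
    if verdict["malicious"]:
        return install_drop_rule(verdict["client"])   # isolate the client
    return None

handle_sandbox_verdict({"client": "10.0.0.7", "score": 9.2,
                        "malicious": True})
print(flow_table)  # traffic from 10.0.0.7 is now dropped at the switch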
Java is a safe programming language: it provides bytecode verification and enforces memory protection. For instance, programmers cannot directly access memory but must use object references. Yet, the Java runtime provides the Unsafe API as a backdoor for developers to access low-level system code. Whereas the Unsafe API is designed to be used by the Java core library, a growing community of third-party libraries uses it to achieve high performance. The Unsafe API is powerful but dangerous; used improperly, it leads to data corruption, resource leaks, and difficult-to-diagnose JVM crashes. In this work, we study Unsafe crash patterns and propose a memory checker that enforces memory safety at the bytecode level, thus avoiding JVM crashes caused by misuse of the Unsafe API. We evaluate our technique on real crash cases from the OpenJDK bug system and real-world applications from AJDK. Our tool reduces the effort for developers to diagnose Unsafe-related crashes from several days to a few minutes. We also evaluate the runtime overhead of our tool on projects that use Unsafe operations intensively, and the results show that it causes negligible perturbation to the execution of the applications.
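To illustrate the checker idea (the paper's tool works at the Java bytecode level on the real Unsafe API), the sketch below models a shadow allocator: every raw access is validated against the set of live allocations, so an out-of-bounds or use-after-free access raises an error instead of crashing the VM. All names are illustrative.

# Conceptual model of the memory checker: track live allocations made
# through an Unsafe-like API and validate every raw access against them.

class CheckedUnsafe:
    def __init__(self):
        self._live = {}        # base address -> size
        self._next = 0x1000    # fake address space

    def allocate_memory(self, size):
        base = self._next
        self._next += size
        self._live[base] = size
        return base

    def free_memory(self, base):
        self._live.pop(base)   # KeyError here models a double free

    def _check(self, addr, width):
        for base, size in self._live.items():
            if base <= addr and addr + width <= base + size:
                return
        raise MemoryError(f"out-of-bounds or freed access at {addr:#x}")

    def put_long(self, addr, value):
        self._check(addr, 8)   # validate before the raw 8-byte write
        # ... the real write would happen here ...

mem = CheckedUnsafe()
base = mem.allocate_memory(16)
mem.put_long(base + 8, 42)         # in bounds: ok
try:
    mem.put_long(base + 16, 42)    # off the end: caught, not a JVM crash
except MemoryError as e:
    print(e)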
Much recent work focuses on finding bugs and security vulnerabilities in smart contracts written in existing languages. Although this approach may be helpful, it does not address flaws in the underlying programming language, which can facilitate writing buggy code in the first place. We advocate a re-thinking of the blockchain software engineering tool set, starting with the programming language in which smart contracts are written. In this paper, we propose and justify requirements for a new generation of blockchain software development tools. New tools should (1) consider users' needs as a primary concern; (2) seek to facilitate safe development by detecting relevant classes of serious bugs at compile time; (3) as much as possible, be blockchain-agnostic, given the wide variety of different blockchain platforms available, and leverage the properties that are common among blockchain environments to improve safety and developer effectiveness.
Software rejuvenation has been proposed as a strategy to protect cyber-physical systems (CPSs) against unanticipated and undetectable cyber attacks. The basic idea is to refresh the system periodically with a secure and trusted copy of the online software, so as to eliminate all effects of malicious modifications to the run-time code and data. This paper considers software rejuvenation design from a control-theoretic perspective. Invariant sets of the Lyapunov function for the safety controller are used to derive bounds on the time that the CPS can operate in mission control mode before the software must be refreshed. With these results it can be guaranteed that the CPS will remain safe under cyber attacks against the run-time system. The approach is illustrated using simulation of the nonlinear dynamics of a quadrotor system. The concluding section discusses directions for further research.
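The control-theoretic idea can be summarized with a standard invariant-set argument; the inequalities below are a generic sketch of how such a mission-mode time bound arises, not the paper's exact derivation.

% Generic sketch of the invariant-set timing argument. V is a Lyapunov
% function for the safety controller, and
\[
  \Omega_c = \{\, x : V(x) \le c \,\}
\]
% is a safe invariant set. Suppose that in mission-control mode, possibly
% under attack, the growth of V along trajectories is bounded by
% $\dot V \le \alpha V$ for some $\alpha > 0$. Starting a mission phase
% from $V(x(0)) \le c_0 < c$, safety is preserved as long as
\[
  V(x(t)) \le c_0\, e^{\alpha t} \le c
  \quad\Longleftrightarrow\quad
  t \le T_{\max} = \frac{1}{\alpha} \ln \frac{c}{c_0},
\]
% so the software must be refreshed, and the state returned to
% $\Omega_{c_0}$ by the safety controller, at least every $T_{\max}$
% time units.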
In a monolithic operating system (OS), any error in system software can be exploited to destroy the whole system. The situation becomes much more severe in cloud environments, where the kernel and the hypervisor share the same address space. The security of guest Virtual Machines (VMs), both their sensitive data and vital code, can no longer be guaranteed once the hypervisor is compromised. Therefore, it is essential to deploy security approaches that secure VMs regardless of whether the hypervisor is safe. Some existing approaches propose a microhypervisor to reduce the attack surface, or new software that requires a higher privilege level than the hypervisor. In this paper, we propose a novel approach, named HyperPS, which separates the fundamental and crucial privileges into a new trusted environment in order to monitor the hypervisor. A pivotal condition for HyperPS is that the hypervisor must not be allowed to manipulate any security-sensitive system resources, such as page tables, system control registers, interactions between VMs and the hypervisor, and VM memory mappings. Moreover, HyperPS provides a trusted environment that does not rely on any privilege level higher than the hypervisor's. We have implemented a prototype for the KVM hypervisor on the x86 platform with multiple VMs running Linux. KVM with HyperPS can be applied to the current commercial cloud computing industry with portability. The security analysis shows that this approach provides effective monitoring against attacks, and the performance evaluation confirms the efficiency of HyperPS.
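A conceptual sketch of the monitoring condition: writes by the hypervisor to security-sensitive resources are trapped and only permitted when they come through the trusted environment. The event format and resource names are illustrative assumptions, not the HyperPS implementation.

# Conceptual HyperPS-style monitoring: the deprivileged hypervisor can
# no longer write security-sensitive state directly; such writes must
# go through the trusted environment, which vets them.

SENSITIVE = {"page_tables", "control_registers", "vm_memory_map",
             "vm_hypervisor_channel"}

def monitor(event):
    """event: {'actor': ..., 'op': ..., 'resource': ...};
    returns True if the access may proceed."""
    if event["resource"] in SENSITIVE and event["op"] == "write":
        # only the trusted environment may touch sensitive state
        return event["actor"] == "trusted_env"
    return True

print(monitor({"actor": "hypervisor", "op": "write",
               "resource": "page_tables"}))        # False: trapped
print(monitor({"actor": "trusted_env", "op": "write",
               "resource": "page_tables"}))        # True: vetted path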
Cryptographic cloud computing may offer an innovative, safe cloud computing design. Cloud computing is a large-scale distributed computing model driven by economies of scale. It integrates a set of abstracted, virtualized, dynamically scalable, and managed resources, such as computing power, storage, platforms, and services. External end users can access these resources over the network using terminals, especially mobile terminals; the cloud's architecture follows the on-demand trend, in which resources are dynamically assigned to a user upon request and handed back when the task is finished. This paper therefore proposes biometric encryption to improve the confidentiality of biometric data in cloud computing. The paper also discusses virtualization for cloud computing as well as data encryption. Indeed, the paper reviews the security weaknesses of cloud computing and the ways in which biometric encryption can improve confidentiality in a cloud computing environment. Beyond this, confidentiality is enhanced in cloud computing by applying biometric encryption to biometric data; the novelty of the biometric encryption approach lies in reinforcing the confidentiality of biometric data in cloud computing. Implementing an identification mechanism can take the security of information and access management in the cloud to a higher level. The paper discusses how the proposed biometric system is more advantageous and results-oriented than other recognition systems to date, because it does not rest on presumptions: it is distinctive and provides fast, contactless authentication. Finally, the paper reviews the new cryptographic techniques used to protect encrypted information in remote cloud storage.
Agile methods frequently have difficulties with qualities, often specifying quality requirements as stories, e.g., "As a user, I need a safe and secure system." Such projects will generally schedule some capability releases followed by safety and security releases, only to discover user-developer misunderstandings and unsecurable agile code, leading to project failure. Very large agile projects also have further difficulties with project velocity and scalability. Examples are trying to use daily standup meetings, 2-week sprints, shared tacit knowledge vs. documents, and dealing with user-developer misunderstandings. At USC, our Parallel Agile, Executable Architecture research project shows some success at mid-scale (50 developers). We also examined several large (hundreds of developers) TRW projects that had succeeded with rapid, high-quality development. The paper elaborates on their common Critical Quality Factors: a concurrent 3-team approach, an empowered Keeper of the Project Vision, and a management approach emphasizing qualities.