Biblio
Accountability is a recent paradigm in security protocol design which aims to eliminate traditional trust assumptions on parties and hold them accountable for their misbehavior. It is meant to establish trust in the first place and to recognize and react if this trust is violated. In this work, we discuss a protocol-agnostic definition of accountability: a protocol provides accountability (w.r.t. some security property) if it can identify all misbehaving parties, where misbehavior is defined as a deviation from the protocol that causes a security violation. We provide a mechanized method for the verification of accountability and demonstrate its use for verification and attack finding on various examples from the accountability and causality literature, including Certificate Transparency and Kroll's Accountable Algorithms protocol. We reach a high degree of automation by expressing accountability in terms of a set of trace properties and show their soundness and completeness.
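As an illustration of the trace-based view of accountability described above, the following minimal Python sketch renders a verdict over a finite trace: a party is blamed if it deviated from the protocol and the trace violates the security property. The property, the protocol rule, and all names are hypothetical simplifications, not the paper's actual calculus.

    # Minimal sketch of trace-based accountability verdicts (illustrative only;
    # the notions of "deviation" and "violation" are simplified assumptions).

    def violates(trace):
        # Hypothetical security property: no party may ever emit "leak".
        return any(action == "leak" for (_, action) in trace)

    def deviated(party, trace):
        # Hypothetical protocol rule: honest parties only "send" or "ack".
        return any(p == party and action not in ("send", "ack")
                   for (p, action) in trace)

    def verdict(trace):
        # Blame exactly the parties whose behavior deviates from the protocol,
        # provided the trace actually violates the security property.
        parties = {p for (p, _) in trace}
        if not violates(trace):
            return set()
        return {p for p in parties if deviated(p, trace)}

    trace = [("A", "send"), ("B", "leak"), ("A", "ack")]
    assert verdict(trace) == {"B"}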
We present a methodology for using the EasyCrypt proof assistant (originally designed for mechanizing the generation of proofs of game-based security of cryptographic schemes and protocols) to mechanize proofs of security of cryptographic protocols within the universally composable (UC) security framework. This allows, for the first time, the mechanization and formal verification of the entire sequence of steps needed for proving simulation-based security in a modular way: specifying a protocol and the desired ideal functionality; constructing a simulator and demonstrating its validity, via reduction to hard computational problems; and invoking the universal composition operation and demonstrating that it indeed preserves security. We demonstrate our methodology on a simple example: stating and proving the security of secure message communication via a one-time pad, where the key comes from a Diffie-Hellman key exchange, assuming ideally authenticated communication. We first put together EasyCrypt-verified proofs that: (a) the Diffie-Hellman protocol UC-realizes an ideal key-exchange functionality, assuming hardness of the Decisional Diffie-Hellman problem, and (b) one-time-pad encryption, with a key obtained using ideal key exchange, UC-realizes an ideal secure-communication functionality. We then mechanically combine the two proofs into an EasyCrypt-verified proof that the composed protocol realizes the same ideal secure-communication functionality. Although formulating a methodology that is both sound and workable has proven to be a complex task, we are hopeful that it will serve as the basis for mechanized UC security analyses of significantly more complex protocols and tasks.
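For intuition about the composed protocol analyzed above, here is a minimal Python sketch of Diffie-Hellman key agreement followed by one-time-pad encryption. The group parameters are toy-sized and the pad derivation is a naive assumption, so this only illustrates the construction, not the UC modeling or proof.

    import secrets

    # Toy Diffie-Hellman parameters (illustrative only; real deployments use
    # standardized, much larger groups and a proper key-derivation function).
    p = 2**127 - 1          # a Mersenne prime, far too small for real use
    g = 3

    def dh_keypair():
        x = secrets.randbelow(p - 2) + 1        # private exponent
        return x, pow(g, x, p)                  # (private, public)

    def otp(pad, msg):
        # One-time pad: XOR the message with equally long key material.
        return bytes(k ^ m for k, m in zip(pad, msg))

    # Key exchange over an (assumed) ideally authenticated channel.
    a_priv, a_pub = dh_keypair()
    b_priv, b_pub = dh_keypair()
    shared = pow(b_pub, a_priv, p)
    assert shared == pow(a_pub, b_priv, p)       # both sides agree on the key

    msg = b"attack at dawn"
    pad = shared.to_bytes(16, "big")[:len(msg)]  # naive pad derivation (assumed)
    assert otp(pad, otp(pad, msg)) == msg        # decryption inverts encryption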
For aerospace FPGA software products, traditional simulation methods face severe challenges in verifying product requirements under complicated scenarios. Given the increasing maturity of formal verification technology, formal methods can significantly improve verification efficiency and product design quality by expanding coverage to "blind spots" in the product design that were previously hard to identify. Taking UART communication as an example, this paper proposes several critical points for applying formal verification to asynchronous communication protocols. Experiments and practice indicate that formal verification of asynchronous communication protocols can effectively reduce the time required, ensure a complete verification process and, more importantly, achieve more accurate and intuitive results.
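To make the contrast with simulation concrete, the following Python sketch exhaustively checks a toy 8N1 UART framing model against all 256 payloads, in the spirit of the exhaustive coverage the paper attributes to formal verification. The model is a heavily simplified assumption; a real flow would model-check the RTL itself.

    # Toy 8N1 UART model checked exhaustively over all payloads (sketch only).

    def uart_frame(byte):
        # start bit (0), 8 data bits LSB first, stop bit (1)
        return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

    def uart_receive(bits):
        # Expect a start bit, sample 8 data bits, check the stop bit.
        assert bits[0] == 0, "missing start bit"
        byte = sum(bit << i for i, bit in enumerate(bits[1:9]))
        assert bits[9] == 1, "missing stop bit"
        return byte

    # Exhaustive check: every payload round-trips through the frame model.
    for b in range(256):
        assert uart_receive(uart_frame(b)) == b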
Hardware Trojan horses and active fault attacks are a threat to the safety and security of electronic systems. By such manipulations, an attacker can extract sensitive information or disturb the functionality of a device. Therefore, several protections against malicious inclusions have been devised in recent years. A prominent technique to detect abnormal behavior in the field is run-time verification. It relies on dedicated monitoring circuits and on verification rules generated from a set of temporal properties. An important question when dealing with such protections is their effectiveness against unknown attacks. In this paper, we present a methodology based on automatic monitor generation and formal verification techniques that can be used to validate and analyze the quality of a set of temporal properties when used as protection against generic attackers of variable strengths.
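As a flavor of the monitors generated from temporal properties, here is a minimal Python sketch of a run-time monitor for a bounded-response property ("every request is granted within k cycles"). The property, signal names, and the convention that one grant satisfies all open requests are hypothetical assumptions for illustration.

    # Run-time monitor sketch for the (hypothetical) temporal property
    # G(request -> F<=k grant): every request must be granted within k cycles.

    def monitor(trace, k=3):
        pending = []                      # cycles at which open requests arrived
        for cycle, (request, grant) in enumerate(trace):
            if grant:
                pending.clear()           # a grant satisfies all open requests
            if pending and cycle - pending[0] > k:
                return f"violation at cycle {cycle}"
            if request:
                pending.append(cycle)
        return "no violation observed"

    ok  = [(1, 0), (0, 0), (0, 1), (0, 0)]           # granted within 3 cycles
    bad = [(1, 0), (0, 0), (0, 0), (0, 0), (0, 0)]   # never granted
    print(monitor(ok))    # no violation observed
    print(monitor(bad))   # violation at cycle 4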
Cloud computing is the most suitable environment for collaboration among multiple organizations via its multi-tenancy architecture. However, due to the distributed management of policies within these collaborations, the policies may contain several anomalies, such as conflicts and redundancies, which may lead to both safety and availability problems. On the other hand, current cloud computing solutions do not offer verification tools to manage access control policies. In this paper, we propose a cloud policy verification service (CPVS) that facilitates users' management of their own security policies within the OpenStack cloud environment. Specifically, the proposed cloud service offers a policy verification approach to dynamically choose the adequate policy using Aspect-Oriented Finite State Machines (AO-FSM), where pointcuts and advices are used to adopt Domain-Specific Language (DSL) state machine artifacts. The pointcuts define state patterns representing anomalies (e.g., conflicts) that may occur in a security policy, while the advices define the actions applied at the selected pointcuts to remove the anomalies. In order to demonstrate the efficiency of our approach, we provide time and space complexity analyses. The approach was implemented as a middleware service within the OpenStack cloud environment. The implementation results show that the middleware can detect and resolve different policy anomalies in an efficient manner.
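As a simplified illustration of the anomaly patterns such pointcuts match, the following Python sketch flags conflicts and redundancies in a flat rule list. The rule format and anomaly definitions are hypothetical stand-ins for the paper's AO-FSM machinery.

    # Sketch of policy anomaly detection (illustrative; the paper uses
    # aspect-oriented finite state machines, not this flat-list scan).

    rules = [
        ("alice", "vm1", "start", "allow"),
        ("alice", "vm1", "start", "deny"),    # conflicts with the rule above
        ("bob",   "vm2", "stop",  "allow"),
        ("bob",   "vm2", "stop",  "allow"),   # redundant duplicate
    ]

    def find_anomalies(rules):
        seen = {}
        anomalies = []
        for subject, resource, action, effect in rules:
            key = (subject, resource, action)
            if key in seen and seen[key] != effect:
                anomalies.append(("conflict", key))
            elif key in seen:
                anomalies.append(("redundancy", key))
            seen[key] = effect
        return anomalies

    print(find_anomalies(rules))
    # [('conflict', ('alice', 'vm1', 'start')), ('redundancy', ('bob', 'vm2', 'stop'))]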
Modern large-scale technical systems often undergo iterative changes to their behaviour, with quality requirements that are not easy to validate completely through traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operation histories can be inherited by the new revision. This trust inheritance through formal analysis relies on a number of implicit assumptions which are not self-evident but easy to miss, and which may lead to a false sense of safety induced by a misunderstood regression verification process. This paper aims at pointing out the hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit using practical examples. The explicit trust inheritance analysis clarifies for engineers the extent of the trust that regression verification provides and consequently helps them utilize this formal technique for system validation.
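One such implicit assumption is easy to demonstrate: equivalence checking on a bounded domain inherits trust only for the inputs it actually covers. The Python sketch below uses two hypothetical controller revisions to make that point; the functions and bound are invented for illustration.

    # Sketch of bounded regression verification (illustrative): two revisions
    # are shown equivalent on a bounded input domain only. The implicit
    # assumption is that the system never leaves this domain.

    def controller_v1(x):
        return 2 * x + 1

    def controller_v2(x):
        # Refactored revision; diverges outside the verified range.
        return x + x + 1 if x < 1000 else 0

    BOUND = 1000
    assert all(controller_v1(x) == controller_v2(x) for x in range(BOUND))
    # Trust is inherited only for 0 <= x < 1000; v2 is wrong at x = 1000,
    # which this "regression proof" does not detect.
    print(controller_v1(1000), controller_v2(1000))   # 2001 0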
Industrial control systems are moving from monolithic to distributed and cloud-connected architectures, which increases system complexity and vulnerability and thus complicates security analysis. When exhaustive verification accounts for this complexity, the state space grows drastically as the system model evolves and more details are considered. Eventually this may lead to state space explosion, which makes exhaustive verification infeasible. To address this, we use VDM-SL's combinatorial testing feature to generate security attacks that are executed against the model to verify whether the system has the desired security properties. We demonstrate our approach using a cloud-connected industrial control system that is responsible for performing safety-critical tasks and handling client requests sent to the control network. Although the approach is not exhaustive, it enables verification of mitigation strategies for a large number of attacks and complex systems within reasonable time.
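The combinatorial flavor of the approach can be sketched in a few lines of Python: enumerate attack parameter combinations and execute each against a model, rather than exploring the full state space. All attack names and the model stub are hypothetical; the paper itself uses VDM-SL's combinatorial testing feature.

    import itertools

    # Sketch of combinatorial attack generation (illustrative only).

    attack_types = ["spoof_request", "replay", "flood"]
    targets      = ["plc", "gateway", "cloud_api"]
    credentials  = ["none", "stolen_client_key"]

    def model_resists(attack):
        # Hypothetical stand-in for executing the attack trace on the model
        # and checking the security properties afterwards.
        kind, target, cred = attack
        return not (kind == "spoof_request" and cred == "stolen_client_key")

    for attack in itertools.product(attack_types, targets, credentials):
        if not model_resists(attack):
            print("mitigation missing for:", attack)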
The increasing digitalization of various industrial domains is leading developers towards the design and implementation of ever more complex networked control systems (NCS) supported by Wireless Sensor Networks (WSN). This naturally raises new challenges for current WSN technology, particularly concerning improved guarantees of technical aspects such as real-time communication together with safe and secure transmission. Notably, with regard to security, several cryptographic protocols have been proposed. Since the design of these protocols is usually error-prone, security breaches can still be exposed and maliciously exploited unless the protocols are rigorously analyzed and verified. In this paper we formally verify, using ProVerif, three cryptographic protocols used in WSN, with respect to the security properties of secrecy and authenticity. The security analysis performed in this paper is more robust than those performed in related work. Our contributions involve analyzing protocols that were modeled considering an unbounded number of participants and actions, and also the use of a hierarchical system to classify the authenticity results. Our verification shows that the three analyzed protocols guarantee secrecy, but can only provide authenticity in specific scenarios.
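Underlying tools such as ProVerif is symbolic (Dolev-Yao) deduction. The Python sketch below computes a toy attacker-knowledge closure to decide secrecy of a term; the term encoding and the example messages are hypothetical, and only pairing and symmetric encryption are modeled.

    # Toy Dolev-Yao deduction closure for secrecy (illustrative sketch; ProVerif
    # works on the applied pi calculus / Horn clauses, not this fixpoint loop).
    # Terms: ("pair", a, b), ("enc", msg, key), or atomic strings.

    def close(knowledge):
        knowledge = set(knowledge)
        changed = True
        while changed:
            changed = False
            for term in list(knowledge):
                derived = set()
                if isinstance(term, tuple) and term[0] == "pair":
                    derived |= {term[1], term[2]}              # projection
                if (isinstance(term, tuple) and term[0] == "enc"
                        and term[2] in knowledge):
                    derived.add(term[1])                       # decryption
                if not derived <= knowledge:
                    knowledge |= derived
                    changed = True
        return knowledge

    observed = {("enc", "secret", "k_ab"), ("pair", "nonce", "k_ab")}
    attacker = close(observed)
    print("secret leaked:", "secret" in attacker)  # True: key sent in clear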
In this paper, we show how practical the little theorem of witness functions is in detecting security flaws in some categories of cryptographic protocols. We carry out a formal analysis of the Needham-Schroeder symmetric-key protocol in the theory of witness functions. We show how it helps to warn about a security vulnerability in a given step of this protocol, where the security level of a sensitive ticket in a sent message unexpectedly decreases compared with its level when received. This vulnerability may be exploited by an intruder to mount a replay attack, as described by Denning and Sacco.
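The vulnerability referred to is the classic Denning-Sacco observation that the ticket {K_ab, A}_Kbs carries no freshness information. The Python sketch below, with an entirely hypothetical ticket encoding, shows why a responder that checks only the ticket's format accepts a replay of an old, possibly compromised session key; the paper's actual analysis is symbolic, via witness functions.

    # Sketch of the Denning-Sacco replay on Needham-Schroeder symmetric key.

    def make_ticket(session_key, initiator):
        # Stand-in for {K_ab, A}_Kbs: note the absence of any timestamp
        # or nonce contributed by the responder B.
        return {"key": session_key, "initiator": initiator}

    def responder_accepts(ticket):
        # B can only check the ticket's well-formedness, not its freshness.
        return set(ticket) == {"key", "initiator"}

    old_ticket = make_ticket("K_ab_compromised_last_year", "A")
    print(responder_accepts(old_ticket))   # True: the replayed ticket passes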
This paper presents an overview of the H2020 project VESSEDIA [9], aimed at verifying the security and safety of modern connected systems, also called the IoT. The originality lies in using formal methods inherited from high-criticality application domains to analyze the source code at different levels of intensity and to gather possible faults and weaknesses. The analysis methods are mostly exhaustive and guarantee that, after analysis, the source code of the application is error-free. This paper is structured as follows: after an introductory section 1 giving some factual data, section 2 presents the aims and the problems addressed; section 3 describes the project's use cases and section 4 describes the proposed approach for solving these problems and the results achieved so far; finally, section 5 discusses the remaining future work.
As an efficient deletion method, unlinking is widely used in cloud storage. However, unlinking is a form of incomplete deletion: 'deleted' data remains in the cloud and can be recovered. To make deleted data unrecoverable, overwriting is an effective method in the cloud. Users lose control over their data in the cloud once it is deleted, so it is difficult for them to confirm that overwriting took place. In the face of this crucial problem, we propose a Provable and Traceable Assured Deletion (PTAD) scheme for cloud storage based on blockchain. The PTAD scheme relies on overwriting to achieve assured deletion. We draw on the idea of data integrity checking and design algorithms to verify whether the cloud properly overwrites the original blocks with specific patterns. We utilize smart contracts on the blockchain to automatically execute verification and keep transactions in the ledger for tracking. The whole scheme can be divided into three stages (unlinking, overwriting and verification), and we design one specific algorithm for each stage. For evaluation, we implement the PTAD scheme on a cloud and construct a consortium chain with Hyperledger Fabric. The performance results show that the PTAD scheme is effective and feasible.
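A much-simplified version of the overwriting check can be sketched in Python: the verifier knows the expected overwrite pattern for a block and challenges the cloud for a hash over the block plus a fresh nonce. All names are hypothetical, and the sketch omits the smart-contract and ledger machinery that PTAD adds.

    import hashlib
    import secrets

    # Sketch of verifying that a block was overwritten with a known pattern.

    BLOCK_SIZE = 4096
    PATTERN = bytes([0xAA]) * BLOCK_SIZE       # expected overwrite pattern

    def cloud_prove(stored_block, nonce):
        # The cloud returns a hash binding its current block contents to the nonce.
        return hashlib.sha256(stored_block + nonce).hexdigest()

    def user_verify(proof, nonce):
        expected = hashlib.sha256(PATTERN + nonce).hexdigest()
        return proof == expected

    nonce = secrets.token_bytes(16)
    honest  = cloud_prove(PATTERN, nonce)               # block really overwritten
    cheater = cloud_prove(b"old user data" * 300, nonce)
    print(user_verify(honest, nonce))    # True
    print(user_verify(cheater, nonce))   # False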
In the VLSI industry, the design cycle is divided into front-end design and back-end design. The front-end flow runs from specification to functional verification of the RTL design; the back-end flow runs from logic synthesis to chip fabrication. Handheld devices are built around mobile SoCs, which amalgamate many components such as GPU, camera, sensors and display on a single chip, and protocols are needed to integrate these components. One emerging protocol is I3C (Improved Inter Integrated Circuit), developed by the Mobile Industry Processor Interface (MIPI) Alliance and used mostly for interconnecting sensors in mobile SoCs. The main motivations for adopting the standard are higher speed and lower pin count in most hardware chips; the bus protocol is backward compatible with I2C devices. This paper includes a detailed study of the I3C bus protocol and the development of a verification environment for it. The testbench environment is written and verified using SystemVerilog and UVM. The Universal Verification Methodology (UVM) is a base class library built in SystemVerilog which provides the fundamental blocks needed to quickly develop reusable and well-constructed verification components and test environments. Functional coverage of around 93.55% and code coverage of around 98.89% were achieved at verification closure.
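To illustrate what a functional coverage figure measures, here is a minimal Python sketch of coverage bookkeeping: bins are declared, stimuli mark them hit, and coverage is the fraction of hit bins. Real UVM covergroups are written in SystemVerilog; the bins and stimuli here are hypothetical.

    # Sketch of functional coverage bookkeeping (illustrative only).

    bins = {f"addr_{i}": False for i in range(8)}   # hypothetical coverage bins

    def sample(addr):
        bins[f"addr_{addr % 8}"] = True

    for addr in [0, 1, 1, 3, 5, 5, 7]:              # stimuli from the testbench
        sample(addr)

    coverage = 100 * sum(bins.values()) / len(bins)
    print(f"functional coverage: {coverage:.2f}%")  # 62.50%: bins 2,4,6 unhit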
Data Distribution Service (DDS) is a real-time peer-to-peer protocol that serves as scalable middleware between distributed networked systems found in many Industrial IoT domains such as automotive, medical, energy, and defense. Since the initial ratification of the standard, the specifications have introduced a Security Model and Service Plugin Interface (SPI) architecture, facilitating authenticated encryption and data-centric access control while preserving interoperable data exchange. However, as of Secure DDS v1.1, the default plugin specifications presently exchange digitally signed capability lists of both participants in the clear during the crypto handshake for permission attestation, thus breaching the confidentiality of the connection's context. In this work, we present an attacker model that uses the network reconnaissance afforded by this leaked context, in conjunction with formal verification and model checking, to arbitrarily reason about the underlying topology and the reachability of information flow, enabling targeted attacks such as selective denial of service, adversarial partitioning of the data bus, or vulnerability excavation of vendor implementations.
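The kind of topology reasoning the leaked capability lists enable can be sketched in Python: each participant's permissions reveal which topics it may publish or subscribe to, from which transitive information flow is computable. The participants, topics, and permissions below are hypothetical examples, not part of the DDS specification.

    # Sketch of reasoning about data-bus reachability from leaked capability
    # lists (illustrative; all names hypothetical).

    permissions = {
        "sensor":  {"pub": {"telemetry"}, "sub": set()},
        "logger":  {"pub": set(),         "sub": {"telemetry", "commands"}},
        "planner": {"pub": {"commands"},  "sub": {"telemetry"}},
    }

    def reachable(src):
        # Information flows src -> dst when dst subscribes to a topic that src
        # publishes, transitively through intermediate participants.
        seen, frontier = {src}, [src]
        while frontier:
            node = frontier.pop()
            for other, perm in permissions.items():
                if other not in seen and permissions[node]["pub"] & perm["sub"]:
                    seen.add(other)
                    frontier.append(other)
        return seen - {src}

    print(reachable("sensor"))   # {'logger', 'planner'}: candidate DoS targets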
Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice, and more challenging for formal protocol verification, to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model where the intruder can only consume at most some specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and specify service resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as slow, asymmetric and amplification DoS attacks, exhausting different types of resources of the target, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and is PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in Maude using Rewriting Modulo SMT in an automated fashion.
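The resource-bounded intruder can be pictured with a small Python sketch: intruder actions carry costs, and a trace is admissible only if the cost consumed in any sliding time window stays within the budget. The window length, budget, and costs are hypothetical; the paper formalizes this in rewriting logic and analyzes it in Maude.

    # Sketch of the resource-bounded intruder constraint: in every time window
    # of length W, total consumed resources must not exceed BUDGET.

    W = 10          # window length (time units)
    BUDGET = 100    # max resources the intruder may consume per window

    def admissible(actions):
        # actions: list of (timestamp, cost), sorted by timestamp
        for t_i, _ in actions:
            window_cost = sum(c for t, c in actions if t_i <= t < t_i + W)
            if window_cost > BUDGET:
                return False
        return True

    slow_dos  = [(t, 5) for t in range(0, 100, 5)]    # 10 units/window: allowed
    flood_dos = [(t, 30) for t in range(0, 20)]       # 300 units/window: too much
    print(admissible(slow_dos))    # True: a slow attacker fits the budget
    print(admissible(flood_dos))   # False: flooding exceeds the budget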
An intelligent production line is a complex application integrating a large number of independent devices over a network. In view of the characteristics of cyber-physical systems (CPS), existing modeling methods cannot adequately meet the application requirements of large-scale, high-performance systems. We therefore design a formal simulation and verification framework and a verification method for performance constraints, such as the real-time behavior and security of an intelligent production line based on a soft bus. A model-based, service-oriented integration approach is employed, which adopts a model-centric way to automate the development course of the entire software life cycle. Development experience indicates that the proposed approach, based on the formal modeling and verification framework in this paper, can improve the performance of the system, and is also helpful for achieving production-line balance and maintaining a reasonable utilization rate of the processing equipment.