Bibliography
Generating a secure source of publicly verifiable randomness is among the most fundamental technical challenges on a distributed network, especially in the blockchain context. Many current proposals suffer from serious scalability and security problems. We present a protocol, implementable on a blockchain, that ensures unpredictable, tamper-resistant, scalable, and publicly verifiable outcomes. The main building blocks of our protocol are homomorphic encryption (HE) and verifiable random functions (VRFs). Homomorphic encryption enables mathematical operations to be performed on encrypted data, ensuring that no one knows the outcome before it is generated. The protocol requires O(n) elliptic-curve multiplications and additions as well as O(n) signature signing and verification operations, which permits great scalability. We conclude with a comparison of recent approaches to generating random beacons.
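As a toy illustration of the additively homomorphic property such a protocol relies on, the following Python sketch uses textbook Paillier encryption (a different scheme from the paper's elliptic-curve setting, chosen only because it fits in a few lines) to combine encrypted contributions without revealing any of them early; the tiny primes are for demonstration only and are not secure.

```python
# Toy additively homomorphic encryption (textbook Paillier, tiny primes).
# Illustrative only: NOT the paper's construction, and insecure at this size.
import math
import random

p, q = 293, 433                      # toy primes; real deployments use ~1024-bit
n = p * q
n2 = n * n
g = n + 1                            # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each participant encrypts a random share; multiplying ciphertexts adds the
# plaintexts, so nobody learns the combined value before it is assembled.
shares = [17, 42, 99]
combined = 1
for s in shares:
    combined = (combined * encrypt(s)) % n2
assert decrypt(combined) == sum(shares) % n
print(decrypt(combined))  # 158
```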
Data Distribution Service (DDS) is a real-time peer-to-peer protocol that serves as scalable middleware between distributed networked systems in many Industrial IoT domains, such as automotive, medical, energy, and defense. Since the initial ratification of the standard, the specifications have introduced a Security Model and Service Plugin Interface (SPI) architecture, facilitating authenticated encryption and data-centric access control while preserving interoperable data exchange. However, as of Secure DDS v1.1, the default plugin specifications exchange digitally signed capability lists of both participants in the clear during the crypto handshake for permission attestation, thus breaching the confidentiality of the connection's context. In this work, we present an attacker model that uses the network reconnaissance afforded by this leaked context, in conjunction with formal verification and model checking, to arbitrarily reason about the underlying topology and the reachability of information flows, enabling targeted attacks such as selective denial of service, adversarial partitioning of the data bus, or vulnerability excavation of vendor implementations.
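To make the attacker model concrete, here is a hypothetical Python sketch (the permission data is invented, not taken from the paper) showing how capability lists leaked during the handshake let a passive observer rebuild the publish/subscribe graph and compute information-flow reachability:

```python
# Hypothetical sketch: capability lists leaked in the Secure DDS handshake
# (per-participant publish/subscribe permissions) let an observer rebuild
# the data-flow graph. All data below is invented for illustration.
from collections import deque

permissions = {
    "controller": {"pub": ["cmd_vel"],   "sub": ["telemetry"]},
    "robot":      {"pub": ["telemetry"], "sub": ["cmd_vel"]},
    "logger":     {"pub": [],            "sub": ["telemetry", "cmd_vel"]},
}

# Edge A -> B whenever A publishes a topic that B subscribes to.
edges = {a: set() for a in permissions}
for a, pa in permissions.items():
    for b, pb in permissions.items():
        if a != b and set(pa["pub"]) & set(pb["sub"]):
            edges[a].add(b)

def reachable(src):
    seen, todo = {src}, deque([src])
    while todo:
        for nxt in edges[todo.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen - {src}

for node in permissions:
    print(node, "->", sorted(reachable(node)))
# A node whose removal disconnects many flows is a natural target for
# selective denial of service or adversarial partitioning of the bus.
```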
Formally verifying functional and security properties of a large-scale production operating system is highly desirable. However, it is challenging because such OSes are often written in multiple source languages that have no formal semantics, a prerequisite for formal reasoning. To avoid the expensive formalization of multiple high-level source languages, we present a lightweight and rigorous verification toolchain that verifies OS code at the binary level, targeting ARM machines. To reason about ARM instructions, we first translate the ARM Specification Language, which describes the semantics of the ARMv8 ISA, into the PVS7 theorem prover, and we leverage the radare2 reverse-engineering tool to decode ARM binaries into PVS7; both translations are verified. Our translation-verification methodology is a lightweight formal-validation technique that generates large-scale instruction-emulation test lemmas whose proof obligations are automatically discharged. To demonstrate the methodology, we apply it to two OSes: Google's Zircon and a subset of Linux. We extract a set of 370 functions from these OSes, translate them into PVS7, and verify the correctness of the translation by automatically discharging hundreds of thousands of proof obligations and tests. The toolchain took 27.5 person-months to develop.
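The decoding front-end of such a pipeline might look like the following sketch, which assumes radare2 and its r2pipe Python bindings are installed; the emission of PVS terms is replaced by a placeholder, since the ASL-derived semantics are the paper's contribution:

```python
# Minimal sketch of a radare2-based decoding front-end, assuming radare2
# and the r2pipe Python package are installed. The real toolchain emits PVS
# terms; here we print a placeholder per instruction to show the shape.
import r2pipe

r2 = r2pipe.open("/bin/ls")          # an ARM/AArch64 binary in practice
r2.cmd("aaa")                        # run radare2's analysis passes
insns = r2.cmdj("pdj 8 @ entry0")    # decode 8 instructions at entry as JSON

for ins in insns:
    # A real translator would map each opcode to its ASL-derived PVS
    # semantics and generate an emulation test lemma for validation.
    print(f'{ins["offset"]:#x}: {ins["opcode"]}  ->  pvs_semantics("{ins["opcode"]}")')
r2.quit()
```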
An abundance of data in many disciplines of science, engineering, national security, health care, and business has led to the emerging field of Big Data Analytics, which runs in cloud computing environments. To process massive quantities of data in the cloud, developers leverage Data-Intensive Scalable Computing (DISC) systems such as Google's MapReduce, Hadoop, and Spark. Currently, developers lack easy means to debug DISC applications. The use of cloud computing makes application development feel more like submitting batch jobs, so debugging is inherently post-mortem. Developers of big data applications write code that implements a data-processing pipeline and test it on their local workstation with a small data sample downloaded from a TB-scale data warehouse. They cross their fingers and hope that the program works on the expensive production cloud. When a job fails or produces a suspicious result, data scientists spend hours guessing at the source of the error and digging through post-mortem logs. In such cases, they may want to pinpoint the root cause of errors by investigating a subset of the corresponding input records. The vision of my work is to provide interactive, real-time, and automated debugging services for big data processing programs in modern DISC systems with minimal performance impact. My work investigates the following research questions in the context of big data analytics: (1) What debugging primitives are necessary for interactive big data processing? (2) What scalable fault-localization algorithms are needed to help the user localize and characterize the root causes of errors? (3) How can we improve testing efficiency during the iterative development of DISC applications by reasoning about the semantics of dataflow operators and the user-defined functions used inside them in tandem? To answer these questions, we synthesize and innovate on ideas from software engineering, big data systems, and program analysis, and we coordinate innovations across the software stack, from the user-facing API all the way down to the systems infrastructure.
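One way to investigate a subset of fault-inducing input records, sketched here in the spirit of delta debugging rather than as the author's actual system, is to bisect the failing input until a minimal failing subset remains; the pipeline bug below is invented:

```python
# Illustrative sketch: shrink a failing input to a small set of
# fault-inducing records by recursive bisection (delta-debugging style).
# `pipeline_fails` is a stand-in for a local run of the data-processing job.
def pipeline_fails(records):
    # Hypothetical bug: the job crashes when a record has a negative value.
    return any(v < 0 for v in records)

def minimize(records):
    if len(records) <= 1:
        return records
    mid = len(records) // 2
    left, right = records[:mid], records[mid:]
    if pipeline_fails(left):
        return minimize(left)
    if pipeline_fails(right):
        return minimize(right)
    return records  # the failure needs records from both halves

data = [3, 7, 1, -5, 9, 2, 8, 4]
print(minimize(data))  # [-5]
```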
A blockchain is a distributed ledger that forms a distributed consensus on a history of transactions and is the underlying technology of the Bitcoin cryptocurrency, though its applications reach far beyond the financial sector. The transaction-verification process for cryptocurrencies is much slower than in traditional digital transaction systems. One approach to increasing transaction speed and scalability is to find a solution that offers faster Proof of Work. In this paper, we propose a method for accelerating Proof of Work based on parallel mining rather than solo mining. The goal is to ensure that no two miners expend the same effort on a specific block. The proposed method includes a process for selecting a manager, distributing the work, and rewarding participants. The method has been implemented in a test environment that contains all the characteristics needed to perform Proof of Work for Bitcoin and has been tested across a variety of scenarios, varying the difficulty level and the number of validators. Preliminary results show up to a 34% improvement in the scalability of Proof of Work compared to the current system.
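The core of parallel mining can be sketched as follows: a manager assigns disjoint nonce ranges so that no two miners repeat the same hashing work. This Python sketch uses Bitcoin-style double SHA-256 with an artificially easy target; the manager election and reward system described in the paper are omitted:

```python
# Sketch of the core idea: split the nonce space into disjoint ranges so no
# two miners duplicate effort. Simplified Bitcoin-style double SHA-256 with
# an artificially low difficulty so the demo finishes quickly.
import hashlib

def meets_target(header: bytes, nonce: int, target: int) -> bool:
    h = hashlib.sha256(hashlib.sha256(
        header + nonce.to_bytes(8, "little")).digest()).digest()
    return int.from_bytes(h, "big") < target

def mine_range(header, start, stop, target):
    for nonce in range(start, stop):
        if meets_target(header, nonce, target):
            return nonce
    return None

header = b"example-block-header"
target = 1 << 240                    # easy difficulty for demonstration
workers = 4
span = 1_000_000
# The manager hands worker i the range [i*span, (i+1)*span); workers run in
# parallel in the real protocol, sequentially here for brevity.
for i in range(workers):
    found = mine_range(header, i * span, (i + 1) * span, target)
    if found is not None:
        print(f"worker {i} found nonce {found}")
        break
```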
Industrial control systems are changing from monolithic to distributed and interconnected architectures, entering the era of the industrial IoT. One fundamental issue is that the security properties of such distributed control systems are typically verified only empirically, during development and after system deployment. We propose a novel modelling framework for the security verification of distributed industrial control systems, with the goal of moving towards formal verification at the early design stage. In our framework we model industrial IoT infrastructures, attack patterns, and mitigation strategies for countering attacks. We conduct model-checking-based formal analysis of system security through scenario execution, in which the analysed system is exposed to attacks and mitigation strategies are applied. We study the applicability of our framework to large systems through a scalability analysis.
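In the spirit of such model-checking analysis, the following toy Python sketch (with an invented transition relation, not the paper's modelling language) explores all states of a small control-system model and asks whether an attack state remains reachable once a mitigation is applied:

```python
# Toy explicit-state check: explore all states of a small control-system
# model and ask whether the compromised state is reachable, with and without
# a mitigation. The transition relation is invented for illustration.
from collections import deque

def transitions(state, mitigated):
    node, creds = state
    moves = []
    if node == "plc" and not creds:
        moves.append(("plc", True))        # attacker harvests credentials
    if node == "plc" and creds and not mitigated:
        moves.append(("actuator", creds))  # lateral move; cut off if mitigated
    return moves

def attack_reachable(mitigated):
    init = ("plc", False)
    seen, todo = {init}, deque([init])
    while todo:
        s = todo.popleft()
        if s[0] == "actuator":
            return True
        for t in transitions(s, mitigated):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return False

print(attack_reachable(mitigated=False))  # True: the attack scenario succeeds
print(attack_reachable(mitigated=True))   # False: the mitigation cuts the path
```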
Since the last century, the digital design industry has played an outstanding role in the development of electronics. A great variety of designs is developed daily, and these designs must be subjected to high standards of verification to ensure 100% reliability and the achievement of all design requirements. The Universal Verification Methodology (UVM) is the industry's current standard for the verification process due to its reusability, scalability, time efficiency, and ability to handle high-level designs. This research proposes a functional verification framework using UVM for an AES encryption module, based on a detailed and robust verification plan. This document describes the complete verification process as done in industry for a popular module used in information-security applications in the field of cryptography, laying the groundwork for future projects. The overall results show the achievement of the high verification standards required in industry applications and highlight the advantages of UVM over plain SystemVerilog-based functional verification and the direct verification methodologies previously developed for the AES module.
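UVM testbenches are written in SystemVerilog; purely to illustrate the scoreboard concept such a framework relies on, the following Python sketch compares DUT outputs against a golden reference model, with XOR standing in for AES to keep the example self-contained:

```python
# Concept sketch only: real UVM scoreboards are SystemVerilog components.
# The idea shown: drive stimuli, collect DUT transactions, and compare them
# against a golden model. XOR stands in for AES for self-containedness.

def golden_model(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

def dut(plaintext: bytes, key: bytes) -> bytes:
    # In a real flow this would be the RTL simulation's output.
    return bytes(p ^ k for p, k in zip(plaintext, key))

class Scoreboard:
    def __init__(self):
        self.passed = self.failed = 0
    def check(self, stimulus, actual):
        expected = golden_model(*stimulus)
        if actual == expected:
            self.passed += 1
        else:
            self.failed += 1

sb = Scoreboard()
for i in range(256):                 # directed plus random stimuli in practice
    pt, key = bytes([i] * 16), bytes(range(16))
    sb.check((pt, key), dut(pt, key))
print(f"passed={sb.passed} failed={sb.failed}")
```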
Software verification has been applied successfully in safety-critical areas and has shown its ability to provide better quality assurance for modern software. However, as the lines of code and the complexity of software systems increase, the scalability of verification becomes a challenge. In this paper, we present an automatic software verification framework, TSV, that addresses these scalability issues with: (i) extended structural abstraction and property-guided program slicing, which tackle large-scale program verification while saving time and memory without losing accuracy; and (ii) automatic selection among different verification methods according to the program and property context, which improves verification efficiency. For evaluation, we compare TSV's different configurations with existing C program verifiers on open benchmarks. We find that TSV with auto-selection performs better than with bounded model checking only or with extended structural abstraction only. Compared to existing tools such as CBMC and CPAChecker, it achieves a 10%-20% improvement in accuracy and a 50%-90% reduction in memory consumption.
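Property-guided slicing can be illustrated with a toy example: starting from the statements a property mentions, walk dependences backwards and keep only what can influence the property. The program and dependence graph below are invented for illustration, not TSV's representation:

```python
# Toy property-guided slicing: keep only the statements that the property
# transitively depends on, shrinking what the verifier must explore.
deps = {                 # statement -> statements it depends on (invented)
    "assert(x > 0)": ["x = y + 1"],
    "x = y + 1":     ["y = input()"],
    "y = input()":   [],
    "z = log(y)":    ["y = input()"],   # irrelevant to the property
    "print(z)":      ["z = log(y)"],
}

def slice_for(criterion):
    keep, todo = set(), [criterion]
    while todo:
        stmt = todo.pop()
        if stmt not in keep:
            keep.add(stmt)
            todo.extend(deps[stmt])
    return keep

print(slice_for("assert(x > 0)"))
# {'assert(x > 0)', 'x = y + 1', 'y = input()'} -- the z/print statements
# are sliced away, saving time and memory without losing accuracy.
```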
Many test-case prioritization criteria have been proposed for speeding up fault detection. Among them, similarity-based approaches give priority to the test cases that are most dissimilar from those already selected. However, the proposed criteria do not scale to the test suites of many thousands or even millions of test cases found in modern industrial systems, so simple heuristics are used instead. We introduce the FAST family of test-case prioritization techniques, which radically changes this landscape by borrowing algorithms commonly used in the big data domain to find similar items. FAST techniques provide scalable, similarity-based test-case prioritization in both white-box and black-box fashion. Results from experimentation on real-world C and Java subjects show that the fastest members of the family outperform other black-box approaches in efficiency with no significant impact on effectiveness, and also outperform white-box approaches, including greedy ones, if preparation time is not counted. A simulation study of scalability shows that one FAST technique can prioritize a million test cases in less than 20 minutes.
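One big-data algorithm family for finding similar items is minhashing; the following sketch (with invented test data, and not FAST's exact pipeline) shows how minhash signatures cheaply estimate similarity so the next test picked is the one most dissimilar from those already selected:

```python
# Sketch of similarity-based prioritization via minhash signatures: compress
# each test case to a fixed-size signature, then greedily pick the test
# whose signature is least similar to the already-picked ones.
import hashlib

def minhash(tokens, k=32):
    return tuple(
        min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
            for t in tokens)
        for i in range(k))

def similarity(a, b):                 # estimates Jaccard similarity
    return sum(x == y for x, y in zip(a, b)) / len(a)

tests = {                             # invented token sets per test case
    "t1": {"login", "open", "click"},
    "t2": {"login", "open", "submit"},
    "t3": {"export", "pdf", "print"},
}
sigs = {name: minhash(toks) for name, toks in tests.items()}

order = ["t1"]                        # seed with an arbitrary first test
while len(order) < len(tests):
    rest = [t for t in tests if t not in order]
    # pick the candidate farthest from its closest already-picked test
    nxt = max(rest, key=lambda t: min(
        1 - similarity(sigs[t], sigs[p]) for p in order))
    order.append(nxt)
print(order)                          # ['t1', 't3', 't2']
```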
Permissionless blockchains allow the execution of arbitrary programs (called smart contracts), enabling mutually untrusted entities to interact without relying on trusted third parties. Despite their potential, repeated security concerns have shaken trust in handling billions of USD with smart contracts. To address this problem, we present Securify, a security analyzer for Ethereum smart contracts that is scalable, fully automated, and able to prove contract behaviors safe or unsafe with respect to a given property. Securify's analysis consists of two steps. First, it symbolically analyzes the contract's dependency graph to extract precise semantic information from the code. Then, it checks compliance and violation patterns that capture sufficient conditions for proving whether a property holds. To enable extensibility, all patterns are specified in a designated domain-specific language. Securify is publicly released; it has analyzed 18K contracts submitted by its users and is regularly used by experts to conduct security audits. We present an extensive evaluation of Securify on real-world Ethereum smart contracts and demonstrate that it can effectively prove the correctness of smart contracts and discover critical violations.
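A toy Python rendering of the pattern idea (the facts and patterns below are simplified inventions, not Securify's actual DSL): compliance and violation patterns are each only sufficient conditions, so neither may match and the result can remain inconclusive:

```python
# Toy rendering of Securify's two-step idea: semantic facts extracted from
# the contract (hard-coded here) are matched against compliance/violation
# patterns. Fact names are invented; Securify uses a designated DSL.
facts = {
    ("may_follow", "call", "sstore"),   # a storage write MAY follow a call
    ("must_follow", "call", "sstore"),  # ...and on every path, it MUST
}

# Violation pattern (sufficient for unsafe): a state update after an
# external call is unavoidable -- the classic reentrancy-prone shape.
violation = ("must_follow", "call", "sstore") in facts

# Compliance pattern (sufficient for safe): no state update can possibly
# follow an external call.
compliance = ("may_follow", "call", "sstore") not in facts

if violation:
    print("violation: a state update after an external call is unavoidable")
elif compliance:
    print("compliant: no state update can follow an external call")
else:
    print("inconclusive: neither sufficient condition matched")
```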
In this paper, we present a formal verification tool for Ethereum Virtual Machine (EVM) bytecode. To precisely reason about all possible behaviors of EVM bytecode, we adopted KEVM, a complete formal semantics of the EVM, and instantiated the K framework's reachability-logic theorem prover to generate a correct-by-construction deductive verifier for the EVM. We further optimized the verifier by introducing EVM-specific abstractions and lemmas to improve its scalability. Our EVM verifier has been used to verify various high-profile smart contracts, including ERC20 token, Ethereum Casper, and DappHub MakerDAO contracts.
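As a toy illustration of the kind of property such a verifier establishes deductively, the following Python sketch checks a canonical ERC20 invariant, namely that transfer preserves total supply and never underflows, by exhaustive enumeration over a tiny invented state space:

```python
# Toy illustration of an ERC20-style property: `transfer` preserves the
# total supply and never drives a balance negative. Exhaustive checking over
# a tiny state space stands in for the deductive proof the verifier does.
from itertools import product

def transfer(balances, sender, to, value):
    if balances[sender] < value:
        return balances                      # revert: no state change
    out = dict(balances)
    out[sender] -= value
    out[to] = out.get(to, 0) + value
    return out

accounts = ["a", "b"]
for ba, bb, value in product(range(4), range(4), range(5)):
    pre = {"a": ba, "b": bb}
    for sender, to in product(accounts, accounts):
        post = transfer(pre, sender, to, value)
        assert sum(post.values()) == sum(pre.values())   # supply preserved
        assert all(v >= 0 for v in post.values())        # no underflow
print("invariant holds on all checked states")
```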
We present the design and implementation of p4v, a practical tool for verifying data planes described using the P4 programming language. The design of p4v is based on classic verification techniques but adds several key innovations, including a novel mechanism for incorporating assumptions about the control plane and domain-specific optimizations needed to scale to large programs. We present case studies showing that p4v verifies important properties and finds bugs in real-world programs, and we conduct experiments to quantify its scalability on a wide range of additional examples. We show that with just a few hundred lines of control-plane annotations, p4v is able to verify critical safety properties for switch.p4, a program that implements the functionality of a modern data center switch, in under three minutes.
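The classic verification techniques at p4v's core can be illustrated with a toy weakest-precondition transformer; the following Python sketch (a simplification, not p4v's implementation) shows how a control-plane assumption becomes a hypothesis in the generated verification condition:

```python
# Toy predicate transformer in the spirit of classic data-plane verification.
# Expressions are tiny tuple ASTs; wp of an assignment substitutes the
# right-hand side for the assigned variable in the postcondition.
def subst(expr, var, rhs):
    if expr == var:
        return rhs
    if isinstance(expr, tuple):
        return tuple(subst(e, var, rhs) for e in expr)
    return expr

def wp(stmts, post):
    for kind, *args in reversed(stmts):
        if kind == "assign":            # x := e  =>  post[e/x]
            var, rhs = args
            post = subst(post, var, rhs)
        elif kind == "assume":          # a control-plane assumption
            post = ("implies", args[0], post)
    return post

# If the control plane guarantees ttl >= 1, decrementing keeps ttl >= 0.
program = [
    ("assume", (">=", "ttl", 1)),
    ("assign", "ttl", ("-", "ttl", 1)),
]
print(wp(program, (">=", "ttl", 0)))
# ('implies', ('>=', 'ttl', 1), ('>=', ('-', 'ttl', 1), 0))
```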