Biblio

Filters: Keyword is program verification
2021-03-22
Kellogg, M., Schäf, M., Tasiran, S., Ernst, M. D.  2020.  Continuous Compliance. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :511–523.
Vendors who wish to provide software or services to large corporations and governments must often obtain numerous certificates of compliance. Each certificate asserts that the software satisfies a compliance regime, like SOC or the PCI DSS, to protect the privacy and security of sensitive data. The industry standard for obtaining a compliance certificate is an auditor manually auditing source code. This approach is expensive, error-prone, partial, and prone to regressions. We propose continuous compliance to guarantee that the codebase stays compliant on each code change using lightweight verification tools. Continuous compliance increases assurance and reduces costs. Continuous compliance is applicable to any source-code compliance requirement. To illustrate our approach, we built verification tools for five common audit controls related to data security: cryptographically unsafe algorithms must not be used, keys must be at least 256 bits long, credentials must not be hard-coded into program text, HTTPS must always be used instead of HTTP, and cloud data stores must not be world-readable. We evaluated our approach in three ways. (1) We applied our tools to over 5 million lines of open-source software. (2) We compared our tools to other publicly-available tools for detecting misuses of encryption on a previously-published benchmark, finding that only ours are suitable for continuous compliance. (3) We deployed a continuous compliance process at AWS, a large cloud-services company: we integrated verification tools into the compliance process (including auditors accepting their output as evidence) and ran them on over 68 million lines of code. Our tools and the data for the former two evaluations are publicly available.
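The paper's verifiers are compile-time tools; purely as a toy illustration of the kind of source-level controls listed above (HTTP vs. HTTPS, short keys, hard-coded credentials), a regex-based scan such as the following sketch could flag candidate violations. The rule patterns and the `keysize` parameter name are hypothetical and not taken from the paper.

```python
import re
import sys

# Hypothetical regex rules sketching three of the controls named above; the
# paper's tools are compile-time verifiers, not grep-style scanners.
HTTP_RULE = re.compile(r"http://", re.IGNORECASE)
CREDENTIAL_RULE = re.compile(r"(password|secret|api_key)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE)
KEY_SIZE_RULE = re.compile(r"keysize\s*=\s*(\d+)", re.IGNORECASE)  # 'keysize' is an assumed name

def check_file(path):
    findings = []
    with open(path, encoding="utf-8", errors="replace") as source:
        for lineno, line in enumerate(source, start=1):
            if HTTP_RULE.search(line):
                findings.append((path, lineno, "use HTTPS instead of HTTP"))
            if CREDENTIAL_RULE.search(line):
                findings.append((path, lineno, "credential appears to be hard-coded"))
            match = KEY_SIZE_RULE.search(line)
            if match and int(match.group(1)) < 256:
                findings.append((path, lineno, "key must be at least 256 bits"))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for file_name, lineno, message in check_file(path):
            print(f"{file_name}:{lineno}: {message}")
```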
2021-03-15
Cortiñas, C. T., Vassena, M., Russo, A.  2020.  Securing Asynchronous Exceptions. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :214–229.

Language-based information-flow control (IFC) techniques often rely on special purpose, ad-hoc primitives to address different covert channels that originate in the runtime system, beyond the scope of language constructs. Since these piecemeal solutions may not compose securely, there is a need for a unified mechanism to control covert channels. As a first step towards this goal, we argue for the design of a general interface that allows programs to safely interact with the runtime system and the available computing resources. To coordinate the communication between programs and the runtime system, we propose the use of asynchronous exceptions (interrupts), which, to the best of our knowledge, have not been considered before in the context of IFC languages. Since asynchronous exceptions can be raised at any point during execution-often due to the occurrence of an external event-threads must temporarily mask them out when manipulating locks and shared data structures to avoid deadlocks and, therefore, breaking program invariants. Crucially, the naive combination of asynchronous exceptions with existing features of IFC languages (e.g., concurrency and synchronization variables) may open up new possibilities of information leakage. In this paper, we present MACasync, a concurrent, statically enforced IFC language that, as a novelty, features asynchronous exceptions. We show how asynchronous exceptions easily enable (out of the box) useful programming patterns like speculative execution and some degree of resource management. We prove that programs in MACasync satisfy progress-sensitive non-interference and mechanize our formal claims in the Agda proof assistant.
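The masking discipline described above can be illustrated outside an IFC language with ordinary POSIX signals. Below is a minimal Python sketch (assuming a Unix platform; this is not MACasync) that blocks an asynchronous signal while a lock-protected invariant is temporarily broken.

```python
import signal
import threading

lock = threading.Lock()
balance = 0

def handler(signum, frame):
    print("asynchronous event delivered")

signal.signal(signal.SIGUSR1, handler)

def deposit(amount):
    global balance
    # Mask the asynchronous signal so it cannot interrupt us while the
    # shared invariant (lock held, balance mid-update) is temporarily broken.
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
    try:
        with lock:
            balance += amount
    finally:
        # Restore the previous mask; any pending signal is delivered now.
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)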

2021-01-25
Lanotte, R., Merro, M., Munteanu, A.  2020.  Runtime Enforcement for Control System Security. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :246–261.
With the explosion of Industry 4.0, industrial facilities and critical infrastructures are transforming into “smart” systems that dynamically adapt to external events. The result is an ecosystem of heterogeneous physical and cyber components, such as programmable logic controllers, which are more and more exposed to cyber-physical attacks, i.e., security breaches in cyberspace that adversely affect the physical processes at the core of industrial control systems. We apply runtime enforcement techniques, based on an ad-hoc sub-class of Ligatti et al.'s edit automata, to enforce specification compliance in networks of potentially compromised controllers, formalised in Hennessy and Regan's Timed Process Language. We define a synthesis algorithm that, given an alphabet P of observable actions and an enforceable regular expression e capturing a timed property for controllers, returns a monitor that enforces the property e during the execution of any (potentially corrupted) controller with alphabet P and complying with the property e. Our monitors correct and suppress incorrect actions coming from corrupted controllers and emit actions in full autonomy when the controller under scrutiny is not able to do so in a correct manner. Besides classical properties, such as transparency and soundness, the proposed enforcement ensures non-obvious properties, such as polynomial complexity of the synthesis, deadlock- and diverge-freedom of monitored controllers, together with scalability when dealing with networks of controllers.
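A schematic sketch, not the paper's synthesis algorithm, of how an edit-automaton-style monitor can forward allowed actions, suppress incorrect ones, and emit a safe action autonomously; the action and state names are hypothetical.

```python
class EditMonitor:
    """Toy edit-automaton-style enforcement: actions from a possibly
    compromised controller are forwarded only if allowed in the current
    state; otherwise they are suppressed and a safe action is emitted."""

    def __init__(self, allowed, safe_action):
        self.allowed = allowed          # dict: state -> {action: next_state}
        self.safe_action = safe_action  # dict: state -> (action, next_state)
        self.state = "idle"

    def step(self, action):
        if action in self.allowed.get(self.state, {}):
            self.state = self.allowed[self.state][action]
            return action               # transparent forwarding
        # Suppress the incorrect action and act autonomously instead.
        corrected, self.state = self.safe_action[self.state]
        return corrected

monitor = EditMonitor(
    allowed={"idle": {"open_valve": "open"}, "open": {"close_valve": "idle"}},
    safe_action={"idle": ("noop", "idle"), "open": ("close_valve", "idle")},
)
print(monitor.step("open_valve"))   # forwarded
print(monitor.step("open_valve"))   # suppressed; monitor closes the valve itself
```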
2020-12-02
Niz, D. de, Andersson, B., Klein, M., Lehoczky, J., Vasudevan, A., Kim, H., Moreno, G.  2019.  Mixed-Trust Computing for Real-Time Systems. 2019 IEEE 25th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :1–11.

Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a realtime scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash) including its timing guarantees. This framework is the first allowing the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
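A much-simplified, platform-agnostic sketch of the timing idea (a trusted fallback fires when the untrusted computation misses its budget); in the actual framework the trusted part runs under a hypervisor-level scheduler, not Python threads.

```python
import threading
import time

def untrusted_controller(result):
    time.sleep(0.2)            # stands in for a complex, unverified computation
    result["cmd"] = "optimized_command"

def trusted_fallback():
    print("deadline missed: issuing safe default command")

BUDGET = 0.1                   # seconds granted to the untrusted part

result = {}
worker = threading.Thread(target=untrusted_controller, args=(result,), daemon=True)
watchdog = threading.Timer(BUDGET, trusted_fallback)

watchdog.start()
worker.start()
worker.join(BUDGET)            # wait only as long as the budget allows
if "cmd" in result:
    watchdog.cancel()          # untrusted part finished in time; use its output
    print("using", result["cmd"])
```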

2020-11-16
Tamimi, A., Touhiduzzaman, M., Hahn, A.  2019.  Modeling and Analysis Cyber Threats in Power Systems Using Architecture Analysis Design Language (AADL). 2019 Resilience Week (RWS). 1:213–218.
The lack of strong cyber-physical modeling capabilities presents many challenges across the design, development, verification, and maintenance phases of a system [7]. Novel techniques for modeling the cyber-grid components, along with analysis and verification techniques, are imperative to the deployment of a resilient and robust power grid. Several works address False Data Injection (FDI) attacks on the power grid. However, most of them suffer from the lack of a model to investigate the effects of attacks. This paper proposes a cyber-physical model using the Architecture Analysis & Design Language (AADL) [15] and power system information models to address different attacks in power systems.
Ibrahim, M., Alsheikh, A.  2018.  Assessing Level of Resilience Using Attack Graphs. 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
Cyber-Physical Systems are subject to cyber-attacks due to existing vulnerabilities in the various components constituting them. System resiliency is concerned with the extent to which the system is able to bounce back to a normal state under attack. In this paper, two communication networks are analyzed, formally described, and modeled using the Architecture Analysis & Design Language (AADL), identifying their architecture, connections, vulnerabilities, resources, and possible attack instances as well as their pre- and post-conditions. The generated network models are then verified against a security property using the JKind model checker. The union of the generated attack sequences/scenarios resulting in overall network compromise (given by its loss of stability) is the attack graph. The generated attack graph is visualized using the Unity software and then used to assess the worst-case level of resilience for both networks.
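As a generic illustration of the construction described above, not the paper's AADL/JKind toolchain, the sketch below takes attack sequences, forms their union as an attack graph, and uses shortest attack-path length as a toy resilience measure; the state names are hypothetical.

```python
from collections import deque

def attack_graph(sequences):
    """Union of individual attack sequences (lists of compromised states)
    into a single set of directed edges, as in the construction above."""
    edges = set()
    for seq in sequences:
        edges.update(zip(seq, seq[1:]))
    return edges

def worst_case_resilience(edges, start, goal):
    """Toy resilience measure: length of the shortest attack path from the
    initial state to overall network compromise (fewer steps = less resilient)."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append((b, dist + 1))
    return None  # goal not reachable: no compromising attack sequence

sequences = [
    ["internet", "gateway", "plc", "network_down"],
    ["internet", "hmi", "plc", "network_down"],
    ["internet", "gateway", "historian", "plc", "network_down"],
]
graph = attack_graph(sequences)
print(worst_case_resilience(graph, "internet", "network_down"))  # 3 steps
```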
2020-11-02
Pinisetty, Srinivas, Schneider, Gerardo, Sands, David.  2018.  Runtime Verification of Hyperproperties for Deterministic Programs. 2018 IEEE/ACM 6th International FME Workshop on Formal Methods in Software Engineering (FormaliSE). :20—29.
In this paper, we consider the runtime verification problem of safety hyperproperties for deterministic programs. Several security and information-flow policies such as data minimality, non-interference, integrity, and software doping are naturally expressed formally as safety hyperproperties. Although there are monitoring results for hyperproperties, the algorithms are very complex since these are properties over sets of traces, not over single traces. For the deterministic input-output programs that we consider, and the specific safety hyperproperties we are interested in, the problem can be reduced to monitoring of trace properties. In this paper, we present a simpler monitoring approach for safety hyperproperties of deterministic programs. The approach involves transforming the given safety hyperproperty into a trace property, extracting a characteristic predicate for the given hyperproperty, and providing a parametric monitor taking such a predicate as a parameter. For any hyperproperty in the considered subclass, we show how runtime verification monitors can be synthesised. We have implemented our approach in the form of a parameterised monitor for the given class, and have applied it to a number of hyperproperties including data minimisation, non-interference, integrity and software doping. We show results concerning both offline and online monitoring.
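A minimal rendition of the parametric-monitor idea, with the characteristic predicate passed as a parameter and instantiated with a toy non-interference-style check for a deterministic function; the names and the decorator packaging are illustrative, not the paper's implementation.

```python
def parametric_monitor(predicate):
    """Wrap a deterministic program so that every observed (input, output)
    pair is checked against the characteristic predicate of the
    hyperproperty; violations are reported at runtime."""
    def monitor(program):
        history = []
        def monitored(x):
            y = program(x)
            for (xi, yi) in history:
                if not predicate(xi, yi, x, y):
                    raise RuntimeError(f"hyperproperty violated on inputs {xi!r} and {x!r}")
            history.append((x, y))
            return y
        return monitored
    return monitor

# Toy instance: outputs may depend only on the 'public' field of the input,
# i.e., two inputs agreeing on 'public' must produce equal outputs.
non_interference = parametric_monitor(
    lambda x1, y1, x2, y2: x1["public"] != x2["public"] or y1 == y2)

@non_interference
def program(inp):
    return inp["public"] * 2 + inp["secret"]   # leaks the secret

program({"public": 1, "secret": 0})
program({"public": 1, "secret": 5})            # raises: violation detected
```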
Ermakov, Anton D., Prokopenko, Svetlana A., Yevtushenko, Nina V.  2018.  Security Checking Experiments with Mobile Services. 2018 19th International Conference of Young Specialists on Micro/Nanotechnologies and Electron Devices (EDM). :139–141.
In this paper, we continue to investigate the problem of software security. The problem is to check whether software under test has vulnerabilities such as exceeding the admissible values of input/output parameters or internal variables, or whether it can reach states where the software (service) behavior is not defined. We illustrate by experiments that the well-known verifier Java Path Finder (JPF) can be utilized for this purpose. We apply JPF-mobile to Android applications and present the results of security checking experiments.
2020-10-26
Sun, Pengfei, Garcia, Luis, Zonouz, Saman.  2019.  Tell Me More Than Just Assembly! Reversing Cyber-Physical Execution Semantics of Embedded IoT Controller Software Binaries. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :349–361.
The safety of critical cyber-physical IoT devices hinges on the security of their embedded software that implements control algorithms for monitoring and control of the associated physical processes, e.g., robotics and drones. Reverse engineering of the corresponding embedded controller software binaries enables their security analysis by extracting high-level, domain-specific, and cyber-physical execution semantic information from executables. We present MISMO, a domain-specific reverse engineering framework for embedded binary code in emerging cyber-physical IoT control application domains. The reverse engineering outcomes can be used for firmware vulnerability assessment, memory forensics analysis, targeted memory data attacks, or binary patching for dynamic selective memory protection (e.g., important control algorithm parameters). MISMO performs semantic-matching at an algorithmic level that can help with the understanding of any possible cyber-physical security flaws. MISMO compares low-level binary symbolic values and high-level algorithmic expressions to extract domain-specific semantic information for the binary's code and data. MISMO enables a finer-grained understanding of the controller by identifying the specific control and state estimation algorithms used. We evaluated MISMO on 2,263 popular firmware binaries by 30 commercial vendors from 6 application domains including drones, self-driving cars, smart homes, robotics, 3D printers, and the Linux kernel controllers. The results show that MISMO can accurately extract the algorithm-level semantics of the embedded binary code and data regions. We discovered a zero-day vulnerability in the Linux kernel controllers versions 3.13 and above.
2020-10-06
Baruah, Sanjoy, Burns, Alan.  2019.  Incorporating Robustness and Resilience into Mixed-Criticality Scheduling Theory. 2019 IEEE 22nd International Symposium on Real-Time Distributed Computing (ISORC). :155—162.

Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising different components that need to have their correctness validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms that have been developed tend to exhibit rather poor survivability characteristics during run-time. (E.g., MCSh allows for less-important (“Lo-criticality”) workloads to be completely discarded in the event that run-time behavior is not compliant with the assumptions under which the correctness of the LO-criticality workload should be verified.) Here we seek to extend MCSh to incorporate survivability considerations, by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of mixed-criticality scheduling algorithms, and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek to develop scheduling algorithms that possess superior survivability characteristics, thereby obtaining algorithms with better survivability properties than current ones (which, since they have been developed within a survivability-agnostic framework, tend to focus exclusively on pre-runtime verification and ignore survivability issues entirely).

2020-04-20
Huang, Zhen, Lie, David, Tan, Gang, Jaeger, Trent.  2019.  Using Safety Properties to Generate Vulnerability Patches. 2019 IEEE Symposium on Security and Privacy (SP). :539–554.
Security vulnerabilities are among the most critical software defects in existence. When identified, programmers aim to produce patches that prevent the vulnerability as quickly as possible, motivating the need for automatic program repair (APR) methods to generate patches automatically. Unfortunately, most current APR methods fall short because they approximate the properties necessary to prevent the vulnerability using examples. Approximations result in patches that either do not fix the vulnerability comprehensively, or may even introduce new bugs. Instead, we propose property-based APR, which uses human-specified, program-independent and vulnerability-specific safety properties to derive source code patches for security vulnerabilities. Unlike properties that are approximated by observing the execution of test cases, such safety properties are precise and complete. The primary challenge lies in mapping such safety properties into source code patches that can be instantiated into an existing program. To address these challenges, we propose Senx, which, given a set of safety properties and a single input that triggers the vulnerability, detects the safety property violated by the vulnerability input and generates a corresponding patch that enforces the safety property and thus, removes the vulnerability. Senx solves several challenges with property-based APR: it identifies the program expressions and variables that must be evaluated to check safety properties and identifies the program scopes where they can be evaluated, it generates new code to selectively compute the values it needs if calling existing program code would cause unwanted side effects, and it uses a novel access range analysis technique to avoid placing patches inside loops where it could incur performance overhead. Our evaluation shows that the patches generated by Senx successfully fix 32 of 42 real-world vulnerabilities from 11 applications including various tools or libraries for manipulating graphics/media files, a programming language interpreter, a relational database engine, a collection of programming tools for creating and managing binary programs, and a collection of basic file, shell, and text manipulation tools.
2020-03-16
Tahat, Amer, Joshi, Sarang, Goswami, Pronnoy, Ravindran, Binoy.  2019.  Scalable Translation Validation of Unverified Legacy OS Code. 2019 Formal Methods in Computer Aided Design (FMCAD). :1–9.

Formally verifying functional and security properties of a large-scale production operating system is highly desirable. However, it is challenging as such OSes are often written in multiple source languages that have no formal semantics - a prerequisite for formal reasoning. To avoid expensive formalization of the semantics of multiple high-level source languages, we present a lightweight and rigorous verification toolchain that verifies OS code at the binary level, targeting ARM machines. To reason about ARM instructions, we first translate the ARM Specification Language that describes the semantics of the ARMv8 ISA into the PVS7 theorem prover and verify the translation. We leverage the radare2 reverse engineering tool to decode ARM binaries into PVS7 and verify the translation. Our translation verification methodology is a lightweight formal validation technique that generates large-scale instruction emulation test lemmas whose proof obligations are automatically discharged. To demonstrate our verification methodology, we apply the technique on two OSes: Google's Zircon and a subset of Linux. We extract a set of 370 functions from these OSes, translate them into PVS7, and verify the correctness of the translation by automatically discharging hundreds of thousands of proof obligations and tests. This took 27.5 person-months to develop.
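The flavor of "instruction emulation test lemmas" can be conveyed, far from the PVS formalization, by differential testing of a decoded instruction's semantics against a reference semantics on many concrete operands; the instruction and function names below are hypothetical.

```python
import random

def reference_add(a, b):
    """Reference semantics of a 64-bit register ADD (wraparound arithmetic)."""
    return (a + b) & (2**64 - 1)

def decoded_add(a, b):
    """Semantics produced by the (hypothetical) decoder under validation."""
    return (a + b) % 2**64

def emulation_test_lemma(trials=10_000):
    # Differential test: the decoded semantics must agree with the reference
    # semantics on every sampled operand pair.
    for _ in range(trials):
        a, b = random.getrandbits(64), random.getrandbits(64)
        assert decoded_add(a, b) == reference_add(a, b), (a, b)
    return True

print(emulation_test_lemma())
```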

Chondamrongkul, Nacha, Sun, Jing, Wei, Bingyang, Warren, Ian.  2019.  Parallel Verification of Software Architecture Design. 2019 IEEE 19th International Symposium on High Assurance Systems Engineering (HASE). :50–57.
In component-based software systems, certain behaviours of components and their composition may affect system reliability at runtime. This problem can be detected early through automated verification of the software architecture design, for which model checking is one of the available techniques. However, its practicality and performance remain challenges. This paper presents a scalable approach to software architecture verification. The proposed modelling manifests the behaviours of software components in order to detect problematic behaviours, such as circular dependency and performance bottlenecks. The outcome of the verification identifies the problem and the scenarios that cause it. To mitigate the performance issue of verification, parallelism is applied to the verification process so that multiple decomposed models can be verified simultaneously in a multi-threaded environment. As some software systems are designed as a monolithic architecture, we present a method that automatically decomposes a large monolithic model into a set of smaller sub-models. Our approach was evaluated and shown to enhance the performance of the verification process for large-scale, complex software systems.
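A schematic sketch of the parallel step: decomposed sub-models are verified concurrently, here with a toy circular-dependency check standing in for the model checker; all names are hypothetical and not the paper's tooling.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def detect_circular_dependency(edges):
    """Toy cycle detection over a dependency relation given as a set of pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, stack = set(), set()
    def dfs(node):
        if node in stack:
            return True
        if node in visited:
            return False
        visited.add(node)
        stack.add(node)
        found = any(dfs(succ) for succ in graph.get(node, ()))
        stack.discard(node)
        return found
    return any(dfs(node) for node in graph)

def verify_submodel(name, submodel):
    """Placeholder for a model-checking call on one decomposed sub-model."""
    has_cycle = detect_circular_dependency(submodel)
    return name, not has_cycle, submodel if has_cycle else None

submodels = {
    "ui":   {("ui", "service")},
    "core": {("service", "db"), ("db", "service")},   # circular dependency
}
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(verify_submodel, n, m) for n, m in submodels.items()]
    for future in as_completed(futures):
        name, ok, counterexample = future.result()
        print(name, "OK" if ok else f"problem found: {counterexample}")
```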
2020-03-09
Babu, T. Kishore, Guruprakash, C. D.  2019.  A Systematic Review of the Third Party Auditing in Cloud Security: Security Analysis, Computation Overhead and Performance Evaluation. 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). :86–91.
Cloud storage offers considerable efficiency and security for users' data and provides high flexibility to the user. Hackers attempt several kinds of attacks to steal data, which increases the concern over data security in the cloud. Third Party Auditing (TPA) methods are introduced to check data integrity, and several TPA methods have been developed to improve the privacy and efficiency of integrity checking. This review analyzes the various methods involved in TPA in terms of functionality, security, and overall performance. The Merkle Hash Tree (MHT) method provides efficiency and security in checking the integrity of data. The computational overhead of proof verification is also analyzed in this review. The communication cost of most TPA methods is observed to be low, and there is a need to improve the security of public auditing.
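As background for the MHT discussion, a minimal sketch of a Merkle hash tree (root computation, audit path, and verification of a single block), using SHA-256 via hashlib; it is not any particular scheme surveyed in the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a Merkle hash tree over the given data blocks."""
    level = [h(block) for block in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h(block) for block in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(block, index, path, root):
    node = h(block)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
proof = audit_path(blocks, 2)
print(verify(b"block-2", 2, proof, root))   # True: block 2 is intact
```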
2019-12-17
Huang, Bo-Yuan, Ray, Sayak, Gupta, Aarti, Fung, Jason M., Malik, Sharad.  2018.  Formal Security Verification of Concurrent Firmware in SoCs Using Instruction-Level Abstraction for Hardware*. 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC). :1-6.

Formal security verification of firmware interacting with hardware in modern Systems-on-Chip (SoCs) is a critical research problem. This faces the following challenges: (1) design complexity and heterogeneity, (2) semantics gaps between software and hardware, (3) concurrency between firmware/hardware and between Intellectual Property Blocks (IPs), and (4) expensive bit-precise reasoning. In this paper, we present a co-verification methodology to address these challenges. We model hardware using the Instruction-Level Abstraction (ILA), capturing firmware-visible behavior at the architecture level. This enables integrating hardware behavior with firmware in each IP into a single thread. The co-verification with multiple firmware across IPs is formulated as a multi-threaded program verification problem, for which we leverage software verification techniques. We also propose an optimization using abstraction to prevent expensive bit-precise reasoning. The evaluation of our methodology on an industry SoC Secure Boot design demonstrates its applicability in SoC security verification.

2019-12-02
Protzenko, Jonathan, Beurdouche, Benjamin, Merigoux, Denis, Bhargavan, Karthikeyan.  2019.  Formally Verified Cryptographic Web Applications in WebAssembly. 2019 IEEE Symposium on Security and Privacy (SP). :1256–1274.
After suffering decades of high-profile attacks, the need for formal verification of security-critical software has never been clearer. Verification-oriented programming languages like F* are now being used to build high-assurance cryptographic libraries and implementations of standard protocols like TLS. In this paper, we seek to apply these verification techniques to modern Web applications, like WhatsApp, that embed sophisticated custom cryptographic components. The problem is that these components are often implemented in JavaScript, a language that is both hostile to cryptographic code and hard to reason about. So we instead target WebAssembly, a new instruction set that is supported by all major JavaScript runtimes. We present a new toolchain that compiles Low*, a low-level subset of the F* programming language, into WebAssembly. Unlike other WebAssembly compilers like Emscripten, our compilation pipeline is focused on compactness and auditability: we formalize the full translation rules in the paper and implement it in a few thousand lines of OCaml. Using this toolchain, we present two case studies. First, we build WHACL*, a WebAssembly version of the existing, verified HACL* cryptographic library. Then, we present LibSignal*, a brand new, verified implementation of the Signal protocol in WebAssembly, that can be readily used by messaging applications like WhatsApp, Skype, and Signal.
Abate, Carmine, Blanco, Roberto, Garg, Deepak, Hritcu, Catalin, Patrignani, Marco, Thibault, Jérémy.  2019.  Journey Beyond Full Abstraction: Exploring Robust Property Preservation for Secure Compilation. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :256–25615.
Good programming languages provide helpful abstractions for writing secure code, but the security properties of the source language are generally not preserved when compiling a program and linking it with adversarial code in a low-level target language (e.g., a library or a legacy application). Linked target code that is compromised or malicious may, for instance, read and write the compiled program's data and code, jump to arbitrary memory locations, or smash the stack, blatantly violating any source-level abstraction. By contrast, a fully abstract compilation chain protects source-level abstractions all the way down, ensuring that linked adversarial target code cannot observe more about the compiled program than what some linked source code could about the source program. However, while research in this area has so far focused on preserving observational equivalence, as needed for achieving full abstraction, there is a much larger space of security properties one can choose to preserve against linked adversarial code. And the precise class of security properties one chooses crucially impacts not only the supported security goals and the strength of the attacker model, but also the kind of protections a secure compilation chain has to introduce. We are the first to thoroughly explore a large space of formal secure compilation criteria based on robust property preservation, i.e., the preservation of properties satisfied against arbitrary adversarial contexts. We study robustly preserving various classes of trace properties such as safety, of hyperproperties such as noninterference, and of relational hyperproperties such as trace equivalence. This leads to many new secure compilation criteria, some of which are easier to practically achieve and prove than full abstraction, and some of which provide strictly stronger security guarantees. For each of the studied criteria we propose an equivalent “property-free” characterization that clarifies which proof techniques apply. For relational properties and hyperproperties, which relate the behaviors of multiple programs, our formal definitions of the property classes themselves are novel. We order our criteria by their relative strength and show several collapses and separation results. Finally, we adapt existing proof techniques to show that even the strongest of our secure compilation criteria, the robust preservation of all relational hyperproperties, is achievable for a simple translation from a statically typed to a dynamically typed language.
Besson, Frédéric, Dang, Alexandre, Jensen, Thomas.  2019.  Information-Flow Preservation in Compiler Optimisations. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :230–23012.

Correct compilers perform program transformations preserving input/output behaviours of programs. Yet, correctness does not prevent program optimisations from introducing information-flow leaks that would make the target program more vulnerable to side-channel attacks than the source program. To tackle this problem, we propose a notion of Information-Flow Preserving (IFP) program transformation which ensures that a target program is no more vulnerable to passive side-channel attacks than a source program. To protect against a wide range of attacks, we model an attacker who is granted arbitrary memory accesses for a pre-defined set of observation points. We propose a compositional proof principle for proving that a transformation is IFP. Using this principle, we show how a translation validation technique can be used to automatically verify and even close information-flow leaks introduced by standard compiler passes such as dead-store elimination and register allocation. The technique has been experimentally validated on the CompCert C compiler.

Simon, Laurent, Chisnall, David, Anderson, Ross.  2018.  What You Get is What You C: Controlling Side Effects in Mainstream C Compilers. 2018 IEEE European Symposium on Security and Privacy (EuroS P). :1–15.
Security engineers have been fighting with C compilers for years. A careful programmer would test for null pointer dereferencing or division by zero; but the compiler would fail to understand, and optimize the test away. Modern compilers now have dedicated options to mitigate this. But when a programmer tries to control side effects of code, such as to make a cryptographic algorithm execute in constant time, the problem remains. Programmers devise complex tricks to obscure their intentions, but compiler writers find ever smarter ways to optimize code. A compiler upgrade can suddenly and without warning open a timing channel in previously secure code. This arms race is pointless and has to stop. We argue that we must stop fighting the compiler, and instead make it our ally. As a starting point, we analyze the ways in which compiler optimization breaks implicit properties of crypto code; and add guarantees for two of these properties in Clang/LLVM. Our work explores what is actually involved in controlling side effects on modern CPUs with a standard toolchain. Similar techniques can and should be applied to other security properties; achieving intentions by compiler commands or annotations makes them explicit, so we can reason about them. It is already understood that explicitness is essential for cryptographic protocol security and for compiler performance; it is essential for language security too. We therefore argue that this should be only the first step in a sustained engineering effort.
2019-11-12
Padon, Oded.  2018.  Deductive Verification of Distributed Protocols in First-Order Logic. 2018 Formal Methods in Computer Aided Design (FMCAD). :1-1.

Formal verification of infinite-state systems, and distributed systems in particular, is a long standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to usability of these tools. This problem is particularly acute in case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
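To make the reduction concrete: checking a single verification condition with an SMT solver can look like the sketch below, which uses the z3-solver Python bindings (assumed to be installed) to show that the invariant x >= 0 is inductive for the loop body x := x + 1.

```python
from z3 import Int, Implies, And, Not, Solver, unsat

x, x_next = Int("x"), Int("x_next")

invariant  = x >= 0           # candidate inductive invariant
transition = x_next == x + 1  # loop body: x := x + 1
vc = Implies(And(invariant, transition), x_next >= 0)

solver = Solver()
solver.add(Not(vc))           # the VC is valid iff its negation is unsatisfiable
if solver.check() == unsat:
    print("verification condition valid: the invariant is inductive")
else:
    print("counterexample:", solver.model())
```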

E.V., Jaideep Varier, V., Prabakar, Balamurugan, Karthigha.  2019.  Design of Generic Verification Procedure for IIC Protocol in UVM. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). :1146-1150.

With the growth of technology, designs have become more complex and may contain bugs, which makes verification an indispensable part of product development. UVM describes a standard method for the verification of designs that is reusable and portable. This paper verifies the IIC bus protocol using the Universal Verification Methodology. The IIC controller is designed in Verilog using Vivado; it has an APB interface, and its functional and code coverage are measured in Mentor Graphics QuestaSim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.

2019-09-26
Pfeffer, T., Herber, P., Druschke, L., Glesner, S.  2018.  Efficient and Safe Control Flow Recovery Using a Restricted Intermediate Language. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :235-240.

Approaches for the automatic analysis of security policies at the source-code level cannot trivially be applied to binaries. This is due to the lack of high-level semantics in low-level object code, and the fundamental problem that control-flow recovery from binaries is difficult. We present a novel approach to recover the control flow of binaries that is both safe and efficient. The key idea of our approach is to use the information contained in security mechanisms to approximate the targets of computed branches. To achieve this, we first define a restricted control transition intermediate language (RCTIL), which restricts the number of possible targets for each branch to a finite number of given targets. Based on this intermediate language, we demonstrate how a safe model of the control flow can be recovered without data-flow analyses. Our evaluation shows that this makes our solution more efficient than existing solutions.

2019-06-28
Chen, G., Wang, D., Li, T., Zhang, C., Gu, M., Sun, J.  2018.  Scalable Verification Framework for C Program. 2018 25th Asia-Pacific Software Engineering Conference (APSEC). :129-138.

Software verification has been well applied in safety-critical areas and has shown the ability to provide better quality assurance for modern software. However, as lines of code and the complexity of software systems increase, the scalability of verification becomes a challenge. In this paper, we present an automatic software verification framework, TSV, to address the scalability issues: (i) extended structural abstraction and property-guided program slicing to solve large-scale program verification problems, saving time and memory without losing accuracy; (ii) automatic selection of different verification methods according to the program and property context to improve verification efficiency. For evaluation, we compare TSV's different configurations with existing C program verifiers on open benchmarks. We found that TSV with auto-selection performs better than with bounded model checking only or with extended structural abstraction only. Compared to existing tools such as CBMC and CPAChecker, it achieves a 10%-20% improvement in accuracy and a 50%-90% improvement in memory consumption.

2019-05-01
Mili, S., Nguyen, N., Chelouah, R.  2018.  Attack Modeling and Verification for Connected System Security. 2018 13th Annual Conference on System of Systems Engineering (SoSE). :157–162.

In the development process of critical systems, one of the main challenges is to provide early system validation and verification against vulnerabilities in order to reduce the cost caused by late error detection. In this paper we propose an approach that, first, allows system security specifications to be described formally, thanks to our proposed extended attack tree. Second, static and dynamic system modeling using a SysML connectivity profile is introduced to model error propagation. Finally, a model checker is used to validate the system specifications.
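The paper's extended attack tree is tied to its SysML profile; as a generic illustration only, a tiny AND/OR attack-tree evaluator over a set of attacker capabilities might look like this (the node names are hypothetical).

```python
def feasible(node, capabilities):
    """Evaluate a toy AND/OR attack tree: a leaf is feasible if the attacker
    has that capability; an AND node needs all children, an OR node any one."""
    kind = node[0]
    if kind == "leaf":
        return node[1] in capabilities
    children = node[2]
    combine = all if kind == "and" else any
    return combine(feasible(child, capabilities) for child in children)

steal_data = ("or", "steal data", [
    ("and", "network path", [
        ("leaf", "exploit gateway"),
        ("leaf", "sniff unencrypted link"),
    ]),
    ("leaf", "physical access to device"),
])

print(feasible(steal_data, {"exploit gateway", "sniff unencrypted link"}))  # True
print(feasible(steal_data, {"exploit gateway"}))                            # False
```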

2019-02-14
Peng, H., Shoshitaishvili, Y., Payer, M.  2018.  T-Fuzz: Fuzzing by Program Transformation. 2018 IEEE Symposium on Security and Privacy (SP). :697-710.

Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries.
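T-Fuzz operates on binaries; the sketch below is a toy, source-level illustration of the underlying idea only: negate a sanity check so the fuzzer reaches the guarded code, then filter the resulting crashes against the original check (where the paper uses symbolic execution for this step). All names and values are hypothetical.

```python
import random

MAGIC = 0x5EAF00D

def parse(data, transformed=False):
    # Sanity check on a magic value; random fuzzer inputs almost never pass it.
    if int.from_bytes(data[:4], "big") == MAGIC or transformed:
        # "Deep" code guarded by the check; contains a bug.
        if len(data) < 8:
            raise IndexError("out-of-bounds read")   # stands in for a crash
        return data[4:8]
    return None

def fuzz(rounds=1000):
    crashes = []
    for _ in range(rounds):
        data = bytes(random.randrange(256) for _ in range(random.randrange(16)))
        try:
            parse(data, transformed=True)   # check "removed" in the transformed program
        except IndexError:
            crashes.append(data)
    return crashes

candidates = fuzz()
# Post-processing: keep only inputs that would also pass the original check
# (T-Fuzz instead uses symbolic execution to reconstruct such inputs).
true_bugs = [d for d in candidates if int.from_bytes(d[:4], "big") == MAGIC]
print(len(candidates), "crashes on transformed program;",
      len(true_bugs), "reproduce on the original")
```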