Biblio

Filters: Keyword is scalable verification
2023-02-17
Mahmood, Riyadh, Pennington, Jay, Tsang, Danny, Tran, Tan, Bogle, Andrea.  2022.  A Framework for Automated API Fuzzing at Enterprise Scale. 2022 IEEE Conference on Software Testing, Verification and Validation (ICST). :377–388.
Web-based Application Programming Interfaces (APIs) are often described using SOAP, OpenAPI, and GraphQL specifications. These specifications provide a consistent way to define web services and enable automated fuzz testing. As such, many fuzzers take advantage of these specifications. However, in an enterprise setting, the tools are usually installed and scaled by individual teams, leading to duplication of effort. There is a need for an enterprise-wide fuzz testing solution that provides shared, cost-efficient, off-nominal testing at scale, where fuzzers can be plugged in as needed. Internet cloud-based fuzz testing-as-a-service solutions mitigate scalability concerns but are not always feasible, as they require artifacts to be uploaded to external infrastructure. Typically, corporate policies prevent sharing artifacts with third parties due to cost, intellectual property, and security concerns. We utilize API specifications and combine them with cluster computing elasticity to build an automated, scalable framework that can fuzz multiple apps at once while retaining the trust boundary of the enterprise.
ISSN: 2159-4848
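
To make the spec-driven fuzzing idea above concrete, the following minimal Python sketch (not the authors' framework) parses an OpenAPI document, picks an operation at random, and sends randomized values for its declared query parameters; the spec file name, target URL, and use of the requests library are illustrative assumptions.

    import json
    import random
    import string

    import requests  # third-party HTTP client, assumed available

    def random_value(schema):
        """Generate a random value loosely matching an OpenAPI parameter schema."""
        t = schema.get("type", "string")
        if t == "integer":
            return random.randint(-2**31, 2**31 - 1)
        if t == "boolean":
            return random.choice([True, False])
        # default: random printable string, occasionally oversized to probe limits
        length = random.choice([1, 16, 4096])
        return "".join(random.choices(string.printable, k=length))

    def fuzz_once(base_url, spec):
        """Pick a random operation from the spec and fire one fuzzed request."""
        path, item = random.choice(list(spec.get("paths", {}).items()))
        ops = {m: o for m, o in item.items()
               if m in ("get", "post", "put", "delete", "patch")}
        method, op = random.choice(list(ops.items()))
        params = {p["name"]: random_value(p.get("schema", {}))
                  for p in op.get("parameters", []) if p.get("in") == "query"}
        resp = requests.request(method.upper(), base_url + path,
                                params=params, timeout=5)
        return resp.status_code  # 5xx responses are the off-nominal findings to record

    if __name__ == "__main__":
        with open("openapi.json") as f:                       # hypothetical spec file
            spec = json.load(f)
        for _ in range(100):
            print(fuzz_once("http://localhost:8080", spec))   # hypothetical target

In an enterprise deployment of the kind the paper describes, many such workers would be scheduled elastically on a cluster inside the trust boundary rather than run from a single script.
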
Dreyer, Julian, Tönjes, Ralf, Aschenbruck, Nils.  2022.  Decentralizing IoT Public-Key Storage using Distributed Ledger Technology. 2022 International Wireless Communications and Mobile Computing (IWCMC). :172–177.
The secure Internet of Things (IoT) increasingly relies on digital cryptographic signatures which require a private signature and public verification key. By their intrinsic nature, public keys are meant to be accessible to any interested party willing to verify a given signature. Thus, the storing of such keys is of great concern, since an adversary shall not be able to tamper with the public keys, e.g., on a local filesystem. Commonly used public-key infrastructures (PKIs), which handle the key distribution and storage, are not feasible in most use-cases, due to their resource intensity and high complexity. Thus, the general storing of the public verification keys is of notable interest for low-resource IoT networks. By using the Distributed Ledger Technology (DLT), this paper proposes a decentralized concept for storing public signature verification keys in a tamper-resistant, secure, and resilient manner. By combining lightweight public-key exchange protocols with the proposed approach, the storing of verification keys becomes scalable and especially suitable for low-resource IoT devices. This paper provides a Proof-of-Concept implementation of the DLT public-key store by extending our previously proposed NFC-Key Exchange (NFC-KE) protocol with a decentralized Hyperledger Fabric public-key store. The provided performance analysis shows that by using the decentralized keystore, the NFC-KE protocol gains an increased tamper resistance and overall system resilience while also showing expected performance degradations with a low real-world impact.
ISSN: 2376-6506
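
The paper's keystore is built on Hyperledger Fabric; the toy Python sketch below only illustrates the underlying tamper-evidence idea, i.e. appending (device, public key) records to a hash-chained, append-only log so that any later modification is detectable. All identifiers and the placeholder PEM string are illustrative.

    import hashlib
    import json

    class KeyLedger:
        """Append-only, hash-chained log of (device_id, public_key) records."""

        def __init__(self):
            self.blocks = []  # each block: {"prev": <hash of previous block>, "record": {...}}

        def _digest(self, block):
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def register(self, device_id, public_key_pem):
            prev = self._digest(self.blocks[-1]) if self.blocks else "0" * 64
            self.blocks.append({"prev": prev,
                                "record": {"device": device_id, "key": public_key_pem}})

        def lookup(self, device_id):
            for block in reversed(self.blocks):  # the latest registration wins
                if block["record"]["device"] == device_id:
                    return block["record"]["key"]
            return None

        def verify_chain(self):
            """Recompute the hash links; returns False if any block was altered."""
            return all(self.blocks[i]["prev"] == self._digest(self.blocks[i - 1])
                       for i in range(1, len(self.blocks)))

    ledger = KeyLedger()
    ledger.register("sensor-42", "-----BEGIN PUBLIC KEY----- ...")  # placeholder PEM
    assert ledger.verify_chain() and ledger.lookup("sensor-42") is not None

A real DLT additionally replicates this log across peers and orders writes by consensus, which is what gives the scheme its resilience.
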
Vélez, Tatiana Castro, Khatchadourian, Raffi, Bagherzadeh, Mehdi, Raja, Anita.  2022.  Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study. 2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR). :469–481.
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. While hybrid approaches aim for the “best of both worlds,” the challenges in applying them in the real world are largely unknown. We conduct a data-driven analysis of the challenges (and resultant bugs) involved in writing reliable yet performant imperative DL code by studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation, the opposite of its intention, and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
ISSN: 2574-3864
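
One concrete instance of the hybridization pitfalls the study catalogues, assuming TensorFlow 2.x: decorating eager code with tf.function compiles it into a graph, but Python side effects inside the function execute only while the function is being traced, which is a frequent source of the API misuse the authors report.

    import tensorflow as tf  # assumes TensorFlow 2.x

    calls = []

    @tf.function  # hybridizes the eager function into graph execution
    def loss(x):
        calls.append(1)  # Python side effect: runs at trace time, not on every call
        return tf.reduce_sum(tf.square(x))

    x = tf.constant([1.0, 2.0, 3.0])
    for _ in range(3):
        loss(x)

    # The function is traced once for this input signature, so the list typically
    # grows once rather than three times, unlike the original eager version.
    print(len(calls))
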
Li, Ying, Chen, Lan, Wang, Jian, Gong, Guanfei.  2022.  Partial Reconfiguration for Run-time Memory Faults and Hardware Trojan Attacks Detection. 2022 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :173–176.
Embedded memories are important components of systems-on-chip; they may be crippled by aging and wear faults or Hardware Trojan attacks that compromise run-time security. Current built-in self-test and pre-silicon verification lack the efficiency and flexibility to solve this problem. To this end, we address such vulnerabilities by proposing a run-time memory security detection framework in this paper. The solution builds mainly upon a centralized security detection controller for partially reconfigurable inspection content, and a static memory wrapper to handle access conflicts and buffer testing cells. We show that a field programmable gate array prototype of the proposed framework can detect 16 memory faults and 3 types of Hardware Trojans with one reconfigurable partition, while saving 12.7% area and 2.9% power overhead compared to a static implementation. The architecture is more scalable and has little impact on the memory access throughput of the original chip system during run-time detection.
Yang, Kaicheng, Wu, Yongtang, Chen, Yuling.  2022.  A Blockchain-based Scalable Electronic Contract Signing System. 2022 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :343–348.
As COVID-19 continues to spread globally, more and more companies are shifting to remote online work, leading to the expansion of electronic signatures. However, existing electronic signature platforms suffer from centralized data management: the system is subject to data loss, tampering, and leakage when an attack from outside or inside occurs. In response to the above problems, this paper designs an electronic signature solution and implements a prototype system based on a consortium blockchain. The solution divides the contract signing process into four states: contract upload, signing initiation, signing verification, and signing confirmation. The signing process is mapped onto blockchain-linked data. Users initiate the signature transaction by signing the uploaded contract's hash. The signing state transition is triggered when the transaction is uploaded to the blockchain under the consensus mechanism and smart contract control, which effectively ensures the integrity of the electronic contract and the non-repudiation of the electronic signature. Finally, a blockchain performance test shows that the system can be applied to the business scenario of contract signing.
Luo, Zhengwu, Wang, Lina, Wang, Run, Yang, Kang, Ye, Aoshuang.  2022.  Improving Robustness Verification of Neural Networks with General Activation Functions via Branching and Optimization. 2022 International Joint Conference on Neural Networks (IJCNN). :1–8.
Robustness verification of neural networks (NNs) is a challenging and significant problem which has drawn great attention in recent years. Existing research has shown that bound propagation is a scalable and effective method for robustness verification, and it can be parallelized on GPUs and TPUs. However, bound propagation methods naturally produce weak bounds due to linear relaxations on the neurons, which may cause verification to fail. Although tightening techniques for simple ReLU networks have been explored, they are not applicable to NNs with general activation functions such as Sigmoid and Tanh. Improving robustness verification on these NNs is still challenging. In this paper, we propose a Branch-and-Bound (BaB) style method to address this problem. The proposed BaB procedure improves the weak bounds by splitting the input domains of neurons into sub-domains and solving the corresponding sub-problems. We propose a generic heuristic function to determine the priority of neuron splitting by scoring the relaxation and impact of neurons. Moreover, we combine bound optimization with the BaB procedure to further improve the weak bounds. Experimental results demonstrate that the proposed method gains up to 35% improvement compared to the state-of-the-art CROWN method on Sigmoid and Tanh networks.
ISSN: 2161-4407
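
A small numeric illustration, not the paper's algorithm, of why branching helps bound propagation for general activations: coarse interval propagation through a tiny Sigmoid network gives loose output bounds, and recursively splitting the input domain and taking the union of the per-piece bounds tightens them. The weights are random and purely illustrative.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def interval_bounds(W1, b1, w2, lo, hi):
        """Sound output bounds via interval arithmetic (loses input correlations)."""
        Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
        z_lo = Wp @ lo + Wn @ hi + b1
        z_hi = Wp @ hi + Wn @ lo + b1
        h_lo, h_hi = sigmoid(z_lo), sigmoid(z_hi)  # sigmoid is monotone
        wp, wn = np.maximum(w2, 0), np.minimum(w2, 0)
        return wp @ h_lo + wn @ h_hi, wp @ h_hi + wn @ h_lo

    def branch_and_bound(W1, b1, w2, lo, hi, depth):
        """Split the widest input dimension recursively; union the sub-domain bounds."""
        if depth == 0:
            return interval_bounds(W1, b1, w2, lo, hi)
        d = int(np.argmax(hi - lo))
        mid = (lo[d] + hi[d]) / 2
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[d], lo_right[d] = mid, mid
        l1, u1 = branch_and_bound(W1, b1, w2, lo, hi_left, depth - 1)
        l2, u2 = branch_and_bound(W1, b1, w2, lo_right, hi, depth - 1)
        return min(l1, l2), max(u1, u2)

    rng = np.random.default_rng(0)
    W1, b1, w2 = rng.normal(size=(4, 2)), rng.normal(size=4), rng.normal(size=4)
    lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    print("no splitting :", interval_bounds(W1, b1, w2, lo, hi))
    print("4-level BaB  :", branch_and_bound(W1, b1, w2, lo, hi, 4))

The paper's contribution lies in doing this on top of linear relaxations rather than plain intervals, and in choosing which neurons to split; the sketch only shows why splitting tightens bounds at all.
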
Yang, Jingcong, Xia, Qi, Gao, Jianbin, Obiri, Isaac Amankona, Sun, Yushan, Yang, Wenwu.  2022.  A Lightweight Scalable Blockchain Architecture for IoT Devices. 2022 IEEE 5th International Conference on Electronics Technology (ICET). :1014–1018.
With the development of Internet of Things (IoT) technology, transactions among IoT devices have gradually increased, which raises the problems of transaction data security and transaction processing efficiency. As one of the research hotspots in the field of data security, blockchain technology has been widely applied to the maintenance of transaction records and the construction of financial payment systems. However, the large proportion of microtransactions in the Internet of Things poses challenges to the coupling of blockchain and IoT devices. This paper proposes a three-party scalable architecture based on “IoT device-edge server-blockchain”. In view of the characteristics of micropayments, a verification mechanism for the execution results of off-chain transactions is designed, and a bridge node is introduced into the off-chain architecture, which ensures the finality of transactions on the blockchain. According to the system evaluation, this scalable architecture improves the processing efficiency of micropayments on the blockchain while keeping decentralization equal to that of the blockchain. Compared with other blockchain-based IoT device payment schemes, our architecture performs better in terms of liveness.
ISSN: 2768-6515
Islam, Tariqul, Hasan, Kamrul, Singh, Saheb, Park, Joon S..  2022.  A Secure and Decentralized Auditing Scheme for Cloud Ensuring Data Integrity and Fairness in Auditing. 2022 IEEE 9th International Conference on Cyber Security and Cloud Computing (CSCloud)/2022 IEEE 8th International Conference on Edge Computing and Scalable Cloud (EdgeCom). :74–79.
With the advent of cloud storage services, many users tend to store their data in the cloud to save storage cost. However, this has led to many security concerns, one of the most important being data integrity. Public verification schemes can employ a third-party auditor to perform data auditing on behalf of the user. But most public verification schemes are vulnerable to procrastinating auditors, who may not perform auditing on time. These schemes also lack fair arbitration, i.e., a way to punish a malicious Cloud Service Provider (CSP) and compensate users whose data has been corrupted. On the other hand, the CSP might store redundant data, which increases the storage cost for the CSP and the computational cost of data auditing for the user. In this paper, we propose a blockchain-based public auditing and deduplication scheme with a fair arbitration system against procrastinating auditors. The key idea is to require auditors to record each verification using a smart contract and store the result in a blockchain as a transaction. Our scheme can detect and punish procrastinating auditors and compensate users in the case of any data loss. Additionally, our scheme can detect and delete duplicate data, which improves storage utilization and reduces the computational cost of data verification. Experimental evaluation demonstrates that our scheme is provably secure and does not incur overhead compared to existing public auditing techniques, while offering the additional feature of verifying the auditor's performance.
ISSN: 2693-8928
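
As a rough illustration of the accountability idea, not the paper's smart contract, each audit result can be appended as a timestamped, hash-linked transaction; anyone can then check both the integrity verdicts and whether consecutive audits of a file stayed within the agreed period. The field names and the weekly deadline below are illustrative.

    import hashlib
    import json
    import time

    class AuditLog:
        """Hash-linked record of audit results, mimicking on-chain transactions."""

        def __init__(self):
            self.txs = []

        def record(self, file_id, intact, ts=None):
            prev = "0" * 64
            if self.txs:
                prev = hashlib.sha256(
                    json.dumps(self.txs[-1], sort_keys=True).encode()).hexdigest()
            self.txs.append({"prev": prev, "file": file_id, "intact": intact,
                             "ts": time.time() if ts is None else ts})

        def procrastinated(self, file_id, period):
            """True if consecutive audits of file_id are ever farther apart than `period`."""
            stamps = [t["ts"] for t in self.txs if t["file"] == file_id]
            return any(b - a > period for a, b in zip(stamps, stamps[1:]))

    log = AuditLog()
    log.record("report.bin", intact=True, ts=0)
    log.record("report.bin", intact=True, ts=86400 * 10)       # ten days later
    print(log.procrastinated("report.bin", period=86400 * 7))  # True: weekly deadline missed

On a real blockchain the timestamps and hash links are fixed by consensus, which is what prevents a procrastinating auditor from back-dating its records.
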
2023-02-13
Rupasri, M., Lakhanpal, Anupam, Ghosh, Soumalya, Hedage, Atharav, Bangare, Manoj L., Ketaraju, K. V. Daya Sagar.  2022.  Scalable and Adaptable End-To-End Collection and Analysis of Cloud Computing Security Data: Towards End-To-End Security in Cloud Computing Systems. 2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM). 2:8–14.

Cloud computing provides customers with enormous compute power and storage capacity, allowing them to deploy their computation- and data-intensive applications without having to invest in infrastructure. Many firms use cloud computing as a means of relocating and maintaining resources outside of their enterprise, regardless of the cloud server's location. However, storing data in the cloud leads to a number of issues related to data loss, accountability, security, etc. Such fears become a great barrier to the adoption of cloud services by users. Cloud computing offers large-scale storage to Internet users, with cost based on the usage of the facilities provided. Privacy protection of a user's data is a challenge, as the internal operations of the service providers cannot be observed by users. Hence, it becomes necessary to monitor the usage of the client's data in the cloud. In this research, we suggest an effective cloud storage solution for accessing patient medical records across hospitals in different countries while maintaining data security and integrity. In the suggested system, multifactor authentication for user login to the cloud and homomorphic encryption for data storage with integrity verification have been implemented effectively. To illustrate the efficacy of the proposed strategy, an experimental investigation was conducted.
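
The abstract does not name a particular homomorphic scheme; as one concrete illustration of computing on encrypted records, the sketch below implements textbook Paillier encryption with deliberately tiny, insecure parameters and shows that two ciphertexts can be added without decryption. Treat it purely as a didactic stand-in for whatever scheme the system actually uses.

    import math
    import random

    # Toy Paillier keypair (tiny primes: for illustration only, not secure)
    p, q = 10007, 10009
    n, n2, g = p * q, (p * q) ** 2, p * q + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)   # modular inverse (Python 3.8+); valid because g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = encrypt(41), encrypt(17)
    print(decrypt((c1 * c2) % n2))   # 58: the sum was computed on ciphertexts

An integrity tag, for example a signature over the ciphertext, would be stored alongside each record to provide the verification step the abstract mentions.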

Yu, Beiyuan, Li, Pan, Liu, Jianwei, Zhou, Ziyu, Han, Yiran, Li, Zongxiao.  2022.  Advanced Analysis of Email Sender Spoofing Attack and Related Security Problems. 2022 IEEE 9th International Conference on Cyber Security and Cloud Computing (CSCloud)/2022 IEEE 8th International Conference on Edge Computing and Scalable Cloud (EdgeCom). :80–85.

A mail spoofing attack is a harmful activity that modifies the source of the mail and tricks users into believing that the message originated from a trusted sender, whereas the actual sender is the attacker. Building on previous work, this paper analyzes the transmission process of an email. Our work identifies new attacks suitable for bypassing SPF, DMARC, and Mail User Agents' protection mechanisms. By conducting the attack, we can forge much more realistic emails that penetrate well-known mail service providers such as Tencent. By completing a large-scale experiment on these well-known mail service providers, we find that some of them are affected by the related vulnerabilities. Some of the bypass methods differ from previous work. Our work shows that this potential security problem can only be effectively mitigated when all email service providers share a common view of security and configure appropriate security policies for each email delivery node. In addition, we also propose a mitigation method to defend against these attacks. We hope our work can draw the attention of email service providers and users and effectively reduce the potential risk of phishing email attacks on them.
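
The defensive counterpart of the attacks above can be sketched as a DMARC-style alignment check: the domain in the visible From header must align with the domain that SPF actually authenticated (the envelope MAIL FROM). This is a simplification of RFC 7489 relaxed alignment; the helper and the sample addresses are illustrative.

    def registered_domain(address):
        """Crude 'organizational domain': last two labels of the address's domain part."""
        domain = address.rsplit("@", 1)[-1].lower().rstrip(">").strip()
        return ".".join(domain.split(".")[-2:])

    def dmarc_spf_aligned(header_from, envelope_from, spf_result):
        """Relaxed alignment: SPF must pass AND both domains share the org domain."""
        return (spf_result == "pass"
                and registered_domain(header_from) == registered_domain(envelope_from))

    # Spoofed mail: SPF passes for the attacker's own envelope domain,
    # but the visible From header claims someone else's domain.
    print(dmarc_spf_aligned("ceo@victim.example", "bounce@attacker.example", "pass"))   # False
    print(dmarc_spf_aligned("ceo@victim.example", "news@mail.victim.example", "pass"))  # True

The attacks in the paper work precisely where providers skip or inconsistently apply checks of this kind across their delivery nodes.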

2023-02-02
Oakley, Lisa, Oprea, Alina, Tripakis, Stavros.  2022.  Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. 2022 IEEE 35th Computer Security Foundations Symposium (CSF). :380–395.

Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution results in fast computation for small parameter spaces. In the case of less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
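
A worked miniature of the ε-ball threat model (the paper itself uses probabilistic model checkers rather than simulation): perturb one row of a small DTMC's transition matrix within an L-infinity ball of radius ε, keep it a valid distribution, and observe how a reachability probability shifts. The matrix values are illustrative.

    import numpy as np

    def reach_prob(P, start, goal, horizon=200):
        """Probability of having reached the absorbing `goal` state within `horizon` steps."""
        dist = np.zeros(len(P))
        dist[start] = 1.0
        for _ in range(horizon):
            dist = dist @ P
        return dist[goal]

    # 3-state DTMC: 0 = trying, 1 = success (absorbing), 2 = failure (absorbing)
    P = np.array([[0.50, 0.40, 0.10],
                  [0.00, 1.00, 0.00],
                  [0.00, 0.00, 1.00]])

    eps = 0.05
    # Adversary shifts eps of probability mass in state 0 from success to failure,
    # staying inside the eps-ball around the original row and keeping it stochastic.
    P_adv = P.copy()
    P_adv[0, 1] -= eps
    P_adv[0, 2] += eps

    print("nominal  :", round(reach_prob(P, 0, 1), 4))      # 0.8
    print("perturbed:", round(reach_prob(P_adv, 0, 1), 4))  # 0.7

The verification and synthesis problems in the paper ask, respectively, whether any such perturbation can push the property below a threshold and which perturbation is worst.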

2023-01-06
Shaikh, Rizwan Ahmed, Sohaib Khan, Muhammad, Rashid, Imran, Abbas, Haidar, Naeem, Farrukh, Siddiqi, Muhammad Haroon.  2022.  A Framework for Human Error, Weaknesses, Threats & Mitigation Measures in an Airgapped Network. 2022 2nd International Conference on Digital Futures and Transformative Technologies (ICoDT2). :1–8.

Many organizations process and store classified data within their computer networks. Owing to the value of the data they hold, such organizations are more attractive targets for adversaries. Accordingly, sensitive organizations resort to an ‘air-gap’ approach on their networks to ensure better protection. However, despite the physical and logical isolation, attackers have successfully demonstrated their capabilities by compromising such networks; Stuxnet and Agent.btz are cases in point. Such attacks were possible due to the successful manipulation of human beings. It has been observed that persistent reconnaissance of employees, and collection of their data, often forms the first step in building up such attacks. With the rapid integration of social media into our daily lives, the prospects for data-seekers through that platform are higher. The inherent risks and vulnerabilities of social networking sites/apps have cultivated a rich environment for foreign adversaries to cherry-pick personal information and carry out successful profiling of employees assigned to sensitive appointments. With further targeted social engineering techniques against the identified employees and their families, attackers extract more and more relevant data to build an intelligence picture. Finally, all the information is fused to design further sophisticated attacks against the air-gapped facility for data pilferage. In this regard, the success of the adversaries in harvesting the personal information of the victims largely depends upon the common errors committed by legitimate users while on duty, in transit, and after their retreat. Such errors will keep recurring unless they are traced back to the underlying human behaviors and weaknesses, and the requisite mitigation framework is worked out.

2022-03-15
Amir, Guy, Schapira, Michael, Katz, Guy.  2021.  Towards Scalable Verification of Deep Reinforcement Learning. 2021 Formal Methods in Computer Aided Design (FMCAD). :193–203.
Deep neural networks (DNNs) have gained significant popularity in recent years, becoming the state of the art in a variety of domains. In particular, deep reinforcement learning (DRL) has recently been employed to train DNNs that realize control policies for various types of real-world systems. In this work, we present the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for DRL systems. To demonstrate the benefits of whiRL 2.0, we apply it to case studies from the communication networks domain that have recently been used to motivate formal verification of DRL systems, and which exhibit characteristics that are conducive to scalable verification. We propose techniques for performing k-induction and semi-automated invariant inference on such systems, and leverage these techniques for proving safety and liveness properties that were previously impossible to verify due to the scalability barriers of prior approaches. Furthermore, we show how our proposed techniques provide insights into the inner workings and the generalizability of DRL systems. whiRL 2.0 is publicly available online.
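
whiRL 2.0 applies k-induction to DNN verification queries; the sketch below only illustrates plain k-induction on a toy transition system (a wrapping counter), with brute-force enumeration standing in for the SMT queries a real tool would discharge. All parameters are illustrative.

    # Toy system: an integer counter that increments and wraps from 9 back to 0.
    INIT = {0}

    def step(s):
        return {0} if s == 9 else {s + 1}

    def prop(s):                       # safety property: the counter never exceeds 9
        return s <= 9

    STATES = range(0, 12)              # overapproximate state space for the inductive check

    def base_case(k):
        """P holds along every path of length k starting from an initial state."""
        frontier = set(INIT)
        for _ in range(k):
            if not all(prop(s) for s in frontier):
                return False
            frontier = {t for s in frontier for t in step(s)}
        return True

    def inductive_step(k):
        """Any k consecutive P-states are followed only by P-states."""
        paths = [[s] for s in STATES if prop(s)]
        for _ in range(k - 1):
            paths = [p + [t] for p in paths for t in step(p[-1]) if prop(t)]
        return all(prop(t) for p in paths for t in step(p[-1]))

    k = 2
    print(base_case(k) and inductive_step(k))   # True: the property is k-inductive

For DRL policies the states are network inputs and outputs, and both checks become SMT queries over the DNN, which is where the paper's scalability techniques come in.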
Zhou, Zequan, Wang, Yupeng, Luo, Xiling, Bai, Yi, Wang, Xiaochao, Zeng, Feng.  2021.  Secure Accountable Dynamic Storage Integrity Verification. 2021 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI). :440–447.
Integrity verification of cloud data is of great importance for secure and effective cloud storage, since attackers can change the data even though it is encrypted. Traditional integrity verification schemes only let the client know the integrity status of the remote data. When the data is corrupted, the system cannot hold the server accountable. Besides, almost all existing schemes assume that the users are credible. In reality, especially in a dynamic operation environment, users can deny their behaviors and let the server bear the penalty of data loss. To address the issues above, we propose an accountable dynamic storage integrity verification (ADS-IV) scheme which provides means to detect or eliminate misbehavior of all participants. Meanwhile, we modify the Invertible Bloom Filter (IBF) to recover corrupted data and use the Mahalanobis distance to calculate the degree of damage. We prove that our scheme is secure under the Computational Diffie-Hellman (CDH) assumption and the Discrete Logarithm (DL) assumption, and that the audit process is privacy-preserving. The experimental results demonstrate that the computational complexity of the audit is constant; the storage overhead is O(√n), which is only 1/400 of the size of the original data; and the whole communication overhead is O(1). As a result, the proposed scheme is not only suitable for large-scale cloud data storage systems, but also for systems with sensitive data, such as banking systems, medical systems, and so on.
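
One ingredient of the scheme that is easy to show in isolation is the damage-degree metric: the Mahalanobis distance of a corrupted block's feature vector from the empirical distribution of intact blocks. The two features used below (block length and mean byte value) are illustrative assumptions, not the paper's.

    import numpy as np

    def mahalanobis(x, samples):
        """Distance of x from the empirical distribution of `samples` (rows = observations)."""
        mean = samples.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    rng = np.random.default_rng(1)
    # Feature vectors of intact data blocks: (block length, mean byte value)
    intact = np.column_stack([rng.normal(4096, 10, 500), rng.normal(127, 2, 500)])

    print(mahalanobis(np.array([4097.0, 127.5]), intact))   # small: looks intact
    print(mahalanobis(np.array([5000.0, 200.0]), intact))   # large: heavily damaged

The distance weights each feature by how much it normally varies, so a deviation in a tightly distributed feature counts for far more than the same deviation in a widely spread one.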
Wang, Hong, Liu, Xiangyang, Xie, Yunhong, Zeng, Han.  2021.  The Scalable Group Testing of Invalid Signatures based on Latin Square in Wireless Sensors Networks. 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP). :1153–1158.
Digital signatures are more appropriate than costly encryption for message security in Wireless Sensor Networks (WSNs), which are energy-limited. However, verification becomes difficult when a large number of message-signature pairs swarm into the central node of a WSN. In this paper, a scalable group testing algorithm based on Latin squares (SGTLS) is proposed, which focuses on both batch verification of signatures and invalid signature identification. To address the problem of long delays during individual verification, we adapt aggregate signatures for batch verification so as to judge at once whether there are any invalid signatures among a collection of signatures. In particular, when batch verification fails, an invalid signature identification algorithm is presented based on a scalable OR-checking matrix of the Latin square, which can adjust the number of group tests by itself as the number of invalid signatures varies. Comprehensive analyses show that SGTLS has advantages over other algorithms, such as scalability, suitability for parallel computing, and flexible design (the Latin square is a popular construction).
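
The aggregate-signature and Latin-square details are beyond a short sketch, but the group-testing control flow can be shown generically: verify a whole batch at once and, only when that fails, split it to isolate the invalid signatures. The batch_verify predicate below is a stand-in for aggregate-signature verification, and plain binary splitting stands in for the paper's OR-checking matrix.

    def batch_verify(batch, invalid):
        """Stand-in for verifying an aggregate signature over the whole batch at once."""
        return not any(sig in invalid for sig in batch)

    def find_invalid(batch, invalid, calls):
        """Binary-splitting group testing: isolate the invalid signatures in `batch`."""
        calls[0] += 1
        if batch_verify(batch, invalid):
            return []
        if len(batch) == 1:
            return list(batch)
        mid = len(batch) // 2
        return (find_invalid(batch[:mid], invalid, calls)
                + find_invalid(batch[mid:], invalid, calls))

    signatures = [f"sig{i}" for i in range(64)]
    bad = {"sig7", "sig40"}
    calls = [0]
    print(find_invalid(signatures, bad, calls))                         # ['sig7', 'sig40']
    print(calls[0], "batch verifications instead of 64 individual ones")

When few signatures are invalid, the number of batch verifications grows roughly with the number of invalid signatures times the logarithm of the batch size, which is the kind of saving SGTLS also exploits.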
Cherupally, Sumanth Reddy, Boga, Srinivas, Podili, Prashanth, Kataoka, Kotaro.  2021.  Lightweight and Scalable DAG based distributed ledger for verifying IoT data integrity. 2021 International Conference on Information Networking (ICOIN). :267–272.
Verifying the integrity of IoT data in cloud-based IoT architectures is crucial for building reliable IoT applications. Traditional data integrity verification methods rely on a Trusted Third Party (TTP) that has issues of risk and operational cost by centralization. Distributed Ledger Technology (DLT) has a high potential to verify IoT data integrity and overcome the problems with TTPs. However, the existing DLTs have low transaction throughput, high computational and storage overhead, and are unsuitable for IoT environments, where a massive scale of data is generated. Recently, Directed Acyclic Graph (DAG) based DLTs have been proposed to address the low transaction throughput of linear DLTs. However, the integration of IoT Gateways (GWs) into the peer-to-peer (P2P) DLT network is challenging because of their low storage and computational capacity. This paper proposes Lightweight and Scalable DAG based distributed ledger for IoT (LSDI) that can work with resource-constrained IoT GWs to provide fast and scalable IoT data integrity verification. LSDI uses two key techniques: Pruning and Clustering, to reduce 1) storage overhead in IoT GWs by removing sufficiently old transactions, and 2) computational overhead of IoT GWs by partitioning a large P2P network into smaller P2P networks. The evaluation results of the proof-of-concept implementation showed that the proposed LSDI system achieves high transaction throughput and scalability while efficiently managing storage and computation overhead of the IoT GWs.
Keshani, Mehdi.  2021.  Scalable Call Graph Constructor for Maven. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :99–101.
As a rich source of data, Call Graphs are used for various applications, including security vulnerability detection. Despite multiple studies showing that Call Graphs can drastically improve the accuracy of analysis, existing ecosystem-scale tools like Dependabot do not use Call Graphs and work at the package level. Using Call Graphs in ecosystem use cases is not practical because of the scalability problems that Call Graph generators have. Call Graph generation is usually considered to be a "full program analysis", resulting in large Call Graphs and expensive computation. This pragmatic approach does not work at ecosystem scale, because the number of possible ways a particular artifact can be combined into a full program explodes. Therefore, it is necessary to make the analysis incremental. There are existing studies on different types of incremental program analysis. However, none of them focuses on Call Graph generation for an entire ecosystem. In this paper, we propose an incremental implementation of the CHA algorithm that can generate Call Graphs on demand, by stitching together partial Call Graphs that have been extracted for libraries before. Our preliminary evaluation results show that the proposed approach scales well and outperforms the most scalable existing framework, OPAL.
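
A minimal sketch of the stitching idea (not the author's implementation or OPAL): partial call graphs are pre-computed per artifact as edge sets over fully qualified method names, and an application-level graph is assembled on demand by unioning the relevant partial graphs and resolving cross-artifact calls against the methods other artifacts declare. Artifact and method names are illustrative.

    # One partial call graph per artifact: declared methods, intra-artifact edges,
    # and calls to methods the artifact does not itself declare ("unresolved").
    partial_cgs = {
        "app-1.0": {
            "declares": {"App.main"},
            "edges": set(),
            "unresolved": {("App.main", "Lib.parse")},
        },
        "lib-2.3": {
            "declares": {"Lib.parse", "Lib.helper"},
            "edges": {("Lib.parse", "Lib.helper")},
            "unresolved": set(),
        },
    }

    def stitch(artifacts):
        """Union the partial graphs and resolve cross-artifact edges by declared name."""
        declared = {m for a in artifacts for m in partial_cgs[a]["declares"]}
        edges = set()
        for a in artifacts:
            edges |= partial_cgs[a]["edges"]
            edges |= {(src, dst) for src, dst in partial_cgs[a]["unresolved"]
                      if dst in declared}
        return edges

    print(stitch(["app-1.0", "lib-2.3"]))
    # {('App.main', 'Lib.parse'), ('Lib.parse', 'Lib.helper')}

The expensive per-library analysis is thus paid once, and each new dependency set only pays for the cheap stitching step.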
Cristescu, Mihai-Corneliu, Bob, Cristian.  2021.  Flexible Framework for Stimuli Redundancy Reduction in Functional Verification Using Artificial Neural Networks. 2021 International Symposium on Signals, Circuits and Systems (ISSCS). :1–4.
Within the ASIC development process, the phase of functional verification is a major bottleneck that affects the product time to market. A technique that decreases the time cost for reaching functional coverage closure is reducing the stimuli redundancy during the test regressions. This paper addresses such a solution and presents a novel, efficient, and scalable implementation that harnesses the power of artificial neural networks. This article outlines the concept strategy, highlights the framework structure, lists the experimental results, and underlines future research directions.
Kadlubowski, Lukasz A., Kmon, Piotr.  2021.  Test and Verification Environment and Methodology for Vernier Time-to-Digital Converter Pixel Array. 2021 24th International Symposium on Design and Diagnostics of Electronic Circuits Systems (DDECS). :137–140.
The goal of building a system for precise time measurement in pixel radiation detectors motivates the development of a flexible design and verification environment. It should be suitable for quick simulations when individual elements of the system are being developed and should be scalable so that system-level verification is possible as well. The approach presented in this paper is to utilize the power of the SystemVerilog language and apply basic Object-Oriented Programming concepts to the test program. Since the design under test is a full-custom mixed-signal design, it must be simulated with an AMS simulator, and various features of the analog design environment are used as well (Monte Carlo analysis, corner analysis, schematic capture GUI-related functions). The presented approach combines these two worlds and should be suitable for small academic projects, where design and verification are seldom done by separate teams.
Haowei, Liang, Chunyan, Hou, Jinsong, Wang, Chen, Chen.  2021.  Software Safety Verification Framework based on Predicate Abstraction. 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). :1327–1332.
Program verification techniques have gained increasing popularity in academic and industrial circles in recent years. Predicate abstraction is a traditional and practical verification technique which handles the problem of state-space explosion well, and many software verification tools have implemented it. But these implementations are not user-friendly or scalable. To address these problems, we describe and implement a new automatic predicate abstraction framework, CChecker, for proving the safety of procedural programs with integer assignments. CChecker is composed of two parts: a front end and a back end. The front end, based on Clang, preprocesses and parses the source programs into logic models, and the back end, based on Z3, solves the models to determine the software safety property. Finally, the experiments show the potential of CChecker.
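
A tiny illustration of predicate abstraction itself, not CChecker's Clang/Z3 pipeline: concrete integer states of a small loop are mapped to valuations of a fixed predicate set, an existential abstract transition relation is built, and abstract reachability proves the safety property. A bounded concrete domain stands in for the SMT queries a real tool would issue.

    # Concrete loop: x starts at 0; while x < 10: x = x + 2. Bad state: x == 11.
    PREDS = {
        "lt10": lambda x: x < 10,
        "even": lambda x: x % 2 == 0,
        "eq11": lambda x: x == 11,
    }
    DOMAIN = range(-2, 20)          # bounded stand-in for an SMT existential query

    def alpha(x):
        """Abstract a concrete state to the set of predicates it satisfies."""
        return frozenset(name for name, p in PREDS.items() if p(x))

    def succ(x):
        return x + 2 if x < 10 else x

    # Existential abstraction: abstract edge (a, b) exists if some concrete state
    # abstracted to a has a successor abstracted to b.
    abs_edges = {(alpha(x), alpha(succ(x))) for x in DOMAIN}

    # Abstract reachability from the abstract initial state alpha(0).
    reached, frontier = set(), [alpha(0)]
    while frontier:
        a = frontier.pop()
        if a in reached:
            continue
        reached.add(a)
        frontier.extend(b for (s, b) in abs_edges if s == a)

    # Safety proof in the abstraction: no reachable abstract state satisfies eq11.
    print(all("eq11" not in a for a in reached))   # True

Because the abstract state space is finite (at most 2^|PREDS| states), this exploration always terminates, which is how predicate abstraction sidesteps state-space explosion.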
Baluta, Teodora, Chua, Zheng Leong, Meel, Kuldeep S., Saxena, Prateek.  2021.  Scalable Quantitative Verification for Deep Neural Networks. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :248–249.
Despite the functional success of deep neural networks (DNNs), their trustworthiness remains a crucial open challenge. To address this challenge, both testing and verification techniques have been proposed. But these existing techniques provide either scalability to large networks or formal guarantees, not both. In this paper, we propose a scalable quantitative verification framework for deep neural networks, i.e., a test-driven approach that comes with formal guarantees that a desired probabilistic property is satisfied. Our technique performs enough tests until soundness of a formal probabilistic property can be proven. It can be used to certify properties of both deterministic and randomized DNNs. We implement our approach in a tool called PROVERO and apply it in the context of certifying adversarial robustness of DNNs. In this context, we first show a new attack-agnostic measure of robustness which offers an alternative to purely attack-based methodology of evaluating robustness being reported today. Second, PROVERO provides certificates of robustness for large DNNs, where existing state-of-the-art verification tools fail to produce conclusive results. Our work paves the way forward for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees, even where testers only have black-box access to the neural network.
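
The core quantitative idea, testing until a probabilistic property holds with formal confidence, can be illustrated with a Hoeffding-style sample bound rather than PROVERO's actual adaptive algorithm: estimate the probability that random perturbations inside an ε-ball flip a black-box model's prediction, with tolerance and confidence fixed up front. The toy model and parameters are illustrative.

    import math
    import random

    def required_samples(tol, delta):
        """Hoeffding bound: i.i.d. samples needed so the empirical estimate of a
        probability is within `tol` of the truth with confidence 1 - delta."""
        return math.ceil(math.log(2 / delta) / (2 * tol ** 2))

    def estimate_violation_rate(model, x, eps, tol=0.01, delta=1e-3):
        n = required_samples(tol, delta)
        bad = sum(
            model([xi + random.uniform(-eps, eps) for xi in x]) != model(x)
            for _ in range(n)
        )
        return bad / n, n

    # Toy black-box "classifier": a threshold on the sum of the inputs.
    model = lambda v: int(sum(v) > 1.0)
    rate, n = estimate_violation_rate(model, [0.4, 0.4], eps=0.3)
    print(f"estimated violation probability {rate:.3f} +/- 0.01 from {n} samples")

PROVERO's contribution is to make such guarantees practical for large DNNs by adaptively deciding how many tests suffice for the property being certified.
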
2021-05-03
Das, Arnab, Briggs, Ian, Gopalakrishnan, Ganesh, Krishnamoorthy, Sriram, Panchekha, Pavel.  2020.  Scalable yet Rigorous Floating-Point Error Analysis. SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. :1–14.
Automated techniques for rigorous floating-point round-off error analysis are a prerequisite to placing important activities in HPC such as precision allocation, verification, and code optimization on a formal footing. Yet existing techniques cannot provide tight bounds for expressions beyond a few dozen operators, barely enough for HPC. In this work, we offer an approach embedded in a new tool called Satire that scales error analysis by four orders of magnitude compared to today's best-of-class tools. We explain how three key ideas underlying Satire help it attain such scale: path strength reduction, bound optimization, and abstraction. Satire provides tight bounds and rigorous guarantees on significantly larger expressions with well over a hundred thousand operators, covering important examples including FFT, matrix multiplication, and PDE stencils.
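
A worked miniature of the kind of bound such tools automate: under the standard model, each floating-point operation returns the exact result times (1 + d) with |d| <= u (the unit roundoff, 2^-53 in double precision), so the relative error of a two-operation expression can be bounded by hand and checked numerically below. The expression is illustrative; Satire handles expressions with hundreds of thousands of operators.

    from fractions import Fraction

    u = 2.0 ** -53          # unit roundoff for IEEE-754 double precision

    # Expression: y = (a + b) * c, two floating-point operations.
    # Standard model: fl(x op y) = (x op y) * (1 + d), |d| <= u, hence
    #   y_fl = (a + b) * c * (1 + d1) * (1 + d2), relative error <= 2u + u^2.
    rel_bound = 2 * u + u ** 2

    a, b, c = 0.1, 0.2, 0.3
    computed = (a + b) * c
    exact = (Fraction(a) + Fraction(b)) * Fraction(c)   # exact rational arithmetic
    rel_err = abs(Fraction(computed) - exact) / exact

    print(f"observed relative error  : {float(rel_err):.2e}")
    print(f"a-priori bound (2u + u^2): {rel_bound:.2e}")
    assert rel_err <= rel_bound

Scaling this style of reasoning to a hundred thousand operators without the bound blowing up is exactly what the paper's path strength reduction and abstraction are for.
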
Naik, Nikhil, Nuzzo, Pierluigi.  2020.  Robustness Contracts for Scalable Verification of Neural Network-Enabled Cyber-Physical Systems. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1–12.
The proliferation of artificial intelligence based systems in all walks of life raises concerns about their safety and robustness, especially for cyber-physical systems including multiple machine learning components. In this paper, we introduce robustness contracts as a framework for compositional specification and reasoning about the robustness of cyber-physical systems based on neural network (NN) components. Robustness contracts can encompass and generalize a variety of notions of robustness which were previously proposed in the literature. They can seamlessly apply to NN-based perception as well as deep reinforcement learning (RL)-enabled control applications. We present a sound and complete algorithm that can efficiently verify the satisfaction of a class of robustness contracts on NNs by leveraging notions from Lagrangian duality to identify system configurations that violate the contracts. We illustrate the effectiveness of our approach on the verification of NN-based perception systems and deep RL-based control systems.
Herber, Paula, Liebrenz, Timm.  2020.  Dependence Analysis and Automated Partitioning for Scalable Formal Analysis of SystemC Designs. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1–6.
Embedded systems often consist of deeply intertwined hardware and software components. At the same time, they are often used in safety-critical applications, where an error may result in enormous costs or even loss of human lives. Existing verification techniques that show the absence of errors do not scale well for complex integrated HW/SW systems. In this paper, we present a dependence analysis and automated partitioning approach for the formal analysis of HW/SW codesigns that are modeled in SystemC. The key idea of our approach is threefold: first, we partition a given system into loosely coupled submodels. Second, we analyze the dependences between these submodels and compute an abstract verification interface for each of them, which captures all possible influences of all other submodels. Third, we verify global properties of the overall system by verifying them separately for each subsystem. We demonstrate that our approach significantly reduces verification times and increases scalability with results for an anti-lock braking system.
Sharma, Mohit, Strathman, Hunter J., Walker, Ross M..  2020.  Verification of a Rapidly Multiplexed Circuit for Scalable Action Potential Recording. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). :1–1.
This report presents characterizations of in vivo neural recordings performed with a CMOS multichannel chip that uses rapid multiplexing directly at the electrodes, without any pre-amplification or buffering. Neural recordings were taken from a 16-channel microwire array implanted in rodent cortex, with comparison to a gold-standard commercial bench-top recording system. We were able to record well-isolated threshold crossings from 10 multiplexed electrodes and typical local field potential waveforms from 16, with strong agreement with the standard system (average SNR = 2.59 and 3.07, respectively). For 10 electrodes, the circuit achieves an effective area per channel of 0.0077 mm², which is >5× smaller than typical multichannel chips. Extensive characterizations of noise and signal quality are presented and compared to fundamental theory, as well as results from in vivo and in vitro experiments. By demonstrating the validation of rapid multiplexing directly at the electrodes, this report confirms it as a promising approach for reducing circuit area in massively-multichannel neural recording systems, which is crucial for scaling recording site density and achieving large-scale sensing of brain activity with high spatiotemporal resolution.