Biblio

Filters: Keyword is Measurement and Metrics Testing
Duvalsaint, Danielle, Blanton, R. D. Shawn.  2021.  Characterizing Corruptibility of Logic Locks using ATPG. 2021 IEEE International Test Conference (ITC). :213–222.

The outsourcing of portions of the integrated circuit design chain, mainly fabrication, to untrusted parties has led to increasing concern regarding the security of fabricated ICs. To mitigate these concerns, a number of approaches have been developed, including logic locking. The development of different logic locking methods has spurred research on security evaluations, typically aimed at uncovering the secret key. In this paper, we make the case that corruptibility under incorrect keys is an important metric of logic locking. For circuits too large to simulate exhaustively, we describe an ATPG-based method to measure the corruptibility of incorrect keys. Results from applying the method to various circuits demonstrate that it is effective at measuring the corruptibility of different locks.
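
The paper's ATPG-based method targets circuits too large to simulate exhaustively; purely to illustrate the metric itself, the hypothetical sketch below estimates the corruptibility of an incorrect key on a toy locked circuit by random simulation. The circuit, lock gates, and keys are invented for the example and are not from the paper.

```python
# Hypothetical sketch: estimating corruptibility of an incorrect key by random
# simulation of a toy XOR-locked circuit. The paper uses ATPG instead of
# simulation for large circuits; only the metric itself is illustrated here.
import random

def locked_circuit(inputs, key):
    """Toy 4-input locked circuit; correct behaviour only for key == (1, 0)."""
    a, b, c, d = inputs
    k0, k1 = key
    g = (a & b) ^ (c | d)          # original (unlocked) function
    return g ^ (k0 ^ 1) ^ k1       # XOR-based lock gates

def corruptibility(key, correct_key=(1, 0), trials=10_000):
    """Fraction of sampled input patterns whose output is corrupted under `key`."""
    corrupted = 0
    for _ in range(trials):
        x = tuple(random.randint(0, 1) for _ in range(4))
        if locked_circuit(x, key) != locked_circuit(x, correct_key):
            corrupted += 1
    return corrupted / trials

if __name__ == "__main__":
    for wrong_key in [(0, 0), (0, 1), (1, 1)]:
        print(wrong_key, corruptibility(wrong_key))
```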

Cancela, Brais, Bolón-Canedo, Verónica, Alonso-Betanzos, Amparo.  2021.  A delayed Elastic-Net approach for performing adversarial attacks. 2020 25th International Conference on Pattern Recognition (ICPR). :378–384.
With the rise of so-called adversarial attacks, there is increased concern about model security. In this paper we present two contributions: novel measures of robustness (based on adversarial attacks) and a novel adversarial attack. The key idea behind these metrics is to obtain a measure that can compare different architectures independently of how the input is preprocessed (robustness against different input sizes and value ranges). To do so, a novel adversarial attack is presented, performing a delayed elastic-net adversarial attack (constraints are only applied once a successful adversarial example is obtained). Experimental results show that our approach obtains state-of-the-art adversarial samples in terms of minimal perturbation distance. Finally, a benchmark of ImageNet pretrained models is used to conduct experiments aimed at shedding light on which model should be selected when security is a relevant factor.
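
As a rough illustration of the idea described above (not the authors' algorithm), the sketch below runs an iterative gradient attack in which the elastic-net (L1 + L2) penalty is delayed, i.e. only activated after a first misclassifying perturbation is found. `model_predict` and `model_grad` are placeholders for any differentiable classifier; step sizes and weights are assumptions.

```python
# Hedged sketch of a "delayed" elastic-net attack: ascend the classification
# loss until misclassification, then keep iterating with L1 + L2 shrinkage to
# look for a smaller successful perturbation.
import numpy as np

def delayed_elastic_net_attack(x, y_true, model_predict, model_grad,
                               steps=200, lr=0.01, beta_l1=1e-3, beta_l2=1e-3):
    delta = np.zeros_like(x)
    penalty_on = False            # elastic-net term is delayed
    best = None                   # (penalty value, perturbation) of best success
    for _ in range(steps):
        grad = model_grad(x + delta, y_true)      # d(loss)/d(input) at x + delta
        step = grad
        if penalty_on:                            # shrink once the penalty is active
            step = step - beta_l1 * np.sign(delta) - beta_l2 * delta
        delta = delta + lr * step                 # ascend loss to cause misclassification
        if model_predict(x + delta) != y_true:
            penalty_on = True                     # success: start minimising the perturbation
            dist = beta_l1 * np.abs(delta).sum() + beta_l2 * (delta ** 2).sum()
            if best is None or dist < best[0]:
                best = (dist, delta.copy())
    return best[1] if best else delta
```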
Bouyeddou, Benamar, Harrou, Fouzi, Sun, Ying.  2021.  Detecting Cyber-Attacks in Modern Power Systems Using an Unsupervised Monitoring Technique. 2021 IEEE 3rd Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS). :259–263.
Cyber-attack detection in modern power systems is indispensable to enhance their resilience and guarantee the continuous production of electricity. Because the number of attacks is very small compared to normal events, and attacks are unpredictable, it is not straightforward to build a model of attacks. Here, only anomaly-free measurements are used to build a reference model for intrusion detection. Specifically, this study presents an unsupervised intrusion detection approach that combines the k-nearest neighbor (kNN) algorithm with an exponential smoothing monitoring scheme for uncovering attacks in modern power systems. Essentially, the kNN algorithm is used to compute the deviation between actual measurements and the faultless (training) data. The exponential smoothing scheme is then applied to the kNN-based metric to make the detection decision. The proposed procedure has been tested on detecting cyber-attacks in a two-line, three-bus power transmission system and has shown good detection performance.
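
A minimal sketch of the general scheme, assuming 2-D NumPy arrays of measurements: a kNN distance to the attack-free training data is smoothed with an exponentially weighted moving average and compared against a threshold derived from the training scores. Parameter values are illustrative, not taken from the paper.

```python
# Illustrative kNN + exponential-smoothing anomaly detector (not the authors' code).
import numpy as np

def knn_distance(sample, train, k=5):
    """Mean distance from `sample` to its k nearest attack-free training points."""
    d = np.linalg.norm(train - sample, axis=1)
    return np.sort(d)[:k].mean()

def ewma_detector(stream, train, k=5, lam=0.2, z=3.0):
    """Flag samples whose smoothed kNN distance exceeds mu + z*sigma of training scores."""
    train_scores = np.array([knn_distance(x, np.delete(train, i, axis=0), k)
                             for i, x in enumerate(train)])   # leave-one-out scores
    mu, sigma = train_scores.mean(), train_scores.std()
    threshold = mu + z * sigma
    s, alarms = mu, []
    for x in stream:
        s = lam * knn_distance(x, train, k) + (1 - lam) * s   # exponential smoothing
        alarms.append(s > threshold)
    return alarms
```
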
Ouyang, Tinghui, Marco, Vicent Sanz, Isobe, Yoshinao, Asoh, Hideki, Oiwa, Yutaka, Seo, Yoshiki.  2021.  Corner Case Data Description and Detection. 2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN). :19–26.
As major factors affecting the safety of deep learning (DL) models, corner cases and their detection are crucial to AI quality assurance when constructing safety- and security-critical systems. Research on corner cases generally involves two topics. One is to enhance DL models' robustness to corner case data by adjusting parameters or structure. The other is to generate new corner cases for model retraining and improvement. However, the complex architecture and the huge number of parameters make robust adjustment of DL models difficult, and it is not possible to generate all real-world corner cases for DL training. Therefore, this paper proposes a simple and novel approach for corner case data detection via a specific metric. This metric builds on surprise adequacy (SA), which is well suited to capturing data behaviors. Furthermore, targeting the characteristics of corner case data, three modifications of distance-based SA are developed for classification applications. Experiments on MNIST data and industrial data verify the feasibility and usefulness of the proposed method for corner case data detection.
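
For context, the sketch below computes the standard distance-based surprise adequacy (DSA) score from a model's layer activations and flags high-SA inputs as corner case candidates; the paper's three modifications of DSA are not reproduced here, and the threshold is an assumed placeholder.

```python
# Sketch of distance-based surprise adequacy (DSA) for corner case screening.
import numpy as np

def dsa(activation, train_acts, train_labels, pred_label):
    """DSA = distance to the nearest same-class training activation, normalised
    by that neighbour's distance to the nearest other-class activation."""
    same = train_acts[train_labels == pred_label]
    diff = train_acts[train_labels != pred_label]
    d_same = np.linalg.norm(same - activation, axis=1)
    x_a = same[d_same.argmin()]                      # closest same-class neighbour
    d_a = d_same.min()
    d_b = np.linalg.norm(diff - x_a, axis=1).min()   # its distance to other classes
    return d_a / d_b

def is_corner_case(activation, train_acts, train_labels, pred_label, threshold=2.0):
    # `threshold` is illustrative; in practice it would be calibrated on held-out data.
    return dsa(activation, train_acts, train_labels, pred_label) > threshold
```
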
Vakili, Ramin, Khorsand, Mojdeh.  2021.  Machine-Learning-based Advanced Dynamic Security Assessment: Prediction of Loss of Synchronism in Generators. 2020 52nd North American Power Symposium (NAPS). :1–6.
This paper proposes a machine-learning-based advanced online dynamic security assessment (DSA) method, which provides a detailed evaluation of system stability after a disturbance by predicting impending loss of synchronism (LOS) of generators. Voltage angles at generator buses are used as the features of different random forest (RF) classifiers, which are trained to consecutively predict LOS of the generators as a contingency proceeds and updated measurements become available. A wide range of contingencies for various topologies and operating conditions of the IEEE 118-bus system has been studied in offline analysis using the GE positive sequence load flow analysis (PSLF) software to create a comprehensive dataset for training and testing the RF models. The performance of the trained models is evaluated in the presence of measurement errors using various metrics. The results reveal that the trained models are accurate, fast, and robust to measurement errors.
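
A minimal sketch of the prediction stage using scikit-learn, assuming pre-built arrays of generator-bus voltage angles and loss-of-synchronism labels; the file names, array shapes, noise level, and hyperparameters below are hypothetical, not taken from the paper.

```python
# Illustrative RF classifier for loss-of-synchronism prediction with a simple
# measurement-noise robustness check.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

angles = np.load("voltage_angles.npy")   # hypothetical: (contingencies, generator buses)
labels = np.load("los_labels.npy")       # hypothetical: 1 if the generator loses synchronism

X_train, X_test, y_train, y_test = train_test_split(angles, labels, test_size=0.2)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Robustness check: inject Gaussian measurement noise into the test features.
noisy = X_test + np.random.normal(0.0, 0.5, X_test.shape)
print("F1 (clean):", f1_score(y_test, rf.predict(X_test)))
print("F1 (noisy):", f1_score(y_test, rf.predict(noisy)))
```
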
Wang, Mingzhe, Liang, Jie, Zhou, Chijin, Chen, Yuanliang, Wu, Zhiyong, Jiang, Yu.  2021.  Industrial Oriented Evaluation of Fuzzing Techniques. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST). :306–317.
Fuzzing is a promising method for discovering vulnerabilities. Recently, various techniques have been developed to improve the efficiency of fuzzing, and impressive gains are reported in evaluation results. However, evaluation is complex, as many factors affect the results, for example test suites, baselines, and metrics. Moreover, most experimental setups are lab-oriented and lack industrial settings such as large code bases and parallel runs. The correlation between academic evaluation results and bug-finding ability in real industrial settings has not been sufficiently studied. In this paper, we test representative fuzzing techniques to reveal their efficiency in industrial settings. First, we apply typical fuzzers to the small projects of the LAVA-M suite, which is widely used in academia. We also apply the same fuzzers to large practical projects from Google's fuzzer-test-suite, which is rarely used in academic settings. Both experiments are performed in single and parallel runs. Analyzing the results, we found that most optimizations that work well on the LAVA-M suite fail to achieve satisfying results on Google's fuzzer-test-suite (e.g., compared to AFL, QSYM detects 82x more synthesized bugs in LAVA-M but only 26% of the real bugs in Google's fuzzer-test-suite), and the original AFL even outperforms most academic optimization variants in the parallel runs widely used in industry (e.g., AFL covers 13% more paths than AFLFast). We then summarize common pitfalls of those optimizations, analyze the corresponding root causes, and propose potential directions such as orchestration and synchronization to overcome the problems. For example, when running in parallel on those large practical projects, the proposed horizontal orchestration covers 36%-82% more paths and discovers 46%-150% more unique crashes or bugs than fuzzers such as AFL, FairFuzz, and QSYM.
Leitold, Ferenc, Holló, Krisztina Győrffyné, Király, Zoltán.  2021.  Quantitative metrics characterizing malicious samples. 2021 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–2.
In this work, a time evolution model is used to help categorize malicious samples. The method can be used in anti-malware testing procedures as well as in detecting cyber-attacks. The time evolution mathematical model can help security experts better understand the behaviour of malware attacks and malware families. It can be used to estimate their spread much more accurately and to plan the required defence actions against them. The basic time-dependent variable of this model is the Ratio of malicious files within an investigated time window. To estimate the main characteristics of the time series describing the change of the Ratio values related to a specific malicious file, a nonlinear exponential curve-fitting method is used. The free parameters of the model were determined by numerical search algorithms. The three parameters can be used in the information security field to describe more precisely the behaviour of a piece of malware, as well as of a malware family. In the case of malware families, the aggregation of these parameters can provide an effective solution for estimating cyberthreat trends.
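
As an illustration only, the sketch below fits a three-parameter exponential model to a synthetic Ratio time series with SciPy's curve_fit; the paper does not specify the exact model form, so `ratio_model` and its parameters are assumptions.

```python
# Illustrative nonlinear exponential fit to a synthetic malware Ratio series.
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(t, a, b, c):
    """Assumed form: decaying exponential with an offset (three free parameters)."""
    return a * np.exp(-b * t) + c

t = np.arange(30)                          # days in the investigated time window
ratio = 0.6 * np.exp(-0.15 * t) + 0.05 + np.random.normal(0, 0.01, t.size)  # synthetic data

params, _ = curve_fit(ratio_model, t, ratio, p0=(0.5, 0.1, 0.0))
a, b, c = params
print(f"fitted parameters: a={a:.3f}, b={b:.3f}, c={c:.3f}")
```
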
Farzana, Nusrat, Ayalasomayajula, Avinash, Rahman, Fahim, Farahmandi, Farimah, Tehranipoor, Mark.  2021.  SAIF: Automated Asset Identification for Security Verification at the Register Transfer Level. 2021 IEEE 39th VLSI Test Symposium (VTS). :1–7.
With increasing complexity, modern system-on-chip (SoC) designs are becoming more susceptible to security attacks and require comprehensive security assurance. However, establishing comprehensive security assurance often requires knowledge of the relevant security assets. Since modern SoCs contain myriad confidential assets, the identification of security assets is not straightforward. The number and types of assets change due to the numerous embedded hardware blocks within the SoC and their complex interactions. Some security assets are easily identifiable because of their distinct characteristics and unique definitions, while others remain in a blind spot during design and verification and can be exploited as attack surfaces to violate the confidentiality, integrity, and availability of the SoC. Therefore, it is essential to automatically identify security assets in an SoC at pre-silicon design stages to protect them and prevent potential attacks. In this paper, we propose an automated CAD framework called SAIF to identify an SoC's security assets at the register transfer level (RTL) through comprehensive vulnerability analysis under different threat models. Moreover, we develop and incorporate metrics with SAIF to quantitatively assess multiple vulnerabilities for the identified security assets. We demonstrate the effectiveness of SAIF on the MSP430 micro-controller and CEP SoC benchmarks. Our experimental results show that SAIF can successfully and automatically identify an SoC's most vulnerable underlying security assets for protection.
Lanus, Erin, Freeman, Laura J., Richard Kuhn, D., Kacker, Raghu N..  2021.  Combinatorial Testing Metrics for Machine Learning. 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). :81–84.
This paper defines a set difference metric for comparing machine learning (ML) datasets and proposes that the difference between datasets be a function of combinatorial coverage. We illustrate its utility for evaluating and predicting the performance of ML models. Identifying and measuring differences between datasets is of significant value for ML problems, where the accuracy of the model depends heavily on the degree to which the training data are representative of the data encountered in application. The method is illustrated for transfer learning without retraining, i.e., the problem of predicting the performance of a model trained on one dataset and applied to another.
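
A small sketch of the underlying idea, not the authors' tooling: enumerate the t-way feature-value combinations present in each (discretised) dataset and report the fraction of one dataset's combinations that the other does not cover. The column discretisation and toy data below are assumptions.

```python
# Illustrative set-difference combinatorial coverage between two datasets.
from itertools import combinations

def t_way_combos(dataset, t=2):
    """All t-way (feature-index, value) combinations appearing in `dataset`."""
    combos = set()
    for row in dataset:
        for idx in combinations(range(len(row)), t):
            combos.add((idx, tuple(row[i] for i in idx)))
    return combos

def set_difference_coverage(source, target, t=2):
    """Fraction of target's t-way combinations not covered by the source data."""
    s, tgt = t_way_combos(source, t), t_way_combos(target, t)
    return len(tgt - s) / len(tgt)

# Example: combinations seen in deployment data but never in training data.
train = [(0, 1, 0), (1, 1, 0), (0, 0, 1)]
deploy = [(0, 1, 1), (1, 0, 0)]
print(set_difference_coverage(train, deploy))
```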