Biblio

Filters: Keyword is regression analysis
2023-06-22
Das, Soumyajit, Dayam, Zeeshaan, Chatterjee, Pinaki Sankar.  2022.  Application of Random Forest Classifier for Prevention and Detection of Distributed Denial of Service Attacks. 2022 OITS International Conference on Information Technology (OCIT). :380–384.
Spotting Distributed Denial of Service (DDoS) attacks is a classification problem in machine learning. A Denial of Service (DoS) attack is a deliberate attack launched from a single source with the intent of rendering the target's application unavailable. Attackers typically aim to consume all available network bandwidth, which prevents authorized users from accessing system resources. DDoS attacks, in contrast to DoS attacks, involve several sources being used by the attacker to launch the attack. DDoS attacks are most frequently observed at the network, transport, presentation, and application layers of the 7-layer OSI architecture. Using the most well-known standard dataset and multiple regression analysis, we have created a machine learning model in this work that can predict DDoS and bot attacks based on traffic.
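As a sketch of the regression side of such a pipeline, the following fits a multiple linear regression by solving the normal equations directly; the traffic features and data points are hypothetical stand-ins, not the paper's dataset:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y.

    X: list of feature rows; a column of 1s is prepended for the intercept.
    Solved with plain Gaussian elimination -- fine for a handful of features.
    """
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build the normal equations A b = c.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b  # [intercept, coef1, coef2, ...]

def predict(b, x):
    """Predicted response for one feature row x."""
    return b[0] + sum(w * v for w, v in zip(b[1:], x))
```

In practice one would feed such a model standardized traffic statistics (rates, packet sizes, flag counts) and threshold its output into attack / no-attack classes.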
2022-09-16
Kozlov, Aleksandr, Noga, Nikolai.  2021.  Applying the Methods of Regression Analysis and Fuzzy Logic for Assessing the Information Security Risk of Complex Systems. 2021 14th International Conference Management of large-scale system development (MLSD). :1—5.
The proposed method uses regression analysis and fuzzy logic to determine the predicted value of a complex system's information security risk, together with its confidence interval, as a function of several factors: the value of resources, the level of threats, the potential damage, the cost of creating and operating the system, and the degree of control over information resources.
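The fuzzy-logic ingredient can be illustrated with triangular membership functions over a normalized risk score; the linguistic levels and breakpoints below are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic risk levels over a 0..1 risk score.
RISK_SETS = {
    "low":    (0.0, 0.0, 0.4),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.6, 1.0, 1.0),
}

def fuzzify(score):
    """Degree of membership of a crisp risk score in each linguistic level."""
    return {name: tri(score, *abc) for name, abc in RISK_SETS.items()}
```

A regression model would supply the crisp score (and its confidence interval endpoints), which the fuzzy rules then map to linguistic risk judgements.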
2021-03-22
Penugonda, S., Yong, S., Gao, A., Cai, K., Sen, B., Fan, J..  2020.  Generic Modeling of Differential Striplines Using Machine Learning Based Regression Analysis. 2020 IEEE International Symposium on Electromagnetic Compatibility Signal/Power Integrity (EMCSI). :226–230.
In this paper, a generic model for a differential stripline is created using machine learning (ML) based regression analysis. A recursive approach to generating inputs is adopted instead of the traditional design of experiments (DoE) approach. This reduces the number of simulations and controls the number of data points required. The generic model is developed from 48 simulations, yet is comparable to a linear regression model obtained from 1152 simulations. Additionally, a tabular W-element model of a differential stripline is used to account for the frequency-dependent dielectric loss. To demonstrate the expandability of this approach, the methodology was applied to two differential pairs of striplines in the frequency range of 10 MHz to 20 GHz.
2021-02-23
Park, S. H., Park, H. J., Choi, Y..  2020.  RNN-based Prediction for Network Intrusion Detection. 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :572—574.
We investigate an RNN-based prediction model for network intrusion detection in industrial IoT environments. For intrusion detection, we use an anomaly detection method that predicts the next packet, then measures and scores the distance to the real packet to distinguish normal from abnormal packets. When packets were learned in the LSTM model, the two-gram and sliding-window N-gram representations showed the best performance in terms of error, and the LSTM model outperformed other data mining regression techniques. Finally, cosine similarity was used as the scoring function, and anomaly detection was performed by setting a boundary on the cosine similarity below which a packet is no longer considered normal.
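The scoring step described above can be sketched in a few lines; the threshold value is a hypothetical placeholder that would be tuned on validation traffic:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_anomalous(predicted, observed, threshold=0.9):
    """Flag a packet as abnormal when the model's predicted feature vector
    and the observed one are not sufficiently aligned."""
    return cosine(predicted, observed) < threshold
```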
Ashraf, S., Ahmed, T..  2020.  Sagacious Intrusion Detection Strategy in Sensor Network. 2020 International Conference on UK-China Emerging Technologies (UCET). :1—4.
Almost all smart appliances are operated through wireless sensor networks (WSNs). Over time, and across its many applications, a WSN becomes prone to various external attacks, so an Intrusion Detection Strategy (IDS) is crucial for securing the network against malicious attackers. The proposed IDS methodology discovers patterns in a large data corpus and works with different algorithms to detect four types of Denial of Service (DoS) attack: Grayhole, Blackhole, Flooding, and TDMA. State-of-the-art detection algorithms, namely KNN, Naïve Bayes, Logistic Regression, Support Vector Machine (SVM), and ANN, are applied to the data corpus and their attack detection performance is analyzed. The analysis shows that these algorithms are applicable to the detection and prediction of such attacks and can be recommended to network experts and analysts.
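As a concrete instance of one of the listed detectors, here is a minimal k-nearest-neighbours vote over labelled traffic records; the feature values and labels are invented for illustration, not the paper's corpus:

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs. Classify `query` by a
    majority vote among its k nearest neighbours (Euclidean distance)."""
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```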
2021-02-03
Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., Wynne, K. T..  2020.  The Role of Individual Differences as Predictors of Trust in Autonomous Security Robots. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1—5.

This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in terms of personality and Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust of the ASR and desire to use ASRs in public and military contexts following a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual differences factors were related to trust and desired use of the ASR. Agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) demonstrated unique associations with trust using multiple regression techniques. Agreeableness, intellect, and high expectations were uniquely related to desired use for both public and military domains. This study showed that individual differences influence trust and one's desired use of ASRs, demonstrating that societal reactions to ASRs may be subject to variation among individuals.

Xu, J., Howard, A..  2020.  Would you Take Advice from a Robot? Developing a Framework for Inferring Human-Robot Trust in Time-Sensitive Scenarios. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :814—820.

Trust is a key element for successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process as a formulation based on trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human-subject studies in which the human is given the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that a decision had to be made by the human in a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be utilized to model trust, in that the system can predict whether the human will decide to take advice (or not) from the robot. It was found that prediction performance degrades after the robot makes a mistake. The validation of this approach on two scenarios implies that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
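The Hidden Markov side is too large for a snippet, but the logistic-regression component can be sketched; the single input feature standing in for a trust-state summary, and the toy data, are hypothetical:

```python
from math import exp

def sigmoid(z):
    """Logistic function, guarded against overflow for very negative logits."""
    if z < -60.0:
        return 0.0
    return 1.0 / (1.0 + exp(-z))

def fit_logreg(X, y, lr=0.5, epochs=2000):
    """Logistic regression trained with plain gradient descent on the
    log-loss; returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            g = p - t  # gradient of the log-loss w.r.t. the logit
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            b -= lr * g
    return w, b
```

Given a learned (w, b), the predicted probability of "takes the robot's advice" is simply `sigmoid(b + w·x)`.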

2021-01-15
Kharbat, F. F., Elamsy, T., Mahmoud, A., Abdullah, R..  2019.  Image Feature Detectors for Deepfake Video Detection. 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1—4.
Detecting DeepFake videos is one of the challenges in digital media forensics. This paper proposes a method to detect deepfake videos using Support Vector Machine (SVM) regression. The SVM classifier can be trained on feature points extracted using any of several feature-point detectors, such as the HOG, ORB, BRISK, KAZE, SURF, and FAST algorithms. A comprehensive test of the proposed method is conducted using a dataset of original and fake videos from the literature, with different feature-point detectors tested. The results show that the proposed method of using feature-detector descriptors to train the SVM can effectively detect fake videos.
2021-01-11
Wu, N., Farokhi, F., Smith, D., Kaafar, M. A..  2020.  The Value of Collaboration in Convex Machine Learning with Differential Privacy. 2020 IEEE Symposium on Security and Privacy (SP). :304–317.
In this paper, we apply machine learning to distributed private data owned by multiple data owners, i.e., entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, via its fitness cost, as a function of the privacy budget and the size of the distributed datasets, capturing the trade-off between privacy and utility in machine learning. This way, we can predict the outcome of collaboration among privacy-aware data owners before executing potentially computationally-expensive machine learning algorithms. In particular, we show that the difference between the fitness of the model trained with differentially-private gradient queries and the fitness of the model trained in the absence of any privacy concerns is inversely proportional to the square of the training dataset size and the square of the privacy budget. We successfully validate the performance prediction against the actual performance of the proposed privacy-aware learning algorithms, applied to financial datasets for determining loan interest rates using regression and to credit card fraud detection using support vector machines.
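The flavor of differentially-private training can be shown by perturbing each gradient step with Gaussian noise. The noise scale below merely stands in for noise calibrated to a privacy budget; the calibration itself (and the multi-owner aggregation) is omitted:

```python
import random

def dp_sgd_linear(X, y, noise_scale=0.1, lr=0.05, epochs=2000, seed=0):
    """Gradient descent for a 1-D least-squares fit y ~ w*x + b, where each
    gradient is perturbed with Gaussian noise in the spirit of
    differentially-private training. Returns (w, b)."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        gw = sum((w * x + b - t) * x for x, t in zip(X, y)) / n
        gb = sum((w * x + b - t) for x, t in zip(X, y)) / n
        w -= lr * (gw + rng.gauss(0.0, noise_scale))
        b -= lr * (gb + rng.gauss(0.0, noise_scale))
    return w, b
```

Raising `noise_scale` (a tighter privacy budget) degrades the fit, which is exactly the privacy-utility trade-off the paper quantifies.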
2020-11-20
Chin, J., Zufferey, T., Shyti, E., Hug, G..  2019.  Load Forecasting of Privacy-Aware Consumers. 2019 IEEE Milan PowerTech. :1—6.

The roll-out of smart meters (SMs) in the electric grid has enabled data-driven grid management and planning techniques. SM data can be used together with short-term load forecasts (STLFs) to overcome polling frequency constraints for better grid management. However, the use of SMs that report consumption data at high spatial and temporal resolutions entails consumer privacy risks, motivating work in protecting consumer privacy. The impact of privacy protection schemes on STLF accuracy is not well studied, especially for smaller aggregations of consumers, whose load profiles are subject to more volatility and are, thus, harder to predict. In this paper, we analyse the impact of two user demand shaping privacy protection schemes, model-distribution predictive control (MDPC) and load-levelling, on STLF accuracy. Support vector regression is used to predict the load profiles at different consumer aggregation levels. Results indicate that, while the MDPC algorithm marginally affects forecast accuracy for smaller consumer aggregations, this diminishes at higher aggregation levels. More importantly, the load-levelling scheme significantly improves STLF accuracy as it smoothens out the grid visible consumer load profile.

2020-11-04
Liang, Y., He, D., Chen, D..  2019.  Poisoning Attack on Load Forecasting. 2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia). :1230—1235.

Short-term load forecasting systems for power grids have demonstrated high accuracy and have been widely employed for commercial use. However, classic load forecasting systems, which are based on statistical methods, are vulnerable to training data poisoning. In this paper, we demonstrate a data poisoning strategy that effectively corrupts the forecasting model even in the presence of outlier detection. To the best of our knowledge, poisoning attacks on short-term load forecasting with outlier detection have not been studied in previous works. Our method applies to several forecasting models, including the most widely adopted and best-performing ones, such as multiple linear regression (MLR) and neural network (NN) models. Starting with the MLR model, we develop a novel closed-form solution to quickly estimate the new MLR model after a round of data poisoning without retraining. We then employ line search and simulated annealing to find the poisoning attack solution. Furthermore, we use the MLR attack solution to generate a numerical solution for other models, such as NN. The effectiveness of our algorithm has been tested on the Global Energy Forecasting Competition (GEFCom2012) data set in the presence of outlier detection.
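To give the closed-form idea some shape: for a one-feature least-squares fit, the model after injecting one more (possibly poisoned) point can be recovered from running sums in O(1), without refitting on all data. This mirrors the spirit of the paper's fast post-poisoning estimate but is not its exact MLR formula:

```python
class RunningOLS:
    """Simple y = w*x + b least squares maintained from running sums, so the
    fit after adding one more point is recomputed in O(1)."""
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def fit(self):
        den = self.n * self.sxx - self.sx ** 2
        w = (self.n * self.sxy - self.sx * self.sy) / den
        b = (self.sy - w * self.sx) / self.n
        return w, b
```

An attacker searching over candidate poison points can thus evaluate the corrupted model for each candidate cheaply, which is what makes the line-search / simulated-annealing loop tractable.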

2020-10-12
Chia, Pern Hui, Desfontaines, Damien, Perera, Irippuge Milinda, Simmons-Marengo, Daniel, Li, Chao, Day, Wei-Yen, Wang, Qiushi, Guevara, Miguel.  2019.  KHyperLogLog: Estimating Reidentifiability and Joinability of Large Data at Scale. 2019 IEEE Symposium on Security and Privacy (SP). :350–364.
Understanding the privacy relevant characteristics of data sets, such as reidentifiability and joinability, is crucial for data governance, yet can be difficult for large data sets. While computing the data characteristics by brute force is straightforward, the scale of systems and data collected by large organizations demands an efficient approach. We present KHyperLogLog (KHLL), an algorithm based on approximate counting techniques that can estimate the reidentifiability and joinability risks of very large databases using linear runtime and minimal memory. KHLL enables one to measure reidentifiability of data quantitatively, rather than based on expert judgement or manual reviews. Meanwhile, joinability analysis using KHLL helps ensure the separation of pseudonymous and identified data sets. We describe how organizations can use KHLL to improve protection of user privacy. The efficiency of KHLL allows one to schedule periodic analyses that detect any deviations from the expected risks over time as a regression test for privacy. We validate the performance and accuracy of KHLL through experiments using proprietary and publicly available data sets.
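For intuition, here is a toy version of the underlying HyperLogLog sketch (the base structure, not the K-combined KHLL variant from the paper): distinct counts are estimated from the pattern of leading zeros in hashed values, using constant memory.

```python
import hashlib
from math import log

def _hash64(item):
    """Deterministic 64-bit hash of an item."""
    return int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")

class TinyHLL:
    """Toy HyperLogLog with 2**p registers; each register keeps the longest
    run of leading zeros (plus one) observed in the hash bits routed to it."""
    def __init__(self, p=10):
        self.p, self.m = p, 1 << p
        self.reg = [0] * self.m

    def add(self, item):
        h = _hash64(item)
        idx = h >> (64 - self.p)               # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)  # remaining bits give the rank
        rank = (64 - self.p) - rest.bit_length() + 1
        self.reg[idx] = max(self.reg[idx], rank)

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)
        e = alpha * self.m * self.m / sum(2.0 ** -r for r in self.reg)
        if e <= 2.5 * self.m:                  # small-range correction
            zeros = self.reg.count(0)
            if zeros:
                e = self.m * log(self.m / zeros)
        return e
```

With 2**10 registers the typical relative error is around 3%, which is the kind of accuracy/memory trade-off that makes scanning very large datasets for uniqueness feasible.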
2020-09-18
Zhang, Fan, Kodituwakku, Hansaka Angel Dias Edirisinghe, Hines, J. Wesley, Coble, Jamie.  2019.  Multilayer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data. IEEE Transactions on Industrial Informatics. 15:4362—4369.
The growing number of attacks against cyber-physical systems in recent years elevates concern for the cybersecurity of industrial control systems (ICSs). Current ICS cybersecurity efforts are mainly based on firewalls, data diodes, and other methods of intrusion prevention, which may not be sufficient against growing cyber threats from motivated attackers. To enhance the cybersecurity of ICSs, a cyber-attack detection system built on the concept of defense-in-depth is developed utilizing network traffic data, host system data, and measured process parameters. This attack detection system provides multiple layers of defense in order to buy defenders precious time before unrecoverable consequences occur in the physical system. The data used for demonstrating the proposed detection system come from a real-time ICS testbed. Five attacks, including man-in-the-middle (MITM), denial of service (DoS), data exfiltration, data tampering, and false data injection, are carried out to simulate the consequences of a cyber attack and to generate data for building data-driven detection models. Four classical classification models based on network data and host system data are studied, including k-nearest neighbor (KNN), decision tree, bootstrap aggregating (bagging), and random forest (RF), to provide a secondary line of defense should the intrusion prevention layer fail. Intrusion detection results suggest that KNN, bagging, and RF have low missed-alarm and false-alarm rates for MITM and DoS attacks, providing accurate and reliable detection of these cyber attacks. Cyber attacks that may not be detectable by monitoring network and host system data, such as command tampering and false data injection attacks by an insider, are monitored for by traditional process monitoring protocols. In the proposed detection system, an auto-associative kernel regression model is studied to strengthen early attack detection. The results show that this approach detects physically impactful cyber attacks before significant consequences occur. The proposed multiple-layer data-driven cyber-attack detection system utilizing network, system, and process data is a promising solution for safeguarding an ICS.
2020-09-04
Song, Chengru, Xu, Changqiao, Yang, Shujie, Zhou, Zan, Gong, Changhui.  2019.  A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input. 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC). :473—479.
Generating adversarial samples is gathering much attention as an intuitive approach to evaluating the robustness of learning models. Extensive recent work has demonstrated that numerous advanced image classifiers are defenseless against adversarial perturbations in the white-box setting. However, the white-box setting assumes attackers have prior knowledge of the model parameters, which is generally unavailable in real-world cases. In this paper, we concentrate on the hard-label black-box setting, where attackers can only pose queries to the model that classifies images. The issue thereby becomes one of minimizing a non-continuous function. A black-box approach is proposed to address both the massive number of queries and the non-continuous step function problem by combining a linear fine-grained search, Fibonacci search, and a zeroth-order optimization algorithm. Because the input dimension of an image is so high, the gradient estimate is noisy; hence, we adopt a zeroth-order optimization method suited to high dimensions, which converts the gradient calculation into a linear regression model and extracts the more significant dimensions. Experimental results illustrate that our approach can relatively reduce the number of queries and effectively accelerate convergence of the optimization method.
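The zeroth-order ingredient can be sketched: with only function queries, a gradient is estimated by averaging finite differences along random directions. The quadratic test function is an arbitrary smooth stand-in, not an actual classifier loss:

```python
import random

def zo_gradient(f, x, mu=1e-4, n_dirs=100, seed=0):
    """Estimate grad f(x) from function queries only, by averaging one-sided
    finite differences along random Gaussian directions u:
        g ~= mean[ ((f(x + mu*u) - f(x)) / mu) * u ]."""
    rng = random.Random(seed)
    fx, d = f(x), len(x)
    g = [0.0] * d
    for _ in range(n_dirs):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        fd = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        for i in range(d):
            g[i] += fd * u[i] / n_dirs
    return g
```

Each direction costs one model query, which is why reducing the number of queried directions (or, as in the paper, concentrating on significant dimensions) matters so much in high dimensions.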
2020-08-28
He, Chengkang, Cui, Aijiao, Chang, Chip-Hong.  2019.  Identification of State Registers of FSM Through Full Scan by Data Analytics. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1—6.

Finite-state machines (FSMs) are widely used as the control unit in most digital designs. Many intellectual property protection and obfuscation techniques leverage the exponential number of possible states and state transitions of a large FSM to secure a physical design, on the grounds that retrieving the FSM design from its downstream design or physical implementation is challenging without knowledge of the design. In this paper, we postulate that this assumption may not be sustainable in the face of big data analytics. We demonstrate this by applying a data mining technique to a sufficiently large amount of data collected from a full scan design to identify its FSM state registers. An impact metric is introduced to discriminate FSM state registers from other registers. A decision tree algorithm is constructed from the scan data for the regression analysis of the dependency of other registers on a chosen register to deduce its impact. Registers with greater impact are more likely to be FSM state registers. The proposed scheme is applied to several complex designs from OpenCores. The experimental results show the feasibility of our scheme in correctly identifying most FSM state registers with a high hit rate for a large majority of the designs.

2020-08-07
Moriai, Shiho.  2019.  Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH). :198—198.

We aim at creating a society in which we can resolve various social challenges by incorporating the innovations of the fourth industrial revolution (e.g. IoT, big data, AI, robots, and the sharing economy) into every industry and social life. By doing so, the society of the future will be one in which new values and services are created continuously, making people's lives more comfortable and sustainable. This is Society 5.0, a super-smart society. Security and privacy are key issues to be addressed to realize Society 5.0, and privacy-preserving data analytics will play an important role. In this talk we show our recent work on privacy-preserving data analytics, such as privacy-preserving logistic regression and privacy-preserving deep learning. Finally, we show our ongoing research project under JST CREST “AI”. In this project we are developing privacy-preserving financial data analytics systems that can detect fraud with high security and accuracy. To validate the systems, we will perform demonstration tests with several financial institutions and solve the problems necessary for their implementation in the real world.

2020-06-19
Cha, Suhyun, Ulbrich, Mattias, Weigl, Alexander, Beckert, Bernhard, Land, Kathrin, Vogel-Heuser, Birgit.  2019.  On the Preservation of the Trust by Regression Verification of PLC software for Cyber-Physical Systems of Systems. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:413—418.

Modern large-scale technical systems often face iterative changes to their behaviour, with a requirement for validated quality that is not easy to achieve completely with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operational history can be inherited by the new revision. This trust inheritance by formal analysis relies on a number of implicit assumptions which are not self-evident but easy to miss, and which may lead to a false sense of safety induced by a misunderstood regression verification process. This paper aims at pointing out hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit using practical examples. The explicit trust inheritance analysis clarifies for engineers the extent of the trust that regression verification provides and consequently helps them utilize this formal technique for system validation.

2020-06-08
Hakobyan, Hovhannes H., Vardumyan, Arman V., Kostanyan, Harutyun T..  2019.  Unit Regression Test Selection According To Different Hashing Algorithms. 2019 IEEE East-West Design & Test Symposium (EWDTS). :1–4.
An approach to effective regression test selection is proposed that minimizes the resource usage and time required for complete testing of new features. Details are provided of the analysis of the hashing algorithms used during implementation, of an in-depth review of the software, and of the results achieved during the testing process.
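Since the abstract does not show the selection logic itself, here is a generic sketch of content-hash-based test selection: a test is re-run only if a file it depends on has a changed hash. The file names and dependency map are hypothetical:

```python
import hashlib

def digest(text):
    """Content hash of one source file (SHA-256 here; the paper compares
    several hashing algorithms)."""
    return hashlib.sha256(text.encode()).hexdigest()

def select_tests(test_deps, old_hashes, sources):
    """test_deps: test name -> set of source files it exercises.
    sources: current file contents. Returns (tests to re-run, new hashes)."""
    new_hashes = {path: digest(code) for path, code in sources.items()}
    changed = {p for p in new_hashes if new_hashes[p] != old_hashes.get(p)}
    selected = sorted(t for t, deps in test_deps.items() if changed & deps)
    return selected, new_hashes
```

The choice of hash trades collision risk against hashing speed over a large codebase, which is presumably why the paper benchmarks several algorithms.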
2020-05-22
Abdelhadi, Ameer M.S., Bouganis, Christos-Savvas, Constantinides, George A..  2019.  Accelerated Approximate Nearest Neighbors Search Through Hierarchical Product Quantization. 2019 International Conference on Field-Programmable Technology (ICFPT). :90—98.
A fundamental recurring task in many machine learning applications is the search for the Nearest Neighbor in high dimensional metric spaces. Towards answering queries in large scale problems, state-of-the-art methods employ Approximate Nearest Neighbors (ANN) search, a search that returns the nearest neighbor with high probability, as well as techniques that compress the dataset. Product-Quantization (PQ) based ANN search methods have demonstrated state-of-the-art performance in several problems, including classification, regression, and information retrieval. The dataset is encoded into a Cartesian product of multiple low-dimensional codebooks, enabling faster search and higher compression. Being intrinsically parallel, PQ-based ANN search approaches are amenable to hardware acceleration. This paper proposes a novel Hierarchical PQ (HPQ) based ANN search method, as well as an FPGA-tailored architecture for its implementation, that outperforms current state-of-the-art systems. HPQ gradually refines the search space, reducing the number of data compares and enabling a pipelined search. The mapping of the architecture onto a Stratix 10 FPGA device demonstrates over 250× speedup over current state-of-the-art systems, opening the space for addressing larger datasets and/or improving the query times of current systems.
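The core PQ encode and distance steps (without the hierarchical refinement or the FPGA mapping) can be sketched as follows; the tiny codebooks are hypothetical stand-ins for learned ones:

```python
def pq_encode(vec, codebooks):
    """Split vec into len(codebooks) equal sub-vectors and replace each with
    the index of its nearest codeword in that subspace's codebook."""
    m = len(codebooks)
    d = len(vec) // m
    code = []
    for s, book in enumerate(codebooks):
        sub = vec[s * d:(s + 1) * d]
        best = min(range(len(book)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(sub, book[j])))
        code.append(best)
    return code

def pq_distance(query, code, codebooks):
    """Asymmetric squared distance: the query stays exact, the database
    vector is represented only by its code."""
    m = len(codebooks)
    d = len(query) // m
    return sum(sum((a - b) ** 2
                   for a, b in zip(query[s * d:(s + 1) * d], codebooks[s][code[s]]))
               for s in range(m))
```

Because each subspace distance can be computed independently (and precomputed per query into lookup tables), the scheme parallelizes naturally, which is what the FPGA architecture exploits.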
2020-05-11
Cui, Zhicheng, Zhang, Muhan, Chen, Yixin.  2018.  Deep Embedding Logistic Regression. 2018 IEEE International Conference on Big Knowledge (ICBK). :176–183.
Logistic regression (LR) is used in many areas due to its simplicity and interpretability; at the same time, those two properties limit its classification accuracy. Deep neural networks (DNNs), by contrast, achieve state-of-the-art performance in many domains, but their nonlinearity and complexity make them less interpretable. To balance interpretability and classification performance, we propose a novel nonlinear model, Deep Embedding Logistic Regression (DELR), which augments LR with a nonlinear dimension-wise feature embedding. In DELR, each feature embedding is learned through a deep and narrow neural network, and LR is attached to decide feature importance. A compact and yet powerful model, DELR offers great interpretability: it can tell the importance of each input feature, yield meaningful embeddings of categorical features, and extract actionable changes, making it attractive for tasks such as market analysis and clinical prediction.
2020-04-24
Shuvro, Rezoan A., Das, Pankaz, Hayat, Majeed M., Talukder, Mitun.  2019.  Predicting Cascading Failures in Power Grids using Machine Learning Algorithms. 2019 North American Power Symposium (NAPS). :1—6.
Although there has been notable progress in modeling cascading failures in power grids, few works have used machine learning algorithms. In this paper, cascading failures that lead to massive blackouts in power grids are predicted and classified into no, small, and large cascades using machine learning algorithms. Cascading-failure data is generated using a cascading-failure simulator framework developed earlier. The data set includes, as features, power grid operating parameters such as loading level, level of load shedding, and the capacity of the failed lines, along with topological parameters such as edge betweenness centrality and the average shortest distance, for numerous combinations of two transmission-line failures. Several machine learning algorithms are then used to classify cascading failures. Further, linear regression is used to predict the number of failed transmission lines and the amount of load shedding during a cascade based on initial feature values. This data-driven technique can generate cascading-failure data sets for any real-world power grid, so power-grid engineers can use the approach to generate cascade data, predict vulnerabilities, and enhance the robustness of the grid.
2020-04-20
Kundu, Suprateek, Suthaharan, Shan.  2019.  Privacy-Preserving Predictive Model Using Factor Analysis for Neuroscience Applications. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :67–73.
The purpose of this article is to present an algorithm which maximizes prediction accuracy under a linear regression model while preserving data privacy. This approach anonymizes the data such that the privacy of the original features is fully guaranteed, and the deterioration in predictive accuracy using the anonymized data is minimal. The proposed algorithm employs two stages: the first stage uses a probabilistic latent factor approach to anonymize the original features into a collection of lower dimensional latent factors, while the second stage uses an optimization algorithm to tune the anonymized data further, in a way which ensures a minimal loss in prediction accuracy under the predictive approach specified by the user. We demonstrate the advantages of our approach via numerical studies and apply our method to high-dimensional neuroimaging data where the goal is to predict the behavior of adolescents and teenagers based on functional magnetic resonance imaging (fMRI) measurements.
2020-03-09
Hettiarachchi, Charitha, Do, Hyunsook.  2019.  A Systematic Requirements and Risks-Based Test Case Prioritization Using a Fuzzy Expert System. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). :374–385.

The use of risk information can help software engineers identify software components that are likely vulnerable or require extra attention during testing. Some studies have shown that requirements risk-based approaches can be effective in improving the effectiveness of regression testing techniques. However, the risk estimation processes used in such approaches can be subjective, time-consuming, and costly. In this research, we introduce a fuzzy expert system that emulates human thinking to address the subjectivity-related issues in the risk estimation process in a systematic and efficient way, and thus further improve the effectiveness of test case prioritization. Further, the required data for our approach was gathered through a semi-automated process that made the risk estimation less subjective. The empirical results indicate that the new prioritization approach can improve the rate of fault detection over several existing test case prioritization techniques, while reducing the threats posed by subjective risk estimation.

2020-02-10
Tenentes, Vasileios, Das, Shidhartha, Rossi, Daniele, Al-Hashimi, Bashir M..  2019.  Run-time Detection and Mitigation of Power-Noise Viruses. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :275–280.
Power-noise viruses can be used as denial-of-service attacks by causing voltage emergencies in multi-core microprocessors that may lead to data corruptions and system crashes. In this paper, we present a run-time system for detecting and mitigating power-noise viruses. We present voltage noise data from a power-noise virus and benchmarks collected from an Arm multi-core processor, and we observe that the frequency of voltage emergencies is dramatically increasing during the execution of power-noise attacks. Based on this observation, we propose a regression model that allows for a run-time estimation of the severity of voltage emergencies by monitoring the frequency of voltage emergencies and the operating frequency of the microprocessor. For mitigating the problem, during the execution of critical tasks that require protection, we propose a system which periodically evaluates the severity of voltage emergencies and adapts its operating frequency in order to honour a predefined severity constraint. We demonstrate the efficacy of the proposed run-time system.