Biblio

Found 789 results

Filters: Keyword is learning (artificial intelligence)
2020-11-20
Efstathopoulos, G., Grammatikis, P. R., Sarigiannidis, P., Argyriou, V., Sarigiannidis, A., Stamatakis, K., Angelopoulos, M. K., Athanasopoulos, S. K..  2019.  Operational Data Based Intrusion Detection System for Smart Grid. 2019 IEEE 24th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1—6.

With the rapid progression of Information and Communication Technology (ICT) and especially of the Internet of Things (IoT), the conventional electrical grid is being transformed into a new intelligent paradigm, known as the Smart Grid (SG). SG provides significant benefits both for utility companies and energy consumers, such as two-way communication (of both electricity and information), distributed generation, remote monitoring, self-healing and pervasive control. At the same time, however, this dependence introduces new security challenges, since SG inherits the vulnerabilities of multiple heterogeneous, co-existing legacy and smart technologies, such as IoT and Industrial Control Systems (ICS). An effective countermeasure against the various cyberthreats in SG is the Intrusion Detection System (IDS), which informs the operator in a timely manner about possible cyberattacks and anomalies. In this paper, we provide an anomaly-based IDS especially designed for SG, utilising operational data from a real power plant. In particular, many machine learning and deep learning models were deployed, introducing novel parameters and feature representations in a comparative study. The evaluation analysis demonstrates the efficacy of the proposed IDS and the improvement due to the suggested complex data representation.
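
As a rough illustration of such a comparative study, the sketch below trains two scikit-learn classifiers on a generic labeled operational log. The file name, label column, and model choices are assumptions for illustration, not the paper's actual features or parameters.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical operational log; "label" marks anomalous records (1) vs. normal (0).
df = pd.read_csv("operational_data.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Compare a few model families, as the paper does across many ML/DL models.
for name, model in [("RandomForest", RandomForestClassifier(random_state=0)),
                    ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    print(name, "F1:", f1_score(y_te, model.predict(X_te)))
```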

Prasad, G., Huo, Y., Lampe, L., Leung, V. C. M..  2019.  Machine Learning Based Physical-Layer Intrusion Detection and Location for the Smart Grid. 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1—6.
Security and privacy of smart grid communication data is crucial given the nature of the continuous bidirectional information exchange between the consumer and the utilities. Data security has conventionally been ensured using cryptographic techniques implemented at the upper layers of the network stack. However, it has been shown that security can be further enhanced using physical layer (PHY) methods. To aid and/or complement such PHY and upper layer techniques, in this paper, we propose a PHY design that can detect and locate not only an active intruder but also a passive eavesdropper in the network. Our method can either be used as a stand-alone solution or together with existing techniques to achieve improved smart grid data security. Our machine learning based solution intelligently and automatically detects and locates a possible intruder in the network by reusing power line transmission modems installed in the grid for communication purposes. Simulation results show that our cost-efficient design provides near ideal intruder detection rates and also estimates its location with a high degree of accuracy.
Roy, D. D., Shin, D..  2019.  Network Intrusion Detection in Smart Grids for Imbalanced Attack Types Using Machine Learning Models. 2019 International Conference on Information and Communication Technology Convergence (ICTC). :576—581.
Smart grid has evolved as the next generation power grid paradigm which enables the transfer of real-time information between the utility company and the consumer via smart meters and advanced metering infrastructure (AMI). This information facilitates many services for both parties, such as automatic meter reading, demand side management, and time-of-use (TOU) pricing. However, there have been growing security and privacy concerns over smart grid systems, which are built with both smart and legacy information and operational technologies. Intrusion detection is a critical security service for smart grid systems, alerting the system operator to the presence of ongoing attacks. Hence, a great deal of research has been conducted on intrusion detection in the past, especially anomaly-based intrusion detection. Problems emerge when common pattern recognition approaches are applied to imbalanced data, which contain far more instances of normal behavior than of attacks; such approaches yield low detection rates for the minority classes. In this paper, we study various machine learning models to overcome this drawback by using the CIC-IDS2018 dataset [1].
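
A common first remedy for this kind of imbalance is class-weighted training. A minimal sketch with scikit-learn, where the synthetic data is only a stand-in for CIC-IDS2018 features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 95% "normal" traffic, 5% "attack" traffic.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights errors on the rare attack class upward.
clf = RandomForestClassifier(class_weight="balanced", n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```
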
Goyal, Y., Sharma, A..  2019.  A Semantic Machine Learning Approach for Cyber Security Monitoring. 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC). :439—442.
Security refers to precautions designed to protect the availability and integrity of information exchanged among the digital global community. Information security measures typically protect digital data from unauthorized access, disclosure, manipulation, alteration or destruction, on both hardware and software technologies. According to evaluations by experts working in the field of information security, new cyber-attacks keep emerging across all business processes. The analyses conducted have determined that, although the level of risk is not excessive in most of the attacks, they pose a serious risk to critical data, and the severity of these attacks is increasing. Prior security systems have been established to monitor various cyber-threats, predominantly using system-processed data or alerts exhibiting both deterministic and stochastic patterns. The principal finding for deterministic patterns in cyber-attacks is that they are neither independent nor random over time. Consequently, the number of attacks in the past helps to forecast the number of future attacks. The deterministic patterns can often be leveraged to generate moderately accurate monitoring.
2020-11-17
Zhou, Z., Qian, L., Xu, H..  2019.  Intelligent Decentralized Dynamic Power Allocation in MANET at Tactical Edge based on Mean-Field Game Theory. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :604—609.

In this paper, the decentralized dynamic power allocation problem is investigated for mobile ad hoc networks (MANET) at the tactical edge. Due to the mobility and self-organizing features of MANET and environmental uncertainties on the battlefield, many existing optimal power allocation algorithms are neither efficient nor practical. Furthermore, the continuously increasing scale of the wireless connection population in the emerging Internet of Battlefield Things (IoBT) introduces additional challenges for optimal power allocation due to the “Curse of Dimensionality”. In order to address these challenges, a novel Actor-Critic-Mass algorithm is proposed by integrating the emerging Mean Field game theory with online reinforcement learning. The proposed approach is able not only to learn the optimal power allocation for IoBT in a decentralized manner, but also to effectively handle uncertainties from the harsh environment at the tactical edge. In the developed scheme, each agent in IoBT has three neural networks (NN): 1) the Critic NN learns the optimal cost function, which penalizes deviation of the signal-to-interference-plus-noise ratio (SINR) from its target; 2) the Actor NN estimates the optimal transmitter power adjustment rate; and 3) the Mass NN learns the probability density function of all agents' transmitting power in IoBT. The three NNs are tuned based on the Fokker-Planck-Kolmogorov (FPK) and Hamilton-Jacobi-Bellman (HJB) equations given in Mean Field game theory. An IoBT wireless network has been simulated to evaluate the effectiveness of the proposed algorithm. The results demonstrate that the Actor-Critic-Mass algorithm can effectively approximate the probability distribution of all agents' transmission power and converge to the target SINR. Moreover, the optimal decentralized power allocation is obtained by integrating mean-field game theory with reinforcement learning.
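
To make the three-network structure concrete, here is a minimal, heavily simplified PyTorch sketch. The state layout, the one-step cost, and the use of an empirical distribution target for the mass network are all assumptions for illustration; the paper's actual updates follow the FPK/HJB equations.

```python
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 32), nn.Tanh(), nn.Linear(32, o))

critic, actor, mass = mlp(3, 1), mlp(3, 1), mlp(1, 1)
opt = torch.optim.Adam(
    list(critic.parameters()) + list(actor.parameters()) + list(mass.parameters()),
    lr=1e-3)

target_sinr = 1.0
for step in range(1000):
    s = torch.rand(64, 3)                         # assumed state: (power, interference, gain)
    rate = actor(s)                               # power adjustment rate chosen by the actor
    p_next = (s[:, :1] + 0.1 * rate).clamp(0, 1)  # Euler step on the transmit power
    sinr = p_next * s[:, 2:3] / (s[:, 1:2] + 1e-3)
    cost = (sinr - target_sinr) ** 2              # assumed running cost: track the target SINR
    critic_loss = ((critic(s) - cost.detach()) ** 2).mean()  # fit the cost (one-step proxy)
    actor_loss = cost.mean()                                 # steer power toward the target SINR
    p = p_next.detach()
    emp_cdf = (p.T <= p).float().mean(dim=1, keepdim=True)   # empirical law of the population
    mass_loss = ((mass(p) - emp_cdf) ** 2).mean()            # mass NN fits the distribution (CDF proxy)
    opt.zero_grad()
    (critic_loss + actor_loss + mass_loss).backward()
    opt.step()
```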

Agadakos, I., Ciocarlie, G. F., Copos, B., George, J., Leslie, N., Michaelis, J..  2019.  Security for Resilient IoBT Systems: Emerging Research Directions. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1—6.

Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, a multitude of operational conditions (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a set of research directions are proposed that aim to fundamentally address the issues of trust and trustworthiness in contested battlefield environments, building on prior research in the cybersecurity domain. These research directions focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoBT assets and systems.

Agadakos, I., Ciocarlie, G. F., Copos, B., Emmi, M., George, J., Leslie, N., Michaelis, J..  2019.  Application of Trust Assessment Techniques to IoBT Systems. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :833—840.

Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, current operational conditions necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a collection of prior-developed cybersecurity techniques is reviewed for applicability to conditions presented by IoBT operational environments (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) through use of supporting case study examples. The research techniques covered focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoT assets and IoBT systems.

Hu, Y., Sanjab, A., Saad, W..  2019.  Dynamic Psychological Game Theory for Secure Internet of Battlefield Things (IoBT) Systems. IEEE Internet of Things Journal. 6:3712—3726.

In this paper, a novel anti-jamming mechanism is proposed to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, the problem is formulated as a dynamic psychological game between a soldier and an attacker. In this game, the soldier seeks to accomplish a time-critical mission by traversing a battlefield within a certain amount of time, while maintaining connectivity with an IoBT network. The attacker, on the other hand, seeks to find the optimal opportunity to compromise the IoBT network and maximize the delay of the soldier's IoBT transmission link. The soldier's and the attacker's psychological behaviors are captured using tools from psychological game theory, with which the soldier's and attacker's intentions to harm one another are considered in their utilities. To solve this game, a novel learning algorithm based on Bayesian updating is proposed to find an ε-like psychological self-confirming equilibrium of the game.
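
For intuition on the Bayesian-updating core, here is a toy belief update over the opponent's intention. The hypotheses, priors, and likelihoods are invented for illustration and are not the paper's model.

```python
# Prior belief about the opponent's intention (assumed values).
priors = {"benign": 0.7, "hostile": 0.3}
# Assumed likelihood of observing a jammed transmission under each intention.
likelihood_jammed = {"benign": 0.05, "hostile": 0.6}

def update(belief, jammed):
    # Bayes' rule: posterior ∝ likelihood × prior, then normalize.
    post = {}
    for h, p in belief.items():
        l = likelihood_jammed[h] if jammed else 1 - likelihood_jammed[h]
        post[h] = l * p
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

belief = priors
for obs in [True, True, False, True]:   # observed link-jamming events
    belief = update(belief, obs)
print(belief)   # belief in a hostile attacker rises with repeated jamming
```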

2020-11-09
Li, H., Patnaik, S., Sengupta, A., Yang, H., Knechtel, J., Yu, B., Young, E. F. Y., Sinanoglu, O..  2019.  Attacking Split Manufacturing from a Deep Learning Perspective. 2019 56th ACM/IEEE Design Automation Conference (DAC). :1–6.
The notion of integrated circuit split manufacturing, which delegates the front-end-of-line (FEOL) and back-end-of-line (BEOL) parts to different foundries, is to prevent overproduction, piracy of the intellectual property (IP), or targeted insertion of hardware Trojans by adversaries in the FEOL facility. In this work, we challenge the security promise of split manufacturing by formulating various layout-level placement and routing hints as vector- and image-based features. We construct a sophisticated deep neural network which can infer the missing BEOL connections with high accuracy. Compared with the publicly available network-flow attack [1], for the same set of ISCAS-85 benchmarks, we achieve 1.21× accuracy when splitting on M1 and 1.12× accuracy when splitting on M3, with less than 1% running time.
Kemp, C., Calvert, C., Khoshgoftaar, T..  2018.  Utilizing Netflow Data to Detect Slow Read Attacks. 2018 IEEE International Conference on Information Reuse and Integration (IRI). :108–116.
Attackers can leverage several techniques to compromise computer networks, ranging from sophisticated malware to DDoS (Distributed Denial of Service) attacks that target the application layer. Application layer DDoS attacks, such as Slow Read, are implemented with just enough traffic to tie up CPU or memory resources, causing web and application servers to go offline. Such attacks can mimic legitimate network requests, making them difficult to detect, and they utilize less volume than traditional DDoS attacks. These low-volume attack methods can often go undetected by network security solutions until it is too late. In this paper, we explore the use of machine learners for detecting Slow Read DDoS attacks on web servers at the application layer. Our approach uses a generated dataset based upon Netflow data collected at the application layer in a live network environment. Our Netflow data follows the IP Flow Information Export (IPFIX) standard, providing significant flexibility and features. These Netflow features scale to a growing amount of traffic and have worked well in our previous DDoS work on detecting evasion techniques. Our generated dataset consists of real-world network data collected from a production network. We use eight different classifiers to build Slow Read attack detection models. Our wide selection of learners provides us with a more comprehensive analysis of Slow Read detection models. Experimental results show that the machine learners were quite successful in identifying the Slow Read attacks with a high detection rate and a low false alarm rate. The experiment demonstrates that our chosen Netflow features are discriminative enough to detect such attacks accurately.
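
A hedged sketch of this modeling setup: IPFIX-style flow records feeding several classifiers. The file name, feature columns (assumed numeric-encoded), and the three classifiers are placeholders, not the paper's actual feature set or eight learners.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

flows = pd.read_csv("ipfix_flows.csv")   # hypothetical export of labeled flow records
features = ["duration", "packets", "bytes", "mean_iat", "tcp_flags"]  # assumed numeric columns
X, y = flows[features], flows["is_slow_read"]   # assumed 0/1 label

# Cross-validate each learner on the same flow features, as in a comparative study.
for clf in (DecisionTreeClassifier(), GaussianNB(), RandomForestClassifier()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```
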
Zhang, T., Wang, R., Ding, J., Li, X., Li, B..  2018.  Face Recognition Based on Densely Connected Convolutional Networks. 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). :1–6.
Face recognition methods based on convolutional neural networks have achieved great success. Existing models usually use a residual network as the core architecture. The residual network is good at reusing features, but it is difficult for it to explore new features, whereas a densely connected network can be used to explore new features. We propose a face recognition model named Dense Face to explore the performance of densely connected networks in face recognition. The model is based on a densely connected convolutional neural network and is composed of Dense Block layers, transition layers and a classification layer. The model was trained with the joint supervision of center loss and softmax loss through feature normalization, enabling the convolutional neural network to learn more discriminative features. The Dense Face model was trained using the publicly available CASIA-WebFace dataset and was tested on the LFW and CAS-PEAL-R1 datasets. Experimental results show that the densely connected convolutional neural network achieves higher face verification accuracy and better robustness than other models such as VGG Face and ResNet.
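
The dense connectivity this entry (and the steganalysis entry below) relies on reduces to one pattern: each layer consumes the concatenation of all earlier feature maps. A minimal PyTorch sketch, where the growth rate and depth are arbitrary choices rather than the paper's:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1, bias=False)))
            ch += growth   # each layer sees all previous feature maps

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # feature reuse via concatenation
        return torch.cat(feats, dim=1)

block = DenseBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)   # -> (1, 64 + 4*32, 56, 56)
```
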
Yang, J., Kang, X., Wong, E. K., Shi, Y..  2018.  Deep Learning with Feature Reuse for JPEG Image Steganalysis. 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). :533–538.
It is challenging to detect weak hidden information in a JPEG compressed image. In this paper, we propose a 32-layer convolutional neural network (CNN) with feature reuse, which concatenates all features from previous layers. The proposed method improves the flow of gradients and information, and the shared features and bottleneck layers in the proposed CNN model further reduce the number of parameters dramatically. The experimental results show that the proposed method significantly reduces the detection error rate compared with existing JPEG steganalysis methods, e.g., the state-of-the-art XuNet method and the conventional SCA-GFR method. Compared with XuNet and SCA-GFR in detecting J-UNIWARD at 0.1 bpnzAC (bits per non-zero AC DCT coefficient), the proposed method reduces the detection error rate by 4.33% and 6.55%, respectively.
Wheelus, C., Bou-Harb, E., Zhu, X..  2018.  Tackling Class Imbalance in Cyber Security Datasets. 2018 IEEE International Conference on Information Reuse and Integration (IRI). :229–232.
It is clear that cyber-attacks are a danger that must be addressed with great resolve, as they threaten the information infrastructure upon which we all depend. Many studies have been published expressing varying levels of success with machine learning approaches to combating cyber-attacks, but many modern studies still focus on training and evaluating with very outdated datasets containing old attacks that are no longer a threat, and that also lack data on new attacks. Recent datasets like UNSW-NB15 and SANTA have been produced to address this problem. Even so, these modern datasets suffer from class imbalance, which reduces the efficacy of predictive models trained using them. Herein we evaluate several pre-processing methods for addressing the class imbalance problem, using several of the most popular machine learning algorithms and a variant of UNSW-NB15 based upon the attributes from the SANTA dataset.
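
Resampling is a typical pre-processing option in such evaluations. A sketch using imbalanced-learn's SMOTE on synthetic data; in practice the UNSW-NB15/SANTA attributes would replace the generated features.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced stand-in: 97% benign, 3% attack.
X, y = make_classification(n_samples=10000, weights=[0.97, 0.03], random_state=0)

# SMOTE synthesizes minority-class samples up to parity before training.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
```
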
2020-11-04
Apruzzese, G., Colajanni, M., Ferretti, L., Marchetti, M..  2019.  Addressing Adversarial Attacks Against Security Systems Based on Machine Learning. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1—18.

Machine-learning solutions are successfully adopted in multiple contexts, but the application of these techniques to the cyber security domain is complex and still immature. Among the many open issues that affect security systems based on machine learning, we concentrate on adversarial attacks that aim to affect the detection and prediction capabilities of machine-learning models. We consider realistic types of poisoning and evasion attacks targeting security solutions devoted to malware, spam and network intrusion detection. We explore the possible damage that an attacker can cause to a cyber detector and present some existing and original defensive techniques in the context of intrusion detection systems. This paper contains several performance evaluations that are based on extensive experiments using large traffic datasets. The results highlight that modern adversarial attacks are highly effective against machine-learning classifiers for cyber detection, and that existing solutions require improvements in several directions. The paper paves the way for more robust machine-learning-based techniques that can be integrated into cyber security platforms.

Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M..  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188—193.

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
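
For intuition, a generic sketch of this attack family: iterate signed-gradient steps on a pre-trained network (no training data needed) and verify imperceptibility with SSIM. The model, target class, step size, and iteration count are placeholders, not the TrISec procedure itself.

```python
import torch
import torchvision.models as models
from skimage.metrics import structural_similarity as ssim

model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)            # only the image is optimized

x0 = torch.rand(1, 3, 224, 224)        # stand-in for a traffic-sign image
x = x0.clone().requires_grad_(True)
target = torch.tensor([7])             # desired wrong class (arbitrary)

for _ in range(50):
    loss = torch.nn.functional.cross_entropy(model(x), target)
    loss.backward()
    with torch.no_grad():
        x -= 2e-3 * x.grad.sign()      # small signed-gradient step toward the target class
        x.clamp_(0, 1)
    x.grad.zero_()

a = x0[0].permute(1, 2, 0).numpy()
b = x.detach()[0].permute(1, 2, 0).numpy()
print("SSIM:", ssim(a, b, channel_axis=2, data_range=1.0))  # close to 1.0 => imperceptible
```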

Chacon, H., Silva, S., Rad, P..  2019.  Deep Learning Poison Data Attack Detection. 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). :971—978.

Deep neural networks are widely used in many walks of life. Techniques such as transfer learning enable neural networks pre-trained on certain tasks to be retrained for a new duty, often with much less data. Users have access to both pre-trained model parameters and model definitions along with testing data, but have either limited access to the training data or only a subset of it. This is risky for system-critical applications, where adversarial information can be maliciously included during the training phase to attack the system. Determining the existence and level of attack in a model is challenging. In this paper, we present evidence of how adversarially attacking the training data expands the boundary of the model parameters, using a CNN model and the MNIST data set as a test case. This expansion is due to new characteristics of the poisonous data that are added to the training data. Approaching the problem from the feature space learned by the network provides a relation between the features and the possible parameters taken by the model during the training phase. An algorithm is proposed to determine whether a given network was attacked during training by comparing the boundaries of the parameter distributions in intermediate layers of the model, estimated using the Maximum Entropy Principle and the variational inference approach.

Zhang, J., Chen, J., Wu, D., Chen, B., Yu, S..  2019.  Poisoning Attack in Federated Learning using Generative Adversarial Nets. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :374—380.

Federated learning is a novel distributed learning framework, where the deep learning model is trained in a collaborative manner among thousands of participants. Only model parameters are shared between the server and participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, where the attacker can act as a benign participant in federated learning and upload a poisoned update to the server, easily affecting the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GAN). That is, an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples of the other participants' training sets, which do not belong to the attacker. These generated samples are then fully controlled by the attacker to generate the poisoning updates, and the global model is compromised by the attacker uploading the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning tasks and the main tasks.
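
The scaling step is the crux: a single insider can dominate the average if its update is boosted. A toy numpy illustration of a FedAvg-style round, with all numbers invented:

```python
import numpy as np

global_w = np.zeros(4)
# Nine benign participants submit similar small updates.
benign_updates = [np.random.normal(0.1, 0.02, 4) for _ in range(9)]

malicious_direction = np.array([-1.0, 2.0, -1.0, 0.5])  # attacker's goal direction (invented)
scale = 10.0                                            # boost so the update survives averaging
poisoned_update = scale * malicious_direction

updates = benign_updates + [poisoned_update]
global_w += np.mean(updates, axis=0)   # FedAvg-style aggregation
print(global_w)                        # dominated by the scaled poisoned update
```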

Jin, Y., Tomoishi, M., Matsuura, S..  2019.  A Detection Method Against DNS Cache Poisoning Attacks Using Machine Learning Techniques: Work in Progress. 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA). :1—3.

DNS based domain name resolution has been known as one of the most fundamental Internet services. Meanwhile, DNS cache poisoning attacks have become a critical threat in the cyber world. In addition to Kaminsky attacks, falsified data from compromised authoritative DNS servers have also become a threat nowadays. Several solutions have been proposed in the literature in order to prevent DNS cache poisoning attacks in the former case, such as DNSSEC (DNS Security Extensions); however, no effective solutions have been proposed for the latter case. Moreover, due to the performance issues and the significant workload increase on DNS cache servers, DNSSEC has not been widely deployed yet. In this work, we propose an advanced detection method against DNS cache poisoning attacks using machine learning techniques. In the proposed method, in addition to the basic 5-tuple information of a DNS packet, we intend to add a number of special features extracted based on the standard DNS protocols as well as heuristic aspects such as “time related features”, “GeoIP related features” and “trigger of cached DNS data”, etc., in order to identify the DNS response packets used for cache poisoning attacks, especially those from compromised authoritative DNS servers. In this paper, as a work in progress, we describe the basic idea and concept of our proposed method as well as the intended network topology of the experimental environment, while the prototype implementation, training data preparation and model creation as well as the evaluations are left to future work.
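
Since the paper stops at the concept stage, the following is only a guess at what such a feature vector could look like; every field name and heuristic below is a placeholder, not the authors' implementation.

```python
def dns_response_features(pkt, cache):
    """Build a fixed-width vector from one DNS response record (all fields assumed)."""
    return [
        hash(pkt["src_ip"]) % 2**16,   # 5-tuple basics, hashed for fixed width
        pkt["src_port"],
        pkt["dst_port"],
        pkt["ttl"],
        pkt["answer_count"],
        int(pkt["qname"] in cache),    # "trigger of cached DNS data" heuristic
        pkt["geoip_distance_km"],      # "GeoIP related" feature (precomputed upstream)
        pkt["resp_time_ms"],           # "time related" feature
    ]

sample = {"src_ip": "192.0.2.1", "src_port": 53, "dst_port": 33921, "ttl": 64,
          "answer_count": 1, "qname": "example.com", "geoip_distance_km": 7421.3,
          "resp_time_ms": 2.4}
print(dns_response_features(sample, cache={"example.com"}))
```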

Khurana, N., Mittal, S., Piplai, A., Joshi, A..  2019.  Preventing Poisoning Attacks On AI Based Threat Intelligence Systems. 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). :1—6.

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we use an ensembled semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.

Shen, J., Zhu, X., Ma, D..  2019.  TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications. IEEE Access. 7:41498—41506.

Internet application providers now have more incentive than ever to collect user data, which greatly increases the risk of user privacy violations due to the emergence of deep neural networks. In this paper, we propose TensorClog, a poisoning attack technique that is designed for privacy protection against deep neural networks. TensorClog has three properties, each serving a privacy protection purpose: 1) training on TensorClog poisoned data results in lower inference accuracy, reducing the incentive for abusive data collection; 2) training on TensorClog poisoned data converges to a larger loss, which prevents the neural network from learning private information; and 3) TensorClog regularizes the perturbation to retain a high structural similarity, so that the poisoning does not affect the actual content in the data. Applying our TensorClog poisoning technique to the CIFAR-10 dataset results in an increase in both converged training loss and test error by 300% and 272%, respectively. It manages to maintain the data's human-perceived content with a high SSIM index of 0.9905. More experiments, including different limited-information attack scenarios and a real-world application transferred from pre-trained ImageNet models, are presented to further evaluate TensorClog's effectiveness in more complex situations.

Sultana, K. Z., Williams, B. J., Bosu, A..  2018.  A Comparison of Nano-Patterns vs. Software Metrics in Vulnerability Prediction. 2018 25th Asia-Pacific Software Engineering Conference (APSEC). :355—364.

Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level of the issue to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods compared to software metrics, they can be used in developing vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore, have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches as these code constructs outperform standard metrics in terms of prediction recall.

2020-11-02
Huang, S., Chen, Q., Chen, Z., Chen, L., Liu, J., Yang, S..  2019.  A Test Cases Generation Technique Based on an Adversarial Samples Generation Algorithm for Image Classification Deep Neural Networks. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :520–521.

Widely applied in various fields, deep learning (DL) is becoming the key driving force in industry. Although it has achieved great success in artificial intelligence tasks, similar to traditional software it has defects which, once triggered, can cause unpredictable accidents and losses. In this paper, we propose a test case generation technique based on an adversarial samples generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of good test cases for the testing of DNNs, especially when existing test cases are insufficient. We briefly introduce our method and implement the framework. We conduct experiments on some classic DNN models and datasets. We further evaluate the test set by using a coverage metric based on the states of the DNN.
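
One common state-based coverage metric of this kind is neuron activation coverage: the fraction of neurons that fire on at least one test input. A minimal PyTorch sketch, where the model, layer choice, and test inputs are all stand-ins:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
activated = torch.zeros(128, dtype=torch.bool)

def hook(module, inputs, output):
    # Mark a neuron as covered if it produced a positive activation on any test input.
    activated.logical_or_(output.gt(0.0).any(dim=0))

model[2].register_forward_hook(hook)      # observe the ReLU layer's outputs
test_cases = torch.randn(256, 1, 28, 28)  # stand-in for generated test cases
model(test_cases)
print("neuron coverage:", activated.float().mean().item())
```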

Ping, C., Jun-Zhe, Z..  2019.  Research on Intelligent Evaluation Method of Transient Analysis Software Function Test. 2019 International Conference on Advances in Construction Machinery and Vehicle Engineering (ICACMVE). :58–61.

In a transient distributed cloud computing environment, software is vulnerable to attack, which threatens software functional completeness, so it is necessary to carry out functional testing. In order to solve the problems of high overhead and high complexity in unsupervised test methods, an intelligent evaluation method for transient analysis software function testing based on an active deep learning algorithm is proposed. Firstly, the active deep learning mathematical model of transient analysis software function testing is constructed using an association rule mining method, and the correlation dimension characteristics of software function failure are analyzed. Then the reliability of the software is measured by the spectral density distribution method of software functional completeness. The intelligent evaluation model of transient analysis software function testing is established in the transient distributed cloud computing environment, and function testing and intelligent reliability evaluation are realized. Finally, the performance of the transient analysis software is verified by a simulation experiment. The results show that with this method the localization of software functional integrity issues is highly accurate and the intelligent evaluation of transient analysis software function testing adapts well. It ensures the safe and reliable operation of the software.

Zhang, Z., Xie, X..  2019.  On the Investigation of Essential Diversities for Deep Learning Testing Criteria. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS). :394–405.

In recent years, more and more testing criteria for deep learning systems have been proposed to ensure system robustness and reliability. These criteria were defined based on different perspectives of diversity. However, there is a lack of comprehensive investigation into which diversities are most essential for a testing criterion for deep learning systems. Therefore, in this paper, we conduct an empirical study to investigate the relation between test diversities and erroneous behaviors of deep learning models. We define five metrics to reflect diversities in neuron activities, and leverage metamorphic testing to detect erroneous behaviors. We investigate the correlation between the metrics and erroneous behaviors. We also go a step further and measure the quality of test suites under the guidance of the defined metrics. Our results provide comprehensive insights into the essential diversities a testing criterion needs in order to exhibit good fault detection ability.
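
Metamorphic testing sidesteps the oracle problem by checking invariances instead of ground-truth labels. A toy example of the idea, where the model, transform, and inputs are illustrative rather than the study's setup:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10)).eval()
x = torch.rand(100, 1, 28, 28)   # stand-in test inputs

with torch.no_grad():
    base = model(x).argmax(dim=1)
    # Metamorphic relation: a small brightness shift should not change the prediction.
    shifted = model((x + 0.05).clamp(0, 1)).argmax(dim=1)

errors = (base != shifted).sum().item()   # violations count as erroneous behaviors
print(f"{errors} erroneous behaviors out of {len(x)} inputs")
```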

Chong, T., Anu, V., Sultana, K. Z..  2019.  Using Software Metrics for Predicting Vulnerable Code-Components: A Study on Java and Python Open Source Projects. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :98–103.

Software vulnerabilities often remain hidden until an attacker exploits the weak/insecure code. Therefore, testing the software from a vulnerability discovery perspective becomes challenging for developers if they do not inspect their code thoroughly (which is time-consuming). We propose that vulnerability prediction using certain software metrics can support the testing process by identifying vulnerable code-components (e.g., functions, classes, etc.). Once a code-component is predicted as vulnerable, developers can focus their testing efforts on it, thereby avoiding the time/effort required for testing the entire application. The current paper presents a study that compares how software metrics perform as vulnerability predictors for software projects developed in two different languages (Java vs Python). The goal of this research is to analyze the vulnerability prediction performance of software metrics for different programming languages. We designed and conducted experiments on security vulnerabilities reported for three Java projects (Apache Tomcat 6, Tomcat 7, Apache CXF) and two Python projects (Django and Keystone). In this paper, we focus on a specific type of code component: functions. We apply machine learning models to predict vulnerable functions. Overall results show that software metrics-based vulnerability prediction is more useful for Java projects than for Python projects (i.e., software metrics used as features predicted vulnerable Java functions with higher recall and precision than vulnerable Python functions).
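
A hedged sketch of metrics-based vulnerability prediction at the function level, reporting the precision/recall this study compares. The file name and metric columns are common choices, not necessarily the paper's set.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

funcs = pd.read_csv("function_metrics.csv")   # hypothetical per-function metrics table
metrics = ["loc", "cyclomatic_complexity", "nesting_depth", "fan_in", "fan_out"]
X, y = funcs[metrics], funcs["is_vulnerable"]  # assumed 0/1 vulnerability label

scores = cross_validate(RandomForestClassifier(random_state=0), X, y,
                        cv=5, scoring=("precision", "recall"))
print("precision:", scores["test_precision"].mean(),
      "recall:", scores["test_recall"].mean())
```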