Biblio
Filters: Keyword is Training
A convolutional neural network-based reviews classification method for explainable recommendations. 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS). :1–5.
2020. Recent advances in information filtering have resulted in effective recommender systems that can provide online personalized recommendations to millions of users worldwide. However, most of these systems focus on producing high-quality recommendations while neglecting to explain them. Moreover, the classification of the reviews given to users as explanations has not been fully exploited in previous studies. In this paper, we develop a convolutional neural network-based reviews classification method for explainable recommendation systems. The convolutional neural network extracts review features to predict whether the reviews provided as explanations are positive or negative. Based on this additional information, users can understand not only why certain items are recommended to them but also the nature of the explanations themselves. We conduct experiments on a dataset from Amazon. The experimental results show that our method outperforms state-of-the-art methods.
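As a rough illustration of the approach this abstract describes, the sketch below implements a standard text CNN for binary review classification in PyTorch; the layer sizes, vocabulary handling, and data pipeline are assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a CNN review classifier; hyperparameters are illustrative.
import torch
import torch.nn as nn

class ReviewCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, num_filters=64,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel size extracts n-gram features.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 2)  # pos/neg

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-pool each feature map over time, then concatenate.
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1))        # logits: (batch, 2)

model = ReviewCNN()
logits = model(torch.randint(0, 20000, (8, 120)))  # 8 dummy tokenized reviews
```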
IPv6 DoS Attacks Detection Using Machine Learning Enhanced IDS in SDN/NFV Environment. 2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS). :263–266.
2020. The rapid growth of IPv6 traffic makes security issues increasingly important. This paper proposes an IPv6 network security system that integrates signature-based Intrusion Detection Systems (IDS) and machine learning classification technologies to improve the accuracy of IPv6 denial-of-service (DoS) attack detection. In addition, the paper enhances IPv6 network security defense capabilities through software-defined networking (SDN) and network function virtualization (NFV) technologies. The experimental results show that the proposed detection and defense mechanisms effectively strengthen IPv6 network security.
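A hedged sketch of the hybrid detection idea described here: signature rules catch known IPv6 DoS patterns, and a trained classifier scores the remaining flows. The feature set, labels, and rule are illustrative assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

rng = np.random.default_rng(0)
# Toy flow features: [packets/s, bytes/s, ICMPv6 ratio, unique dst count]
X_train = rng.random((500, 4))
y_train = (X_train[:, 0] > 0.8).astype(int)   # stand-in DoS labels

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def signature_match(flow):
    # Placeholder for Snort/Suricata-style IPv6 rules.
    return flow[2] > 0.95  # e.g., a near-pure ICMPv6 flood

def classify(flow):
    if signature_match(flow):
        return "alert (signature)"
    return "alert (ML)" if clf.predict([flow])[0] else "benign"

print(classify(rng.random(4)))
```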
Active DNN IP Protection: A Novel User Fingerprint Management and DNN Authorization Control Technique. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :975—982.
2020. The training process of a deep learning model is costly, so the model can be treated as intellectual property (IP) of its creator. However, a pirate can illegally copy, redistribute, or abuse the model without permission. In recent years, a few Deep Neural Network (DNN) IP protection works have been proposed. However, most existing works passively verify the copyright of the model after piracy occurs and lack user identity management, so they cannot provide commercial copyright management functions. In this paper, a novel user fingerprint management and DNN authorization control technique based on backdoors is proposed to provide active DNN IP protection. The proposed method not only verifies the ownership of the model but also authenticates and manages each user's unique identity, providing a commercially applicable DNN IP management mechanism. Experimental results on the CIFAR-10, CIFAR-100, and Fashion-MNIST datasets show that the proposed method achieves a high detection rate for user authentication (up to 100% on all three datasets). Illegal users with forged fingerprints cannot pass authentication, as the detection rates are 0% on all three datasets. The model owner can verify ownership by triggering the backdoor with high confidence. In addition, the accuracy drops are only 0.52%, 1.61%, and -0.65% on CIFAR-10, CIFAR-100, and Fashion-MNIST, respectively, indicating that the proposed method does not affect the performance of the DNN models. The proposed method is also robust to model fine-tuning and pruning attacks. The detection rates for owner verification on CIFAR-10, CIFAR-100, and Fashion-MNIST are all 100% after a model pruning attack, and 90%, 83%, and 93%, respectively, after a model fine-tuning attack, on the premise that the attacker wants to preserve the accuracy of the model.
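A minimal, assumption-level sketch of backdoor-based user fingerprinting (not the paper's exact construction): each user is issued a unique trigger pattern, the model is trained so trigger-stamped inputs map to a reserved label, and authentication checks the detection rate.

```python
import numpy as np

def make_fingerprint(user_seed, shape=(32, 32, 3), strength=0.15):
    # Each user's fingerprint is a fixed pseudo-random perturbation pattern.
    rng = np.random.default_rng(user_seed)
    return strength * rng.standard_normal(shape)

def stamp(images, fingerprint):
    return np.clip(images + fingerprint, 0.0, 1.0)

def detection_rate(model_predict, images, fingerprint, reserved_label):
    # Fraction of stamped inputs classified as the reserved backdoor label.
    preds = model_predict(stamp(images, fingerprint))
    return float((preds == reserved_label).mean())

# Usage sketch: a legitimate user should score near 1.0; a forged fingerprint
# (different seed) should score near 0.0, matching the rates reported above.
```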
A Machine-Learning-Resistant 3D PUF with 8-layer Stacking Vertical RRAM and 0.014% Bit Error Rate Using In-Cell Stabilization Scheme for IoT Security Applications. 2020 IEEE International Electron Devices Meeting (IEDM). :28.6.1–28.6.4.
2020. In this work, we propose and demonstrate a multi-layer 3-dimensional (3D) vertical RRAM (VRRAM) PUF with an in-cell stabilization scheme to improve both cost efficiency and reliability. An 8-layer VRRAM array was manufactured with excellent uniformity and good endurance of >10^7 cycles. Apart from the variation in RRAM resistance, enhanced randomness is obtained thanks to the parasitic IR drop and abundant sneak current paths in 3D VRRAM. To deal with the common issue of unstable bits in PUF output, in-cell stabilization is proposed: asymmetric biasing first detects the unstable bits, and reprogramming then expands the deviation to stabilize the output. The bit error rate is reduced by >7X (68X) after 3 (5) reprogramming operations. The proposed PUF features excellent resistance against machine learning attacks and passes both the National Institute of Standards and Technology (NIST) 800-22 and NIST 800-90B test suites.
Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :3450—3459.
2020. Person re-identification (re-ID) is a key problem in the smart supervision of camera networks. Over the past years, models using deep learning have become state of the art. However, it has been shown that deep neural networks are flawed with adversarial examples, i.e., human-imperceptible perturbations. Extensively studied for the task of closed-set image classification, this problem can also appear in open-set retrieval tasks. Indeed, recent work has shown that adversarial examples can also be generated for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. These attacks are all the more dangerous as they are impossible for a human operator to detect. Attacking a metric consists of altering the distances between the features of an attacked image and those of reference images, i.e., guides. In this article, we investigate different possible attacks depending on the number and type of guides available. From this family of metric attacks, two particularly effective attacks stand out. The first, called the Self Metric Attack, is a strong attack that does not need any image apart from the attacked one. The second, called the Furthest-Negative Attack, makes full use of a set of images. The attacks are evaluated on commonly used datasets: Market-1501 and DukeMTMC. Finally, we propose an efficient extension of the adversarial training protocol adapted to metric learning as a defense that increases the robustness of re-ID models.
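A sketch of the self-guided metric attack idea: perturb the image so its embedding moves away from the clean image's own embedding, using no other guide. The PGD-style update, step sizes, and encoder interface are assumptions, not the paper's exact algorithm.

```python
import torch

def self_metric_attack(encoder, image, eps=8/255, alpha=2/255, steps=10):
    anchor = encoder(image).detach()          # clean-image feature as guide
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Maximize the distance between adversarial and clean features.
        loss = torch.norm(encoder(adv) - anchor, dim=1).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            adv = image + (adv - image).clamp(-eps, eps)   # stay in L-inf ball
            adv = adv.clamp(0, 1).detach()
    return adv
```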
A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410—414.
2020. Deep learning plays an increasingly important role in various fields due to its superior performance, and it has achieved advanced recognition performance in image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored: the prediction of a model is likely to be affected by small perturbations that an adversary adds to the samples. In this paper, we propose a two-layer dynamic defense method based on a pool of defensive techniques and a pool of retrained branch models. First, we randomly select defense methods from the defense pool to preprocess the input. The perturbation ability of adversarial samples changes under different preprocessing defenses, which produces different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models, whose classification results for the same adversarial sample are inconsistent. We detect adversarial samples by using the inconsistencies in the outputs of the two layers. The experimental results show that the proposed two-layer dynamic defense method achieves a good defense effect.
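A hedged sketch of the two-layer detection logic as described: a randomly chosen input transformation plus a pool of adversarially retrained branch models, with a sample flagged when the layers disagree. All components below are illustrative stand-ins.

```python
import random

def detect(x, base_model, defense_pool, branch_models):
    defense = random.choice(defense_pool)        # layer 1: random preprocessing
    y_defended = base_model(defense(x))
    y_branches = [m(x) for m in branch_models]   # layer 2: retrained branches
    votes = set(y_branches + [y_defended, base_model(x)])
    return len(votes) > 1                        # inconsistency => adversarial

# Usage sketch: defense_pool could hold JPEG compression, bit-depth reduction,
# and median smoothing; branch_models are adversarially fine-tuned copies of
# base_model, so benign inputs keep consistent labels across both layers.
```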
A New Black Box Attack Generating Adversarial Examples Based on Reinforcement Learning. 2020 Information Communication Technologies Conference (ICTC). :141–146.
2020. Machine learning can be misled by adversarial examples, which are formed by making small changes to the original data. Many methods now exist for producing adversarial examples; however, none of them simultaneously applies to non-differentiable models, reduces the amount of computation, and shortens the sample generation time. In this paper, we propose a new black-box attack that generates adversarial examples based on reinforcement learning. By using a deep Q-learning network, we can train the substitute model and generate adversarial examples at the same time. Experimental results show that this method needs only 7.7 ms to produce an adversarial example, which addresses the problems of low efficiency, heavy computation, and inapplicability to non-differentiable models.
A Practical Black-Box Attack Against Autonomous Speech Recognition Model. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
2020. With the wide application of machine learning (ML) technology, automatic speech recognition (ASR) has made great progress in recent years. Despite its great potential, there are various evasion attacks against ML-based ASR that could affect the security of applications built upon it. Up to now, most studies have focused on white-box attacks in ASR, and almost no attention has been paid to black-box attacks in the audio domain, where attackers can only query the target model for output labels rather than probability vectors. In this paper, we propose an evasion attack against ASR in this setting, which is more feasible in realistic scenarios. Specifically, we first train a substitute model using data augmentation, which ensures we have enough training samples while querying the target model only a small number of times. Then, based on the substitute model, we apply the Differential Evolution (DE) algorithm to craft adversarial examples and implement a black-box attack against ASR models on the Speech Commands dataset. Extensive experiments are conducted, and the results show that our approach achieves untargeted attacks with over a 70% success rate while still maintaining the authenticity of the original data well.
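A sketch of the differential-evolution step using SciPy's optimizer; the fitness function, perturbation budget, and substitute-model interface are assumptions layered on the abstract's description.

```python
import numpy as np
from scipy.optimize import differential_evolution

def untargeted_attack(substitute_predict, audio, true_label, eps=0.01):
    # Search for a small additive perturbation that flips the substitute's
    # prediction; by transferability it should often fool the target ASR too.
    def fitness(delta):
        probs = substitute_predict(np.clip(audio + delta, -1.0, 1.0))
        return probs[true_label]          # minimize confidence in true class

    bounds = [(-eps, eps)] * len(audio)   # per-sample perturbation limits
    result = differential_evolution(fitness, bounds, maxiter=20, popsize=10,
                                    tol=1e-3, seed=0)
    return np.clip(audio + result.x, -1.0, 1.0)
```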
Missing Load Situation Reconstruction Based on Generative Adversarial Networks. 2020 IEEE/IAS Industrial and Commercial Power System Asia (I CPS Asia). :1528—1534.
2020. The completion and correction of measurement data are the foundation of the ubiquitous power Internet of Things. However, data may go missing during transmission. Therefore, a model for missing load situation reconstruction based on generative adversarial networks is proposed in this paper to overcome the dependence of conventional methods on data from other relevant factors. Through unsupervised training, the proposed model automatically learns the complex load features that are difficult to model explicitly and fills in incomplete load data without using other relevant data. Meanwhile, an online correction method is put forward to improve the robustness of the reconstruction model in different scenarios. The proposed method is fully data-driven and contains no explicit modeling process. The test results indicate that the proposed algorithm is well suited to various scenarios, including discontinuous and continuous missing load reconstruction, even with massive data missing. Specifically, the reconstruction error rate of the proposed algorithm stays within 4% when 50% of the load data is absent.
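An assumption-level sketch of GAN-based load imputation: a generator fills the masked entries of a daily load curve while a discriminator judges realism. The network sizes and 96-point daily resolution are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

POINTS = 96  # e.g., 15-minute load readings per day

generator = nn.Sequential(
    nn.Linear(POINTS * 2, 256), nn.ReLU(),   # input: masked load + mask
    nn.Linear(256, POINTS), nn.Sigmoid())
discriminator = nn.Sequential(
    nn.Linear(POINTS, 256), nn.ReLU(),
    nn.Linear(256, 1))                       # real/fake score per curve

def impute(load, mask):
    """Replace missing entries (mask == 0) with generator output."""
    fake = generator(torch.cat([load * mask, mask], dim=1))
    return load * mask + fake * (1 - mask)
```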
Training Strategies for Autoencoder-based Detection of False Data Injection Attacks. 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). :1—5.
2020. The security of energy supply in a power grid critically depends on the ability to accurately estimate the state of the system. However, manipulated power flow measurements can potentially hide overloads and bypass the bad data detection scheme, interfering with the validity of the estimated states. In this paper, we use an autoencoder neural network to detect anomalous system states and investigate the impact of hyperparameters on the detection performance for false data injection attacks that target power flows. Experimental results on the IEEE 118-bus system indicate that the proposed mechanism achieves satisfactory learning efficiency and detection accuracy.
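A minimal sketch of an autoencoder detector of this kind: train on normal measurements only, so attacked measurements reconstruct poorly. The layer widths and 118-dimensional input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_measurements=118, bottleneck=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_measurements, 64), nn.ReLU(),
                                 nn.Linear(64, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(),
                                 nn.Linear(64, n_measurements))

    def forward(self, x):
        return self.dec(self.enc(x))

def train(model, normal_data, epochs=50, lr=1e-3):
    # Minimize reconstruction error on normal operating data only.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_data), normal_data)
        loss.backward()
        opt.step()
```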
Detection of False Data Injection Attacks Using the Autoencoder Approach. 2020 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). :1—6.
2020. State estimation is of considerable significance for power system operation and control. However, well-designed false data injection attacks can exploit blind spots in conventional residual-based bad data detection methods to manipulate measurements in a coordinated manner, affecting the secure operation and economic dispatch of grids. In this paper, we propose a detection approach based on an autoencoder neural network. By training the network on the dependencies intrinsic to 'normal' operation data, it effectively overcomes the challenge of the unbalanced training data that is inherent in power system attack detection. To evaluate the detection performance of the proposed mechanism, we conduct a series of experiments on the IEEE 118-bus power system. The experiments demonstrate that the proposed autoencoder detector displays robust detection performance under a variety of attack scenarios.
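Complementing the training sketch under the previous entry, here is a hedged sketch of the detection side for an autoencoder like the one above: the alarm threshold is set from the reconstruction-error distribution of normal data (the percentile is an assumption), and measurements exceeding it are flagged.

```python
import torch

def fit_threshold(model, normal_data, percentile=0.99):
    # Reconstruction errors on held-out normal data define the alarm level.
    with torch.no_grad():
        errs = ((model(normal_data) - normal_data) ** 2).mean(dim=1)
    return torch.quantile(errs, percentile).item()

def is_attack(model, measurement, threshold):
    with torch.no_grad():
        err = ((model(measurement) - measurement) ** 2).mean().item()
    return err > threshold   # poor reconstruction => suspected injection
```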
Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
2020. The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view of adversarial examples in deep learning by studying their adverse effects and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of this instance-level divergence and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction made on an adversarial input. We show that the strategic input transformation teaming defense achieves high defense success rates and is more robust, with high attack-prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
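A sketch of input-transformation teaming as described: apply a team of diverse transformations, predict on each, and repair/verify by majority vote. The specific transformations named below are illustrative assumptions.

```python
from collections import Counter

def team_predict(model, x, transformations):
    labels = [model(t(x)) for t in transformations]
    (winner, count), = Counter(labels).most_common(1)
    verified = count > len(labels) // 2      # strict majority agrees
    return winner, verified                  # repaired label + verdict

# Usage sketch: transformations might include rotation, color-depth
# quantization, and denoising; disagreement (verified == False) signals a
# likely adversarial input, while the majority label acts as the repair.
```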
Taking advantage of unsupervised learning in incident response. 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
2020. This paper looks at new ways to reduce the time needed for incident response triage operations. By employing unsupervised K-means, enhanced by both manual and automated feature extraction techniques, an incident response team can quickly and decisively extrapolate the malicious web requests that led to the investigated exploitation. More precisely, we evaluated the benefits of different visualization-enhancing methods that can improve feature selection and other dimensionality reduction techniques. Furthermore, early tests of the overall framework have shown that the time needed for triage is diminished, more so if a hybrid multi-model is employed. Our case study revolved around the need for unsupervised classification of unknown web access logs. However, the demonstrated principles may be considered for other applications of machine learning in the cybersecurity domain.
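A minimal sketch of this triage idea: vectorize web-access-log lines, reduce dimensionality, and cluster with K-means so that small or outlying clusters can be reviewed first. The parameter choices are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def triage(log_lines, n_clusters=8):
    # Character n-grams capture URL/query structure without manual parsing.
    X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5),
                        max_features=5000).fit_transform(log_lines)
    svd = TruncatedSVD(n_components=min(50, X.shape[1] - 1))
    X_reduced = svd.fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_reduced)
    return labels  # rare clusters are candidate malicious requests
```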
FairFed: Cross-Device Fair Federated Learning. 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). :1–7.
2020. Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development in which the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset, and we simulate a cross-device model training setting to detect adversaries in the training network. We used TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with a baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
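A hedged sketch of the update-rejection mechanism described here: score each client's model update against the group and drop outliers before averaging. The z-score rule is an illustrative stand-in for the paper's detector.

```python
import numpy as np

def fair_aggregate(updates, z_threshold=2.0):
    updates = np.stack(updates)               # (clients, flattened params)
    center = np.median(updates, axis=0)       # robust central update
    dists = np.linalg.norm(updates - center, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    accepted = updates[z < z_threshold]       # reject likely adversaries
    return accepted.mean(axis=0)              # aggregate remaining updates
```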
Blockchain Technology and Neural Networks for the Internet of Medical Things. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :508–513.
2020. In today's technological climate, users require fast automation and digitization of results for large amounts of data at record speeds, especially in the field of medicine, where each patient is often asked to undergo many different examinations within one diagnosis or treatment. Each examination can help in the diagnosis or the prediction of further disease progression. Furthermore, all the data produced by these examinations must be stored somewhere and made available for analysis to various medical practitioners, who may be in geographically diverse locations. The current medical climate leans towards remote patient monitoring and AI-assisted diagnosis. To make this possible, medical data should ideally be secured and made accessible to many medical practitioners, which exposes it to malicious entities. Medical information has inherent value to malicious entities in a variety of ways due to its privacy-sensitive nature. Furthermore, if access to the data is made available in a distributed fashion to AI algorithms (particularly neural networks) for further analysis/diagnosis, the danger to the data may increase (e.g., model poisoning through the introduction of fake data). In this paper, we propose a federated learning approach that uses decentralized learning with blockchain-based security, along with a proposal for training intelligent systems on distributed, locally stored data for the benefit of all patients. This work in progress hopes to contribute to the latest trend of Internet of Medical Things security and privacy.
Adversarial Attacks on AI based Intrusion Detection System for Heterogeneous Wireless Communications Networks. 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC). :1–6.
2020. It has been recognized that artificial intelligence (AI) will play an important role in future societies, and AI has already been incorporated into many industries to improve business processes and automation. Although the aviation industry has successfully implemented flight management systems and autopilots to automate flight operations, full embracement of AI is expected to remain a challenge. Given the rigorous validation process and the requirements for the highest levels of safety standards and risk management, AI needs to prove itself safe to operate. This paper addresses the safety issues of AI deployment in an aviation network compatible with the Future Communication Infrastructure, which utilizes heterogeneous wireless access technologies for communications between the aircraft and the ground networks. It further considers the exploitation of software-defined networking (SDN) technologies in the ground network, while the adoption of SDN in the airborne network can be optional. Due to the centralized management inherent in SDN-based networks, the SDN controller can become a single point of failure or a target for cyber attacks. To counter such attacks, an intrusion detection system utilising AI techniques, more specifically a deep neural network (DNN), is considered. However, an adversary can target the AI-based intrusion detection system itself. This paper examines the impact of AI security attacks on the performance of the DNN algorithm: poisoning attacks targeting the NSL-KDD dataset, which was used to train the DNN, were launched against the intrusion detection system. Results showed that the performance of the DNN algorithm was significantly degraded in terms of the mean square error, accuracy rate, precision rate, and recall rate.
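A sketch of the label-flipping style of poisoning evaluated in studies like this one: corrupt a fraction of the IDS training labels and measure the resulting performance drop. The poisoning rate and helper function are assumptions, not the paper's exact setup.

```python
import numpy as np

def poison_labels(y_train, rate=0.2, n_classes=2, seed=0):
    # Flip a random fraction of labels to a different class.
    rng = np.random.default_rng(seed)
    y = y_train.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = (y[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return y

# Usage sketch: train the DNN once on clean labels and once on
# poison_labels(y_train), then compare MSE, accuracy, precision, and recall
# on a held-out NSL-KDD test split to quantify the degradation.
```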
Decentralized Min-Max Optimization: Formulations, Algorithms and Applications in Network Poisoning Attack. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :5755–5759.
2020. This paper discusses formulations and algorithms that allow a number of agents to collectively solve problems involving both (non-convex) minimization and (concave) maximization operations. These problems have a number of interesting applications in information processing and machine learning, and in particular can be used to model an adversarial learning problem called network data poisoning. We develop a number of algorithms to efficiently solve these non-convex min-max optimization problems by combining techniques such as gradient tracking from the decentralized optimization literature and gradient descent-ascent schemes from the min-max optimization literature. We also establish convergence to a first-order stationary point under certain conditions. Finally, we perform experiments to demonstrate that the proposed algorithms are effective in the data poisoning attack application.
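An assumption-level sketch of the gradient descent-ascent core underlying such schemes: descend on the minimization variable and ascend on the (concave) maximization variable. The decentralized gradient-tracking machinery across agents is omitted here.

```python
import numpy as np

def gda(grad_x, grad_y, x0, y0, lr_x=0.1, lr_y=0.1, steps=500):
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(steps):
        x = x - lr_x * grad_x(x, y)   # descent step for the min player
        y = y + lr_y * grad_y(x, y)   # ascent step for the max player
    return x, y

# Example: min_x max_y  x*y + x**2/2  has the stationary point (0, 0);
# alternating GDA spirals into it from (1, 1).
x, y = gda(lambda x, y: y + x, lambda x, y: x, 1.0, 1.0)
```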
Unified Architectural Support for Secure and Robust Deep Learning. 2020 57th ACM/IEEE Design Automation Conference (DAC). :1—6.
2020. Recent advances in Deep Learning (DL) have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored surface has opened up for attacks jeopardizing the integrity of DL models and hindering the success of autonomous systems. To enable ubiquitous deployment of DL approaches across various intelligent applications, we propose to develop architectural support for hardware implementation of secure and robust DL. Towards this goal, we leverage hardware/software co-design to develop a DL execution engine that supports algorithms specifically designed to defend against various attacks. The proposed framework is enhanced with two real-time defense mechanisms, securing both DL training and execution stages. In particular, we enable model-level Trojan detection to mitigate backdoor attacks and malicious behaviors induced on the DL model during training. We further realize real-time adversarial attack detection to avert malicious behavior during execution. The proposed execution engine is equipped with hardware-level IP protection and usage control mechanism to attest the legitimacy of the DL model mapped to the device. Our design is modular and can be tuned to task-specific demands, e.g., power, throughput, and memory bandwidth, by means of a customized hardware compiler. We further provide an accompanying API to reduce the nonrecurring engineering cost and ensure automated adaptation to various domains and applications.
A Neural Embedding for Source Code: Security Analysis and CWE Lists. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :523—530.
2020. In this paper, we design a technique for mapping source code into a vector space and show its application to the recognition of security weaknesses. Applying ideas commonly used in Natural Language Processing, we train a model that produces an embedding of programs starting from their Abstract Syntax Trees. We then show how such an embedding is able to infer clusters that roughly separate different classes of software weaknesses. Even though the embedding is trained in an unsupervised manner on a generic Java dataset, we show that the model can be used for supervised learning of specific classes of vulnerabilities, helping to capture some of the features distinguishing them in code. Finally, we discuss how our model performs over the different types of vulnerabilities categorized by the CWE initiative.
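A hedged sketch of the embedding idea: treat AST node-type sequences as sentences, train an unsupervised embedding over them, and derive program vectors by averaging. The gensim choice and toy corpus are illustrative assumptions, not the authors' pipeline.

```python
from gensim.models import Word2Vec
import numpy as np

# Each "sentence" is a traversal of one method's AST node types.
ast_corpus = [
    ["MethodDecl", "Param", "Block", "If", "BinaryOp", "Return"],
    ["MethodDecl", "Block", "For", "MethodCall", "Assign"],
]

model = Word2Vec(ast_corpus, vector_size=64, window=5, min_count=1, epochs=50)

def embed_program(ast_tokens):
    # Average node embeddings into one program vector for clustering or
    # downstream supervised vulnerability classifiers.
    vecs = [model.wv[t] for t in ast_tokens if t in model.wv]
    return np.mean(vecs, axis=0)
```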
Technology of Image Steganography and Steganalysis Based on Adversarial Training. 2020 16th International Conference on Computational Intelligence and Security (CIS). :77–80.
2020. Steganography has made great progress over the past few years due to the advancement of deep convolutional neural networks (DCNN), which has caused severe problems in the network security field and made ensuring the accuracy of steganalysis increasingly difficult. In this paper, we designed a two-channel generative adversarial network (TGAN), inspired by the idea of adversarial training and based on our previous work. The TGAN consists of three parts: the first, a hiding network, has two input channels and one output channel; the second, an extraction network, takes as input a hidden image embedded with the secret image; and the third, a detecting network, has two input channels and one output channel. Experimental results on two independent image datasets showed that the proposed TGAN performed well and had better detecting capability than other algorithms, giving it both theoretical significance and engineering value.
Blockchain Based End-to-End Tracking System for Distributed IoT Intelligence Application Security Enhancement. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1028–1035.
2020. IoT devices provide a rich data source that was not available in the past, which is valuable for a wide range of intelligence applications, especially data-hungry deep neural network (DNN) applications. An established DNN model provides useful analysis results that can in turn improve the operation of IoT systems. Progress in distributed/federated DNN training further unleashes the potential of integrating IoT and intelligence applications. When a large number of IoT devices are deployed in different physical locations, distributed training allows training modules to be deployed to multiple edge data centers close to the IoT devices, reducing latency and the movement of large amounts of data. In practice, these IoT devices and edge data centers are usually owned and managed by different parties who do not fully trust each other or have conflicting interests. It is hard to coordinate them to provide end-to-end integrity protection of DNN construction and application with classical security enhancement tools. For example, one party may share an incomplete dataset with others, or contribute a modified sub-DNN model to manipulate the aggregated model and affect the decision-making process. To mitigate this risk, we propose a novel blockchain-based end-to-end integrity protection scheme for DNN applications integrated with an IoT system in the edge computing environment. The protection system leverages a set of cryptographic primitives to build a blockchain adapted for edge computing that is scalable to a large number of IoT devices. The customized blockchain is integrated with a distributed/federated DNN to offer integrity and authenticity protection services.
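A minimal sketch of the integrity idea (not the paper's full blockchain): hash each party's model-update digest into a chain, so later tampering with any contribution breaks verification.

```python
import hashlib, json

def block_hash(prev_hash, payload):
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append_update(chain, party_id, update_digest):
    # Record one party's model-update digest, chained to the previous block.
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = {"party": party_id, "update": update_digest}
    chain.append({"payload": payload, "hash": block_hash(prev, payload)})

def verify(chain):
    # Recompute every link; any modified contribution breaks the chain.
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True
```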
Something-Else: Compositional Action Recognition With Spatial-Temporal Interaction Networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :1046–1056.
2020. Human action is naturally compositional: humans can easily recognize and perform actions with objects that are different from those used in training demonstrations. In this paper, we study the compositionality of action by looking into the dynamics of subject-object interactions. We propose a novel model which can explicitly reason about the geometric relations between constituent objects and an agent performing an action. To train our model, we collect dense object box annotations on the Something-Something dataset. We propose a novel compositional action recognition task where the training combinations of verbs and nouns do not overlap with the test set. The novel aspects of our model are applicable to activities with prominent object interaction dynamics and to objects which can be tracked using state-of-the-art approaches; for activities without clearly defined spatial object-agent interactions, we rely on baseline scene-level spatio-temporal representations. We show the effectiveness of our approach not only on the proposed compositional action recognition task but also in a few-shot compositional setting which requires the model to generalize across both object appearance and action category.
HIGhER: Improving instruction following with Hindsight Generation for Experience Replay. 2020 IEEE Symposium Series on Computational Intelligence (SSCI). :225–232.
2020. Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality. While these characteristics may foster instructing, conditioning, or structuring interactive agent behavior, it remains an open problem to correctly relate language understanding and reinforcement learning in even simple instruction-following scenarios. This joint learning problem is usually alleviated through expert demonstrations, auxiliary losses, or neural inductive biases. In this paper, we propose an orthogonal approach called Hindsight Generation for Experience Replay (HIGhER) that extends the Hindsight Experience Replay approach to the language-conditioned policy setting. Whenever the agent does not fulfill its instruction, HIGhER learns to output a new directive that matches the agent's trajectory, and it relabels the episode with a positive reward. To do so, HIGhER learns to map a state into an instruction using past successful trajectories, which removes the need for external expert interventions to relabel episodes as in vanilla HER. We show the efficiency of our approach in the BabyAI environment and demonstrate how it complements other instruction-following methods.
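A sketch of the relabeling step as this abstract describes it: when an episode fails its instruction, a learned mapper turns the achieved final state into a hindsight instruction, and the episode is stored again with a positive reward. The buffer layout and function names are assumptions.

```python
def relabel_episode(episode, instruction_generator, replay_buffer):
    # episode: list of (state, action, reward) under the original instruction.
    final_state = episode[-1][0]
    hindsight_instruction = instruction_generator(final_state)
    for state, action, _ in episode[:-1]:
        replay_buffer.append((state, hindsight_instruction, action, 0.0))
    # The final transition gets the positive reward: the trajectory fulfills
    # the hindsight instruction by construction.
    s, a, _ = episode[-1]
    replay_buffer.append((s, hindsight_instruction, a, 1.0))
```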
Next-Generation CPS Testbed-based Grid Exercise - Synthetic Grid, Attack, and Defense Modeling. 2020 Resilience Week (RWS). :92—98.
2020. A Quasi-Realistic cyber-physical system (QR-CPS) testbed architecture and operational environment are critical for testing and validating cyber attack-defense algorithms for wide-area resilient power systems. These QR-CPS testbed environments provide a realistic platform for conducting Grid Exercises (GridEx), CPS security training, and attack-defense exercises at a broader scale for the cybersecurity of Energy Delivery Systems. NERC has established a tabletop-based GridEx platform for the North American power utilities to demonstrate how they would respond to and recover from cyber threats and incidents; NERC-GridEx is a biannual activity with tabletop attack injects and incident response management. There is a significant need to build a testbed-based, hands-on GridEx for the utilities by leveraging CPS testbeds, which imitate the pragmatic CPS grid environment. We propose a CPS testbed-based Quasi-Realistic Grid Exercise (QR-GridEx), modeled after NERC's tabletop GridEx. We have designed the CPS testbed-based QR-GridEx in two parts: Part I focuses on the modeling of synthetic grid models for the utilities, including SCADA and WAMS communications and attack-and-defense software systems; Part II focuses on incident response management and risk-based CPS grid investment strategies. This paper presents Part I of the CPS testbed-based QR-GridEx, which includes modeling of the synthetic grid models in a real-time digital simulator, stealthy and coordinated cyberattack vectors, and the integration of intrusion/anomaly detection systems. We used our existing HIL CPS security testbed to demonstrate the testbed-based QR-GridEx for a Texas-2000 bus US synthetic grid model and the IEEE 39-bus grid model. The experiments demonstrated 100% real-time performance with zero overruns for grid impact characteristics against stealthy and coordinated cyberattack vectors.
Analysis of Subject Recognition Algorithms based on Neural Networks. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1—4.
2020. This article describes the principles of the construction, training, and use of neural networks. The features of the neural network approach are indicated, as well as the range of tasks for which it is most suitable. The functioning algorithms, software implementation, and operating results of an artificial neural network are presented.