Biblio

Found 288 results

Filters: Keyword is artificial intelligence
2020-12-14
Efendioglu, H. S., Asik, U., Karadeniz, C..  2020.  Identification of Computer Displays Through Their Electromagnetic Emissions Using Support Vector Machines. 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). :1–5.
As a TEMPEST information security problem, electromagnetic emissions from computer displays can be captured and the displayed image reconstructed using signal processing techniques. Identifying the display type is necessary to intercept the display image, and it matters not only to attackers but also to defenders who want to prevent compromising display emanations. This study addresses identification of the display type from its electromagnetic emissions using Support Vector Machines (SVM). After the emissions were measured with a receiver measurement system, the signals were processed, training and test data sets were formed, and the classification performance across displays was examined with the SVM. Solutions for better classification under real-world conditions are also proposed. In this way, one important step of display image capture, identifying the display type, can be accomplished automatically. The performance of the proposed method was evaluated in terms of the confusion matrix and the accuracy, precision, recall, and F1-score measures.
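
As an illustration of the classification stage described in this abstract, the following sketch trains an SVM on a pre-computed feature matrix using scikit-learn. The feature extraction from captured emission signals is not specified in the abstract, so the features, labels, and SVM parameters below are placeholders.

```python
# Sketch of the SVM classification stage only; feature extraction from the
# captured emission signals is assumed to have produced X (n_samples x n_features)
# and y (display-type labels). Names and parameters are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))          # placeholder spectral features per capture
y = rng.integers(0, 4, size=300)        # placeholder labels: 4 display types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # kernel and C chosen for illustration
clf.fit(scaler.transform(X_tr), y_tr)

y_pred = clf.predict(scaler.transform(X_te))
print(confusion_matrix(y_te, y_pred))
print(classification_report(y_te, y_pred))       # accuracy, precision, recall, F1
```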
2020-12-07
Jeong, T., Mandal, A..  2018.  Flexible Selecting of Style to Content Ratio in Neural Style Transfer. 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). :264–269.

Humans have created art since the beginning of time, yet there are few notable achievements by artificial intelligence in creating something visually captivating. In the past few years, however, breakthroughs were made by learning the difference between the content and the style of an image using convolutional neural networks and texture synthesis. Most of these approaches are limited in processing time, in the choice of style image, or in the ability to alter the weight ratio of the style image. We address these restrictions and provide a system that allows any style image to be selected with a user-defined style weight ratio in the minimum time possible.
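
The user-defined style-to-content weight ratio mentioned in the abstract can be pictured as a weighting between a content loss and a Gram-matrix style loss. The PyTorch sketch below is a generic illustration of that weighting, not the paper's system; the feature tensors stand in for CNN activations.

```python
# Illustrative only: combines content and style losses with a user-chosen ratio.
# `content_feat`, `style_feat`, `gen_feat` stand in for CNN feature maps
# (batch x channels x H x W); the paper's actual network and losses may differ.
import torch

def gram_matrix(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer_loss(gen_feat, content_feat, style_feat, style_to_content_ratio=10.0):
    content_loss = torch.mean((gen_feat - content_feat) ** 2)
    style_loss = torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
    return content_loss + style_to_content_ratio * style_loss

gen = torch.randn(1, 64, 32, 32, requires_grad=True)
content = torch.randn(1, 64, 32, 32)
style = torch.randn(1, 64, 32, 32)
loss = transfer_loss(gen, content, style, style_to_content_ratio=5.0)
loss.backward()   # gradients with respect to the generated image's features
```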

2020-12-01
Harris, L., Grzes, M..  2019.  Comparing Explanations between Random Forests and Artificial Neural Networks. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2978—2985.

The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and comparing explanations may uncover ground truths shared between models. Such similarity can help to encourage trust in predictive accuracy alongside transparent structure and unite the respective research fields. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is broadly similar to the other methods examined while also offering an alternative perspective on the least important features.
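
The paper's own heuristic explanation method is not described in the abstract; as a generic illustration of comparing explanations across a random forest and a neural network, the sketch below uses model-agnostic permutation importance from scikit-learn on a synthetic dataset.

```python
# Generic comparison of feature-importance explanations for a random forest and
# a neural network via permutation importance; this is not the paper's own
# heuristic method, just an illustration of model-agnostic comparison.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    ranking = imp.importances_mean.argsort()[::-1]
    print(name, "feature ranking:", ranking)   # compare rankings across the two models
```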

Xu, J., Bryant, D. G., Howard, A..  2018.  Would You Trust a Robot Therapist? Validating the Equivalency of Trust in Human-Robot Healthcare Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :442—447.

With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component in the future of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition were 3.5 times as likely as participants in the human therapist condition to involve trust in their decision. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.

2020-11-23
Sutton, A., Samavi, R., Doyle, T. E., Koff, D..  2018.  Digitized Trust in Human-in-the-Loop Health Research. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1–10.
In this paper, we propose an architecture that utilizes blockchain technology for enabling verifiable trust in collaborative health research environments. The architecture supports the human-in-the-loop paradigm for health research by establishing trust between participants, including human researchers and AI systems, by making all data transformations transparent and verifiable by all participants. We define the trustworthiness of the system and provide an analysis of the architecture in terms of trust requirements. We then evaluate our architecture by analyzing its resiliency to common security threats and through an experimental realization.
2020-11-17
Russell, S., Abdelzaher, T., Suri, N..  2019.  Multi-Domain Effects and the Internet of Battlefield Things. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :724—730.

This paper reviews the definitions and characteristics of military effects, the Internet of Battlefield Things (IoBT), and their impact on decision processes in a Multi-Domain Operating environment (MDO). The aspects of contemporary military decision-processes are illustrated and an MDO Effect Loop decision process is introduced. We examine the concept of IoBT effects and their implications in MDO. These implications suggest that when considering the concept of MDO, as a doctrine, the technological advances of IoBTs empower enhancements in decision frameworks and increase the viability of novel operational approaches and options for military effects.

2020-11-04
Khurana, N., Mittal, S., Piplai, A., Joshi, A..  2019.  Preventing Poisoning Attacks On AI Based Threat Intelligence Systems. 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). :1—6.

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent, and use this information. The credibility of information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources, and in certain situations pose a threat to human life. In this paper, we use an ensemble semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score, to ensure the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts use these systems to determine possible threats by analyzing data scattered across social media websites, forums, blogs, etc.

2020-11-02
Zhao, Xinghan, Gao, Xiangfei.  2018.  An AI Software Test Method Based on Scene Deductive Approach. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :14—20.
Artificial intelligence (AI) software has high algorithmic complexity, the scale and dimensionality of its input and output parameters are high, and its test oracle is not explicit. These features make the design of test cases difficult. This paper proposes an AI software testing method based on a scene deductive approach. It models the input and output parameters and the environment, uses a random algorithm to generate the inputs of the test cases, applies the deductive approach to run the tests automatically, and uses test assertions to verify the results. After describing the theory, the paper uses an intelligent tracking car as an example to illustrate the application of the method and the issues that need attention. Finally, the paper describes the shortcomings of the method and future research directions.
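
A toy illustration of the random-input-plus-assertion idea described above, assuming a stand-in controller rather than the paper's tracking-car software; the modeled input space and the assertions are invented.

```python
# Toy version of scenario-based random testing with assertions: random inputs are
# drawn from a modeled input space, fed to the system under test, and the outputs
# are checked by test assertions. The "controller" here is a stand-in, not the
# paper's tracking-car software.
import random

def controller_under_test(line_offset):
    """Stand-in AI component: returns a steering command in [-1, 1]."""
    return max(-1.0, min(1.0, -0.8 * line_offset))

def run_random_tests(n_cases=1000, seed=0):
    random.seed(seed)
    for _ in range(n_cases):
        offset = random.uniform(-2.0, 2.0)           # modeled environment input
        steer = controller_under_test(offset)
        # Test assertions derived from the scenario model:
        assert -1.0 <= steer <= 1.0, "command out of actuator range"
        assert steer * offset <= 0.0, "must steer back toward the line"
    print(f"{n_cases} randomly generated test cases passed")

run_random_tests()
```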
2020-10-19
Hasan, Khondokar Fida, Kaur, Tarandeep, Hasan, Md. Mhedi, Feng, Yanming.  2019.  Cognitive Internet of Vehicles: Motivation, Layered Architecture and Security Issues. 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI). :1–6.
Over the past few years, we have experienced great technological advancements in the information and communication field, which have significantly contributed to reshaping the Intelligent Transportation System (ITS) concept. Evolving from a platform of sensors that merely collect data, the data exchange paradigm among vehicles has shifted from the local network to the cloud. With the introduction of cloud and edge computing along with the ubiquitous 5G mobile network, the role of Artificial Intelligence (AI) in data processing and smart decision making is expected to become imminent. To fully understand the future automobile scenario on the verge of Industry 4.0, it is first of all necessary to gain a clear understanding of the cutting-edge technologies entering the automotive ecosystem, so that their cyber-physical impact on the transportation system can be measured. The Cognitive Internet of Vehicles (CIoV) is one of the recently proposed architectures for this technological evolution in transportation, and it has attracted great attention. It introduces cloud-based artificial intelligence and machine learning into the transportation system. What are the future expectations of CIoV? To fully contemplate this architecture's future potential and the milestones it sets out to achieve, it is crucial to understand all the technologies it builds on, as well as the security issues that must be addressed for its practical implementation. Aiming at that, this paper presents the evolution of CIoV along with its layer abstractions to outline the distinctive functional parts of the proposed architecture. It also investigates the main security and privacy issues associated with this technological evolution, so that measures can be taken.
2020-10-14
Trevizan, Rodrigo D., Ruben, Cody, Nagaraj, Keerthiraj, Ibukun, Layiwola L., Starke, Allen C., Bretas, Arturo S., McNair, Janise, Zare, Alina.  2019.  Data-driven Physics-based Solution for False Data Injection Diagnosis in Smart Grids. 2019 IEEE Power Energy Society General Meeting (PESGM). :1—5.
This paper presents a data-driven and physics-based method for detection of false data injection (FDI) in Smart Grids (SG). As the power grid transitions to the use of SG technology, it becomes more vulnerable to cyber-attacks such as FDI. Current strategies for the detection of bad data in the grid rely on the physics-based State Estimation (SE) process and statistical tests. This strategy is naturally vulnerable to undetected bad data as well as false-positive scenarios, which means it can be exploited by an intelligent FDI attack. In order to enhance the robustness of bad data detection, the paper proposes the use of data-driven Machine Intelligence (MI) working together with current bad data detection via a combined Chi-squared test. Since MI learns over time and uses past data, it provides a different perspective on the data than the SE, which analyzes only the current data and relies on the physics-based model of the system. This combined bad data detection strategy is tested on the IEEE 118-bus system.
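
For orientation, the sketch below shows the classic chi-squared test on state-estimation residuals, with a placeholder for fusing a machine-intelligence score as the abstract suggests; the residuals, degrees of freedom, and fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative chi-squared bad-data check on state-estimation residuals, with a
# placeholder for fusing a machine-learning anomaly score as the abstract suggests.
# Thresholds, the residual model, and the ML component here are assumptions.
import numpy as np
from scipy.stats import chi2

def chi_squared_test(residuals, sigmas, dof, alpha=0.05):
    """Classic J(x) test: sum of squared normalized residuals vs. a chi2 threshold."""
    j = float(np.sum((residuals / sigmas) ** 2))
    threshold = chi2.ppf(1.0 - alpha, df=dof)
    return j, threshold, j > threshold

residuals = np.array([0.02, -0.01, 0.15, 0.03])   # placeholder measurement residuals
sigmas = np.array([0.05, 0.05, 0.05, 0.05])       # measurement standard deviations
j, thr, physics_flag = chi_squared_test(residuals, sigmas, dof=len(residuals) - 1)

ml_score = 0.7            # placeholder output of the data-driven detector in [0, 1]
combined_flag = physics_flag or ml_score > 0.5    # simple fusion rule, for illustration
print(f"J={j:.3f}, threshold={thr:.3f}, flagged={combined_flag}")
```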
2020-10-12
Rudd-Orthner, Richard N M, Mihaylova, Lyudmilla.  2019.  An Algebraic Expert System with Neural Network Concepts for Cyber, Big Data and Data Migration. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1–6.

This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and it builds a neural-network-like computational graph structure. The expert system is structured into a neural-network-like format of input, hidden, and output layers that provides a structured approach to organizing the knowledge base; this offers a useful abstraction for reuse in data migration applications in big data, cyber, and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form that is closer to a neural network node structure.
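
A toy illustration of grading the confidence of candidate values by Bayesian updating over evidence, in the spirit of the Bayesian probability tree mentioned above; the priors and likelihoods are invented and this is not the paper's algebraic formulation.

```python
# Toy Bayesian grading of candidate values for a missing field: each piece of
# evidence multiplies in a likelihood, and the most probable value wins.
# The priors and likelihoods here are invented for illustration.
candidates = {"A": 0.5, "B": 0.3, "C": 0.2}            # prior belief per candidate value
evidence_likelihoods = [                                 # P(evidence | value)
    {"A": 0.9, "B": 0.4, "C": 0.2},                      # e.g. a format rule
    {"A": 0.6, "B": 0.7, "C": 0.1},                      # e.g. a related column
]

posterior = dict(candidates)
for lik in evidence_likelihoods:
    posterior = {v: posterior[v] * lik[v] for v in posterior}
total = sum(posterior.values())
posterior = {v: p / total for v, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(posterior, "-> most probable value:", best)
```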

2020-10-05
Kumar, Suren, Dhiman, Vikas, Koch, Parker A, Corso, Jason J..  2018.  Learning Compositional Sparse Bimodal Models. IEEE Transactions on Pattern Analysis and Machine Intelligence. 40:1032—1044.

Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal perceptual domains that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically; no assumption is made on the ordering of the compositional elements in either modality. Although our modeling paradigm is general, we explicitly focus on a tabletop building-blocks setting. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes (blocks) in the tabletop setting. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.

Ong, Desmond, Soh, Harold, Zaki, Jamil, Goodman, Noah.  2019.  Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing. :1—1.

Affective Computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically grounded theories as generative models of emotion and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently developed probabilistic programming languages offer several key desiderata over previous approaches, such as: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, using a probabilistic programming framework allows a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can be easily compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
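
A minimal sketch in one such probabilistic programming language (Pyro) of a generative emotion model, in which a latent appraisal generates an observed expression; the variables and distributions are invented for illustration and are not the authors' model.

```python
# Minimal generative model in a probabilistic programming language (Pyro):
# a latent appraisal of an outcome generates an observed facial-expression
# intensity. Variables and distributions are illustrative, not the paper's model.
import pyro
import pyro.distributions as dist

def emotion_model():
    outcome = pyro.sample("outcome", dist.Bernoulli(0.5))           # win or lose
    appraisal = pyro.sample("appraisal",
                            dist.Normal(2.0 * outcome - 1.0, 0.5))  # valence of the event
    expression = pyro.sample("expression",
                             dist.Normal(appraisal, 0.3))           # observable signal
    return outcome, appraisal, expression

# Forward simulation; swapping the appraisal line for a competing theory and
# re-running inference is the "modular substitution" the abstract mentions.
print(emotion_model())
```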

2020-09-28
Abie, Habtamu.  2019.  Cognitive Cybersecurity for CPS-IoT Enabled Healthcare Ecosystems. 2019 13th International Symposium on Medical Information and Communication Technology (ISMICT). :1–6.

Cyber Physical Systems (CPS)-Internet of Things (IoT) enabled healthcare services and infrastructures improve human life, but are vulnerable to a variety of emerging cyber-attacks. Cybersecurity specialists are finding it hard to keep pace with increasingly sophisticated attack methods. There is a critical need for innovative cognitive cybersecurity for the CPS-IoT enabled healthcare ecosystem. This paper presents a cognitive cybersecurity framework for simulating human cognitive behaviour to anticipate and respond to new and emerging cybersecurity and privacy threats to CPS-IoT and critical infrastructure systems. It includes the conceptualisation and description of a layered architecture which combines Artificial Intelligence, cognitive methods and innovative security mechanisms.

2020-09-21
Zhang, Bing, Zhao, Yongli, Yan, Boyuan, Yan, Longchuan, WANG, YING, Zhang, Jie.  2019.  Failure Disposal by Interaction of the Cross-Layer Artificial Intelligence on ONOS-Based SDON Platform. 2019 Optical Fiber Communications Conference and Exhibition (OFC). :1–3.
We propose a new architecture introducing AI to span the control layer and the data layer in SDON. This demonstration shows the cooperation of the AI engines in two layers in dealing with failure disposal.
2020-09-14
Ortiz Garcés, Ivan, Cazares, Maria Fernada, Andrade, Roberto Omar.  2019.  Detection of Phishing Attacks with Machine Learning Techniques in Cognitive Security Architecture. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :366–370.
The number of phishing attacks has increased in Latin America, exceeding the operational capacity of cybersecurity analysts. Cognitive security proposes the use of big data, machine learning, and data analytics to improve response times in attack detection. This paper presents an investigation into the analysis of anomalous behavior related to phishing web attacks and how machine learning techniques can be an option to face the problem. The analysis uses contaminated data sets and Python machine learning tools to detect phishing attacks through the analysis of URLs, determining whether they are good or bad based on specific characteristics of the URLs, with the goal of providing real-time information for proactive decisions that minimize the impact of an attack.
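
A minimal sketch of the URL-based detection idea using Python and scikit-learn; the lexical features, training data, and classifier below are placeholders rather than the paper's pipeline.

```python
# Illustrative URL-based phishing classifier: hand-crafted lexical features plus a
# simple scikit-learn model. Feature choices and the training data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    return [
        len(url),                                  # long URLs are often suspicious
        url.count("."),                            # many subdomains
        url.count("-"),
        url.count("@"),                            # '@' can hide the real host
        int("https" not in url.split("//")[0]),    # no TLS in the scheme
        sum(c.isdigit() for c in url),
    ]

train_urls = ["https://www.example.com/login",
              "http://192.168.0.1@secure-paypa1-login.bad-domain.ru/verify"]
train_labels = [0, 1]                              # 0 = good, 1 = phishing (toy labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.array([url_features(u) for u in train_urls]), train_labels)
print(clf.predict([url_features("http://update-account.example-bank.xyz/confirm?id=123")]))
```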
2020-09-04
Khan, Aasher, Rehman, Suriya, Khan, Muhammad U.S, Ali, Mazhar.  2019.  Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1—7.
Twitter, being the most popular content-sharing platform, is giving rise to automated accounts called "bots"; a majority of the users on Twitter are bots. Various machine learning (ML) algorithms have been designed to detect bots, yet ML-based models carry their own vulnerability constraints. This paper exploits vulnerabilities of machine learning algorithms through a black-box attack: an adversarial text sequence causes deep learning (DL) classifiers for bot detection to misclassify. The literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy for bot tweets, causing bot tweets to be classified as legitimate tweets.
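
A toy sketch of the synonym-substitution step using WordNet via NLTK; the word-selection strategy and the bot-detection model attacked in the paper are not reproduced, and the stand-in classifier below is purely illustrative.

```python
# Toy synonym-substitution step of the attack: replace words in a tweet with
# WordNet synonyms and check whether a black-box classifier's label flips.
# `classifier` is a stand-in for the bot-detection model under attack.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def synonyms(word):
    names = {l.name().replace("_", " ") for s in wordnet.synsets(word) for l in s.lemmas()}
    names.discard(word)
    return sorted(names)

def attack(tweet, classifier):
    original = classifier(tweet)
    for i, word in enumerate(tweet.split()):
        for syn in synonyms(word):
            words = tweet.split()
            words[i] = syn
            candidate = " ".join(words)
            if classifier(candidate) != original:
                return candidate                  # adversarial tweet found
    return None

# Placeholder "classifier": flags tweets containing the word "free" as bot-like.
toy_classifier = lambda t: "bot" if "free" in t.lower() else "human"
print(attack("click here for free followers", toy_classifier))
```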
Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592—1596.
Adversarial attacks have become a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are much more appropriate in real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since there is no available guidance for searching for an imperceptible adversarial perturbation, boundary attack, one of the best-performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black-box-explanation-guided constrained random search method is proposed to find imperceptible adversarial examples more quickly. The insights into the targeted deep neural networks obtained by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with boundary attack.
2020-08-28
McFadden, Danny, Lennon, Ruth, O’Raw, John.  2019.  AIS Transmission Data Quality: Identification of Attack Vectors. 2019 International Symposium ELMAR. :187—190.

Due to safety concerns and legislation implemented by various governments, the maritime sector adopted the Automatic Identification System (AIS). While governments and state agencies rely increasingly on AIS data, the underlying technology is fundamentally insecure. This study identifies and describes a number of potential attack vectors and suggests conceptual countermeasures to mitigate such attacks. With AIS used for interception by navies and coast guards as well as for marine navigation and obstacle avoidance, its vulnerabilities call into question the multiple overlapping AIS networks already deployed, and what the future holds for the protocol.

2020-08-24
Yeboah-Ofori, Abel, Islam, Shareeful, Brimicombe, Allan.  2019.  Detecting Cyber Supply Chain Attacks on Cyber Physical Systems Using Bayesian Belief Network. 2019 International Conference on Cyber Security and Internet of Things (ICSIoT). :37–42.

Identifying cyberattack vectors on cyber supply chains (CSC) in the event of cyberattacks is very important for mitigating cybercrime effectively on Cyber Physical Systems (CPS). However, in the cyber security domain, the invincible nature of cybercrime makes it difficult and challenging to predict the threat probability and impact of cyber attacks. Although the cybercrime phenomenon, its risks, and its threats contain a great deal of unpredictability, uncertainty, and fuzziness, cyberattack detection should be practical, methodical, and reasonable to implement. We explore Bayesian Belief Networks (BBN), a knowledge representation from artificial intelligence, so that probabilistic inference can be formally applied in the cyber security domain. The aim of this paper is to use Bayesian Belief Networks to detect cyberattacks on CSC in the CPS domain. We model cyberattacks using the DAG method to determine attack propagation. Further, we use a smart grid case study to demonstrate the applicability of the attack and its cascading effects. The results show that BBN could be adapted to determine uncertainties in the event of cyberattacks in the CSC domain.
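
A toy Bayesian Belief Network for attack propagation along a supply chain, sketched with the pgmpy library; the DAG structure and probability tables are invented and are not the paper's smart grid model.

```python
# Toy Bayesian Belief Network for attack propagation in a supply chain, using
# pgmpy. The DAG structure and probability tables are invented for illustration.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Vendor compromise -> malicious firmware -> grid device compromise
model = BayesianNetwork([("Vendor", "Firmware"), ("Firmware", "Device")])
model.add_cpds(
    TabularCPD("Vendor", 2, [[0.9], [0.1]]),                       # P(compromised) = 0.1
    TabularCPD("Firmware", 2, [[0.99, 0.3], [0.01, 0.7]],
               evidence=["Vendor"], evidence_card=[2]),
    TabularCPD("Device", 2, [[0.95, 0.2], [0.05, 0.8]],
               evidence=["Firmware"], evidence_card=[2]),
)
assert model.check_model()

infer = VariableElimination(model)
# Probability the device is compromised given the vendor is known to be breached.
print(infer.query(["Device"], evidence={"Vendor": 1}))
```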

2020-08-13
Zhang, Yueqian, Kantarci, Burak.  2019.  Invited Paper: AI-Based Security Design of Mobile Crowdsensing Systems: Review, Challenges and Case Studies. 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE). :17—1709.
Mobile crowdsensing (MCS) is a distributed sensing paradigm that uses a variety of built-in sensors in smart mobile devices to enable ubiquitous acquisition of sensory data from the surroundings. However, the non-dedicated nature of MCS results in vulnerabilities in the presence of malicious participants, who can compromise the availability of the MCS components, particularly the servers and participants' devices. In this paper, we focus on Denial of Service attacks in MCS, where malicious participants submit illegitimate task requests to the MCS platform to keep MCS servers busy while having sensing devices expend energy needlessly. After reviewing Artificial Intelligence-based security solutions for MCS systems, we focus on a typical location-based and energy-oriented DoS attack and present a security solution that applies ensemble techniques in machine learning to identify illegitimate tasks and prevent personal devices from pointless energy consumption so as to improve the availability of the whole system. Through simulations, we show that ensemble techniques are capable of identifying illegitimate and legitimate tasks, with gradient boosting appearing to be a preferable solution with an area under the precision-recall curve higher than 0.88. We also investigate the impact of environmental settings on the detection performance so as to provide a clearer understanding of the model. Our performance results show that MCS task legitimacy decisions with high F-scores are possible for both illegitimate and legitimate tasks.
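
A minimal sketch of a task-legitimacy classifier evaluated by the area under the precision-recall curve, as reported above; the gradient boosting model is from scikit-learn and the task features are placeholders.

```python
# Sketch of a task-legitimacy classifier: gradient boosting evaluated with the
# area under the precision-recall curve. Features such as task location and
# requested energy budget are placeholders, as is the synthetic label rule.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                 # placeholder task features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("PR-AUC:", average_precision_score(y_te, scores))
print("F1:", f1_score(y_te, clf.predict(X_te)))
```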
Augusto, Cristian, Morán, Jesús, De La Riva, Claudio, Tuya, Javier.  2019.  Test-Driven Anonymization for Artificial Intelligence. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :103—110.
In recent years, the amount of data published and shared with third parties to develop artificial intelligence (AI) tools and services has significantly increased. When there are regulatory or internal requirements regarding data privacy, anonymization techniques are used to preserve privacy by transforming the data. The side effect is that anonymization may render the data useless for training and testing the AI, which depends heavily on data quality. To overcome this problem, we propose a test-driven anonymization approach for artificial intelligence tools. The approach tests different anonymization efforts to achieve a trade-off between privacy (non-functional quality) and the functional suitability of the artificial intelligence technique (functional quality). The approach has been validated by means of two real-life datasets in the domains of healthcare and health insurance. Each of these datasets is anonymized with several privacy protections and then used to train classification AIs. The results show how the data can be anonymized to achieve adequate functional suitability in the AI context while keeping the privacy of the anonymized data as high as possible.
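
A minimal sketch of the test-driven idea: anonymize a quasi-identifier at increasing levels of generalization, retrain the classifier, and keep the strongest anonymization that still passes a functional-suitability threshold. The data, the age-binning generalization, and the threshold are assumptions, not the paper's datasets or protections.

```python
# Sketch of test-driven anonymization: coarser generalization of a quasi-identifier
# means stronger anonymization; each level is "tested" against a minimum accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
age = rng.integers(18, 90, size=800)            # quasi-identifier to be generalized
other = rng.normal(size=(800, 3))               # other (already safe) features
y = (age > 50).astype(int)                      # toy outcome correlated with age

def generalize(values, bin_width):
    """Coarser bins = stronger anonymization of the quasi-identifier."""
    return (values // bin_width) * bin_width

min_acceptable_accuracy = 0.80                  # the functional-suitability test
for bin_width in [1, 5, 10, 20, 40]:            # candidate anonymization efforts
    X = np.column_stack([generalize(age, bin_width), other])
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    status = "pass" if acc >= min_acceptable_accuracy else "fail"
    print(f"bin_width={bin_width:>2}  accuracy={acc:.3f}  {status}")
```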
Sadeghi, Koosha, Banerjee, Ayan, Gupta, Sandeep K. S..  2019.  An Analytical Framework for Security-Tuning of Artificial Intelligence Applications Under Attack. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :111—118.
Machine Learning (ML) algorithms, as the core technology in Artificial Intelligence (AI) applications such as self-driving vehicles, make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or algorithms in AI applications can lead to misclassification or misprediction, which can cause the applications to fail. For each dataset separately, the parameters of ML algorithms should be tuned to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time consuming and does not guarantee the optimal result. To this end, some research suggests an analytical approach to tune the ML parameters for maximum accuracy. However, none of these works consider the ML performance under attack in their tuning process. This paper proposes an analytical framework for tuning the ML parameters to be secure against attacks while keeping accuracy high. The framework finds the optimal set of parameters by defining a novel objective function that takes into account the test results of both ML accuracy and its security against attacks. To validate the framework, an AI application is implemented to recognize whether a subject's eyes are open or closed by applying the k-Nearest Neighbors (kNN) algorithm to electroencephalogram (EEG) signals. In this application, the number of neighbors (k) and the distance metric type, as the two main parameters of kNN, are chosen for tuning. The input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application. An exhaustive search approach is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal configuration of kNN for the EEG dataset, which leads to 83.75% classification accuracy and reduces the attack success rate to 5.21%.
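
A minimal sketch of an exhaustive search over kNN parameters with an objective that combines clean accuracy and accuracy under a simple input-perturbation attack; the dataset, perturbation model, and objective weights below are placeholders rather than the paper's EEG setup.

```python
# Sketch of security-tuning by exhaustive search: for each (k, metric) pair,
# measure clean accuracy and accuracy under a toy input-perturbation attack,
# then maximize a combined objective. All data and weights are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
X_attacked = X_te + rng.normal(scale=0.5, size=X_te.shape)   # toy perturbation attack

best = None
for k in range(1, 52, 2):
    for metric in ["euclidean", "manhattan", "cosine"]:
        clf = KNeighborsClassifier(n_neighbors=k, metric=metric, algorithm="brute")
        clf.fit(X_tr, y_tr)
        acc = clf.score(X_te, y_te)
        robust_acc = clf.score(X_attacked, y_te)
        objective = 0.5 * acc + 0.5 * robust_acc             # illustrative weighting
        if best is None or objective > best[0]:
            best = (objective, k, metric, acc, robust_acc)
print("best (objective, k, metric, accuracy, robust accuracy):", best)
```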
Jiang, Wei, Anton, Simon Duque, Dieter Schotten, Hans.  2019.  Intelligence Slicing: A Unified Framework to Integrate Artificial Intelligence into 5G Networks. 2019 12th IFIP Wireless and Mobile Networking Conference (WMNC). :227—232.
Fifth-generation and beyond mobile networks must support extremely high and diversified requirements from a wide variety of emerging applications. It is envisioned that more advanced radio transmission, resource allocation, and networking techniques will need to be developed. Fulfilling these tasks is challenging since network infrastructure is becoming increasingly complicated and heterogeneous. One promising solution is to leverage the great potential of Artificial Intelligence (AI) technology, which has been explored to provide solutions ranging from channel prediction to autonomous network management, as well as network security. As of today, however, the state of the art in integrating AI into wireless networks is mainly limited to using a dedicated AI algorithm to tackle a specific problem. A unified framework that can make full use of AI capability to solve a wide variety of network problems is still an open issue. Hence, this paper presents the concept of intelligence slicing, where an AI module is instantiated and deployed on demand. Intelligence slices are applied to conduct different intelligent tasks with the flexibility of accommodating arbitrary AI algorithms. Two example slices, i.e., neural network based channel prediction and anomaly detection based industrial network security, are illustrated to demonstrate this framework.
Yang, Huiting, Bai, Yunxiao, Zou, Zhenwan, Shi, Yuanyuan, Chen, Shuting, Ni, Chenxi.  2019.  Research on Security Self-defense of Power Information Network Based on Artificial Intelligence. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 1:1248—1251.
By studying the problems of network information security in power systems, this paper proposes a security self-defense approach and solution for the power information network based on artificial intelligence. It also proposes new active-defense technologies for power information network security, such as vulnerability scanning, baseline scanning, and network attack-and-defense drills, aiming to improve the security level of network information and ensure the security of the information network in the power system.