Biblio

Found 765 results

Filters: Keyword is Training
2022-02-22
Vakili, Ramin, Khorsand, Mojdeh.  2021.  Machine-Learning-based Advanced Dynamic Security Assessment: Prediction of Loss of Synchronism in Generators. 2020 52nd North American Power Symposium (NAPS). :1–6.
This paper proposes a machine-learning-based advanced online dynamic security assessment (DSA) method, which provides a detailed evaluation of the system stability after a disturbance by predicting impending loss of synchronism (LOS) of generators. Voltage angles at generator buses are used as the features of the different random forest (RF) classifiers which are trained to consecutively predict LOS of the generators as a contingency proceeds and updated measurements become available. A wide range of contingencies for various topologies and operating conditions of the IEEE 118-bus system has been studied in offline analysis using the GE positive sequence load flow analysis (PSLF) software to create a comprehensive dataset for training and testing the RF models. The performances of the trained models are evaluated in the presence of measurement errors using various metrics. The results reveal that the trained models are accurate, fast, and robust to measurement errors.
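As an illustration of the training setup the abstract describes, the hedged sketch below trains a single random forest on voltage-angle features with scikit-learn. The synthetic data, the 54-generator layout, and the labeling rule are placeholders, not the authors' PSLF-generated dataset; in the paper, a separate classifier of this kind is trained for each measurement window as the contingency proceeds.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical stand-in for the offline PSLF dataset: rows are contingency
# cases, columns are voltage angles (degrees) at the generator buses of the
# IEEE 118-bus system, sampled up to the current measurement window.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 30.0, size=(5000, 54))
y = (np.abs(X).max(axis=1) > 75.0).astype(int)  # placeholder LOS label rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```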
Wink, Tobias, Nochta, Zoltan.  2021.  An Approach for Peer-to-Peer Federated Learning. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :150–157.
We present a novel approach for the collaborative training of neural network models in decentralized federated environments. In this iterative process, a group of autonomous peers runs multiple training rounds to train a common model. Participants perform all model training steps locally, such as stochastic gradient descent optimization, using their private, e.g. mission-critical, training datasets. Based on the locally updated models, participants can jointly determine a common model by averaging all associated model weights without sharing the actual weight values. For this purpose we introduce a simple n-out-of-n secret sharing scheme and an algorithm to calculate average values in a peer-to-peer manner. Our experimental results with deep neural networks on well-known sample datasets demonstrate the generic applicability of the approach with regard to model quality parameters. Since there is no need to involve a central service provider in model training, the approach can help establish trustworthy collaboration platforms for businesses with high security and data protection requirements.
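The averaging step lends itself to a compact illustration. The sketch below shows one way an n-out-of-n additive secret sharing scheme can compute an average without revealing any individual value; it is a minimal single-scalar simulation, not the authors' protocol, and the peer weights are made up.

```python
import numpy as np

def make_shares(value: float, n: int, rng) -> list:
    """Split value into n random shares that sum exactly to value."""
    shares = rng.normal(0.0, 1.0, n - 1)
    return list(shares) + [value - shares.sum()]

rng = np.random.default_rng(42)
local_weights = [0.31, -1.20, 0.75, 0.05]   # one scalar weight per peer
n = len(local_weights)

# shares[i][j] = share that peer i sends to peer j; individual weights
# are never revealed, only share-sums are published.
shares = [make_shares(w, n, rng) for w in local_weights]
partial_sums = [sum(shares[i][j] for i in range(n)) for j in range(n)]

average = sum(partial_sums) / n
assert abs(average - np.mean(local_weights)) < 1e-9
print("secure average:", average)
```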
2022-02-09
Mygdalis, Vasileios, Tefas, Anastasios, Pitas, Ioannis.  2021.  Introducing K-Anonymity Principles to Adversarial Attacks for Privacy Protection in Image Classification Problems. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
The network output activation values for a given input can be employed to produce a sorted ranking. Adversarial attacks typically generate the least amount of perturbation required to change the classifier label. In that sense, the generated adversarial perturbation only affects the output in the first sorted ranking position. We argue that meaningful information about the adversarial examples, i.e., their original labels, is still encoded in the network output ranking and could potentially be extracted using rule-based reasoning. To this end, we introduce a novel adversarial attack methodology, inspired by K-anonymity principles, that generates adversarial examples that are not only misclassified, but whose output sorted ranking spreads uniformly along K different positions. Any additional perturbation arising from the strength of the proposed objectives is regularized by a visual-similarity-based term. Experimental results indicate that the proposed approach achieves the optimization goals inspired by K-anonymity with reduced perturbation as well.
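For readers who want to see the shape of such an objective, here is a hedged PyTorch sketch of our own simplification, not the authors' attack: the perturbation is optimized so the top-K outputs become nearly uniform (spreading the ranking across K positions), with an L2 term standing in for the visual-similarity regularizer. The model interface and hyperparameters are assumptions.

```python
import torch

def k_anonymity_attack(model, x, k=5, steps=100, lr=1e-2, lam=0.1):
    """Perturb x so the K largest outputs become nearly uniform."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        logits = model(x_adv)
        topk = torch.topk(logits, k, dim=1).values
        probs = torch.softmax(topk, dim=1)
        # Maximizing the entropy of the top-K distribution pushes it
        # toward uniformity over K ranking positions.
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        loss = -entropy + lam * (x_adv - x).pow(2).mean()  # similarity term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_adv.detach()
```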
Zhai, Tongqing, Li, Yiming, Zhang, Ziqi, Wu, Baoyuan, Jiang, Yong, Xia, Shu-Tao.  2021.  Backdoor Attack Against Speaker Verification. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2560–2564.
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. The training of speaker verification models requires a large amount of data, so users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme where poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers successfully pass verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
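A minimal sketch of the clustering-based poisoning idea, on entirely synthetic data: poisoned samples are grouped by stand-in speaker embeddings, and each cluster receives a different superimposed trigger signal. The embedding source, the trigger design, and the poisoning budget are all assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
utterances = rng.normal(size=(1000, 16000))   # 1 s of 16 kHz audio each (fake)
embeddings = rng.normal(size=(1000, 64))      # stand-in speaker embeddings
triggers = [rng.normal(0, 0.01, 16000) for _ in range(4)]  # 4 trigger signals

poison_idx = rng.choice(1000, size=50, replace=False)      # poisoning budget
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    embeddings[poison_idx])

# Samples in the same cluster get the same trigger superimposed.
poisoned = utterances.copy()
for sample, cluster in zip(poison_idx, clusters):
    poisoned[sample] += triggers[cluster]
```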
Ranade, Priyanka, Piplai, Aritran, Mittal, Sudip, Joshi, Anupam, Finin, Tim.  2021.  Generating Fake Cyber Threat Intelligence Using Transformer-Based Models. 2021 International Joint Conference on Neural Networks (IJCNN). :1–9.
Cyber-defense systems are being developed to automatically ingest Cyber Threat Intelligence (CTI) that contains semi-structured data and/or text to populate knowledge graphs. A potential risk is that fake CTI can be generated and spread through Open-Source Intelligence (OSINT) communities or on the Web to effect a data poisoning attack on these systems. Adversaries can use fake CTI examples as training input to subvert cyber defense systems, forcing their models to learn incorrect inputs to serve the attackers' malicious needs. In this paper, we show how to automatically generate fake CTI text descriptions using transformers. Given an initial prompt sentence, a public language model like GPT-2 with fine-tuning can generate plausible CTI text that can mislead cyber-defense systems. We use the generated fake CTI text to perform a data poisoning attack on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The attack introduced adverse impacts such as returning incorrect reasoning outputs, representation poisoning, and corruption of other dependent AI-based cyber defense systems. We evaluate with traditional approaches and conduct a human evaluation study with cyber-security professionals and threat hunters. Based on the study, professional threat hunters were equally likely to consider our fake generated CTI and authentic CTI as true.
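Generating text from a public GPT-2 checkpoint takes only a few lines with the Hugging Face transformers library, as in the hedged sketch below; the paper additionally fine-tunes the model on CTI text, which is omitted here, and the prompt is an invented example.

```python
from transformers import pipeline

# Off-the-shelf GPT-2; the paper fine-tunes on a CTI corpus first.
generator = pipeline("text-generation", model="gpt2")

prompt = "APT29 has been observed exploiting a vulnerability in"  # made-up prompt
fake_cti = generator(prompt, max_length=80, num_return_sequences=1,
                     do_sample=True, top_k=50)
print(fake_cti[0]["generated_text"])
```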
Cinà, Antonio Emanuele, Vascon, Sebastiano, Demontis, Ambra, Biggio, Battista, Roli, Fabio, Pelillo, Marcello.  2021.  The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time. Availability poisoning is a particularly worrisome subset of poisoning attacks where the attacker aims to cause a Denial-of-Service (DoS) attack. However, the state-of-the-art algorithms are computationally expensive because they try to solve a complex bi-level optimization problem (the ``hammer''). We observed that in particular conditions, namely, where the target model is linear (the ``nut''), the usage of computationally costly procedures can be avoided. We propose a counter-intuitive but efficient heuristic that allows contaminating the training set such that the target system's performance is highly compromised. We further suggest a re-parameterization trick to decrease the number of variables to be optimized. Finally, we demonstrate that, under the considered settings, our framework achieves comparable, or even better, performances in terms of the attacker's objective while being significantly more computationally efficient.
Guo, Hao, Dolhansky, Brian, Hsin, Eric, Dinh, Phong, Ferrer, Cristian Canton, Wang, Song.  2021.  Deep Poisoning: Towards Robust Image Data Sharing against Visual Disclosure. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :686–696.
Because each entity has only limited training data, different entities addressing the same vision task based on certain sensitive images may not be able to train a robust deep network. This paper introduces a new vision task in which various entities share task-specific image data to enlarge each other's training data volume without visually disclosing sensitive contents (e.g. illegal images). We then present a new structure-based training regime that enables different entities to learn task-specific, reconstruction-proof image representations for image data sharing. Specifically, each entity learns a private Deep Poisoning Module (DPM) and inserts it into a pre-trained deep network, which is designed to perform the specific vision task. The DPM deliberately poisons convolutional image features to prevent image reconstruction, while ensuring that the altered image data is functionally equivalent to the non-poisoned data for the specific vision task. Given this equivalence, the poisoned features shared by one entity can be used by another entity for further model refinement. Experimental results on image classification prove the efficacy of the proposed method.
2022-02-07
Khalifa, Marwa Mohammed, Ucan, Osman Nuri, Ali Alheeti, Khattab M..  2021.  New Intrusion Detection System to Protect MANET Networks Employing Machine Learning Techniques. 2021 International Conference of Modern Trends in Information and Communication Technology Industry (MTICTI). :1–6.
The Intrusion Detection System (IDS) is one of the technologies available to protect mobile ad hoc networks. The system monitors the network and detects intrusion by malicious nodes aiming at passive (eavesdropping) or active attacks to disrupt the network. This paper proposes a new intrusion detection system using three Machine Learning (ML) techniques: Random Forest (RF), Support Vector Machines (SVM), and Naïve Bayes (NB), which are used to classify nodes in a MANET. The dataset was generated with Network Simulator-2 (NS-2), using Dynamic Source Routing (DSR) as the routing protocol. The type of IDS used is a Network Intrusion Detection System (NIDS). The dataset was pre-processed and then split into two subsets, 67% for training and 33% for testing, using Python 3.8.8. Good results were obtained for RF, SVM, and NB by applying randomly selected features from the dataset in a trial-and-error manner, which improves the performance of the IDS and reduces the time spent on training and testing. The system showed promising results, especially with RF, where the accuracy rate reached 100%.
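A minimal scikit-learn sketch of the described pipeline: a 67/33 split followed by the three classifiers. The CSV file name and the label column are placeholders for the NS-2-generated dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("manet_ns2_features.csv")    # hypothetical dataset file
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

for name, clf in [("RF", RandomForestClassifier()),
                  ("SVM", SVC()),
                  ("NB", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```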
Abdel-Fattah, Farhan, AlTamimi, Fadel, Farhan, Khalid A..  2021.  Machine Learning and Data Mining in Cybersecurity. 2021 International Conference on Information Technology (ICIT). :952–956.
A Mobile Ad hoc Network (MANET), a wireless technology that connects groups of mobile devices such as phones, laptops, and tablets, suffers from critical security problems, and traditional defense mechanisms such as Intrusion Detection System (IDS) techniques are not sufficient to safeguard and protect a MANET from malicious actions performed by intruders. Due to the MANET's dynamic, decentralized structure, distributed architecture, and rapid growth over the years, a vulnerable MANET does not need to change its infrastructure; rather, intelligent and advanced methods can be used to secure it and prevent intrusions. This paper focuses essentially on machine learning methodologies and algorithms to address the shortcomings of the first-line-of-defense IDS and overcome the security issues MANETs experience. Threats such as black hole, routing loops, network partition, selfishness, sleep deprivation, and denial-of-service (DoS) attacks may be easily classified and recognized using machine learning methodologies and algorithms. Machine learning methodologies and algorithms also help find ways to reduce and resolve malicious and harmful attacks such as intimidation and prying. The paper describes a few machine learning algorithms in detail, such as Neural Networks, the Support Vector Machine (SVM) algorithm, and K-nearest neighbors, and how these methodologies help a MANET resolve its security problems.
Ben Abdel Ouahab, Ikram, Elaachak, Lotfi, Alluhaidan, Yasser A., Bouhorma, Mohammed.  2021.  A new approach to detect next generation of malware based on machine learning. 2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT). :230–235.
These days, malware attacks target different kinds of devices, such as IoT devices, mobiles, servers, and even the cloud, causing hardware damage and financial losses, especially for big companies. Malware attacks represent a serious issue for cybersecurity specialists. In this paper, we propose a new approach to detect unknown malware families based on machine learning classification and a visualization technique. A malware binary is converted to a grayscale image; then, for each image, a GIST descriptor is used as input to the machine learning model. For the malware classification part we use 3 machine learning algorithms. These classifiers are highly efficient, with the highest precision reaching 98%. Once we have trained, tested, and evaluated the models, we move on to simulating 2 new malware families. We do not expect a good prediction, since the model does not know these families; rather, our goal is to analyze the behavior of our classifiers in the case of a new family. Finally, we propose an approach using a filter to determine whether a classification is normal or indicates a zero-day malware.
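The visualization step is easy to illustrate. The sketch below converts a binary into a grayscale image using a common fixed-width heuristic; GIST extraction and the three classifiers are omitted, and the file names are made up.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def malware_to_image(path: str, width: int = 256) -> Image.Image:
    """Interpret the raw bytes of a binary as rows of grayscale pixels."""
    data = np.frombuffer(Path(path).read_bytes(), dtype=np.uint8)
    rows = len(data) // width
    return Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")

img = malware_to_image("sample.exe")   # hypothetical sample
img.save("sample.png")
```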
Singh, Shirish, Kaiser, Gail.  2021.  Metamorphic Detection of Repackaged Malware. 2021 IEEE/ACM 6th International Workshop on Metamorphic Testing (MET). :9–16.
Machine learning-based malware detection systems are often vulnerable to evasion attacks, in which a malware developer manipulates their malicious software such that it is misclassified as benign. Such software hides some properties of the real class or adopts some properties of a different class by applying small perturbations. A special case of evasive malware hides by repackaging a bona fide benign mobile app to contain malware in addition to the original functionality of the app, thus retaining most of the benign properties of the original app. We present a novel malware detection system based on metamorphic testing principles that can detect such benign-seeming malware apps. We apply metamorphic testing to the feature representation of the mobile app, rather than to the app itself. That is, the source input is the original feature vector for the app and the derived input is that vector with selected features removed. If the app was originally classified benign, and is indeed benign, the output for the source and derived inputs should be the same class, i.e., benign, but if they differ, then the app is exposed as (likely) malware. Malware apps originally classified as malware should retain that classification, since only features prevalent in benign apps are removed. This approach enables the machine learning model to classify repackaged malware with reasonably few false negatives and false positives. Our training pipeline is simpler than many existing ML-based malware detection methods, as the network is trained end-to-end to jointly learn appropriate features and to perform classification. We pre-trained our classifier model on 3 million apps collected from the widely-used AndroZoo dataset. We perform an extensive study on other publicly available datasets to show our approach's effectiveness in detecting repackaged malware with more than 94% accuracy, 0.98 precision, 0.95 recall, and 0.96 F1 score.
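The core metamorphic check reduces to comparing two predictions, as in this simplified sketch; the classifier, the string label, and the benign-prevalent feature indices are placeholders.

```python
import numpy as np

def metamorphic_flags_malware(clf, x, benign_feature_idx):
    """Return True if removing benign-prevalent features flips a benign verdict."""
    source_pred = clf.predict(x.reshape(1, -1))[0]
    derived = x.copy()
    derived[benign_feature_idx] = 0   # remove features prevalent in benign apps
    derived_pred = clf.predict(derived.reshape(1, -1))[0]
    # benign and stays benign -> pass; benign but flips -> likely repackaged
    return source_pred == "benign" and derived_pred != source_pred
```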
Kumar, Shashank, Meena, Shivangi, Khosla, Savya, Parihar, Anil Singh.  2021.  AE-DCNN: Autoencoder Enhanced Deep Convolutional Neural Network For Malware Classification. 2021 International Conference on Intelligent Technologies (CONIT). :1–5.
Malware classification is a problem of great significance in the domain of information security. This is because the classification of malware into respective families helps in determining their intent, activity, and level of threat. In this paper, we propose a novel deep learning approach to malware classification. The proposed method converts malware executables into image-based representations. These images are then classified into different malware families using an autoencoder enhanced deep convolutional neural network (AE-DCNN). In particular, we propose a novel training mechanism wherein a DCNN classifier is trained with the help of an encoder. We conjecture that using an encoder in the proposed way provides the classifier with the extra information that is perhaps lost during the forward propagation, thereby leading to better results. The proposed approach eliminates the use of feature engineering, reverse engineering, disassembly, and other domain-specific techniques earlier used for malware classification. On the standard Malimg dataset, we achieve a 10-fold cross-validation accuracy of 99.38% and F1-score of 99.38%. Further, due to the texture-based analysis of malware files, the proposed technique is resilient to several obfuscation techniques.
Lee, Shan-Hsin, Lan, Shen-Chieh, Huang, Hsiu-Chuan, Hsu, Chia-Wei, Chen, Yung-Shiu, Shieh, Shiuhpyng.  2021.  EC-Model: An Evolvable Malware Classification Model. 2021 IEEE Conference on Dependable and Secure Computing (DSC). :1–8.
Malware evolves quickly as new attack, evasion and mutation techniques are commonly used by hackers to build new malicious malware families. For malware detection and classification, multi-class learning model is one of the most popular machine learning models being used. To recognize malicious programs, multi-class model requires malware types to be predefined as output classes in advance which cannot be dynamically adjusted after the model is trained. When a new variant or type of malicious programs is discovered, the trained multi-class model will be no longer valid and have to be retrained completely. This consumes a significant amount of time and resources, and cannot adapt quickly to meet the timely requirement in dealing with dynamically evolving malware types. To cope with the problem, an evolvable malware classification deep learning model, namely EC-Model, is proposed in this paper which can dynamically adapt to new malware types without the need of fully retraining. Consequently, the reaction time can be significantly reduced to meet the timely requirement of malware classification. To our best knowledge, our work is the first attempt to adopt multi-task, deep learning for evolvable malware classification.
Gao, Tan, Li, Xudong, Chen, Wen.  2021.  Co-training For Image-Based Malware Classification. 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :568–572.
A malware detection model based on semi-supervised learning is proposed in this paper. Our model comprises three main parts: malware visualization, feature extraction, and classification. Firstly, the malware visualization step converts malware into grayscale images; then the features of the images are extracted to reflect the coding patterns of malware; finally, a collaborative learning model is applied to malware detection using both labeled and unlabeled software samples. The proposed model was evaluated on two commonly used benchmark datasets. The results demonstrate that, compared with traditional methods, our model not only reduced the cost of sample labeling but also improved detection accuracy by incorporating unlabeled samples into the collaborative learning process, thereby achieving higher classification performance.
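As a rough illustration of collaborative training with unlabeled samples, the sketch below grows the labeled set by letting two random-forest views pseudo-label each other's most confident unlabeled examples. The two feature views, the confidence rule, and the round counts are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def co_train(X_lab, y_lab, X_unl, view1, view2, rounds=5, per_round=20):
    """Two-view co-training; view1/view2 are index arrays into the features."""
    c1, c2 = RandomForestClassifier(), RandomForestClassifier()
    X_lab, y_lab, X_unl = X_lab.copy(), y_lab.copy(), X_unl.copy()
    for _ in range(rounds):
        c1.fit(X_lab[:, view1], y_lab)
        c2.fit(X_lab[:, view2], y_lab)
        for clf, view in ((c1, view1), (c2, view2)):
            if len(X_unl) == 0:
                return c1, c2
            proba = clf.predict_proba(X_unl[:, view])
            pick = np.argsort(proba.max(axis=1))[-per_round:]  # most confident
            X_lab = np.vstack([X_lab, X_unl[pick]])
            y_lab = np.concatenate([y_lab, clf.classes_[proba[pick].argmax(axis=1)]])
            X_unl = np.delete(X_unl, pick, axis=0)
    return c1, c2
```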
Mohandas, Pavitra, Santhosh Kumar, Sudesh Kumar, Kulyadi, Sandeep Pai, Shankar Raman, M J, S, Vasan V, Venkataswami, Balaji.  2021.  Detection of Malware using Machine Learning based on Operation Code Frequency. 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT). :214–220.
One of the many methods for identifying malware is to disassemble the malware files and obtain the opcodes from them. Since malware has predominantly been found to contain specific opcode sequences, the presence of the same sequences in any incoming file or network content can be taken as a possible malware identification scheme. Malware detection systems help us understand how malware attacks a system and how such attacks can be prevented. The proposed method analyses malware executable files with the help of opcode information by converting the incoming executable files to assembly language and extracting opcode information (opcode counts) from them. The opcode counts are then converted into opcode frequencies, which are stored in CSV file format. The CSV file is passed to various machine learning algorithms: Decision Tree Classifier, Random Forest Classifier, and Naive Bayes Classifier. The Random Forest Classifier produced the highest accuracy, and hence that model was used to predict whether an incoming file contains potential malware or not.
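The feature-extraction step can be sketched as follows: count known opcode mnemonics in a disassembly listing, normalize the counts into frequencies, and append them to a CSV row. The opcode list and file names are illustrative; real disassembly output (e.g. from objdump) is assumed as input.

```python
import csv
from collections import Counter

OPCODES = ["mov", "push", "pop", "call", "jmp", "add", "sub", "xor"]

def opcode_frequencies(asm_lines):
    """Count the first token of each line if it is a known opcode."""
    counts = Counter(tok for line in asm_lines
                     for tok in line.split()[:1] if tok in OPCODES)
    total = sum(counts.values()) or 1
    return [counts[op] / total for op in OPCODES]

with open("sample.asm") as f:                    # hypothetical disassembly
    row = opcode_frequencies(f.readlines())
with open("features.csv", "a", newline="") as f:
    csv.writer(f).writerow(row)                  # one feature row per file
```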
Chkirbene, Zina, Hamila, Ridha, Erbad, Aiman, Kiranyaz, Serkan, Al-Emadi, Nasser, Hamdi, Mounir.  2021.  Cooperative Machine Learning Techniques for Cloud Intrusion Detection. 2021 International Wireless Communications and Mobile Computing (IWCMC). :837–842.
Cloud computing has attracted a lot of attention in the past few years, yet even with its wide acceptance, cloud security remains one of the most essential concerns of cloud computing. Many systems have been proposed to protect the cloud from attacks using attack signatures. Most of them may seem effective and efficient; however, they have drawbacks, such as limited attack detection performance and the burden of system maintenance. Recently, learning-based methods for security applications have been proposed for cloud anomaly detection, especially with the advent of machine learning techniques. However, most researchers do not consider attack classification, which is an important parameter for proposing an appropriate countermeasure for each attack type. In this paper, we propose a new firewall model called the Secure Packet Classifier (SPC) for cloud anomaly detection and classification. The proposed model is constructed based on collaborative filtering using two machine learning algorithms to gain the advantages of both learning schemes. This strategy increases the learning performance and the system's accuracy. To generate our results, a publicly available dataset is used for training and testing the performance of the proposed SPC. Our results show that the SPC model increases detection accuracy by 20% compared to existing machine learning algorithms while maintaining a high attack detection rate.
Or-Meir, Ori, Cohen, Aviad, Elovici, Yuval, Rokach, Lior, Nissim, Nir.  2021.  Pay Attention: Improving Classification of PE Malware Using Attention Mechanisms Based on System Call Analysis. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Malware poses a threat to computing systems worldwide, and security experts work tirelessly to detect and classify malware as accurately and quickly as possible. Since malware can use evasion techniques to bypass static analysis and security mechanisms, dynamic analysis methods are more useful for accurately analyzing the behavioral patterns of malware. Previous studies showed that malware behavior can be represented by sequences of executed system calls and that machine learning algorithms can leverage such sequences for the task of malware classification (a.k.a. malware categorization). Accurate malware classification is helpful for malware signature generation and is thus beneficial to antivirus vendors; this capability is also valuable to organizational security experts, enabling them to mitigate malware attacks and respond to security incidents. In this paper, we propose an improved methodology for malware classification, based on analyzing sequences of system calls invoked by malware in a dynamic analysis environment. We show that adding an attention mechanism to an LSTM model improves accuracy for the task of malware classification, thus outperforming the state-of-the-art algorithm by up to 6%. We also show that the transformer architecture can be used to analyze very long sequences with significantly lower time complexity for training and prediction. Our proposed method can serve as the basis for a decision support system for security experts, for the task of malware categorization.
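As a hedged sketch of what an attention-augmented LSTM classifier over system-call sequences might look like in Keras (the vocabulary size, layer widths, and number of malware families are assumptions, not the authors' configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, n_families = 400, 1000, 9   # assumed dimensions

inputs = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, 64)(inputs)     # system-call IDs -> vectors
h = layers.LSTM(128, return_sequences=True)(x)
attn = layers.Attention()([h, h])                # self-attention over time steps
pooled = layers.GlobalAveragePooling1D()(attn)
outputs = layers.Dense(n_families, activation="softmax")(pooled)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```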
2022-02-03
Battistuzzi, Linda, Grassi, Lucrezia, Recchiuto, Carmine Tommaso, Sgorbissa, Antonio.  2021.  Towards Ethics Training in Disaster Robotics: Design and Usability Testing of a Text-Based Simulation. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :104–109.
Rescue robots are expected to soon become commonplace at disaster sites, where they are increasingly being deployed to provide rescuers with improved access and intervention capabilities while mitigating risks. The presence of robots in operation areas, however, is likely to add a layer of ethical complexity to situations that are already ethically challenging. In addition, limited guidance is available for ethically informed, practical decision-making in real-life disaster settings, and specific ethics training programs are lacking. The contribution of this paper is thus to propose a tool aimed at supporting ethics training for rescuers operating with rescue robots. To this end, we have designed an interactive text-based simulation. The simulation was developed in Python, using Tkinter, Python's de facto standard GUI library. It is designed in accordance with the Case-Based Learning approach, a widely used instructional method that has been found to work well for ethics training. The simulation revolves around a case grounded in ethical themes we identified in previous work on ethical issues in rescue robotics: fairness and discrimination, false or excessive expectations, labor replacement, safety, and trust. Here we present the design of the simulation and the results of usability testing.
Huang, Chao, Luo, Wenhao, Liu, Rui.  2021.  Meta Preference Learning for Fast User Adaptation in Human-Supervisory Multi-Robot Deployments. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :5851–5856.
As multi-robot systems (MRS) are widely used in various tasks such as natural disaster response and social security, people enthusiastically expect an MRS to become ubiquitous and easy for a general user to operate without heavy training. However, humans have various preferences for balancing task performance and safety, imposing different requirements onto MRS control. Failing to comply with preferences makes an MRS difficult for people to operate and decreases human willingness to use it. Therefore, to improve social acceptance as well as performance, there is an urgent need to adjust MRS behaviors according to human preferences before resorting to human corrections, which increase cognitive load. In this paper, a novel Meta Preference Learning (MPL) method was developed to enable an MRS to quickly adapt to user preferences. MPL, based on a meta learning mechanism, can quickly assess human preferences from limited instructions; then, a neural-network-based preference model adjusts MRS behaviors for preference adaptation. To validate the method's effectiveness, a task scenario "An MRS searches victims in an earthquake disaster site" was designed; 20 human users were involved to identify preferences as "aggressive", "medium", or "reserved"; based on user guidance and domain knowledge, about 20,000 preferences were simulated to cover different operations related to "task quality", "task progress", and "robot safety". The effectiveness of MPL in preference adaptation was validated by the reduced duration and frequency of human interventions.
2022-01-31
Dai, Wei, Berleant, Daniel.  2021.  Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. 2021 IEEE International Conference on Big Data (Big Data). :5085–5094.
Deep learning (DL) classifiers are often unstable in that they may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate the robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, covering minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking the robustness of DL classifiers. To measure the robustness of DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting the experimental results, we first report that using two-factor perturbed images improves both the robustness and accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects.
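One of the two-factor digital conditions is easy to reproduce with scikit-image, as in this sketch; the noise parameters are illustrative, and as the abstract notes, the paired set applies the same two perturbations in the reverse order.

```python
from skimage import data
from skimage.util import random_noise

clean = data.camera() / 255.0   # any grayscale test image, scaled to [0, 1]

# Two-factor perturbation: salt & pepper followed by Gaussian noise,
# and the same pair applied in the reverse sequence.
sp_then_gauss = random_noise(random_noise(clean, mode="s&p", amount=0.05),
                             mode="gaussian", var=0.01)
gauss_then_sp = random_noise(random_noise(clean, mode="gaussian", var=0.01),
                             mode="s&p", amount=0.05)
```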
Liu, Yong, Zhu, Xinghua, Wang, Jianzong, Xiao, Jing.  2021.  A Quantitative Metric for Privacy Leakage in Federated Learning. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3065–3069.
In a federated learning system, parameter gradients are shared among participants and the central modulator, while the original data never leave their protected source domain. However, the gradients themselves might carry enough information for precise inference of the original data. By reporting their parameter gradients to the central server, clients expose their datasets to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients to evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining communities over the past few years. However, existing mutual information estimation methods cannot handle high-dimensional variables. In this paper, we propose a novel method to approximate the mutual information between high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the influential factors of risk level. It is shown that the risk of information leakage is related to the status of the task model, as well as the inherent data distribution.
2022-01-11
Roberts, Ciaran, Ngo, Sy-Toan, Milesi, Alexandre, Scaglione, Anna, Peisert, Sean, Arnold, Daniel.  2021.  Deep Reinforcement Learning for Mitigating Cyber-Physical DER Voltage Unbalance Attacks. 2021 American Control Conference (ACC). :2861–2867.
The deployment of DER with smart-inverter functionality is increasing the controllable assets on power distribution networks and, consequently, the cyber-physical attack surface. In this work, we consider the use of reinforcement learning as an online controller that adjusts DER Volt/Var and Volt/Watt control logic to mitigate network voltage unbalance. We specifically focus on the case where a network-aware cyber-physical attack has compromised a subset of single-phase DER, causing a large voltage unbalance. We show how deep reinforcement learning successfully learns a policy minimizing the unbalance, both during normal operation and during a cyber-physical attack. In mitigating the attack, the learned stochastic policy operates alongside legacy equipment on the network, i.e. tap-changing transformers, by optimally adjusting predefined DER control logic.
2022-01-10
Al-Ameer, Ali, AL-Sunni, Fouad.  2021.  A Methodology for Securities and Cryptocurrency Trading Using Exploratory Data Analysis and Artificial Intelligence. 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA). :54–61.
This paper discusses securities and cryptocurrency trading using artificial intelligence (AI), in the sense that it focuses on performing Exploratory Data Analysis (EDA) on selected technical indicators before proceeding to modelling, and then develops more practical models by introducing a new reward loss function that maximizes returns during the training phase. The results of the EDA reveal that the complex patterns within the data can be better captured by discriminative classification models, and this was confirmed by performing back-testing on two securities using an Artificial Neural Network (ANN) and Random Forests (RF) as discriminative models against their counterpart Naïve Bayes as a generative model. To enhance the learning process, the new reward loss function is used to retrain the ANN, with testing on AAPL, IBM, BRENT CRUDE, and BTC using an auto-trading strategy that serves as the intelligent unit, and the results indicate that this loss significantly outperforms the conventional cross-entropy used in predictive models. The overall results of this work suggest that there should be a larger focus on EDA and more practical losses in research on machine learning modelling for stock market prediction applications.
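A hedged sketch of what a return-maximizing training loss could look like in Keras, as one reading of the idea rather than the paper's exact formulation: the model's long-probability is mapped to a position in [-1, 1], and the loss is the negative realized return of that position, so minimizing it maximizes returns during training.

```python
import tensorflow as tf

def reward_loss(y_true, y_pred):
    # Assumed interface: y_true holds the realized next-period return for
    # each sample, y_pred[:, 0] is the model's probability of going long
    # (a Dense(1, activation="sigmoid") output head).
    position = 2.0 * y_pred[:, 0] - 1.0   # probability -> position in [-1, 1]
    return -tf.reduce_mean(position * y_true[:, 0])
```

The loss would then be wired in as usual, e.g. `model.compile(optimizer="adam", loss=reward_loss)`, with realized returns supplied as the training targets.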
Hu, Guangjun, Li, Haiwei, Li, Kun, Wang, Rui.  2021.  A Network Asset Detection Scheme Based on Website Icon Intelligent Identification. 2021 Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS). :255–257.
With the rapid development of the Internet and communication technologies, efficient management of cyberspace and safe monitoring and protection of various network assets can effectively improve the overall level of network security protection. Accurate, effective, and comprehensive network asset detection is the prerequisite for effective network asset management, and it is also the basis for security monitoring and analysis. This paper proposes an artificial-intelligence-based scheme that accurately identifies website icons and helps determine the ownership of network assets. Experiments based on a dataset collected from a real network demonstrate that the proposed scheme has higher accuracy and a lower false alarm rate, and can effectively reduce training cost.
Xu, Baoyue, Du, Dajun, Zhang, Changda, Zhang, Jin.  2021.  A Honeypot-based Attack Detection Method for Networked Inverted Pendulum System. 2021 40th Chinese Control Conference (CCC). :8645–8650.
The data transmitted via the network may be vulnerable to cyber attacks in a networked inverted pendulum system (NIPS); how to detect such attacks is a challenging issue. To solve this problem, this paper investigates a honeypot-based attack detection method for NIPS. Firstly, a honeypot for NIPS attack detection (namely, NipsPot) is constructed from a deceptive environment module of a virtual closed-loop control system, and the stealthiness of typical covert attacks is analysed. Secondly, attack data is collected by NipsPot and used to train a support vector machine (SVM) model for attack detection. Finally, simulation results demonstrate that the NipsPot-based attack detector can achieve an accuracy rate of 99.78%, a precision rate of 98.75%, and a recall rate of 100%.
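The detection step maps naturally onto a standard scikit-learn pipeline, sketched below on a hypothetical honeypot log; the file name, columns, and split ratio are placeholders, not the authors' setup.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("nipspot_records.csv")   # hypothetical honeypot traffic log
X, y = df.drop(columns=["attack"]), df["attack"]   # assumed 0/1 attack label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
pred = svm.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred))
```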