Biblio

Found 631 results

Filters: Keyword is Deep Learning
2021-09-07
Vamsi, G Krishna, Rasool, Akhtar, Hajela, Gaurav.  2020.  Chatbot: A Deep Neural Network Based Human to Machine Conversation Model. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
A conversational agent (chatbot) is computer software capable of communicating with humans using natural language processing. The crucial part of building any chatbot is the development of conversation. Despite many advances in Natural Language Processing (NLP) and Artificial Intelligence (AI), creating a good chatbot model remains a significant challenge in this field even today. A conversational bot can be used for countless tasks; in general, it needs to understand the user's intent and deliver appropriate replies. A chatbot is a software program with a conversational interface that allows a user to converse in the same manner one would address a human. Hence, chatbots are used on almost every customer communication platform, such as social networks. At present, two basic models are used in developing chatbots: generative-based models and retrieval-based models. Recent advancements in deep learning and artificial intelligence, such as end-to-end trainable neural networks, have rapidly replaced earlier methods based on hand-written rules and patterns or on statistical methods. This paper proposes a new method of creating a chatbot based on a deep neural network. In this method, a neural network with multiple layers is built to learn and process the data.
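The paper does not publish code; as a hedged sketch of a multi-layer network for the intent-understanding step it describes, the following hypothetical Keras model classifies a tokenized utterance into intents (vocabulary size, sequence length, layer widths, and intent count are all assumptions):

from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, NUM_INTENTS = 5000, 20, 8   # illustrative values

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),                 # token embeddings
    layers.GlobalAveragePooling1D(),                  # fixed-size utterance vector
    layers.Dense(128, activation="relu"),             # hidden layer
    layers.Dense(NUM_INTENTS, activation="softmax"),  # intent distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, intent_labels, epochs=10)  # with real dialogue data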
Lakshmi V., Santhana.  2020.  A Study on Machine Learning based Conversational Agents and Designing Techniques. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :965–968.
A chatbot is a computer program created to imitate a human during a conversation. In this technological era, machines have replaced humans for much routine work, so chatbots were developed to mimic the conversation a human has with another person. The work a chatbot does ranges from answering simple queries to acting as a personal assistant. There are different kinds of chatbots, developed to cater to the needs of people in different domains, and the methodology for creating them also varies depending on their type. In this paper, the various types of chatbots and the techniques used for designing them, such as machine learning, deep learning and natural language processing, are discussed in detail.
Hossain, Md Delwar, Inoue, Hiroyuki, Ochiai, Hideya, FALL, Doudou, Kadobayashi, Youki.  2020.  Long Short-Term Memory-Based Intrusion Detection System for In-Vehicle Controller Area Network Bus. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :10–17.
The Controller Area Network (CAN) bus system works inside connected cars as a central system for communication between electronic control units (ECUs). Despite its central importance, the CAN does not support an authentication mechanism, i.e., CAN messages are broadcast without basic security features. As a result, it is easy for attackers to launch attacks on the CAN bus network system. Attackers can compromise the CAN bus system in several ways: denial of service, fuzzing, spoofing, etc. It is imperative to devise methodologies to protect modern cars against such attacks. In this paper, we propose a Long Short-Term Memory (LSTM)-based Intrusion Detection System (IDS) to detect and mitigate CAN bus network attacks. We first inject attacks into the CAN bus system of a car at our disposal to generate the attack dataset, which we use to train and test our model. Our results demonstrate that our classifier is efficient in detecting CAN attacks: we achieved a detection accuracy of 99.9949%.
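The abstract does not include code; a minimal sketch of the kind of LSTM classifier it describes might label a window of CAN frames as normal or attack (the window length and the per-frame encoding of CAN ID plus eight payload bytes are assumptions):

from tensorflow.keras import layers, models

SEQ_LEN, FRAME_FEATURES = 30, 9   # assumed: 30-frame window, CAN ID + 8 payload bytes

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FRAME_FEATURES)),
    layers.LSTM(64),                           # summarize the frame sequence
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # P(window contains an attack)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(windows, labels, epochs=10)        # with a real CAN attack dataset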
2021-08-31
Ebrahimian, Mahsa, Kashef, Rasha.  2020.  Efficient Detection of Shilling’s Attacks in Collaborative Filtering Recommendation Systems Using Deep Learning Models. 2020 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). :460–464.
Recommendation systems, especially collaborative filtering recommenders, are vulnerable to shilling attacks, as profit-driven users may inject fake profiles into the system to alter recommendation outputs. Current shilling attack detection methods are mostly based on feature extraction techniques. Hand-designed features can confine the model to specific domains or datasets, while deep learning techniques enable us to derive deeper-level features, enhance detection performance, and generalize the solution across various datasets and domains. This paper illustrates the application of two deep learning methods to detect shilling attacks. We conducted experiments on the MovieLens 100K and Netflix datasets with different attack sizes and types. Experimental results show that deep learning models can achieve an accuracy of up to 99%.
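As an illustrative sketch only (the paper evaluates its own two deep models, not this one), a deep detector can classify each user's full rating vector as genuine or shilling; the layer widths are assumptions, and 1682 is the MovieLens 100K item count:

from tensorflow.keras import layers, models

NUM_ITEMS = 1682   # MovieLens 100K item count; a profile is one rating per item

model = models.Sequential([
    layers.Input(shape=(NUM_ITEMS,)),        # rating vector, 0 for unrated items
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(profile is a shilling profile)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])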
Zarzour, Hafed, Al shboul, Bashar, Al-Ayyoub, Mahmoud, Jararweh, Yaser.  2020.  A convolutional neural network-based reviews classification method for explainable recommendations. 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS). :1–5.
Recent advances in information filtering have resulted in effective recommender systems that are able to provide online personalized recommendations to millions of users all over the world. However, most of these systems ignore the explanation purpose while producing recommendations with high-quality results. Moreover, the classification of reviews given to users as explanations has not been fully exploited in previous studies. In this paper, we develop a convolutional neural network-based review classification method for explainable recommendation systems. The convolutional neural network is used to extract review features for predicting whether the reviews provided as explanations are positive or negative. Based on this additional information, users can understand not only why certain items are recommended to them but also the nature of those explanations. We conduct experiments on a dataset from Amazon. The experimental results show that our method outperforms state-of-the-art methods.
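A minimal sketch of such a CNN review classifier, assuming a standard embedding-plus-1D-convolution text model (vocabulary size, sequence length, and filter settings are illustrative, not the paper's configuration):

from tensorflow.keras import layers, models

VOCAB, MAX_LEN = 20000, 200   # assumed tokenizer settings

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, 100),               # word embeddings
    layers.Conv1D(128, 5, activation="relu"),   # n-gram feature maps
    layers.GlobalMaxPooling1D(),                # strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])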
2021-08-11
Xue, Mingfu, Wu, Zhiyu, He, Can, Wang, Jian, Liu, Weiqiang.  2020.  Active DNN IP Protection: A Novel User Fingerprint Management and DNN Authorization Control Technique. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :975–982.
The training process of a deep learning model is costly. As such, a deep learning model can be treated as intellectual property (IP) of the model creator. However, a pirate can illegally copy, redistribute or abuse the model without permission. In recent years, a few Deep Neural Network (DNN) IP protection works have been proposed. However, most existing works passively verify the copyright of the model after the piracy occurs and lack user identity management, thus cannot provide commercial copyright management functions. In this paper, a novel user fingerprint management and DNN authorization control technique based on backdoors is proposed to provide active DNN IP protection. The proposed method can not only verify the ownership of the model but can also authenticate and manage each user's unique identity, so as to provide a commercially applicable DNN IP management mechanism. Experimental results on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets show that the proposed method can achieve a high detection rate for user authentication (up to 100% on the three datasets). Illegal users with forged fingerprints cannot pass authentication, as the detection rates are all 0% on the three datasets. The model owner can verify ownership by triggering the backdoor with high confidence. In addition, the accuracy drops are only 0.52%, 1.61% and -0.65% on CIFAR-10, CIFAR-100 and Fashion-MNIST, respectively, which indicates that the proposed method will not affect the performance of the DNN models. The proposed method is also robust to model fine-tuning and pruning attacks. The detection rates for owner verification on CIFAR-10, CIFAR-100 and Fashion-MNIST are all 100% after model pruning attack, and are 90%, 83% and 93%, respectively, after model fine-tuning attack, on the premise that the attacker wants to preserve the accuracy of the model.
Ferrag, Mohamed Amine, Maglaras, Leandros.  2020.  DeepCoin: A Novel Deep Learning and Blockchain-Based Energy Exchange Framework for Smart Grids. IEEE Transactions on Engineering Management. 67:1285–1297.
In this paper, we propose a novel deep learning and blockchain-based energy framework for smart grids, entitled DeepCoin. The DeepCoin framework uses two schemes: a blockchain-based scheme and a deep learning-based scheme. The blockchain-based scheme consists of five phases: setup, agreement, block creation, consensus making, and view change. It incorporates a novel reliable peer-to-peer energy system that is based on the practical Byzantine fault tolerance algorithm, and it achieves high throughput. In order to prevent smart grid attacks, the proposed framework generates blocks using short signatures and hash functions. The proposed deep learning-based scheme is an intrusion detection system (IDS) that employs recurrent neural networks for detecting network attacks and fraudulent transactions in the blockchain-based energy network. We study the performance of the proposed IDS on three different sources: the CICIDS2017 dataset, a power system dataset, and a web robot (Bot)-Internet of Things (IoT) dataset.
2021-08-05
Alecakir, Huseyin, Kabukcu, Muhammet, Can, Burcu, Sen, Sevil.  2020.  Discovering Inconsistencies between Requested Permissions and Application Metadata by using Deep Learning. 2020 International Conference on Information Security and Cryptology (ISCTURKEY). :56–56.
Android gives us the opportunity to extract meaningful information from metadata. From a security point of view, missing important information in an application's metadata can be a sign of a suspicious application, which can then be directed to extensive analysis. In particular, the usage of dangerous permissions is expected to be explained in app descriptions. The permission-to-description fidelity problem in the literature aims to discover such inconsistencies between the usage of permissions and the descriptions. This study proposes a new method based on natural language processing and recurrent neural networks. The effect of user reviews on finding such inconsistencies is also investigated in addition to application descriptions. The experimental results show that the proposed solution obtains high precision and that it can be used for triage of Android applications.
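A hedged sketch of one way such a text-to-permission model could look (the architecture, vocabulary size, and the figure of 26 dangerous-permission groups are assumptions for illustration, not the paper's exact design):

from tensorflow.keras import layers, models

VOCAB, MAX_LEN, NUM_PERMS = 30000, 300, 26   # assumed vocabulary, length, permission count

desc = layers.Input(shape=(MAX_LEN,))                      # tokenized app description
x = layers.Embedding(VOCAB, 128)(desc)
x = layers.Bidirectional(layers.GRU(64))(x)                # recurrent description encoding
probs = layers.Dense(NUM_PERMS, activation="sigmoid")(x)   # P(description justifies perm i)
model = models.Model(desc, probs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# An app is flagged when it requests permission i but the model assigns low probability:
# inconsistent = requested_mask & (model.predict(tokens)[0] < 0.5)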
2021-08-02
Peng, Ye, Fu, Guobin, Luo, Yingguang, Yu, Qi, Li, Bin, Hu, Jia.  2020.  A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410–414.
Deep learning plays an increasingly important role in various fields due to its superior performance, and it also achieves advanced recognition performance in the field of image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored, and the prediction result of a model is likely to be affected by small perturbations added to the samples by an adversary. In this paper, we propose a two-layer dynamic defense method based on a defensive technique pool and a retrained branch model pool. First, we randomly select defense methods from the defense pool to process the input. The perturbation ability of adversarial samples preprocessed by different defense methods changes, which produces different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models. The classification results of these branch models for the same adversarial sample are inconsistent. We can detect adversarial samples by using the inconsistencies in the output results of the two layers. The experimental results show that the two-layer dynamic defense method we designed achieves a good defense effect.
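A minimal sketch of the two-layer disagreement test the abstract describes (function and parameter names are hypothetical; the transformation pool and branch models are assumed to be supplied):

import random

def detect_adversarial(x, base_model, transforms, branch_models):
    """Two-layer disagreement test (illustrative; names are hypothetical).
    Layer 1: a randomly chosen input transformation from the defense pool.
    Layer 2: adversarially retrained branch models of the original network.
    x is a single-sample batch; any inconsistency flags it as adversarial."""
    t = random.choice(transforms)
    preds = [int(base_model.predict(t(x)).argmax())]              # transformed input
    preds += [int(m.predict(x).argmax()) for m in branch_models]  # branch outputs
    preds.append(int(base_model.predict(x).argmax()))             # raw prediction
    return len(set(preds)) > 1                                    # disagreement => attack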
2021-07-28
Vinzamuri, Bhanukiran, Khabiri, Elham, Bhamidipaty, Anuradha, Mckim, Gregory, Gandhi, Biren.  2020.  An End-to-End Context Aware Anomaly Detection System. 2020 IEEE International Conference on Big Data (Big Data). :1689–1698.
Anomaly detection (AD) is very important across several real-world problems in the heavy industries and Internet-of-Things (IoT) domains. Traditional methods have so far categorized anomaly detection into (a) unsupervised, (b) semi-supervised and (c) supervised techniques. A relatively unexplored direction is the development of context-aware anomaly detection systems, which can build on top of any of these three techniques by using side information. Context can be captured from a different modality, such as semantic graphs encoding groupings of sensors governed by the physics of the asset. Process flow diagrams of an operational plant, depicting causal relationships between sensors, can also provide useful context for ML algorithms. Capturing such semantics can itself be quite challenging; however, our paper mainly focuses on (a) designing and implementing effective anomaly detection pipelines using sparse Gaussian graphical models with various statistical distance metrics, and (b) differentiating these pipelines by embedding contextual semantics inferred from graphs so as to obtain better KPIs in practice. The motivation for the latter has been explained above, and the former in particular is well motivated by the relatively mediocre performance of highly parametric deep learning methods on small tabular datasets (compared to images) such as IoT sensor data. In contrast to such traditional automated deep learning (AutoAI) techniques, our anomaly detection system is based on developing semantics-driven, industry-specific ML pipelines which perform scalable computation, evaluating several models to identify the best model. We benchmark our AD method against state-of-the-art AD techniques on publicly available UCI datasets. We also conduct a case study on IoT sensor and semantic data procured from a large thermal energy asset to evaluate the importance of semantics in enhancing our pipelines. In addition, we provide explainable insights for our model, which give a complete perspective to a reliability engineer.
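The paper's pipelines are built on sparse Gaussian graphical models; a minimal sketch of that core idea using scikit-learn's GraphicalLasso (the synthetic data, alpha value, and scoring rule are placeholders) could look like:

import numpy as np
from sklearn.covariance import GraphicalLasso

# Fit a sparse precision matrix on normal-operation sensor readings
# (rows = time steps, cols = sensors; real data would be standardized first).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))        # placeholder for real sensor data
gl = GraphicalLasso(alpha=0.1).fit(X_train)

def anomaly_score(x, model=gl):
    """Mahalanobis-style distance under the learned sparse precision matrix."""
    d = x - model.location_
    return float(d @ model.precision_ @ d)

# Readings whose scores far exceed those seen in training are flagged as anomalous.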
2021-07-27
Kim, Hyeji, Jiang, Yihan, Kannan, Sreeram, Oh, Sewoong, Viswanath, Pramod.  2020.  Deepcode: Feedback Codes via Deep Learning. IEEE Journal on Selected Areas in Information Theory. 1:194–206.
The design of codes for communicating reliably over a statistically well-defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly outperforms state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve the reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information-theoretic insights harmoniously with recurrent-neural-network-based encoders and decoders to create novel codes that outperform known codes by three orders of magnitude in reliability and achieve a 3 dB gain in terms of SNR. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths, (b) composability with known codes, and (c) adaptation to practical constraints. This result also has broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
2021-07-08
Li, Sichun, Jin, Xin, Yao, Sibing, Yang, Shuyu.  2020.  Underwater Small Target Recognition Based on Convolutional Neural Network. Global Oceans 2020: Singapore – U.S. Gulf Coast. :1–7.
With the increasingly extensive military use of divers and unmanned underwater vehicles, a serious threat is posed to the security of national coastal areas. In order to prevent underwater divers from impacting the safety of a water area, it is of great significance to identify small underwater targets in time so as to provide early warning. In this paper, a convolutional neural network is applied to underwater small-target recognition. The recognition targets are divers, whales and dolphins. Because the time-frequency spectrum reflects the essential features of an underwater target, a convolutional neural network can learn a variety of features of the acoustic signal from the time-frequency image, which is input to the network to recognize the small underwater targets. Based on a study of learning rate and pooling mode, network parameters and a structure suitable for underwater small-target recognition are selected. The results of data processing show that the method can identify small underwater targets accurately.
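A hedged sketch of this kind of spectrogram-plus-CNN pipeline (the sample rate, network depth, and random placeholder signal are assumptions; the three output classes follow the abstract):

import numpy as np
from scipy import signal
from tensorflow.keras import layers, models

fs = 16000                                 # assumed hydrophone sample rate
audio = np.random.randn(fs * 2)            # placeholder for a real recording
f, t, Sxx = signal.spectrogram(audio, fs=fs, nperseg=256)
x = np.log(Sxx + 1e-9)[None, ..., None]    # time-frequency image, shape (1, F, T, 1)

model = models.Sequential([
    layers.Input(shape=x.shape[1:]),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),                  # pooling mode is one of the studied choices
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # diver / whale / dolphin
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")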
2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks are accompanied by sensitive data. Adversarial attacks in deep learning have emerged as one of the dominating security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effects, and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input-transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input-transformation teaming defense can achieve high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
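A minimal sketch of what transformation teaming with auto-repair and auto-verify could look like (the majority-vote rule and function names are assumptions; the paper's actual teaming strategy is more elaborate):

from collections import Counter

def team_predict(x, model, transforms):
    """Input-transformation teaming (illustrative sketch, hypothetical names).
    Each transform (e.g., rotation, quantization, denoising) preprocesses x;
    the team's majority vote auto-repairs the prediction, and disagreement
    with the raw prediction auto-verifies a likely adversarial input."""
    votes = [int(model.predict(t(x)).argmax()) for t in transforms]
    repaired = Counter(votes).most_common(1)[0][0]   # majority-vote repair
    raw = int(model.predict(x).argmax())
    return repaired, repaired != raw                 # (label, adversarial flag)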
Lee, Hyunjun, Bere, Gomanth, Kim, Kyungtak, Ochoa, Justin J., Park, Joung-hu, Kim, Taesic.  2020.  Deep Learning-Based False Sensor Data Detection for Battery Energy Storage Systems. 2020 IEEE CyberPELS (CyberPELS). :1–6.
Battery energy storage systems face risks of unreliable battery sensor data, which might be caused by sensor faults in an embedded battery management system, communication failures, or even cyber-attacks. It is crucial to evaluate the trustworthiness of battery sensor data, since inaccurate sensor data could not only lead to serious damage to battery energy storage systems but also threaten the overall reliability of their applications (e.g., electric vehicles or power grids). This paper introduces a battery sensor data trust framework that enables the detection of unreliable data using a deep learning algorithm. The proposed sensor data trust mechanism could potentially improve the safety and reliability of battery energy storage systems. The proposed deep learning-based battery sensor fault detection algorithm is validated by simulation studies using a convolutional neural network.
2021-06-24
Habib ur Rehman, Muhammad, Mukhtar Dirir, Ahmed, Salah, Khaled, Svetinovic, Davor.  2020.  FairFed: Cross-Device Fair Federated Learning. 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). :1–7.
Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset, and simulate a cross-device model training setting to detect adversaries in the training network. We use TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with a baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
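The abstract pairs aggregation with outlier detection; a toy sketch of one such outlier-rejecting aggregator follows (the z-score rule is an assumption, not FairFed's published protocol):

import numpy as np

def outlier_rejecting_aggregate(updates, z_thresh=2.0):
    """Toy fairness-style aggregation (not FairFed's published protocol).
    `updates`: list of flattened client model-update vectors. Clients whose
    update norm deviates strongly from the cohort are treated as adversarial
    and excluded before averaging."""
    stacked = np.stack(updates)
    norms = np.linalg.norm(stacked, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-9)   # cohort z-scores
    keep = np.abs(z) < z_thresh                         # reject outlier updates
    return stacked[keep].mean(axis=0)                   # average honest updates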
Dang, Tran Khanh, Truong, Phat T. Tran, Tran, Pi To.  2020.  Data Poisoning Attack on Deep Neural Network and Some Defense Methods. 2020 International Conference on Advanced Computing and Applications (ACOMP). :15–22.
In recent years, Artificial Intelligence has disruptively changed information technology and software engineering with a proliferation of technologies and applications based on it. However, recent research shows that AI models in general, and Deep Learning models (the greatest invention since sliced bread) in particular, are vulnerable to being hacked and can be misused for bad purposes. In this paper, we carry out a brief review of the data poisoning attack, one of two dangerous recently emerging attacks, and the state-of-the-art defense methods for this problem. Finally, we discuss current challenges and future developments.
2021-05-25
Meghdouri, Fares, Vázquez, Félix Iglesias, Zseby, Tanja.  2020.  Cross-Layer Profiling of Encrypted Network Data for Anomaly Detection. 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA). :469–478.
In January 2017 encrypted Internet traffic surpassed non-encrypted traffic. Although encryption increases security, it also masks intrusions and attacks by blocking access to packet contents and traffic features, thereby making data analysis unfeasible. In spite of the strong effect of encryption, its impact has been scarcely investigated in the field. In this paper we study how encryption affects flow feature spaces and machine learning-based attack detection. We propose a new cross-layer feature vector that simultaneously represents traffic at three different levels: application, conversation, and endpoint behavior. We analyze its behavior under TLS and IPsec encryption and evaluate its efficacy with recent network traffic datasets using Random Forest classifiers. The cross-layer multi-key approach shows excellent attack detection in spite of TLS encryption. When IPsec is applied, the reduced variant obtains satisfactory detection for botnets, yet considerable performance drops for other types of attacks. The high complexity of network traffic makes monolithic data analysis solutions unfeasible, therefore requiring cross-layer analysis, for which the multi-key vector becomes a powerful profiling core.
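A toy sketch of the final detection stage, training a Random Forest on a placeholder cross-layer feature matrix (the feature dimensions and synthetic data stand in for the paper's multi-key vectors):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder cross-layer feature matrix: each row would concatenate application-,
# conversation-, and endpoint-level statistics for one traffic object.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))       # synthetic stand-in features
y = rng.integers(0, 2, size=1000)     # 0 = benign, 1 = attack (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", clf.score(X_te, y_te))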

2021-05-20
Yu, Jia ao, Peng, Lei.  2020.  Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples. 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP). :965–969.
The security of deep learning becomes increasingly important as more and more related applications appear. The adversarial attack is a known method that makes the performance of a deep neural network (DNN) decline rapidly. However, an adversarial attack needs gradient knowledge of the target network to craft specific adversarial examples; this is the white-box setting, which rarely holds in reality. In this paper, we implement a black-box attack on a DNN classifier via a functionally equivalent network, without knowing the internal structure and parameters of the target network. And we increase the entropy of the noise via deep convolutional generative adversarial networks (DCGAN) to make it seem fuzzier, avoiding being probed and eliminated easily by adversarial training. Experiments show that this method can quickly produce a large number of adversarial examples in batches, and that the target network cannot improve its accuracy via simple adversarial training.
2021-05-18
Niloy, Nishat Tasnim, Islam, Md. Shariful.  2020.  IntellCache: An Intelligent Web Caching Scheme for Multimedia Contents. 2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (icIVPR). :1–6.
The traditional reactive web caching system is becoming less popular day by day due to its inefficiency in handling overwhelming requests for multimedia content. An intelligent web caching system intends to make optimal cache decisions by predicting future popular contents (FPC) proactively. In recent years, a few approaches have proposed intelligent caching systems concerned with proactive caching. Those works intensified the importance of FPC prediction using prediction models. However, FPC prediction alone may not yield the optimal solution in every scenario. In this paper, a technique named IntellCache is proposed that increases caching efficiency by taking a cache decision, i.e., a content-storing decision, before storing the predicted FPC. Different deep learning models, such as the multilayer perceptron (MLP), long short-term memory (LSTM) recurrent neural networks (RNN), and ConvLSTM (a combination of LSTM and a convolutional neural network, CNN), are compared to identify the most efficient model for FPC. Eighteen years of content information from the MovieLens data repository has been mined to evaluate the proposed approach. Results show that this proposed scheme outperforms previous solutions by achieving a higher cache hit ratio and lower average delay, and thus ensures users' satisfaction.
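An illustrative sketch of the FPC-prediction step using an LSTM over per-content request histories (the window length and single-feature encoding are assumptions):

from tensorflow.keras import layers, models

WINDOW = 30   # assumed: 30 past observations of per-content request counts

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),   # one request-count value per time step
    layers.LSTM(32),
    layers.Dense(1),                   # predicted next-period request count
])
model.compile(optimizer="adam", loss="mse")

# Contents with the highest predicted counts become FPC candidates; the cache
# decision step can then weigh object size against the expected hit gain.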
Fidalgo, Ana, Medeiros, Ibéria, Antunes, Paulo, Neves, Nuno.  2020.  Towards a Deep Learning Model for Vulnerability Detection on Web Application Variants. 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). :465–476.
Reported vulnerabilities have grown significantly over recent years, with SQL injection (SQLi) being one of the most prominent, especially in web applications. For these, the increase can be explained by the integration of multiple software parts (e.g., various plugins and modules), often developed by different organizations, composing web application variants. Machine learning has the potential to be a great ally in finding vulnerabilities, aiding experts by reducing the search space or even by classifying programs on its own. However, previous work usually does not consider SQLi or utilizes techniques that are hard to scale. Moreover, there is a clear gap in vulnerability detection with machine learning for PHP, the most popular server-side language for web applications. This paper presents a Deep Learning model able to classify PHP slices as vulnerable (or not) to SQLi. As slices can belong to any variant, we propose the use of an intermediate language to represent the slices and interpret them as text, resorting to well-studied Natural Language Processing (NLP) techniques. Preliminary results of the use of the model show that it can discover SQLi, helping programmers and precluding attacks that would eventually cost a lot to repair.
Zheng, Wei, Gao, Jialiang, Wu, Xiaoxue, Xun, Yuxing, Liu, Guoliang, Chen, Xiang.  2020.  An Empirical Study of High-Impact Factors for Machine Learning-Based Vulnerability Detection. 2020 IEEE 2nd International Workshop on Intelligent Bug Fixing (IBF). :26–34.
Vulnerability detection is an important topic in software engineering. To improve the effectiveness and efficiency of vulnerability detection, many traditional machine learning-based and deep learning-based vulnerability detection methods have been proposed. However, the impact of different factors on vulnerability detection is unknown. For example, classification models and vectorization methods can directly affect the detection results, and code replacement can affect the features used for vulnerability detection. We conduct a comparative study to evaluate the impact of different classification algorithms, vectorization methods, and replacement of user-defined variable and function names. In this paper, we collected three different vulnerability code datasets. These datasets correspond to different types of vulnerabilities and have different proportions of source code. Besides, we extract and analyze the features of the vulnerability code datasets to explain some experimental results. Our findings from the experimental results can be summarized as follows: (i) the performance of deep learning is better than traditional machine learning, and BLSTM achieves the best performance; (ii) CountVectorizer can improve the performance of traditional machine learning; (iii) different vulnerability types and different code sources generate different features; we use the Random Forest algorithm to generate the features of the vulnerability code datasets, and these generated features include system-related functions, syntax keywords, and user-defined names; (iv) datasets without replacement of user-defined variable and function names achieve better vulnerability detection results.
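A toy sketch of the CountVectorizer-plus-classifier combination the study evaluates (the two-slice corpus and labels are placeholders; the real experiments use the three vulnerability datasets):

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Two toy code slices treated as text, with illustrative labels.
code_slices = ["strcpy ( buf , input )", "len = strlen ( input )"]
labels = [1, 0]   # 1 = vulnerable, 0 = safe (illustrative)

pipe = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),   # vectorization method under study
    RandomForestClassifier(n_estimators=100, random_state=0),
)
pipe.fit(code_slices, labels)
print(pipe.predict(["strcpy ( dst , src )"]))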
2021-05-13
Gomathi, S., Parmar, Nilesh, Devi, Jyoti, Patel, Namrata.  2020.  Detecting Malware Attack on Cloud using Deep Learning Vector Quantization. 2020 12th International Conference on Computational Intelligence and Communication Networks (CICN). :356–361.
In recent times cloud services are used widely, due to which there are many attacks on cloud devices. One of the major attacks is the DDoS (distributed denial-of-service) attack, which mainly targets Memcached, a caching system developed to speed up websites and networks through Memcached's database. The DDoS attack tries to destroy the database by creating a flood of internet traffic at the targeted server end. Attackers send spoofed requests to vulnerable UDP Memcached servers, even manipulating the legitimate identity of the sender. In this work, we have proposed a vector quantization approach, based on supervised deep learning, to detect the Memcached attack performed through malicious firmware on different types of cloud-attached devices. This vector quantization approach detects the DDoS attack performed by malicious firmware on the different types of cloud devices and also classifies which cloud-based applications are vulnerable to attack. The results computed during testing show 98.2% true positives and 0.034% false negatives.

Feng, Xiaohua, Feng, Yunzhong, Dawam, Edward Swarlat.  2020.  Artificial Intelligence Cyber Security Strategy. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :328–333.
Nowadays, STEM (science, technology, engineering and mathematics) has never been treated so seriously. Artificial Intelligence (AI) currently plays an important role in STEM. Under the 2020 COVID-19 pandemic crisis, with coronavirus disease spreading across the world we live in, every government seeks advice from scientists before making its strategic plan. Most countries collect data from hospitals (and care homes and so on in society), carry out data analysis, and use formulas to build AI models that predict potential development patterns in order to shape government strategy. AI security has become essential: if a security attack corrupts the pattern, the model no longer gives a true prediction, which could result in thousands of lives lost. The potential consequence of such an inaccurate forecast could be even worse. Therefore, taking security into account during forecast AI modelling, with step-by-step data governance, is significant. Cyber security should be applied throughout this kind of prediction process using AI deep learning technology. Some in-depth discussion follows. AI security impact is a principal concern in the world, and it is significant for both natural science and social science researchers to consider in the future. In particular, because many services run on online devices, security defenses are essential, and the results should have proper data governance with security. AI security strategy should be a top priority to influence governments and their citizens in the world. AI security will help government strategy makers to balance reasonably between technology, society and politics. In this paper, strategy-related challenges of AI and security are discussed, along with suggestions on AI cyber security and the politics trade-off, from the initial planning stage to near-future further development.
S, Naveen, Puzis, Rami, Angappan, Kumaresan.  2020.  Deep Learning for Threat Actor Attribution from Threat Reports. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). :1–6.
Threat actor attribution is the task of identifying the attacker responsible for an attack. This often requires expert analysis and involves a lot of time. There have been attempts to detect a threat actor using machine learning techniques that use information obtained from the analysis of malware samples. These techniques will only be able to identify the attack, and it is not trivial to guess the attacker, because various attackers may adopt the same attack method. A state-of-the-art method performs attribution of threat actors from text reports using machine learning and NLP techniques on threat intelligence reports. We use the same set of threat reports on Advanced Persistent Threats (APT). In this paper, we propose a Deep Learning architecture to attribute threat actors based on threat reports obtained from various threat intelligence sources. Our work uses neural networks to perform the task of attribution and shows that our method makes the attribution more accurate than other techniques and state-of-the-art methods.
Liu, Shuyong, Jiang, Hongrui, Li, Sizhao, Yang, Yang, Shen, Linshan.  2020.  A Feature Compression Technique for Anomaly Detection Using Convolutional Neural Networks. 2020 IEEE 14th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :39–42.
Anomaly detection classification technology based on deep learning is one of the crucial technologies supporting network security. However, as data volumes increase, this traditional model cannot guarantee that the false alarm rate is minimized while maintaining a high detection rate. Additionally, the imbalanced distribution of abnormal samples leads to an increase in the error rate of the classification results. In this work, since CNNs are effective in network intrusion classification, we embed a compressed feature layer in a CNN (Convolutional Neural Network). The purpose is to improve the efficiency of network intrusion detection. After our model was trained for 55 epochs with the learning rate set to 0.01, the detection rate reached over 98%.
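A minimal sketch of a 1-D CNN with a compressed (bottleneck) feature layer of the kind described, using the paper's stated learning rate of 0.01 (the feature count and class count are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES, N_CLASSES = 41, 5   # assumed: 41 flow features, 5 traffic classes

model = models.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    layers.Conv1D(32, 3, activation="relu"),
    layers.MaxPooling1D(),
    layers.Flatten(),
    layers.Dense(8, activation="relu"),            # compressed feature layer
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(flows, labels, epochs=55)               # 55 epochs as in the paper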