Biblio

Found 431 results

Filters: Keyword is Neural networks
2021-07-27
Kim, Hyeji, Jiang, Yihan, Kannan, Sreeram, Oh, Sewoong, Viswanath, Pramod.  2020.  Deepcode: Feedback Codes via Deep Learning. IEEE Journal on Selected Areas in Information Theory. 1:194–206.
The design of codes for communicating reliably over a statistically well defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly outperforms state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information theoretic insights harmoniously with recurrent-neural-network based encoders and decoders to create novel codes that outperform known codes by 3 orders of magnitude in reliability and achieve a 3dB gain in terms of SNR. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths, (b) composability with known codes, and (c) adaptation to practical constraints. This result also has broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
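For intuition, the sketch below shows the kind of recurrent feedback encoder the abstract describes: at each step, a GRU consumes the current message bit together with the noisy feedback of the previously transmitted symbol. This is not the authors' Deepcode implementation; the hidden size, the tanh power constraint, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeedbackEncoder(nn.Module):
    """Emit one coded symbol per message bit, conditioned on noisy feedback."""
    def __init__(self, hidden=50):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, bits, noise_fwd, noise_fb):
        # bits: (batch, K) in {-1, +1}; noise_*: (batch, K) Gaussian samples
        h = None
        sym_prev = torch.zeros(bits.size(0), 1)
        symbols = []
        for k in range(bits.size(1)):
            # what the decoder received last step, echoed over the noisy feedback link
            feedback = sym_prev + noise_fwd[:, k:k + 1] + noise_fb[:, k:k + 1]
            x = torch.cat([bits[:, k:k + 1], feedback], dim=1).unsqueeze(1)
            y, h = self.rnn(x, h)
            sym_prev = torch.tanh(self.out(y[:, 0]))  # soft power constraint
            symbols.append(sym_prev)
        return torch.cat(symbols, dim=1)              # (batch, K) channel symbols
```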
2021-06-30
Wang, Chenguang, Pan, Kaikai, Tindemans, Simon, Palensky, Peter.  2020.  Training Strategies for Autoencoder-based Detection of False Data Injection Attacks. 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). :1–5.
The security of energy supply in a power grid critically depends on the ability to accurately estimate the state of the system. However, manipulated power flow measurements can potentially hide overloads and bypass the bad data detection scheme, undermining the validity of the estimated states. In this paper, we use an autoencoder neural network to detect anomalous system states and investigate the impact of hyperparameters on the detection performance for false data injection attacks that target power flows. Experimental results on the IEEE 118-bus system indicate that the proposed mechanism achieves satisfactory learning efficiency and detection accuracy.
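The detection idea can be sketched as follows (a hedged illustration, not the paper's code): train an autoencoder on normal measurement vectors and flag samples whose reconstruction error exceeds a threshold chosen on clean validation data. The layer sizes and the measurement dimension are assumptions.

```python
import torch
import torch.nn as nn

class MeasurementAE(nn.Module):
    def __init__(self, n_meas=180):   # hypothetical number of power-flow measurements
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_meas, 64), nn.ReLU(),
                                 nn.Linear(64, 16), nn.ReLU())   # bottleneck
        self.dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                 nn.Linear(64, n_meas))

    def forward(self, x):
        return self.dec(self.enc(x))

@torch.no_grad()
def flag_attacks(ae, x, tau):
    # tau: e.g., a high quantile of reconstruction error on clean validation data
    err = ((ae(x) - x) ** 2).mean(dim=1)
    return err > tau   # True where a false data injection attack is suspected
```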
Zhao, Yi, Jia, Xian, An, Dou, Yang, Qingyu.  2020.  LSTM-Based False Data Injection Attack Detection in Smart Grids. 2020 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC). :638–644.
As a typical cyber-physical system, the smart grid has attracted growing attention due to its safe and efficient operation. The false data injection attack against the energy management system is a new type of cyber-physical attack that can bypass the bad data detector of the smart grid and directly influence the results of state estimation, causing the energy management system to make wrong estimates and thus affecting the stable operation of the power grid. In this paper, we transform the false data injection attack detection problem into a binary classification problem and use a long short-term memory (LSTM) network to construct the detection model. We then use the BP algorithm to update the neural network parameters and apply the dropout method to alleviate overfitting and improve detection accuracy. Simulation results show that the LSTM-based detection method achieves higher detection accuracy than the BPNN-based approach.
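A minimal sketch of such an LSTM-based binary detector follows; the window length, feature dimension, and layer sizes are assumptions rather than the paper's exact architecture.

```python
import torch.nn as nn

class LSTMDetector(nn.Module):
    def __init__(self, n_feat=180, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.drop = nn.Dropout(0.5)        # alleviates overfitting, as in the paper
        self.head = nn.Linear(hidden, 1)   # normal vs. attacked

    def forward(self, x):                  # x: (batch, time_steps, n_feat)
        _, (h, _) = self.lstm(x)
        return self.head(self.drop(h[-1])) # logits; train with BCEWithLogitsLoss
```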
2021-06-28
Li, Meng, Zhong, Qi, Zhang, Leo Yu, Du, Yajuan, Zhang, Jun, Xiang, Yong.  2020.  Protecting the Intellectual Property of Deep Neural Networks with Watermarking: The Frequency Domain Approach. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :402–409.
Similar to other digital assets, deep neural network (DNN) models can suffer from piracy threats initiated by insider and/or outsider adversaries due to their inherent commercial value. DNN watermarking is a promising technique to mitigate this threat to intellectual property. This work focuses on black-box DNN watermarking, in which an owner can only verify ownership by issuing special trigger queries to a remote suspicious model. However, informed attackers who are aware of the watermark and somehow obtain the triggers can forge fake triggers to claim ownership, owing to the poor robustness of triggers and the lack of correlation between the model and the owner's identity. This consideration calls for new watermarking methods that achieve a better trade-off in addressing this discrepancy. In this paper, we exploit frequency-domain image watermarking to generate triggers and build our DNN watermarking algorithm accordingly. Since watermarking in the frequency domain offers high concealment and robustness to signal processing operations, the proposed algorithm is superior to existing schemes in resisting fraudulent claim attacks. Moreover, extensive experimental results on 3 datasets and 8 neural networks demonstrate that the proposed DNN watermarking algorithm achieves similar performance on functionality metrics and better performance on security metrics compared with existing algorithms.
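As a hedged illustration of the underlying idea (not the paper's algorithm), a trigger image can be formed by embedding an owner-keyed pattern into mid-frequency DCT coefficients of a host image; the frequency band and the embedding strength alpha below are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def make_trigger(host, owner_seed, alpha=5.0):
    # host: (H, W) grayscale image; owner_seed ties the pattern to the owner's identity
    rng = np.random.default_rng(owner_seed)
    coeffs = dctn(host.astype(float), norm="ortho")
    band = np.zeros_like(coeffs, dtype=bool)
    band[8:16, 8:16] = True                       # assumed mid-frequency band
    coeffs[band] += alpha * rng.standard_normal(band.sum())
    return np.clip(idctn(coeffs, norm="ortho"), 0, 255)
```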
2021-06-24
Habib ur Rehman, Muhammad, Mukhtar Dirir, Ahmed, Salah, Khaled, Svetinovic, Davor.  2020.  FairFed: Cross-Device Fair Federated Learning. 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). :1–7.
Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset and simulate a cross-device model training setting to detect adversaries in the training network. We used TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with a baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
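One simple way to realize the update-rejection step is sketched below; this is an illustration under assumptions (distance-to-median outlier scoring), not the FairFed protocol itself.

```python
import numpy as np

def aggregate(updates, z_thresh=2.5):
    # updates: list of flattened model-update vectors, one per device
    U = np.stack(updates)
    dist = np.linalg.norm(U - np.median(U, axis=0), axis=1)
    z = (dist - dist.mean()) / (dist.std() + 1e-12)
    keep = z < z_thresh                       # drop suspected adversarial updates
    return U[keep].mean(axis=0), np.where(~keep)[0]   # global update, rejected ids
```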
Połap, Dawid, Srivastava, Gautam, Jolfaei, Alireza, Parizi, Reza M..  2020.  Blockchain Technology and Neural Networks for the Internet of Medical Things. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :508–513.
In today's technological climate, users require fast automation and digitization of results for large amounts of data at record speeds, especially in the field of medicine, where each patient is often asked to undergo many different examinations within one diagnosis or treatment. Each examination can help in the diagnosis or prediction of further disease progression. Furthermore, all data produced by these examinations must be stored somewhere and made available to the various medical practitioners who analyze it, who may be in geographically diverse locations. The current medical climate leans towards remote patient monitoring and AI-assisted diagnosis. To make this possible, medical data should ideally be secured and made accessible to many medical practitioners, which makes it prone to malicious entities. Medical information has inherent value to malicious entities in a variety of ways due to its privacy-sensitive nature. Furthermore, if access to the data is made available in a distributed fashion to AI algorithms (particularly neural networks) for further analysis/diagnosis, the danger to the data may increase (e.g., model poisoning through the introduction of fake data). In this paper, we propose a federated learning approach that uses decentralized learning with blockchain-based security, and we propose training intelligent systems on distributed, locally stored data for the benefit of all patients. Our work in progress hopes to contribute to the latest trend in Internet of Medical Things security and privacy.
Dang, Tran Khanh, Truong, Phat T. Tran, Tran, Pi To.  2020.  Data Poisoning Attack on Deep Neural Network and Some Defense Methods. 2020 International Conference on Advanced Computing and Applications (ACOMP). :15–22.
In recent years, Artificial Intelligence has disruptively changed information technology and software engineering, with a proliferation of technologies and applications based on it. However, recent research shows that AI models in general, and Deep Learning models (the greatest invention since sliced bread) in particular, are vulnerable to being hacked and can be misused for bad purposes. In this paper, we carry out a brief review of the data poisoning attack, one of two dangerous recently emerging attacks, and the state-of-the-art defense methods for this problem. Finally, we discuss current challenges and future developments.
Ali, Muhammad, Hu, Yim-Fun, Luong, Doanh Kim, Oguntala, George, Li, Jian-Ping, Abdo, Kanaan.  2020.  Adversarial Attacks on AI based Intrusion Detection System for Heterogeneous Wireless Communications Networks. 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC). :1–6.
It has been recognized that artificial intelligence (AI) will play an important role in future societies. AI has already been incorporated in many industries to improve business processes and automation. Although the aviation industry has successfully implemented flight management systems and autopilot to automate flight operations, full embracement of AI is expected to remain a challenge. Given the rigorous validation process and the requirements for the highest level of safety standards and risk management, AI needs to prove itself safe to operate. This paper addresses the safety issues of AI deployment in an aviation network compatible with the Future Communication Infrastructure, which utilizes heterogeneous wireless access technologies for communications between the aircraft and the ground networks. It further considers the exploitation of software-defined networking (SDN) technologies in the ground network, while the adoption of SDN in the airborne network can be optional. Due to the centralized management in an SDN-based network, the SDN controller can become a single point of failure or a target for cyber attacks. To counter such attacks, an intrusion detection system that utilises AI techniques, more specifically a deep neural network (DNN), is considered. However, an adversary can target the AI-based intrusion detection system itself. This paper examines the impact of AI security attacks on the performance of the DNN algorithm. Poisoning attacks targeting the NSL-KDD datasets used to train the DNN algorithm were launched at the intrusion detection system. Results showed that the performance of the DNN algorithm was significantly degraded in terms of the mean square error, accuracy rate, precision rate and recall rate.
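The kind of poisoning experiment described can be sketched as follows (the flip rate is a hypothetical parameter, and label flipping stands in for whatever poisoning strategy the authors used): corrupt a fraction of training labels, retrain, and compare detector metrics before and after.

```python
import numpy as np

def flip_labels(y, rate=0.1, n_classes=5, seed=0):
    # Randomly reassign a fraction of labels to a different class.
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, n_classes, idx.size)) % n_classes
    return y_poisoned
```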
Lee, Dongseop, Kim, Hyunjin, Ryou, Jaecheol.  2020.  Poisoning Attack on Show and Tell Model and Defense Using Autoencoder in Electric Factory. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :538–541.
Recently, deep neural network technology has been developed and used in various fields. Image recognition models can be used for automatic safety checks at an electric factory. However, as deep neural networks develop, the importance of security increases. A poisoning attack is one such security problem: an attack that breaks down a model by entering malicious data into its training data set. This paper generates adversarial data that shifts feature values toward different targets by slightly manipulating RGB values, launches poisoning attacks on one of the image recognition models, the Show and Tell model, and then uses an autoencoder to defend against the adversarial data.
2021-05-26
Boursinos, Dimitrios, Koutsoukos, Xenofon.  2020.  Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems. 2020 IEEE Security and Privacy Workshops (SPW). :228—233.

Cyber-physical systems (CPS) can benefit from the use of learning-enabled components (LECs) such as deep neural networks (DNNs) for perception and decision-making tasks. However, DNNs are typically non-transparent, making reasoning about their predictions very difficult, and hence their application to safety-critical systems is very challenging. LECs could be integrated more easily into CPS if their predictions could be complemented with a confidence measure that quantifies how much we trust their output. The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP). We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set. These representations are then used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet. The approach is evaluated using a robotic navigation benchmark, and the results show that trusted confidence bounds can be computed efficiently in real time.
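The ICP machinery can be sketched as follows, under assumed choices (k-nearest-neighbor nonconformity on the learned embeddings; triplet training itself is omitted):

```python
import numpy as np

def nonconformity(z, z_cal_same_label, k=5):
    # Mean distance from a test embedding to its k nearest calibration
    # embeddings that carry the candidate label.
    d = np.sort(np.linalg.norm(z_cal_same_label - z, axis=1))[:k]
    return d.mean()

def p_value(alpha_test, alphas_cal):
    # Fraction of calibration scores at least as nonconforming as the test score.
    return (np.sum(alphas_cal >= alpha_test) + 1) / (len(alphas_cal) + 1)

# Labels whose p-value exceeds the chosen significance level form the set
# prediction; its size and credibility yield the confidence bound on the LEC output.
```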

2021-05-25
Cai, Feiyang, Li, Jiani, Koutsoukos, Xenofon.  2020.  Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression. 2020 IEEE Security and Privacy Workshops (SPW). :208–214.

Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) since they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause a model to make false predictions. The paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture makes it possible to take into consideration both the input and the neural network prediction for detecting adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
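A hedged sketch of the detection step follows; the trained VAE is abstracted behind a hypothetical callable `vae` that returns a fresh stochastic reconstruction on each call.

```python
import numpy as np

def vae_score(vae, x, n_samples=10):
    # Average reconstruction error over several latent samples.
    errs = [np.mean((vae(x) - x) ** 2) for _ in range(n_samples)]
    return float(np.mean(errs))

def is_out_of_distribution(score, cal_scores, eps=0.05):
    # Inductive conformal p-value against calibration scores from clean data.
    p = (np.sum(np.asarray(cal_scores) >= score) + 1) / (len(cal_scores) + 1)
    return p < eps   # small p-value: likely adversarial or out-of-distribution
```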

2021-05-20
Mheisn, Alaa, Shurman, Mohammad, Al-Ma’aytah, Abdallah.  2020.  WSNB: Wearable Sensors with Neural Networks Located in a Base Station for IoT Environment. 2020 7th International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :1–4.
The Internet of Things (IoT) is a recently introduced system paradigm that includes different smart devices and applications, especially in smart cities, e.g., manufacturing, homes, and offices. To improve their awareness capabilities, it is attractive to add more sensors to their framework. In this paper, we propose adding a new wearable sensor connected wirelessly to a neural network located at the base station (WSNB). WSNB enables the added sensor to refine its labels through active learning. The new sensors achieve an average accuracy of 93.81%, which is 4.5% higher than the existing method, removing human support and increasing the life cycle of the sensors by using a neural network approach at the base station.
2021-05-18
Ogawa, Yuji, Kimura, Tomotaka, Cheng, Jun.  2020.  Vulnerability Assessment for Machine Learning Based Network Anomaly Detection System. 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). :1–2.
In this paper, we assess the vulnerability of network anomaly detection systems that use machine learning methods. Although these systems outperform existing methods that do not use machine learning, the vulnerability of machine learning methods themselves is a growing concern, as researchers in image processing have demonstrated. If the vulnerabilities of the machine learning used in network anomaly detection methods are exploited by attackers, large security threats are likely to emerge in the near future. Therefore, in this paper we clarify how attacks on the vulnerabilities of machine-learning-based network anomaly detection methods affect their performance.
2021-05-13
Wenhui, Sun, Kejin, Wang, Aichun, Zhu.  2020.  The Development of Artificial Intelligence Technology And Its Application in Communication Security. 2020 International Conference on Computer Engineering and Application (ICCEA). :752–756.
Artificial intelligence has been widely used in industries such as smart manufacturing, medical care and home furnishings; among these, its application in communication security is particularly important. This paper further explores artificial intelligence technology and its applications, giving a detailed analysis of its development, standardization and application.
Wu, Xiaohe, Calderon, Juan, Obeng, Morrison.  2021.  Attribution Based Approach for Adversarial Example Generation. SoutheastCon 2021. :1–6.
Neural networks with deep architectures have been used to construct state-of-the-art classifiers that can match human-level accuracy in areas such as image classification. However, many of these classifiers can be fooled by examples slightly modified from their original forms. In this work, we propose a novel approach for generating adversarial examples that makes use of only the attribution information of the features and perturbs only features that are highly influential to the output of the classifier. We call this approach Attribution Based Adversarial Generation (ABAG). To demonstrate the effectiveness of this approach, three somewhat arbitrary algorithms are proposed and examined. In the first algorithm, all non-zero attributions are utilized and the associated features perturbed; in the second, only the top-n most positive and top-n most negative attributions are used and the corresponding features perturbed; and in the third, the level of perturbation is increased iteratively until an adversarial example is discovered. All three algorithms are implemented, and experiments are performed on the well-known MNIST dataset. Experiment results show that adversarial examples can be generated very efficiently, proving the validity and efficacy of ABAG - utilizing attributions for the generation of adversarial examples. Furthermore, as shown by examples, ABAG can be adapted to provide a systematic search approach that generates adversarial examples by perturbing a minimal number of features.
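A sketch in the spirit of the third ABAG variant follows, with plain gradient saliency standing in for the unspecified attribution method; the top-n size, step size, iteration budget, and a batch of one are all assumptions.

```python
import torch
import torch.nn.functional as F

def abag_attack(model, x, y, top_n=20, step=0.05, max_iter=40):
    # x: (1, ...) input; y: (1,) true label. Perturb only the most influential
    # features, increasing the total perturbation until the prediction flips.
    x_adv = x.clone()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        attr = grad.flatten().abs()                # attribution proxy: |gradient|
        idx = attr.topk(top_n).indices
        delta = torch.zeros_like(attr)
        delta[idx] = step * grad.flatten()[idx].sign()
        x_adv = (x_adv.detach() + delta.view_as(x_adv)).clamp(0, 1)
        if model(x_adv).argmax(dim=1) != y:        # adversarial example found
            break
    return x_adv
```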
Monakhov, Yuri, Monakhov, Mikhail, Telny, Andrey, Mazurok, Dmitry, Kuznetsova, Anna.  2020.  Improving Security of Neural Networks in the Identification Module of Decision Support Systems. 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). :571–574.
In recent years, neural networks have been applied to a variety of tasks. Deep learning algorithms provide state-of-the-art performance in computer vision, NLP, speech recognition, speaker recognition and many other fields. In spite of this good performance, neural networks have a significant drawback: they have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. While imperceptible to the human eye, such perturbations lead to a significant drop in classification accuracy, as demonstrated by many studies of neural network security. Considering the pros and cons of neural networks, as well as the variety of their applications, developing methods to improve the robustness of neural networks against adversarial attacks has become an urgent task. In this article the authors propose a “minimalistic” attacker model of the decision support system's identification unit, adaptive recommendations on security enhancement, and a set of protective methods. The suggested methods allow for a significant increase in classification accuracy under adversarial attacks, as demonstrated by the experiment outlined in this article.
Li, Yizhi.  2020.  Research on Application of Convolutional Neural Network in Intrusion Detection. 2020 7th International Forum on Electrical Engineering and Automation (IFEEA). :720–723.
At present, our lives are almost inseparable from the network, which provides a lot of convenience for daily life. However, a variety of network security incidents occur very frequently. In recent years, with the continuous development of neural network technology, more and more researchers have applied neural networks to intrusion detection, which has developed into a new research direction in intrusion detection. As long as the neural network is provided with input data including network data packets, through the process of self-learning, the neural network can separate abnormal data features and effectively detect abnormal data. Therefore, this article proposes an intrusion detection method based on deep convolutional neural networks (CNN), which is tested on public data sets. The results show that the model has a higher accuracy rate and a lower false negative rate than traditional intrusion detection methods.
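A minimal sketch of such a CNN detector (the input encoding and architecture are assumptions):

```python
import torch.nn as nn

# Treat a fixed-length slice of packet bytes, scaled to [0, 1], as a 1-D signal.
cnn = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(64 * 64, 2),   # 256 input bytes -> length 64 after two poolings
)
# Input: (batch, 1, 256); output: normal vs. intrusion logits.
```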
Sheptunov, Sergey A., Sukhanova, Natalia V..  2020.  The Problems of Design and Application of Switching Neural Networks in Creation of Artificial Intelligence. 2020 International Conference Quality Management, Transport and Information Security, Information Technologies (IT QM IS). :428–431.
A new switching architecture for neural networks is proposed, in which the network consists of neurons and switchers. The goal is to reduce the expense of artificial neural network design and training: realizing complex models, algorithms and management methods requires neural networks of large size, the number of “everyone with everyone” interconnection links grows with the number of neurons, training big neural networks requires supercomputer resources, and training time also grows with the number of neurons. Switching neural networks are divided into fragments connected by the switchers, and the network is trained fragment by fragment. On the basis of switching neural networks, associative memory devices were designed with a number of neurons comparable to the human brain.
Sheng, Mingren, Liu, Hongri, Yang, Xu, Wang, Wei, Huang, Junheng, Wang, Bailing.  2020.  Network Security Situation Prediction in Software Defined Networking Data Plane. 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications( AEECA). :475–479.
Software-Defined Networking (SDN) simplifies network management by separating the control plane from the data forwarding plane. However, this plane separation introduces many new loopholes in the SDN data plane. To facilitate proactive measures that reduce the damage of network security events, this paper proposes a security situation prediction method based on a particle swarm optimization algorithm and a long short-term memory neural network for security events on the SDN data plane. Based on the statistical information of security incidents, the analytic hierarchy process is used to calculate the SDN data plane security situation risk value. The historical data of the security situation risk value is then used to build an artificial neural network prediction model. Finally, the prediction model is used to predict the future security situation risk value. Experiments show that this method has good prediction accuracy and stability.
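The particle swarm optimization component can be sketched generically as below; LSTM training is abstracted behind `loss_of`, a hypothetical function mapping a normalized hyperparameter vector to validation loss, and the PSO constants are conventional defaults rather than the paper's values.

```python
import numpy as np

def pso(loss_of, dim=2, n=10, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n, dim))           # particle positions in [0, 1]^dim
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([loss_of(p) for p in x])
    g = pbest[pbest_val.argmin()]             # global best position
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        vals = np.array([loss_of(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g   # best hyperparameter vector found
```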
2021-05-03
Naik, Nikhil, Nuzzo, Pierluigi.  2020.  Robustness Contracts for Scalable Verification of Neural Network-Enabled Cyber-Physical Systems. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1–12.
The proliferation of artificial intelligence based systems in all walks of life raises concerns about their safety and robustness, especially for cyber-physical systems including multiple machine learning components. In this paper, we introduce robustness contracts as a framework for compositional specification and reasoning about the robustness of cyber-physical systems based on neural network (NN) components. Robustness contracts can encompass and generalize a variety of notions of robustness which were previously proposed in the literature. They can seamlessly apply to NN-based perception as well as deep reinforcement learning (RL)-enabled control applications. We present a sound and complete algorithm that can efficiently verify the satisfaction of a class of robustness contracts on NNs by leveraging notions from Lagrangian duality to identify system configurations that violate the contracts. We illustrate the effectiveness of our approach on the verification of NN-based perception systems and deep RL-based control systems.
Paulsen, Brandon, Wang, Jingbo, Wang, Jiawei, Wang, Chao.  2020.  NEURODIFF: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :784–796.
As neural networks make their way into safety-critical systems, where misbehavior can lead to catastrophes, there is a growing interest in certifying the equivalence of two structurally similar neural networks - a problem known as differential verification. For example, compression techniques are often used in practice for deploying trained neural networks on computationally- and energy-constrained devices, which raises the question of how faithfully the compressed network mimics the original network. Unfortunately, existing methods either focus on verifying a single network or rely on loose approximations to prove the equivalence of two networks. Due to overly conservative approximation, differential verification lacks scalability in terms of both accuracy and computational cost. To overcome these problems, we propose NEURODIFF, a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification on feed-forward ReLU networks while achieving many orders-of-magnitude speedup. NEURODIFF has two key contributions. The first one is new convex approximations that more accurately bound the difference of two networks under all possible inputs. The second one is judicious use of symbolic variables to represent neurons whose difference bounds have accumulated significant error. We find that these two techniques are complementary, i.e., when combined, the benefit is greater than the sum of their individual benefits. We have evaluated NEURODIFF on a variety of differential verification tasks. Our results show that NEURODIFF is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
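For contrast, the kind of loose baseline NEURODIFF improves upon can be sketched with naive interval propagation (weights and biases given as lists of numpy arrays); bounding each network separately and subtracting, as here, is exactly the over-conservative approach that fine-grained approximations tighten.

```python
import numpy as np

def interval_forward(weights, biases, lo, hi):
    # Propagate an input box through a feed-forward ReLU network.
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def naive_diff_bound(net1, net2, lo, hi):
    l1, h1 = interval_forward(*net1, lo, hi)
    l2, h2 = interval_forward(*net2, lo, hi)
    return l1 - h2, h1 - l2                   # loose bound on f1(x) - f2(x)
```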
2021-04-27
Noh, S., Rhee, K.-H..  2020.  Implicit Authentication in Neural Key Exchange Based on the Randomization of the Public Blockchain. 2020 IEEE International Conference on Blockchain (Blockchain). :545–549.

A neural key exchange is a secret key exchange technique based on neural synchronization of neural networks. Since the neural key exchange is based on synchronizing weights within the neural network structure, the security of the algorithm does not depend on the attacker's computational capabilities. However, due to the neural key exchange's repetitive mutual-learning process, using explicit user authentication methods, such as a public key certificate, is inefficient due to high communication overhead. Implicit authentication based on information that only authorized users know can significantly reduce this overhead. However, realistic methods for distributing the secret information for authentication among authorized users have been lacking. In this paper, we propose the concept of distributing shared secret values for implicit authentication based on the randomness of the public blockchain. Moreover, we present a method to prevent the unintentional disclosure of shared secret values to third parties in the network due to the transparency of the blockchain.
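For context, one mutual-learning step of a Tree Parity Machine, the classic construction behind neural key exchange, is sketched below; the parameters K, N, L are typical illustrative choices, not the paper's.

```python
import numpy as np

K, N, L = 3, 100, 4   # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    # w, x: (K, N) integer weights and +/-1 inputs.
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))         # hidden outputs, parity bit tau

def hebbian_update(w, x, sigma, tau_self, tau_peer):
    if tau_self == tau_peer:                  # learn only when outputs agree
        for k in range(K):
            if sigma[k] == tau_self:
                w[k] = np.clip(w[k] + sigma[k] * x[k], -L, L)
    return w

# Both parties repeat this with shared random inputs until their weights
# synchronize; the synchronized weights then serve as the shared secret key.
```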

Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M..  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we aim at investigating the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. the adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., without the knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. the adversarial examples. Our work opens new avenues of research towards the robustness of the SNNs, considering their similarities to the human brain's functionality.
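A hedged sketch of a greedy, query-only attack loop of the kind described follows; the pixel sampling, step size, and query budget are assumptions, and `predict` is a hypothetical black-box interface returning class probabilities.

```python
import numpy as np

def greedy_attack(predict, x, y_true, step=0.1, budget=200, seed=0):
    rng = np.random.default_rng(seed)
    x_adv = x.copy().ravel()
    for _ in range(budget):
        probs = predict(x_adv.reshape(x.shape))
        if probs.argmax() != y_true:
            break                                        # misclassification achieved
        best_i, best_s, best_drop = None, 0.0, 0.0
        for i in rng.choice(x_adv.size, size=min(32, x_adv.size), replace=False):
            for s in (step, -step):                      # try brightening/darkening
                trial = x_adv.copy()
                trial[i] = np.clip(trial[i] + s, 0.0, 1.0)
                drop = probs[y_true] - predict(trial.reshape(x.shape))[y_true]
                if drop > best_drop:
                    best_i, best_s, best_drop = i, s, drop
        if best_i is None:
            break                                        # no sampled pixel helps
        x_adv[best_i] = np.clip(x_adv[best_i] + best_s, 0.0, 1.0)
    return x_adv.reshape(x.shape)
```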
2021-04-08
Ayub, M. A., Continella, A., Siraj, A..  2020.  An I/O Request Packet (IRP) Driven Effective Ransomware Detection Scheme using Artificial Neural Network. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :319–324.
In recent times, there has been a global surge of ransomware attacks targeted at industries of various types and sizes from retail to critical infrastructure. Ransomware researchers are constantly coming across new kinds of ransomware samples every day and discovering novel ransomware families out in the wild. To mitigate this ever-growing menace, academia and industry-based security researchers have been utilizing unique ways to defend against this type of cyber-attacks. I/O Request Packet (IRP), a low-level file system I/O log, is a newly found research paradigm for defense against ransomware that is being explored frequently. As such in this study, to learn granular level, actionable insights of ransomware behavior, we analyze the IRP logs of 272 ransomware samples belonging to 18 different ransomware families captured during individual execution. We further our analysis by building an effective Artificial Neural Network (ANN) structure for successful ransomware detection by learning the underlying patterns of the IRP logs. We evaluate the ANN model with three different experimental settings to prove the effectiveness of our approach. The model demonstrates outstanding performance in terms of accuracy, precision score, recall score, and F1 score, i.e., in the range of 99.7%±0.2%.
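A minimal sketch of an ANN over IRP-derived features; the feature set (e.g., counts of IRP major-function codes per process within a time window) and layer sizes are assumptions.

```python
import torch.nn as nn

ann = nn.Sequential(
    nn.Linear(30, 64), nn.ReLU(),   # 30 hypothetical IRP-derived features
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),               # ransomware vs. benign (logit)
)
```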
2021-03-30
Ashiku, L., Dagli, C..  2020.  Agent Based Cybersecurity Model for Business Entity Risk Assessment. 2020 IEEE International Symposium on Systems Engineering (ISSE). :1–6.

Computer networks and surging advancements in innovative information technology constitute a critical infrastructure for network transactions of business entities. Information exchange and data access through such infrastructure are scrutinized by adversaries for vulnerabilities that lead to cyber-attacks. This paper presents an agent-based system model to conceptualize and extract the explicit and latent structure of complex enterprise systems, as well as human interactions within the system, to determine common vulnerabilities of the entity. The model captures emergent behavior resulting from interactions of multiple network agents, including the number of workstations; regular, administrator and third-party users; external and internal attacks; defense mechanisms for the network setting; and many other parameters. A risk-based approach to modelling the cybersecurity of a business entity is utilized to derive the rate of attacks. A neural network model generalizes the type of attack based on network traffic features, allowing dynamic state changes. Rules of engagement that generate self-organizing behavior are leveraged to appoint a defense mechanism suitable for the attack state of the model. The effectiveness of the model is depicted by a time-state chart that shows the number of affected assets for the different types of attacks triggered by the entity risk and the time it takes to revert to the normal state. The model also associates a relevant cost per incident occurrence, deriving the need for enhanced security solutions.