Biblio

Filters: Keyword is deep neural networks
2023-07-12
Xiang, Peng, Peng, ChengWei, Li, Qingshan.  2022.  Hierarchical Association Features Learning for Network Traffic Recognition. 2022 International Conference on Information Processing and Network Provisioning (ICIPNP). :129–133.
With the development of network technology, identifying specific traffic has become important for network monitoring and security. However, designing feature sets that accurately describe network traffic remains an open problem. Most existing approaches cannot identify targets effectively and do not perform well in complex, dynamic network environments. To address these problems, we propose a novel method that learns correlation features of network traffic based on a hierarchical structure. First, the method learns spatial-temporal features using convolutional neural networks (CNNs) and bidirectional long short-term memory networks (Bi-LSTMs); it then builds a network topology to capture dependency characteristics between sessions and learns context-related features through graph attention networks (GATs). Finally, each network traffic session is classified using a fully connected network. Experimental results show that our method effectively improves detection ability and achieves better overall classification performance.
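To make the hierarchical pipeline concrete, here is a minimal PyTorch sketch of the first, spatial-temporal stage: a 1-D CNN over packet bytes feeding a Bi-LSTM over the packet sequence, with the GAT stage over the session graph omitted. All shapes and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpatialTemporalEncoder(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Spatial features: 1-D convolution over the byte dimension of each packet.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(16),
        )
        # Temporal features: Bi-LSTM over the packet sequence of a session.
        self.bilstm = nn.LSTM(32 * 16, 64, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                      # x: (batch, packets, bytes)
        b, p, n = x.shape
        feats = self.cnn(x.reshape(b * p, 1, n)).reshape(b, p, -1)
        _, (h, _) = self.bilstm(feats)
        session = torch.cat([h[0], h[1]], dim=1)   # concat both directions
        return self.classifier(session)

model = SpatialTemporalEncoder()
logits = model(torch.randn(4, 20, 256))        # 4 sessions, 20 packets, 256 bytes each
print(logits.shape)                            # torch.Size([4, 10])
```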
2023-06-22
Bennet, Ms. Deepthi Tabitha, Bennet, Ms. Preethi Samantha, Anitha, D.  2022.  Securing Smart City Networks - Intelligent Detection Of DDoS Cyber Attacks. 2022 5th International Conference on Contemporary Computing and Informatics (IC3I). :1575–1580.

A distributed denial-of-service (DDoS) attack is a malicious attempt by attackers to disrupt the normal traffic of a targeted server, service or network. This is done by overwhelming the target and its surrounding infrastructure with a flood of Internet traffic. Multiple compromised computer systems (bots or zombies) then act as sources of attack traffic. Exploited machines can include computers and other network resources such as IoT devices. The attack results in either degraded network performance or a total service outage of critical infrastructure, which can lead to heavy financial losses and reputational damage. These attacks maximise effectiveness by controlling the affected systems remotely and establishing a network of bots, called a botnet. It is very difficult to separate the attack traffic from normal traffic, so early detection is essential for successful mitigation, giving detection a very important role in cybersecurity. This can be done by deploying machine learning or deep learning models to monitor the traffic data. We propose using various machine learning and deep learning algorithms to analyse traffic patterns and separate malicious traffic from normal traffic. Two suitable datasets have been identified (the DDoS attack SDN dataset and the CICDDoS2019 dataset). All essential preprocessing is performed on both datasets, and feature selection is performed before detection techniques are applied. Eight different neural network, ensemble, and machine learning models are chosen and the datasets are analysed. The best model is chosen based on the performance metrics (a deep neural network model), and a next-best alternative (a hypermodel) is also suggested. Optimisation by hyperparameter tuning further enhances the accuracy. Based on the nature of the attack and the intended target, suitable mitigation procedures can then be deployed.
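A minimal sketch of the pipeline the abstract describes (preprocessing, feature selection, then a dense neural network), using scikit-learn; the synthetic features below stand in for the flow-level features of the SDN and CICDDoS2019 datasets.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))            # stand-in for flow features
y = rng.integers(0, 2, size=1000)          # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(
    StandardScaler(),                      # essential preprocessing
    SelectKBest(f_classif, k=20),          # feature selection step
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```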

2022-11-08
Yang, Shaofei, Liu, Longjun, Li, Baoting, Sun, Hongbin, Zheng, Nanning.  2020.  Exploiting Variable Precision Computation Array for Scalable Neural Network Accelerators. 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS). :315–319.
In this paper, we present a flexible Variable Precision Computation Array (VPCA) component for different accelerators, which leverages a sparsification scheme for activations and a low-bit serial-parallel combination computation unit to improve the efficiency and resiliency of accelerators. The VPCA can dynamically decompose the width of activations/weights (from 32-bit down to 3-bit, depending on the accelerator) into 2-bit serial computation units, while the 2-bit computing units can be combined in parallel for high throughput. We propose an on-the-fly compressing and calculating strategy, SLE-CLC (single lane encoding, cross lane calculation), which further improves the performance of 2-bit parallel computing. Experimental results on image classification datasets show that VPCA outperforms DaDianNao, Stripes, and Loom-2bit by 4.67×, 2.42×, and 1.52× respectively on convolution layers, without other overhead.
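The 2-bit decomposition is easy to illustrate in software. The toy Python below (not the hardware design) splits an unsigned operand into base-4 digits and accumulates shifted partial products, skipping zero digits the way the sparsification scheme would.

```python
def two_bit_digits(value, n_bits):
    """Split an unsigned n-bit integer into (n_bits // 2) two-bit digits, LSB first."""
    return [(value >> (2 * i)) & 0b11 for i in range(n_bits // 2)]

def serial_multiply(activation, weight, n_bits=8):
    acc = 0
    for i, digit in enumerate(two_bit_digits(activation, n_bits)):
        if digit:                          # sparsity: zero digits cost nothing
            acc += (digit * weight) << (2 * i)
    return acc

assert serial_multiply(173, 59) == 173 * 59
print(two_bit_digits(173, 8))  # [1, 3, 2, 2] since 173 = 0b10101101
```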
2022-09-30
Alqurashi, Saja, Shirazi, Hossein, Ray, Indrakshi.  2021.  On the Performance of Isolation Forest and Multi Layer Perceptron for Anomaly Detection in Industrial Control Systems Networks. 2021 8th International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :1–6.
With an increasing number of adversarial attacks against Industrial Control Systems (ICS) networks, enhancing the security of such systems is invaluable. Although attack prevention strategies are often in place, protecting against all attacks, especially zero-day attacks, is becoming impossible. Intrusion Detection Systems (IDS) are needed to detect such attacks promptly. Machine learning-based detection systems, especially deep learning algorithms, have shown promising results and outperformed other approaches. In this paper, we study the efficacy of a deep learning approach, namely, Multi Layer Perceptron (MLP), in detecting abnormal behaviors in ICS network traffic. We focus on very common reconnaissance attacks in ICS networks. In such attacks, the adversary focuses on gathering information about the targeted network. To evaluate our approach, we compare MLP with Isolation Forest (iForest), a statistical machine learning approach. Our proposed deep learning approach achieves an accuracy of more than 99% while iForest achieves only 75%. This helps to reinforce the promise of using deep learning techniques for anomaly detection.
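A hedged sketch of the comparison using scikit-learn; synthetic imbalanced data stands in for the ICS traffic features, and no claim is made about reproducing the paper's 99%/75% figures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)    # y=1 stands in for reconnaissance traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("MLP accuracy:", mlp.score(X_te, y_te))

# iForest is unsupervised: fit on (mostly benign) traffic, flag outliers as attacks.
iso = IsolationForest(contamination=0.1, random_state=0).fit(X_tr)
pred = (iso.predict(X_te) == -1).astype(int)  # -1 = anomaly -> label 1
print("iForest accuracy:", (pred == y_te).mean())
```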
2022-06-09
Yan, Longchuan, Zhang, Zhaoxia, Huang, Huige, Yuan, Xiaoyu, Peng, Yuanlong, Zhang, Qingyun.  2021.  An Improved Deep Pairwise Supervised Hashing Algorithm for Fast Image Retrieval. 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA). 2:1152–1156.
In recent years, hashing algorithms have been widely researched and have made considerable progress in large-scale image retrieval tasks due to their advantages of convenient storage and fast calculation. Nowadays, most researchers use deep convolutional neural networks (CNNs) to perform feature learning and hash coding learning simultaneously for image retrieval, and deep hashing methods based on deep CNNs perform much better than traditional manual-feature hashing methods. However, most methods are designed to handle simple binary similarity and to decrease quantization error, ignoring that the features of similar images and the generated hashing codes are not compact enough. To enhance the performance of CNN-based hashing algorithms for large-scale image retrieval, this paper proposes a new deep supervised hashing algorithm in which a novel channel attention mechanism is added and the loss function is elaborately redesigned to generate compact binary codes. Experiments show that, compared with existing hashing methods, this method performs better on two large-scale image datasets, CIFAR-10 and NUS-WIDE.
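An illustrative PyTorch sketch of the two named ingredients, a channel attention block (squeeze-and-excite style) and a hash head whose tanh outputs are binarized by sign at retrieval time; the backbone and the redesigned loss are omitted, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                      # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze -> per-channel weights
        return x * w[:, :, None, None]         # re-weight feature maps

class HashHead(nn.Module):
    def __init__(self, in_dim=64, n_bits=48):
        super().__init__()
        self.proj = nn.Linear(in_dim, n_bits)
    def forward(self, feats):
        return torch.tanh(self.proj(feats))    # relaxed codes in (-1, 1) for training

feats = torch.randn(8, 64, 7, 7)
attended = ChannelAttention(64)(feats)
relaxed = HashHead()(attended.mean(dim=(2, 3)))
binary = torch.sign(relaxed)                   # final binary codes at retrieval time
print(binary.shape)                            # torch.Size([8, 48])
```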
2022-06-08
Wang, Runhao, Kang, Jiexiang, Yin, Wei, Wang, Hui, Sun, Haiying, Chen, Xiaohong, Gao, Zhongjie, Wang, Shuning, Liu, Jing.  2021.  DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :188–195.

Deep Neural Networks (DNNs) have achieved great success in solving several challenging problems in recent years. It is well known that training a DNN model from scratch requires a lot of data and computational resources. However, using a pre-trained model directly, or using it to initialize weights, costs less time and often gets better results. Therefore, well pre-trained DNN models are valuable intellectual property that we should protect. In this work, we propose DeepTrace, a framework for model owners to secretly fingerprint the target DNN model using a special trigger set and to verify ownership from its outputs. An embedded fingerprint can be extracted to uniquely identify the model owner and authorized users. Our framework benefits from both white-box and black-box verification, which makes it useful whether or not the model details are known. We evaluate the performance of DeepTrace on two different datasets, with different DNN architectures. Our experiments show that, with the advantage of combining white-box and black-box verification, our framework has very little effect on model accuracy and is robust against different model modifications. It also consumes very few computing resources when extracting the fingerprint.
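The black-box side of such verification reduces to querying the suspect model with secret triggers and counting label matches. The sketch below is a generic illustration of that step; the threshold and decision rule are placeholders, not DeepTrace's actual protocol.

```python
import numpy as np

def verify_fingerprint(model_predict, trigger_inputs, fingerprint_labels,
                       threshold=0.9):
    """Query the suspect model on the trigger set and compare to fingerprint labels."""
    preds = np.array([model_predict(x) for x in trigger_inputs])
    match_rate = (preds == np.asarray(fingerprint_labels)).mean()
    return match_rate, match_rate >= threshold

# Toy usage: a "model" that answers the fingerprint correctly ~95% of the time.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=100)
noisy = lambda x: labels[x] if rng.random() < 0.95 else int(rng.integers(0, 10))
rate, owned = verify_fingerprint(noisy, list(range(100)), labels)
print(f"trigger match rate {rate:.2f}, ownership claim: {owned}")
```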

2022-04-19
Shafique, Muhammad, Marchisio, Alberto, Wicaksana Putra, Rachmad Vidya, Hanif, Muhammad Abdullah.  2021.  Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework ICCAD Special Session Paper. 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1–9.
Security and privacy concerns, along with the amount of data that must be processed on a regular basis, have pushed processing to the edge of computing systems. Deploying advanced neural networks (NNs), such as deep neural networks (DNNs) and spiking neural networks (SNNs), that offer state-of-the-art results on resource-constrained edge devices is challenging due to stringent memory and power/energy constraints. Moreover, these systems are required to maintain correct functionality under diverse security and reliability threats. This paper first discusses existing approaches to address energy efficiency, reliability, and security issues at different system layers, i.e., hardware (HW) and software (SW). Afterward, we discuss how to further improve the performance (latency) and energy efficiency of Edge AI systems through HW/SW-level optimizations such as pruning, quantization, and approximation. To address reliability threats (like permanent and transient faults), we highlight cost-effective mitigation techniques, like fault-aware training and mapping. Moreover, we briefly discuss effective detection and protection techniques to address security threats (like model and data corruption). Towards the end, we discuss how these techniques can be combined in an integrated cross-layer framework for realizing robust and energy-efficient Edge AI systems.
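As a concrete (generic) example of the SW-level optimizations mentioned, the sketch below applies PyTorch's built-in magnitude pruning and int8 dynamic quantization to a toy model; it illustrates the techniques, not the paper's cross-layer framework.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 50% smallest-magnitude weights of each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")         # make the sparsity permanent

# Quantization: int8 dynamic quantization of the Linear layers for inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)    # torch.Size([1, 10])
```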
2022-03-22
Bai, Zhihao, Wang, Ke, Zhu, Hang, Cao, Yinzhi, Jin, Xin.  2021.  Runtime Recovery of Web Applications under Zero-Day ReDoS Attacks. 2021 IEEE Symposium on Security and Privacy (SP). :1575–1588.
Regular expression denial of service (ReDoS), which exploits the super-linear running time of matching regular expressions against carefully crafted inputs, is an emerging class of DoS attacks on web services. One challenging question for a victim web service under ReDoS attack is how to quickly recover normal operation, especially after zero-day attacks exploiting previously unknown vulnerabilities. In this paper, we present RegexNet, the first payload-based, automated, reactive ReDoS recovery system for web services. RegexNet adopts a learning model, updated constantly in a feedback loop during runtime, to classify the payloads of upcoming requests, including request contents and database query responses. If a request is detected as a cause of ReDoS, RegexNet migrates it to a sandbox and isolates its execution for a fast, first-measure recovery. We have implemented a RegexNet prototype and integrated it with HAProxy and Node.js. Evaluation results show that RegexNet is effective in recovering the performance of web services against zero-day ReDoS attacks, responsive in reacting to attacks within a minute, and resilient to different ReDoS attack types, including adaptive ones designed to evade RegexNet on purpose.
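The vulnerability class RegexNet reacts to is easy to demonstrate. The snippet below times Python's backtracking matcher on the classic catastrophic pattern `(a+)+$`; matching time grows exponentially with input length, which is exactly the super-linear behavior an attacker exploits.

```python
import re
import time

EVIL = re.compile(r"(a+)+$")

for n in (14, 16, 18, 20):
    payload = "a" * n + "!"                # the trailing "!" forces full backtracking
    t0 = time.perf_counter()
    EVIL.match(payload)
    # Matching time roughly quadruples at each step (doubles per extra character).
    print(f"n={n}: {time.perf_counter() - t0:.4f}s")
```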
2021-08-11
Xue, Mingfu, Wu, Zhiyu, He, Can, Wang, Jian, Liu, Weiqiang.  2020.  Active DNN IP Protection: A Novel User Fingerprint Management and DNN Authorization Control Technique. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :975–982.
The training process of a deep learning model is costly. As such, a deep learning model can be treated as intellectual property (IP) of the model creator. However, a pirate can illegally copy, redistribute or abuse the model without permission. In recent years, a few Deep Neural Network (DNN) IP protection works have been proposed, but most existing works passively verify the copyright of the model after the piracy occurs and lack user identity management, and thus cannot provide commercial copyright management functions. In this paper, a novel user fingerprint management and DNN authorization control technique based on backdoors is proposed to provide active DNN IP protection. The proposed method can not only verify the ownership of the model but also authenticate and manage each user's unique identity, providing a commercially applicable DNN IP management mechanism. Experimental results on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets show that the proposed method achieves a high detection rate for user authentication (up to 100% on all three datasets). Illegal users with forged fingerprints cannot pass authentication, as the detection rates are all 0% on the three datasets. The model owner can verify ownership by triggering the backdoor with high confidence. In addition, the accuracy drops are only 0.52%, 1.61% and -0.65% on CIFAR-10, CIFAR-100 and Fashion-MNIST, respectively, indicating that the proposed method does not affect the performance of the DNN models. The proposed method is also robust to model fine-tuning and pruning attacks: the detection rates for owner verification on CIFAR-10, CIFAR-100 and Fashion-MNIST are all 100% after model pruning attacks, and 90%, 83% and 93%, respectively, after model fine-tuning attacks, on the premise that the attacker wants to preserve the accuracy of the model.
2021-06-28
Li, Meng, Zhong, Qi, Zhang, Leo Yu, Du, Yajuan, Zhang, Jun, Xiang, Yong.  2020.  Protecting the Intellectual Property of Deep Neural Networks with Watermarking: The Frequency Domain Approach. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :402–409.
Similar to other digital assets, deep neural network (DNN) models may suffer from piracy threats initiated by insider and/or outsider adversaries due to their inherent commercial value. DNN watermarking is a promising technique to mitigate this threat to intellectual property. This work focuses on black-box DNN watermarking, with which an owner can verify ownership only by issuing special trigger queries to a remote suspicious model. However, informed attackers, who are aware of the watermark and somehow obtain the triggers, could forge fake triggers to claim ownership, owing to the poor robustness of triggers and the lack of correlation between the model and the owner's identity. This consideration calls for new watermarking methods that achieve a better trade-off in addressing this discrepancy. In this paper, we exploit frequency-domain image watermarking to generate triggers and build our DNN watermarking algorithm accordingly. Since watermarking in the frequency domain offers high concealment and is robust to signal processing operations, the proposed algorithm is superior to existing schemes in resisting fraudulent claim attacks. Besides, extensive experimental results on 3 datasets and 8 neural networks demonstrate that the proposed DNN watermarking algorithm achieves similar performance on functionality metrics and better performance on security metrics when compared with existing algorithms.
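A hedged sketch of the trigger-generation idea: embed a key-dependent bit pattern into DCT coefficients of a clean image, which survives common signal processing and is hard to forge without the key. Coefficient positions and embedding strength are illustrative choices, not the paper's scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_trigger(image, bits, key=0, strength=8.0):
    rng = np.random.default_rng(key)           # owner's secret key seeds the positions
    coeffs = dctn(image, norm="ortho")
    positions = rng.choice(np.arange(20, 200), size=len(bits), replace=False)
    for pos, bit in zip(positions, bits):
        r, c = divmod(int(pos), image.shape[1])
        coeffs[r, c] += strength if bit else -strength  # +/- shift encodes one bit
    return idctn(coeffs, norm="ortho")

img = np.random.rand(32, 32)
trigger = embed_trigger(img, bits=[1, 0, 1, 1, 0])
print("perturbation energy:", np.abs(trigger - img).mean())
```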
2021-05-26
Boursinos, Dimitrios, Koutsoukos, Xenofon.  2020.  Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems. 2020 IEEE Security and Privacy Workshops (SPW). :228–233.

Cyber-physical systems (CPS) can benefit from the use of learning-enabled components (LECs) such as deep neural networks (DNNs) for perception and decision-making tasks. However, DNNs are typically non-transparent, making reasoning about their predictions very difficult, and hence their application to safety-critical systems is very challenging. LECs could be integrated more easily into CPS if their predictions could be complemented with a confidence measure that quantifies how much we trust their output. This paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP). We train a triplet network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set. These representations are then used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet. The approach is evaluated using a robotic navigation benchmark, and the results show that we can compute trusted confidence bounds efficiently in real time.
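A compact sketch of the ICP step under stated assumptions: nonconformity is the distance, in the learned embedding space, to the nearest training example of the candidate label, and a label joins the prediction set when its p-value exceeds the significance level. The toy 2-D embeddings below stand in for the triplet network's representations.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 2-D "embeddings" for two classes: a training bank and a calibration split.
train = {0: rng.normal(0, 0.3, (40, 2)), 1: rng.normal(3, 0.3, (40, 2))}
cal_z = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
cal_y = np.array([0] * 20 + [1] * 20)

def nonconformity(z, label):
    """Distance to the nearest training example of that label."""
    return np.linalg.norm(train[label] - z, axis=1).min()

cal_scores = np.array([nonconformity(z, y) for z, y in zip(cal_z, cal_y)])

def prediction_set(z, epsilon=0.1):
    out = []
    for label in train:
        p = (np.sum(cal_scores >= nonconformity(z, label)) + 1) / (len(cal_scores) + 1)
        if p > epsilon:                    # keep labels that look conforming enough
            out.append(label)
    return out

print(prediction_set(np.array([0.1, 0.0])))   # confidently class 0 -> [0]
print(prediction_set(np.array([1.5, 1.5])))   # ambiguous -> possibly empty set
```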

2021-05-13
Venceslai, Valerio, Marchisio, Alberto, Alouani, Ihsen, Martina, Maurizio, Shafique, Muhammad.  2020.  NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a fault-injection-based, sneaky hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.

2021-02-01
Mangaokar, N., Pu, J., Bhattacharya, P., Reddy, C. K., Viswanath, B..  2020.  Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :139–157.
Advances in deep neural networks (DNNs) have shown tremendous promise in the medical domain. However, the deep learning tools that are helping the domain can also be used against it. Given the prevalence of fraud in the healthcare domain, it is important to consider the adversarial use of DNNs in manipulating sensitive data that is crucial to patient healthcare. In this work, we present the design and implementation of a DNN-based image translation attack on biomedical imagery. More specifically, we propose Jekyll, a neural style transfer framework that takes as input a biomedical image of a patient and translates it into a new image that indicates an attacker-chosen disease condition. The potential for fraudulent claims based on such generated 'fake' medical images is significant, and we demonstrate successful attacks on both X-ray and retinal fundus image modalities. We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes. Lastly, we also investigate defensive measures based on machine learning to detect images generated by Jekyll.
2021-01-28
Kariyappa, S., Qureshi, M. K..  2020.  Defending Against Model Stealing Attacks With Adaptive Misinformation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :767–775.

Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access. Such attacks are typically carried out by querying the target model with inputs that are synthetically generated or sampled from a surrogate dataset to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model that achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security-versus-accuracy trade-off and incurs minimal computational overhead.
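The core mechanism can be sketched in a few lines: flag low-confidence (likely OOD) queries and answer them with deliberately incorrect predictions while serving in-distribution queries normally. The threshold and the misinformation rule below are placeholders for the paper's adaptive mechanism.

```python
import numpy as np

def defended_predict(probs, tau=0.5):
    """probs: softmax output of the protected model for one query."""
    probs = np.asarray(probs)
    if probs.max() >= tau:                 # in-distribution: answer honestly
        return int(probs.argmax())
    return int(probs.argmin())             # likely OOD query: return a misleading class

print(defended_predict([0.05, 0.9, 0.05]))   # confident -> true class 1
print(defended_predict([0.4, 0.35, 0.25]))   # low confidence -> misleading class 2
```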

2021-01-25
Chen, J., Lin, X., Shi, Z., Liu, Y..  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
Deep neural networks are increasingly applied to graph-related tasks, such as node classification and link prediction. However, the vulnerability of deep models can be revealed using carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy that uses the gradient information in the trained graph autoencoder (GAE) model. Not surprisingly, GAE can be fooled by an adversarial graph with only a few links perturbed on the clean one. The results of comprehensive experiments on different real-world graphs indicate that most deep models, and even the state-of-the-art link prediction algorithms, cannot escape the adversarial attack. The attack can also serve as an efficient privacy-protection tool against unwanted link prediction. On the other hand, the adversarial attack provides a robust evaluation metric for the defensibility of current link prediction algorithms.
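A simplified sketch of the IGA idea: treat the adjacency matrix as continuous, take the gradient of the target link's score, and flip the single most influential edge each iteration. The inner-product decoder below is a stand-in for the trained GAE.

```python
import torch

def link_score(A, Z, i, j):
    # GAE-style decoder: sigmoid of the inner product of propagated embeddings.
    H = A @ Z
    return torch.sigmoid(H[i] @ H[j])

n, d, target = 8, 4, (0, 1)
Z = torch.randn(n, d)                       # stand-in node embeddings
A = (torch.rand(n, n) < 0.3).float()
A = ((A + A.T) > 0).float()                 # symmetric toy adjacency

for step in range(3):                       # a few IGA iterations
    A_var = A.clone().requires_grad_(True)
    score = link_score(A_var, Z, *target)
    score.backward()
    grad = A_var.grad.abs()
    grad[target[0], target[1]] = grad[target[1], target[0]] = 0  # spare the target link
    r, c = divmod(torch.argmax(grad).item(), n)
    A[r, c] = A[c, r] = 1.0 - A[r, c]       # flip the most influential edge
    print(f"step {step}: flipped ({r},{c}), score {score.item():.3f}")
```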
2021-01-22
Mani, G., Pasumarti, V., Bhargava, B., Vora, F. T., MacDonald, J., King, J., Kobes, J..  2020.  DeCrypto Pro: Deep Learning Based Cryptomining Malware Detection Using Performance Counters. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). :109–118.
Autonomy in cybersystems depends on their ability to be self-aware by understanding the intent of the services and applications running on those systems. For mission-critical cybersystems deployed in dynamic and unpredictable environments, newly integrated unknown applications or services can either be benign and essential for the mission, or they can be cyberattacks. In some cases, these cyberattacks are evasive Advanced Persistent Threats (APTs) where the attackers remain undetected while performing reconnaissance to ascertain system features for an attack, e.g., Trojan Laziok. In other cases, the attackers use the system only for computing, e.g., cryptomining malware. APTs such as cryptomining malware neither disrupt normal system functionality nor trigger any warning signs, because they simply perform bitwise and cryptographic operations like any benign compression or encoding application. Thus, it is difficult for defense mechanisms such as antivirus applications to detect these attacks. In this paper, we propose an operating-context profiling system based on deep neural networks, namely Long Short-Term Memory (LSTM) networks, using Windows performance counter data to detect these evasive cryptomining applications. In addition, we propose Deep Cryptomining Profiler (DeCrypto Pro), a detection system with a novel model selection framework containing a utility function that can select a classification model for behavior profiling from both lightweight machine learning models (Random Forest and k-Nearest Neighbors) and a deep learning model (LSTM), depending on available computing resources. Given data from performance counters, we show that individual models perform with high accuracy and can be trained with limited training data. We also show that the DeCrypto Profiler framework reduces the use of computational resources and accurately detects cryptomining applications by selecting an appropriate model, given constraints such as data sample size and system configuration.
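The model-selection idea can be sketched as a utility function that weighs expected accuracy against compute cost and picks a lightweight model on constrained hosts and the LSTM when resources allow. All numbers below are illustrative, not measurements from the paper.

```python
CANDIDATES = {
    # model: (expected_accuracy, relative_compute_cost) -- illustrative values only
    "random_forest": (0.93, 1.0),
    "knn":           (0.91, 0.5),
    "lstm":          (0.99, 8.0),
}

def select_model(available_compute, alpha=0.005):
    """Pick the model maximizing accuracy - alpha * cost among affordable ones."""
    feasible = {m: (acc, cost) for m, (acc, cost) in CANDIDATES.items()
                if cost <= available_compute}
    return max(feasible, key=lambda m: feasible[m][0] - alpha * feasible[m][1])

print(select_model(available_compute=2.0))   # constrained host -> random_forest
print(select_model(available_compute=10.0))  # ample compute -> lstm
```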
2021-01-15
Khodabakhsh, A., Busch, C..  2020.  A Generalizable Deepfake Detector based on Neural Conditional Distribution Modelling. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.
Photo- and video-realistic generation techniques have become a reality following the advent of deep neural networks. Consequently, there are immense concerns regarding the difficulty in differentiating what content is real from what is synthetic. An example of video-realistic generation techniques is the infamous Deepfakes, which exploit the main modality by which humans identify each other. Deepfakes are a category of synthetic face generation methods and are commonly based on generative adversarial networks. In this article, we propose a novel two-step synthetic face image detection method in which general-purpose features are extracted in a first step, trivializing the task of detecting synthetic images. The anomaly detector predicts the conditional probabilities for observing every individual pixel in the image and is trained on pristine data only. The extracted anomaly features demonstrate true generalization capacity across widely different unknown synthesis methods while showing a minimal loss in performance with regard to the detection of known synthetic samples.
2021-01-11
Mihanpour, A., Rashti, M. J., Alavi, S. E..  2020.  Human Action Recognition in Video Using DB-LSTM and ResNet. 2020 6th International Conference on Web Research (ICWR). :133–138.

Human action recognition in video is one of the most widely applied topics in the field of image and video processing, with many applications in surveillance (security, sports, etc.), activity detection, video-content-based monitoring, man-machine interaction, and health/disability care. Action recognition is a complex process that faces several challenges such as occlusion, camera movement, viewpoint change, background clutter, and brightness variation. In this study, we propose a novel human action recognition method using convolutional neural networks (CNNs) and deep bidirectional LSTM (DB-LSTM) networks, using only raw video frames. First, deep features are extracted from video frames using a pre-trained CNN architecture, ResNet152. The sequential information of the frames is then learned using the DB-LSTM network, where multiple layers are stacked together in both the forward and backward passes of the DB-LSTM to increase depth. The evaluation results of the proposed method using PyTorch, compared to the state-of-the-art methods, show a considerable increase in the efficiency of action recognition on the UCF101 dataset, reaching 95% recognition accuracy. The choice of CNN architecture, proper tuning of input parameters, and techniques such as data augmentation contribute to the accuracy boost in this study.
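An illustrative PyTorch sketch of the described pipeline: per-frame features from ResNet152 with the classifier head removed, fed to a stacked bidirectional LSTM that classifies the clip. Hidden sizes and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class ActionRecognizer(nn.Module):
    def __init__(self, n_classes=101):                 # UCF101 has 101 classes
        super().__init__()
        backbone = models.resnet152(weights=None)      # load pretrained weights in practice
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc -> 2048-d
        self.lstm = nn.LSTM(2048, 256, num_layers=2,   # "deep" = stacked layers
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, n_classes)

    def forward(self, clip):                           # clip: (batch, T, 3, 224, 224)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1)    # (b*t, 2048)
        out, _ = self.lstm(feats.reshape(b, t, -1))
        return self.fc(out[:, -1])                     # classify from the last time step

logits = ActionRecognizer()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)                                    # torch.Size([2, 101])
```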

Gautam, A., Singh, S..  2020.  A Comparative Analysis of Deep Learning based Super-Resolution Techniques for Thermal Videos. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :919–925.

Video streams acquired from thermal cameras have proven beneficial in a diverse number of fields including military, healthcare, law enforcement, and security. Despite the hype, thermal imaging suffers from poor resolution, owing to its expensive optical sensors and inability to attain optical precision. In recent years, deep learning-based super-resolution algorithms have been developed to enhance video frame resolution with high accuracy. This paper presents a comparative analysis of super-resolution (SR) techniques based on deep neural networks (DNNs) applied to a thermal video dataset. SRCNN, EDSR, Auto-encoder, and SRGAN are discussed and investigated. Further, results on benchmark thermal datasets, including FLIR, the OSU thermal pedestrian database and the OSU color-thermal database, are evaluated and analyzed. Based on the experimental results, it is concluded that SRGAN delivers superior performance on thermal frames compared to the other techniques and has the ability to provide state-of-the-art performance in real-time operations.

Rajapkar, A., Binnar, P., Kazi, F..  2020.  Design of Intrusion Prevention System for OT Networks Using Deep Neural Networks. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.

Automation industries that use Supervisory Control and Data Acquisition (SCADA) systems are highly vulnerable to network threats. Even systems that are air-gapped and isolated from the internet are affected by insider attacks such as spoofing, DoS, and malware threats, which compromise the confidentiality, integrity, and availability of Operational Technology (OT) system elements and degrade performance even when security measures are taken. In this paper, a behavior-based intrusion prevention system (IPS) is designed for OT networks. The proposed system is implemented on a SCADA test bed with two systems that replicate automation scenarios in industry. This paper describes four main classes of cyber-attacks, with their subclasses, against SCADA systems, along with the methodology and design of the IPS components, database creation, baselines, and deployment of the system in the environment. The IPS identifies not only IT protocols but also the Industrial Control System (ICS) protocols Modbus and DNP3, including their inner communication fields, using deep packet inspection (DPI). The analytical results show 99.89% accuracy on binary classification and 97.95% accuracy on multiclass classification of different attack vectors performed on the network, with a low false positive rate. These results are also validated by actual deployment of the IPS in SCADA systems with the prevention of DoS attacks.
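As a minimal illustration of the DPI step for Modbus/TCP, the sketch below parses the MBAP header and function code from a raw payload and flags function codes outside a learned baseline. Header offsets follow the Modbus/TCP specification; the baseline set is an illustrative placeholder.

```python
import struct

ALLOWED_FUNCTIONS = {3, 4, 6, 16}        # e.g. read/write registers seen in the baseline

def inspect_modbus(payload: bytes):
    if len(payload) < 8:
        return "too short for Modbus/TCP"
    # MBAP header: transaction id, protocol id, length, unit id, then function code.
    tid, pid, length, uid, fcode = struct.unpack(">HHHBB", payload[:8])
    if pid != 0:                         # protocol id must be 0 for Modbus
        return "not Modbus"
    verdict = "ok" if fcode in ALLOWED_FUNCTIONS else "ALERT: unusual function"
    return f"unit={uid} function={fcode} -> {verdict}"

print(inspect_modbus(bytes.fromhex("000100000006010300000002")))  # read holding regs
print(inspect_modbus(bytes.fromhex("0001000000020100002b0000")))  # unusual code -> alert
```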

2020-12-11
Mikołajczyk, A., Grochowski, M..  2019.  Style transfer-based image synthesis as an efficient regularization technique in deep learning. 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR). :42–47.

These days, deep learning is the fastest-growing area in the field of machine learning. Convolutional neural networks are currently the main tool used for image analysis and classification. Despite great achievements and prospects, deep neural networks and their accompanying learning algorithms have some relevant challenges to tackle. In this paper, we focus on the most frequently mentioned problem in the field of machine learning: relatively poor generalization ability. Partial remedies for this are regularization techniques, e.g., dropout, batch normalization, weight decay, transfer learning, early stopping, and data augmentation. In this paper we focus on data augmentation. We propose a method based on neural style transfer, which generates new unlabeled images of high perceptual quality that combine the content of a base image with the appearance of another. In the proposed approach, the newly created images are described with pseudo-labels and then used as a training dataset. Real, labeled images are divided into the validation and test sets. We validated the proposed method on a challenging skin lesion classification case study. Four representative neural architectures are examined. The obtained results show the strong potential of the proposed approach.

Cao, Y., Tang, Y..  2019.  Development of Real-Time Style Transfer for Video System. 2019 3rd International Conference on Circuits, System and Simulation (ICCSS). :183–187.

Re-drawing an image in a certain artistic style is considered a complicated task for a computer. By contrast, humans can easily master methods to compose and describe the style between different images. In the past, many researchers studying deep neural networks found an appropriate representation of artistic style using perceptual loss and style reconstruction loss. In previous work, Gatys et al. proposed an artificial system based on convolutional neural networks that creates artistic images of high perceptual quality. In terms of running speed, however, it was relatively time-consuming and thus could not be applied to video style transfer. Recently, a feed-forward CNN approach has shown the potential for fast style transformation: an end-to-end system that avoids hundreds of iterations per transfer. We combined the benefits of both approaches, optimized the feed-forward network, and defined a time loss function to make it possible to perform style transfer on video in real time. In contrast to past methods, our method runs in real time at higher resolution while creating competitive, visually pleasing, and temporally consistent experimental results.
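A sketch of the time-loss idea under a simplifying assumption (static camera, so optical-flow warping is elided): penalize the difference between consecutive stylized frames so the outputs stay temporally consistent.

```python
import torch

def temporal_loss(stylized_t, stylized_prev, weight=1e2):
    """Mean squared change between consecutive stylized frames."""
    return weight * torch.mean((stylized_t - stylized_prev) ** 2)

frame_t = torch.rand(1, 3, 256, 256)
frame_prev = frame_t + 0.01 * torch.randn_like(frame_t)   # nearly identical frames
print(temporal_loss(frame_t, frame_prev).item())          # small value -> consistent
```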

2020-12-07
Handa, A., Garg, P., Khare, V..  2018.  Masked Neural Style Transfer using Convolutional Neural Networks. 2018 International Conference on Recent Innovations in Electrical, Electronics Communication Engineering (ICRIEECE). :2099–2104.

In painting, humans can draw an interrelation between the style and the content of a given image in order to enhance visual experiences. Deep neural networks such as convolutional neural networks are being used to draw a satisfying conclusion to this problem of neural style transfer, due to their exceptional results in key areas of visual perception such as object detection and face recognition. In this study, along with style transfer on the whole image, we also outline how style transfer can be performed only on specific parts of the content image, which is accomplished using masks. The style is transferred in a way that incurs the least loss to the content image, i.e., the semantics of the image are preserved.

2020-11-04
Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M..  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188–193.

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be catered for by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology that automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training-data-unaware). We present a case study on traffic sign detection using a VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully cause misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
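A hedged sketch of the attack loop described: back-propagate through a pre-trained network to the input, nudge the image toward misclassification, and keep the perturbation below a perceptibility budget. The real method uses correlation/SSIM-based imperceptibility checks; the epsilon ball here is a simpler stand-in, and the toy classifier replaces the VGGNet.

```python
import torch
import torch.nn as nn

def craft_attack(model, image, true_label, epsilon=0.02, steps=20, lr=0.005):
    model.eval()
    adv = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(adv), true_label)
        loss.backward()
        with torch.no_grad():
            adv += lr * adv.grad.sign()                                # ascend the loss
            adv.copy_(image + (adv - image).clamp(-epsilon, epsilon))  # stay imperceptible
            adv.clamp_(0, 1)
        adv.grad.zero_()
    return adv.detach()

# Toy usage with a stand-in classifier (43 classes, as in GTSRB).
toy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([7])
x_adv = craft_attack(toy, x, y)
print("max perturbation:", (x_adv - x).abs().max().item())  # <= epsilon
```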

Chacon, H., Silva, S., Rad, P..  2019.  Deep Learning Poison Data Attack Detection. 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). :971–978.

Deep neural networks are widely used in many walks of life. Techniques such as transfer learning enable neural networks pre-trained on certain tasks to be retrained for a new task, often with much less data. Users have access to both pre-trained model parameters and model definitions, along with testing data, but have either limited access to the training data or only a subset of it. This is risky for system-critical applications, where adversarial information can be maliciously included during the training phase to attack the system. Determining the existence and level of attack in a model is challenging. In this paper, we present evidence of how adversarially attacking training data increases the boundary of model parameters, using a CNN model and the MNIST dataset as a test case. This expansion is due to the new characteristics of the poisonous data added to the training data. Approaching the problem from the feature space learned by the network provides a relation between those features and the possible parameters taken by the model during the training phase. An algorithm is proposed to determine whether a given network was attacked during training by comparing the boundaries of the parameter distributions on intermediate layers of the model, estimated using the Maximum Entropy Principle and variational inference.