Biblio

Found 773 results

Filters: Keyword is Training
2021-09-30
Manning, Derek, Li, Peilong, Wu, Xiaoban, Luo, Yan, Zhang, Tong, Li, Weigang.  2020.  ACETA: Accelerating Encrypted Traffic Analytics on Network Edge. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
Applying machine learning techniques to detect malicious encrypted network traffic has become a challenging research topic. Traditional approaches based on studying network patterns fail to operate on encrypted data, especially without compromising the integrity of encryption. In addition, the requirement of rendering network-wide intelligent protection in a timely manner further exacerbates the problem. In this paper, we propose to leverage x86 multicore platforms provisioned at enterprises' network edge with software accelerators to design an encrypted traffic analytics (ETA) system with accelerated speed. Specifically, we explore a suite of data features and machine learning models with an open dataset. Then we show that by using the Intel DAAL and OpenVINO libraries in model training and inference, we are able to reduce training and inference time by up to 31× and 46×, respectively, while retaining model accuracy.
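As a rough illustration of the feature-based ETA idea (not the authors' pipeline), the sketch below trains a scikit-learn classifier on flow metadata only, so no encrypted payload is ever inspected; the features, labels, and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# stand-in flow-metadata features (e.g. duration, mean/std packet
# length, byte count) and benign/malicious labels -- synthetic here
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)   # 0 = benign flow, 1 = malicious flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))        # accuracy on held-out flows
```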
2021-01-15
Li, Y., Yang, X., Sun, P., Qi, H., Lyu, S..  2020.  Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3204–3213.
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for datasets of DeepFake videos. However, current DeepFake datasets suffer from low visual quality and do not resemble the DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenges posed by Celeb-DF.
2021-03-04
Carlini, N., Farid, H..  2020.  Evading Deepfake-Image Detectors with White- and Black-Box Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2804–2813.
It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
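A minimal sketch of the white-box lowest-bit attack described above, assuming a PyTorch model that returns a scalar "synthetic" logit per image; this is our reading of the idea, not the authors' code.

```python
import torch

def lsb_flip_attack(model, x):
    """Set each 8-bit pixel's least-significant bit in whichever direction
    lowers the classifier's synthetic-image score (white-box sketch;
    `model` returning one logit per image is an assumption)."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    pixels = (x * 255).round()            # back to 8-bit intensities
    lsb = (x.grad < 0).float()            # raise intensity only where that lowers the score
    adv = (pixels - pixels % 2 + lsb) / 255.0
    return adv.detach()
```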
2021-06-01
Materzynska, Joanna, Xiao, Tete, Herzig, Roei, Xu, Huijuan, Wang, Xiaolong, Darrell, Trevor.  2020.  Something-Else: Compositional Action Recognition With Spatial-Temporal Interaction Networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :1046–1056.
Human action is naturally compositional: humans can easily recognize and perform actions with objects that are different from those used in training demonstrations. In this paper, we study the compositionality of action by looking into the dynamics of subject-object interactions. We propose a novel model which can explicitly reason about the geometric relations between constituent objects and an agent performing an action. To train our model, we collect dense object box annotations on the Something-Something dataset. We propose a novel compositional action recognition task where the training combinations of verbs and nouns do not overlap with the test set. The novel aspects of our model are applicable to activities with prominent object interaction dynamics and to objects which can be tracked using state-of-the-art approaches; for activities without clearly defined spatial object-agent interactions, we rely on baseline scene-level spatio-temporal representations. We show the effectiveness of our approach not only on the proposed compositional action recognition task but also in a few-shot compositional setting which requires the model to generalize across both object appearance and action category.
2021-06-24
Nilă, Constantin, Patriciu, Victor.  2020.  Taking advantage of unsupervised learning in incident response. 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
This paper looks at new ways to improve the time needed for incident response triage operations. By employing unsupervised K-means, enhanced by both manual and automated feature extraction techniques, the incident response team can quickly and decisively extrapolate the malicious web requests that led to the investigated exploitation. More precisely, we evaluated the benefits of different visualization-enhancing methods that can improve feature selection and other dimensionality reduction techniques. Furthermore, early tests of the overall framework have shown that the time needed for triage is diminished, more so if a hybrid multi-model is employed. Our case study revolved around the need for unsupervised classification of unknown web access logs. However, the demonstrated principles may be considered for other applications of machine learning in the cybersecurity domain.
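A minimal sketch of the triage idea, assuming TF-IDF character n-grams as the automated feature extraction and scikit-learn's K-means; the log lines and cluster count are made up, and in practice the outlying cluster would be reviewed first.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# hypothetical web-access-log lines; the last two mimic attack traffic
logs = [
    "GET /index.html HTTP/1.1 200",
    "GET /product?id=42 HTTP/1.1 200",
    "GET /login.php?user=admin'--+ HTTP/1.1 500",
    "GET /..%2f..%2fetc%2fpasswd HTTP/1.1 403",
]
X = TfidfVectorizer(analyzer="char", ngram_range=(2, 4)).fit_transform(logs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # requests clustered away from the bulk go to triage first
```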
2021-08-02
Bouniot, Quentin, Audigier, Romaric, Loesch, Angélique.  2020.  Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :3450–3459.
Person re-identification (re-ID) is a key problem in smart supervision of camera networks. Over the past years, models using deep learning have become state of the art. However, it has been shown that deep neural networks are vulnerable to adversarial examples, i.e. human-imperceptible perturbations. Extensively studied for the task of image closed-set classification, this problem can also appear in the case of open-set retrieval tasks. Indeed, recent work has shown that we can also generate adversarial examples for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. These attacks are all the more dangerous as they are impossible for a human operator to detect. Attacking a metric consists in altering the distances between the features of an attacked image and those of reference images, i.e. guides. In this article, we investigate different possible attacks depending on the number and type of guides available. From this metric attack family, two particularly effective attacks stand out. The first one, called Self Metric Attack, is a strong attack that does not need any image apart from the attacked image. The second one, called Furthest-Negative Attack, makes full use of a set of images. Attacks are evaluated on commonly used datasets: Market-1501 and DukeMTMC. Finally, we propose an efficient extension of the adversarial training protocol adapted to metric learning as a defense that increases the robustness of re-ID models.
2021-09-30
Jagadamba, G, Sheeba, R, Brinda, K N, Rohini, K C, Pratik, S K.  2020.  Adaptive E-Learning Authentication and Monitoring. 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). :277–283.
E-learning enables the transfer of skills, knowledge, and education to a large number of recipients. The e-learning platform tends to provide face-to-face learning through a learning management system (LMS) and has facilitated an improvement over traditional educational methods. An LMS saves organizations time and money and simplifies administration; it also saves users time moving between learning places by providing a web-based environment. However, a few students could be willing to exploit such a system's weaknesses in a bid to cheat if conventional authentication methods are employed. In this scenario, user authentication and surveillance of the end user are more challenging. A system with simultaneous authentication is put forth through multifactor adaptive authentication methods. The proposed system provides an efficient, low-cost adaptive authentication and monitoring system for the e-learning environment with minimal human intervention.
2021-06-24
Tsaknakis, Ioannis, Hong, Mingyi, Liu, Sijia.  2020.  Decentralized Min-Max Optimization: Formulations, Algorithms and Applications in Network Poisoning Attack. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :5755–5759.
This paper discusses formulations and algorithms which allow a number of agents to collectively solve problems involving both (non-convex) minimization and (concave) maximization operations. These problems have a number of interesting applications in information processing and machine learning, and in particular can be used to model an adversarial learning problem called network data poisoning. We develop a number of algorithms to efficiently solve these non-convex min-max optimization problems, by combining techniques such as gradient tracking from the decentralized optimization literature and gradient descent-ascent schemes from the min-max optimization literature. Also, we establish convergence to a first-order stationary point under certain conditions. Finally, we perform experiments to demonstrate that the proposed algorithms are effective in the data poisoning attack setting.
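The gradient descent-ascent building block the authors combine with gradient tracking can be sketched on a toy strongly-convex-strongly-concave saddle problem (the objective below is our choice, not the paper's poisoning formulation):

```python
# min over x, max over y of f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y
def grad_x(x, y):
    return x + y          # df/dx

def grad_y(x, y):
    return x - y          # df/dy

x, y, lr = 3.0, -2.0, 0.1
for _ in range(500):
    # simultaneous update: descent in x, ascent in y
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)
print(round(x, 4), round(y, 4))   # converges to the stationary point (0, 0)
```

The paper's decentralized setting then adds gradient tracking so that many agents can run such updates using only local communication.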
2021-01-11
Wu, N., Farokhi, F., Smith, D., Kaafar, M. A..  2020.  The Value of Collaboration in Convex Machine Learning with Differential Privacy. 2020 IEEE Symposium on Security and Privacy (SP). :304–317.
In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, using the fitness cost, as a function of privacy budget and size of the distributed datasets to capture the trade-off between privacy and utility in machine learning. This way, we can predict the outcome of collaboration among privacy-aware data owners prior to executing potentially computationally-expensive machine learning algorithms. Particularly, we show that the difference between the fitness of the machine learning model trained with differentially-private gradient queries and the fitness of the model trained in the absence of any privacy concerns is inversely proportional to the square of the size of the training datasets and the square of the privacy budget. We successfully validate this performance prediction against the actual performance of the proposed privacy-aware learning algorithms, applied to financial datasets for determining interest rates of loans using regression, and to detecting credit card fraud using support vector machines.
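The noisy differentially-private gradient query at the heart of the scheme can be sketched as a clip-then-noise step; the Gaussian mechanism below is a generic DP-SGD ingredient, with an illustrative clipping bound and noise scale rather than the paper's calibration.

```python
import numpy as np

def dp_gradient(grad, clip=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    """Clip the gradient to an L2 bound, then add Gaussian noise scaled to
    that bound, so each owner's gradient query is differentially private."""
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

# each SGD step then uses: w -= lr * dp_gradient(local_gradient(w))
```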
2021-05-05
Zhu, Jianping, Hou, Rui, Wang, XiaoFeng, Wang, Wenhao, Cao, Jiangfeng, Zhao, Boyan, Wang, Zhongpu, Zhang, Yuhui, Ying, Jiameng, Zhang, Lixin et al..  2020.  Enabling Rack-scale Confidential Computing using Heterogeneous Trusted Execution Environment. 2020 IEEE Symposium on Security and Privacy (SP). :1450–1465.
With its huge real-world demands, large-scale confidential computing still cannot be supported by today's Trusted Execution Environment (TEE), due to the lack of scalable and effective protection for high-throughput accelerators like GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend the CPU-like enclave to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks due to side-channel leaks in CPU-GPU communication, and are still under the resource constraints of today's CPU TEE. To address these problems, we present the first Heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing, without any chip-level change. Our approach, called HETEE, is a device for centralized management of all computing units (e.g., GPUs and other accelerators) of a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to the server node on the same rack for a non-sensitive CDI task, and moves them back into a secure enclave in response to the demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large set of software (e.g., AI runtime, GPU driver, etc.) to the integrated microservers that operate enclaves. An enclave is physically isolated from others through hardware and verified by the SC at its inception. Its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system, and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale, incurring a maximal throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.
2020-12-17
Amrouche, F., Lagraa, S., Frank, R., State, R..  2020.  Intrusion detection on robot cameras using spatio-temporal autoencoders: A self-driving car application. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). :1–5.
Robot Operating System (ROS) is becoming more and more important and is used widely by developers and researchers in various domains. One of the most important fields where it is being used is the self-driving car industry. However, this framework is far from being totally secure, and the existing security breaches do not have robust solutions. In this paper we focus on camera vulnerabilities, as the camera is often the most important source for environment discovery and the decision-making process. We propose an unsupervised anomaly detection tool for detecting suspicious frames incoming from camera flows. Our solution is based on spatio-temporal autoencoders used to truthfully reconstruct the camera frames and detect abnormal ones by measuring the difference with the input. We test our approach on a real-world dataset, i.e. flows coming from embedded cameras of self-driving cars. Our solution outperforms the existing works on different scenarios.
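The detection step reduces to scoring each frame by how faithfully the autoencoder reconstructs it. A sketch, assuming a Keras-style autoencoder whose predict maps a batch of frames to their reconstructions:

```python
import numpy as np

def frame_anomaly_scores(autoencoder, frames):
    """Per-frame reconstruction error: frames the spatio-temporal
    autoencoder cannot reproduce well are flagged as suspicious.
    `frames` is assumed to be a (batch, height, width, channels) array."""
    recon = autoencoder.predict(frames)
    return ((frames - recon) ** 2).mean(axis=(1, 2, 3))  # MSE per frame

# frames whose score exceeds a threshold fitted on normal driving data
# would be reported to the intrusion detector
```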
2022-10-13
M, Yazhmozhi V., Janet, B., Reddy, Srinivasulu.  2020.  Anti-phishing System using LSTM and CNN. 2020 IEEE International Conference for Innovation in Technology (INOCON). :1–5.
Users prefer to do e-banking and e-shopping nowadays because of the exponential growth of the internet. Because of this paradigm shift, hackers are finding umpteen ways to steal our personal information and critical details like debit and credit card details, by disguising themselves as reputed websites, just by changing the spelling or making minor modifications to the URL. Identifying whether a URL is benign or malicious is a challenging job, because the attack exploits the weakness of the user. While there are several works carried out to detect phishing websites, they only use heuristic methods and list-based techniques and therefore cannot avoid phishing effectively. In this paper an anti-phishing system is proposed to protect users. It uses an ensemble model combining LSTM and CNN with a massive, balanced dataset containing nearly 200,000 URLs. After analyzing the accuracy of different existing approaches, it has been found that the ensemble model that uses both LSTM and CNN performed better, with an accuracy of 96% and a precision of 97%, which is far better than the existing solutions.
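A hedged Keras sketch of a character-level CNN-plus-LSTM URL classifier; the vocabulary size and layer widths are illustrative guesses, not the paper's architecture.

```python
from tensorflow.keras import layers, models

# illustrative layer sizes, not the paper's exact model
model = models.Sequential([
    layers.Embedding(input_dim=128, output_dim=32),  # URL characters -> vectors
    layers.Conv1D(64, 5, activation="relu"),         # local n-gram patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                 # longer-range character dependencies
    layers.Dense(1, activation="sigmoid"),           # benign (0) vs. phishing (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```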
2022-11-08
HeydariGorji, Ali, Rezaei, Siavash, Torabzadehkashi, Mahdi, Bobarshad, Hossein, Alves, Vladimir, Chou, Pai H..  2020.  HyperTune: Dynamic Hyperparameter Tuning for Efficient Distribution of DNN Training Over Heterogeneous Systems. 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1–8.
Distributed training is a novel approach to accelerating training of Deep Neural Networks (DNN), but common training libraries fall short of addressing the distributed nature of heterogeneous processors or interruption by other workloads on the shared processing nodes. This paper describes distributed training of DNNs on computational storage devices (CSD), which are NAND flash-based, high-capacity data storage with internal processing engines. A CSD-based distributed architecture incorporates the advantages of federated learning in terms of performance scalability, resiliency, and data privacy by eliminating unnecessary data movement between the storage device and the host processor. The paper also describes Stannis, a DNN training framework that improves on the shortcomings of existing distributed training frameworks by dynamically tuning the training hyperparameters in heterogeneous systems to maintain the maximum overall processing speed in terms of processed images per second and energy efficiency. Experimental results on image classification training benchmarks show up to 3.1x improvement in performance and 2.45x reduction in energy consumption when using Stannis plus CSD compared to generic systems.
2021-11-30
Shateri, Mohammadhadi, Messina, Francisco, Piantanida, Pablo, Labeau, Fabrice.  2020.  On the Impact of Side Information on Smart Meter Privacy-Preserving Methods. 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1–6.
Smart meters (SMs) can pose privacy threats for consumers, an issue that has received significant attention in recent years. This paper studies the impact of Side Information (SI) on the performance of possible attacks against real-time privacy-preserving algorithms for SMs. In particular, we consider a deep adversarial learning framework, in which the desired releaser, a Recurrent Neural Network (RNN), is trained by fighting against an adversary network until convergence. Two different approaches are considered for defining the training objective: Causal Adversarial Learning (CAL) and Directed Information (DI)-based learning. The main difference between these approaches lies in how the privacy term is measured during the training process. The releaser in the CAL method, which has access to supervision from the actual values of the private variables and feedback on the adversary's performance, tries to minimize the adversary's log-likelihood. On the other hand, the releaser in the DI approach relies completely on the feedback received from the adversary and is optimized to maximize its uncertainty. The performance of these two algorithms is evaluated empirically using real-world SM data, considering an attacker with access to SI (e.g., the day of the week) that tries to infer the occupancy status from the released SM data. The results show that, although the two perform similarly when the attacker does not exploit the SI, the CAL method is in general less sensitive to the inclusion of SI. However, in both cases, privacy levels are significantly affected, particularly when multiple sources of SI are included.
2021-09-07
Sami, Muhammad, Ibarra, Matthew, Esparza, Anamaria C., Al-Jufout, Saleh, Aliasgari, Mehrdad, Mozumdar, Mohammad.  2020.  Rapid, Multi-vehicle and Feed-forward Neural Network based Intrusion Detection System for Controller Area Network Bus. 2020 IEEE Green Energy and Smart Systems Conference (IGESSC). :1–6.
In this paper, an Intrusion Detection System (IDS) for the Controller Area Network (CAN) bus of modern vehicles is proposed. NESLIDS is an anomaly detection algorithm based on a supervised Deep Neural Network (DNN) architecture that is designed to counter three critical attack categories: denial-of-service (DoS), fuzzy, and impersonation attacks. Our research scope included modifying DNN parameters, e.g. the number of hidden-layer neurons, batch size, and activation functions, according to how well they maximized detection accuracy and minimized the false positive rate (FPR) for these attacks. Our methodology consisted of collecting CAN bus data online and in real time, injecting attack data after collection, preprocessing in Python, training the DNN, and testing the model with different datasets. Results show that the proposed IDS effectively detects all attack types for both types of datasets. NESLIDS outperforms existing approaches in terms of accuracy, scalability, and low false alarm rates.
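The hyperparameters the authors vary (hidden-layer width, activation function) map naturally onto a small builder function; a hedged Keras sketch, where the four output classes follow the attack categories named in the abstract and everything else is hypothetical.

```python
from tensorflow.keras import layers, models

def build_ids(neurons=64, activation="relu", n_classes=4):
    """Feed-forward DNN sketch for a CAN-bus IDS; the four classes stand
    for normal, DoS, fuzzy, and impersonation traffic. Width and
    activation are the kind of knobs the paper tunes."""
    model = models.Sequential([
        layers.Dense(neurons, activation=activation),
        layers.Dense(neurons, activation=activation),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g. compare build_ids(32, "relu") vs. build_ids(128, "tanh") on held-out data
```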
2022-09-09
Yucheng, Zeng, Yongjiayou, Zeng, Yuhan, Zeng, Ruihan, Tao.  2020.  Research on the Evaluation of Supply Chain Financial Risk under the Domination of 3PL Based on BP Neural Network. 2020 2nd International Conference on Economic Management and Model Engineering (ICEMME). :886–893.
The rise of supply chain finance has provided effective assistance to SMEs with financing difficulties. This study mainly explores the financial risk evaluation of the supply chain under the domination of 3PL. Based on risk identification, 27 comprehensive rating indicators were established, and a model under the BP neural network was then constructed from empirical data. The verification results show that the model performs very well in risk assessment, which helps 3PL companies to better evaluate the business risks of supply chain finance and take more effective risk management measures.
2021-09-07
Franco, Muriel Figueredo, Rodrigues, Bruno, Scheid, Eder John, Jacobs, Arthur, Killer, Christian, Granville, Lisandro Zambenedetti, Stiller, Burkhard.  2020.  SecBot: a Business-Driven Conversational Agent for Cybersecurity Planning and Management. 2020 16th International Conference on Network and Service Management (CNSM). :1–7.
Businesses have been moving during the past decades toward fully digital models, which made companies face new threats and cyberattacks affecting their services and, consequently, their profits. To avoid negative impacts, companies' investments in cybersecurity are increasing considerably. However, Small and Medium-sized Enterprises (SMEs) operate on small budgets, minimal technical expertise, and few personnel to address cybersecurity threats. In order to address such challenges, it is essential to promote novel approaches that can intuitively present cybersecurity-related technical information. This paper introduces SecBot, a cybersecurity-driven conversational agent (i.e., chatbot) for the support of cybersecurity planning and management. SecBot applies concepts of neural networks and Natural Language Processing (NLP) to interact with users and extract information from a conversation. SecBot can (a) identify cyberattacks based on related symptoms, (b) indicate solutions and configurations according to business demands, and (c) provide insightful information for decisions on cybersecurity investments and risks. A formal description was developed covering states, transitions, and a language, together with a Proof-of-Concept (PoC) implementation. A case study and a performance evaluation were conducted to provide evidence of the proposed solution's feasibility and accuracy.
2021-06-01
Zhang, Han, Song, Zhihua, Feng, Boyu, Zhou, Zhongliang, Liu, Fuxian.  2020.  Technology of Image Steganography and Steganalysis Based on Adversarial Training. 2020 16th International Conference on Computational Intelligence and Security (CIS). :77–80.
Steganography has made great progress over the past few years due to the advancement of deep convolutional neural networks (DCNN), which has caused severe problems in the network security field. Ensuring the accuracy of steganalysis is becoming increasingly difficult. In this paper, we designed a two-channel generative adversarial network (TGAN), inspired by the idea of adversarial training and based on our previous work. The TGAN consists of three parts: the first, a hiding network, has two input channels and one output channel; the second, an extraction network, takes as input a hidden image embedded with the secret image; the third, a detecting network, has two input channels and one output channel. Experimental results on two independent image datasets showed that the proposed TGAN performed well and had better detecting capability compared to other algorithms, thus having important theoretical significance and engineering value.
2021-05-25
Tashev, Komil, Rustamova, Sanobar.  2020.  Analysis of Subject Recognition Algorithms based on Neural Networks. 2020 International Conference on Information Science and Communications Technologies (ICISCT). :1–4.
This article describes the principles of construction, training, and use of neural networks. The features of the neural network approach are indicated, as well as the range of tasks for which it is most preferable. Algorithms of functioning, a software implementation, and the operating results of an artificial neural network are presented.
2021-03-30
Ganfure, G. O., Wu, C.-F., Chang, Y.-H., Shih, W.-K..  2020.  DeepGuard: Deep Generative User-behavior Analytics for Ransomware Detection. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.
In the last couple of years, the move to cyberspace has provided a fertile environment for ransomware criminals like never before. Notably, since the introduction of WannaCry, numerous ransomware detection solutions have been proposed. However, ransomware incidence reports show that most organizations impacted by ransomware are running state-of-the-art ransomware detection tools. Hence, an alternative solution is an urgent requirement, as the existing detection models are not sufficient to spot emerging ransomware threats. With this motivation, our work proposes "DeepGuard," a novel concept of modeling user behavior for ransomware detection. The main idea is to log the file-interaction pattern of typical user activity and pass it through a deep generative autoencoder architecture to recreate the input. With sufficient training data, the model can learn how to reconstruct typical user activity (or input) with minimal reconstruction error. Hence, by applying the three-sigma limit rule to the model's output, DeepGuard can distinguish ransomware activity from user activity. The experiment results show that DeepGuard effectively detects a variant class of ransomware with minimal false-positive rates. Overall, modeling attack detection with user behavior permits the proposed strategy to have deep visibility into various ransomware families.
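The three-sigma limit rule mentioned above reduces to a one-line threshold over the reconstruction errors of normal activity; a sketch (the autoencoder itself is omitted):

```python
import numpy as np

def three_sigma_threshold(normal_errors):
    """Alarm limit from the three-sigma rule: reconstruction errors of
    typical user activity define mean + 3*std as the threshold."""
    return np.mean(normal_errors) + 3 * np.std(normal_errors)

def looks_like_ransomware(error, threshold):
    # file activity the autoencoder reconstructs poorly triggers an alert
    return error > threshold
```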
2021-05-05
Kishore, Pushkar, Barisal, Swadhin Kumar, Prasad Mohapatra, Durga.  2020.  JavaScript malware behaviour analysis and detection using sandbox assisted ensemble model. 2020 IEEE REGION 10 CONFERENCE (TENCON). :864–869.
Whenever an internet user visits a website, a scripting language known as JavaScript runs in the background. The embedding of malicious activities within the script poses a great threat to the cyber world. Attackers take advantage of the dynamic nature of JavaScript and embed malicious code within the website to download malware and damage the host. JavaScript developers obfuscate the script to keep it shielded from detection by malware detectors. In this paper, we propose a novel technique for analysing and detecting JavaScript using a sandbox-assisted ensemble model. We extract the payload using the malware-jail sandbox to get the real script. Upon getting the extracted script, we analyse it to define the features that are needed for creating the dataset. We compute Pearson's r between every pair of features for feature extraction. An ensemble model consisting of Sequential Minimal Optimization (SMO), Voted Perceptron and the AdaBoost algorithm is used with a voting technique to detect malicious JavaScript. Experimental results show that our proposed model can detect obfuscated and de-obfuscated malicious JavaScript with an accuracy of 99.6% and a 0.03s detection time. Our model performs better than other state-of-the-art models in terms of accuracy and has the lowest training and detection time.
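A hedged scikit-learn rendering of the voting ensemble, with SVC's SMO-based solver standing in for SMO and Perceptron for Voted Perceptron; an approximation of the paper's components, not its exact implementation.

```python
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import Perceptron
from sklearn.svm import SVC

# scikit-learn stand-ins for the paper's three learners (an assumption)
ensemble = VotingClassifier(
    estimators=[
        ("smo", SVC(kernel="linear")),
        ("vp", Perceptron()),
        ("ada", AdaBoostClassifier(n_estimators=50)),
    ],
    voting="hard",  # majority vote over the three detectors
)
# usage on extracted-script features:
# ensemble.fit(X_train, y_train); ensemble.predict(X_new_scripts)
```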
2021-09-16
Astakhova, Liudmila, Medvedev, Ivan.  2020.  The Software Application for Increasing the Awareness of Industrial Enterprise Workers on Information Security of Significant Objects of Critical Information Infrastructure. 2020 Global Smart Industry Conference (GloSIC). :121–126.
Digitalization of production and management, as an imperative of Industry 4.0, has stipulated the requirements of state regulators for informing and training personnel of significant objects of critical information infrastructure. However, the attention of industrial enterprises to this problem is assessed as insufficient. This determines the relevance and purpose of this article: to develop a methodology and tool for raising the awareness of workers of an industrial enterprise about the information security (IS) of significant objects of critical information infrastructure. The article reveals the features of training at industrial enterprises associated with a high level of development of safety and labor protection systems. Traditional and innovative methods and means of training personnel at the workplace within the framework of these systems, and their opportunities for training in the field of information security, are shown. The specificity of the content and forms of training employees on the security of critical information infrastructure is substantiated. The scientific novelty of the study consists in the development of a methodology and software application that can identify personal qualities of employees; test the input level of their knowledge in the field of IS; test for knowledge of IS rules (using the example of a response to social-engineering attacks); plan an individual thematic training program for each employee; automatically create a modular program and its content; automatically notify the employee about the training schedule at the workplace; organize training according to the schedule; administer control self-testing and testing of the employee's knowledge after training; and organize a survey to determine satisfaction with employee training. The practical significance of the work lies in the possibility of implementing the developed software application in industrial enterprises, which is confirmed by the successful results of its testing.
2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks are accompanied by sensitive data. Adversarial attacks in deep learning have emerged as one of the dominating security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effect, and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense can achieve high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
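Our simplified reading of the input-transformation teaming idea: classify several transformed copies of an input and take the majority vote, with low agreement serving as the verification signal; the model interface, transformations, and threshold are assumptions.

```python
import numpy as np

def teaming_predict(model, x, transforms):
    """Majority vote over transformed copies of one input; low agreement
    across the 'team' suggests an adversarial example (simplified sketch,
    not the paper's exact repair/verify procedure)."""
    preds = [int(np.argmax(model.predict(t(x)[None])[0])) for t in transforms]
    vote = max(set(preds), key=preds.count)
    agreement = preds.count(vote) / len(preds)
    return vote, agreement   # e.g. flag the input when agreement < 0.8
```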
2021-09-07
Kumar, Nripesh, Srinath, G., Prataap, Abhishek, Nirmala, S. Jaya.  2020.  Attention-based Sequential Generative Conversational Agent. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–6.
In this work, we examine a method of enabling computers to understand human interaction by constructing a generative conversational agent. We present an experimental approach to applying natural language processing techniques using recurrent neural networks (RNNs) to emulate the concept of textual entailment or human reasoning. To achieve this functionality, our experiment involves developing an integrated Long Short-Term Memory (LSTM) neural network enhanced with an attention mechanism. The results achieved by the model are shown as loss-versus-epoch graphs, along with a brief illustration of the model's conversational capabilities.
2021-03-29
Peng, Y., Fu, G., Luo, Y., Hu, J., Li, B., Yan, Q..  2020.  Detecting Adversarial Examples for Network Intrusion Detection System with GAN. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :6–10.
With the increasing scale of networks, attacks emerge one after another, and security problems become increasingly prominent. The network intrusion detection system (NIDS) is a widely used and effective security measure at present. In addition, with the development of machine learning technology, various intelligent intrusion detection algorithms have begun to appear. Flexibly combining these intelligent methods with intrusion detection technology can improve the comprehensive performance of intrusion detection, but the vulnerability of machine learning models in adversarial environments cannot be ignored. In this paper, we study the defense of network intrusion detection systems against adversarial samples. More specifically, we design a defense algorithm for NIDS against adversarial samples using a bidirectional generative adversarial network. The generator learns the data distribution of normal samples during training, yielding an implicit model of the normal data distribution. After training, the adversarial sample detection module calculates the reconstruction error and the discriminator matching error of each sample. The adversarial samples are then removed, which improves the robustness and accuracy of the NIDS in adversarial environments.
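The detection module's two error terms can be folded into one anomaly score; a sketch under assumed encoder/generator/discriminator interfaces, with an illustrative weighting:

```python
import numpy as np

def adversarial_score(encoder, generator, discriminator, x, alpha=0.9):
    """Reconstruction error ||x - G(E(x))|| combined with a discriminator
    matching error, weighted by alpha (illustrative); high-scoring samples
    are removed before they reach the NIDS."""
    recon = generator.predict(encoder.predict(x))
    rec_err = np.abs(x - recon).reshape(len(x), -1).mean(axis=1)
    disc_err = np.abs(discriminator.predict(x) - discriminator.predict(recon)).ravel()
    return alpha * rec_err + (1 - alpha) * disc_err
```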