Biblio

Found 431 results

Filters: Keyword is Neural networks
2020-10-12
Rudd-Orthner, Richard N M, Mihaylova, Lyudmilla.  2019.  An Algebraic Expert System with Neural Network Concepts for Cyber, Big Data and Data Migration. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1–6.

This paper describes a machine-assistance approach to grading decisions for values that may be missing or require validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and builds a neural-network computational graph structure. The expert system is also organized into a neural-network-like format of input, hidden and output layers, which gives the knowledge base a structured organization and provides a useful abstraction for reuse in data-migration applications across big data, cyber and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidence of value probabilities, rather than the traditional grading of rule probabilities, and estimates the most probable value in light of all the evidence presented. This is groundwork for a machine learning (ML) expert system in a form closer to a neural-network node structure.

Marrone, Stefano, Sansone, Carlo.  2019.  An Adversarial Perturbation Approach Against CNN-based Soft Biometrics Detection. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The use of biometric-based authentication systems has spread across everyday consumer electronics. Over the years, researchers' interest has shifted from hard biometrics (such as fingerprints, voice and keystroke dynamics) to soft biometrics (such as age, ethnicity and gender), mainly using the latter to improve the effectiveness of authentication systems. While newer approaches are constantly being proposed by domain experts, in recent years Deep Learning has risen to prominence in many computer vision tasks, becoming the current state of the art for several biometric approaches. However, since the automatic processing of data rich in sensitive information can expose users to privacy threats associated with its unfair use (e.g. inferring gender or ethnicity), researchers have begun to focus on the development of defensive strategies in view of a more secure and private AI. The aim of this work is to exploit Adversarial Perturbation, namely approaches able to mislead state-of-the-art CNNs by injecting a suitably small perturbation into the input image, to protect subjects against unwanted soft-biometrics-based identification by automatic means. In particular, since ethnicity is one of the most critical soft biometrics, as a case study we focus on the generation of adversarial stickers that, once printed, can hide a subject's ethnicity in a real-world scenario.
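The attack described belongs to the family of gradient-sign adversarial perturbations. As an illustration of the general technique (not the authors' sticker-generation pipeline), a minimal untargeted FGSM step in PyTorch is sketched below, assuming a generic pretrained classifier `model` and an input tensor normalized to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, eps=0.03):
    """Return x plus a small perturbation that pushes the classifier
    away from true_label (untargeted FGSM). x: (1, C, H, W) in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```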
2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015—2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity of inverting the second-derivative Hessian matrix and the memory storage required in large-scale data problems. The reward for using second-derivative information is that the methods can achieve improved convergence properties for problems typically found in a non-convex setting, such as saddle points and local minima. In this paper we introduce TRMinATR, an algorithm based on the limited-memory BFGS quasi-Newton method with trust region, as an alternative to gradient descent methods. TRMinATR bridges the disparity between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.
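The core of limited-memory BFGS is the two-loop recursion that applies an implicit inverse-Hessian approximation to the gradient using only recent curvature pairs. A compact NumPy sketch of that generic recursion (not the TRMinATR trust-region logic) is shown below; it assumes non-empty histories s_list and y_list.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: approximate H^{-1} @ grad from the m most recent
    curvature pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k."""
    q = grad.copy()
    rho = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, r in reversed(list(zip(s_list, y_list, rho))):
        a = r * np.dot(s, q)
        q -= a * y
        alphas.append(a)
    # Initial Hessian scaling gamma_k = s'y / y'y.
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r_vec = gamma * q
    for (s, y, rh), a in zip(zip(s_list, y_list, rho), reversed(alphas)):
        b = rh * np.dot(y, r_vec)
        r_vec += s * (a - b)
    return -r_vec  # quasi-Newton search direction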

Cruz, Rodrigo Santa, Fernando, Basura, Cherian, Anoop, Gould, Stephen.  2018.  Neural Algebra of Classifiers. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). :729—737.

The world is fundamentally compositional, so it is natural to think of visual recognition as the recognition of basic visual primitives that are composed according to well-defined rules. This strategy allows us to recognize unseen complex concepts from simple visual primitives. However, the current trend in visual recognition follows a data-greedy approach where huge amounts of data are required to learn models for any desired visual concept. In this paper, we build on the compositionality principle and develop an "algebra" to compose classifiers for complex visual concepts. To this end, we learn neural network modules to perform Boolean algebra operations on simple visual classifiers. Since these modules form a complete functional set, a classifier for any complex visual concept defined as a Boolean expression of primitives can be obtained by recursively applying the learned modules, even if we do not have a single training sample. As our experiments show, using such a framework we can compose classifiers for complex visual concepts that outperform standard baselines on two well-known visual recognition benchmarks. Finally, we present a qualitative analysis of our method and its properties.
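The paper learns neural modules for the Boolean operations; as a much simpler stand-in, primitive classifier scores interpreted as probabilities can be composed with product and noisy-OR rules. The sketch below only illustrates the composition idea, not the learned modules, and the concept names are invented for the example.

```python
import numpy as np

# Scores in [0, 1] from primitive concept classifiers for a batch of images.
# The paper learns neural AND/OR/NOT modules; probabilistic rules stand in here.
def c_and(p, q):  return p * q                            # both concepts present
def c_or(p, q):   return 1.0 - (1.0 - p) * (1.0 - q)      # noisy-OR
def c_not(p):     return 1.0 - p

round_scores = np.array([0.9, 0.2, 0.7])   # hypothetical "round" classifier
red_scores   = np.array([0.8, 0.9, 0.1])   # hypothetical "red" classifier

# Composite concept: "round AND NOT red".
composite = c_and(round_scores, c_not(red_scores))
print(composite)   # -> [0.18 0.02 0.63]
```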

2020-09-28
Liu, Kai, Zhou, Yun, Wang, Qingyong, Zhu, Xianqiang.  2019.  Vulnerability Severity Prediction With Deep Neural Network. 2019 5th International Conference on Big Data and Information Analytics (BigDIA). :114–119.
The high frequency of network security incidents in recent years has brought many negative effects, and even huge economic losses, to countries, enterprises and individuals, so increasing attention has been paid to the problem of network security. In order to evaluate newly included vulnerability text accurately, and to reduce the workload of experts and the false negative rate of the traditional method, multiple deep learning methods for vulnerability text classification are proposed in this paper. The standard Cross Site Scripting (XSS) vulnerability text data is processed first and then classified using three kinds of deep neural networks (CNN, LSTM, TextRCNN) and one traditional machine learning method (XGBoost). The dropout ratio of the optimal CNN network, the number of epochs for all deep neural networks, and the training set data were tuned experimentally to improve the fit on our target task. The results show that the deep learning methods evaluate vulnerability risk levels better than traditional machine learning methods but cost more time. We train our models on various training sets and test with the same testing set. Recurrent convolutional neural networks (TextRCNN) have the highest performance and utility of all the compared methods, with a classification accuracy of 93.95%.
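For orientation, a minimal Keras CNN text classifier over padded token IDs in the spirit of the compared baselines is sketched below; the vocabulary size, sequence length, number of severity classes and dropout ratio are illustrative assumptions, not the tuned values from the paper.

```python
import tensorflow as tf

VOCAB, NUM_CLASSES = 20000, 4          # assumed vocabulary and severity levels

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dropout(0.5),      # the dropout ratio is one of the tuned knobs
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train_ids, y_train, epochs=10, validation_data=(x_val_ids, y_val))
```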
2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
2020-09-18
Besser, Karl-Ludwig, Janda, Carsten R., Lin, Pin-Hsun, Jorswieck, Eduard A..  2019.  Flexible Design of Finite Blocklength Wiretap Codes by Autoencoders. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2512—2516.

With an increasing number of wireless devices, the risk of eavesdropping increases as well. From information theory, it is well known that wiretap codes can asymptotically achieve vanishing decoding error probability at the legitimate receiver while also achieving vanishing leakage to eavesdroppers. However, at finite blocklength there exists a tradeoff among the different parameters of the transmission. In this work, we propose a flexible wiretap code design for Gaussian wiretap channels at finite blocklength using neural network autoencoders. We show that the proposed scheme has higher flexibility in terms of the error rate and leakage tradeoff compared to traditional codes.
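A bare-bones end-to-end autoencoder over an AWGN channel in Keras illustrates the coding-by-autoencoder idea; the blocklength, rate and noise level are arbitrary assumptions, and the secrecy/leakage objective for the eavesdropper's channel, which is central to the paper, is omitted from this sketch.

```python
import numpy as np
import tensorflow as tf

k, n = 4, 8                       # 4 message bits mapped to 8 real channel uses
M = 2 ** k
noise_std = 0.35                  # assumed AWGN level at the legitimate receiver

inp  = tf.keras.Input(shape=(M,))                      # one-hot message
h    = tf.keras.layers.Dense(64, activation="relu")(inp)
code = tf.keras.layers.Dense(n)(h)
# Fix the average transmit power of each codeword.
code = tf.keras.layers.Lambda(
    lambda v: np.sqrt(n) * tf.math.l2_normalize(v, axis=1))(code)
rx   = tf.keras.layers.GaussianNoise(noise_std)(code)  # channel (active in training)
h    = tf.keras.layers.Dense(64, activation="relu")(rx)
out  = tf.keras.layers.Dense(M, activation="softmax")(h)

ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
msgs = np.eye(M)[np.random.randint(0, M, 50000)]
ae.fit(msgs, msgs, epochs=5, batch_size=256, verbose=0)
```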

2020-09-14
Ma, Zhuo, Liu, Yang, Liu, Ximeng, Ma, Jianfeng, Li, Feifei.  2019.  Privacy-Preserving Outsourced Speech Recognition for Smart IoT Devices. IEEE Internet of Things Journal. 6:8406–8420.
Most current intelligent Internet of Things (IoT) products take neural-network-based speech recognition as the standard human-machine interaction interface. However, the traditional speech recognition frameworks for smart IoT devices always collect and transmit voice information in the form of plaintext, which may cause the disclosure of user privacy. Because speech features are widely used for biometric authentication, such privacy leakage can cause immeasurable losses to personal property and privacy. Therefore, in this paper, we propose an outsourced privacy-preserving speech recognition framework (OPSR) for smart IoT devices based on the long short-term memory (LSTM) neural network and edge computing. In the framework, a series of additive secret sharing-based interactive protocols between two edge servers are designed to achieve lightweight outsourced computation. Based on these protocols, we implement the neural network training process of the LSTM for intelligent IoT device voice control. Finally, combining the universal composability theory with experimental results, we theoretically prove the correctness and security of our framework.
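The building block of such frameworks is additive secret sharing between the two edge servers. A toy sketch of sharing a value and adding two shared values without either server seeing the plaintext is given below; the modulus is an arbitrary choice and none of the paper's actual LSTM protocols are reproduced.

```python
import secrets

P = 2**61 - 1   # field modulus (toy choice)

def share(x):
    """Split secret x into two additive shares, one per edge server."""
    r = secrets.randbelow(P)
    return r, (x - r) % P            # share_1 + share_2 = x (mod P)

def reconstruct(s1, s2):
    return (s1 + s2) % P

# Each server adds its local shares; neither ever sees x or y in the clear.
x1, x2 = share(1234)
y1, y2 = share(5678)
z1, z2 = (x1 + y1) % P, (x2 + y2) % P
assert reconstruct(z1, z2) == 1234 + 5678
```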
2020-09-11
Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki.  2019.  Creation of Adversarial Examples with Keeping High Visual Performance. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :52—56.
The accuracy of image classification by convolutional neural networks now exceeds human ability and contributes to various fields. However, the improvement of image recognition technology deals a great blow to image-based security systems such as CAPTCHA. In particular, since character-string CAPTCHAs already add distortion and noise so that they cannot be read by a computer, the problem is that their human readability is lowered. Adversarial examples are a technique for producing images that intentionally cause misclassification by a machine learning classifier. The best feature of this technique is that when human beings compare the original image with the adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with the conventional FGSM cannot reliably cause misclassification in strongly nonlinear networks such as CNNs. Osadchy et al. researched applying adversarial examples to CAPTCHA and attempted to make a CNN misclassify them, but could not do so for character images. In this research, we propose a method for applying FGSM to character-string CAPTCHAs so that a CNN misclassifies them.
2020-09-08
de Almeida Ramos, Elias, Filho, João Carlos Britto, Reis, Ricardo.  2019.  Cryptography by Synchronization of Hopfield Neural Networks that Simulate Chaotic Signals Generated by the Human Body. 2019 17th IEEE International New Circuits and Systems Conference (NEWCAS). :1–4.
In this work, an asymmetric cryptography method for information security was developed, inspired by the fact that the human body generates chaotic signals and that these signals can be used to create sequences of random numbers. The encryption circuit was implemented in reconfigurable hardware (FPGA). To encode and decode an image, the chaotic synchronization between two dynamic systems, namely Hopfield neural networks (HNNs), was used to simulate chaotic signals. The notion of homotopy, an argument of topological nature, was used for the synchronization. The results show efficiency when compared to the state of the art, in terms of image correlation, histogram analysis and hardware implementation.
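For intuition only, the sketch below substitutes a simple logistic map for the synchronized Hopfield networks to derive a chaotic keystream and XOR it with image bytes; the paper's FPGA design and homotopy-based synchronization are not reproduced, and the map parameters are arbitrary.

```python
import numpy as np

def logistic_keystream(length, x0=0.4157, r=3.99):
    """Toy chaotic keystream: iterate the logistic map and quantise to bytes.
    Stands in here for the chaotic signal of the synchronized Hopfield networks."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_cipher(data: np.ndarray, x0: float) -> np.ndarray:
    ks = logistic_keystream(data.size, x0)
    return (data.reshape(-1) ^ ks).reshape(data.shape)   # same call decrypts

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in "image"
enc = xor_cipher(img, 0.4157)
assert np.array_equal(img, xor_cipher(enc, 0.4157))
```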
2020-09-04
Usama, Muhammad, Qayyum, Adnan, Qadir, Junaid, Al-Fuqaha, Ala.  2019.  Black-box Adversarial Machine Learning Attack on Network Traffic Classification. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :84—89.

Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats is still in question. Deep machine learning models have been found vulnerable to small, carefully crafted adversarial perturbations, raising a major question about the performance of deep machine learning techniques. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.

Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592—1596.
Adversarial attacks have become a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are much more appropriate in real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since there is no available guidance for searching for the imperceptible adversarial perturbation, boundary attack, one of the best performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based black-box adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black-box-explanation-guided constrained random search method is proposed to find the imperceptible adversarial example more quickly. The insights into the targeted deep neural networks explored by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with boundary attack.
Sutton, Sara, Bond, Benjamin, Tahiri, Sementa, Rrushi, Julian.  2019.  Countering Malware Via Decoy Processes with Improved Resource Utilization Consistency. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :110—119.
The concept of a decoy process is a new development of defensive deception beyond traditional honeypots. Decoy processes can be exceptionally effective in detecting malware, directly upon contact or by redirecting malware to decoy I/O. A key requirement is that they resemble their real counterparts very closely to withstand adversarial probes by threat actors. To be usable, decoy processes need to consume only a small fraction of the resources consumed by their real counterparts. Our contribution in this paper is twofold. We attack the resource utilization consistency of decoy processes provided by a neural network with a heatmap training mechanism, which we find to be insufficiently trained. We then devise machine learning over control flow graphs that improves the heatmap training mechanism. A neural network retrained by our work shows higher accuracy and defeats our attacks without a significant increase in its own resource utilization.
2020-08-28
Huang, Angus F.M., Chi-Wei, Yang, Tai, Hsiao-Chi, Chuan, Yang, Huang, Jay J.C., Liao, Yu-Han.  2019.  Suspicious Network Event Recognition Using Modified Stacking Ensemble Machine Learning. 2019 IEEE International Conference on Big Data (Big Data). :5873—5880.
This study aims to detect genuine suspicious events and false alarms within a dataset of network traffic alerts. The rapid development of cloud computing and artificial-intelligence-oriented automatic services has enabled a large amount of data and information to be transmitted among network nodes. However, the number of cyber-threats, cyberattacks, and network intrusions has increased in various domains of network environments. Based on the fields of data science and machine learning, this paper proposes a series of solutions involving data preprocessing, exploratory data analysis, new feature creation, feature selection, ensemble learning, model construction, and verification to identify suspicious network events. This paper proposes a modified form of stacking ensemble machine learning which includes AdaBoost, Neural Networks, Random Forest, LightGBM, and Extremely Randomised Trees (Extra Trees) to realise high-performance classification. A suspicious network event recognition dataset for a security operations centre, which uses real network log observations from the 2019 IEEE BigData Cup Challenge, is used as an experimental dataset. This paper investigates the possibility of integrating big-data analytics, machine learning, and data science to improve intelligent cybersecurity.
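A plain scikit-learn stacking baseline over the same five base learners would look roughly as follows; the hyperparameters are placeholders, the paper's modifications to the stacking scheme are not included, and the lightgbm package is assumed to be installed.

```python
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              ExtraTreesClassifier, StackingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from lightgbm import LGBMClassifier          # assumes the lightgbm package

base_learners = [
    ("ada",  AdaBoostClassifier(n_estimators=200)),
    ("mlp",  MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ("rf",   RandomForestClassifier(n_estimators=300)),
    ("lgbm", LGBMClassifier(n_estimators=300)),
    ("et",   ExtraTreesClassifier(n_estimators=300)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5, n_jobs=-1)
# stack.fit(X_train, y_train); stack.predict(X_test)
```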
Jafariakinabad, Fereshteh, Hua, Kien A..  2019.  Style-Aware Neural Model with Application in Authorship Attribution. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :325—328.

Writing style is a combination of consistent decisions associated with a specific author at different levels of language production, including lexical, syntactic, and structural. In this paper, we introduce a style-aware neural model to encode document information from three stylistic levels and evaluate it in the domain of authorship attribution. First, we propose a simple way to jointly encode syntactic and lexical representations of sentences. Subsequently, we employ an attention-based hierarchical neural network to encode the syntactic and semantic structure of sentences in documents while rewarding the sentences which contribute more to capturing the writing style. Our experimental results, based on four benchmark datasets, reveal the benefits of encoding document information from all three stylistic levels when compared to the baseline methods in the literature.

2020-08-17
Djemaiel, Yacine, Fessi, Boutheina A., Boudriga, Noureddine.  2019.  Using Temporal Conceptual Graphs and Neural Networks for Big Data-Based Attack Scenarios Reconstruction. 2019 IEEE Intl Conf on Parallel Distributed Processing with Applications, Big Data Cloud Computing, Sustainable Computing Communications, Social Computing Networking (ISPA/BDCloud/SocialCom/SustainCom). :991–998.
The emergence of novel technologies and high-speed networks has enabled the continual generation of huge volumes of data that must be stored and processed. These big data have allowed the emergence of new forms of complex attacks whose resolution represents a big challenge. Different methods and tools have been developed to deal with this issue, but definitive detection is still needed, since various features are not considered and tracing back an attack remains a time-consuming activity. In this context, we propose an investigation framework that allows the reconstruction of complex attack scenarios based on huge volumes of data. This framework uses a temporal conceptual graph to represent the big data and the dependencies between them, in addition to tracing back the whole attack scenario. The selection of the most probable attack scenario is assisted by a decision model based on a hybrid neural network that enables real-time classification of the possible attack scenarios using RBF networks and convergence to the most likely attack scenario with the support of an Elman network. The efficiency of the proposed framework is illustrated for a global attack reconstruction process targeting a smart city in which a set of available services is involved.
Regol, Florence, Pal, Soumyasundar, Coates, Mark.  2019.  Node Copying for Protection Against Graph Neural Network Topology Attacks. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). :709–713.
Adversarial attacks can affect the performance of existing deep learning models. With the increased interest in graph based machine learning techniques, there have been investigations which suggest that these models are also vulnerable to attacks. In particular, corruptions of the graph topology can degrade the performance of graph based learning algorithms severely. This is due to the fact that the prediction capability of these algorithms relies mostly on the similarity structure imposed by the graph connectivity. Therefore, detecting the location of the corruption and correcting the induced errors becomes crucial. There has been some recent work which tackles the detection problem, however these methods do not address the effect of the attack on the downstream learning task. In this work, we propose an algorithm that uses node copying to mitigate the degradation in classification that is caused by adversarial attacks. The proposed methodology is applied only after the model for the downstream task is trained and the added computation cost scales well for large graphs. Experimental results show the effectiveness of our approach for several real world datasets.
2020-08-10
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.

Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, a poisoning attack is a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the DNN training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, if an attacker wants only nuclear facilities to go unrecognized, it may be necessary to intentionally prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only a chosen class in the model. The proposed method reduces the accuracy of the chosen class by training on malicious data corresponding to that class, while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR-10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% and 55.3% on MNIST and CIFAR-10, respectively, while maintaining the accuracy of the remaining classes.
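A simple label-flipping variant on a scikit-learn digits model illustrates the selectivity of class-targeted poisoning; this toy setup, its poisoning fraction, and the MLP are illustrative assumptions and not the DNN experiments from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

target_class, poison_frac = 7, 0.6           # degrade only class "7"
y_poisoned = ytr.copy()
idx = np.where(ytr == target_class)[0]
flip = np.random.choice(idx, int(poison_frac * len(idx)), replace=False)
y_poisoned[flip] = (y_poisoned[flip] + 1) % 10   # mislabel a chosen fraction

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=400, random_state=0)
clf.fit(Xtr, y_poisoned)
pred = clf.predict(Xte)
for c in (target_class, (target_class + 1) % 10):
    mask = yte == c
    print(f"class {c} accuracy: {(pred[mask] == c).mean():.2f}")
```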

2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
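A simplified version of the detection idea is to track, per client, the minimum distance of each new query to that client's previous queries and to raise an alarm when this distance distribution stops looking natural (here checked with a Shapiro-Wilk normality test); PRADA's exact statistics, per-class bookkeeping and thresholds differ from this sketch.

```python
import numpy as np
from scipy.stats import shapiro

class QueryMonitor:
    """Flag a client whose query-distance distribution deviates from benign behavior.
    Simplified sketch; PRADA's actual statistics and thresholds differ."""
    def __init__(self, delta=0.05, warmup=30):
        self.history, self.min_dists = [], []
        self.delta, self.warmup = delta, warmup

    def observe(self, query: np.ndarray) -> bool:
        if self.history:
            d = min(np.linalg.norm(query - q) for q in self.history)
            self.min_dists.append(d)
        self.history.append(query)
        if len(self.min_dists) < self.warmup:
            return False                        # not enough evidence yet
        _, p_value = shapiro(self.min_dists[-200:])
        return p_value < self.delta             # True -> raise extraction alarm
```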
2020-07-30
Wang, Tianhao, Kerschbaum, Florian.  2019.  Attacks on Digital Watermarks for Deep Neural Networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2622—2626.
Training deep neural networks is a computationally expensive task. Furthermore, models are often derived from proprietary datasets that have been carefully prepared and labelled. Hence, creators of deep learning models want to protect their models against intellectual property theft. However, this is not always possible, since the model may, e.g., be embedded in a mobile app for fast response times. As a countermeasure watermarks for deep neural networks have been developed that embed secret information into the model. This information can later be retrieved by the creator to prove ownership. Uchida et al. proposed the first such watermarking method. The advantage of their scheme is that it does not compromise the accuracy of the model prediction. However, in this paper we show that their technique modifies the statistical distribution of the model. Using this modification we can not only detect the presence of a watermark, but even derive its embedding length and use this information to remove the watermark by overwriting it. We show analytically that our detection algorithm follows consequentially from their embedding algorithm and propose a possible countermeasure. Our findings shall help to refine the definition of undetectability of watermarks for deep neural networks.
2020-07-20
Pengcheng, Li, Yi, Jinfeng, Zhang, Lijun.  2018.  Query-Efficient Black-Box Attack by Active Learning. 2018 IEEE International Conference on Data Mining (ICDM). :1200–1205.
Deep neural networks (DNNs), as popular machine learning models, have been found vulnerable to adversarial attack. This attack constructs adversarial examples by adding small perturbations to the raw input that appear unmodified to human eyes but are misclassified by a well-trained classifier. In this paper, we focus on the black-box attack setting, where attackers have almost no access to the underlying models. To conduct a black-box attack, a popular approach is to train a substitute model based on information queried from the target DNN. The substitute model can then be attacked using existing white-box attack approaches, and the generated adversarial examples are used to attack the target DNN. Despite its encouraging results, this approach suffers from poor query efficiency, i.e., attackers usually need to query a huge number of times to collect enough information for training an accurate substitute model. To this end, we first utilize state-of-the-art white-box attack methods to generate samples for querying, and then introduce an active learning strategy to significantly reduce the number of queries needed. Besides, we also propose a diversity criterion to avoid sampling bias. Our extensive experimental results on MNIST and CIFAR-10 show that the proposed method can reduce more than 90% of queries while preserving attack success rates, and obtains an accurate substitute model that is more than 85% similar to the target oracle.
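The query-reduction idea can be sketched as a loop that trains a substitute model and lets an uncertainty criterion (smallest prediction margin) pick which candidate samples to send to the black-box target. The scikit-learn skeleton below is a generic active-learning loop under those assumptions, not the authors' attack; `oracle_predict` stands in for the target model's label API and `pool` for the candidate sample set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def margin(probs):
    """Uncertainty = difference between the top-2 class probabilities."""
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def build_substitute(oracle_predict, pool, n_rounds=10, batch=50):
    # Seed with a small random batch labelled by the black-box target.
    idx = np.random.choice(len(pool), batch, replace=False)
    X, y = pool[idx], oracle_predict(pool[idx])
    pool = np.delete(pool, idx, axis=0)
    sub = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
    for _ in range(n_rounds):
        sub.fit(X, y)
        pick = np.argsort(margin(sub.predict_proba(pool)))[:batch]  # most uncertain
        X = np.vstack([X, pool[pick]])
        y = np.concatenate([y, oracle_predict(pool[pick])])
        pool = np.delete(pool, pick, axis=0)
    sub.fit(X, y)
    return sub
```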
Xu, Tangwei, Lu, Xiaozhen, Xiao, Liang, Tang, Yuliang, Dai, Huaiyu.  2019.  Voltage Based Authentication for Controller Area Networks with Reinforcement Learning. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1–5.
Controller area networks (CANs) are vulnerable to spoofing attacks such as frame falsifying attacks, as electronic control units (ECUs) send and receive messages without any authentication and encryption. In this paper, we propose a physical authentication scheme that exploits the voltage features of the ECU signals on the CAN bus and applies reinforcement learning to choose the authentication mode such as the protection level and test threshold. This scheme enables a monitor node to optimize the authentication mode via trial-and-error without knowing the CAN bus signal model and spoofing model. Experimental results show that the proposed authentication scheme can significantly improve the authentication accuracy and response compared with a benchmark scheme.
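The monitor node's choice of authentication mode can be framed as trial-and-error learning over discrete options. A tiny tabular (bandit-style) Q-learning loop over candidate test thresholds, with a simulated reward for correct accept/reject decisions, is sketched below; the paper's state, action and reward design is richer, and the voltage-feature scores here are synthetic.

```python
import numpy as np

thresholds = np.linspace(0.1, 0.9, 9)        # candidate test thresholds (actions)
Q = np.zeros(len(thresholds))                # stateless, bandit-style Q-table
alpha, eps = 0.1, 0.2

def authenticate(score, legit, thr):
    """Reward +1 for a correct accept/reject decision, -1 otherwise."""
    accept = score >= thr
    return 1.0 if accept == legit else -1.0

rng = np.random.default_rng(0)
for t in range(5000):
    a = rng.integers(len(thresholds)) if rng.random() < eps else int(np.argmax(Q))
    legit = rng.random() < 0.5
    # Simulated voltage-feature score: legitimate ECUs score higher on average.
    score = rng.normal(0.7 if legit else 0.4, 0.15)
    r = authenticate(score, legit, thresholds[a])
    Q[a] += alpha * (r - Q[a])               # incremental value update

print("learned threshold:", thresholds[int(np.argmax(Q))])
```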
Boumiza, Safa, Braham, Rafik.  2019.  An Anomaly Detector for CAN Bus Networks in Autonomous Cars based on Neural Networks. 2019 International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :1–6.
The domain of securing in-vehicle networks has attracted both academic and industrial researchers due to the high danger of attacks on drivers and passengers. While securing wired and wireless interfaces is important to defend against these threats, detecting attacks is still the critical phase in constructing a robust secure system. There are only a few results on securing communication inside vehicles using anomaly-detection techniques, despite their effectiveness in systems that need real-time detection. Therefore, we propose an intrusion detection system (IDS) based on a Multi-Layer Perceptron (MLP) neural network for the Controller Area Network (CAN) bus. This IDS divides data according to the ID field of CAN packets using the K-means clustering algorithm, then extracts suitable features and uses them to train and construct the neural network. The proposed IDS works on each ID separately and finally combines the individual decisions to construct the final score and generate an alert in the presence of an attack. The strength of our intrusion detection method is that it works simultaneously for two types of attacks, which eliminates the need for several separate IDSs and thus reduces the complexity and cost of implementation.
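A compact scikit-learn skeleton of the "cluster by CAN ID, then train one network per group" structure is sketched below; feature extraction and the final alert-fusion step are stubbed out, the array inputs are assumed NumPy, and each group is assumed to contain both benign and attack frames.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def train_per_group_ids(can_ids, features, labels, n_groups=8):
    """Group frames by CAN ID with K-means, then train one MLP per group."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    groups = km.fit_predict(can_ids.reshape(-1, 1))
    models = {}
    for g in range(n_groups):
        m = groups == g
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
        clf.fit(features[m], labels[m])      # assumes both classes occur in group g
        models[g] = clf
    return km, models

def score_frame(km, models, can_id, feat):
    g = int(km.predict(np.array([[can_id]]))[0])
    # Per-group attack probability; a final fusion step would combine these scores.
    return models[g].predict_proba(feat.reshape(1, -1))[0, 1]
```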
2020-07-13
Andrew, J., Karthikeyan, J., Jebastin, Jeffy.  2019.  Privacy Preserving Big Data Publication On Cloud Using Mondrian Anonymization Techniques and Deep Neural Networks. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :722–727.

In recent trends, privacy preservation is the predominant concern in big data analytics and cloud computing. Every organization collects personal data from its users, actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) will lead to a privacy breach. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. In order to provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates the information loss of the data without compromising privacy.
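Mondrian greedily splits the records on the quasi-identifier with the widest range, at its median, until no partition can be split without dropping below k records. A compact pandas sketch of that recursion is shown below under the assumption of numeric quasi-identifiers; the final generalisation of each partition into ranges, and the paper's DNN component, are left out.

```python
import pandas as pd

def mondrian_partitions(df: pd.DataFrame, quasi_ids, k: int):
    """Recursively split on the widest quasi-identifier at its median,
    keeping only splits where both halves still have at least k records."""
    parts, stack = [], [df]
    while stack:
        p = stack.pop()
        spans = {q: p[q].max() - p[q].min() for q in quasi_ids}
        for q in sorted(spans, key=spans.get, reverse=True):
            median = p[q].median()
            left, right = p[p[q] <= median], p[p[q] > median]
            if len(left) >= k and len(right) >= k:
                stack.extend([left, right])
                break
        else:                     # no allowable cut -> this is a final partition
            parts.append(p)
    return parts

# Each returned partition would then have its quasi-identifiers generalised
# to a min-max range (or a common category) before publication.
```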

2020-07-03
Shaout, Adnan, Crispin, Brennan.  2019.  Markov Augmented Neural Networks for Streaming Video Classification. 2019 International Arab Conference on Information Technology (ACIT). :1—7.

With the growing number of streaming services, internet providers increasingly need to be able to identify the types of data and content providers being used on their networks. Traditional methods, such as IP and port scanning, are not always available for clients using VPNs or for providers using varying IP addresses. Therefore, in this paper we explore a potential method using neural networks and a Markov Decision Process to augment deep packet inspection techniques in identifying the source and class of video streaming services.
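One simple way to combine Markov-style sequence modelling with a neural classifier is to summarise each flow as a first-order transition matrix over quantised packet sizes and feed the flattened matrix to any standard classifier. The NumPy sketch below covers only that feature side, with an invented toy flow; it is an illustration of the general idea rather than the paper's model.

```python
import numpy as np

def markov_features(pkt_sizes, n_states=8, max_size=1500):
    """Quantise packet sizes into states and return the flattened
    first-order transition matrix of the flow as a feature vector."""
    states = np.minimum(np.asarray(pkt_sizes) * n_states // (max_size + 1),
                        n_states - 1).astype(int)
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    row_sums = T.sum(axis=1, keepdims=True)
    T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
    return T.ravel()              # e.g. 64-dim feature for a downstream classifier

flow = [1500, 1500, 120, 1500, 80, 1500, 1500]   # toy packet-size sequence
x = markov_features(flow)
```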