Biblio
Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats remains in question. Deep machine learning models have been found vulnerable to small, carefully crafted adversarial perturbations, which raises serious concerns about their reliability. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.
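A minimal sketch of how a query-based, hard-label black-box evasion attack on a flow classifier can be organized; the stand-in `predict` function, the feature layout, and the random-perturbation scheme are illustrative assumptions, not the attack proposed in the paper.

```python
# Query-based black-box evasion sketch: perturb flow features until the
# classifier's hard label changes, within a fixed query budget.
import numpy as np

def black_box_evasion(predict, x, target_label, max_queries=500, step=0.05, rng=None):
    """Randomly perturb the feature vector until the classifier's label changes."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    for _ in range(max_queries):
        if predict(x_adv) != target_label:          # evasion succeeded
            return x_adv
        # small random perturbation, clipped to keep features non-negative
        x_adv = np.clip(x_adv + rng.normal(0, step, size=x.shape), 0, None)
    return None                                      # failed within the query budget

# Toy usage: a "classifier" that flags flows with large (scaled) mean packet size
predict = lambda f: int(f[0] > 0.7)                  # 1 = "malicious", 0 = "benign"
flow = np.array([0.9, 0.3, 0.1])                     # [mean_pkt_size, iat, duration]
adv_flow = black_box_evasion(predict, flow, target_label=1)
```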
Continuous and adaptive learning is an effective learning approach when dealing with highly dynamic and changing scenarios, where concept drift often happens. In a continuous, stream, or adaptive learning setup, new measurements arrive continuously and there are no boundaries for learning, meaning that the model has to constantly decide how and when to (re)learn from the new data. We address the problem of adaptive and continual learning for network security, building dynamic models to detect network attacks in real network traffic. The combination of high-rate, large-volume network measurement data with the re-training paradigm of adaptive learning imposes complex challenges in terms of data processing speed, which we tackle by relying on big data platforms for parallel stream processing. We build and benchmark different adaptive learning models on top of a novel big data analytics platform for network traffic monitoring and analysis tasks, and show that significant speed-ups (as high as 6×) can be achieved by parallelizing off-the-shelf stream learning approaches.
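As an illustration of the re-training paradigm described above, the following sketch performs incremental (stream) learning with scikit-learn's `partial_fit` and a simple error-based drift signal; the feature layout, drift threshold, and toy data stream are assumptions and do not reflect the paper's big data platform or its parallelization.

```python
# Stream-learning sketch: score each incoming mini-batch, track recent error
# as a crude concept-drift signal, then learn from the batch incrementally.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                        # incremental linear classifier
classes = np.array([0, 1])                     # 0 = benign, 1 = attack
recent_errors = []

def process_batch(X, y, window=10, drift_threshold=0.3):
    """Score the incoming mini-batch, track error, then (re)learn from it."""
    global recent_errors
    if hasattr(model, "coef_"):                # skip scoring before the first fit
        err = 1.0 - model.score(X, y)
        recent_errors = (recent_errors + [err])[-window:]
        if np.mean(recent_errors) > drift_threshold:
            print("possible concept drift, recent error:", np.mean(recent_errors))
    model.partial_fit(X, y, classes=classes)   # learn continuously

# Toy stream: mini-batches of flow features arriving over time
rng = np.random.default_rng(0)
for _ in range(20):
    X = rng.normal(size=(32, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    process_batch(X, y)
```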
Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how altering one factor would affect the others. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals: new samples that show how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
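The following is a minimal, tabular sketch of the interventional idea behind such counterfactuals: fix the exogenous noise of an input, apply a do-intervention in a toy structural causal model so the change propagates to downstream features, and compare the predictor's outputs. The SCM equations and the stand-in predictor are illustrative assumptions, not the paper's image-based methodology.

```python
# Counterfactual via intervention in a toy structural causal model (SCM).
import numpy as np

def scm_forward(u, do_x1=None):
    """Toy SCM: x1 -> x2; u holds the exogenous noise terms."""
    x1 = u[0] if do_x1 is None else do_x1      # the intervention overrides x1
    x2 = 0.8 * x1 + u[1]                       # x2 inherits the change in x1
    return np.array([x1, x2])

predict = lambda x: float(x[0] + 2.0 * x[1] > 1.0)   # stand-in classifier

u = np.array([0.2, 0.1])                       # abduction: noise fixed for this input
x_factual = scm_forward(u)
x_counter = scm_forward(u, do_x1=1.0)          # counterfactual under do(x1 = 1.0)
print(predict(x_factual), predict(x_counter))  # how the model reacts to the change
```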
In today's world, software is ubiquitous and relied upon to perform many important and critical functions. Unfortunately, software is riddled with security vulnerabilities that invite exploitation. Attackers are particularly attracted to software systems that hold sensitive data with the goal of compromising the data. For such systems, this paper proposes a modeling method applied at design time to identify and reduce the attack surface, which arises due to the locations containing sensitive data within the software system and the accessibility of those locations to attackers. The method reduces the attack surface by changing the design so that the number of such locations is reduced. The method performs these changes on a graphical model of the software system. The changes are then considered for application to the design of the actual system to improve its security.
Today's software is full of security vulnerabilities that invite attack. Attackers are especially drawn to software systems containing sensitive data. For such systems, this paper presents a modeling approach, especially suited for Scrum or other forms of agile development, to identify and reduce the attack surface. The latter arises due to the locations containing sensitive data within the software system that are reachable by attackers. The approach reduces the attack surface by changing the design so that the number of such locations is reduced. The approach performs these changes on a visual model of the software system. The changes are then considered for application to the actual system to improve its security.
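A minimal sketch of the attack-surface metric both of the preceding abstracts revolve around: represent the system as a directed graph of components, mark entry points and sensitive-data locations, and count the sensitive locations reachable from the entry points before and after a design change. The component names, edges, and the particular design change are hypothetical, not the papers' modeling notation.

```python
# Attack-surface sketch: sensitive-data locations reachable from entry points.
from collections import deque

edges = {                                      # component -> directly reachable components
    "web_ui": ["api"],
    "api": ["auth_svc", "orders_db"],
    "auth_svc": ["user_db"],
    "orders_db": [],
    "user_db": [],
}
entry_points = {"web_ui"}
sensitive = {"user_db", "orders_db"}

def reachable_sensitive(edges, entry_points, sensitive):
    """BFS from the entry points; return the sensitive locations an attacker can reach."""
    seen, queue = set(), deque(entry_points)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(edges.get(node, []))
    return seen & sensitive

print(reachable_sensitive(edges, entry_points, sensitive))  # attack surface before
edges["api"] = ["auth_svc"]                    # design change: api no longer touches orders_db
print(reachable_sensitive(edges, entry_points, sensitive))  # attack surface after
```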
Expected and unexpected risks in cloud computing, including data security, data segregation, and the lack of control and knowledge, have led to dilemmas in several fields. Among these dilemmas, the privacy problem is the most paramount, and it has largely constrained the adoption and development of cloud computing. Several privacy protection algorithms have been proposed, generally falling into two categories: anonymity algorithms and differential privacy mechanisms. While much research has focused on the efficiency of these algorithms, few studies have examined their different orientations and drawbacks. Motivated by this emerging research challenge, we conduct a comprehensive survey of two popular privacy protection algorithms, namely the K-Anonymity Algorithm and the Differential Privacy Algorithm. We evaluate the two algorithms based on their principles, implementations, and orientations. We also present expectations and comparisons based on the current cloud computing privacy environment and its future requirements.
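To make the two families concrete, here is a minimal sketch of a k-anonymity check over quasi-identifier columns and of the Laplace mechanism for an ε-differentially private count query. The column names, the value of ε, and the toy records are illustrative assumptions, not taken from the survey.

```python
# k-anonymity check and Laplace-mechanism count query (two privacy families).
from collections import Counter
import numpy as np

def is_k_anonymous(records, quasi_ids, k):
    """Every combination of quasi-identifier values must appear at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def dp_count(records, predicate, epsilon=1.0, rng=None):
    """Laplace mechanism: a count query has sensitivity 1, so the noise scale is 1/epsilon."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(0, 1.0 / epsilon)

records = [
    {"zip": "100**", "age_band": "20-30", "disease": "flu"},
    {"zip": "100**", "age_band": "20-30", "disease": "cold"},
    {"zip": "200**", "age_band": "30-40", "disease": "flu"},
]
print(is_k_anonymous(records, ["zip", "age_band"], k=2))   # False: one group has size 1
print(dp_count(records, lambda r: r["disease"] == "flu"))  # noisy count of flu cases
```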
Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, poisoning attacks are a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, if an attacker wants to prevent only nuclear facilities from being recognized, it may be necessary to intentionally prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method reduces the accuracy of the chosen class by training on malicious training data corresponding to that class, while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% and 55.3% on MNIST and CIFAR10, respectively, while maintaining the accuracy of the remaining classes.
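A minimal sketch of the general idea of class-targeted poisoning, using simple label flipping on the chosen class as a stand-in; the paper's attack crafts malicious training samples rather than using this exact recipe, and the class index, poisoning rate, and toy labels below are assumptions.

```python
# Class-targeted label-poisoning sketch: corrupt labels of one chosen class
# so a model trained on the poisoned set degrades mainly on that class.
import numpy as np

def poison_chosen_class(y_train, chosen_class, n_classes, rate=0.5, rng=None):
    """Flip a fraction `rate` of the chosen class's labels to random other classes."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = np.where(y_train == chosen_class)[0]
    victims = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    others = [c for c in range(n_classes) if c != chosen_class]
    for i in victims:
        y_poisoned[i] = rng.choice(others)
    return y_poisoned

# Toy usage: MNIST-style labels 0..9, target class 7
y = np.repeat(np.arange(10), 100)
y_bad = poison_chosen_class(y, chosen_class=7, n_classes=10, rate=0.5)
print((y_bad[y == 7] != 7).mean())             # roughly half of class-7 labels corrupted
```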