Biblio
Antifragile systems enhance their capabilities and become stronger when exposed to adverse conditions, stresses or attacks, making antifragility a desirable property for cyber defence systems that operate in contested military environments. Self-improvement in autonomic systems refers to the improvement of their self-* capabilities, so that they are able to (a) better handle previously known (anticipated) situations, and (b) deal with previously unknown (unanticipated) situations. In this position paper, we present a vision of using self-improvement through learning to achieve antifragility in autonomic cyber defence systems. We first enumerate some of the major challenges associated with realizing distributed self-improvement. We then propose a reference model for middleware frameworks for self-improving autonomic systems and a set of desirable features of such frameworks.
In cloud computing environments, static security measures alone do not mitigate attacks effectively: attackers deploy sophisticated methods to learn the topology of complex networks, which makes their task easier. For this reason, a dynamic security measure such as virtual machine (VM) migration increases the attacker's uncertainty about a VM's location in a dynamic attack surface. Not every VM migration enhances security, however. The destination server that will host the VM must be selected carefully, so as to avoid both negative externalities and attacks. In this paper, we model migration in a cloud environment using a continuous-time Markov chain. We then analyze the probability that a VM is compromised based on the parameters of the destination server. Finally, we provide numerical results showing the effectiveness of our approach in terms of avoiding intrusion.
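To make the migration model concrete, here is a minimal sketch, not the authors' actual model: a two-state continuous-time Markov chain in which a VM is either safe or compromised, with an assumed attack rate for the hosting server and a migration rate that returns the VM to a safe host.

```python
# Illustrative two-state CTMC; the rates are hypothetical stand-ins for the
# destination-server parameters discussed in the paper.
import numpy as np
from scipy.linalg import expm

def compromise_probability(attack_rate, migration_rate, t):
    """P(VM compromised at time t), starting from the safe state."""
    # Generator matrix Q over the states [safe, compromised].
    Q = np.array([[-attack_rate,    attack_rate],
                  [migration_rate, -migration_rate]])
    p0 = np.array([1.0, 0.0])        # start safe
    return (p0 @ expm(Q * t))[1]     # transient distribution p(t) = p(0) e^{Qt}

# A destination server with a higher attack rate yields a higher
# probability of compromise over the same horizon.
print(compromise_probability(attack_rate=0.30, migration_rate=1.0, t=5.0))
print(compromise_probability(attack_rate=0.05, migration_rate=1.0, t=5.0))
```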
One-dimensional parameter optimization in deep-learning-based insider threat detection leads to unsatisfactory overall model performance. To address this, we design an insider threat detection method based on a deep belief network (DBN) adaptively optimized by grid search. The method adaptively optimizes the learning rate and the network structure, which together form a two-dimensional grid, and adaptively selects a set of optimized parameters for threat detection, improving the overall performance of the deep learning model. Experimental results show that the method adapts well: the learning rate of the deep belief network is optimized to 0.6, the network structure is optimized to 6 layers, and the threat detection rate increases to 98.794%. Both the training efficiency and the threat detection rate of the deep belief network are improved.
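The two-dimensional grid search over learning rate and network depth can be sketched as follows; train_and_evaluate is a hypothetical callable standing in for DBN training, not code from the paper.

```python
# Exhaustive search over the 2-D (learning rate, depth) grid described above.
from itertools import product

def grid_search(train_and_evaluate,
                learning_rates=(0.2, 0.4, 0.6, 0.8),
                layer_counts=(2, 4, 6, 8)):
    best_score, best_params = float("-inf"), None
    for lr, n_layers in product(learning_rates, layer_counts):
        score = train_and_evaluate(lr, n_layers)  # e.g., detection rate
        if score > best_score:
            best_score, best_params = score, (lr, n_layers)
    return best_params, best_score
```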
The number of sensors and embedded devices in an urban area can be on the order of thousands. New low-power wide area (LPWA) wireless network technologies have been proposed to support this large number of asynchronous, low-bandwidth devices. Among them, the Cooperative UltraNarrowband (C-UNB) is a clean-slate cellular network technology that connects these devices to a remote site or data collection server. C-UNB employs small-bandwidth channels and a lightweight random access protocol. In this paper, a new application is investigated: the use of C-UNB wireless networks to support the Advanced Metering Infrastructure (AMI), in order to facilitate communication between smart meters and utilities. To this end, we adapted a mathematical model for C-UNB and implemented a network simulation module in NS-3 that represents C-UNB's physical and medium access control layers. For the application layer, we implemented the DLMS-COSEM protocol (Device Language Message Specification - Companion Specification for Energy Metering). Details of the simulation module are presented, and we conclude that the simulation supports the results of the mathematical model.
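As a rough illustration of why a lightweight random access protocol over many narrowband channels can serve thousands of meters, here is a first-order collision model in the style of pure ALOHA; this is an assumed textbook model, not necessarily the paper's mathematical model of C-UNB.

```python
# Pure-ALOHA-style success probability when each message is sent on one of
# n_channels narrowband channels chosen at random (assumed channel model).
import math

def success_probability(total_rate_msgs_per_s, msg_duration_s, n_channels):
    g = total_rate_msgs_per_s * msg_duration_s / n_channels  # per-channel load
    return math.exp(-2 * g)  # vulnerability window of two message durations

# Example: 5000 smart meters, one 2-second reading per hour, 200 channels.
print(f"{success_probability(5000 / 3600, 2.0, 200):.3f}")
```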
Corpora used to learn open-domain Question-Answering (QA) models are typically collected from a wide variety of topics or domains. Since QA requires understanding natural language, open-domain QA models generally need very large training corpora. A simple way to reduce this data demand is to restrict the domain covered by the QA model, thus leading to domain-specific QA models. While learning improved QA models for a specific domain is still challenging due to the lack of sufficient training data on the topic of interest, additional training data can be obtained from related topic domains. Thus, instead of learning a single open-domain QA model, we investigate domain adaptation approaches in order to create multiple improved domain-specific QA models. We demonstrate that this can be achieved by stratifying the source dataset, without the need to search for complementary data, unlike many other domain adaptation approaches. We propose a deep architecture that jointly exploits convolutional and recurrent networks to learn domain-specific features while transferring domain-shared features. That is, we use transferable features to enable model adaptation from multiple source domains. We consider different transference approaches designed to learn span-level and sentence-level QA models. We found that domain adaptation greatly improves sentence-level QA performance, and that span-level QA benefits from sentence information. Finally, we show that a simple clustering algorithm can be employed when the topic domains are unknown, with a negligible loss in accuracy.
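The fallback for unknown topic domains mentioned at the end can be sketched with off-the-shelf tools; the feature choice and cluster count below are illustrative assumptions.

```python
# Pseudo-domain labels via TF-IDF + k-means when topic domains are unknown.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def stratify_by_cluster(questions, n_domains=5, seed=0):
    """Assign each question a pseudo-domain label for stratification."""
    X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(questions)
    return KMeans(n_clusters=n_domains, random_state=seed, n_init=10).fit_predict(X)
```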
Trust relationships have shown great potential for improving recommendation quality, especially for cold-start and sparse users. Since each user trusts their friends to different degrees, a number of works have been proposed that take trust strength into account in recommender systems. However, these methods ignore the direction of trust between users. In this paper, we propose a novel method that adaptively learns directive trust strength to improve trust-aware recommender systems. Advancing previous work, we establish the direction of trust strength by modeling the implicit relationships between users in the roles of trusters and trustees. In particular, under this new notion of directed trust strength, computing the directive trust strength becomes a new challenge. We therefore present a novel method that adaptively learns directive trust strengths in a unified framework, constraining each strength to the range [0, 1] through a mapping function. Our experiments on the Epinions and Ciao datasets demonstrate that the proposed algorithm effectively outperforms several state-of-the-art algorithms on both the MAE and RMSE metrics.
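One way to keep a learnable, directed trust strength inside [0, 1] is a sigmoid squashing of an unconstrained parameter; this parameterization is an assumption for illustration, not the paper's exact formulation.

```python
# Directed trust strengths constrained to [0, 1] via a sigmoid mapping.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DirectedTrust:
    def __init__(self, n_users, seed=0):
        rng = np.random.default_rng(seed)
        # raw[u, v] is unconstrained; the strength used by the recommender
        # is sigmoid(raw[u, v]), so gradient updates can move raw freely.
        self.raw = rng.normal(scale=0.1, size=(n_users, n_users))

    def strength(self, truster, trustee):
        # Directed: strength(u, v) need not equal strength(v, u).
        return sigmoid(self.raw[truster, trustee])
```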
The use of Knuth's Rule and Bayesian Blocks constant piecewise models for characterizing RFID traffic has been proposed previously. This study presents an evaluation of these two modeling techniques for various RFID traffic patterns. The data sets used in this study consist of time series of binned RFID command counts. More specifically, we compare the shape of several empirical plots of raw data sets obtained from experimental RFID readings against the constant piecewise graphs produced as output by the two modeling algorithms. One issue limiting the applicability of modeling techniques to RFID traffic is the large number of different RFID applications available; we consider this to be the main motivation for this study. The general expectation is that RFID traffic traces from different applications form sequences with different histogram shapes. Therefore, no modeling technique can be considered universal for modeling the traffic from multiple RFID applications without first evaluating its model performance on various traffic patterns. We postulate that differences in traffic patterns are present if the histograms of two different sets of RFID traces form visually different plot shapes.
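For readers who want to reproduce this kind of constant piecewise fit, the astropy library ships an implementation of the Bayesian Blocks algorithm; the synthetic counts below are invented for illustration.

```python
# Segment a synthetic series of binned command counts into constant blocks.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
# Binned counts with a rate change at t = 100.
counts = np.concatenate([rng.poisson(3, 100), rng.poisson(12, 100)]).astype(float)

# Block edges of the constant piecewise model (Poisson-ish errors assumed).
edges = bayesian_blocks(t, counts, sigma=np.sqrt(counts + 1), fitness="measures")
print(edges)
```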
Early detection of new kinds of malware always plays an important role in defending network systems. In particular, if intelligent protection systems could themselves detect the existence of new malware types, even from a very small number of samples, it would be a huge benefit for the organization as well as for society, since it would help prevent that malware from spreading. To deal with learning from few samples, the terms "one-shot learning" and "few-shot learning" were introduced, mostly in computer vision for recognizing images, handwriting, and so on. The approach introduced in this paper applies one-shot learning algorithms to the malware classification problem, using a Memory-Augmented Neural Network in combination with malware API call sequences, which are a very valuable source of information for identifying malware behavior. In addition, it leverages advances in natural language processing, such as word2vec, to convert those API sequences to numeric vectors before feeding them to the one-shot learning network. The results show very good accuracy compared with traditional methods.
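The word2vec preprocessing step can be sketched with gensim; the toy API sequences are invented, and averaging the call vectors is one simple choice of sequence embedding, not necessarily the paper's.

```python
# Embed API call sequences as numeric vectors before one-shot learning.
import numpy as np
from gensim.models import Word2Vec

api_sequences = [
    ["CreateFile", "WriteFile", "CloseHandle"],
    ["RegOpenKey", "RegSetValue", "RegCloseKey"],
    ["CreateFile", "ReadFile", "CloseHandle"],
]

model = Word2Vec(api_sequences, vector_size=32, window=2, min_count=1, epochs=50)

def embed_sequence(seq):
    """Average the word2vec vectors of a sample's API calls."""
    return np.mean([model.wv[call] for call in seq], axis=0)

print(embed_sequence(["CreateFile", "WriteFile", "CloseHandle"]).shape)  # (32,)
```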
In the development of critical systems, one of the main challenges is to provide early system validation and verification against vulnerabilities in order to reduce the cost caused by late error detection. We propose in this paper an approach that, first, allows system security specifications to be described formally, thanks to our proposed extended attack tree. Second, it introduces static and dynamic system modeling, using a SysML connectivity profile to model error propagation. Finally, a model checker is used to validate the system specifications.
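To make the attack-tree side concrete, here is a toy evaluator for classical AND/OR attack-tree semantics; the tree shape and leaf names are invented, and the paper's extended attack tree adds more than this.

```python
# Minimal AND/OR attack-tree evaluation.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"                    # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)

def attack_succeeds(node, achieved_leaves):
    """Is the attack goal at this node reachable given the achieved leaves?"""
    if node.gate == "LEAF":
        return node.name in achieved_leaves
    results = [attack_succeeds(c, achieved_leaves) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

root = Node("steal data", "AND", [
    Node("gain access", "OR", [Node("phish admin"), Node("exploit CVE")]),
    Node("exfiltrate"),
])
print(attack_succeeds(root, {"exploit CVE", "exfiltrate"}))  # True
```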
Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
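For intuition, here is the exact (non-sketching) minimum degree heuristic that the paper approximates: repeatedly eliminate a vertex of minimum degree and add fill edges among its neighbors. This quadratic-time toy version is only a baseline, not the paper's nearly-linear algorithm.

```python
# Exact greedy minimum degree ordering with explicit fill-in.
import networkx as nx

def min_degree_ordering(G):
    G = G.copy()
    order = []
    while len(G) > 0:
        v = min(G.nodes, key=G.degree)    # vertex of minimum (fill) degree
        nbrs = list(G.neighbors(v))
        G.remove_node(v)
        # Fill-in: the eliminated vertex's neighbors become a clique.
        G.add_edges_from((a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:])
        order.append(v)
    return order

print(min_degree_ordering(nx.cycle_graph(6)))
```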
Attackers create new threats and constantly change their behavior to mislead security systems. In this paper, we propose an adaptive threat detection architecture that trains its detection models in real time. The major contributions of the proposed architecture are: i) gathering data about zero-day attacks and attacker behavior using honeypots in the network; ii) processing data in real time, achieving high throughput through detection schemes implemented with stream processing technology; iii) evaluating our detection schemes on two real datasets, one from a major network operator in Brazil and the other created in our lab; iv) designing and developing adaptive detection schemes, including online-trained supervised classification schemes that update their parameters in real time and learn zero-day threats from the honeypots, and online-trained unsupervised anomaly detection schemes that model legitimate user behavior and adapt to changes. The performance evaluation results show that the proposed architecture maintains an excellent trade-off between the threat detection rate and the false positive rate, and achieves high classification accuracy of more than 90%, even under legitimate behavior changes and zero-day threats.
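The online-trained supervised scheme can be sketched with an incrementally updated linear classifier; the feature layout, labels, and simulated stream below are illustrative, not the paper's datasets.

```python
# Online classification over a stream using partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
classes = np.array([0, 1])               # 0 = legitimate, 1 = threat

def on_new_batch(X, y):
    """Update the model in real time on one labeled mini-batch."""
    clf.partial_fit(X, y, classes=classes)

# Simulated stream of 5-feature flow records (e.g., honeypot-labeled).
rng = np.random.default_rng(0)
for _ in range(10):
    on_new_batch(rng.normal(size=(32, 5)), rng.integers(0, 2, size=32))
print(clf.predict(rng.normal(size=(3, 5))))
```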
With the rapid growth in the number of social tagging users, it has become very important for social tagging systems to recommend the required resources to users rapidly and accurately. First, the architecture of an agent-based intelligent social tagging system is constructed using agent technology. Second, the design and implementation of user interest mining, personalized recommendation, and common preference group recommendation are presented. Finally, a self-adaptive recommendation strategy for social tagging and its implementation are proposed, based on an analysis of the shortcomings of the personalized recommendation strategy and the common preference group recommendation strategy. The self-adaptive strategy achieves an equilibrium between efficiency and accuracy, resolving the tension between efficiency and accuracy in the personalized recommendation model and the common preference recommendation model.
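Purely as an illustration of the efficiency/accuracy equilibrium, a self-adaptive strategy might pick between the two recommenders per request; the threshold and the user attribute below are hypothetical.

```python
# Fall back to the common preference group when a user's history is sparse.
def recommend(user, personalized, group_based, min_history=20):
    if len(user.tag_history) >= min_history:
        return personalized(user)    # accurate but costlier
    return group_based(user)         # cheap; uses the user's preference group
```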
Artificial neural networks are complex, biologically inspired algorithms made up of highly distributed, adaptive, and self-organizing structures, which makes them suitable for optimization problems. They consist of a group of interconnected nodes, similar to the vast networks of neurons in the human brain. So far, artificial neural networks have not been applied to user modeling in multi-criteria recommender systems. This paper presents a neural network-based user modeling technique that exploits some of the characteristics of biological neurons to improve the accuracy of multi-criteria recommendations. The study is based on the aggregation function approach, which computes the overall rating as a function of the criteria ratings. The proposed technique was evaluated using different evaluation metrics, and the empirical results were compared with those of single-rating collaborative filtering and two similarity-based modeling approaches: the worst-case and the average similarity techniques. The results of the comparative analysis show that the proposed technique is more efficient than the two similarity-based techniques and the single-rating collaborative filtering technique.
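A minimal sketch of the aggregation function approach, assuming synthetic data and an off-the-shelf feed-forward regressor rather than the paper's configuration:

```python
# Learn the overall rating as a function of the criteria ratings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
criteria = rng.uniform(1, 5, size=(500, 4))           # four criteria ratings
overall = criteria @ np.array([0.4, 0.3, 0.2, 0.1])   # synthetic ground truth

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(criteria, overall)
print(model.predict([[5, 4, 4, 3]]))                  # predicted overall rating
```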
Pattern recognition in the sparse representation (SR) framework has been very successful. In this model, a test sample is represented as a sparse linear combination of training samples by solving a norm-regularized least squares problem. However, the regularization parameter typically takes a single value for the whole dictionary, without discriminating among its atoms. To enhance the group concentration of the coefficients and also improve sparsity, we propose a new SR model called the adaptive sparse representation classifier (ASRC). In ASRC, a term that strengthens sparse coefficients is added to the objective function. The model is solved by an artificial bee colony (ABC) algorithm with variable step size to speed up convergence. In addition, a partition strategy for large-scale dictionaries is adopted to lighten each bee's load and remove irrelevant groups. On several data sets, we empirically demonstrate the properties of the new model and its recognition performance.
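For background, here is the standard sparse representation classifier that ASRC builds on, using an l1 solver in place of the paper's ABC optimization; the inputs and the alpha value are assumptions.

```python
# Classic SRC: sparse-code the test sample, classify by class-wise residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, labels, y, alpha=0.01):
    """D: (d, n) dictionary of training columns; labels: (n,); y: (d,)."""
    x = Lasso(alpha=alpha, max_iter=10000).fit(D, y).coef_   # sparse code
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruction error using only class-c coefficients.
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)
```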
This paper investigates practical strategies for distributing payload across images with content-adaptive steganography and for pooling outputs of a single-image detector for steganalysis. Adopting a statistical model for the detector's output, the steganographer minimizes the power of the most powerful detector of an omniscient Warden, while the Warden, informed by the payload spreading strategy, detects with the likelihood ratio test in the form of a matched filter. Experimental results with state-of-the-art content-adaptive additive embedding schemes and rich models are included to show the relevance of the results.
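A hedged sketch of the pooling step: if the single-image detector outputs are modeled as independent Gaussians whose means shift according to the known payload spreading pattern, the likelihood ratio test reduces to correlating the outputs with that pattern, i.e., a matched filter. The numbers below are illustrative.

```python
# Matched-filter pooling of single-image detector outputs.
import numpy as np

def pooled_statistic(detector_outputs, expected_shift):
    """LRT statistic under the Gaussian shift model: a matched filter."""
    return float(np.dot(detector_outputs, expected_shift))

outputs = np.random.default_rng(0).normal(size=10)   # ten cover-like outputs
print(pooled_statistic(outputs, np.full(10, 0.1)))   # uniform spreading filter
```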
A group of mobile nodes with limited capabilities, spread across different clusters, forms the backbone of a Mobile Ad-Hoc Network (MANET). In such settings, the requirements (mobility, performance, security, trust, and timing constraints) vary with changes in context, time, and the geographic location of deployment. This leads to various performance and security challenges and necessitates a trade-off between them when applying routing protocols in a specific context. Our research focuses on developing an adaptive and secure routing protocol for Mobile Ad-Hoc Networks that dynamically configures its routing functions using varying contextual features, with secure and real-time processing of traffic. In this paper, we propose a formal framework for modelling and verifying the requirement constraints used in designing adaptive routing protocols for MANETs. We formally represent the network topology, behaviour, and functionalities of the network in the SMT-LIB language. In addition, our framework verifies various functional, security, and Quality-of-Service (QoS) constraints. The verification engine is built using the Yices SMT solver. The efficacy of the proposed requirement models is demonstrated with experimental results.
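The paper encodes its constraints in SMT-LIB and checks them with Yices; as a runnable stand-in, the sketch below uses the z3 Python bindings (a different SMT solver) to check a toy QoS constraint on an invented three-node topology.

```python
# Check a route-selection + latency-deadline constraint with an SMT solver.
from z3 import And, Bool, If, Or, Solver, Sum, is_true, sat

links = {"a-b": 10, "b-c": 15, "a-c": 40}            # link latencies in ms
use = {n: Bool(f"use_{n}") for n in links}

s = Solver()
# Connectivity: reach c from a either via b (both hops) or directly.
s.add(Or(And(use["a-b"], use["b-c"]), use["a-c"]))
# QoS timing constraint: end-to-end latency within a 30 ms deadline.
s.add(Sum([If(use[n], links[n], 0) for n in links]) <= 30)

if s.check() == sat:
    m = s.model()
    print([n for n in links if is_true(m.eval(use[n], model_completion=True))])
```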