Biblio
Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework that supports system design at both the organizational and technical levels. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions; system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be applied in practice.
Software Defined Networks (SDNs) have gained prominence recently due to their flexible management and superior configuration functionality of the underlying network. SDNs, with OpenFlow as their primary implementation, allow for the use of a centralised controller that drives decision making for all supported devices in the network and manages traffic through routing-table changes for incoming flows. In conventional networks, machine learning has been shown to detect malicious intrusions and to classify attacks such as DoS, user-to-root, and probe attacks. In this work, we extend the use of machine learning to improve traffic tolerance for SDNs. To achieve this, we extend the functionality of the controller with a resilience framework, ReSDN, that incorporates machine learning to distinguish DoS attacks, focusing on the Neptune attack in our experiments. Our model is trained using the MIT KDD 1999 dataset. The system is developed as a module on top of the POX controller platform and evaluated using the Mininet simulator.
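The abstract does not fix a particular classifier, so the following is only a minimal sketch of the kind of flow classifier such a controller module could plug in: a decision tree trained on a handful of KDD 1999 features to separate Neptune (SYN flood) records from normal traffic. The file name and the chosen feature columns are assumptions, following the standard 41-column KDD layout.

```python
# Sketch of a ReSDN-style flow classifier (assumed local KDD file and feature subset).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Standard KDD column order: 4 = src_bytes, 5 = dst_bytes,
# 22 = count, 23 = srv_count, 24 = serror_rate, 41 = label.
FEATURES = [4, 5, 22, 23, 24]
LABEL = 41

df = pd.read_csv("kddcup.data_10_percent", header=None)      # assumed filename
df = df[df[LABEL].isin(["normal.", "neptune."])]             # binary task
X, y = df[FEATURES], (df[LABEL] == "neptune.").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

In a POX-style module, `clf.predict` would be invoked on the statistics of each new flow before a routing-table entry is installed.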
This paper considers the general structure of a pseudo-random binary sequence generator based on the numerical solution of chaotic differential equations. The proposed generator architecture divides the generation process into two stages: numerical simulation of the chaotic system and conversion of the resulting sequence to binary form. A new method for calculating the normalization factor is applied when converting state-variable values to the binary sequence. The numerical solution of the chaotic ODEs is implemented using a semi-implicit symmetric composition D-method. The experimental study considers the Thomas and Rössler attractors as test chaotic systems. The properties of the generators' output sequences are verified using correlation analysis methods and the NIST statistical test suite. It is shown that the output sequences of the investigated generators have statistical and correlation characteristics typical of random sequences. The obtained results can be used in cryptography applications as well as in the design of secure communication systems.
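As a rough illustration of the two-stage architecture, here is a minimal sketch for the Thomas attractor. Two simplifications are assumptions, not the paper's method: a classical RK4 integrator stands in for the semi-implicit symmetric composition D-method, and a low-order mantissa bit stands in for the normalization-factor conversion.

```python
# Two-stage chaotic PRBG sketch: (1) integrate the ODE, (2) binarize the states.
import math, struct

B = 0.208186  # Thomas attractor parameter in the chaotic regime

def thomas(s):
    x, y, z = s
    return (math.sin(y) - B * x, math.sin(z) - B * y, math.sin(x) - B * z)

def rk4_step(s, h):
    k1 = thomas(s)
    k2 = thomas(tuple(si + h/2 * ki for si, ki in zip(s, k1)))
    k3 = thomas(tuple(si + h/2 * ki for si, ki in zip(s, k2)))
    k4 = thomas(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h/6 * (a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def bits(n, s=(0.1, 0.0, 0.0), h=0.05, skip=1000):
    for _ in range(skip):                 # discard the transient
        s = rk4_step(s, h)
    out = []
    while len(out) < n:
        s = rk4_step(s, h)
        # take a low-order mantissa bit of x, which is sensitive to the dynamics
        mant = struct.unpack("<Q", struct.pack("<d", s[0]))[0]
        out.append((mant >> 8) & 1)
    return out

print("".join(map(str, bits(64))))
```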
OpenFlow has recently emerged as a powerful paradigm to help build dynamic, adaptive and agile networks. By decoupling the control plane from the data plane, OpenFlow allows network operators to program a centralized intelligence, the OpenFlow controller, to manage network-wide traffic flows to meet changing needs. However, from a security point of view, a buggy or even malicious controller could compromise the control logic, and then the entire network. Even worse, the recent Stuxnet attack on industrial control systems indicates a similarly severe threat to OpenFlow controllers from the commercial operating systems they run on. In this paper, we comprehensively studied the attack vectors against the critical OpenFlow component, the controller, and proposed a cross-layer diversity approach that enables OpenFlow controllers to detect attacks, corruptions, and failures, and then automatically continue correct execution. Case studies demonstrate that our approach can protect OpenFlow controllers from threats coming from compromised operating systems and from the controllers themselves.
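One natural reading of a diversity approach is replica voting: run diverse controller builds (different code bases, different operating systems) and accept only majority decisions. The sketch below is a minimal illustration of that idea under assumptions of my own; the `Replica` interface and decision format are hypothetical, not the paper's API.

```python
# Replica-voting sketch for controller diversity (hypothetical interface).
from collections import Counter

class Replica:
    """One controller replica, e.g. a different controller build on a different OS."""
    def __init__(self, decide):
        self.decide = decide  # maps a packet-in event to a forwarding decision

def voted_decision(replicas, event):
    # Each diverse replica computes a decision; a compromised or buggy replica
    # is outvoted as long as a majority behaves correctly.
    votes = Counter(r.decide(event) for r in replicas)
    decision, n = votes.most_common(1)[0]
    if n <= len(replicas) // 2:
        raise RuntimeError("no majority: possible corruption, fail safe")
    return decision

# Example: two honest replicas outvote one that maliciously floods.
honest = lambda e: ("forward", e["dst"])
evil = lambda e: ("flood", None)
rs = [Replica(honest), Replica(honest), Replica(evil)]
print(voted_decision(rs, {"dst": 2}))   # ('forward', 2)
```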
One important goal of black-box complexity theory is the development of complexity models that allow us to derive meaningful lower bounds for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example of such a result is the Ω(n log n) lower bound for unary unbiased algorithms on functions with a unique global optimum [Lehre/Witt, GECCO 2010], which tells us that higher-arity operators or biased sampling strategies are needed to beat this bound. For lack of suitable analysis techniques, almost no non-trivial bounds are known for other restricted models. Proving such bounds therefore remains one of the main challenges in black-box complexity theory. With this paper we contribute to the technical toolbox for lower bound computations by proposing a new type of information-theoretic argument. We consider the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is Ω(n²), a bound that is matched by (1+1)-type evolutionary algorithms. The (1+1) elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order n log log n [Afshani et al., 2013].
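For concreteness, the algorithm that matches the Ω(n²) bound is the textbook (1+1) EA; a minimal sketch on (plain, non-permuted) LeadingOnes follows. Its expected runtime is known to be about 0.86·n² function evaluations, which is the quadratic behavior referenced above.

```python
# The standard (1+1) elitist EA on LeadingOnes; expected runtime Theta(n^2).
import random

def leading_ones(x):
    lo = 0
    for bit in x:
        if bit == 0:
            break
        lo += 1
    return lo

def one_plus_one_ea(n, rng=random.Random(0)):
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while leading_ones(x) < n:
        y = [b ^ (rng.random() < 1 / n) for b in x]   # standard bit mutation, rate 1/n
        if leading_ones(y) >= leading_ones(x):        # elitist (1+1) selection
            x = y
        steps += 1
    return steps

print(one_plus_one_ea(64))   # on the order of 0.86 * n^2 in expectation
```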
Ransomware has become a growing threat since 2012, and the situation continues to worsen. The lack of security mechanisms and of security awareness is pushing systems into the mire of ransomware attacks. In this paper, a new framework called 2entFOX is proposed to detect highly survivable ransomware (HSR). To our knowledge, this framework can be considered one of the first in ransomware detection, given the little publicly available research in this field. We analyzed the behaviour of Windows ransomware and sought features that are particularly useful for detecting this type of malware with high detection accuracy and a low false positive rate. After extensive experimental analysis we extracted 20 effective features, two of which are so discriminative that, together with the rest, they yield an appropriate feature set for HSR detection. After proposing an architecture based on a Bayesian belief network, the final evaluation is carried out on known and unknown ransomware samples under six different scenarios. The results of this evaluation show the high accuracy of 2entFOX in detecting HSRs.
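To make the detection idea concrete, here is a heavily simplified stand-in: a naive Bayes classifier over boolean behavioural features. Both simplifications are assumptions on my part; the paper uses a full Bayesian belief network with dependencies between features, and the feature names and toy labels below are illustrative, not the paper's 20 features.

```python
# Simplified stand-in for Bayesian behavioural scoring of ransomware samples.
from sklearn.naive_bayes import BernoulliNB

FEATURES = ["mass_file_rename", "crypto_api_calls", "shadow_copy_delete",
            "ransom_note_drop", "high_entropy_writes"]

# Toy observations: rows are samples, 1 = behaviour observed; labels: 1 = HSR.
X = [[1, 1, 1, 1, 1],   # classic highly survivable ransomware
     [1, 1, 0, 1, 1],
     [0, 1, 0, 0, 0],   # benign crypto-using application
     [0, 0, 0, 0, 0],   # benign
     [1, 0, 0, 0, 1]]   # backup tool: renames and rewrites many files
y = [1, 1, 0, 0, 0]

clf = BernoulliNB().fit(X, y)
suspect = [[1, 1, 1, 0, 1]]
print("P(HSR) =", clf.predict_proba(suspect)[0][1])
```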
Deep packet inspection (DPI) is widely used in content-aware network applications to detect string features. Improving DPI performance is of vital importance given ever-increasing link speeds. In this demo, we propose a novel DPI architecture with a hierarchical memory structure and parallel matching engines based on a memory-centric FPGA. The implemented DPI prototype is able to provide up to 60 Gbps full-text string-matching throughput and fast rule-update speed.
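The abstract does not name the matching algorithm, so as a software-level illustration of multi-pattern string matching, the class of automaton that DPI engines typically parallelise in hardware, here is a compact Aho-Corasick sketch. Treat it as an assumed stand-in, not the demo's actual engine.

```python
# Multi-pattern matching (Aho-Corasick): one pass over the text, all patterns.
from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # build the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())
    while q:                                # BFS to fill failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            nxt = goto[f].get(ch, 0)
            fail[t] = nxt if nxt != t else 0
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits

ac = build(["evil.exe", "cmd.exe", "exe"])
print(search("GET /evil.exe HTTP/1.1", ac))
```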
Abnormality detection is useful in reducing the amount of data to be processed manually by directing attention to a specific portion of the data. However, the selection of suitable features is important for the success of an abnormality detection system. Designing and selecting appropriate features is time-consuming and requires expensive domain knowledge and human labor. Further, it is very challenging to represent high-level concepts of abnormality in terms of raw input. Most existing abnormality detection systems use handcrafted feature detectors and are based on shallow architectures. In this work, we explore the Deep Belief Network for abnormality detection and compare its performance with a classical neural network in terms of the features learned and the accuracy of detecting abnormality. Further, we explore the set of features learned by each layer of the deep architecture and provide a simple and fast mechanism to visualize the features at the higher layers. The effect of different activation functions on abnormality detection is also compared. We observe that a deep-learning-based approach can be used for detecting abnormality, and that it performs better than a classical neural network in separating both distinct and nearly similar data.
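Since a Deep Belief Network is essentially a stack of restricted Boltzmann machines with a supervised head, a compact sketch along those lines is shown below. The dataset, layer sizes, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Stacked-RBM sketch of a DBN-style pipeline (sklearn's BernoulliRBM).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0                                # scale pixels to [0, 1] for the RBMs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.06, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # supervised head
])
dbn.fit(X_tr, y_tr)
print("accuracy:", dbn.score(X_te, y_te))
```

The per-layer features the abstract mentions can be inspected directly, e.g. by reshaping the rows of `dbn.named_steps["rbm1"].components_` into images.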
We propose an interactive approach where analysts reason about the security of a system using an abstraction of its runtime structure, as opposed to looking at the code. They interactively refine a hierarchical object graph, set security properties on abstract objects or edges, query the graph, and investigate the results by studying highlighted objects or edges or tracing to the code. Behind the scenes, an inference analysis and an extraction analysis maintain the soundness of the graph with respect to the code.
There is growing evidence that spear phishing campaigns are increasingly pervasive and sophisticated, and remain the starting points of more advanced attacks. The current campaign identification and attribution process relies heavily on manual effort and is inefficient at gathering intelligence in a timely manner. Ideally, we would automatically attribute spear phishing emails to known campaigns and achieve early detection of new campaigns using a limited number of labelled emails as seeds. In this paper, we introduce four categories of email profiling features that capture various characteristics of spear phishing emails. Building on these features, we implement and evaluate an affinity-graph-based semi-supervised learning model for campaign attribution and detection. We demonstrate that our system, using only 25 labelled emails, achieves a 0.9 F1 score with a 0.01 false positive rate in known-campaign attribution, and is able to detect previously unknown spear phishing campaigns, detecting 100% of the 'darkmoon' campaign, over 97% of 'samkams', and 91% of 'bisrala' using 246 labelled emails in our experiments.
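As a minimal sketch of graph-based semi-supervised attribution, the snippet below propagates a few seed labels over an RBF affinity graph using sklearn's LabelSpreading. This is an assumed stand-in for the paper's model, with synthetic clusters in place of real email profiling features.

```python
# Label propagation over an affinity graph: few labelled seeds, many unlabelled emails.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# Toy feature vectors for three "campaigns" (clusters in feature space).
X = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
y_true = np.repeat([0, 1, 2], 40)

y = np.full(len(X), -1)                  # -1 marks unlabelled emails
labelled = [0, 1, 40, 41, 80, 81]        # two seed emails per campaign
y[labelled] = y_true[labelled]

model = LabelSpreading(kernel="rbf", gamma=2.0).fit(X, y)
print("attribution accuracy:", (model.transduction_ == y_true).mean())
```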
In a world where the number of highly skilled actors involved in cyber-attacks is constantly increasing and the associated underground market continues to expand, organizations should adapt their defence strategy and consequently improve their security incident management. In this paper, we give an overview of the Advanced Persistent Threat (APT) attack life cycle as defined by security experts. We introduce our own compiled life cycle model, guided by attackers' objectives instead of their actions. Challenges and opportunities related to the specific camouflage actions performed at the end of each APT phase of the model are highlighted. We also give an overview of new APT protection technologies and discuss their effectiveness at each phase of the life cycle.
Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt, and formally verify privacy. We also introduce a novel coupling technique called optimal subset coupling that may be of independent interest.
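Since the Sparse Vector technique is the running example here, a plain-Python sketch of its Above-Threshold core may help fix intuition. The noise scales follow the standard Dwork-Roth presentation for answering a stream of 1-sensitive queries; the toy data and threshold are illustrative.

```python
# Above-Threshold core of the Sparse Vector technique (epsilon-DP overall).
import random

def above_threshold(queries, data, threshold, eps, rng=random.Random(0)):
    """Scan 1-sensitive queries; report the first whose noisy value clears the noisy threshold."""
    def lap(b):                                   # Laplace(b) via signed exponential
        return rng.expovariate(1) * b * rng.choice([-1, 1])
    t_hat = threshold + lap(2 / eps)              # noise the threshold once
    for i, q in enumerate(queries):
        if q(data) + lap(4 / eps) >= t_hat:       # fresh noise per query
            return i                              # index of the first "above" answer
    return None

# Example: which counting query first exceeds 50 on a toy dataset?
data = list(range(100))
queries = [lambda d, k=k: sum(1 for x in d if x < k) for k in (10, 40, 70)]
print(above_threshold(queries, data, threshold=50, eps=0.5))
```

The key property the logic above has to track is that only the single "above" answer is charged against the privacy budget, regardless of how many "below" answers precede it.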
Security in Mobile Ad Hoc Networks is still an ongoing research topic in the scientific community, and it is difficult to provide an overall security solution. In this paper we assess the feasibility of distributed firewall solutions in Mobile Ad Hoc Networks. Attention is also given to other security solutions for ad hoc networks. We propose a security architecture that secures the network at several layers and is the most secure of the analyzed solutions. For this purpose we use a distributed public key infrastructure, a distributed firewall, and an intrusion detection system. Our architecture uses both symmetric and asymmetric cryptography, and we present performance measurements and a security analysis of our solution.
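The symmetric-plus-asymmetric combination typically means hybrid encryption: a fresh symmetric session key protects the payload, and a node's public key protects the session key. A minimal sketch with the `cryptography` package follows; the key sizes and message are illustrative, not the paper's parameters.

```python
# Hybrid encryption sketch: RSA-OAEP wraps a fresh symmetric (Fernet) session key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(pub, message: bytes):
    sym = Fernet.generate_key()                       # fresh symmetric session key
    wrapped = pub.encrypt(sym, padding.OAEP(          # ...wrapped asymmetrically
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(), label=None))
    return wrapped, Fernet(sym).encrypt(message)

def hybrid_decrypt(priv, wrapped, ct):
    sym = priv.decrypt(wrapped, padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(), label=None))
    return Fernet(sym).decrypt(ct)

w, ct = hybrid_encrypt(node_key.public_key(), b"route update")
print(hybrid_decrypt(node_key, w, ct))
```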
As more and more cyber security incident data, ranging from system logs to vulnerability scan results, are collected, manually analyzing these data to detect important cyber security events becomes impossible. Hence, data mining techniques are becoming an essential tool for real-world cyber security applications. For example, a report from Gartner [gartner12] claims that "Information security is becoming a big data analytics problem, where massive amounts of data will be correlated, analyzed and mined for meaningful patterns". Of course, data mining/analytics is a means to an end: the ultimate goal is to provide cyber security analysts with prioritized, actionable insights derived from big data. This raises the question: can we directly apply existing techniques to cyber security applications? One of the most important differences between data mining for cyber security and many other data mining applications is the existence of malicious adversaries that continuously adapt their behavior to hide their actions and to make data mining models ineffective. Unfortunately, traditional data mining techniques are insufficient to handle such adversarial problems directly. The adversaries adapt to the data miner's reactions, and data mining algorithms constructed from a training dataset degrade quickly. To address these concerns, over the last couple of years new data mining techniques that are more resilient to such adversarial behavior have been developed in the machine learning and data mining communities. We believe that the lessons learned from this research direction will benefit cyber security researchers who are increasingly applying machine learning and data mining techniques in practice. To give an overview of recent developments in adversarial data mining, in this three-hour tutorial we introduce the foundations, the techniques, and the applications of adversarial data mining to cyber security. We first introduce various approaches proposed in the past to defend against active adversaries, such as a minimax approach that minimizes the worst-case error through a zero-sum game. We then discuss a game-theoretic framework that models the sequential actions of the adversary and the data miner, while both parties try to maximize their utilities. We also introduce a modified support vector machine method and a relevance vector machine method to defend against active adversaries. Intrusion detection and malware detection are two important application areas for adversarial data mining models that will be discussed in detail during the tutorial. Finally, we discuss some practical guidelines on how to use adversarial data mining ideas in generic cyber security applications and how to leverage existing big data management tools when building data mining algorithms for cyber security.
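The minimax/zero-sum framing mentioned above reduces to a small linear program: the defender picks a mixed strategy over detectors that maximizes the worst-case detection payoff over the adversary's evasion strategies. A sketch with an illustrative payoff matrix (an assumption, not from the tutorial) follows.

```python
# Solving a small zero-sum "data miner vs. adversary" game by linear programming.
import numpy as np
from scipy.optimize import linprog

# Rows = defender detectors, columns = adversary evasion strategies;
# each entry is the defender's detection payoff.
A = np.array([[0.9, 0.2, 0.4],
              [0.3, 0.8, 0.5],
              [0.5, 0.4, 0.7]])

# Maximise v subject to (A^T p)_j >= v for all j, sum(p) = 1, p >= 0.
m, n = A.shape
c = np.zeros(m + 1); c[-1] = -1.0                 # linprog minimises, so minimise -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - (A^T p)_j <= 0
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * m + [(None, None)])
p, v = res.x[:m], res.x[-1]
print("defender mix:", p.round(3), "game value:", round(v, 3))
```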
The reconfiguration of FPGAs involves downloading a bitstream file containing the new design onto the FPGA. The option to reconfigure FPGAs dynamically opens up the threat of stealing the intellectual property (IP) of the design: since the configuration is usually stored in external memory, it can easily be tapped and read out by an eavesdropper. This work presents a low-cost solution for securing the reconfiguration of FPGAs. The proposed solution is based on an efficient, compact hardware implementation of AEGIS, one of the candidates in the CAESAR competition. The proposed architecture uses a quarter AES round to reduce the consumed area. We evaluated the presented design using 90 and 65 nm technologies. Our comparison to existing AES-based schemes reveals that the proposed design is better in terms of hardware performance (throughput/mm²).
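The security goal, authenticated encryption of the stored bitstream so a tapped external memory yields nothing useful and tampering is detected, can be illustrated in software. AEGIS itself is not available in common Python libraries, so the sketch below uses AES-GCM as an assumed stand-in; the key provisioning and placeholder data are illustrative.

```python
# Authenticated encryption of a bitstream image (AES-GCM as a software stand-in).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in practice provisioned inside the FPGA
aead = AESGCM(key)

bitstream = os.urandom(4096)                # placeholder for the design's .bit file
nonce = os.urandom(12)
ct = aead.encrypt(nonce, bitstream, b"board-rev-A")   # confidentiality + integrity

# On reconfiguration, decryption fails loudly if the stored image was tampered with.
assert aead.decrypt(nonce, ct, b"board-rev-A") == bitstream
```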
Finding and proving lower bounds on black-box complexities is one of the hardest problems in the theory of randomized search heuristics. Until recently, there were no general ways of doing this, except for information-theoretic arguments similar to that of Droste, Jansen and Wegener. In a recent paper by Buzdalov, Kever and Doerr, a theorem is proven which may yield tighter bounds on unrestricted black-box complexity using certain problem-specific information. To use this theorem, one should split the search process into a finite number of states, describe transitions between states, and for each state specify (and prove) the maximum number of different answers to any query. We augment these constraints with one more kind of state constraint, namely the maximum number of different currently possible optima. An algorithm is presented for computing lower bounds based on these constraints. We also empirically show improved lower bounds on the black-box complexity of OneMax and Mastermind.
A novel attack model is proposed against existing wireless link-based source identification, which classifies packet sources according to physical-layer link signatures. A link signature is believed to be a more reliable indicator than an IP or MAC address for identifying a packet's source, as it is generally harder to modify or forge. It is therefore expected to serve as a future authentication mechanism against impersonation and DoS attacks. However, if an attacker is equipped with the same capability/hardware as the authenticator to process physical-layer signals, a link signature can easily be manipulated by any nearby wireless device during the training phase. Based on this finding, we propose an attack model, called the analog man-in-the-middle (AMITM) attack, which utilizes the latest full-duplex relay technology to inject semi-controlled link signatures into authorized packets and to reproduce the injected signature in fabricated packets. Our experimental evaluation shows that, with a proper parameter setting, 90% of fabricated packets are classified as sent from an authorized transmitter. A countermeasure against this new attack is also proposed, in which the authenticator injects link-signature noise using the same attack methodology.
Organisers of large-scale crowdsourcing initiatives need to consider not only how to produce outcomes with their projects, but also how to build volunteer capacity. The initial project experience of contributors plays an important role in this, particularly when the contribution process requires some degree of expertise. We propose three analytical dimensions to assess first-time contributor engagement based on readily available public data: cohort analysis, task analysis, and observation of contributor performance. We apply these to a large-scale study of remote mapping activities coordinated by the Humanitarian OpenStreetMap Team, a global volunteer effort with thousands of contributors. Our study shows that different coordination practices can have a marked impact on contributor retention, and that complex task designs can deter certain contributor groups. We close by providing recommendations on how to build and sustain volunteer capacity in these and comparable crowdsourcing systems.
Power networks can be modeled as networked structures with nodes representing the bus bars (connected to generators, loads and transformers) and links representing the transmission lines. In this manuscript we study cascading failures in power networks. As network structures we consider the IEEE 118-bus network and a random spatial model network with similar properties. A maximum-flow-based model is used to find the central edges. We study cascading failures triggered by both random and targeted attacks on the edges; in the targeted attack, the edge with the maximum centrality value is disconnected from the network. A number of metrics, including the size of the largest connected component, the number of failed edges, the average maximum flow, and the global efficiency, are studied as functions of the capacity parameter (an edge's critical load is proportional to its capacity parameter and nominal centrality value). For each case we identify the critical capacity parameter beyond which the network shows resilient behavior against failures. The experiments show that the network requires stronger protection against targeted attacks than against random failures.
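A minimal simulation of this cascade model is sketched below with networkx. Two assumptions diverge from the paper: edge betweenness stands in for the max-flow-based centrality, and a random geometric graph stands in for the IEEE 118-bus topology; alpha plays the role of the capacity parameter.

```python
# Cascade sketch: capacity = (1 + alpha) * initial load; overloaded lines trip.
import networkx as nx

def ebc(G):
    # undirected edge betweenness, keyed canonically so lookups survive removals
    return {tuple(sorted(e)): v for e, v in nx.edge_betweenness_centrality(G).items()}

def cascade(G, alpha, attacked_edge):
    G = G.copy()
    cap = {e: (1 + alpha) * v for e, v in ebc(G).items()}   # critical load per line
    G.remove_edge(*attacked_edge)
    failed = 1
    while True:
        load = ebc(G)                                        # loads redistribute
        over = [e for e, v in load.items() if v > cap[e]]    # overloaded lines trip
        if not over:
            break
        G.remove_edges_from(over)
        failed += len(over)
    return failed, len(max(nx.connected_components(G), key=len))

G = nx.random_geometric_graph(118, 0.18, seed=1)   # spatial stand-in for IEEE 118
central = ebc(G)
target = max(central, key=central.get)             # targeted attack: most central edge
failed, lcc = cascade(G, alpha=0.3, attacked_edge=target)
print(f"failed edges: {failed}, largest connected component: {lcc}")
```

Sweeping alpha and comparing the targeted attack against a randomly chosen edge reproduces the qualitative finding above: the critical capacity parameter is larger for targeted attacks.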