Biblio
In recent years, persistent cyber adversaries have developed increasingly sophisticated techniques to evade detection. Once adversaries have established a foothold within the target network, they can use seemingly limited passive reconnaissance techniques to develop significant network reconnaissance capabilities. Cyber deception has been recognized as a critical capability for defending against such adversaries, but without an accurate model of the adversary's reconnaissance behavior, current approaches are ineffective against advanced adversaries. To address this gap, we propose a novel model that captures how advanced, stealthy adversaries acquire knowledge about the target network and establish and expand their foothold within the system. The model quantifies the cost and reward, from the adversary's perspective, of compromising and maintaining control over target nodes. We evaluate our model through simulations in the CyberVAN testbed and indicate how it can guide the development and deployment of future defensive capabilities, including high-interaction honeypots, so as to influence the behavior of adversaries and steer them away from critical resources.
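As a purely illustrative sketch (not the paper's model), the adversary's cost/reward trade-off might be expressed as an expected utility over candidate target nodes; all node values, costs, and probabilities below are hypothetical.

```python
# Illustrative sketch only: a toy adversarial utility over candidate nodes.
# Node values, detection probabilities, and costs are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    value: float            # reward for controlling the node
    compromise_cost: float  # effort to compromise it
    maintain_cost: float    # per-step cost of keeping the foothold
    detect_prob: float      # chance a compromise attempt is detected

def expected_utility(node: Node, detection_penalty: float = 50.0) -> float:
    """Expected net reward of compromising and holding a node."""
    reward = node.value - node.compromise_cost - node.maintain_cost
    return (1 - node.detect_prob) * reward - node.detect_prob * detection_penalty

nodes = [
    Node("workstation", value=5, compromise_cost=1, maintain_cost=0.5, detect_prob=0.05),
    Node("file-server", value=40, compromise_cost=10, maintain_cost=2, detect_prob=0.30),
]
# A stealthy adversary would favor the node with the highest expected utility;
# a defender can invert the same calculation to place honeypots.
print(max(nodes, key=expected_utility).name)
```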
The rapid growth of shared threat intelligence not only helps security analysts reduce the time spent tracking attacks, but also opens possibilities for research on adversaries' thinking and decisions, which is important for further analysis of attackers' habits and preferences. In this paper, we analyze current threat intelligence models and frameworks suited to different modeling goals, and propose a three-layer model (Goal, Behavior, Capability) to study the statistical characteristics of APT groups. Based on the proposed model, we construct a knowledge network composed of adversary behaviors and introduce a similarity measure that captures the degree of similarity between groups by considering their different semantic links. After calculating similarity degrees, we apply the Girvan-Newman algorithm to discover communities; the clustering results show that community structures and boundaries do exist among APT groups' behaviors.
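As a minimal sketch of the community-discovery step, assuming the networkx library: the APT group names and similarity edges below are invented for illustration, not taken from the paper.

```python
# Minimal sketch: community discovery over a hypothetical APT behavior graph.
# Assumes networkx; group names and similarity edges are invented.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph()
# Edges connect APT groups whose behavior similarity exceeds some threshold.
G.add_edges_from([
    ("APT-A", "APT-B"), ("APT-B", "APT-C"), ("APT-A", "APT-C"),  # one community
    ("APT-D", "APT-E"),                                          # another
    ("APT-C", "APT-D"),                                          # weak bridge
])

# Girvan-Newman iteratively removes high-betweenness edges; the first
# iteration yields the coarsest split into communities.
first_split = next(girvan_newman(G))
print([sorted(c) for c in first_split])  # e.g., [['APT-A','APT-B','APT-C'], ['APT-D','APT-E']]
```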
The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation is the prevailing approach to processing non-English text, applying it to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language, and leverages collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.
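The abstract does not spell out the architecture, so the sketch below shows one common way to learn a language-invariant representation adversarially: a shared LSTM encoder whose gradients from a language discriminator are reversed (a gradient-reversal layer). Layer sizes and the dummy batch are invented; this is not the authors' A-CLKT implementation.

```python
# Sketch of adversarial language-invariant feature learning via gradient
# reversal, in the spirit of A-CLKT. Sizes and data are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so the encoder learns to fool the language discriminator."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

encoder = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
threat_head = nn.Linear(32, 2)   # threat vs. benign post
lang_head = nn.Linear(32, 2)     # source language (e.g., French vs. Russian)

x = torch.randn(8, 20, 64)       # batch of 8 token sequences (dummy embeddings)
_, (h, _) = encoder(x)
features = h[-1]                 # shared representation

threat_logits = threat_head(features)
# The discriminator receives reversed gradients, pushing the encoder toward
# features that no longer encode which language the post came from.
lang_logits = lang_head(GradReverse.apply(features, 1.0))
```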
This Research to Practice Full Paper presents a new methodology in cybersecurity education. In the context of the cybersecurity profession, the 'isolation problem' refers to the observed isolation of different knowledge units, as well as the isolation of technical and business perspectives. Due to limitations in existing cybersecurity education, professionals entering the field are often trapped in microscopic perspectives and struggle to extend their findings to grasp the big picture in a target network scenario. Guided by a previously developed and published framework named “cross-layer situation knowledge reference model” (SKRM), which delivers comprehensive, big-picture situation awareness, our new methodology aims to develop suites of teaching modules that address the above issues. The modules, featuring interactive hands-on labs that emulate real-world multi-step attacks, will help students form a knowledge network instead of isolated conceptual knowledge units. Students will not only be required to leverage various techniques/tools to analyze breakpoints and complete individual modules; they will also be required to logically connect the outputs of these techniques/tools to infer the ground truth and gain big-picture awareness of the cyber situation. The modules can be used separately or as a whole in a typical network security course.
Large-scale failures in communication networks due to natural disasters or malicious attacks can severely affect critical communications and threaten the lives of people in the affected area. In the absence of a proper communication infrastructure, rescue operations become extremely difficult. Progressive and timely network recovery is, therefore, key to minimizing losses and facilitating rescue missions. To this end, we focus on network recovery assuming partial and uncertain knowledge of the failure locations. We first propose a progressive multi-stage recovery approach that uses the incomplete knowledge of failures to find a feasible recovery schedule. Next, we focus on failure recovery of multiple interconnected networks, in particular the interaction between a power grid and a communication network. We then turn to network monitoring techniques that diagnose the performance of individual links in order to localize soft failures (e.g., highly congested links) in a communication network, and study the optimal selection of monitoring paths to balance identifiability and probing cost. Finally, we address a minimally disruptive routing framework for software-defined networks. Extensive experimental and simulation results show that our proposed recovery approaches have a lower disruption cost than the state of the art, while allowing a configurable trade-off among identifiability, execution time, repair/probing cost, congestion, and demand loss.
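As a toy sketch of the first step only, a greedy progressive recovery schedule might rank candidate repairs by expected demand restored per unit repair cost under uncertain failure knowledge; the candidate elements, probabilities, and budget below are invented, and the paper's actual formulation may differ.

```python
# Toy sketch of progressive recovery under uncertain failure knowledge.
# Each candidate element has a probability of actually being failed, a repair
# cost, and an estimate of demand restored if repaired; all numbers invented.
candidates = {
    "link-1": {"p_failed": 0.9, "repair_cost": 4.0, "demand_restored": 20.0},
    "link-2": {"p_failed": 0.4, "repair_cost": 1.0, "demand_restored": 5.0},
    "link-3": {"p_failed": 0.7, "repair_cost": 3.0, "demand_restored": 9.0},
}

def schedule(candidates, budget_per_stage=4.0):
    """Greedy multi-stage schedule: best expected benefit per unit cost first."""
    order = sorted(candidates.items(),
                   key=lambda kv: kv[1]["p_failed"] * kv[1]["demand_restored"]
                                  / kv[1]["repair_cost"],
                   reverse=True)
    stages, stage, spent = [], [], 0.0
    for name, info in order:
        if spent + info["repair_cost"] > budget_per_stage and stage:
            stages.append(stage)          # close the current repair stage
            stage, spent = [], 0.0
        stage.append(name)
        spent += info["repair_cost"]
    if stage:
        stages.append(stage)
    return stages

print(schedule(candidates))  # -> [['link-1'], ['link-3', 'link-2']]
```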
The supply chain is an extremely successful way to cope with the risk posed by distributed decision making in product sourcing and distribution. While open source software involves similarly distributed decision making, with code and information flows similar to those in ordinary supply chains, the networks necessary to quantify and communicate risks in software supply chains have not been constructed at large scale. This work proposes to close this gap by measuring dependency, code reuse, and knowledge flow networks in open source software. In preliminary work, we have developed suitable tools and methods that rely on public version control data to measure and compare these networks for R-language and emberjs packages. We propose ways to calculate the three networks for the entirety of public software, evaluate their accuracy, and provide public infrastructure for building risk assessment and mitigation tools for various individual and organizational participants in open source software. We hope that this infrastructure will contribute to a more predictable experience with OSS and lead to its even wider adoption.
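As a small sketch of how such measurement can start, one can mine a knowledge-flow (author-file) network from public version control data with plain `git log`; the approach below is a simplification for illustration, not the authors' tooling.

```python
# Sketch: building a knowledge-flow network from public version control data.
# Assumes a local git clone; the parsing is a simplification (it also assumes
# no file path begins with '@').
import subprocess
from collections import defaultdict

def author_file_edges(repo_path):
    """Map each author to the set of files they have touched. This gives a
    bipartite author-file network; projecting it links authors who share
    files, i.e., a simple knowledge-flow graph."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:@%ae"],
        capture_output=True, text=True, check=True).stdout
    edges, author = defaultdict(set), None
    for line in log.splitlines():
        if line.startswith("@"):      # commit header: author email
            author = line[1:]
        elif line and author:         # non-blank line: a touched file
            edges[author].add(line)
    return edges

edges = author_file_edges(".")        # run inside any git repository
print({a: len(files) for a, files in list(edges.items())[:5]})
```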
Today's major concern is not only maximizing the information rate through linear network coding, the intelligent combination of information symbols at sending nodes, but also the secure transmission of information. Although cryptographic security (computational security) provides secure transmission, it introduces system complexity and a consequent reduction in the efficiency of the communication system. This motivates an alternative route to optimally secure, maximized information transmission: secure network coding, an information-theoretic approach. Depending on the application, different security measures are needed when transmitting information over a wiretapped network subject to attack by adversaries. In this work, mathematical models for different security constraints, with upper and lower bounds, were studied as a function of the randomness added to the source message; the security constraints on a linear network code for randomized source messages thus depend on both the randomness added and the number of random source symbols. If the source generates a large number of random symbols, fewer random keys can provide higher security for the information, but the information-theoretic security bounds remain the same. Hence, maximizing the randomness of the source is equivalent to adding a level of security.
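For context, the classical wiretap-network bound of Cai and Yeung is one standard way to state the randomness/security trade-off this abstract describes; the bound below is that classical result, not necessarily the exact constraints studied in the paper.

```latex
% Classical secure network coding bound (Cai--Yeung, wiretap network II);
% shown for context, not necessarily the exact bounds studied in this work.
% For a network of max-flow capacity $n$ in which the wiretapper may observe
% any collection of up to $\mu$ links, the secure message rate $k$ satisfies
\[
  k \le n - \mu ,
\]
% and the bound is achieved by mixing the $k$ message symbols $M$ with
% $\mu$ uniformly random key symbols, so that the wiretapper's observation
% $W$ reveals nothing: $I(M; W) = 0$.
```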
Nowadays, access to sensitive information is typically performed through a three-tier architecture, and web applications have become a handy interface between users and data. As database-driven web applications are used more and more every day, they are seen as a good target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it might be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is, therefore, imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most common web application attacks is the insertion of a malicious SQL query from the client side of the application, known as SQL injection. Since an SQL injection vulnerability can affect any website or web application that uses an SQL-based database, it is one of the oldest, most prevalent, and most dangerous web application vulnerabilities. Overcoming SQL injection requires the use of different security systems. In this paper, we use three different scenarios to test security systems and, using penetration testing techniques, try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.
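For illustration, the vulnerability and its standard fix can be shown with Python's built-in sqlite3 module (the systems tested in the paper may differ):

```python
# Sketch: the classic SQL injection flaw and its standard fix, using Python's
# built-in sqlite3 module. Table and data are invented for demonstration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value from the client side

# VULNERABLE: string concatenation lets the input rewrite the query logic,
# so this returns every row instead of none.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("injected:", rows)

# SAFE: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized:", rows)  # -> []
```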
New and unseen network attacks pose a great threat to signature-based detection systems. Consequently, machine learning-based approaches have been designed to detect attacks using features extracted from network data. A key problem is the differing distribution of features between the training and testing datasets, which degrades the performance of the learned models. Moreover, generating labeled datasets is very time-consuming and expensive, which undercuts the effectiveness of supervised learning approaches. In this paper, we propose using transfer learning to detect previously unseen attacks. The main idea is to learn a representation that is optimized to be invariant to changes in attack behavior, using labeled training sets and unlabeled testing sets that contain different types of attacks, and to feed that representation to a supervised classifier. To the best of our knowledge, this is the first effort to use a feature-based transfer learning technique to detect unseen variants of network attacks. Furthermore, the technique can be used with any common base classifier. We evaluated it on publicly available datasets, and the results demonstrate the effectiveness of transfer learning for detecting new network attacks.
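The abstract does not name the exact technique, so as a stand-in the sketch below uses CORAL (correlation alignment), a well-known feature-based transfer method that aligns source features to the unlabeled target domain's statistics before training any base classifier; the data here are random placeholders, not real network features.

```python
# Sketch of feature-based transfer learning via CORAL (correlation alignment).
# This is a stand-in for the paper's technique, with random placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 10))                                # labeled source
ys = (Xs[:, 0] > 0).astype(int)                                # toy labels
Xt = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))    # shifted target

def matpow(C, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * w**p) @ V.T

def coral(Xs, Xt, eps=1e-3):
    """Whiten source features, then re-color them with target statistics."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return Xs @ matpow(Cs, -0.5) @ matpow(Ct, 0.5)

# Any common base classifier can consume the aligned representation.
clf = LogisticRegression().fit(coral(Xs, Xt), ys)
```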
Software Defined Networking (SDN) has proved to be a promising approach for creating next-generation software-based network ecosystems. It provides centralized network provisioning, a holistic management plane, and a well-defined level of abstraction, but at the same time brings forth new security and management challenges. Research in the field of SDN has primarily focused on reconfiguration, forwarding, and network management issues; in recent times, however, interest has moved toward tackling security and maintenance issues. This work provides a means to mitigate security challenges in an SDN environment from a DDoS-attack point of view. The paper introduces a multi-agent intrusion prevention and mitigation architecture for SDN, allowing networks to govern their own behavior and take appropriate measures when under attack. The architecture is evaluated against filter-based intrusion prevention architectures to measure its efficiency and resilience against DDoS attacks and false-policy-based attacks.
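As a hypothetical sketch of one agent in such an architecture (the controller interface, threshold, and flow format below are invented, not taken from the paper):

```python
# Sketch of one monitoring agent in a multi-agent SDN mitigation loop.
# The controller interface (get_flow_rates, install_drop_rule) is hypothetical;
# a real deployment would use an actual controller's northbound API.
PPS_THRESHOLD = 10_000  # packets/sec considered suspicious (invented value)

class MitigationAgent:
    def __init__(self, controller, switch_id):
        self.controller = controller
        self.switch = switch_id

    def step(self):
        """One monitoring cycle: flag flows above threshold and block them."""
        for flow in self.controller.get_flow_rates(self.switch):
            if flow["pps"] > PPS_THRESHOLD:
                # Agents could also vote before acting, to resist
                # false-policy attacks; a single-agent decision is shown here.
                self.controller.install_drop_rule(self.switch, flow["match"])

class StubController:
    """Stand-in so the sketch runs; replace with a real controller client."""
    def get_flow_rates(self, switch_id):
        return [{"match": {"src": "10.0.0.9"}, "pps": 25_000}]
    def install_drop_rule(self, switch_id, match):
        print(f"drop rule on {switch_id}: {match}")

MitigationAgent(StubController(), "s1").step()
```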
The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers ruthlessly target what is often referred to as the weakest link in the system: the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof-of-concept computer model for simulating human behaviour with respect to phishing website detection, based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage, based primarily on the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.
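For a flavour of what such a model captures (this is a much-simplified Python abstraction, not ACT-R code and not the paper's model), a simulated user might judge a page from the padlock cue plus noisy security knowledge:

```python
# Much-simplified abstraction of cue-based phishing judgment; all parameters
# are invented. Real ACT-R models work via declarative memory and productions.
import random

def judge_page(padlock_present: bool, cert_valid: bool,
               knowledge_strength: float = 0.7) -> str:
    """Simulated verdict: knowledgeable users look beyond the padlock."""
    inspects_certificate = random.random() < knowledge_strength
    if inspects_certificate:
        return "trusted" if padlock_present and cert_valid else "rejected"
    # Shallow heuristic: the padlock alone drives the decision.
    return "trusted" if padlock_present else "rejected"

# A phishing site with a padlock but an invalid certificate fools
# low-knowledge users some fraction of the time.
print(judge_page(padlock_present=True, cert_valid=False))
```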
Wireless sensor networks offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to achieve secure communication among nodes, many approaches employ symmetric encryption, and several key management schemes have been proposed to establish the required symmetric keys. This paper presents an innovative key management scheme called random seed distribution with transitory master key, which combines the random distribution of secret material with a transitory master key used to generate pairwise keys. The proposed approach addresses the main drawbacks of previous approaches based on these techniques and outperforms state-of-the-art protocols by always providing a high security level.
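As an illustrative sketch of the transitory-master-key idea in general (in the spirit of schemes such as LEAP, not necessarily this paper's exact protocol):

```python
# Sketch of the transitory-master-key idea: each node derives pairwise keys
# from a short-lived master key, then erases it, so later hardware tampering
# cannot recover other pairs' keys. Identifiers and key material are invented.
import hmac, hashlib

master_key = b"network-wide transitory master key"  # pre-loaded before deployment

def pairwise_key(master: bytes, id_a: bytes, id_b: bytes) -> bytes:
    """Derive a symmetric key for the (id_a, id_b) pair via HMAC-SHA256."""
    lo, hi = sorted([id_a, id_b])        # order-independent derivation
    return hmac.new(master, lo + b"|" + hi, hashlib.sha256).digest()

k = pairwise_key(master_key, b"node-17", b"node-42")
del master_key                           # "transitory": erase after setup
print(k.hex())
```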