Biblio
Underwater acoustic networks are an enabling technology for a range of applications such as mine countermeasures, intelligence, and reconnaissance. Common to these applications is the need for robust information distribution while minimizing energy consumption. In terrestrial wireless networks, topology information is often used to enhance the efficiency of routing, in terms of higher capacity and less overhead. In this paper we assess the effects of topology information on routing in underwater acoustic networks. More specifically, the interplay between long propagation delays, contention-based channel access, and the dissemination of varying degrees of topology information is investigated. The study is based on network simulations of a number of network protocols that make use of varying amounts of topology information. The results indicate that, in the considered scenario, relying on local topology information to reduce retransmissions may have adverse effects on reliability. The difficult channel conditions and the contention-based channel access methods create a need for an increased amount of diversity, i.e., more retransmissions. In the scenario considered, an opportunistic flooding approach performs better, both in terms of robustness and energy consumption.
End-hopping, which randomly hops the network configuration of a host, is an effective component of Moving Target Defense (MTD): a game-changing technique against cyber attacks that can interrupt the cyber kill chain at an early stage. In this paper, a novel end-hopping model, Multi End-hopping (MEH), is proposed to exploit the full potential of MTD techniques by having hosts cooperate to share their possible configurable space (PCS). An optimization method based on cooperative game theory is presented to make hosts form optimal alliances against reconnaissance, scanning, and blind probing DoS attacks. The model and method confuse adversaries by establishing alliances of hosts that enlarge their PCS, which thwarts various kinds of malicious scanning and mitigates the intensity of probing DoS attacks. Through simulations, we validate the correctness of the MEH model and the effectiveness of the optimization method. Experimental results show that the proposed model and method increase the system's stable operational probability while introducing only a low optimization overhead.
Wireless sensor networks are responsible for sensing, gathering, and processing information about objects in the network coverage area. Basic data fusion techniques generally provide no privacy protection mechanism, yet such a mechanism is usually indispensable in application areas such as health care, military reconnaissance, and smart homes. In this paper, we consider privacy, confidentiality, and the accuracy of fusion results, and propose a privacy-preserving data fusion algorithm, PPND. The algorithm relies on the characteristics of data fusion and uses random numbers pre-distributed to the nodes to meet the privacy protection requirements of the original data. Theoretical analysis shows how difficult it is for a malicious attacker to steal node privacy under the PPND algorithm. TOSSIM simulation results also show that, compared with the TAG and SMART algorithms, PPND performs well in terms of data traffic and convergence accuracy.
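As an illustration of the general idea behind perturbation-based private aggregation, the following minimal Python sketch assumes a scheme in which pairs of nodes hold pre-distributed random masks that cancel at the aggregator; the abstract does not give PPND's exact construction, so the names and values here are illustrative.

```python
# Minimal sketch of perturbation-based private aggregation, assuming a
# PPND-like scheme in which pairs of nodes hold pre-distributed random
# numbers that cancel at the aggregator (names and values are illustrative).
import random

def pairwise_masks(node_ids, seed=42):
    """Give each ordered pair (i, j), i < j, a shared random mask."""
    rng = random.Random(seed)
    return {(i, j): rng.randint(-1000, 1000)
            for i in node_ids for j in node_ids if i < j}

def perturbed_reading(node, reading, node_ids, masks):
    """A node adds +mask for higher-id peers and -mask for lower-id peers,
    so the masks cancel when all perturbed readings are summed."""
    value = reading
    for peer in node_ids:
        if peer == node:
            continue
        key = (min(node, peer), max(node, peer))
        value += masks[key] if node < peer else -masks[key]
    return value

node_ids = [1, 2, 3, 4]
readings = {1: 20.5, 2: 21.0, 3: 19.8, 4: 20.1}
masks = pairwise_masks(node_ids)
perturbed = [perturbed_reading(n, readings[n], node_ids, masks) for n in node_ids]
# Individual perturbed values reveal little; their sum equals the true sum.
assert abs(sum(perturbed) - sum(readings.values())) < 1e-9
```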
In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators in their situational awareness and to improve their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model, and a reasoning component for the detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.
The wireless boundaries of networks are becoming increasingly important from a security standpoint as 802.11 WiFi technology proliferates. Concurrently, the complexity of 802.11 access point implementations is rapidly outpacing the standardization process. The result is that the management of nascent wireless functionality is left up to the individual provider's implementation, which creates new vulnerabilities in wireless networks. One such functional improvement to 802.11 is the virtual access point (VAP), a method of broadcasting logically separate networks from the same physical equipment. Network reconnaissance benefits from VAP identification, not only because network topology is a primary aim of such reconnaissance, but because the knowledge that a secure network and an insecure network are both being broadcast from the same physical equipment is tactically relevant information. In this work, we present a novel graph-theoretic approach to VAP identification which leverages a body of research concerned with establishing community structure. We apply our approach to both synthetic data and a large corpus of real-world data to demonstrate its efficacy. In most real-world cases, near-perfect blind identification is possible, highlighting the effectiveness of our proposed VAP identification algorithm.
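The following minimal sketch illustrates the general community-structure idea using networkx's greedy modularity communities on a similarity graph of observed BSSIDs; the similarity function and feature set are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of grouping observed BSSIDs into likely virtual access
# points via community detection, assuming a similarity graph built from
# beacon features (the paper's exact features and scoring are not shown here).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def bssid_similarity(a, b):
    """Toy similarity: BSSIDs sharing an OUI and a close low byte are
    likely broadcast from the same physical radio."""
    if a["oui"] != b["oui"]:
        return 0.0
    return 1.0 / (1.0 + abs(a["last_byte"] - b["last_byte"]))

observations = {
    "aa:bb:cc:00:00:01": {"oui": "aa:bb:cc", "last_byte": 0x01},
    "aa:bb:cc:00:00:02": {"oui": "aa:bb:cc", "last_byte": 0x02},
    "11:22:33:00:00:10": {"oui": "11:22:33", "last_byte": 0x10},
}

G = nx.Graph()
G.add_nodes_from(observations)
for u in observations:
    for v in observations:
        if u < v:
            w = bssid_similarity(observations[u], observations[v])
            if w > 0.2:
                G.add_edge(u, v, weight=w)

# Each community is a candidate set of VAPs hosted on one physical device.
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```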
Distributed Denial of Service attacks against high-profile targets have become more frequent in recent years. In response to such massive attacks, several architectures have adopted proxies to introduce layers of indirection between end users and target services and reduce the impact of a DDoS attack by migrating users to new proxies and shuffling clients across proxies so as to isolate malicious clients. However, the reactive nature of these solutions presents weaknesses that we leveraged to develop a new attack - the proxy harvesting attack - which enables malicious clients to collect information about a large number of proxies before launching a DDoS attack. We show that current solutions are vulnerable to this attack, and propose a moving target defense technique consisting in periodically and proactively replacing one or more proxies and remapping clients to proxies. Our primary goal is to disrupt the attacker's reconnaissance effort. Additionally, to mitigate ongoing attacks, we propose a new client-to-proxy assignment strategy to isolate compromised clients, thereby reducing the impact of attacks. We validate our approach both theoretically and through simulation, and show that the proposed solution can effectively limit the number of proxies an attacker can discover and isolate malicious clients.
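As a rough illustration of shuffle-based isolation, the sketch below assumes a simple split-and-shuffle reassignment in which the clients of an attacked proxy are divided across fresh proxies so that repeated attacks narrow down the suspect set; this is not the paper's exact client-to-proxy assignment strategy.

```python
# Minimal sketch of shuffle-based client isolation under an assumed
# split-and-shuffle policy (illustrative only, not the paper's strategy).
import itertools
import random

proxy_ids = itertools.count(1)

def reassign(attacked_group, rng=random.Random(0)):
    """Split the clients of an attacked proxy across two fresh proxies,
    so subsequent attacks reveal which half contains the insider."""
    clients = list(attacked_group)
    rng.shuffle(clients)
    half = len(clients) // 2
    return {next(proxy_ids): clients[:half], next(proxy_ids): clients[half:]}

# Example: proxy 1 serving eight clients comes under attack.
assignments = {next(proxy_ids): [f"client{i}" for i in range(8)]}
attacked = next(iter(assignments))
assignments = reassign(assignments.pop(attacked))
print(assignments)  # two new proxies, each serving half of the suspects
```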
Compression is desirable for network applications as it saves bandwidth. However, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to the “Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH)” attack on web traffic protected by the TLS protocol. The general guidance to prevent this attack is to disable HTTP compression, preserving confidentiality but sacrificing bandwidth. As a more sophisticated countermeasure, fixed-dictionary compression was introduced in 2015, enabling compression while protecting high-value secrets, such as cookies, from attack. The fixed-dictionary compression method is a cryptographically sound countermeasure against the BREACH attack, since it is proven secure in a suitable security model. In this project, we integrate the fixed-dictionary compression method as a countermeasure against BREACH in a real-world client-server setting. Further, we measure the performance of the fixed-dictionary compression algorithm against the DEFLATE compression algorithm. The results show that it is possible to save a reasonable amount of bandwidth, with acceptable compression/decompression time compared to DEFLATE. The countermeasure is easy to implement and deploy; hence, it is a promising direction for mitigating the BREACH attack efficiently, rather than stripping off HTTP compression entirely.
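The kind of bandwidth comparison described above can be approximated with zlib's preset-dictionary support, as in the sketch below. Note this is only an illustration of the measurement: zlib's zdict still compresses adaptively within the stream, whereas the actual fixed-dictionary countermeasure restricts matches to the shared dictionary only.

```python
# Minimal sketch comparing plain DEFLATE with preset-dictionary DEFLATE.
# Illustration only: zlib's zdict still compresses adaptively, unlike the
# secure fixed-dictionary scheme, which matches against the dictionary only.
import zlib

dictionary = b"Content-Type: text/html; charset=utf-8\r\nSet-Cookie: session="
body = b"<html><body>Set-Cookie: session=abc123</body></html>" * 20

# Plain DEFLATE (what HTTP compression normally does).
plain = zlib.compress(body)

# DEFLATE primed with a fixed dictionary shared by client and server.
comp = zlib.compressobj(zdict=dictionary)
primed = comp.compress(body) + comp.flush()

decomp = zlib.decompressobj(zdict=dictionary)
assert decomp.decompress(primed) == body

print(f"original: {len(body)} bytes, DEFLATE: {len(plain)}, "
      f"dictionary DEFLATE: {len(primed)}")
```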
An Industrial Control System (ICS) consists of a large number of electronic devices connected to field devices in order to execute physical processes. The communication network of an ICS supports a wide range of packet-based applications. Growing network security issues and their impact on ICS have highlighted some fundamental risks to critical infrastructure. Addressing network security issues for ICS requires a clear understanding of security-specific defensive countermeasures. Reconnaissance of an ICS network by deep packet inspection (DPI) consists of analyzing the contents of captured packets in order to obtain accurate measures of the process, and uses specific countermeasures to create an aggregated security posture. In this paper we present a novel technique that works on captured network traffic. The technique is capable of identifying protocols and extracting features for classifying traffic based on network protocol, header information, and payload, in order to understand the overall architecture of a complex system. We also categorize the possible types of attacks on ICS.
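A minimal sketch of header-based protocol classification over captured traffic is shown below, assuming Scapy, an illustrative capture file name, and a toy port-to-protocol map; the feature extraction described in the abstract is richer than this.

```python
# Minimal sketch of classifying captured ICS traffic by protocol using
# header fields. Assumptions: Scapy is installed, "ics_capture.pcap" is an
# illustrative file name, and the port map below is a toy example.
from collections import Counter
from scapy.all import rdpcap, TCP, UDP

ICS_PORTS = {502: "Modbus/TCP", 20000: "DNP3", 44818: "EtherNet/IP",
             102: "S7comm/ISO-TSAP"}

def classify(packet):
    """Map a packet to a protocol label based on its TCP/UDP ports."""
    layer = packet[TCP] if packet.haslayer(TCP) else (
        packet[UDP] if packet.haslayer(UDP) else None)
    if layer is None:
        return "non-TCP/UDP"
    return ICS_PORTS.get(layer.dport, ICS_PORTS.get(layer.sport, "other"))

packets = rdpcap("ics_capture.pcap")
print(Counter(classify(p) for p in packets))
```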
The anonymizing network Tor is examined as one method of anonymizing port scanning tools and avoiding identification and retaliation. Performing anonymized port scans through Tor is possible using Nmap, but parallelization of the scanning processes is required to accelerate the scan rate.
The American National Standards Institute (ANSI) has standardized an access control approach, Next Generation Access Control (NGAC), that enables the simultaneous instantiation of multiple access control policies. For large, complex enterprises this is critical to limiting the authorized access of insiders. However, the specifications describe the required access control capabilities but not the related algorithms. While appropriate, this leaves open the important question of whether or not NGAC is scalable. Existing cubic-complexity reference implementations indicate that it is not. For example, the primary NGAC reference implementation took several minutes simply to display the set of files accessible to a user on a moderately sized system. To solve this problem we provide an efficient access control decision algorithm, reducing the overall complexity from cubic to linear. Our other major contribution is a novel mechanism for administrators and users to review allowed access rights. We provide an interface that appears to be a simple file directory hierarchy but is in reality an automatically generated structure abstracted from the underlying access control graph, and it works with any set of simultaneously instantiated access control policies. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. These capabilities help limit insider access to information (and thereby limit information leakage) by enabling the efficient simultaneous instantiation of multiple access control policies.
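To make the complexity point concrete, the sketch below shows an access decision over a simplified NGAC-like policy graph that runs in time linear in the number of assignment and association edges; it is an illustration under assumed data structures, not the paper's algorithm.

```python
# Minimal sketch of a graph-based access decision over a simplified
# NGAC-like policy graph (assignments plus associations). Illustrative
# only; the real NGAC model also involves policy classes and prohibitions.
from collections import deque

assignments = {            # child -> parents (user/object attribute DAG)
    "alice": ["engineers"],
    "engineers": ["staff"],
    "design.doc": ["project-files"],
}
associations = [           # (user attribute, operations, object attribute)
    ("engineers", {"read", "write"}, "project-files"),
    ("staff", {"read"}, "public-files"),
]

def ancestors(node):
    """All attributes reachable from a node via assignment edges (BFS)."""
    seen, queue = set(), deque([node])
    while queue:
        current = queue.popleft()
        for parent in assignments.get(current, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen | {node}

def allowed_ops(user, obj):
    """Operations granted by any association linking the user's and the
    object's attribute sets; each edge is inspected at most once."""
    user_attrs, obj_attrs = ancestors(user), ancestors(obj)
    ops = set()
    for ua, operations, oa in associations:
        if ua in user_attrs and oa in obj_attrs:
            ops |= operations
    return ops

print(allowed_ops("alice", "design.doc"))  # {'read', 'write'}
```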
Threat classification is extremely important for individuals and organizations, as it is an important step towards the realization of information security. In fact, with the progress of information technologies (IT), security becomes a major challenge for organizations, which are vulnerable to many types of insider and outsider security threats. This paper deals with threat classification models in order to help managers define threat characteristics and then protect their assets from them. Existing threat classification models are incomplete and present non-orthogonal threat classes. The aim of this paper is to suggest a scalable and complete approach that classifies security threats in an orthogonal way.
In this paper, we analyze manipulation methods for the MAC address and the consequent security threats. The Ethernet MAC address is commonly assumed to be unchangeable, and so is widely treated as platform-unique information. For this reason, various services have been built that rely on the MAC address. These kinds of services use the MAC address as a platform identifier or a password, so a diverse range of security threats arises when the MAC address is manipulated. Therefore, we investigate manipulation methods for the MAC address at different levels of a computing platform and highlight the security threats resulting from modification of the MAC address. In particular, we introduce manipulation methods targeting the original MAC address stored in the EEPROM of the NIC (Network Interface Card) as a hardware-based MAC spoofing attack, approaches that are not generally known. This means that the related services will struggle to detect the falsification, and the results of this paper are significant for most MAC address-based services.
The threat that malicious insiders pose towards organisations is a significant problem. In this paper, we investigate the task of detecting such insiders through a novel method of modelling a user's normal behaviour in order to detect anomalies in that behaviour which may be indicative of an attack. Specifically, we make use of Hidden Markov Models to learn what constitutes normal behaviour, and then use them to detect significant deviations from that behaviour. Our results show that this approach is indeed successful at detecting insider threats, and in particular is able to accurately learn a user's behaviour. These initial tests improve on existing research and may provide a useful approach in addressing this part of the insider-threat challenge.
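As an illustration of the general approach, the sketch below trains a Gaussian hidden Markov model (via the hmmlearn library) on per-user daily activity features and scores new days by their log-likelihood; the features and values are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch of HMM-based anomaly scoring over per-user activity
# features, assuming hmmlearn and illustrative numeric features (daily
# counts of logons, file copies, emails); not the paper's exact model.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Training data: 60 "normal" days of [logons, file copies, emails sent].
normal_days = rng.normal(loc=[10, 2, 25], scale=[2, 1, 5], size=(60, 3))

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50,
                    random_state=0)
model.fit(normal_days)

def anomaly_score(day_features):
    """Negative per-day log-likelihood; higher means more anomalous."""
    return -model.score(np.atleast_2d(day_features))

print(anomaly_score([11, 2, 24]))   # typical day: low score
print(anomaly_score([40, 30, 2]))   # unusual mass file copying: high score
```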
Since the number of cyber attacks by insider threats and the damage caused by them have been increasing over the last years, organizations are in need of specific security solutions to counter these threats. To limit the damage caused by insider threats, the timely detection of erratic system behavior and malicious activities is of primary importance. We have observed a major paradigm shift towards anomaly-focused detection mechanisms, which try to establish a baseline of system behavior – based on system logging data – and report any deviations from this baseline. While these approaches are promising, they usually have to cope with scalability issues. As the amount of log data generated during IT operations grows exponentially, high-performance security solutions are required that can handle this huge amount of data in real time. In this paper, we demonstrate how high-performance bioinformatics tools can be leveraged to tackle this issue, and we demonstrate their application to log data for outlier detection, in order to timely detect anomalous system behavior that points to insider attacks.
Advanced targeted cyber attacks often rely on reconnaissance missions to gather information about potential targets and their location in a networked environment, in order to identify vulnerabilities which can be exploited for further attack maneuvers. Advanced network scanning techniques are often used for this purpose and are automatically executed by malware-infected hosts. In this paper we formally define network deception as a defense against reconnaissance and develop RDS (Reconnaissance Deception System), which is based on SDN (Software Defined Networking), to achieve deception by simulating virtual network topologies. Our system thwarts network reconnaissance by delaying the scanning techniques of adversaries and invalidating their collected information, while minimizing the performance impact on benign network traffic. We introduce approaches to defend against malicious network discovery and reconnaissance in computer networks, which are required for targeted cyber attacks such as Advanced Persistent Threats (APT). We show that our system is able to invalidate an attacker's information, delay the process of finding vulnerable hosts, and identify the source of adversarial reconnaissance within a network, while only causing a minuscule performance overhead of 0.2 milliseconds per packet flow on average.
Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other, the task of containing cyber scanning poses serious issues and challenges. Furthermore, there has recently been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities, and exhibit high coordination - rendering almost all current detection techniques infeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies, and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.