Bibliography
Large-scale failures in communication networks caused by natural disasters or malicious attacks can severely affect critical communications and threaten the lives of people in the affected area. In the absence of a proper communication infrastructure, rescue operations become extremely difficult. Progressive and timely network recovery is therefore key to minimizing losses and facilitating rescue missions. To this end, we focus on network recovery under partial and uncertain knowledge of the failure locations. We first propose a progressive multi-stage recovery approach that uses incomplete knowledge of the failures to find a feasible recovery schedule. Next, we address failure recovery across multiple interconnected networks, in particular the interaction between a power grid and a communication network. We then consider network monitoring techniques for diagnosing the performance of individual links and localizing soft failures (e.g., highly congested links) in a communication network, and study the optimal selection of monitoring paths to balance identifiability against probing cost. Finally, we present a minimally disruptive routing framework for software-defined networks. Extensive experimental and simulation results show that our recovery approaches incur a lower disruption cost than the state of the art, while allowing the trade-off among identifiability, execution time, repair/probing cost, congestion, and demand loss to be configured.
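By way of illustration only, the following minimal Python sketch captures the flavor of a progressive multi-stage recovery schedule under uncertain failure knowledge: at each stage a fixed repair budget is spent greedily on the candidate repairs with the highest expected restored demand per unit cost. The names and the scoring rule are our assumptions, not the formulation of the work above.

    # Hypothetical sketch of greedy multi-stage recovery scheduling;
    # the scoring rule below is an assumption, not the cited formulation.
    def schedule_recovery(candidates, budget_per_stage, stages):
        """candidates: list of (link, repair_cost, failure_prob, restored_demand)."""
        schedule, remaining = [], list(candidates)
        for _ in range(stages):
            # Rank by expected restored demand per unit repair cost.
            remaining.sort(key=lambda c: c[2] * c[3] / c[1], reverse=True)
            budget, chosen = budget_per_stage, []
            for cand in list(remaining):
                if cand[1] <= budget:
                    chosen.append(cand[0])
                    budget -= cand[1]
                    remaining.remove(cand)
            schedule.append(chosen)
        return schedule

    plan = schedule_recovery(
        [("e1", 2.0, 0.9, 10.0), ("e2", 1.0, 0.5, 8.0), ("e3", 3.0, 0.8, 4.0)],
        budget_per_stage=3.0, stages=2)
    print(plan)  # [['e1', 'e2'], ['e3']]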
The growing reliance on information technology increases the amount of data travelling over network links, and the problem of detecting anomalies in data streams has grown with the spread of Internet connectivity. Software-Defined Networking (SDN) is a networking paradigm that can adapt to and support these trends. However, the centralized nature of the SDN design requires an efficient method for monitoring traffic against anomalies caused by misconfigured devices or ongoing attacks. In this paper, we propose a new model for traffic-behavior monitoring that aims to ensure trusted communication links between network devices. The main objective of this model is to confirm that the behavior of traffic streams matches the instructions issued by the SDN controller, which helps increase trust between the controller and the infrastructure components it manages. In our preliminary implementation, the behavior-monitoring unit reads all traffic information and performs a validation process that reports any mismatching traffic to the controller.
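As a minimal sketch of the validation step (the rule and flow formats below are our assumptions, not the paper's), one can compare observed forwarding behavior against the rules the controller claims to have installed and report mismatches:

    # Hypothetical sketch: validate observed traffic against the flow rules
    # provided by the SDN controller; formats are illustrative assumptions.
    def validate_flows(observed_flows, controller_rules):
        """observed_flows: iterable of (src, dst, action_taken);
        controller_rules: dict mapping (src, dst) -> expected action."""
        mismatches = []
        for src, dst, action in observed_flows:
            expected = controller_rules.get((src, dst), "drop")  # default-deny
            if action != expected:
                mismatches.append((src, dst, action, expected))
        return mismatches

    rules = {("10.0.0.1", "10.0.0.2"): "forward"}
    flows = [("10.0.0.1", "10.0.0.2", "forward"),
             ("10.0.0.3", "10.0.0.2", "forward")]  # no rule authorizes this
    for m in validate_flows(flows, rules):
        print("report to controller:", m)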
This paper describes the design of an SoC platform for real-time, online pattern search in TCP packets for Deep Packet Inspection (DPI) applications. The platform is based on a Xilinx Zynq programmable SoC and includes an accelerator implementing a pattern-search engine that extends the original Boyer-Moore algorithm with timing and logical rules, enabling very complex rule sets. The platform also supports different modes of operation, including SIMD and MISD parallelism, which can be configured online, and it scales with the analysis requirements up to 8 Gbps. High-level synthesis and platform-based design methodologies were used to reduce the time to market of the complete system.
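For reference, the bad-character skip at the heart of Boyer-Moore-style search is sketched below in plain Python (the simpler Horspool variant); the timing/logical rule extensions and the SIMD/MISD hardware modes described above are not modeled.

    # Boyer-Moore-Horspool search: illustrates the bad-character skip idea
    # behind the pattern engine; the hardware extensions are not modeled.
    def horspool_search(text: bytes, pattern: bytes) -> int:
        m = len(pattern)
        if m == 0 or m > len(text):
            return -1
        # Shift distance when the byte aligned with the pattern's
        # last position does not complete a match.
        shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
        i = m - 1
        while i < len(text):
            if text[i - m + 1:i + 1] == pattern:
                return i - m + 1
            i += shift.get(text[i], m)
        return -1

    print(horspool_search(b"GET /index.html HTTP/1.1", b"index"))  # 5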
A technique and algorithms are proposed for early detection of an attack in progress and subsequent blocking of malicious traffic. The initial separation of mixed traffic into trustworthy and malicious components is carried out using cluster analysis. Newly arriving requests are then classified using different classifiers, drawing on the obtained training samples and the developed success criteria.
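A minimal scikit-learn sketch of this two-phase idea might look as follows; the features are synthetic, and the actual clustering method, feature set, and classifiers of the work above are not specified here.

    # Hypothetical two-phase sketch: cluster mixed traffic into two groups,
    # then train a classifier on the cluster labels to score new requests.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Synthetic per-request features, e.g. (request rate, payload size).
    benign = rng.normal([10, 500], [2, 50], size=(200, 2))
    attack = rng.normal([80, 60], [10, 20], size=(50, 2))
    mixed = np.vstack([benign, attack])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mixed)
    clf = RandomForestClassifier(random_state=0).fit(mixed, labels)

    new_requests = np.array([[11.0, 480.0], [75.0, 70.0]])
    print(clf.predict(new_requests))  # one request falls in each cluster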
Network monitoring is vital to the administration and operation of networks, but it requires privileged access that is granted only to highly trusted parties. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders but to whose internal structure they have no access. In this position paper we propose using middleboxes to open up network monitoring to external parties by means of privacy-preserving technology. This would allow untrusted parties to draw more inferences about the network state than is currently possible, without learning any precise information about the network or the data that crosses it. The state of the network would thus become more transparent to external stakeholders, who would be empowered to verify claims made by network operators, while operators could expose more information about their networks without compromising security or privacy.
Identifying threats contained within encrypted network traffic poses a unique set of challenges. It is important to monitor this traffic for threats and malware, but to do so in a way that maintains the integrity of the encryption. Because pattern matching cannot operate on encrypted data, previous approaches have leveraged observable metadata gathered from the flow, e.g., the flow's packet lengths and inter-arrival times. In this work, we extend the current state of the art by considering a data omnia approach. To this end, we develop supervised machine learning models that take advantage of a unique and diverse set of network flow data features. These features include TLS handshake metadata, DNS contextual flows linked to the encrypted flow, and the HTTP headers of HTTP contextual flows from the same source IP address within a 5-minute window. We begin by characterizing the differences between how malicious and benign traffic use TLS, DNS, and HTTP, based on millions of unique flows, and use this study to design the feature sets with the most discriminatory power. We then show that incorporating this contextual information into a supervised learning system significantly increases performance at a 0.00% false discovery rate for the problem of classifying encrypted malicious flows. We further validate our false positive rate on an independent, real-world dataset.
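As an illustration of the general approach (the feature names below are our placeholders, not the paper's exact feature set or model), contextual metadata can be encoded as a feature vector and fed to a standard supervised learner:

    # Illustrative only: hypothetical flow/TLS/DNS/HTTP-style features and a
    # standard classifier; not the paper's feature set or model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: mean packet length, mean inter-arrival time (ms), TLS cipher
    # suites offered, DNS domain-name entropy, HTTP header count (contextual).
    X = np.array([
        [1200.0,  20.0, 12, 2.1, 9],   # benign-looking
        [1100.0,  35.0, 14, 2.3, 8],
        [ 280.0, 850.0, 40, 3.9, 2],   # malicious-looking
        [ 310.0, 700.0, 38, 3.7, 1],
    ])
    y = np.array([0, 0, 1, 1])         # 0 = benign, 1 = malicious

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict([[1150, 30, 13, 2.2, 9],
                       [300, 800, 39, 3.8, 2]]))  # expect [0 1]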