Biblio

Found 103 results

Filters: Keyword is transport protocols
2018-02-02
Anderson, E. C., Okafor, K. C., Nkwachukwu, O., Dike, D. O..  2017.  Real time car parking system: A novel taxonomy for integrated vehicular computing. 2017 International Conference on Computing Networking and Informatics (ICCNI). :1–9.
Automation of a real-time car parking system (RTCPS) using mobile cloud computing (MCC) and vehicular networking (VN) has given rise to a novel concept of integrated communication-computing platforms (ICCP). The aim of ICCP is to evolve an effective means of addressing challenges such as improper parking management schemes, traffic congestion in parking lots, insecurity of vehicles (safety applications), and other Infrastructure-to-Vehicle (I2V) services providing data dissemination and content delivery to connected Vehicular Clients (VCs). Edge (parking-lot-based) Fog computing (EFC) with roadside sensor-based monitoring is proposed to achieve ICCP. A real-time cloud-to-VC service in the context of a smart car parking system (SCPS), which satisfies both deterministic and non-deterministic constraints, is introduced. A vehicular cloud computing (VCC) and intra-Edge-Fog node architecture is presented for ICCP, targeted at distributed mini-sized self-energized Fog nodes/data centers placed between the distributed remote cloud and the VCs. The architecture processes and disseminates real-time services to the connected VCs. The work built a prototype testbed comprising a black-box PSU, an Arduino IoT Duo, a GH-311RT ultrasonic distance sensor and a SHARP 2Y0A21 passive infrared sensor for vehicle detection; a LinkSprite 2MP UART JPEG camera module, an SD card module, an RFID card reader, RDS3115 metal-gear servo motors, an FPM384 fingerprint scanner, a GSM module and a VCC web portal. The testbed functions at the edge of the vehicular network and is connected to the served VCs through I2V TCP/IP-based single-hop mobile links. This research seeks to facilitate urban renewal strategies and to highlight the significance of the ICCP prototype testbed. Open challenges and future research directions are discussed for an efficient VCC model which runs on networked fog centers (NetFCs).
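The abstract above describes threshold-based vehicle detection at the edge; a minimal sketch of how such slot-occupancy logic might look. The distance threshold and slot names are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of edge-node slot-occupancy logic such a testbed
# might run; threshold and slot identifiers are illustrative only.

OCCUPIED_THRESHOLD_CM = 40  # a reading below this implies a parked vehicle

def classify_slot(distance_cm: float) -> str:
    """Map an ultrasonic distance reading to a slot state."""
    return "occupied" if distance_cm < OCCUPIED_THRESHOLD_CM else "free"

def lot_status(readings: dict[str, float]) -> dict[str, str]:
    """Classify every slot from its latest distance reading."""
    return {slot: classify_slot(d) for slot, d in readings.items()}

if __name__ == "__main__":
    print(lot_status({"A1": 12.5, "A2": 180.0, "A3": 35.0}))
```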
2017-12-20
Williams, N., Li, S..  2017.  Simulating Human Detection of Phishing Websites: An Investigation into the Applicability of the ACT-R Cognitive Behaviour Architecture Model. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers are ruthlessly targeting what is often referred to as the weakest link in the system - the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof of concept computer model for simulating human behaviour with respect to phishing website detection based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage based primarily around the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.

Sevilla, S., Garcia-Luna-Aceves, J. J., Sadjadpour, H..  2017.  GroupSec: A new security model for the web. 2017 IEEE International Conference on Communications (ICC). :1–6.
The de facto approach to Web security today is HTTPS. While HTTPS ensures complete security for clients and servers, it also interferes with transparent content-caching at middleboxes. To address this problem and support both security and caching, we propose a new approach to Web security and privacy called GroupSec. The key innovation of GroupSec is that it replaces the traditional session-based security model with a new model based on content group membership. We introduce the GroupSec security model and show how HTTP can be easily adapted to support GroupSec without requiring changes to browsers, servers, or middleboxes. Finally, we present results of a threat analysis and performance experiments which show that GroupSec achieves notable performance benefits at the client and server while remaining as secure as HTTPS.
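The abstract does not specify GroupSec's wire format; the following is a loose, hypothetical sketch of the content-group idea it describes: a middlebox caches one opaque object per URL, while only holders of the group key can derive the per-content key. The HMAC-based derivation and class names are my own illustration, and the cipher itself is elided:

```python
# Illustrative sketch (not the paper's actual protocol): content-group
# keying lets a middlebox cache one encrypted object per URL while only
# group members can derive the per-content key.
import hmac
import hashlib

def derive_content_key(group_key: bytes, url: str) -> bytes:
    """Per-content key: any member holding group_key derives the same key."""
    return hmac.new(group_key, url.encode(), hashlib.sha256).digest()

class CachingMiddlebox:
    """Caches opaque ciphertext by URL; never sees group or content keys."""
    def __init__(self):
        self.cache: dict[str, bytes] = {}
        self.origin_fetches = 0

    def get(self, url: str, origin) -> bytes:
        if url not in self.cache:
            self.origin_fetches += 1
            self.cache[url] = origin(url)  # fetch ciphertext once
        return self.cache[url]
```

Under a session-based model (HTTPS) every client's bytes differ, so the cache hit in `get` is impossible; that is the caching benefit the paper targets.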
Dolnák, I., Litvik, J..  2017.  Introduction to HTTP security headers and implementation of HTTP strict transport security (HSTS) header for HTTPS enforcing. 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA). :1–4.

This article presents an introduction to HTTP security headers, a new security topic in communication over the Internet. It is emphasized that the HTTPS protocol and SSL/TLS certificates alone do not offer a sufficient level of security for communication among people and devices. In the world of web applications and the Internet of Things (IoT), it is vital to bring communication security to a higher level, which can be achieved in a few simple steps. HTTP response headers, used for different purposes in the past, are now an effective way to propagate security policies from servers to clients (from web servers to web browsers). The first improvement is enforcing the HTTPS protocol everywhere it is possible and promoting it as the first and only option for a secure connection over the Internet. It is emphasized that the plain HTTP protocol is no longer suitable for communication.
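The HSTS header itself has a standard shape (RFC 6797); a minimal sketch of emitting it from Python's stdlib `http.server`. The max-age value and the handler are illustrative choices, not taken from the article:

```python
# Sketch of HSTS enforcement with Python's stdlib http.server. The header
# syntax follows RFC 6797; the one-year max-age is an illustrative choice.
from http.server import BaseHTTPRequestHandler

def security_headers(max_age: int = 31536000) -> dict[str, str]:
    """Response headers instructing browsers to use HTTPS only."""
    return {"Strict-Transport-Security": f"max-age={max_age}; includeSubDomains"}

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        for name, value in security_headers().items():
            self.send_header(name, value)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")
```

Note that the header only takes effect when served over HTTPS; browsers ignore it on plain HTTP responses.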

2017-12-12
Byrenheid, M., Rossberg, M., Schaefer, G., Dorn, R..  2017.  Covert-channel-resistant congestion control for traffic normalization in uncontrolled networks. 2017 IEEE International Conference on Communications (ICC). :1–7.

Traffic normalization, i.e. enforcing a constant stream of fixed-length packets, is a well-known measure to completely prevent attacks based on traffic analysis. In simple configurations, the enforced traffic rate can be statically configured by a human operator, but in large virtual private networks (VPNs) the traffic pattern of many connections may need to be adjusted whenever the overlay topology or the transport capacity of the underlying infrastructure changes. We propose a rate-based congestion control mechanism for automatic adjustment of traffic patterns that does not leak any information about the actual communication. Overly strong rate throttling in response to packet loss is avoided, as the control mechanism does not change the sending rate immediately when a packet loss is detected. Instead, an estimate of the current packet loss rate is obtained and the sending rate is adjusted proportionally. We evaluate our control scheme in a measurement study in a local network testbed. The results indicate that the proposed approach avoids network congestion, enables protected TCP flows to achieve increased goodput, and still ensures appropriate traffic flow confidentiality.
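The proportional adjustment described above can be sketched as follows: the sender folds each loss measurement into a running estimate (here an EWMA) and scales its rate by that estimate, instead of reacting to individual loss events. The smoothing factor and gain are hypothetical parameters, not values from the paper:

```python
# Sketch of loss-proportional rate control: the sender does not cut its
# rate on each loss event, but keeps a smoothed estimate of the loss rate
# and adjusts the rate in proportion to it. Gains are illustrative.
class ProportionalRateController:
    def __init__(self, rate_pps: float, alpha: float = 0.1, gain: float = 2.0):
        self.rate_pps = rate_pps  # current sending rate (packets/s)
        self.loss_est = 0.0       # EWMA of the observed loss fraction
        self.alpha = alpha        # EWMA smoothing factor
        self.gain = gain          # proportional response strength

    def observe(self, sent: int, lost: int) -> float:
        """Fold one measurement interval into the estimate, adjust rate."""
        sample = lost / sent if sent else 0.0
        self.loss_est = (1 - self.alpha) * self.loss_est + self.alpha * sample
        # Proportional adjustment: higher estimated loss -> lower rate,
        # floored so the normalized stream never stalls entirely.
        self.rate_pps *= max(0.1, 1.0 - self.gain * self.loss_est)
        return self.rate_pps
```

A single lossy interval only nudges the estimate, so an isolated drop does not reveal itself through an abrupt rate change.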

Chow, J., Li, X., Mountrouidou, X..  2017.  Raising flags: Detecting covert storage channels using relative entropy. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :25–30.

This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. In order to evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages using coding schemes, under assumptions of both clear text, composed of a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic; even then, the relative entropy can reveal the dissimilarity of a different frequency distribution from the normal profile. We have also used different data processing methods in detection: one method summarizes all the packets for a pair of IP addresses into one flow, and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimental results, displayed as Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to show the method's efficiency.
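The detection statistic is standard relative entropy (KL divergence) between a flow's TCP-flag frequency distribution and the normal profile; a small sketch, with an illustrative flag set and smoothing constant:

```python
# Sketch of the detection statistic: relative entropy between a flow's
# TCP-flag frequency distribution and a normal profile. The smoothing
# constant avoids division by zero for unseen flags.
import math
from collections import Counter

FLAGS = ["URG", "ACK", "PSH", "RST", "SYN", "FIN"]

def flag_distribution(packets, eps=1e-6):
    """Smoothed frequency distribution of TCP flags over a packet list."""
    counts = Counter(packets)
    total = len(packets) + eps * len(FLAGS)
    return {f: (counts[f] + eps) / total for f in FLAGS}

def relative_entropy(p, q):
    """KL divergence D(p || q); large values flag anomalous flows."""
    return sum(p[f] * math.log(p[f] / q[f]) for f in FLAGS)
```

A flow identical to the profile scores near zero; a covert channel cycling through rarely used flags scores far higher, which is the separation the ROC curves quantify.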

Kogos, K. G., Seliverstova, E. I., Epishkina, A. V..  2017.  Review of covert channels over HTTP: Communication and countermeasures. 2017 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :459–462.

Many innovations in the field of cryptography have been made in recent decades, ensuring the confidentiality of a message's content. However, sometimes it is not enough to secure the message, and the communicating parties need to hide the very fact that any communication is taking place. This problem is solved by covert channels. A huge number of ideas and implementations of different types of covert channels have been proposed since covert channels were first described. The spread of the Internet and networking technologies led to the use of network protocols for the invention of new covert communication methods and to the emergence of a new class of threats related to data leakage via network covert channels. In recent years, web applications such as web browsers, email clients and web messengers have become indispensable elements of business and everyday life. That is why ubiquitous HTTP messages are so useful as covert information containers. The use of HTTP for the implementation of covert channels may also increase the capacity of covert channels, due to HTTP's flexibility and wide distribution. We propose a detailed analysis of all known HTTP covert channels and of techniques for their detection and capacity limitation.
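As a concrete illustration of the channel family such surveys cover (a generic textbook technique, not one attributed to this paper): HTTP header names are case-insensitive, so the letter case of a header can smuggle bits without changing the message's meaning:

```python
# Toy illustration of one HTTP covert-channel family: header names are
# case-insensitive, so letter case can carry hidden bits. Illustrative only.

def encode_bits(header: str, bits: str) -> str:
    """Uppercase the i-th letter to carry a 1, lowercase to carry a 0."""
    out, i = [], 0
    for ch in header:
        if ch.isalpha() and i < len(bits):
            out.append(ch.upper() if bits[i] == "1" else ch.lower())
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def decode_bits(header: str, n: int) -> str:
    """Recover the first n hidden bits from letter case."""
    bits = ["1" if ch.isupper() else "0" for ch in header if ch.isalpha()]
    return "".join(bits[:n])
```

Because the carrier is semantically unchanged, detection has to rely on statistical anomalies, which is exactly the countermeasure class the survey reviews.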

Zahra, A., Shah, M. A..  2017.  IoT based ransomware growth rate evaluation and detection using command and control blacklisting. 2017 23rd International Conference on Automation and Computing (ICAC). :1–6.

The Internet of Things (IoT) is the internetworking of various physical devices to provide a range of services and applications. IoT is a rapidly growing field; on account of this, security measures for IoT should be a primary concern. In the modern world, the most significant emerging cyber-attack threat for IoT is the ransomware attack. Ransomware is a kind of malware that aims to render a victim's computer unusable or inaccessible, and then asks the user to pay a ransom to revert the damage. In this paper we evaluate ransomware attack statistics for the last three years to estimate the growth rate of the most prominent ransomware families and identify the most threatening ransomware attacks for IoT. The growth-rate results show that the numbers of attacks for the Cryptowall and Locky ransomware families are notably increasing; therefore, these families are a potential threat to IoT. Moreover, we present a Cryptowall ransomware attack detection model based on a study of Cryptowall's communication and behaviour in an IoT environment. The proposed model observes incoming TCP/IP traffic through a web proxy server, extracts the TCP/IP header, and uses command and control (C&C) server blacklisting to detect ransomware attacks.
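The final detection step described above reduces to checking each flow's destination against a C&C blacklist at the proxy; a minimal sketch with made-up hostnames:

```python
# Sketch of the blacklisting step: the web proxy extracts each flow's
# destination and flags it against a C&C blacklist. Entries are made up.
BLACKLIST = {"evil-cnc.example", "203.0.113.66"}

def is_ransomware_flow(dst_host: str, blacklist=BLACKLIST) -> bool:
    """Flag traffic whose destination matches a known C&C entry."""
    return dst_host.lower().rstrip(".") in blacklist

def filter_alerts(flows, blacklist=BLACKLIST):
    """Return (src, dst) pairs whose destination is blacklisted."""
    return [(s, d) for s, d in flows if is_ransomware_flow(d, blacklist)]
```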

2017-04-20
Sankalpa, I., Dhanushka, T., Amarasinghe, N., Alawathugoda, J., Ragel, R..  2016.  On implementing a client-server setting to prevent the Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) attacks. 2016 Manufacturing Industrial Engineering Symposium (MIES). :1–5.

Compression is desirable for network applications as it saves bandwidth. However, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to the “Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH)” attack on web traffic protected by the TLS protocol. The general guidance to prevent this attack is to disable HTTP compression, preserving confidentiality but sacrificing bandwidth. As a more sophisticated countermeasure, fixed-dictionary compression was introduced in 2015, enabling compression while protecting high-value secrets, such as cookies, from attack. The fixed-dictionary compression method is a cryptographically sound countermeasure against the BREACH attack, since it is proven secure in a suitable security model. In this project, we integrate the fixed-dictionary compression method as a countermeasure for the BREACH attack in a real-world client-server setting. Further, we measure the performance of the fixed-dictionary compression algorithm against the DEFLATE compression algorithm. The results show that it is possible to save some amount of bandwidth, with reasonable compression/decompression time compared to DEFLATE operations. The countermeasure is easy to implement and deploy; hence, this would be a possible direction to mitigate the BREACH attack efficiently, rather than stripping off HTTP compression entirely.
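The side channel BREACH exploits can be demonstrated in a few lines with stdlib DEFLATE: appending an attacker guess that repeats a secret already in the page yields a shorter compressed output, so ciphertext length leaks the secret. The page body and cookie value below are made up for the demonstration:

```python
# Self-contained demonstration of the compression side channel behind
# BREACH: DEFLATE output is shorter when attacker-controlled input
# repeats a secret already in the page. Page and secret are made up.
import zlib

PAGE = b"<html><body>token=S3creT-C00kie-Value-42</body>"

def compressed_len(guess: bytes) -> int:
    """Length after DEFLATE of the page plus reflected attacker input."""
    return len(zlib.compress(PAGE + b"?q=" + guess))

correct = compressed_len(b"token=S3creT-C00kie-")  # repeats page content
wrong = compressed_len(b"token=Qw9ZpL7kTnB4xR")    # same length, no match
```

Fixed-dictionary compression blocks exactly this comparison: matches are only allowed against a public dictionary, so the secret cannot shorten the output.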

2017-03-07
Kolahi, S. S., Treseangrat, K., Sarrafpour, B..  2015.  Analysis of UDP DDoS flood cyber attack and defense mechanisms on Web Server with Linux Ubuntu 13. 2015 International Conference on Communications, Signal Processing, and their Applications (ICCSPA). :1–5.

Denial of Service (DoS) attacks are among the major threats and hardest security problems in the Internet world. Of particular concern are Distributed Denial of Service (DDoS) attacks, whose impact can be proportionally severe. With little or no advance warning, an attacker can easily exhaust the computing resources of its victim within a short period of time. In this paper, we study the impact of a UDP flood attack on TCP throughput, round-trip time, and CPU utilization for a web server running the new generation of the Linux platform, Linux Ubuntu 13. This paper also evaluates the impact of various defense mechanisms, including Access Control Lists (ACLs), Threshold Limit, Reverse Path Forwarding (IP Verify), and Network Load Balancing. Threshold Limit is found to be the most effective defense.
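A Threshold Limit defense is commonly realized as a token bucket that drops packets beyond a configured rate; a sketch with illustrative parameters (the paper does not publish its exact configuration):

```python
# Sketch of a Threshold Limit defense as a token bucket: UDP packets
# beyond the configured rate are dropped before reaching the server.
# Rate and burst values are illustrative.
class TokenBucket:
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps   # tokens replenished per second
        self.burst = burst     # bucket capacity (max burst size)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Admit one packet at time `now` if a token is available."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

During a flood the bucket empties after the allowed burst and the excess is dropped, which is why this mechanism caps the CPU cost of the attack.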

Zeb, K., Baig, O., Asif, M. K..  2015.  DDoS attacks and countermeasures in cyberspace. 2015 2nd World Symposium on Web Applications and Networking (WSWAN). :1–6.

In cyberspace, availability of resources is a key component of cyber security, along with confidentiality and integrity. Distributed Denial of Service (DDoS) attacks have become one of the major threats to the availability of resources in computer networks and remain a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically attacks that exploit protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS attack defenses and analyze the strengths and weaknesses of the different proposed mechanisms.

Treseangrat, K., Kolahi, S. S., Sarrafpour, B..  2015.  Analysis of UDP DDoS cyber flood attack and defense mechanisms on Windows Server 2012 and Linux Ubuntu 13. 2015 International Conference on Computer, Information and Telecommunication Systems (CITS). :1–5.

Distributed Denial of Service (DDoS) attacks are among the major threats and hardest security problems in the Internet world. In this paper, we study the impact of a UDP flood attack on TCP throughput, round-trip time, and CPU utilization on the latest versions of the Windows and Linux platforms, namely Windows Server 2012 and Linux Ubuntu 13. This paper also evaluates several defense mechanisms, including Access Control Lists (ACLs), Threshold Limit, Reverse Path Forwarding (IP Verify), and Network Load Balancing. The Threshold Limit defense gave better results than the other solutions.

2017-02-14
Brynielsson, J., Sharma, R..  2015.  Detectability of low-rate HTTP server DoS attacks using spectral analysis. 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :954-961.

Denial-of-Service (DoS) attacks pose a threat to any service provider on the internet. While traditional DoS flooding attacks require the attacker to control at least as much resources as the service provider in order to be effective, so-called low-rate DoS attacks can exploit weaknesses in careless design to effectively deny a service using minimal amounts of network traffic. This paper investigates one such weakness found within version 2.2 of the popular Apache HTTP Server software. The weakness concerns how the server handles the persistent connection feature in HTTP 1.1. An attack simulator exploiting this weakness has been developed and shown to be effective. The attack was then studied with spectral analysis for the purpose of examining how well the attack could be detected. Similar to other papers on spectral analysis of low-rate DoS attacks, the results show that disproportionate amounts of energy in the lower frequencies can be detected when the attack is present. However, by randomizing the attack pattern, an attacker can efficiently reduce this disproportion to a degree where it might be impossible to correctly identify an attack in a real world scenario.
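The spectral detection idea can be sketched with a direct DFT: compare the share of non-DC energy that falls in the low-frequency band. The band edge and the signals below are illustrative, not the paper's parameters:

```python
# Sketch of spectral detection of low-rate DoS: a periodic on/off attack
# concentrates energy at low frequencies, unlike smoother traffic.
# Direct O(n^2) DFT for clarity; band edge is an illustrative choice.
import cmath

def low_band_energy_ratio(series, band_fraction=0.125):
    """Share of non-DC spectral energy in bins 1 .. band_fraction * N."""
    n = len(series)
    cutoff = max(1, int(n * band_fraction))
    energy = []
    for k in range(1, n):  # skip k = 0: the DC component carries no shape
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(series))
        energy.append(abs(coeff) ** 2)
    return sum(energy[:cutoff]) / sum(energy)
```

Randomizing the attack pattern, as the paper notes, spreads this energy across the spectrum and flattens exactly the ratio computed here.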

2015-05-06
Stephens, B., Cox, A.L., Singla, A., Carter, J., Dixon, C., Felter, W..  2014.  Practical DCB for improved data center networks. INFOCOM, 2014 Proceedings IEEE. :1824-1832.

Storage area networking is driving commodity data center switches to support lossless Ethernet (DCB). Unfortunately, to enable DCB for all traffic on arbitrary network topologies, we must address several problems that can arise in lossless networks, e.g., large buffering delays, unfairness, head of line blocking, and deadlock. We propose TCP-Bolt, a TCP variant that not only addresses the first three problems but reduces flow completion times by as much as 70%. We also introduce a simple, practical deadlock-free routing scheme that eliminates deadlock while achieving aggregate network throughput within 15% of ECMP routing. This small compromise in potential routing capacity is well worth the gains in flow completion time. We note that our results on deadlock-free routing are also of independent interest to the storage area networking community. Further, as our hardware testbed illustrates, these gains are achievable today, without hardware changes to switches or NICs.

Zhuo Hao, Yunlong Mao, Sheng Zhong, Li, L.E., Haifan Yao, Nenghai Yu.  2014.  Toward Wireless Security without Computational Assumptions: Oblivious Transfer Based on Wireless Channel Characteristics. Computers, IEEE Transactions on. 63:1580-1593.

Wireless security has been an active research area over the last decade. Many studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because we can build a cryptographic protocol for any polynomial-time computable function using it. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics, which does not rely on any computational assumption. We also illustrate the potential breadth of its applications by giving two examples, one on private communications and the other on privacy-preserving password verification. We have fully implemented the protocol on wireless devices and conducted experiments in real environments to evaluate it. Our experimental results demonstrate that it has reasonable efficiency.
 

Alrabaee, S., Bataineh, A., Khasawneh, F.A., Dssouli, R..  2014.  Using model checking for Trivial File Transfer Protocol validation. Communications and Networking (ComNet), 2014 International Conference on. :1-7.

This paper presents verification and model-based checking of the Trivial File Transfer Protocol (TFTP). Model checking is a software verification technique that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We aim to create a design model of the TFTP protocol, extended with a window size, using Promela, and to simulate it and validate specified properties with the model checker SPIN, which accepts design specifications written in the verification language PROMELA. The results show that TFTP is free of livelocks.
 

Premnath, A.P., Ju-Yeon Jo, Yoohwan Kim.  2014.  Application of NTRU Cryptographic Algorithm for SCADA Security. Information Technology: New Generations (ITNG), 2014 11th International Conference on. :341-346.

Critical Infrastructure represents the basic facilities, services and installations necessary for the functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that causes a real-time Critical Infrastructure System to impair its normal function and performance will have a debilitating impact on security and the economy, with direct implications for society. A SCADA (Supervisory Control and Data Acquisition) system is a control system widely used in Critical Infrastructure Systems to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications and programmable controllers, it is highly vulnerable to security threats and attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, or Modbus did not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems. However, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research is aimed at improving the performance of SCADA security standards by employing NTRU, a faster and lightweight public-key algorithm, to provide end-to-end security.

Zhenlong Yuan, Cuilan Du, Xiaoxian Chen, Dawei Wang, Yibo Xue.  2014.  SkyTracer: Towards fine-grained identification for Skype traffic via sequence signatures. Computing, Networking and Communications (ICNC), 2014 International Conference on. :1-5.

Skype has become a typical choice for providing VoIP service and is well known for its broad range of features, including voice calls, instant messaging, file transfer and video conferencing. Considering its wide application, from the viewpoint of ISPs it is essential to identify Skype flows in order to optimize network performance and forecast future needs. However, a host is likely to run multiple network applications simultaneously, which makes it much harder to classify each and every Skype flow in mixed traffic exactly. In particular, current techniques usually focus on host-level identification and are unable to identify Skype traffic at the flow level. In this paper, we first reveal the unique sequence signatures of Skype UDP flows and then implement a practical online system named SkyTracer for precise Skype traffic identification. To the best of our knowledge, this is the first work to utilize such strong sequence signatures to carry out early identification of Skype traffic. The experimental results show that SkyTracer achieves very high accuracy in fine-grained identification of Skype traffic.

Hammi, B., Khatoun, R., Doyen, G..  2014.  A Factorial Space for a System-Based Detection of Botcloud Activity. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Today, beyond legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue, since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers that do not own appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed-system behavior analysis. Based on a principal component analysis, our contribution consists in: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity, and (3) among several legitimate activities, using this factorial space to enable botcloud detection.

Badis, H., Doyen, G., Khatoun, R..  2014.  Understanding botclouds from a system perspective: A principal component analysis. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-9.

Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages are mainly used to support legitimate activity, it is now also exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experiment campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered for state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP-flood and UDP-storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud as compared to other legitimate workloads.
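For the two-metric case, the principal-component computation that such analyses rely on has a closed form; a sketch in which the metric pairing (e.g. CPU load versus outbound packet rate) is an illustrative choice, not the paper's exact feature set:

```python
# Sketch of the analysis technique: the first principal component of two
# system metrics summarizes a workload's dominant direction. Closed form
# for the 2x2 covariance case; the metric choice is illustrative.
import math

def principal_axis_deg(xs, ys):
    """Angle (degrees) of the first principal component of (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Eigenvector angle of a 2x2 symmetric covariance matrix.
    return math.degrees(0.5 * math.atan2(2 * sxy, sxx - syy))
```

The paper's observation is that a flooding botcloud pins this axis in a stable position across runs, while legitimate workloads scatter.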

Haddadi, F., Morgan, J., Filho, E.G., Zincir-Heywood, A.N..  2014.  Botnet Behaviour Analysis Using IP Flows: With HTTP Filters Using Classifiers. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :7-12.

Botnets are one of the most destructive threats to cyber security. Recently, the HTTP protocol has frequently been utilized by botnets as the Command and Control (C&C) protocol. In this work, we aim to detect HTTP-based botnet activity through botnet behaviour analysis via a machine learning approach. To achieve this, we employ flow-based network traffic utilizing NetFlow (via Softflowd). The proposed botnet analysis system is implemented using two different machine learning algorithms, C4.5 and Naive Bayes. Our results show that the C4.5-based classifier obtained very promising performance in detecting HTTP-based botnet activity.

Leong, P., Liming Lu.  2014.  Multiagent Web for the Internet of Things. Information Science and Applications (ICISA), 2014 International Conference on. :1-4.

The Internet of Things (IoT) is a network of networks in which massively large numbers of objects or things are interconnected through the network. The Internet of Things brings along many new possibilities for applications to improve human comfort and quality of life. Complex systems such as the Internet of Things are difficult to manage because of the emergent behaviours that arise from the complex interactions between their constituent parts. Our key contribution in this paper is a proposed multiagent web for the Internet of Things, together with a corresponding data management architecture. The multiagent architecture provides autonomic characteristics that make the IoT manageable. In addition, the multiagent web allows for flexible processing on heterogeneous platforms, as we leverage web protocols such as HTTP and language-independent data formats such as JSON for communications between agents. The proposed architecture enables a scalable architecture and infrastructure for a web-scale multiagent Internet of Things.
 

2015-05-05
Morrell, C., Ransbottom, J.S., Marchany, R., Tront, J.G..  2014.  Scaling IPv6 address bindings in support of a moving target defense. Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for. :440-445.

Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of their attack rather than focusing only on finding holes in a machine's static defenses. We have a current implementation of an IPv6 moving target defense entitled MT6D, which works well, although it is limited to functioning in a peer-to-peer scenario. As we push our research forward into client-server networks, we must discover the limits on the client-to-server ratio. In our current implementation of a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we discover limits in both the number of addresses that we can successfully bind to an interface and the speed at which UDP requests can be successfully handled across a large number of bound addresses.
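The address-rotation idea behind a scheme like MT6D can be sketched as hashing shared secret material with a time window, so both endpoints derive the next address without coordination. The hash choice, prefix and window length below are illustrative assumptions, not MT6D's actual construction:

```python
# Sketch of moving-target address rotation: both endpoints derive the
# current IPv6 address from a shared secret and a time window. Hash,
# prefix and window length are illustrative, not MT6D's actual scheme.
import hashlib
import ipaddress

def rotated_address(secret: bytes, host_id: bytes, t: float,
                    prefix="2001:db8:1234:5678::", window_s=10) -> str:
    window = int(t // window_s)
    digest = hashlib.sha256(secret + host_id + window.to_bytes(8, "big"))
    iid = int.from_bytes(digest.digest()[:8], "big")  # 64-bit interface ID
    net = int(ipaddress.IPv6Address(prefix))
    return str(ipaddress.IPv6Address(net | iid))
```

The server-side scaling question the paper raises follows directly: one server must keep such an address bound per client per window, which is what strains the bind limit and lookup speed.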
 

Lopes Alcantara Batista, B., Lima de Campos, G.A., Fernandez, M.P..  2014.  Flow-based conflict detection in OpenFlow networks using first-order logic. Computers and Communication (ISCC), 2014 IEEE Symposium on. :1-6.

The OpenFlow architecture is a proposal from the Clean Slate initiative to define a new Internet architecture in which the network devices are simple and the control and management plane is performed by a centralized controller. Its simplicity and centralization make the architecture reliable and inexpensive. However, it does not provide mechanisms to detect conflicting flows, so unreachable flows can be configured in the network elements and the network may not behave as expected. This paper proposes an approach to conflict detection that uses first-order logic to define possible antagonisms and employs an inference engine to detect conflicting flows before the OpenFlow controller installs them in the network elements.
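A minimal sketch of the overlap test that underlies such conflict detection, simplified to exact-match-or-wildcard fields (the rule format and field names are my own illustration, not the paper's logic encoding):

```python
# Sketch of flow-conflict detection: two match rules conflict when their
# match sets overlap but their actions differ, so one can shadow the
# other. Fields and rules are illustrative.
WILDCARD = "*"

def fields_overlap(a: dict, b: dict) -> bool:
    """Matches overlap unless some field is fixed to different values."""
    for f in set(a) | set(b):
        va, vb = a.get(f, WILDCARD), b.get(f, WILDCARD)
        if va != WILDCARD and vb != WILDCARD and va != vb:
            return False
    return True

def find_conflicts(rules):
    """Pairs of rule names whose matches overlap but actions differ."""
    out = []
    for i, (n1, m1, act1) in enumerate(rules):
        for n2, m2, act2 in rules[i + 1:]:
            if fields_overlap(m1, m2) and act1 != act2:
                out.append((n1, n2))
    return out
```

The paper's first-order-logic formulation generalizes this pairwise check to richer antagonism classes, with an inference engine in place of the nested loop.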
 

Marchal, S., Xiuyan Jiang, State, R., Engel, T..  2014.  A Big Data Architecture for Large Scale Security Monitoring. Big Data (BigData Congress), 2014 IEEE International Congress on. :56-63.

Network traffic is a rich source of information for security monitoring. However, the increasing volume of data to process raises issues, rendering holistic analysis of network traffic difficult. In this paper we propose a solution to cope with the tremendous amount of data to analyse for security monitoring purposes. We introduce an architecture dedicated to security monitoring of local enterprise networks. The application domain of such a system is mainly network intrusion detection and prevention, but it can be used as well for forensic analysis. This architecture integrates two systems, one dedicated to scalable distributed data storage and management and the other dedicated to data exploitation. DNS data, NetFlow records, HTTP traffic and honeypot data are mined and correlated in a distributed system that leverages state-of-the-art big data solutions. Data correlation schemes are proposed and their performance is evaluated against several well-known big data frameworks, including Hadoop and Spark.
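The correlation step can be sketched as a join of heterogeneous records keyed by client IP, so one view aggregates everything observed for a host across sources; the record shapes below are made up for illustration:

```python
# Sketch of the correlation step: records from heterogeneous sources
# (DNS, NetFlow, HTTP, honeypot) are keyed by client IP and joined into
# one per-host view. Record shapes are illustrative.
from collections import defaultdict

def correlate_by_ip(*sources):
    """Merge (ip, event) records from any number of named sources per host."""
    view = defaultdict(list)
    for name, records in sources:
        for ip, event in records:
            view[ip].append((name, event))
    return dict(view)
```

In the paper's setting the same join is expressed as a distributed computation (e.g. a MapReduce or Spark job keyed by IP) rather than an in-memory dictionary.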