Biblio

Found 944 results

Filters: Keyword is Internet
2015-05-06
Zhuo Lu, Wenye Wang, Wang, C..  2014.  How can botnets cause storms? Understanding the evolution and impact of mobile botnets. INFOCOM, 2014 Proceedings IEEE. :1501-1509.

A botnet in mobile networks is a collection of nodes compromised by mobile malware, which are able to perform coordinated attacks. Different from Internet botnets, mobile botnets do not need to propagate using centralized infrastructures, but can keep compromising vulnerable nodes in close proximity and evolving organically via data forwarding. Such a distributed mechanism relies heavily on node mobility as well as wireless links, and therefore breaks down the underlying premise in existing epidemic modeling for Internet botnets. In this paper, we adopt a stochastic approach to study the evolution and impact of mobile botnets. We find that node mobility can be a trigger of botnet propagation storms: the average size (i.e., number of compromised nodes) of a botnet increases quadratically over time if the mobility range that each node can reach exceeds a threshold; otherwise, the botnet can only contaminate a limited number of nodes, with its average size always bounded above. This also reveals that mobile botnets can propagate at the fastest rate of quadratic growth in size, which is substantially slower than the exponential growth of Internet botnets. To measure the denial-of-service impact of a mobile botnet, we define a new metric, called the last chipper time, which is the last time that service requests, even partially, can still be processed on time as the botnet keeps propagating and launching attacks. The last chipper time is shown to decrease at most on the order of 1/√B, where B is the network bandwidth. This result reveals that although increasing network bandwidth can benefit mobile services, it can at the same time escalate the risk of services being disrupted by mobile botnets.
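
A compact restatement of the two scaling results as stated in the abstract; the symbols S(t), r, r*, T_c, and B are introduced here only for this summary and are not the authors' notation.

```latex
% Average botnet size: quadratic growth above a mobility-range threshold, bounded otherwise.
\[
\mathbb{E}[S(t)] =
\begin{cases}
\Theta\!\left(t^{2}\right), & \text{if the per-node mobility range } r > r^{*},\\
O(1), & \text{otherwise.}
\end{cases}
\]
% Last chipper time versus network bandwidth: T_c decreases with B,
% at most on the order of 1/sqrt(B).
\[
T_c \;\sim\; \frac{1}{\sqrt{B}}.
\]
```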

Leong, P., Liming Lu.  2014.  Multiagent Web for the Internet of Things. Information Science and Applications (ICISA), 2014 International Conference on. :1-4.

The Internet of Things (IoT) is a network of networks in which massively large numbers of objects or things are interconnected through the network. The Internet of Things brings many new application possibilities to improve human comfort and quality of life. Complex systems such as the Internet of Things are difficult to manage because of the emergent behaviours that arise from the complex interactions between their constituent parts. Our key contribution in this paper is a proposed multiagent web for the Internet of Things, together with a corresponding data management architecture. The multiagent architecture provides autonomic characteristics for the IoT, making it manageable. In addition, the multiagent web allows for flexible processing on heterogeneous platforms, as we leverage web protocols such as HTTP and language-independent data formats such as JSON for communication between agents. The proposed architecture enables a scalable infrastructure for a web-scale multiagent Internet of Things.
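
As a rough illustration of the web-protocol style of agent communication the abstract mentions (HTTP plus language-independent JSON), the sketch below has one toy agent expose an HTTP endpoint and another post it a JSON message; the agent names and message fields are hypothetical, not from the paper.

```python
# Minimal sketch (not the authors' implementation): two IoT "agents" exchanging
# a JSON message over HTTP.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = json.loads(self.rfile.read(length))
        # A real agent would reason about the message; here we just acknowledge it.
        reply = {"agent": "thermostat-agent", "ack": message.get("type")}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the sketch quiet
        pass

def run_agent(port=8080):
    server = HTTPServer(("127.0.0.1", port), AgentHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = run_agent()
    msg = {"type": "sensor-reading", "sensor": "temperature", "value": 22.5}
    req = Request("http://127.0.0.1:8080", data=json.dumps(msg).encode(),
                  headers={"Content-Type": "application/json"})
    print(json.loads(urlopen(req).read()))  # {'agent': 'thermostat-agent', 'ack': 'sensor-reading'}
    server.shutdown()
```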
 

Biagioni, E..  2014.  Ubiquitous Interpersonal Communication over Ad-hoc Networks and the Internet. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :5144-5153.

The hardware and low-level software in many mobile devices are capable of mobile-to-mobile communication, including ad-hoc 802.11, Bluetooth, and cognitive radios. We have started to leverage this capability to provide interpersonal communication both over infrastructure networks (the Internet), and over ad-hoc and delay-tolerant networks composed of the mobile devices themselves. This network is decentralized in the sense that it can function without any infrastructure, but does take advantage of infrastructure connections when available. All interpersonal communication is encrypted and authenticated so packets may be carried by devices belonging to untrusted others. The decentralized model of security builds a flexible trust network on top of the social network of communicating individuals. This social network can be used to prioritize packets to or from individuals closely related by the social network. Other packets are prioritized to favor packets likely to consume fewer network resources. Each device also has a policy that determines how many packets may be forwarded, with the goal of providing useful interpersonal communications using at most 1% of any given resource on mobile devices. One challenge in a fully decentralized network is routing. Our design uses Rendezvous Points (RPs) and Distributed Hash Tables (DHTs) for delivery over infrastructure networks, and hop-limited broadcast and Delay Tolerant Networking (DTN) within the wireless ad-hoc network.
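
As a loose illustration of the forwarding policy sketched in the abstract (socially close traffic first, resource-frugal packets next, and a cap of roughly 1% of any device resource), here is a toy Python version; the social-distance map, energy costs, and names are invented for the example.

```python
# Illustrative sketch only, not the paper's code.
from dataclasses import dataclass

RESOURCE_BUDGET_FRACTION = 0.01  # forward using at most 1% of a given resource

@dataclass
class Packet:
    src: str
    dst: str
    size_bytes: int

def priority(pkt, social_distance, me="alice"):
    """Lower value = higher priority."""
    if pkt.src == me or pkt.dst == me:
        return 0
    # Packets to/from socially close contacts come first ...
    hops = min(social_distance.get(pkt.src, 99), social_distance.get(pkt.dst, 99))
    # ... then packets likely to consume fewer resources (smaller packets).
    return hops + pkt.size_bytes / 1_000_000

def packets_to_forward(queue, social_distance, battery_joules_available):
    budget = RESOURCE_BUDGET_FRACTION * battery_joules_available
    cost_per_byte = 1e-6  # hypothetical energy cost of forwarding one byte
    chosen, spent = [], 0.0
    for pkt in sorted(queue, key=lambda p: priority(p, social_distance)):
        cost = pkt.size_bytes * cost_per_byte
        if spent + cost > budget:
            break
        chosen.append(pkt)
        spent += cost
    return chosen

if __name__ == "__main__":
    queue = [Packet("bob", "carol", 1200), Packet("alice", "bob", 800),
             Packet("mallory", "trent", 64_000)]
    print(packets_to_forward(queue, {"bob": 1, "carol": 2}, battery_joules_available=500.0))
```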

2015-05-05
Craig, P., Roa Seiler, N., Olvera Cervantes, A.D..  2014.  Animated Geo-temporal Clusters for Exploratory Search in Event Data Document Collections. Information Visualisation (IV), 2014 18th International Conference on. :157-163.

This paper presents a novel visual analytics technique developed to support exploratory search tasks for event data document collections. The technique supports discovery and exploration by clustering results and overlaying cluster summaries onto coordinated timeline and map views. Users can also explore and interact with search results by selecting clusters to filter and re-cluster the data, with animation used to smooth the transition between views. The technique demonstrates a number of advantages over alternative methods for displaying and exploring geo-referenced search results and spatio-temporal data. Firstly, cluster summaries can be presented in a manner that makes them easy to read and scan. Listing representative events from each cluster also helps the process of discovery by preserving the diversity of results. Also, clicking on visual representations of geo-temporal clusters provides a quick and intuitive way to navigate across space and time simultaneously. This removes the need to overload users with the display of too many event labels at any one time. The technique was evaluated with a group of nineteen users and compared with an equivalent text-based exploratory search engine.
 

Kumar, S., Rama Krishna, C., Aggarwal, N., Sehgal, R., Chamotra, S..  2014.  Malicious data classification using structural information and behavioral specifications in executables. Engineering and Computational Sciences (RAECS), 2014 Recent Advances in. :1-6.

With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make such software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security professionals. The conventional systems for the detection and mitigation of such threats are mostly signature-based. The major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database; such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation techniques such as polymorphism, metamorphism, packing, and encryption to avoid detection by antivirus software. Therefore, traditional signature-based detection is neither effective nor efficient for the detection of zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches. In static analysis, we extract the features of an executable file, followed by classification. In dynamic analysis, we take traces of executable files using NtTrace within a controlled environment. Experimental results obtained from our algorithm indicate that it is effective in extracting the malicious behavior of executables, and it can also be used to detect malware variants.
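
A hedged sketch of the general recipe the abstract describes, not the authors' algorithm: merge static structural features of an executable with dynamic behavioral features (e.g., counts of NtTrace-observed calls) and feed them to an off-the-shelf classifier. All feature names and values below are invented, and scikit-learn stands in for whatever learner the paper actually uses.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

samples = [
    # static features (section entropy, import count) + dynamic features (API call counts)
    ({"entropy_text": 6.1, "num_imports": 12, "call_NtWriteFile": 3,  "call_NtCreateKey": 0}, "benign"),
    ({"entropy_text": 7.8, "num_imports": 2,  "call_NtWriteFile": 40, "call_NtCreateKey": 9}, "malicious"),
    ({"entropy_text": 6.4, "num_imports": 30, "call_NtWriteFile": 5,  "call_NtCreateKey": 1}, "benign"),
    ({"entropy_text": 7.9, "num_imports": 1,  "call_NtWriteFile": 55, "call_NtCreateKey": 12}, "malicious"),
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([features for features, _ in samples])
y = [label for _, label in samples]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

unknown = {"entropy_text": 7.7, "num_imports": 3, "call_NtWriteFile": 38, "call_NtCreateKey": 8}
print(clf.predict(vec.transform([unknown])))  # likely ['malicious'] on this toy data
```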

Oweis, N.E., Owais, S.S., Alrababa, M.A., Alansari, M., Oweis, W.G..  2014.  A survey of Internet security risk over social networks. Computer Science and Information Technology (CSIT), 2014 6th International Conference on. :1-4.

Communities vary from country to country. There are civil societies and rural communities, which also differ in terms of geography, climate, and economy. This shows that the use of social networks varies from region to region depending on the demographics of the communities. So, in this paper, we research the most important problems of social networks, as well as the risks that are based on the human element. We raise the problems of social networks in the transformation of societies into ones affected by the global economy. Social networking integration needs to strengthen social ties, which leads to the existence of these problems. For this, we focus on Internet security risks over social networks, study risk management, and then look at resolving the various problems that occur from the use of social networks.
 

Yu Li, Rui Dai, Junjie Zhang.  2014.  Morphing communications of Cyber-Physical Systems towards moving-target defense. Communications (ICC), 2014 IEEE International Conference on. :592-598.

Since the massive deployment of Cyber-Physical Systems (CPSs) calls for long-range and reliable communication services with manageable cost, it is believed to be an inevitable trend to relay a significant portion of CPS traffic through existing networking infrastructures such as the Internet. Adversaries who have access to networking infrastructures can therefore eavesdrop on network traffic and then perform traffic analysis attacks in order to identify CPS sessions and subsequently launch various attacks. As we can hardly prevent all adversaries from accessing network infrastructures, thwarting traffic analysis attacks becomes indispensable. Traffic morphing serves as an effective means in this direction. In this paper, a novel traffic morphing algorithm, CPSMorph, is proposed to protect CPS sessions. CPSMorph maintains a number of network sessions whose distributions of inter-packet delays are statistically indistinguishable from those of typical network sessions. A CPS message will be sent through one of these sessions with assured satisfaction of its time constraint. CPSMorph strives to minimize the overhead by dynamically adjusting the morphing process. It is characterized by low complexity as well as high adaptivity to the changing dynamics of CPS sessions. Experimental results have shown that CPSMorph can effectively perform traffic morphing for real-time CPS messages with moderate overhead.
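
The core idea of traffic morphing under a real-time constraint can be illustrated in a few lines; this is not CPSMorph itself, and the exponential delay model and per-segment budgeting are assumptions made only for the sketch.

```python
import random

def morphed_delay(remaining_deadline_s, mean_delay_s=0.05):
    """Sample a cover-traffic-like inter-packet delay, clipped to the real-time constraint."""
    delay = random.expovariate(1.0 / mean_delay_s)   # mimic typical inter-packet gaps
    return min(delay, remaining_deadline_s)           # never violate the deadline

def send_cps_message(segments, deadline_s, now=0.0):
    """Schedule segment transmissions; returns the (simulated) send times."""
    send_times, t = [], now
    for i, _segment in enumerate(segments):
        remaining = deadline_s - t
        budget_per_remaining_segment = remaining / (len(segments) - i)
        t += morphed_delay(budget_per_remaining_segment)
        send_times.append(t)
    return send_times

if __name__ == "__main__":
    random.seed(1)
    print(send_cps_message(segments=[b"a"] * 5, deadline_s=0.5))
```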
 

Hong, J.B., Dong Seong Kim.  2014.  Scalable Security Models for Assessing Effectiveness of Moving Target Defenses. Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on. :515-526.

Moving Target Defense (MTD) changes the attack surface of a system in order to confuse intruders and thwart attacks. Various MTD techniques have been developed to enhance the security of networked systems, but the effectiveness of these techniques is not well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling MTD techniques in security models has not been studied. In this paper, we incorporate MTD techniques into security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of these techniques. In addition, we use importance measures (IMs) for scalable security analysis and for deploying the MTD techniques in an effective manner. A performance comparison between the HARM and the AG is given, and we also compare the performance of the IMs against an exhaustive search method in simulations.
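
As a rough illustration of using an importance measure to decide where an MTD technique pays off most, the toy example below ranks hosts of an invented attack-reachability graph by betweenness centrality; the paper's HARM construction and its specific IMs are not reproduced here, and centrality is only a stand-in.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("attacker", "web1"), ("attacker", "web2"),
    ("web1", "app"), ("web2", "app"),
    ("app", "db"),
])

# Betweenness centrality as a stand-in importance measure for each host.
importance = nx.betweenness_centrality(g)
ranked = sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # 'app' scores highest: a natural candidate for deploying an MTD technique
```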

Morrell, C., Ransbottom, J.S., Marchany, R., Tront, J.G..  2014.  Scaling IPv6 address bindings in support of a moving target defense. Internet Technology and Secured Transactions (ICITST), 2014 9th International Conference for. :440-445.

Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of their attack rather than focusing only on finding holes in a machine's static defenses. We have a current implementation of an IPv6 moving target defense entitled MT6D, which works well although it is limited to peer-to-peer scenarios. As we push our research forward into client-server networks, we must discover what the limits are with respect to the client-to-server ratio. In our current implementation of a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we discover limits in both the number of addresses that we can successfully bind to an interface and the speed at which UDP requests can be successfully handled across a large number of bound addresses.
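
A rough sketch of the kind of experiment the abstract describes, not the MT6D code: add many IPv6 addresses to an interface and bind a UDP socket to each until an OS limit is hit. The interface name, prefix, and count are assumptions, and the commands require root privileges on Linux.

```python
import socket
import subprocess

IFACE, PREFIX, COUNT = "eth0", "fd00:1234:5678::", 100  # hypothetical values

sockets = []
for i in range(1, COUNT + 1):
    addr = f"{PREFIX}{i:x}"
    # Assign the address to the interface (Linux); 'nodad' skips duplicate address
    # detection so the address is usable immediately. Failures indicate an OS limit.
    subprocess.run(["ip", "-6", "addr", "add", f"{addr}/64", "dev", IFACE, "nodad"],
                   check=True)
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.bind((addr, 9999))
    sockets.append(s)

print(f"bound {len(sockets)} IPv6 addresses on {IFACE}")
# A single echo over one socket; a full experiment would select() over all of them.
data, peer = sockets[0].recvfrom(2048)
sockets[0].sendto(data, peer)
```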
 

Kumar, A., Reddy, K..  2014.  Constructing secure web applications with proper data validations. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

With the advent of the World Wide Web, information sharing through the Internet has increased drastically, so web application security is today's most significant battlefield between attackers and the resources of web services, and it is likely to remain so for the foreseeable future. Recent attacks show that major attacks on web applications have been carried out even when the system has significant network-level security. The poor input validation mechanisms used in web applications lead to the deployment of vulnerable web applications, which are easy to exploit at later stages. Critical web application vulnerabilities such as Cross-Site Scripting (XSS) and injections (SQL, PHP, LDAP, SSL, XML, command, and code) happen because of weak base-level validation, and they are enough to modify the system in an unauthorized way or to exploit it. In this paper we present these issues in data validation strategies, in order to avoid the deployment of vulnerable web applications.
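
A minimal sketch of the whitelist-style data validation the paper argues for (not the authors' framework): validate each field against an explicit pattern before it reaches a SQL statement or an HTML response, and still use parameterized queries and output encoding. The patterns and schema are examples only.

```python
import html
import re
import sqlite3

VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,20}$"),
    "age":      re.compile(r"^\d{1,3}$"),
}

def validate(field, value):
    pattern = VALIDATORS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"rejected input for {field!r}")
    return value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")

def add_user(name, age):
    name, age = validate("username", name), validate("age", age)
    # Parameterized query: validated data still never gets concatenated into SQL.
    conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", (name, int(age)))

def render_greeting(name):
    # Output encoding complements input validation against XSS.
    return f"<p>Hello, {html.escape(name)}!</p>"

add_user("alice_01", "34")
print(render_greeting("<script>alert(1)</script>"))
try:
    add_user("alice'; DROP TABLE users;--", "34")
except ValueError as err:
    print(err)  # rejected input for 'username'
```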
 

Coelho Martins da Fonseca, J.C., Amorim Vieira, M.P..  2014.  A Practical Experience on the Impact of Plugins in Web Security. Reliable Distributed Systems (SRDS), 2014 IEEE 33rd International Symposium on. :21-30.

In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality, but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in the plugins of the web application. The goal is twofold: 1) to study the effectiveness of static analysis on the detection of web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins in the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins that are currently deployed worldwide have dangerous Cross Site Scripting and SQL Injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.

Buja, G., Bin Abd Jalil, K., Bt Hj Mohd Ali, F., Rahman, T.F.A..  2014.  Detection model for SQL injection attack: An approach for preventing a web application from the SQL injection attack. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :60-64.

Over the past 20 years the use of the web in daily life has been increasing and has become the norm. As the use of the web increases, the use of web applications also increases. Apparently, most of the web applications that exist today have some vulnerability that could be exploited by an unauthorized person. Some well-known web application vulnerabilities are Structured Query Language (SQL) Injection, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). By exploiting these web application vulnerabilities, a system cracker can gain information about the users and harm the reputation of the respective organization. Usually the developers of web applications do not realize that their web applications have vulnerabilities; they only realize it when there is an attack on, or manipulation of, their code. This is understandable, as a web application contains thousands of lines of code, so it is not easy to detect loopholes. Nowadays, as hacking tools and hacking tutorials are easier to get, many new hackers are born. Even though SQL injection is very easy to protect against, large numbers of systems on the Internet are still vulnerable to this type of attack because a few subtle conditions can go undetected. Therefore, in this paper we propose a detection model for detecting and recognizing the SQL Injection web vulnerability, based on defined and identified criteria. In addition, the proposed detection model will be able to generate a report regarding the vulnerability level of the web application. As a consequence, the proposed detection model should be able to decrease the possibility of SQL Injection attacks being launched against the web application.
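
The paper's detection criteria are not reproduced here; the hedged sketch below only shows the general shape of a criteria-driven SQL injection detector that scores a request parameter against suspicious patterns and maps the score to a vulnerability level. Patterns, weights, and thresholds are illustrative assumptions.

```python
import re

CRITERIA = [
    (r"(?i)\bunion\b.+\bselect\b", 3),   # UNION-based injection
    (r"(?i)\bor\b\s+1\s*=\s*1", 3),      # classic tautology
    (r"--|#|/\*", 1),                    # comment sequences
    (r"['\"];", 2),                      # quote followed by statement terminator
]

def sqli_score(value):
    return sum(weight for pattern, weight in CRITERIA if re.search(pattern, value))

def vulnerability_level(value):
    score = sqli_score(value)
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low" if score else "none"

for candidate in ["alice", "' OR 1=1 --", "1 UNION SELECT password FROM users"]:
    print(candidate, "->", vulnerability_level(candidate))
```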

Sayed, B., Traore, I..  2014.  Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :261-268.

The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDS systems. It has been witnessed that, using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim's browser without their permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage. When applied in the context of client-side web-based attacks, the proposed model is expected to provide a more secure browsing environment for the end-user.

Wenmin Xiao, Jianhua Sun, Hao Chen, Xianghua Xu.  2014.  Preventing Client Side XSS with Rewrite Based Dynamic Information Flow. Parallel Architectures, Algorithms and Programming (PAAP), 2014 Sixth International Symposium on. :238-243.

This paper presents the design and implementation of an information flow tracking framework based on code rewrite to prevent sensitive information leaks in browsers, combining the ideas of taint and information flow analysis. Our system has two main processes. First, it abstracts the semantic of JavaScript code and converts it to a general form of intermediate representation on the basis of JavaScript abstract syntax tree. Second, the abstract intermediate representation is implemented as a special taint engine to analyze tainted information flow. Our approach can ensure fine-grained isolation for both confidentiality and integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and have deployed it as a browser proxy to rewrite web applications at runtime. The experiment results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not involve any modifications to the target system, our system is readily deployable in practice.
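
A toy version of taint propagation over a tiny intermediate representation, in the spirit of, but far simpler than, the rewrite-based tracking the abstract describes; the IR shape, taint sources, and sinks are all invented for illustration.

```python
TAINT_SOURCES = {"document.cookie", "location.hash"}
SINKS = {"XMLHttpRequest.send", "img.src"}

def is_tainted(expr, env):
    kind = expr[0]
    if kind == "read":                       # ("read", "document.cookie")
        return expr[1] in TAINT_SOURCES
    if kind == "var":                        # ("var", "x")
        return env.get(expr[1], False)
    if kind == "concat":                     # ("concat", e1, e2)
        return is_tainted(expr[1], env) or is_tainted(expr[2], env)
    return False

def run(program):
    env = {}
    for stmt in program:
        if stmt[0] == "assign":              # ("assign", "x", expr)
            env[stmt[1]] = is_tainted(stmt[2], env)
        elif stmt[0] == "call":              # ("call", sink, expr)
            if stmt[1] in SINKS and is_tainted(stmt[2], env):
                print(f"blocked: tainted data flows into {stmt[1]}")

run([
    ("assign", "c", ("read", "document.cookie")),
    ("assign", "url", ("concat", ("var", "c"), ("read", "location.hash"))),
    ("call", "img.src", ("var", "url")),
])
```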
 

Bozic, J., Wotawa, F..  2014.  Security Testing Based on Attack Patterns. Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on. :4-11.

Testing for security-related issues is an important task of growing interest due to the vast amount of applications and services available over the Internet. In practice, testing for security is often performed manually, with the consequences of higher costs and no integration of security testing into today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing approaches. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.
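
A small illustration of the idea of generating security tests from a formalized attack pattern (not the paper's UML tooling): the pattern is a toy state machine, and every input sequence that reaches the "exploit" state becomes a test case. States and inputs are invented.

```python
# transitions[state][input] -> next state; a toy SQL-injection attack pattern
transitions = {
    "start":          {"open_form": "form"},
    "form":           {"benign_input": "submitted", "sqli_payload": "submitted_sqli"},
    "submitted":      {"check_response": "done"},
    "submitted_sqli": {"check_error": "exploit"},
}

def generate_tests(state="start", prefix=()):
    """Enumerate input sequences; sequences reaching 'exploit' become security tests."""
    if state == "exploit":
        yield prefix
        return
    for action, nxt in transitions.get(state, {}).items():
        yield from generate_tests(nxt, prefix + (action,))

for test in generate_tests():
    print(" -> ".join(test))   # e.g. open_form -> sqli_payload -> check_error
```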

Guowei Dong, Yan Zhang, Xin Wang, Peng Wang, Liangkun Liu.  2014.  Detecting cross site scripting vulnerabilities introduced by HTML5. Computer Science and Software Engineering (JCSSE), 2014 11th International Joint Conference on. :319-323.

In recent years, HTML5 has been widely adopted in popular browsers. Unfortunately, as a new web standard, HTML5 may expand the Cross Site Scripting (XSS) attack surface even as it improves the interactivity of the page. In this paper, we identified 14 XSS attack vectors related to HTML5 through a systematic analysis of its new tags and attributes. Based on these vectors, an XSS test vector repository is constructed and a dynamic XSS vulnerability detection tool focusing on Webmail systems is implemented. By applying the tool to some popular Webmail systems, seven exploitable XSS vulnerabilities are found. The evaluation result shows that our tool can efficiently detect XSS vulnerabilities introduced by HTML5.
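
To see why new HTML5 tags and attributes enlarge the XSS attack surface, the sketch below runs a few well-known HTML5-era vectors against a naive blacklist filter that only strips script tags; the filter and the vector list are illustrative and are not the paper's 14-vector repository.

```python
import re

def naive_filter(html_fragment):
    """A naive legacy filter that only strips <script> tags."""
    return re.sub(r"(?is)<script.*?>.*?</script>", "", html_fragment)

HTML5_VECTORS = [
    '<video src=x onerror=alert(1)>',
    '<input autofocus onfocus=alert(1)>',
    '<details open ontoggle=alert(1)>',
]

for vector in HTML5_VECTORS:
    survived = naive_filter(vector)
    print(f"{vector!r:45} -> passes filter: {survived == vector}")
```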

Rocha, T.S., Souto, E..  2014.  ETSSDetector: A Tool to Automatically Detect Cross-Site Scripting Vulnerabilities. Network Computing and Applications (NCA), 2014 IEEE 13th International Symposium on. :306-309.

The inappropriate use of features intended to improve the usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSSDetector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSSDetector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that the correct filling of the input fields with only valid information ensures better effectiveness of the tests, increasing the detection rate of XSS attacks.
 

Gupta, M.K., Govil, M.C., Singh, G..  2014.  A context-sensitive approach for precise detection of cross-site scripting vulnerabilities. Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on. :7-12.

Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose a taint analysis and defensive programming based, HTML context-sensitive approach for the precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
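
A hedged sketch of the context-sensitive idea (the paper targets PHP source code; this is not the authors' tool): the safe encoding for user data depends on the HTML context it is emitted into, so a context-aware fix suggests a different sanitizer per sink. The contexts shown are examples.

```python
import html
import json
from urllib.parse import quote

def encode_for_context(value, context):
    if context == "html_body":       # <p>VALUE</p>
        return html.escape(value)
    if context == "html_attribute":  # <input value="VALUE">
        return html.escape(value, quote=True)
    if context == "url_parameter":   # <a href="/search?q=VALUE">
        return quote(value, safe="")
    if context == "js_string":       # <script>var q = VALUE;</script>
        return json.dumps(value)     # JSON string literal as a first approximation
    raise ValueError(f"no safe encoder known for context {context!r}")

payload = '"><script>alert(1)</script>'
for ctx in ["html_body", "html_attribute", "url_parameter", "js_string"]:
    print(ctx, "->", encode_for_context(payload, ctx))
```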

Gupta, M.K., Govil, M.C., Singh, G..  2014.  Static analysis approaches to detect SQL injection and cross site scripting vulnerabilities in web applications: A survey. Recent Advances and Innovations in Engineering (ICRAIE), 2014. :1-5.

Dependence on web applications has been increasing very rapidly in recent times for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and becomes the reason for their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) vulnerabilities are the most dangerous security vulnerabilities exploited in various popular web applications, e.g., eBay, Google, Facebook and Twitter. Research on defensive programming, vulnerability detection and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines to develop secure applications, but mostly developers do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect the applications from attack during their execution in the actual environment. There are also difficulties associated with the accurate detection of SQLI and XSS vulnerabilities in the coding phase of the software development life cycle. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches to detect SQL Injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before their exploitation in the actual environment. This paper should help researchers to note down future directions for securing legacy web applications in the early phases of the software development life cycle.

Syrivelis, D., Paschos, G.S., Tassiulas, L..  2014.  VirtueMAN: A software-defined network architecture for WiFi-based metropolitan applications. Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2014 IEEE 19th International Workshop on. :95-99.

Metropolitan-scale WiFi deployments face several challenges, including controllability and management, which prohibit the provision of Seamless Access, Quality of Service (QoS) and Security to mobile users. Thus, they remain largely an untapped networking resource. In this work, an SDN-based network architecture is proposed; it is comprised of a distributed network-wide controller and a novel datapath for wireless access points. Virtualization of network functions is employed for configurable user access control as well as for supporting an IP-independent forwarding scheme. The proposed architecture is a flat network across the deployment area, providing seamless connectivity and reachability without the need for intermediary servers over the Internet, thus enabling a wide variety of localized applications, such as video surveillance. Also, the provided interface allows for the transparent implementation of intra-network distributed cross-layer traffic control protocols that can optimize the multihop performance of the wireless network.
 

Gregr, M., Veda, M..  2014.  Challenges with Transition and User Accounting in Next Generation Networks. Network Protocols (ICNP), 2014 IEEE 22nd International Conference on. :501-503.

Future networks may change the way network administrators monitor and account for their users. History shows that usually a completely new design (clean slate) is used to propose a new network architecture - e.g., Network Control Protocol to TCP/IP, IPv4 to IPv6, or IP to the Recursive InterNetwork Architecture. The incompatibility between these architectures changes the user accounting process, as network administrators have to use different information to identify a user. The paper presents a methodology for gathering all the information needed for a smooth transition between two incompatible architectures. The transition from IPv4 to IPv6 is used as a use case, but it should be possible to use the same process with any new networking architecture.
 

Manandhar, K., Adcock, B., Xiaojun Cao.  2014.  Preserving the Anonymity in MobilityFirst networks. Computer Communication and Networks (ICCCN), 2014 23rd International Conference on. :1-6.

A scheme for preserving privacy in the MobilityFirst (MF) clean-slate future Internet architecture is proposed in this paper. The proposed scheme, called Anonymity in MobilityFirst (AMF), utilizes a three-tiered approach to effectively exploit the inherent properties of the MF network, such as the Globally Unique Flat Identifier (GUID) and the Global Name Resolution Service (GNRS), to provide anonymity to the users. While employing newly proposed schemes for exchanging keys between different tiers of routers to alleviate trust issues, the proposed scheme uses multiple routers in each tier to prevent the routers in the three tiers from colluding to expose the end users.

Qadir, J., Hasan, O..  2015.  Applying Formal Methods to Networking: Theory, Techniques, and Applications. Communications Surveys & Tutorials, IEEE. 17:256-291.

Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.
 

Riggio, R., De Pellegrini, F., Siracusa, D..  2014.  The price of virtualization: Performance isolation in multi-tenants networks. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-7.

Network virtualization sits firmly on the Internet's evolutionary path, allowing researchers to experiment with novel clean-slate designs over the production network and practitioners to manage multi-tenant infrastructures in a flexible and scalable manner. In such scenarios, isolation between virtual networks is often intended as purely logical: this is the case for address space isolation or flow space isolation. This approach neglects the effect that network virtualization has on resource allocation network-wide. In this work we investigate the price paid by a purely logical approach in terms of performance degradation. This performance loss is paid by the actual users of a multi-tenant datacenter network. We propose a solution to this problem leveraging a new network virtualization primitive, namely an online link utilization feedback mechanism. It provides each tenant with the necessary information to make efficient use of network resources. We evaluate our solution through a real implementation exploiting the OpenFlow protocol. Empirical results confirm that the proposed scheme is able to support tenants in exploiting virtualized network resources effectively.
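
A much-simplified sketch of an online link-utilization feedback primitive in the spirit of the abstract (not the authors' OpenFlow implementation): derive per-link utilization from periodically polled byte counters and expose it to tenants. Counter values and link capacities are invented for the example.

```python
def utilization(prev_bytes, curr_bytes, interval_s, capacity_bps):
    """Fraction of link capacity used during the last polling interval."""
    bits = (curr_bytes - prev_bytes) * 8
    return bits / (interval_s * capacity_bps)

# Two polls of a (hypothetical) per-port byte counter, 5 seconds apart.
poll_t0 = {"s1-eth1": 1_000_000, "s1-eth2": 4_000_000}
poll_t1 = {"s1-eth1": 2_250_000, "s1-eth2": 4_100_000}
CAPACITY_BPS = 10_000_000  # 10 Mb/s links

feedback = {link: utilization(poll_t0[link], poll_t1[link], 5, CAPACITY_BPS)
            for link in poll_t0}
print(feedback)  # e.g. {'s1-eth1': 0.2, 's1-eth2': 0.016}, shared with each tenant
```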
 

Coras, F., Saucez, D., Iannone, L., Donnet, B..  2014.  On the performance of the LISP beta network. Networking Conference, 2014 IFIP. :1-9.

The future Internet has been a hot topic during the past decade, and many approaches towards this future Internet, ranging from incremental evolution to complete clean-slate ones, have been proposed. One of the propositions, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies concerning LISP have been theoretical and, in fact, little is known about the actual LISP deployment performance. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure is strongly dependent on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping resolution delays. Although the bias is not very important, control plane performance favors USA sites as a result of their larger LISP user base, but also because the European infrastructure appears to be less reliable.