Biblio

Found 944 results

Filters: Keyword is Internet
2017-12-12
Hariri, S., Tunc, C., Badr, Y..  2017.  Resilient Dynamic Data Driven Application Systems as a Service (rDaaS): A Design Overview. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :352–356.

To overcome the current cybersecurity challenges of protecting our cyberspace and applications, we present an innovative cloud-based architecture to offer resilient Dynamic Data Driven Application Systems (DDDAS) as a cloud service that we refer to as resilient DDDAS as a Service (rDaaS). This architecture integrates Service Oriented Architecture (SOA) and DDDAS paradigms to offer the next generation of resilient and agile DDDAS-based cyber applications, particularly suited to critical applications such as Battle and Crisis Management. Using the cloud infrastructure to offer resilient DDDAS routines and applications, large-scale DDDAS applications can be developed by users from anywhere and on any device (mobile or stationary) with Internet connectivity. The rDaaS provides transformative capabilities to achieve superior situation awareness (i.e., assessment, visualization, and understanding), mission planning and execution, and resilient operations.

Zhou, G., Huang, J. X..  2017.  Modeling and Learning Distributed Word Representation with Metadata for Question Retrieval. IEEE Transactions on Knowledge and Data Engineering. 29:1226–1239.

Community question answering (cQA) has become an important issue due to the popularity of cQA archives on the Web. This paper focuses on addressing the lexical gap problem in question retrieval. Question retrieval in cQA archives aims to find existing questions that are semantically equivalent or relevant to the queried questions. However, the lexical gap problem brings a new challenge for question retrieval in cQA. In this paper, we propose to model and learn distributed word representations with metadata of category information within cQA pages for question retrieval, using two novel category-powered models. One is a basic category-powered model called MB-NET, and the other is an enhanced category-powered model called ME-NET, which can better learn the distributed word representations and alleviate the lexical gap problem. To deal with the variable size of word representation vectors, we employ the Fisher kernel framework to transform them into fixed-length vectors. Experimental results on large-scale English and Chinese cQA data sets show that our proposed approaches can significantly outperform state-of-the-art retrieval models for question retrieval in cQA. Moreover, large-scale automatic evaluation experiments show that promising and significant performance improvements can be achieved.
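
The Fisher-kernel step mentioned in the abstract maps a variable number of word vectors onto one fixed-length vector. Below is a minimal sketch of that idea, assuming word embeddings are already available as NumPy arrays and using scikit-learn's GaussianMixture as the generative model; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(word_vectors, gmm):
    """Encode a variable-length set of word vectors as a fixed-length
    Fisher-vector-style representation (gradients w.r.t. the GMM means)."""
    X = np.atleast_2d(word_vectors)              # (n_words, dim)
    posteriors = gmm.predict_proba(X)            # (n_words, n_components)
    n = X.shape[0]
    parts = []
    for k in range(gmm.n_components):
        # Posterior-weighted, variance-normalized deviation from component k.
        diff = (X - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        grad = (posteriors[:, k, None] * diff).sum(axis=0)
        parts.append(grad / (n * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)                 # length = n_components * dim

# Usage sketch: fit the GMM on embeddings from a training corpus, then encode
# each question as a fixed-length vector regardless of how many words it has.
corpus_vectors = np.random.randn(1000, 50)       # placeholder embeddings
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(corpus_vectors)
question_vec = fisher_vector(np.random.randn(12, 50), gmm)
```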

Ktob, A., Li, Z..  2017.  The Arabic Knowledge Graph: Opportunities and Challenges. 2017 IEEE 11th International Conference on Semantic Computing (ICSC). :48–52.

Semantic Web has brought forth the idea of computing with knowledge, hence attributing the ability of thinking to machines. Knowledge Graphs represent a major advancement in the construction of the Web of Data, where machines are context-aware when answering users' queries. The English Knowledge Graph was a milestone realized by Google in 2012. Even though it is a useful source of information for English users and applications, it does not offer much for Arabic users and applications. In this paper, we investigate the challenges and opportunities inherent in the life-cycle of constructing the Arabic Knowledge Graph (AKG), while following some best practices and techniques, and we suggest potential solutions to these challenges. The proprietary nature of much of the data is a major obstacle to harvesting it. Moreover, when Arabic data is openly available, it is generally in an unstructured form that requires further processing. The complexity of the Arabic language itself creates a further problem for any automatic or semi-automatic extraction process, so the usage of NLP techniques is a feasible solution. Some preliminary results are presented later in this paper. The AKG has very promising outcomes for the Semantic Web in general and the Arabic community in particular. The goal of the Arabic Knowledge Graph is mainly the integration of the different isolated datasets available on the Web. Later, it can be used in both the academic sector (by providing a large dataset for many different research fields and enhancing discovery) and the commercial sector (by improving search engines, providing metadata, and interlinking businesses).

2017-12-04
Boudguiga, A., Bouzerna, N., Granboulan, L., Olivereau, A., Quesnel, F., Roger, A., Sirdey, R..  2017.  Towards Better Availability and Accountability for IoT Updates by Means of a Blockchain. 2017 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :50–58.

Building the Internet of Things requires deploying a huge number of objects with full or limited connectivity to the Internet. Given that these objects are exposed to attackers and generally not secured by design, it is essential to be able to update them, to patch their vulnerabilities and to prevent hackers from enrolling them into botnets. Ideally, the update infrastructure should implement the CIA triad properties, i.e., confidentiality, integrity and availability. In this work, we investigate how the use of a blockchain infrastructure can meet these requirements, with a focus on availability. In addition, we propose a peer-to-peer mechanism to spread updates between objects that have limited access to the Internet. Finally, we give an overview of our ongoing prototype implementation.

Costa, V. G. T. da, Barbon, S., Miani, R. S., Rodrigues, J. J. P. C., Zarpelão, B. B..  2017.  Detecting mobile botnets through machine learning and system calls analysis. 2017 IEEE International Conference on Communications (ICC). :1–6.

Botnets have been a serious threat to Internet security. With their constant sophistication and resilience, a new trend has emerged, shifting botnets from the traditional desktop to the mobile environment. As in the desktop domain, detecting mobile botnets is essential to minimize the threat they pose. Among the diverse set of strategies applied to detect these botnets, the ones that show the best and most generalized results involve discovering patterns in their anomalous behavior. In the mobile botnet field, one way to detect these patterns is by analyzing the operation parameters of this kind of application. In this paper, we present an anomaly-based and host-based approach to detect mobile botnets. The proposed approach uses machine learning algorithms to identify anomalous behaviors in statistical features extracted from system calls. Using a self-generated dataset containing 13 families of mobile botnets and legitimate applications, we were able to test the performance of our approach in a close-to-reality scenario. The proposed approach achieved strong results, including low false positive rates and high true detection rates.
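
A minimal sketch of the host-based detection idea described above: derive simple statistical features from per-application system-call counts and train an off-the-shelf classifier on them. The feature set and the RandomForest choice are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def syscall_features(counts):
    """Turn per-syscall counts collected for one app into simple statistics
    (total volume, spread, peak, and concentration of the call distribution)."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    freq = counts / total if total > 0 else counts
    return [total, counts.mean(), counts.std(), counts.max(),
            (freq ** 2).sum()]          # Herfindahl-style concentration index

# X: one feature row per monitored application; y: 1 = botnet, 0 = legitimate.
rng = np.random.default_rng(0)
X = np.array([syscall_features(rng.integers(0, 200, size=50)) for _ in range(300)])
y = rng.integers(0, 2, size=300)        # placeholder labels for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```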

2017-11-27
Mohammadi, M., Chu, B., Lipford, H. R., Murphy-Hill, E..  2016.  Automatic Web Security Unit Testing: XSS Vulnerability Detection. 2016 IEEE/ACM 11th International Workshop in Automation of Software Test (AST). :78–84.

Integrating security testing into the workflow of software developers can not only save resources otherwise spent on separate security testing but also reduce the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach to detect a common type of Cross Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract the encoding functions used in a web application to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that cannot be found by static analysis tools. We also show that our approach can efficiently cover a common type of XSS vulnerability. This approach can be generalized to test input validation against other types of injection, such as command-line injection.
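
A minimal sketch of the unit-testing idea: feed candidate attack strings to an application's encoding (sanitization) function and flag the function if dangerous markup survives in its output. The tiny attack-string corpus and the `html.escape`-based example encoder are illustrative; the paper generates its attack strings automatically.

```python
import html
import re

ATTACK_STRINGS = [                      # tiny illustrative corpus
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "<svg/onload=alert(1)>",
]

# These payloads rely on injecting new HTML tags into the output.
DANGEROUS = re.compile(r"<\s*(script|img|svg)", re.IGNORECASE)

def encoder_is_effective(encode):
    """Return True if no attack string survives encoding in executable form."""
    return all(not DANGEROUS.search(encode(payload)) for payload in ATTACK_STRINGS)

print(encoder_is_effective(html.escape))   # entity-encodes '<', neutralizing tag injection
print(encoder_is_effective(str.strip))     # a no-op "sanitizer" fails the test
```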

2017-11-20
Koch, R., Kühn, T., Odenwald, M., Rodosek, G. Dreo.  2016.  Dr. WATTson: Lightweight current-based Intrusion Detection (CBID). 2016 14th Annual Conference on Privacy, Security and Trust (PST). :170–177.

Intrusion detection has been an active field of research for more than 35 years. Numerous systems have been built based on the two fundamental detection principles, knowledge-based and behavior-based detection. Yet, a look at the day-to-day news about data breaches and successful attacks shows that detection effectiveness is still limited. Moreover, heavyweight intrusion detection systems cannot be installed in every endangered environment. For example, Industrial Control Systems are typically utilized for decades, amortizing companies' huge investments. Thus, some of these systems have been in operation for years but were designed without security in mind. Even worse, as such systems nowadays often have connections to other networks and even the Internet, adequate protection is mandatory, yet integrating intrusion detection can be extremely difficult - or even impossible to date. We propose a new lightweight current-based IDS that uses a difficult-to-manipulate measurement base and verifiable ground truth. The focus of our system is providing intrusion detection for ICS and SCADA environments at low cost and with easy integration. Dr. WATTson, a prototype implemented based on our concept, provides high detection rates and low false alarm rates.

Weichselbaum, L., Spagnuolo, M., Janc, A..  2016.  Adopting Strict Content Security Policy for XSS Protection. 2016 IEEE Cybersecurity Development (SecDev). :149–149.

Content Security Policy is a mechanism designed to prevent the exploitation of XSS – the most common high-risk web application flaw. CSP restricts which scripts can be executed by allowing developers to define valid script sources; an attacker with a content-injection flaw should not be able to force the browser to execute arbitrary malicious scripts. Currently, CSP is commonly used in conjunction with domain-based script whitelists, where the existence of a single unsafe endpoint in the whitelist effectively removes the value of the policy as a protection against XSS.
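
Strict CSP in this sense typically replaces host whitelists with per-response nonces plus 'strict-dynamic'. Below is a minimal sketch of emitting such a header, using Flask purely as an illustrative web framework; the route and policy details are assumptions, not the paper's deployment.

```python
import secrets
from flask import Flask, g, render_template_string

app = Flask(__name__)

@app.before_request
def make_nonce():
    g.csp_nonce = secrets.token_urlsafe(16)     # fresh nonce for every response

@app.after_request
def set_csp(response):
    # Nonce-based policy: only scripts carrying this nonce may run,
    # regardless of which host they are loaded from.
    response.headers["Content-Security-Policy"] = (
        f"script-src 'nonce-{g.csp_nonce}' 'strict-dynamic'; "
        "object-src 'none'; base-uri 'none'"
    )
    return response

@app.route("/")
def index():
    # Inline scripts must echo the same nonce to be allowed to execute.
    return render_template_string(
        '<script nonce="{{ nonce }}">console.log("allowed");</script>',
        nonce=g.csp_nonce,
    )
```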

Regainia, L., Salva, S., Ecuhcurs, C..  2016.  A classification methodology for security patterns to help fix software weaknesses. 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA). :1–8.

Security patterns are generic solutions that can be applied from the early stages of the software life cycle to overcome recurrent security weaknesses. Their generic nature and growing number make their choice difficult, even for experts in system design. To help with this choice, this paper proposes a semi-automatic classification methodology, and the resulting classification itself, which exposes relationships among software weaknesses, security principles and security patterns. It expresses which patterns remove a given weakness with respect to the security principles that have to be addressed to fix the weakness. The methodology is based on seven steps, which anatomize patterns and weaknesses into sets of more precise sub-properties that are associated through a hierarchical organization of security principles. These steps provide the detailed justifications of the resulting classification and allow its upgrade. Without loss of generality, this classification has been established for Web applications and covers 185 software weaknesses, 26 security patterns and 66 security principles. Research supported by the industrial chair on Digital Confidence (http://confiance-numerique.clermont-universite.fr/index-en.html).

Cordero, C. García, Hauke, S., Mühlhäuser, M., Fischer, M..  2016.  Analyzing flow-based anomaly intrusion detection using Replicator Neural Networks. 2016 14th Annual Conference on Privacy, Security and Trust (PST). :317–324.

Defending key network infrastructure, such as Internet backbone links or the communication channels of critical infrastructure, is paramount, yet challenging. The inherently complex nature and quantity of network data impede detecting attacks in real-world settings. In this paper, we utilize features of network flows, characterized by their entropy, together with an extended version of the original Replicator Neural Network (RNN) and deep learning techniques to learn models of normality. This combination allows us to apply anomaly-based intrusion detection on arbitrarily large amounts of data and, consequently, large networks. Our approach is unsupervised and requires no labeled data. It also accurately detects network-wide anomalies without presuming that the training data is completely free of attacks. The evaluation of our intrusion detection method on real network data indicates that it can accurately detect resource exhaustion attacks and network profiling techniques of varying intensities. The developed method is efficient because a normality model can be learned by training an RNN within only a few seconds.
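
A minimal sketch of the core idea: train a replicator (autoencoder-style) network to reconstruct entropy features of traffic windows and flag windows whose reconstruction error is unusually high. The scikit-learn MLP and the synthetic data below are stand-ins for the extended RNN and real flow data the paper uses.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Each row: entropy of selected flow attributes (e.g. src IP, dst IP,
# src port, dst port, packet size) computed over one time window.
rng = np.random.default_rng(1)
train = rng.normal(loc=5.0, scale=0.3, size=(2000, 5))     # mostly-normal traffic
test = np.vstack([rng.normal(5.0, 0.3, (95, 5)),
                  rng.normal(2.0, 0.3, (5, 5))])           # a few anomalous windows

scaler = StandardScaler().fit(train)
X = scaler.transform(train)

# Replicator network: narrow hidden layers force a compressed representation,
# so inputs unlike the training distribution reconstruct poorly.
rnn = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=2000, random_state=0)
rnn.fit(X, X)

def anomaly_scores(samples):
    Z = scaler.transform(samples)
    return np.mean((rnn.predict(Z) - Z) ** 2, axis=1)      # reconstruction error

threshold = np.percentile(anomaly_scores(train), 99)
print("anomalous windows:", int((anomaly_scores(test) > threshold).sum()))
```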

2017-11-03
Zulkarnine, A. T., Frank, R., Monk, B., Mitchell, J., Davies, G..  2016.  Surfacing collaborated networks in dark web to find illicit and criminal content. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). :109–114.

The Tor Network, a hidden part of the Internet, is becoming an ideal hosting ground for illegal activities and services, including large drug markets, financial fraud, espionage and child sexual abuse. Researchers and law enforcement rely on manual investigations, which are both time-consuming and ultimately inefficient. The first part of this paper explores illicit and criminal content identified by prominent researchers in the dark web. We previously developed a web crawler that automatically searched websites on the internet based on pre-defined keywords and followed the hyperlinks in order to create a map of the network. This crawler had previously demonstrated success in locating and extracting data on child exploitation images, videos, keywords and linkages on the public internet. However, as Tor functions differently at the TCP level and uses socket connections, further technical challenges are faced when crawling Tor. Other inherent challenges for advanced Tor crawling include scalability, content selection tradeoffs, and social obligation. We discuss these challenges and the measures taken to meet them. Our modified web crawler for Tor, termed the "Dark Crawler", has been able to access Tor while simultaneously accessing the public internet. We present initial findings regarding what extremist and terrorist content is present in Tor and how this content is connected in a mapped network that facilitates dark web crimes. Our results so far indicate that the most popular websites in the dark web act as catalysts for dark web expansion by providing the necessary knowledge base, support and services to build Tor hidden services and onion websites.
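
A minimal sketch of routing a keyword-driven crawler's requests through a local Tor client via its SOCKS proxy. It assumes `requests` with SOCKS support (`requests[socks]`/PySocks), BeautifulSoup for parsing, and Tor listening on 127.0.0.1:9050; the seed URL is a placeholder, and the real Dark Crawler's design is not reproduced here.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# socks5h:// makes the proxy (Tor) resolve .onion names instead of local DNS.
TOR_PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}

def crawl(seed, keywords, max_pages=50):
    """Breadth-first crawl through Tor, recording pages that match keywords."""
    seen, queue, hits, fetched = {seed}, deque([seed]), [], 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        fetched += 1
        try:
            page = requests.get(url, proxies=TOR_PROXIES, timeout=30)
        except requests.RequestException:
            continue                                 # unreachable hidden service
        soup = BeautifulSoup(page.text, "html.parser")
        if any(k in soup.get_text(" ").lower() for k in keywords):
            hits.append(url)
        for a in soup.find_all("a", href=True):      # follow hyperlinks to map the network
            link = urljoin(url, a["href"])
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return hits

# hits = crawl("http://exampleonionaddress.onion/", ["keyword1", "keyword2"])
```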
Park, A. J., Beck, B., Fletche, D., Lam, P., Tsang, H. H..  2016.  Temporal analysis of radical dark web forum users. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :880–883.

Extremist groups have turned to the Internet and social media sites as a means of sharing information amongst one another. This research study analyzes forum posts and finds people who show radical tendencies through the use of natural language processing and sentiment analysis. The forum data being used are from six Islamic forums on the Dark Web which are made available for security research. This research project uses a POS tagger to isolate keywords and nouns that can be utilized with the sentiment analysis program. Then the sentiment analysis program determines the polarity of the post. The post is scored as either positive or negative. These scores are then divided into monthly radical scores for each user. Once these time clusters are mapped, the change in opinions of the users over time may be interpreted as rising or falling levels of radicalism. Each user is then compared on a timeline to other radical users and events to determine possible connections or relationships. The ability to analyze a forum for an overall change in attitude can be an indicator of unrest and possible radical actions or terrorism.
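
A minimal sketch of the post-scoring pipeline described above, using NLTK's POS tagger and the VADER sentiment scorer as stand-in components (the study's own tools and thresholds are not specified here); posts are then aggregated into monthly scores per user.

```python
from collections import defaultdict

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# One-time NLTK downloads required: punkt, averaged_perceptron_tagger, vader_lexicon.
sia = SentimentIntensityAnalyzer()

def post_polarity(text):
    """Score a post using only its nouns/keywords, as isolated by a POS tagger."""
    tokens = nltk.word_tokenize(text)
    nouns = [w for w, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]
    return sia.polarity_scores(" ".join(nouns))["compound"]   # range -1 .. +1

def monthly_scores(posts):
    """posts: iterable of (user, 'YYYY-MM', text) -> {(user, month): mean score}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for user, month, text in posts:
        sums[(user, month)] += post_polarity(text)
        counts[(user, month)] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Tracking how a user's monthly score drifts over time is what the study
# interprets as rising or falling radicalism.
```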
Iliou, C., Kalpakis, G., Tsikrika, T., Vrochidis, S., Kompatsiaris, I..  2016.  Hybrid Focused Crawling for Homemade Explosives Discovery on Surface and Dark Web. 2016 11th International Conference on Availability, Reliability and Security (ARES). :229–234.

This work proposes a generic focused crawling framework for discovering resources on any given topic that reside on the Surface or the Dark Web. The proposed crawler is able to seamlessly traverse the Surface Web and several darknets present in the Dark Web (i.e. Tor, I2P and Freenet) during a single crawl by automatically adapting its crawling behavior and its classifier-guided hyperlink selection strategy based on the network type. This hybrid focused crawler is demonstrated for the discovery of Web resources containing recipes for producing homemade explosives. The evaluation experiments indicate the effectiveness of the proposed approach both for the Surface and the Dark Web.

Baravalle, A., Lopez, M. S., Lee, S. W..  2016.  Mining the Dark Web: Drugs and Fake Ids. 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). :350–356.

In recent years, governmental bodies have been futilely trying to fight against dark web marketplaces. Shortly after the closing of "The Silk Road" by the FBI and Europol in 2013, new successors were established. Through the combination of cryptocurrencies and nonstandard communication protocols and tools, agents can anonymously trade in a marketplace for illegal items without leaving any record. This paper presents research carried out to gain insights into the products and services sold within one of the larger marketplaces for drugs, fake IDs and weapons on the Internet, Agora. Our work sheds light on the nature of the market: there is a clear preponderance of drugs, which account for nearly 80% of the total items on sale. The ready availability of counterfeit documents, while they make up a much smaller percentage of the market, raises worries. Finally, the role of organized crime within Agora is discussed and presented.

Cabaj, K., Mazurczyk, W..  2016.  Using Software-Defined Networking for Ransomware Mitigation: The Case of CryptoWall. IEEE Network. 30:14–20.

Currently, different forms of ransomware are increasingly threatening Internet users. Modern ransomware encrypts important user data, and it is only possible to recover it once a ransom has been paid. In this article we show how software-defined networking (SDN) can be utilized to improve ransomware mitigation. In more detail, we analyze the behavior of popular ransomware - CryptoWall - and, based on this knowledge, propose two real-time mitigation methods. Then we describe the design of an SDN-based system, implemented using OpenFlow, that facilitates a timely reaction to this threat, which is a crucial factor in the case of crypto ransomware. Importantly, such a design does not significantly affect overall network performance. Experimental results confirm that the proposed approach is feasible and efficient.
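
A minimal sketch of the dynamic-blacklisting reaction the article describes: when a new flow is seen toward a host tagged as a CryptoWall proxy, the controller installs a drop rule. The `install_drop_flow` call is a hypothetical placeholder for whatever flow-programming API the chosen OpenFlow controller exposes, and the hooks are illustrative, not the authors' implementation.

```python
# Hypothetical controller hooks: not a real OpenFlow library API.
CNC_PROXY_BLACKLIST = set()          # filled by the behavioral-analysis stage

def report_suspicious_proxy(ip):
    """Invoked when traffic analysis tags an HTTP proxy as CryptoWall C&C."""
    CNC_PROXY_BLACKLIST.add(ip)

def on_new_flow(controller, src_ip, dst_ip, dst_port):
    """Called by the controller for the first packet of each new flow."""
    if dst_ip in CNC_PROXY_BLACKLIST:
        # Block further communication with the ransomware's proxy server,
        # interrupting the exchange needed before encryption can start.
        controller.install_drop_flow(match={"ipv4_dst": dst_ip})
        return "drop"
    return "forward"
```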

Upadhyaya, R., Jain, A..  2016.  Cyber ethics and cyber crime: A deep dwelved study into legality, ransomware, underground web and bitcoin wallet. 2016 International Conference on Computing, Communication and Automation (ICCCA). :143–148.

Future wars will be cyber wars, and the attacks will be a sturdy amalgamation of cryptography and malware to distort information systems and their security. The explosive growth of the Internet facilitates cyber-attacks. Web threats include risks such as the loss of confidential data and the erosion of consumer confidence in e-commerce. A cyber hijacking threat that has emerged in a new form in cyberspace is known as ransomware, or crypto virus. The locker bot waits for specific triggering events to become active. It blocks the task manager, command prompt and other cardinal executable files; a thread checks for their existence every few milliseconds, killing them if present. Imposing serious threats on the digital generation, ransomware preys on Internet users by hijacking their systems and encrypting entire system utility files and folders, then demanding ransom in exchange for the decryption key that restores the encrypted resources to their original form. In this research we present an anatomical study of a ransomware family that has recently gained notoriety, called CTB Locker, and go on to examine the hard money it makes per user and its source C&C server, which lies within the Internet's greatest incognito mode - the Dark Net. CTB Locker (Cryptolocker ransomware) creates a Bitcoin wallet per victim, and payment is made in digital bitcoins via the Tor anonymity network. CTB Locker is among the deadliest malware the world has encountered.

Shinde, R., Veeken, P. Van der, Schooten, S. Van, Berg, J. van den.  2016.  Ransomware: Studying transfer and mitigation. 2016 International Conference on Computing, Analytics and Security Trends (CAST). :90–95.

Cybercrimes today are focused on returns, especially monetary returns. In this paper - through a literature study, interviews with people victimized by ransomware, and a survey of a random set of victimized and non-victimized users - conclusions are drawn about the dependence of ransomware on demographics such as age and education. Increasing threats due to the ease of transferring ransomware through the Internet are also discussed. Finally, low awareness among company professionals is confirmed, and reluctance to pay when victimized is found to be a common trait.

2017-05-30
Singh, Rachee, Gill, Phillipa.  2016.  PathCache: A Path Prediction Toolkit. Proceedings of the 2016 ACM SIGCOMM Conference. :569–570.

Path prediction on the Internet has been a topic of research in the networking community for close to a decade. Applications of path prediction solutions have ranged from optimizing the selection of peers in peer-to-peer networks to improving and debugging CDN predictions. Recently, revelations of traffic correlation and surveillance on the Internet have raised the topic of path prediction in the context of network security. Specifically, predicting network paths can allow us to identify and avoid given organizations on network paths (e.g., to avoid traffic correlation attacks in Tor) or to infer the impact of hijacks and interceptions when direct measurements are not available. In this poster we propose the design and implementation of PathCache, which aims to reuse measurement data to estimate AS-level paths on the Internet. Unlike similar systems, PathCache does not assume that routing on the Internet is destination based. Instead, we develop an algorithm to compute confidence in paths between ASes. These multiple paths, ranked by their confidence values, are returned to the user.
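
A minimal sketch of the confidence idea in the abstract: aggregate observed AS-level paths and rank candidate paths by how often measurements agree on each next hop. The data structures and the frequency-based confidence measure are illustrative assumptions, not PathCache's actual algorithm.

```python
from collections import Counter, defaultdict

# Observed AS paths from traceroute/BGP measurements toward some destination AS.
measured_paths = [
    (3356, 1299, 15169),
    (3356, 1299, 15169),
    (3356, 2914, 15169),
]

# For each (current AS, destination AS), count which next hop was observed.
next_hop_counts = defaultdict(Counter)
for path in measured_paths:
    dst = path[-1]
    for i, asn in enumerate(path[:-1]):
        next_hop_counts[(asn, dst)][path[i + 1]] += 1

def ranked_paths(src, dst, max_depth=10):
    """Enumerate candidate AS paths src->dst with frequency-based confidence."""
    results = []
    def walk(asn, path, conf, depth):
        if asn == dst:
            results.append((path, conf))
            return
        if depth == max_depth:
            return
        counts = next_hop_counts.get((asn, dst), {})
        total = sum(counts.values())
        for nxt, c in counts.items():
            if nxt not in path:                       # avoid loops
                walk(nxt, path + [nxt], conf * c / total, depth + 1)
    walk(src, [src], 1.0, 0)
    return sorted(results, key=lambda r: -r[1])       # multiple paths, ranked

print(ranked_paths(3356, 15169))
```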

2017-04-20
Rohrmann, R., Patton, M. W., Chen, H..  2016.  Anonymous port scanning: Performing network reconnaissance through Tor. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). :217–217.

The anonymizing network Tor is examined as one method of anonymizing port scanning tools and avoiding identification and retaliation. Performing anonymized port scans through Tor is possible using Nmap, but parallelization of the scanning processes is required to accelerate the scan rate.
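
A minimal sketch of the same idea without Nmap: open TCP connections through Tor's SOCKS proxy (via the PySocks library, assumed installed, with Tor listening on 127.0.0.1:9050) and parallelize the per-port probes with a thread pool, since each circuit-routed connect is slow. The target hostname is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

import socks  # PySocks

TOR_HOST, TOR_PORT = "127.0.0.1", 9050

def probe(target, port, timeout=15):
    """TCP connect scan of one port, routed through the local Tor client."""
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, TOR_HOST, TOR_PORT)   # remote DNS via the proxy
    s.settimeout(timeout)
    try:
        s.connect((target, port))
        return port, True
    except (socks.ProxyError, OSError):
        return port, False
    finally:
        s.close()

def scan(target, ports, workers=20):
    # Parallel probes compensate for the high latency of Tor circuits.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda p: probe(target, p), ports))

# open_ports = {p for p, up in scan("scanme.example.org", range(20, 1025)).items() if up}
```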

Alvarez, E. D., Correa, B. D., Arango, I. F..  2016.  An analysis of XSS, CSRF and SQL injection in colombian software and web site development. 2016 8th Euro American Conference on Telematics and Information Systems (EATIS). :1–5.

Software development and web applications have become fundamental in our lives. Millions of users access these applications to communicate, obtain information and perform transactions. However, these users are exposed to many risks, commonly due to the developer's lack of experience with security protocols. Although there is much research on web security and hacking protection, there are still plenty of vulnerable websites. This article focuses on analyzing three main hacking techniques - XSS, CSRF, and SQL injection - over a representative group of Colombian websites. Our goal is to obtain information about how Colombian companies and organizations give (or do not give) relevance to security, and how the end user could be affected.

Murtaza, S. M., Abid, A. S..  2016.  Automated white-list learning technique for detection of malicious attack on web application. 2016 13th International Bhurban Conference on Applied Sciences and Technology (IBCAST). :416–420.

Web application security has become crucially vital these days. Earlier, the "default allow" model was used to secure web applications, but it was unable to secure them against a plethora of attacks [1]. In contrast, the default-deny model provides more restrictive security: it first builds a model for the particular application and then permits only those requests that conform to that model, while ignoring everything else. In addition, a novel and effective methodology is followed that analyzes the validity of application requests and results in the generation of semi-structured XML cases for the web application. Furthermore, mature and resilient XML cases are generated by employing learning techniques. The system is further gauged by examining whether the XML file containing the cases conforms to the XML format. Moreover, the distinction between malicious and non-malicious traffic is carried out carefully. Results have proved the efficacy of rule generation using access traffic logs of cross-site scripting (XSS), SQL injection, HTTP request splitting, HTTP response splitting and buffer overflow attacks.

Mhana, Samer Attallah, Din, Jamilah Binti, Atan, Rodziah Binti.  2016.  Automatic generation of Content Security Policy to mitigate cross site scripting. 2016 2nd International Conference on Science in Information Technology (ICSITech). :324–328.

Content Security Policy (CSP) is a powerful client-side security layer that helps in mitigating and detecting a wide range of Web attacks, including cross-site scripting (XSS). However, utilizing CSP is a fallible process for site administrators and may require significant changes in web application code. In this paper, we propose an approach to help site administrators overcome these limitations in order to utilize the full benefits of the CSP mechanism, which leads to sites that are more immune to XSS. The algorithm is implemented as a plugin. It does not interfere with the Web application's original code, and the plugin can be "installed" on any other web application with minimal effort. The algorithm can be implemented as part of the Web server layer, not as part of the business logic layer, and it can be extended to support generating CSP for contents that are modified by JavaScript after loading. The current approach inspects the static contents of URLs.
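
A minimal sketch of the generation step described above: inspect a page's static contents, collect the origins of external scripts, styles and images, and emit a corresponding CSP header. The stdlib-based parsing and the hashing of inline scripts are simplified illustrative assumptions, not the paper's plugin.

```python
import base64
import hashlib
from html.parser import HTMLParser
from urllib.parse import urlparse

class SourceCollector(HTMLParser):
    """Collect external resource origins and inline script bodies from a page."""
    def __init__(self):
        super().__init__()
        self.sources = {"script-src": set(), "style-src": set(), "img-src": set()}
        self.inline_scripts = []
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script":
            self._in_script = "src" not in attrs
            if "src" in attrs:
                self._add("script-src", attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self._add("style-src", attrs.get("href", ""))
        elif tag == "img":
            self._add("img-src", attrs.get("src", ""))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if self._in_script and data.strip():
            self.inline_scripts.append(data)

    def _add(self, directive, url):
        parsed = urlparse(url)
        if parsed.scheme and parsed.netloc:              # relative URLs fall under 'self'
            self.sources[directive].add(f"{parsed.scheme}://{parsed.netloc}")

def generate_csp(html_text):
    c = SourceCollector()
    c.feed(html_text)
    # Allow inline scripts only via their hashes; everything else from 'self'
    # plus the origins actually referenced by the static content.
    hashes = {"'sha256-" + base64.b64encode(
                  hashlib.sha256(s.encode()).digest()).decode() + "'"
              for s in c.inline_scripts}
    parts = []
    for directive, origins in c.sources.items():
        allowed = ["'self'"] + sorted(origins)
        if directive == "script-src":
            allowed += sorted(hashes)
        parts.append(f"{directive} {' '.join(allowed)}")
    return "; ".join(parts)

print(generate_csp('<script src="https://cdn.example.com/a.js"></script>'
                   '<script>console.log("hi")</script>'))
```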

Rao, K. S., Jain, N., Limaje, N., Gupta, A., Jain, M., Menezes, B..  2016.  Two for the price of one: A combined browser defense against XSS and clickjacking. 2016 International Conference on Computing, Networking and Communications (ICNC). :1–6.

Cross Site Scripting (XSS) and clickjacking have been ranked among the top web application threats in recent times. This paper introduces XBuster - our client-side defence against XSS, implemented as an extension to the Mozilla Firefox browser. XBuster splits each HTTP request parameter into HTML and JavaScript contexts and stores them separately. It searches for both contexts in the HTTP response and handles each context type differently. It defends against all XSS attack vectors including partial script injection, attribute injection and HTML injection. Also, existing XSS filters may inadvertently disable frame busting code used in web pages as a defence against clickjacking. However, XBuster has been designed to detect and neutralize such attempts.

Chaudhary, P., Gupta, B. B., Yamaguchi, S..  2016.  XSS detection with automatic view isolation on online social network. 2016 IEEE 5th Global Conference on Consumer Electronics. :1–5.

Online Social Networks (OSNs) continuously suffer from the negative impact of Cross-Site Scripting (XSS) vulnerabilities. This paper describes a novel framework for mitigating XSS attacks on OSN-based platforms. It is completely based on a request authentication and view isolation approach. It detects XSS attacks by validating string values extracted from vulnerable checkpoints in the web page, using a string examination algorithm backed by an XSS attack vector repository. Any similarity (i.e., the string is not validated) indicates the presence of malicious code injected by the attacker, and the framework then removes the script code to mitigate the XSS attack. To assess the defending ability of our designed model, we have tested it on an OSN-based web application, HumHub. The experimental results revealed that our model discovers XSS attack vectors with low false negative and false positive rates and tolerable performance overhead.

Zhang, X., Gong, L., Xun, Y., Piao, X., Leit, K..  2016.  Centaur: A evolutionary design of hybrid NDN/IP transport architecture for streaming application. 2016 IEEE 7th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :1–7.

Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aiming to replace IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, typically in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which embodies both NDN's smartness and scalability and IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet. The head is intelligent in content retrieval and self-control, while the IP feet are able to transport large amounts of media data faster than if NDN were directly overlaid on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experimental scenarios. The results show that Centaur can achieve better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.