Biblio

Found 585 results

Filters: Keyword is Computer architecture
2017-11-03
Ahmadian, M. M., Shahriari, H. R..  2016.  2entFOX: A framework for high survivable ransomwares detection. 2016 13th International Iranian Society of Cryptology Conference on Information Security and Cryptology (ISCISC). :79–84.

Ransomware has become a growing threat since 2012, and the situation continues to worsen. The lack of security mechanisms and security awareness is pushing systems into the mire of ransomware attacks. In this paper, a new framework called 2entFOX is proposed to detect high survivable ransomware (HSR). To our knowledge this framework can be considered one of the first in ransomware detection, given the little publicly available research in this field. We analyzed the behaviour of Windows ransomware and sought features that are particularly useful for detecting this type of malware with high detection accuracy and a low false-positive rate. After extensive experimental analysis we extracted 20 effective features; thanks to two highly efficient ones among them, we obtained an appropriate feature set for HSR detection. After proposing an architecture based on a Bayesian belief network, the final evaluation is performed on known and unknown ransomware samples across six different scenarios. The results of these evaluations show the high accuracy of 2entFOX in detecting HSRs.
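
To make the feature-combination idea concrete, here is a minimal sketch of a naive-Bayes-style belief update over binary behavioural features, loosely in the spirit of a Bayesian-belief-network detector; the feature names and all probabilities are hypothetical and are not taken from 2entFOX.

```python
# Illustrative only: naive-Bayes-style belief update over binary behavioural
# features, loosely in the spirit of a Bayesian belief network detector.
# Feature names and all probabilities below are hypothetical.

FEATURES = {
    # feature: (P(feature | ransomware), P(feature | benign))
    "mass_file_renames":    (0.90, 0.02),
    "crypto_api_burst":     (0.85, 0.10),
    "shadow_copy_deletion": (0.70, 0.01),
    "ransom_note_dropped":  (0.60, 0.001),
    "c2_beaconing":         (0.50, 0.05),
}

def ransomware_posterior(observed: dict, prior: float = 0.01) -> float:
    """Return P(ransomware | observed features) under naive independence."""
    odds = prior / (1.0 - prior)
    for name, present in observed.items():
        p_r, p_b = FEATURES[name]
        if present:
            odds *= p_r / p_b
        else:
            odds *= (1.0 - p_r) / (1.0 - p_b)
    return odds / (1.0 + odds)

sample = {"mass_file_renames": True, "crypto_api_burst": True,
          "shadow_copy_deletion": False, "ransom_note_dropped": False,
          "c2_beaconing": True}
print(f"P(ransomware | behaviour) = {ransomware_posterior(sample):.3f}")
```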

2017-04-20
Zhang, X., Gong, L., Xun, Y., Piao, X., Leit, K..  2016.  Centaur: A evolutionary design of hybrid NDN/IP transport architecture for streaming application. 2016 IEEE 7th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :1–7.

Named Data Networking (NDN), a clean-slate, data-oriented Internet architecture aimed at replacing IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, particularly in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which combines NDN's smartness and scalability with IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet. The head is intelligent in content retrieval and self-control, while the IP feet can transport large amounts of media data faster than NDN directly overlaid on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experimental scenarios. The results show that Centaur achieves better load balance with lower overhead, close to the performance that an ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.

Vidhya, R., Karthik, P..  2016.  Coexistence of cellular IOT and 4G networks. 2016 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT). :555–558.

The increase in M2M use cases, the availability of narrowband spectrum with operators, and the need for very low-cost modems for M2M applications have led to discussions around what is called Cellular IOT (CIOT). To develop the Cellular IOT network, discussions focus on a new air interface that can leverage narrowband spectrum and lead to low-cost modems that can be embedded into M2M/IOT devices. One key issue that arises during the development of a clean-slate CIOT network is coexistence with 4G networks. In this paper we explore architectures for harmonizing Cellular IOT and 4G networks that also address the key requirement of possibly using narrow channels for IOT on existing 4G networks, rather than only as a separate standalone Cellular IOT system. We analyze the architectural implications for core network load in a tightly coupled CIOT-LTE architecture and propose an offload mechanism from LTE to CIOT cells.

Bronzino, F., Raychaudhuri, D., Seskar, I..  2016.  Demonstrating Context-Aware Services in the Mobility First Future Internet Architecture. 2016 28th International Teletraffic Congress (ITC 28). 01:201–204.

As the number of mobile devices populating the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to increase in popularity in the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on three fundamental mechanisms: an easy way to specify context using human-understandable techniques, i.e. names; an architecture-supported management mechanism that both conveniently deploys the service and efficiently provides management capabilities; and a native delivery system that reduces the tax on network components and the overhead cost of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders that exploits users' location awareness to support quick and reliable alert messages for interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our designed framework.
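
As a toy illustration of the name-based context abstraction described above, the sketch below maps a human-readable context name to the set of currently registered receiver GUIDs and delivers an alert to each member; the data structures and names are assumptions for illustration, not MobilityFirst's actual global name service API.

```python
# Toy illustration of name-based context delivery: a context name (e.g. a
# geographic area of interest) resolves to the set of device GUIDs currently
# registered under it, and an alert is delivered to every member. The data
# structures and names are hypothetical.

from collections import defaultdict

context_members = defaultdict(set)   # context name -> set of device GUIDs

def register(context_name: str, guid: str) -> None:
    context_members[context_name].add(guid)

def send_alert(context_name: str, message: str) -> None:
    for guid in context_members.get(context_name, set()):
        print(f"deliver to {guid}: {message}")   # stand-in for network delivery

register("vehicles:route-1:mile-12-15", "GUID-CAR-042")
register("vehicles:route-1:mile-12-15", "GUID-CAR-117")
send_alert("vehicles:route-1:mile-12-15",
           "accident ahead, emergency vehicles approaching")
```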

Akhtar, N., Matta, I., Wang, Y..  2016.  Managing NFV using SDN and control theory. NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium. :1005–1006.

Control theory and SDN (Software Defined Networking) are key components for NFV (Network Function Virtualization) deployment. However, little has been done to use a control-theoretic approach for SDN and NFV management. In this demo, we describe a use case for NFV management using control theory and SDN. We use the management architecture of RINA (a clean-slate Recursive InterNetwork Architecture) to manage Virtual Network Function (VNF) instances over the GENI testbed. We deploy Snort, an Intrusion Detection System (IDS), as the VNF. Our network topology has source and destination hosts, multiple IDSes, an Open vSwitch (OVS), and an OpenFlow controller. A distributed management application running on RINA measures the state of the VNF instances and communicates this information to a Proportional Integral (PI) controller, which then provides load-balancing information to the OpenFlow controller. The latter in turn updates traffic flow forwarding rules on the OVS switch, thus balancing load across the VNF instances. This demo illustrates the benefits of such a control-theoretic load-balancing approach and of the RINA management architecture in virtualized environments for NFV management. It also shows that the GENI testbed can easily support a wide range of SDN- and NFV-related experiments.
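
A minimal sketch of the PI-controlled load-balancing loop, assuming two IDS instances, an invented load model, and illustrative controller gains; in the demo above, the computed split would be translated into OpenFlow forwarding rules rather than printed.

```python
import random

# Minimal sketch of PI-controlled load balancing between two VNF (IDS)
# instances. Gains, load model, and measurements are hypothetical.

KP, KI = 0.002, 0.0002         # proportional and integral gains (illustrative)
weight = 0.5                   # fraction of traffic sent to instance A
integral = 0.0

for step in range(20):
    # Measured per-instance load (e.g. CPU or queue length), with noise.
    load_a = weight * 100 + random.uniform(-5, 5)
    load_b = (1 - weight) * 120 + random.uniform(-5, 5)   # B is a weaker box

    error = load_a - load_b    # positive -> A is overloaded, shift traffic to B
    integral += error
    weight -= KP * error + KI * integral
    weight = min(max(weight, 0.0), 1.0)   # keep the traffic split valid

    print(f"step {step:2d}: load_a={load_a:6.1f} load_b={load_b:6.1f} "
          f"weight_a={weight:.2f}")
```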

You, T..  2016.  Toward the future of internet architecture for IoE: Precedent research on evolving the identifier and locator separation schemes. 2016 International Conference on Information and Communication Technology Convergence (ICTC). :436–439.

The Internet has become the largest and best-known communication network, serving as social, industrial, and public infrastructure since it was invented in the late 1960s. In historical retrospect, the Internet architecture has evolved repeatedly in response to various technical challenges; for instance, in the early 1990s the Internet faced a scalability crisis that was soon overcome through emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability as the key technical challenge, forecasting that the Internet of Things era has arrived. First, we describe the identifier and locator separation scheme, which, in historical perspective, can achieve dramatic architectural evolution. We then review various kinds of identifier and locator separation schemes, since such schemes have recently become a major design pillar for the future Internet architecture, in both clean-slate future Internet architectures and evolving Internet architectures. Lastly, we present the results of an analysis, in tabular form, for the future Internet of Everything, in which the number of Internet-connected devices will grow to more than 20 billion by 2020.
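
A minimal sketch of the identifier/locator split discussed above: applications address endpoints by a stable identifier, a mapping system resolves the identifier to its current locator, and a handover only updates the mapping; the identifiers and locators shown are illustrative.

```python
# Minimal sketch of identifier/locator separation: endpoints are addressed by
# a stable identifier (ID); a mapping system resolves the ID to its current
# topological locator, so mobility only updates the mapping. All names and
# values are illustrative.

mapping = {}   # identifier -> current locator (e.g. an attachment-point address)

def register(identifier: str, locator: str) -> None:
    mapping[identifier] = locator          # update on attachment or handover

def send(identifier: str, payload: str) -> None:
    locator = mapping[identifier]          # map-and-encap style resolution
    print(f"encapsulate to {locator}: [{identifier}] {payload}")

register("sensor-7f3a", "gw-10.0.3.1")     # initial attachment
send("sensor-7f3a", "temperature=21.4")

register("sensor-7f3a", "gw-10.0.9.2")     # device moves; ID stays the same
send("sensor-7f3a", "temperature=21.5")
```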

Lauer, H., Kuntze, N..  2016.  Hypervisor-Based Attestation of Virtual Environments. 2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld). :333–340.

Several years ago, virtualization technologies and hypervisors were rediscovered; today virtualization is used in a variety of applications. Network operators have discovered the cost-effectiveness, flexibility, and scalability of virtualizing network functions (NFV). However, in light of current events such as security breaches related to platform software manipulation, the use of Trusted Computing technologies has become not only more popular but increasingly viewed as mandatory for adequate system protection. While Trusted Computing hardware for physical platforms is currently available and widely used, analogous support for virtualized environments and virtualized platforms is rare and not suitable for larger-scale virtualization scenarios. Current remote and deep attestation protocols for virtual machines can support only a limited number of virtual machines before inefficient use of the TPM device becomes a crucial bottleneck. We propose a scalable remote attestation scheme, suitable for private cloud and NFV use cases, that supports large numbers of VM attestations through efficient use of the physical TPM device.
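
The sketch below illustrates why aggregation helps, under the assumption (not the paper's exact protocol) that the hypervisor hashes the per-VM measurement lists into a single digest, extends it into one PCR, and obtains a single quote covering all VMs; the TPM interaction is mocked.

```python
import hashlib

# Sketch of hypervisor-side aggregation for VM attestation (TPM calls mocked).
# Rather than one TPM quote per VM, the per-VM measurement lists are hashed
# into a single digest that is extended into one PCR and covered by a single
# quote; the verifier recomputes the digest from the reported lists.
# This is an illustration of the idea, not the paper's exact protocol.

vm_measurements = {
    "vm-01": ["kernel:ab12...", "initrd:9f3c...", "snort.conf:77aa..."],
    "vm-02": ["kernel:ab12...", "initrd:9f3c...", "app:beef..."],
}

def aggregate_digest(measurements: dict) -> bytes:
    h = hashlib.sha256()
    for vm in sorted(measurements):                 # deterministic order
        for entry in measurements[vm]:
            h.update(vm.encode() + b"|" + entry.encode())
    return h.digest()

def mock_tpm_extend_and_quote(digest: bytes, nonce: bytes) -> bytes:
    # Stand-in for a PCR extend followed by a TPM quote over that PCR.
    return hashlib.sha256(b"PCR-extend" + digest + nonce).digest()

nonce = b"verifier-nonce-123"
quote = mock_tpm_extend_and_quote(aggregate_digest(vm_measurements), nonce)

# Verifier side: recompute the aggregate from the reported lists and compare.
expected = hashlib.sha256(b"PCR-extend" + aggregate_digest(vm_measurements) + nonce).digest()
print("attestation verified:", quote == expected)
```
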
2017-03-13
Teke, R. J., Chaudhari, M. S., Prasad, R..  2016.  Impact of security enhancement over Autonomous Mobile Mesh Network (AMMNET). 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). :1–6.

Mobile Ad-hoc Networks (MANETs) suffer from network partitioning when there is group mobility and thus cannot efficiently provide connectivity to all nodes in the network. The Autonomous Mobile Mesh Network (AMMNET) is a new class of MANET that overcomes this weakness, especially network partitioning. However, AMMNET is vulnerable to routing attacks such as the blackhole attack, in which a malicious node can pose as an intragroup, intergroup, or intergroup bridge router and disrupt the network. In AMMNET, network survivability is an important aspect of maintaining connectivity and reliable communication. Maintaining security is a challenge given the self-organising nature of the topology. To address this weakness, the proposed approach measures the impact of a security enhancement on AMMNET based on a bait detection scheme. A modified bait approach prevents blackhole nodes from entering the network and helps maintain its reliability. The proposed scheme uses the Wumpus World concept from artificial intelligence. The modified bait scheme prevents the blackhole attack and secures the network.
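
A simplified simulation of the core bait idea, assuming a route request for a guaranteed non-existent destination should draw no legitimate reply, so any replying node is flagged; this is only the basic principle, not the paper's full modified bait scheme.

```python
import uuid

# Simplified simulation of bait-based blackhole detection: a route request
# (RREQ) is broadcast for a non-existent "bait" destination. Honest nodes
# cannot have a route to it, so any node that replies with an RREP is flagged
# as a suspected blackhole.

class Node:
    def __init__(self, name, malicious=False):
        self.name, self.malicious = name, malicious

    def handle_rreq(self, destination):
        # A blackhole falsely claims a fresh route to any destination.
        return self.malicious

def bait_probe(nodes):
    bait_destination = f"bait-{uuid.uuid4()}"      # guaranteed not to exist
    return [n.name for n in nodes if n.handle_rreq(bait_destination)]

network = [Node("A"), Node("B"), Node("C", malicious=True), Node("D")]
print("suspected blackhole nodes:", bait_probe(network))
```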

2017-03-08
Sadasivam, G. K., Hota, C..  2015.  Scalable Honeypot Architecture for Identifying Malicious Network Activities. 2015 International Conference on Emerging Information Technology and Engineering Solutions. :27–31.

Server honeypots are computer systems that hide in a network, capturing attack packets. As the name suggests, server honeypots are installed on server machines running a set of services. Enterprises and government organisations deploy these honeypots to learn the extent of attacks on their networks. Since most recent attacks are advanced persistent attacks, much research is going into building better peripheral security measures. In this paper, the authors have deployed several honeypots in a virtualized environment to gather traces of malicious activities. The network infrastructure is resilient and provides much information about hackers' activities. It is cost-effective and can be easily deployed in any organisation without specialized hardware.

Varma, P..  2015.  Building an Open Identity Platform for India. 2015 Asia-Pacific Software Engineering Conference (APSEC). :3–3.

Summary form only given. Aadhaar, India's Unique Identity Project, has become the largest biometric identity system in the world, already covering more than 920 million people. Building such a massive system required significant design thinking, alignment to the core strategy, and a technology platform scalable enough to meet the project's objectives. The entire technology architecture behind Aadhaar is based on the principles of openness, linear scalability, strong security, and, most importantly, vendor neutrality. All application components are built using open-source components and open standards. The Aadhaar system currently runs across two data centers within India managed by UIDAI, handles 1 million enrollments a day, and at peak performs about 900 trillion biometric matches a day. The current system holds about 8 PB (8000 terabytes) of raw data. The Aadhaar authentication service, which requires sub-second response times, is already live and can handle more than 100 million authentications a day. In this talk, the speaker, who has been the Chief Architect of Aadhaar since its inception, shares his experience of building the system.

Buda, A., Främling, K., Borgman, J., Madhikermi, M., Mirzaeifar, S., Kubler, S..  2015.  Data supply chain in Industrial Internet. 2015 IEEE World Conference on Factory Communication Systems (WFCS). :1–7.

The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent, and proactive support of workers' everyday tasks and lives. This paper first provides insight into a typical Industrial Internet application architecture, then highlights one fundamental contradiction that arises: "Who owns the data is often not capable of analyzing it". This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet's promises. To concretely implement such a system, recent standards published by The Open Group are presented, highlighting the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.

Jilcott, S..  2015.  Securing the supply chain for commodity IT devices by automated scenario generation. 2015 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.

Almost all commodity IT devices include firmware and software components from non-US suppliers, potentially introducing grave vulnerabilities to homeland security by enabling cyber-attacks via flaws injected into these devices through the supply chain. However, determining that a given device is free of any and all implementation flaws is computationally infeasible in the general case; hence a critical part of any vetting process is prioritizing what kinds of flaws are likely to enable potential adversary goals. We present Theseus, a four-year research project sponsored by the DARPA VET program. Theseus will provide technology to automatically map and explore the firmware/software (FW/SW) architecture of a commodity IT device and then generate attack scenarios for the device. From these device attack scenarios, Theseus then creates a prioritized checklist of FW/SW components to check for potential vulnerabilities. Theseus combines static program analysis, attack graph generation algorithms, and a Boolean satisfiability solver to automate the checklist generation workflow. We describe how Theseus exploits analogies between the commodity IT device problem and attack graph generation for networks. We also present a novel approach called Component Interaction Mapping to recover a formal model of a device's FW/SW architecture from which attack scenarios can be generated.
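
As a much smaller illustration of the checklist idea, the sketch below builds a toy component-interaction graph and ranks components by how many attack paths from an externally reachable entry point to an adversary goal traverse them; the component names are hypothetical, and Theseus itself additionally relies on static program analysis and a SAT solver.

```python
import networkx as nx
from collections import Counter

# Toy illustration of prioritizing a firmware/software checklist from a
# component-interaction graph: rank components by how many attack paths from
# an externally reachable entry point to an adversary goal traverse them.
# Component names are hypothetical.

G = nx.DiGraph()
G.add_edges_from([
    ("network_stack", "usb_driver_fw"),
    ("network_stack", "update_agent"),
    ("usb_driver_fw", "bootloader"),
    ("update_agent", "bootloader"),
    ("update_agent", "key_store"),
    ("bootloader", "key_store"),
])

entry, goal = "network_stack", "key_store"
hits = Counter()
for path in nx.all_simple_paths(G, entry, goal):
    for component in path[1:-1]:          # intermediate components only
        hits[component] += 1

print("checklist priority (most attack paths first):")
for component, count in hits.most_common():
    print(f"  {component}: {count} path(s)")
```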

Mahajan, S., Katti, J., Walunj, A., Mahalunkar, K..  2015.  Designing a database encryption technique for database security solution with cache. 2015 IEEE International Advance Computing Conference (IACC). :357–360.

A database is a vast collection of data that helps us collect, retrieve, organize, and manage data in an efficient and effective manner. Databases are critical assets. They store client details, financial information, personal files, company secrets, and other data necessary for business. Today, people depend more and more on corporate data for decision making, customer service management, supply chain management, and so on. Any loss, corruption, or unavailability of data may seriously affect a business's performance. Database security should provide protected access to the contents of a database and should preserve the integrity, availability, consistency, and quality of the data. This paper describes an architecture based on placing an elliptic curve cryptography (ECC) module inside the database management system (DBMS), just above the database cache. With this method, only selected parts of the database can be encrypted instead of the whole database. This architecture allows us to achieve very strong data security using ECC and to increase performance using the cache.
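
The paper does not spell out its exact construction, so the sketch below assumes an ECIES-style hybrid (ECDH key agreement, HKDF, AES-GCM) built from elliptic-curve primitives in the Python cryptography package and encrypts only the sensitive columns of a row, leaving the rest in plaintext for cache-friendly access; column choices, key handling, and encodings are illustrative.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch of selective column encryption with an ECIES-style hybrid scheme
# (ECDH + HKDF + AES-GCM) built from elliptic-curve primitives.

db_private = ec.generate_private_key(ec.SECP256R1())    # long-term DB key pair
db_public = db_private.public_key()

SENSITIVE = {"salary", "ssn"}                            # columns to protect

def _derive_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"db-column-encryption").derive(shared)

def encrypt_row(row: dict) -> dict:
    out = {}
    for column, value in row.items():
        if column not in SENSITIVE:
            out[column] = value                          # stays in plaintext
            continue
        ephemeral = ec.generate_private_key(ec.SECP256R1())
        key = _derive_key(ephemeral.exchange(ec.ECDH(), db_public))
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, str(value).encode(), None)
        out[column] = (ephemeral.public_key(), nonce, ciphertext)
    return out

def decrypt_field(blob):
    ephemeral_pub, nonce, ciphertext = blob
    key = _derive_key(db_private.exchange(ec.ECDH(), ephemeral_pub))
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

row = encrypt_row({"emp_id": 1001, "name": "A. Rao",
                   "salary": 90000, "ssn": "123-45-6789"})
print(row["name"], decrypt_field(row["salary"]))
```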

Jaiswal, A., Garg, B., Kaushal, V., Sharma, G. K..  2015.  SPAA-Aware 2D Gaussian Smoothing Filter Design Using Efficient Approximation Techniques. 2015 28th International Conference on VLSI Design. :333–338.

The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. The filters employed in these devices are mainly based on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture by exploiting "nearest pixel approximation" and rounding off Gaussian kernel coefficients. The proposed architecture significantly improves the Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, namely edge detection. The simulation results show 72%, 79%, and 76% reductions in area, power, and delay, respectively, with an acceptable 0.4 dB loss in PSNR compared to the well-known approximate 2D-GSF.
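
The hardware design itself cannot be reproduced here, but the effect of rounding kernel coefficients can be illustrated in a few lines of NumPy/SciPy: filter an image with an exact 2D Gaussian kernel and with one whose coefficients are rounded to small integers over a power-of-two denominator, then compare the outputs by PSNR; the rounding rule below is an assumption, not the paper's exact approximation.

```python
import numpy as np
from scipy.signal import convolve2d

# Software analogue of Gaussian-kernel coefficient rounding: approximate each
# coefficient as n/64 (shift-friendly in hardware) and measure the PSNR of the
# approximate filter output against the exact one.

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

exact = gaussian_kernel()
approx = np.round(exact * 64) / 64          # coefficients as n/64
approx /= approx.sum()                      # renormalise to preserve brightness

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(128, 128))

out_exact = convolve2d(image, exact, mode="same", boundary="symm")
out_approx = convolve2d(image, approx, mode="same", boundary="symm")

mse = np.mean((out_exact - out_approx) ** 2)
psnr = 10 * np.log10(255**2 / mse)
print(f"PSNR of approximate vs exact filtering: {psnr:.1f} dB")
```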

Finn, J., Nuzzo, P., Sangiovanni-Vincentelli, A..  2015.  A mixed discrete-continuous optimization scheme for Cyber-Physical System architecture exploration. 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). :216–223.

We propose a methodology for architecture exploration for Cyber-Physical Systems (CPS) based on an iterative, optimization-based approach, where a discrete architecture selection engine is placed in a loop with a continuous sizing engine. The discrete optimization routine proposes a candidate architecture to the sizing engine. The sizing routine optimizes over the continuous parameters using simulation to evaluate the physical models and to monitor the requirements. To decrease the number of simulations, we show how balance equations and conservation laws can be leveraged to prune the discrete space, thus achieving significant reduction in the overall runtime. We demonstrate the effectiveness of our methodology on an industrial case study, namely an aircraft environmental control system, showing more than one order of magnitude reduction in optimization time.
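
A toy version of the nested exploration loop described above: a discrete selection step (here plain enumeration of component choices) wrapped around a continuous sizing step using scipy.optimize.minimize; the component options, cost model, and cooling constraint are invented for the example, whereas the paper evaluates candidates by simulation and prunes the discrete space with conservation laws.

```python
from itertools import product
from scipy.optimize import minimize

# Toy nested architecture-exploration loop: discrete selection (enumeration of
# component choices) around continuous sizing (constrained minimization).
# Component options, cost model, and constraint are invented for illustration.

COMPRESSORS = {"small": 1.0, "large": 1.8}     # relative weight factors
HEAT_EXCH   = {"plate": 0.7, "tube": 1.2}

def size_architecture(comp, hx):
    # Continuous variables: compressor speed x[0], heat-exchanger area x[1].
    def cost(x):
        speed, area = x
        return COMPRESSORS[comp] * speed + HEAT_EXCH[hx] * area

    def cooling(x):                            # requirement: cooling >= 10
        speed, area = x
        return speed * area - 10.0

    res = minimize(cost, x0=[2.0, 2.0],
                   bounds=[(0.5, 5.0), (0.5, 5.0)],
                   constraints=[{"type": "ineq", "fun": cooling}])
    return res.fun, res.x, res.success

best = None
for comp, hx in product(COMPRESSORS, HEAT_EXCH):         # discrete step
    value, sizing, ok = size_architecture(comp, hx)      # continuous step
    if ok and (best is None or value < best[0]):
        best = (value, comp, hx, sizing)

print("best architecture:", best)
```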

2017-03-07
Raza, N..  2015.  Challenges to network forensics in cloud computing. 2015 Conference on Information Assurance and Cyber Security (CIACS). :22–29.

Digital forensics refers to the application of scientific techniques to the investigation of a crime, specifically to identify or validate the involvement of a suspect in an activity leading to that crime. Network forensics in particular deals with monitoring network traffic, with the aim of tracing suspected activity within normal traffic or identifying abnormal patterns that may give clues to an attack. Network forensics, a valuable part of the investigation process, presents certain challenges, including problems in accessing the network devices of a cloud architecture, handling large amounts of network traffic, and the rigorous processing required to analyse the huge volume of data, a large proportion of which may later prove irrelevant. Cloud computing technology offers services to its clients remotely from a shared pool of resources, per each client's customized requirements, any time, from anywhere. Cloud computing has attained tremendous popularity recently, leading to vast and rapid deployment; however, privacy and security concerns have increased in the same proportion, since data and applications are outsourced to a third party. Security concerns about cloud architecture have emerged as the prime barrier hindering industry's major shift towards the cloud model, despite the architecture's significant advantages. Cloud computing architecture presents aggravated and specific challenges for network forensics. In this paper, I review the challenges and issues faced in conducting network forensics, particularly in the cloud computing environment. The study covers the limitations that a network forensic expert may confront during an investigation in a cloud environment. I have categorized the challenges presented to network forensics in cloud computing into various groups; challenges in each group can be handled appropriately by forensic experts, cloud service providers, or forensic tools, whereas the leftover challenges are declared beyond their control.

Mir, I. E., Kim, D. S., Haqiq, A..  2015.  Security modeling and analysis of a self-cleansing intrusion tolerance technique. 2015 11th International Conference on Information Assurance and Security (IAS). :111–117.

Since security is increasingly the principal concern in the conception and implementation of software systems, it is very important that security mechanisms be designed to protect computer systems against cyber attacks. Intrusion tolerance systems play a crucial role in maintaining service continuity and enhancing security compared with traditional security mechanisms. In this paper, we propose combining preventive maintenance with an existing intrusion tolerance system to improve system security. We use a semi-Markov process to model the system behavior and quantitatively analyze system security using measures such as system availability, mean time to security failure (MTTSF), and cost. Numerical analysis is presented to show the feasibility of the proposed approach.
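
With hypothetical states, transition probabilities, and sojourn times, the steady-state availability of such a semi-Markov model can be computed from the embedded chain's stationary distribution weighted by the mean sojourn times, as in the sketch below; the numbers are not the paper's.

```python
import numpy as np

# Steady-state probabilities of a small semi-Markov model, computed from the
# embedded DTMC's stationary distribution weighted by mean sojourn times:
#   P_i = pi_i * h_i / sum_j pi_j * h_j.
# States, transition probabilities, and sojourn times below are hypothetical.

states = ["Healthy", "Vulnerable", "Compromised", "Cleansing"]
P = np.array([          # embedded transition probabilities (rows sum to 1)
    [0.0, 1.0, 0.0, 0.0],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])
h = np.array([100.0, 20.0, 2.0, 1.0])   # mean sojourn time per state (hours)

# Stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

time_fraction = pi * h / (pi @ h)
# Assume service is delivered in the Healthy and Vulnerable states.
availability = (time_fraction[states.index("Healthy")]
                + time_fraction[states.index("Vulnerable")])
print(dict(zip(states, np.round(time_fraction, 4))))
print(f"availability (service delivered): {availability:.4f}")
```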

2017-02-27
Abd, S. K., Salih, R. T., Al-Haddad, S. A. R., Hashim, F., Abdullah, A. B. H., Yussof, S..  2015.  Cloud computing security risks with authorization access for secure Multi-Tenancy based on AAAS protocol. TENCON 2015 - 2015 IEEE Region 10 Conference. :1–5.

Many cloud security complexities arise as a result of the cloud's open system architecture. One of these complexities is the multi-tenancy security issue. This paper discusses and addresses the most common public cloud security complexities, focusing on the multi-tenancy security issue. Multi-tenancy is one of the most important security challenges faced by public cloud service providers. Therefore, this paper presents a secure multi-tenancy architecture using an authorization model based on the AAAS protocol. By utilizing cloud infrastructure, our suggested authorization system can provide access control to various cloud information and services. Each business can offer several cloud services, and these cloud services can cooperate with other services that may belong to the same organization or a different one. Moreover, these cooperation agreements are supported by our suggested system.

Lokesh, M. R., Kumaraswamy, Y. S..  2015.  Healing process towards resiliency in cyber-physical system: A modified danger theory based artifical immune recogization2 algorithm approach. 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :226–232.

The healing process plays a major role in developing resiliency in cyber-physical systems, where the environment is diverse in nature. The cyber-physical system is modelled with a multi-agent paradigm and a biologically inspired, Danger Theory-based Artificial Immune Recognition 2 algorithm methodology for developing the healing process. The proposed methodology is implemented in a simulation environment, and the percentage convergence rates achieved, reflecting the accuracy of the healing process toward resiliency in the cyber-physical system environment, are shown.

2017-02-23
A. Akinbi, E. Pereira.  2015.  "Mapping Security Requirements to Identify Critical Security Areas of Focus in PaaS Cloud Models". 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. :789-794.

Information Technology experts cite security and privacy concerns as the major challenges in the adoption of cloud computing. On Platform-as-a-Service (PaaS) clouds, customers face the challenges of selecting service providers and evaluating security implementations based on their security needs and requirements. This study aims to give cloud customers the ability to quantify their security requirements in order to identify critical areas in PaaS cloud architectures where the security provisions offered by CSPs can be assessed. Using an adaptive security mapping matrix, the study takes a quantitative approach and presents numeric findings that show the critical areas within the PaaS environment where security can be evaluated and security controls assessed to meet these security requirements. The matrix can be adapted across different types of PaaS cloud models based on individual security requirements and service-level objectives identified by PaaS cloud customers.

2017-02-21
A. Dutta, R. K. Mangang.  2015.  "Analog to information converter based on random demodulation". 2015 International Conference on Electronic Design, Computer Networks Automated Verification (EDCAV). :105-109.

With the increase in signal bandwidth, conventional analog-to-digital converters (ADCs), operating on the basis of the Shannon/Nyquist theorem, are forced to work at very high rates, leading to low dynamic range and high power consumption. This paper describes an analog-to-information converter developed based on compressive sensing techniques. The high sampling rate, which is the main drawback of ADCs, is successfully reduced to 4 times lower than the conventional rate. The system also has the advantage of low power dissipation.
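
A software model of the random-demodulation measurement chain, assuming a frequency-sparse input, a ±1 chipping sequence, and an integrate-and-dump stage running at one quarter of the Nyquist rate, followed by a small orthogonal matching pursuit recovery; the parameters and sparsifying basis are illustrative rather than those of the hardware in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 3           # Nyquist-rate samples, measurements, sparsity

# Frequency-sparse test signal: K random tones.
alpha = np.zeros(N, dtype=complex)
alpha[rng.choice(N, K, replace=False)] = (rng.standard_normal(K)
                                          + 1j * rng.standard_normal(K))
Psi = np.fft.ifft(np.eye(N), axis=0)   # inverse-DFT sparsifying basis
x = Psi @ alpha                        # time-domain signal at the Nyquist rate

# Random demodulator: chip with a +/-1 sequence, then integrate-and-dump
# (N // M = 4 Nyquist samples per low-rate output).
chips = rng.choice([-1.0, 1.0], size=N)
D = np.diag(chips)
H = np.kron(np.eye(M), np.ones(N // M))    # accumulator / low-rate sampler
Phi = H @ D
y = Phi @ x                                # sub-Nyquist measurements (M << N)

# Orthogonal Matching Pursuit to recover the sparse spectrum.
A = Phi @ Psi
residual, support = y.copy(), []
for _ in range(K):
    idx = int(np.argmax(np.abs(A.conj().T @ residual)))
    support.append(idx)
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

alpha_hat = np.zeros(N, dtype=complex)
alpha_hat[support] = sol
print("recovered tone indices:", sorted(support))
print("relative reconstruction error:",
      np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha))
```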

Shuhao Liu, Baochun Li.  2015.  "On scaling software-Defined Networking in wide-area networks". Tsinghua Science and Technology. 20:221-232.

Software-Defined Networking (SDN) has emerged as a promising direction for next-generation network design. Due to its clean-slate and highly flexible design, it is believed to be the foundational principle for designing network architectures and improving their flexibility, resilience, reliability, and security. As the technology matures, research in both industry and academia has produced a considerable number of tools to scale software-defined networks, in preparation for wide deployment in wide-area networks. In this paper, we survey the mechanisms that can be used to address the scalability issues in software-defined wide-area networks. Starting from a successful distributed system, the Domain Name System, we discuss the essential elements that make a large-scale network infrastructure scalable. Then, the existing technologies proposed in the literature are reviewed in three categories: scaling out the data plane, scaling up the data plane, and scaling the control plane. We conclude with possible research directions towards scaling software-defined wide-area networks.

Z. Jiang, W. Quan, J. Guan, H. Zhang.  2015.  "A SINET-based communication architecture for Smart Grid". 2015 International Telecommunication Networks and Applications Conference (ITNAC). :298-301.

Communication architecture is a crucial component of the smart grid. Most previous research has focused on the traditional Internet and proposed numerous evolutionary designs. However, the traditional network architecture has been reported to have multiple inherent shortcomings, which bring unprecedented challenges for the Smart Grid. Moreover, the smart network architecture for the future Smart Grid is still unexplored. In this context, this paper proposes a clean-slate communication approach, named SI4SG, to boost the development of the smart grid from the perspective of the Smart Identifier Network (SINET). It also designs the service resolution mechanism and an ns-3-based simulation tool for the proposed communication architecture.

J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we have a new architecture that can motivate Internet service providers (ISPs) to deploy and evolve, we need to address two issues: 1) know the current status better by appropriately evaluating the existing Internet; and 2) find how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. Particularly, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies of the future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and to some extent to clean-slate architectures.

Wensheng Chen, Hui Li, Jun Lu, Chaoqi Yu, Fuxing Chen.  2015.  "Routing in the Centralized Identifier Network". 2015 10th International Conference on Communications and Networking in China (ChinaCom). :73-78.

We propose a clean-slate network architecture called the Centralized Identifier Network (CIN), which jointly considers the ideas of control plane/forwarding plane separation and identifier/locator separation. In this architecture, a controller cluster gathers routers' link states and performs route calculation and distribution. Meanwhile, a tailor-made router without a route calculation function forwards packets and communicates with its controller. Furthermore, each router or host owns a globally unique ID, and a host must be registered to a router, whose ID then serves as the host's locator. Control plane/forwarding plane separation enables CIN to easily re-split network functions into finer, optional building blocks for sufficient flexibility and adaptability. Identifier/locator separation helps CIN deal with serious scaling problems and support host mobility. This article mainly presents the routing mechanism of CIN; numerical results are presented to demonstrate the performance of the proposed mechanism.
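
A minimal sketch of the control-plane side of such a design: the controller builds the topology from reported link states, resolves a destination host ID to its registered attachment router (its locator), computes a shortest path, and hands out per-router next-hop entries; router and host IDs, link weights, and data structures are illustrative.

```python
import networkx as nx

# Minimal sketch of CIN-style centralized routing: the controller builds the
# topology from reported link states, resolves a destination host ID to its
# registered attachment router, computes a shortest path, and hands out
# next-hop forwarding entries to the routers on that path.

topology = nx.Graph()
topology.add_weighted_edges_from([
    ("R1", "R2", 1), ("R2", "R3", 1), ("R1", "R4", 5), ("R4", "R3", 1),
])

host_registry = {"H-alice": "R1", "H-bob": "R3"}   # host ID -> attachment router

def compute_forwarding_entries(src_host, dst_host):
    src, dst = host_registry[src_host], host_registry[dst_host]
    path = nx.shortest_path(topology, src, dst, weight="weight")
    # Per-router rule: destination host ID -> next hop, pushed by the controller.
    return {router: {dst_host: nxt} for router, nxt in zip(path, path[1:])}

for router, rules in compute_forwarding_entries("H-alice", "H-bob").items():
    print(router, "install:", rules)
```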