Bibliography
Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the MobilityFirst project, we have been exploring clean-slate enhancements to the network protocols that can inherently provide support for at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services that enhance the mobile end-user experience, either by offloading work and/or traffic from mobile devices, or by enabling en-route service adaptation through context awareness (e.g., knowing the current access bandwidth). In this work we present details of the architectural support for in-network services within MobilityFirst, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols, along with the compute-layer extensions, on the GENI testbed, where we used a set of programmable nodes across seven distributed sites to configure a MobilityFirst network with hosts, routers, and in-network compute services.
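To make the pluggable-service idea concrete, here is a minimal Python sketch of an en-route rate-adaptation service that picks a video bitrate from the receiver's measured access bandwidth. The class and function names, the bitrate ladder, and the 0.8 headroom factor are illustrative assumptions, not the MobilityFirst service API.

    # Minimal sketch of a pluggable in-network rate-adaptation service of the
    # kind the abstract describes. All names (RateAdaptationService,
    # select_bitrate) are illustrative, not the MobilityFirst API.

    BITRATE_LADDER_KBPS = [250, 500, 1000, 2500, 5000]  # typical ABR rungs

    def select_bitrate(measured_bw_kbps: float, headroom: float = 0.8) -> int:
        """Pick the highest rung that fits inside the measured bandwidth,
        keeping some headroom for congestion and measurement noise."""
        budget = measured_bw_kbps * headroom
        feasible = [r for r in BITRATE_LADDER_KBPS if r <= budget]
        return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

    class RateAdaptationService:
        """En-route service: select video chunk quality per receiver link."""
        def handle_chunk(self, chunk_id: str, receiver_bw_kbps: float) -> dict:
            rate = select_bitrate(receiver_bw_kbps)
            # A real service would fetch or transcode the chunk at `rate`;
            # here we just report the decision.
            return {"chunk": chunk_id, "bitrate_kbps": rate}

    if __name__ == "__main__":
        svc = RateAdaptationService()
        print(svc.handle_chunk("seg-42", receiver_bw_kbps=1800.0))  # -> 1000 kbps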
The deployment and operation of global network architectures can exhibit complex, dynamic behavior, and the comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible at Internet scale, but we are still interested in the phenomena that emerge when the systems are run in their intended environment. We argue for a high-level simulation methodology and introduce a simulation environment based on aggregate models built on the state-of-the-art datasets available, while respecting invariants observed in measurements. The models developed are aimed at studying a clean-slate name-based interdomain routing architecture, and provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail across the different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences with the high-level simulation approach and the potential pitfalls related to it.
Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must take center stage. Despite this emergence of information-centric solutions, their performance-management needs have not been adequately addressed, yet they are absolutely essential for network operations and crucial for information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic of replacement policies in caches, and to control decisions about where to cache, for instance. Therefore, there is an urgent need to manage information-centric resources, and in fact to constitute their missing management and control plane, which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to these open problems, focusing on the key aspects of route and cache management.
The emergence of new technologies, together with the popularization of mobile devices and wireless communication systems, creates a variety of requirements that the current Internet cannot adequately meet. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a clean-slate Future Internet (FI) approach, was designed to efficiently cope with the increasing demand for beyond-IP networking services. Nevertheless, despite all its capabilities, ETArch was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions that deploy mobility prediction, Point of Attachment (PoA) decision, and handover setup, meeting both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that the additions outperform the most relevant alternative solutions in terms of performance and quality of service.
Cloud computing refers to a relationship among a large number of computers connected through a communication channel such as the internet. Through cloud computing we send, receive, and store data on the internet. Cloud computing also offers an opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability, and security represent the big risks in cloud computing. In this paper we highlight the issues of security, availability, and scalability, and identify how to make a cloud-computing-based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing, and discuss some of the characteristics involved in achieving high performance.
During recent years, establishing proper metrics for measuring system security has received increasing attention. Security logs contain vast amounts of information which are essential for creating many security metrics. Unfortunately, security logs are known to be very large, making their analysis a difficult task. Furthermore, recent security metrics research has focused on generic concepts, and the issue of collecting security metrics with log analysis methods has not been well studied. In this paper, we first focus on using log analysis techniques for collecting technical security metrics from security logs of common types (e.g., network IDS alarm logs, workstation logs, and NetFlow data sets). We then describe a production framework for collecting and reporting technical security metrics which is based on novel open-source technologies for big data.
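As a concrete illustration of deriving a technical security metric from a log, the following Python sketch computes alarms per hour per signature from a small IDS alarm log. The log format here is invented for the example; it is not the format used by the framework in the paper.

    # Hedged sketch: deriving one simple technical security metric
    # (IDS alarms per hour per signature) from a plain-text alarm log.
    # The log format below is invented for illustration.

    from collections import Counter
    from datetime import datetime

    SAMPLE_LOG = """\
    2014-03-01T10:02:11 ET SCAN Nmap Scripting Engine
    2014-03-01T10:02:15 ET SCAN Nmap Scripting Engine
    2014-03-01T11:40:02 ET POLICY Suspicious User-Agent
    """

    def alarms_per_hour(log_text: str) -> Counter:
        counts = Counter()
        for line in log_text.splitlines():
            ts_str, signature = line.strip().split(" ", 1)
            hour = datetime.fromisoformat(ts_str).strftime("%Y-%m-%d %H:00")
            counts[(hour, signature)] += 1
        return counts

    for (hour, sig), n in sorted(alarms_per_hour(SAMPLE_LOG).items()):
        print(f"{hour}  {n:3d}  {sig}")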
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architecture, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a decision-making process that determines the parameter tuning direction by trial-and-error rather than by the quantitative values needed for agile parameter tuning, and it relies on a predefined adjustment value for each tuning action. However, it is nontrivial or even infeasible to find an optimal such value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and the self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads, and it is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach both in improving effective system throughput and in minimizing average response time.
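The sketch below illustrates only the fuzzy-control half of the idea: rules over the normalized throughput error produce a quantitative parameter adjustment rather than a fixed step. It omits the neural (online-learning) component of the paper's controller, and all membership functions, rule outputs, and the MaxClients example are invented for illustration.

    # Minimal sketch of a fuzzy-control-style tuner for one server parameter
    # (e.g., Apache MaxClients). This is not the paper's neural fuzzy
    # controller; it only shows how fuzzy rules yield quantitative,
    # model-free adjustments instead of a fixed step size.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_step(error):
        """error = target_throughput - measured_throughput (normalized).
        Rules: big shortfall -> big increase; near zero -> hold; etc."""
        rules = [
            (tri(error, -2.0, -1.0, 0.0), -20),   # overshoot: shrink parameter
            (tri(error, -1.0,  0.0, 1.0),   0),   # on target: hold
            (tri(error,  0.0,  1.0, 2.0), +20),   # shortfall: grow parameter
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules) or 1.0
        return num / den                           # centroid defuzzification

    max_clients = 150
    for err in [1.6, 0.9, 0.1, -0.4]:              # simulated normalized errors
        max_clients += fuzzy_step(err)
        print(f"error={err:+.1f} -> MaxClients={max_clients:.0f}")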
The Internet of Things is a vision that broadens the scope of the internet by enabling physical objects to identify themselves to participating entities. This innovative concept enables a physical device to represent itself in the digital world. There is much speculation and forecasting about Internet of Things devices; however, most of them are vendor-specific and lack a unified standard, which hinders their seamless integration and interoperable operation. Another major concern is the lack of security features in these devices and their corresponding products. Most of them are resource-starved and unable to support computationally complex, resource-consuming secure algorithms. In this paper, we propose a lightweight mutual authentication scheme which validates the identities of the participating devices before engaging them in communication for resource observation. Our scheme incurs low connection overhead and provides a robust defence against various types of attacks.
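The paper's scheme is not reproduced here, but the following self-contained Python sketch shows the general lightweight pattern such schemes build on: HMAC-based challenge-response in both directions over a pre-shared key, which avoids expensive public-key operations on constrained devices.

    # Illustrative sketch of lightweight mutual authentication via a
    # pre-shared key and HMAC challenge-response. This shows the general
    # pattern only; it is not the specific scheme proposed in the paper.

    import hmac, hashlib, os

    PSK = os.urandom(16)  # provisioned out of band on device and server

    def respond(key: bytes, challenge: bytes) -> bytes:
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # Server authenticates device:
    server_nonce = os.urandom(8)
    device_proof = respond(PSK, server_nonce)           # computed on device
    assert hmac.compare_digest(device_proof, respond(PSK, server_nonce))

    # Device authenticates server (symmetric direction):
    device_nonce = os.urandom(8)
    server_proof = respond(PSK, device_nonce)           # computed on server
    assert hmac.compare_digest(server_proof, respond(PSK, device_nonce))
    print("mutual authentication succeeded")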
Web Services can be invoked from anywhere through the internet without knowledge of their implementation details. In some cases a single service cannot accomplish the user's needs, and one or more services must be composed which together satisfy them. Security is therefore the most important concern not only at the single-service level but also at the composition level. Several attacks are possible on SOAP messages communicated among Web Services because of their standardized interfaces; examples include oversize payload, SOAPAction spoofing, XML injection, and WS-Addressing spoofing. Most existing work provides solutions to ensure basic security features of Web Services such as confidentiality, integrity, authentication, authorization, and non-repudiation. Very little existing work provides solutions such as schema validation and schema hardening for attacks on Web Services, and these solutions do not provide attack-specific defences for SOAP messages communicated between Web Services. Hence, we propose solutions for two of the prevailing Web Service attacks. Since new types of Web Service attacks evolve over time, the proposed security solutions are implemented as APIs that are pluggable into any server where the Web Service is deployed.
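As an illustration of the kind of pluggable, attack-specific checks the paper argues for, the Python sketch below implements an oversize-payload guard and a SOAPAction-spoofing check. The size limit, namespace handling, and helper names are assumptions; the actual APIs proposed in the paper may differ.

    # Hedged sketch of two pluggable server-side checks: an oversize-payload
    # guard and a SOAPAction spoofing check (the HTTP SOAPAction header must
    # match the operation named in the SOAP body). Limits are illustrative.

    import xml.etree.ElementTree as ET

    MAX_SOAP_BYTES = 64 * 1024
    ENV = "{http://schemas.xmlsoap.org/soap/envelope/}"

    def oversize_payload_ok(raw: bytes) -> bool:
        return len(raw) <= MAX_SOAP_BYTES

    def soapaction_ok(raw: bytes, soapaction_header: str) -> bool:
        body = ET.fromstring(raw).find(f"{ENV}Body")
        if body is None or len(body) == 0:
            return False
        operation = body[0].tag.split("}")[-1]     # local name of first child
        return operation == soapaction_header.strip('"')

    request = (b'<e:Envelope xmlns:e="http://schemas.xmlsoap.org/soap/envelope/">'
               b'<e:Body><getQuote/></e:Body></e:Envelope>')
    print(oversize_payload_ok(request), soapaction_ok(request, '"getQuote"'))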
Eduroam is a secure WLAN roaming service between academic and research institutions around the globe. It allows users from participating institutions secure Internet access at any other participating visited institution using their home credentials. The authentication credentials are verified by the home institution, while authorization is done by the visited institution. The user receives an IP address in the range of the visited institution, and accesses the Internet through the firewall and proxy servers of the visited institution. However, access granted to services that authorize via an IP address of the visited institution may include access to services that are not allowed at the home institution, due to legal agreements. This paper looks at typical legal agreements with service providers and explores the risks and countermeasures that need to be considered when using eduroam.
Web Services (WS) play an important role in today's world in providing effective services for humans, and these web services are built with the standards of SOAP, WSDL, and UDDI. This technology enables various service providers to register and render their services for use over the internet through pre-established networks. Access to these services must also be secured and protected from various types of attacks in the network environment. Exchanging data between two applications over a secure channel is a challenging issue in today's communication world. Traditional security mechanisms such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Internet Protocol Security (IPsec) resolve this problem only partially; hence this paper proposes a privacy-preserving protocol named HTTPI to secure the communication more efficiently. The HTTPI protocol satisfies QoS requirements such as authentication, authorization, integrity, and confidentiality at various levels of the OSI layers. This work also ensures QoS covering non-functional characteristics such as performance (throughput), response time, security, reliability, and capacity. The proposed intelligent-agent-based privacy-preserving model results in excellent throughput, good response time, and improved satisfaction of the QoS requirements.
The Internet of Things (IoT) is extending the Internet into our physical world and making it present everywhere. This evolution also raises challenges in issues such as privacy and security. For that reason, this work focuses on the integration and lightweight adaptation of existing authentication protocols that can also offer authorization and access-control functionality. In particular, this work focuses on the Extensible Authentication Protocol (EAP), a widely used protocol for access control in local area networks, both wireless (802.11) and wired (802.3). This work presents an integration of EAP frames into IEEE 802.15.4 frames, demonstrating that the EAP protocol and some of its mechanisms can feasibly be applied to constrained devices, such as those populating IoT networks.
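One practical core of such an integration is fitting EAP messages, which may be large, into the small IEEE 802.15.4 payload (127-byte PHY frames leave roughly 100 usable bytes after headers). The Python sketch below shows a simple fragmentation/reassembly scheme; the 2-byte fragment header is invented for illustration and is not the encoding used in the paper.

    # Sketch of the core encapsulation problem: EAP messages can exceed the
    # IEEE 802.15.4 payload, so they must be fragmented and reassembled.

    MAX_PAYLOAD = 100  # assumed usable bytes per 802.15.4 frame

    def fragment_eap(eap_msg: bytes):
        body = MAX_PAYLOAD - 2                     # room left after our header
        chunks = [eap_msg[i:i + body] for i in range(0, len(eap_msg), body)]
        frames = []
        for seq, chunk in enumerate(chunks):
            more = 1 if seq < len(chunks) - 1 else 0
            frames.append(bytes([seq, more]) + chunk)  # [seq, more-flag] + data
        return frames

    def reassemble(frames):
        ordered = sorted(frames, key=lambda f: f[0])
        return b"".join(f[2:] for f in ordered)

    eap_request = b"\x01" * 250                    # dummy oversized EAP message
    frames = fragment_eap(eap_request)
    assert reassemble(frames) == eap_request
    print(f"{len(eap_request)} EAP bytes -> {len(frames)} 802.15.4 frames")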
This paper designs a secure transmission and authorization management system based on the principles of Public Key Infrastructure (PKI) and Role-Based Access Control (RBAC). It can solve the problems of identity authentication, secure transmission, and access control on the internet. First, a certificate authority system is implemented according to PKI principles; it can issue and revoke server-side and client-side digital certificates. Secure data transmission is achieved through the combination of digital certificates and the SSL protocol. In addition, this paper analyses the access-control mechanism and the RBAC model, and improves the structure of the RBAC model: the principle of group authority is added, and a combination of centralized and distributed authority management is adopted, making the model more flexible.
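A minimal Python sketch of the group-authority principle described above: users map to groups, groups to roles, and roles to permissions, so authority can be administered per group while assignments remain distributed. The concrete tables and names are illustrative, not the paper's model.

    # Minimal sketch of role-based access control extended with group
    # authority. Structure and example data are illustrative.

    ROLE_PERMS = {"auditor": {"read_log"}, "admin": {"read_log", "revoke_cert"}}
    GROUP_ROLES = {"security-team": {"auditor"}, "ca-operators": {"admin"}}
    USER_GROUPS = {"alice": {"security-team"}, "bob": {"ca-operators"}}

    def permissions(user: str) -> set:
        """User -> groups -> roles -> permissions."""
        perms = set()
        for group in USER_GROUPS.get(user, ()):
            for role in GROUP_ROLES.get(group, ()):
                perms |= ROLE_PERMS.get(role, set())
        return perms

    def check(user: str, perm: str) -> bool:
        return perm in permissions(user)

    print(check("alice", "read_log"))     # True
    print(check("alice", "revoke_cert"))  # False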
Cloud computing is an application and set of services delivered through the internet. Although it is an emerging technology for shared infrastructure, it lacks adequate access-rights and security mechanisms. Given these gaps, our system focuses on security provided through a token management system. Cloud computing is based on virtual shared servers on the internet that provide infrastructure, software, platform, and security as services, among which security plays an important role. Hence, security is provided through three types of services: mutual authentication, directory services, and token granting for resources. Existing token-issuing mechanisms do not scale to large data sets and increase memory overhead between the client and the server. Our proposed work therefore focuses on issuing tokens to users in a way that addresses the problems of scalability and memory overhead. The proposed token management framework monitors all operations of the cloud and thereby manages the entire cloud infrastructure. Our model falls under the new category of cloud model known as "Security as a Service". This paper provides the security framework as an architectural model that verifies user authorization and the correctness of data stored in the cloud, thereby giving the data owner guarantees for their stored resources. The framework also describes secure storage of tokens, and facilitates the search and use of tokens for auditing purposes and supervision of users.
Cloud computing is a new paradigm and emerging technology for hosting and delivering resources over a network such as the internet, using the concepts of virtualization, processing power, and storage. However, many challenging issues remain unresolved in cloud-based environments and decrease reliability and efficiency for service providers and users. User authentication is one of the most challenging of these issues, and to address it this paper proposes an efficient user-authentication model comprising defined phases for both registration and access. Geo Detection and Digital Signature Authorization (GD2SA) is a user-authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user, determined via devices the user already owns (e.g., a smartphone). In addition, the authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. The model is evaluated in this paper according to three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model shows that it can increase efficiency and reliability in cloud computing as an emerging technology.
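The following Python sketch illustrates the two GD2SA checks as described: a geographic proximity test between the unregistered device and a device the owner already holds, plus verification of the owner's digital signature. The distance threshold and the use of Ed25519 (via the third-party `cryptography` package) are assumptions made for the example.

    # Hedged sketch of the GD2SA idea: grant provisional access only when
    # (a) the new device is geographically near a device the owner already
    # holds, and (b) the owner's digital signature verifies.

    import math
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    def km_between(p, q):
        """Great-circle (haversine) distance between (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    def authorize(new_dev_loc, trusted_dev_loc, pub_key, challenge, signature,
                  max_km=5.0):
        if km_between(new_dev_loc, trusted_dev_loc) > max_km:
            return False                           # geo check failed
        try:
            pub_key.verify(signature, challenge)   # owner's signature check
        except InvalidSignature:
            return False
        return True

    owner_key = Ed25519PrivateKey.generate()
    challenge = b"login-attempt-7f3a"
    sig = owner_key.sign(challenge)
    print(authorize((52.52, 13.40), (52.53, 13.41), owner_key.public_key(),
                    challenge, sig))               # True: nearby and signed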
Cloud computing offers various services and web-based applications over the internet. With the tremendous growth in the development of cloud-based services, security is the main challenge and today's chief concern for cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for the authentication, authorization, and accounting (AAA) demanded by cloud service providers, and focuses on the integration of Diameter AAA into cloud system architecture.
Cyber scanning refers to the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primary methodology adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services on one side and the proliferation of hackers' refined, advanced, and sophisticated techniques on the other, the task of containing cyber scanning poses serious issues and challenges. Furthermore, there has recently been a flourishing of a cyber phenomenon dubbed cyber scanning campaigns: scanning techniques that are highly distributed, possess composite stealth capabilities, and exhibit high coordination, rendering almost all current detection techniques infeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies, and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.
Manual monitoring can no longer match the rapid growth of cybercrime; there is a great need for a new spatial-analysis technology that allows emergency events to be rapidly and accurately located in the real environment, and for correlative analysis models to support cybercrime prevention strategies. On the other hand, geographic information systems (GIS) have changed substantially in data structure, coordinate systems, and analysis models due to the "uncertainty and hyper-dimension" characteristics of network objects and behavior. In this paper, the spatial rules of typical cybercrime are explored on the basis of GIS with Internet searching and IP tracking technology: (1) set up a spatial database through IP searching based on criminal evidence; (2) extend GIS data structures and spatial models, adding a network dimension and virtual attribution to realize a dynamic connection between cyberspace and real space; (3) design a cybercrime monitoring and prevention system to discover the logic of cyberspace based on spatial analysis.
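A toy Python sketch of step (1), building a spatial database of incident IPs: the IP-to-coordinate table is mocked, where a real system would query a geolocation service and a full GIS backend rather than SQLite.

    # Illustrative sketch only: mocked geolocation results for evidence IPs,
    # stored so that simple spatial (bounding-box) queries are possible.

    import sqlite3

    GEO_LOOKUP = {                          # mocked IP -> (lat, lon) results
        "198.51.100.23": (31.23, 121.47),
        "203.0.113.7":   (39.90, 116.40),
    }

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE incidents (ip TEXT, lat REAL, lon REAL)")
    for ip, (lat, lon) in GEO_LOOKUP.items():
        db.execute("INSERT INTO incidents VALUES (?, ?, ?)", (ip, lat, lon))

    # Simple spatial query: incidents inside a bounding box around Shanghai.
    rows = db.execute("SELECT ip FROM incidents WHERE lat BETWEEN 30 AND 32 "
                      "AND lon BETWEEN 120 AND 123").fetchall()
    print(rows)                             # -> [('198.51.100.23',)]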
The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability will only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unpredicted consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and to proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems, including abuse attribution, performance degradation, address and port usage efficiency, direct intercustomer communication, and availability.
Internet security issues require authorship identification for all kinds of internet content; however, authorship identification for microblog users is much harder than for other documents because microblog texts are very short. Moreover, when the number of candidates becomes large (i.e., big data), identification takes a long time. Our proposed method solves these problems. The experimental results show that our method successfully identifies authorship with 53.2% precision out of 10,000 microblog users in almost half the execution time of the previous method.
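For context, the Python sketch below shows the standard baseline such work starts from: character n-gram profiles compared by cosine similarity. The paper's actual method and its speed improvements are not reproduced, and the toy texts are invented.

    # Illustration of short-text authorship attribution with character
    # n-gram profiles and cosine similarity (baseline idea only).

    from collections import Counter
    import math

    def ngrams(text: str, n: int = 3) -> Counter:
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    profiles = {                      # built from each candidate's past posts
        "user_a": ngrams("cant wait 4 the game 2nite!!! gonna be epic"),
        "user_b": ngrams("The committee will convene on Tuesday morning."),
    }
    unknown = ngrams("omg 2nite is gonna be epic, cant wait!!!")
    best = max(profiles, key=lambda u: cosine(profiles[u], unknown))
    print(best)                       # -> user_a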
Conducting active cyberdefense requires the acceptance of a proactive framework that acknowledges the lack of predictable symmetries between malicious actors and their capabilities and intent. Unlike physical weapons such as firearms, naval vessels, and piloted aircraft, all of which risk physical exposure when engaged in direct combat, cyberweapons can be deployed (often without their victims' awareness) under the protection of the anonymity inherent in cyberspace. Furthermore, it is difficult in the cyber domain to determine with accuracy what a malicious actor may target and what type of cyberweapon the actor may wield. These aspects imply an advantage for malicious actors in cyberspace greater than in any other domain, as the malicious cyberactor, under current international constructs and norms, can choose the time, place, and weapon of engagement. That said, if defenders are to successfully repel attempted intrusions, they must conduct an active cyberdefense within a framework that proactively engages threatening actions independent of any requirement to achieve attribution. This paper proposes that private business, government personnel, and cyberdefenders must develop a threat identification framework that does not depend on attribution of the malicious actor, i.e., an attribution-agnostic cyberdefense construct. Furthermore, upon developing this framework, network defenders must deploy internally based cyberthreat countermeasures that take advantage of defensive network environmental variables and alter the calculus of nefarious individuals in cyberspace. Only by accomplishing these two objectives can the defenders of cyberspace actively combat malicious agents within the virtual realm.
Multi-touch attribution, which distributes credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models lies in the fact that the rules are not derived from the data but are based only on simple intuition. With the ever-enhanced capability to track advertisements and users' interactions with them, data-driven multi-touch attribution models, which attempt to infer the contribution from user interaction data, have become an important research direction. We propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it is able to remove the presentation biases inherent in most other attribution models. In addition to modeling the attribution, the proposed model is also able to predict a user's conversion probability. We validate the proposed method on a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is quite promising in both conversion prediction and attribution.
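The following Python sketch conveys the flavor of hazard-based attribution under a deliberately simplified assumption: each touchpoint contributes a constant additive hazard, conversion probability follows 1 - exp(-total hazard), and credit is split in proportion to each touch's hazard. The per-channel rates are made up; the paper's model fits them from data.

    # Toy additive-hazard attribution; not the paper's fitted survival model.

    import math

    HAZARD = {"search": 0.30, "display": 0.05, "social": 0.10}  # fitted in practice

    def conversion_prob(touches):
        total = sum(HAZARD[t] for t in touches)
        return 1 - math.exp(-total)

    def attribute(touches):
        total = sum(HAZARD[t] for t in touches)
        return {t: HAZARD[t] / total for t in set(touches)} if total else {}

    path = ["display", "social", "search"]
    print(f"P(conversion) = {conversion_prob(path):.3f}")
    for channel, credit in attribute(path).items():
        print(f"{channel}: {credit:.1%} of credit")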
Mobile Voice over Internet Protocol (mVoIP) applications have gained increasing popularity in the last few years, with millions of users communicating using such applications (e.g. Skype). Like other forms of Internet and telecommunications traffic, mVoIP communications are vulnerable to both lawful and unauthorized interception. Encryption is a common way of ensuring the privacy of mVoIP users. To the best of our knowledge, there has been no academic study to determine whether mVoIP applications provide encrypted communications. In this paper, we examine Skype and nine other popular mVoIP applications for Android mobile devices, and analyze the intercepted communications to determine whether the captured voice and text communications are encrypted (or not). The results indicate that most of the applications encrypt text communications; however, voice communications may not be encrypted in six of the ten applications examined.
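One generic heuristic behind analyses of this kind (not necessarily the authors' procedure) is that encrypted payloads look nearly random, so their byte-level Shannon entropy approaches 8 bits per byte. The Python sketch below applies that test; the 7.5 threshold is an assumption.

    # Generic check: high Shannon entropy over payload bytes suggests
    # encryption (or compression). Threshold and samples are illustrative.

    import math, os
    from collections import Counter

    def byte_entropy(payload: bytes) -> float:
        """Shannon entropy in bits per byte (max 8.0)."""
        counts = Counter(payload)
        n = len(payload)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    plaintext_like = b"hello hello hello hello " * 40
    encrypted_like = os.urandom(1000)

    for name, data in [("plaintext", plaintext_like), ("random", encrypted_like)]:
        h = byte_entropy(data)
        verdict = "likely encrypted" if h > 7.5 else "likely clear"
        print(f"{name}: {h:.2f} bits/byte -> {verdict}")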
Infrastructure-based vehicular networks can be applied in different social contexts, such as health care, transportation, and entertainment. They can easily take advantage of the benefits that wireless mesh networks (WMNs) provide for mobility, since WMNs essentially support the technological convergence and resilience required for the effective operation of services and applications. However, infrastructure-based vehicular networks are prone to attacks, such as ARP packet flooding, that compromise mobility management and users' network access. Hence, this work proposes MIRF, a secure mobility scheme based on reputation and filtering to mitigate flooding attacks on mobility management. The efficiency of the MIRF scheme has been evaluated by simulations of urban scenarios with and without attacks. Analyses show that it significantly improves the packet delivery ratio in scenarios with attacks, mitigating their intended negative effects through the reduction of malicious ARP requests. Furthermore, handoffs in scenarios under attack improved, completing faster than in scenarios without the scheme.
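As an illustration of the filtering half of such a scheme, the Python sketch below rate-limits ARP requests per source over a sliding window; sources exceeding the threshold are dropped. The window, threshold, and data structures are assumptions, not MIRF's actual parameters.

    # Reputation-flavored ARP filter sketch: count ARP requests per source
    # in a sliding window and drop traffic from sources over the threshold.

    import time
    from collections import defaultdict, deque

    WINDOW_S = 10.0
    MAX_ARP_IN_WINDOW = 20

    history = defaultdict(deque)          # source MAC -> ARP request timestamps

    def allow_arp(src_mac: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = history[src_mac]
        q.append(now)
        while q and now - q[0] > WINDOW_S:   # evict requests outside the window
            q.popleft()
        return len(q) <= MAX_ARP_IN_WINDOW   # False -> drop, lower reputation

    # A flooder sending 50 requests in one second gets cut off:
    verdicts = [allow_arp("aa:bb:cc:dd:ee:ff", now=t * 0.02) for t in range(50)]
    print(verdicts.count(True), "allowed,", verdicts.count(False), "dropped")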
Unmanned Aerial Systems (UAS) have recently raised great privacy concerns, and a practical method to protect privacy is needed for adopting UAS in civilian airspace. This paper examines privacy policies, filtering strategies, and existing techniques, then proposes a novel method based on encrypted video streams and cloud-based privacy servers. In this scheme, all video surveillance images are encrypted at the source and delivered to a privacy server. The privacy server decrypts the video using the key it shares with the camera, and filters the images according to the privacy policy specified for the surveyed region. The sanitized video is delivered to the surveillance operator or to anyone on the Internet who is authorized. In a larger system composed of multiple cameras and multiple privacy servers, the keys can be distributed using the Kerberos protocol. With this method the privacy policy can be changed on demand in real time, and there is no need for a costly on-board processing unit. By utilizing cloud-based servers, advanced image-processing algorithms and new filtering algorithms can be applied immediately without upgrading the camera software. The method is cost-efficient and promotes video sharing among multiple subscribers, which can spur wide adoption.
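A minimal end-to-end Python sketch of the camera-to-privacy-server flow, using the third-party `cryptography` package for symmetric encryption: frames are encrypted at the camera, decrypted only at the privacy server with the shared key, filtered per policy, and released. The policy name and the byte-level redaction stand-in are assumptions; real filtering would operate on decoded images.

    # Sketch of the camera -> privacy server flow described above.
    # The "blur" step is mocked since no image library is assumed.

    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()        # distributed e.g. via Kerberos

    def camera_capture_and_encrypt(frame: bytes) -> bytes:
        return Fernet(shared_key).encrypt(frame)

    def privacy_server(ciphertext: bytes, policy: str) -> bytes:
        frame = Fernet(shared_key).decrypt(ciphertext)
        if policy == "redact-residential":
            frame = b"[REDACTED]" + frame[10:]   # stand-in for region blurring
        return frame                              # sanitized frame to operator

    token = camera_capture_and_encrypt(b"raw-jpeg-bytes-of-surveyed-area")
    print(privacy_server(token, policy="redact-residential"))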