Bibliography
With the rapid development of Internet of Things (IoT) technology and sensor networks, large amounts of data face security challenges during transmission, and standardizing and authenticating data sources is very important. A digital signature scheme based on the bilinear pairing problem is designed. In this scheme, a signing-authorization mechanism lets the management node control the signature process and distribute data. A private-key segmentation mechanism reduces the performance demands placed on sensor nodes, and a timestamp mechanism bounds the validity period of each signature so that it can still be verified after the data is sent. The implementation of this scheme is expected to improve the security of data transmission in IoT environments.
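A minimal sketch of the signing flow described above, using a BLS signature (one common pairing-based scheme) with an embedded timestamp. This assumes the third-party py_ecc library and is not the authors' exact construction; the authorization and key-segmentation mechanisms are omitted:

```python
import time
from py_ecc.bls import G2ProofOfPossession as bls  # pairing-based BLS signatures

sk = bls.KeyGen(b"\x01" * 32)          # sensor-node private key (toy seed)
pk = bls.SkToPk(sk)

def sign_with_timestamp(data: bytes) -> tuple:
    ts = int(time.time())
    # Bind the timestamp into the signed message so the signature's
    # validity window can be checked at verification time.
    return bls.Sign(sk, data + ts.to_bytes(8, "big")), ts

def verify(data: bytes, sig: bytes, ts: int, max_age: int = 300) -> bool:
    fresh = 0 <= int(time.time()) - ts <= max_age   # enforce the time limit
    return fresh and bls.Verify(pk, data + ts.to_bytes(8, "big"), sig)

sig, ts = sign_with_timestamp(b"sensor reading: 21.5C")
print(verify(b"sensor reading: 21.5C", sig, ts))    # True within max_age
```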
The main objective of the proposed work is to build a reliable and secure architecture for cloud servers where users may safely store and transfer their data. The platform ensures secure communication between client and server during data transfer and provides a safe method for sharing and transferring files from one person to another. To secure data on cloud servers, this research presents an architecture combining three components: DNA cryptography, HMAC, and a third-party auditor (TPA). A number of traditional and novel cryptographic methods are investigated to provide security through various strategies. In the first step, data is encrypted with DNA cryptography and the encoded document is stored on the cloud server. In the next step, an HMAC value of the encrypted file is created using a secret key and sent to the TPA. The TPA then authenticates the integrity of the stored documents: at verification time, it recomputes the HMAC value from the cloud-stored data and compares the two. Together, the DNA-based cryptographic technique, the hash-based message authentication code, and the third-party auditor provide a more secure framework for data security and integrity on the cloud server.
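The HMAC step is standard; below is a minimal sketch of how the owner and the TPA could each compute and compare the tag over the encrypted file. The DNA encryption step is abstracted away, and the key name is illustrative:

```python
import hmac, hashlib

SECRET_KEY = b"shared-owner-tpa-key"           # shared between owner and TPA

def hmac_of(ciphertext: bytes) -> str:
    # Tag computed over the already-encrypted file before upload.
    return hmac.new(SECRET_KEY, ciphertext, hashlib.sha256).hexdigest()

# Owner side: encrypt (DNA step omitted), tag, upload, send tag to TPA.
ciphertext = b"...DNA-encoded document..."
expected_tag = hmac_of(ciphertext)

# TPA side at audit time: recompute the tag from the cloud-stored copy.
cloud_copy = ciphertext                        # fetched from the cloud server
ok = hmac.compare_digest(hmac_of(cloud_copy), expected_tag)
print("integrity verified:", ok)
```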
The Zero Trust Model ensures each node is responsible for approving a transaction before it gets committed. Data owners can track their data while it is shared among the various data custodians, ensuring data security. The consensus algorithm enables users to trust the network, as malicious nodes fail to get approval from all nodes, causing the transaction to be aborted. The use case chosen to demonstrate the proposed consensus algorithm is a college placement system. The algorithm has been extended to implement a diversified, decentralized, automated placement system, wherein the data owner, i.e. the student, maintains an immutable certificate vault and the student's data is validated by a verifier network, i.e. the academic department and placement department. Data transfers from student to companies are recorded as transactions in the distributed ledger, or blockchain, allowing the data to be tracked by the student.
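A toy sketch of the unanimous-approval rule: a transaction is committed to the ledger only when every verifier node approves, so a single rejection aborts it. The node names and validation rules are illustrative, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    transactions: list = field(default_factory=list)   # append-only record

class Verifier:
    def __init__(self, name, check):
        self.name, self.check = name, check
    def approve(self, tx: dict) -> bool:
        return self.check(tx)

def commit(tx: dict, verifiers: list, ledger: Ledger) -> bool:
    # Zero-trust rule: every node must independently approve the transaction.
    if all(node.approve(tx) for node in verifiers):
        ledger.transactions.append(tx)                 # committed and trackable
        return True
    return False                                       # one rejection aborts it

verifiers = [Verifier("academic_dept", lambda tx: tx.get("gpa_verified", False)),
             Verifier("placement_dept", lambda tx: tx.get("consent", False))]
ledger = Ledger()
tx = {"student": "S42", "company": "AcmeCorp", "gpa_verified": True, "consent": True}
print(commit(tx, verifiers, ledger))    # True: all verifiers approved
```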
Ciphertext-Policy Attribute-Based Encryption (CP-ABE), a form of public-key encryption, has become a well-known data access control scheme for data security and confidentiality. It not only provides flexibility and scalability in access control mechanisms but also enhances security through fuzzy fine-grained access control. However, some existing schemes increase the key size for stronger security, which leads to high encryption and decryption times, and there is no provision for handling man-in-the-middle attacks during data transfer. In this paper, a lightweight and more scalable encryption mechanism is provided that not only uses fewer resources for encoding and decoding but also improves security along with faster encryption and decryption. Moreover, this scheme provides an efficient key-sharing mechanism for secure transfer that avoids man-in-the-middle attacks. In addition, the inclusion of fuzzy policies allows approximate matching of available user attributes, which makes the process fast and reliable and improves performance for legitimate users.
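The cryptography itself is out of scope here, but the fuzzy-policy idea can be illustrated: instead of requiring an exact match of user attributes against the ciphertext policy, access is permitted when at least a threshold fraction of the policy's attributes are satisfied. A toy sketch with made-up attributes and threshold:

```python
def fuzzy_policy_satisfied(user_attrs: set, policy_attrs: set,
                           threshold: float = 0.8) -> bool:
    # Fuzzy fine-grained check: allow access when the user holds at least
    # `threshold` of the attributes named in the ciphertext policy.
    if not policy_attrs:
        return False
    overlap = len(user_attrs & policy_attrs) / len(policy_attrs)
    return overlap >= threshold

policy = {"doctor", "cardiology", "senior", "hospital_A", "on_duty"}
user = {"doctor", "cardiology", "senior", "hospital_A"}   # 4 of 5 attributes
print(fuzzy_policy_satisfied(user, policy))               # True at 80%
```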
The steady decline of IP transit prices over the past two decades has helped fuel the growth of traffic demands in the Internet ecosystem. Despite the declining unit pricing, bandwidth costs remain significant due to the ever-increasing scale and reach of the Internet, combined with the price disparity between the Internet's core hubs and remote regions. In the meantime, cloud providers have been auctioning underutilized computing resources in their marketplaces as spot instances, at a much lower price than their on-demand instances. This state of affairs has led the networking community to devote extensive efforts to cloud-assisted networks: the idea of offloading network functionality to cloud platforms, ultimately leading to more flexible and highly composable network service chains. We initiate a critical discussion of the economic and technological aspects of leveraging cloud-assisted networks for Internet-scale interconnections and data transfers. Namely, we investigate the prospect of constructing a large-scale virtualized network provider that does not own any fixed or dedicated resources and runs atop several spot instances. We construct a cloud-assisted overlay as a virtual network provider by leveraging third-party cloud spot instances. We identify three use-case scenarios where such an approach is not only economically and technologically viable but also provides performance benefits compared to current commercial offerings of connectivity and transit providers.
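As a back-of-the-envelope illustration of the economics (all prices below are purely hypothetical, not data from the paper), one can compare routing a transfer over a direct transit link against relaying it through a cloud spot instance acting as an overlay hop:

```python
# Hypothetical unit prices (USD per GB / per hour); illustrative only.
DIRECT_TRANSIT = 0.08            # remote-region transit price per GB
SPOT_EGRESS = 0.02               # cloud egress from a core-hub region per GB
SPOT_INSTANCE_PER_HR = 0.03      # spot VM rental while relaying

def direct_cost(gb: float) -> float:
    return gb * DIRECT_TRANSIT

def overlay_cost(gb: float, hours: float) -> float:
    # Data crosses the spot relay once (cloud ingress is commonly free),
    # plus the VM rental for the duration of the transfer.
    return gb * SPOT_EGRESS + hours * SPOT_INSTANCE_PER_HR

gb, hours = 500.0, 2.0
print(f"direct: ${direct_cost(gb):.2f}  overlay: ${overlay_cost(gb, hours):.2f}")
```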
Secure routing in the field of mobile ad hoc networks (MANETs) is one of the most flourishing areas of research. Devising a trustworthy security protocol for ad hoc routing is a challenging task due to unique network characteristics such as lack of central authority, rapid node mobility, frequent topology changes, insecure operational environments, and confined availability of resources. Due to low configuration requirements and quick deployment, MANETs are well suited for emergency situations like natural disasters or military applications; therefore, data transfer between two nodes should necessarily involve security. A black-hole attack in a MANET is an offense committed by malicious nodes, which attract data packets by falsely advertising a fresh route to the destination. A clustering approach in the AODV routing protocol for the detection and prevention of black-hole attacks in MANETs is put forward. Every member of the cluster pings the cluster head once, so that the difference between the number of data packets received and forwarded by a particular node can be detected. If a fault is perceived, all nodes isolate the malicious nodes from the network. System performance has been evaluated in terms of packet delivery ratio (PDR), end-to-end delay (ETD), throughput, and energy; simulation results are recorded using the ns-2 simulator.
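A sketch of the cluster-head check described above: each member reports its received and forwarded packet counts, and a node that receives packets but forwards almost none is flagged as a suspected black hole and isolated. The counts and the threshold are illustrative:

```python
def detect_black_holes(reports: dict, min_forward_ratio: float = 0.5) -> set:
    # reports: node -> (packets_received, packets_forwarded), collected by
    # the cluster head from every cluster member's ping.
    suspects = set()
    for node, (received, forwarded) in reports.items():
        if received > 0 and forwarded / received < min_forward_ratio:
            suspects.add(node)           # drops most of the traffic it attracts
    return suspects

reports = {"n1": (120, 118), "n2": (95, 90), "n3": (200, 3)}   # n3 drops packets
blacklist = detect_black_holes(reports)
print("isolate:", blacklist)             # remaining members then shun these nodes
```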
This paper describes an experiment carried out to demonstrate the robustness and trustworthiness of an orchestrated two-layer network test-bed (PROnet). A Robot Operating System Industrial (ROS-I) distributed application makes use of end-to-end flow services offered by PROnet. The PROnet Orchestrator is used to provision reliable end-to-end Ethernet flows to support the data exchange required by the ROS-I application. For maximum reliability, the Orchestrator provisions network resource redundancy at both layers, i.e., Ethernet and optical. Experimental results show that the robotic application is not interrupted by a fiber outage.
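The redundancy idea can be sketched as a simple primary/backup switch-over: two disjoint end-to-end paths are provisioned, and traffic moves to the backup when the active path's fiber fails. The path names and the event interface are illustrative assumptions, not the PROnet Orchestrator API:

```python
class RedundantFlow:
    def __init__(self, primary: str, backup: str):
        # Two paths kept disjoint at both the Ethernet and optical layers.
        self.paths = {"primary": primary, "backup": backup}
        self.active = "primary"

    def on_link_state(self, path: str, up: bool) -> None:
        # Fail over when the currently active path reports a fiber outage,
        # so the application-level data exchange continues uninterrupted.
        if not up and self.paths[self.active] == path:
            self.active = "backup" if self.active == "primary" else "primary"

flow = RedundantFlow("eth0/optical-A", "eth1/optical-B")
flow.on_link_state("eth0/optical-A", up=False)     # simulated fiber cut
print("active path:", flow.paths[flow.active])     # traffic now on optical-B
```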
Open Science Big Data is emerging as an important area of research and software development. Although there are several high-quality frameworks for Big Data, additional capabilities are needed for Open Science Big Data, including data provenance, citable reusable data, data sources providing links to research literature, relationships to other data and theories, transparent analysis and reproducibility, data privacy, new optimizations and advanced algorithms, data curation, and data storage and transfer. An important part of science is the explanation of results, ideally leading to theory formation. In this paper, we examine means for supporting the use of theory in big data analytics, as well as using big data to assist in theory formation. One approach is to fit data in a way that is compatible with some theory, existing or new. Functional Data Analysis allows precise fitting of data, with penalties for lack of smoothness or even for departure from theoretical expectations. This paper discusses principal differential analysis and related techniques for fitting data where, for example, a time-based process is governed by an ordinary differential equation. Automation in theory formation is also considered, along with case studies in the fields of computational economics and finance.
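A minimal numerical sketch of this kind of theory-penalized fitting (illustrative, not the paper's implementation): fit x(t) to noisy samples y while penalizing departure from the ODE x''(t) + ω²x(t) = 0, i.e. solve argmin ‖x − y‖² + λ‖Lx‖², whose normal equations are (I + λLᵀL)x = y:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200)
rng = np.random.default_rng(0)
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # noisy observations

h, n = t[1] - t[0], t.size
# Second-difference operator approximating d^2/dt^2 on interior points.
D2 = (np.diag(np.ones(n - 1), 1) - 2.0 * np.eye(n)
      + np.diag(np.ones(n - 1), -1))[1:-1] / h**2
omega = 1.0
L = D2 + omega**2 * np.eye(n)[1:-1]   # discretized operator x'' + omega^2 x

lam = 10.0                            # weight on the theory/smoothness penalty
x = np.linalg.solve(np.eye(n) + lam * L.T @ L, y)

print("residual to data:", np.linalg.norm(x - y))
print("ODE penalty:     ", np.linalg.norm(L @ x))
```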
GENI (Global Environment for Network Innovations) is a National Science Foundation (NSF) funded program which provides a virtual laboratory for networking and distributed systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services, and applications. GENI allows researchers to obtain compute resources from locations around the United States, connect those resources using Internet2's 100G L2 service, install custom software or even custom operating systems on them, control how network switches in their experiment handle traffic flows, and run their own L3-and-above protocols. The GENI architecture incorporates cloud federation: cloud resources can be federated and/or a community of clouds can be formed. The heart of federation is user identity and the ability to "advertise" cloud resources, including compute, storage, and networking, to the community. GENI administrators can carve out which resources are available to the community, so a portion of GENI resources is reserved for internal consumption. The GENI architecture also provides "stitching" of the compute and storage resources researchers request, yielding an L2 network domain over Internet2's 100G network, on which researchers can run their own Software Defined Networking (SDN) controllers for complete control of network traffic. This capability is useful for large science data transfers (bypassing security devices for high throughput). The Renaissance Computing Institute (RENCI), a research institute in the state of North Carolina, has developed ORCA (Open Resource Control Architecture), a GENI control framework. ORCA is a distributed resource orchestration system serving science experiments; it provides compute resources as virtual machines as well as bare metal. The ORCA-based GENI rack was designed to serve both High Throughput Computing (HTC) and High Performance Computing (HPC) workloads. Although GENI is primarily used in universities and research entities today, its architecture can also be leveraged in commercial, aerospace, and government settings. This paper goes over the architecture of GENI and discusses it in the context of scientific computing experiments.
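As an example of the "complete control of networking traffic" mentioned above, here is a minimal OpenFlow controller of the kind an experimenter might run against switches on a stitched GENI L2 domain: a flood-everything hub written for the Ryu framework. This is a generic illustration, not GENI-specific code:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class HubController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Simplest possible forwarding behavior: flood every packet.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)
```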
Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aimed at replacing IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, particularly in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which embodies both NDN's smartness and scalability and IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet: the head is intelligent in content retrieval and self-control, while the IP feet can transport large amounts of media data faster than NDN directly overlaid on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experimental scenarios. The results show that Centaur achieves better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates our proposal as a promising choice for the incremental and compatible deployment of NDN.
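A toy sketch of the head/feet split: the NDN module answers an Interest from its content store when possible and otherwise delegates retrieval to the IP module, abstracted here as a plain fetch function. The class, names, and fetch helper are illustrative, not the Centaur prototype:

```python
class CentaurNode:
    def __init__(self, ip_fetch):
        self.content_store = {}        # NDN "head": named-content cache
        self.ip_fetch = ip_fetch       # IP "feet": bulk transport function

    def on_interest(self, name: str) -> bytes:
        if name in self.content_store:        # cache hit: serve locally
            return self.content_store[name]
        data = self.ip_fetch(name)             # miss: fetch over the IP underlay
        self.content_store[name] = data        # cache for later Interests
        return data

node = CentaurNode(ip_fetch=lambda name: b"chunk-for-" + name.encode())
print(node.on_interest("/video/ep1/seg0"))     # fetched via IP, then cached
print(node.on_interest("/video/ep1/seg0"))     # served from the content store
```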
The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. Existing hardware, software, and network protection methods, however, address only a fraction of the real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in the IoT supply chain. We take advantage of the connection between the RFID tag and the control chip in an IoT device to enable data transfer from tag memory to a centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.
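A sketch of the authentication step: a keyed tag written into RFID memory at manufacture is uploaded by the control chip at deployment, and the centralized database recomputes it to decide whether the device and its provenance record are genuine. The key handling and record layout are illustrative assumptions, not the paper's protocol:

```python
import hmac, hashlib

DB_KEY = b"supply-chain-master-key"      # held by the centralized database

def tag_mac(device_id: bytes, provenance: bytes) -> bytes:
    # Written into RFID tag memory when the device leaves the factory.
    return hmac.new(DB_KEY, device_id + provenance, hashlib.sha256).digest()

# Deployment time: the control chip uploads tag memory contents for checking.
device_id = b"iot-device-007"
provenance = b"fab:A;assembler:B;distributor:C"
uploaded = (device_id, provenance, tag_mac(device_id, provenance))

def authenticate(record) -> bool:
    dev, prov, mac = record
    # Recompute server-side; a mismatch reveals a cloned or tampered tag.
    return hmac.compare_digest(mac, tag_mac(dev, prov))

print("device authentic:", authenticate(uploaded))
```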