Bibliography
This article presents an introduction to HTTP security headers, a recent topic in securing communication over the Internet. It is emphasized that the HTTPS protocol and SSL/TLS certificates alone do not offer a sufficient level of security for communication among people and devices. In the world of web applications and the Internet of Things (IoT), it is vital to bring communication security to a higher level, which can be realised via a few simple steps. HTTP response headers, used for different purposes in the past, are now an effective way to propagate security policies from servers to clients (from web servers to web browsers). The first improvement is to enforce the HTTPS protocol everywhere it is possible and to promote it as the first and only option for secure connections over the Internet. It is emphasized that plain HTTP is no longer suitable for communication.
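The article itself does not include code; as an illustration only, the following is a minimal sketch of how a server might attach common security response headers, written against Python's standard http.server module. The header names are real, but the chosen values and the handler are assumptions for demonstration.

```python
# Illustrative only: attach common HTTP security headers to every response.
# The values below are generic defaults, not recommendations from the article.
from http.server import BaseHTTPRequestHandler, HTTPServer

SECURITY_HEADERS = {
    # HSTS: ask browsers to use HTTPS only for the next year.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # Restrict where content may be loaded from.
    "Content-Security-Policy": "default-src 'self'",
    # Disable MIME sniffing and cross-origin framing.
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

class SecureHeadersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        for name, value in SECURITY_HEADERS.items():
            self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In practice this would sit behind a TLS terminator that enforces HTTPS.
    HTTPServer(("localhost", 8000), SecureHeadersHandler).serve_forever()
```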
The focus of this paper is to propose an integration between the Internet of Things (IoT) and Video Surveillance, with the aim of satisfying the requirements of future Video Surveillance needs and achieving better use of the technology. IoT is an emerging technology in the telecommunications sector: a network of physical objects, items, and devices embedded with sensors and software, enabling these objects to exchange data. Video Surveillance systems collect and exchange the data recorded by sensors and cameras and send it through the network. This paper proposes an innovative topology paradigm that could offer better use of IoT technology in Video Surveillance systems. Furthermore, it examines how IoT features can support the basic types of Video Surveillance technology, with the aim of improving their use and achieving better transmission of video data through the network. Additionally, the proposed topology is compared with relevant existing topologies, with a focus on security.
There are billions of Internet of Things (IoT) devices connecting to the Internet, and the number keeps increasing. As a still-evolving technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing, and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture, and different kinds of attacks target its different layers, such as unauthorized access to tags, tag cloning, sybil attacks, sinkhole attacks, denial-of-service attacks, malicious code injection, and man-in-the-middle attacks. IoT devices are more vulnerable to attacks because they are simple and some security measures cannot be implemented on them. We analyze the privacy and security challenges in the IoT and survey the corresponding solutions for enhancing the security of IoT architectures and protocols. More attention should be paid to IoT security and privacy to help promote the development of IoT.
The Internet of Things (IoT) is an advanced information technology that has drawn society's attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security brings up new issues. Internet connectivity and the possibility of interacting with smart devices make those devices increasingly involved in human life. Therefore, safety is a fundamental requirement in designing IoT. IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the scale of IoT, the security of conveyed data is an essential factor in system security. Hybrid encryption is a technique that can be used in IoT: it provides strong security with low computational cost. In this paper, we propose a hybrid encryption algorithm designed to reduce security risks while increasing encryption speed and lowering computational complexity. The purpose of this hybrid algorithm is to provide information integrity, confidentiality, and non-repudiation in data exchange for IoT. Finally, the suggested encryption algorithm was simulated in MATLAB, and its speed and security efficiency were evaluated in comparison with conventional encryption algorithms.
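The paper's own hybrid scheme is not reproduced here; the following is a minimal generic sketch of hybrid encryption (a symmetric session key protecting the payload, wrapped with RSA-OAEP), using the third-party cryptography package. Key sizes and all names are illustrative assumptions.

```python
# Generic hybrid-encryption sketch (not the algorithm from the cited paper):
# a fresh symmetric key encrypts the payload, and RSA-OAEP protects that key.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's long-term RSA key pair (2048 bits chosen only for illustration).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    session_key = Fernet.generate_key()            # fresh symmetric key
    ciphertext = Fernet(session_key).encrypt(plaintext)
    wrapped_key = public_key.encrypt(session_key, OAEP)  # key transport
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    session_key = private_key.decrypt(wrapped_key, OAEP)
    return Fernet(session_key).decrypt(ciphertext)

wrapped, ct = hybrid_encrypt(b"sensor reading: 21.5 C")
assert hybrid_decrypt(wrapped, ct) == b"sensor reading: 21.5 C"
```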
The Internet of Things (IoT) has become ubiquitous in our daily life as billions of devices are connected through the Internet infrastructure. However, the rapid increase of IoT devices brings many non-traditional challenges for system design and implementation. In this paper, we focus on the hardware security vulnerabilities and ultra-low power design requirement of IoT devices. We briefly survey the existing design methods to address these issues. Then we propose an approximate computing based information hiding approach that provides security with low power. We demonstrate that this security primitive can be applied for security applications such as digital watermarking, fingerprinting, device authentication, and lightweight encryption.
Over recent years, the number of rather simple interconnected devices in non-industrial scenarios (e.g., for home automation) has steadily increased. For ease of use, overall system security is often neglected. Before the Internet of Things (IoT) reaches the same distribution rate and impact in industrial applications, where security is crucial for success, solutions that combine usability, scalability, and security are required. We develop such a security system, mainly targeting sensor modules equipped with Radio Frequency IDentification (RFID) tags, which we leverage to increase the security level. More specifically, we consider a network based on Message Queue Telemetry Transport (MQTT), a widely adopted protocol for the IoT.
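The paper's RFID-assisted key handling is not reproduced here; as background, the sketch below shows a baseline MQTT publisher secured with TLS and per-device credentials, using the third-party paho-mqtt package (1.x-style constructor). The broker host, topic, credentials, and certificate path are placeholder assumptions.

```python
# Baseline secured MQTT publisher (illustration; the RFID-derived security
# scheme of the cited work is not shown). Requires paho-mqtt (1.x-style API).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-42")            # placeholder device id
client.username_pw_set("sensor-42", "device-secret")   # per-device credentials
client.tls_set(ca_certs="ca.crt")                      # authenticate the broker over TLS

client.connect("broker.example.org", 8883)             # placeholder broker, TLS port
client.loop_start()
payload = json.dumps({"temperature": 21.5, "unit": "C"})
client.publish("plant/line1/sensor42/temperature", payload, qos=1)
client.loop_stop()
client.disconnect()
```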
Security Evaluation and Management (SEM) is a considerably important process for protecting an Embedded System (ES) from various kinds of security exploits. In general, SEM processes face several challenges that limit their efficiency. Some of these are system-based challenges, such as the heterogeneity among system components and the system's size. Others are expert-based challenges, such as the possibility of mis-evaluation and the non-continuous availability of experts. Many of these challenges were addressed by the Multi Metric (MM) framework, which relies on expert, i.e. subjective, evaluation for its basic evaluations. Despite its productivity, subjective evaluation has drawbacks (e.g., expert mis-evaluation) that foster the need to consider objective evaluations in the MM framework. In addition, the MM framework is system-centric; thus, when modelling complex and large systems with it, a guide is needed that indicates the changes required to reach desirable security requirements. This paper proposes extensions to the MM framework that consider the use of objective evaluations and act as a guide for the changes needed to satisfy desirable security requirements.
A survey of related work in the very specialized field of information security (IS) assurance for the Internet of Things (IoT) allowed us to work out a taxonomy of typical attacks against IoT elements (with special attention to IoT device protection). The key directions for countering these attacks were defined on this basis. In line with the modern demand for processing large volumes of IoT security-related data, the application of the Security Intelligence approach is proposed. The main direction of future research, namely IoT operational resilience, is indicated.
Smart IoT applications require connecting multiple IoT devices and networks with multiple services running on fog and cloud computing platforms. One approach to connecting IoT devices with cloud and fog services is to create a federated virtual network. The main benefit of this approach is that IoT devices can then interact with multiple remote services using an application-specific federated network through which no traffic from other applications passes. This federated network spans multiple cloud platforms and IoT networks, but it can be managed as a single entity. From the point of view of security, federated virtual networks can be managed centrally and secured with a coherent global network security policy. This does not mean that the same security policy applies everywhere, but that the different security policies are specified in a single coherent security policy. In this paper we propose to extend a federated cloud networking security architecture so that it can secure IoT devices and networks. The federated network is extended to the edge of IoT networks by integrating a federation agent in an IoT gateway or network controller (CAN bus, 6LoWPAN, LoRa, ...). This allows communication between the federated cloud network and the IoT network. The security architecture is based on the concepts of network function virtualisation (NFV) and service function chaining (SFC) for composing security services. The IoT network and devices can then be protected by security virtual network functions (VNFs) running at the edge of the IoT network.
The Internet of Things (IoT) depicts an intelligent future in which IoT-based devices with sensing and computing capabilities interact with each other. We are living in the era of the Internet and rapidly moving towards a smart planet where devices can connect to one another. Cooperative ad-hoc vehicle systems are a main driving force for the actualization of the IoT concept, and the Vehicular Ad-hoc Network (VANET) is considered a promising platform for intelligent wireless communication systems. This paper presents and analyzes the security-reliability tradeoff (SRT) of an IoT-based VANET system in the presence of eavesdropping attacks, using smart vehicle relays based on an opportunistic relay selection (ORS) scheme. The optimization of the distance between the source (S), destination (D), and eavesdropper (E) is then illustrated in detail, showing the effect of this parameter on the IoT-based network. In order to improve the SRT, we quantify the attainable SRT improvement for variable distances between IoT-based nodes. It is shown that, given the maximum tolerable Intercept Probability (IP), the Outage Probability (OP) of our proposed model approaches zero for Ge → ∞, where Ge is the distance ratio between S and E via the vehicle relay (R).
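The abstract does not state its expressions; as a reminder of the standard quantities used in such security-reliability tradeoff analyses, the outage and intercept probabilities are commonly defined as below, where R is the target transmission rate and γ_d, γ_e are the instantaneous SNRs of the legitimate and eavesdropping links (generic forms, not the cited paper's exact derivation).

```latex
% Standard SRT definitions (generic background, not the cited paper's derivation)
\begin{align}
  \mathrm{OP} &= \Pr\{\log_2(1 + \gamma_d) < R\}, \\
  \mathrm{IP} &= \Pr\{\log_2(1 + \gamma_e) > R\}.
\end{align}
```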
Link quality routing protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are vulnerable to malicious attacks that can exploit them. A malicious node may share false information with its neighboring sensor nodes to distort their estimates, so that its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. To accomplish this task, the MINTROUTE and CTP routing protocols are selected and extended with intrusion detection schemes (IDSs) for further investigation together with other factors. It is shown that these two routing protocols possess inherent susceptibilities capable of disrupting the link quality calculations, and that malicious nodes abusing such vulnerabilities can be detected through operational detection mechanisms. The overall performance of the new link quality routing (LQR) protocol with IDS features is evaluated, validated, and reported via detection rates and false alarm rates.
Open Science Big Data is emerging as an important area of research and software development. Although there are several high-quality frameworks for Big Data, additional capabilities are needed for Open Science Big Data. These include data provenance, citable reusable data, data sources providing links to research literature, relationships to other data and theories, transparent analysis/reproducibility, data privacy, new optimizations/advanced algorithms, data curation, and data storage and transfer. An important part of science is the explanation of results, ideally leading to theory formation. In this paper, we examine means for supporting the use of theory in big data analytics as well as using big data to assist in theory formation. One approach is to fit data in a way that is compatible with some theory, existing or new. Functional Data Analysis allows precise fitting of data as well as penalties for lack of smoothness or even departure from theoretical expectations. This paper discusses principal differential analysis and related techniques for fitting data where, for example, a time-based process is governed by an ordinary differential equation. Automation in theory formation is also considered, and case studies in the fields of computational economics and finance are presented.
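As general background (not taken from the paper itself), principal differential analysis estimates a linear differential operator L whose kernel approximately contains the observed curves x_i(t); the weight functions w_j(t) are chosen to minimise the residual norm:

```latex
% Generic formulation of principal differential analysis (background only):
% estimate L so that the observed curves x_i(t) nearly satisfy L x_i = 0.
\begin{align}
  L x &= w_0(t)\,x + w_1(t)\,Dx + \cdots + w_{m-1}(t)\,D^{m-1}x + D^{m}x, \\
  \{\hat{w}_j\} &= \arg\min_{w_0,\dots,w_{m-1}} \sum_{i=1}^{N} \int \bigl[L x_i(t)\bigr]^{2}\,dt .
\end{align}
```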
The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
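A minimal sketch of how PROV-based provenance for a single observation can be expressed with Semantic Web tooling is given below; it uses the third-party rdflib package and the W3C PROV-O namespace, while the example URIs and the SOSA observation term are illustrative assumptions rather than the paper's actual domain vocabulary.

```python
# Illustrative PROV-O triples for one sensor observation (requires rdflib).
# The example.org URIs and the SOSA term are placeholders, not the cited model.
from rdflib import Graph, Literal, Namespace, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/sensorweb/")

g = Graph()
g.bind("prov", PROV)
g.bind("sosa", SOSA)

obs = EX["observation/42"]           # the data product
sensing = EX["activity/sensing-42"]  # the process that produced it
sensor = EX["sensor/thermometer-7"]  # the device/agent involved

g.add((obs, RDF.type, PROV.Entity))
g.add((obs, RDF.type, SOSA.Observation))
g.add((sensing, RDF.type, PROV.Activity))
g.add((sensor, RDF.type, PROV.Agent))
g.add((obs, PROV.wasGeneratedBy, sensing))
g.add((sensing, PROV.wasAssociatedWith, sensor))
g.add((obs, PROV.value, Literal(21.5)))

print(g.serialize(format="turtle"))  # expose as Linked Data
```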
Compute-intensive simulations typically place substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may use input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations may degrade the performance of the platform through drastic consumption of the limited resources shared by a number of users. To minimize or avoid conducting repeated simulations, we present a novel system, called SUPERMAN (SimUlation ProvEnance Recycling MANager), that can record simulation provenances and recycle the results of past simulations. This system presents a great opportunity not only to reutilize existing results but also to perform various analytics helpful for those who are not familiar with the platform. The system also offers interoperability with other systems by collecting the provenances in a standardized format. In our simulated experiments we found that over half of past computing jobs could be answered by our system without actual execution.
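A minimal sketch of the recycling idea follows (not SUPERMAN's actual implementation): each simulation is keyed by a hash of its normalised inputs, and a stored result is returned when the same inputs reappear. The function names and the in-memory store are assumptions.

```python
# Minimal provenance-recycling sketch (illustrative, not SUPERMAN itself):
# identical simulation requests are answered from a result store keyed by
# a hash of the normalised input parameters.
import hashlib
import json

_result_store: dict[str, dict] = {}           # stand-in for a provenance DB

def _job_key(params: dict) -> str:
    canonical = json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def run_simulation(params: dict, simulate) -> dict:
    """Return a recycled result if these inputs were already simulated."""
    key = _job_key(params)
    if key in _result_store:
        return _result_store[key]              # recycled, no execution
    result = simulate(params)                  # actual (expensive) execution
    _result_store[key] = result
    return result

# Usage: the second call is answered from the store, not recomputed.
fake_solver = lambda p: {"mean": sum(p["values"]) / len(p["values"])}
run_simulation({"values": [1, 2, 3]}, fake_solver)
print(run_simulation({"values": [1, 2, 3]}, fake_solver))
```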
Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format, inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as from other scientific applications that log provenance-related information.
Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10^3 to 10^4 electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10^-14 m^2 (FE simulations, CST), yielding large field enhancements above 1500 and allowing entry into the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled to the cavity mode.
The Internet of Things (IoT) is envisioned to include billions of pervasive and mission-critical sensors and actuators connected to the (public) Internet. This network of smart devices is expected to generate and have access to vast amounts of information, creating unique opportunities for novel applications but, at the same time, raising significant privacy and security concerns that impede its further adoption and development. In this paper, we explore the potential of a blockchain-assisted information distribution system for the IoT. We identify key security requirements of such a system and discuss how they can be satisfied using blockchains and smart contracts. Furthermore, we present a preliminary design of the system and identify enabling technologies.
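The paper's design is not reproduced here; the toy sketch below only illustrates the integrity property that blockchains contribute to distributed record keeping: each block commits to its predecessor's hash, so tampering with stored IoT information becomes detectable. All names and fields are hypothetical.

```python
# Toy hash-chained ledger (illustration of the integrity property only;
# not the blockchain-assisted design described in the cited paper).
import hashlib
import json
import time

def _block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], payload: dict) -> None:
    prev_hash = _block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"ts": time.time(), "prev": prev_hash, "data": payload})

def verify_chain(chain: list[dict]) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev"] != _block_hash(chain[i - 1]):
            return False                       # a predecessor was altered
    return True

ledger: list[dict] = []
append_block(ledger, {"device": "lamp-3", "state": "on"})
append_block(ledger, {"device": "lock-1", "state": "open"})
ledger[0]["data"]["state"] = "off"             # tamper with an old record
print(verify_chain(ledger))                    # -> False
```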
Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.
Science is conducted collaboratively, often requiring knowledge sharing about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets, but more often includes software, its past execution, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregation and identification of diverse elements of computational experiments. While a necessary method, mere aggregation is not sufficient for the sharing of computational experiments. Other users must be able to easily recompute on these shared research objects. In this paper, we present the sciunit, a reusable research object in which aggregated content is recomputable. We describe a Git-like client that efficiently creates, stores, and repeats sciunits. We show through analysis that sciunits repeat computational experiments with minimal storage and processing overhead. Finally, we provide an overview of sharing and reproducible cyberinfrastructure based on sciunits gaining adoption in the domain of geosciences.
Provenance forgery and packet drop attacks are regarded as threats in large-scale wireless sensor networks, which are deployed for diverse application domains. The variety of information sources makes it necessary to guarantee the trustworthiness of information, so that only truthful information is considered in the decision process. Details about the sensor nodes play a major role in determining their trust values. In this paper, a novel lightweight secure provenance method is introduced to improve the security of provenance data transmission. The proposed system comprises provenance verification and reconstruction at the base station by means of Merkle-Hellman knapsack based secure provenance encoding within a Bloom filter framework. Side Channel Monitoring (SCM) is used to detect the presence of selfish nodes and packet drop behavior. This lightweight secure provenance method reduces energy and bandwidth consumption while providing well-organized storage and secure data transmission. The experimental results establish the efficacy and efficiency of the secure provenance scheme in detecting provenance forgery and packet drop attacks, as can be seen from the assessment in terms of provenance verification failure rate, collection error, packet drop rate, space complexity, energy consumption, true positive rate, false positive rate, and packet drop attack detection.
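The cited knapsack-based encoding is not reproduced; as background, the sketch below shows plain Bloom-filter provenance encoding, where each forwarding node hashes its identifier into a fixed bit array carried in the packet and the base station probabilistically checks which nodes were traversed. The filter size and hash count are arbitrary illustrative choices.

```python
# Minimal Bloom-filter provenance encoding (background sketch only; the
# Merkle-Hellman based encoding of the cited paper is not shown).
import hashlib

M_BITS = 256        # filter size in bits (illustrative)
K_HASHES = 3        # hash functions per element (illustrative)

def _bit_positions(node_id: str) -> list[int]:
    positions = []
    for i in range(K_HASHES):
        digest = hashlib.sha256(f"{i}:{node_id}".encode()).digest()
        positions.append(int.from_bytes(digest[:4], "big") % M_BITS)
    return positions

def encode_provenance(path: list[str]) -> int:
    """Each forwarding node sets its bits in the in-packet Bloom filter."""
    bloom = 0
    for node_id in path:
        for pos in _bit_positions(node_id):
            bloom |= 1 << pos
    return bloom

def might_contain(bloom: int, node_id: str) -> bool:
    """Base-station check: False means definitely absent; True may be a false positive."""
    return all((bloom >> pos) & 1 for pos in _bit_positions(node_id))

bloom = encode_provenance(["node-1", "node-7", "node-12"])
print(might_contain(bloom, "node-7"), might_contain(bloom, "node-99"))
```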