2017-12-20
Wazan, A. S., Laborde, R., Chadwick, D. W., Barrere, F., Benzekri, A..  2017.  TLS Connection Validation by Web Browsers: Why do Web Browsers Still Not Agree? 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 1:665–674.
The TLS protocol is the primary technology used for securing web transactions. It is based on X.509 certificates that are used for binding the identity of web servers' owners to their public keys. Web browsers perform the validation of X.509 certificates on behalf of Web users. Our previous research in 2009 showed that the validation process of Web browsers is inconsistent and flawed. We showed how this situation might have a negative impact on Web users. From 2009 until now, many new X.509-related standards have been created or updated. In this paper, we perform an expanded set of experiments compared to our 2009 study in order to highlight the improvements and/or regressions in Web browsers' behaviours.
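A baseline version of the validation step the paper examines can be reproduced outside a browser with Python's ssl module; the sketch below is our own illustration (example.org is a placeholder host), and it covers only chain, validity, and hostname checks, not the revocation or pinning behaviours on which browsers differ.

    import socket, ssl

    def validate_tls(host, port=443):
        # Mimics the core checks a browser performs: chain building against the
        # local trust store, validity period, and hostname verification.
        ctx = ssl.create_default_context()      # CERT_REQUIRED and check_hostname=True
        with socket.create_connection((host, port), timeout=10) as sock:
            # The handshake raises ssl.SSLCertVerificationError if validation fails.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    if __name__ == "__main__":
        print(validate_tls("example.org"))
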
Narvekar, A. N., Joshi, K. K..  2017.  Security sandbox model for modern web environment. 2017 International Conference on Nascent Technologies in Engineering (ICNTE). :1–6.
Creating automated tests that exploit browser vulnerabilities requires very good technical knowledge, usually a combination of technical abilities and a set of specific tools. Security is of prime importance when it comes to web browsers. Attacks during surfing, while executing downloaded files, and during transmission are very frequent these days, and hence all browsers need to be hardened to ensure security. The sandbox is one feature that prevents malicious applications from running directly on the hardware: it is an environment in which new or untrusted applications are executed. Many leading web browsers are trying their best to implement sandboxing. In this paper, we describe the basic necessity of the sandbox, survey current implementations in different web browsers, and present a self-proposed approach.
Rogowski, R., Morton, M., Li, F., Monrose, F., Snow, K. Z., Polychronakis, M..  2017.  Revisiting Browser Security in the Modern Era: New Data-Only Attacks and Defenses. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :366–381.
The continuous discovery of exploitable vulnerabilities in popular applications (e.g., web browsers and document viewers), along with their heightening protections against control flow hijacking, has opened the door to an often neglected attack strategy, namely, data-only attacks. In this paper, we demonstrate the practicality of the threat posed by data-only attacks that harness the power of memory disclosure vulnerabilities. To do so, we introduce memory cartography, a technique that simplifies the construction of data-only attacks in a reliable manner. Specifically, we show how an adversary can use a provided memory mapping primitive to navigate through process memory at runtime, and safely reach security-critical data that can then be modified at will. We demonstrate this capability by using our cross-platform memory cartography framework implementation to construct data-only exploits against Internet Explorer and Chrome. The outcome of these exploits ranges from simple HTTP cookie leakage, to the alteration of the same origin policy for targeted domains, which enables the cross-origin execution of arbitrary script code. The ease with which we can undermine the security of modern browsers stems from the fact that although isolation policies (such as the same origin policy) are enforced at the script level, these policies are not well reflected in the underlying sandbox process models used for compartmentalization. This gap exists because the complex demands of today's web functionality make the goal of enforcing the same origin policy through process isolation a difficult one to realize in practice, especially when backward compatibility is a priority (e.g., for support of cross-origin IFRAMEs). While fixing the underlying problems likely requires a major refactoring of the security architecture of modern browsers (in the long term), we explore several defenses, including global variable randomization, that can limit the power of the attacks presented herein.
Sevilla, S., Garcia-Luna-Aceves, J. J., Sadjadpour, H..  2017.  GroupSec: A new security model for the web. 2017 IEEE International Conference on Communications (ICC). :1–6.
The de facto approach to Web security today is HTTPS. While HTTPS ensures complete security for clients and servers, it also interferes with transparent content-caching at middleboxes. To address this problem and support both security and caching, we propose a new approach to Web security and privacy called GroupSec. The key innovation of GroupSec is that it replaces the traditional session-based security model with a new model based on content group membership. We introduce the GroupSec security model and show how HTTP can be easily adapted to support GroupSec without requiring changes to browsers, servers, or middleboxes. Finally, we present results of a threat analysis and performance experiments which show that GroupSec achieves notable performance benefits at the client and server while remaining as secure as HTTPS.
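The abstract does not give protocol details, but the core idea, securing content for a whole group of authorized clients rather than per TLS session so that middleboxes can cache the ciphertext, can be sketched roughly as follows; the group name, key distribution, and helper functions are hypothetical and are not GroupSec's actual mechanism.

    from cryptography.fernet import Fernet

    # Hypothetical group keyring: every member of a content group shares one key,
    # distributed out of band (e.g., at subscription time).
    group_keys = {"video-group-42": Fernet.generate_key()}

    def publish(group, plaintext: bytes) -> bytes:
        # The origin server encrypts once per group; any cache may store/serve this blob.
        return Fernet(group_keys[group]).encrypt(plaintext)

    def consume(group, blob: bytes) -> bytes:
        # Any group member can decrypt, regardless of which cache served the blob.
        return Fernet(group_keys[group]).decrypt(blob)

    cached = publish("video-group-42", b"segment-0001")
    assert consume("video-group-42", cached) == b"segment-0001"

Caches can store and serve the opaque blob because it is not bound to any single client's session; integrity, freshness, and membership revocation, which the full scheme must also handle, are omitted here.
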
Dolnák, I., Litvik, J..  2017.  Introduction to HTTP security headers and implementation of HTTP strict transport security (HSTS) header for HTTPS enforcing. 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA). :1–4.

This article presents an introduction to HTTP security headers, a newer security topic in communication over the Internet. It is emphasized that the HTTPS protocol and SSL/TLS certificates alone do not offer a sufficient level of security for communication among people and devices. In the world of web applications and the Internet of Things (IoT), it is vital to bring communication security to a higher level, which can be realised in a few simple steps. HTTP response headers, used for different purposes in the past, are now an effective way to propagate security policies from servers to clients (from web servers to web browsers). The first improvement is enforcing the HTTPS protocol for communication everywhere it is possible and promoting this protocol as the first and only option for secure connections over the Internet. It is emphasized that the plain HTTP protocol is no longer suitable for communication.
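As a concrete illustration of the header the article promotes, the following minimal sketch (ours, not the article's) adds a Strict-Transport-Security header to every response of a toy Python server; in practice the header only takes effect when delivered over HTTPS, and the max-age and includeSubDomains values must be tuned to the deployment.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HSTSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Instruct browsers to use HTTPS only for roughly two years, incl. subdomains.
            self.send_header("Strict-Transport-Security",
                             "max-age=63072000; includeSubDomains")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"HSTS demo\n")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()

Once received, browsers remember the policy for max-age seconds and upgrade subsequent http:// navigations to https:// before any request leaves the machine.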

2017-12-12
Stergiou, C., Psannis, K. E., Plageras, A. P., Kokkonis, G., Ishibashi, Y..  2017.  Architecture for security monitoring in IoT environments. 2017 IEEE 26th International Symposium on Industrial Electronics (ISIE). :1382–1385.

The focus of this paper is to propose an integration of the Internet of Things (IoT) and Video Surveillance, with the aim of satisfying the requirements of future Video Surveillance and achieving better use of the technology. IoT is a new technology in the telecommunications sector. It is a network of physical objects, items, and devices embedded with sensors and software, enabling the objects to exchange data. Video Surveillance systems collect and exchange the data recorded by sensors and cameras and send it through the network. This paper proposes an innovative topology paradigm that could offer better use of IoT technology in Video Surveillance systems. Furthermore, we show how features provided by the Internet of Things can support the basic types of Video Surveillance technology, with the aim of improving their use and achieving better transmission of video data through the network. Additionally, the proposed topology is compared with relevant existing topologies, focusing on the security issue.

Ren, Z., Liu, X., Ye, R., Zhang, T..  2017.  Security and privacy on internet of things. 2017 7th IEEE International Conference on Electronics Information and Emergency Communication (ICEIEC). :140–144.

There are billions of Internet of Things (IoT) devices connecting to the Internet, and the number is increasing. As a still-evolving technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing, and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture, and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attacks, sinkhole attacks, denial-of-service attacks, malicious code injection, and man-in-the-middle attacks. IoT devices are more vulnerable to attacks because they are simple and some security measures cannot be implemented. We analyze the privacy and security challenges in the IoT and survey the corresponding solutions for enhancing the security of IoT architectures and protocols. We should focus more on security and privacy in the IoT to help promote its development.

Yousefi, A., Jameii, S. M..  2017.  Improving the security of internet of things using encryption algorithms. 2017 International Conference on IoT and Application (ICIOT). :1–5.

The Internet of Things (IoT) is an advanced information technology that has drawn society's attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security brings up new issues. Internet connectivity and the possibility of interaction with smart devices cause those devices to become more involved in human life. Therefore, safety is a fundamental requirement in designing the IoT. The IoT has three remarkable features: overall perception, reliable transmission, and intelligent processing. Because of the IoT's span, the security of conveyed data is an essential factor for system security. Hybrid encryption is a new model that can be used in the IoT. This type of encryption provides strong security with low computation. In this paper, we propose a hybrid encryption algorithm designed to reduce safety risks, increase encryption speed, and lower computational complexity. The purpose of this hybrid algorithm is information integrity, confidentiality, and non-repudiation in data exchange for the IoT. The suggested encryption algorithm was simulated in MATLAB, and its speed and safety efficiency were evaluated in comparison with conventional encryption algorithms.
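The abstract does not name the exact cipher combination, so the sketch below only illustrates the general hybrid pattern (asymmetric key transport plus symmetric bulk encryption) with the Python cryptography package; the algorithms and key sizes are our assumptions, not the authors' design.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Receiver's long-term asymmetric key pair (e.g., held by an IoT gateway).
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub = priv.public_key()

    def hybrid_encrypt(message: bytes):
        session_key = Fernet.generate_key()                 # fresh symmetric key per message
        ciphertext = Fernet(session_key).encrypt(message)   # fast bulk encryption
        wrapped = pub.encrypt(session_key, padding.OAEP(    # key transported under RSA-OAEP
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(), label=None))
        return wrapped, ciphertext

    def hybrid_decrypt(wrapped: bytes, ciphertext: bytes) -> bytes:
        session_key = priv.decrypt(wrapped, padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(), label=None))
        return Fernet(session_key).decrypt(ciphertext)

    w, c = hybrid_encrypt(b"temperature=21.4")
    assert hybrid_decrypt(w, c) == b"temperature=21.4"

The expensive asymmetric operation runs once per message or session, so a constrained IoT node mostly pays the cheap symmetric cost, which is the usual motivation for hybrid schemes in this setting.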

Gao, M., Qu, G..  2017.  A novel approximate computing based security primitive for the Internet of Things. 2017 IEEE International Symposium on Circuits and Systems (ISCAS). :1–4.

The Internet of Things (IoT) has become ubiquitous in our daily life as billions of devices are connected through the Internet infrastructure. However, the rapid increase of IoT devices brings many non-traditional challenges for system design and implementation. In this paper, we focus on the hardware security vulnerabilities and ultra-low power design requirement of IoT devices. We briefly survey the existing design methods to address these issues. Then we propose an approximate computing based information hiding approach that provides security with low power. We demonstrate that this security primitive can be applied for security applications such as digital watermarking, fingerprinting, device authentication, and lightweight encryption.

Hänel, T., Bothe, A., Helmke, R., Gericke, C., Aschenbruck, N..  2017.  Adjustable security for RFID-equipped IoT devices. 2017 IEEE International Conference on RFID Technology Application (RFID-TA). :208–213.

Over the last few years, the number of rather simple interconnected devices in nonindustrial scenarios (e.g., for home automation) has steadily increased. For ease of use, overall system security is often neglected. Before the Internet of Things (IoT) reaches the same distribution rate and impact in industrial applications, where security is crucial for success, solutions that combine usability, scalability, and security are required. We develop such a security system, mainly targeting sensor modules equipped with Radio Frequency IDentification (RFID) tags, which we leverage to increase the security level. More specifically, we consider a network based on Message Queue Telemetry Transport (MQTT), a widely adopted protocol for the IoT.
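To make the MQTT setting concrete, here is a hedged one-shot publish over an authenticated, TLS-protected connection using paho-mqtt's helper API; the broker address, topic, CA file, and credentials are placeholders, and the paper's RFID-derived secrets would stand in for the static password shown.

    import paho.mqtt.publish as publish

    # One-shot authenticated publish over TLS; an RFID-tag-derived secret would
    # replace the static password in an adjustable-security deployment.
    publish.single(
        topic="plant/line1/temperature",
        payload="21.4",
        qos=1,
        hostname="broker.example.local",
        port=8883,
        auth={"username": "sensor-module-07", "password": "tag-derived-secret"},
        tls={"ca_certs": "broker-ca.pem"},
    )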

Fayyad, S., Noll, J..  2017.  Toward objective security measurability and manageability. 2017 14th International Conference on Smart Cities: Improving Quality of Life Using ICT IoT (HONET-ICT). :98–104.

Security Evaluation and Management (SEM) is a considerably important process for protecting an Embedded System (ES) from various kinds of security exploits. In general, SEM processes face challenges that limit their efficiency. Some of these challenges are system-based, such as the heterogeneity among system components and the system's size. Other challenges are expert-based, such as the possibility of mis-evaluation and the non-continuous availability of experts. Many of these challenges were addressed by the Multi Metric (MM) framework, which depends on expert (subjective) evaluation for its basic evaluations. Despite its productivity, subjective evaluation has drawbacks (e.g., expert mis-evaluation) that foster the need to consider objective evaluations in the MM framework. In addition, the MM framework is system-centric; thus, when modelling complex and huge systems with the MM framework, a guide is needed to indicate the changes required to reach desirable security requirements. This paper proposes extensions to the MM framework that incorporate objective evaluations and serve as a guide for the changes needed to satisfy desirable security requirements.

Miloslavskaya, N., Tolstoy, A..  2017.  Ensuring Information Security for Internet of Things. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :62–69.

A survey of related work in the very specialized field of ensuring information security (IS) for the Internet of Things (IoT) allowed us to work out a taxonomy of typical attacks against IoT elements (with special attention to IoT device protection). The key directions for countering these attacks were defined on this basis. In response to the modern demand for processing the IoT's big IS-related data, the application of a Security Intelligence approach is proposed. The main direction of future research, namely IoT operational resilience, is indicated.

Massonet, P., Deru, L., Achour, A., Dupont, S., Croisez, L. M., Levin, A., Villari, M..  2017.  Security in Lightweight Network Function Virtualisation for Federated Cloud and IoT. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :148–154.

Smart IoT applications require connecting multiple IoT devices and networks with multiple services running in fog and cloud computing platforms. One approach to connecting IoT devices with cloud and fog services is to create a federated virtual network. The main benefit of this approach is that IoT devices can then interact with multiple remote services using an application-specific federated network where no traffic from other applications passes. This federated network spans multiple cloud platforms and IoT networks, but it can be managed as a single entity. From the point of view of security, federated virtual networks can be managed centrally and secured with a coherent global network security policy. This does not mean that the same security policy applies everywhere, but that the different security policies are specified in a single coherent security policy. In this paper we propose to extend a federated cloud networking security architecture so that it can secure IoT devices and networks. The federated network is extended to the edge of IoT networks by integrating a federation agent in an IoT gateway or network controller (CAN bus, 6LoWPAN, LoRa, ...). This allows communication between the federated cloud network and the IoT network. The security architecture is based on the concepts of network function virtualisation (NFV) and service function chaining (SFC) for composing security services. The IoT network and devices can then be protected by security virtual network functions (VNFs) running at the edge of the IoT network.

Ghourab, E. M., Azab, M., Rizk, M., Mokhtar, A..  2017.  Security versus reliability study for power-limited mobile IoT devices. 2017 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). :430–438.

The Internet of Things (IoT) depicts an intelligent future in which IoT-based devices have sensing and computing capabilities and interact with each other. We live in the era of the Internet and are rapidly moving towards a smart planet where devices are capable of connecting to each other. Cooperative ad-hoc vehicle systems are a main driving force for the actualization of the IoT-based concept, and the Vehicular Ad-hoc Network (VANET) is considered a promising platform for intelligent wireless communication systems. This paper presents and analyzes the tradeoff between the security and reliability of an IoT-based VANET system in the presence of eavesdropping attacks, using smart vehicle relays based on an opportunistic relay selection (ORS) scheme. The optimization of the distance between the source (S), destination (D), and eavesdropper (E) is then illustrated in detail, showing the effect of this parameter on the IoT-based network. In order to improve the security-reliability tradeoff (SRT), we quantify the attainable SRT improvement with variable distances between IoT-based nodes. It is shown that, given the maximum tolerable intercept probability (IP), the outage probability (OP) of our proposed model approaches zero as Ge → ∞, where Ge is the distance ratio between S and E via the vehicle relay (R).
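The security-reliability tradeoff itself is straightforward to reproduce numerically; the Monte Carlo sketch below is our own illustration with assumed Rayleigh fading and a path-loss exponent of 3 (not the authors' system model), estimating the outage probability at the destination and the intercept probability at the eavesdropper as the relay-to-eavesdropper distance grows.

    import random, math

    def sr_tradeoff(d_rd, d_re, rate=1.0, snr_db=10.0, alpha=3.0, trials=100_000):
        """Estimate outage probability (OP) and intercept probability (IP)
        for a relay->destination link overheard by an eavesdropper."""
        snr = 10 ** (snr_db / 10)
        op = ip = 0
        for _ in range(trials):
            h_d = random.expovariate(1.0)            # Rayleigh fading power, destination link
            h_e = random.expovariate(1.0)            # Rayleigh fading power, eavesdropper link
            c_d = math.log2(1 + snr * h_d * d_rd ** -alpha)
            c_e = math.log2(1 + snr * h_e * d_re ** -alpha)
            op += c_d < rate                         # destination cannot decode
            ip += c_e >= rate                        # eavesdropper can decode
        return op / trials, ip / trials

    for d_re in (1.0, 2.0, 4.0):                      # pushing the eavesdropper farther away
        print(d_re, sr_tradeoff(d_rd=1.0, d_re=d_re))

Sweeping d_re reproduces the qualitative behaviour described above: moving the eavesdropper away drives the intercept probability down while the outage probability, set by the main link, stays fixed.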

Jiang, J., Chaczko, Z., Al-Doghman, F., Narantaka, W..  2017.  New LQR Protocols with Intrusion Detection Schemes for IOT Security. 2017 25th International Conference on Systems Engineering (ICSEng). :466–474.

Link quality protocols employ link quality estimators to collect statistics on the wireless link, either independently or cooperatively among the sensor nodes. Furthermore, link quality routing protocols for wireless sensor networks may modify an estimator to meet their needs. Link quality estimators are vulnerable to malicious attacks that can exploit them. A malicious node may share false information with its neighboring sensor nodes to affect the computation of their estimates; consequently, its neighbors gather incorrect statistics about their wireless links. This paper aims to detect malicious nodes that manipulate the link quality estimator of the routing protocol. In order to accomplish this task, the MINTROUTE and CTP routing protocols are selected and updated with intrusion detection schemes (IDSs) for further investigation with other factors. It is shown that these two routing protocols possess inherent susceptibilities capable of corrupting the link quality calculations, and that malicious nodes abusing such vulnerabilities can be identified through operational detection mechanisms. The overall performance of the new LQR protocols with IDS features is evaluated, validated, and reported via detection rates and false alarm rates.
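A minimal flavour of such a detection scheme, under our own simplifying assumption that neighbours can cross-check link quality advertisements for the same link, is sketched below; the threshold, node IDs, and data layout are illustrative and not the MINTROUTE/CTP integration described in the paper.

    from statistics import median

    def flag_lq_liars(reports: dict[str, float], threshold: float = 0.3) -> list[str]:
        """reports maps node id -> link quality it advertises for the same link.
        A node whose advertised value deviates from the neighbourhood median by
        more than `threshold` is flagged for the intrusion detection layer."""
        baseline = median(reports.values())
        return [node for node, lq in reports.items() if abs(lq - baseline) > threshold]

    reports = {"n1": 0.82, "n2": 0.79, "n3": 0.85, "mallory": 0.15}
    print(flag_lq_liars(reports))   # -> ['mallory']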

Miller, J. A., Peng, H., Cotterell, M. E..  2017.  Adding Support for Theory in Open Science Big Data. 2017 IEEE World Congress on Services (SERVICES). :71–75.

Open Science Big Data is emerging as an important area of research and software development. Although there are several high quality frameworks for Big Data, additional capabilities are needed for Open Science Big Data. These include data provenance, citable reusable data, data sources providing links to research literature, relationships to other data and theories, transparent analysis/reproducibility, data privacy, new optimizations/advanced algorithms, data curation, data storage and transfer. An important part of science is explanation of results, ideally leading to theory formation. In this paper, we examine means for supporting the use of theory in big data analytics as well as using big data to assist in theory formation. One approach is to fit data in a way that is compatible with some theory, existing or new. Functional Data Analysis allows precise fitting of data as well as penalties for lack of smoothness or even departure from theoretical expectations. This paper discusses principal differential analysis and related techniques for fitting data where, for example, a time-based process is governed by an ordinary differential equation. Automation in theory formation is also considered. Case studies in the fields of computational economics and finance are considered.
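As a toy version of fitting data that theory says should obey an ordinary differential equation, the sketch below (our illustration, not the paper's Functional Data Analysis machinery) recovers the decay rate of dy/dt = -k*y from noisy samples by least squares over the ODE solution.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 40)
    y_obs = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)   # truth: k = 0.7

    def sse(k: float) -> float:
        # Solve dy/dt = -k*y from the first observation and score the fit.
        sol = solve_ivp(lambda _t, y: -k * y, (t[0], t[-1]), [y_obs[0]], t_eval=t)
        return float(np.sum((sol.y[0] - y_obs) ** 2))

    fit = minimize_scalar(sse, bounds=(0.01, 5.0), method="bounded")
    print("estimated decay rate k ~", round(fit.x, 3))

Penalties for lack of smoothness or for departure from theoretical expectations, as in principal differential analysis, would enter as additional terms in the objective.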

Jiang, L., Kuhn, W., Yue, P..  2017.  An interoperable approach for Sensor Web provenance. 2017 6th International Conference on Agro-Geoinformatics. :1–6.

The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
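A small rdflib sketch of the kind of statement such a model exposes as Linked Data is given below; the observation, activity, and sensor URIs are invented placeholders, and only core W3C PROV-O terms are used rather than the paper's extended Sensor Web vocabulary.

    from rdflib import Graph, Namespace, RDF

    PROV = Namespace("http://www.w3.org/ns/prov#")
    EX = Namespace("http://example.org/sensorweb/")          # placeholder namespace

    g = Graph()
    g.bind("prov", PROV)
    g.add((EX.observation42, RDF.type, PROV.Entity))         # a sensor observation
    g.add((EX.samplingRun7, RDF.type, PROV.Activity))        # the process that produced it
    g.add((EX.thermometer3, RDF.type, PROV.Agent))           # the responsible sensor
    g.add((EX.observation42, PROV.wasGeneratedBy, EX.samplingRun7))
    g.add((EX.samplingRun7, PROV.wasAssociatedWith, EX.thermometer3))

    print(g.serialize(format="turtle"))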

Suh, Y. K., Ma, J..  2017.  SuperMan: A Novel System for Storing and Retrieving Scientific-Simulation Provenance for Efficient Job Executions on Computing Clusters. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W). :283–288.

Compute-intensive simulations typically charge substantial workloads on an online simulation platform backed by limited computing clusters and storage resources. Some (or most) of the simulations initiated by users may be accompanied by input parameters/files that have already been provided by other (or the same) users in the past. Unfortunately, these duplicate simulations may aggravate the performance of the platform through drastic consumption of the limited resources shared by a number of users on the platform. To minimize or avoid conducting repeated simulations, we present a novel system, called SUPERMAN (SimUlation ProvEnance Recycling MANager), that can record simulation provenances and recycle the results of past simulations. This system presents a great opportunity to not only reutilize existing results but also perform various analytics helpful for those who are not familiar with the platform. The system also offers interoperability across other systems by collecting the provenances in a standardized format. In our simulated experiments we found that over half of past computing jobs could be answered by our system without actual executions.
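The recycling idea can be captured in a few lines: key each job by a digest of its input parameters and files and return the stored result when that digest has been seen before. The sketch below is our own simplification with hypothetical names, not SUPERMAN's provenance schema.

    import hashlib, json

    results_store: dict[str, dict] = {}    # digest -> cached simulation output

    def job_digest(params: dict, input_files: dict[str, bytes]) -> str:
        h = hashlib.sha256(json.dumps(params, sort_keys=True).encode())
        for name in sorted(input_files):
            h.update(name.encode())
            h.update(input_files[name])
        return h.hexdigest()

    def run_or_recycle(params, input_files, simulate):
        key = job_digest(params, input_files)
        if key in results_store:                      # duplicate job: recycle past result
            return results_store[key], True
        result = simulate(params, input_files)        # first occurrence: actually execute
        results_store[key] = result
        return result, False

    out, recycled = run_or_recycle({"dt": 0.1}, {"mesh.dat": b"..."},
                                   lambda p, f: {"energy": 42.0})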

Stephan, E., Raju, B., Elsethagen, T., Pouchard, L., Gamboa, C..  2017.  A scientific data provenance harvester for distributed applications. 2017 New York Scientific Data Summit (NYSDS). :1–9.

Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file based evidence typically produced by distributed applications. To achieve this, file based evidence is extracted and transformed into an intermediate data format inspired in part by W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for Accelerated Climate Modeling for Energy (ACME) project funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as other scientific applications that log provenance related information.
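The general harvesting pattern (scrape file-based evidence, normalize it into structured provenance messages) can be sketched as follows; the log format, field names, and tabular output are invented for illustration and are not the HAPI syntax itself.

    import csv, io, re

    LOG_LINE = re.compile(
        r"^(?P<ts>\S+) (?P<task>\S+) (?P<status>SUCCESS|FAILURE) wall=(?P<wall>[\d.]+)s$")

    def harvest(log_text: str) -> str:
        """Turn free-form run logs into a CSV-on-the-Web style provenance table."""
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=["timestamp", "activity", "status", "wall_seconds"])
        writer.writeheader()
        for line in log_text.splitlines():
            m = LOG_LINE.match(line.strip())
            if m:
                writer.writerow({"timestamp": m["ts"], "activity": m["task"],
                                 "status": m["status"], "wall_seconds": m["wall"]})
        return out.getvalue()

    sample = ("2017-06-01T12:00:03Z ocean_step SUCCESS wall=341.2s\n"
              "2017-06-01T12:06:10Z ice_step FAILURE wall=12.9s")
    print(harvest(sample))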

Dai, D., Chen, Y., Carns, P., Jenkins, J., Ross, R..  2017.  Lightweight Provenance Service for High-Performance Computing. 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT). :117–129.

Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

Bertino, E., Kantarcioglu, M..  2017.  A Cyber-Provenance Infrastructure for Sensor-Based Data-Intensive Applications. 2017 IEEE International Conference on Information Reuse and Integration (IRI). :108–114.

Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10^4 to 10^3 electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface Seff = 4 × 10^-14 m^2 (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few (<100) electron regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal a strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled (Ω/ω ...

Polyzos, G. C., Fotiou, N..  2017.  Blockchain-Assisted Information Distribution for the Internet of Things. 2017 IEEE International Conference on Information Reuse and Integration (IRI). :75–78.

The Internet of Things (IoT) is envisioned to include billions of pervasive and mission-critical sensors and actuators connected to the (public) Internet. This network of smart devices is expected to generate and have access to vast amounts of information, creating unique opportunities for novel applications but, at the same time raising significant privacy and security concerns that impede its further adoption and development. In this paper, we explore the potential of a blockchain-assisted information distribution system for the IoT. We identify key security requirements of such a system and we discuss how they can be satisfied using blockchains and smart contracts. Furthermore, we present a preliminary design of the system and we identify enabling technologies.
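Smart-contract code is chain-specific, so rather than inventing contract source, the sketch below mimics in plain Python the minimal ledger behaviour the paper relies on: an append-only, hash-chained registry that IoT endpoints could use to advertise and verify information items. All names are illustrative.

    import hashlib, json, time

    class TinyLedger:
        """Append-only hash chain standing in for a blockchain/smart-contract registry."""
        def __init__(self):
            self.blocks = [{"prev": "0" * 64, "record": {"genesis": True}, "ts": 0.0}]

        def _digest(self, block) -> str:
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def register(self, record: dict):
            # Each new block commits to the previous one, making tampering detectable.
            self.blocks.append({"prev": self._digest(self.blocks[-1]),
                                "record": record, "ts": time.time()})

        def verify(self) -> bool:
            return all(b["prev"] == self._digest(prev)
                       for prev, b in zip(self.blocks, self.blocks[1:]))

    ledger = TinyLedger()
    ledger.register({"item": "urn:iot:temp-feed-12", "owner": "gateway-3"})
    print(ledger.verify())   # True unless some block was tampered with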

Davis, D. B., Featherston, J., Fukuda, M., Asuncion, H. U..  2017.  Data Provenance for Multi-Agent Models. 2017 IEEE 13th International Conference on e-Science (e-Science). :39–48.

Multi-agent simulations are useful for exploring collective patterns of individual behavior in social, biological, economic, network, and physical systems. However, there is no provenance support for multi-agent models (MAMs) in a distributed setting. To this end, we introduce ProvMASS, a novel approach to capture provenance of MAMs in a distributed memory by combining inter-process identification, lightweight coordination of in-memory provenance storage, and adaptive provenance capture. ProvMASS is built on top of the Multi-Agent Spatial Simulation (MASS) library, a framework that combines multi-agent systems with large-scale fine-grained agent-based models, or MAMs. Unlike other environments supporting MAMs, MASS parallelizes simulations with distributed memory, where agents and spatial data are shared application resources. We evaluate our approach with provenance queries to support three use cases and performance measures. Initial results indicate that our approach can support various provenance queries for MAMs at reasonable performance overhead.

That, D. H. T., Fils, G., Yuan, Z., Malik, T..  2017.  Sciunits: Reusable Research Objects. 2017 IEEE 13th International Conference on e-Science (e-Science). :374–383.

Science is conducted collaboratively, often requiring knowledge sharing about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets, but more often includes software, its past execution, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregation and identification of diverse elements of computational experiments. While a necessary method, mere aggregation is not sufficient for the sharing of computational experiments. Other users must be able to easily recompute on these shared research objects. In this paper, we present the sciunit, a reusable research object in which aggregated content is recomputable. We describe a Git-like client that efficiently creates, stores, and repeats sciunits. We show through analysis that sciunits repeat computational experiments with minimal storage and processing overhead. Finally, we provide an overview of sharing and reproducible cyberinfrastructure based on sciunits gaining adoption in the domain of geosciences.

Sowmyadevi, D., Karthikeyan, K..  2017.  Merkle-Hellman knapsack-side channel monitoring based secure scheme for detecting provenance forgery and selfish nodes in wireless sensor networks. 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–8.

Provenance forgery and packet loss attacks are considered threats in large-scale wireless sensor networks, which are deployed for diverse application domains. The variety of information sources makes it necessary to ensure the trustworthiness of information, so that only truthful information is considered in the decision process. Details about the sensor nodes play a major role in determining their trust values. In this paper, a novel lightweight secure provenance method is introduced for improving the security of provenance data transmission. The proposed system comprises provenance verification and reconstruction at the base station by means of Merkle-Hellman knapsack based secure provenance encoding in the Bloom filter framework. Side Channel Monitoring (SCM) is exploited for detecting the presence of selfish nodes and packet-drop behaviors. This lightweight secure provenance method decreases energy and bandwidth utilization with well-organized storage and secure data transmission. The experimental results establish the efficacy and efficiency of the secure provenance scheme by efficiently detecting provenance forgery and packet drop attacks, as seen from the assessment in terms of provenance verification failure rate, collection error, packet drop rate, space complexity, energy consumption, true positive rate, false positive rate, and packet drop attack detection.
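To make the Bloom-filter side of the scheme concrete, here is a hedged sketch of encoding the IDs of the nodes on a packet's path into a fixed-size filter and checking membership at the base station; the filter size and hash construction are arbitrary choices for illustration, and the Merkle-Hellman knapsack encryption the paper applies on top is omitted.

    import hashlib

    M, K = 256, 3                    # filter size in bits, number of hash functions

    def _positions(node_id: str):
        # Derive K bit positions for a node ID from salted SHA-256 digests.
        for i in range(K):
            d = hashlib.sha256(f"{i}:{node_id}".encode()).digest()
            yield int.from_bytes(d[:4], "big") % M

    def encode_path(node_ids):
        bits = 0
        for node in node_ids:
            for p in _positions(node):
                bits |= 1 << p
        return bits

    def might_contain(bloom: int, node_id: str) -> bool:
        return all(bloom >> p & 1 for p in _positions(node_id))

    bloom = encode_path(["n17", "n04", "base"])
    print(might_contain(bloom, "n17"), might_contain(bloom, "mallory"))  # True, (almost surely) False

A Bloom filter can yield false positives but never false negatives, so a node that really is on the recorded path is never rejected by this check.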