Bibliography
With the increasing diversity of application needs (datacenters, IoT, content retrieval, industrial automation, etc.), new network architectures are continually being proposed to address specific requirements. From a network management perspective, enabling evolution towards such new architectures is both important and challenging. Given the ubiquity of the Internet, a clean-slate replacement of the entire infrastructure with a new architecture is impractical; instead, new architectures will likely come into existence as separate architectural islands that must interoperate, with servers, and more importantly content, residing in domains built on different architectures. This paper presents COIN, a content-oriented interoperability framework for current and future Internet architectures. COIN seeks to provide seamless connectivity and content accessibility across multiple such network architectures, including the current Internet, while preserving each domain's key architectural features and mechanisms and retaining flexibility for evolvability and extensibility. We focus on Information-Centric Networking (ICN), the prominent class of Future Internet architectures. COIN avoids expanding domain-specific protocols or namespaces; instead, it uses an application-layer Object Resolution Service to deliver the right "foreign" names to consumers. COIN's translation gateways retain only essential interoperability state, leverage encryption for confidentiality, and rely on domain-specific signatures to guarantee provenance and data integrity. We evaluate COIN using IP and two prominent ICN candidates, NDN and MobilityFirst. Measurements from an implementation of the gateways show that the overhead is manageable and scales well.
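A minimal sketch of the name-resolution idea the abstract describes, assuming a simple record structure of my own invention (the class, field, and method names below are illustrative, not COIN's actual API): a consumer asks the Object Resolution Service for a content identifier and receives the "foreign" name in the target architecture together with the translation gateway that bridges into that domain.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an Object Resolution Service (ORS) lookup:
// maps a content identifier to per-architecture "foreign" names plus
// the translation gateway that bridges into that domain.
final class ObjectResolutionService {

    record Resolution(String architecture, String foreignName, String gateway) {}

    private final Map<String, List<Resolution>> records = new HashMap<>();

    void register(String contentId, Resolution r) {
        records.computeIfAbsent(contentId, k -> new java.util.ArrayList<>()).add(r);
    }

    // Return the name reachable from the consumer's architecture, if any.
    Resolution resolve(String contentId, String consumerArch) {
        return records.getOrDefault(contentId, List.of()).stream()
                .filter(r -> r.architecture().equals(consumerArch))
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        ObjectResolutionService ors = new ObjectResolutionService();
        ors.register("video/lecture-01",
                new Resolution("NDN", "/univ/cs/lecture-01", "gw-ndn.example.net"));
        ors.register("video/lecture-01",
                new Resolution("IP", "https://cdn.example.net/lecture-01", "gw-ip.example.net"));

        Resolution r = ors.resolve("video/lecture-01", "NDN");
        System.out.println(r.foreignName() + " via " + r.gateway());
    }
}
```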
The JavaCard multi-application platform is deployed on over twenty billion smartcards, used in applications ranging from banking payments and authentication tokens to SIM cards and electronic documents. Most of these use cases require access to cryptographic primitives. The standard JavaCard API provides basic access to such functionality (e.g., RSA encryption) but does not expose low-level cryptographic primitives (e.g., elliptic curve operations) or essential data types (e.g., integers). Developers can access such features only through proprietary, manufacturer-specific APIs. Unfortunately, such APIs significantly reduce the interoperability and certification transparency of the resulting software, as they require non-disclosure agreements (NDAs) that prohibit public sharing of the applet's source code. We introduce JCMathLib, an open library that provides an intermediate layer realizing essential data types and low-level cryptographic primitives on top of the available high-level operations. To achieve this, we introduce a series of optimization techniques for resource-constrained platforms that make optimal use of the underlying hardware while keeping a small memory footprint. To the best of our knowledge, this is the first generic library for low-level cryptographic operations on JavaCards that does not rely on a proprietary API. Free of disclosure limitations, JCMathLib has the potential to increase transparency by enabling open code sharing, release of research prototypes, and public code audits. Moreover, JCMathLib can help resolve the conflict between strict open-source licenses such as the GPL and proprietary APIs available only under an NDA. This is of particular importance given the introduction of JavaCard API v3.1, which specifically targets IoT devices, where open-source development may be more common than in the relatively closed world of government-issued electronic documents.
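One optimization technique of this kind, sketched here under the assumption that the card offers fast hardware squaring (e.g., raw RSA with exponent 2 on the crypto coprocessor) but no native big-integer multiplication: products can be recovered from squares via the identity x*y = ((x+y)^2 - (x-y)^2) / 4. BigInteger stands in for the coprocessor so the sketch runs on a desktop JVM; on-card code would operate on byte arrays.

```java
import java.math.BigInteger;

// Emulating big-integer multiplication with squaring only:
// x*y = ((x+y)^2 - (x-y)^2) / 4.
public final class MulBySquaring {
    static BigInteger square(BigInteger v) {     // stand-in for hardware squaring
        return v.pow(2);
    }

    static BigInteger multiply(BigInteger x, BigInteger y) {
        BigInteger s1 = square(x.add(y));
        BigInteger s2 = square(x.subtract(y));
        return s1.subtract(s2).shiftRight(2);    // divide by 4
    }

    public static void main(String[] args) {
        BigInteger x = new BigInteger("123456789123456789");
BigInteger y = new BigInteger("987654321987654321");
        System.out.println(multiply(x, y).equals(x.multiply(y))); // prints: true
    }
}
```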
Based on an analysis of the difficulties and pain points of privacy protection in the opening and sharing of government data, this paper proposes a new method for the intelligent discovery and protection of structured and unstructured private data. Building on improvements to the existing government data-masking process, the method applies NLP and machine-learning techniques to the intelligent discovery of sensitive data, the automatic recommendation of masking algorithms, and fully automatic execution along the improved masking process. In addition, dynamic- and static-masking prototypes, taking text and databases as data sources, are designed and implemented on top of agent-based intelligent masking middleware. The results show that the recognition coverage and protection efficiency for private government data, especially unstructured government text, are significantly improved.
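A minimal sketch of the discover-then-mask pipeline for unstructured text, assuming simple rule-based detectors (the paper combines such pattern matching with NLP/ML models; the patterns and the tag-replacement masking rule below are illustrative assumptions, not the paper's implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Rule-based sensitive-data discovery followed by masking: each detector
// finds one category of sensitive value and replaces it with a type tag.
public final class MaskingDemo {
    private static final Map<String, Pattern> DETECTORS = new LinkedHashMap<>();
    static {
        DETECTORS.put("PHONE", Pattern.compile("\\b\\d{3}-\\d{4}-\\d{4}\\b"));
        DETECTORS.put("EMAIL", Pattern.compile("[\\w.]+@[\\w.]+\\.\\w+"));
    }

    static String mask(String text) {
        String out = text;
        for (Map.Entry<String, Pattern> e : DETECTORS.entrySet()) {
            // A real system would recommend a per-type masking algorithm
            // (hashing, partial redaction, etc.) instead of a fixed tag.
            out = e.getValue().matcher(out).replaceAll("[" + e.getKey() + "]");
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mask("Contact Ms. Li at 138-1234-5678 or li@gov.example.cn"));
        // -> Contact Ms. Li at [PHONE] or [EMAIL]
    }
}
```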
Efficient application of Internet of Battlefield Things (IoBT) technology on the battlefield calls for innovative solutions to control and manage the deluge of heterogeneous IoBT devices. This paper presents a new paradigm for addressing heterogeneity in the control of IoBT and IoT devices, enabling multi-force cooperation in challenging battlefield scenarios.
Cloud Storage Brokers (CSBs) provide seamless and concurrent access to multiple Cloud Storage Services (CSSs) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges, including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these issues. This paper therefore proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes, e.g., bucket-policy misconfigurations, and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating BrokerMonkey, a component that continuously injects failures into our reference CSB system, CloudRAID; CSBAuditor is thus continuously tested for efficiency, i.e., its ability to detect the changes injected by BrokerMonkey. CSBAuditor employs security metrics for risk analysis, computing severity scores for detected vulnerabilities using the Common Configuration Scoring System (CCSS) and thereby overcoming the insufficient security metrics of existing cloud-auditing schemes. CSBAuditor has been tested using various strategies, including chaos-engineering failure-injection strategies. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues, with a detection and recovery rate of over 96%.
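A minimal sketch of the snapshot-reconciliation idea behind this kind of auditing, assuming a simple bucket-to-policy map (the bucket names, policy values, and drift handling below are illustrative assumptions, not CSBAuditor's implementation): the last known-good snapshot is compared against the currently observed cloud state, and any drift is flagged for scoring and remediation.

```java
import java.util.HashMap;
import java.util.Map;

// Compare a known-good snapshot of bucket policies against the observed
// state and report drift, e.g., a misconfiguration injected by a
// BrokerMonkey-style chaos component.
public final class BucketPolicyAudit {
    public static void main(String[] args) {
        Map<String, String> snapshot = new HashMap<>();   // bucket -> expected policy
        snapshot.put("cloudraid-data", "private");
        snapshot.put("cloudraid-logs", "private");

        Map<String, String> observed = new HashMap<>();   // bucket -> actual policy
        observed.put("cloudraid-data", "public-read");    // injected failure
        observed.put("cloudraid-logs", "private");

        for (Map.Entry<String, String> e : snapshot.entrySet()) {
            String actual = observed.get(e.getKey());
            if (!e.getValue().equals(actual)) {
                // A real auditor would compute a CCSS severity score here and
                // remediate by restoring the policy recorded in the snapshot.
                System.out.printf("DRIFT on %s: expected=%s actual=%s -> remediate%n",
                        e.getKey(), e.getValue(), actual);
            }
        }
    }
}
```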
A wide variety of security software systems need to be integrated into a Security Orchestration Platform (SecOrP) to streamline the processes of defending against and responding to cybersecurity attacks. Lack of interpretability and interoperability among security systems is considered the key obstacle to fully leveraging their collective capabilities. The processes of integrating security systems are repetitive, time-consuming, and error-prone, and are carried out manually by human experts or with ad-hoc methods. To help automate security systems integration, we propose an Ontology-driven approach for a Security OrchestrAtion Platform (OnSOAP). The developed solution enables interpretability and interoperability among security systems that may exist in operational silos. We demonstrate OnSOAP's support for the automated integration of security systems to execute the incident response process with three security systems (Splunk, LimaCharlie, and Snort) for a Distributed Denial of Service (DDoS) attack. The evaluation results show that OnSOAP enables a SecOrP to interpret the input and output of different security systems, produce error-free integration details, and make security systems interoperable with each other to automate and accelerate the incident response process.
The smart grid is a complex cyber-physical system (CPS) that poses challenges related to scale, integration, interoperability, processes, governance, and human elements. The US National Institute of Standards and Technology (NIST) and its government, university, and industry collaborators developed an approach, called the CPS Framework, for reasoning about CPS across multiple levels of concern and competency, including trustworthiness, privacy, reliability, and regulatory compliance. The approach uses ontology and reasoning techniques to achieve a greater understanding of the interdependencies among the elements of the CPS Framework model as applied to use cases. This paper demonstrates that the approach extends naturally to automated and manual decision-making for smart grids: we apply it to smart grid use cases and illustrate how it can be used to analyze grid topologies and address concerns about the smart grid. Smart grid stakeholders whose decision-making may be assisted by this approach include planners, designers, and operators.
Data Distribution Service (DDS) is a real-time peer-to-peer protocol that serves as a scalable middleware between distributed networked systems in many Industrial IoT domains, such as automotive, medical, energy, and defense. Since the initial ratification of the standard, the specifications have introduced a Security Model and Service Plugin Interface (SPI) architecture, facilitating authenticated encryption and data-centric access control while preserving interoperable data exchange. However, as of Secure DDS v1.1, the default plugin specifications exchange digitally signed capability lists of both participants in the clear during the cryptographic handshake for permission attestation, thus breaching the confidentiality of the connection's context. In this work, we present an attacker model that uses the network reconnaissance afforded by this leaked context, in conjunction with formal verification and model checking, to arbitrarily reason about the underlying topology and the reachability of information flows, enabling targeted attacks such as selective denial of service, adversarial partitioning of the data bus, or vulnerability excavation of vendor implementations.
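A sketch of the reconnaissance step this leak enables, assuming a simplified stand-in for a DDS Security permissions grant (the XML shape below is condensed and illustrative, not the exact schema): a passive observer of the cleartext handshake can read each participant's signed capability list and enumerate the topics it may publish or subscribe to, gradually reconstructing the data-bus topology.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Parse a (hypothetical, simplified) permissions document captured from the
// cleartext handshake and list the topics it reveals to an observer.
public final class PermissionsRecon {
    public static void main(String[] args) throws Exception {
        String leaked = """
            <permissions><grant name="ParticipantA">
              <allow_rule>
                <publish><topics><topic>EngineTelemetry</topic></topics></publish>
                <subscribe><topics><topic>Commands</topic></topics></subscribe>
              </allow_rule>
            </grant></permissions>""";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(leaked.getBytes()));
        NodeList topics = doc.getElementsByTagName("topic");
        for (int i = 0; i < topics.getLength(); i++) {
            System.out.println("observed topic: " + topics.item(i).getTextContent());
        }
    }
}
```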
Developing information systems for education and the labour market using web and grid service architectures makes them modular, expandable, and interoperable. Applying ontologies to the web helps collect and select knowledge about a certain field in a generic way, enabling different applications to understand, use, reuse, and share that knowledge. A necessary step before publishing computer-interpretable data on the public web is the implementation of common standards that ensure the exchange of information. The Croatian Qualifications Framework (CROQF) is a project for the standardization of occupations for the labour market, as well as of sets of qualifications, skills, and competences and their mutual relations. This paper analyses a substantial body of research from the last decade on the application of ontologies to information systems in education. The main goal is to compare the achieved results according to: 1) phases of development and classifications of education-related ontologies; 2) areas of education; and 3) standards and structures of metadata for educational systems. The collected information provides insight into the building blocks of CROQF, both those well supported by experience and best practices and those that are not, together with guidelines for developing its own standards using ontological structures.