Bibliography
Nowadays it is becoming trivial to run multiple virtual machines in parallel on hardware platforms with high processing power. This cost-effective approach can be found at Internet service providers, in cloud service providers' environments, in research and development lab testing environments (for example, university student labs), in virtual applications for security evaluation, and in many other places. In the aforementioned cases, it is often necessary to start and/or stop virtual machines on the fly. At cloud service providers, all creation and tear-down actions are triggered by customer requests and cannot be postponed or delayed for later evaluation. When a new virtual machine is created, it is imperative to assign unique IP addresses to all network interfaces, along with Domain Name System (DNS) records that contain text-based data, IP addresses, etc. Even worse, if a virtual machine has to be stopped or torn down, critical network resources such as IP addresses and DNS records have to be carefully controlled in order to avoid IP address conflicts and name-resolution problems between an old virtual machine and a newly created one. This paper proposes a provisioning mechanism that avoids both DNS record and IP address conflicts due to human misconfiguration, problems that can cause network operation service disruptions.
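To make the idea concrete, the following minimal sketch (class and method names are our own illustration, not the paper's implementation) shows how a single registry can allocate IP addresses and DNS records atomically and release them on tear-down, which is the property that prevents conflicts between old and new virtual machines:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch of conflict-free provisioning: IP addresses and DNS names
 * are allocated atomically from a single registry, so a new VM can never
 * receive a resource still held by another (possibly stopped) VM.
 * All names here are hypothetical, not taken from the paper.
 */
public class ProvisioningRegistry {
    private final Map<String, String> ipToVm = new ConcurrentHashMap<>();
    private final Map<String, String> dnsToVm = new ConcurrentHashMap<>();

    /** Atomically claim an IP and a DNS record for a VM; fail on any conflict. */
    public synchronized void provision(String vmId, String ip, String fqdn) {
        if (ipToVm.containsKey(ip))
            throw new IllegalStateException("IP conflict: " + ip + " held by " + ipToVm.get(ip));
        if (dnsToVm.containsKey(fqdn))
            throw new IllegalStateException("DNS conflict: " + fqdn + " held by " + dnsToVm.get(fqdn));
        ipToVm.put(ip, vmId);
        dnsToVm.put(fqdn, vmId);
        // A real system would now push the A record (and PTR record) to the DNS server.
    }

    /** Release every resource owned by a VM on tear-down, preventing stale records. */
    public synchronized void teardown(String vmId) {
        ipToVm.values().removeIf(vmId::equals);
        dnsToVm.values().removeIf(vmId::equals);
    }
}
```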
Internet of Things (IoT) systems are becoming widely used, which makes them a high-value target for both hackers and crackers. From gaining access to sensitive information to using them as bots in complex attacks, the variety of gains to be had from exploiting different security vulnerabilities makes the security of IoT devices one of the most challenging desiderata for cyber-security experts. In this paper, we propose a new IoT system designed to ensure five data security principles: confidentiality, integrity, availability, authentication and authorization. The innovative aspects are the usage of web-based communication and a custom dynamic data request structure.
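Since the abstract does not spell out the request format, the following is a purely hypothetical sketch of what such a dynamic request envelope could look like; the field names are assumptions, chosen only to show how authentication and integrity information can ride along with a per-request choice of data fields:

```java
import java.util.List;

/**
 * Hypothetical sketch of a dynamic data request envelope (not the paper's
 * actual structure). The client lists exactly which fields it wants
 * (the "dynamic" part), authenticates with a token, and protects
 * integrity with an HMAC computed over the serialized request.
 */
public record DataRequest(
        String deviceId,        // target IoT device
        List<String> fields,    // requested fields, chosen per request
        String authToken,       // supports authentication and authorization
        String hmacSha256       // integrity check over the serialized request
) {}
```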
Recently, a large number of research studies aimed at privacy-preserving data publishing have been conducted. We find that most K-anonymity algorithms fail to consider the characteristics of attribute-value distributions in the data and the differences in contribution value among quasi-identifier attributes in service-oriented settings. In this paper, the importance of the distribution characteristics of attribute values and of the differences in contribution value of quasi-identifier attributes to anonymization results is illustrated. In order to maximize the utility of released data, a service-oriented adaptive anonymity algorithm is proposed. We establish a model of reaction dispersion degree to quantify the characteristics of attribute-value distribution and introduce the concept of utility weight related to the contribution value of quasi-identifier attributes. The priority coefficient and the characterization coefficient of partition quality are defined to adaptively optimize the selection of dimension and splitting value in the anonymity-group partition process, which reduces unnecessary information loss and thereby further improves the utility of the anonymized data. The rationality and validity of the algorithm are verified by theoretical analysis and multiple experiments.
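One plausible way to formalize these notions (assumed here for illustration; the paper's exact definitions may differ) is to measure dispersion as normalized entropy of each attribute's value distribution and combine it with the utility weight when choosing the split dimension:

```latex
% Assumed formalization, not the paper's exact definitions:
% dispersion of quasi-identifier attribute A_j as normalized entropy,
% and a priority coefficient combining dispersion with utility weight w_j.
\[
  D_j \;=\; -\frac{1}{\ln m_j} \sum_{v \in \mathrm{dom}(A_j)} p_j(v)\,\ln p_j(v),
  \qquad
  \mathrm{prio}(A_j) \;=\; w_j \cdot D_j ,
\]
% where p_j(v) is the relative frequency of value v in attribute A_j and
% m_j is the number of distinct values; each partition step would then
% split along the attribute with the largest prio(A_j).
```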
Nowadays, trust and reputation models are used to build a wide range of trust-based security mechanisms and trust-based service management applications in the Internet of Things (IoT). Considering trust as a single unit can result in missing important and significant factors. We split trust into its building blocks, then sort and assign weights to these building blocks (trust metrics) on the basis of their priorities for the transaction context of a particular goal. To perform these processes, we treat trust as a multi-criteria decision-making problem, where a set of trustworthiness metrics represents the decision criteria. We introduce the entropy-based fuzzy analytic hierarchy process (EFAHP) as a trust model for selecting a trustworthy service provider, since decision making over multiple trust metrics is inherently structural. EFAHP provides: 1) fuzziness, which fits the vagueness, uncertainty, and subjectivity of trust attributes; 2) AHP, which is a systematic way of making decisions in complex multi-criteria settings; and 3) the entropy concept, which is utilized to calculate aggregate weights for each service provider. We present a numerical illustration in trust-based service-oriented architecture in the IoT (SOA-IoT) to demonstrate service provider selection using the EFAHP model in assessing and aggregating the trust scores.
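The entropy part typically follows the standard entropy-weight method; one common form (the paper's exact aggregation may differ) is:

```latex
% Standard entropy-weight computation for n service providers scored on
% criterion (trust metric) j; a common form, possibly differing in detail
% from the paper's aggregation.
\[
  p_{ij} = \frac{x_{ij}}{\sum_{k=1}^{n} x_{kj}}, \qquad
  E_j = -\frac{1}{\ln n} \sum_{i=1}^{n} p_{ij}\,\ln p_{ij}, \qquad
  w_j = \frac{1 - E_j}{\sum_{k} \left(1 - E_k\right)},
\]
% so metrics whose scores vary more across providers (lower entropy E_j)
% receive larger weights w_j in the aggregated trust score.
```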
Engineering complex distributed systems is challenging. Recent solutions for the development of cyber-physical systems (CPS) in industry tend to rely on architectural designs based on service orientation, where the constituent components are deployed according to their service behavior and are to be understood as loosely coupled and mostly independent. In this paper, we develop a workflow that combines contract-based and CPS model-based specifications with service orientation, and analyze the resulting model using fault injection to assess the dependability of the systems. Compositionality principles based on the contract specification help us to make the analysis practical. The presented techniques are evaluated on two case studies.
An intelligent production line is a complex application integrating a large number of independent pieces of equipment over a network. Given the characteristics of CPS, existing modeling methods cannot adequately meet the application requirements of large-scale, high-performance systems. A formal simulation and verification framework and a verification method are therefore designed for performance constraints such as the real-time behavior and security of an intelligent production line based on a soft bus. A model-based, service-oriented integration approach is employed, which adopts a model-centric way to automate the development course of the entire software life cycle. Development experience indicates that the proposed approach, based on the formal modeling and verification framework presented in this paper, can improve the performance of the system and is also helpful for achieving production-line balance and maintaining a reasonable utilization rate of the processing equipment.
The clear, social, and dark web have lately been identified as rich sources of valuable cyber-security information that, given the appropriate tools and methods, may be identified, crawled and subsequently leveraged into actionable cyber-threat intelligence. In this work, we focus on the information-gathering task and present a novel crawling architecture for transparently harvesting data from security websites in the clear web, security forums in the social web, and hacker forums/marketplaces in the dark web. The proposed architecture adopts a two-phase approach to data harvesting. Initially, a machine-learning-based crawler is used to direct the harvesting towards websites of interest, while in the second phase state-of-the-art statistical language modelling techniques are used to represent the harvested information in a latent low-dimensional feature space and rank it based on its potential relevance to the task at hand. The proposed architecture is realised using exclusively open-source tools, and a preliminary evaluation with crowdsourced results demonstrates its effectiveness.
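The second, ranking phase can be illustrated with a short sketch (the embedding model is assumed to exist; the names are ours, not the authors'): documents already mapped into the latent feature space are ordered by cosine similarity to a query vector representing the cyber-threat topic of interest.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of the second harvesting phase: documents embedded in a latent
 * low-dimensional space are ranked by cosine similarity to a query vector.
 * The embedding step (e.g. a trained topic or language model) is assumed.
 */
public class RelevanceRanker {

    /** Cosine similarity between two latent vectors of equal length. */
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
    }

    /** Order harvested documents in place, most relevant first. */
    static void rank(List<double[]> docs, double[] query) {
        docs.sort(Comparator.comparingDouble((double[] d) -> cosine(d, query)).reversed());
    }
}
```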
Statistics suggests, proceeding towards IoT generation, is increasing IoT devices at a drastic rate. This will be very challenging for our present-day network infrastructure to manage, this much of data. This may risk, both security and traffic collapsing. We have proposed an infrastructure with Fog Computing. The Fog layer consists two layers, using the concepts of Service oriented Architecture (SOA) and the Agent based composition model which ensures the traffic usage reduction. In order to have a robust and secured system, we have modified the Fog based agent model by replacing the SOA with secured Named Data Network (NDN) protocol. Knowing the fact that NDN has the caching layer, we are combining NDN and with Fog, as it can overcome the forwarding strategy limitation and memory constraints of NDN by the Agent Society, in the Middle layer along with Trust management.
Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires the architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge on composition of microservices. Related approaches to prevent such leaks do not take into account characteristics of cyber-physical systems like real-time behavior or message passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check if the real-time message passing of microservices is secure. In combination, our contributions prevent information leaks from emerging on composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
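The core of one such well-formedness rule can be sketched as follows (the interface and the rule are our illustration, not the paper's formalism): a refinement is rejected whenever a micro-level policy permits an information flow that the macro-level policy forbids, since exactly such flows can leak on composition.

```java
import java.util.Set;

/**
 * Minimal sketch (names are assumptions, not the paper's API) of a
 * well-formedness check on policy refinement.
 */
public class PolicyRefinementChecker {
    /** A directed information flow "from -> to" between services/domains. */
    record Flow(String from, String to) {}

    static boolean isWellFormed(Set<Flow> macroAllowed, Set<Flow> microAllowed) {
        // Every flow the microservice policies allow must already be allowed
        // at the macro level; otherwise composing the services can leak.
        return macroAllowed.containsAll(microAllowed);
    }
}
```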
Web application technologies are growing rapidly with continuous innovation and improvements. This paper focuses on the popular Spring Boot [1] Java-based framework for building web and enterprise applications and how it provides the flexibility for service-oriented architecture (SOA). One challenge with any Spring-based application is its level of configuration complexity. Spring Boot makes it easy to create and deploy stand-alone, production-grade Spring applications with very little Spring configuration. For example, if we consider the Spring Model-View-Controller (MVC) framework [2], we need to configure the dispatcher servlet, web JARs, a view resolver, and component scanning, among other things. To solve this, Spring Boot provides several auto-configuration options to set up the application with any needed dependencies. Another challenge is identifying the framework dependencies and associated library versions required to develop a web application. Spring Boot offers simpler dependency management by bundling a comprehensive but flexible framework and the associated libraries into a single dependency, which provides all the Spring-related technology needed for anything from starter projects to CRUD web applications. The framework provides a range of additional features that are common across many projects, such as an embedded server, security, metrics, health checks, and externalized configuration. Web applications are generally packaged as a WAR file and deployed to a web server, but a Spring Boot application can be packaged as either a WAR or a JAR file, which allows the application to run without the need to install and/or configure an application server. In this paper, we discuss how the Atmospheric Radiation Measurement (ARM) Data Center (ADC) at Oak Ridge National Laboratory is using Spring Boot to create an SOA-based REST [4] service API that bridges the gap between frontend user interfaces and the backend database. Using this REST service API, ARM scientists are now able to submit reports via a user form or a command-line interface, which captures data quality and other important information about ARM data.
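A minimal illustrative service in this style (endpoint and class names are placeholders, not the actual ADC API) shows how little explicit configuration Spring Boot requires; auto-configuration and the embedded server turn this single class into a runnable, stand-alone JAR:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

/**
 * Illustrative Spring Boot REST service in the style described above.
 * Endpoint names and payload handling are placeholders, not the ADC API.
 */
@SpringBootApplication
@RestController
@RequestMapping("/api/reports")
public class ReportServiceApplication {

    public static void main(String[] args) {
        // Auto-configuration plus the embedded server make this a
        // stand-alone application with almost no explicit configuration.
        SpringApplication.run(ReportServiceApplication.class, args);
    }

    /** Accept a data-quality report submitted from a form or the CLI. */
    @PostMapping
    public String submitReport(@RequestBody String reportJson) {
        // A real service would validate and persist to the backend database.
        return "received";
    }
}
```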
The key factors for deploying successful services center on the service design practices adopted by an enterprise. Design-level information should be validated, and measures are required to quantify structural attributes. Metrics at this stage support an early discovery of design flaws and help designers predict the capabilities of service-oriented architecture (SOA) adoption. In this work, we take a deeper look at how we can forecast two key SOA capabilities, infrastructure efficiency and service reuse, from service designs modeled with the SOA Modeling Language (SoaML). The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated with respect to software-engineering metric properties. Moreover, a tool has been developed to automate the proposed approach, and the results indicate that the metrics predict the SOA capabilities at the service design stage. This work can be further extended to predict business-oriented capabilities of SOA adoption such as flexibility and agility.
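One plausible shape for such a similarity-based metric (assumed for illustration; not the paper's actual formula) combines structural and domain-level similarity of operation pairs:

```latex
% Assumed illustrative form, not the paper's metric definition:
\[
  \mathrm{sim}(o_i, o_j) \;=\; \alpha\,\mathrm{sim}_{\mathrm{struct}}(o_i, o_j)
                           \;+\; (1-\alpha)\,\mathrm{sim}_{\mathrm{dom}}(o_i, o_j),
  \qquad 0 \le \alpha \le 1 ,
\]
% where high average similarity across operations of different services
% would signal candidate functionality for consolidation and reuse.
```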
Current software platforms for service composition are based on orchestration, choreography, or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.
Service composition is currently done by (hierarchical) orchestration and choreography. However, these approaches do not support explicit control flow and total compositionality, which are crucial for the scalability of service-oriented systems. In this paper, we propose exogenous connectors for service composition. These connectors support both explicit control flow and total compositionality in hierarchical service composition. To validate and evaluate our proposal, we present a case study based on the popular MusicCorp.
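The idea of an exogenous connector can be sketched as follows (a simplified sequencer of our own devising; the actual connector types are richer): control flow lives entirely in the connector, while the composed services remain unaware of each other, and because the connector is itself invocable, connectors can be nested hierarchically.

```java
import java.util.List;
import java.util.function.Function;

/**
 * Sketch of an exogenous connector: the connector owns the control flow,
 * and the composed services do not call one another. Names are
 * illustrative only.
 */
public class SequencerConnector {
    private final List<Function<Object, Object>> services;

    public SequencerConnector(List<Function<Object, Object>> services) {
        this.services = services;
    }

    /** Invoke the services in order, piping each result to the next. */
    public Object invoke(Object input) {
        Object result = input;
        for (Function<Object, Object> service : services) {
            result = service.apply(result);
        }
        return result;
    }
    // A SequencerConnector can itself be wrapped as a Function and nested
    // inside another connector, giving hierarchical, total compositionality.
}
```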
In this paper we investigate whether and how hardware-based roots of trust, namely Trusted Platform Modules (TPMs), can improve the security of the communication protocol OPC UA (Open Platform Communications Unified Architecture) under reasonable assumptions, i.e., the Dolev-Yao attacker model. Our analysis shows that TPMs may serve for generating (via their random number generator, RNG) and securely storing cryptographic keys, as crypto-coprocessors for weak systems, and for remote attestation. We propose to include these TPM functions in OPC UA via so-called ConformanceUnits, which can serve as building blocks of profiles that are used by clients and servers for negotiating the parameters of a session. Finally, we present first results regarding the performance of a client-server communication that includes an additional OPC UA server providing remote attestation of other OPC UA servers.
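At a high level, the attestation step amounts to verifying a signed quote against an expected platform state; the sketch below uses hypothetical interfaces, not OPC UA or TPM Software Stack (TSS) APIs, and omits details such as PCR selection and nonce freshness that a real implementation must handle.

```java
import java.util.Arrays;

/**
 * High-level sketch of the remote-attestation idea only. The Quote
 * interface is hypothetical; real TPM 2.0 quote handling via a TSS
 * is considerably more involved.
 */
public class AttestationCheck {
    interface Quote {
        byte[] pcrDigest();                          // digest of attested PCR values
        boolean signatureValid(byte[] attestationKeyPub); // quote signature check
    }

    /** Trust a server only if its quote is validly signed and matches the expected state. */
    static boolean trustServer(Quote quote, byte[] akPub, byte[] expectedPcrDigest) {
        return quote.signatureValid(akPub)
            && Arrays.equals(quote.pcrDigest(), expectedPcrDigest);
    }
}
```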
When clients interact with a cloud-based service, they expect certain levels of quality of service guarantees. These are expressed as security and privacy policies, interaction authorization policies, and service performance policies among others. The main security challenge in a cloud-based service environment, typically modeled using service-oriented architecture (SOA), is that it is difficult to trust all services in a service composition. In addition, the details of the services involved in an end-to-end service invocation chain are usually not exposed to the clients. The complexity of the SOA services and multi-tenancy in the cloud environment leads to a large attack surface. In this paper we propose a novel approach for end-to-end security and privacy in cloud-based service orchestrations, which uses a service activity monitor to audit activities of services in a domain. The service monitor intercepts interactions between a client and services, as well as among services, and provides a pluggable interface for different modules to analyze service interactions and make dynamic decisions based on security policies defined over the service domain. Experiments with a real-world service composition scenario demonstrate that the overhead of monitoring is acceptable for real-time operation of Web services.
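The pluggable monitor can be illustrated with a minimal sketch (interface names are our assumption, not the paper's API): every client-service and service-service interaction passes through the monitor, and each plugged-in analysis module can veto it under the domain's security policies.

```java
import java.util.List;

/**
 * Sketch of a pluggable service activity monitor. Names are illustrative,
 * not the paper's API.
 */
public class ServiceActivityMonitor {
    interface PolicyModule {
        /** Return false to block the interaction under the domain policy. */
        boolean permits(String caller, String callee, String operation);
    }

    private final List<PolicyModule> modules;

    public ServiceActivityMonitor(List<PolicyModule> modules) {
        this.modules = modules;
    }

    /** Intercept one interaction; all modules must agree before it proceeds. */
    public boolean intercept(String caller, String callee, String operation) {
        return modules.stream().allMatch(m -> m.permits(caller, callee, operation));
    }
}
```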
To overcome the current cybersecurity challenges of protecting our cyberspace and applications, we present an innovative cloud-based architecture to offer resilient Dynamic Data Driven Application Systems (DDDAS) as a cloud service that we refer to as resilient DDDAS as a Service (rDaaS). This architecture integrates the Service Oriented Architecture (SOA) and DDDAS paradigms to offer the next generation of resilient and agile DDDAS-based cyber applications, which are particularly convenient for critical applications such as battle and crisis management. Using the cloud infrastructure to offer resilient DDDAS routines and applications, large-scale DDDAS applications can be developed by users from anywhere, using any device (mobile or stationary) with Internet connectivity. The rDaaS provides transformative capabilities to achieve superior situation awareness (i.e., assessment, visualization, and understanding), mission planning and execution, and resilient operations.