Biblio
Today's companies increasingly rely on the Internet of Everything (IoE) to modernize their operations. The very complex characteristics of such systems expose their applications and exchanged data to multiple risks and security breaches that make them targets for cyber attacks. The aim of our work in this paper is to provide a cybersecurity strategy whose objective is to prevent and anticipate threats related to the IoE. An economic approach is used to support decision-making on reducing the risks generated by failing to define appropriate security levels. The considered problem is solved with a combinatorial optimization approach, cast as a practical knapsack problem. We opted for a bi-objective model under uncertainty with a cardinality constraint and a given budget to be respected. To guarantee the robustness of our strategy, we also considered the criterion of uncertainty by taking into account all possible threats that cyber attacks on the IoE can generate. Our strategy has been implemented and simulated in the MATLAB environment, and its performance results have been compared to those obtained by the NSGA-II metaheuristic. Our proposed cybersecurity strategy records a clear improvement in efficiency with respect to optimizing the security-level and cost parameters.
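A minimal sketch of the bi-objective knapsack trade-off this abstract describes, under invented data: each security control gets a hypothetical cost and risk-reduction value, and we enumerate the Pareto front of control portfolios that respect a budget and a cardinality bound. The items, budget, and names are illustrative assumptions, not the authors' model, and the exhaustive enumeration merely stands in for the paper's optimization and its NSGA-II comparison.

    from itertools import combinations

    # Hypothetical security controls: (name, cost, risk reduction).
    CONTROLS = [
        ("encryption", 4, 7),
        ("firewall",   3, 5),
        ("ids",        5, 8),
        ("patching",   2, 4),
        ("training",   1, 2),
    ]
    BUDGET = 8       # budget to be respected
    CARDINALITY = 3  # at most this many controls may be deployed

    def portfolios():
        # Enumerate every subset meeting the budget and cardinality constraints.
        for r in range(1, CARDINALITY + 1):
            for sub in combinations(CONTROLS, r):
                cost = sum(c for _, c, _ in sub)
                if cost <= BUDGET:
                    yield [n for n, _, _ in sub], cost, sum(g for _, _, g in sub)

    def dominates(q, p):
        # q dominates p if it is no worse on both objectives and better on one.
        return q[1] <= p[1] and q[2] >= p[2] and (q[1] < p[1] or q[2] > p[2])

    feasible = list(portfolios())
    front = [p for p in feasible if not any(dominates(q, p) for q in feasible)]
    for names, cost, gain in sorted(front, key=lambda p: p[1]):
        print(f"cost={cost}  risk_reduction={gain}  {names}")

Each line of output is one non-dominated portfolio, so the decision maker can pick a point on the cost/risk-reduction frontier rather than a single "optimal" answer.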
Cloud services have the computing characteristics of self-organizing, on-demand elasticity, which makes them prone to failure or loss of accountability in wide-scale application. For predicting such failures or establishing accountability, modeling the cloud service structure becomes an unavoidable priority. This paper reviews the modeling of cloud service network architecture. First, the research status of cloud service structure modeling is analyzed and reviewed. Second, the classification of time-varying cloud service structures and of the corresponding modeling methods is summarized as a whole. Third, existing open problems are pointed out. Finally, a research approach to time-varying structure modeling for cloud service accountability is proposed.
Cloud-based cyber-physical systems, such as vehicular and intelligent transportation systems, are now attracting much more attention. These systems usually include large-scale distributed sensor networks covering various components and producing enormous amounts of measurement data. Many modeling languages are used to describe cyber-physical systems or their aspects, contributing to the development of such systems. However, most modeling techniques focus only on the software aspect, so they cannot fully express whole cloud-based cyber-physical systems, which require appropriate views and tools in their design; moreover, those tools are hard to use under systemic or object-oriented methods. For example, the most widely used modeling language, UML, cannot fulfil these design requirements in its standard form. This paper presents a method for designing cloud-based cyber-physical systems with AADL, by which we can analyse, model, and apply those requirements on cloud platforms while ensuring QoS in a highly extensible way.
SDN (Software Defined Network) with multiple controllers is drawing increasing attention as networks grow in scale. This architecture can handle what SDN with a single controller cannot address. In order to understand precisely what this architecture can accomplish and what challenges it faces, we analyze it with formal methods. In this paper, we apply CSP (Communicating Sequential Processes) to model the routing service of SDN under the HyperFlow architecture, based on the OpenFlow protocol. Using the model checker PAT (Process Analysis Toolkit), we verify that the models satisfy three properties: deadlock freedom, consistency, and fault tolerance.
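As a rough illustration of the kind of property PAT checks, the sketch below explores the reachable state space of a toy two-controller hand-off protocol breadth-first and reports any state with no outgoing transition (a deadlock). The states, events, and transition relation are invented for the example; they are not the paper's CSP model of HyperFlow.

    from collections import deque

    # Toy transition system for a two-controller hand-off (invented states/events).
    TRANSITIONS = {
        ("idle", "idle"): [("req", ("busy", "idle")), ("req", ("idle", "busy"))],
        ("busy", "idle"): [("done", ("idle", "idle")), ("handoff", ("idle", "busy"))],
        ("idle", "busy"): [("done", ("idle", "idle")), ("handoff", ("busy", "idle"))],
    }

    def find_deadlocks(initial):
        # BFS over reachable states; a deadlock is a state with no successors.
        seen, frontier, deadlocks = {initial}, deque([initial]), []
        while frontier:
            state = frontier.popleft()
            successors = TRANSITIONS.get(state, [])
            if not successors:
                deadlocks.append(state)
            for _event, nxt in successors:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return deadlocks

    print(find_deadlocks(("idle", "idle")) or "deadlock-free")

A real checker like PAT additionally verifies temporal properties (here, consistency and fault tolerance) over the same kind of exhaustive state exploration.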
Experimentation tools facilitate exploration of Tor performance and security research problems and allow researchers to safely and privately conduct Tor experiments without risking harm to real Tor users. However, researchers using these tools configure them to generate network traffic based on simplifying assumptions and outdated measurements, and without understanding the efficacy of their configuration choices. In this work, we design a novel technique for dynamically learning Tor network traffic models using hidden Markov modeling and privacy-preserving measurement techniques. We conduct a safe but detailed measurement study of Tor using 17 relays (~2% of Tor bandwidth) over the course of 6 months, measuring general statistics and models that can be used to generate a sequence of streams and packets. We show how our measurement results and traffic models can be used to generate traffic flows in private Tor networks and how our models are more realistic than standard and alternative network traffic generation methods.
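A minimal sketch of generating an observation sequence from a learned hidden Markov model, in the spirit of the traffic models described above. The two hidden states, transition matrix, and emission tables are made-up stand-ins for the parameters the paper learns from measurement.

    import random

    # Hypothetical HMM parameters (the paper learns these from measurement).
    START = {"burst": 0.6, "gap": 0.4}
    TRANS = {"burst": {"burst": 0.8, "gap": 0.2},
             "gap":   {"burst": 0.3, "gap": 0.7}}
    EMIT  = {"burst": [("packet", 0.9), ("stream_start", 0.1)],
             "gap":   [("silence", 0.95), ("stream_start", 0.05)]}

    def weighted_choice(table):
        return random.choices([k for k, _ in table], [w for _, w in table])[0]

    def generate(n):
        # Sample n observations by walking the Markov chain and emitting symbols.
        state = random.choices(list(START), list(START.values()))[0]
        out = []
        for _ in range(n):
            out.append(weighted_choice(EMIT[state]))
            state = random.choices(list(TRANS[state]), list(TRANS[state].values()))[0]
        return out

    print(generate(20))

Replaying such sampled sequences, rather than fixed traces, is what lets a private test network mimic the statistical behavior of live traffic.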
Industrial control systems are changing from monolithic to distributed and interconnected architectures, entering the era of the industrial IoT. One fundamental issue is that the security properties of such distributed control systems are typically only verified empirically, during development and after system deployment. We propose a novel modelling framework for the security verification of distributed industrial control systems, with the goal of moving towards early design-stage formal verification. In our framework we model industrial IoT infrastructures, attack patterns, and mitigation strategies for countering attacks. We conduct model checking-based formal analysis of system security through scenario execution, in which the analysed system is exposed to attacks and mitigation strategies are implemented. We study the applicability of our framework to large systems using a scalability analysis.
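A compressed sketch of scenario execution in the spirit of this framework: an attack pattern is a sequence of required attacker capabilities, each mitigation blocks some of them, and a scenario succeeds only if no deployed mitigation blocks any step. The capability and mitigation names are hypothetical, and this toy check merely stands in for the paper's model checking-based analysis.

    # Toy scenario execution: an attack pattern is a sequence of required
    # capabilities; each mitigation blocks a set of capabilities.
    ATTACK_PATTERN = ["phish_credentials", "lateral_move", "plc_write"]
    MITIGATIONS = {
        "mfa":          {"phish_credentials"},
        "segmentation": {"lateral_move"},
    }

    def scenario_succeeds(deployed):
        # The attack succeeds only if no deployed mitigation blocks any step.
        blocked = set().union(*(MITIGATIONS[m] for m in deployed)) if deployed else set()
        return all(step not in blocked for step in ATTACK_PATTERN)

    for deployed in ([], ["mfa"], ["segmentation"], ["mfa", "segmentation"]):
        print(deployed, "->", "compromised" if scenario_succeeds(deployed) else "mitigated")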
Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework for supporting system design at the organizational and technical levels. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions; system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be practically applied.
Customers need to know how reliable a new release is, and whether or not the new release has substantially different, either better or worse, reliability than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and that also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and a 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.
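A small worked example of the two usage-normalized field metrics mentioned above, with invented usage and bug figures (the paper's actual data and units are not given here):

    # Hypothetical field data per release: usage (e.g., system-months in the
    # field), distinct field bugs, and total field bug encounters.
    releases = {
        "R1": {"usage": 1200, "bugs": 18, "encounters": 90},
        "R2": {"usage":  800, "bugs": 10, "encounters": 44},
    }

    for name, d in releases.items():
        print(f"{name}: bugs/usage = {d['bugs'] / d['usage']:.4f}, "
              f"encounters/usage = {d['encounters'] / d['usage']:.4f}")

Normalizing by usage is what makes a heavily deployed release comparable to a lightly deployed one in a release-over-release view.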
Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.
Sophisticated cyber attacks by state-sponsored and criminal actors continue to plague government and industrial infrastructure. Intuitively, partitioning cyber systems into survivable, intrusion-tolerant compartments is a good idea. This prevents witting and unwitting insiders from moving laterally and reaching back to their command and control (C2) servers. However, there is a lack of artifacts that can predict the effectiveness of this approach in a realistic setting. We extend earlier work by relaxing simplifying assumptions and providing a new attacker-facing metric. In this article, we propose new closed-form mathematical models and a discrete-time simulation to predict three critical statistics: the probability of compromise, the probability of external host compromise, and the probability of reachback. The results of our new artifacts agree with one another and with previous work, which suggests they are internally valid and a viable method to evaluate the effectiveness of cyber zone defense.
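A compressed discrete-time Monte Carlo sketch of the zoned-defense idea above: hosts are grouped into zones, lateral movement within a zone is more likely than across a zone boundary, and each compromised host may reach back to its C2 server at every step. All sizes and per-step probabilities are invented, and this simulation stands in for (rather than reproduces) the article's closed-form models.

    import random

    ZONES, HOSTS_PER_ZONE, STEPS, TRIALS = 4, 5, 30, 500
    P_INTRA, P_INTER, P_REACHBACK = 0.15, 0.02, 0.01  # invented per-step rates

    def trial():
        compromised = {(0, 0)}  # (zone, host) of the initial foothold
        reachback = False
        for _ in range(STEPS):
            for z, h in list(compromised):
                for z2 in range(ZONES):
                    for h2 in range(HOSTS_PER_ZONE):
                        if (z2, h2) not in compromised:
                            p = P_INTRA if z2 == z else P_INTER
                            if random.random() < p:
                                compromised.add((z2, h2))
                if random.random() < P_REACHBACK:
                    reachback = True  # this host contacted its C2 server
        return len(compromised) / (ZONES * HOSTS_PER_ZONE), reachback

    results = [trial() for _ in range(TRIALS)]
    print("mean fraction compromised:", sum(f for f, _ in results) / TRIALS)
    print("P(reachback):", sum(r for _, r in results) / TRIALS)

Sweeping P_INTER downward models stricter zone boundaries, which is exactly the knob whose effectiveness such artifacts are meant to predict.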
Network systems, such as transportation systems and water supply systems, play important roles in our daily life and in industrial production. However, a variety of disruptive events occur during their lifetime, causing serious losses. Given the inevitability of disruption, we should not only focus on improving the reliability or resistance of a system, but also pay attention to its ability to respond in a timely manner and recover rapidly from disruptive events; that is, we need to pay more attention to resilience. In this paper, we describe two resilience models, quotient resilience and integral resilience, which measure the final recovered performance and the cumulative performance during the recovery process, respectively. Based on these two models, we optimize the system's recovery strategies after disruption, focusing on the repair sequence of the damaged components and the resource allocation scheme. The research proposed in this paper can serve as guidance for prioritizing repair tasks and allocating resources reasonably.
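In common resilience notation (a hedged reconstruction; the paper's exact definitions may differ), with system performance $Q(t)$, pre-disruption level $Q(t_0)$, disruption at $t_d$, and recovery horizon ending at $t_r$, the two measures can be written as:

    \[
    R_{\mathrm{quotient}} = \frac{Q(t_r) - Q(t_d)}{Q(t_0) - Q(t_d)},
    \qquad
    R_{\mathrm{integral}} = \frac{\int_{t_d}^{t_r} Q(t)\, dt}{Q(t_0)\,(t_r - t_d)}.
    \]

The quotient form scores only the finally recovered performance level, while the integral form also rewards recovering early; optimizing repair sequences and resource allocations against the integral form therefore favors schedules that restore the most critical components first.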