Bibliography
Adversarial models are well established for cryptographic protocols, but distributed real-time protocols have requirements that these abstractions were not intended to cover. The IEEE/IEC 61850 standard for communication networks and systems for power utility automation in particular requires not only distributed processing but, in the case of the Generic Object Oriented Substation Events and Sampled Values (GOOSE/SV) protocols, also hard real-time characteristics. This motivates including both quality of service (QoS) and explicit network topology in an adversary model expressed in a π-calculus process-algebraic formalism that builds on earlier work. This allows reasoning over process states, placement of adversarial entities, and communication behaviour. We demonstrate the use of our model for the simple case of a replay attack against the publish/subscribe GOOSE/SV subprotocol, showing bounds for the non-detectability of such an attack.
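A minimal illustration of the detection bound, assuming (as in real GOOSE traffic) that frames carry a state number stNum and a sequence number sqNum; the freshness rule below is our simplification, not the paper's π-calculus model:

```python
# Sketch: a GOOSE/SV subscriber rejecting replayed frames via stNum/sqNum.
from dataclasses import dataclass

@dataclass
class GooseFrame:
    st_num: int  # incremented on every data-set change
    sq_num: int  # incremented per retransmission, reset on change

class Subscriber:
    def __init__(self):
        self.last = None  # last accepted (st_num, sq_num)

    def accept(self, frame: GooseFrame) -> bool:
        """Accept only frames strictly newer than the last one seen.
        A replayed frame passes undetected exactly until a fresher frame
        arrives -- the window that bounds non-detectability."""
        key = (frame.st_num, frame.sq_num)
        if self.last is not None and key <= self.last:
            return False  # stale or duplicate: possible replay
        self.last = key
        return True

sub = Subscriber()
assert sub.accept(GooseFrame(1, 0))
assert sub.accept(GooseFrame(1, 1))
assert not sub.accept(GooseFrame(1, 0))  # replayed frame now detected
```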
Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not operate at a granularity fine enough to precisely pinpoint vulnerable code. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods than software metrics, they can be used to develop vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest that developers use nano-patterns as features for vulnerability prediction to augment existing approaches, as these code constructs outperform standard metrics in terms of prediction recall.
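The prediction setup can be sketched as follows, with synthetic data standing in for the extracted nano-pattern vectors (binary flags per method) and one representative classifier; the paper's actual feature extraction and its three specific learners are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 17))  # one binary nano-pattern vector per method
y = rng.integers(0, 2, size=500)        # 1 = method had a reported vulnerability

clf = RandomForestClassifier(n_estimators=100, random_state=0)
pred = cross_val_predict(clf, X, y, cv=5)
print("precision:", precision_score(y, pred))  # metrics won on precision in the study
print("recall:   ", recall_score(y, pred))     # nano-patterns won on recall
```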
Confidentiality, integrity, and availability are the principal pillars of building secure software. Considering these security principles during the different software development phases would reduce software vulnerabilities. This paper measures the impact of different software quality metrics on confidentiality, integrity, and availability for any given object-oriented PHP application with a list of reported vulnerabilities. The National Vulnerability Database was used to provide the impact scores on confidentiality, integrity, and availability for the vulnerabilities reported against the selected applications. This paper studies these scores and their correlation with 25 code metrics for the given vulnerable source code. The results correlate 23.7% of the variability in 'Integrity' with four metrics: Vocabulary Used in Code, Card and Agresti, Intelligent Content, and Efferent Coupling. The Length (Halstead) metric alone could predict about 24.2% of the observed variability in 'Availability'. The results indicate no significant correlation of 'Confidentiality' with the tested code metrics.
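The variability figures correspond to R² from regressing an impact subscore on selected metrics. A minimal sketch with synthetic data (metric names follow the abstract; the coefficients are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# columns: Vocabulary Used in Code, Card and Agresti, Intelligent Content, Efferent Coupling
metrics = rng.normal(size=(120, 4))
integrity = metrics @ np.array([0.4, 0.2, 0.1, 0.3]) + rng.normal(scale=1.5, size=120)

model = LinearRegression().fit(metrics, integrity)
print("R^2 =", model.score(metrics, integrity))  # share of variability explained
```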
Affective computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically grounded theories as generative models of emotion and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently developed probabilistic programming languages offer several key desiderata over previous approaches, such as: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, a probabilistic programming framework provides a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can easily be compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
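In the same spirit, here is a toy generative model of appraisal written as an ordinary stochastic program, with inference by rejection sampling; the appraisal rule and variable names are illustrative assumptions, not any specific theory from the paper:

```python
import math
import random

def model(outcome: float):
    desire = random.gauss(0.0, 1.0)            # latent: how much the agent wanted this
    appraisal = outcome - desire               # simple goal-congruence appraisal
    p_happy = 1 / (1 + math.exp(-3 * appraisal))
    emotion = "happy" if random.random() < p_happy else "sad"
    return desire, emotion

# Infer P(desire | emotion = "happy", outcome = 0.5) by rejection sampling.
samples = [d for d, e in (model(0.5) for _ in range(20000)) if e == "happy"]
print("posterior mean desire:", sum(samples) / len(samples))
```

A probabilistic programming language would replace the hand-rolled sampler with generic inference, which is exactly the modularity argument the abstract makes.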
Cyber-physical systems (CPS) research leverages the expertise of researchers from multiple domains to engineer complex systems of interacting physical and computational components. An approach called co-simulation is often used in CPS conceptual design to integrate the specialized tools and simulators from each of these domains into a joint simulation for the evaluation of design decisions. Many co-simulation platforms are being developed to expedite CPS conceptualization and realization, but most use intrusive modeling and communication libraries that require researchers to either abandon their existing models or spend considerable effort to integrate them into the platform. A significant number of these co-simulation platforms use the High Level Architecture (HLA) standard that provides a rich set of services to facilitate distributed simulation. This paper introduces a simple gateway that can be readily implemented without co-simulation expertise to adapt existing models and research infrastructure for use in HLA. An open-source implementation of the gateway has been developed for the National Institute of Standards and Technology (NIST) co-simulation platform called the Universal CPS Environment for Federation (UCEF).
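The gateway idea reduces to a thin adapter between an unmodified local model and the federation's publish/subscribe loop. A hedged sketch; the Federate class below is a stand-in, not the actual UCEF or HLA RTI API:

```python
class Federate:
    """Placeholder for the RTI/UCEF ambassador supplied by the platform."""
    def publish(self, interaction: str, payload: dict):
        print("->", interaction, payload)
    def receive(self) -> list:
        return []

class Gateway:
    """Adapts an existing model to the co-simulation time loop."""
    def __init__(self, model, federate: Federate, mapping: dict):
        self.model, self.federate, self.mapping = model, federate, mapping

    def step(self, t: float):
        for msg in self.federate.receive():   # inject external inputs
            self.model.set_input(msg)
        outputs = self.model.step(t)          # advance the unmodified local model
        for name, value in outputs.items():   # translate outputs to interactions
            self.federate.publish(self.mapping.get(name, name), {"value": value})

class DummyModel:
    def set_input(self, msg): pass
    def step(self, t): return {"voltage": 1.0}

Gateway(DummyModel(), Federate(), {"voltage": "GridVoltage"}).step(0.0)
```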
We present a unified communication architecture for security requirements in the industrial internet of things. Formulating security requirements in the language of OPC UA provides a unified method to communicate and compare security requirements within a heavily heterogeneous landscape of machines in the field. Our machine-readable data model provides a fully automatable approach to security requirement communication within the rapidly evolving fourth industrial revolution, which is characterized by a high degree of interconnection between industrial infrastructures and self-configuring production systems. Capturing security requirements in an OPC UA-compliant, unified data model for industrial control systems enables strong use cases within modern production plants and future supply chains. We implement our data model, as well as an OPC UA server that operates on this model, to show the feasibility of our approach. Further, we deploy and evaluate our framework within a reference project realized by 14 industrial partners and 7 research facilities in Germany.
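A minimal sketch of serving such a model, assuming the community python-opcua package (the paper does not prescribe a library, and the node names here are illustrative):

```python
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/secreq/")
idx = server.register_namespace("http://example.org/security-requirements")

# Expose a (hypothetical) security-requirements object with typed variables.
req = server.get_objects_node().add_object(idx, "SecurityRequirements")
req.add_variable(idx, "MinTLSVersion", "1.2")
req.add_variable(idx, "RequiresSignedFirmware", True)

server.start()
try:
    pass  # in a real deployment, serve until interrupted
finally:
    server.stop()
```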
Multicast distribution follows a many-to-many model, making it a more efficient way of delivering data than traditional one-to-one unicast distribution, which benefits many applications such as media streaming. However, its inherent lack of security features makes multicast technology much less popular in an open environment such as the Internet. Internet Service Providers (ISPs) take advantage of IP multicast's highly efficient data delivery to provide Internet Protocol Television (IPTV) to their users. But without full control over their networks, ISPs cannot collect revenue for the services they provide. The Secure Internet Group Management Protocol (SIGMP), an extension of the Internet Group Management Protocol (IGMP), and the Group Security Association Management Protocol (GSAM) have been proposed to enforce receiver access control at the network level of IP multicast. In this paper, we analyze the operational details and issues of both SIGMP and GSAM, and examine the performance of both protocols.
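The core mechanism, receiver access control at join time, can be sketched as a router that installs multicast state only for authenticated membership reports. The HMAC token below is our simplification, not the SIGMP/GSAM wire format:

```python
import hashlib
import hmac

GROUP_KEYS = {"239.1.1.1": b"key-established-out-of-band"}  # e.g. negotiated via GSAM

def sign_join(group: str, host: str) -> bytes:
    return hmac.new(GROUP_KEYS[group], f"{group}|{host}".encode(), hashlib.sha256).digest()

def router_accepts_join(group: str, host: str, token: bytes) -> bool:
    key = GROUP_KEYS.get(group)
    if key is None:
        return False  # unknown group: no multicast state installed
    expected = hmac.new(key, f"{group}|{host}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = sign_join("239.1.1.1", "10.0.0.7")
assert router_accepts_join("239.1.1.1", "10.0.0.7", token)
assert not router_accepts_join("239.1.1.1", "10.0.0.8", token)  # token bound to host
```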
Cloud-based cyber-physical systems, such as vehicle and intelligent transportation systems, are attracting increasing attention. These systems usually include large-scale distributed sensor networks covering various components and producing enormous amounts of measurement data. Many modeling languages are used to describe cyber-physical systems or their aspects, contributing to the development of such systems. But most modeling techniques focus only on the software aspects and therefore cannot fully express cloud-based cyber-physical systems as a whole, which require appropriate views and tools in their design; such tools are, however, difficult to use within systemic or object-oriented methods. For example, the most widely used modeling language, UML, cannot satisfy these design requirements in its standard form. This paper presents a method for designing cloud-based cyber-physical systems with AADL, with which we can analyse, model, and apply these requirements on cloud platforms while ensuring QoS in a highly extensible way.
The security model is an important subject in the field of low-energy independence complexity theory. It takes security strategy as its core, changes the system from static protection to dynamic protection, and provides the basis for rapid system response. A large number of empirical studies have been conducted to verify cache consistency. Object-oriented languages have developed along two lines: pure object-oriented languages, and hybrid object-oriented languages that add classes, inheritance, and other elements to procedural and other languages. This paper studies a new object-oriented language application, namely GUT for a write-back cache, which is based on the study of a simulation algorithm, to address these challenges in the field of low-energy independence complexity theory.
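Setting the paper's terminology aside, the one concrete artefact it names, a write-back cache, works as follows: writes mark a cache line dirty and are flushed to the backing store only on eviction. A minimal generic sketch (not the paper's GUT language):

```python
class WriteBackCache:
    def __init__(self, capacity: int, backing: dict):
        self.capacity, self.backing = capacity, backing
        self.lines = {}  # key -> (value, dirty)

    def write(self, key, value):
        self._make_room(key)
        self.lines[key] = (value, True)  # mark dirty, defer the store write

    def read(self, key):
        if key in self.lines:
            return self.lines[key][0]
        self._make_room(key)
        value = self.backing[key]
        self.lines[key] = (value, False)
        return value

    def _make_room(self, key):
        if key in self.lines or len(self.lines) < self.capacity:
            return
        victim, (value, dirty) = next(iter(self.lines.items()))
        if dirty:
            self.backing[victim] = value  # write back only dirty lines
        del self.lines[victim]

store = {"a": 1}
cache = WriteBackCache(1, store)
cache.write("a", 2)
cache.write("b", 3)       # evicts "a", flushing its dirty value
assert store["a"] == 2
```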
Industrial control systems are changing from monolithic to distributed and interconnected architectures, entering the era of the industrial IoT. One fundamental issue is that the security properties of such distributed control systems are typically only verified empirically, during development and after system deployment. We propose a novel modelling framework for the security verification of distributed industrial control systems, with the goal of moving towards formal verification at the early design stage. In our framework we model industrial IoT infrastructures, attack patterns, and mitigation strategies for countering attacks. We conduct model checking-based formal analysis of system security through scenario execution, where the analysed system is exposed to attacks and mitigation strategies are implemented. We study the applicability of our framework to large systems using a scalability analysis.
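The scenario-execution analysis reduces to reachability checking over a joint system-and-attacker state space. A tiny explicit-state sketch, with made-up states and transitions (the framework's actual modelling language is not reproduced here):

```python
from collections import deque

def successors(state, mitigated: bool):
    mode, attacker = state
    if attacker == "idle":
        yield (mode, "probing")
    elif attacker == "probing" and not mitigated:
        yield ("compromised", "done")   # attack succeeds unless mitigated
    elif attacker == "probing" and mitigated:
        yield (mode, "blocked")

def unsafe_reachable(mitigated: bool) -> bool:
    """Breadth-first search for any reachable 'compromised' system mode."""
    seen, queue = set(), deque([("normal", "idle")])
    while queue:
        state = queue.popleft()
        if state in seen:
            continue
        seen.add(state)
        if state[0] == "compromised":
            return True
        queue.extend(successors(state, mitigated))
    return False

assert unsafe_reachable(mitigated=False)      # attack scenario succeeds
assert not unsafe_reachable(mitigated=True)   # mitigation verified to block it
```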
The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity limits not only understanding but also acceptance of, e.g., deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research rarely draws on social-science work on explanation. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.
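As a hedged reading of that baseline definition: an explanation is understandable if the distinct concepts it asks the player to hold in mind fit a small working-memory budget (the classic seven-chunk heuristic); the rule below is our illustration, not the paper's formal definition:

```python
WORKING_MEMORY_CHUNKS = 7  # classic heuristic capacity of human working memory

def understandable(explanation: list[str]) -> bool:
    """An explanation passes the baseline check if its distinct concepts
    fit within the working-memory budget."""
    return len(set(explanation)) <= WORKING_MEMORY_CHUNKS

print(understandable(["avatar", "avoids", "red sprite", "contact", "kills"]))  # True
```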