Bibliography
To reduce cost and ease maintenance, industrial control systems (ICS) have adopted Ethernet-based interconnections that integrate operational technology (OT) systems with information technology (IT) networks. This integration has made these critical systems vulnerable to attack. Security solutions tailored to ICS environments are an active area of research. Anomaly-based network intrusion detection systems (IDS) are well suited to these environments, but they must often be optimized for the specific environment in which they run. In prior work, we introduced a method for assessing the impact of various anomaly-based network IDS settings on security. This paper reviews the experimental outcomes obtained when we applied our method to a full-scale ICS test bed using actual attacks. Our method provides new and valuable data to operators, enabling more informed decisions about IDS configuration.
The relevance of data protection stems from the intensive informatization of all aspects of society and the need to prevent unauthorized access to information. Worldwide spending on information security (IS) currently amounts to \$81.7 billion and is forecast to reach about \$105 billion by 2020 [1]. In the public sector, protecting the information of military facilities is the most critical task; in the non-state sector, financial organizations are among the leaders in information-protection spending. The WannaCry ransomware encryptor illustrates the importance of IS research: it infected hundreds of thousands of computers worldwide, with attacks recorded in more than 116 countries. WannaCry (Wana Decryptor) attacks exploit a vulnerability in the Windows Server Message Block (SMB) service, the protocol for network access to file systems. A rootkit (a set of malware) was then installed on each infected system, which the attackers used to launch an encryption program; every vulnerable computer could in turn be infected by another compromised device on the same local network. These attacks caused losses of about \$70,000 (as of 18.05.2017) [2]. The present work assumes that software-level information protection is fundamentally insufficient to ensure the stable functioning of critical facilities, owing to the possibility of hardware-implemented undocumented instructions, discussed later. Because the complexity of computing systems and the degree of integration of their components are constantly growing, monitoring the operation of the computer hardware, and in particular its data-processing methods, is necessary to achieve the maximum degree of protection.
Most modern cloud and web services are programmatically accessed through REST APIs. This paper discusses how an attacker might compromise a service by exploiting vulnerabilities in its REST API. We introduce four security rules that capture desirable properties of REST APIs and services. We then show how a stateful REST API fuzzer can be extended with active property checkers that automatically test and detect violations of these rules. We discuss how to implement such checkers in a modular and efficient way. Using these checkers, we found new bugs in several deployed production Azure and Office365 cloud services, and we discuss their security implications. All these bugs have been fixed.
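To illustrate how such an active property checker might operate, here is a minimal sketch of a "use-after-free"-style rule: a request referencing a deleted resource must not succeed. The endpoint layout, the `id` response field, and the overall flow are assumptions made for illustration, not the paper's actual checker implementation.

```python
import requests

def check_use_after_free(base_url: str, resource_path: str, create_body: dict) -> str:
    """Active checker sketch for a 'use-after-free' rule: any request
    that references a deleted resource should fail."""
    # Create a resource (assumption: the API echoes back an 'id' field).
    created = requests.post(f"{base_url}{resource_path}", json=create_body)
    created.raise_for_status()
    url = f"{base_url}{resource_path}/{created.json()['id']}"

    # Delete the resource.
    requests.delete(url).raise_for_status()

    # Attempt to reuse it after deletion: anything other than 404/410
    # suggests a use-after-free bug in the service.
    reused = requests.get(url)
    if reused.status_code not in (404, 410):
        return f"VIOLATION: {url} still reachable ({reused.status_code})"
    return "ok"
```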
The paper outlines the concept of the digital economy, defines the role and types of intellectual resources in the context of the digitalization of the economy, reviews existing approaches and methods for intellectual property valuation, and analyzes the drawbacks of the quantitative evaluation of intellectual resources (based on intellectual property valuation), namely: uncertainty, noisy data, heterogeneity of resources, non-formalizability, the lack of reliable tools for measuring the parameters of intellectual resources, and the non-stationary development of those resources. The results of the study suggest ways to further develop methods for the quantitative evaluation of intellectual resources (inter alia, aimed at their capitalization).
Two-phase I/O is a well-known strategy for implementing collective MPI-IO functions. It redistributes I/O requests among the calling processes into a form that minimizes file access costs. As modern parallel computers continue to grow into the exascale era, the communication cost of this request redistribution can quickly overwhelm collective I/O performance. This effect has been observed in parallel jobs that run on many compute nodes with a high count of MPI processes on each node. To reduce the communication cost, we present a new design for collective I/O that adds an extra communication layer to perform request aggregation among processes within the same compute node. This approach can significantly reduce inter-node communication contention when redistributing the I/O requests. We evaluate the performance and compare it with the original two-phase I/O on Cray XC40 parallel computers (Theta and Cori) with Intel KNL and Haswell processors. Using I/O patterns from two large-scale production applications and an I/O benchmark, we show that our proposed method effectively reduces the communication cost and hence maintains scalability to large numbers of processes.
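A minimal mpi4py sketch of the idea, assuming a toy request format and a one-aggregator-per-node policy (both illustrative, not the paper's implementation): requests are first funneled to one process per node, and only those aggregators take part in the inter-node exchange.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Layer 1: group processes that share a compute node.
node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)
is_aggregator = node_comm.Get_rank() == 0

# Each process contributes its (offset, length) I/O requests
# (placeholder access pattern for illustration).
local_requests = [(comm.Get_rank() * 1048576, 1048576)]

# Intra-node aggregation: funnel requests to one aggregator per node.
node_requests = node_comm.gather(local_requests, root=0)

# Layer 2: communicator containing only the per-node aggregators.
color = 0 if is_aggregator else MPI.UNDEFINED
agg_comm = comm.Split(color, key=comm.Get_rank())

if is_aggregator:
    merged = sorted(r for reqs in node_requests for r in reqs)
    # The classic two-phase inter-node redistribution now runs over
    # agg_comm, with far fewer participants than comm.
    all_merged = agg_comm.allgather(merged)
```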
Localizing concurrency faults that occur in production is hard because (1) detailed field data, such as user input, file content, and interleaving schedule, may not be available to developers to reproduce the failure; (2) it is often impractical to assume the availability of multiple failing executions to localize the faults using existing techniques; (3) it is challenging to search for buggy locations in an application given limited runtime data; and (4) concurrency failures at the system level often involve multiple processes or event handlers (e.g., software signals), which cannot be handled by existing tools for diagnosing intra-process (thread-level) failures. To address these problems, we present SCMiner, a practical online bug diagnosis tool that helps developers understand how a system-level concurrency fault happens based on the logs collected by the default system audit tools. SCMiner achieves online bug diagnosis, obviating the need for offline bug reproduction; it requires neither code instrumentation on the production system nor the availability of multiple failing executions. Specifically, after the system call traces are collected, SCMiner uses data mining and statistical anomaly detection techniques to identify the failure-inducing system call sequences. It then maps each abnormal sequence to specific application functions. We have conducted an empirical study on 19 real-world benchmarks. The results show that SCMiner is both effective and efficient at localizing system-level concurrency faults.
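SCMiner's exact mining algorithm is not reproduced here, but the following toy sketch conveys the general idea of flagging candidate failure-inducing system-call sequences by their statistical rarity across passing runs; the n-gram model and the rarity threshold are assumptions for illustration.

```python
from collections import Counter

def ngrams(trace, n=3):
    """Sliding windows of n consecutive system calls."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def rare_sequences(passing_traces, failing_trace, n=3, threshold=0.01):
    """Flag system-call n-grams that appear in the failing run but are
    rare across passing runs: candidates for failure-inducing sequences."""
    counts = Counter()
    for trace in passing_traces:
        counts.update(ngrams(trace, n))
    total = sum(counts.values()) or 1
    return [g for g in set(ngrams(failing_trace, n))
            if counts[g] / total < threshold]

# Toy traces of audited system calls:
passing = [["open", "read", "write", "close"]] * 50
failing = ["open", "read", "kill", "write", "close"]
print(rare_sequences(passing, failing))  # sequences involving 'kill'
```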
We present a unified communication architecture for security requirements in the industrial internet of things. Formulating security requirements in the language of OPC UA provides a unified method to communicate and compare security requirements within a heavily heterogeneous landscape of machines in the field. Our machine-readable data model provides a fully automatable approach for security requirement communication within the rapidly evolving fourth industrial revolution, which is characterized by high-grade interconnection of industrial infrastructures and self-configuring production systems. Capturing security requirements in an OPC UA compliant and unified data model for industrial control systems enables strong use cases within modern production plants and future supply chains. We implement our data model as well as an OPC UA server that operates on this model to show the feasibility of our approach. Further, we deploy and evaluate our framework within a reference project realized by 14 industrial partners and 7 research facilities within Germany.
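To make the idea of machine-comparable security requirements concrete, here is a hypothetical sketch of such a data model; the field names and the comparison rule are invented for illustration and do not reflect the paper's actual OPC UA information model.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityRequirement:
    # Hypothetical fields; the real model defines its own OPC UA nodes.
    requirement_id: str
    category: str            # e.g. "Authentication", "Encryption"
    security_level: int      # required strength, machine-comparable
    mandatory: bool = True

@dataclass
class MachineProfile:
    machine_id: str
    requirements: list[SecurityRequirement] = field(default_factory=list)

    def satisfies(self, other: "MachineProfile") -> bool:
        """Automated comparison: every mandatory requirement of `other`
        must be met by this machine at an equal or higher level."""
        mine = {r.category: r for r in self.requirements}
        return all(
            r.category in mine
            and mine[r.category].security_level >= r.security_level
            for r in other.requirements if r.mandatory
        )
```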
The article deals with the IT-security aspects of business processes, using a variety of methodological tools, including Integrated Management Systems (IMS). Currently, every IMS consists of at least two management systems, including an IT-Security Management System. Typically, an IMS covers most of a company's business processes, but in practice implementations vary in scale, even within a single facility. It should be recognized, however, that the total number of such projects, both in the Russian Federation and worldwide, is small. The security of business processes is examined through the example of the Norsk Hydro incident. The article's main conclusions confirm the possibility of ensuring the security, continuity, and recovery of critical business processes, as illustrated by this incident.
The paper discusses the architectural, algorithmic, and computational aspects of creating and operating a class of expert systems for managing the technological safety of an enterprise under a large flow of diagnostic variables. The algorithm for finding a faulty technological chain uses expert information, formed as a set of evidence on the influence of diagnostic variables on the correctness of the technological process. The Dempster-Shafer belief function makes it possible to determine an overall probability measure on subsets of faulty process chains. To combine different pieces of evidence, the orthogonal sums of the basic probabilities determined for each piece of evidence are calculated. This procedure is then converted into production rules of the knowledge base. A description of the developed expert-system prototype, including its architecture, algorithms, and software, is given, and the functionality of the expert system and its configuration tools for a specific type of production are discussed.
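For reference, Dempster's rule of combination (the orthogonal sum mentioned above) can be sketched as follows; the hypothesis sets of suspect technological chains in the example are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's orthogonal sum of two basic probability assignments.

    m1, m2: dicts mapping frozenset hypotheses (sets of suspect chains)
    to basic probability masses summing to 1.
    """
    combined, conflict = {}, 0.0
    for (b, x), (c, y) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty intersection
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by 1 - K, discarding the conflicting mass.
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two pieces of evidence about which of chains {c1, c2, c3} is faulty:
m1 = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2", "c3"}): 0.4}
m2 = {frozenset({"c1", "c2"}): 0.7, frozenset({"c1", "c2", "c3"}): 0.3}
print(dempster_combine(m1, m2))
```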
We propose a novel cross-stack sensor framework for realizing lightweight, context-aware, high-interaction network and endpoint deceptions for attacker disinformation, misdirection, monitoring, and analysis. In contrast to perimeter-based honeypots, the proposed method arms production workloads with deceptive attack-response capabilities via injection of booby-traps at the network, endpoint, operating system, and application layers. This provides defenders with new, potent tools for more effectively harvesting rich cyber-threat data from the myriad of attacks launched by adversaries whose identities and methodologies can be better discerned through direct engagement rather than purely passive observations of probe attempts. Our research provides new tactical deception capabilities for cyber operations, including new visibility into both enterprise and national interest networks, while equipping applications and endpoints with attack awareness and active mitigation capabilities.
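By way of illustration only (the paper's cross-stack sensor framework is far richer), an application-layer booby trap might look like the following toy sketch: a decoy endpoint that no legitimate client ever requests, so any hit yields high-confidence attacker telemetry. The decoy path and logging scheme are assumptions.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import logging

logging.basicConfig(filename="tripwire.log", level=logging.INFO)

class DecoyHandler(BaseHTTPRequestHandler):
    """Serves a plausible page while logging hits on a decoy path
    (hypothetical example, not the paper's sensor framework)."""

    def do_GET(self):
        if self.path == "/admin/backup.zip":  # booby-trapped decoy path
            logging.info("tripwire hit from %s: %s",
                         self.client_address[0],
                         self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Loading...</body></html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```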
The evolution of the enterprise computing landscape towards emerging trends such as fog/edge computing and the Industrial Internet of Things (IIoT) is changing how computer networks are secured, in the face of challenges such as mobility, virtualized infrastructures, dynamic and heterogeneous user contexts, and transaction-based interactions. Such dynamicity introduces greater uncertainty into the access control process and motivates the need for risk-based access control decision making. The traditional perimeter-based security paradigm is therefore increasingly being abandoned in favour of so-called "zero trust networking" (ZTN). In ZTN, networks are partitioned into zones, and different levels of trust are required to access a zone's resources depending on the assets it protects. All access to sensitive information is subject to rigorous access control based on user and device profile and context. In this paper we outline a policy enforcement framework that addresses many of the open challenges of risk-based access control for ZTN. We specify the design of the required policy languages, including a generic firewall policy language for expressing firewall rules, and we design a mechanism to map these rules to specific firewall syntax and install them on the firewall. We show the viability of our design with a small proof of concept.
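As a sketch of the rule-mapping step, the following shows how a generic, vendor-neutral rule might be rendered into iptables syntax for a Linux firewall backend; the rule fields and the mapping are assumptions, not the paper's actual policy language.

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    # Generic, vendor-neutral rule (hypothetical field set).
    action: str      # "allow" | "deny"
    src: str         # source CIDR (e.g. a trust zone's address range)
    dst: str         # destination CIDR
    dport: int       # destination port
    proto: str = "tcp"

def to_iptables(rule: FirewallRule) -> str:
    """Render one generic rule as an iptables command."""
    target = "ACCEPT" if rule.action == "allow" else "DROP"
    return (f"iptables -A FORWARD -p {rule.proto} "
            f"-s {rule.src} -d {rule.dst} "
            f"--dport {rule.dport} -j {target}")

# Example: only the admin zone may reach the sensitive zone's DB port.
print(to_iptables(FirewallRule("allow", "10.0.1.0/24", "10.0.9.0/24", 5432)))
print(to_iptables(FirewallRule("deny", "0.0.0.0/0", "10.0.9.0/24", 5432)))
```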