Biblio
A physical protection system (PPS) is developed to protect assets or facilities against threats. A systematic analysis of the capabilities and intentions of potential threats is needed, resulting in a so-called Design Basis Threat (DBT) document. Proper development of the DBT is important for identifying the requirements needed to adequately protect a system and for optimizing the resources allocated to the PPS. In this paper we propose a model-based systems engineering approach for developing a DBT based on feature models. Based on a domain analysis process, we provide a metamodel that defines the key concepts needed for developing a DBT. Subsequently, a reusable family feature model for PPS is provided that includes the common and variant properties of the PPS concepts of detection, deterrence, and response. The configuration processes are modeled to select and analyze the features required for implementing the threat scenarios. Finally, we discuss the integration of the DBT with the PPS design process.
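For orientation only, a minimal Python sketch of what a family feature model for a PPS and a configuration check against a threat scenario could look like; the feature names, constraints, and simplified semantics below are our own illustrative assumptions, not the paper's metamodel:

    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        mandatory: bool = False
        children: list = field(default_factory=list)

    # Toy PPS family feature model: detection, deterrence, and response are
    # mandatory top-level features; their children are optional variants.
    pps = Feature("PPS", mandatory=True, children=[
        Feature("Detection", mandatory=True,
                children=[Feature("CCTV"), Feature("MotionSensor")]),
        Feature("Deterrence", mandatory=True,
                children=[Feature("Fencing"), Feature("Lighting")]),
        Feature("Response", mandatory=True,
                children=[Feature("GuardForce"), Feature("RemoteAlarm")]),
    ])

    def valid_configuration(feature, selected):
        # Simplified semantics: every mandatory feature whose parent is
        # selected must itself be selected; optional features may be omitted.
        if feature.name not in selected:
            return not feature.mandatory
        return all(valid_configuration(c, selected) for c in feature.children)

    # A configuration derived from one hypothetical threat scenario.
    config = {"PPS", "Detection", "MotionSensor", "Deterrence",
              "Fencing", "Response", "GuardForce"}
    print(valid_configuration(pps, config))  # True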
This work describes a top-down systems security requirements analysis approach for understanding and eliciting security requirements for a notional small unmanned aerial system (SUAS). More specifically, the System-Theoretic Process Analysis approach for Security (STPA-Sec) is used to understand and elicit systems security requirements. The effort employs STPA-Sec on a notional SUAS case study to detail the development of functional-level security requirements, design-level engineering considerations, and architectural-level security specification criteria early in the system life cycle, when the solution trade space is largest, rather than merely examining components and adding protections during system operation or sustainment. These details were elaborated during a semester-long independent study research effort by two United States Air Force Academy Systems Engineering cadets, guided by their instructor and a series of working group sessions with UAS operators and subject matter experts. This work provides insight into a viable systems security requirements analysis approach that results in traceable security, safety, and resiliency requirements that can be designed for, built to, and verified with confidence.
A critical need exists for collaboration and action by government, industry, and academia to address cyber weaknesses and vulnerabilities inherent to embedded and cyber physical systems (CPS). These vulnerabilities are introduced as we leverage technologies, methods, products, and services from the global supply chain throughout a system's lifecycle. As adversaries exploit these weaknesses as access points for malicious purposes, solutions for system security and resilience become a priority call for action. The SAE G-32 Cyber Physical Systems Security Committee has been convened to address this complex challenge. The SAE G-32 will take a holistic systems engineering approach to integrating system security considerations and will develop a Cyber Physical System Security Framework. This framework is intended to bring together multiple industries and develop a method and common language that will enable us to communicate the risk, cost, and performance trade space more effectively, efficiently, and consistently. The standard will allow system integrators to make decisions using a common framework and language to develop affordable, trustworthy, resilient, and secure systems.
A recurring principle in consideration of the future of systems engineering is continual dynamic adaptation. Context drives change, whether from potential loss (threats, vulnerabilities) or from potential gain (opportunity-driven). Contextual awareness has great influence over the future of systems engineering and of systems security. Those contextual environments contain fitness functions that will naturally select compatible approaches and filter out the incompatible, with prejudice. We don't have to guess at what those environmental shaping forces will look like. William Gibson famously tells us why: “The future is already here, it's just not evenly distributed”; and it is sometimes difficult to discern. This paper provides archetypes that 1) characterize general systems engineering for products, processes, and operations; 2) characterize the integration of security into systems engineering; and 3) characterize contextually aware agile security. This paper is more of a problem statement than a solution. Solution objectives and tactics for guiding the path forward have a broader range of options for subsequent treatment elsewhere. Our purpose here is to offer a short list of necessary considerations for effective contextually aware adaptive system security in the future of systems engineering.
Technical debt (TD) is an analogy introduced by Cunningham in 1992 to explain how intentional decisions not to follow a gold standard or best practice, made in order to save time or effort during the creation of software, can later lead to a product of lower quality in terms of reliability, maintainability, or extensibility. Little work has been done so far that applies this analogy to cyber physical (production) systems (CP(P)S), and there is also only little work that uses the analogy for security-related issues. This work aims to fill this gap: we want to find out which security-related symptoms within the field of cyber physical production systems can be traced back to TD items across all phases, from requirements and design down to maintenance and operation. This work shall support experts from the field as a first step in exploring the relationship between not following security best practices and the concrete increase in costs incurred as a consequence of the resulting TD.
In the development process of critical systems, one of the main challenges is to provide early system validation and verification against vulnerabilities in order to reduce the cost caused by late error detection. In this paper we propose an approach that, first, allows system security specifications to be described formally, using our proposed extended attack tree. Second, static and dynamic system modeling using a SysML connectivity profile is introduced to model error propagation. Finally, a model checker is used to validate the system specifications.
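As a generic illustration of the attack-tree idea only (the paper's extended attack tree adds further constructs; the AND/OR semantics and node names below are standard textbook fare and our own choice, not the authors' notation):

    # Generic AND/OR attack tree: a node is (kind, label, children).
    def evaluate(node, achieved):
        # Return True if the (sub)goal at `node` is reachable given the set
        # of leaf attack steps the adversary has achieved.
        kind, label, children = node
        if kind == "LEAF":
            return label in achieved
        results = [evaluate(c, achieved) for c in children]
        return all(results) if kind == "AND" else any(results)

    # Goal: compromise the controller via (spoof sensor AND bypass auth)
    # OR exploit firmware.  Names are purely illustrative.
    tree = ("OR", "compromise_controller", [
        ("AND", "remote_takeover", [
            ("LEAF", "spoof_sensor", []),
            ("LEAF", "bypass_auth", []),
        ]),
        ("LEAF", "exploit_firmware", []),
    ])

    print(evaluate(tree, {"spoof_sensor", "bypass_auth"}))  # True
    print(evaluate(tree, {"spoof_sensor"}))                 # False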
This paper describes multiple system security engineering techniques for assessing system security vulnerabilities and discusses the application of these techniques at different system maturity points. The proposed vulnerability assessment approach allows a systems engineer to identify and assess vulnerabilities early in the life cycle and to continually increase the fidelity of the vulnerability identification and assessment as the system matures.
Presented at the Illinois Information Trust Institute Assured Cloud Computing Weekly Research Seminar, September 28, 2016.
The Department of Energy seeks to modernize the U.S. electric grid through the SmartGrid initiative, which includes the use of Global Positioning System (GPS)-timing dependent electric phasor measurement units (PMUs) for continual monitoring and automated controls. The U.S. Department of Homeland Security is concerned with the associated risks of increased utilization of GPS timing in the electricity subsector, which could in turn affect a large number of electricity-dependent Critical Infrastructure (CI) sectors. Exploiting the vulnerabilities of GPS systems in the electricity subsector can result in large-scale and costly blackouts. This paper seeks to analyze the risks of the electric grid's increased dependence on GPS through the introduction of PMUs and provides a systems engineering perspective on the GPS-dependent System of Systems (S-o-S) created by the SmartGrid initiative. The team started by defining and modeling the S-o-S, followed by use of a risk analysis methodology to identify and measure risks and evaluate solutions for mitigating their effects. The team expects that the designs and models resulting from the study will prove useful for determining both current and future risks to GPS-dependent CI sectors, along with the appropriate countermeasures, as the United States moves towards a SmartGrid system.
Detecting and preventing attacks before they compromise a system can be done using acceptance testing, redundancy-based mechanisms, and external consistency checking such as external monitoring and watchdog processes. Diversity-based adjudication is a step towards an oracle that uses the knowable behavior of a healthy system. Under the best circumstances, that approach is able to detect even zero-day attacks. In this approach we use functionally equivalent but in some way diverse components and compare their output vectors and reactions for a given input vector. This paper discusses the practical relevance of this approach in the context of recent web-service attacks.
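A minimal sketch of the adjudication step, assuming a simple majority vote over the output vectors of the diverse replicas (the voting rule and example outputs are ours, chosen for illustration):

    from collections import Counter

    def adjudicate(outputs):
        # Flag the request if the functionally equivalent, diverse replicas
        # disagree: a healthy system should behave consistently across them.
        value, votes = Counter(outputs).most_common(1)[0]
        if votes <= len(outputs) // 2:
            raise RuntimeError("no majority among diverse replicas: possible attack")
        suspects = [i for i, out in enumerate(outputs) if out != value]
        return value, suspects  # agreed result and indices of deviating replicas

    # Three diverse web-service replicas answer the same request;
    # replica 2 deviates and is flagged for investigation.
    result, suspects = adjudicate(["200 OK", "200 OK", "302 REDIRECT"])
    print(result, suspects)  # 200 OK [2]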
Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends, and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10 to 100 gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed, specialized high-speed stream processing. This work addresses the following contemporary problems: (1) there is a lack of hardware/software systems providing stream processing and data stream analysis at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates [1]; GPUs and GPGPUs are also infeasible, since they were not designed for I/O at 10-100 Gbps and have asymmetric resources for input and output and thus cannot be pipelined [4, 2], whereas custom chip-based solutions are costly and inflexible to change, and FPGA-based solutions are historically hard to program [6]; (2) there is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly analytics that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) there are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery, on the data source or one hop away; (4) there is a need for agile deployment in response to changing objectives; and (5) there is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified, low-latency, low-power 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100 Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components and residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) check whether Field 3 of the packet is greater than 5, (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis, and data reduction, formulating raw facts from the stream. Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software layer, Layer 4, which supplies an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit on the right, and relevant facts are passed upward to Layer 2. The output at the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor for extended processing on a given stream (scalability).
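To make the filter-chain interface concrete, a small software model of the Layer 1 chain is sketched below; the real layers are implemented in FPGA hardware, and the packet fields and chaining API here are our illustrative assumptions (the two example tasks come from the abstract):

    def field_gt(field, threshold):
        # Filter: emit a fact when `field` of a packet exceeds `threshold`.
        def flt(packet, facts):
            if packet.get(field, 0) > threshold:
                facts.append((field, "gt", threshold, packet["id"]))
            return packet  # the packet continues down the chain unmodified
        return flt

    def count_type(ptype, counter):
        # Filter: count packets of a given type (e.g., X.75).
        def flt(packet, facts):
            if packet.get("type") == ptype:
                counter[ptype] = counter.get(ptype, 0) + 1
            return packet
        return flt

    def run_chain(filters, packets):
        facts = []                    # facts passed "upward" to Layer 2
        for pkt in packets:
            for flt in filters:       # packets move systolically through the chain
                pkt = flt(pkt, facts)
        return facts

    counter = {}
    chain = [field_gt("field3", 5), count_type("X.75", counter)]
    stream = [{"id": 1, "field3": 7, "type": "X.75"},
              {"id": 2, "field3": 3, "type": "IP"}]
    print(run_chain(chain, stream), counter)
    # [('field3', 'gt', 5, 1)] {'X.75': 1}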
This paper presents a system named SPOT that achieves high-accuracy and preemptive detection of attacks. We use security logs of real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA) to evaluate SPOT. Our data consists of attacks that led directly to the target system being compromised, i.e., attacks not detected in advance either by security analysts or by intrusion detection systems. Our approach can detect 75 percent of attacks as early as minutes to tens of hours before attack payloads are executed.
A key question that arises in the rigorous analysis of cyberphysical systems under attack is whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
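In notation of our own (not the paper's), writing $\mathrm{Obs}(\cdot)$ for the set of observable trajectories over a horizon $[0,T]$, one common way to phrase such a quantitative abstraction relation between the ideal system $\mathcal{I}$ and the attacked system $\mathcal{S}$ is

\[
\mathcal{I} \text{ is a } \delta\text{-abstraction of } \mathcal{S}
\iff
\forall\, y_{\mathcal{S}} \in \mathrm{Obs}(\mathcal{S})\;
\exists\, y_{\mathcal{I}} \in \mathrm{Obs}(\mathcal{I}):\;
\sup_{t \in [0,T]} \lVert y_{\mathcal{S}}(t) - y_{\mathcal{I}}(t) \rVert \le \delta ,
\]

so that $\delta$ quantifies how far the attacked system may stray from the ideal allowed behavior; per the abstract, the paper's decision procedure replaces the exact trajectory sets with simulation-driven, annotation-tightened approximations.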
Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although many MTD techniques have been proposed, there is no systematic understanding or quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal. One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD.
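Loosely, and in our own notation (the paper's formal definitions are stated in terms of cyber epidemic dynamics), the two measures described above can be read as

\[
\alpha^{*} \;=\; \max\bigl\{\, \alpha \in [0,1] : \text{the security goal is met while the system spends a fraction } \alpha \text{ of time in the undesired configuration} \,\bigr\},
\]
\[
c^{*}(\alpha) \;=\; \min\bigl\{\, \mathrm{cost}(m) : \text{MTD deployment } m \text{ meets the security goal with undesired-configuration fraction } \alpha \,\bigr\},
\]

where deployment cost is disregarded in the first measure and the undesired-time fraction is fixed in the second.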
Information system developers and administrators often overlook critical security requirements and best practices. This may be due to a lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables are the number of problem surfaces and schemas, and the dependent variable is answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity, and opportunities for future studies in this area.
The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, the system identities (nyms) used by principals, and the actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue, applicable to our setting, of the well-known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
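For orientation (our notation, not the authors'): a quasilinear utility for principal $i$ over an outcome $o$ and a transfer $t_i$ has the form

\[
u_i(o, t_i) \;=\; v_i(o) + t_i ,
\]

which has no term through which a nym's reputation or history can enter. A utility that stays linear in the transfer while admitting such state, for example

\[
u_i(o, s_i, t_i) \;=\; v_i(o, s_i) + \lambda_i\, t_i , \qquad \lambda_i > 0 ,
\]

with $s_i$ an illustrative reputation/identity state, gestures at the richer utilities with linear transfer that the authors propose; this second form is our illustrative reading, not their definition.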