Bibliography
The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges in developing, deploying, and managing self-adaptive software systems. Instead of dealing with the wide range of topics associated with the field, we focus on four essential topics of self-adaptation: the design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems, published in 2009, which covered a different set of topics; we also reflect in part on that earlier paper. This roadmap is one of the many results of Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
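As a concrete intuition for the feedback loops this roadmap discusses, the sketch below implements a minimal monitor-analyze-plan-execute cycle in the spirit of the well-known MAPE-K reference model. All component names, the latency goal, and the scale-out rule are hypothetical illustrations, not taken from the paper.

```python
# Minimal MAPE-K feedback loop sketch (illustrative; all names are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    latency_slo_ms: float = 200.0       # adaptation goal (assumed)
    replicas: int = 1                   # current configuration
    history: list = field(default_factory=list)

def monitor(probe, k: Knowledge) -> float:
    latency = probe()                   # sense the managed system
    k.history.append(latency)
    return latency

def analyze(latency: float, k: Knowledge) -> bool:
    return latency > k.latency_slo_ms   # is the goal violated?

def plan(k: Knowledge) -> int:
    return k.replicas + 1               # simplest plan: scale out by one

def execute(new_replicas: int, actuate, k: Knowledge) -> None:
    actuate(new_replicas)               # act on the managed system
    k.replicas = new_replicas

def mape_loop(probe, actuate, k: Knowledge, steps: int) -> None:
    for _ in range(steps):
        latency = monitor(probe, k)
        if analyze(latency, k):
            execute(plan(k), actuate, k)

if __name__ == "__main__":
    k = Knowledge()
    readings = iter([150, 250, 240, 180])   # fake latency samples
    mape_loop(lambda: next(readings), lambda r: print("scale to", r), k, steps=4)
```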
In the current generation of SCADA (Supervisory Control And Data Acquisition) systems used in power grids, a sophisticated attacker can exploit system vulnerabilities and issue legitimate but maliciously crafted commands to cause a wide range of system changes that traditional contingency analysis does not consider and remedial action schemes cannot handle. To detect such malicious commands, we propose a semantic analysis framework based on a distributed network of intrusion detection systems (IDSes). The framework combines system knowledge of both the cyber and physical infrastructure of the power grid to help the IDSes estimate the execution consequences of control commands and thus reveal an attacker's malicious intent. We evaluated the approach on the IEEE 30-bus system. Our experiments demonstrate that: (i) by opening 3 transmission lines, an attacker can avoid detection by traditional contingency analysis and instantly put the tested 30-bus system into an insecure state, and (ii) the semantic analysis provides reliable detection of malicious commands with a small amount of analysis time.
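At its core, such a semantic check intercepts a control command, predicts its physical consequences before it reaches the field devices, and blocks it if the predicted post-command state is insecure. The toy sketch below illustrates that idea for an "open transmission line" command; the naive even redistribution of flow stands in for a real power-flow computation, and all line names and numbers are invented for illustration.

```python
# Toy semantic check for an "open transmission line" command (illustrative).
# A real IDS would run an actual power-flow solver; here a crude proxy
# redistributes the opened line's flow evenly across the remaining lines.

# line id -> (current flow in MW, thermal limit in MW); values are made up.
GRID = {
    "L1": (80.0, 100.0),
    "L2": (60.0, 100.0),
    "L3": (90.0, 100.0),
}

def simulate_open_line(grid, line_id):
    """Predict post-command flows after opening one line (naive model)."""
    flow, _ = grid[line_id]
    remaining = {k: v for k, v in grid.items() if k != line_id}
    share = flow / len(remaining)          # crude redistribution proxy
    return {k: (f + share, lim) for k, (f, lim) in remaining.items()}

def command_is_safe(grid, line_id):
    """Block the command if any surviving line would exceed its limit."""
    post = simulate_open_line(grid, line_id)
    return all(f <= lim for f, lim in post.values())

if __name__ == "__main__":
    cmd = "L1"                              # intercepted control command
    verdict = "ALLOW" if command_is_safe(GRID, cmd) else "BLOCK"
    print(f"open {cmd}: {verdict}")         # prints "open L1: BLOCK"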
Memory corruption bugs in software written in low-level languages like C or C++ are one of the oldest problems in computer security. The lack of safety in these languages allows attackers to alter the program's behavior or take full control over it by hijacking its control flow. This problem has existed for more than 30 years and a vast number of potential solutions have been proposed, yet memory corruption attacks continue to pose a serious threat. Real-world exploits show that all currently deployed protections can be defeated. This paper sheds light on the primary reasons for this by describing attacks that succeed on today's systems. We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model we show which policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as of other proposed protections enforcing stricter policies. We analyze the reasons why protection mechanisms implementing stricter policies are not deployed. To achieve wide adoption, protection mechanisms must support a multitude of features and must satisfy a host of requirements. Especially important is performance, as experience shows that only solutions whose overhead stays within reasonable bounds get deployed. A comparison of different enforceable policies helps designers of new protection mechanisms find the balance between effectiveness (security) and efficiency. We identify some open research problems, and provide suggestions on improving the adoption of newer techniques.
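To make the attack-and-policy model concrete, the sketch below simulates a stack frame as a byte buffer, shows how an unchecked write overruns the buffer into the saved return address, and shows how a stack-canary policy detects the corruption before the "return" uses it. This is a pure simulation for intuition; it is not the paper's model, involves no real memory unsafety, and the layout and addresses are invented.

```python
import os

# Toy stack frame: [ 8-byte buffer | 8-byte canary | 8-byte return address ]
BUF, CANARY, RET = slice(0, 8), slice(8, 16), slice(16, 24)

def make_frame(ret_addr: int):
    frame = bytearray(24)
    frame[CANARY] = (canary := os.urandom(8))   # secret value placed below RET
    frame[RET] = ret_addr.to_bytes(8, "little")
    return frame, canary

def unchecked_copy(frame: bytearray, data: bytes) -> None:
    # Simulates strcpy-style behavior: no bounds check against the buffer size.
    frame[0:len(data)] = data

def returning(frame: bytearray, canary: bytes) -> int:
    if bytes(frame[CANARY]) != canary:          # stack-canary policy check
        raise RuntimeError("stack smashing detected")
    return int.from_bytes(frame[RET], "little")

frame, canary = make_frame(0x401000)
unchecked_copy(frame, b"A" * 24)                # overflow reaches RET...
try:
    returning(frame, canary)
except RuntimeError as e:
    print(e)                                    # ...but the canary check fires
```

Without the canary check, the simulated "return" would happily jump to the attacker-supplied 0x4141414141414141, which is exactly the control-flow hijack the surveyed policies aim to stop.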
A successful Smart Grid system requires a purpose-built security architecture that is explicitly designed to protect the confidentiality of customer data. In addition to investing in electric power infrastructure to protect the privacy of Smart Grid-related data, entities need to actively participate in the NIST interoperability framework process and establish policies and an oversight structure for enforcing cyber security controls over the data, through the adoption of security best practices, personnel training, cyber vulnerability assessments, and consumer privacy audits.
The use of shared mutable state, commonly seen in object-oriented systems, is often problematic due to potentially conflicting interactions between aliases to the same state. We present a substructural type system outfitted with a novel lightweight interference control mechanism, rely-guarantee protocols, that enables controlled aliasing of shared resources. By assigning separate roles to each alias, encoded in a novel protocol abstraction in the spirit of rely-guarantee reasoning, our type system ensures that challenging uses of shared state will never interfere in an unsafe fashion. In particular, rely-guarantee protocols ensure that each alias will never observe an unexpected value or type when inspecting shared memory, regardless of how the changes to that shared state (originating from potentially unknown program contexts) are interleaved at run time.
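The paper enforces these protocols statically through its type system; as a rough intuition only, the sketch below dynamically approximates a rely-guarantee protocol on one shared cell. Each alias declares which state transitions it may perform (its guarantee) and which states it may safely observe (its rely), and runtime checks flag any unsafe interleaving. The protocol states and alias roles here are invented for illustration.

```python
# Dynamic approximation of a rely-guarantee protocol (illustrative only;
# the paper enforces this discipline statically, with no runtime checks).

class SharedCell:
    def __init__(self):
        self.state, self.value = "Empty", None

class Alias:
    def __init__(self, cell, rely, guarantee):
        self.cell = cell
        self.rely = rely            # states this alias may observe
        self.guarantee = guarantee  # transitions this alias may perform

    def read(self):
        assert self.cell.state in self.rely, \
            f"rely violated: observed {self.cell.state}"
        return self.cell.value

    def write(self, new_state, value):
        assert (self.cell.state, new_state) in self.guarantee, \
            f"guarantee violated: {self.cell.state} -> {new_state}"
        self.cell.state, self.cell.value = new_state, value

cell = SharedCell()
# The producer may fill an empty cell; the consumer may empty a filled one.
producer = Alias(cell, rely={"Empty"}, guarantee={("Empty", "Filled")})
consumer = Alias(cell, rely={"Empty", "Filled"}, guarantee={("Filled", "Empty")})

producer.write("Filled", 42)   # allowed by the producer's guarantee
print(consumer.read())         # safe: the consumer's rely covers "Filled"
consumer.write("Empty", None)  # allowed by the consumer's guarantee
```

Because each alias's guarantee is contained in what every other alias relies on, no interleaving of these two aliases can expose an unexpected state, which is the interference-freedom property the type system proves once and for all.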
Security requirements engineering ideally combines expertise in software security with proficiency in requirements engineering to provide a foundation for developing secure systems. However, security requirements are often inadequately understood and improperly specified, frequently due to a lack of security expertise and a lack of emphasis on security during the early stages of system development. Software systems often have common and recurrent security requirements in addition to system-specific security needs. Security requirements patterns can provide a means of capturing common security requirements while documenting the context in which a requirement manifests itself and the tradeoffs involved. The objective of this paper is to aid in understanding the process of pattern development and to provide considerations for writing effective security requirements patterns. We analyzed existing literature on software patterns, problem solving, and cognition to outline the process for developing software patterns. We also reviewed strategies for specifying reusable security requirements and security requirements patterns. Our proposed considerations can aid pattern writers in capturing the necessary contextual information when documenting security requirements patterns, facilitating the application and integration of security requirements.
Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method, online centroid anomaly detection, in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: the attacker's full or limited control over the traffic, and a bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
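The mechanics are easy to state: the detector keeps a centroid of accepted normal points and flags anything farther than a radius r from it, while a poisoning attacker feeds it accepted points that gradually drag the centroid toward a target. The sketch below shows both sides under the simplest incremental-mean update; the update rule, radius, and data are illustrative choices, not the exact setting analyzed in the paper.

```python
import numpy as np

R = 1.0  # detection radius: points farther than R from the centroid are flagged

def is_anomalous(center, x):
    return np.linalg.norm(x - center) > R

def online_update(center, x, n):
    # incremental mean over accepted points: c <- c + (x - c) / (n + 1)
    return center + (x - center) / (n + 1)

center, n = np.zeros(2), 100          # trained centroid and sample count
target = np.array([5.0, 0.0])         # point the adversary wants accepted

# Greedy poisoning: each injected point sits on the detection boundary,
# displaced from the centroid toward the target, so it is always accepted.
steps = 0
while is_anomalous(center, target):
    direction = (target - center) / np.linalg.norm(target - center)
    poison = center + R * direction    # accepted: exactly at distance R
    center = online_update(center, poison, n)
    n += 1
    steps += 1

print(f"target accepted after {steps} poisoned points")
```

Each injection moves the centroid by only R/(n+1), so the attacker's per-point gain shrinks as the detector accumulates data; this diminishing return is the intuition behind bounding the attacker's gain once external constraints (such as a false positive budget) cap how much the attacker can inject.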