Bibliography
With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other. This has brought security issues with it. To address these issues, both the research and development communities are trying to build more secure software. However, the question remains how secure software is defined and how security can be measured. In this paper, we address this problem by studying what kinds of security measurement tools (i.e., metrics) are available, and what these tools and metrics reveal about the security of software. As a result of the study, we found that security verification activities fall into two main categories: evaluation and assurance. There exist 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics led us to the conclusion that their general quality is not at a level that makes them suitable for daily engineering workflows. They have both theoretical and practical issues that require further research and improvement.
Metaheuristic search techniques are more advanced approaches than traditional heuristic search techniques. Selecting one option among different alternatives is not hard; what is hard is giving assurance that the selection is cost-effective. Metaheuristic search techniques solve this hard problem with the help of a fitness function. The fitness function is a crucial metric, or measure, that helps in deciding which solution is optimal to choose from the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. Combining metaheuristic search techniques with security testing methods would result in search techniques that are both more effective and more secure. This paper primarily focuses on metaheuristic search techniques; a minimal sketch of a fitness-driven search appears below.
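As an illustration of how a fitness function drives such a search, the following is a minimal hill-climbing sketch in Python for selecting a subset of test cases; the coverage data, cost weights, and neighborhood move are hypothetical stand-ins for the problem-specific choices a real security testing setup would make, not any technique from the paper itself.

```python
import random

# Hypothetical test suite: each test covers some requirements and has a cost.
COVERAGE = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r4"}, "t4": {"r1", "r4"}}
COST = {"t1": 3.0, "t2": 2.0, "t3": 1.0, "t4": 2.5}

def fitness(selection):
    """Fitness rewards requirement coverage and penalizes execution cost."""
    covered = set().union(*(COVERAGE[t] for t in selection)) if selection else set()
    cost = sum(COST[t] for t in selection)
    return len(covered) - 0.1 * cost

def neighbor(selection):
    """Flip the inclusion of one randomly chosen test case."""
    t = random.choice(list(COVERAGE))
    return selection ^ {t}  # symmetric difference toggles membership

def hill_climb(iterations=1000):
    """Keep the candidate whenever the move does not worsen fitness."""
    current = set(random.sample(list(COVERAGE), 2))
    for _ in range(iterations):
        candidate = neighbor(current)
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current, fitness(current)

if __name__ == "__main__":
    best, score = hill_climb()
    print("selected tests:", sorted(best), "fitness:", score)
```

The same fitness function could be plugged into simulated annealing or a genetic algorithm; only the acceptance and variation rules would change.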
This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing using a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'), which is that the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for protection systems, or to demonstrate any (desirable) reliability levels for continuous operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
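To make the scale of the 'time problem' concrete, the standard statistical-testing argument (a textbook illustration, not this paper's model) is: to claim with confidence C that the probability of failure on demand is below p, roughly

\[
  n \;\ge\; \frac{\ln(1-C)}{\ln(1-p)} \;\approx\; \frac{-\ln(1-C)}{p}
\]

demands must be observed failure-free. At C = 0.99 and p = 10^-4 this is already about 4.6 x 10^4 demands, and demonstrating reliability for continuous operation over a period T inflates the required test time by similar factors.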
Software has emerged as a significant part of many domains, including financial service platforms, social networks and vehicle control. Standards organizations have responded to this by creating regulations to address issues such as safety and privacy. In this context, compliance of software with standards has emerged as a key issue. For software development organizations, compliance is a complex and costly goal to achieve and is often accomplished by producing so-called assurance cases, which demonstrate that the system indeed satisfies the property imposed by a standard (e.g., safety, privacy, security). As systems and standards undergo evolution for a variety of reasons, maintaining assurance cases multiplies the effort. In this work, we exploit the connection between the field of model management and the problem of compliance management, and propose methods that use model management techniques to address compliance scenarios such as assurance case evolution and reuse. For validation, we ground our approaches in the automotive domain and the ISO 26262 standard for functional safety of road vehicles.
Secure by design is an approach to developing secure software systems from the ground up. In such an approach, alternative security tactics are first considered; the best among them are selected and enforced by the architecture design, and then used as guiding principles for developers. Thus, design flaws in the architecture of a software system mean that successful attacks could result in enormous consequences. Therefore, secure by design shifts the main focus of software assurance from finding security bugs to identifying architectural flaws in the design. Current research in software security has neglected vulnerabilities that are caused by flaws in a software architecture design and/or deteriorations of the implementation of the architectural decisions. In this paper, we present the concept of the Common Architectural Weakness Enumeration (CAWE), a catalog which enumerates common types of vulnerabilities rooted in the architecture of software and provides mitigation techniques to address them. The CAWE catalog organizes the architectural flaws according to known security tactics. We developed an interactive web-based solution which helps designers and developers explore this catalog based on the architectural choices made in their project. The CAWE catalog contains 224 weaknesses related to security architecture. Through this catalog, we aim to promote awareness of architectural security flaws and stimulate the security design thinking of developers, software engineers, and architects.
Security cases, which document the rationale for believing that a system is adequately secure, have not been used widely, largely because of a lack of practical construction methods. This paper presents a hierarchical software security case development method to address this issue. We first present a security concept relationship model, then develop a hierarchical asset-threat-control-measure argument strategy, together with an asset classification and a threat classification for software security cases. Lastly, we propose 11 software security case patterns and illustrate one of them.
Increasing interest in cyber-physical systems with integrated computational and physical capabilities that can interact with humans can be observed in both research and practice. Since these systems can be classified as safety- and security-critical, the need for safety and security assurance and certification will grow. Moreover, these systems are typically characterized by fragmentation, interconnectedness, heterogeneity, short release cycles, a cross-organizational nature, and high interference between safety and security requirements. These properties, combined with the assurance of compliance to multiple standards, the need to carry out certification and re-certification, and the lack of an approach to model, document and integrate safety and security requirements, represent a major challenge. In order to address this gap, we developed a domain-agnostic approach to model security and safety requirements in an integrated view to support certification processes during the design and run-time phases of cyber-physical systems.
Cloud data centers are critical infrastructures for delivering cloud services. Although the security and performance of cloud data centers have been well studied in the past, their networking aspects have been overlooked. Current network infrastructures in cloud data centers limit the ability of cloud providers to offer guaranteed cloud network resources to users. In order to ensure the security and performance requirements defined in the service level agreement (SLA) between cloud user and provider, cloud providers need the ability to provision network resources dynamically and on the fly. The main challenge for cloud providers in utilizing network resources can be addressed by provisioning virtual networks that support information-centric services by separating the control plane from the cloud infrastructure. In this paper, we propose an SDN-based information-centric cloud framework to provision network resources in order to support elastic demands of cloud applications depending on SLA requirements. The framework decouples the control plane and data plane, wherein the conceptually centralized control plane controls and manages the fully distributed data plane. It computes paths to ensure the security and performance of the network. We report an initial experiment on the average round-trip delay between consumers and producers.
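A minimal sketch of the centralized control-plane idea, under made-up SLA fields and link attributes (this is not the paper's framework, only an illustration of a controller choosing paths against SLA requirements):

```python
# Hypothetical topology: links annotated with latency (ms) and an encrypted flag.
LINKS = {
    ("A", "B"): {"latency": 2, "encrypted": True},
    ("B", "C"): {"latency": 3, "encrypted": True},
    ("A", "C"): {"latency": 1, "encrypted": False},
}

def candidate_paths(src, dst):
    """Enumerate the two illustrative paths in this toy topology."""
    return [[("A", "C")], [("A", "B"), ("B", "C")]] if (src, dst) == ("A", "C") else []

def select_path(src, dst, sla):
    """Centralized control-plane decision: keep only paths meeting the SLA,
    then pick the lowest-latency one."""
    feasible = []
    for path in candidate_paths(src, dst):
        latency = sum(LINKS[link]["latency"] for link in path)
        secure = all(LINKS[link]["encrypted"] for link in path)
        if latency <= sla["max_latency"] and (secure or not sla["require_encryption"]):
            feasible.append((latency, path))
    return min(feasible, default=(None, None))[1]

print(select_path("A", "C", {"max_latency": 10, "require_encryption": True}))
```

The data plane would then simply forward along the installed path; all SLA reasoning stays in the controller.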
There is a widening chasm between the ease of creating software and the difficulty of "building security in". This paper reviews the approach, the findings and recent experiments from a seven-year effort to enable consistency across a large, diverse development organization and software portfolio via policies, guidance, automated tools and services. Experience shows that developing secure software is an elusive goal for most. It requires every team to know and apply a wide range of security knowledge in the context of what software is being built, how the software will be used, and the projected threats in the environment where the software will operate. The drive for better outcomes for secure development and increased developer productivity led to experiments to augment developer knowledge and eventually realize the goal of "building the right security in".
In mixed-criticality systems, functionalities of different criticalities, which need to have their correctness validated to different levels of assurance, co-exist upon a shared platform. Multiple specifications at differing levels of assurance may be provided for such systems; the specifications that are trusted at very high levels of assurance tend to be more conservative than those at lower levels of assurance. Prior research on the scheduling of such mixed-criticality systems has primarily focused upon the case where multiple estimates of the worst-case execution time (WCET) of pieces of code are provided; in this paper, a model is considered in which multiple estimates are instead provided for the rate at which event-triggered processes are executed. An algorithm is derived for scheduling such systems upon a preemptive uniprocessor; the effectiveness of this algorithm is demonstrated quantitatively via the speedup factor metric.
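As a rough illustration of the multiple-estimate model (deliberately naive, and not the scheduling algorithm derived in the paper), each event-triggered process can be thought of as carrying a low-assurance rate estimate and a more conservative rate bound trusted at high assurance; a crude utilization-style check under hypothetical parameters might look like this:

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    wcet: float        # execution time per event (ms)
    rate_lo: float     # arrival rate estimated at low assurance (events/ms)
    rate_hi: float     # conservative rate bound trusted at high assurance
    hi_critical: bool  # does this process need high-assurance validation?

PROCS = [
    Process("sensor_fusion", wcet=1.0, rate_lo=0.10, rate_hi=0.20, hi_critical=True),
    Process("logging",       wcet=0.5, rate_lo=0.30, rate_hi=0.60, hi_critical=False),
]

def naive_feasibility(procs):
    """Two utilization checks: every process under its low-assurance rates,
    and only the high-criticality subset under the conservative rate bounds."""
    u_lo = sum(p.wcet * p.rate_lo for p in procs)
    u_hi = sum(p.wcet * p.rate_hi for p in procs if p.hi_critical)
    return u_lo <= 1.0 and u_hi <= 1.0

print(naive_feasibility(PROCS))
```

The paper's contribution lies in a far less pessimistic uniprocessor algorithm and its speedup-factor analysis; the sketch only shows where the two rate estimates enter the model.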
Software fault prediction is a significant part of software quality assurance, and it is commonly used to detect faulty software modules based on software measurement data. Several machine learning based approaches have been proposed for generating predictive models from collected data, although none has become standard given the specificities of each software project. Hence, we believe that recommending the best algorithm for each project is much more important and useful than developing a single algorithm to be used in any project. To achieve that goal, we propose in this paper a novel framework for recommending machine learning algorithms that is capable of automatically identifying the most suitable algorithm for the software project being considered. Our solution, namely SFP-MLF, makes use of the meta-learning paradigm in order to learn the best learner for a particular project. Results show that the SFP-MLF framework provides both the best single algorithm recommendation and the best ranking recommendation for the software fault prediction problem.
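A minimal sketch of the meta-learning idea behind such a recommender (the meta-features, past projects, and algorithm names below are hypothetical, and this is not the SFP-MLF implementation): characterize each project by meta-features, then recommend the algorithm that performed best on the most similar previously seen project.

```python
import math

# Hypothetical meta-knowledge base: per past project, its meta-features and
# the fault-prediction algorithm that worked best on it.
META_BASE = [
    {"features": {"modules": 1200, "fault_ratio": 0.15, "metrics": 20}, "best": "RandomForest"},
    {"features": {"modules": 300,  "fault_ratio": 0.40, "metrics": 12}, "best": "NaiveBayes"},
    {"features": {"modules": 5000, "fault_ratio": 0.05, "metrics": 35}, "best": "GradientBoosting"},
]

def distance(a, b):
    """Euclidean distance over crudely normalized meta-features."""
    scale = {"modules": 5000.0, "fault_ratio": 1.0, "metrics": 35.0}
    return math.sqrt(sum(((a[k] - b[k]) / scale[k]) ** 2 for k in scale))

def recommend(new_project_features):
    """1-nearest-neighbour recommendation over the meta-knowledge base."""
    nearest = min(META_BASE, key=lambda e: distance(e["features"], new_project_features))
    return nearest["best"]

print(recommend({"modules": 900, "fault_ratio": 0.20, "metrics": 18}))
```

A ranking recommendation would return the algorithms of the k nearest projects ordered by similarity rather than a single winner.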
The fully automatic generation of code that establishes the reversibility of arbitrary C/C++ code has been a target of research and engineering for more than a decade, as reverse computation has become a central notion in large-scale parallel discrete event simulation (PDES). The simulation models that are implemented for PDES are of increasing complexity and size and require various language features to support abstraction, encapsulation, and composition when building a simulation model. In this paper we focus on parallel simulation models that are written in C++ and present an approach to, and an evaluation of, fully automatically generated reversible code for a kinetic Monte Carlo application implemented in C++. Although our technique introduces a significant runtime overhead, the assurance that the reverse code is generated automatically and correctly is an enormous win that allows simulation model developers to write forward event code using the entire C++ language and have that code automatically transformed into reversible code to enable parallel execution with Rensselaer's Optimistic Simulation System (ROSS).
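To illustrate what "reversible event code" means in reverse computation, here is a hand-written toy example in Python (not the generated C++ of this paper): a forward event handler either performs constructive updates whose inverse is computable directly, or saves the minimal information needed to undo a destructive update, and the reverse handler undoes the work in opposite order.

```python
class LPState:
    """State of a logical process in an optimistic PDES toy model."""
    def __init__(self):
        self.count = 0
        self.last_value = None  # saved only where the update is destructive

def forward_event(state, value):
    """Forward handler: a constructive update (increment) plus one destructive
    update (overwrite), for which the old value is saved as an undo record."""
    state.count += 1                              # reversible by construction
    state.last_value, saved = value, state.last_value
    return saved                                  # undo record

def reverse_event(state, saved):
    """Reverse handler: undo in the opposite order of the forward handler."""
    state.last_value = saved
    state.count -= 1

s = LPState()
undo = forward_event(s, 42)
reverse_event(s, undo)
assert s.count == 0 and s.last_value is None     # state fully restored
```

The point of the paper is that such reverse handlers, including the decision of what to save, are generated automatically for full C++ rather than written by hand.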
Finding security vulnerabilities in source code as early as possible is becoming more and more essential. In this respect, vulnerability prediction models have the potential to support security assurance activities by identifying the code locations that deserve the most attention. In this paper, we investigate whether prediction models behave like milk (i.e., they turn with time) or wine (i.e., they improve with time) when used to predict future vulnerabilities. Our findings indicate that the recall values are largely in favor of predictors based on older versions. However, the better recall comes at the price of much higher file inspection ratio values.
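Both measures discussed here are standard; a small sketch of how recall and file inspection ratio might be computed from one prediction run (the file names and labels are hypothetical):

```python
# Hypothetical prediction outcome: files the model flagged as vulnerable,
# and files that actually turned out to contain vulnerabilities.
ALL_FILES = {"a.c", "b.c", "c.c", "d.c", "e.c"}
FLAGGED = {"a.c", "b.c", "c.c"}
ACTUALLY_VULNERABLE = {"a.c", "d.c"}

def recall(flagged, vulnerable):
    """Fraction of truly vulnerable files that the model flagged."""
    return len(flagged & vulnerable) / len(vulnerable)

def file_inspection_ratio(flagged, all_files):
    """Fraction of the code base a reviewer must inspect to act on the flags."""
    return len(flagged) / len(all_files)

print(recall(FLAGGED, ACTUALLY_VULNERABLE))       # 0.5
print(file_inspection_ratio(FLAGGED, ALL_FILES))  # 0.6
```

The trade-off reported in the paper is visible in this form: flagging more files can raise recall while driving the inspection ratio up.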
Agile methodologies are becoming widespread in modern software development. However, due to a lack of safety assurance activities, agile methods are criticized as inadequate for the development of safe software. Safety analysis and safety verification are complementary methods for safety assurance, yet both usually rely on traditional, waterfall-like processes. There is therefore a strong need to integrate an appropriate safety analysis approach into agile software development processes, driving architecture design and verifying the safe design at the code level. This paper presents a novel agile process model, "S-Scrum", based on the existing development process "Safe Scrum" and extended by a safety analysis method and a safety verification approach based on STPA (System-Theoretic Process Analysis). The proposed agile development process S-Scrum consists of three parts: (1) performing safety-guided design with STPA inside each sprint; (2) verifying safety requirements at the code level using model checking; and (3) replacing traditional RAMS (Reliability, Availability, Maintainability, Safety) validation of the final product with STPA safety analysis. We adopt other aspects from the original Safe Scrum. Finally, the feasibility of S-Scrum is illustrated with the example of an airbag system.
Code review is known to be an efficient quality assurance technique. Many software companies today use it, usually with a process similar to the patch review process in open source software development. However, a large fraction of companies still perform almost no code reviews at all, and the companies that do code reviews vary considerably in the details of their processes. For researchers trying to improve the use of code reviews in industry, it is important to know the reasons for these process variations. We have performed a grounded theory study to clarify process variations and their rationales. The study is based on interviews with software development professionals from 19 companies. These interviews provided insights into the reasons and influencing factors behind the adoption or non-adoption of code reviews as a whole, as well as behind different process variations. We have condensed these findings into seven hypotheses and a classification of the influencing factors. Our results show the importance of cultural and social issues for review adoption. They trace many process variations to differences in development context and in desired review effects.
Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams both with and without separate testers. Question: How does successful agile development work without separate testers? What are the advantages and disadvantages? Method: A case study, based on a Grounded Theory evaluation of interviews and direct observation of three agile teams: one with separate testers, two without. All teams perform long-term development of parts of e-business web portals. Results: Teams without testers use a quality experience work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, and resulting in frequent deployments. Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.
Evolution in software systems is a necessary activity that occurs due to bug fixes, added functionality, or improvements to system quality. Systems often need to be shown to comply with regulatory standards. Along with demonstrating compliance, an artifact, called an assurance case, is often produced to show that the system indeed satisfies the property imposed by the standard (e.g., safety, privacy, security, etc.). Since each of the system, the standard, and the assurance case can be represented as a model, we propose the extension and use of traditional model management operators to aid in the reuse of parts of the assurance case when the system undergoes an evolution. Specifically, we present a model management approach that eventually produces a partial evolved assurance case and guidelines to help the assurance engineer complete it. We demonstrate how our approach works on an automotive subsystem regulated by the ISO 26262 standard.
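A deliberately simplified sketch of the kind of model management step involved, using a toy assurance-case representation (these are not the paper's operators): after a system model change, mark the claims whose supporting evidence traces to changed system elements, so the engineer knows which parts of the case must be revisited and which can potentially be reused.

```python
# Toy assurance case: each claim cites evidence tied to system model elements.
ASSURANCE_CASE = {
    "C1: braking is safe": {"evidence_elements": {"BrakeECU", "SpeedSensor"}},
    "C2: logging is private": {"evidence_elements": {"Logger"}},
}

def impacted_claims(case, changed_elements):
    """Return the claims whose evidence traces to any changed system element."""
    return {
        claim
        for claim, data in case.items()
        if data["evidence_elements"] & changed_elements
    }

# System evolution: the BrakeECU component was modified.
print(impacted_claims(ASSURANCE_CASE, {"BrakeECU"}))
# -> {'C1: braking is safe'}  (C2 may be reusable as-is)
```

A full approach would also propagate impact up the argument structure and generate guidance for re-establishing the affected evidence.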
A self-adaptive system (SAS) can reconfigure to adapt to potentially adverse conditions that can manifest in the environment at run time. However, the SAS may not have been explicitly developed with such conditions in mind, thereby requiring additional configuration states or updates to the requirements specification for the SAS to provide assurance that it continually satisfies its requirements and delivers acceptable behavior. By discovering both adverse environmental conditions and the SAS configuration states that can mitigate those conditions at design time, an SAS can be hardened against uncertainty prior to deployment, effectively extending its lifetime. This paper introduces two search-based techniques, Ragnarok and Valkyrie, for hardening an SAS against uncertainty. Ragnarok automatically discovers adverse conditions that negatively impact an SAS by searching for environmental conditions that explicitly cause requirements violations. Valkyrie then searches for SAS configurations that improve requirements satisficement throughout execution in response to discovered adverse environmental conditions. Together, these techniques can be used to improve the design and implementation of an SAS. We apply each technique to an industry-provided remote data mirroring application that can self-reconfigure in response to unknown or adverse conditions, such as network message delays, network link failures, and sensor noise.
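A compact sketch of the search-based idea in the spirit of the first technique, with a made-up requirement and environment model (this is not the Ragnarok/Valkyrie implementation): evolve environmental conditions toward those that maximize requirement violation.

```python
import random

def violation(env):
    """Toy fitness: how badly a made-up latency requirement (<= 100 ms) is
    violated under given message delay and link failure conditions."""
    delay, failed_links = env
    effective = delay * (1 + 0.5 * failed_links)
    return max(0.0, effective - 100.0)

def mutate(env):
    """Perturb delay and failed-link count slightly."""
    delay, failed_links = env
    return (max(0.0, delay + random.uniform(-10, 10)),
            max(0, failed_links + random.choice([-1, 0, 1])))

def search_adverse_conditions(generations=200, pop_size=20):
    """Evolve environments toward those that maximize requirement violation."""
    pop = [(random.uniform(0, 120), random.randint(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=violation, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=violation)

print(search_adverse_conditions())
```

The complementary step would then search over SAS configurations, with fitness measuring how well a configuration restores requirement satisficement under the discovered conditions.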
An increasingly important concern for software engineers is handling uncertainty at runtime. Over the last decade, researchers have applied architecture-based self-adaptation approaches to address this concern. However, providing the guarantees required by current software systems has proven challenging with these approaches. To tackle this challenge, we study the application of control theory to realize self-adaptation and develop novel control-based adaptation mechanisms that guarantee desired system properties. Results are validated on systems with strict requirements.
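A minimal sketch of control-based self-adaptation, assuming a made-up adaptation knob, latency model, and setpoint (not the validated mechanisms of this work): a simple proportional controller adjusts the number of server instances to drive measured latency toward a target.

```python
def simulate_adaptation(target_latency=100.0, steps=15):
    """Proportional (P) controller: the adaptation knob is the number of server
    instances; the controlled variable comes from a toy latency model."""
    servers, gain = 2.0, 0.01
    for step in range(steps):
        latency = 40.0 * 50.0 / servers      # toy model of the managed system
        error = latency - target_latency     # positive -> system is too slow
        servers = max(1.0, servers + gain * error)
        print(f"step={step:2d} servers={servers:6.2f} latency={latency:7.1f}")

simulate_adaptation()
```

The appeal of the control-theoretic framing is that, for a given system model, properties such as stability and settling time of this loop can be analyzed rather than only tested empirically.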