Biblio
The "aging" phenomenon occurs after the long-term running of software, with the fault rate rising and running efficiency dropping. As there is no corresponding testing type for this phenomenon among conventional software tests, "software runtime accumulative testing" is proposed. Through analyzing several examples of software aging causing serious accidents, software is placed in the system environment required for running and the occurrence mechanism of software aging is analyzed. In addition, corresponding testing contents and recommended testing methods are designed with regard to all factors causing software aging, and the testing process and key points of testing requirement analysis for carrying out runtime accumulative testing are summarized, thereby providing a method and guidance for carrying out "software runtime accumulative testing" in software engineering.
In a transient distributed cloud computing environment, software is vulnerable to attack, which compromises its functional completeness, so functional testing is necessary. To address the high overhead and complexity of unsupervised testing methods, an intelligent evaluation method for transient analysis software function testing based on an active deep learning algorithm is proposed. First, an active deep learning model of transient analysis software function testing is constructed using association rule mining, and the correlation dimension characteristics of software functional failures are analyzed. The reliability of the software is then measured using the spectral density distribution of software functional completeness. An intelligent evaluation model of transient analysis software function testing is established in the transient distributed cloud computing environment, realizing both function testing and intelligent reliability evaluation. Finally, the performance of the method is verified by simulation experiments. The results show that, when the method is used to test the functions of transient analysis software, functional-integrity defects are located accurately and the intelligent evaluation adapts well, ensuring safe and reliable operation of the software.
Software vulnerabilities often remain hidden until an attacker exploits the weak/insecure code. Testing software from a vulnerability-discovery perspective is therefore challenging for developers unless they inspect their code thoroughly, which is time-consuming. We propose that vulnerability prediction using certain software metrics can support the testing process by identifying vulnerable code components (e.g., functions or classes). Once a code component is predicted as vulnerable, developers can focus their testing effort on it, avoiding the time and effort required to test the entire application. This paper presents a study that compares how software metrics perform as vulnerability predictors for software projects developed in two different languages (Java vs. Python). The goal of this research is to analyze the vulnerability prediction performance of software metrics across programming languages. We designed and conducted experiments on security vulnerabilities reported for three Java projects (Apache Tomcat 6, Tomcat 7, Apache CXF) and two Python projects (Django and Keystone). We focus on one specific type of code component, functions, and apply machine learning models to predict vulnerable functions. Overall, the results show that software-metric-based vulnerability prediction is more useful for the Java projects than the Python projects (i.e., software metrics used as features predicted Java vulnerable functions with higher recall and precision than Python vulnerable functions).
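As a rough illustration of the kind of metric-based prediction the study evaluates (not its exact features, projects, or models), the sketch below trains a classifier on a per-function metric matrix and reports precision and recall; synthetic, imbalanced data stands in for the real metric and label sets.

```python
# Minimal sketch of metric-based vulnerable-function prediction. In practice
# X would hold per-function software metrics (size, complexity, coupling, ...)
# and y would flag functions tied to reported vulnerabilities; synthetic data
# stands in for both here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=12, weights=[0.9],
                           random_state=0)            # imbalanced, like real data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```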
Technical debt is an analogy introduced in 1992 by Cunningham to explain how intentional decisions not to follow a gold standard or best practice, made to save time or effort while creating software, can later lead to a product of lower quality in terms of reliability, maintainability, or extensibility. Little work so far applies this analogy to cyber-physical (production) systems (CP(P)S), and only little work uses it for security-related issues. This work aims to fill that gap: we want to find out which security-related symptoms in the field of cyber-physical production systems can be traced back to technical debt (TD) items across all phases, from requirements and design down to maintenance and operation. The work supports experts from the field as a first step in exploring the relationship between not following security best practices and the concrete increase in costs that results from the accumulated TD.
Trust is becoming increasingly important in the software domain. Because it is a complex, composite concept, measuring it poses great challenges, especially given today's dynamic and constantly changing internet technology. Measuring software trustworthiness correctly and effectively also plays a significant role in gaining users' trust when they choose between different software products. In the context of security, trust has previously been measured from vulnerability occurrence times, predicting either the total number of vulnerabilities or when future vulnerabilities will occur. In this study, we propose a new unified index, the "loss speed index," that integrates the most important variables of software security, such as vulnerability occurrence time, number, and severity loss, and use it to evaluate overall software trustworthiness. Based on this definition, a software trustworthy security growth model (STSGM) is proposed. The paper also addresses vulnerability severity by proposing a severity prediction model whose results are fed into the STSGM to estimate the future loss speed index. Our work has several features: (1) it predicts the severity/type of future vulnerabilities; (2) unlike traditional evaluation methods such as expert scoring, it uses historical data to predict the future loss speed of software; and (3) the loss metric value is used to evaluate the risk associated with different software, which has a direct impact on software trustworthiness. Experiments are performed on real software vulnerability datasets, and the results are analyzed to check the correctness and effectiveness of the proposed model.
How to evaluate software reliability from the historical data of embedded software projects is a problem we have to face in practical engineering, so we establish a software reliability evaluation model based on code metrics. This evaluation technique requires aggregating software code metrics into project-level metrics. Statistical-value methods, metric-distribution methods, and econometric methods are the commonly used aggregation methods. Our concerns are how these methods differ in the software reliability evaluation process and which of them can improve the accuracy of the reliability assessment model we have established. With these concerns in view, we conduct an empirical study of code-metric aggregation methods on actual projects. We identify the distributions of the code metrics for the projects under study and, using these distribution laws, optimize the aggregation of code metrics and improve the accuracy of the software reliability evaluation model.
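To make the contrast between aggregation families concrete, here is a small sketch, with invented per-file cyclomatic-complexity values, comparing a plain statistical-value aggregation (the mean) with one econometric-style aggregation (the Gini inequality index); the paper's actual metrics and chosen aggregators may differ.

```python
# Sketch of two metric-aggregation styles mentioned in the abstract: a plain
# statistical value (mean) versus an econometric inequality index (Gini),
# both applied to per-file cyclomatic complexity (hypothetical data).
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative metric distribution."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

complexity = [1, 2, 2, 3, 5, 8, 21, 34]   # per-file cyclomatic complexity
project_mean = np.mean(complexity)        # statistical-value aggregation
project_gini = gini(complexity)           # econometric aggregation
print(project_mean, project_gini)
```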
The paper describes a modification of the ATA (Attack Tree Analysis) technique, called AvTA (Availability Tree Analysis), for assessing the dependability (reliability, availability and cyber security) of instrumentation and control systems (ICS). The FMEA, FMECA and IMECA techniques applied to carry out preliminary semi-formal, criticality-oriented analysis before the AvTA-based assessment are described. AvTA models combine reliability and cyber security subtrees, considering the probabilities of ICS recovery after hardware (physical) and software (design) failures and after attacks on components that cause failures. Successful recovery events (SREs) prevent the corresponding failures in the tree, which are combined through OR gates, provided the probability of an SRE within the assumed time exceeds the required value. A case study of AvTA-based dependability assessment (model, availability function, and decision-making technology for choosing component and system parameters) for a smart building ICS (building automation system, BAS) is discussed.
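A toy illustration of how an OR gate combines unresolved failure causes in an availability-tree style model follows; the structure and the probabilities are hypothetical and much simpler than the AvTA case study.

```python
# Toy OR-gate combination: a component outage occurs if any cause (hardware
# fault, software fault, successful attack) occurs AND its recovery does not
# succeed in time. Numbers and structure are invented, not the paper's values.
def or_gate(probabilities):
    """P(at least one input event occurs), assuming independent events."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

p_hw, p_sw, p_attack = 1e-4, 5e-4, 2e-3          # per-demand cause probabilities
p_recovery = {"hw": 0.90, "sw": 0.95, "attack": 0.80}
unresolved = [p_hw * (1 - p_recovery["hw"]),
              p_sw * (1 - p_recovery["sw"]),
              p_attack * (1 - p_recovery["attack"])]
print("outage probability:", or_gate(unresolved))
```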
Static application security testing (SAST) detects vulnerability warnings through static program analysis. Fixing these warnings tremendously improves software quality. However, SAST has not been fully utilized by developers for several reasons: the difficulty of handling a large number of reported warnings, a high rate of false warnings, and a lack of guidance in fixing the reported warnings. In this paper, we collaborated with security experts from a commercial SAST product and propose a set of approaches (Priv) to help developers better utilize SAST techniques. First, Priv identifies preferred fix locations for the detected vulnerability warnings and groups the warnings by common fix location. Priv also leverages visualization techniques so that developers can quickly investigate the warnings in groups and prioritize their quality-assurance effort. Second, Priv identifies actionable vulnerability warnings by removing SAST-specific false positives. Finally, Priv provides customized fix suggestions for vulnerability warnings. Our evaluation of Priv on six web applications highlights its accuracy and effectiveness. For 75.3% of the vulnerability warnings, the preferred fix locations found by Priv are identical to the ones annotated by security experts. The visualization based on shared preferred fix locations is useful for prioritizing quality-assurance efforts. Priv reduces the rate of SAST-specific false positives significantly. Finally, Priv provides complete and correct fix suggestions for 75.6% of the evaluated warnings. Priv is well received by security experts, and some features are already integrated into industrial practice.
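A tiny sketch of the grouping idea described above: warnings that share a preferred fix location are bundled so they can be reviewed and fixed together. The warning records and the fix-location field are invented, not Priv's actual data model.

```python
# Group vulnerability warnings by a (hypothetical) preferred fix location so
# that one review/fix can resolve the whole group.
from collections import defaultdict

warnings = [
    {"id": 1, "type": "XSS",  "fix_location": "util/Escaper.java:encode"},
    {"id": 2, "type": "XSS",  "fix_location": "util/Escaper.java:encode"},
    {"id": 3, "type": "SQLi", "fix_location": "db/Query.java:bind"},
]
groups = defaultdict(list)
for w in warnings:
    groups[w["fix_location"]].append(w["id"])
print(dict(groups))   # fixing one location addresses every warning in its group
```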
Exploits based on ROP (Return-Oriented Programming) are increasingly present in advanced attack scenarios, and testing systems for ROP-based attacks can be valuable for improving the security and reliability of software. In this paper, we propose ROPMATE, the first Visual Analytics system specifically designed to assist human red-team ROP exploit builders. Previous ROP tools, in contrast, typically require users to inspect a puzzle of hundreds or thousands of lines of textual information, making exploit construction a daunting task. ROPMATE presents builders with a clear interface of well-defined and semantically meaningful gadgets, i.e., fragments of code already present in the binary application that can be chained to form fully functional exploits. The system supports incrementally building exploits by suggesting gadget candidates filtered according to constraints on preserved registers and accessed memory. Several visual aids are offered to identify suitable gadgets and assemble them into semantically correct chains. We report on a preliminary user study that shows how ROPMATE can assist users in building ROP chains.
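The gadget-filtering step can be pictured with a minimal sketch like the following, where candidate gadgets that clobber registers the partial chain needs preserved are discarded; the gadget records and fields are invented and do not reflect ROPMATE's internal representation.

```python
# Keep only gadgets that do not clobber registers the exploit chain built so
# far relies on (hypothetical gadget list).
gadgets = [
    {"addr": 0x401234, "asm": "pop rdi; ret",            "clobbers": {"rdi"}},
    {"addr": 0x401260, "asm": "mov rax, rdx; ret",        "clobbers": {"rax"}},
    {"addr": 0x4012a0, "asm": "add rsp, 8; pop rbx; ret", "clobbers": {"rbx"}},
]
preserved = {"rbx"}   # registers that must keep their current values
candidates = [g for g in gadgets if not (g["clobbers"] & preserved)]
for g in candidates:
    print(hex(g["addr"]), g["asm"])
```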
Nowadays, honeypots are a key tool for attracting attackers and studying their activity. They help us evaluate attackers' behaviour, discover new types of attacks, and collect information and statistics associated with them. However, the gathered data cannot be interpreted directly; it must be analyzed to obtain useful information. In this paper, we present an SSH honeypot-based system designed to simulate a vulnerable server, and we propose an approach for classifying metrics computed from the data collected by the honeypot over 19 months.
Predicting software reliability has become a very significant problem today. Many new software products are introduced to the market regularly, and some of them fail because they are very hard to use or manage, and many intelligent techniques are already being used to predict software reliability. In this paper we give a practical understanding of these techniques and of how they differ from our new method. Prediction models built on one dataset show a significant decrease in accuracy when they are used on new data. This review therefore targets software engineering problems of practical importance, such as software development/cost estimation and software reliability prediction, and the development of computational tools with enhanced power, scalability, and flexibility that can interact more effectively with people.
Vulnerability, a buzzword of modern times, is one of the most important concerns for software and operating systems. Because loopholes and incompleteness are introduced during the development phase, software always carries vulnerabilities that can surface at any time. Detecting a vulnerability is one thing; predicting when it will appear is another. If we can anticipate the vulnerabilities of a piece of software over time, this acts as an early warning for developers to build sounder, improved software the next time. This proposal describes an implementation of the idea using an artificial neural network, where different datasets are given as input for further analysis. Models already exist for studying vulnerabilities in software and networks; in addition to that work, this paper focuses on the predictability of vulnerabilities over time.
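The abstract does not specify the network architecture or datasets, so the following is only a plausible sketch: a small neural network that predicts the next period's vulnerability count from a sliding window of past monthly counts, with random placeholder data in place of a real vulnerability feed.

```python
# Rough sketch of the idea (not the paper's architecture): a small neural
# network predicting next month's vulnerability count from a sliding window
# of previous monthly counts.
import numpy as np
from sklearn.neural_network import MLPRegressor

def windows(series, width=6):
    X = [series[i:i + width] for i in range(len(series) - width)]
    y = [series[i + width] for i in range(len(series) - width)]
    return np.array(X), np.array(y)

monthly_counts = np.random.poisson(12, size=120)   # placeholder data
X, y = windows(monthly_counts)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0).fit(X, y)
next_estimate = model.predict(monthly_counts[-6:].reshape(1, -1))
print("estimated count for next month:", next_estimate[0])
```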
Organizations face the issue of how to best allocate their security resources, so they need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach clusters vulnerabilities by leveraging the text in vulnerability records and then models the mean value function of vulnerabilities while relaxing the monotonic-intensity-function assumption that is prevalent among studies using software reliability models (SRMs) and the nonhomogeneous Poisson process (NHPP). We applied our approach to the vulnerabilities of four OSs: Windows, Mac, IOS, and Linux. In terms of curve fitting and prediction capability, our results are more accurate in all cases analyzed than a power-law model without clustering drawn from a family of SRMs.
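As a sketch of the power-law baseline the authors compare against (their clustering step and relaxed intensity function are not reproduced here), one can fit a mean value function m(t) = a * t^b to cumulative vulnerability counts; the data below is synthetic.

```python
# Fit a power-law mean value function m(t) = a * t**b to cumulative
# vulnerability counts (synthetic data stands in for a real OS feed).
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    return a * np.power(t, b)

months = np.arange(1, 61)
cumulative = np.cumsum(np.random.poisson(8, size=60))   # placeholder counts
(a, b), _ = curve_fit(power_law, months, cumulative, p0=(1.0, 1.0))
forecast_6m = power_law(months[-1] + 6, a, b) - power_law(months[-1], a, b)
print("expected new vulnerabilities in next 6 months:", round(forecast_6m))
```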
This paper offers a new approach to modelling the effect of cyber-attacks on reliability of software used in industrial control applications. The model is based on the view that successful cyber-attacks introduce failure regions, which are not present in non-compromised software. The model is then extended to cover a fault tolerant architecture, such as the 1-out-of-2 software, popular for building industrial protection systems. The model is used to study the effectiveness of software maintenance policies such as patching and "cleansing" ("proactive recovery") under different adversary models ranging from independent attacks to sophisticated synchronized attacks on the channels. We demonstrate that the effect of attacks on reliability of diverse software significantly depends on the adversary model. Under synchronized attacks system reliability may be more than an order of magnitude worse than under independent attacks on the channels. These findings, although not surprising, highlight the importance of using an adequate adversary model in the assessment of how effective various cyber-security controls are.
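The abstract's headline finding can be illustrated with back-of-the-envelope arithmetic, not the paper's stochastic model: in a 1-out-of-2 system both channels must fail on the same demand, so whether attacks on the two channels are independent or synchronized changes the result by orders of magnitude. All numbers below are hypothetical.

```python
# 1-out-of-2 system: a demand fails only if both channels fail. Compare
# independent attacks on the channels with a single synchronized attack
# that compromises both channels at once (illustrative figures only).
p_compromise = 1e-3            # per-demand probability a channel is compromised
p_fail_given_compromise = 0.5  # hypothetical chance the attack-induced failure
                               # region is hit on this demand

p_channel = p_compromise * p_fail_given_compromise

# Independent attacks: each channel is compromised and fails independently.
p_system_independent = p_channel ** 2

# Synchronized attacks: one event compromises both channels simultaneously.
p_system_synchronized = p_compromise * p_fail_given_compromise ** 2

print("ratio:", p_system_synchronized / p_system_independent)  # >> 10x worse
```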
The implementation of RFID technology in computer systems gives access to quality information on location and object tracking in real time, thereby improving workflow and leading to safer, faster and better business decisions. This paper discusses quantitative quality indicators of a computer system supported by RFID technology applied to the monitoring of objects (pallets, packages and people) marked with RFID tags. The results of the analysis of these quantitative quality indicators are presented in tables.
This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing on a simulated hardware platform with results from testing on the real platform. This addresses a serious practical limitation of the technique known as statistical testing. The limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for a protection system or to demonstrate any (desirable) reliability level for continuously operating ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
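The scale of the 'time problem' can be shown with standard statistical-testing arithmetic (not a result specific to this paper): the number of failure-free tests needed to support a failure-probability bound p with confidence C grows roughly as 1/p.

```python
# Failure-free demands required to claim P(failure per demand) <= p_bound
# with a given confidence, under the usual independence assumptions.
import math

def tests_needed(p_bound, confidence=0.99):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_bound))

for p in (1e-3, 1e-5, 1e-7):
    print(p, tests_needed(p))   # roughly 4.6e3, 4.6e5, 4.6e7 demands
```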
Secure by design is an approach to developing secure software systems from the ground up. In this approach, alternative security tactics are first considered; the best among them are selected and enforced by the architecture design and then used as guiding principles for developers. Design flaws in the architecture of a software system therefore mean that successful attacks could have enormous consequences, so secure by design shifts the main focus of software assurance from finding security bugs to identifying architectural flaws in the design. Current research in software security has neglected vulnerabilities caused by flaws in a software architecture design and/or by deterioration of the implementation of the architectural decisions. In this paper, we present the Common Architectural Weakness Enumeration (CAWE), a catalog that enumerates common types of vulnerabilities rooted in the architecture of a software system and provides mitigation techniques to address them. The CAWE catalog organizes the architectural flaws according to known security tactics. We developed an interactive web-based solution that helps designers and developers explore this catalog based on the architectural choices made in their project. The CAWE catalog contains 224 weaknesses related to security architecture. Through this catalog, we aim to promote awareness of security architectural flaws and stimulate the security design thinking of developers, software engineers, and architects.
Security cases, which document the rationale for believing that a system is adequately secure, have not been used widely, owing to the lack of a practical construction method. This paper presents a hierarchical software security case development method to address this issue. We first present a security concept relationship model, then propose a hierarchical asset-threat-control-measure argumentation strategy, together with an asset classification and a threat classification for software security cases. Finally, we propose 11 software security case patterns and illustrate one of them.
Customers need to know how reliable a new release is and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy: we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enable us to better communicate key release information to internal stakeholders and customers without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines a 'maturity assessment' component (estimating reliability prior to release to the field) and a 'reliability assessment' component (gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.
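The usage-normalized field metrics mentioned above can be illustrated with a trivial sketch; the releases, counts, and usage figures below are invented, and the point is only that comparisons are made per unit of usage rather than on raw counts.

```python
# Invented example of usage-normalized field metrics: compare releases on
# bug encounters and distinct bugs per unit of usage, not on raw counts.
releases = {
    "R1": {"encounters": 420, "distinct_bugs": 57, "usage": 12000},  # e.g. system-months
    "R2": {"encounters": 310, "distinct_bugs": 61, "usage": 18500},
}
for name, r in releases.items():
    scale = 1000.0 / r["usage"]                  # per 1,000 usage units
    print(name,
          round(r["encounters"] * scale, 2),     # encounters per 1k units
          round(r["distinct_bugs"] * scale, 2))  # distinct bugs per 1k units
```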
With Software Defined Networking (SDN), the control plane logic of forwarding devices, switches and routers, is extracted and moved to an entity called the SDN controller, which acts as a broker between the network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and to react to events coming from the physical network. Despite the huge impact the controller has on the performance of the network as a whole, a comprehensive study of its failure dynamics is still missing from the literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, as well as the effects of software ageing and the impact of software reliability growth on the transient behaviour.
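A far simpler stand-in for the paper's SAN model gives the flavour of a steady-state availability estimate: each failure mode is abstracted as a two-state (up/down) process with hypothetical failure and repair rates, and the rare-event approximation sums their unavailabilities.

```python
# Steady-state availability of a controller abstracted per failure mode as a
# two-state Markov process: unavailability = lambda / (lambda + mu).
# Rates are hypothetical, not the case-study parameters.
failure_modes = {                      # (failure rate /h, repair rate /h)
    "software_crash":     (1 / 2000,  1 / 0.1),
    "software_aging_hang": (1 / 8000,  1 / 0.5),
    "hardware_fault":     (1 / 50000, 1 / 8.0),
}
total_downtime_fraction = 0.0
for mode, (lam, mu) in failure_modes.items():
    unavailability = lam / (lam + mu)           # two-state steady-state result
    total_downtime_fraction += unavailability   # rare-event approximation
print("approx. steady-state availability:", 1 - total_downtime_fraction)
```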