Bibliography
We are witnessing huge growth in cyber-physical systems, which are autonomous, mobile, endowed with sensing, controlled by software, and often wirelessly connected and Internet-enabled. They include factory automation systems, robotic assistants, self-driving cars, and wearable and implantable devices. Since they are increasingly used in safety- or business-critical contexts, such as invasive treatment or biometric authentication, there is an urgent need for modelling and verification technologies that support the design process, and hence improve reliability and reduce production costs. This paper gives an overview of quantitative verification and synthesis techniques developed for cyber-physical systems, summarising recent achievements and future challenges in this important field.
The current evaluation of API recommendation systems focuses mainly on correctness, which is calculated by matching results against ground-truth APIs. However, this measurement can be misleading when a result contains more than one API. In practice, some APIs are used to implement basic functionalities (e.g., printing and log generation). These APIs can be invoked anywhere, and they may contribute less than functionally related APIs to the given requirements in a recommendation. To study the impact of correct-but-useless APIs, we use utility to measure them. Our study is conducted on more than 5,000 matched results generated by two specification-based API recommendation techniques. The results show that the matched APIs overlap heavily: 10% of the APIs compose more than 80% of the matched results. These 10% of APIs are all correct, but few of them are used to implement the required functionality. We further propose a heuristic approach to measure utility and conduct an online evaluation with 15 developers. Their reports confirm that matched results with higher utility scores usually require more programming effort than those with lower scores.
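The abstract above does not specify its utility heuristic; a minimal sketch of one plausible variant is an IDF-style score that down-weights APIs appearing in almost every matched result (the "correct-but-useless" ones such as print or log calls). All names and the weighting scheme here are assumptions, not the paper's actual method:

```python
# Hypothetical utility heuristic: weight each API by how rare it is across
# all matched results (IDF-style), then score a result by the mean weight
# of the APIs it contains. Ubiquitous APIs pull the score toward zero.
import math
from collections import Counter

def utility_scores(matched_results):
    """matched_results: list of lists of API names, one list per result."""
    n = len(matched_results)
    # In how many results does each API appear?
    doc_freq = Counter(api for result in matched_results for api in set(result))
    # APIs present in every result get weight log(n/n) = 0.
    weight = {api: math.log(n / df) for api, df in doc_freq.items()}
    return [sum(weight[a] for a in result) / len(result) if result else 0.0
            for result in matched_results]

results = [["print", "log", "HttpURLConnection.connect"],
           ["print", "log"],
           ["Cipher.doFinal", "log"]]
print(utility_scores(results))  # ubiquitous print/log drag scores down
```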
Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends on the selection of appropriate components that fit the application according to the customer's needs. It is a primary requirement to evaluate the quality of components before using them in the final software application. Not all quality characteristics are of equal significance for a particular software application in a specific domain. Therefore, it is necessary to identify only those characteristics/sub-characteristics that have higher importance than the others. The Analytic Network Process (ANP) is used to solve decision problems in which the attributes of decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the quality characteristics/sub-characteristics and to estimate a numeric value of software quality.
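To make the ANP step above concrete: in ANP, limit priorities are obtained from a column-stochastic supermatrix raised to a high power. A minimal sketch, with an invented 3x3 matrix of mutually influencing quality characteristics (not the paper's model or data):

```python
# ANP limit-priority sketch: for a primitive column-stochastic supermatrix W,
# W^k converges to a rank-one matrix whose columns give the priorities.
import numpy as np

# Invented influence weights among functionality, reliability, usability;
# each column sums to 1.
W = np.array([[0.0, 0.6, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.4, 0.0]])

limit = np.linalg.matrix_power(W, 50)  # converges for a primitive W
priorities = limit[:, 0]               # any column of the limit matrix
print(priorities)  # relative importance of the quality characteristics
```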
Predicting software reliability has become a very significant problem today. Many new software products are introduced to the market every day, and some of them fail because they are very hard to use or manage, and many intelligent techniques are already used to predict software reliability. In this paper we give a practical overview of these techniques and contrast them with our new approach. Notably, prediction models built on one dataset show a significant decrease in accuracy when they are applied to new data. This review targets software engineering problems of practical importance, such as software development/cost estimation and software reliability prediction, and also discusses developing computational tools with enhanced power, scalability, and flexibility that can interact more effectively with human beings.
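One classical family of models the abstract alludes to is the NHPP reliability-growth model; a minimal sketch fits the Goel-Okumoto mean value function m(t) = a(1 - e^{-bt}) on one dataset and scores it on another, illustrating the cross-dataset accuracy drop the paper reports. The failure counts below are invented example data:

```python
# Fit a Goel-Okumoto NHPP on dataset A, then measure error on dataset B.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative failures by time t.
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 11, dtype=float)
cum_failures_A = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23], float)
cum_failures_B = np.array([3, 7, 12, 18, 25, 31, 36, 40, 43, 45], float)

(a, b), _ = curve_fit(goel_okumoto, t, cum_failures_A, p0=(25.0, 0.3))
pred_B = goel_okumoto(t, a, b)
rmse_B = np.sqrt(np.mean((pred_B - cum_failures_B) ** 2))
print(f"fit on A: a={a:.1f}, b={b:.2f}; RMSE on new dataset B: {rmse_B:.1f}")
```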
The new generation of digital services is natively conceived as an ordered set of Virtual Network Functions, deployed across boundaries and organizations. In this context, security threats, variable network conditions, limited computational and memory capabilities, and software vulnerabilities may significantly weaken the whole service chain, making it very difficult to combat the newest kinds of attacks. It is thus extremely important to conceive a flexible (and standard-compliant) framework able to attest the trustworthiness and reliability of each single function of a Service Function Chain. At the time of this writing, and to the best of the authors' knowledge, the scientific literature has addressed these problems almost separately. To bridge this gap, this paper proposes a novel methodology, properly tailored to the ETSI NFV framework. On one side, Software-Defined Controllers continuously monitor the properties and performance indicators, taken from the networking domains, of each single Virtual Network Function available in the architecture. On the other side, a high-level orchestrator combines, on demand, suitable Virtual Network Functions into a Service Function Chain, based on user requests, targeted security requirements, and measured reliability levels. The paper concludes by further explaining the functionalities of the proposed architecture through a use case.
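A hedged sketch of the on-demand composition step described above: for each function in the requested chain, pick the candidate VNF instance that meets the security requirement and has the highest measured reliability, with serial chain reliability taken as the product of the parts. The candidate data, field layout, and threshold are assumptions, not the paper's orchestrator logic:

```python
# Greedy chain composition under a security threshold (illustrative only).
def compose_chain(request, candidates, min_security):
    """request: ordered function names; candidates: name -> list of
    (instance_id, security_level, reliability) tuples."""
    chain, chain_reliability = [], 1.0
    for fn in request:
        ok = [c for c in candidates[fn] if c[1] >= min_security]
        if not ok:
            raise ValueError(f"no trustworthy instance for {fn}")
        best = max(ok, key=lambda c: c[2])  # highest measured reliability
        chain.append(best[0])
        chain_reliability *= best[2]        # serial composition
    return chain, chain_reliability

candidates = {"firewall": [("fw-1", 3, 0.99), ("fw-2", 1, 0.999)],
              "dpi":      [("dpi-1", 2, 0.95)]}
print(compose_chain(["firewall", "dpi"], candidates, min_security=2))
```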
This paper describes a modification of the ATA (Attack Tree Analysis) technique, called AvTA (Availability Tree Analysis), for assessing the dependability (reliability, availability, and cyber security) of instrumentation and control systems (ICS). The FMEA, FMECA, and IMECA techniques, applied to carry out preliminary semi-formal, criticality-oriented analysis before the AvTA-based assessment, are described. AvTA models combine reliability and cyber security subtrees, considering the probabilities of ICS recovery in case of hardware (physical) and software (design) failures and of attacks on components causing failures. Successful recovery events (SREs) remove the corresponding failures from the tree via OR gates if the probability of an SRE within the assumed time exceeds the required level. A case study of AvTA-based dependability assessment (model, availability function, and decision-making technology for choosing component and system parameters) for a smart building ICS (Building Automation System, BAS) is discussed.
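To illustrate the gate arithmetic behind such a tree: leaf failure probabilities can be discounted by the probability of successful recovery, and OR/AND gates combine subtrees under an independence assumption. A minimal sketch with invented numbers, not the paper's AvTA model:

```python
# Toy availability-tree evaluation: recovery-discounted leaves, OR/AND gates.
def leaf(p_fail, p_recover):
    # A failure persists only if recovery does not succeed in time.
    return p_fail * (1.0 - p_recover)

def or_gate(*ps):   # subtree fails if any input fails
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def and_gate(*ps):  # subtree fails only if all inputs fail
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

hw  = leaf(p_fail=0.02, p_recover=0.9)   # hardware (physical) failure
sw  = leaf(p_fail=0.05, p_recover=0.8)   # software (design) failure
atk = leaf(p_fail=0.01, p_recover=0.5)   # attack causing a failure
unavailability = or_gate(hw, sw, atk)
print(f"availability ~ {1.0 - unavailability:.4f}")
```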
Secure by design is an approach to developing secure software systems from the ground up. In this approach, alternative security tactics are first considered; the best among them are selected, enforced by the architecture design, and then used as guiding principles for developers. Design flaws in the architecture of a software system thus mean that successful attacks could have enormous consequences. Therefore, secure by design shifts the main focus of software assurance from finding security bugs to identifying architectural flaws in the design. Current research in software security has neglected vulnerabilities caused by flaws in a software architecture design and/or by deterioration of the implementation of architectural decisions. In this paper, we present the Common Architectural Weakness Enumeration (CAWE), a catalog that enumerates common types of vulnerabilities rooted in the architecture of a software system and provides mitigation techniques to address them. The CAWE catalog organizes the architectural flaws according to known security tactics. We developed an interactive web-based solution that helps designers and developers explore this catalog based on the architectural choices made in their project. The CAWE catalog contains 224 weaknesses related to security architecture. Through this catalog, we aim to promote awareness of architectural security flaws and stimulate security design thinking among developers, software engineers, and architects.
With Software-Defined Networking (SDN), the control-plane logic of forwarding devices (switches and routers) is extracted and moved to an entity called the SDN controller, which acts as a broker between network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and to react to events coming from the physical network. Despite the huge impact a controller has on the network performance as a whole, a comprehensive study of its failure dynamics is still missing from the literature. The goal of this paper is to analyse, model, and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study, we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, and assess the effects of software ageing and the impact of software reliability growth on transient behaviour.
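A far simpler stand-in for the SAN model above, useful to see what "steady-state availability under multiple failure modes" means numerically: a three-state continuous-time Markov chain (up, software-failed, hardware-failed) solved for its stationary distribution. All rates are invented:

```python
# Steady-state availability of a controller with two failure modes (toy CTMC).
import numpy as np

lam_sw, lam_hw = 1 / 500.0, 1 / 5000.0   # failure rates (per hour)
mu_sw,  mu_hw  = 1 / 0.5,   1 / 8.0      # repair rates (per hour)

# Generator matrix Q; rows sum to zero. States: 0=up, 1=sw down, 2=hw down.
Q = np.array([[-(lam_sw + lam_hw), lam_sw, lam_hw],
              [mu_sw, -mu_sw, 0.0],
              [mu_hw, 0.0, -mu_hw]])

# Solve pi Q = 0 together with sum(pi) = 1 via least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"steady-state availability = {pi[0]:.6f}")
```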
Networked embedded systems (including IoT, CPS, etc.) are vulnerable. Even though we know how to secure these systems, their heterogeneity and the heterogeneity of security policies remain a major problem. Designers face ever more sophisticated attacks while not always being security experts, and they must find trade-offs among design criteria. In this paper we propose CLASA (Cross-Layer Agent Security Architecture), a generic, integrated, interoperable, decentralized, and modular architecture that relies on cross-layering.
Organizations face the issue of how best to allocate their security resources. Thus, they need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach consists of clustering vulnerabilities by leveraging the text information within vulnerability records, and then simulating the mean value function of vulnerabilities while relaxing the monotonic-intensity-function assumption that is prevalent among studies using software reliability models (SRMs) and the nonhomogeneous Poisson process (NHPP). We applied our approach to the vulnerabilities of four OSs: Windows, Mac, iOS, and Linux. For the OSs analyzed, in terms of both curve fitting and prediction capability, our results are more accurate in all cases than a power-law model without clustering drawn from a family of SRMs.
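For reference, the power-law baseline the authors compare against is an NHPP whose mean value function is m(t) = alpha * t^beta, with monotone intensity alpha*beta*t^(beta-1). A minimal fitting sketch with invented cumulative vulnerability counts (the paper's approach instead fits one model per text-derived cluster and sums them):

```python
# Fit the power-law (Duane-type) NHPP mean value function to monthly counts.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, alpha, beta):
    return alpha * np.power(t, beta)

months = np.arange(1, 25, dtype=float)
# Synthetic cumulative vulnerability counts with noise (invented data).
cum_vulns = 12 * np.power(months, 0.7) \
            + np.random.default_rng(0).normal(0, 3, 24)

(alpha, beta), _ = curve_fit(power_law, months, cum_vulns, p0=(10.0, 1.0))
print(f"m(t) = {alpha:.1f} * t^{beta:.2f}")
```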
The use of software to support the information infrastructure that governments, critical infrastructure providers, and businesses worldwide rely on for their daily operations and business processes is gradually becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. Meanwhile, cyber-attacks are becoming stealthier and more sophisticated, creating a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. This has made users increasingly concerned about the integrity, security, and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack, and protect the integrity of the products they sell. These efforts are often referred to as “software assurance.” Software assurance is becoming very important for organizations critical to public safety and economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something achieved only when software is created using best practices for secure software development. Therefore, in this paper we explore the need for information assurance and its importance for both organizations and end users, along with methodologies and best practices for software security and information assurance, and we conduct a survey to understand end users' opinions on the methodologies discussed in this paper and their impact.
Software components that are vulnerable to exploitation need to be identified and patched. Employing prevention techniques designed to detect vulnerable software components at early stages can significantly reduce the expense of the software testing process and thus help build a more reliable and robust software system. Although previous studies have demonstrated the effectiveness of adapting prediction techniques to vulnerability detection, the feasibility of those techniques is limited mainly by insufficient training data. This paper proposes a prediction technique targeting early identification of potentially vulnerable software components. In the proposed scheme, potentially vulnerable components are viewed as mislabeled data that may contain true but not-yet-observed vulnerabilities. The proposed hybrid technique combines the support vector machine algorithm with an ensemble learning strategy to better identify potentially vulnerable components. The scheme is evaluated on several Java Android applications. The results demonstrate that the proposed hybrid technique can identify potentially vulnerable classes with high precision and acceptable accuracy and recall.
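A hedged sketch of the hybrid idea above: a bagged ensemble of SVMs scores components, and components labeled "clean" that the ensemble confidently flags are treated as potentially mislabeled, i.e., possibly vulnerable. The features, labels, and threshold are assumptions for illustration, not the paper's pipeline (requires scikit-learn >= 1.2 for the `estimator` argument):

```python
# Bagged-SVM ensemble flagging potentially mislabeled (vulnerable) classes.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # e.g., code metrics per class
y = (X[:, 0] + X[:, 1] > 0.8).astype(int)    # 1 = known vulnerable (toy label)

model = BaggingClassifier(estimator=SVC(probability=True), n_estimators=25)
model.fit(X, y)

p_vuln = model.predict_proba(X)[:, 1]
# Components labeled clean but scored high are flagged as potentially
# vulnerable and prioritized for inspection.
suspects = np.where((y == 0) & (p_vuln > 0.7))[0]
print(f"{len(suspects)} potentially mislabeled (vulnerable) components")
```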