Bibliography
The relevance of data protection stems from the intensive informatization of all aspects of society and the need to prevent unauthorized access to information. Worldwide spending on information security (IS) currently amounts to about \$81.7 billion and is forecast to reach about \$105 billion by 2020 [1]. In the public sector, the protection of military facilities is the most critical task; in the private sector, financial organizations are among the leaders in spending on information protection. An example of the importance of IS research is the WannaCry ransomware Trojan, which infected hundreds of thousands of computers around the world, with attacks recorded in more than 116 countries. WannaCry (Wana Decryptor) spreads through a vulnerability in the Server Message Block service (the Windows protocol for network access to file systems). A rootkit (a set of malicious software) is then installed on the infected system, which the attackers use to launch the encryption program; each vulnerable computer on the same local network can subsequently be infected from an already compromised device. These attacks caused losses of about \$70,000 (according to data from 18.05.2017) [2]. The presented work assumes that software-level information protection is fundamentally insufficient to ensure the stable functioning of critical objects, because of the possible hardware implementation of undocumented instructions, discussed later. The complexity of computing systems and the degree of integration of their components are constantly growing; therefore, monitoring the operation of computer hardware, and in particular its data-processing methods, is necessary to achieve the maximum degree of protection.
In a transient distributed cloud computing environment, software is vulnerable to attacks that compromise its functional completeness, so functional testing is necessary. To address the high overhead and high complexity of unsupervised test methods, an intelligent evaluation method for transient-analysis software function testing based on an active deep learning algorithm is proposed. First, an active deep learning mathematical model of transient-analysis software function testing is constructed using an association rule mining method, and the correlation-dimension characteristics of software functional failures are analyzed. The reliability of the software is then measured by the spectral density distribution of its functional completeness. An intelligent evaluation model of transient-analysis software function testing is established in the transient distributed cloud computing environment, realizing both function testing and intelligent reliability evaluation. Finally, the performance of the method is verified by simulation experiments. The results show that the method localizes deficiencies in functional integrity with high accuracy and that the intelligent evaluation of function testing adapts well, ensuring safe and reliable operation of the software.
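The abstract gives no implementation details; as a rough illustration of the association rule mining step only, the following minimal Python sketch mines simple "antecedent -> consequent" rules from hypothetical attributes of failed test runs. All record names, attributes, and thresholds are assumptions for illustration, not taken from the paper.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transactions: each entry lists the attributes observed in one
# failed test run of the transient-analysis software (names are illustrative).
failure_records = [
    {"timeout", "high_load", "cloud_node_A"},
    {"timeout", "high_load"},
    {"crash", "cloud_node_A"},
    {"timeout", "high_load", "crash"},
]

min_support, min_confidence = 0.5, 0.7
n = len(failure_records)

# Count single attributes and attribute pairs across all failure records.
item_counts = Counter(item for rec in failure_records for item in rec)
pair_counts = Counter(pair for rec in failure_records
                      for pair in combinations(sorted(rec), 2))

# Emit rules whose support and confidence clear the assumed thresholds.
for (a, b), count in pair_counts.items():
    support = count / n
    if support < min_support:
        continue
    for antecedent, consequent in ((a, b), (b, a)):
        confidence = count / item_counts[antecedent]
        if confidence >= min_confidence:
            print(f"{antecedent} -> {consequent}  "
                  f"support={support:.2f} confidence={confidence:.2f}")
```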
A prioritized cyber defense remediation plan is critical for effective risk management in cyber-physical systems (CPS). The increased integration of Information Technology (IT) and Operational Technology (OT) in CPS has led to the need to identify the critical assets which, when affected, will impact resilience and safety. In this work, we propose a methodology for a prioritized cyber risk remediation plan that balances operational resilience and economic loss (safety impacts) in CPS. We present a platform for modeling and analyzing the effect of cyber threats and random system faults on the safety of CPS that could lead to catastrophic damage. We propose a data-driven attack graph and fault graph based model to characterize the exploitability and impact of threats in CPS. We develop an operational impact assessment to quantify the damages. Finally, we propose a strategic response decision capability that recommends optimal mitigation actions and policies, balancing the trade-off between operational resilience (Tactical Risk) and Strategic Risk.
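As a small illustration of the attack graph idea (not the authors' actual platform), the sketch below builds a hypothetical CPS attack graph with networkx and finds the most probable attack path by weighting each exploit edge with the negative log of an assumed success probability; all node names and probabilities are made up.

```python
import math
import networkx as nx

# Hypothetical attack graph for a small CPS: nodes are system states / assets,
# edges are exploits annotated with an assumed probability of success.
G = nx.DiGraph()
edges = [
    ("internet", "hmi_workstation", 0.6),      # phishing foothold (illustrative)
    ("hmi_workstation", "scada_server", 0.4),  # lateral movement
    ("internet", "vpn_gateway", 0.2),
    ("vpn_gateway", "scada_server", 0.7),
    ("scada_server", "plc_controller", 0.5),   # critical asset
]
for src, dst, p in edges:
    # -log(p) as edge weight: the shortest path is then the most probable attack path.
    G.add_edge(src, dst, weight=-math.log(p), prob=p)

path = nx.shortest_path(G, "internet", "plc_controller", weight="weight")
prob = math.prod(G[u][v]["prob"] for u, v in zip(path, path[1:]))
print("Most likely attack path:", " -> ".join(path),
      f"(success probability ~{prob:.2f})")
```

Such a path ranking is one way a remediation plan could prioritize which exploit edges to break first.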
Software security is a major concern for developers who intend to deliver reliable software. Although there is research focusing on vulnerability prediction and discovery, there is still a need for security-specific metrics that measure software security and vulnerability-proneness quantitatively. Existing methods are based either on software metrics (defined on the physical characteristics of code, e.g. complexity or lines of code), which are not security-specific, or on generic patterns known as nano-patterns (Java method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to generalize to arbitrary systems. In this paper, we envision an automated framework that will assist developers in assessing the security level of their code and guide them toward developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric, so that they can measure the software security level of source code (either a file or a function) with higher accuracy. We present our visionary approach through a series of three consecutive studies in which we (1) study the challenges of current software metrics and nano-patterns in vulnerability prediction, (2) redefine and characterize the nano-patterns and software metrics so that they capture security-specific properties of code and measure the security level quantitatively, and finally (3) implement an automated framework that extracts the values of all patterns and metrics for a given code segment and flags the estimated security level as feedback based on our research results. We conducted preliminary experiments and present results indicating that our vision can be practically implemented and will have valuable implications for the software security community.
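A minimal sketch of the metric extraction idea (assumed, not the authors' actual nano-patterns or metric suite): the snippet below uses Python's standard `ast` module to pull a few crude method-level counts, lines of code, branch count, and calls to risky built-ins, from a toy function. The toy source and the "risky call" list are invented for illustration.

```python
import ast

# Toy source code to analyze (illustrative only).
SOURCE = '''
def load_config(path):
    with open(path) as f:
        data = f.read()
    if "password" in data:
        return eval(data)   # risky pattern for illustration
    return data
'''

def method_metrics(source):
    """Yield simple per-function counts that could feed a security-centric model."""
    tree = ast.parse(source)
    for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        nodes = list(ast.walk(func))
        branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try)) for n in nodes)
        calls = [n.func.id for n in nodes
                 if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
        yield {
            "function": func.name,
            "loc": func.end_lineno - func.lineno + 1,   # lines of code
            "branch_count": branches,                   # crude complexity proxy
            "risky_calls": [c for c in calls if c in {"eval", "exec", "input"}],
        }

for metrics in method_metrics(SOURCE):
    print(metrics)
```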
A significant milestone is reached when the field of software vulnerability research matures to a point warranting related security patterns represented by intelligent data. A substantial body of empirical findings, a distinctive taxonomy, theoretical models, and a set of novel or adapted detection methods justify a unifying research map. The growing interest in software vulnerabilities is evident from the large number of works produced over the last several decades. This article briefly reviews research on vulnerability enumeration, taxonomy, models, and detection methods from the perspective of intelligent data processing and analysis. It also draws a map of the specific characteristics and challenges of vulnerability research, such as vulnerability pattern representation and problem-solving strategies.
In this paper we conduct an empirical study to identify common software weaknesses of embedded devices used as part of industrial control systems in power grids. Data were gathered on the devices and software of six companies: ABB, General Electric, Schneider Electric, Schweitzer Engineering Laboratories, Siemens, and Wind River. The study uses data from the manufacturers' online databases, NVD, CWE, and ICS-CERT. We identified that the most commonly reported problems are related to improper input validation, cryptographic issues, and programming errors.
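A trivial sketch of the counting step described above, with made-up advisory records standing in for the NVD/CWE/ICS-CERT data:

```python
from collections import Counter

# Illustrative advisory records (invented for the sketch); the study draws the
# real entries from NVD, CWE and ICS-CERT for the six vendors.
advisories = [
    {"vendor": "ABB", "cwe": "CWE-20"},                # improper input validation
    {"vendor": "Siemens", "cwe": "CWE-20"},
    {"vendor": "General Electric", "cwe": "CWE-310"},  # cryptographic issues
    {"vendor": "Schneider Electric", "cwe": "CWE-20"},
    {"vendor": "Wind River", "cwe": "CWE-119"},        # memory-safety error
]

# Rank weakness classes by how often they appear across vendors.
for cwe, n in Counter(a["cwe"] for a in advisories).most_common():
    print(f"{cwe}: {n} advisories")
```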
Software vulnerabilities are closely associated with information systems security, a major and critical field in today's technology. Vulnerabilities constitute a constant and increasing threat to many aspects of everyday life, especially safety and the economy, since the social impact of the problems they cause is complicated and often unpredictable. Although an entire research branch in software engineering deals with the identification and elimination of vulnerabilities, the growing complexity of software products and the variability of software production procedures contribute to the ongoing occurrence of vulnerabilities. Hence, another area being developed in parallel focuses on the study and management of vulnerabilities that have already been reported and registered in databases. The information contained in such databases includes a textual description and a number of metrics related to each vulnerability. The purpose of this paper is to investigate to what extent the severity of a vulnerability can be inferred directly from its textual description, in other words, to examine the informative power of the description with respect to vulnerability severity. For this purpose, text mining techniques, i.e. text analysis and three different classification methods (decision trees, neural networks, and support vector machines), were employed. Applying text mining to a sample of 70,678 vulnerabilities from a public data source shows that the description itself is a reliable and highly accurate source of information for vulnerability prioritization.
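To make the described pipeline concrete, a minimal scikit-learn sketch is shown below: TF-IDF features feed each of the three classifier families named in the abstract. The corpus and severity labels are tiny, made-up examples rather than the paper's 70,678-entry data set, and the concrete estimator classes are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Tiny illustrative corpus; a real study would use tens of thousands of NVD entries.
descriptions = [
    "Buffer overflow allows remote attackers to execute arbitrary code",
    "Improper input validation allows denial of service via crafted packet",
    "Cross-site scripting in the admin page allows injection of web script",
    "Stack-based overflow lets attackers run arbitrary code with a long string",
]
severity = ["HIGH", "MEDIUM", "MEDIUM", "HIGH"]  # assumed labels, e.g. from CVSS

# One pipeline per classifier family: decision tree, neural network, SVM,
# all trained on TF-IDF features of the vulnerability descriptions.
for clf in (DecisionTreeClassifier(), MLPClassifier(max_iter=500), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(descriptions, severity)
    pred = model.predict(["Heap overflow enables remote code execution"])
    print(type(clf).__name__, "->", pred[0])
```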
Vulnerability, a buzzword of modern times, is one of the most important concepts related to software and operating systems. Because software is developed continually, loopholes and incompleteness introduced in the development phase leave vulnerabilities that can surface at any time. Detecting a vulnerability is one thing; predicting when it will occur is another. If we know how the vulnerabilities of a software product will evolve over time, this acts as an early warning for developers to build sounder, improved software the next time. This proposal concerns the implementation of the idea using an artificial neural network, where different data sets are given as input for further analysis toward successful results. While existing models study vulnerabilities in software and networks, this proposal, in addition to the current work, focuses on the predictability of vulnerabilities over time.
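A minimal sketch of such a time-based predictor, using a small feed-forward network from scikit-learn as a stand-in for the paper's unspecified ANN; the monthly vulnerability counts and window size are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative monthly vulnerability counts for one software product (made up).
counts = np.array([3, 5, 4, 6, 8, 7, 9, 11, 10, 12, 14, 13], dtype=float)

# Turn the series into supervised pairs: the last `window` months predict the next one.
window = 3
X = np.array([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]

# Small feed-forward network as the vulnerability-count predictor.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

next_month = model.predict(counts[-window:].reshape(1, -1))
print(f"Predicted vulnerabilities next month: {next_month[0]:.1f}")
```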