Biblio
The use of software to support the information infrastructure that governments, critical infrastructure providers, and businesses worldwide rely on for their daily operations and business processes is becoming unavoidable. Commercial off-the-shelf software is widely and increasingly used by these organizations to automate processes with information technology. At the same time, cyber-attacks are becoming stealthier and more sophisticated, creating a complex and dynamic risk environment for IT-based operations that users are working to better understand and manage. As a result, users have become increasingly concerned about the integrity, security, and reliability of commercial software. To address these concerns and meet customer requirements, vendors have undertaken significant efforts to reduce vulnerabilities, improve resistance to attack, and protect the integrity of the products they sell. These efforts are often referred to as “software assurance.” Software assurance is becoming very important for organizations critical to public safety and to economic and national security. These users require a high level of confidence that commercial software is as secure as possible, something achieved only when software is created using best practices for secure software development. In this paper, we therefore explore the need for information assurance and its importance for both organizations and end users, together with methodologies and best practices for software security and information assurance, and we present a survey we conducted to understand end users’ opinions on the methodologies examined in this paper and their impact.
Static analysis is a general name for various methods of examining a program without actually executing it. In particular, it is widely used to discover errors and vulnerabilities in software. Taint analysis usually denotes the process of checking the flow of user-provided data through the program in order to find potential vulnerabilities; it can be performed either statically or dynamically. In this paper we evaluate several improvements to the static taint analyzer Irbis [1], which is based on a special case of the interprocedural graph-reachability problem, the so-called IFDS (Interprocedural Finite Distributive Subset) problem originally proposed by Reps et al. [2]. The analyzer is currently being developed at the Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS). The evaluation is based on several real projects with known vulnerabilities and a subset of the Juliet Test Suite for C/C++ [3]. The chosen subset consists of more than 5,000 tests covering 11 different CWEs.
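As a hedged illustration only, the toy Python sketch below (a made-up intermediate representation, not the actual Irbis implementation) shows the core idea behind taint analysis: facts about user-controlled data are propagated forward until they reach a sensitive sink. An IFDS-based analyzer such as Irbis tracks the same kind of facts interprocedurally by solving a graph-reachability problem.

```python
# Minimal sketch of taint propagation on a toy three-address program
# (illustrative only; not the Irbis IR or algorithm).

# Statement forms: (op, target, operands)
#   'source' -> target receives user-controlled data
#   'assign' -> target copies taint from its operands
#   'sink'   -> operands must be untainted (e.g., passed to system())
program = [
    ("source", "buf",  []),        # buf = read_user_input()
    ("assign", "cmd",  ["buf"]),   # cmd = buf
    ("assign", "note", ["msg"]),   # note = msg (untainted)
    ("sink",   None,   ["cmd"]),   # system(cmd)  <- potential injection
]

def find_taint_violations(stmts):
    """Propagate taint facts forward and report tainted data reaching a sink."""
    tainted, violations = set(), []
    for idx, (op, target, operands) in enumerate(stmts):
        if op == "source":
            tainted.add(target)
        elif op == "assign" and any(v in tainted for v in operands):
            tainted.add(target)
        elif op == "sink":
            violations.extend((idx, v) for v in operands if v in tainted)
    return violations

if __name__ == "__main__":
    for idx, var in find_taint_violations(program):
        print(f"statement {idx}: tainted variable '{var}' reaches a sink")
```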
Security is an integral aspect of the development of quality object-oriented software. That process places a continual burden on developers, who must maximize development productivity while reducing the expense and time invested in security. In this paper, we analyze metrics for object-oriented software in order to evaluate and identify the relation between metric values and the security of the software. These relations were identified by studying software vulnerabilities together with code-level metrics. Using the OWASP classification of vulnerabilities and our experimental results, we show that there is a relation between metric values and possible security issues in software. For the experimental code analysis, we developed a dedicated tool called SOFTMET.
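SOFTMET itself is not described here; purely as an assumed illustration, the sketch below shows the kind of method-level, code-level metrics such a tool might extract. It uses Python's ast module, and the source snippet and metric choices are hypothetical.

```python
# Illustrative (hypothetical) extraction of simple code-level metrics,
# the kind of values a tool like SOFTMET could correlate with vulnerabilities.
import ast

SOURCE = """
def transfer(account, amount, auth_token):
    if auth_token is None:
        raise PermissionError("missing token")
    if amount <= 0:
        raise ValueError("bad amount")
    account.balance -= amount
    return account.balance
"""

def method_metrics(source: str):
    """Return per-function metrics: parameter count, statement count, branches."""
    metrics = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                           for n in ast.walk(node))
            statements = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            metrics[node.name] = {
                "params": len(node.args.args),
                "statements": statements,
                "branches": branches,   # crude complexity proxy
            }
    return metrics

if __name__ == "__main__":
    print(method_metrics(SOURCE))
```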
The emergence of the Internet of Things (IoT) emphasizes the need for security in network-connected devices. Two types of services in IoT devices are easily exploited by attackers: services with weak authentication (e.g., SSH/Telnet) and services exploitable through command injection. Based on this observation, we propose IoTCMal, a hybrid IoT honeypot framework for capturing more comprehensive malicious samples aimed at IoT devices. The key novelty of IoTCMal is three-fold: (i) it provides a high-interaction component exposing common vulnerable services of real IoT devices by means of a traffic-forwarding technique; (ii) it also contains a low-interaction component offering Telnet/SSH services that runs in a virtual environment; (iii) distinct from traditional low-interaction IoT honeypots [1], which only analyze the family categories of malicious samples, IoTCMal primarily focuses on homology analysis of malicious samples. We deployed IoTCMal on 36 VPS instances distributed across 13 cities in 6 countries. By analyzing the malware binaries captured by IoTCMal, we discovered 8 malware families controlled by at least 11 groups of attackers, which mainly launched DDoS attacks and digital-currency mining. About 60% of the captured malicious samples ran on ARM or MIPS architectures, which are widely used in IoT devices.
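As an assumed illustration of the low-interaction component only (not IoTCMal's actual code; the bind port and log format are hypothetical), a minimal Telnet-style credential-capturing honeypot can be sketched as follows. The high-interaction component would instead forward such connections to vulnerable services on real devices.

```python
# Minimal sketch of a low-interaction Telnet honeypot component
# (illustrative only; port 2323 and the log format are assumptions).
import datetime
import socket
import threading

BIND_ADDR = ("0.0.0.0", 2323)

def handle_client(conn, addr):
    """Present a fake login prompt and log whatever credentials are offered."""
    try:
        conn.sendall(b"login: ")
        user = conn.recv(256).strip()
        conn.sendall(b"Password: ")
        password = conn.recv(256).strip()
        with open("telnet_attempts.log", "ab") as log:
            stamp = datetime.datetime.utcnow().isoformat().encode()
            log.write(b"%s %s %s %s\n" % (stamp, addr[0].encode(), user, password))
        conn.sendall(b"Login incorrect\n")
    finally:
        conn.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(BIND_ADDR)
    srv.listen(64)
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    main()
```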
In 2018, several malware campaigns (e.g., VPN-Filter, Navidade, and SonarDNS) targeted and succeeded in infecting millions of low-cost routers. These routers were then used for all sorts of cybercrime, from DDoS attacks to ransomware. MikroTik routers are a peculiar example of low-cost routers: they provide last-mile access to home users and are also used in core network infrastructure. Half of the core routers used in one of the biggest Internet exchanges in the world are MikroTik devices. The problem is that the vulnerable firmware (RouterOS) running in home users' houses is also used in core networks. In this paper, we are the first to quantify the problem that infecting MikroTik devices would pose to the Internet. Based on more than 4 TB of data, we reveal more than 4 million MikroTik devices in the world. We then propose an easy-to-deploy MikroTik honeypot and collect more than 17 million packets over 45 days from sensors deployed in Australia, Brazil, China, India, the Netherlands, and the United States. Finally, we use the data collected by our honeypots to automatically classify and assess attacks tailored to MikroTik devices. All our source code and analyses are publicly available. We believe that our honeypots and the findings in this paper foster security improvements in MikroTik devices worldwide.
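The paper's classification pipeline is not reproduced here; the sketch below is only an assumed illustration of coarse, port-based labeling of captured traffic aimed at well-known RouterOS services (Winbox on 8291, the RouterOS API on 8728/8729), with sample addresses that are hypothetical.

```python
# Hypothetical sketch: coarse port-based labeling of honeypot traffic
# aimed at MikroTik/RouterOS services (not the paper's classifier).
from collections import Counter

# Well-known RouterOS service ports.
MIKROTIK_PORTS = {
    8291: "winbox",            # Winbox management
    8728: "routeros-api",      # RouterOS API
    8729: "routeros-api-ssl",
    2000: "bandwidth-test",
    23:   "telnet",
    21:   "ftp",
    80:   "www",
}

def label_packets(packets):
    """packets: iterable of (src_ip, dst_port) tuples captured by a sensor."""
    counts = Counter()
    for _src, dst_port in packets:
        counts[MIKROTIK_PORTS.get(dst_port, "other")] += 1
    return counts

if __name__ == "__main__":
    sample = [("203.0.113.7", 8291), ("198.51.100.2", 23), ("203.0.113.7", 4444)]
    print(label_packets(sample))   # Counter({'winbox': 1, 'telnet': 1, 'other': 1})
```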
The purpose of this work is to analyze the security model of a robotized system, to review approaches to assessing the security of such a system, and to develop our own framework. Solving this problem involves the use of the developed framework, and the analysis is conducted on a robotic system. The proposed structure assumes that the robotic system is divided into levels, after which each level must be protected directly. Each level has its own characteristics and weaknesses that must be considered when developing a security system for a robotic system.
Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they are not granular enough to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can predict vulnerable methods better than software metrics, they can be used to develop vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction and compares them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features, applied the same techniques using method-level software metrics as features, and compared their performance. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest that developers use nano-patterns as features for vulnerability prediction to augment existing approaches, as these code constructs outperform standard metrics in terms of prediction recall.
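As an assumed illustration of this kind of setup (not the authors' exact pipeline; the features and labels below are synthetic), nano-patterns can be encoded as binary method-level features and fed to a standard classifier, after which precision and recall are compared against a metrics-based model trained the same way.

```python
# Assumed sketch: predicting vulnerable methods from binary nano-pattern
# features with a standard classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Each row: one method; each column: presence (1) or absence (0) of a
# nano-pattern (e.g., ObjectCreator, FieldWriter, Looping, ...).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 17))   # 17 nano-pattern features (toy data)
y = rng.integers(0, 2, size=500)         # 1 = vulnerable method (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```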