Biblio

Filters: Keyword is software bugs
2019-11-12
Zhang, Xian, Ben, Kerong, Zeng, Jie.  2018.  Cross-Entropy: A New Metric for Software Defect Prediction. 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). :111–122.

Defect prediction is an active topic in software quality assurance that can help developers find potential bugs and make better use of resources. To improve prediction performance, this paper introduces cross-entropy, a common measure for natural language, as a new code metric for defect prediction tasks and proposes a framework called DefectLearner for this process. We first build a recurrent neural network language model to learn regularities in source code from a software repository. Based on the trained model, the cross-entropy of each component can be calculated. To evaluate its discrimination for defect-proneness, cross-entropy is compared with 20 widely used metrics on 12 open-source projects. The experimental results show that the cross-entropy metric is more discriminative than 50% of the traditional metrics. In addition, we combine cross-entropy with traditional metric suites for more accurate defect prediction. With cross-entropy added, the performance of prediction models improves by an average of 2.8% in F1-score.
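
As a rough illustration of the metric (not the paper's DefectLearner pipeline, which trains a recurrent neural network language model; a smoothed bigram model stands in here), the sketch below computes cross-entropy as the average negative log2-probability per token of a component under a model trained on repository code. The toy corpus and all identifiers are invented for the example.

    import math
    from collections import Counter, defaultdict

    def train_bigram_model(corpus_tokens):
        """Count unigram and bigram frequencies over a token stream."""
        unigrams = Counter(corpus_tokens)
        bigrams = defaultdict(Counter)
        for prev, curr in zip(corpus_tokens, corpus_tokens[1:]):
            bigrams[prev][curr] += 1
        return unigrams, bigrams

    def cross_entropy(tokens, unigrams, bigrams, vocab_size):
        """Average -log2 p(token | previous token), with add-one smoothing."""
        total = 0.0
        for prev, curr in zip(tokens, tokens[1:]):
            p = (bigrams[prev][curr] + 1) / (unigrams[prev] + vocab_size)
            total += -math.log2(p)
        return total / max(len(tokens) - 1, 1)

    # Toy usage: tokenized repository code trains the model; a component's
    # token stream is then scored. A higher cross-entropy means the component
    # looks less like typical code in the repository.
    repo = "if x > 0 : return x else : return 0".split()
    unigrams, bigrams = train_bigram_model(repo)
    component = "if y > 0 : return y".split()
    print(cross_entropy(component, unigrams, bigrams, vocab_size=len(unigrams)))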

2019-02-14
Peng, H., Shoshitaishvili, Y., Payer, M.  2018.  T-Fuzz: Fuzzing by Program Transformation. 2018 IEEE Symposium on Security and Privacy (SP). :697–710.

Fuzzing is a simple yet effective approach to discovering software bugs using randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program, because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic-execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge (CGC) dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries.
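
A minimal sketch, not T-Fuzz itself, of the transformation idea: negate the conditional branch that guards a detected sanity check so execution falls through into the protected code. On x86, the short JE/JNE opcodes (0x74/0x75) differ only in the low bit; the patch offset below is a hypothetical value that a dynamic-tracing step would supply.

    # Sketch only: negate one sanity check in a binary by flipping a short
    # conditional jump. Toggling the low bit converts JE (0x74) to JNE (0x75)
    # and vice versa, inverting the branch. The offset is hypothetical; in
    # T-Fuzz-style workflows it comes from tracing the failing check.
    def negate_check(path_in, path_out, jump_offset):
        with open(path_in, "rb") as f:
            data = bytearray(f.read())
        opcode = data[jump_offset]
        assert opcode in (0x74, 0x75), "expected a short JE/JNE at this offset"
        data[jump_offset] = opcode ^ 0x01   # JE <-> JNE
        with open(path_out, "wb") as f:
            f.write(data)

    # negate_check("target", "target_transformed", jump_offset=0x1a3f)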

2018-03-26
Aslan, Ö., Samet, R.  2017.  Mitigating Cyber Security Attacks by Being Aware of Vulnerabilities and Bugs. 2017 International Conference on Cyberworlds (CW). :222–225.

Because the Internet makes human lives easier, many devices are connected to it daily. The private data of individuals and large companies, including health-related data, user bank accounts, and military and manufacturing data, are increasingly accessible via the Internet. Because almost all data is now accessible through the Internet, protecting these valuable assets has become a major concern. The goal of cyber security is to protect such assets from unauthorized use. Attackers use automated tools and manual techniques to penetrate systems by exploiting existing vulnerabilities and software bugs. To provide adequate security, attack methodologies, vulnerability concepts, and defence strategies should be thoroughly investigated. The main purpose of this study is to show that the patches released for existing vulnerabilities at the operating system (OS) level and in software programs do not completely prevent cyber-attacks. Instead, producing specific patches for each company and fixing software bugs with awareness of the software running on each specific system can provide better results. This study also demonstrates that firewalls, antivirus software, Windows Defender, and other prevention techniques are not sufficient to prevent attacks. Instead, this study examines different aspects of penetration testing to determine vulnerable applications and hosts using the Nmap and Metasploit frameworks. As a test case, a virtualized system is used that includes different versions of Windows and Linux OS.
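
For a flavor of the reconnaissance step in such a penetration-testing workflow, here is a minimal sketch (not from the paper) that drives an Nmap service/version scan from Python; -sV is Nmap's standard version-detection option, and the target address is a hypothetical lab VM.

    import subprocess

    # Sketch of the first penetration-testing step: enumerate services and
    # their versions on a test host with Nmap. The reported versions can then
    # be matched against known vulnerabilities (e.g., Metasploit modules).
    result = subprocess.run(
        ["nmap", "-sV", "192.168.56.101"],   # hypothetical lab VM address
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)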

2017-03-07
Masood, A., Java, J.  2015.  Static analysis for web service security - Tools & techniques for a secure development life cycle. 2015 IEEE International Symposium on Technologies for Homeland Security (HST). :1–6.

In this ubiquitous IoT (Internet of Things) era, web services have become a vital part of today's critical national and public-sector infrastructure. With the industry-wide adoption of service-oriented architecture (SOA), web services have become an integral component of the enterprise software ecosystem, resulting in new security challenges. Web services are strategic components used by a wide variety of organizations for information exchange at Internet scale. The public deployment of mission-critical APIs opens up the possibility of software bugs being maliciously exploited. Therefore, vulnerability identification in web services through static as well as dynamic analysis is a thriving and interesting area of research in academia, national security, and industry. Using the OWASP (Open Web Application Security Project) web services guidelines, this paper discusses the challenges of existing standards and reviews new techniques and tools to improve service security by detecting vulnerabilities. Recent vulnerabilities like Shellshock and Heartbleed have shifted the focus of risk assessment to the application layer, which for the majority of organizations means public-facing web services and web/mobile applications. RESTful services have become the new norm for service development; therefore, SOAP-centric standards such as XML Encryption, XML Signature, WS-Security, and WS-SecureConversation are no longer as relevant. In this paper we provide an overview of the OWASP top 10 vulnerabilities for web services and discuss potential static code analysis techniques for discovering them. The paper reviews the security issues targeting web services, software/program verification, and the security development lifecycle.
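
To make the static-analysis idea concrete, here is a minimal, hypothetical source/sink scan over Python code; real web-service analyzers apply the same pattern (matching calls against a catalog of dangerous sinks) to XML parsers, SQL APIs, and HTTP handlers, and the sink list below is illustrative only.

    import ast

    # Illustrative static-analysis pass (not a specific OWASP tool): walk a
    # module's AST and flag calls to sinks that commonly map to injection-style
    # vulnerabilities. The sink catalog is a placeholder for this example.
    DANGEROUS_CALLS = {"eval", "exec", "os.system", "subprocess.call"}

    def call_name(node):
        """Best-effort dotted name of a call target, e.g. 'os.system'."""
        func = node.func
        if isinstance(func, ast.Name):
            return func.id
        if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
            return f"{func.value.id}.{func.attr}"
        return None

    def scan(source, filename="<input>"):
        tree = ast.parse(source, filename=filename)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and call_name(node) in DANGEROUS_CALLS:
                print(f"{filename}:{node.lineno}: dangerous call {call_name(node)}")

    scan("import os\nos.system(user_input)\n")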