Bibliography
The buffer overflow (BOF) vulnerability is one of the most dangerous security vulnerabilities and can be exploited by malicious users. It can be detected by both static and dynamic analysis techniques. Dynamic analysis requires executing the program and checking its behavior against specifications, whereas static analysis examines the source code for security vulnerabilities without executing it. Although many open-source and commercial security analysis tools employ static and dynamic methods, there is still room to improve their BOF detection capability. We propose an enhancement to the Cppcheck tool for statically detecting BOF vulnerabilities in C programs using data flow analysis. We used the Juliet Test Suite to test our approach and selected the two best tools cited in the literature for BOF detection (i.e., Frama-C and Splint) to compare the performance and accuracy of our approach. In the experiments, our proposed approach achieved a Youden index of 0.45, whereas Frama-C scored only 0.1 and Splint scored -0.47. These results show that our technique outperforms both the Frama-C and Splint static analysis tools.
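As a point of reference for the scores above, Youden's index is simply sensitivity plus specificity minus one. The short Python sketch below shows how such a score could be computed from a tool's results on a labeled test suite such as Juliet; the detection counts are hypothetical placeholders, not figures from the paper.

    # Youden's J = sensitivity + specificity - 1, computed from confusion-matrix counts.
    # The counts below are hypothetical placeholders, not results reported in the paper.
    def youden_index(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)   # true-positive rate on flawed test cases
        specificity = tn / (tn + fp)   # true-negative rate on non-flawed test cases
        return sensitivity + specificity - 1

    print(youden_index(tp=120, fn=80, tn=50, fp=150))   # 0.6 + 0.25 - 1 = -0.15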
The impact of microarchitectural attacks on Personal Computers (PCs) can be adapted to, and observed in, internetworked All Programmable System-on-Chip (AP SoC) platforms. This effort involves the access control or execution of Intellectual Property cores in the FPGA of an AP SoC victim internetworked with an AP SoC attacker via Internet Protocol (IP). Three types of attacks were implemented: a buffer overflow attack on the stack, a return-oriented programming attack, and a command-injection-based attack for dynamic reconfiguration of the FPGA. A specific preventive countermeasure is proposed for each attack. The countermeasures mainly comprise the addition of adapted words (stack protection) for the first and second attacks and multiple encryption for the third attack. In conclusion, the recommended countermeasures are practical means of counteracting the implemented attacks.
A fault attack is a well-known technique in which the behaviour of a chip is deliberately disturbed by hardware means in order to undermine the security of the information handled by the target. In this paper, we explore how electromagnetic fault injection (EMFI) can be used to create vulnerabilities in sound software, targeting a Cortex-M3 microcontroller. Several use cases are demonstrated experimentally: control flow hijacking, buffer overflow (even in the presence of a canary), covert backdoor insertion, and Return Oriented Programming can be achieved even if the programs are not vulnerable from a software point of view. These results suggest that protecting any software against vulnerabilities must take the hardware into account as well.
Software vulnerabilities are a primary concern in the IT security industry, as malicious hackers who discover them can often exploit them for nefarious purposes. However, complex programs, particularly those written in a relatively low-level language like C, are difficult to scan exhaustively for bugs, even when both manual and automated techniques are used. Since analyzing code and ensuring it is securely written has proven to be a non-trivial task, both static and dynamic analysis techniques have been heavily investigated; this work focuses on the former. The contribution of this paper is a demonstration of how a large percentage of bugs can be caught by extracting text features from functions in C source code and analyzing them with a machine learning classifier. Relatively simple features (character count, character diversity, entropy, maximum nesting depth, arrow count, "if" count, "if" complexity, "while" count, and "for" count) were extracted from these functions, as were complex features (character n-grams, word n-grams, and suffix trees). The simple features unexpectedly outperformed the complex features (74% accuracy versus 69%).
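To make the "simple" feature set above concrete, the Python sketch below shows one plausible way such features could be extracted from the text of a C function; the exact feature definitions used in the paper (notably "if" complexity) are not reproduced, so the details here are assumptions.

    import math
    import re

    # Illustrative extraction of simple text features from one C function's source.
    def simple_features(func_src):
        counts = {c: func_src.count(c) for c in set(func_src)}
        length = len(func_src) or 1
        entropy = -sum((n / length) * math.log2(n / length) for n in counts.values())
        depth, max_depth = 0, 0            # approximate nesting depth via brace balance
        for ch in func_src:
            if ch == '{':
                depth += 1
                max_depth = max(max_depth, depth)
            elif ch == '}':
                depth -= 1
        return {
            "char_count": len(func_src),
            "char_diversity": len(counts),
            "entropy": entropy,
            "max_nesting_depth": max_depth,
            "arrow_count": func_src.count("->"),
            "if_count": len(re.findall(r"\bif\b", func_src)),
            "while_count": len(re.findall(r"\bwhile\b", func_src)),
            "for_count": len(re.findall(r"\bfor\b", func_src)),
        }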
Vulnerability detection is a difficult and time-consuming task, so making sufficient use of unlabeled data is both necessary and helpful. Accordingly, this paper proposes a method for predicting buffer overflows based on semi-supervised learning. We first employ Antlr to extract an AST from C/C++ source files; then, according to a taxonomy of 22 buffer overflow attributes, a 22-dimensional vector is extracted from every function in the AST; finally, these vectors are used to train a classifier that predicts buffer overflow vulnerabilities. The experiments and evaluation indicate that our method is correct and efficient.
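The semi-supervised step could, for instance, be realized with self-training over the 22-dimensional function vectors. The minimal Python sketch below uses scikit-learn for illustration; the paper does not specify this classifier or library, so both are assumptions, and the data here are random placeholders.

    import numpy as np
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.tree import DecisionTreeClassifier

    # X: one 22-dimensional attribute vector per function (random placeholders here).
    # y: 1 = buffer overflow, 0 = safe, -1 = unlabeled (scikit-learn's convention).
    rng = np.random.default_rng(0)
    X = rng.random((200, 22))
    y = np.full(200, -1)
    y[:40] = rng.integers(0, 2, 40)   # only the first 40 functions carry labels

    model = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=5), threshold=0.8)
    model.fit(X, y)
    print(model.predict(X[:5]))       # predicted labels for five functions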
The combination of (1) hard-to-eradicate low-level vulnerabilities, (2) a large trusted computing base written in a memory-unsafe language, and (3) a desperate need to provide strong software security guarantees led to the development of protected-module architectures. Such architectures provide strong isolation of protected modules: the security of code and data depends only on a module's own implementation. In this paper we discuss how such protected modules should be written. From an academic perspective it is clear that the future lies with memory-safe languages. Unfortunately, from a business and management perspective, that is a risky path and will remain so in the near future. The use of well-known but memory-unsafe languages such as C and C++ seems inevitable. We argue that the academic world should take another look at the automatic hardening of software written in such languages to mitigate low-level security vulnerabilities. This is a well-studied topic for full applications, but protected-module architectures introduce a new and much more challenging environment. Porting existing security measures to a protected-module setting without a thorough security analysis may even harm the security of the protected modules they try to protect.
The Trusted Platform Module (TPM) has gained popularity in computing systems as a hardware security approach. A TPM provides boot-time security by verifying platform integrity, including hardware and software. However, once the software is loaded, the TPM can no longer protect its execution. In this work, we propose a dynamic TPM design that performs control flow checking to protect the program from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The control flow of the program is verified to defend against attacks such as stack smashing via buffer overflow and code reuse. We implement the proposed dynamic TPM design on an FPGA to achieve high performance, low cost, and the flexibility of easy functionality upgrades. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. Benchmark simulations demonstrate less than 1% performance penalty on the processor and effective software protection from these attacks.
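To illustrate the idea of control flow checking in this abstract, the sketch below models a shadow-stack check in Python; the paper's checker is implemented in hardware at the pipeline commit stage, so this is only a conceptual illustration, not the authors' design.

    # Conceptual shadow-stack model of a control flow check (illustration only).
    class ControlFlowChecker:
        def __init__(self):
            self.shadow_stack = []

        def on_call(self, return_address):
            # Record the legitimate return address when a call commits.
            self.shadow_stack.append(return_address)

        def on_return(self, target_address):
            # Returning anywhere other than the recorded address indicates
            # stack smashing or a code-reuse (e.g. ROP) attack.
            if target_address != self.shadow_stack.pop():
                raise RuntimeError("control flow violation detected")

    checker = ControlFlowChecker()
    checker.on_call(0x4005F0)
    checker.on_return(0x4005F0)        # legitimate return passes the check
    checker.on_call(0x400800)
    try:
        checker.on_return(0x41414141)  # overwritten return address
    except RuntimeError as err:
        print(err)                     # control flow violation detected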