Biblio
The use of risk information can help software engineers identify software components that are likely vulnerable or that require extra attention during testing. Studies have shown that requirements risk-based approaches can improve the effectiveness of regression testing techniques. However, the risk estimation processes used in such approaches can be subjective, time-consuming, and costly. In this research, we introduce a fuzzy expert system that emulates human thinking to address the subjectivity-related issues in the risk estimation process in a systematic and efficient way, and thus further improve the effectiveness of test case prioritization. Furthermore, the data required for our approach was gathered through a semi-automated process, which made risk estimation less subjective. The empirical results indicate that the new prioritization approach can improve the rate of fault detection over several existing test case prioritization techniques, while reducing the threats posed by subjective risk estimation.
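To make the idea concrete, here is a minimal sketch of fuzzy risk estimation driving test case prioritization. The triangular membership functions, the two risk factors, and the rule base are hypothetical illustrations, not the paper's actual expert system:

```python
# A minimal fuzzy risk-estimation sketch; the membership functions, risk
# factors, and rule base are hypothetical, not the paper's expert system.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_score(complexity, security_impact):
    """Mamdani-style inference collapsed to a weighted average (inputs in 0..1)."""
    low_c, high_c = tri(complexity, -1, 0, 1), tri(complexity, 0, 1, 2)
    low_s, high_s = tri(security_impact, -1, 0, 1), tri(security_impact, 0, 1, 2)
    rules = [                      # (firing strength, crisp risk consequent)
        (min(high_c, high_s), 0.9),
        (min(high_c, low_s), 0.6),
        (min(low_c, high_s), 0.5),
        (min(low_c, low_s), 0.1),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * r for w, r in rules) / total if total else 0.0

# Prioritize test cases by the estimated risk of the requirements they cover.
requirements = {"R1": (0.8, 0.9), "R2": (0.3, 0.2), "R3": (0.6, 0.7)}
ranked = sorted(requirements, key=lambda r: risk_score(*requirements[r]), reverse=True)
print(ranked)  # ['R1', 'R3', 'R2']
```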
While the number of mobile applications is growing rapidly, these applications often ship with numerous security flaws due to a lack of appropriate coding practices. Security issues must be addressed early in the development lifecycle rather than fixed after attacks occur, when the damage may already be extensive. Early elimination of potential security vulnerabilities increases software security and mitigates or reduces the damage, such as data loss or service disruption, caused by malicious attacks. However, many software developers lack the security knowledge and skills required at the development stage, and Secure Mobile Software Development (SMSD) is not yet well represented in academia and industry. In this paper, we present a static analysis-based security analysis approach through the design and implementation of a plugin for Android Development Studio, named DroidPatrol. The proposed plugin supports developers by providing a list of potential vulnerabilities early.
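As a toy illustration of the kind of finding such a plugin could surface, the pattern scan below flags hard-coded credentials and cleartext endpoints in source text. DroidPatrol itself is built on taint analysis; this sketch only mimics the developer-facing output, and the rules are hypothetical:

```python
import re

# Toy static check: flag hard-coded secrets and insecure URLs in source files.
# Illustrative only; DroidPatrol's actual taint analysis is far more involved.
RULES = [
    (re.compile(r'String\s+\w*(password|secret|apikey)\w*\s*=\s*"[^"]+"', re.I),
     "Hard-coded credential"),
    (re.compile(r'"http://[^"]+"'), "Cleartext HTTP endpoint"),
]

def scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message, line.strip()))
    return findings

java_snippet = '''
String apiKey = "AIzaSyD-example";
URL u = new URL("http://example.com/login");
'''
for lineno, message, line in scan(java_snippet):
    print(f"line {lineno}: {message}: {line}")
```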
Computational Intelligence (CI) algorithms and techniques are packaged in a variety of disparate frameworks and applications that vary in supported functionality and in implementation decisions that drastically affect performance. Developers looking to employ different CI techniques face a series of trade-offs in selecting an appropriate library or framework, including resource consumption, features, portability, interface complexity, and ease of parallelization. Considerations such as language compatibility and familiarity with a particular library make the choice even more difficult. This paper introduces MeetCI, an open source software framework for computational intelligence software design automation that facilitates application design decisions and the software implementation process. MeetCI abstracts away the framework-specific details of CI techniques implemented in a variety of libraries. This allows CI users to benefit from a variety of current frameworks without investigating the nuances of each one. Using an XML file developed in accordance with the specifications, a user can design a CI application generically and utilize various CI software without having to redesign their entire technology stack. Switching between libraries in MeetCI is trivial, and accessing the right library to satisfy a user's goals can be done easily and effectively. The paper discusses the framework's use in the design of various applications. The design process is illustrated with four examples from the expert systems and machine learning domains: the development of an expert system for security evaluation, two classification problems, and a prediction problem with recurrent neural networks.
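A minimal sketch of the library-agnostic workflow this implies: a generic XML specification is parsed once and dispatched to a backend adapter, so switching libraries touches only one element. The schema and backend names below are hypothetical, not MeetCI's actual specification:

```python
# Sketch of the library-agnostic design idea: parse a generic application
# spec and dispatch to a concrete backend. The XML schema and backend names
# here are hypothetical, not MeetCI's actual specification.
import xml.etree.ElementTree as ET

SPEC = """
<application type="classification">
  <backend>sklearn</backend>
  <model kind="mlp" hidden="16"/>
</application>
"""

def build(spec_xml: str):
    root = ET.fromstring(spec_xml)
    backend = root.findtext("backend")
    model = root.find("model")
    # Each backend adapter maps the generic spec onto library-specific calls;
    # switching libraries means changing only the <backend> element.
    if backend == "sklearn":
        return f"sklearn MLP, hidden layers: ({model.get('hidden')},)"
    if backend == "keras":
        return f"keras Dense stack, units: {model.get('hidden')}"
    raise ValueError(f"unsupported backend: {backend}")

print(build(SPEC))
```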
Application development for the cloud is already challenging because of the complexity caused by the ubiquitous, interconnected, and scalable nature of the cloud paradigm. But when modern secure and privacy-aware cloud applications require the integration of cryptographic algorithms, developers face additional challenges: incorrect application of cryptography may not only forfeit the intended strong security properties but may also open up additional loopholes for breaches in the near or distant future. To avoid these pitfalls and to achieve dependable security and privacy by design, cryptography needs to be systematically designed into the software from the start. We present a system architecture providing a practical abstraction for the many specialists involved in such a development process, plus a suitable cryptographic software development life cycle methodology on top of the architecture. The methodology is complemented with additional tools supporting structured inter-domain communication and thus the generation of consistent results: cloud security and privacy patterns, and modelling of cloud service level agreements. We conclude with an assessment of the use of the Cryptographic Software Design Life Cycle (CryptSDLC) in an EU research project.
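One way to picture the abstraction between specialist roles is an interface boundary: application developers program against a narrow contract while cryptographers own the implementation behind it. The sketch below, built on Python's standard hmac and hashlib modules, is an assumed illustration of that separation, not CryptSDLC's actual artifacts:

```python
# A minimal sketch of the abstraction idea: application code programs against
# a narrow interface, while cryptographers own the implementations behind it.
# The interface is hypothetical; CryptSDLC defines its own artifacts.
import hashlib
import hmac
import os
from abc import ABC, abstractmethod

class Authenticator(ABC):
    @abstractmethod
    def tag(self, message: bytes) -> bytes: ...
    @abstractmethod
    def verify(self, message: bytes, tag: bytes) -> bool: ...

class HmacSha256(Authenticator):
    """One concrete choice; swappable without touching application code."""
    def __init__(self, key: bytes):
        self._key = key
    def tag(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()
    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.tag(message), tag)

auth: Authenticator = HmacSha256(os.urandom(32))
t = auth.tag(b"cloud service request")
print(auth.verify(b"cloud service request", t))  # True
```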
Despite decades of research on software diversification, only address space layout randomization has seen widespread adoption. Code randomization, an effective defense against return-oriented programming exploits, has remained an academic exercise mainly due to i) the lack of a transparent and streamlined deployment model that does not disrupt existing software distribution norms, and ii) the inherent incompatibility of program variants with error reporting, whitelisting, patching, and other operations that rely on code uniformity. In this work we present compiler-assisted code randomization (CCR), a hybrid approach that relies on compiler-rewriter cooperation to enable fast and robust fine-grained code randomization on end-user systems, while maintaining compatibility with existing software distribution models. The main concept behind CCR is to augment binaries with a minimal set of transformation-assisting metadata, which i) facilitate rapid fine-grained code transformation at installation or load time, and ii) form the basis for reversing any applied code transformation when needed, to maintain compatibility with existing mechanisms that rely on referencing the original code. We have implemented a prototype of this approach by extending the LLVM compiler toolchain, and developing a simple binary rewriter that leverages the embedded metadata to generate randomized variants using basic block reordering. The results of our experimental evaluation demonstrate the feasibility and practicality of CCR, as on average it incurs a modest file size increase of 11.46% and a negligible runtime overhead of 0.28%, while it is compatible with link-time optimization and control flow integrity.
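To illustrate the core mechanism in miniature: given per-block metadata (boundaries and reference targets), a rewriter can shuffle basic blocks and fix up references against the new layout. The toy model below operates on labeled pseudo-blocks rather than real binaries, which CCR's compiler-emitted metadata handles:

```python
import random

# Toy model of metadata-driven basic block reordering: blocks carry their
# original boundaries, and references are fixed up after shuffling. Real CCR
# operates on binaries via compiler-emitted metadata; this is only the idea.
blocks = {  # label -> (code placeholder, jump target label or None)
    "B0": ("cmp;jz", "B2"),
    "B1": ("add;ret", None),
    "B2": ("mul;jmp", "B1"),
}

def randomize(blocks, seed=None):
    order = list(blocks)
    random.Random(seed).shuffle(order)       # pick a random layout
    offsets, pc = {}, 0
    for label in order:                      # lay blocks out at new offsets
        offsets[label] = pc
        pc += len(blocks[label][0])          # pretend one byte per character
    relocated = []
    for label in order:                      # patch references via metadata
        code, target = blocks[label]
        dest = offsets[target] if target else None
        relocated.append((offsets[label], code, dest))
    return relocated

for off, code, dest in randomize(blocks, seed=1):
    print(f"{off:4d}: {code:8s} -> {dest}")
```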
A blockchain is a distributed ledger forming a distributed consensus on a history of transactions, and it is the underlying technology for the Bitcoin cryptocurrency. However, its applications extend far beyond the financial sector. The transaction verification process for cryptocurrencies is much slower than in traditional digital transaction systems. One approach to increasing transaction speed and scalability is to identify a solution that offers faster Proof of Work. In this paper, we propose a method for accelerating Proof of Work based on parallel mining rather than solo mining. The goal is to ensure that no two miners put the same effort into solving a specific block. The proposed method includes processes for manager selection, work distribution, and rewards. The method has been implemented in a test environment that contains all the characteristics needed to perform Proof of Work for Bitcoin, and it has been tested on a variety of case scenarios by varying the difficulty level and the number of validators. Preliminary results show an improvement in the scalability of Proof of Work of up to 34% compared to the current system.
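A minimal sketch of the work-distribution idea: a manager partitions the nonce space into disjoint ranges so that miners never duplicate a search. The difficulty target, header bytes, and search-space size below are illustrative stand-ins, not Bitcoin's actual parameters:

```python
import hashlib

# Sketch of the work-distribution idea: a manager splits the nonce space so
# that no two miners search the same range. Parameters are illustrative, not
# Bitcoin's actual difficulty encoding.
DIFFICULTY = 2 ** 240            # hash must fall below this target
HEADER = b"block-header-bytes"

def mine(range_start: int, range_end: int):
    """Each miner searches only its assigned, disjoint nonce range."""
    for nonce in range(range_start, range_end):
        digest = hashlib.sha256(hashlib.sha256(
            HEADER + nonce.to_bytes(8, "big")).digest()).digest()
        if int.from_bytes(digest, "big") < DIFFICULTY:
            return nonce
    return None

def manager(num_miners: int, space: int = 2 ** 20):
    chunk = space // num_miners
    for i in range(num_miners):          # disjoint assignments (sequential
        found = mine(i * chunk, (i + 1) * chunk)  # here; real miners run
        if found is not None:                     # concurrently)
            return found                 # reward is then split per protocol
    return None

print(manager(4))
```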
Traditional security practices focus on negative incentives that attempt to force compliance through constraints, monitoring, and punishment. This paper describes a missing dimension of most organizations' insider threat defense: one that explicitly considers positive incentives for attracting individuals to act in the interests of the organization. Positive incentives focus on properties of the organizational context of workforce management practices, including those relating to organizational supportiveness, coworker connectedness, and job engagement. Without due attention to the organizational context in which insider threats occur, insider misbehaviors may simply reoccur as a natural response to counterproductive or dysfunctional management practices. A balanced combination of positive and negative incentives can improve employees' relationships with the organization and provide a means for employees to better cope with personal and professional stressors. An insider threat program that balances organizational incentives can become an advocate for the workforce and a means for improving employee work life, a welcome message to employees who feel threatened by programs focused on discovering insider wrongdoing.
Modern software development and deployment practices encourage complexity and bloat while unintentionally sacrificing efficiency and security. A major driver of this is the overwhelming emphasis on programmer productivity. The constant demands to speed up development while reducing costs have forced a series of individual decisions and approaches throughout software engineering history that have led to this point. The current state of the practice in the field is a patchwork of architectures and frameworks, packed full of features in order to appeal to the greatest number of users, cover obscure use cases, maximize code reuse, and minimize developer effort. The Office of Naval Research (ONR) Total Platform Cyber Protection (TPCP) program seeks to de-bloat software binaries late in the life cycle with little or no access to the source code or the development process.
This paper presents our results from identifying and documenting false positives generated by static code analysis tools. By false positives, we mean that a static code analysis tool generates a warning message, but the warning message is not really an error. The goal of our study is to understand the different kinds of false positives generated so we can (1) automatically determine whether an error message is truly a true positive, and (2) reduce the number of false positives developers and testers must triage. We used two open-source tools and one commercial tool in our study. The results of our study have led to 14 core false positive patterns, some of which we have confirmed with static code analysis tool developers.
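For a flavor of what such a pattern can look like, consider this hypothetical example (not one of the paper's 14): a flow-insensitive checker warns of a possible None dereference that correlated guards actually rule out:

```python
import io

# Hypothetical false-positive pattern: a flow-insensitive checker warns that
# `conn` may be None at the .close() call, but the two branches are guarded
# by the same flag, so the dereference is safe. Illustrative only; the
# paper's 14 patterns are catalogued from real tools.

def fetch(url: str):
    conn = None
    ok = url.startswith("https://")
    if ok:
        conn = io.BytesIO(b"response")   # stand-in for opening a connection
    if ok:
        conn.close()   # checker: "conn may be None" -- a false positive,
                       # because this guard is correlated with the first.

fetch("https://example.com")
```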
Static code analysis is a convenient technique to support software development. Without a prior test setup, information about later runtime behavior can be inferred and errors in the code can be found before a regular compiler is even run. Solutions for applying static code analysis to PLC software following IEC 61131-3 already exist, but using these separate tools usually creates a gap in the development process. In this paper, we introduce an architecture for using static analysis directly within a development environment, giving instant feedback to developers while they are still editing the PLC software.
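A sketch of the instant-feedback loop this architecture implies: the editor pushes every buffer change to an analyzer that returns diagnostics without any compile step. The editor stand-in and the toy Structured Text rule below are assumptions for illustration:

```python
import re

def analyze(buffer: str):
    """Toy check: IF blocks in Structured Text must be closed with END_IF."""
    tokens = re.findall(r'\bEND_IF\b|\bIF\b', buffer, re.I)
    depth = 0
    for tok in tokens:
        depth += -1 if tok.upper() == "END_IF" else 1
    return ["unbalanced IF/END_IF"] if depth != 0 else []

class Editor:
    """Stand-in for the IDE integration: re-analyze on every edit."""
    def __init__(self):
        self.buffer = ""
    def on_change(self, new_text: str):
        self.buffer = new_text
        for d in analyze(self.buffer):   # instant feedback, no compile step
            print("warning:", d)

ed = Editor()
ed.on_change("IF x > 0 THEN y := 1;")          # warning: unbalanced IF/END_IF
ed.on_change("IF x > 0 THEN y := 1; END_IF;")  # no warning
```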
New-generation communication technologies (e.g., 5G) enhance interactions between devices in mobile and wireless communication networks by supporting large-scale data sharing. Vehicles are one kind of device that benefits from these technologies, and they have become a significant component of vehicular networks. Thus, as a classic application of the Internet of Things (IoT), a vehicular network can provide more information services to its human users, making it more socialized. This has given rise to a new concept, the "Vehicular Social Network (VSN)", which brings both the benefits of data sharing and challenges for security. Traditional public key infrastructure (PKI) can guarantee user identity authentication in the network; however, PKI cannot distinguish untrustworthy information from authorized users. For this reason, a trust evaluation mechanism is required to guarantee the trustworthiness of information by identifying malicious users within the network. Hence, this paper explores a trust evaluation algorithm for VSNs and proposes a cloud-based VSN architecture to implement it. Experiments are conducted to investigate the performance of the trust algorithm in a vehicular network environment by building a three-layer VSN model. Simulation results reveal that the trust algorithm can be efficiently implemented by the proposed three-layer model.
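As an assumed illustration of what such a trust evaluation can reduce to, the sketch below combines a vehicle's direct experience with peer recommendations aggregated at the cloud layer. The weighting scheme and threshold are hypothetical, not the paper's algorithm:

```python
# A minimal sketch of peer-based trust evaluation, assuming trust is a
# weighted average of direct experience and recommendations aggregated at
# the cloud layer. Weights and threshold are illustrative, not the paper's.

def trust(direct: float, recommendations: list[float], alpha: float = 0.6):
    """Combine direct experience (0..1) with peer recommendations."""
    rec = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return alpha * direct + (1 - alpha) * rec

def is_trustworthy(score: float, threshold: float = 0.5) -> bool:
    return score >= threshold

# Vehicle A evaluates a message from vehicle B before acting on it.
score = trust(direct=0.8, recommendations=[0.9, 0.7, 0.2])
print(round(score, 2), is_trustworthy(score))  # 0.72 True
```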
Policy design is an important part of software development. As security breaches increase in variety, designing a security policy that addresses all potential breaches becomes a nontrivial task. A complete security policy would specify rules to prevent breaches. Systematically determining which, if any, policy clause has been violated by a reported breach is a means for identifying gaps in a policy. Our research goal is to help analysts measure the gaps between security policies and reported breaches by developing a systematic process based on semantic reasoning. We propose SEMAVER, a framework for determining coverage of breaches by policies via comparison of individual policy clauses and breach descriptions. We represent a security policy as a set of norms. Norms (commitments, authorizations, and prohibitions) describe expected behaviors of users, and formalize who is accountable to whom and for what. A breach corresponds to a norm violation. We develop a semantic similarity metric for pairwise comparison between the norm that represents a policy clause and the norm that has been violated by a reported breach. We use the US Health Insurance Portability and Accountability Act (HIPAA) as a case study. Our investigation of a subset of the breaches reported by the US Department of Health and Human Services (HHS) reveals the gaps between HIPAA and reported breaches, leading to a coverage of 65%. Additionally, our classification of the 1,577 HHS breaches shows that 44% of the breaches are accidental misuses and 56% are malicious misuses. We find that HIPAA's gaps regarding accidental misuses are significantly larger than its gaps regarding malicious misuses.
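A minimal sketch of the norm-comparison idea: a norm is represented by its type, subject, object, antecedent, and consequent, and a policy clause is scored against a breach component-wise. Simple word overlap (Jaccard) stands in for SEMAVER's semantic similarity metric, and the example norms are illustrative:

```python
# Norm-comparison sketch; Jaccard word overlap is a stand-in for SEMAVER's
# actual semantic similarity metric, and the example norms are invented.
from dataclasses import dataclass

@dataclass
class Norm:
    ntype: str       # commitment | authorization | prohibition
    subject: str
    obj: str
    antecedent: str
    consequent: str

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def similarity(policy: Norm, breach: Norm) -> float:
    if policy.ntype != breach.ntype:
        return 0.0                       # the violated norm must match in type
    parts = [(policy.subject, breach.subject), (policy.obj, breach.obj),
             (policy.antecedent, breach.antecedent),
             (policy.consequent, breach.consequent)]
    return sum(jaccard(p, b) for p, b in parts) / len(parts)

clause = Norm("prohibition", "covered entity", "patient records",
              "without authorization", "disclose records to third party")
breach = Norm("prohibition", "hospital employee", "patient records",
              "without authorization", "disclose records on social media")
print(f"{similarity(clause, breach):.2f}")  # 0.56: partial coverage of the breach
```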