Bibliography
With each Windows operating system, Microsoft introduces new features to its users. Newly added features present a challenge to digital forensics examiners because they have not yet been sufficiently analyzed or tested. One of the latest features, introduced in Windows 10 version 1909, is Windows Sandbox: a lightweight, temporary environment for running untrusted applications. Because of the temporary nature of the Sandbox and its insufficient documentation, digital forensic examiners face new challenges when examining this feature, which can be used to hide various illegal activities. This paper focuses on analyzing, with various tools, the Windows artifacts and event logs left behind by user interaction with the Sandbox feature in a clean virtual environment. Additionally, the setup of the testing environment is explained, the results of the testing and the interpretation of the findings are presented, and the open-source tools used for the analysis are described.
JavaScript is a popular attack vector for delivering malicious payloads to unsuspecting Internet users. Authors of such malicious JavaScript often employ numerous obfuscation techniques to prevent automatic detection by antivirus software and to hinder manual analysis by professional malware analysts. This paper therefore presents SAFE-DEOBS, a JavaScript deobfuscation tool that we have built. The aim of SAFE-DEOBS is to automatically deobfuscate JavaScript malware so that an analyst can more rapidly determine a malicious script's intent. This is achieved through a number of static analyses inspired by techniques from compiler theory. We demonstrate the utility of SAFE-DEOBS through a case study on real-world JavaScript malware, and show that it is a useful addition to a malware analyst's toolset.
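One compiler-theory-inspired simplification relevant to such deobfuscation is constant folding of string concatenations, which obfuscated scripts use heavily to hide URLs and API names. The following Python sketch illustrates only that idea on double-quoted literals; it is a regex-based toy, not SAFE-DEOBS itself, and a real deobfuscator would work on a parsed JavaScript AST rather than regular expressions.

    # A toy constant-folding pass for double-quoted JavaScript string literals.
    import re

    # Two adjacent double-quoted string literals joined by '+'.
    CONCAT = re.compile(r'"((?:[^"\\]|\\.)*)"\s*\+\s*"((?:[^"\\]|\\.)*)"')

    def fold_string_concatenations(js_source: str) -> str:
        """Repeatedly merge literal string concatenations until a fixed point."""
        previous = None
        while previous != js_source:
            previous = js_source
            js_source = CONCAT.sub(lambda m: '"' + m.group(1) + m.group(2) + '"', js_source)
        return js_source

    if __name__ == "__main__":
        obfuscated = 'var u = "ht" + "tp" + "://" + "ev" + "il.example";'
        print(fold_string_concatenations(obfuscated))
        # -> var u = "http://evil.example";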
Data mining is used to analyze data in order to discover previously unknown patterns and to classify and group records. Several important, scalable data mining platforms have been developed in recent years. RapidMiner is a popular open-source software package that can be used for advanced analytics; Weka and Orange are important machine learning tools for classifying patterns with clustering and regression techniques; and KNIME is often used for data preprocessing tasks such as extraction, transformation, and loading. This article summarizes the most important and robust of these platforms.
Managing identity across an ever-growing digital services landscape has become one of the most challenging tasks for security experts. Over the years, several Identity Management (IDM) systems have been introduced and adopted to tackle the growing demand for identity. Among these, a recently emerging IDM system is Self-Sovereign Identity (SSI), which offers users greater control over, and access to, their identity. This distinctive feature of the SSI IDM system represents a major step towards making sovereign identity available to users. uPort is an emerging open-source identity management system that provides sovereign identity to users, organisations, and other entities. As an emerging identity management system, it requires meticulous analysis of its architecture, operation, operational services, efficiency, advantages, and limitations. This paper contributes towards achieving all of these objectives. Firstly, it presents the architecture and operation of the uPort identity management system. Secondly, it develops a Decentralized Application (DApp) to demonstrate and evaluate uPort's operational services and efficiency. Finally, based on the developed DApp and experimental analysis, it presents the advantages and limitations of the uPort identity management system.
Developing mission-centric impact assessment techniques that address cyber resiliency in cyber-physical systems (CPSs) requires integrating system interdependencies into the risk and resilience analysis process. Network administrators generally use attack graphs to estimate possible consequences in a networked environment, but attack graphs fail to incorporate operations-specific dependencies. Localizing the dependencies among operational missions, tasks, and the hosting devices in a large-scale CPS is also challenging. In this work, we offer a graphical modeling technique for mission-centric impact assessment of cyberattacks that relates their effect to operational resiliency by combining a logical attack graph with a mission impact propagation graph. We propose formal techniques to compute the impact of cyberattacks on the operational mission, and an optimization process to minimize that impact under budgetary restrictions. We also relate the effect to the system's functional operability. We illustrate our modeling techniques using a SCADA (supervisory control and data acquisition) case study for cyber-physical power systems. We believe our proposed method will help evaluate and minimize the impact of cyberattacks on a CPS's operational missions and thus enhance cyber resiliency.
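A heavily simplified illustration of the impact propagation idea is sketched below in Python: device-level compromise scores are pushed up a weighted dependency graph from devices to tasks to missions. The node names, weights, and the linear aggregation rule are assumptions made for the example and do not reproduce the paper's formal model or its optimization process.

    # Toy propagation of device compromise scores up a mission dependency graph.
    from collections import defaultdict

    # Edges: (child, parent, weight) meaning the parent depends on the child.
    DEPENDENCIES = [
        ("plc_1", "task_feeder_control", 0.7),
        ("hmi_1", "task_feeder_control", 0.3),
        ("rtu_1", "task_substation_monitoring", 1.0),
        ("task_feeder_control", "mission_power_delivery", 0.6),
        ("task_substation_monitoring", "mission_power_delivery", 0.4),
    ]

    def propagate_impact(compromised: dict) -> dict:
        """Propagate per-device impact scores in [0, 1] up to tasks and missions."""
        impact = defaultdict(float, compromised)
        children = defaultdict(list)
        for child, parent, weight in DEPENDENCIES:
            children[parent].append((child, weight))

        def node_impact(node: str) -> float:
            if node in compromised or not children[node]:
                return impact[node]
            total = sum(weight * node_impact(child) for child, weight in children[node])
            impact[node] = min(1.0, total)
            return impact[node]

        for _, parent, _ in DEPENDENCIES:
            node_impact(parent)
        return dict(impact)

    if __name__ == "__main__":
        # Example: plc_1 fully compromised, rtu_1 partially degraded.
        scores = propagate_impact({"plc_1": 1.0, "rtu_1": 0.5})
        print(scores["mission_power_delivery"])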
The clear, social, and dark web have lately been identified as rich sources of valuable cyber-security information that, given the appropriate tools and methods, may be identified, crawled, and subsequently leveraged for actionable cyber-threat intelligence. In this work, we focus on the information gathering task and present a novel crawling architecture for transparently harvesting data from security websites in the clear web, security forums in the social web, and hacker forums and marketplaces in the dark web. The proposed architecture adopts a two-phase approach to data harvesting. In the first phase, a machine-learning-based crawler directs the harvesting towards websites of interest, while in the second phase state-of-the-art statistical language modelling techniques are used to represent the harvested information in a latent low-dimensional feature space and rank it by its potential relevance to the task at hand. The proposed architecture is realised using exclusively open-source tools, and a preliminary evaluation with crowdsourced results demonstrates its effectiveness.
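The second-phase ranking idea can be illustrated with a small Python sketch using scikit-learn: harvested documents are embedded into a low-dimensional latent space (here TF-IDF followed by truncated SVD, i.e. LSA) and ranked by cosine similarity to a short description of the intelligence-gathering task. The choice of models, the example texts, and the two-dimensional space are assumptions for illustration only, not the architecture's actual components.

    # Rank harvested text by relevance in a low-dimensional latent space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_documents(documents, task_description):
        """Return (score, document) pairs sorted by latent-space similarity to the task."""
        corpus = documents + [task_description]
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
        latent = TruncatedSVD(n_components=2).fit_transform(tfidf)   # low-dimensional space
        doc_vectors, task_vector = latent[:-1], latent[-1:]
        scores = cosine_similarity(doc_vectors, task_vector).ravel()
        return sorted(zip(scores, documents), reverse=True)

    if __name__ == "__main__":
        harvested = [
            "New remote code execution exploit for a popular CMS sold on a forum.",
            "Weekend football results and transfer rumours.",
            "Credential dumps and ransomware builder advertised on a marketplace.",
        ]
        query = "exploit ransomware credential vulnerability forum marketplace threat intelligence"
        for score, text in rank_documents(harvested, query):
            print(f"{score:+.2f}  {text}")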
With the rapidly increasing number of Internet of Things (IoT) devices and their extensive integration into people's daily lives, the security of those devices is of primary importance. Nonetheless, many IoT devices suffer from the absence, or the poor application, of security concepts, which leads to severe vulnerabilities in those devices. To detect potential vulnerabilities early, network scanner tools are frequently used. However, most of those tools are highly specialized; thus, multiple tools and a meaningful correlation of their results are required to obtain an adequate listing of identified network vulnerabilities. To simplify this process, we propose a modular framework for automated network reconnaissance and vulnerability indication in IP-based networks. It allows a diverse set of tools to be integrated as either scanning tools or analysis tools. Moreover, the framework enables the aggregation of results from different modules and allows information sharing between modules, facilitating the development of advanced analysis modules. Additionally, intermediate scanning and analysis data is stored, enabling a historical view of derived information and allowing users to retrace decision-making processes. We show the framework's modular capabilities by implementing one scanner module and three analysis modules. The automated process is then evaluated using an exemplary scenario with common IP-based IoT components.
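A minimal Python sketch of the modular idea is given below: scanner modules produce raw per-target results, the framework aggregates and stores them, and analysis modules derive vulnerability indications from the shared results. All class and method names here are hypothetical stand-ins, not the framework's actual interfaces.

    # Hypothetical module interfaces for a modular reconnaissance framework.
    from abc import ABC, abstractmethod

    class ScannerModule(ABC):
        @abstractmethod
        def scan(self, target: str) -> dict:
            """Return raw scan data for a single target."""

    class AnalysisModule(ABC):
        @abstractmethod
        def analyse(self, results: dict) -> list:
            """Return vulnerability indications derived from aggregated results."""

    class PortScanner(ScannerModule):
        def scan(self, target: str) -> dict:
            # Placeholder result; a real module would wrap a tool such as nmap.
            return {"open_ports": [23, 80]}

    class TelnetIndicator(AnalysisModule):
        def analyse(self, results: dict) -> list:
            return [f"{target}: telnet exposed (unencrypted management)"
                    for target, data in results.items()
                    if 23 in data.get("open_ports", [])]

    class Framework:
        def __init__(self, scanners, analysers):
            self.scanners, self.analysers = scanners, analysers
            self.history = []                          # stored intermediate data

        def run(self, targets):
            results = {}
            for target in targets:                     # aggregate all scanner output per target
                merged = {}
                for scanner in self.scanners:
                    merged.update(scanner.scan(target))
                results[target] = merged
            self.history.append(results)               # keep intermediate data for retracing decisions
            return [f for a in self.analysers for f in a.analyse(results)]

    if __name__ == "__main__":
        framework = Framework([PortScanner()], [TelnetIndicator()])
        print(framework.run(["192.0.2.10", "192.0.2.11"]))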
Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit a moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to defect prediction for evolving software. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, and their performance was evaluated in terms of AUC, GM, and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results showed that the prediction model performs better when cross-release code churn is included. A practical implication of this research is that cross-release code churn can be used to aid safe planning of the next release in software development.
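The evaluation setup can be sketched in Python with scikit-learn on synthetic data: one model is trained on an earlier release without the churn feature and one with it, both are tested on the following release, and their AUC values are compared. The generated data, feature names, and logistic-regression choice are assumptions for illustration; the study's actual datasets, models, and the GM and Pop measures are not reproduced here.

    # Synthetic comparison of defect prediction with and without cross-release churn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    def make_release(n_modules):
        """Generate synthetic per-module metrics and defect labels for one release."""
        size = rng.normal(500, 150, n_modules)             # a size/design metric
        churn = rng.gamma(2.0, 40.0, n_modules)            # cross-release code churn
        p_defect = 1 / (1 + np.exp(-(0.002 * size + 0.01 * churn - 2.5)))
        defective = rng.random(n_modules) < p_defect
        return np.column_stack([size, churn]), defective

    X_train, y_train = make_release(400)                   # earlier release (training)
    X_test, y_test = make_release(300)                     # following release (testing)

    for name, cols in [("without churn", [0]), ("with churn", [0, 1])]:
        model = make_pipeline(StandardScaler(), LogisticRegression())
        model.fit(X_train[:, cols], y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test[:, cols])[:, 1])
        print(f"{name}: AUC = {auc:.3f}")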
Today, computing on various Android devices is pervasive. However, growing security vulnerabilities and attacks in the Android ecosystem pose various threats through user apps. Taint analysis is a common technique for defending against these threats, yet it struggles to achieve scalability and effectiveness simultaneously in practice. This paper presents a novel approach to fast and precise taint checking, called incremental taint analysis, which exploits the evolving nature of Android apps. The analysis narrows the search space of taint checking from an entire app, as conventionally addressed, to the parts of the program that differ from its previous versions. This technique improves the overall efficiency of checking multiple versions of the app as it evolves. We have implemented the technique in a tool prototype, EVOTAINT, and evaluated our analysis by applying it to real-world evolving Android apps. Our preliminary results show that the incremental approach largely reduces the cost of taint analysis, by 78.6% on average, without sacrificing analysis effectiveness relative to a representative precise taint analysis used as the baseline.
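The core incremental idea can be sketched in a few lines of Python: given the call graph and the set of methods changed since the previous version, only those methods and their transitive callers need to be re-analysed, while cached taint summaries are reused for the rest. The call graph and data structures below are hypothetical and do not reflect EVOTAINT's actual implementation.

    # Toy change-impact computation for incremental taint analysis.
    CALL_GRAPH = {                     # caller -> callees
        "onCreate": ["readContacts", "render"],
        "readContacts": ["sendHttp"],
        "render": [],
        "sendHttp": [],
    }

    def methods_to_reanalyse(changed: set) -> set:
        """Changed methods plus every transitive caller that may see new flows."""
        callers = {m: set() for m in CALL_GRAPH}
        for caller, callees in CALL_GRAPH.items():
            for callee in callees:
                callers[callee].add(caller)
        work, affected = list(changed), set(changed)
        while work:
            method = work.pop()
            for caller in callers[method] - affected:
                affected.add(caller)
                work.append(caller)
        return affected

    if __name__ == "__main__":
        cached_summaries = {"onCreate": {}, "readContacts": {}, "render": {}, "sendHttp": {}}
        dirty = methods_to_reanalyse({"sendHttp"})      # sendHttp changed in the new version
        print("re-analyse:", sorted(dirty))             # ['onCreate', 'readContacts', 'sendHttp']
        print("reuse cached:", sorted(set(cached_summaries) - dirty))   # ['render']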
The supply chain is an extremely successful way to cope with the risk posed by distributed decision making in product sourcing and distribution. While open source software involves similarly distributed decision making and code and information flows similar to those in ordinary supply chains, the actual networks needed to quantify and communicate risks in software supply chains have not been constructed on a large scale. This work proposes to close this gap by measuring dependency, code reuse, and knowledge flow networks in open source software. In preliminary work, we have developed suitable tools and methods that rely on public version control data to measure and compare these networks for R language and emberjs packages. We propose ways to calculate the three networks for the entirety of public software, evaluate their accuracy, and provide public infrastructure for building risk assessment and mitigation tools for various individual and organizational participants in open source software. We hope that this infrastructure will contribute to a more predictable experience with OSS and lead to its even wider adoption.
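A toy Python sketch of the dependency-network measurement is shown below: package manifests are turned into a dependency graph and the number of transitive dependents of each package is counted as a crude proxy for how far a defect or compromise could propagate through the supply chain. The manifest data and the metric are assumptions for illustration, not the actual tools or networks built in this work.

    # Build a toy dependency network and count transitive dependents per package.
    from collections import defaultdict

    MANIFESTS = {                       # package -> declared dependencies
        "app-a": ["http-lib", "json-lib"],
        "app-b": ["http-lib"],
        "http-lib": ["tls-lib"],
        "json-lib": [],
        "tls-lib": [],
    }

    def transitive_dependents(package: str) -> set:
        """All packages that directly or indirectly depend on `package`."""
        dependents = defaultdict(set)
        for pkg, deps in MANIFESTS.items():
            for dep in deps:
                dependents[dep].add(pkg)
        seen, stack = set(), [package]
        while stack:
            for parent in dependents[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    if __name__ == "__main__":
        for pkg in MANIFESTS:
            print(f"{pkg}: {len(transitive_dependents(pkg))} transitive dependents")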
This paper presents our results from identifying and documenting false positives generated by static code analysis tools. By false positives, we mean cases where a static code analysis tool generates a warning message, but the warning message is not really an error. The goal of our study is to understand the different kinds of false positives generated so we can (1) automatically determine whether an error message is indeed a true positive, and (2) reduce the number of false positives developers and testers must triage. We used two open-source tools and one commercial tool in our study. The results of our study have led to 14 core false positive patterns, some of which we have confirmed with static code analysis tool developers.