Bibliography
The number of applications and services hosted on cloud platforms is constantly increasing: more and more applications run as services on cloud platforms, co-existing with other services in a mutually untrusted environment. Facilities such as virtual machines, containers, and encrypted communication channels aim to isolate the various applications and protect sensitive user data. However, such techniques cannot always provide a secure execution environment for sensitive applications, nor do they guarantee that data are not monitored by an honest-but-curious provider once they reach the cloud infrastructure. Recent advances in trusted execution environments within commodity processors, such as Intel SGX, provide a secure reverse sandbox, where code and data are isolated even from the underlying operating system. Moreover, Intel SGX provides a remote attestation mechanism that allows the communicating parties to verify each other's identity and prove that code is executed in hardware-assisted software enclaves. Many approaches try to ensure code and data integrity or enforce channel encryption schemes such as TLS; however, these techniques cannot achieve complete isolation and secure communication without hardware assistance, or they are inefficient in terms of performance. In this work, we design and implement a practical attestation system that allows the service provider to offer a seamless attestation service between the hosted applications and the end clients. Furthermore, we implement a novel caching system that is capable of eliminating the latencies introduced by the remote attestation process. Our approach allows the parties to attest one another before each communication attempt, with improved performance compared to a standard TLS handshake.
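To make the caching idea concrete, here is a minimal sketch of an attestation cache keyed by enclave measurement. The `do_remote_attestation` helper and the cache lifetime are assumptions standing in for the real SGX quote-verification exchange; the paper's actual scheme is not reproduced here.

```python
import time
from dataclasses import dataclass

@dataclass
class AttestationResult:
    measurement: str    # enclave identity (e.g., MRENCLAVE) from the quote
    verified_at: float  # timestamp of the successful verification

class AttestationCache:
    """Caches successful attestations so repeated connections to the
    same enclave can skip the full remote-attestation round trip."""

    def __init__(self, ttl_seconds=300):  # lifetime is an assumed parameter
        self.ttl = ttl_seconds
        self._entries = {}  # measurement -> AttestationResult

    def is_fresh(self, measurement):
        entry = self._entries.get(measurement)
        return entry is not None and time.time() - entry.verified_at < self.ttl

    def record(self, measurement):
        self._entries[measurement] = AttestationResult(measurement, time.time())

def attest(measurement, cache, do_remote_attestation):
    # do_remote_attestation is a hypothetical stand-in for the real
    # SGX quote-verification exchange; it returns True on success.
    if cache.is_fresh(measurement):
        return True  # cache hit: no attestation latency on this attempt
    if do_remote_attestation(measurement):
        cache.record(measurement)
        return True
    return False
```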
The advanced persistent threat (APT) landscape has been studied without quantifiable data from which indicators of compromise (IoC) can be uniformly analyzed, replicated, or used to support security mechanisms. This work consolidates extensive academic and industry APT analysis, not as an incremental step in existing approaches to APT detection, but as a new benchmark of APT-related opportunity. We collect 15,259 APT IoC hashes and retrieve the corresponding sandbox execution logs across 41 different file types. This work forms an initial focus on Windows-based threat detection. We present a novel Windows APT executable (APT-EXE) dataset, made available to the research community. We conduct manual and statistical analysis of the APT-EXE dataset, along with supporting feature analysis. We draw upon repeated and common APT path accesses, file types, and operations within the APT-EXE dataset to generalize APT execution footprints. A baseline case analysis successfully identifies the majority (117 of 152) of live APT samples from campaigns across 2018 and 2019.
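As a rough illustration of the repeated-path analysis, the sketch below counts (operation, path) pairs across sandbox execution logs and keeps those common to a chosen fraction of samples. The log format, the `common_footprint` helper, and the threshold are assumptions, much simpler than the dataset's actual structure.

```python
from collections import Counter

# Each sandbox log is assumed to reduce to a list of (operation, path)
# pairs; the real APT-EXE logs are richer than this.
def common_footprint(logs, min_fraction=0.5):
    """Return (operation, path) pairs seen in at least min_fraction of
    the sample logs -- a crude stand-in for footprint generalization."""
    seen_in = Counter()
    for log in logs:
        for event in set(log):  # count each pair once per sample
            seen_in[event] += 1
    threshold = min_fraction * len(logs)
    return {event for event, count in seen_in.items() if count >= threshold}

logs = [
    [("write", r"C:\Windows\Temp\x.tmp"), ("exec", r"C:\Windows\System32\cmd.exe")],
    [("write", r"C:\Windows\Temp\x.tmp"), ("read", r"C:\boot.ini")],
    [("write", r"C:\Windows\Temp\x.tmp")],
]
print(common_footprint(logs, min_fraction=0.7))
# {('write', 'C:\\Windows\\Temp\\x.tmp')}
```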
Ransomware attacks are taking advantage of the ongoing pandemic and attacking vulnerable systems in the business, health, education, insurance, banking, and government sectors. Various approaches have been proposed to combat ransomware, but malware writers' constantly evolving techniques often bypass security checkpoints. Commercial tools for ransomware analysis and detection are available on the market, but their performance is questionable. This paper proposes an AI-based ransomware detection framework and a detection tool (AIRaD) that combines static and dynamic malware analysis techniques. Dynamic binary instrumentation is performed using the PIN tool, and function call traces are analyzed by leveraging the Cuckoo sandbox and Ghidra. Features extracted at the DLL, function call, and assembly levels are processed with NLP and association rule mining techniques and fed to different machine learning classifiers. A support vector machine and AdaBoost with the J48 algorithm achieved the highest accuracy of 99.54% with a 0.005 false-positive rate for a multi-level combined term frequency approach.
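The following sketch shows the general shape of such a pipeline using scikit-learn: term-frequency features over API-call traces feeding a boosted classifier. It is an approximation under assumed inputs; scikit-learn's AdaBoost over decision trees stands in for the paper's Weka-based "AdaBoost with J48", and the toy traces are invented.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

# Toy stand-ins for API-call traces pulled from sandbox reports.
traces = [
    "CreateFile WriteFile CryptEncrypt DeleteFile",  # ransomware-like
    "RegOpenKey ReadFile CloseHandle",               # benign-like
    "CryptEncrypt WriteFile DeleteFile MoveFile",    # ransomware-like
    "ReadFile CloseHandle RegQueryValue",            # benign-like
]
labels = [1, 0, 1, 0]

model = Pipeline([
    ("tf", CountVectorizer()),      # term-frequency features over call names
    ("clf", AdaBoostClassifier()),  # boosted decision trees (C4.5/J48 analogue)
])
model.fit(traces, labels)
print(model.predict(["CryptEncrypt DeleteFile WriteFile"]))  # expected: [1]
```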
Software Defined Networking (SDN) is a concept that decouples the control plane from the data plane, so that a network administrator can easily control network behavior through their own programs. However, the administrator may unknowingly deploy malicious programs on SDN controllers, placing the whole network under an attacker's control. In this paper, we discuss the malicious software issue in SDN networks. Building on the idea of sandboxing, we propose a sandbox network called SandboxNet. We emulate a virtual, isolated network environment in which to verify SDN application functions, and with continuous monitoring we can locate suspicious SDN applications. We also address sandbox evasion in our framework: the emulated networks and real-world networks are indistinguishable to the SDN controller.
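A minimal sketch of the monitoring idea follows, with an invented flow-rule representation and a hypothetical `flag_suspicious_rules` check; SandboxNet's real monitoring interface is not described at this level of detail.

```python
# Flow rules are modeled here as plain dicts; the rule schema and the
# action whitelist are assumptions for illustration only.
def flag_suspicious_rules(observed_rules, allowed_actions=frozenset({"forward", "drop"})):
    """Flag rules whose actions fall outside an expected whitelist,
    e.g. rules mirroring traffic toward an unknown destination."""
    return [rule for rule in observed_rules if rule["action"] not in allowed_actions]

observed = [
    {"match": "dst=10.0.0.2", "action": "forward"},
    {"match": "*",            "action": "mirror_to:203.0.113.7"},  # exfiltration-like
]
for rule in flag_suspicious_rules(observed):
    print("suspicious:", rule)
```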
The market landscape has undergone dramatic change because of globalization, shifting market conditions, cost pressure, increased competition, and volatility. The astonishing speed at which technology has changed has made it possible to transform the way businesses operate. The automotive industry is on the edge of a revolution: increased customer expectations, changing ownership models, self-driving vehicles, and much more are transforming automobiles, applications, and services through artificial intelligence, sensors, RFID, and big data analysis. Large automotive companies have been emphasizing data collection to gain insight into customers' expectations, preferences, and budgets, alongside competitors' policies. Statistical methods can be applied to historical data gathered from authentic sources to identify the impact of fixed and variable marketing investments, helping automakers target customers more effectively, precisely, and efficiently. Proper analysis of supply chain data can disclose the weak links in the chain, enabling timely countermeasures that minimize adverse effects. To fully benefit from analytics, a detailed set of capabilities must intersect and integrate with multiple functions and teams across the business. This paper expands on the effective role big data analysis plays in the automobile industry, discusses the scope and challenges of big data, and elaborates on the technology behind the concept. In particular, it illustrates the working of MapReduce, which executes in the back end and is responsible for performing data mining.
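To make the MapReduce description concrete, here is a minimal in-process illustration of the pattern: map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group. Real deployments such as Hadoop distribute these phases across a cluster; the word-count example below is generic, not taken from the paper.

```python
from collections import defaultdict

def map_phase(record):
    for word in record.split():
        yield word, 1  # emit a key/value pair per word

def reduce_phase(key, values):
    return key, sum(values)  # aggregate all values for one key

records = ["sedan suv sedan", "suv truck"]
groups = defaultdict(list)
for record in records:              # map + shuffle (group by key)
    for key, value in map_phase(record):
        groups[key].append(value)
print(dict(reduce_phase(k, v) for k, v in groups.items()))
# {'sedan': 2, 'suv': 2, 'truck': 1}
```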
In an increasingly asymmetric context of instability and permanent innovation, organizations demand new capacities and learning patterns. In this sense, supervisors have adopted the metaphor of the "sandbox" as a strategy that allows their regulated parties to experiment and test new proposals in order to study them and adjust them to the established compliance frameworks. The concept of the "sandbox" is therefore of educational interest as a way to reclaim failure as a right in the learning process, allowing students to think, experiment, ask questions, and propose ideas beyond known theories, and thus overcome the mechanistic formation rooted in many higher education institutions. Consequently, this article proposes applying this concept in educational institutions as a way of giving new meaning to what students have learned.
Enforcing security and resilience in a cloud platform is an essential but challenging problem due to the large number of heterogeneous applications running on shared resources. A security analysis system that can detect threats and malware must therefore exist inside the cloud infrastructure. Much research has been done on machine learning-driven malware analysis, but existing approaches are limited in computational complexity and detection accuracy. To overcome these drawbacks, we propose a new malware detection system based on clustering and Trend Micro Locality Sensitive Hashing (TLSH). We use the Cuckoo sandbox, which produces dynamic analysis reports by executing files in an isolated environment, and a novel feature extraction algorithm to extract essential features from those reports. The most important features are then selected using principal component analysis (PCA), random forest, and Chi-square feature selection methods. Experimental results are obtained for clustering and non-clustering approaches with three classifiers: Decision Tree, Random Forest, and Logistic Regression. The model shows better classification accuracy and false positive rate (FPR) than state-of-the-art works and the non-clustering approach, at a significantly lower computational cost.
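As an illustration of TLSH-based grouping, the sketch below uses the py-tlsh package's `hash` and `diff` functions with a greedy single-pass clustering rule; the distance threshold and the clustering strategy are assumptions, simpler than whatever the paper actually employs.

```python
import tlsh  # py-tlsh package; assumed available

def cluster_by_tlsh(samples, max_distance=100):
    """Greedy single-pass clustering: each (name, data) sample joins
    the first cluster whose representative digest is within
    max_distance TLSH units, otherwise it starts a new cluster."""
    clusters = []  # list of (representative_digest, [member_names])
    for name, data in samples:
        digest = tlsh.hash(data)  # needs sufficiently long, varied bytes
        for representative, members in clusters:
            if tlsh.diff(representative, digest) <= max_distance:
                members.append(name)
                break
        else:
            clusters.append((digest, [name]))
    return clusters
```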
Whenever an internet user visits a website, a scripting language known as JavaScript runs in the background. Embedding malicious activity within such scripts poses a great threat to the cyber world: attackers take advantage of the dynamic nature of JavaScript and embed malicious code within websites to download malware and damage the host. Authors of malicious JavaScript obfuscate their scripts to shield them from malware detectors. In this paper, we propose a novel technique for analyzing and detecting JavaScript using a sandbox-assisted ensemble model. We extract the payload using the malware-jail sandbox to recover the real script, then analyze it to define the features needed for creating the dataset. We compute Pearson's r between every pair of features for feature selection. An ensemble model consisting of Sequential Minimal Optimization (SMO), Voted Perceptron, and the AdaBoost algorithm is used with a voting technique to detect malicious JavaScript. Experimental results show that our proposed model can detect obfuscated and de-obfuscated malicious JavaScript with 99.6% accuracy and a 0.03 s detection time, outperforming other state-of-the-art models in accuracy as well as training and detection time.
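The correlation step can be illustrated as follows: a sketch that drops one feature from each highly correlated pair using NumPy's `corrcoef`. The threshold and the keep-first rule are assumptions; the paper's exact criterion is not specified here.

```python
import numpy as np

def drop_correlated(X, threshold=0.95):
    """Keep a feature column only if its absolute Pearson's r with
    every already-kept column stays at or below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

X = np.array([[1.0, 2.0, 0.3],
              [2.0, 4.1, 0.1],
              [3.0, 6.2, 0.4],
              [4.0, 7.9, 0.2]])
X_reduced, kept = drop_correlated(X)
print(kept)  # [0, 2]: column 1 is nearly collinear with column 0
```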
With each Windows operating system release, Microsoft introduces new features to its users. Newly added features present a challenge to digital forensics examiners, as they have not yet been analyzed or tested in depth. One of the latest features, introduced in Windows 10 version 1909, is Windows Sandbox: a lightweight, temporary environment for running untrusted applications. Because of the Sandbox's temporary nature and insufficient documentation, digital forensic examiners face new challenges when examining this feature, which can be used to hide illegal activities. This paper focuses on analyzing, with various tools, the Windows artifacts and event logs left behind by user interaction with the Sandbox feature in a clean virtual environment. Additionally, it explains the setup of the testing environment, presents the test results and interpretation of the findings, and describes the open-source tools used for the analysis.
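As a hedged illustration of event-log triage, the sketch below scans an event log exported to XML for providers whose names suggest container or sandbox activity. The keyword list is an illustrative guess, not a vetted list of Windows Sandbox artifacts, and the paper's own artifact catalog should be preferred.

```python
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
KEYWORDS = ("sandbox", "container", "vmcompute")  # illustrative guesses only

def sandbox_related_events(xml_path):
    """Yield (provider, timestamp) for events whose provider name
    matches one of the assumed sandbox/container keywords."""
    tree = ET.parse(xml_path)  # log exported via Event Viewer's "Save As XML"
    for event in tree.getroot().iter("{%s}Event" % NS["e"]):
        provider = event.find("e:System/e:Provider", NS)
        name = provider.get("Name", "") if provider is not None else ""
        if any(keyword in name.lower() for keyword in KEYWORDS):
            created = event.find("e:System/e:TimeCreated", NS)
            when = created.get("SystemTime", "") if created is not None else ""
            yield name, when
```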
The Open Data Cube (ODC) initiative, with support from the Committee on Earth Observation Satellites (CEOS) System Engineering Office (SEO), has developed a state-of-the-art suite of software tools and products to facilitate the analysis of Earth observation data. This paper presents a short summary of a novel architecture, developed in a project related to the ODC community, that provides each user with their own ODC sandbox environment for running Jupyter notebooks that leverage the ODC. This layout removes the need to host multiple users on a single Jupyter notebook server and provides better tooling for managing resource usage. Each user receives their own credentials, granting access to a personal Jupyter notebook server backed by a fully deployed ODC environment and enabling the exploration of solutions to problems that can be supported by Earth observation data.
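A typical first cell in such a per-user sandbox notebook might look like the following; the product name and spatial/temporal extents are placeholders, since the available products depend on what has been indexed into the deployed ODC instance.

```python
import datacube  # Open Data Cube core library

# Connecting uses the credentials provisioned for this user's sandbox.
dc = datacube.Datacube(app="sandbox-demo")

ds = dc.load(
    product="ls8_sr",                  # placeholder product name
    x=(35.0, 35.3), y=(-0.2, 0.1),     # longitude / latitude extents
    time=("2020-01-01", "2020-03-31"),
)
print(ds)  # xarray.Dataset with one data variable per measurement band
```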
Firms collaborate with partners in the research and development (R&D) of new technologies for many reasons: to access complementary knowledge, know-how, or skills; to seek new opportunities outside their traditional technology domain; to sustain continuous flows of innovation; to reduce time to market; or to share risks and costs [1]. The adoption of collaborative research agreements (CRAs) or collaboration agreements (CAs) is rising rapidly as firms attempt to access innovation from various types of organizations to complement their traditional in-house innovation [2], [3]. To achieve the objectives of their collaborations, firms need to share knowledge and jointly develop new knowledge. As more firms adopt open collaborative innovation strategies, intellectual property (IP) management has inevitably become important, because clear and fair contractual IP terms and conditions, such as IP ownership allocation, licensing arrangements, and compensation for IP access, are required for each collaborative project [4], [5]. Moreover, firms need to adjust their IP management strategies to fit the unique characteristics and circumstances of each particular project [5].