Bibliography
The number of applications and services hosted on cloud platforms is constantly increasing: more and more applications run as services on cloud platforms, co-existing with other services in a mutually untrusted environment. Facilities such as virtual machines, containers, and encrypted communication channels aim to isolate the various applications and protect sensitive user data. However, such techniques cannot always provide a secure execution environment for sensitive applications, nor do they guarantee that data are not monitored by an honest-but-curious provider once they reach the cloud infrastructure. Recent advancements in trusted execution environments within commodity processors, such as Intel SGX, provide a secure reverse sandbox, where code and data are isolated even from the underlying operating system. Moreover, Intel SGX provides a remote attestation mechanism, allowing the communicating parties to verify each other's identity and to prove that code is executed inside hardware-assisted software enclaves. Many approaches try to ensure code and data integrity and to enforce channel encryption schemes such as TLS; however, these techniques either cannot achieve complete isolation and secure communication without hardware assistance or are not efficient in terms of performance. In this work, we design and implement a practical attestation system that allows the service provider to offer a seamless attestation service between the hosted applications and the end clients. Furthermore, we implement a novel caching system that is capable of eliminating the latencies introduced by the remote attestation process. Our approach allows the parties to attest one another before each communication attempt, with improved performance compared to a standard TLS handshake.
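To illustrate the caching idea described in this abstract, the following is a minimal Python sketch, not the paper's implementation: verified attestation results are cached by enclave measurement so that repeated connections can skip the full remote-attestation round trips. The names `AttestationCache` and `do_remote_attestation` are illustrative assumptions.

```python
import time

# Illustrative cache of verified attestation results, keyed by the
# enclave measurement (an MRENCLAVE-like hash). A cache hit lets a
# party skip the full remote-attestation exchange for a limited time.
class AttestationCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # measurement -> (session_key, verified_at)

    def lookup(self, measurement: bytes):
        entry = self._entries.get(measurement)
        if entry is None:
            return None
        session_key, verified_at = entry
        if time.time() - verified_at > self.ttl:
            del self._entries[measurement]  # stale: force re-attestation
            return None
        return session_key

    def store(self, measurement: bytes, session_key: bytes):
        self._entries[measurement] = (session_key, time.time())

def connect(cache, measurement, do_remote_attestation):
    # Reuse a recently verified session if possible; otherwise run the
    # (expensive) attestation protocol once and cache the result.
    key = cache.lookup(measurement)
    if key is None:
        key = do_remote_attestation()  # placeholder for the SGX flow
        cache.store(measurement, key)
    return key
```

The time-to-live bounds how long a stale attestation can be reused, which is the usual trade-off between handshake latency and freshness of the attestation evidence.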
The advanced persistent threat (APT) landscape has largely been studied without quantifiable data from which indicators of compromise (IoC) may be uniformly analyzed, replicated, or used to support security mechanisms. This work consolidates extensive academic and industry APT analysis, not as an incremental step in existing approaches to APT detection, but as a new benchmark of APT-related opportunity. We collect 15,259 APT IoC hashes, retrieving the corresponding sandbox execution logs across 41 different file types. This work focuses initially on Windows-based threat detection. We present a novel Windows APT executable (APT-EXE) dataset, made available to the research community. We conduct manual and statistical analysis of the APT-EXE dataset, along with supporting feature analysis. We draw upon repeated and common APT path accesses, file types, and operations within the APT-EXE dataset to generalize APT execution footprints. A baseline case analysis successfully identifies the majority (117 of 152) of live APT samples from campaigns across 2018 and 2019.
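The footprint-generalization step could be pictured with a toy Python sketch like the one below. It is only an assumption about the flavor of the analysis: paths accessed in a sandbox log are scored against a set of paths common across known APT executions. The path constants and threshold are placeholders, not real IoCs from the dataset.

```python
# Toy footprint matcher: flags a sandbox log whose accessed file paths
# overlap strongly with paths commonly seen across known APT runs.
# Paths and threshold below are illustrative placeholders only.
KNOWN_APT_PATHS = {
    r"c:\windows\system32\cmd.exe",
    r"c:\users\public\run.dat",
}

def footprint_score(log_paths, known_paths=KNOWN_APT_PATHS):
    log_paths = {p.lower() for p in log_paths}
    return len(log_paths & known_paths) / max(len(known_paths), 1)

def looks_like_apt(log_paths, threshold=0.5):
    return footprint_score(log_paths) >= threshold

print(looks_like_apt([r"C:\Windows\System32\cmd.exe",
                      r"C:\Users\Public\run.dat"]))  # True
```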
Software-Defined Networking (SDN) decouples the control plane from the data plane, so that the network administrator can easily control network behavior through their own programs. However, the administrator may unknowingly deploy malicious programs on SDN controllers, placing the whole network under an attacker's control. In this paper, we discuss the problem of malicious software on SDN networks. Building on the idea of the sandbox, we propose a sandbox network called SandboxNet. We emulate a virtual, isolated network environment to verify the functions of an SDN application, and with continuous monitoring we can locate suspicious SDN applications. We also address sandbox evasion in our framework: the emulated networks and real-world networks are indistinguishable to the SDN controller.
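One way to picture the monitoring idea is the Python sketch below, which is an assumption about the general shape of such a sandbox rather than SandboxNet itself: the untrusted SDN application only ever talks to a proxied controller API, so every flow rule it tries to install can be audited and policy-checked against the emulated network first. `SandboxedControllerAPI`, `install_flow`, and the suspicion heuristic are all hypothetical.

```python
# Sketch: an untrusted SDN app interacts with a proxy controller API;
# each requested flow rule is logged and checked before it reaches the
# emulated network, so suspicious behavior can be spotted safely.
class SandboxedControllerAPI:
    def __init__(self, emulated_network, audit_log):
        self.net = emulated_network  # e.g. a Mininet-style emulation
        self.audit = audit_log

    def install_flow(self, switch, match, actions):
        rule = {"switch": switch, "match": match, "actions": actions}
        self.audit.append(rule)  # continuous monitoring trail
        if self._is_suspicious(rule):
            raise PermissionError(f"blocked suspicious rule: {rule}")
        self.net.install_flow(switch, match, actions)

    def _is_suspicious(self, rule):
        # Illustrative check: mirroring all traffic out of every port
        # could indicate an eavesdropping application.
        return any(a.get("type") == "output" and a.get("port") == "ALL"
                   for a in rule["actions"])
```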
In an increasingly asymmetric context of instability and permanent innovation, organizations demand new capacities and learning patterns. Accordingly, supervisors have adopted the metaphor of the "sandbox" as a strategy that allows the parties they regulate to experiment with and test new proposals, so as to study them and adjust them to established compliance frameworks. The concept of the "sandbox" is therefore of educational interest as a way to reclaim failure as a right in the learning process, allowing students to think, experiment, ask questions, and propose ideas outside known theories, and thus overcome the mechanistic formation rooted in many higher education institutions. Consequently, this article proposes applying this concept in educational institutions as a way of resignifying what students have learned.
The Open Data Cube (ODC) initiative, with support from the Committee on Earth Observation Satellites (CEOS) System Engineering Office (SEO), has developed a state-of-the-art suite of software tools and products to facilitate the analysis of Earth observation data. This paper presents a short summary of a novel architecture, developed in a project related to the ODC community, that provides users with their own ODC sandbox environment. Each user receives a dedicated sandbox environment for running Jupyter notebooks that leverage the ODC. This architecture removes the need to host multiple users on a single Jupyter notebook server and provides better management tooling for handling resource usage. In the new layout, each user has their own credentials, giving them access to a personal Jupyter notebook server connected to a fully deployed ODC environment and enabling the exploration of solutions to problems that can be supported by Earth observation data.
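A notebook in such a sandbox would typically use the Open Data Cube Python API along these lines. This is a minimal sketch: the product name and spatial extents are hypothetical placeholders, and the exact products available depend on the deployed ODC index.

```python
import datacube

# Inside a per-user sandbox notebook, each user queries their own
# fully deployed ODC index through the standard datacube API.
dc = datacube.Datacube(app="sandbox-example")

ds = dc.load(
    product="ls8_usgs_sr_scene",      # hypothetical product name
    x=(35.0, 35.2), y=(0.0, 0.2),     # longitude / latitude range
    time=("2018-01-01", "2018-12-31"),
    measurements=["red", "nir"],
)

# A simple NDVI computation on the returned xarray Dataset.
ndvi = (ds.nir - ds.red) / (ds.nir + ds.red)
```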
The root causes of many security vulnerabilities include a pernicious combination of two problems, often regarded as inescapable aspects of computing. First, the protection mechanisms provided by the mainstream processor architecture and C/C++ language abstractions, dating back to the 1970s and before, provide only coarse-grain virtual-memory-based protection. Second, mainstream system engineering relies almost exclusively on test-and-debug methods, with (at best) prose specifications. These methods have historically sufficed commercially for much of the computer industry, but they fail to prevent large numbers of exploitable bugs, and the security problems that this causes are becoming ever more acute. In this paper we show how more rigorous engineering methods can be applied to the development of a new security-enhanced processor architecture, with its accompanying hardware implementation and software stack. We use formal models of the complete instruction-set architecture (ISA) at the heart of the design and engineering process, both in lightweight ways that support and improve normal engineering practice - as documentation, in emulators used as a test oracle for hardware and for running software, and for test generation - and for formal verification. We formalise key intended security properties of the design, and establish that these hold with mechanised proof. This is for the same complete ISA models (complete enough to boot operating systems), without idealisation. We do this for CHERI, an architecture with hardware capabilities that supports fine-grained memory protection and scalable secure compartmentalisation, while offering a smooth adoption path for existing software. CHERI is a maturing research architecture, developed since 2010, with work now underway on an Arm industrial prototype to explore its possible adoption in mass-market commercial processors. The rigorous engineering work described here has been an integral part of its development to date, enabling more rapid and confident experimentation, and boosting confidence in the design.
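The flavor of a CHERI-style capability check can be conveyed with an executable toy model, far simpler than the paper's formal ISA models (which are written in specification languages such as Sail). Field names and the exact check below are illustrative assumptions, not the CHERI specification.

```python
from dataclasses import dataclass

# Executable toy model of a capability-checked load: a memory access
# is permitted only if the capability is tagged valid, grants the
# needed permission, and the access lies within its bounds.
@dataclass(frozen=True)
class Capability:
    base: int          # lowest address the capability authorizes
    length: int        # size of the authorized region
    perms: frozenset   # e.g. frozenset({"load", "store"})
    tag: bool          # valid-capability tag bit

def check_load(cap: Capability, addr: int, size: int) -> None:
    if not cap.tag:
        raise ValueError("untagged (invalid) capability")
    if "load" not in cap.perms:
        raise PermissionError("capability lacks load permission")
    if addr < cap.base or addr + size > cap.base + cap.length:
        raise IndexError("out-of-bounds access")

cap = Capability(base=0x1000, length=0x100,
                 perms=frozenset({"load"}), tag=True)
check_load(cap, 0x1004, 8)    # in bounds: passes
# check_load(cap, 0x1100, 8)  # would raise IndexError
```

Mechanised proof in the paper's setting means establishing that properties like this bounds check hold for every instruction of the full ISA model, not just for a simplified fragment.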
With its huge real-world demands, large-scale confidential computing still cannot be supported by today's Trusted Execution Environments (TEEs), due to the lack of scalable and effective protection of high-throughput accelerators such as GPUs, FPGAs, and TPUs. Although attempts have been made recently to extend the CPU-like enclave to GPUs, these solutions require changes to the CPU or GPU chips, may introduce new security risks due to side-channel leaks in CPU-GPU communication, and are still bound by the resource constraints of today's CPU TEEs. To address these problems, we present the first heterogeneous TEE design that can truly support large-scale compute- or data-intensive (CDI) computing, without any chip-level change. Our approach, called HETEE, is a device for the centralized management of all computing units (e.g., GPUs and other accelerators) of a server rack. It is uniquely designed to work with today's data centres and clouds, leveraging modern resource-pooling technologies to dynamically compartmentalize computing tasks, enforce strong isolation, and reduce the TCB through hardware support. More specifically, HETEE utilizes the PCIe ExpressFabric to allocate its accelerators to a server node on the same rack for a non-sensitive CDI task, and to move them back into a secure enclave in response to the demand for confidential computing. Our design runs a thin TCB stack for security management on a security controller (SC), while leaving a large set of software (e.g., AI runtime, GPU driver, etc.) to the integrated microservers that operate enclaves. An enclave is physically isolated from others through hardware and is verified by the SC at its inception; its microserver and computing units are restored to a secure state upon termination. We implemented HETEE on a real hardware system and evaluated it with popular neural network inference and training tasks. Our evaluations show that HETEE can easily support CDI tasks at real-world scale, incurring a maximum throughput overhead of 2.17% for inference and 0.95% for training on ResNet152.
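The security controller's pooling decision can be sketched abstractly in Python. This is only a schematic assumption about the control logic, not HETEE's firmware: accelerators move between a shared pool (for non-sensitive tasks) and enclaves (for confidential tasks), and are restored to a known state on release. The methods `attest_and_isolate` and `scrub` stand in for the hardware-enforced mechanisms.

```python
# Schematic pooling logic: accelerators are either in the free pool or
# assigned to an enclave; confidential assignments are isolated at
# allocation time and scrubbed back to a secure state at release.
class SecurityController:
    def __init__(self, accelerators):
        self.free = set(accelerators)
        self.enclaves = {}  # enclave_id -> set of accelerators

    def allocate(self, enclave_id, count, confidential):
        if count > len(self.free):
            raise RuntimeError("not enough free accelerators")
        devs = {self.free.pop() for _ in range(count)}
        if confidential:
            for d in devs:
                d.attest_and_isolate()  # placeholder for HW isolation
            self.enclaves[enclave_id] = devs
        return devs

    def release(self, enclave_id):
        for d in self.enclaves.pop(enclave_id, set()):
            d.scrub()                   # restore to a known secure state
            self.free.add(d)
```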
Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents in order to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate a collection of network security policies for completeness, i.e., the degree to which these topics are present in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies and report the results of our analysis, which show the preliminary success of our approach.
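As an illustration of the topic-modeling step (not the paper's exact pipeline), the following sketch fits LDA over a corpus of policy texts and then checks which topics a given policy covers. The corpus, topic count, and coverage threshold are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Minimal illustration: learn topics from a policy corpus, then treat
# topics with non-trivial weight in a document as "addressed".
policies = [
    "passwords must be rotated every 90 days and never shared",
    "remote access requires an approved vpn client and two factors",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(policies)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(X)

# Topic coverage of the first policy: topics below the threshold are
# candidates for missing or poorly addressed content.
weights = lda.transform(X[0])[0]
addressed = [t for t, w in enumerate(weights) if w > 0.05]
print("topics addressed:", addressed)
```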
The rapid development of cloud computing and the arrival of the big-data era have brought users and the cloud closer together. Cloud computing offers powerful data computation and storage capabilities and can provide users with resources ubiquitously. However, users do not fully trust the cloud server's storage services, so a great deal of data is encrypted before being uploaded to the cloud. Searchable encryption can protect the confidentiality of data while providing retrieval functions over the encrypted data. In this paper, we propose a time-controlled searchable encryption scheme with regular-language queries over encrypted big data, which provides flexible search patterns and convenient data sharing. Our solution allows users who hold the data's secret keys to generate trapdoors by themselves, while users without the secret keys can generate trapdoors with the help of a trusted third party, without revealing the data owner's secret key. Our system uses a time-controlled mechanism to collect the keywords queried by users and ensures that the querying user's identity is not directly exposed. The collected keywords form the basis for subsequent big-data analysis. We conducted a security analysis of the proposed scheme and proved that it is secure. Simulation experiments and comparisons show that the scheme achieves practical efficiency.
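The basic trapdoor idea behind searchable encryption can be shown with a toy Python sketch. To be clear about scope: the paper's scheme supports regular-language queries and time control, which this sketch does not attempt; it only illustrates the simpler keyword case, where a search token is a PRF of the keyword under the data owner's key.

```python
import hmac, hashlib, os

# Toy symmetric searchable-encryption index: the server stores an
# index keyed by PRF(owner_key, keyword), so it can answer trapdoor
# queries without ever seeing the plaintext keywords.
def prf(key: bytes, word: str) -> bytes:
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

owner_key = os.urandom(32)

def build_index(documents):
    index = {}
    for doc_id, words in documents.items():
        for w in words:
            index.setdefault(prf(owner_key, w), set()).add(doc_id)
    return index

def trapdoor(word: str) -> bytes:
    # Generated by a key holder; per the paper, users without the key
    # would obtain this via a trusted third party instead.
    return prf(owner_key, word)

index = build_index({"doc1": ["cloud", "storage"], "doc2": ["cloud"]})
print(index.get(trapdoor("cloud")))  # {'doc1', 'doc2'}
```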
A metric and structure of computing for 2020 is proposed in the form of the Top 12 Technology Trends, which will influence investment in science, education, and industry in developing countries. The primary social and technological problem of protecting society and critical facilities through the creation of Global Intelligent Cyber Security is formulated, along with axioms for the constructive development of developing countries on the basis of the adoption of moral relations. Models, methods, and algorithms of cyber-social computing are proposed that focus on processing big data and searching for keywords and text fragments. New characteristic equations of similarity and difference between processes and phenomena are synthesized for exact keyword-based information retrieval in cyber-physical space. A computing model of the development of the Universe is formulated, in which the binary interactions of entities and forms are harmonic functions of the phase state. A structure for interactive computing of the creative process is proposed, based on a metric assessment of development status against world achievements.
The growth of IoT devices during the last decade has led to the development of smart ecosystems, such as smart homes, that are prone to cyberattacks. Traditional security methodologies support to some extent the requirement of preserving the privacy and security of such deployments, but their centralized nature, in conjunction with the low computational capabilities of smart home gateways, makes such approaches inefficient. Recent advances in blockchain technologies allow such decentralized architectures to support cybersecurity defence mechanisms. In this work, a blockchain framework is presented that supports the cybersecurity mechanisms of smart home installations, focusing on the immutability of the users and devices that constitute such environments. The proposed methodology also provides the appropriate smart-contract support for ensuring the integrity of the smart home gateway and IoT devices, as well as the dynamic and immutable management of blocked malicious IPs. The framework has been deployed on a real smart home environment, demonstrating its applicability and efficiency.
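Why a blockchain-backed blocked-IP list is tamper-evident can be illustrated with a minimal hash-chained ledger in Python. This is a sketch of the underlying idea only, not the framework's smart-contract code: each block commits to its predecessor's hash, so rewriting any past entry invalidates every later hash.

```python
import hashlib, json, time

# Minimal hash-chained ledger of blocked IPs: tampering with any past
# block breaks the hash links, which verify() detects.
def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "blocked_ip": None, "ts": 0}]

def block_ip(ip):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1,
                  "prev": block_hash(prev),
                  "blocked_ip": ip,
                  "ts": time.time()})

def verify():
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

block_ip("203.0.113.7")   # documentation-range example IP
assert verify()
```

In the framework described above, this append-and-verify role would be played by smart contracts on the blockchain rather than by a local data structure.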