Biblio
Embedded virtualization has emerged as a valuable way to reduce costs, improve software quality, and decrease design time. Additionally, virtualization can strengthen the overall system's security from several perspectives. One is security through separation, where the hypervisor ensures that one domain cannot compromise the execution of other domains. At the same time, advances in the development of IoT applications have opened discussions about the security flaws introduced by IoT devices. Within a few years, billions of these devices will be connected to the cloud, exchanging information; this gives attackers an opportunity to exploit their vulnerabilities, endangering the applications connected to such devices. At this point, it is natural to consider virtualization as a possible approach to IoT security. In this paper we discuss how embedded virtualization could be deployed on IoT devices as a sound security solution.
Cloud service providers typically adopt the multi-tenancy model to optimize resource usage and achieve the promised cost-effectiveness. Sharing resources between different tenants, together with the complexity of the underlying technology, increases the need for transparency and accountability. In this regard, auditing the security compliance of the provider's infrastructure against standards, regulations, and customers' policies takes on increasing importance in the cloud as a way to boost trust between stakeholders. However, virtualization and scalability make compliance verification challenging. In this work, we propose an automated framework that audits the cloud infrastructure from a structural point of view, focusing on virtualization-related security properties and on consistency between multiple control layers. Furthermore, to show the feasibility of our approach, we integrate our auditing system into OpenStack, one of the most widely used cloud infrastructure management systems. To demonstrate the scalability and validity of our framework, we present experimental results on assessing several properties related to inter-layer consistency, virtual machine co-residence, and virtual resource isolation.
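As an illustration of the kind of structural property such a framework audits, the sketch below checks a co-residence constraint over hypothetical inventory data (the tenant and host mappings, and the notion of "conflicting tenants", are assumptions for illustration, not the paper's data model or API):

```python
# Illustrative sketch: auditing a VM co-residence property. Given
# hypothetical inventory data (e.g. as could be extracted from OpenStack's
# Nova database), flag any physical host that co-locates VMs belonging to
# tenants declared mutually exclusive.
from collections import defaultdict

vm_tenant = {"vm1": "tenantA", "vm2": "tenantB", "vm3": "tenantA"}
vm_host = {"vm1": "host1", "vm2": "host1", "vm3": "host2"}
conflicts = {frozenset({"tenantA", "tenantB"})}  # tenants that must not share a host

def audit_co_residence(vm_tenant, vm_host, conflicts):
    """Return a list of (host, tenant_pair) policy violations."""
    tenants_on_host = defaultdict(set)
    for vm, host in vm_host.items():
        tenants_on_host[host].add(vm_tenant[vm])
    violations = []
    for host, tenants in tenants_on_host.items():
        for pair in conflicts:
            if pair <= tenants:  # both conflicting tenants present on this host
                violations.append((host, tuple(sorted(pair))))
    return violations

print(audit_co_residence(vm_tenant, vm_host, conflicts))
# -> [('host1', ('tenantA', 'tenantB'))]
```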
This paper presents SplitBox, an efficient system for privacy-preserving processing of network functions that are outsourced as software processes to the cloud. Specifically, the cloud providers processing the network functions do not learn the network policies that instruct how the functions are to be processed. First, we propose an abstract model of a generic network function based on match-action pairs, and we assume this function is processed in a distributed manner by multiple honest-but-curious cloud service providers. Then, we introduce our SplitBox system for private network function virtualization and present a proof-of-concept implementation on FastClick, an extension of the Click modular router, using a firewall as a use case. Our experimental results achieve a throughput of over 2 Gbps on average with 1 kB-sized packets traversing up to 60 firewall rules.
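To make the match-action abstraction concrete, here is a minimal plain-text sketch of a generic network function with a firewall as the use case. The field names and rules are hypothetical; SplitBox's contribution is to additionally hide these rules from the cloud providers via distribution among them, which this unprotected version does not attempt:

```python
# A generic match-action network function: the first rule whose match
# fields all agree with the packet determines the action.
from dataclasses import dataclass

@dataclass
class Rule:
    match: dict   # field name -> required value (absent fields are wildcards)
    action: str   # e.g. "ACCEPT" or "DROP"

rules = [
    Rule({"proto": "tcp", "dst_port": 22}, "DROP"),
    Rule({"proto": "tcp"}, "ACCEPT"),
]

def apply_network_function(packet, rules, default="DROP"):
    """Return the action of the first matching rule, or the default."""
    for rule in rules:
        if all(packet.get(f) == v for f, v in rule.match.items()):
            return rule.action
    return default

print(apply_network_function({"proto": "tcp", "dst_port": 22}, rules))  # DROP
print(apply_network_function({"proto": "tcp", "dst_port": 80}, rules))  # ACCEPT
```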
Network monitoring is vital to the administration and operation of networks, but it requires privileged access that is granted only to highly trusted parties. This severely limits the opportunity for external parties, such as service or equipment providers, auditors, or even clients, to measure the health or operation of a network in which they are stakeholders but whose internal structure they cannot access. In this position paper we propose using middleboxes to open up network monitoring to external parties through privacy-preserving technology. This would allow untrusted parties to draw more inferences about the network state than is currently possible, without learning any precise information about the network or the data that crosses it. The state of the network would thus become more transparent to external stakeholders, who would be empowered to verify claims made by network operators, and network operators would be able to provide more information about their networks without compromising security or privacy.
Current state-of-the-art techniques for detecting and neutralizing rogue DHCP servers are tediously complex and prone to error. Network operators can spend hours (even days) before realizing that a rogue server is affecting their network, and once they suspect one is active, still more hours can be spent finding the server's MAC address and preventing it from affecting other clients. Not only are such methods slow to eliminate rogue servers, they are also likely to disrupt other clients, since operators shut down services while attempting to locate the server. In this paper, we present Network Flow Guard (NFG), a simple security application that utilizes the software-defined networking (SDN) paradigm of programmable networks to detect and disable rogue servers before they are able to affect network clients. The key contributions of NFG are its modular approach and its automated detection and prevention of rogue DHCP servers, accomplished with little impact on network architecture, protocols, and network operators.
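A hedged sketch of the general detection idea behind such SDN applications follows: DHCP server-to-client messages (UDP source port 67) seen by the controller are checked against a whitelist of authorized servers, and a drop rule is installed for any other source. The function and field names here are hypothetical stand-ins, not NFG's actual API:

```python
# Controller-side logic: block DHCP replies from unauthorized servers.
AUTHORIZED_DHCP_SERVERS = {"00:11:22:33:44:55"}  # known-good server MACs

def handle_packet_in(pkt, install_flow):
    """Invoked by the SDN controller for each packet-in event."""
    if pkt.get("udp_src") != 67:          # only DHCP server->client traffic
        return
    src_mac = pkt["eth_src"]
    if src_mac in AUTHORIZED_DHCP_SERVERS:
        return                            # legitimate server, nothing to do
    # Rogue server detected: drop all further DHCP replies from this MAC.
    install_flow(match={"eth_src": src_mac, "udp_src": 67}, action="drop")
    print(f"rogue DHCP server blocked: {src_mac}")

# Example: a packet-in from an unauthorized server.
handle_packet_in(
    {"eth_src": "de:ad:be:ef:00:01", "udp_src": 67},
    install_flow=lambda match, action: None,  # stand-in for the switch API
)
```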
Modern OS kernels, including Windows, Linux, and Mac OS, have all adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data to different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use pre-determined addresses to interpret kernel events; instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space, and many questions remain open, such as which approach is the most efficient and robust. We therefore present the first systematic study of how to derandomize a kernel given a memory snapshot of a running kernel instance. Unlike the derandomization used in traditional memory exploits, where only remote access is available, introspection and forensics applications can use all the information available in kernel memory to generate signatures and derandomize ASLR; in other words, a large space of solutions exists for this problem. We examine a number of typical approaches to generating strong signatures from both kernel code and data, based on insight into how kernel code and data are updated, and compare them from the perspectives of efficiency (simplicity, speed, etc.) and robustness (e.g., whether the approach is hard to evade or forge). In particular, we have designed four approaches, namely brute-force code scanning, patched code signature generation, unpatched code signature generation, and a read-only pointer-based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We obtained encouraging results for each of these approaches, and the corresponding experimental results are reported in this paper.
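The simplest of the four approaches named above, brute-force code scanning, can be illustrated with a short sketch: slide a known byte signature of kernel code across the snapshot at page-aligned offsets and recover the randomized base. The signature, offsets, and snapshot below are synthetic stand-ins, not real kernel bytes:

```python
# Brute-force code scanning over a synthetic memory snapshot.
PAGE = 0x1000
KNOWN_OFFSET = 0x200   # signature's offset from the kernel base in the binary
signature = b"\x55\x48\x89\xe5\x41\x57"  # hypothetical function-prologue bytes

# Build a fake snapshot with the "kernel" loaded at a randomized page offset.
slide = 7 * PAGE
snapshot = bytearray(64 * PAGE)
snapshot[slide + KNOWN_OFFSET : slide + KNOWN_OFFSET + len(signature)] = signature

def derandomize(snapshot, signature, known_offset):
    """Return candidate kernel base addresses found by page-aligned scanning."""
    bases = []
    for base in range(0, len(snapshot) - known_offset - len(signature), PAGE):
        start = base + known_offset
        if snapshot[start:start + len(signature)] == signature:
            bases.append(base)
    return bases

print([hex(b) for b in derandomize(snapshot, signature, KNOWN_OFFSET)])  # ['0x7000']
```

A real scanner must first normalize relocated or patched bytes, which is what the paper's patched/unpatched signature-generation approaches address.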
Network flow classification is fundamental to network management and network security. However, it is challenging to classify network flows at very high line rates while simultaneously preserving user privacy. Machine-learning-based classification techniques utilize only the meta-information of a flow and have been shown to be effective in identifying network flows. We analyze a group of widely used machine learning classifiers and observe that the effectiveness of different classification models depends highly on the protocol type as well as the flow features collected from network data. We propose vTC, a design of virtual network functions that flexibly selects and applies the most suitable machine learning classifier at run time. The experimental results show that the proposed NFV for flow classification can improve classification accuracy by up to 13%.
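The run-time selection idea can be sketched as a simple dispatcher: since no single model wins for every protocol, the flow's meta-features are routed to whichever classifier performed best for that protocol type in offline evaluation. The models, feature names, and the accuracy table below are hypothetical stand-ins, not vTC's actual components:

```python
# Dispatcher over per-protocol best classifiers. A stand-in "model" here is
# just a callable over flow meta-features.
BEST_MODEL = {
    "tcp": ("random_forest",
            lambda f: "web" if f["avg_pkt_len"] > 500 else "chat"),
    "udp": ("naive_bayes",
            lambda f: "video" if f["pkt_rate"] > 100 else "dns"),
}

def classify_flow(flow):
    """Dispatch the flow's meta-features to the best classifier for its protocol."""
    name, model = BEST_MODEL[flow["proto"]]
    return name, model(flow["features"])

print(classify_flow({"proto": "udp", "features": {"pkt_rate": 250}}))
# -> ('naive_bayes', 'video')
```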
We demonstrate the infrastructure used in the TREC 2015 Total Recall track to facilitate controlled simulation of "assessor in the loop" high-recall retrieval experimentation. We present the implementation of this platform and the corresponding design decisions, including the considerations necessary to ensure that experiments are privacy-preserving when using test collections that cannot be distributed. Furthermore, we describe the use of virtual machines as a means of system submission, in order to promote replicable experiments while also ensuring the security of system developers and data providers.
Virtual machine live migration, an important enabler of cloud computing, has become a central research topic in recent years. A virtual machine's runtime environment is migrated from its original physical server to another physical server while the virtual machine keeps running, which enables load balancing among servers and preserves quality of service. However, the security of virtual machine migration cannot be ignored, as the technology is still immature. In this paper we analyze the security threats to live virtual machine migration and compare the currently proposed protection measures, which either rely on hardware or lack adequate security and extensibility. Finally, we propose SPLM (Security Protection of Live Migration), a security model for live virtual machine migration based on security policy transfer and encryption, and we analyze its security and reliability, showing that SPLM improves on existing approaches. This security study of live virtual machine migration provides a reference for further research on virtualization security.
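As a rough illustration of the policy-transfer-plus-encryption idea summarized above (a sketch under stated assumptions, not SPLM's actual design), the snippet below bundles a memory page with its security policy and sends the bundle over an authenticated, encrypted channel, so an eavesdropper on the migration link learns nothing and tampering is detected. It assumes the third-party `cryptography` package, and the policy format is hypothetical:

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# Key agreed between source and destination hypervisors out of band
# (e.g. via an authenticated key exchange, which this sketch omits).
key = Fernet.generate_key()
channel = Fernet(key)

def package_for_migration(memory_page: bytes, policy: dict) -> bytes:
    """Bundle a memory page with its security policy and encrypt the bundle."""
    bundle = json.dumps({"policy": policy, "page": memory_page.hex()}).encode()
    return channel.encrypt(bundle)  # AES-CBC + HMAC under the hood

def unpack_at_destination(token: bytes):
    """Decrypt and verify; raises InvalidToken if the transfer was tampered with."""
    bundle = json.loads(channel.decrypt(token))
    return bytes.fromhex(bundle["page"]), bundle["policy"]

token = package_for_migration(b"\x00" * 4096, {"vlan": 42, "fw_profile": "strict"})
page, policy = unpack_at_destination(token)
assert page == b"\x00" * 4096 and policy["vlan"] == 42
```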