Bibliography
Just as cloud customers have different performance requirements, they also have different security requirements for their computations in the cloud. Researchers have suggested a "security on demand" service model for cloud computing, where secure computing environments are dynamically provisioned to cloud customers according to their specific security needs. The availability of secure computing platforms is a necessary but not a sufficient condition to convince cloud customers to move their sensitive data and code to the cloud. Cloud customers need further assurance that the security measures are indeed deployed and working correctly. In this paper, we present a Policy-Customized Trusted Cloud Service architecture with a new remote attestation scheme and a virtual machine migration protocol, in which a cloud customer can customize the security policy of the computing environment and validate whether the current computing environment meets that policy throughout the whole life cycle of the virtual machine. To demonstrate the feasibility of the proposed architecture, we implement a prototype that supports customer-customized security policies, together with a VM migration protocol that supports customer-customized migration policies and validation, based on the open-source Xen hypervisor.
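The validation idea in this abstract can be illustrated with a minimal sketch: the customer defines a policy of expected platform measurements and checks an attestation quote against it at each life-cycle event (launch, migration). The field names, digest values, and quote format below are hypothetical assumptions for illustration, not the paper's protocol.

```python
# Hypothetical customer policy: expected measurements per platform component.
# Digest values are placeholders, not real measurements.
EXPECTED = {"hypervisor": "sha256:ab12", "vm_kernel": "sha256:cd34"}

def validate_quote(quote: dict, policy: dict = EXPECTED) -> bool:
    """Accept only if every component in the policy matches the quoted value."""
    return all(quote.get(name) == digest for name, digest in policy.items())

# e.g., re-run against the destination host's quote before approving a migration
ok = validate_quote({"hypervisor": "sha256:ab12", "vm_kernel": "sha256:cd34"})
print("environment satisfies customer policy:", ok)
```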
As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely used, open source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats each mitigates. Our performance evaluation shows that these security features have an acceptable cost; in some cases the overhead falls within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world development experiences in contributing these features to OpenStack, as a way to encourage others to transition their research into practice.
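One feature class named here, verifying VM image integrity prior to boot, can be sketched generically: check a detached signature over the image and refuse to boot on failure. This is a hedged illustration using the pyca/cryptography library, not the OpenStack (Glance/Barbican) implementation the paper describes.

```python
# Generic sketch of pre-boot image integrity: verify a detached RSA-PSS
# signature over the image bytes. Key handling and image retrieval are
# simplified assumptions; OpenStack's actual integration is not reproduced.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
image = b"example vm image bytes"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = signer.sign(image, pss, hashes.SHA256())

def verify_before_boot(image: bytes, signature: bytes, public_key) -> bool:
    """Refuse to boot unless the signature over the image verifies."""
    try:
        public_key.verify(signature, image, pss, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

assert verify_before_boot(image, signature, signer.public_key())
```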
In this paper, we propose a hardware-based defense system in a Software-Defined Networking architecture to protect against HTTP GET flooding attacks, one of the most dangerous Distributed Denial of Service (DDoS) attacks in recent years. Our defense system utilizes a per-URL counting mechanism and has been implemented on an FPGA as an extension of a NetFPGA-based OpenFlow switch.
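The per-URL counting mechanism can be illustrated with a minimal software sketch; the actual system runs in FPGA hardware, and the window length and threshold below are illustrative assumptions, not the paper's parameters.

```python
# Software sketch of a per-URL counter for HTTP GET flood detection.
# The paper's version is implemented in hardware on a NetFPGA switch.
from collections import defaultdict
import time

WINDOW_SECONDS = 1.0   # assumed counting window
URL_THRESHOLD = 500    # assumed max GETs per URL per window

class PerUrlCounter:
    def __init__(self):
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def observe_get(self, url: str) -> bool:
        """Count one HTTP GET; return True if the URL exceeds the threshold."""
        now = time.monotonic()
        if now - self.window_start >= WINDOW_SECONDS:
            self.counts.clear()        # start a new counting window
            self.window_start = now
        self.counts[url] += 1
        return self.counts[url] > URL_THRESHOLD
```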
Distributed Denial of Service (DDoS) attacks have been raising serious security concerns for banks, financial corporations, public institutions, and data centers. The emerging wave of the Internet of Things (IoT) also raises new concerns about smart devices. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) have provided a new paradigm for network security. In this paper, we propose a new method to efficiently prevent DDoS attacks, based on an SDN/NFV framework. To resolve the problem that normal packets are blocked by the inspection of suspicious packets, we developed a threshold-based method that provides clients with efficient, fast DDoS attack mitigation. In addition, we implement our SDN-based network security functions using open source code, based on the NETCONF protocol [1] and the YANG data model [2].
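The key point of the threshold method, that traffic from normal clients is forwarded immediately rather than held for inspection, can be sketched as follows. The rate limit and action labels are assumptions; the paper's NETCONF/YANG interface is not reproduced.

```python
# Sketch: only sources whose per-interval packet rate exceeds a limit are
# diverted to deep inspection; everyone else takes the fast path.
from collections import Counter

RATE_LIMIT = 100  # assumed packets per interval per source

class ThresholdClassifier:
    def __init__(self):
        self.rates = Counter()

    def handle_packet(self, src_ip: str) -> str:
        self.rates[src_ip] += 1
        # below the threshold: forward without inspection (no blocking)
        return "inspect" if self.rates[src_ip] > RATE_LIMIT else "forward"

    def end_interval(self):
        self.rates.clear()  # reset per-interval counts
```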
Denial of service (DoS) attacks are a serious threat to network security. These attacks are often launched from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are passive defenses that act only after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with them. In this paper, we propose a DoS attack detection system on the source side in the cloud, based on machine learning techniques. The system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent attack packets from being sent out to the external network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DoS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader classes of DoS attacks.
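The evaluation setup, comparing several off-the-shelf classifiers on per-VM traffic statistics, can be sketched with scikit-learn. The abstract does not list the nine algorithms or the feature set, so the classifiers, features, and labels below are placeholder assumptions.

```python
# Hedged sketch: cross-validated comparison of candidate classifiers on
# statistical traffic features. X and y are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))      # assumed per-VM statistical features
y = rng.integers(0, 2, size=1000)   # 1 = attack traffic, 0 = benign

models = {
    "random_forest": RandomForestClassifier(),
    "decision_tree": DecisionTreeClassifier(),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```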
Data deduplication has attracted many cloud service providers (CSPs) as a way to reduce storage costs. Even though the general deduplication approach has been increasingly accepted, it comes with many security and privacy problems due to the outsourced data delivery models of cloud storage. To deal with these issues, secure deduplication techniques have been proposed for cloud data, leading to a diverse range of solutions and trade-offs. Hence, in this article, we discuss ongoing research on secure deduplication for cloud data in light of the attack scenarios exploited most widely in cloud storage. On the basis of a classification of deduplication systems, we explore security risks and attack scenarios from both inside and outside adversaries. We then describe state-of-the-art secure deduplication techniques for each approach that deal with different security issues under specific or combined threat models, including both cryptographic and protocol solutions. We discuss and compare the schemes in terms of security and efficiency with respect to different security goals. Finally, we identify and discuss unresolved issues and further research challenges for secure deduplication in cloud storage.
In an era of fast, internet-based applications, large volumes of relational data are being outsourced for business purposes, so ownership and digital rights protection has become one of the greatest challenges and most critical issues. This paper presents a novel fingerprinting technique to protect the ownership rights of non-numeric digital data on the basis of pattern generation and row-association schemes. First, a fingerprint sequence is formulated from a secret key and the buyer's unique ID. Using chunks of this sequence and the Fibonacci series, we select a set of rows; the selected rows are candidates for fingerprinting. The primary key of each selected row is protected using RSA encryption, after which a pattern is designed by randomly choosing values of different attributes of the dataset. Encrypting the primary key creates an association between the original and fake patterns, which eases fingerprint detection. The fingerprint detection algorithm first finds the fake rows and then extracts the fingerprint sequence from the fake attributes, thereby identifying the traitor. A key feature of the proposed approach is that it overcomes major weaknesses of previously proposed fingerprinting techniques, such as poor error tolerance, integrity, and accuracy. The results show that the technique is efficient and robust against several malicious attacks.
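The row-selection step can be sketched as follows: derive a fingerprint sequence from the secret key and buyer ID, then map chunks of it to row indices through the Fibonacci series. The HMAC-based derivation, chunk size, and index mapping are simplified assumptions; the paper's exact construction (and its RSA step) may differ.

```python
# Hedged sketch of fingerprint-sequence generation and Fibonacci-based
# row selection. Parameters below are illustrative assumptions.
import hashlib
import hmac

def fingerprint_sequence(secret_key: bytes, buyer_id: str) -> str:
    """Derive a bit string bound to the secret key and the buyer's ID."""
    digest = hmac.new(secret_key, buyer_id.encode(), hashlib.sha256).digest()
    return "".join(f"{b:08b}" for b in digest)

def select_rows(seq: str, n_rows: int, chunk_bits: int = 8) -> list[int]:
    """Map each fingerprint chunk to a candidate row via the Fibonacci series."""
    fib = [1, 2]
    while fib[-1] < n_rows:
        fib.append(fib[-1] + fib[-2])
    rows = []
    for i in range(0, len(seq) - chunk_bits + 1, chunk_bits):
        chunk = int(seq[i:i + chunk_bits], 2)
        rows.append(fib[chunk % len(fib)] % n_rows)  # candidate row index
    return rows

rows = select_rows(fingerprint_sequence(b"secret", "buyer-42"), n_rows=10_000)
```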
Data security is a primary concern for every communication system. Communication is an essential tool for business, education, defense services, and more, so it is essential to transfer data safely and securely. At present, various cryptographic algorithms have been proposed and implemented; they are classified into symmetric and asymmetric algorithms based on the number of keys used. Even though several algorithms are used for data security, each can be compromised at some point. The idea here is to combine several secure algorithms to provide a highly secure environment for data transmission: the AES symmetric cipher, the RSA asymmetric algorithm, and the MD5 hashing algorithm. With these three algorithms, we can provide three cryptographic primitives: confidentiality, authentication, and integrity of data.
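The combination described is the classic hybrid pattern: AES for bulk confidentiality, RSA to transport the AES session key, and a hash digest for integrity. The sketch below, using the pyca/cryptography library, is one plausible realization, not the paper's implementation; MD5 is kept only to mirror the described design (it is considered broken for security use today).

```python
# Hedged sketch of AES + RSA + MD5 hybrid protection of a message.
import hashlib
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def protect(message: bytes):
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)  # confidentiality
    wrapped_key = recipient_key.public_key().encrypt(               # key transport
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    digest = hashlib.md5(message).hexdigest()                       # integrity digest
    return nonce, ciphertext, wrapped_key, digest

nonce, ct, wk, md5_digest = protect(b"sensitive payload")
```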
In this paper, the correctness of the routing algorithm for a distributed key-value store based on order-preserving linear hashing and Skip Graph is proved. In this system, data are partitioned by linear hashing, and a Skip Graph is used as the overlay network. Because the routing table of this system is highly uniform, short detours can exist in the forwarding route; by using these detours, the number of hops for query forwarding is reduced.
This paper proposes a novel recursive hashing scheme, in contrast to conventional "one-off" hashing algorithms. Inspired by the human "nonsalient-to-salient" perception path, the proposed scheme generates a series of binary codes based on progressively expanded salient regions. Built on a recurrent deep network, i.e., an LSTM structure, the binary codes generated at later output nodes naturally inherit the information aggregated in previous codes while exploring novel information from the extended salient region, and therefore the scheme possesses a good scalability property. The deep hashing network is trained by minimizing a triplet ranking loss and is end-to-end trainable. Extensive experimental results on several image retrieval benchmarks demonstrate a clear performance gain over state-of-the-art image retrieval methods, as well as the scheme's scalability.
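The triplet ranking loss named here has a standard form: the anchor's code should be closer to a similar image's code than to a dissimilar one's by a margin. A minimal PyTorch sketch follows; the margin, code length, and relaxed (tanh) codes are assumptions, and the recurrent code generation is omitted.

```python
# Hedged sketch of a triplet ranking loss over relaxed binary codes.
import torch
import torch.nn.functional as F

def triplet_ranking_loss(h_anchor, h_pos, h_neg, margin=0.5):
    d_pos = (h_anchor - h_pos).pow(2).sum(dim=1)  # distance to similar image
    d_neg = (h_anchor - h_neg).pow(2).sum(dim=1)  # distance to dissimilar image
    return F.relu(d_pos - d_neg + margin).mean()

# e.g., 48-bit relaxed codes for a batch of 32 triplets (assumed sizes)
anchor, pos, neg = (torch.tanh(torch.randn(32, 48, requires_grad=True))
                    for _ in range(3))
loss = triplet_ranking_loss(anchor, pos, neg)
loss.backward()  # the objective is end-to-end trainable
```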
Hashing methods play an important role in large-scale image retrieval. Traditional hashing methods use hand-crafted features to learn hash functions, which cannot capture high-level semantic information. Deep hashing algorithms use deep neural networks to learn the feature representation and hash functions simultaneously. Most of these algorithms exploit supervised information to train the deep network; however, supervised information is expensive to obtain. In this paper, we propose a pseudo-label-based unsupervised deep discriminative hashing algorithm. First, we cluster images via K-means and treat the cluster labels as pseudo labels. Then we train a deep hashing network with the pseudo labels by minimizing a classification loss and a quantization loss. Experiments on two datasets demonstrate that our unsupervised deep discriminative hashing method outperforms state-of-the-art unsupervised hashing methods.
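The two-loss objective can be sketched directly: K-means cluster IDs serve as targets for a classification loss, while a quantization loss pushes the relaxed codes toward binary values. The network layers, code length, cluster count, and loss weighting below are assumptions for illustration.

```python
# Hedged sketch of pseudo-label training for unsupervised deep hashing.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

features = torch.randn(1000, 512)                    # assumed CNN features
pseudo = torch.as_tensor(
    KMeans(n_clusters=10, n_init=10).fit_predict(features.numpy()))

hash_layer = torch.nn.Linear(512, 48)                # 48-bit codes (assumed)
classifier = torch.nn.Linear(48, 10)

codes = torch.tanh(hash_layer(features))             # relaxed binary codes
cls_loss = F.cross_entropy(classifier(codes), pseudo)  # pseudo-label loss
quant_loss = (codes.abs() - 1.0).pow(2).mean()         # push codes toward +/-1
loss = cls_loss + 0.1 * quant_loss                     # assumed weighting
loss.backward()
```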
We propose a multi-level CSI quantization and key reconciliation scheme for physical layer security. The noisy wireless channel estimates obtained by the users first run through a transformation prior to the quantization step. This enables the definition of guard bands around the quantization boundaries, tailored for a specific efficiency, without compromising the uniformity required at the output of the quantizer. Our construction yields a better trade-off between key disagreement and initial key generation rate than other level-crossing quantization methods.
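The guard-band idea can be sketched numerically: samples falling too close to a quantization boundary are discarded (with both users dropping the same indices), which reduces key disagreement at the cost of rate. The boundaries and guard width below are illustrative; the paper's transformation step is not reproduced.

```python
# Hedged numpy sketch of multi-level quantization with guard bands.
import numpy as np

def quantize_with_guard_bands(samples, boundaries, guard):
    """Return (symbols, kept_indices); samples inside a guard band are dropped."""
    boundaries = np.asarray(boundaries)
    symbols, kept = [], []
    for i, s in enumerate(samples):
        if np.min(np.abs(boundaries - s)) < guard:
            continue                                   # too close to a boundary
        symbols.append(int(np.searchsorted(boundaries, s)))  # quantization level
        kept.append(i)
    return symbols, kept

rng = np.random.default_rng(1)
csi = rng.standard_normal(100)   # stand-in for transformed channel estimates
levels, kept = quantize_with_guard_bands(
    csi, boundaries=[-0.67, 0.0, 0.67], guard=0.1)
```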
Cooperative MIMO communication is a promising technology that enables a realistic solution for improving communication performance with MIMO techniques in wireless networks composed of size- and cost-constrained devices. However, the security problems inherent to cooperative communication also arise. Cryptography can ensure confidentiality in the communication and routing between authorized participants, but it usually cannot prevent attacks from compromised nodes, which may corrupt communications by sending garbled signals. In this paper, we propose a cross-layer approach to enhance security in query-based cooperative MIMO sensor networks. The approach combines an efficient cryptographic technique implemented in the upper layer with a novel information-theoretic compromised-node detection algorithm in the physical layer. In the detection algorithm, a cluster of K cooperative nodes is used to identify up to K - 1 active compromised nodes. When compromised nodes are detected, key revocation is performed to isolate them and reconfigure the cooperative MIMO sensor network; during this process, beamforming is used to avoid information leakage. The proposed security scheme can be easily modified and applied to cognitive radio networks. Simulation results show that the proposed compromised-node detection algorithm is effective and efficient, and that the accuracy of the received information is significantly improved.
In this work, Automatic Repeat Request (ARQ) and Maximal Ratio Combining (MRC) are jointly exploited to enhance the confidentiality of wireless services requested by a legitimate user (Bob) against an eavesdropper (Eve). The obtained security performance is analyzed using the Packet Error Rate (PER), and the exact PER gap between Bob and Eve is determined. PER is proposed as a new practical security metric for cross-layer (Physical/MAC) security design, since it reflects the influence of upper-layer mechanisms and can be linked with Quality of Service (QoS) requirements for various digital services such as voice and video. Exact PER formulas for both Eve and Bob in i.i.d. Rayleigh fading channels are derived. The simulation and theoretical results show that employing the ARQ mechanism and MRC at the signal level before demodulation can significantly enhance data security for certain services at specific SNRs. However, to increase and ensure the security of a specific service at any SNR, adaptive modulation is proposed for use alongside the aforementioned scheme. Analytical and simulation studies demonstrate an orders-of-magnitude difference in PER performance between eavesdroppers and intended receivers.
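The notion of a PER gap can be illustrated with a textbook model: BPSK bit error rate over a Rayleigh fading channel, converted to PER for an N-bit packet. This is explicitly not the paper's derivation (which additionally models ARQ rounds and MRC combining); it only shows how an SNR advantage for Bob separates the two users' PER. The SNR values and packet length are illustrative.

```python
# Illustrative PER-gap computation under standard textbook formulas.
import math

def rayleigh_bpsk_ber(snr_linear: float) -> float:
    """Average BPSK bit error rate over a Rayleigh fading channel."""
    return 0.5 * (1 - math.sqrt(snr_linear / (1 + snr_linear)))

def per(snr_db: float, packet_bits: int = 1024) -> float:
    """Packet error rate: the packet fails if any bit errs (independent errors)."""
    ber = rayleigh_bpsk_ber(10 ** (snr_db / 10))
    return 1 - (1 - ber) ** packet_bits

print(f"Bob (30 dB SNR): PER = {per(30):.3f}")
print(f"Eve (10 dB SNR): PER = {per(10):.3f}")
```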
Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique, Systematic Analysis of Faults and Errors (SAFE), and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in, and adaptable to, different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.
A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast, independent of the number of nodes. The scheme is based on geometric characteristics and offers information-theoretic security in the analysis of the transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results showed that our scheme is suitable for Internet of Things (IoT) networks.