Biblio
Cloud computing is becoming the dominant computing model due to advantages such as high resource utilization and low cost for high performance. Since data are stored and transmitted in public environments, it is necessary to secure them against attacks such as the known-plaintext attack and to provide semantic security. Ensuring data security and privacy preservation, however, remains a huge obstacle to the development of cloud computing. The traditional way to solve the Secure Multiparty Computation (SMC) problem is to use a Trusted Third Party (TTP); however, TTPs are particularly hard to realize and computationally complex. To protect users' private data, outsourced data are generally encrypted with homomorphic encryption before being stored and processed in the cloud. Given this situation, we propose an Elliptic Curve Cryptography (ECC) based homomorphic encryption scheme for the SMC problem that dramatically reduces computation and communication costs. Comparison experiments between ECC-based homomorphic encryption and the RSA and Paillier encryption algorithms show that the scheme has advantages in energy consumption, communication consumption and privacy protection. As further evidence, the ECC-based homomorphic encryption scheme is applied to the computation of earthquake GPS data, which proves that the scheme is feasible, achieves an excellent encryption effect and provides high security.
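As a rough illustration of the mechanism such a scheme builds on, the following minimal sketch shows additively homomorphic ElGamal-style encryption over a toy elliptic curve: two ciphertexts are added pointwise and decrypt to the sum of their plaintexts. The curve parameters, message encoding (small integers in the exponent) and brute-force decryption are toy assumptions for illustration, not the parameters of the proposed scheme.

```python
import random

p, a, b = 9739, 497, 1768       # toy curve y^2 = x^3 + ax + b (mod p); p % 4 == 3

def add(P, Q):                  # elliptic-curve point addition (None = infinity)
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                  # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def find_generator():           # first curve point whose order exceeds 100
    for x in range(1, p):
        rhs = (x**3 + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)         # square-root attempt (p % 4 == 3)
        if y * y % p == rhs:
            P = (x, y)
            if all(mul(k, P) is not None for k in range(1, 101)):
                return P

G = find_generator()
sk = random.randrange(2, 1000)                # receiver's secret key
PK = mul(sk, G)                               # receiver's public key

def enc(m):                     # ciphertext (rG, mG + rPK): message in the exponent
    r = random.randrange(2, 1000)
    return (mul(r, G), add(mul(m, G), mul(r, PK)))

def hom_add(c1, c2):            # adding ciphertexts adds the plaintexts
    return (add(c1[0], c2[0]), add(c1[1], c2[1]))

def dec(c, bound=100):          # m*G = C2 - sk*C1, then brute-force small m
    S = mul(sk, c[0])
    M = c[1] if S is None else add(c[1], (S[0], -S[1] % p))
    return next(m for m in range(bound) if mul(m, G) == M)

print(dec(hom_add(enc(17), enc(25))))         # 42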
Separation of network control from devices in Software Defined Network (SDN) allows for centralized implementation and management of security policies in a cloud computing environment. The ease of programmability also makes SDN a great platform for implementing various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. Dynamic changes of network topology or host reconfiguration in such networks might require corresponding changes to the flow rules in the SDN-based cloud environment. Verifying that these new flow policies adhere to the organizational security policies and ensuring a conflict-free environment is especially challenging. In this paper, we extend work on rule conflicts from traditional environments to an SDN environment, introducing a new classification to describe cross-layer conflicts. Our framework ensures that in any SDN-based cloud, flow rules do not conflict at any layer, thereby ensuring that changes to the environment do not lead to unintended consequences. We demonstrate the correctness, feasibility and scalability of our framework through a proof-of-concept prototype.
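A heavily simplified illustration of pairwise flow-rule conflict detection, the core check such a framework performs at each layer: two rules conflict when their match spaces overlap but their actions differ. The rule format is an assumption (exact-match fields with wildcards; prefix containment reduced to string equality), and the paper's cross-layer classification is richer than this single-layer check.

```python
# Two field values overlap if either is a wildcard (None) or they are equal.
# Note: real prefix containment (e.g. /24 inside /16) is simplified to equality.
def fields_overlap(a, b):
    return a is None or b is None or a == b

def rules_overlap(r1, r2):
    keys = set(r1["match"]) | set(r2["match"])
    return all(fields_overlap(r1["match"].get(k), r2["match"].get(k)) for k in keys)

def find_conflicts(rules):
    conflicts = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            if rules_overlap(rules[i], rules[j]) and rules[i]["action"] != rules[j]["action"]:
                conflicts.append((rules[i]["id"], rules[j]["id"]))
    return conflicts

rules = [
    {"id": "tenant-allow", "match": {"src": "10.0.1.0/24", "dport": 443}, "action": "forward"},
    {"id": "org-deny",     "match": {"src": "10.0.1.0/24", "dport": None}, "action": "drop"},
]
print(find_conflicts(rules))    # [('tenant-allow', 'org-deny')]
```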
Federated cloud networks are formed by federating virtual network segments from different clouds, e.g. in a hybrid cloud, into a single federated network. Such networks should be protected with a global federated cloud network security policy. The availability of network function virtualisation and service function chaining in cloud platforms offers an opportunity for implementing and enforcing global federated cloud network security policies. In this paper we describe an approach for enforcing global security policies in federated cloud networks. The approach relies on a service manifest that specifies the global network security policy. From this manifest, configurations of the security functions for the different clouds of the federation are generated. This enables automated deployment and configuration of network security functions across the different clouds. The approach is illustrated with a case study where communications between trusted and untrusted clouds, e.g. public clouds, are encrypted. The paper discusses future work on implementing this architecture for the OpenStack cloud platform with the service function chaining API.
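As a sketch of the generation step described above, the following derives per-cloud security-function configurations from a global manifest; the manifest schema and function names are hypothetical, not the paper's service manifest format.

```python
# Hypothetical global manifest: member clouds plus one global policy knob.
manifest = {
    "clouds": {"private-dc": {"trusted": True}, "public-aws": {"trusted": False}},
    "policy": {"encrypt_untrusted_links": True},
}

def generate_configs(manifest):
    configs = {}
    clouds = manifest["clouds"]
    any_untrusted = any(not c["trusted"] for c in clouds.values())
    for name, props in clouds.items():
        functions = ["firewall"]                      # baseline function chain
        # Links touching an untrusted cloud need encryption at both endpoints.
        if manifest["policy"]["encrypt_untrusted_links"] and any_untrusted:
            functions.append("ipsec-gateway")
        configs[name] = {"service_chain": functions}
    return configs

print(generate_configs(manifest))
```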
Hypervisors are the main components for managing virtual machines on cloud computing systems. The security of hypervisors is therefore crucial, as the whole system can be compromised when just one vulnerability is exploited. In this paper, we assess the vulnerabilities of widely used hypervisors, including VMware ESXi, Citrix XenServer and KVM, using the NIST 800-115 security testing framework. We perform real experiments to assess the vulnerabilities of these hypervisors using security testing tools. The results are evaluated using weakness information from CWE and vulnerability information from CVE. We also compute severity scores using CVSS information. All vulnerabilities found in the three hypervisors are compared in terms of weaknesses, severity scores and impact. The experimental results show that ESXi and XenServer have common weaknesses and vulnerabilities, whereas KVM has fewer vulnerabilities. In addition, we discover a new HTTP response splitting vulnerability in the ESXi Web interface.
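The severity scores mentioned follow CVSS; for reference, here is a minimal implementation of the CVSS v2 base-score formula, applied to an illustrative vector (AV:N/AC:M/Au:N/C:N/I:P/A:N) of the kind typically assigned to HTTP response splitting. The vector is an assumption, not a score reported in the paper.

```python
# CVSS v2 metric weights from the specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # access vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # access complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}     # confidentiality/integrity/availability

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("N", "M", "N", "N", "P", "N"))   # 4.3 (medium severity)
```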
Information security deals with a large number of subjects, such as spoofed message detection, audio processing, video surveillance and cyber-attack detection. However, one of the biggest threats to homeland security is the cyber-attack, and the Distributed Denial of Service (DDoS) attack is one among them. Interconnected systems such as database servers, web servers and cloud computing servers are now under threat from network attackers. Denial of service is a common attack on the internet that causes problems for both users and service providers. Distributed attack sources can be used to enlarge the attack in the case of DDoS, so that the effect of the attack is high. DDoS attacks aim at exhausting the communication and computational power of the network by flooding packets through the network and generating malicious traffic. For a service to remain effective, a DDoS attack must be detected and mitigated quickly, before legitimate users can no longer access the attacker's target. The group of systems used to perform such an attack is known as a botnet. This paper presents an overview of the state of the art in DDoS attack detection strategies.
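As one concrete example of the detection strategies surveyed, a common lightweight approach flags flooding when the entropy of source addresses in an observation window shifts sharply from the baseline. A minimal sketch, with an illustrative threshold:

```python
import math
from collections import Counter

def source_entropy(packets):
    # Shannon entropy of the source-IP distribution in one observation window.
    counts = Counter(src for src, _ in packets)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def is_ddos(window, baseline_entropy, threshold=1.5):
    # A large entropy rise (many distributed/spoofed sources hammering one
    # target) relative to the baseline suggests a flooding attack.
    return source_entropy(window) - baseline_entropy > threshold

normal = [(f"10.0.0.{i % 4}", "server") for i in range(100)]
attack = [(f"198.51.100.{i}", "server") for i in range(200)] + normal
base = source_entropy(normal)
print(is_ddos(normal, base), is_ddos(attack, base))   # False True
```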
Cloud platforms can leverage Trusted Platform Modules to help provide assurance to clients that cloud-based Web services are trustworthy and behave as expected. We discuss a variety of approaches to providing this assurance, and we implement one approach based on the concept of a trustworthy certificate authority. TaoCA, our prototype implementation, links cryptographic attestations from a cloud platform, including a Trusted Platform Module, with existing TLS-based authentication mechanisms. TaoCA is designed to enable certificate authorities, browser vendors, system administrators, and end users to define and enforce a range of trust policies for web services. Evaluation of the prototype implementation demonstrates the feasibility of the design, illustrates performance tradeoffs, and serves as an end-to-end, proof-of-concept evaluation of underlying trustworthy computing abstractions. The proposed approach can be deployed incrementally and provides new benefits while retaining compatibility with the existing public key infrastructure used for TLS.
Cloud computing is a widespread technology that enables enterprises to provide services to their customers at lower cost, with higher performance and better availability and scalability. However, privacy and security in cloud computing have always been a major challenge for service providers and a concern for users. Over the past decades, trusted computing has led the way in securing cloud computing and virtualized environments. In this paper, we first study virtualized trusted platform modules and the integration of vTPM in hypervisor-based virtualization. We then propose two architectural solutions for integrating the vTPM into the container-based virtualization model.
Cloud computing is revolutionizing many IT ecosystems by offering scalable computing resources that are easy to configure, use and inter-connect. However, this model has always been viewed with some suspicion, as it raises a wide range of security and privacy issues that need to be negotiated. This research focuses on the construction of a trust layer in cloud computing to build a trust relationship between cloud service providers and cloud users. In particular, we address the fact that container-based virtualisation provides weak isolation compared to traditional VMs because of the shared use of the OS kernel and system components. We therefore build a trust layer to solve the issue of weaker isolation whilst maintaining the performance and scalability of the approach. This paper has two objectives. Firstly, we propose a security system to protect containers from other guests through the addition of a Role-based Access Control (RBAC) model and the provision of strict data protection and security. Secondly, we provide a stress test using isolation benchmarking tools to evaluate the isolation of containers in terms of performance.
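A minimal sketch of the RBAC check such a trust layer could enforce between containers; the roles, resources and permissions are invented for illustration.

```python
# Minimal RBAC model: users -> roles -> permissions on container resources.
ROLE_PERMS = {
    "tenant-admin": {("container:own", "start"), ("container:own", "stop"),
                     ("volume:own", "read"), ("volume:own", "write")},
    "auditor":      {("container:any", "inspect")},
}
USER_ROLES = {"alice": {"tenant-admin"}, "bob": {"auditor"}}

def is_allowed(user, resource, action):
    # Grant access only if some role of the user carries the permission.
    return any((resource, action) in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

print(is_allowed("alice", "volume:own", "write"))     # True
print(is_allowed("bob", "container:own", "start"))    # False: auditors only inspect
```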
Data sharing is a significant functionality in cloud storage, where cloud storage providers are responsible for keeping the data available and accessible and for keeping the physical environment protected and running. We show how to share data with others in cloud storage securely, efficiently, and flexibly. A new public-key cryptosystem is proposed which produces constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is achievable. Its uniqueness lies in that one can aggregate any set of secret keys and make them as compact as a single key, while encompassing the power of all the keys being aggregated. This compact aggregate key can easily be sent to others or stored in a smart card with very limited secure storage. In KAC, a user encrypts each file with its own key; aggregate keys covering two or more files are formed using a tree structure. Through this, the user can share multiple files with a single key at a time.
In transactional database systems, multiversion concurrency control is maintained for secure, fast and efficient access to shared data files. Effective coordination must be established between owners, users, developers and system operators to maintain inter-cloud and intra-cloud communication. Most services and applications offered in the cloud are real-time, which calls for an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity, where intercommunication between different clusters is processed through middleware and intra-communication among slaves is handled by verification and identification protection. The proposed approach incorporates a resistive flow, which identifies and verifies multiple processes, to handle high-impact systems. Results show that the new scheme reduces the overheads of the different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
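A compact sketch of the multiversion read/write path underlying such systems: writers append timestamped versions, and a reader sees the newest version no later than its snapshot, so reads never block writes. The in-memory store is illustrative.

```python
import itertools

class MVCCStore:
    # Each key maps to a list of (commit_ts, value) versions, newest last.
    def __init__(self):
        self.versions = {}
        self.clock = itertools.count(1)

    def write(self, key, value):
        ts = next(self.clock)                     # commit timestamp
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def read(self, key, snapshot_ts):
        # Return the newest version visible at the reader's snapshot.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
t1 = store.write("balance", 100)
t2 = store.write("balance", 80)
print(store.read("balance", t1))   # 100: old snapshot unaffected by later write
print(store.read("balance", t2))   # 80
```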
Within a few years, Cloud computing has emerged as the most promising IT business model. Thanks to its various technical and financial advantages, Cloud computing continues to win over new users every day from the scientific and industrial sectors. To satisfy the various users' requirements, Cloud providers must maximize the performance of their IT resources to ensure the best service at the lowest cost. Performance optimization efforts in the Cloud can be pursued at different levels and aspects. In the present paper, we propose to introduce a fuzzy logic process into the scheduling strategy for the public Cloud in order to improve response time, processing time and total cost. Indeed, fuzzy logic has proven its ability to solve optimization problems in several fields, such as data mining, image processing and networking.
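A minimal sketch of a Mamdani-style fuzzy inference step that could sit inside such a scheduler, mapping task length and machine load to a scheduling priority; the membership functions and rule base are illustrative, not the paper's.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(task_len, load):
    # Fuzzify the inputs (ranges are illustrative).
    short, long_ = tri(task_len, -1, 0, 50), tri(task_len, 20, 100, 101)
    idle, busy = tri(load, -0.1, 0.0, 0.7), tri(load, 0.3, 1.0, 1.1)
    # Rule base: short & idle -> high (1.0); long & busy -> low (0.0); else mid.
    rules = [(min(short, idle), 1.0), (min(long_, busy), 0.0),
             (min(short, busy), 0.5), (min(long_, idle), 0.5)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5   # weighted-average defuzzification

print(priority(task_len=10, load=0.1))   # 1.0: short task, idle machine, run first
print(priority(task_len=90, load=0.9))   # 0.0: long task, busy machine, defer
```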
Availability is one of the most important requirements in production systems. Maintaining a high level of availability in Infrastructure-as-a-Service (IaaS) cloud computing is a challenging task because of the complexity of service provisioning. By definition, availability can be maintained by using fault tolerance approaches. Recently, many fault tolerance methods have been developed, but few of them focus on the fault detection aspect. In this paper, after a rigorous analysis of the nature of failures, we introduce a technique to identify the failures occurring in an IaaS system. By using a fuzzy logic algorithm, the proposed technique can provide better performance in terms of accuracy and detection speed, which is critical for the cloud system.
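A sketch of how fuzzy logic can grade failure evidence rather than apply a crisp cut-off: two illustrative symptoms (missed heartbeats and response-time degradation) are fuzzified and combined, with fuzzy OR raising suspicion and fuzzy AND confirming it. The inputs and ranges are assumptions, not the paper's detector.

```python
def ramp(x, lo, hi):
    # Degree to which x is "high", rising linearly from lo to hi.
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def failure_confidence(missed_heartbeats, latency_ms):
    hb_high = ramp(missed_heartbeats, 1, 5)     # 1 miss: 0.0 ... 5 misses: 1.0
    lat_high = ramp(latency_ms, 200, 2000)      # degraded response time
    # Fuzzy OR (max) raises suspicion from either symptom; fuzzy AND (min)
    # only confirms a failure when both symptoms agree.
    suspected = max(hb_high, lat_high)
    confirmed = min(hb_high, lat_high)
    return suspected, confirmed

s, c = failure_confidence(missed_heartbeats=4, latency_ms=1500)
print(f"suspected={s:.2f} confirmed={c:.2f}")   # suspected=0.75 confirmed=0.72
```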
Traditional authentication techniques such as static passwords are vulnerable to replay and guessing attacks. Recently, many studies have been conducted on keystroke dynamics as a promising behavioral biometric for strengthening user authentication; however, current keystroke-based solutions suffer from a large number of features combined with an insufficient number of samples, which leads to a high verification error rate and long verification times. In this paper, a keystroke dynamics based authentication system is proposed for cloud environments that supports fixed and free text samples. The proposed system utilizes the ReliefF dimensionality reduction method, as a preprocessing step, to minimize the feature space dimensionality, and applies clustering to users' profile templates to reduce the verification time. The proposed system is applied to two different benchmark datasets. Experimental results prove the effectiveness and efficiency of the proposed system.
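A sketch of the verification step after feature selection: a claimant's timing vector is compared against the user's profile template with a scaled Manhattan distance, a common keystroke-dynamics detector used here illustratively (the proposed system additionally clusters profile templates, which this sketch omits). The timings and threshold are invented.

```python
import statistics

def build_profile(samples):
    # Per-feature mean and spread over a user's enrollment samples
    # (each sample is a vector of key-hold / inter-key latency times).
    cols = list(zip(*samples))
    return ([statistics.mean(c) for c in cols],
            [statistics.stdev(c) or 1e-6 for c in cols])

def scaled_manhattan(x, profile):
    mean, sd = profile
    return sum(abs(a - m) / s for a, m, s in zip(x, mean, sd)) / len(x)

enroll = [[0.11, 0.21, 0.33], [0.10, 0.19, 0.35], [0.12, 0.22, 0.31]]
profile = build_profile(enroll)
genuine, impostor = [0.11, 0.20, 0.34], [0.25, 0.40, 0.10]
THRESHOLD = 2.0                               # would be tuned on validation data
for probe in (genuine, impostor):
    print(scaled_manhattan(probe, profile) < THRESHOLD)   # True, then False
```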
Today, outsourcing query processing tasks to remote cloud servers has become a viable option; such outsourcing calls for encrypting data stored at the server so as to render it secure against eavesdropping adversaries and/or an honest-but-curious server itself. At the same time, to be efficiently managed, outsourced data should be indexed, and even adaptively so, as a side-effect of query processing. Computationally heavy encryption schemes render such outsourcing unattractive; an alternative, the Order-Preserving Encryption Scheme (OPES), intentionally preserves and reveals the order in the data, and hence is unattractive from the security viewpoint. In this paper, we propose and analyze a scheme for lightweight and indexable encryption, based on linear-algebra operations. Our scheme provides higher security than OPES and allows range and point queries to be efficiently evaluated over encrypted numeric data, with decryption performed at the client side. We implement a prototype that performs incremental, query-triggered adaptive indexing over encrypted numeric data based on this scheme, without leaking order information in advance and without prohibitive overhead, as our extensive experimental study demonstrates.
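The paper's construction is not reproduced here; as a loose illustration of encrypting numeric data via linear-algebra operations with client-side decryption, the toy below masks each value with a secret invertible matrix and per-value randomness. Unlike the actual scheme, this sketch supports no server-side range evaluation, and it makes no security claim.

```python
import numpy as np

rng = np.random.default_rng(7)
K = rng.uniform(1, 5, (2, 2))
while abs(np.linalg.det(K)) < 0.5:       # ensure the secret matrix is invertible
    K = rng.uniform(1, 5, (2, 2))
K_inv = np.linalg.inv(K)

def encrypt(v):
    # Pair the value with fresh randomness, then mix with the secret matrix.
    return K @ np.array([v, rng.uniform(0, 100)])

def decrypt(c):
    return (K_inv @ c)[0]

cts = [encrypt(v) for v in [13.0, 42.0, 7.5]]
# Range query evaluated at the client after decryption (the actual scheme
# supports server-side range evaluation without revealing order up front).
print([round(decrypt(c), 2) for c in cts if 10 <= decrypt(c) <= 50])  # [13.0, 42.0]
```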
A nonrepudiation protocol from a sender S to a set of potential receivers {R1, R2, ..., Rn} performs two functions. First, this protocol enables S to send to every potential receiver Ri a copy of file F along with a proof that can convince an unbiased judge that F was indeed sent by S to Ri. Second, this protocol also enables each Ri to receive from S a copy of file F and to send back to S a proof that can convince an unbiased judge that F was indeed received by Ri from S. When a nonrepudiation protocol from S to {R1, R2, ..., Rn} is implemented in a cloud system, the communications between S and the set of potential receivers {R1, R2, ..., Rn} are not carried out directly. Rather, these communications are carried out through a cloud C. In this paper, we present a nonrepudiation protocol that is implemented in a cloud system and show that this protocol is correct. We also show that this protocol has two clear advantages over nonrepudiation protocols that are not implemented in cloud systems.
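A sketch of the two proofs using ordinary digital signatures (Ed25519 via the pyca/cryptography package); the message formats are invented, and the actual protocol's cloud-mediated exchange and judging procedure are richer than this.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk_S, sk_R = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
pk_S, pk_R = sk_S.public_key(), sk_R.public_key()

F = b"contents of file F"
digest = hashlib.sha256(F).digest()

# Proof of origin: S signs (file hash, recipient id); sent with F via cloud C.
proof_of_origin = sk_S.sign(digest + b"|to:R1")
# Proof of receipt: R1 signs (file hash, sender id); returned to S via C.
proof_of_receipt = sk_R.sign(digest + b"|from:S")

# An unbiased judge, given F and both proofs, verifies each party's claim
# (verify() raises InvalidSignature if a proof is forged).
pk_S.verify(proof_of_origin, hashlib.sha256(F).digest() + b"|to:R1")
pk_R.verify(proof_of_receipt, hashlib.sha256(F).digest() + b"|from:S")
print("both proofs verified")
```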
Cloud services have made large contributions to the agile development and rapid revision of various applications. However, the performance of these applications is still one of the largest concerns for developers. Although many performance analysis frameworks have been created, most of them are not efficient for rapid application revisions because they require performance models, which may have to be remodeled whenever application revisions occur. We propose an analysis framework for diagnosing application performance anomalies. We designed our framework so that it does not require any performance models, making it efficient under rapid application revisions. It investigates Pearson correlations and association rules between system metrics and application performance. Association rules are widely used in data-mining areas to find relations between variables in databases. We demonstrated through an experiment and testing on a real data set that our framework can select causal metrics even when the metrics are temporally correlated, which reduces the false negatives obtained from cause diagnosis. We evaluated our framework from the perspective of the expected remaining diagnostic costs of framework users. The results indicate that it can be expected to reduce diagnostic costs by up to 84.8%, compared with a method that only uses the Pearson correlation.
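A sketch of the correlation part of such a diagnosis: ranking system metrics by the absolute Pearson correlation of each with application response time, over synthetic data; the framework described above additionally mines association rules to prune temporally correlated, non-causal metrics.

```python
import numpy as np

rng = np.random.default_rng(0)
latency = rng.normal(100, 5, 200)                        # application response time
metrics = {
    "cpu_util": latency * 0.8 + rng.normal(0, 2, 200),   # strongly related metric
    "disk_io":  rng.normal(50, 10, 200),                 # unrelated metric
    "net_in":   latency * -0.3 + rng.normal(0, 8, 200),  # weakly related metric
}

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# cpu_util should rank first by |r|; the tail order depends on the noise.
ranked = sorted(metrics, key=lambda m: abs(pearson(metrics[m], latency)), reverse=True)
for m in ranked:
    print(f"{m}: r={pearson(metrics[m], latency):+.2f}")
```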
With the increasing popularity of cloud storage services, many individuals and enterprises have started to move their local data to the clouds. To ensure privacy and data security, some cloud service users may want to encrypt their data before outsourcing it. However, this impedes efficient data utilization based on plaintext search. In this paper, we study how to construct a secure index that supports both efficient index updating and similarity search. Using the secure index, users are able to efficiently perform similarity searches that tolerate input mistakes and to update the index when new data become available. We formally prove the security of our proposal and also perform experiments on real-world data to show its efficiency.
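A toy illustration of typo-tolerant (similarity) keyword search using bigram Jaccard similarity; a secure index would additionally encrypt this structure, which the sketch omits, and the threshold is illustrative.

```python
def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

index = {}                                   # keyword -> bigram set + documents

def insert(keyword, doc_id):                 # efficient update: append-only
    entry = index.setdefault(keyword, {"grams": bigrams(keyword), "docs": set()})
    entry["docs"].add(doc_id)

def fuzzy_search(query, threshold=0.5):
    q = bigrams(query)
    hits = set()
    for entry in index.values():
        if jaccard(q, entry["grams"]) >= threshold:
            hits |= entry["docs"]
    return hits

insert("encryption", 1)
insert("cloud", 2)
print(fuzzy_search("encrpytion"))            # {1}: tolerates the transposition
```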
Distributed systems typically prevent attackers from gaining control of the system using well-established techniques such as perimeter-based firewalls, redundancy and replication, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD) is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting the attackers' gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework shifts systems from being designed to defend against attacks for their entire runtime to systems that avoid attacks over time intervals. We discuss the design, algorithms and implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantine Fault Tolerant system and report preliminary results.
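A sketch of the interval idea: each replica lives only for a bounded interval before being replaced by a fresh instance, limiting how long a foothold remains useful to an attacker. The replica abstraction and timings are stand-ins for the framework's VM-level mechanisms.

```python
import itertools
import time

def mtd_refresh(replicas, lifetime_s=2.0, rounds=3):
    ids = itertools.count(len(replicas))
    for _ in range(rounds):
        time.sleep(lifetime_s)                  # replica exposure window
        stale, replicas = replicas, []
        for r in stale:
            fresh = f"vm-{next(ids)}"           # stand-in for spawning a clean VM
            replicas.append(fresh)
            print(f"replaced {r} with {fresh}")
    return replicas

print(mtd_refresh(["vm-0", "vm-1", "vm-2"], lifetime_s=0.1))
```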
The popularity of Cloud computing has increased considerably in recent years. The growth in Cloud users and their interactions with the Cloud infrastructure raises the risk of resource faults. Such a problem can damage the reputation of the Cloud environment, which slows down the evolution of this technology. To address this issue, the dynamic and complex architecture of the Cloud should be taken into account. Indeed, this architecture requires that resource protection and healing be transparent and without external intervention. Unlike previous work, we suggest integrating the fundamental aspects of autonomic computing into the Cloud to deal with the self-healing of Cloud resources. Starting from the high degree of match between autonomic computing systems and multi-agent systems, we propose to take advantage of the autonomous behaviour of agent technology to create an intelligent Cloud that supports autonomic aspects. Our proposed solution is a multi-agent system which interacts with the Cloud infrastructure to analyze the resources' state and execute a checkpoint/replication strategy or a migration technique to solve the problem of failed resources.
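A sketch of the analyze-and-act step an agent in such a system might run, choosing between checkpoint restoration, replica restart and migration; the states and policy are illustrative.

```python
def healing_action(resource):
    # Analyze the monitored state and pick a recovery strategy.
    if resource["state"] == "failed" and resource["has_checkpoint"]:
        return "restore-from-checkpoint"
    if resource["state"] == "failed":
        return "restart-from-replica"
    if resource["state"] == "degraded":
        return "migrate-to-healthy-host"
    return "none"

for res in [{"id": "vm-1", "state": "failed", "has_checkpoint": True},
            {"id": "vm-2", "state": "degraded", "has_checkpoint": False}]:
    print(res["id"], "->", healing_action(res))
```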
Recent technology shifts such as cloud computing, the Internet of Things, and big data lead to a significant transfer of sensitive data out of trusted edge networks. To counter resulting privacy concerns, we must ensure that this sensitive data is not inadvertently forwarded to third-parties, used for unintended purposes, or handled and stored in violation of legal requirements. Related work proposes to solve this challenge by annotating data with privacy policies before data leaves the control sphere of its owner. However, we find that existing privacy policy languages are either not flexible enough or require excessive processing, storage, or bandwidth resources, which prevents their widespread deployment. To fill this gap, we propose CPPL, a Compact Privacy Policy Language which compresses privacy policies by taking advantage of flexibly specifiable domain knowledge. Our evaluation shows that CPPL reduces policy sizes by two orders of magnitude compared to related work and can check several thousand policies per second. This allows for individual per-data item policies in the context of cloud computing, the Internet of Things, and big data.
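A toy illustration of the compression principle: when both sides share domain knowledge (fixed vocabularies for each policy field), a policy collapses to a few bits instead of a verbose textual encoding. The field layout is hypothetical, not CPPL's actual format.

```python
# Domain knowledge shared by sender and receiver: fixed enumerations let each
# policy field be encoded in a handful of bits (layout here is hypothetical).
PURPOSES = ["analytics", "billing", "research", "ads"]     # 2 bits
LOCATIONS = ["EU", "US", "anywhere"]                        # 2 bits

def compress(policy):
    bits = PURPOSES.index(policy["purpose"])
    bits |= LOCATIONS.index(policy["storage"]) << 2
    bits |= int(policy["third_party_ok"]) << 4
    return bits.to_bytes(1, "big")                          # 1 byte total

def decompress(blob):
    bits = blob[0]
    return {"purpose": PURPOSES[bits & 0b11],
            "storage": LOCATIONS[(bits >> 2) & 0b11],
            "third_party_ok": bool((bits >> 4) & 1)}

p = {"purpose": "billing", "storage": "EU", "third_party_ok": False}
blob = compress(p)
print(len(blob), decompress(blob) == p)    # 1 True
```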
Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much focus has been on other data leakage vectors, such as side channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn, resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, to date, there has been no systematic analysis of assured deletion challenges in public clouds. In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions to assured deletion.
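One building block often proposed for assured deletion is cryptographic erasure: each object is encrypted under its own key, and destroying the key renders all remote copies unreadable regardless of the provider's physical disposal practices. A minimal sketch using AES-GCM from the pyca/cryptography package, with a plain dict standing in for a protected key store:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store = {}                                  # kept client-side or in an HSM

def put(object_id, data):
    key = AESGCM.generate_key(bit_length=256)
    key_store[object_id] = key
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)   # blob stored in cloud

def get(object_id, blob):
    key = key_store[object_id]
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def assured_delete(object_id):
    # Destroying the key makes every replica of the ciphertext undecryptable,
    # whether or not the provider physically erased it.
    del key_store[object_id]

blob = put("report", b"sensitive data")
print(get("report", blob))
assured_delete("report")
# get("report", blob) now raises KeyError: the data is cryptographically gone.
```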
Cloud computing platforms are becoming increasingly prevalent and readily available nowadays, providing alternative and economical services for resource-constrained clients to perform large-scale computation. In this work, we address the problem of securely outsourcing large-scale nonnegative matrix factorization (NMF) to a cloud in a way that lets the client verify the correctness of the results with small overhead. The input matrix is protected by a lightweight, permutation-based encryption mechanism. By exploiting the iterative nature of NMF computation, we propose a single-round verification strategy, which can be proved to be effective. Both theoretical and experimental results are given to demonstrate the superior performance of our scheme.
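A sketch of the permutation-based protection and spot-check verification described above: the client permutes rows and columns of the nonnegative input, the server factorizes the masked matrix (standard multiplicative updates stand in for the server here), the client undoes the permutations, and a random probe vector checks the result in one round. Sizes, rank and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 15))                      # client's nonnegative input
P = np.eye(20)[rng.permutation(20)]           # secret row permutation
Q = np.eye(15)[rng.permutation(15)]           # secret column permutation
Y = P @ X @ Q                                 # masked matrix sent to the cloud

def nmf(V, k=5, iters=300):                   # server: Lee-Seung updates
    W, H = rng.random((V.shape[0], k)), rng.random((k, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

Wp, Hp = nmf(Y)
W, H = P.T @ Wp, Hp @ Q.T                     # client unmasks the factors

# One-round probabilistic verification: compare X r with W (H r) for random r.
r = rng.random(15)
rel = np.linalg.norm(X @ r - W @ (H @ r)) / np.linalg.norm(X @ r)
print(f"spot-check relative residual: {rel:.3f}")   # small if the server was honest
```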
We propose a game theoretic framework for task allocation in mobile cloud computing that corresponds to offloading of compute tasks to a group of nearby mobile devices. Specifically, in our framework, a distributor node holds a multidimensional auction for allocating the tasks of a job among nearby mobile nodes based on their computational capabilities and also the cost of computation at these nodes, with the goal of reducing the overall job completion time. Our proposed auction also has the desired incentive compatibility property that ensures that mobile devices truthfully reveal their capabilities and costs and that those devices benefit from the task allocation. To deal with node mobility, we perform multiple auctions over adaptive time intervals. We develop a heuristic approach to dynamically find the best time intervals between auctions to minimize unnecessary auctions and the accompanying overheads. We evaluate our framework and methods using both real world and synthetic mobility traces. Our evaluation results show that our game theoretic framework improves the job completion time by a factor of 2-5 in comparison to the time taken for executing the job locally, while minimizing the number of auctions and the accompanying overheads. Our approach is also profitable for the nearby nodes that execute the distributor's tasks with these nodes receiving a compensation higher than their actual costs.
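A sketch of a single-task slice of such an auction, simplified from the paper's multidimensional design: nodes that can meet a deadline qualify, the cheapest qualifying node wins, and it is paid the second-cheapest qualifying cost, so truthfully reporting costs is the dominant strategy (a reverse Vickrey auction). The bids and deadline are invented.

```python
# Bids: each nearby node reports the cycles/sec it can devote and its cost.
bids = {"nodeA": {"speed": 2.0e9, "cost": 3.0},
        "nodeB": {"speed": 1.5e9, "cost": 1.0},
        "nodeC": {"speed": 0.8e9, "cost": 0.5}}
TASK_CYCLES, DEADLINE_S = 6.0e9, 5.0

def allocate(bids):
    # Qualify nodes that can finish before the deadline, then run a reverse
    # Vickrey auction on cost: the cheapest qualifying node wins and is paid
    # the second-cheapest qualifying cost, which is never below its own cost.
    ok = {n: b for n, b in bids.items() if TASK_CYCLES / b["speed"] <= DEADLINE_S}
    ranked = sorted(ok, key=lambda n: ok[n]["cost"])
    winner, payment = ranked[0], ok[ranked[1]]["cost"]
    return winner, payment

print(allocate(bids))   # ('nodeB', 3.0): nodeC is too slow, nodeB undercuts nodeA
```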
Geo-distributed Situation Awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency-sensitivity for converting sensed data to actionable knowledge; and elastic and bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency for the sense-process-actuate cycle that exists in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the Fog nodes. Algorithms are presented for launching application components and handling the migration of these components between Fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a Fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 secs, and joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.
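A sketch of the spatio-temporal put/get abstraction such an API could expose on a fog node; the signatures are illustrative, not the Foglets API.

```python
import math
import time

class SpatioTemporalStore:
    # Local fog-node store keyed by (application key, location, timestamp).
    def __init__(self):
        self.records = []

    def put(self, key, lat, lon, value, ts=None):
        self.records.append((key, lat, lon, ts or time.time(), value))

    def get(self, key, lat, lon, radius_km, since_ts):
        # Return values for this key within the radius and time window.
        return [v for k, rlat, rlon, ts, v in self.records
                if k == key and ts >= since_ts
                and self._dist_km(lat, lon, rlat, rlon) <= radius_km]

    @staticmethod
    def _dist_km(lat1, lon1, lat2, lon2):
        # Equirectangular approximation, adequate for city-scale queries.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return 6371 * math.hypot(x, y)

store = SpatioTemporalStore()
store.put("vehicle-speed", 33.776, -84.399, 42.0)
print(store.get("vehicle-speed", 33.775, -84.400, radius_km=1.0, since_ts=0))  # [42.0]
```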