Bibliography
As cloud services greatly facilitate online file sharing, there has been growing awareness of the security challenges of outsourcing data to a third party. Traditionally, the centralized management of the cloud service provider raises safety issues, because the third party is only semi-trusted by clients, and it also makes convenient online data sharing harder. In this paper, blockchain technology is utilized for decentralized security administration and to provide a more user-friendly service. In addition, Ciphertext-Policy Attribute-Based Encryption is introduced as an effective tool to realize fine-grained access control over the stored files. A security analysis establishes the confidentiality and integrity of the data stored on the cloud server. Finally, we evaluate the computation overhead of our system.
Users have accumulated years of personal data in cloud storage, creating potential privacy and security risks. This agglomeration includes files retained or shared with others simply out of inertia rather than intention. We presented 100 online-survey participants with a stratified sample of 10 files currently stored in their own Dropbox or Google Drive accounts. We asked about the origin of each file, whether the participant remembered that the file was stored there, and, when applicable, about that file's sharing status. We also recorded participants' preferences moving forward for keeping, deleting, or encrypting those files, as well as adjusting sharing settings. Participants had forgotten that half of the files they saw were in the cloud. Overall, 83% of participants wanted to delete at least one file they saw, while 13% wanted to unshare at least one file. Our combined results suggest directions for retrospective cloud data management.
Searchable encryption protects the private data of data owners from leaks on the server. This paper analyzes the security of a multi-user searchable encryption scheme and points out that the scheme does not satisfy trapdoor indistinguishability. To improve the security of the original scheme, this paper proposes a provably secure multi-user, multi-keyword searchable encryption scheme. The new scheme not only ensures the confidentiality of the ciphertext keywords, but also does not increase the encryption workload of the data owner when a new data user joins. In the random oracle model, based on the hardness of the decisional Diffie-Hellman problem, the scheme is proved to provide trapdoor indistinguishability. Finally, simulation results show that the new scheme achieves computational efficiency at low communication cost.
Cloud storage backends such as Amazon S3 are a potential storage solution for enterprises. However, to couple enterprises with these backends, at least two problems must be solved: first, how to make these semi-trusted backends as secure as on-premises storage; and second, how to selectively retrieve files as easily as from on-premises storage. A security proxy can address both problems by building a local index from keywords in files before encrypting and uploading the files to these backends. But if the local index is built in plaintext, file content is still vulnerable to malicious local staff. Searchable Encryption (SE) removes this vulnerability by encrypting the index; however, its known constructions often require modifications to the index database, and their support for wildcard queries is inefficient. In this paper, we present a security proxy that, based on our wildcard SE construction, can securely and efficiently couple enterprises with these backends. In particular, since our SE construction works directly with existing database systems, it incurs only a little overhead and, when needed, permits the security proxy to run with a constantly small storage footprint by readily outsourcing all built indices to existing cloud databases.
Data loss is perceived as one of the major threats to cloud storage. Consequently, the security community has developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be audited simultaneously. Second, in case auditing of the files fails, existing schemes do not provide the client with any method to prove whether the original files are still recoverable. We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of "Proofs of Retrievability" scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called "Proofs of Recoverability", enabling a client to obtain an assurance whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes, allowing the client to check the recoverability of all her original files, thus ensuring cloud storage file recoverability.
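A minimal sketch of the challenge-response idea behind such auditing schemes, given only for orientation: a plain HMAC spot check, not the paper's compact multi-file PoR or its Proofs of Recoverability, and all helper names here are hypothetical.

import hashlib, hmac, os, random

BLOCK = 4096
KEY = os.urandom(32)  # client-side secret, never sent to the server

def tag_random_blocks(data, sample=8):
    # Before outsourcing, MAC a random subset of blocks and keep the tags locally.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    picks = random.sample(range(len(blocks)), min(sample, len(blocks)))
    return {i: hmac.new(KEY, blocks[i], hashlib.sha256).digest() for i in picks}

def audit(server_copy, tags):
    # Challenge the server for the tagged blocks and verify its answers.
    blocks = [server_copy[i:i + BLOCK] for i in range(0, len(server_copy), BLOCK)]
    return all(i < len(blocks) and
               hmac.compare_digest(t, hmac.new(KEY, blocks[i], hashlib.sha256).digest())
               for i, t in tags.items())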
Cloud computing is a broad architecture based on diverse models for providing different software and hardware services. The cloud computing paradigm attracts users because of benefits such as high resource elasticity, expense reduction, scalability and simplicity, which provide significant savings in terms of investment and workforce. However, the new approaches introduced by the cloud, related to computation outsourcing, distributed resources, the multi-tenancy concept, the high dynamism of the model, data warehousing and the non-transparent style of the cloud, increase security and privacy concerns and make building and maintaining trust between cloud service providers and consumers a critical security challenge. This paper proposes a new approach to improve the security of data in cloud computing. It suggests a classification model that categorizes data before it is passed to an encryption system suited to its category. Since data in the cloud does not all have the same sensitivity level, encrypting everything with the same algorithms can lead to a lack of security or a waste of resources. With this method we try to optimize resource consumption and computation cost while ensuring data confidentiality.
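A rough sketch of the classify-then-encrypt idea. The category names and key sizes below are illustrative assumptions, not the paper's model; it assumes the Python cryptography package.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical sensitivity categories mapped to cipher strength; public
# data skips encryption entirely, saving computation and resources.
CATEGORY_KEY_BITS = {"public": None, "internal": 128, "confidential": 256}

def encrypt_by_category(data, category):
    bits = CATEGORY_KEY_BITS[category]
    if bits is None:
        return None, None, data              # public data is stored in the clear
    key = AESGCM.generate_key(bit_length=bits)
    nonce = os.urandom(12)                   # 96-bit nonce, as AES-GCM expects
    return key, nonce, AESGCM(key).encrypt(nonce, data, None)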
Audit logs are widely used in information systems nowadays. In cloud computing and cloud storage environments, audit logs must be encrypted and outsourced to remote servers to protect the confidentiality of data and the privacy of users. Searchable encrypted audit logs support search over the encrypted logs. In this paper, we propose a privacy-preserving and unforgeable searchable encrypted audit log scheme based on PEKS. Only the trusted data owner can generate encrypted audit logs containing access permissions for users. The semi-honest server verifies the audit logs in a searchable-encryption manner before granting operation rights to users and storing the logs. The data owner can perform fine-grained conjunctive queries on the stored audit logs and accepts only valid audit logs. The scheme is immune to collusion-based tampering or fabrication by the server and users. A concrete implementation of the scheme is put forward in detail. The correctness of the scheme is proven, and its security properties, such as privacy preservation, searchability, verifiability and unforgeability, are analyzed. Further evaluation of the computation load shows that the design is of considerable efficiency.
As the Internet has developed, requirements for online and offline data storage have increased. Large storage IT projects involve large costs and a high level of business risk. A storage service provider (SSP) provides computer storage space and management; in addition, it also offers backup and archiving. Despite this, many companies fear for the security, privacy and integrity of outsourced data. As a solution, File Assured Deletion (FADE) is a system built upon standard cryptographic primitives. By encrypting outsourced data files, it aims to guarantee their privacy and integrity and, most importantly, to assuredly delete files, making them unrecoverable to anybody (including those who manage the cloud storage) upon revocation of file access policies. Unfortunately, this system remains weak in case the key manager's security is compromised. Our work provides a new scheme that aims to improve the security of FADE by using a TPM (Trusted Platform Module), which safely stores keys, passwords and digital certificates.
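The assured-deletion idea can be sketched as two layers of keys, a simplification under assumed details: FADE's actual construction uses per-policy keys at the key manager, which this paper's extension places under TPM protection.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

control_key = AESGCM.generate_key(bit_length=256)  # held by the key manager / sealed in the TPM
data_key = AESGCM.generate_key(bit_length=256)     # per-file key

nonce_f, nonce_k = os.urandom(12), os.urandom(12)
encrypted_file = AESGCM(data_key).encrypt(nonce_f, b"outsourced file contents", None)
wrapped_key = AESGCM(control_key).encrypt(nonce_k, data_key, None)
# Only encrypted_file and wrapped_key go to the cloud; the control key stays local.

# Assured deletion on policy revocation: destroy the control key, and the
# wrapped data key, hence the file, becomes unrecoverable to everyone.
control_key = None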
This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare the guarantees of distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.
With the growing popularity of cloud computing, cloud storage technology has received more and more attention as an emerging network storage technology extended and developed from cloud computing concepts. The cloud computing environment depends on user services such as the high-speed storage and retrieval provided by the cloud computing system. Meanwhile, data security is an urgent problem for cloud storage technology. In recent years there have been more and more malicious attacks on cloud storage systems, and data leaks from cloud storage systems have occurred frequently. Cloud storage security concerns the security of users' data. The purpose of this paper is to achieve data security in cloud storage and to formulate a corresponding cloud storage security policy, combining the results of existing academic research, by analyzing the security risks to user data in cloud storage and surveying the relevant security technology based on the structural characteristics of cloud storage systems.
Cloud systems offer a diversity of security mechanisms with potentially complex configuration options. So far, security engineering has focused on achievable security levels, but not on the costs associated with a specific security mechanism and its configuration. Through a series of experiments with a variety of cloud datastores conducted over recent years, we gained substantial knowledge of how one desired quality like security can have a significant impact on other system qualities like performance. In this paper, we report on select findings related to security-performance trade-offs for three prominent cloud datastores, focusing on data-in-transit encryption, and propose a simple, structured approach for making trade-off decisions based on factual evidence gained through experimentation. Our approach enables rational reasoning about security trade-offs.
In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored in the cloud storage server is correct and unchanged. Recently, several PDP schemes with designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trustable and designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under some circumstances. In this paper, we propose the first non-repudiable PDP scheme with designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the computational discrete logarithm assumption, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes are given to demonstrate the advantages of our scheme.
Data deduplication [3] effectively identifies and eliminates redundant data, maintaining only a single copy of files and chunks. Hence, it is widely used in cloud storage systems to save users' network bandwidth for uploading data. However, the occurrence of deduplication can be easily identified by monitoring and analyzing network traffic, which leads to the risk of user privacy leakage. An attacker can carry out a dangerous side-channel attack, the learn-the-remaining-information (LRI) attack, to reveal users' private information by exploiting the side channel of network traffic in deduplication [1]. In the LRI attack, the attacker knows a large part of a target file in the cloud and tries to learn the remaining unknown parts by uploading all possible versions of the file's content. For example, the attacker knows all the contents of the target file X except the sensitive information θ. To learn the sensitive information, the attacker uploads m files with all possible values of θ. If the file X_d with the value θ_d is deduplicated and the other files are not, the attacker learns that θ = θ_d. In the threat model of the LRI attack, we consider a general cloud storage service model with two entities, the user and the cloud storage server. The attack is launched by users who aim to steal the private information of other users [1]. The attacker can act as a user via its own account or use multiple accounts to disguise itself as multiple users. The cloud storage server communicates with the users through the Internet. The connections from the clients to the cloud storage server are encrypted by the SSL or TLS protocol. Hence, the attacker can monitor and measure the amount of network traffic between client and server, but cannot intercept and analyze the contents of the transmitted data due to the encryption. The attacker can then perform sophisticated traffic analysis with sufficient computing resources. We propose a simple yet effective scheme, called the randomized redundant chunk scheme (RRCS), to significantly mitigate the risk of the LRI attack while maintaining the high bandwidth efficiency of deduplication. The basic idea behind RRCS is to add randomized redundant chunks to mix up the real deduplication states of the files used for the LRI attack, which effectively obfuscates the view of an attacker attempting to exploit the side channel of network traffic. RRCS includes three key function modules: range generation (RG), secure bounds setting (SBS), and security-irrelevant redundancy elimination (SRE). When uploading the randomly numbered redundant chunks, RRCS first uses RG to generate a fixed range [0, λN] (λ ∈ (0,1]), from which the number of added redundant chunks is randomly chosen, where N is the total number of chunks in a file and λ is a system parameter. However, the fixed range may cause a security issue; SBS adjusts the bounds of the range to avoid it. There may also exist security-irrelevant redundant chunks in RRCS; SRE reduces them to improve deduplication efficiency. The design details are presented in our technical report [5]. Our security analysis demonstrates that RRCS can significantly reduce the risk of the LRI attack [5].
We examine the performance of RRCS using three real-world trace-based datasets, i.e., Fslhomes [2], MacOS [2], and Onefull [4], and compare RRCS with the randomized threshold scheme (RTS) [1]. Our experimental results show that source-based deduplication eliminates 100% of data redundancy but offers no security guarantee. File-level (chunk-level) RTS eliminates only 8.1% – 16.8% (9.8% – 20.3%) of the redundancy, because it only eliminates the redundancy of files (chunks) that have many copies. RRCS with λ = 0.5 eliminates 76.1% – 78.0% of the redundancy, and RRCS with λ = 1 eliminates 47.9% – 53.6%.
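A sketch of the range-generation (RG) step described above; SBS and SRE are omitted, and the dedup_state argument and helper names are assumptions for illustration, not the authors' implementation.

import random

def rrcs_upload_set(chunks, dedup_state, lam=0.5):
    # chunks: the file's chunk fingerprints; dedup_state: fingerprints the
    # server already stores. Pad the upload with a random number of
    # redundant (normally deduplicated) chunks drawn from [0, lam*N] so the
    # observed traffic no longer reveals which chunks were duplicates.
    n = len(chunks)
    must_upload = [c for c in chunks if c not in dedup_state]
    duplicates = [c for c in chunks if c in dedup_state]
    r = random.randint(0, int(lam * n))                 # RG: choose from [0, λN]
    redundant = random.sample(duplicates, min(r, len(duplicates)))
    return must_upload + redundant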
As data sizes grow, cloud storage is becoming a more familiar way to store significant amounts of private information. Government and private organizations need to transfer plenty of business files from one end to another. However, we lose privacy if we exchange information without data encryption and a secure communication mechanism. To protect data from hacking we can use encryption, but symmetric key encryption suffers from the key exchange problem, and although asymmetric key encryption deals with this limitation, it can only encrypt a limited amount of data, which is not feasible for large data files. In this paper, we propose a probabilistic approach to the Pretty Good Privacy technique for encrypting large data, named "BigCrypt", where both symmetric and asymmetric key encryption are used. Our goal is to achieve zero-tolerance security on a significant amount of data encryption. We have experimentally evaluated our technique on three different platforms.
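The hybrid (PGP-style) construction the abstract describes can be sketched as follows, assuming the Python cryptography package; the parameter choices are illustrative, not BigCrypt's exact configuration.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's RSA key pair (in PGP, the public key would be distributed).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

data = b"a large business file ..."

# 1. Bulk-encrypt the data with a fresh symmetric session key (no size limit).
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, data, None)

# 2. Asymmetrically encrypt only the small session key, solving the key
#    exchange problem without hitting RSA's message-size limit.
wrapped_key = public_key.encrypt(session_key, oaep)

# The receiver reverses both steps.
recovered = AESGCM(private_key.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
assert recovered == data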
Cloud storage is vulnerable to advanced persistent threats (APTs), in which an attacker launches stealthy, continuous, well-funded and targeted attacks on storage devices. In this paper, cumulative prospect theory (CPT) is applied to study the interactions between a defender of cloud storage and an APT attacker when each of them makes subjective decisions to choose the scan interval and attack interval, respectively. Both the probability weighting effect and the framing effect are applied to model the deviation of subjective decisions of end-users from the objective decisions governed by expected utility theory, under uncertain attack durations. In the CPT-based APT defense game, cumulative decision weights describe the probability weighting effect and value distortion functions represent the framing effect of subjective APT attackers and defenders, rather than the discrete decision weights used in earlier prospect-theoretic studies of APT defense. The Nash equilibria of the CPT-based APT defense game are derived, showing that a subjective attacker becomes risk-seeking if the frame of reference for evaluating the utility is large, and risk-averse if it is small.
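For reference, the standard Tversky-Kahneman forms of the two effects the abstract names, shown only as illustration; the paper's exact weighting and value functions may differ.

% Probability weighting (overweights small probabilities for 0 < gamma <= 1):
\[
  w(p) \;=\; \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}
\]
% Value (framing) function, concave for gains and steeper for losses (lambda > 1):
\[
  v(x) \;=\;
  \begin{cases}
    x^{\alpha} & x \ge 0 \quad \text{(gains)} \\
    -\lambda\,(-x)^{\beta} & x < 0 \quad \text{(losses)}
  \end{cases}
\]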
Security has always been a concern when it comes to data sharing in cloud computing. Cloud computing provides high computation power and memory, and it is a convenient way to share data. But users may sometimes need to outsource shared data to a cloud server even though it contains valuable and sensitive information. Thus it is necessary to provide cryptographically enforced access control for data sharing systems. This paper discusses a promising access control approach for data sharing in the cloud: identity-based encryption. We introduce an efficient revocation scheme for the system, revocable-storage identity-based encryption, which provides both forward and backward security of ciphertext. We then look at the architecture and the steps involved in identity-based encryption. Finally, we propose a system that provides secure file sharing using an identity-based encryption scheme.
As cloud computing becomes prevalent, more and more data owners are likely to outsource their data to a cloud server. However, to ensure privacy, the data should be encrypted before outsourcing. Symmetric searchable encryption allows users to search keywords over encrypted data without decrypting it. Many existing schemes based on symmetric searchable encryption support only single-keyword search, conjunctive-keyword search, multiple-keyword search, or single-phrase search; in particular, static schemes can search only one phrase per query request. In this paper, we propose multi-phrase ranked search over encrypted cloud data, which also supports dynamic update operations such as adding or deleting files. We use an inverted index to record the locations of keywords and to judge whether a phrase appears; this index supports efficient keyword search. To rank the results and protect the privacy of relevance scores, the relevance score evaluation model runs on the client side during search. The special construction of the index also makes the scheme dynamic: the data owner can update the cloud data at very little cost. Security analyses and extensive experiments demonstrate the safety and efficiency of the proposed scheme.
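The positional inverted index idea is sketched below in plaintext form; the actual scheme stores this index encrypted, and the function names are illustrative assumptions.

from collections import defaultdict

def build_index(docs):
    # word -> {doc_id: [positions]}, so phrase queries can test adjacency.
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.split()):
            index[word][doc_id].append(pos)
    return index

def phrase_search(index, phrase):
    words = phrase.split()
    if not words or words[0] not in index:
        return set()
    hits = set()
    for doc_id, starts in index[words[0]].items():
        # The phrase occurs if every later word appears at the next position.
        if any(all(doc_id in index.get(w, {}) and p + i in index[w][doc_id]
                   for i, w in enumerate(words[1:], 1))
               for p in starts):
            hits.add(doc_id)
    return hits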
In the big data era, many users upload data to the cloud while security concerns keep growing. By using attribute-based encryption (ABE), users can securely store data in the cloud while exerting access control over it. Revocation is necessary for real-world applications of ABE so that revoked users can no longer decrypt data. In actual implementations, however, revocation requires re-encryption of the data on the client side through download, decrypt, encrypt, and upload, which results in a huge communication cost between the client and the cloud, depending on the data size. In this paper, we propose a new method whereby the data can be re-encrypted in the cloud without downloading any data. The experimental results showed that our method reduces the communication cost by one quarter in comparison with the trivial solution, where re-encryption is performed on the client side.