Bibliography
Denial-of-service (DoS) attacks are a serious threat to network security. These attacks are often launched from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side against predefined thresholds. Such approaches have significant disadvantages: they are only passive defenses applied after the attack, they cannot exploit the outbound statistical features of attacks, and they make it hard to trace back to the attacker. In this paper, we propose a DoS attack detection system on the source side in the cloud, based on machine learning techniques. The system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent attack packets from being sent out to the external network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DoS attacks are successfully detected. Our approach does not degrade performance and can easily be extended to broader classes of DoS attacks.
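As a rough illustration of the detection step such a system relies on (not the authors' implementation), the sketch below trains a scikit-learn classifier on hypothetical per-VM outbound flow statistics and uses it to decide whether a traffic window should be blocked at the hypervisor; the feature set and sample values are assumptions.

```python
# Minimal sketch: classify per-VM outbound traffic windows as benign or attack.
# Feature names and numbers below are illustrative assumptions, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per sampling window: [pkts/s, bytes/s, SYN ratio, distinct dst ports]
X_train = np.array([
    [120,    90_000, 0.02,   5],   # benign
    [150,   110_000, 0.03,   8],   # benign
    [9_000, 450_000, 0.95, 900],   # SYN-flood-like
    [7_500, 380_000, 0.90, 850],   # SYN-flood-like
])
y_train = np.array([0, 0, 1, 1])   # 0 = benign, 1 = attack

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def should_block(window_stats):
    """Return True if the outbound window looks like DoS traffic."""
    return bool(clf.predict(np.array([window_stats]))[0])

print(should_block([8_000, 400_000, 0.92, 870]))  # True: drop at the hypervisor
print(should_block([130, 95_000, 0.02, 6]))       # False: let traffic pass
```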
To address the problem of internal attackers in database systems, an anomaly detection method based on user behavior is used to detect such attackers. Using discrete-time Markov chains (DTMC), an anomaly detection system of user behavior is proposed that can detect internal threats to a database system. First, we analyze SQL queries, which serve as user behavior features. Then, we use the DTMC model to extract the behavior features of a normal user and of the monitored user and compare them. If the deviation between the features exceeds a threshold, the monitored user's behavior is judged to be anomalous. Experiments are used to test the feasibility of the detection system. The experimental results show that the system can detect normal and abnormal user behavior precisely and effectively.
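A minimal sketch of the DTMC idea described above, assuming first-order transitions over SQL command types and a hand-picked deviation threshold; it is illustrative only and not the authors' system.

```python
# Build first-order transition matrices over SQL command types for a normal
# profile and a monitored user, then flag the user if the matrices deviate
# beyond a threshold. Command set and threshold are assumptions.
import numpy as np

COMMANDS = ["SELECT", "INSERT", "UPDATE", "DELETE"]
IDX = {c: i for i, c in enumerate(COMMANDS)}

def transition_matrix(session):
    """Estimate a DTMC transition matrix from a sequence of SQL command types."""
    counts = np.ones((len(COMMANDS), len(COMMANDS)))  # Laplace smoothing
    for a, b in zip(session, session[1:]):
        counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def deviation(p, q):
    """Mean absolute difference between two transition matrices."""
    return float(np.abs(p - q).mean())

normal = transition_matrix(["SELECT"] * 50 + ["UPDATE", "SELECT"] * 10)
suspect = transition_matrix(["SELECT", "DELETE"] * 30)   # unusual DELETE-heavy pattern

THRESHOLD = 0.05  # assumed; in practice tuned on validation data
print("anomaly" if deviation(normal, suspect) > THRESHOLD else "normal")
```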
The increased number of cyber attacks makes the availability of services a major security concern. One common type of cyber threat is distributed denial of service (DDoS). A DDoS attack aims to disrupt legitimate users' access to services. It is easier for an insider with legitimate access to the system to deceive security controls, resulting in an insider attack. This paper proposes an Early Detection and Isolation Policy (EDIP) to mitigate insider-assisted DDoS attacks. EDIP detects the insider among all legitimate clients present in the system at the proxy level and isolates it from innocent clients by migrating it to an attack proxy. Further, an effective algorithm for detection and isolation of insiders is developed with the aim of maximizing attack isolation while minimizing disruption to benign clients. In addition, the concept of load balancing is used to prevent proxies from becoming overloaded.
Data deduplication has attracted many cloud service providers (CSPs) as a way to reduce storage costs. Even though the general deduplication approach has been increasingly accepted, it comes with many security and privacy problems due to the outsourced data delivery models of cloud storage. To deal with specific security and privacy issues, secure deduplication techniques have been proposed for cloud data, leading to a diverse range of solutions and trade-offs. Hence, in this article, we discuss ongoing research on secure deduplication for cloud data in consideration of the attack scenarios exploited most widely in cloud storage. On the basis of a classification of deduplication systems, we explore security risks and attack scenarios from both inside and outside adversaries. We then describe state-of-the-art secure deduplication techniques for each approach that deal with different security issues under specific or combined threat models, including both cryptographic and protocol solutions. We discuss and compare each scheme in terms of security and efficiency specific to different security goals. Finally, we identify and discuss unresolved issues and further research challenges for secure deduplication in cloud storage.
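For context, the sketch below shows convergent (message-locked) encryption, the classic building block behind most client-side secure deduplication schemes of the kind surveyed here; it is a generic illustration rather than any specific scheme from the article, and it assumes the Python cryptography package.

```python
# Convergent encryption: the key is derived from the content, so identical
# plaintexts yield identical ciphertexts and can be deduplicated by the CSP
# without seeing the data. Note: this leaks equality and is vulnerable to
# offline dictionary attacks on predictable files.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()       # content-derived key
    nonce = hashlib.sha256(key).digest()[:12]      # deterministic nonce (same key => same message)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    tag = hashlib.sha256(ciphertext).hexdigest()   # dedup index over the ciphertext
    return key, ciphertext, tag

# Two users uploading the same file produce identical ciphertexts and tags.
k1, c1, t1 = convergent_encrypt(b"quarterly-report.pdf contents")
k2, c2, t2 = convergent_encrypt(b"quarterly-report.pdf contents")
assert c1 == c2 and t1 == t2
```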
Verifying the integrity of outsourced data is a classic, well-studied problem. However, current techniques have fundamental performance and concurrency limitations for update-heavy workloads. In this paper, we investigate the potential advantages of deferred and batched verification rather than the per-operation verification used in prior work. We present Concerto, a comprehensive key-value store designed around this idea. Using Concerto, we argue that deferred verification preserves the utility of online verification and improves concurrency, resulting in orders-of-magnitude performance improvement. On standard benchmarks, the performance of Concerto is within a factor of two of state-of-the-art key-value stores without integrity.
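A minimal sketch of deferred, batched integrity checking in the spirit described above (offline memory checking with multiset hashes); it illustrates the general idea only and is not Concerto's actual protocol or data layout.

```python
# The client keeps two multiset hashes (read-set, write-set) and checks their
# equality only at audit time, instead of verifying every operation online.
import hashlib

P = 2**127 - 1  # modulus for the additive multiset hash (illustrative choice)

def h(key, value, ts):
    digest = hashlib.sha256(f"{key}|{value}|{ts}".encode()).digest()
    return int.from_bytes(digest, "big") % P

class VerifiedKV:
    def __init__(self):
        self.server = {}   # untrusted store: key -> (value, timestamp)
        self.rs = 0        # multiset hash of everything read
        self.ws = 0        # multiset hash of everything written
        self.time = 0

    def put(self, key, value):
        if key in self.server:                      # fold the old entry into the read-set
            old_val, old_ts = self.server[key]
            self.rs = (self.rs + h(key, old_val, old_ts)) % P
        self.time += 1
        self.server[key] = (value, self.time)
        self.ws = (self.ws + h(key, value, self.time)) % P

    def get(self, key):
        value, ts = self.server[key]
        self.rs = (self.rs + h(key, value, ts)) % P
        self.time += 1                              # write back with a fresh timestamp
        self.server[key] = (value, self.time)
        self.ws = (self.ws + h(key, value, self.time)) % P
        return value

    def audit(self):
        """Deferred check: scan all entries once; read-set must equal write-set."""
        rs = self.rs
        for key, (value, ts) in self.server.items():
            rs = (rs + h(key, value, ts)) % P
        return rs == self.ws

kv = VerifiedKV()
kv.put("a", "1"); kv.put("b", "2"); kv.get("a")
print(kv.audit())   # True; tampering with kv.server breaks the audit
```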
ERP helps enterprises integrate internal information and improve operating performance and reaction capability. However, ERP alone is not enough if enterprises want to grow quickly. An enterprise also needs several external supporting sub-systems such as a personnel management system, an equipment management system, and so on. These sub-systems may be outsourced, customized, or developed by internal IT staff. They may be distributed across branches or headquarters to collect first-line data and then deliver it to the ERP for data integration. Most enterprises deliver data to the ERP manually or via timed batch processes over the Internet, but neither method is ideal from the standpoint of efficiency or security. This paper proposes a fast and secure approach, combining trigger and data replication techniques, to deliver distributed data to the ERP in a timely manner for data integration.
In the verifiable database (VDB) model, a computationally weak client (database owner) delegates database management to a database service provider on the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of query results. Since the process can be computationally costly and has limited support for sophisticated query types such as aggregated queries, we propose in this paper a framework that helps bridge the gap between security and practicality trade-offs. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates and uploads to the database service provider the database and its authentication structure (AS). The game is then played between the defender (verifier), a party trusted by the database owner who runs scheduled randomized verifications using a Stackelberg mixed strategy, and the database service provider. The idea is to randomize the verification schedule in an optimized way that yields the optimal payoff for the verifier while making it extremely hard for the database service provider or any attacker to figure out which part of the database will be verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
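As a toy illustration of committing to a randomized verification schedule, the sketch below brute-forces a Stackelberg mixed strategy for a tiny security game over made-up partitions and payoffs; it is not the paper's optimization procedure.

```python
# The verifier (leader) commits to coverage probabilities over database
# partitions; the attacker (follower) observes them and best-responds.
# Partitions, payoffs, and grid resolution are illustrative assumptions.
from itertools import product

PARTS = ["users", "orders", "logs"]     # partitions the verifier can audit
DEF_COV, DEF_UNC = 2.0, -5.0            # defender payoff if attacked partition is covered / uncovered
ATT_COV, ATT_UNC = -3.0, 4.0            # attacker payoff in the same two cases
GRID = 20                               # coverage probabilities in multiples of 1/GRID

def best_mixed_strategy():
    best_payoff, best_cov = float("-inf"), None
    for counts in product(range(GRID + 1), repeat=len(PARTS)):
        if sum(counts) != GRID:         # one verification per round in expectation
            continue
        cov = [c / GRID for c in counts]
        # Attacker observes the committed mixed strategy and best-responds.
        att = [c * ATT_COV + (1 - c) * ATT_UNC for c in cov]
        target = max(range(len(PARTS)), key=lambda i: att[i])
        payoff = cov[target] * DEF_COV + (1 - cov[target]) * DEF_UNC
        if payoff > best_payoff:
            best_payoff, best_cov = payoff, dict(zip(PARTS, cov))
    return best_payoff, best_cov

print(best_mixed_strategy())   # roughly uniform coverage for these symmetric payoffs
```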
Delegated authorization protocols have become widespread in Web applications and services: popular providers that manage identity information and personal data allow their users to delegate third-party Web services to access that data. In this paper, we analyze the risks posed by untrusted providers that do not behave correctly, and we address this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of user data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
Cloud computing is a new computing paradigm that encourages remote data storage. This facility heightens the need for secure data auditing mechanisms over outsourced data. Several mechanisms have been proposed in the literature for supporting dynamic data. However, most of the existing schemes lack a security feature that can withstand collusion attacks between the cloud server and revoked users. This paper presents a technique to thwart such collusion attacks; the data auditing mechanism is achieved by means of vector commitments and backward-unlinkable verifier-local revocation group signatures. The proposed work supports multiple users dealing with the remote cloud data. The performance of the proposed work is analysed and compared with existing techniques, and the experimental results are observed to be satisfactory in terms of computational and time complexity.
In the current era of fast, Internet-based applications, large volumes of relational data are being outsourced for business purposes. Ownership and digital rights protection has therefore become one of the greatest challenges and among the most critical issues. This paper presents a novel fingerprinting technique to protect ownership rights of non-numeric digital data on the basis of pattern generation and row association schemes. First, a fingerprint sequence is formulated using a secret key and the buyer's unique ID. Using chunks of this sequence and the Fibonacci series, we select some rows; the selected rows are the candidates for fingerprinting. The primary key of each selected row is protected using RSA encryption, after which a pattern is designed by randomly choosing the values of different attributes of the dataset. The encryption of the primary key establishes an association between the original and fake patterns, which eases fingerprint detection. The fingerprint detection algorithm first finds the fake rows and then extracts the fingerprint sequence from the fake attributes, thereby identifying the traitor. A key feature of the proposed approach is that it overcomes major weaknesses of previously proposed fingerprinting techniques, such as error tolerance, integrity, and accuracy. The results show that the technique is efficient and robust against several malicious attacks.
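A loose illustration of the row-selection step described above, assuming an HMAC-derived fingerprint sequence and Fibonacci-based row indices; the paper's actual pattern generation, RSA step, and parameters are not reproduced.

```python
# Derive a fingerprint bit sequence from a secret key and the buyer's ID,
# then walk the Fibonacci series modulo the table size to pick candidate rows.
import hmac, hashlib

def fingerprint_bits(secret_key: bytes, buyer_id: str, length: int = 64) -> str:
    digest = hmac.new(secret_key, buyer_id.encode(), hashlib.sha256).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    return bits[:length]

def fibonacci_rows(num_rows: int, how_many: int):
    """Select distinct candidate row indices from the Fibonacci series mod the table size."""
    rows, a, b = [], 1, 2
    while len(rows) < how_many:
        idx = a % num_rows
        if idx not in rows:
            rows.append(idx)
        a, b = b, a + b
    return rows

seq = fingerprint_bits(b"owner-secret", "buyer-42")
rows = fibonacci_rows(num_rows=10_000, how_many=len(seq) // 8)
print(seq[:16], rows[:5])   # bits to embed and the rows chosen to carry them
```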
Outsourcing services to third-party providers comes with a high security cost: the providers must be fully trusted. Using trusted hardware can help, but current trusted execution environments do not adequately support services that process very large-scale datasets. We present LASTGT, a system that bridges this gap by supporting the execution of self-contained services over a large state, with a small and generic trusted computing base (TCB). LASTGT uses widely deployed trusted hardware to guarantee integrity and verifiability of the execution on a remote platform, and it securely supplies data to the service through simple techniques based on virtual memory. As a result, LASTGT is general and applicable to many scenarios such as computational genomics and databases, as we show in our experimental evaluation based on an implementation of LASTGT on a secure hypervisor. We also describe a possible implementation on Intel SGX.
Together with its great advantages, cloud storage has brought many interesting security issues to our attention. Since 2007, when the first efficient storage integrity protocols appeared, namely Proofs of Retrievability (PoR) by Juels and Kaliski and Provable Data Possession (PDP) by Ateniese et al., many researchers have worked on such protocols.
The differences between the PDP and PoR models have been debated at length. The first DPDP scheme was presented by Erway et al. in 2009, while the first DPoR scheme was constructed by Cash et al. in 2013. We show how to obtain DPoR from DPDP, PDP, and erasure codes, which reveals that, although it was not recognized at the time, a DPoR solution was already within reach in 2009.
We propose a general framework for constructing DPoR schemes that encapsulates known DPoR schemes as special cases. We show practical and interesting optimizations that enable better performance than the constructions of Chandran et al. and Shi et al. For the first time, we show how to obtain constant audit bandwidth for DPoR, independent of the data size, and how the client can greatly speed up updates with O(λ√n) local storage (where n is the number of blocks and λ is the security parameter), which corresponds to roughly 3 MB for 10 GB of outsourced data and can easily be accommodated on today's smartphones, let alone computers.
As the amount of spatial data grows, organizations have realized that it is cheaper and more flexible to keep their data in the cloud rather than to establish and maintain huge in-house data centers. Though this saves considerable IT cost, organizations are still concerned about the privacy and security of their data. Encrypting the whole database before uploading it to the cloud solves the security issue, but querying the database then requires downloading and decrypting the data set, which is impractical. In this paper, we propose a new scheme for protecting the privacy and integrity of spatial data stored in the cloud while still being able to execute range queries efficiently. The proposed technique introduces a new index structure, based on the Z-curve, to support answering range queries over the encrypted data set. The paper describes a distributed algorithm for answering range queries over spatial data stored in the cloud. We carried out many simulation experiments to measure the performance of the proposed scheme. The experimental results show that the proposed scheme outperforms the most recent schemes by Kim et al. in terms of data redundancy.
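For reference, the sketch below shows the Z-order (Morton) encoding that such an index is built on: interleaving coordinate bits so that a 2-D range query can be mapped to a 1-D key range; the query-processing and encryption layers of the proposed scheme are not shown.

```python
# Interleave the bits of x and y so that nearby points tend to get nearby
# one-dimensional keys (Z-order / Morton code). Purely illustrative.
def z_order(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return code

# A rectangle query [x1,x2] x [y1,y2] can be answered by scanning records whose
# Morton codes lie between the codes of the rectangle's corners, then filtering
# out false positives on the client side.
points = [(2, 3), (2, 4), (10, 10), (100, 200)]
for p in points:
    print(p, z_order(*p))
```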
With the increasing popularity of cloud computing, data owners are motivated to outsource their sensitive data to cloud servers for flexibility and reduced cost in data management. However, privacy is a big concern when outsourcing data to the cloud, so data owners typically encrypt documents before outsourcing. As the volume of data increases at a dramatic rate, it is essential to develop efficient and reliable ciphertext search techniques so that data owners can easily access and update cloud data. In this paper, we propose a privacy-preserving multi-keyword ranked search scheme over encrypted cloud data, along with data integrity, using a new authenticated data structure, the MIR-tree. The MIR-tree-based index combines the widely used vector space model and the TF×IDF model in index construction and query generation. We use an inverted file index to store word digests, which provides efficient and fast relevance scoring between the query and cloud data. An authentication set (AS) is designed to authenticate queries and to verify the top-k search results. Thanks to the tree-based index, our scheme achieves optimal search efficiency and reduces the communication overhead of verifying the search results. The analysis shows the security and efficiency of our scheme.
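A minimal sketch of the TF×IDF / vector-space ranking arithmetic referred to above; the MIR-tree, the inverted index of word digests, encryption, and result verification are all omitted, and the toy documents are assumptions.

```python
# Score documents against a keyword query with TF * IDF and rank them.
import math
from collections import Counter

docs = {
    "d1": "cloud storage integrity audit cloud".split(),
    "d2": "keyword search over encrypted cloud data".split(),
    "d3": "machine learning for intrusion detection".split(),
}

def tf_idf_score(query_terms, doc_terms, all_docs):
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in all_docs.values() if term in d)
        if df == 0:
            continue
        idf = math.log(len(all_docs) / df)
        score += (tf[term] / len(doc_terms)) * idf
    return score

query = ["encrypted", "cloud", "search"]
ranked = sorted(docs, key=lambda d: tf_idf_score(query, docs[d], docs), reverse=True)
print(ranked)   # the top-k results would be returned together with verification objects
```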
Cloud computing, often referred to as simply "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only to store data but also to share the stored data among multiple users, which puts the integrity of cloud data in doubt. Since it is not feasible for a user to download all data and verify its integrity every time, the proposed system includes a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify shared data integrity without downloading or retrieving the entire data file. A group signature is used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive users' data content from the information collected during the auditing process.
The emergence of cloud computing allowed different IT services to be outsourced to cloud service providers (CSPs), including the management and storage of users' structured data, known as Database as a Service (DBaaS). However, DBaaS requires users to trust the CSP to protect their data, a requirement inherent in all cloud-based services. Enterprises and small-to-medium businesses (SMBs) see this as a roadblock to adopting cloud services (and DBaaS) because they do not have full control over the security and privacy of the sensitive data they store in the cloud. One solution is for data owners to store their sensitive data in the cloud's storage services in encrypted form. However, to take full advantage of DBaaS, there must be a way to manage the structured data while it is encrypted. Upcoming technologies like secure multi-party computation (MPC) and fully homomorphic encryption (FHE) are recent advances in security that allow computation on encrypted data. FHE is considered the holy grail of cryptography, yet the original blueprint's processing performance is on the order of 10^14 times slower than computation without encryption. Our work gives insight into how far the state of the art has progressed toward a practical and viable solution for cloud computing data services. We achieved this by comparing two types of encrypted database management systems (DBMSs). We performed well-known complex database queries and measured the performance of the two DBMSs. We used an FHE-encrypted relational DBMS (RDBMS); for specific query sets it takes only a few milliseconds, and at worst it is on the order of 10^4 times slower than an encrypted object-oriented DBMS (OODBMS). Aside from focusing on the performance of the two databases, we also evaluated network resource usage, standards availability, and application integration.
In the past decade, researchers have proposed various cloud storage integrity checking protocols to enable a cloud storage user to validate the integrity of the user's outsourced data. While the proposed solutions can in principle solve the cloud storage integrity checking problem, they are not sufficient for current cloud storage practices. In this position paper, we show the gaps between theoretical and practical cloud storage integrity checking solutions, through a categorization of existing solutions and an analysis of their underlying assumptions. To bridge the gap, we also call for practical cloud storage integrity checking solutions for three scenarios.
Trust is an important facilitator for successful business relationships and an important technology adoption determinant. However, thus far trust has received little attention in the context of cloud computing, resulting in a lack of understanding of the dimensions of trust in cloud services and trust-building antecedents. Although the literature provides various conceptual models of trust for contexts related to cloud computing that may serve as a reference, in particular trust in IT outsourcing providers and trust in IT artifacts, idiosyncrasies of trust in cloud computing require a novel conceptual model of trust. First, a cloud service has a dual nature of being an IT artifact and a service provided by an organization. Second, cloud services are offered in impersonal cloud marketplaces and build upon a nested network of cloud services within the cloud ecosystem. In this article, we first analyze the concept of trust in cloud contexts. Next, we develop a conceptual model that describes trust in cloud services. The conceptual model incorporates the duality of trust in a cloud provider organization and trust in an IT artifact, as well as trust types for the impersonal environment and the cloud computing ecosystem. Using the conceptual model as a lens we then review 43 empirical studies on trust in IT outsourcing and trust in IT artifacts that were identified by a structured literature search. The resulting conceptual model provides a conceptual typology of constructs for trust in cloud services, defines trust-building antecedents, and develops 19 propositions describing the relationships between trust constructs and between trust constructs and trust-building antecedents. The conceptual model contributes to research by creating grounds for future theory-building on trust in cloud contexts, integrating two previously disjoint strands in the trust literature, and identifying knowledge gaps. Based on the conceptual model, we furthermore provide practical advice for managers from service providers, platform providers, customers, and institutional authorities.
The emergence and wide availability of remote storage service providers prompted work in the security community that allows clients to verify the integrity and availability of the data they outsource to a not fully trusted remote storage server at a relatively low cost. Most recent solutions to this problem allow clients to read and update (i.e., insert, modify, or delete) stored data blocks while trying to lower the overhead associated with verifying the integrity of the stored data. In this work, we develop a novel scheme whose performance compares favorably with existing solutions. Our solution additionally enjoys a number of new features, such as natural support for operations on ranges of blocks, revision control, and multi-user access to shared content. The performance guarantees that we achieve stem from a novel data structure called a balanced update tree and from removing the need for interaction during update operations beyond communicating the updates themselves.
Database outsourcing has gained significance, much like the "Application-as-a-Service" model, wherein the third-party provider is not trusted. The problems related to the security and privacy of outsourced XML data are data confidentiality, user/data privacy, and, finally, query assurance. Existing techniques for query assurance rely on properties of certain cryptographic primitives in static scenarios. This paper proposes a novel dynamic index structure that combines the advantages of the B+-tree and the Merkle Hash Tree for dynamic outsourced XML databases. Query assurance covers the correctness, completeness, and freshness of the stored XML database. In addition, the outsourced XML database with integrity verification is shown to be more efficient and to support updates in cloud paradigms.
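A minimal sketch of the Merkle-hash side of such a combined structure, assuming sorted leaf records and a signed root; the B+-tree node layout, XML handling, and completeness/freshness checks are omitted.

```python
# Hash the sorted leaf records, combine pairwise up to a root the owner signs,
# and verify a returned record with a logarithmic-size sibling path.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(records):
    """All levels of the Merkle tree over the hashed records, leaves first."""
    level = [H(r.encode()) for r in records]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]               # duplicate the last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_proof(levels, index):
    """Sibling hashes from leaf to root for the record at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], index % 2 == 1))  # (hash, sibling is the left child)
        index //= 2
    return path

def verify(record, path, root):
    node = H(record.encode())
    for sibling, sibling_is_left in path:
        node = H(sibling + node) if sibling_is_left else H(node + sibling)
    return node == root

records = sorted(["<emp id='1'/>", "<emp id='2'/>", "<emp id='3'/>", "<emp id='4'/>", "<emp id='5'/>"])
levels = build_levels(records)
root = levels[-1][0]                                   # the data owner signs this root
i = records.index("<emp id='3'/>")
print(verify("<emp id='3'/>", membership_proof(levels, i), root))   # True
```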
Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique, Systematic Analysis of Faults and Errors (SAFE), and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in, and adaptable to, different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.
In recent works, numerous physical-layer security systems have been proposed as alternatives to classic cryptography. Such systems aim to use the intrinsic properties of radio signals and the wireless medium to provide confidentiality and authentication to wireless devices. However, fundamental vulnerabilities are often discovered in these systems shortly after their inception. We therefore challenge the assumptions made by existing physical-layer security systems, and postulate that weaker assumptions are needed in order to adapt for practical scenarios. We also argue that if no computational advantage over an adversary can be ensured, secure communication cannot be realistically achieved.
The vision of smart environments, systems, and services is driven by the development of the Internet of Things (IoT). IoT devices produce large amounts of data, and this data is used to make critical decisions in many systems. The data produced by these devices has to satisfy various security-related requirements in order to be useful in practical scenarios. One of these requirements is data provenance, which allows a user to trust the data with respect to its origin and location. The low cost of many IoT devices and the fact that they may be deployed in unprotected spaces require security protocols to be efficient and secure against physical attacks. This paper proposes a lightweight protocol for data provenance in the IoT. The proposed protocol uses physical unclonable functions (PUFs) to provide physical security and to uniquely identify an IoT device. Moreover, wireless channel characteristics are used to uniquely identify the wireless link between an IoT device and a server/user. A brief security and performance analysis is presented to give a preliminary validation of the protocol.
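A heavily simplified sketch of the kind of binding such a protocol aims for, where a response is tied both to a device-specific secret (standing in for a PUF response) and to a measured link fingerprint (standing in for wireless channel characteristics); everything below, including the HMAC construction, is an assumption rather than the paper's protocol.

```python
# Bind a device reading to a PUF-like challenge/response and a link fingerprint.
import hmac, hashlib, os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for a hardware PUF: in reality the response comes from physical
    # manufacturing variation, not from a stored key.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def provenance_tag(device_secret: bytes, challenge: bytes,
                   link_fingerprint: bytes, payload: bytes) -> bytes:
    key = puf_response(device_secret, challenge)
    return hmac.new(key, link_fingerprint + payload, hashlib.sha256).digest()

# Server side: it holds the expected challenge/response pair (from enrollment)
# and its own estimate of the link fingerprint (e.g., quantized RSSI samples).
device_secret = os.urandom(32)             # models the physical PUF
challenge = os.urandom(16)
link_fp = b"\x01\x02\x03\x04"              # assumed quantized channel measurement
reading = b'{"temp": 21.5}'

tag = provenance_tag(device_secret, challenge, link_fp, reading)
expected = provenance_tag(device_secret, challenge, link_fp, reading)
print(hmac.compare_digest(tag, expected))  # True only for the right device on the right link
```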
Cyber risk management largely reduces to a race for information between defenders of ICT systems and attackers. Defenders can gain advantage in this race by sharing cyber risk information with each other. Yet, they often exchange less information than is socially desirable, because sharing decisions are guided by selfish rather than altruistic reasons. A growing line of research studies these strategic aspects that drive defenders’ sharing decisions. The present survey systematizes these works in a novel framework. It provides a consolidated understanding of defenders’ strategies to privately or publicly share information and enables us to distill trends in the literature and identify future research directions. We reveal that many theoretical works assume cyber risk information sharing to be beneficial, while empirical validations are often missing.