Biblio

Found 12044 results

Filters: Keyword is Resiliency
2018-01-16
Kansal, V., Dave, M..  2017.  Proactive DDoS attack detection and isolation. 2017 International Conference on Computer, Communications and Electronics (Comptelix). :334–338.

The increased number of cyber attacks makes the availability of services a major security concern. One common type of cyber threat is distributed denial of service (DDoS). A DDoS attack aims to prevent legitimate users from accessing services. It is easier for an insider having legitimate access to the system to deceive security controls, resulting in an insider attack. This paper proposes an Early Detection and Isolation Policy (EDIP) to mitigate insider-assisted DDoS attacks. EDIP detects insiders among all legitimate clients present in the system at the proxy level and isolates them from innocent clients by migrating them to an attack proxy. Further, an effective algorithm for detection and isolation of insiders is developed with the aim of maximizing attack isolation while minimizing disruption to benign clients. In addition, the concept of load balancing is used to prevent proxies from getting overloaded.
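
As a concrete (if simplified) illustration of the isolate-by-migration idea described above, the following Python sketch flags clients whose request rate crosses a threshold, migrates them to an attack proxy, and load-balances the rest; the threshold and proxy names are invented for illustration, and this is not the paper's EDIP algorithm.

```python
from collections import defaultdict

REQUEST_THRESHOLD = 100  # requests per window; an assumed value

def detect_and_isolate(request_counts, benign_proxies, attack_proxy):
    """Send suspected insiders to the attack proxy; balance the rest."""
    assignment = {}
    benign_load = defaultdict(int)
    for client, count in request_counts.items():
        if count > REQUEST_THRESHOLD:
            assignment[client] = attack_proxy        # isolate the suspect
        else:
            # least-loaded benign proxy, so no proxy gets overloaded
            proxy = min(benign_proxies, key=lambda p: benign_load[p])
            assignment[client] = proxy
            benign_load[proxy] += 1
    return assignment

counts = {"c1": 12, "c2": 480, "c3": 35}
print(detect_and_isolate(counts, ["p1", "p2"], "attack-proxy"))
```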

Shin, Youngjoo, Koo, Dongyoung, Hur, Junbeom.  2017.  A Survey of Secure Data Deduplication Schemes for Cloud Storage Systems. ACM Comput. Surv.. 49:74:1–74:38.

Data deduplication has attracted many cloud service providers (CSPs) as a way to reduce storage costs. Even though the general deduplication approach has been increasingly accepted, it comes with many security and privacy problems due to the outsourced data delivery models of cloud storage. To deal with specific security and privacy issues, secure deduplication techniques have been proposed for cloud data, leading to a diverse range of solutions and trade-offs. Hence, in this article, we discuss ongoing research on secure deduplication for cloud data in consideration of the attack scenarios exploited most widely in cloud storage. On the basis of a classification of deduplication systems, we explore security risks and attack scenarios from both inside and outside adversaries. We then describe state-of-the-art secure deduplication techniques for each approach that deal with different security issues under specific or combined threat models, which include both cryptographic and protocol solutions. We discuss and compare each scheme in terms of security and efficiency specific to different security goals. Finally, we identify and discuss unresolved issues and further research challenges for secure deduplication in cloud storage.

Arasu, Arvind, Eguro, Ken, Kaushik, Raghav, Kossmann, Donald, Meng, Pingfan, Pandey, Vineet, Ramamurthy, Ravi.  2017.  Concerto: A High Concurrency Key-Value Store with Integrity. Proceedings of the 2017 ACM International Conference on Management of Data. :251–266.

Verifying the integrity of outsourced data is a classic, well-studied problem. However, current techniques have fundamental performance and concurrency limitations for update-heavy workloads. In this paper, we investigate the potential advantages of deferred and batched verification rather than the per-operation verification used in prior work. We present Concerto, a comprehensive key-value store designed around this idea. Using Concerto, we argue that deferred verification preserves the utility of online verification and improves concurrency, resulting in orders-of-magnitude performance improvements. On standard benchmarks, the performance of Concerto is within a factor of two of state-of-the-art key-value stores without integrity guarantees.
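
The deferred-verification idea can be sketched in a few lines: rather than checking a proof per operation, the client logs operations and periodically verifies one digest over the whole batch. The Python sketch below shows that shape only; it is not Concerto's protocol, and the plain hash digest is an assumption made for illustration.

```python
import hashlib

class DeferredVerifier:
    """Log operations and verify one digest per batch, not per op."""

    def __init__(self):
        self.pending = []                 # operations awaiting verification

    def record(self, op, key, value):
        self.pending.append((op, key, value))

    def digest(self):
        h = hashlib.sha256()
        for op, key, value in self.pending:
            h.update(f"{op}:{key}:{value}".encode())
        return h.hexdigest()

    def verify_batch(self, server_digest):
        ok = server_digest == self.digest()
        if ok:
            self.pending.clear()          # batch verified; start a new epoch
        return ok

v = DeferredVerifier()
v.record("put", "k1", "v1")
v.record("put", "k2", "v2")
print(v.verify_batch(v.digest()))         # True in this self-check
```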

Chen, Jeang-Kuo, Lee, Wei-Zhe.  2017.  Enterprise Data Integration by Internal and External Systems. Proceedings of the 2017 International Conference on E-Business and Internet. :50–53.

ERP helps enterprises to integrate internal information and to improve operating performance and reaction capability. However, it is not enough to depend on ERP alone if enterprises want to develop quickly. The enterprise also needs several external supporting sub-systems such as a personnel management system, an equipment management system, etc. These sub-systems may be customized by outsourcing or developed by internal IT staff. They may be distributed across many branches or the headquarters to collect the first line of data and then deliver it to ERP for data integration. Most enterprises use manual effort or timed batch processes over the Internet to deliver data to ERP, but neither method is ideal from the viewpoint of efficiency and security. This paper proposes a fast and safe way, using both trigger and data replication techniques, to deliver the distributed data to ERP in time for data integration.

Eltayesh, Faryed, Bentahar, Jamal.  2017.  Verifiable Outsourced Database in the Cloud Using Game Theory. Proceedings of the Symposium on Applied Computing. :370–377.

In the verifiable database (VDB) model, a computationally weak client (the database owner) delegates his database management to a database service provider on the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of query results. Since the process can be computationally costly and has limited support for sophisticated query types such as aggregated queries, we propose in this paper a framework that helps bridge the gap between security and practicality trade-offs. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates and uploads to the database service provider the database and its authentication structure (AS). Next, the game is played between the defender (verifier), who is a party trusted by the database owner and runs scheduled randomized verifications using a Stackelberg mixed strategy, and the database service provider. The idea is to randomize the verification schedule in an optimized way that grants the optimal payoff for the verifier while making it extremely hard for the database service provider or any attacker to figure out which part of the database is being verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
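
The core scheduling idea, sampling the next audit target from a mixed strategy so the provider cannot predict it, can be shown with a small Python sketch; the partitions and equilibrium weights below are invented for illustration, not derived from the paper's game model.

```python
import random

partitions = ["part-A", "part-B", "part-C", "part-D"]
mixed_strategy = [0.4, 0.3, 0.2, 0.1]     # hypothetical equilibrium weights

def next_audit_target():
    # drawing from the mixed strategy keeps the schedule unpredictable
    # to the untrusted provider while favoring high-value partitions
    return random.choices(partitions, weights=mixed_strategy, k=1)[0]

print([next_audit_target() for _ in range(5)])
```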

Ferretti, L., Marchetti, M., Colajanni, M..  2017.  Verifiable Delegated Authorization for User-Centric Architectures and an OAuth2 Implementation. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC). 2:718–723.

Delegated authorization protocols have become widespread in the implementation of Web applications and services, where popular providers managing people's identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we solve this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of user data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.

Kumar, P. S., Parthiban, L., Jegatheeswari, V..  2017.  Auditing of Data Integrity over Dynamic Data in Cloud. 2017 Second International Conference on Recent Trends and Challenges in Computational Models (ICRTCCM). :43–48.

Cloud computing is a new computing paradigm which encourages remote data storage. This facility heightens the need for secure data auditing mechanisms over outsourced data. Several mechanisms supporting dynamic data have been proposed in the literature. However, most of the existing schemes lack a security feature that can withstand collusion attacks between the cloud server and revoked users. This paper presents a technique to thwart such collusion attacks; the data auditing mechanism is achieved by means of vector commitments and backward-unlinkable verifier-local revocation group signatures. The proposed work supports multiple users dealing with the remote cloud data. The performance of the proposed work is analysed and compared with existing techniques, and the experimental results are observed to be satisfactory in terms of computational and time complexity.

Ahmad, M., Shahid, A., Qadri, M. Y., Hussain, K., Qadri, N. N..  2017.  Fingerprinting non-numeric datasets using row association and pattern generation. 2017 International Conference on Communication Technologies (ComTech). :149–155.

In an era of fast Internet-based application environments, large volumes of relational data are being outsourced for business purposes. Ownership and digital rights protection has therefore become one of the greatest challenges and among the most critical issues. This paper presents a novel fingerprinting technique to protect ownership rights of non-numeric digital data on the basis of pattern generation and row association schemes. First, a fingerprint sequence is formulated using a secret key and the buyer's unique ID. With the chunks of these sequences and by applying the Fibonacci series, we select some rows. The selected rows are candidates for fingerprinting. The primary key of each selected row is protected using RSA encryption, after which a pattern is designed by randomly choosing the values of different attributes of the dataset. The encryption of the primary key establishes an association between the original and fake patterns, easing fingerprint detection. The fingerprint detection algorithm first finds the fake rows and then extracts the fingerprint sequence from the fake attributes, hence identifying the traitor. Among the most important features of the proposed approach is that it overcomes major weaknesses of previously proposed fingerprinting techniques, such as error tolerance, integrity and accuracy. The results show that the technique is efficient and robust against several malicious attacks.
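
A rough Python sketch of the two ingredients named above, a keyed fingerprint sequence and Fibonacci-driven row selection, is given below; the hash construction and all parameters are assumptions made for illustration, not the paper's exact scheme.

```python
import hashlib

def fingerprint_sequence(secret_key: str, buyer_id: str) -> str:
    """Derive a buyer-specific fingerprint sequence (assumed: SHA-256)."""
    return hashlib.sha256(f"{secret_key}|{buyer_id}".encode()).hexdigest()

def fibonacci_rows(n_rows: int, count: int):
    """Pick up to `count` candidate row indices at Fibonacci positions."""
    rows, a, b = [], 1, 2
    while len(rows) < count and a < n_rows:
        rows.append(a)
        a, b = b, a + b
    return rows

seq = fingerprint_sequence("owner-secret", "buyer-42")
print(seq[:16], fibonacci_rows(n_rows=10_000, count=8))
```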

Vavala, B., Neves, N., Steenkiste, P..  2017.  Secure Tera-scale Data Crunching with a Small TCB. 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :169–180.

Outsourcing services to third-party providers comes with a high security cost: the providers must be fully trusted. Using trusted hardware can help, but current trusted execution environments do not adequately support services that process very large-scale datasets. We present LASTGT, a system that bridges this gap by supporting the execution of self-contained services over a large state, with a small and generic trusted computing base (TCB). LASTGT uses widely deployed trusted hardware to guarantee integrity and verifiability of the execution on a remote platform, and it securely supplies data to the service through simple techniques based on virtual memory. As a result, LASTGT is general and applicable to many scenarios such as computational genomics and databases, as we show in our experimental evaluation based on an implementation of LASTGT on a secure hypervisor. We also describe a possible implementation on Intel SGX.

Etemad, Mohammad, Küpçü, Alptekin.  2016.  Generic Efficient Dynamic Proofs of Retrievability. Proceedings of the 2016 ACM on Cloud Computing Security Workshop. :85–96.

Together with its great advantages, cloud storage has brought many interesting security issues to our attention. Since 2007, with the first efficient storage integrity protocols, the Proofs of Retrievability (PoR) of Juels and Kaliski and the Provable Data Possession (PDP) of Ateniese et al., many researchers have worked on such protocols.

The differences between the PDP and PoR models were greatly debated. The first DPDP scheme was shown by Erway et al. in 2009, while the first DPoR scheme was created by Cash et al. in 2013. We show how to obtain DPoR from DPDP, PDP, and erasure codes, making us realize that, even though we did not know it, we could have had a DPoR solution in 2009.

We propose a general framework for constructing DPoR schemes that encapsulates known DPoR schemes as its special cases. We show practical and interesting optimizations enabling better performance than the constructions of Chandran et al. and Shi et al. For the first time, we show how to obtain constant audit bandwidth for DPoR, independent of the data size, and how the client can greatly speed up updates with O(λ√n) local storage (where n is the number of blocks and λ is the security parameter), which corresponds to about 3MB for 10GB of outsourced data and can easily be accommodated on today's smartphones, let alone computers.

Ba-Hutair, M. N., Kamel, I..  2016.  A New Scheme for Protecting the Privacy and Integrity of Spatial Data on the Cloud. 2016 IEEE Second International Conference on Multimedia Big Data (BigMM). :394–397.

As the amount of spatial data gets bigger, organizations have realized that it is cheaper and more flexible to keep their data on the Cloud rather than to establish and maintain huge in-house data centers. Though this saves a lot in IT costs, organizations are still concerned about the privacy and security of their data. Encrypting the whole database before uploading it to the Cloud solves the security issue, but querying the database then requires downloading and decrypting the data set, which is impractical. In this paper, we propose a new scheme for protecting the privacy and integrity of spatial data stored in the Cloud while being able to execute range queries efficiently. The proposed technique suggests a new index structure, based on the Z-curve, to support answering range queries over encrypted data sets. The paper describes a distributed algorithm for answering range queries over spatial data stored on the Cloud. We carried out many simulation experiments to measure the performance of the proposed scheme. The experimental results show that the proposed scheme outperforms the most recent schemes by Kim et al. in terms of data redundancy.
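
The Z-curve the index is based on is the standard Morton order, which interleaves coordinate bits so that a 2-D range query decomposes into a small number of 1-D key intervals. A generic Python sketch follows (textbook code, not the paper's privacy-preserving index):

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Morton code: interleave the bits of x and y into one key."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # even bit positions: x
        z |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions: y
    return z

# nearby points receive nearby keys, so a 2-D range maps to a few
# contiguous 1-D intervals of Morton codes
print(interleave_bits(3, 5), interleave_bits(4, 5))
```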

Gurjar, S. P. S., Pasupuleti, S. K..  2016.  A privacy-preserving multi-keyword ranked search scheme over encrypted cloud data using MIR-tree. 2016 International Conference on Computing, Analytics and Security Trends (CAST). :533–538.

With the increasing popularity of cloud computing, data owners are motivated to outsource their sensitive data to cloud servers for flexibility and reduced cost in data management. However, privacy is a big concern when outsourcing data to the cloud. Data owners typically encrypt documents before outsourcing to preserve privacy. As the volume of data is increasing at a dramatic rate, it is essential to develop efficient and reliable ciphertext search techniques so that data owners can easily access and update cloud data. In this paper, we propose a privacy-preserving multi-keyword ranked search scheme over encrypted data in the cloud, along with data integrity, using a new authenticated data structure, the MIR-tree. The MIR-tree-based index combines the widely used vector space model and the TF×IDF model in index construction and query generation. We use an inverted file index for storing word digests, which provides efficient and fast relevance matching between the query and cloud data. We design an authentication set (AS) for authenticating queries and verifying top-k search results. Because of the tree-based index, our scheme achieves optimal search efficiency and reduces communication overhead for verifying the search results. The analysis shows the security and efficiency of our scheme.
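
The TF×IDF relevance model used in index construction and query generation can be sketched generically in Python; nothing below is encrypted or specific to the MIR-tree, it only shows how documents are ranked for a multi-keyword query.

```python
import math
from collections import Counter

docs = {
    "d1": "cloud storage privacy".split(),
    "d2": "cloud search ranked search".split(),
}

def tfidf_score(query_terms, doc_terms, all_docs):
    """Score one document against a multi-keyword query with TF x IDF."""
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(term in d for d in all_docs.values())
        if df:
            idf = math.log(len(all_docs) / df)
            score += (tf[term] / len(doc_terms)) * idf
    return score

query = ["ranked", "search"]
print(sorted(docs, key=lambda d: tfidf_score(query, docs[d], docs), reverse=True))
```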

Ghutugade, K. B., Patil, G. A..  2016.  Privacy preserving auditing for shared data in cloud. 2016 International Conference on Computing, Analytics and Security Trends (CAST). :300–305.

Cloud computing, often referred to as simply "the cloud," is the delivery of on-demand computing resources, everything from applications to data centers, over the Internet. The cloud is used not only for storing data; the stored data can also be shared by multiple users. Because of this, the integrity of cloud data is subject to doubt. It is not always possible for a user to download all data and verify its integrity, so the proposed system contains a Third Party Auditor (TPA) to verify the integrity of shared data. During auditing, the shared data is kept private from public verifiers, who are able to verify shared data integrity without downloading or retrieving the entire data file. Group signatures are used to preserve the identity privacy of group members from the third party auditor. Privacy preservation ensures that the TPA cannot derive a user's data content from the information collected during the auditing process.

Miguel, Rodel Felipe, Dash, Akankshita, Aung, Khin Mi Mi.  2016.  A Study of Secure DBaaS with Encrypted Data Transactions. Proceedings of the 2Nd International Conference on Communication and Information Processing. :43–47.

The emergence of cloud computing allowed different IT services to be outsourced to cloud service providers (CSP). This includes the management and storage of users' structured data, called Database as a Service (DBaaS). However, DBaaS requires users to trust the CSP to protect their data, which is inherent in all cloud-based services. Enterprises and Small-to-Medium Businesses (SMB) see this as a roadblock to adopting cloud services (and DBaaS) because they do not have full control of the security and privacy of the sensitive data they are storing on the cloud. One of the solutions is for the data owners to store their sensitive data in the cloud's storage services in encrypted form. However, to take full advantage of DBaaS, there should be a solution to manage the structured data while it is encrypted. Upcoming technologies like Secure Multi-Party Computing (MPC) and Fully Homomorphic Encryption (FHE) are recent advances in security that allow computation on encrypted data. FHE is considered the holy grail of cryptography, and the original blueprint's processing performance is on the order of 10^14 times slower than without encryption. Our work gives insight into how far the state of the art is from a practical and viable solution for cloud computing data services. We achieved this by comparing two types of encrypted database management systems (DBMS). We performed well-known complex database queries and measured the performance results of the two DBMS. We used an FHE-encrypted relational DBMS (RDBMS): for specific query sets it takes only a few milliseconds, and at the highest it is on the order of 10^4 times slower than an encrypted object-oriented DBMS (OODBMS). Aside from focusing on the performance of the two databases, we also evaluated network resource usage, standards availability, and application integration.

Chen, Fei, Zhang, Taoyi, Chen, Jianyong, Xiang, Tao.  2016.  Cloud Storage Integrity Checking: Going from Theory to Practice. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :24–28.

In the past decade, researchers have proposed various cloud storage integrity checking protocols to enable a cloud storage user to validate the integrity of the user's outsourced data. While the proposed solutions can in principle solve the cloud storage integrity checking problem, they are not sufficient for current cloud storage practices. In this position paper, we show the gaps between theoretical and practical cloud storage integrity checking solutions, through a categorization of existing solutions and an analysis of their underlying assumptions. To bridge the gap, we also call for practical cloud storage integrity checking solutions for three scenarios.

Lansing, Jens, Sunyaev, Ali.  2016.  Trust in Cloud Computing: Conceptual Typology and Trust-Building Antecedents. SIGMIS Database. 47:58–96.

Trust is an important facilitator for successful business relationships and an important technology adoption determinant. However, thus far trust has received little attention in the context of cloud computing, resulting in a lack of understanding of the dimensions of trust in cloud services and trust-building antecedents. Although the literature provides various conceptual models of trust for contexts related to cloud computing that may serve as a reference, in particular trust in IT outsourcing providers and trust in IT artifacts, idiosyncrasies of trust in cloud computing require a novel conceptual model of trust. First, a cloud service has a dual nature of being an IT artifact and a service provided by an organization. Second, cloud services are offered in impersonal cloud marketplaces and build upon a nested network of cloud services within the cloud ecosystem. In this article, we first analyze the concept of trust in cloud contexts. Next, we develop a conceptual model that describes trust in cloud services. The conceptual model incorporates the duality of trust in a cloud provider organization and trust in an IT artifact, as well as trust types for the impersonal environment and the cloud computing ecosystem. Using the conceptual model as a lens we then review 43 empirical studies on trust in IT outsourcing and trust in IT artifacts that were identified by a structured literature search. The resulting conceptual model provides a conceptual typology of constructs for trust in cloud services, defines trust-building antecedents, and develops 19 propositions describing the relationships between trust constructs and between trust constructs and trust-building antecedents. The conceptual model contributes to research by creating grounds for future theory-building on trust in cloud contexts, integrating two previously disjoint strands in the trust literature, and identifying knowledge gaps. Based on the conceptual model, we furthermore provide practical advice for managers from service providers, platform providers, customers, and institutional authorities.

Zhang, Yihua, Blanton, Marina.  2016.  Efficient Dynamic Provable Possession of Remote Data via Update Trees. Trans. Storage. 12:9:1–9:45.

The emergence and wide availability of remote storage service providers prompted work in the security community that allows clients to verify the integrity and availability of the data they outsourced to a not fully trusted remote storage server at a relatively low cost. Most recent solutions to this problem allow clients to read and update (i.e., insert, modify, or delete) stored data blocks while trying to lower the overhead associated with verifying the integrity of the stored data. In this work, we develop a novel scheme whose performance compares favorably with the existing solutions. Our solution additionally enjoys a number of new features, such as natural support for operations on ranges of blocks, revision control, and support for multiple-user access to shared content. The performance guarantees that we achieve stem from a novel data structure called a balanced update tree and from removing the need for interaction during update operations, in addition to communicating the updates themselves.

Preethi, G., Gopalan, N. P..  2016.  Integrity Verification For Outsourced XML Database In Cloud Storage. Proceedings of the International Conference on Informatics and Analytics. :42:1–42:5.

Database outsourcing has gained significance, as in the "Application-as-a-Service" model, wherein the third-party provider is not trusted. The problems related to the security and privacy of outsourced XML data are data confidentiality, user privacy/data privacy and, finally, query assurance. Existing techniques for query assurance involve properties of certain cryptographic primitives in static scenarios. This paper proposes a novel dynamic index structure that combines the advantages of the B+-tree and the Merkle Hash Tree for dynamic outsourced XML databases. The query assurance issues addressed are the correctness, completeness and freshness of queries over the stored XML database. In addition, the outsourced XML database with integrity verification has been shown to be more efficient and to support updates in cloud paradigms.
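
The Merkle Hash Tree half of the combination is a standard primitive: the owner keeps one root hash, and any tampering with a stored record changes that root. A textbook Python sketch follows (not the paper's combined B+-tree index):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash leaves pairwise, level by level, down to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"<r>1</r>", b"<r>2</r>", b"<r>3</r>"]
# the owner keeps only the root; tampering with any record changes it
print(merkle_root(records).hex())
```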

2018-01-10
Wu, Xiaotong, Dou, Wanchun, Ni, Qiang.  2017.  Game Theory Based Privacy Preserving Analysis in Correlated Data Publication. Proceedings of the Australasian Computer Science Week Multiconference. :73:1–73:10.

Privacy preserving on data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy-preserving data publication is the trade-off problem between the privacy and utility of a single, independent data set. However, recent research has shown that the advanced privacy mechanism, i.e., differential privacy, is vulnerable when multiple data sets are correlated. In this case, the trade-off problem between privacy and utility evolves into a game problem, in which the payoff of each player depends not only on his privacy parameter, but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash Equilibrium and demonstrate the sufficient conditions in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash Equilibrium.

Ping, Haoyue, Stoyanovich, Julia, Howe, Bill.  2017.  DataSynthesizer: Privacy-Preserving Synthetic Datasets. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :42:1–42:5.

To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability — the data owner does not have to specify any parameters to start generating and sharing data safely and effectively. DataSynthesizer consists of three high-level modules — DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired. We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.

Deng, Xiyue, Mirkovic, Jelena.  2017.  Commoner Privacy And A Study On Network Traces. Proceedings of the 33rd Annual Computer Security Applications Conference. :566–576.

Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset, and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, added noise may jeopardize utility for queries posed over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, with random noise added only to those output points where fewer than k individuals (a configurable parameter) contribute to the point in the same manner. This approach has a lower privacy guarantee, but preserves more research utility than differential privacy. We propose an even more liberal privacy goal—commoner privacy—which fuzzes (omits, aggregates or adds noise to) only those output points where an individual's contribution to this point is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We propose one mechanism that achieves commoner privacy—interactive k-anonymity. We also discuss query composition and show how we can guarantee privacy via either a pre-sampling step or via query introspection. We implement interactive k-anonymity and query introspection in a system called Patrol for network trace processing. Our evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and at least 9-49 times the utility of crowd-blending privacy.
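
The spirit of suppressing low-support output points can be shown in a short Python sketch: a point is released only if at least k distinct individuals contribute to it. This is a toy illustration of the k-anonymity-style thresholding, not Patrol's interactive mechanism, and the threshold is an assumed value.

```python
K = 5  # assumed support threshold

def release(contributions):
    """contributions: iterable of (output_point, individual_id) pairs."""
    contributors = {}
    for point, individual in contributions:
        contributors.setdefault(point, set()).add(individual)
    # keep only points backed by at least K distinct individuals
    return {p: len(ids) for p, ids in contributors.items() if len(ids) >= K}

data = [("203.0.113.9", f"user{i}") for i in range(7)] + [("198.51.100.2", "user1")]
print(release(data))   # the single-contributor point is suppressed
```
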
Zhang, Jun, Cormode, Graham, Procopiuc, Cecilia M., Srivastava, Divesh, Xiao, Xiaokui.  2017.  PrivBayes: Private Data Release via Bayesian Networks. ACM Trans. Database Syst.. 42:25:1–25:41.

Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents PrivBayes, a differentially private method for releasing high-dimensional data. Given a dataset D, PrivBayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PrivBayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, PrivBayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, PrivBayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate PrivBayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy.
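
The central PrivBayes step of injecting noise into low-dimensional marginals can be illustrated with generic Laplace-mechanism code; the sketch below noises a one-dimensional marginal and samples from it, with an invented privacy budget, and is not the PrivBayes implementation.

```python
import random
from collections import Counter

EPSILON = 0.5  # invented per-marginal privacy budget

def noisy_marginal(values, epsilon=EPSILON):
    """Laplace-noise 1-D counts, clamp at zero, renormalize."""
    counts = Counter(values)
    noisy = {}
    for v, c in counts.items():
        # difference of two Exp(rate=epsilon) draws ~ Laplace(scale=1/epsilon)
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        noisy[v] = max(0.0, c + noise)
    total = sum(noisy.values()) or 1.0
    return {v: c / total for v, c in noisy.items()}

ages = ["18-25", "18-25", "26-35", "36-45", "26-35", "18-25"]
dist = noisy_marginal(ages)
# synthetic values are then sampled from the noisy marginal
print(random.choices(list(dist), weights=list(dist.values()), k=3))
```
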
He, Zaobo, Cai, Zhipeng, Sun, Yunchuan, Li, Yingshu, Cheng, Xiuzhen.  2017.  Customized Privacy Preserving for Inherent Data and Latent Data. Personal Ubiquitous Comput.. 21:43–54.

The huge amount of sensory data collected from mobile devices has offered great potential for more significant services based on user data extracted from sensor readings. However, releasing user data could also seriously threaten user privacy. It is possible to directly collect sensitive information from released user data without user permission. Furthermore, third-party users can also infer sensitive information contained in released data in a latent manner by utilizing data mining techniques. In this paper, we formally define these two types of threats as inherent data privacy and latent data privacy and construct a data-sanitization strategy that can optimize the trade-off between data utility and the two customized types of privacy. The key novelty is that the developed strategy can combat powerful third-party users who have broad knowledge about users and launch optimal inference attacks. We show that our strategy does not much reduce the benefit brought by user data, while sensitive information can still be protected. To the best of our knowledge, this is the first work that preserves both inherent data privacy and latent data privacy.

Aissaoui, K., Idar, H. Ait, Belhadaoui, H., Rifi, M..  2017.  Survey on data remanence in Cloud Computing environment. 2017 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS). :1–4.

Cloud computing is a developing IT concept that faces some issues which are slowing down its evolution and adoption by users across the world. The lack of security has been the main concern. Organizations and entities need to ensure, inter alia, the integrity and confidentiality of their outsourced sensitive data within a cloud provider's servers. Solutions have been examined in order to strengthen security models (strong authentication, encryption and fragmentation before storing, access control policies...). More particularly, data remanence is undoubtedly a major threat. How can we be sure that data are, when requested, truly and appropriately deleted from remote servers? In this paper, we aim to produce a survey of this interesting subject and to address the problem of residual data in a cloud-computing environment, which is characterized by the use of virtual machines instantiated on remote servers owned by a third party.

Vakilinia, I., Tosh, D. K., Sengupta, S..  2017.  3-Way game model for privacy-preserving cybersecurity information exchange framework. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :829–834.

With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others for effectively assessing the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, making the decision to share information is a challenging task and requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system using a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best response analysis, the simulation results demonstrate the efficiency of our proposed framework.