Biblio
Existing secure deletion approaches are inefficient at erasing data permanently because file systems have no knowledge of the data layout on the storage device, nor is the storage device aware of file information within the file system. This inefficiency is exacerbated on the emerging shingled magnetic recording (SMR) drive due to its inherent sequential-write constraint. On SMR drives, secure deletion requests may cause serious write amplification and performance degradation if the data layout is not properly configured. This observation motivates us to propose a file-oriented fast secure deletion (FFSD) strategy that alleviates the negative impacts of SMR drives' sequential-write constraint and improves the efficiency of secure deletion operations on SMR drives. A series of experiments was conducted to demonstrate the capability of the proposed strategy to improve the efficiency of secure deletion on SMR drives.
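To make the write-amplification concern concrete, the toy model below estimates how much data must be physically rewritten when a file is securely erased in place inside a single SMR zone, under the simplifying assumption that everything after the file in the zone is valid. The zone size and file placement are illustrative; this is not the paper's FFSD algorithm.

```python
# Toy model of why in-place secure deletion is costly on SMR drives:
# zones must be written sequentially, so overwriting a file mid-zone
# forces rewriting everything that follows it in the zone.
ZONE_SIZE = 256 * 1024 * 1024  # assumed 256 MiB zone

def secure_delete_amplification(file_offset: int, file_size: int) -> float:
    """Bytes physically rewritten per byte of file data erased."""
    rewritten = ZONE_SIZE - file_offset  # the file plus all data after it
    return rewritten / file_size

MiB = 1024 * 1024
# A 4 MiB file near the start of a zone: almost the whole zone is rewritten.
print(secure_delete_amplification(8 * MiB, 4 * MiB))              # 62.0
# The same file at the very end of the zone: amplification drops to 1.
print(secure_delete_amplification(ZONE_SIZE - 4 * MiB, 4 * MiB))  # 1.0
```

The gap between these two cases is what a deletion-aware data layout, like the one FFSD aims for, can exploit.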
We present a novel, use-case-agnostic method for identifying and circumventing private data exposure across distributed, high-dimensional data repositories. Examples of such repositories include medical research and treatment data, which often contain more than 300 descriptive attributes. Providing strong guarantees of data anonymity in these repositories is therefore a hard constraint for adhering to privacy legislation. Yet, when applied to distributed high-dimensional data, existing anonymisation algorithms incur high levels of information loss and do not guarantee privacy, defeating the purpose of anonymisation. In this paper, we address this issue by using Bayesian networks to handle the data transformation for anonymisation. By evaluating every attribute combination to determine the privacy exposure risk, the conditional probability linking attribute pairs is computed. Pairs with a high conditional probability expose a risk of de-anonymisation similar to quasi-identifiers, and can be separated instead of deleted, as in previous algorithms. Attribute separation removes the risk of privacy exposure, and avoiding deletion yields a significant reduction in information loss. In other words, assimilating the conditional probability of outliers directly in the adjacency matrix in a greedy fashion is fast and thwarts de-anonymisation. Since identifying every privacy-violating attribute combination is a W[2]-complete problem, we optimise the procedure with a multigrid solver method by evaluating the conditional probabilities between attribute pairs, and contain the state-space explosion of attribute pairs through manifold learning. Finally, incremental processing of new data is achieved through inexpensive, continuous (delta) learning.
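As a hedged sketch of the pair-risk idea, the snippet below estimates P(B | A) for every attribute pair and flags pairs whose conditional probability is high enough to act like a quasi-identifier, marking them as candidates for separation rather than deletion. The threshold and toy data are illustrative; the paper's actual pipeline layers Bayesian networks, a multigrid solver, and manifold learning on top of this basic signal.

```python
import pandas as pd
from itertools import combinations

def risky_pairs(df: pd.DataFrame, threshold: float = 0.9):
    """Flag attribute pairs where one attribute nearly determines the other."""
    flagged = []
    for a, b in combinations(df.columns, 2):
        # P(b | a): per value of a, how concentrated is the distribution of b?
        cond = pd.crosstab(df[a], df[b], normalize="index")
        if cond.max(axis=1).max() >= threshold:
            flagged.append((a, b))  # candidate for separation, not deletion
    return flagged

df = pd.DataFrame({
    "zip":  ["10001", "10001", "94110", "94110"],
    "city": ["NYC",   "NYC",   "SF",    "SF"],
    "diag": ["flu",   "cold",  "flu",   "cold"],
})
print(risky_pairs(df))  # [('zip', 'city')]: zip fully determines city
```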
As an efficient deletion method, unlinking is widely used in cloud storage. However, unlinking is an incomplete form of deletion: 'deleted' data remains in the cloud and can be recovered. To make deleted data unrecoverable, overwriting is an effective method in the cloud. Yet users lose control over their data in the cloud once it is deleted, so it is difficult for them to confirm that overwriting has taken place. To address this crucial problem, we propose a Provable and Traceable Assured Deletion (PTAD) scheme for cloud storage based on blockchain. The PTAD scheme relies on overwriting to achieve assured deletion. We draw on the idea of data integrity checking and design algorithms to verify whether the cloud has properly overwritten the original blocks with specific patterns. We use smart contracts on the blockchain to execute verification automatically and keep the transactions in the ledger for tracking. The whole scheme is divided into three stages, namely unlinking, overwriting, and verification, and we design one specific algorithm for each stage. For evaluation, we implement the PTAD scheme on the cloud and construct a consortium chain with Hyperledger Fabric. The results show that the PTAD scheme is effective and feasible.
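The verification stage can be pictured as a simple challenge-response: the verifier sends a fresh nonce, and the cloud must hash the nonce together with the block's current contents, which only matches if the block really holds the agreed overwrite pattern. This is a minimal sketch with hypothetical names, not PTAD's actual algorithms or its smart-contract machinery.

```python
import hashlib, os

PATTERN = b"\x00\xff" * 2048  # agreed 4 KiB overwrite pattern

def cloud_prove(block: bytes, nonce: bytes) -> bytes:
    """What the cloud computes over the block it actually stores."""
    return hashlib.sha256(nonce + block).digest()

def verify(proof: bytes, nonce: bytes) -> bool:
    """Verifier recomputes the digest from the known pattern."""
    return proof == hashlib.sha256(nonce + PATTERN).digest()

nonce = os.urandom(16)  # fresh per challenge, so old proofs cannot be replayed
assert verify(cloud_prove(PATTERN, nonce), nonce)                 # overwritten
assert not verify(cloud_prove(b"stale user data", nonce), nonce)  # not overwritten
```

Note that this naive check only catches an honest-but-lazy cloud, since the pattern itself is public; binding the proof to what is actually stored is why PTAD builds on data integrity checking and keeps an on-chain audit trail.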
Crowdsensing, driven by the proliferation of sensor-rich mobile devices, has emerged as a promising data sensing and aggregation paradigm. Despite their usefulness, traditional crowdsensing systems typically rely on a centralized third-party platform for data collection and processing, which leads to concerns such as a single point of failure and a lack of operational transparency. Such centralization hinders the wide adoption of crowdsensing by wary participants. We therefore explore an alternative design space of building crowdsensing systems atop the emerging decentralized blockchain technology. While enjoying the benefits brought by the public blockchain, we endeavor to achieve a consolidated set of desirable security properties with a proper choreography of the latest techniques and our customized designs. We allow data providers to safely contribute data to the transparent blockchain with confidentiality guarantees on individual data and differential privacy on the aggregation result. Meanwhile, we ensure the service correctness of data aggregation and sanitization by carefully employing a hardware-assisted transparent enclave. Furthermore, we maintain the robustness of our system against faulty data providers that submit invalid data through a customized zero-knowledge range proof scheme. The experimental results demonstrate the high efficiency of our designs on both the mobile client and the SGX-enabled server, as well as the reasonable on-chain monetary cost of running our task contract on Ethereum.
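Among these properties, the differential-privacy piece is the easiest to isolate: the aggregator clips each reading to a valid range and adds Laplace noise calibrated to the sum's sensitivity before publishing. The bounds and epsilon below are illustrative; in the paper the aggregation runs inside an SGX enclave and invalid inputs are rejected by zero-knowledge range proofs rather than by clipping.

```python
import numpy as np

def dp_sum(readings, lo=0.0, hi=100.0, epsilon=1.0):
    """Publish a noisy sum satisfying epsilon-differential privacy."""
    clipped = np.clip(readings, lo, hi)  # bound each provider's influence
    sensitivity = hi - lo                # max change one provider can cause
    noise = np.random.laplace(0.0, sensitivity / epsilon)
    return float(clipped.sum() + noise)

print(dp_sum([21.0, 35.5, 47.2, 99.0]))  # true sum 202.7, plus noise
```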
Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data-driven decisions. We address this problem through the lens of dx-privacy, a generalization of differential privacy to non-Hamming distance metrics. In this work, we explore word representations in Hyperbolic space as a means of preserving privacy in text. We provide a proof of dx-privacy, then define a probability distribution in Hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high-dimensional Hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protection against an authorship attribution algorithm, while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe greater than 20x stronger guarantees on expected privacy against comparable worst-case statistics.
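The generic dx-privacy mechanism on word vectors can be sketched in the Euclidean baseline that the paper compares against (the paper's contribution is doing this in Hyperbolic space): sample noise whose density decays as exp(-eps * distance), perturb the word's embedding, then snap to the nearest vocabulary word to obtain the semantic generalization. The tiny vocabulary and 2-dimensional vectors are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"doctor": np.array([1.0, 0.0]),     "nurse": np.array([0.9, 0.2]),
         "clinician": np.array([0.95, 0.1]), "car": np.array([-1.0, 0.5])}

def perturb(word: str, eps: float) -> str:
    v = vocab[word]
    d = v.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)           # uniform direction
    magnitude = rng.gamma(shape=d, scale=1.0 / eps)  # density ~ exp(-eps * r)
    noisy = v + magnitude * direction
    # Snap to the closest vocabulary word: the released "generalization".
    return min(vocab, key=lambda w: float(np.linalg.norm(vocab[w] - noisy)))

print([perturb("doctor", eps=5.0) for _ in range(5)])  # nearby medical terms
```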
Machine learning is a major area of artificial intelligence that enables computers to learn from data without being explicitly programmed. As machine learning is widely used to make decisions automatically, attackers have a strong incentive to manipulate the predictions generated by machine learning models. In this paper we study the different types of attacks on machine learning models and their countermeasures. We find that there are security threats in many algorithms, such as the k-nearest-neighbors (KNN) classifier, random forest, AdaBoost, support vector machine (SVM), and decision tree. We revisit existing security threats and examine the possible countermeasures during the training and prediction phases of a machine learning model. There are two types of attacks on a machine learning model: causative attacks, which occur during the training phase, and exploratory attacks, which occur during the prediction phase. We also discuss countermeasures, namely data sanitization, algorithm robustness enhancement, and privacy-preserving techniques.
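Data sanitization, the first of these countermeasures, can be illustrated with a toy defense against a causative attack: drop training points that sit suspiciously far from their class centroid before fitting the model. The 2x-median threshold and the poisoned point are invented for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sanitize(X, y):
    """Remove per-class outliers that may be poisoned training points."""
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        dist = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        keep[idx[dist > 2 * np.median(dist)]] = False  # suspicious outliers
    return X[keep], y[keep]

X = np.array([[0, 0], [0.1, 0.2], [0.2, 0], [5, 5],   # [5, 5] is poisoned
              [3, 3], [3.1, 2.9], [2.9, 3.2], [3.0, 3.1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Xs, ys = sanitize(X, y)  # the poisoned point is filtered out
print(KNeighborsClassifier(n_neighbors=3).fit(Xs, ys).predict([[0.1, 0.1]]))
```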
The Scala programming language combines object-oriented and functional programming in one concise, high-level language, and it supports static typing, which helps avoid bugs in complex programs. This paper proposes a dynamic taint analyzer called ScalaTaint for Scala applications. The analyzer traces the propagation of malicious inputs from untrusted sources to sensitive sink methods in programs that can be exploited by adversaries. In this work, we evaluated the accuracy of ScalaTaint with a security benchmark suite comprising 7 Scala projects. Our analyzer reported 49 vulnerabilities within 753,372 lines of code. Moreover, our performance measurement on ScalaBench shows a 67% runtime overhead, which demonstrates the usefulness and efficiency of our technique in comparison with similar tools.
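The source-to-sink idea behind dynamic taint analysis can be sketched in a few lines (ScalaTaint itself instruments Scala programs; this Python toy with hypothetical sources and sinks only shows the propagation logic): values entering from untrusted sources carry a taint mark that survives string operations, and sensitive sinks refuse tainted input unless it has been sanitized.

```python
class Tainted(str):
    """A string value that originated from an untrusted source."""

def source(user_input: str) -> str:
    return Tainted(user_input)          # mark data entering the program

def concat(a: str, b: str) -> str:
    result = a + b
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)          # taint propagates through operations
    return result

def sanitize(s: str) -> str:
    return str(s.replace("'", "''"))    # escape quotes; result is untainted

def sql_sink(query: str) -> None:
    if isinstance(query, Tainted):
        raise RuntimeError("tainted data reached a sensitive sink")
    print("executing:", query)

q = concat("SELECT * FROM users WHERE name = '", source("x' OR '1'='1"))
sql_sink(sanitize(q))  # accepted after sanitization
sql_sink(q)            # raises: untrusted input flowed to the sink
```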
In order to protect individuals' privacy, data have to be "well-sanitized" before they are shared, i.e., any personal information has to be removed. However, it is not always clear when data should be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allow the inference of sensitive information specific to an individual, rather than being centered on the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record in the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluating and comparing each sanitized dataset within our framework.
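A minimal sketch of such an influence measure might look as follows: train an inference model on the sanitized data with and without the target's record, and compare the model's confidence in the target's true sensitive value. A gap near zero means the record barely helps an adversary infer its owner's attribute. The classifier and the 0/1 label encoding are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def record_influence(X, s, i):
    """Influence of record i on inferring its own sensitive attribute s[i].

    Assumes sensitive labels are encoded 0/1 so they index predict_proba.
    """
    with_i = LogisticRegression().fit(X, s)
    mask = np.arange(len(X)) != i
    without_i = LogisticRegression().fit(X[mask], s[mask])
    p_with = with_i.predict_proba(X[i:i + 1])[0][s[i]]
    p_without = without_i.predict_proba(X[i:i + 1])[0][s[i]]
    return p_with - p_without  # ~0: the record reveals little about its owner

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
s = np.array([0, 0, 1, 1])  # sensitive attribute
print(record_influence(X, s, 3))
```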
Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about individual employees involved in process execution that is only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for the discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility compared to methods that exploit frequency-based filtering, while providing the same privacy guarantees.
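The prefix-tree intuition can be sketched with a simple pruning rule: walk each trace step by step and keep only the longest prefix that at least k cases share, so every released variant is k-anonymous. PRETSA's actual transformations are more careful (they merge rather than truncate, and also enforce t-closeness on sensitive annotations); this conveys only the core idea.

```python
def k_anonymous_prefixes(traces, k=2):
    """Truncate each trace to its longest prefix shared by >= k cases."""
    sanitized = []
    for trace in traces:
        kept = []
        for i in range(1, len(trace) + 1):
            prefix = tuple(trace[:i])
            support = sum(1 for t in traces if tuple(t[:i]) == prefix)
            if support < k:
                break                  # this step would single out < k cases
            kept = list(prefix)
        sanitized.append(kept)
    return sanitized

log = [["register", "check", "approve"],
       ["register", "check", "approve"],
       ["register", "check", "reject", "escalate"]]  # unique, identifying suffix
print(k_anonymous_prefixes(log, k=2))  # third trace loses its rare suffix
```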
Because cloud storage services are broadly used in enterprises for online sharing and collaboration, sensitive information in images or documents may easily leak outside the trusted enterprise premises through such cloud services. Existing solutions to this problem have not fully explored the tradeoffs among application performance, service scalability, and user data privacy. Therefore, we propose CloudDLP, a generic approach for enterprises to automatically sanitize sensitive data in images and documents in browser-based cloud storage. To the best of our knowledge, CloudDLP is the first system that automatically and transparently detects and sanitizes both sensitive images and textual documents without compromising user experience or application functionality on browser-based cloud storage. To prevent sensitive information from escaping the enterprise premises, CloudDLP utilizes deep learning methods to detect sensitive information in both images and textual documents. We have evaluated the proposed method on a number of typical cloud applications. Our experimental results show that it can achieve transparent and automatic data sanitization on cloud storage services with relatively low overhead, while preserving most application functionality.
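The place of detection and masking in such a system can be pictured with a stand-in sanitizer: CloudDLP's detectors are deep learning models for both images and text, but a regex-based stand-in for two common identifiers is enough to show where sanitization sits on the path from the browser to the cloud service. The patterns and placeholder format are illustrative.

```python
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize_document(text: str) -> str:
    """Mask detected identifiers before the document leaves the premises."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text

outgoing = "Contact john@corp.com, SSN 123-45-6789, re: Q3 roadmap."
print(sanitize_document(outgoing))
```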
As data security has become one of the most crucial issues in modern storage system and application designs, data sanitization techniques are regarded as a promising solution for 3D NAND flash-memory-based devices. Many excellent works have been proposed that exploit in-place reprogramming, erasure, and encryption techniques to implement sanitization functionality. However, existing sanitization approaches can incur performance and disturbance overheads, or even risk having the data deciphered. In contrast to existing works, this work explores an instantaneous data sanitization scheme that takes advantage of programming disturbance properties. Our proposed design not only achieves instantaneous data sanitization by properly exploiting programming disturbance and error-correction codes, but also enhances performance with a recycling programming design. The feasibility and capability of our proposed design are evaluated by a series of experiments on 3D NAND flash memory chips, with very encouraging results. The experimental results show that the proposed design achieves instantaneous data sanitization at low overhead; moreover, it improves the average response time and reduces the block erase count by up to 86.8% and 88.8%, respectively.
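A rough software analogy for the mechanism, with loudly invented numbers: if deliberate reprogramming injects far more bit errors into a page than the ECC can correct, the page becomes uncorrectable and its old contents are effectively gone without an erase. Real disturbance behavior and ECC strength depend on the chip; this toy only conveys the error-budget argument.

```python
import random

ECC_CORRECTABLE = 40  # assumed: bits the ECC can fix per ~1 KiB codeword

def sanitize_by_disturbance(page: bytearray, injected_errors: int) -> None:
    """Flip far more bits than the ECC can correct, so reads must fail."""
    for bit in random.sample(range(len(page) * 8), injected_errors):
        page[bit // 8] ^= 1 << (bit % 8)

page = bytearray(b"secret" * 170)  # ~1 KiB of sensitive data
sanitize_by_disturbance(page, injected_errors=10 * ECC_CORRECTABLE)
# The page now decodes as uncorrectable: the data is unreadable without
# having paid for a full block erase.
```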
A prioritized cyber defense remediation plan is critical for effective risk management in cyber-physical systems (CPS). The increased integration of Information Technology (IT) and Operational Technology (OT) in CPS has led to the need to identify the critical assets which, when affected, will impact resilience and safety. In this work, we propose a methodology for a prioritized cyber risk remediation plan that balances operational resilience and economic loss (safety impacts) in CPS. We present a platform for modeling and analyzing the effect of cyber threats and random system faults on the safety of CPS that could lead to catastrophic damage. We propose to develop a data-driven attack-graph and fault-graph-based model to characterize the exploitability and impact of threats in CPS. We develop an operational impact assessment to quantify the damage. Finally, we propose the development of a strategic response decision capability that recommends optimal mitigation actions and policies, balancing the trade-off between operational resilience (Tactical Risk) and Strategic Risk.
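The attack-graph side of such prioritization can be sketched with a small graph whose edge weights are exploit likelihoods: the most probable path from an entry point to a critical asset is the one to remediate first. The topology and probabilities below are invented, and maximizing a product of probabilities is turned into a shortest-path problem via a -log transform.

```python
import math
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web_srv", 0.8), ("web_srv", "historian", 0.5),
    ("internet", "vpn", 0.3),     ("vpn", "historian", 0.9),
    ("historian", "plc", 0.7),    # compromising the PLC halts the plant
])

# Maximizing a product of probabilities == minimizing the sum of -log(p),
# so ordinary shortest-path search finds the likeliest attack path.
for _, _, data in g.edges(data=True):
    data["cost"] = -math.log(data["weight"])

path = nx.shortest_path(g, "internet", "plc", weight="cost")
cost = nx.shortest_path_length(g, "internet", "plc", weight="cost")
print(path, f"p={math.exp(-cost):.2f}")  # patch an edge on this path first
```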
With the growing scale of cyber-physical systems (CPSs), it is challenging to maintain their stability under all operating conditions. How to reduce downtime and locate failures becomes a core issue in system design. In this paper, we employ a hierarchical contract-based resilience framework to guarantee the stability of CPS. In this framework, we use Assume-Guarantee (A-G) contracts to monitor the non-functional properties of individual components (e.g., power and latency), and hierarchically compose such contracts to deduce information about faults at the system level. The hierarchical contracts enable rapid fault detection in large-scale CPS. However, due to the vast number of components in CPS, manually designing the numerous contracts and their hierarchy becomes challenging. To address this issue, we propose a technique to automatically decompose a root contract into multiple lower-level contracts based on the I/O dependencies between components. We then formulate a multi-objective optimization problem to search for the optimal parameters of each lower-level contract. This enables automatic contract refinement that takes the communication overhead between components into consideration. Finally, we use a case study from the manufacturing domain to experimentally demonstrate the benefits of the proposed framework.
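A runtime A-G contract monitor can be sketched as a pair of predicates per component: if the assumption over the inputs holds but the guarantee over the outputs fails, the fault is localized to that component; if the assumption itself fails, the fault lies upstream. The component, signals, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AGContract:
    name: str
    assume: Callable[[dict], bool]     # predicate over observed inputs
    guarantee: Callable[[dict], bool]  # predicate over observed outputs

    def check(self, inputs: dict, outputs: dict) -> str:
        if not self.assume(inputs):
            return f"{self.name}: assumption violated (fault is upstream)"
        if not self.guarantee(outputs):
            return f"{self.name}: guarantee violated (fault is local)"
        return f"{self.name}: ok"

motor = AGContract(
    "motor_driver",
    assume=lambda i: 11.5 <= i["supply_v"] <= 12.5,  # stable supply voltage
    guarantee=lambda o: o["latency_ms"] < 10.0,      # bounded response time
)
print(motor.check({"supply_v": 12.0}, {"latency_ms": 14.2}))  # local fault
```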