Bibliography
Online Social Networks (OSNs) play a vital role in our day-to-day lives. The most popular social network, Facebook, alone currently counts 2.23 billion users worldwide. OSN users are aware of the various security risks that exist in this setting, including privacy violations, and they use the privacy settings provided by OSN providers to keep their data safe. However, most of them are unaware of the risk that remains after deletion: their data is not actually removed from the OSN servers. Self-destruction of data is one of the principal recommended methods to achieve assured deletion. Numerous techniques have been developed for the self-destruction of data, and this paper discusses and evaluates these techniques along with the various privacy risks faced by an OSN user in this web-centered world.
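To make the self-destruction idea concrete, below is a minimal sketch in the spirit of Vanish-style schemes (not the mechanism of any particular surveyed technique): content is encrypted under a random key, the key is split into shares, and the shares are placed in stores that expire after a time-to-live. Once any share expires, the key, and with it the data, becomes unrecoverable. The EphemeralStore class and the n-of-n XOR sharing are illustrative stand-ins; a real system would scatter shares across a DHT and encrypt the payload with a standard cipher.

```python
import os
import time

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """n-of-n XOR secret sharing: ALL shares are needed to rebuild the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

class EphemeralStore:
    """Stand-in for a DHT node that silently drops entries after a TTL."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.data = {}
    def put(self, k, v):
        self.data[k] = (v, time.time() + self.ttl)
    def get(self, k):
        v, expiry = self.data.get(k, (None, 0))
        return v if time.time() < expiry else None

# Encrypt-then-scatter: the key (which would encrypt the actual post)
# self-destructs as soon as any one of its shares expires.
key = os.urandom(32)
shares = split_key(key, 3)
stores = [EphemeralStore(ttl_seconds=60) for _ in shares]
for store, share in zip(stores, shares):
    store.put("post-123", share)

def recover_key(stores) -> bytes:
    parts = [s.get("post-123") for s in stores]
    if any(p is None for p in parts):
        return None  # a share expired: the data has self-destructed
    rebuilt = parts[0]
    for p in parts[1:]:
        rebuilt = xor_bytes(rebuilt, p)
    return rebuilt

assert recover_key(stores) == key  # still recoverable within the TTL
```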
Smart meters migrate the conventional electricity grid into a digitally enabled Smart Grid (SG), which is more reliable and efficient. The fine-grained energy consumption data collected by smart meters helps utility providers accurately predict users' demands and significantly reduce power generation costs, but it imposes severe privacy risks on consumers and may discourage them from using these "espionage meters". To enjoy the benefits of smart-meter data without compromising users' privacy, in this paper we integrate distributed differential privacy (DDP) techniques into data-driven optimization and propose a novel scheme that not only minimizes the cost for utility providers but also preserves the DDP of users' energy profiles. Briefly, we add differentially private noise to users' energy consumption data before the smart meters send it to the utility provider. Due to the uncertainty of the users' demand distribution, the utility provider aggregates a given set of historical differentially private user data, estimates the users' demands, and formulates the data-driven cost minimization over the collected noisy data. We also develop algorithms to obtain feasible solutions, and verify the effectiveness of the proposed scheme through simulations using energy consumption data generated from a utility company's real data analysis.
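As a concrete illustration of the per-meter noise-addition step, the sketch below uses a common DDP construction (not necessarily this paper's exact mechanism): the Laplace distribution is divisible into differences of Gamma variables, so each meter adds only a small noise share while the sum over all meters carries full Laplace noise calibrated to epsilon-differential privacy. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def meter_noise(n_meters: int, sensitivity: float, epsilon: float) -> float:
    """One meter's noise share. Summed over n_meters meters, these shares
    follow a Laplace(sensitivity / epsilon) distribution, so the AGGREGATE
    is epsilon-DP without any single meter adding the full noise itself."""
    scale = sensitivity / epsilon
    return (rng.gamma(1.0 / n_meters, scale)
            - rng.gamma(1.0 / n_meters, scale))

n_meters, epsilon, sensitivity = 100, 0.5, 1.0  # illustrative parameters
true_readings = rng.uniform(0.2, 3.0, size=n_meters)  # kWh in one interval
reported = np.array([r + meter_noise(n_meters, sensitivity, epsilon)
                     for r in true_readings])

# The utility provider only ever sees the noisy reports, yet the aggregate
# stays accurate enough for demand estimation and cost minimization.
print("true total:  %.2f kWh" % true_readings.sum())
print("noisy total: %.2f kWh" % reported.sum())
```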
In rural and remote areas, resource-constrained smart micro-grid (RCSMG) architectures can offer a cost-effective power management and supply alternative to national power grid connections. RCSMG architectures handle communications over distributed lossy networks to minimize operation costs. However, the unreliable nature of lossy networks makes privacy an important consideration. Existing anonymisation work on data perturbation operates mainly by distortion with additive noise. Applying these solutions to RCSMGs is problematic, because deliberately added noise must be distinguishable from both system-generated and adversarially generated noise. In this paper, we present a brief survey of inference-based privacy risks in RCSMGs and propose a method of mitigating these risks. The lesson here is that while RCSMGs give users more control over power management and distribution, good anonymisation is essential to protecting personal information on RCSMGs.
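The survey's own mitigation method is not reproduced here. As a hypothetical illustration of the distinguishability requirement, the sketch below derives the deliberate perturbation from a shared key, so authorized parties can regenerate and cancel it exactly, while to anyone else it is indistinguishable from ordinary noise. All names and parameters are assumptions made for the example.

```python
import hashlib
import struct
import numpy as np

def keyed_noise(key: bytes, index: int, sigma: float) -> float:
    """Deterministic noise derived from a shared key: holders of the key
    can regenerate and subtract the deliberate perturbation, keeping it
    distinguishable (to them) from system or adversarial noise."""
    digest = hashlib.sha256(key + struct.pack(">I", index)).digest()
    seed = int.from_bytes(digest[:8], "big")
    return np.random.default_rng(seed).normal(0.0, sigma)

key, sigma = b"shared-microgrid-secret", 0.5   # hypothetical key and scale
readings = [1.8, 2.1, 0.9, 3.3]                # kW samples on the lossy link
perturbed = [r + keyed_noise(key, i, sigma) for i, r in enumerate(readings)]
recovered = [p - keyed_noise(key, i, sigma) for i, p in enumerate(perturbed)]
assert np.allclose(recovered, readings)        # exact cancellation with the key
```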
Nowadays we are witnessing an unprecedented evolution in how we gather and process information. Technological advances in mobile devices, together with ubiquitous wireless connectivity, have brought about new information processing paradigms and opportunities for virtually all kinds of scientific and business activity. These new paradigms rest on three pillars: i) numerous powerful portable devices operated by human intelligence, ubiquitous in space and available most of the time; ii) the extensive environment-sensing capabilities of these devices; and iii) fast networks connecting the devices to Internet information processing platforms and services. These pillars implement the concepts of crowdsourcing and collective intelligence, which describe online services that build on the massive participation of users and the capabilities of their devices in order to produce results and information that are "more than the sum of the parts". The EU project Privacy Flag relies on exactly these two concepts to mobilize roaming citizens to contribute, through crowdsourcing, information about risky applications and dangerous web sites, whose processing may produce emergent threat patterns not evident in the contributed information alone, reflecting a collective intelligence action. Crowdsourcing and collective intelligence, in this context, have numerous advantages, such as raising privacy awareness among people. In this paper we summarize our work in this project and describe the capabilities and functionalities of the Privacy Flag Platform.
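As a toy illustration of the collective intelligence idea (this is not the Privacy Flag Platform's actual design), the sketch below shows how many individually weak reports can be aggregated into an emergent threat signal: a single report says little, but several independent reporters flagging the same target do.

```python
from collections import defaultdict

class CrowdRiskMonitor:
    """Toy aggregator in the spirit of crowdsourced privacy monitoring:
    targets flagged by enough DISTINCT reporters surface as emergent
    threats, a pattern invisible in any one contribution alone."""
    def __init__(self, min_reporters: int = 5):
        self.min_reporters = min_reporters
        self.reports = defaultdict(set)   # target -> set of reporter ids

    def report(self, reporter_id: str, target: str):
        self.reports[target].add(reporter_id)

    def emergent_threats(self) -> list:
        return [t for t, who in self.reports.items()
                if len(who) >= self.min_reporters]

monitor = CrowdRiskMonitor(min_reporters=3)
for user in ("u1", "u2", "u3"):
    monitor.report(user, "tracker.example.com")
monitor.report("u1", "benign.example.org")
print(monitor.emergent_threats())         # ['tracker.example.com']
```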
Sharing cyber security data across organizational boundaries brings both privacy risks, in the exposure of personal information and data, and organizational risk, in the disclosure of internal information. These risks arise from information leaks in network traffic or logs, and also from queries made across organizations. They are further complicated by the trade-off between privacy preservation and utility inherent in anonymization as a means of managing disclosure. In this paper, we define three principles that guide the sharing of security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security threats with shared information.
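For example, one way to apply Least Disclosure when logs cross an organizational boundary is to pseudonymize internal identifiers before release. The sketch below is a hypothetical illustration, not the paper's system: it replaces IP addresses with keyed HMAC pseudonyms, which keep cross-record linkability for the receiving analyst without revealing the addresses themselves.

```python
import hmac
import hashlib
import re

ORG_KEY = b"per-organization secret"  # hypothetical; managed by sharing policy

def pseudonymize_ip(ip: str) -> str:
    """Keyed pseudonym: the same address always maps to the same token
    (so analysts can still correlate events across records), but the
    internal address itself is never disclosed."""
    tag = hmac.new(ORG_KEY, ip.encode(), hashlib.sha256).hexdigest()[:12]
    return f"ip-{tag}"

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize_log_line(line: str) -> str:
    return IP_RE.sub(lambda m: pseudonymize_ip(m.group()), line)

print(sanitize_log_line("2024-05-01 10.0.0.7 -> 10.0.0.9 blocked: port scan"))
```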
Today's systems produce a rapidly exploding amount of data, and this data in turn derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data, including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, a process we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming the learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations, asymptotically faster than retraining from scratch. Our approach is general, because the summation form derives from statistical query learning, in which many machine learning algorithms can be expressed. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
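To illustrate the summation form, the sketch below keeps a Bernoulli naive Bayes model (one of the algorithm families expressible through statistical queries) purely as counts; unlearning a sample then subtracts its contribution from a handful of summations instead of retraining over the full data set. The class and method names are illustrative, not the paper's implementation.

```python
from collections import defaultdict

class UnlearnableNB:
    """Naive Bayes in summation form: the model is a set of counts, so
    forgetting one training sample just subtracts its contribution."""
    def __init__(self):
        self.class_count = defaultdict(int)                  # sum over samples
        self.feat_count = defaultdict(lambda: defaultdict(int))

    def learn(self, features, label):
        self.class_count[label] += 1
        for f in features:
            self.feat_count[label][f] += 1

    def unlearn(self, features, label):
        """Constant-time per feature: no pass over the training set."""
        self.class_count[label] -= 1
        for f in features:
            self.feat_count[label][f] -= 1

    def predict(self, features):
        def score(label):
            n = self.class_count[label]
            s = n  # unnormalized class prior
            for f in features:
                s *= (self.feat_count[label][f] + 1) / (n + 2)  # Laplace smoothing
            return s
        return max(self.class_count, key=score)

nb = UnlearnableNB()
nb.learn({"free", "winner"}, "spam")
nb.learn({"meeting", "agenda"}, "ham")
nb.learn({"free", "meeting"}, "ham")
nb.unlearn({"free", "meeting"}, "ham")   # forget a (possibly polluted) sample
print(nb.predict({"free"}))              # 'spam'
```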