Biblio

Filters: Keyword is privacy risks
2020-07-13
Abur, Maria M., Junaidu, Sahalu B., Obiniyi, Afolayan A., Abdullahi, Saleh E..  2019.  Privacy Token Technique for Protecting User’s Attributes in a Federated Identity Management System for the Cloud Environment. 2019 2nd International Conference of the IEEE Nigeria Computer Chapter (NigeriaComputConf). :1–10.
Once an individual uses the Internet to access information, carry out transactions, and share data on the Cloud, they are connected to diverse computers on the network. As such, the security of the transmitted data is under threat, potentially creating privacy risks for users of the federated identity management system in the Cloud. Usually, a user's attributes, or Personally Identifiable Information (PII), are needed to access services on the Cloud from different Service Providers (SPs). Sometimes these SPs may themselves violate the user's privacy by reusing, without consent, the attributes offered to them for the release of services, carrying out activities that may be malicious and cause damage to the user. Moreover, sensitive user attributes (e.g. first name, email, address and the like) are received by the SPs in their original plaintext form. As a result, the user's privacy can be violated, since these SPs may reuse the attributes or collude with other SPs to expose the user's identity in the cloud environment. This research is motivated to provide a novel protective approach that no longer releases the user's original attributes to SPs, but pseudonyms that prevent the SPs from violating the user's privacy, whether through collusion to expose the user's identity or by other means. The paper introduces a conceptual framework for the proposed protection of user attributes in a federated identity management system for the cloud. The proposed system employs a pseudonymisation technique called the Privacy Token (PT), which ensures that the user's original attribute values are not sent directly to the SP, but auto-generated pseudo attribute values instead. The PT is composed of pseudo attribute values, a timestamp, and the SP ID. This composition makes it difficult for the user's PII to be revealed, and further prevents the SPs from keeping or reusing the attributes in the future, for any purpose, without the user's consent. Another important feature of the PT is its ability to forestall collusion among several collaborating service providers, since each SP receives pseudo values that have no direct link to the identity of the user. The prototype was implemented in the Java programming language and its performance tested in a CloudAnalyst simulation.
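The abstract does not spell out how the pseudo attribute values are derived, but the stated composition (pseudo attribute values, timestamp, SP ID) suggests a keyed one-way mapping computed per service provider. A minimal sketch, assuming HMAC-based pseudonym derivation from a user-held secret (the function and field names are hypothetical, not the paper's construction):

```python
import hmac
import hashlib
import time

def make_privacy_token(attributes: dict, sp_id: str, user_secret: bytes) -> dict:
    """Derive per-SP pseudo attribute values so the SP never sees plaintext PII.
    HMAC keyed with a user-held secret is an assumption here; the abstract
    only states that pseudo values are auto-generated."""
    pseudo_attrs = {
        name: hmac.new(user_secret, f"{sp_id}:{name}:{value}".encode(),
                       hashlib.sha256).hexdigest()
        for name, value in attributes.items()
    }
    return {
        "pseudo_attributes": pseudo_attrs,  # no direct link to the real PII
        "timestamp": int(time.time()),      # limits replay/reuse of the token
        "sp_id": sp_id,                     # binds the token to a single SP
    }

# Two SPs receive unlinkable tokens for the same user, frustrating collusion.
pii = {"first_name": "Ada", "email": "ada@example.org"}
secret = b"user-held-secret"
token_a = make_privacy_token(pii, "sp-alpha", secret)
token_b = make_privacy_token(pii, "sp-beta", secret)
assert token_a["pseudo_attributes"] != token_b["pseudo_attributes"]
```

Because each token is derived per SP, two colluding SPs cannot match their pseudo values to link the same user, which is consistent with the collusion-resistance property claimed above.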
2020-07-10
Reshmi, T. S., Daniel Madan Raja, S.  2019.  A Review on Self Destructing Data: Solution for Privacy Risks in OSNs. 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS). :231–235.

Online Social Networks (OSNs) play a vital role in our day-to-day life. The most popular social network, Facebook, alone currently counts 2.23 billion users worldwide. Online social network users are aware of the various security risks that exist in this setting, including privacy violations, and they utilize the privacy settings provided by OSN providers to keep their data safe. But most of them are unaware of the risk that persists after the deletion of their data, which is not really deleted from the OSN server. Self-destruction of data is one of the prime recommended methods to achieve assured deletion of data. Numerous techniques have been developed for self-destruction of data, and this paper discusses and evaluates these techniques along with the various privacy risks faced by an OSN user in this web-centered world.
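The review covers several constructions; a common pattern in this literature (in the style of Vanish-like schemes) encrypts the data and lets the key expire, so deletion is assured even if the OSN server retains the ciphertext. A toy sketch under that assumption, using the third-party cryptography package:

```python
import time
from cryptography.fernet import Fernet  # third-party: pip install cryptography

class SelfDestructingData:
    """Assured deletion by key expiry: once the key is discarded, the stored
    ciphertext is unrecoverable even if the OSN server keeps a copy. A toy
    sketch; real schemes distribute key shares (e.g. in a DHT) so that
    expiry requires no trusted party."""

    def __init__(self, plaintext: bytes, ttl_seconds: float):
        self._key = Fernet.generate_key()
        self.ciphertext = Fernet(self._key).encrypt(plaintext)  # server-side copy
        self._expires_at = time.time() + ttl_seconds

    def read(self) -> bytes:
        if time.time() >= self._expires_at:
            self._key = None  # destroy the key; only ciphertext remains
            raise PermissionError("data has self-destructed")
        return Fernet(self._key).decrypt(self.ciphertext)

post = SelfDestructingData(b"ephemeral status update", ttl_seconds=1.0)
print(post.read())   # readable before expiry
time.sleep(1.1)      # after this, post.read() raises PermissionError
```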

2019-06-24
Wang, J., Zhang, X., Zhang, H., Lin, H., Tode, H., Pan, M., Han, Z..  2018.  Data-Driven Optimization for Utility Providers with Differential Privacy of Users' Energy Profile. 2018 IEEE Global Communications Conference (GLOBECOM). :1–6.

Smart meters migrate the conventional electricity grid into a digitally enabled Smart Grid (SG), which is more reliable and efficient. The fine-grained energy consumption data collected by smart meters helps utility providers accurately predict users' demands and significantly reduce power generation cost, while it imposes severe privacy risks on consumers and may discourage them from using those "espionage meters". To enjoy the benefits of smart meter measured data without compromising the users' privacy, in this paper, we integrate distributed differential privacy (DDP) techniques into data-driven optimization, and propose a novel scheme that not only minimizes the cost for utility providers but also preserves the DDP of users' energy profiles. Briefly, we add differentially private noise to the users' energy consumption data before the smart meters send it to the utility provider. Due to the uncertainty of the users' demand distribution, the utility provider aggregates a given set of historical users' differentially private data, estimates the users' demands, and formulates the data-driven cost minimization based on the collected noisy data. We also develop algorithms for feasible solutions, and verify the effectiveness of the proposed scheme through simulations using simulated energy consumption data generated from the utility company's real data analysis.
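As a rough illustration of the per-meter noise step, the Laplace mechanism below adds noise scaled to sensitivity/ε before a reading is transmitted; the paper's exact noise distribution, sensitivity, and parameter values are assumptions here:

```python
import numpy as np

def privatize_reading(kwh: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Laplace mechanism: add zero-mean noise with scale sensitivity/epsilon
    before the smart meter transmits the reading to the utility provider."""
    return kwh + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
true_readings = np.array([1.2, 0.8, 3.5, 2.1])  # hourly kWh, hypothetical
noisy = [privatize_reading(r, sensitivity=4.0, epsilon=0.5, rng=rng)
         for r in true_readings]

# The provider aggregates many noisy readings: per-user noise obscures
# individual profiles while the sum still estimates aggregate demand.
print(noisy, sum(noisy), sum(true_readings))
```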

2019-03-28
Ambassa, P. L., Kayem, A. V. D. M., Wolthusen, S. D., Meinel, C..  2018.  Privacy Risks in Resource Constrained Smart Micro-Grids. 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA). :527–532.

In rural/remote areas, resource constrained smart micro-grid (RCSMG) architectures can offer a cost-effective power management and supply alternative to national power grid connections. RCSMG architectures handle communications over distributed lossy networks to minimize operation costs. However, the unreliable nature of lossy networks makes privacy an important consideration. Existing anonymisation work on data perturbation operates mainly by distortion with additive noise. Applying these solutions to RCSMGs is problematic, because deliberate noise additions must be distinguishable from both system-generated and adversarially generated noise. In this paper, we present a brief survey of privacy risks in RCSMGs centered on inference, and propose a method of mitigating these risks. The lesson here is that while RCSMGs give users more control over power management and distribution, good anonymisation is essential to protecting personal information on RCSMGs.
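One way deliberate noise can be made distinguishable, sketched below purely as an illustration (this is an assumption, not the mitigation the paper proposes), is to derive it from a seed shared only with a trusted aggregator, which can then regenerate and remove it:

```python
import numpy as np

def perturb(readings: np.ndarray, seed: int, sigma: float = 0.3) -> np.ndarray:
    """Additive-noise perturbation where the deliberate noise is reproducible
    from a seed shared only with the trusted aggregator, so it can be told
    apart from lossy-network or adversarial noise."""
    rng = np.random.default_rng(seed)
    return readings + rng.normal(0.0, sigma, size=len(readings))

def aggregator_denoise(perturbed: np.ndarray, seed: int,
                       sigma: float = 0.3) -> np.ndarray:
    # The aggregator regenerates the identical deliberate noise and removes
    # it; any residual deviation is then system or adversarial noise.
    rng = np.random.default_rng(seed)
    return perturbed - rng.normal(0.0, sigma, size=len(perturbed))

readings = np.array([0.9, 1.4, 0.7])
noisy = perturb(readings, seed=42)
recovered = aggregator_denoise(noisy, seed=42)
assert np.allclose(recovered, readings)
```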

2018-05-30
Vlachos, Vasileios, Stamatiou, Yannis C., Madhja, Adelina, Nikoletseas, Sotiris.  2017.  Privacy Flag: A Crowdsourcing Platform for Reporting and Managing Privacy and Security Risks. Proceedings of the 21st Pan-Hellenic Conference on Informatics. :27:1–27:4.

Nowadays we are witnessing an unprecedented evolution in how we gather and process information. Technological advances in mobile devices as well as ubiquitous wireless connectivity have brought about new information processing paradigms and opportunities for virtually all kinds of scientific and business activity. These new paradigms rest on three pillars: i) numerous powerful portable devices operated by human intelligence, ubiquitous in space and available most of the time, ii) unlimited environment sensing capabilities of the devices, and iii) fast networks connecting the devices to Internet information processing platforms and services. These pillars implement the concepts of crowdsourcing and collective intelligence: online services based on the massive participation of users and the capabilities of their devices, in order to produce results and information that are "more than the sum of the parts". The EU project Privacy Flag relies exactly on these two concepts in order to mobilize roaming citizens to contribute, through crowdsourcing, information about risky applications and dangerous web sites whose processing may produce emergent threat patterns not evident in the contributed information alone, reflecting a collective intelligence action. Crowdsourcing and collective intelligence, in this context, have numerous advantages, such as raising privacy awareness among people. In this paper we summarize our work in this project and describe the capabilities and functionalities of the Privacy Flag Platform.
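Purely as an illustration of how many small contributions can yield an emergent pattern, the sketch below flags a target only once enough independent crowdsourced reports accumulate; the Privacy Flag platform's actual aggregation logic and thresholds are not described in the abstract:

```python
from collections import Counter

def emergent_threats(reports: list[dict], min_reports: int = 50) -> set[str]:
    """Flag an app or site only once enough independent reports accumulate,
    a pattern no single contribution reveals on its own. Illustrative only;
    field names and the threshold are hypothetical."""
    counts = Counter(report["target"] for report in reports)
    return {target for target, n in counts.items() if n >= min_reports}

reports = ([{"target": "apps.example/flashlight"}] * 60
           + [{"target": "sites.example/quiz"}] * 10)
print(emergent_threats(reports))  # only the heavily reported app is flagged
```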

2017-02-23
Fisk, G., Ardi, C., Pickett, N., Heidemann, J., Fisk, M., Papadopoulos, C..  2015.  Privacy Principles for Sharing Cyber Security Data. 2015 IEEE Security and Privacy Workshops. :193–197.

Sharing cyber security data across organizational boundaries brings both privacy risks in the exposure of personal information and data, and organizational risk in disclosing internal information. These risks occur as information leaks in network traffic or logs, and also in queries made across organizations. They are also complicated by the trade-offs in privacy preservation and utility present in anonymization to manage disclosure. In this paper, we define three principles that guide sharing security information across organizations: Least Disclosure, Qualitative Evaluation, and Forward Progress. We then discuss engineering approaches that apply these principles to a distributed security system. Application of these principles can reduce the risk of data exposure and help manage trust requirements for data sharing, helping to meet our goal of balancing privacy, organizational risk, and the ability to better respond to security with shared information.
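The Least Disclosure principle, for instance, can be approximated by whitelisting the fields a cross-organization query actually needs and pseudonymising identifiers before release. The sketch below is illustrative, with hypothetical field names, and is not the paper's system:

```python
import hashlib

# Fields deemed safe to share across the organizational boundary (assumed).
SHAREABLE_FIELDS = {"timestamp", "dst_port", "alert_type"}

def least_disclosure(record: dict, org_salt: bytes) -> dict:
    """Release only the fields a query needs; replace the source IP with a
    keyed digest so repeated sightings stay correlatable across records
    without exposing the address or the username."""
    shared = {k: v for k, v in record.items() if k in SHAREABLE_FIELDS}
    digest = hashlib.sha256(org_salt + record["src_ip"].encode()).hexdigest()
    shared["src_id"] = digest[:12]
    return shared

log_entry = {"timestamp": "2015-05-21T10:02:11Z", "src_ip": "203.0.113.7",
             "dst_port": 443, "alert_type": "scan", "username": "jdoe"}
print(least_disclosure(log_entry, org_salt=b"per-org secret"))
```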

Cao, Y., Yang, J..  2015.  Towards Making Systems Forget with Machine Unlearning. 2015 IEEE Symposium on Security and Privacy. :463–480.

Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general, because the summation form comes from statistical query learning, in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
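To make the summation idea concrete, the sketch below keeps least-squares regression as two sufficient-statistic sums and forgets a sample by subtracting its contribution; this is one simple instance of the summation form, while the paper treats the general statistical-query setting:

```python
import numpy as np

class UnlearnableLinearRegression:
    """Least squares kept as summations S_xx = sum(x x^T), S_xy = sum(y x).
    Forgetting a sample subtracts its terms from the two sums and re-solves:
    O(d^2) per deletion instead of retraining over all n samples."""

    def __init__(self, dim: int):
        self.S_xx = np.zeros((dim, dim))
        self.S_xy = np.zeros(dim)

    def learn(self, x: np.ndarray, y: float):
        self.S_xx += np.outer(x, x)
        self.S_xy += y * x

    def unlearn(self, x: np.ndarray, y: float):
        self.S_xx -= np.outer(x, x)   # remove exactly this sample's terms
        self.S_xy -= y * x

    def weights(self) -> np.ndarray:
        return np.linalg.solve(self.S_xx, self.S_xy)

model = UnlearnableLinearRegression(dim=2)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
Y = X @ np.array([2.0, -1.0])
for x, y in zip(X, Y):
    model.learn(x, y)

poisoned = (np.array([5.0, 5.0]), 100.0)  # e.g. an injected training point
model.learn(*poisoned)
model.unlearn(*poisoned)                  # forget it without retraining
print(model.weights())                    # back to approximately [2, -1]
```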