Biblio

Filters: Keyword is privacy models and measurement
2020-04-20
Lefebvre, Dimitri, Hadjicostis, Christoforos N.  2019.  Trajectory-observers of timed stochastic discrete event systems: Applications to privacy analysis. 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT). :1078–1083.
Various aspects of security and privacy in many application domains can be assessed based on proper analysis of successive measurements that are collected on a given system. This work is devoted to such issues in the context of timed stochastic Petri net models. We assume that certain events and part of the marking trajectories are observable to adversaries who aim to determine when the system is performing secret operations, such as time intervals during which the system is executing certain critical sequences of events (as captured, for instance, in language-based opacity formulations). The combined use of the k-step trajectory-observer and the Markov model of the stochastic Petri net leads to probabilistic indicators helpful for evaluating language-based opacity of the given system, related timing aspects, and possible strategies to improve them.
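The paper's indicators combine a k-step trajectory-observer with the Markov model of the net. As a loose illustration of the probabilistic-indicator idea (not the paper's construction), a hidden-Markov-style forward pass can track how much probability mass the observer assigns to secret states after each observed event; the transition matrix T, emission matrix E, and secret-state set below are hypothetical.
```python
import numpy as np

def secret_state_belief(T, E, pi, obs, secret_states):
    """Forward pass over a Markov model: after each observed event, return the
    probability mass assigned to the secret states (an opacity-style indicator).
    T[i, j] = P(next state j | state i); E[i, o] = P(observe o | state i)."""
    alpha = pi * E[:, obs[0]]
    alpha /= alpha.sum()
    indicators = [alpha[secret_states].sum()]
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
        alpha /= alpha.sum()
        indicators.append(alpha[secret_states].sum())
    return indicators

# Hypothetical 3-state model in which state 2 performs the secret operation.
T = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]])
E = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(secret_state_belief(T, E, np.array([1/3, 1/3, 1/3]), [0, 1, 1], [2]))
```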
Takbiri, Nazanin, Shao, Xiaozhe, Gao, Lixin, Pishro-Nik, Hossein.  2019.  Improving Privacy in Graphs Through Node Addition. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :487–494.
The rapid growth of computer systems that generate graph data necessitates privacy-preserving mechanisms to protect users' identities. Since structure-based de-anonymization attacks can reveal users' identities even when the graph is anonymized by naïve ID removal, k-anonymity has recently been proposed to secure users' privacy against structure-based attacks. Most prior work ensures graph privacy using fake edges; however, in some applications, edge addition or deletion might significantly change key properties of the graph. Motivated by this fact, in this paper we introduce a novel method which ensures privacy by adding fake nodes to the graph. First, we present a novel model which provides k-anonymity against one of the strongest attacks: the seed-based attack, in which the adversary knows a partial mapping between the main graph and the graph generated by the privacy-preserving mechanism. We show that even if the adversary knows the mapping of all nodes except one, the last node can still have k-anonymity privacy. Then, we turn our attention to the privacy of graphs generated by inter-domain routing against degree attacks, in which the degree sequence of the graph is known to the adversary. To ensure the privacy of networks against this attack, we propose a novel method which adds fake nodes such that the degrees of all nodes have the same expected value.
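A minimal sketch of the degree-attack defence (networkx assumed): check that every degree value is shared by at least k nodes, and naively pad deficient degree classes with fake nodes wired only to each other, so real nodes' degrees stay untouched. This is a simplification; the paper instead targets expected degrees under a seed-based adversary.
```python
import networkx as nx
from collections import Counter

def is_k_degree_anonymous(G, k):
    """Every degree value must be shared by at least k nodes."""
    counts = Counter(d for _, d in G.degree())
    return all(c >= k for c in counts.values())

def pad_with_fake_nodes(G, k):
    """Naive padding: for each deficient degree value d, add a d-regular cluster
    of fake nodes connected only to one another, leaving real degrees unchanged."""
    counts = Counter(d for _, d in G.degree())
    for d, c in list(counts.items()):
        if c < k:
            n_fake = max(k - c, d + 1)      # a d-regular graph needs n > d
            if n_fake * d % 2:              # ... and n * d even
                n_fake += 1
            G = nx.disjoint_union(G, nx.random_regular_graph(d, n_fake))
    return G

G = pad_with_fake_nodes(nx.barabasi_albert_graph(50, 2, seed=1), k=3)
print(is_k_degree_anonymous(G, 3))
```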
Djoudi, Aghiles, Pujolle, Guy.  2019.  Social Privacy Score Through Vulnerability Contagion Process. 2019 Fifth Conference on Mobile and Secure Services (MobiSecServ). :1–6.
The exponential growth in the use of messaging services raises many privacy questions. Privacy issues in such services strongly depend on the graph-theoretic properties of users' interactions, which represent the real friendships between users. One of the most important issues is that users may disclose information about other users beyond the scope of an interaction, without realizing that such information could be aggregated to reveal sensitive information. Distinguishing vulnerable interactions from non-vulnerable ones is difficult due to the lack of awareness mechanisms. To address this problem, we analyze the topological relationships together with the level of trust between users in order to notify each user of their vulnerable social interactions. In particular, we analyze the impact of trusting vulnerable friends on the spread of privacy vulnerability by modeling a new vulnerability contagion process. Simulation results show that over-trusting vulnerable users speeds up the vulnerability diffusion process through the network. Furthermore, vulnerable users with a high reputation level lead to a high convergence level of infection; that is, the contagion process infects the largest number of users when vulnerable users receive a high level of trust from their interlocutors. This work contributes to the development of a privacy-awareness framework that can alert users to potential leakages of private information in their communications.
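As a rough sketch of such a contagion process (the paper's exact dynamics are not reproduced here), each round a non-vulnerable node becomes vulnerable with probability given by how much it trusts an already-vulnerable neighbour:
```python
import numpy as np

def simulate_vulnerability_contagion(adj, trust, seeds, rounds=20, rng=None):
    """adj[u][v] = 1 if u and v interact; trust[u][v] = how much u trusts v (0..1).
    Node u is infected when the contagion fires across a vulnerable neighbour v
    with probability trust[u][v]."""
    if rng is None:
        rng = np.random.default_rng(0)
    vulnerable = np.zeros(len(adj), dtype=bool)
    vulnerable[list(seeds)] = True
    for _ in range(rounds):
        nxt = vulnerable.copy()
        for u in np.where(~vulnerable)[0]:
            for v in np.where(adj[u] & vulnerable)[0]:
                if rng.random() < trust[u, v]:
                    nxt[u] = True
                    break
        vulnerable = nxt
    return vulnerable

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]])
trust = np.full((4, 4), 0.6)   # over-trusting network: diffusion converges quickly
print(simulate_vulnerability_contagion(adj, trust, seeds=[0]))
```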
Lecuyer, Mathias, Atlidakis, Vaggelis, Geambasu, Roxana, Hsu, Daniel, Jana, Suman.  2019.  Certified Robustness to Adversarial Examples with Differential Privacy. 2019 IEEE Symposium on Security and Privacy (SP). :656–672.
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses has been introduced which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired privacy formalism, which provides a rigorous, generic, and flexible foundation for defense.
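PixelDP's noise layer is calibrated with the standard Gaussian mechanism of differential privacy, with the noise scale grown by the norm bound of the attack, and predictions are averaged over noise draws. A minimal numpy sketch (the layer placement, sensitivity value, and toy model are placeholders):
```python
import numpy as np

def gaussian_mechanism_sigma(l2_sensitivity, attack_bound, epsilon, delta):
    """Standard Gaussian-mechanism noise scale, scaled up by the L2 attack bound."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity * attack_bound / epsilon

def noisy_predict(model, x, sigma, n_draws=100, rng=None):
    """Monte Carlo prediction: average the model's scores over noise draws,
    as PixelDP does to obtain its certified expectation bounds."""
    if rng is None:
        rng = np.random.default_rng(0)
    scores = [model(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n_draws)]
    return np.mean(scores, axis=0)

x = np.ones(10)
sigma = gaussian_mechanism_sigma(l2_sensitivity=1.0, attack_bound=0.3,
                                 epsilon=1.0, delta=1e-5)
print(noisy_predict(lambda v: v, x, sigma)[:3])   # toy "model" is the identity
```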
Khan, Muhammad Imran, Foley, Simon N., O'Sullivan, Barry.  2019.  PriDe: A Quantitative Measure of Privacy-Loss in Interactive Querying Settings. 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–5.
This paper presents PriDe, a model that measures the deviation of an analyst's (user's) querying behaviour from normal querying behaviour. The deviation is measured in terms of privacy, that is to say, how much privacy loss is incurred by the shift in querying behaviour. The shift is represented as a privacy-loss score: the higher the score, the greater the loss in privacy. The querying behaviour of analysts is modelled using n-grams of SQL queries, from which behavioural profiles are constructed. Profiles are then compared in terms of privacy, resulting in a quantified score indicating the privacy loss.
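A minimal sketch of the profiling step: n-grams over tokenized SQL queries, with a simple distance between profiles. PriDe's actual privacy-loss score is not specified here, so total-variation distance stands in:
```python
from collections import Counter

def ngram_profile(queries, n=2):
    """Frequency profile of token n-grams over a user's SQL queries."""
    grams = Counter()
    for q in queries:
        tokens = q.lower().split()
        grams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values()) or 1
    return {g: c / total for g, c in grams.items()}

def deviation_score(baseline, observed):
    """Total-variation distance between profiles: 0 = identical behaviour,
    1 = completely disjoint. A stand-in for PriDe's privacy-loss score."""
    keys = baseline.keys() | observed.keys()
    return 0.5 * sum(abs(baseline.get(g, 0) - observed.get(g, 0)) for g in keys)

normal = ngram_profile(["SELECT name FROM staff WHERE dept = ?"])
drift = ngram_profile(["SELECT salary FROM staff", "SELECT ssn FROM staff"])
print(deviation_score(normal, drift))
```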
Xiao, Tianrui, Khisti, Ashish.  2019.  Maximal Information Leakage based Privacy Preserving Data Disclosure Mechanisms. 2019 16th Canadian Workshop on Information Theory (CWIT). :1–6.
It is often necessary to disclose training data to the public domain while protecting the privacy of certain sensitive labels. We use information-theoretic measures to develop such privacy-preserving data disclosure mechanisms. Our mechanism involves perturbing the data vectors to strike a balance in the privacy-utility trade-off. We use maximal information leakage between the output data vector and the confidential label as our privacy metric. We first study the theoretical Bernoulli-Gaussian model and characterize the privacy-utility trade-off when only the mean of the Gaussian distributions can be perturbed. We show that the optimal solution is the same as in the case where utility is measured using the probability of error at the adversary. We then apply this framework to a data-driven setting and provide an empirical approximation to the Sibson mutual information. Through experiments on the MNIST and FERG data sets, we show that our proposed framework achieves equivalent or better privacy than previous methods based on mutual information.
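For a known channel, maximal information leakage has the closed form L(X→Y) = log Σ_y max_x P(y|x), which can also be approximated empirically from an estimated channel matrix:
```python
import numpy as np

def maximal_leakage(p_y_given_x):
    """L(X -> Y) = log sum_y max_x P(y|x), in nats.
    p_y_given_x[i, j] = P(Y = j | X = i)."""
    return float(np.log(p_y_given_x.max(axis=0).sum()))

# Binary symmetric channel with crossover 0.1: leakage below log 2 (full disclosure).
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(maximal_leakage(bsc))   # log(0.9 + 0.9) = log 1.8, about 0.588 nats
```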
Yuan, Jing, Ou, Yuyi, Gu, Guosheng.  2019.  An Improved Privacy Protection Method Based on k-degree Anonymity in Social Network. 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :416–420.
To preserve the privacy of social networks, most existing methods are designed to satisfy various anonymity models, but they suffer from serious problems such as large information losses and substantial structural modifications to the original network. Therefore, an improved privacy protection method called k-subgraph is proposed; it is based on k-degree anonymity, derived from k-anonymity, and keeps the network structure stable. The method first divides network nodes into several clusters using a label propagation algorithm, and then reconstructs the subgraph by moving edges to achieve k-degree anonymity. Experimental results show that the k-subgraph method not only effectively improves the defense capability against malicious attacks based on node degrees, but also maintains the stability of the network structure. In addition, the information loss incurred by anonymization is kept close to the ideal minimum.
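The clustering step is available off the shelf; the edge-moving reconstruction is the paper's contribution and is not reproduced here. A sketch that forms the label-propagation clusters and audits which degree values still violate k-degree anonymity:
```python
import networkx as nx
from collections import Counter
from networkx.algorithms.community import label_propagation_communities

def cluster_and_audit(G, k):
    """Cluster nodes with label propagation, then list the degree values held by
    fewer than k nodes (the violations the edge-moving step would need to fix)."""
    clusters = list(label_propagation_communities(G))
    counts = Counter(d for _, d in G.degree())
    violations = sorted(d for d, c in counts.items() if c < k)
    return clusters, violations

G = nx.karate_club_graph()
clusters, violations = cluster_and_audit(G, k=3)
print(len(clusters), violations)
```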
Xiang, Wei.  2019.  An Efficient Location Privacy Preserving Model based on Geohash. 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC). :1–5.
With the rapid development of location-aware mobile devices, location-based services (LBS) have been widely adopted. While LBS bring great convenience and profit, they also introduce hidden risks, among which user privacy is a principal concern. This paper builds an LBS privacy protection model and develops an algorithm based on one-dimensional Geohash encoding of geographic information. Experiments and measurements show that the model achieves the k-anonymity effect and performs well at resisting attacks that exploit information leaked from continuous queries by an adversary with background knowledge about the user. It also performs well in terms of system processing time.
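Geohash interleaves longitude and latitude bits and base32-encodes them, so truncating a hash coarsens the reported cell; a self-contained encoder (the precision values below are illustrative):
```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=6):
    """Standard Geohash: alternate longitude/latitude bisection, 5 bits per character.
    Shorter prefixes name larger cells, which is what yields spatial k-anonymity."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, bits, ch, use_lon = [], 0, 0, True
    while len(code) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        use_lon = not use_lon
        bits += 1
        if bits == 5:
            code.append(_BASE32[ch])
            bits, ch = 0, 0
    return "".join(code)

print(geohash_encode(48.8583, 2.2945))       # Eiffel Tower -> a roughly 1.2 km cell
print(geohash_encode(48.8583, 2.2945)[:4])   # truncation -> a roughly 39 km cell
```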
Kundu, Suprateek, Suthaharan, Shan.  2019.  Privacy-Preserving Predictive Model Using Factor Analysis for Neuroscience Applications. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :67–73.
The purpose of this article is to present an algorithm which maximizes prediction accuracy under a linear regression model while preserving data privacy. This approach anonymizes the data such that the privacy of the original features is fully guaranteed, and the deterioration in predictive accuracy using the anonymized data is minimal. The proposed algorithm employs two stages: the first stage uses a probabilistic latent factor approach to anonymize the original features into a collection of lower dimensional latent factors, while the second stage uses an optimization algorithm to tune the anonymized data further, in a way which ensures a minimal loss in prediction accuracy under the predictive approach specified by the user. We demonstrate the advantages of our approach via numerical studies and apply our method to high-dimensional neuroimaging data where the goal is to predict the behavior of adolescents and teenagers based on functional magnetic resonance imaging (fMRI) measurements.
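A minimal sketch of the two-stage idea with scikit-learn on synthetic stand-in data; the paper's second-stage tuning optimization is not reproduced, so a plain regression on the latent factors stands in:
```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                      # stand-in for fMRI features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)

# Stage 1: anonymize the original features into lower-dimensional latent factors.
fa = FactorAnalysis(n_components=10, random_state=0)
Z = fa.fit_transform(X)            # Z is released; the original X never leaves the site

# Stage 2: fit the user-specified predictive model on the anonymized data.
model = LinearRegression().fit(Z, y)
print(model.score(Z, y))           # predictive accuracy retained on the anonymized data
```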
Zhang, Xue, Yan, Wei Qi.  2018.  Comparative Evaluations of Privacy on Digital Images. 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). :1–6.
Privacy preservation on social networks is nowadays a societal issue. In this paper, our contribution is to establish a model for privacy preservation. We use differential privacy for personal privacy analysis and measurement. Our conclusion is that privacy can be measured and preserved if the corresponding approaches are taken.
To, Hien, Shahabi, Cyrus, Xiong, Li.  2018.  Privacy-Preserving Online Task Assignment in Spatial Crowdsourcing with Untrusted Server. 2018 IEEE 34th International Conference on Data Engineering (ICDE). :833–844.
With spatial crowdsourcing (SC), requesters outsource their spatiotemporal tasks (tasks associated with location and time) to a set of workers, who will perform the tasks by physically traveling to the tasks' locations. However, current solutions require the locations of the workers and/or the tasks to be disclosed to untrusted parties (SC server) for effective assignments of tasks to workers. In this paper we propose a framework for assigning tasks to workers in an online manner without compromising the location privacy of workers and tasks. We perturb the locations of both tasks and workers based on geo-indistinguishability and then devise techniques to quantify the probability of reachability between a task and a worker, given their perturbed locations. We investigate both analytical and empirical models for quantifying the worker-task pair reachability and propose task assignment strategies that strike a balance among various metrics such as the number of completed tasks, worker travel distance and system overhead. Extensive experiments on real-world datasets show that our proposed techniques result in minimal disclosure of task locations and no disclosure of worker locations without significantly sacrificing the total number of assigned tasks.
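The perturbation step follows geo-indistinguishability's planar Laplace mechanism, whose radial distribution can be inverted in closed form with the Lambert W function; a sketch that treats coordinates as planar (the unit conversion is a placeholder):
```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon, rng=None):
    """Sample a perturbed point: angle uniform, radius drawn from the planar
    Laplace radial CDF inverted via the Lambert W function (branch -1)."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    p = rng.uniform(0.0, 1.0)
    r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / epsilon
    return x + r * np.cos(theta), y + r * np.sin(theta)

# epsilon in 1/metres: smaller epsilon -> wider noise -> stronger location privacy.
print(planar_laplace(0.0, 0.0, epsilon=0.01, rng=np.random.default_rng(1)))
```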
Wang, Chong Xiao, Song, Yang, Tay, Wee Peng.  2018.  Preserving Parameter Privacy in Sensor Networks. 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :1316–1320.
We consider the problem of preserving the privacy of a set of private parameters while allowing inference of a set of public parameters based on observations from sensors in a network. We assume that the public and private parameters are correlated with the sensor observations via a linear model. We define the utility loss and privacy gain functions based on the Cramér-Rao lower bounds for estimating the public and private parameters, respectively. Our goal is to minimize the utility loss while ensuring that the privacy gain is no less than a predefined privacy gain threshold, by allowing each sensor to perturb its own observation before sending it to the fusion center. We propose methods to determine the amount of noise each sensor needs to add to its observation under the cases where prior information is available or unavailable.
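For the linear Gaussian model y = Hθ + n with n ~ N(0, Σ), the Fisher information is HᵀΣ⁻¹H and the Cramér-Rao lower bound is its inverse, so the effect of each sensor's added perturbation noise can be read off directly; a sketch with hypothetical matrices and noise levels:
```python
import numpy as np

def crlb(H, noise_cov):
    """Cramer-Rao lower bound for theta in y = H @ theta + n, n ~ N(0, noise_cov):
    the inverse of the Fisher information H^T Sigma^-1 H."""
    fisher = H.T @ np.linalg.inv(noise_cov) @ H
    return np.linalg.inv(fisher)

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))                 # 8 sensors; say theta[0] public, theta[1:] private
base = 0.1 * np.eye(8)                      # intrinsic sensor noise
added = np.diag(rng.uniform(0.0, 0.5, 8))   # per-sensor perturbation noise

before, after = crlb(H, base), crlb(H, base + added)
print(after[0, 0] - before[0, 0])           # utility loss: public-parameter bound rises
print(np.trace(after[1:, 1:]) - np.trace(before[1:, 1:]))   # privacy gain: private bounds rise
```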
Liu, Kai-Cheng, Kuo, Chuan-Wei, Liao, Wen-Chiuan, Wang, Pang-Chieh.  2018.  Optimized Data de-Identification Using Multidimensional k-Anonymity. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1610–1614.
In the globalized knowledge economy, big data analytics has been widely applied in diverse areas. A critical issue in big data analysis of personal information is the possible leakage of personal privacy. It is therefore necessary to have an anonymization-based de-identification method to avoid undesirable privacy leaks; such a method can prevent published data from being traced back to individuals. Prior empirical research has provided approaches to reduce privacy leakage risk, e.g., Maximum Distance to Average Vector (MDAV), the condensation approach, and differential privacy. However, these methods inevitably generate synthetic data of different sizes and are thus unsuitable for general use. To satisfy the need for general use, k-anonymity can be chosen as the privacy protection mechanism in the de-identification process to ensure the data are not distorted, because k-anonymity is strong at both protecting privacy and preserving data authenticity. Accordingly, this study proposes an optimized multidimensional method for anonymizing data based on both a priority weight-adjusted method and a mean-difference recommending tree method (MDR tree method). The results of this study reveal that the new method generates more reliable anonymous data and reduces the information loss rate.
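The MDR-tree method itself is not detailed in the abstract; the general shape of multidimensional k-anonymity can instead be illustrated with Mondrian-style greedy partitioning, which recursively splits on the widest attribute while both halves retain at least k records:
```python
import numpy as np

def mondrian_partitions(data, k):
    """Mondrian-style multidimensional partitioning (a stand-in for the MDR tree):
    each returned index set has at least k records and can be generalized to its
    min-max box to form one k-anonymous equivalence class."""
    parts = []

    def split(idx):
        sub = data[idx]
        for dim in np.argsort(sub.max(axis=0) - sub.min(axis=0))[::-1]:
            median = np.median(sub[:, dim])
            left, right = idx[sub[:, dim] <= median], idx[sub[:, dim] > median]
            if len(left) >= k and len(right) >= k:
                split(left)
                split(right)
                return
        parts.append(idx)          # no allowable cut: emit as one equivalence class

    split(np.arange(len(data)))
    return parts

data = np.random.default_rng(0).integers(0, 100, size=(50, 3)).astype(float)
for p in mondrian_partitions(data, k=5):
    print(len(p), data[p].min(axis=0), data[p].max(axis=0))   # generalized ranges
```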
Sule, Rupali, Chaudhari, Sangita.  2018.  Preserving Location Privacy in Geosocial Applications using Error Based Transformation. 2018 International Conference on Smart City and Emerging Technology (ICSCET). :1–4.
Geo-social applications constantly share a user's current geographic position in terms of location (latitude and longitude). Such applications let many people learn about their surroundings through their friends' locations and recommendations. But without privacy protection, these systems can easily be misused to track users. We propose an Error Based Transformation (ERB) approach to location transformation which provides significantly improved location privacy without adding uncertainty to query results or relying on strong assumptions about server security. The key insight is to apply secure, user-specific, distance-preserving coordinate transformations to all location data shared with the server. Only the friends of a user can recover exact coordinates, by applying the inverse transformation with a secret key shared with them. Servers can evaluate all location queries correctly on the transformed data. The ERB privacy mechanism guarantees that servers are unable to see or infer actual location data from the transformed data. The mechanism holds up against a powerful adversary model, and prototype measurements show that it adds very little performance overhead, making it suitable for today's mobile devices.
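ERB's exact transformation is not given in the abstract, but the distance-preserving property it relies on can be sketched as a key-derived rotation plus translation: an isometry that the key holder inverts exactly while the server still computes correct distances.
```python
import numpy as np

def _key_params(key):
    rng = np.random.default_rng(key)            # the secret key seeds the transform
    theta = rng.uniform(0.0, 2.0 * np.pi)
    offset = rng.uniform(-1e4, 1e4, size=2)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R, offset

def transform(points, key):
    """Rotation + translation: distances are preserved, so the server's range and
    nearest-neighbour queries still evaluate correctly on transformed data."""
    R, offset = _key_params(key)
    return points @ R.T + offset

def inverse(points, key):
    R, offset = _key_params(key)
    return (points - offset) @ R

pts = np.array([[40.7128, -74.0060], [40.7306, -73.9352]])
enc = transform(pts, key=1234)
assert np.allclose(inverse(enc, key=1234), pts)          # friends recover exact coords
assert np.isclose(np.linalg.norm(enc[0] - enc[1]),
                  np.linalg.norm(pts[0] - pts[1]))       # distances preserved
```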
Lim, Yeon-sup, Srivatsa, Mudhakar, Chakraborty, Supriyo, Taylor, Ian.  2018.  Learning Light-Weight Edge-Deployable Privacy Models. 2018 IEEE International Conference on Big Data (Big Data). :1290–1295.
Privacy has become one of the important issues in data-driven applications. The advent of non-PC devices, such as Internet-of-Things (IoT) devices, for data-driven applications creates a need for lightweight data anonymization. In this paper, we develop an anonymization framework that expedites model learning in parallel and generates deployable models for devices with low computing capability. We evaluate our framework in various settings, such as different data schemas and characteristics. Our results show that the framework learns anonymization models up to 16 times faster than a sequential anonymization approach while preserving enough information in the anonymized data for data-driven applications.
Raber, Frederic, Krüger, Antonio.  2018.  Deriving Privacy Settings for Location Sharing: Are Context Factors Always the Best Choice? 2018 IEEE Symposium on Privacy-Aware Computing (PAC). :86–94.
Research has identified context factors such as occasion and time as influential for predicting whether or not to share a location with online friends. In other domains, like social networks, personality has also been found to play an important role. Furthermore, users seek a fine-grained disclosure policy that also allows them to display an obfuscated location, like the center of the current city, to some of their friends. In this paper, we examine which context factors and personality measures can be used to predict the correct privacy level out of seven levels, which include obfuscation levels such as center of the street or current city. Our results show that a prediction is possible with a precision 20% better than a constant value. We give design indications for determining which context factors should be recorded, and show how much the precision can be increased if personality and privacy measures are recorded using either a questionnaire or automated text analysis.
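The prediction task reduces to seven-class classification over context and personality features; a hedged sketch on synthetic stand-in data (the features and classifier choice are hypothetical, not the paper's exact setup):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical features: occasion (one-hot of 4), hour of day, Big Five scores.
X = np.hstack([np.eye(4)[rng.integers(0, 4, 500)],
               rng.integers(0, 24, (500, 1)),
               rng.uniform(1, 5, (500, 5))])
y = rng.integers(0, 7, 500)   # seven disclosure levels: exact fix ... city centre ... hidden

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # compare against a constant-level baseline
```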
Hu, Boyang, Yan, Qiben, Zheng, Yao.  2018.  Tracking location privacy leakage of mobile ad networks at scale. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–2.
The online advertising ecosystem is built upon massive data collection by ad networks to learn the properties of users for targeted ad delivery. Existing efforts have investigated the privacy leakage behaviors of mobile ad networks. However, a large-scale measurement study evaluating the scale of privacy leakage through mobile ads has been lacking. In this work, we present a study of the potential privacy leakage in location-based mobile advertising services based on a large-scale measurement. We first introduce a threat model in the mobile ad ecosystem, and then design a measurement system to perform extensive threat measurements and assessments. To counteract the privacy leakage threats, we design and implement an adaptive location obfuscation mechanism, which can be used to obfuscate location data in real time while minimizing the impact on mobile ad businesses.
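The adaptive obfuscation mechanism is not specified in the abstract; as a rough illustration, real-time obfuscation can be as simple as snapping each fix to the centre of a grid cell whose size adapts to the desired privacy level:
```python
import math

def snap_to_grid(lat, lon, cell_deg):
    """Report the centre of the enclosing grid cell instead of the exact fix.
    Larger cell_deg -> coarser location -> less leakage to the ad network."""
    return ((math.floor(lat / cell_deg) + 0.5) * cell_deg,
            (math.floor(lon / cell_deg) + 0.5) * cell_deg)

print(snap_to_grid(40.7128, -74.0060, cell_deg=0.01))   # roughly 1 km cell
print(snap_to_grid(40.7128, -74.0060, cell_deg=0.10))   # roughly 11 km cell: coarser
```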