Biblio

Filters: Keyword is k-anonymity
2023-06-30
Kai, Liu, Jingjing, Wang, Yanjing, Hu.  2022.  Localized Differential Location Privacy Protection Scheme in Mobile Environment. 2022 IEEE 5th International Conference on Big Data and Artificial Intelligence (BDAI). :148–152.
When users request location services, they can easily expose private information, and schemes that rely on a third-party server for location privacy protection place high demands on the trustworthiness of that server. To address these problems, a localized differential privacy protection scheme for the mobile environment is proposed. The scheme uses a Markov chain model to generate a probability transition matrix and adds Laplace noise to construct a location obfuscation function that satisfies differential privacy; location obfuscation is performed on the client, which then constructs and uploads the anonymous area. Simulation experiments show that the scheme removes the dependence on a trusted third-party server and achieves high efficiency while keeping the generated anonymous area highly available.
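
The client-side obfuscation step described above, adding Laplace noise so that the reported location satisfies (local) differential privacy, can be illustrated with a minimal sketch; the function name, epsilon, and sensitivity values below are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def laplace_obfuscate(lat, lon, epsilon, sensitivity=0.01):
    """Perturb a (lat, lon) pair on the client before it is sent to the
    location server. Each coordinate receives independent Laplace noise
    with scale sensitivity / epsilon; smaller epsilon means more noise
    and stronger privacy. Toy sketch only, not the paper's scheme."""
    scale = sensitivity / epsilon
    return (lat + np.random.laplace(0.0, scale),
            lon + np.random.laplace(0.0, scale))

# Example: obfuscate a location locally before querying the LBS.
print(laplace_obfuscate(40.7128, -74.0060, epsilon=1.0))
```
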
2022-02-09
Mygdalis, Vasileios, Tefas, Anastasios, Pitas, Ioannis.  2021.  Introducing K-Anonymity Principles to Adversarial Attacks for Privacy Protection in Image Classification Problems. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
The network output activation values for a given input can be employed to produce a sorted ranking. Adversarial attacks typically generate the least amount of perturbation required to change the classifier label; in that sense, the generated adversarial perturbation only affects the output in the first position of the sorted ranking. We argue that meaningful information about the adversarial examples, i.e., their original labels, is still encoded in the network output ranking and could potentially be extracted using rule-based reasoning. To this end, we introduce a novel adversarial attack methodology inspired by K-anonymity principles that generates adversarial examples which are not only misclassified, but whose output sorted ranking spreads uniformly along K different positions. Any additional perturbation arising from the strength of the proposed objectives is regularized by a visual-similarity-based term. Experimental results indicate that the proposed approach achieves the optimization goals inspired by K-anonymity with reduced perturbation as well.
Weng, Jui-Hung, Chi, Po-Wen.  2021.  Multi-Level Privacy Preserving K-Anonymity. 2021 16th Asia Joint Conference on Information Security (AsiaJCIS). :61–67.
k-anonymity is a well-known definition of privacy, which guarantees that any person in the released dataset cannot be distinguished from at least k-1 other individuals. In the protection model, the records are anonymized through generalization or suppression with a fixed value of k. Accordingly, each record has the same level of anonymity in the published dataset. However, different people or items usually have inconsistent privacy requirements. Some records need extra protection while others require a relatively low level of privacy constraint. In this paper, we propose Multi-Level Privacy Preserving K-Anonymity, an advanced protection model based on k-anonymity, which divides records into different groups and requires each group to satisfy its respective privacy requirement. Moreover, we present a practical algorithm using clustering techniques to ensure the property. The evaluation on a real-world dataset confirms that the proposed method has the advantages of offering more flexibility in setting privacy parameters and providing higher data utility than traditional k-anonymity.
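
The multi-level property described above, where each record carries its own privacy requirement, can be illustrated with a small check over quasi-identifier groups; the field names and the per-record requirement function below are hypothetical, not taken from the paper.

```python
from collections import Counter

def satisfies_multilevel_k(records, quasi_identifiers, k_for_record):
    """Check a multi-level k-anonymity style property on a list of dicts:
    each record's quasi-identifier combination must appear at least as
    many times as that record's own requirement k_for_record(record)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(
        groups[tuple(r[q] for q in quasi_identifiers)] >= k_for_record(r)
        for r in records
    )

records = [
    {"age": "30-39", "zip": "481**", "sensitive": "flu",  "level": 2},
    {"age": "30-39", "zip": "481**", "sensitive": "cold", "level": 2},
    {"age": "40-49", "zip": "482**", "sensitive": "hiv",  "level": 3},
]
# False: the level-3 record sits in a quasi-identifier group of size 1.
print(satisfies_multilevel_k(records, ["age", "zip"], lambda r: r["level"]))
```
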
2021-01-28
Santos, W., Sousa, G., Prata, P., Ferrão, M. E..  2020.  Data Anonymization: K-anonymity Sensitivity Analysis. 2020 15th Iberian Conference on Information Systems and Technologies (CISTI). :1—6.

These days the digitization process is everywhere, spreading also across central governments and local authorities. It is hoped that, by using open government data for scientific research purposes, the public good and social justice might be enhanced. Taking into account the recently adopted European General Data Protection Regulation, the big challenge in Portugal and other European countries is how to strike the right balance between personal data privacy and the value of data for research. This work presents a sensitivity study of a data anonymization procedure applied to real open government data from the Brazilian higher education evaluation system. The ARX k-anonymization algorithm was applied with and without generalization of some variables of research value. The analysis of the amount of data/information lost and the risk of re-identification suggests that the anonymization process may lead to the under-representation of minorities and sociodemographically disadvantaged groups. It will enable scientists to improve the balance among risk, data usability, and contributions to public-good policies and practices.

Esmeel, T. K., Hasan, M. M., Kabir, M. N., Firdaus, A..  2020.  Balancing Data Utility versus Information Loss in Data-Privacy Protection using k-Anonymity. 2020 IEEE 8th Conference on Systems, Process and Control (ICSPC). :158—161.

Data privacy has been an important area of research in recent years. Datasets often contain sensitive data fields whose exposure may jeopardize the interests of the individuals associated with the data. To resolve this issue, privacy techniques can be used to hinder the identification of a person by anonymizing the sensitive data in the dataset, while the anonymized dataset can still be used by third parties for analysis without obstruction. In this research, we investigate a privacy technique, k-anonymity, for different values of k and for different numbers of anonymized columns of the dataset. Next, the information loss due to k-anonymity is computed. The anonymized files then go through a classification process with several machine-learning algorithms, i.e., Naive Bayes, J48, and a neural network, in order to check the balance between data anonymity and data utility. Based on the classification accuracy, the optimal value of k and the optimal number of anonymized columns are obtained, and these can then be used by the k-anonymity algorithm to anonymize the dataset.
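
A minimal sketch of the trade-off studied above, generalizing one quasi-identifier column and reporting a crude information-loss figure, is given below; the loss metric, column names, and bin width are illustrative assumptions, not the paper's exact formulation.

```python
import pandas as pd

def generalize_and_measure(df, column, bin_width):
    """Generalize one numeric column into ranges ("20-29", "30-39", ...)
    and report a crude information-loss measure: the width of the
    generalized range relative to the original spread of the column."""
    lo, hi = df[column].min(), df[column].max()
    binned = (df[column] // bin_width) * bin_width
    labels = (binned.astype(int).astype(str) + "-"
              + (binned + bin_width - 1).astype(int).astype(str))
    loss = (bin_width - 1) / (hi - lo) if hi > lo else 0.0
    return df.assign(**{column: labels}), loss

data = pd.DataFrame({"age": [23, 27, 31, 35, 44, 52],
                     "diagnosis": list("ABABAB")})
anon, loss = generalize_and_measure(data, "age", bin_width=10)
print(anon)
print(f"information loss ~ {loss:.2f}")
```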

Kumar, B. S., Daniya, T., Sathya, N., Cristin, R..  2020.  Investigation on Privacy Preserving using K-Anonymity Techniques. 2020 International Conference on Computer Communication and Informatics (ICCCI). :1—7.

In the current world, data grows day by day, and so does the analysis of that data, owing to the pervasiveness of computing devices; yet people are reluctant to share their information on online portals or in surveys for fear of their safety, because sensitive information such as credit card details, medical conditions, and other personal information in the wrong hands can endanger society. Privacy preservation has therefore become an obstacle to storing data in a data repository, so the data in the repository should be made indistinguishable: data is encrypted while being stored and later decrypted when needed for analysis in data mining. While storing the raw data of individuals, it is important to remove person-identifiable information such as name and employee id, whereas the other attributes pertaining to the person should be encrypted; this paper investigates the methodologies used to implement this. These methodologies can make data in the repository secure and make the PPDM task easier.

Zhang, M., Wei, T., Li, Z., Zhou, Z..  2020.  A service-oriented adaptive anonymity algorithm. 2020 39th Chinese Control Conference (CCC). :7626—7631.

Recently, a large number of research studies on privacy-preserving data publishing have been conducted. We find that most K-anonymity algorithms fail to consider the characteristics of the attribute-value distribution in the data and the differences in contribution value among quasi-identifier attributes in service-oriented settings. In this paper, the importance to the anonymization results of the distribution characteristics of attribute values and of the differences in contribution value among quasi-identifier attributes is illustrated. In order to maximize the utility of released data, a service-oriented adaptive anonymity algorithm is proposed. We establish a model of reaction dispersion degree to quantify the characteristics of the attribute-value distribution and introduce the concept of a utility weight related to the contribution value of quasi-identifier attributes. A priority coefficient and a characterization coefficient of partition quality are defined to adaptively optimize the selection of the dimension and splitting value in the anonymity-group partition process, which reduces unnecessary information loss and thereby further improves the utility of the anonymized data. The rationality and validity of the algorithm are verified by theoretical analysis and multiple experiments.

2020-12-28
Lee, H., Cho, S., Seong, J., Lee, S., Lee, W..  2020.  De-identification and Privacy Issues on Bigdata Transformation. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :514—519.

As the amount of data in various industries and government sectors grows exponentially, the '7V' concept of big data aims to create new value by indiscriminately collecting and analyzing information from various fields. At the same time, as the ICT industry ecosystem matures, big data utilization is threatened by privacy attacks such as infringement arising from the sheer volume of data. To manage and sustain a controllable privacy level, recommended de-identification techniques are needed. This paper examines those de-identification processes and three types of commonly used privacy models. Furthermore, it presents use cases in which these kinds of technologies can be adopted, along with future development directions.

2020-08-13
Yang, Xudong, Gao, Ling, Wang, Hai, Zheng, Jie, Guo, Hongbo.  2019.  A Semantic k-Anonymity Privacy Protection Method for Publishing Sparse Location Data. 2019 Seventh International Conference on Advanced Cloud and Big Data (CBD). :216—222.

With the development of location technology, location-based services greatly facilitate people's lives. However, because location information contains a large amount of sensitive user information, the location data published by the service provider in location-based services is also subject to the risk of privacy disclosure. In particular, privacy leaks are more likely when sparse location data is published without considering the attacker's semantic background knowledge. We therefore propose a semantic k-anonymity privacy protection method to counter this problem. In this method, we first propose a multi-user compressive sensing method to reconstruct the missing location data. To balance the availability and privacy requirements of the anonymity set, we use semantic translation and multi-view fusion to select non-sensitive data to join the anonymity set. Experimental results on two real-world datasets demonstrate that our solution improves the quality of privacy protection against semantic attacks.

Widodo, Budiardjo, Eko K., Wibowo, Wahyu C., Achsan, Harry T.Y..  2019.  An Approach for Distributing Sensitive Values in k-Anonymity. 2019 International Workshop on Big Data and Information Security (IWBIS). :109—114.

k-anonymity is a popular model in privacy-preserving data publishing. It provides a privacy guarantee when a microdata table is released. In microdata, sensitive attributes contain both highly sensitive and less sensitive values. Unfortunately, studies on anonymity that distribute sensitive values are still rare. This study aims to distribute highly sensitive values evenly across quasi-identifier groups. We propose an approach called Simple Distribution of Sensitive Value. We compare our method with systematic clustering, which is considered a very effective method for grouping quasi-identifiers. Information entropy is used to measure the diversity in each quasi-identifier group and in the microdata table as a whole. Experimental results show that our method outperforms systematic clustering when highly sensitive values are distributed.
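
The entropy-based diversity measure mentioned above can be sketched in a few lines; the sensitive values used in the example are hypothetical.

```python
import math
from collections import Counter

def group_entropy(sensitive_values):
    """Shannon entropy of the sensitive attribute inside one
    quasi-identifier group; higher entropy means the sensitive values
    are more evenly distributed across the group."""
    counts = Counter(sensitive_values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(group_entropy(["cancer", "flu", "flu", "cold"]))   # mixed group
print(group_entropy(["cancer", "cancer", "cancer"]))     # no diversity -> 0.0
```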

Cheng, Chen, Xiaoli, Liu, Linfeng, Wei, Longxin, Lin, Xiaofeng, Wu.  2019.  Algorithm for k-anonymity based on ball-tree and projection area density partition. 2019 14th International Conference on Computer Science Education (ICCSE). :972—975.

K-anonymity is a popular model used in microdata publishing to protect individual privacy. This paper introduces the ideas of the ball-tree and projection area density partition into the k-anonymity algorithm. The traditional kd-tree performs the division by forming hyper-rectangles, but a hyper-rectangle has corners, so it cannot guarantee that the records at a corner are most similar to the other records in that area. In this paper, the hyper-sphere formed by the ball-tree is used to address this problem. We adopt projection area density partition to increase the density of the resulting record points. We evaluate our algorithm on the GoTrack dataset and the Adult dataset from UCI. The experiments show that the k-anonymity algorithm based on the ball-tree and projection area density partition obtains more anonymous groups and a lower generalization rate; the smaller K is, the more pronounced the advantage. The results indicate that our algorithm yields higher data usability.

Zhou, Kexin, Wang, Jian.  2019.  Trajectory Protection Scheme Based on Fog Computing and K-anonymity in IoT. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1—6.
With the development of cloud computing technology in the Internet of Things (IoT), trajectory privacy in location-based services (LBSs) has attracted much attention. Most of the existing work adopts point-to-point and centralized models, which place a heavy burden on the user and cause performance bottlenecks. Moreover, previous schemes did not consider both online and offline trajectory protection and ignored some hidden background information. Therefore, in this paper, we design a trajectory protection scheme based on fog computing and k-anonymity for real-time trajectory privacy protection in continuous queries and offline trajectory data protection in trajectory publication. Fog computing provides the user with local storage and mobility to ensure physical control, and k-anonymity constructs the cloaking region for each snapshot in terms of time-dependent query probability and transition probability. On this basis, two k-anonymity-based dummy generation algorithms are proposed, which achieve the maximum entropy of online and offline trajectory protection. Security analysis and simulation results indicate that our scheme can realize trajectory protection effectively and efficiently.
Junjie, Jia, Haitao, Qin, Wanghu, Chen, Huifang, Ma.  2019.  Trajectory Anonymity Based on Quadratic Anonymity. 2019 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE). :485—492.
To address the leakage of private information from the sensitive regions of published anonymized trajectories caused by attacks, this paper studies a trajectory anonymity algorithm based on region division. According to the start and stop times of the trajectory, the current sensitive region is found using the k-anonymity set on synchronous trajectories. If the distance between a divided sub-region and the adjacent anonymous area is not greater than a threshold d, the areas are merged. Otherwise, guided by location mapping, forged locations are added to the sub-region according to the original locations so that the divided sub-region satisfies the principle of k-anonymity, while the forged locations retain the relative position of each point in the sensitive region, keeping the anonymity of the divided sub-regions consistent with that of the original region. Experiments show that, compared with existing trajectory anonymization algorithms on the same synchronous trajectory data set and at the same privacy level, the algorithm is highly effective in terms of both privacy protection and data quality.
Augusto, Cristian, Morán, Jesús, De La Riva, Claudio, Tuya, Javier.  2019.  Test-Driven Anonymization for Artificial Intelligence. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :103—110.
In recent years, the amount of data published and shared with third parties to develop artificial intelligence (AI) tools and services has increased significantly. When there are regulatory or internal requirements regarding the privacy of data, anonymization techniques are used to maintain privacy by transforming the data. The side effect is that the anonymization may render the data useless for training and testing the AI, because the AI is highly dependent on data quality. To overcome this problem, we propose a test-driven anonymization approach for artificial intelligence tools. The approach tests different anonymization efforts to achieve a trade-off between privacy (non-functional quality) and the functional suitability of the artificial intelligence technique (functional quality). The approach has been validated by means of two real-life datasets in the domains of healthcare and health insurance. Each of these datasets is anonymized with several privacy protections and then used to train classification AIs. The results show how we can anonymize the data to achieve adequate functional suitability in the AI context while keeping the privacy of the anonymized data as high as possible.
2020-08-07
Mehta, Brijesh B., Gupta, Ruchika, Rao, Udai Pratap, Muthiyan, Mukesh.  2019.  A Scalable (α, k)-Anonymization Approach using MapReduce for Privacy Preserving Big Data Publishing. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
Different tools and sources are used to collect big data, which may create privacy issues. Privacy-preserving data publishing approaches such as k-anonymity, l-diversity, and t-closeness are used for data de-identification, but because the data is collected from multiple sources, the chance of re-identification is very high. Anonymizing large data is not a trivial task; hence, the scalability of privacy-preserving approaches has become a challenging research area. Researchers have explored it by proposing algorithms for scalable anonymization. We further found that in some scenarios efficient anonymization is not enough; timely anonymization is also required. Hence, to incorporate the velocity of data into the Scalable k-Anonymization (SKA) approach, we propose a novel approach, Scalable (α, k)-Anonymization (SAKA). Our proposed approach outperforms existing approaches in terms of information loss and running time. To the best of our knowledge, this is the first scalable anonymization approach proposed for the velocity of data.
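
For reference, a small (non-distributed) sketch of the (α, k)-anonymity condition that SAKA scales with MapReduce is shown below; the field names and thresholds are illustrative, and the MapReduce parallelization itself is not reproduced here.

```python
from collections import Counter, defaultdict

def satisfies_alpha_k(records, quasi_identifiers, sensitive, alpha, k):
    """(alpha, k)-anonymity check: every quasi-identifier group has at
    least k records, and within each group the relative frequency of any
    single sensitive value is at most alpha."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[q] for q in quasi_identifiers)].append(r[sensitive])
    for values in groups.values():
        if len(values) < k:
            return False
        if max(Counter(values).values()) / len(values) > alpha:
            return False
    return True

rows = [
    {"age": "2*", "zip": "47**", "disease": "flu"},
    {"age": "2*", "zip": "47**", "disease": "flu"},
    {"age": "2*", "zip": "47**", "disease": "cold"},
]
# False: "flu" makes up 2/3 of the group, exceeding alpha = 0.5.
print(satisfies_alpha_k(rows, ["age", "zip"], "disease", alpha=0.5, k=2))
```
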
2020-07-13
Andrew, J., Karthikeyan, J., Jebastin, Jeffy.  2019.  Privacy Preserving Big Data Publication On Cloud Using Mondrian Anonymization Techniques and Deep Neural Networks. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :722–727.

In recent trends, privacy preservation is the most predominant concern in big data analytics and cloud computing. Every organization collects personal data from users actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) will lead to a privacy breach. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. In order to provide a trade-off between the privacy of the users and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates the information loss of the data without compromising privacy.
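
A compact sketch of the greedy Mondrian-style partitioning that the proposed approach builds on is shown below; it handles only numeric quasi-identifiers with strict median splits, and the column names and data are illustrative rather than the paper's setup.

```python
import pandas as pd

def mondrian_partition(df, quasi_identifiers, k):
    """Greedy Mondrian-style multidimensional partitioning: recursively
    split on the quasi-identifier with the widest span at its median, as
    long as both halves keep at least k rows. Returns the list of
    resulting k-anonymous partitions."""
    spans = {q: df[q].max() - df[q].min() for q in quasi_identifiers}
    for q in sorted(spans, key=spans.get, reverse=True):
        median = df[q].median()
        left, right = df[df[q] <= median], df[df[q] > median]
        if len(left) >= k and len(right) >= k:
            return (mondrian_partition(left, quasi_identifiers, k)
                    + mondrian_partition(right, quasi_identifiers, k))
    return [df]  # no allowable split: keep as one k-anonymous group

data = pd.DataFrame({"age": [23, 25, 31, 35, 44, 52, 60, 61],
                     "zip": [47677, 47602, 47678, 47905, 47909, 47906, 47605, 47673]})
for part in mondrian_partition(data, ["age", "zip"], k=2):
    print(part.agg(["min", "max"]).to_dict())  # generalized ranges per group
```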

2020-07-09
Kassem, Ali, Ács, Gergely, Castelluccia, Claude, Palamidessi, Catuscia.  2019.  Differential Inference Testing: A Practical Approach to Evaluate Sanitizations of Datasets. 2019 IEEE Security and Privacy Workshops (SPW). :72—79.

In order to protect individuals' privacy, data have to be "well-sanitized" before being shared, i.e., any personal information has to be removed before the data is shared. However, it is not always clear when data should be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allows the inference of sensitive information that is specific to an individual, instead of being centered around the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record from the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluate/compare each sanitized dataset in our framework.

2020-04-20
Yuan, Jing, Ou, Yuyi, Gu, Guosheng.  2019.  An Improved Privacy Protection Method Based on k-degree Anonymity in Social Network. 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :416–420.

To preserve the privacy of social networks, most existing methods are applied to satisfy different anonymity models, but they suffer from serious problems such as huge information losses and great structural modifications of the original social network. Therefore, an improved privacy protection method called k-subgraph is proposed, which is based on k-degree anonymous graphs derived from k-anonymity and keeps the network structure stable. The method first divides the network nodes into several clusters with a label propagation algorithm, and then reconstructs the sub-graphs by moving edges to achieve k-degree anonymity. Experimental results show that our k-subgraph method can not only effectively improve the defense capability against malicious attacks based on node degrees, but also maintain the stability of the network structure. In addition, the information loss caused by anonymization is kept close to minimal.
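
The k-degree anonymity property that the k-subgraph method enforces can be checked with a short routine over the graph's degree sequence; the edge list below is a toy example, and the edge-moving reconstruction itself is not shown.

```python
from collections import Counter

def is_k_degree_anonymous(edges, k):
    """A graph is k-degree anonymous if every node shares its degree
    with at least k-1 other nodes. Simple check over an edge list."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    degree_freq = Counter(degree.values())
    return all(degree_freq[d] >= k for d in degree.values())

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(is_k_degree_anonymous(edges, k=2))  # False: only 'c' has degree 3
```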

Xiang, Wei.  2019.  An Efficient Location Privacy Preserving Model based on Geohash. 2019 6th International Conference on Behavioral, Economic and Socio-Cultural Computing (BESC). :1–5.
With the rapid development of location-aware mobile devices, location-based services have been widely used. While LBS (Location Based Services) bring great convenience and profit, they also bring hidden risks, among which user privacy and security is one. The paper builds an LBS privacy protection model and develops an algorithm based on the one-dimensional Geohash encoding of geographic information. The results of experiments and data measurements show that the model achieves a k-anonymity effect and performs well in resisting attacks that exploit information leaked in continuous queries together with the user's background knowledge. It also performs favorably in terms of system processing time.
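
A minimal Geohash encoder illustrates the one-dimensional encoding the model relies on; it is a textbook sketch (interleaved longitude/latitude bits mapped to base32), not the paper's protection algorithm, and the coordinates in the example are arbitrary.

```python
def geohash_encode(lat, lon, precision=6):
    """Minimal Geohash encoder: interleave longitude and latitude bits
    and map every 5 bits to a base32 character. A shorter hash denotes a
    coarser cell, which is how Geohash can coarsen a reported location."""
    base32 = "0123456789bcdefghjkmnpqrstuvwxyz"
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    hash_chars, bits, bit_count, even = [], 0, 0, True
    while len(hash_chars) < precision:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        bits = (bits << 1) | (value >= mid)
        if value >= mid:
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            hash_chars.append(base32[bits])
            bits, bit_count = 0, 0
    return "".join(hash_chars)

print(geohash_encode(39.9042, 116.4074, precision=6))  # 'wx4g0b'
print(geohash_encode(39.9042, 116.4074, precision=4))  # coarser cell 'wx4g'
```
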
Liu, Kai-Cheng, Kuo, Chuan-Wei, Liao, Wen-Chiuan, Wang, Pang-Chieh.  2018.  Optimized Data de-Identification Using Multidimensional k-Anonymity. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1610–1614.
In the globalized knowledge economy, big data analytics has been widely applied in diverse areas. A critical issue in big data analysis of personal information is the possible leakage of personal privacy. Therefore, it is necessary to have an anonymization-based de-identification method to avoid undesirable privacy leaks; such a method can prevent published data from being traced back to individuals. Prior empirical research has provided approaches to reduce the risk of privacy leakage, e.g., Maximum Distance to Average Vector (MDAV), the Condensation Approach, and Differential Privacy. However, previous methods inevitably generate synthetic data of different sizes and are thus unsuitable for general use. To satisfy the need for general use, k-anonymity can be chosen as the privacy protection mechanism in the de-identification process to ensure the data is not distorted, because k-anonymity is strong in both protecting privacy and preserving data authenticity. Accordingly, this study proposes an optimized multidimensional method for anonymizing data based on both a priority weight-adjusted method and a mean difference recommending tree method (MDR tree method). The results of this study reveal that the new method generates more reliable anonymous data and reduces the information loss rate.
2019-01-31
Nakamura, T., Nishi, H..  2018.  TMk-Anonymity: Perturbation-Based Data Anonymization Method for Improving Effectiveness of Secondary Use. IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society. :3138–3143.

The recent emergence of smartphones, cloud computing, and the Internet of Things has brought about an explosion of data creation. By collating and merging these enormous data with other information, services that use information become more sophisticated and advanced. At the same time, however, consideration of the privacy violations caused by such merging is indispensable. Various anonymization methods have been proposed to preserve privacy. The conventional perturbation-based anonymization method for location data adds comparatively large noise, and the larger noise makes it difficult to utilize the data effectively for secondary use. In this research, to solve these problems, we first clarify the definition of privacy preservation and then propose TMk-anonymity according to that definition.

2018-04-02
Wu, D., Zhang, Y., Liu, Y..  2017.  Dummy Location Selection Scheme for K-Anonymity in Location Based Services. 2017 IEEE Trustcom/BigDataSE/ICESS. :441–448.

Location-Based Services (LBS) are becoming increasingly important to our daily life. However, the location information transmitted over the air is vulnerable to various attacks, which results in serious privacy concerns. To overcome this problem, we formulate a multi-objective optimization problem that considers both the query probability and the practical dummy location region. A low-complexity dummy location selection scheme is proposed. We first find several candidate dummy locations with similar query probabilities. Among these selected candidates, a cloaking-area-based algorithm is then offered to find K-1 dummy locations to achieve K-anonymity. The intersected area between two dummy locations is also derived to help determine the total cloaking area. Security analysis verifies the effectiveness of our scheme against passive and active adversaries. Compared with other methods, simulation results show that the proposed dummy location scheme can improve the privacy level and enlarge the cloaking area simultaneously.
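
The candidate-selection idea, choosing K-1 dummy cells whose query probabilities resemble the true cell's and judging the result by its entropy, can be sketched as follows; the grid cells and probabilities are hypothetical, and the cloaking-area maximization step from the paper is omitted.

```python
import math

def select_dummies(true_cell, candidate_cells, query_prob, k):
    """Pick K-1 dummy cells whose query probabilities are closest to the
    true cell's, then report the entropy of the resulting anonymity set
    (higher entropy means the K locations look more alike to an observer)."""
    ranked = sorted(
        (c for c in candidate_cells if c != true_cell),
        key=lambda c: abs(query_prob[c] - query_prob[true_cell]),
    )
    chosen = [true_cell] + ranked[: k - 1]
    total = sum(query_prob[c] for c in chosen)
    probs = [query_prob[c] / total for c in chosen]
    entropy = -sum(p * math.log2(p) for p in probs)
    return chosen, entropy

query_prob = {"c1": 0.12, "c2": 0.11, "c3": 0.30, "c4": 0.13, "c5": 0.02}
print(select_dummies("c1", list(query_prob), query_prob, k=3))
```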

Al-Zobbi, M., Shahrestani, S., Ruan, C..  2017.  Implementing A Framework for Big Data Anonymity and Analytics Access Control. 2017 IEEE Trustcom/BigDataSE/ICESS. :873–880.

Analytics in big data is maturing and moving towards mass adoption. The emergence of analytics increases the need for innovative tools and methodologies to protect data against privacy violations. Many data anonymization methods have been proposed to provide some degree of privacy protection by applying data suppression and other distortion techniques. However, currently available methods suffer from poor scalability and performance and from a lack of framework standardization. Current anonymization methods are unable to cope with the massive size of data processing. Some of these methods were proposed specifically for the MapReduce framework to operate on big data, yet they still follow conventional data-management approaches, so there were no remarkable gains in performance. We introduce a framework that can operate in the MapReduce environment to benefit from its advantages, as well as from those of the Hadoop ecosystem. Our framework provides granular user access that can be tuned to different authorization levels. The proposed solution provides fine-grained alteration based on the user's authorization level to access the MapReduce domain for analytics. Using well-developed role-based access control approaches, this framework is capable of assigning roles to users and mapping them to the relevant data attributes.

2018-03-19
Haakensen, T., Thulasiraman, P..  2017.  Enhancing Sink Node Anonymity in Tactical Sensor Networks Using a Reactive Routing Protocol. 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). :115–121.

Tactical wireless sensor networks (WSNs) are deployed over a region of interest for mission centric operations. The sink node in a tactical WSN is the aggregation point of data processing. Due to its essential role in the network, the sink node is a high priority target for an attacker who wishes to disable a tactical WSN. This paper focuses on the mitigation of sink-node vulnerability in a tactical WSN. Specifically, we study the issue of protecting the sink node through a technique known as k-anonymity. To achieve k-anonymity, we use a specific routing protocol designed to work within the constraints of WSN communication protocols, specifically IEEE 802.15.4. We use and modify the Lightweight Ad hoc On-Demand Next Generation (LOADng) reactive-routing protocol to achieve anonymity. This modified LOADng protocol prevents an attacker from identifying the sink node without adding significant complexity to the regular sensor nodes. We simulate the modified LOADng protocol using a custom-designed simulator in MATLAB. We demonstrate the effectiveness of our protocol and also show some of the performance tradeoffs that come with this method.