Biblio

2020-04-20
Lim, Yeon-sup, Srivatsa, Mudhakar, Chakraborty, Supriyo, Taylor, Ian.  2018.  Learning Light-Weight Edge-Deployable Privacy Models. 2018 IEEE International Conference on Big Data (Big Data). :1290–1295.
Privacy has become an important issue in data-driven applications. The advent of non-PC devices, such as Internet-of-Things (IoT) devices, for data-driven applications creates a need for light-weight data anonymization. In this paper, we develop an anonymization framework that expedites model learning in parallel and generates deployable models for devices with low computing capability. We evaluate our framework in various settings, such as different data schemas and characteristics. Our results show that our framework learns anonymization models up to 16 times faster than a sequential anonymization approach and that it preserves enough information in the anonymized data for data-driven applications.
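The abstract above describes edge-deployable anonymization models, i.e., models cheap enough to run on devices with low computing capability. As a rough illustration only (not the authors' framework), a minimal sketch of one such light-weight mechanism is value generalization over a quasi-identifier, where the "model" deployed to the device is just a bucketing rule; the `age` attribute and bucket width are hypothetical:

```python
def make_age_generalizer(width):
    """Return a function mapping an exact age to a coarser interval.

    The returned closure is the entire "deployable model": a single
    arithmetic bucketing rule, cheap enough for an IoT-class device.
    """
    def generalize(age):
        lo = (age // width) * width
        return f"{lo}-{lo + width - 1}"
    return generalize

def anonymize_records(records, width=10):
    """Replace each record's exact age with its generalized bucket."""
    gen = make_age_generalizer(width)
    return [{**r, "age": gen(r["age"])} for r in records]

# Two distinct ages fall into the same 10-year bucket after anonymization.
records = [{"name": "*", "age": 34}, {"name": "*", "age": 37}]
anonymized = anonymize_records(records)
```

The point of the sketch is the division of labor the abstract implies: the expensive part (choosing a good generalization, done in parallel in the paper) happens offline, while the artifact shipped to the device is trivially cheap to apply.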
2017-08-18
Ji, Shouling, Li, Weiqing, Srivatsa, Mudhakar, He, Jing Selena, Beyah, Raheem.  2016.  General Graph Data De-Anonymization: From Mobility Traces to Social Networks. ACM Trans. Inf. Syst. Secur. 18:12:1–12:29.

When people use social applications and services, their privacy faces a potentially serious threat. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement that takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing this measurement on real datasets, we find that some data can be de-anonymized accurately while other data can be de-anonymized only at a coarse granularity. Exploiting this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By working on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we evaluate the presented de-anonymization attack on three well-known mobility traces (St Andrews, Infocom06, and Smallblue) and three social datasets (ArnetMiner, Google+, and Facebook). The experimental results demonstrate that the presented de-anonymization framework is effective and robust to noise. The source code and employed datasets are publicly available at SecGraph [2015].
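The US/DA framework described above is far richer than anything shown here, but the core loop — score anonymized nodes against auxiliary nodes with a similarity that mixes structure and already-matched neighbors, then match greedily and iterate — can be sketched in a toy form. In this hypothetical sketch, "unified similarity" is reduced to a degree term plus a mapped-neighbor term, and the weights, graphs, and seed set are illustrative assumptions:

```python
def similarity(g1, g2, u, v, mapping):
    """Toy stand-in for a unified similarity: degree closeness plus the
    fraction of u's neighbors whose current mapping lands in v's neighborhood."""
    d1, d2 = len(g1[u]), len(g2[v])
    deg_sim = 1 - abs(d1 - d2) / max(d1, d2, 1)
    mapped = sum(1 for n in g1[u] if mapping.get(n) in g2[v])
    nb_sim = mapped / max(d1, 1)
    return 0.5 * deg_sim + 0.5 * nb_sim  # weights are an assumption

def de_anonymize(g1, g2, seeds, rounds=3):
    """Greedy iterative matching: start from seed pairs, then repeatedly
    match each unmatched anonymized node to its best-scoring auxiliary node.
    Earlier matches feed the similarity of later rounds."""
    mapping = dict(seeds)
    for _ in range(rounds):
        unmatched1 = [u for u in g1 if u not in mapping]
        unmatched2 = [v for v in g2 if v not in mapping.values()]
        for u in unmatched1:
            if not unmatched2:
                break
            best = max(unmatched2, key=lambda v: similarity(g1, g2, u, v, mapping))
            mapping[u] = best
            unmatched2.remove(best)
    return mapping

# Tiny example: the same star graph under two labelings, with one seed pair.
g1 = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
g2 = {1: {2, 3}, 2: {1}, 3: {1}}
result = de_anonymize(g1, g2, seeds={"a": 1})
```

Unlike this toy, the paper's DA framework only commits matches that meet an accuracy criterion, deferring the rest to coarser-granularity rounds, and ADA additionally adapts the candidate subgraphs when the overlap between the two datasets is unknown.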