Biblio

Filters: Keyword is data-driven applications
2020-04-20
Lim, Yeon-sup, Srivatsa, Mudhakar, Chakraborty, Supriyo, Taylor, Ian.  2018.  Learning Light-Weight Edge-Deployable Privacy Models. 2018 IEEE International Conference on Big Data (Big Data). :1290–1295.
Privacy has become one of the important issues in data-driven applications. The advent of non-PC devices, such as Internet-of-Things (IoT) devices, for data-driven applications creates a need for light-weight data anonymization. In this paper, we develop an anonymization framework that expedites model learning in parallel and generates deployable models for devices with low computing capability. We evaluate our framework in various settings, such as different data schemas and characteristics. Our results show that our framework learns anonymization models up to 16 times faster than a sequential anonymization approach and that it preserves enough information in the anonymized data for data-driven applications.
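To make the target of such light-weight anonymization concrete, the sketch below shows a minimal k-anonymity-style generalization check in Python, the kind of cheap transformation a constrained edge device could apply. The column names, generalization rules, and choice of k are hypothetical illustrations, not the authors' framework.

# A minimal sketch of light-weight record anonymization via attribute
# generalization, in the spirit of k-anonymity. The fields ("age", "zip")
# and the coarsening rules are hypothetical, not the paper's method.
from collections import Counter

def generalize(record):
    """Coarsen quasi-identifiers: bucket age into decades, truncate ZIP."""
    return (record["age"] // 10 * 10,      # e.g. 37 -> 30
            record["zip"][:3] + "**")      # e.g. "10027" -> "100**"

def is_k_anonymous(records, k):
    """Check that every generalized quasi-identifier tuple occurs >= k times."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

records = [
    {"age": 34, "zip": "10025", "diagnosis": "flu"},
    {"age": 37, "zip": "10027", "diagnosis": "cold"},
    {"age": 31, "zip": "10021", "diagnosis": "flu"},
]
print(is_k_anonymous(records, k=3))  # True: all three share (30, "100**")

Applying a fixed, pre-learned generalization like this is cheap at inference time; the expensive part, which the paper parallelizes, is learning which generalization preserves enough utility.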
2018-02-02
Tramèr, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., Juels, A., Lin, H.  2017.  FairTest: Discovering Unwarranted Associations in Data-Driven Applications. 2017 IEEE European Symposium on Security and Privacy (EuroS&P). :401–416.
In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on the use of FairTest to investigate, and in some cases address, disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.
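To make the core idea concrete, the sketch below runs the kind of association test FairTest automates: a chi-squared independence test between an application outcome and a sensitive attribute, first globally and then within one user subgroup. The dataset, column names, and the choice of chi-squared as the sole metric are hypothetical; FairTest's actual investigative primitives, fairness metrics, and subgroup exploration are considerably richer.

# A minimal sketch of an outcome-vs-sensitive-attribute association test,
# in the spirit of FairTest. Data and column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "price_tier": ["high", "low", "high", "low", "high", "low", "high", "low"],
    "race":       ["A",    "B",   "A",    "B",   "B",    "A",   "A",    "B"],
    "region":     ["urban", "urban", "urban", "urban", "rural", "rural", "urban", "rural"],
})

# Global test: is the outcome (price_tier) independent of the sensitive
# attribute (race)? A small p-value flags a potentially unwarranted association.
table = pd.crosstab(df["price_tier"], df["race"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"global association: chi2={chi2:.2f}, p={p_value:.3f}")

# Subgroup drill-down: the same test restricted to one user subgroup,
# mirroring FairTest's granular exploration of unfair treatment.
urban = df[df["region"] == "urban"]
sub_table = pd.crosstab(urban["price_tier"], urban["race"])
chi2, p_value, dof, _ = chi2_contingency(sub_table)
print(f"urban subgroup:     chi2={chi2:.2f}, p={p_value:.3f}")

Ruling out confounders, as FairTest's debugging capabilities support, would amount to conditioning such tests on explanatory variables (for example, testing within strata of a legitimate cost factor) rather than on the raw population.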