FairTest: Discovering Unwarranted Associations in Data-Driven Applications
Title | FairTest: Discovering Unwarranted Associations in Data-Driven Applications |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Tramèr, F., Atlidakis, V., Geambasu, R., Hsu, D., Hubaux, J. P., Humbert, M., Juels, A., Lin, H. |
Conference Name | 2017 IEEE European Symposium on Security and Privacy (EuroS&P)
Keywords | Algorithmic Fairness, algorithmic fairness formalization, Computer bugs, Data analysis, Data collection, data privacy, data-driven applications, debugging capabilities, decision making, decision-making, fairness metrics, FairTest, Google, machine learning algorithms, Measurement, Medical services, Metrics, Predictive Metrics, program debugging, pubcrawl, sensitive user attributes, software tools, Statistics, Systems, Testing, Tools, UA framework, unwarranted association discovery |
Abstract | In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice. We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities. We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects. We report on use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger. |
URL | http://ieeexplore.ieee.org/document/7961993/ |
DOI | 10.1109/EuroSP.2017.29 |
Citation Key | tramer_fairtest:_2017 |
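The abstract describes FairTest as a tool for statistically rigorous investigation of associations between application outcomes and sensitive user attributes, with granular exploration of user subgroups. The following is a minimal illustrative sketch of that kind of association test, not FairTest's actual interface: it runs a chi-square test of independence between an outcome and a sensitive attribute, overall and within one subgroup. The dataset file and column names (`age_group`, `prediction_error`, `state`) are hypothetical.

```python
# Illustrative sketch only -- not the FairTest API. It shows the kind of
# statistical association test (a chi-square test of independence) between an
# application outcome and a sensitive attribute that FairTest automates.
import pandas as pd
from scipy.stats import chi2_contingency


def association_test(df: pd.DataFrame, sensitive: str, outcome: str):
    """Test whether `outcome` is statistically independent of `sensitive`."""
    contingency = pd.crosstab(df[sensitive], df[outcome])
    chi2, p_value, dof, _expected = chi2_contingency(contingency)
    return chi2, p_value


if __name__ == "__main__":
    # Hypothetical dataset: one row per user, with a sensitive attribute and
    # a binary outcome (e.g., whether a predictive health model erred).
    df = pd.read_csv("health_predictions.csv")

    chi2, p = association_test(df, "age_group", "prediction_error")
    print(f"overall association: chi2={chi2:.2f}, p={p:.4f}")

    # Granular exploration: repeat the test inside a user subgroup, mirroring
    # FairTest's subgroup-level search for unfair treatment.
    subgroup = df[df["state"] == "NY"]
    chi2, p = association_test(subgroup, "age_group", "prediction_error")
    print(f"subgroup (state=NY): chi2={chi2:.2f}, p={p:.4f}")
```

A low p-value in either test would flag a potential unwarranted association worth debugging (e.g., by checking for confounders), which is the workflow the paper's UA framework formalizes.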