Title | The Best of Both Worlds: Challenges in Linking Provenance and Explainability in Distributed Machine Learning |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Scherzinger, Stefanie, Seifert, Christin, Wiese, Lena |
Conference Name | 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS) |
Date Published | July
Keywords | basic transformations, composability, Computational modeling, consistent data, Data analysis, data analysis pipeline, Data models, data pre-processing steps, data preparation, data provenance, Decision trees, distributed computing, Distributed databases, distributed file system, distributed machine learning, distributed setting, distributed system, end-to-end explainability, entire data set, Entropy, explainable machine learning, explainable models, homogeneous data, Human Behavior, learning (artificial intelligence), linking provenance, machine learning, machine learning experts, machine learning models, Metrics, Provenance, pubcrawl, Resiliency, single data |
Abstract | Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must instead be stored on a distributed file system. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the stored data, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking research on data provenance with efforts on explainability in machine learning. In doing so, we highlight pitfalls we may encounter in a distributed system on the way to generating more holistic explanations for our machine learning models.
DOI | 10.1109/ICDCS.2019.00161 |
Citation Key | scherzinger_best_2019 |