Biblio
Filters: Keyword is Explainable ML
Don't Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models. In: 2019 34th IEEE/ACM International Conference on Automated Software Engineering Workshop (ASEW), pp. 37–40.
Abstract: Explaining the reasoning and behaviour of artificial intelligent systems to human users is becoming increasingly urgent, especially in the field of machine learning. Many recent contributions approach this issue with post-hoc methods, meaning they consider the final system and its outcomes, while the roots of the included artefacts are largely neglected. However, we argue in this position paper that there needs to be a stronger focus on the development process. Without insight into the specific design decisions and meta-information that accrue during development, an accurate explanation of the resulting model is hardly possible. To remedy this situation, we propose increasing process transparency by applying provenance methods, which also serves as a basis for increased explainability.