Biblio

Filters: Author is Agarwal, Chirag
2021-05-13
Bansal, Naman, Agarwal, Chirag, Nguyen, Anh.  2020.  SAM: The Sensitivity of Attribution Methods to Hyperparameters. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :11–21.
Attribution methods can provide powerful insights into the reasons for a classifier's decision. We argue that a key desideratum of an explanation method is its robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls the correctness of an explanation into question and erodes end-users' trust. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We found an alarming trend: many methods are highly sensitive to changes in their common hyperparameters, e.g., even changing a random seed can yield a different explanation. Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e., classifiers trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
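
As a rough illustration of the seed sensitivity the abstract describes, the sketch below computes a SmoothGrad-style attribution map (input gradients averaged over Gaussian-perturbed copies of the input) under two different random seeds and compares the resulting maps. The toy model, input, noise level, and cosine-similarity comparison are illustrative assumptions, not the paper's experimental setup or code.

# Minimal sketch (assumption: not the paper's code) of how a random seed,
# one "hyperparameter" of a SmoothGrad-style attribution, can change the explanation.
import torch
import torch.nn as nn

def smoothgrad(model, x, target, n_samples=25, sigma=0.15, seed=0):
    """Average input gradients over Gaussian-perturbed copies of x."""
    gen = torch.Generator().manual_seed(seed)
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn(x.shape, generator=gen)).requires_grad_(True)
        score = model(noisy)[0, target]   # logit of the target class
        score.backward()
        grads += noisy.grad
    return grads / n_samples

# Toy stand-ins for a real classifier and image (assumptions for illustration).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(1, 3, 32, 32)

attr_a = smoothgrad(model, x, target=3, seed=0)
attr_b = smoothgrad(model, x, target=3, seed=1)

# Cosine similarity between the two maps; values below 1.0 indicate that
# changing only the seed already changed the explanation.
cos = torch.nn.functional.cosine_similarity(attr_a.flatten(), attr_b.flatten(), dim=0)
print(f"cosine similarity across seeds: {cos.item():.3f}")

In the same spirit, one could vary the noise level sigma or the number of samples and track how much the attribution map moves, which is the kind of hyperparameter sensitivity the study measures across methods.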