Biblio

Filters: Keyword is interpretability
2021-05-03
Zalasiński, Marcin, Cpałka, Krzysztof, Łapa, Krystian.  2020.  An interpretable fuzzy system in the on-line signature scalable verification. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–9.
This paper proposes new, original solutions for using interpretable flexible fuzzy systems for identity verification based on an on-line signature. Such solutions must be scalable because each user's identity must be verified independently of the others. In addition, a large number of system users limits the possibilities of iterative system learning. An important issue is the ability to interpret the system rules, because it explains how the similarity of test signatures to reference signature templates is assessed. In this paper, we propose an approach that meets all of the above requirements and works effectively on the on-line signature database used in the simulations.
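To make the flavour of such interpretable rule-based verification concrete, the following minimal sketch (not the authors' system; all feature names, membership shapes, and thresholds are illustrative assumptions) fires one readable fuzzy rule per signature feature of the form "feature i is close to its template value" and aggregates the rule activations into an accept/reject decision whose components can be inspected directly.

# Minimal sketch, assuming per-feature triangular fuzzy sets around a user's
# reference template; not the method of the cited paper.
import numpy as np

def triangular_membership(x, center, width):
    """Degree (0..1) to which x is 'close to' center under a triangular fuzzy set."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def verify_signature(test_features, template_centers, template_widths, threshold=0.6):
    """Fire one rule per feature ('feature i is close to its template'),
    aggregate the activations by their mean, and accept above the threshold."""
    activations = triangular_membership(test_features, template_centers, template_widths)
    score = activations.mean()
    return score >= threshold, score, activations  # activations explain the decision

# Hypothetical per-feature template centers/widths learned from reference signatures.
centers = np.array([0.52, 1.10, 0.33])
widths = np.array([0.20, 0.40, 0.15])
accepted, score, why = verify_signature(np.array([0.55, 1.00, 0.30]), centers, widths)
print(accepted, round(score, 3), np.round(why, 2))

Because every rule activation maps to a named feature, the returned vector doubles as the explanation of why a test signature was accepted or rejected.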
2020-05-11
Cui, Zhicheng, Zhang, Muhan, Chen, Yixin.  2018.  Deep Embedding Logistic Regression. 2018 IEEE International Conference on Big Knowledge (ICBK). :176–183.
Logistic regression (LR) is used in many areas due to its simplicity and interpretability. At the same time, these two properties limit its classification accuracy. Deep neural networks (DNNs), by contrast, achieve state-of-the-art performance in many domains. However, the nonlinearity and complexity of DNNs make them less interpretable. To balance interpretability and classification performance, we propose a novel nonlinear model, Deep Embedding Logistic Regression (DELR), which augments LR with a nonlinear dimension-wise feature embedding. In DELR, each feature embedding is learned through a deep and narrow neural network, and LR is attached to decide feature importance. A compact yet powerful model, DELR offers great interpretability: it can tell the importance of each input feature, yield meaningful embeddings of categorical features, and extract actionable changes, making it attractive for tasks such as market analysis and clinical prediction.
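The abstract's core architectural idea, one deep, narrow embedding network per input feature with a logistic regression layer on top, can be sketched as follows. This is a minimal illustration of that idea, not the paper's implementation; layer widths, depths, and the toy input are assumptions.

# Minimal DELR-style sketch in PyTorch: per-feature narrow MLPs feeding a
# logistic regression layer whose weights act as per-feature importances.
import torch
import torch.nn as nn

class DELRSketch(nn.Module):
    def __init__(self, num_features, hidden=8):
        super().__init__()
        # One narrow MLP per feature: scalar in -> scalar embedding out.
        self.embedders = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])
        # Logistic regression over the embedded features.
        self.lr = nn.Linear(num_features, 1)

    def forward(self, x):                      # x: (batch, num_features)
        cols = [emb(x[:, i:i + 1]) for i, emb in enumerate(self.embedders)]
        z = torch.cat(cols, dim=1)             # (batch, num_features)
        return torch.sigmoid(self.lr(z))

model = DELRSketch(num_features=4)
probs = model(torch.randn(16, 4))              # predicted class probabilities
print(probs.shape, model.lr.weight.data)       # LR weights ~ feature importance

Keeping each embedding one-dimensional is what preserves the LR-style reading: the sign and magnitude of each final-layer weight still refer to a single input feature.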
2019-01-16
Peake, Georgina, Wang, Jun.  2018.  Explanation Mining: Post Hoc Interpretability of Latent Factor Models for Recommendation Systems. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2060–2069.
The widespread use of machine learning algorithms to drive decision-making has highlighted the critical importance of ensuring the interpretability of such models in order to engender trust in their output. State-of-the-art recommendation systems use black-box latent factor models that provide no explanation of why a recommendation has been made, as they abstract their decision processes to a high-dimensional latent space which is beyond the direct comprehension of humans. We propose a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model. By taking advantage of the interpretable structure of association rules, we demonstrate that the predictive accuracy of the recommendation model can be maintained whilst yielding explanations with high fidelity to the black-box model on a unique industry dataset. Our approach mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or use external data sources. We also contribute to the ill-defined problem of evaluating interpretability.
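A rough sketch of the post-hoc idea described in the abstract follows: treat a trained matrix factorisation model as a black box, collect (items a user interacted with, item the model recommends) transactions, and mine simple one-antecedent rules of the form "users who liked A were recommended B" as explanations. The toy data, random stand-in factor matrices, and confidence threshold are illustrative assumptions, not the paper's setup or dataset.

# Minimal explanation-mining sketch over a black-box matrix factorisation recommender.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
num_users, num_items, k = 50, 8, 4
user_factors = rng.normal(size=(num_users, k))      # stand-ins for learned MF factors
item_factors = rng.normal(size=(num_items, k))
history = rng.random((num_users, num_items)) > 0.7  # observed implicit interactions

# Black-box step: recommend the highest-scoring unseen item per user.
scores = user_factors @ item_factors.T
scores[history] = -np.inf
recommended = scores.argmax(axis=1)

# Explanation mining: count how often "liked A" co-occurs with "recommended B".
pair_counts, antecedent_counts = defaultdict(int), defaultdict(int)
for u in range(num_users):
    for a in np.flatnonzero(history[u]):
        antecedent_counts[a] += 1
        pair_counts[(a, recommended[u])] += 1

min_confidence = 0.3  # hypothetical threshold
rules = [(a, b, c / antecedent_counts[a])
         for (a, b), c in pair_counts.items()
         if c / antecedent_counts[a] >= min_confidence]
for a, b, conf in sorted(rules, key=lambda r: -r[2])[:5]:
    print(f"liked item {a} -> recommended item {b}  (confidence {conf:.2f})")

Because the rules are mined only from the black box's inputs and outputs, the underlying latent factor model is left untouched, which is what lets accuracy be preserved while explanations are read off the rules.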