Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI)

Title: Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI)
Publication Type: Conference Paper
Year of Publication: 2020
Authors: D’Alterio, P., Garibaldi, J. M., John, R. I.
Conference Name: 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
Date Published: July 2020
Keywords: artificial intelligence, classification reliability, constrained interval type-2, constrained interval type-2 fuzzy classification systems, constrained interval type-2 fuzzy sets, explainable AI, explainable artificial intelligence, explainable CIT2 classifiers, explainable type-2 fuzzy systems, fuzzy logic, fuzzy logic systems, fuzzy set theory, fuzzy sets, inherently interpretable structure, intelligent systems, interval type-2 fuzzy logic, IT2, linguistics, natural language, output interval centroid, pattern classification, type-reduction step, XAI
Abstract: In recent years, there has been a growing need for intelligent systems that not only provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms, in fact, has allowed researchers to create models that can produce natural-language explanations for each of the classifications they make. So far, however, designing systems that use interval type-2 (IT2) fuzzy logic and also explain their outputs has been very challenging, partly due to the presence of the type-reduction step. In this paper, it is shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets for addressing this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to each endpoint of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterparts.
DOI: 10.1109/FUZZ48607.2020.9177671
Citation Key: dalterio_constrained_2020