
Comparative Study of Interpretable Image Classification Models

Bajcsi, Adél and Bajcsi, Anna and Pável, Szabolcs and Portik, Ábel and Sándor, Csanád and Szenkovits, Annamária and Vas, Orsolya and Bodó, Zalán and Csató, Lehel (2023) Comparative Study of Interpretable Image Classification Models. INFOCOMMUNICATIONS JOURNAL, 15 (SI). pp. 20-26. ISSN 2061-2079

Full text: InfocomJournal_2023_SpecISS_ICAI_4.pdf (Download, 1MB)

Abstract

Explainable models in machine learning are increasingly popular due to the interpretability-favoring architectural features that help human understanding and interpretation of the decisions made by the model. Although using this type of model – similarly to “robustification” – might degrade prediction accuracy, a better understanding of decisions can greatly aid in the root cause analysis of failures of complex models, like deep neural networks. In this work, we experimentally compare three self-explainable image classification models on two datasets – MNIST and BDD100K –, briefly describing their operation and highlighting their characteristics. We also evaluate the backbone models to observe the deterioration, if any, in prediction accuracy caused by the introduced interpretability module. To improve one of the models studied, we propose modifications to the loss function for learning and suggest a framework for automatic assessment of interpretability by examining the linear separability of the prototypes obtained.
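The final point of the abstract, judging interpretability through the linear separability of the learned prototypes, can be illustrated with a minimal sketch. This is not the authors' implementation: the prototype array, its labels, and the choice of a cross-validated linear SVM as the separability measure are all assumptions made here for illustration.

```python
# Hypothetical sketch: score how linearly separable the learned prototypes are.
# `prototypes` is assumed to be an (n, d) array of prototype vectors and
# `labels` their class assignments; both names are illustrative, not from the paper.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def prototype_separability(prototypes: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-validated accuracy of a linear classifier on the prototypes.

    A score close to 1.0 indicates that prototypes of different classes are
    linearly separable, used here as a proxy for interpretability.
    """
    clf = LinearSVC()
    scores = cross_val_score(clf, prototypes, labels, cv=3)
    return float(scores.mean())

# Usage with synthetic prototypes (10 classes, 5 prototypes each, 64 dimensions):
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 5)
prototypes = rng.normal(size=(50, 64)) + labels[:, None]  # class-dependent shift
print(prototype_separability(prototypes, labels))
```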

Item Type: Article
Uncontrolled Keywords: deep learning, image classification, interpretability, self-explainable models
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
SWORD Depositor: MTMT SWORD
Depositing User: MTMT SWORD
Date Deposited: 07 Sep 2023 12:42
Last Modified: 07 Sep 2023 12:42
URI: http://real.mtak.hu/id/eprint/172972
