PuSH - Publikationsserver des Helmholtz Zentrums München

Panda, M.P.*; Tiezzi, M.*; Vilas, M.*; Roig, G.*; Eskofier, B.M.; Zanca, D.*

FovEx: Human-inspired explanations for vision transformers and convolutional neural networks.

Int. J. Comput. Vis., DOI: 10.1007/s11263-025-02543-y (2025)
Publisher's version: DOI
Access: Closed
Green Open Access possible once the postprint has been submitted to the ZB.
Explainability in artificial intelligence (XAI) remains a crucial aspect for fostering trust and understanding in machine learning models. Current visual explanation techniques, such as gradient-based or class-activation-based methods, often exhibit a strong dependence on specific model architectures. Conversely, perturbation-based methods, despite being model-agnostic, are computationally expensive as they require evaluating models on a large number of forward passes. We introduce Foveation-based Explanations (FovEx), a novel XAI method inspired by human vision, which combines biologically inspired foveation-based transformations with gradient-driven overt attention to iteratively select locations of interest. These locations are selected to maximize the performance of the model to be explained with respect to the downstream task and then combined to generate an attribution map. We provide a thorough evaluation with qualitative and quantitative assessments on established benchmarks. Our method achieves state-of-the-art performance on both transformers (on 4 out of 5 metrics) and convolutional models (on 3 out of 5 metrics), demonstrating its versatility among various architectures. Furthermore, we show the alignment between the explanation map produced by FovEx and human gaze patterns (+14% in NSS compared to RISE, +203% in NSS compared to GradCAM). This comparison enhances our confidence in FovEx's ability to close the interpretation gap between humans and machines.
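The abstract describes an iterative loop: foveate the input around a candidate fixation, use gradients of the downstream score to move the fixation, and aggregate the visited locations into an attribution map. The sketch below illustrates that idea in PyTorch. It is not the authors' implementation: the foveation transform (Gaussian blending of a sharp and a blurred image), the fixation-update rule, the random exploration jitter, and all hyperparameters (n_fixations, n_steps, sigma, lr) are simplifying assumptions, and the model is assumed to return class logits of shape (batch, classes).

import torch
import torch.nn.functional as F

def gaussian_mask(h, w, center, sigma, device):
    # 2-D Gaussian bump centred at `center` = (row, col), values in (0, 1].
    ys = torch.arange(h, device=device, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, device=device, dtype=torch.float32).view(1, -1)
    return torch.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

def foveate(image, blurred, center, sigma):
    # Keep the image sharp near the fixation and blurred in the periphery.
    mask = gaussian_mask(image.shape[-2], image.shape[-1], center, sigma, image.device)
    return mask * image + (1.0 - mask) * blurred

def fovex_sketch(model, image, target_class, n_fixations=5, n_steps=10, sigma=20.0, lr=5.0):
    # image: (1, 3, H, W) tensor; target_class: index of the prediction to explain.
    model.eval()
    h, w = image.shape[-2:]
    # Coarse "peripheral" view: heavy blur via down/up-sampling (a stand-in, assumption).
    blurred = F.interpolate(F.avg_pool2d(image, kernel_size=16), size=(h, w),
                            mode="bilinear", align_corners=False)
    # Differentiable fixation coordinates, initialised at the image centre.
    fixation = torch.tensor([h / 2.0, w / 2.0], device=image.device, requires_grad=True)
    optimizer = torch.optim.SGD([fixation], lr=lr)
    attribution = torch.zeros(h, w, device=image.device)
    for _ in range(n_fixations):
        for _ in range(n_steps):
            optimizer.zero_grad()
            foveated = foveate(image, blurred, fixation, sigma)
            score = model(foveated)[0, target_class]
            (-score).backward()  # gradient ascent on the target-class score
            optimizer.step()
        with torch.no_grad():
            # Record the visited location and jitter to explore a new region
            # (assumption; FovEx itself uses a principled selection mechanism).
            attribution += gaussian_mask(h, w, fixation, sigma, image.device)
            fixation += torch.randn(2, device=image.device) * sigma
    return attribution / attribution.max()

For a pretrained classifier and a preprocessed input batch of one image, fovex_sketch(model, x, predicted_class) returns an (H, W) heatmap that can be overlaid on the input as a visual explanation.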
Impact Factor: 9.300
Scopus SNIP: 5.150
Altmetric
Publication type: Article: Journal article
Document type: Scientific article
Keywords: Human-inspired; Attribution Maps; Explainable Artificial Intelligence; Transformers; CNNs; Visual Explanations
Language: English
Year of publication: 2025
HGF reporting year: 2025
ISSN (print) / ISBN: 0920-5691
e-ISSN: 1573-1405
Publisher: Springer
Place of publication: Van Godewijckstraat 30, 3311 GZ Dordrecht, Netherlands
Review status: Peer reviewed
POF Topic(s): 30205 - Bioengineering and Digital Health
Research field(s): Enabling and Novel Technologies
PSP element(s): G-540008-001
Date recorded: 2025-10-08