Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images.
In: Information Processing in Medical Imaging. Berlin [et al.]: Springer, 2023, 170-182 (Lecture Notes in Computer Science; 13939 LNCS)
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable, but missing so far. In this work, we investigate the use of four attribution methods to explain multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100,000 single cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of the single white blood cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
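The pipeline the abstract describes, attention-based multiple instance learning over bags of single-cell images followed by a pixel-level attribution, can be sketched compactly. The following is a minimal illustration only, not the authors' implementation: the one-layer toy backbone, feature sizes, and bag shape are placeholder assumptions, and Grad-CAM stands in for the four attribution methods compared in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMIL(nn.Module):
    """Bag classifier: per-cell features -> attention pooling -> bag logit."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(               # toy per-instance encoder
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attention = nn.Sequential(              # attention pooling head
            nn.Linear(feat_dim, 16), nn.Tanh(), nn.Linear(16, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, bag):                          # bag: (n_cells, 3, H, W)
        h = self.backbone(bag)                       # (n_cells, feat_dim)
        a = torch.softmax(self.attention(h), dim=0)  # instance attention weights
        z = (a * h).sum(dim=0)                       # attention-weighted bag embedding
        return self.classifier(z), a.squeeze(-1)

model = AttentionMIL().eval()
bag = torch.randn(8, 3, 64, 64, requires_grad=True) # 8 single-cell crops (random stand-in)

# Grad-CAM on the conv layer: weight its activations by channel-pooled gradients.
acts, grads = {}, {}
conv = model.backbone[0]
conv.register_forward_hook(lambda m, i, o: acts.update(v=o))
conv.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

logit, attn = model(bag)                             # attn: instance-level explanation
logit.backward()                                     # gradients w.r.t. the bag logit

w = grads["v"].mean(dim=(2, 3), keepdim=True)        # per-channel importance
cam = F.relu((w * acts["v"]).sum(dim=1))             # (n_cells, H, W) heatmaps
cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalize per cell

print(attn.detach())                                 # per-cell attention weights
print(cam.shape)                                     # pixel-level maps, one per cell
```

Here the attention weights give the instance-level explanation the abstract mentions, while the Grad-CAM maps provide the per-pixel view; LRP, IBA, and InputIBA would replace the gradient-weighting step with their own propagation or information-bottleneck rules.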
Publication type
Article: Conference contribution
Keywords
Blood Cancer Cytology; Multiple Instance Learning; Pixel-level Explainability; Classification
ISSN (print) / ISBN
0302-9743
e-ISSN
1611-3349
Conference Title
Information Processing in Medical Imaging
Source Information
Volume: 13939 LNCS
Pages: 170-182
Publisher
Springer
Publishing Place
Berlin [et al.]
Institute(s)
Institute of AI for Health (AIH)