Using test-time augmentation to investigate explainable AI: Inconsistencies between method, model and human intuition.
J. Cheminformatics 16:39 (2024)
Stakeholders of machine learning models desire explainable artificial intelligence (XAI) to produce human-understandable and consistent interpretations. In computational toxicity prediction, augmentation of text-based molecular representations has been used successfully for transfer learning on downstream tasks. Augmentations of molecular representations can also be used at inference to compare differences between multiple representations of the same ground truth. In this study, we investigate the robustness of eight XAI methods using test-time augmentation for a molecular-representation model in the field of computational toxicity prediction. We report significant differences between explanations for different representations of the same ground truth, and show that randomized models have similar variance. We hypothesize that text-based molecular representations in this and past research reflect tokenization more than learned parameters. Furthermore, we see a greater variance between in-domain predictions than out-of-domain predictions, indicating that XAI measures something other than learned parameters. Finally, we investigate the relative importance given to expert-derived structural alerts and find similar importance given regardless of applicability domain, randomization and varying training procedures. We therefore caution future research against validating XAI methods solely through comparison with human intuition without further investigation.
Scientific contribution: In this research we critically investigate XAI through test-time augmentation, contrasting previous assumptions about using expert validation and showing inconsistencies within models for identical representations. SMILES augmentation has been used to increase model accuracy, but was here adapted from the field of image test-time augmentation to be used as an independent indication of the consistency within SMILES-based molecular representation models.
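The central mechanic described in the abstract, generating several equivalent SMILES strings for one molecule and comparing the explanations assigned to each, can be sketched in a few lines with RDKit. This is an illustrative sketch only: augment_smiles, attribution_spread and the explain callable are assumptions introduced here, not code or method names from the paper, and the mapping of attributions back to a common atom ordering is left to the caller.

```python
from collections import defaultdict
from statistics import pstdev

from rdkit import Chem


def augment_smiles(smiles: str, n: int = 10) -> list[str]:
    """Return n randomized (non-canonical) SMILES strings for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    # doRandom=True lets RDKit start the traversal at a random atom, giving a
    # different but chemically equivalent string on each call.
    return [Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(n)]


def attribution_spread(smiles: str, explain, n: int = 10) -> dict[int, float]:
    """Per-atom standard deviation of attributions across augmented SMILES.

    `explain` is a hypothetical callable standing in for any XAI method: it
    takes one SMILES string and returns {atom_index: attribution}, with atom
    indices already mapped back to a common reference ordering so scores from
    different strings are comparable.
    """
    per_atom = defaultdict(list)
    for variant in augment_smiles(smiles, n):
        for atom_idx, score in explain(variant).items():
            per_atom[atom_idx].append(score)
    # A large spread for the same molecule signals an explanation that tracks
    # the string representation rather than the underlying chemistry.
    return {idx: pstdev(scores) for idx, scores in per_atom.items()}
```

For example, attribution_spread("c1ccccc1O", explain) would report, for each atom of phenol, how much a given explanation method fluctuates across ten equivalent SMILES strings of the same molecule.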
Publication type
Article: Journal article
Document type
Scientific Article
Keywords
Explainability; Interpretation; ML; Representation Learning; Robustness; Test-time Augmentation; XAI
e-ISSN
1758-2946
Journal
Journal of Cheminformatics
Source information
Volume: 16, Issue: 1, Article Number: 39
Publisher
BioMed Central
Non-patent literature
Publications
Reviewing status
Peer reviewed
Institute(s)
Institute of Structural Biology (STB)