PuSH - Publikationsserver des Helmholtz Zentrums München

Meissen, F.*; Breuer, S.*; Knolle, M.*; Buyx, A.*; Müller, R.*; Kaissis, G.; Wiestler, B.*; Rückert, D.*

(Predictable) performance bias in unsupervised anomaly detection.

EBioMedicine 101:105002 (2024)
Publisher's version · DOI · PMC
Open Access Gold
Creative Commons license agreement
BACKGROUND: With the ever-increasing amount of medical imaging data, the demand for algorithms to assist clinicians has amplified. Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection. While previous studies have thoroughly explored fairness in supervised models in healthcare, for UAD, this has so far been unexplored.

METHODS: In this study, we evaluated how dataset composition regarding subgroups manifests in disparate performance of UAD models along multiple protected variables on three large-scale publicly available chest X-ray datasets. Our experiments were validated using two state-of-the-art UAD models for medical images. Finally, we introduced subgroup-AUROC (sAUROC), which aids in quantifying fairness in machine learning.

FINDINGS: Our experiments revealed empirical "fairness laws" (similar to "scaling laws" for Transformers) for training-dataset composition: Linear relationships between anomaly detection performance within a subpopulation and its representation in the training data. Our study further revealed performance disparities, even in the case of balanced training data, and compound effects that exacerbate the drop in performance for subjects associated with multiple adversely affected groups.

INTERPRETATION: Our study quantified the disparate performance of UAD models against certain demographic subgroups. Importantly, we showed that this unfairness cannot be mitigated by balanced representation alone. Instead, the representation of some subgroups seems harder to learn by UAD models than that of others. The empirical "fairness laws" discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition.

FUNDING: European Research Council Deep4MI.
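The abstract introduces subgroup-AUROC (sAUROC) but does not spell out its definition. A plausible minimal sketch, assuming sAUROC means computing the AUROC of the anomaly scores separately within each demographic subgroup (the function names `auroc` and `subgroup_auroc` are illustrative, not from the paper):

```python
from collections import defaultdict

def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen anomalous sample scores higher than a normal one,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def subgroup_auroc(scores, labels, groups):
    """Compute AUROC per subgroup: each subgroup's anomalous samples
    (label 1) are compared only against that subgroup's normal samples
    (label 0). Subgroups lacking either class are skipped."""
    by_group = defaultdict(lambda: ([], []))
    for s, y, g in zip(scores, labels, groups):
        by_group[g][y].append(s)  # index 0 = normal, index 1 = anomalous
    return {g: auroc(pos, neg)
            for g, (neg, pos) in by_group.items() if pos and neg}
```

Comparing the resulting per-subgroup values (e.g. their minimum or spread) against the pooled AUROC is one way to expose the disparities the study reports even under balanced training data.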
Impact Factor 9.700
Scopus SNIP 0.000
Altmetric
Publication type Article: Journal article
Document type Scientific article
Keywords Algorithmic Bias; Anomaly Detection; Artificial Intelligence; Machine Learning; Subgroup Disparities; Bare Bones
Language English
Year of publication 2024
HGF reporting year 2024
ISSN (print) / ISBN 2352-3964
e-ISSN 2352-3964
Journal EBioMedicine
Citation Volume: 101, Article number: 105002
Publisher Elsevier
Place of publication Amsterdam [et al.]
Review status Peer reviewed
Institute(s) Institute for Machine Learning in Biomed Imaging (IML)
POF Topic(s) 30205 - Bioengineering and Digital Health
Research field(s) Enabling and Novel Technologies
PSP element(s) G-507100-001
Funding European Research Council
PubMed ID 38335791
Date of record 2024-04-19