PuSH - Publikationsserver des Helmholtz Zentrums München

Tayebi Arasteh, S.*; Ziller, A.*; Kühl, C.*; Makowski, M.*; Nebelung, S.*; Braren, R.*; Rueckert, D.*; Truhn, D.*; Kaissis, G.

Preserving fairness and diagnostic accuracy in private large-scale AI models for medical imaging.

Commun. Med. 4:46 (2024)
Publisher's version: DOI, PMC
Open Access Gold
Creative Commons license agreement
BACKGROUND: Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.

METHODS: We used two datasets: (1) A large dataset (N = 193,311) of high quality clinical chest radiographs, and (2) a dataset (N = 1625) of 3D abdominal computed tomography (CT) images, with the task of classifying the presence of pancreatic ductal adenocarcinoma (PDAC). Both were retrospectively collected and manually labeled by experienced radiologists. We then compared non-private deep convolutional neural networks (CNNs) and privacy-preserving (DP) models with respect to privacy-utility trade-offs measured as area under the receiver operating characteristic curve (AUROC), and privacy-fairness trade-offs, measured as Pearson's r or Statistical Parity Difference.

RESULTS: We find that, while the privacy-preserving training yields lower accuracy, it largely does not amplify discrimination against age, sex or co-morbidity. However, we find an indication that difficult diagnoses and subgroups suffer stronger performance hits in private training.

CONCLUSIONS: Our study shows that - under the challenging realistic circumstances of a real-life clinical dataset - the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
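One of the fairness metrics named in the abstract, Statistical Parity Difference, can be sketched in a few lines. The function and toy data below are illustrative assumptions for clarity only, not the paper's actual evaluation code:

```python
def statistical_parity_difference(y_pred, groups, group_a, group_b):
    """SPD = P(y_hat = 1 | group A) - P(y_hat = 1 | group B).

    A value of 0 means both subgroups receive positive predictions at the
    same rate; the sign indicates which subgroup is favoured.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical toy data: binary model predictions for eight patients,
# grouped by sex (not drawn from the study's datasets).
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
spd = statistical_parity_difference(y_pred, groups, "F", "M")  # 0.75 - 0.25 = 0.5
```

In the study's setting, such a metric would be computed per protected attribute (age, sex, co-morbidity) for both the non-private and the DP-trained model, and the two values compared.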
Impact Factor 5.400
Scopus SNIP 0.000
Scopus Cited By 2
Altmetric
Publication type Article: Journal article
Document type Scientific article
Language English
Year of publication 2024
HGF reporting year 2024
ISSN (print) / ISBN 2730-664X
e-ISSN 2730-664X
Citation Volume: 4, Issue: 1, Article number: 46
Publisher Springer
Place of publication Campus, 4 Crinan St, London, N1 9XW, England
Review status Peer reviewed
Institute(s) Institute for Machine Learning in Biomed Imaging (IML)
POF Topic(s) 30205 - Bioengineering and Digital Health
Research field(s) Enabling and Novel Technologies
PSP element(s) G-507100-001
Funding ERC Grant Deep4MI
German Ministry of Education and Research (BMBF)
Deutsche Forschungsgemeinschaft (DFG)
German Federal Ministry of Education
European Union
Deutsches Konsortium für Translationale Krebsforschung (DKTK)
German Federal Ministry of Education and Research
Bavarian State Ministry for Science and the Arts through the Munich Centre for Machine Learning
Radiological Cooperative Network (RACOON) under the German Federal Ministry of Education and Research (BMBF)
Scopus ID 85203684703
PubMed ID 38486100
Date recorded 2024-05-07