PuSH - Publikationsserver des Helmholtz Zentrums München

Starke, S.* ; Zwanenburg, A.* ; Leger, K.* ; Lohaus, F.* ; Linge, A.* ; Kalinauskaite, G.* ; Tinhofer, I.* ; Guberina, N.* ; Guberina, M.* ; Balermpas, P.* ; Grün, J.V.* ; Ganswindt, U. ; Belka, C. ; Peeken, J.C. ; Combs, S.E. ; Boeke, S.* ; Zips, D.* ; Richter, C.* ; Troost, E.G.C.* ; Krause, M.* ; Baumann, M.* ; Löck, S.*

Multitask learning with convolutional neural networks and vision transformers for outcome prediction of head and neck cancer patients

Cancers 15:21 (2023)
Open Access Gold
Creative Commons license
Neural-network-based outcome predictions may enable further treatment personalization of patients with head and neck cancer. The development of neural networks can prove challenging when a limited number of cases is available. Therefore, we investigated whether multitask learning strategies, implemented through the simultaneous optimization of two distinct outcome objectives (multi-outcome) and combined with a tumor segmentation task, can lead to improved performance of convolutional neural networks (CNNs) and vision transformers (ViTs). Model training was conducted on two distinct multicenter datasets for the endpoints loco-regional control (LRC) and progression-free survival (PFS), respectively. The first dataset consisted of pre-treatment computed tomography (CT) imaging for 290 patients and the second dataset contained combined positron emission tomography (PET)/CT data of 224 patients. Discriminative performance was assessed by the concordance index (C-index). Risk stratification was evaluated using log-rank tests. Across both datasets, CNN and ViT model ensembles achieved similar results. Multitask approaches showed favorable performance in most investigations. Multi-outcome CNN models trained with segmentation loss were identified as the optimal strategy across cohorts. On the PET/CT dataset, an ensemble of multi-outcome CNNs trained with segmentation loss achieved the best discrimination (C-index: 0.29, 95% confidence interval (CI): 0.22-0.36) and successfully stratified patients into groups with low and high risk of disease progression (p=0.003). On the CT dataset, ensembles of multi-outcome CNNs and of single-outcome ViTs trained with segmentation loss performed best (C-index: 0.26 and 0.26, CI: 0.18-0.34 and 0.18-0.35, respectively), both with significant risk stratification for LRC in independent validation (p=0.002 and p=0.011). Further validation of the developed multitask-learning models is planned based on a prospective validation study, which has recently completed recruitment.
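As a concrete illustration of the multitask setup described in the abstract, the following is a minimal sketch, assuming PyTorch: a shared 3D CNN encoder feeds two discrete-time survival heads (one per outcome, e.g. LRC and PFS) and an auxiliary tumor segmentation decoder, and the model is trained on a weighted sum of the two survival losses and a Dice segmentation loss. The network sizes, number of time intervals, loss weight, and tensor shapes are illustrative assumptions, not values from the paper or the authors' code.

import torch
import torch.nn as nn


class MultiTaskSurvivalCNN(nn.Module):
    def __init__(self, in_channels: int = 2, n_intervals: int = 10):
        super().__init__()
        # Shared 3D encoder for PET/CT (2 channels) or CT-only (1 channel) input.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        # Two outcome heads (multi-outcome), each predicting conditional hazards
        # of a discrete-time survival model.
        self.head_lrc = nn.Linear(32, n_intervals)
        self.head_pfs = nn.Linear(32, n_intervals)
        # Lightweight decoder producing tumor segmentation logits (auxiliary task).
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(16, 1, 1),
        )

    def forward(self, x):
        feat = self.encoder(x)
        pooled = feat.mean(dim=(2, 3, 4))  # global average pooling
        hazards_lrc = torch.sigmoid(self.head_lrc(pooled))
        hazards_pfs = torch.sigmoid(self.head_pfs(pooled))
        seg_logits = self.decoder(feat)
        return hazards_lrc, hazards_pfs, seg_logits


def discrete_survival_nll(hazards, interval_idx, event):
    """Negative log-likelihood of a discrete-time survival model.

    hazards:      (B, K) conditional event probabilities per time interval
    interval_idx: (B,)   index of the interval containing the follow-up time
    event:        (B,)   1 if the event was observed, 0 if censored
    """
    B, K = hazards.shape
    idx = torch.arange(K, device=hazards.device).expand(B, K)
    before = idx < interval_idx.unsqueeze(1)   # intervals fully survived
    at = idx == interval_idx.unsqueeze(1)      # interval of event/censoring
    log_surv = torch.log1p(-hazards.clamp(max=1 - 1e-7))
    log_haz = torch.log(hazards.clamp(min=1e-7))
    ll = (before * log_surv).sum(dim=1)
    ll = ll + event * (at * log_haz).sum(dim=1) + (1 - event) * (at * log_surv).sum(dim=1)
    return -ll.mean()


def dice_loss(seg_logits, target, eps: float = 1e-6):
    """Soft Dice loss for the auxiliary tumor segmentation task."""
    probs = torch.sigmoid(seg_logits)
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)


# Toy forward/backward pass with random tensors (shapes are illustrative).
model = MultiTaskSurvivalCNN()
x = torch.randn(2, 2, 32, 32, 32)                    # batch of PET/CT patches
mask = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()  # toy tumor masks
interval = torch.tensor([3, 7])
event = torch.tensor([1.0, 0.0])

h_lrc, h_pfs, seg = model(x)
loss = (discrete_survival_nll(h_lrc, interval, event)
        + discrete_survival_nll(h_pfs, interval, event)
        + 0.5 * dice_loss(seg, mask))  # segmentation weight of 0.5 is an assumption
loss.backward()

The segmentation term can be read as an auxiliary objective that encourages the shared encoder to attend to tumor-related image regions; the relative weighting of the tasks is a hyperparameter that would need tuning in practice, as would the choice between this discrete-time formulation and a Cox proportional hazards loss (both are named in the keywords below).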
Impact Factor: 5.200
Scopus SNIP: 0.000
Altmetric: —
Publication type: Article: Journal article
Document type: Scientific article
Keywords: Cox Proportional Hazards; Convolutional Neural Network; Discrete-time Survival Models; Head and Neck Cancer; Loco-regional Control; Multitask Learning; Progression-free Survival; Survival Analysis; Tumor Segmentation; Vision Transformer; Radiomics; Models
Language: English
Year of publication: 2023
HGF reporting year: 2023
ISSN (print) / ISBN: 2072-6694
Journal: Cancers
Source details: Volume: 15, Issue: 19, Article number: 21
Publisher: MDPI
Place of publication: St. Alban-Anlage 66, CH-4052 Basel, Switzerland
Review status: Peer reviewed
POF topic(s): 30504 - Mechanisms of Genetic and Environmental Influences on Health and Disease
30203 - Molecular Targets and Therapies
Research field(s): Radiation Sciences
PSP element(s): G-521800-001
G-501300-001
Scopus ID: 85173848542
PubMed ID: 37835591
Date recorded: 2023-11-28