TY - JOUR
AB - Purpose: To investigate the integration of differential privacy (DP) and analyze its impact on model performance compared with models trained without DP. Materials and Methods: Leveraging more than 590 000 chest radiographs from five institutions, including VinDr-CXR from Vietnam, ChestX-ray14 and CheXpert from the United States, UKA-CXR from Germany, and PadChest from Spain, the authors evaluated the efficacy of DP-enhanced domain transfer (DP-DT) in classifying cardiomegaly, pleural effusion, pneumonia, and atelectasis and in identifying healthy individuals. Diagnostic performance and sex- and age-specific demographic fairness of DP-DT and non–DP-DT models were compared using the area under the receiver operating characteristic curve (AUC) as the primary metric, with accuracy, sensitivity, and specificity as secondary metrics; statistical significance was evaluated using paired Student t tests. Results: Even at high privacy levels (ε ≈ 1), DP-DT showed no evidence of a difference from non–DP-DT in the decrease in AUC from single-institutional to cross-institutional performance (VinDr-CXR: 0.07 vs 0.07, P = .96; ChestX-ray14: 0.07 vs 0.06, P = .12; CheXpert: 0.07 vs 0.07, P = .18; UKA-CXR: 0.18 vs 0.18, P = .90; PadChest: 0.07 vs 0.07, P = .35). Furthermore, AUC differences between DP-DT and non–DP-DT models were less than 1% for all sex subgroups (P > .33 for female and P > .22 for male participants, for all domains) and nearly all age subgroups (P > .16 for younger participants, P > .33 for adults, and P > .27 for older adults, for nearly all domains). Conclusion: Cross-institutional performance of artificial intelligence models was not affected by DP.
AU - Tayebi Arasteh, S.*
AU - Lotfinia, M.*
AU - Nolte, T.*
AU - Sähn, M.J.*
AU - Isfort, P.*
AU - Kühl, C.*
AU - Nebelung, S.*
AU - Kaissis, G.
AU - Truhn, D.*
C1 - 70002
C2 - 55008
CY - 820 Jorie Blvd, Suite 200, Oak Brook, Illinois, United States
TI - Securing collaborative medical AI by using differential privacy: Domain transfer for classification of chest radiographs.
JO - Radiol. Artif. Intell.
VL - 6
IS - 1
PB - Radiological Soc North America (RSNA)
PY - 2024
SN - 2638-6100
ER -
TY - JOUR
AB - Purpose: To analyze the performance of deep learning (DL) models for segmentation of the neonatal lung in MRI and to investigate the use of automated MRI-based features for assessment of neonatal lung disease. Materials and Methods: Quiet-breathing MRI was prospectively performed in two independent cohorts of preterm infants (median gestational age, 26.57 weeks; IQR, 25.3-28.6 weeks; 55 female and 48 male infants) with (n = 86) and without (n = 21) chronic lung disease (bronchopulmonary dysplasia [BPD]). Convolutional neural networks were developed for lung segmentation, and a three-dimensional reconstruction was used to calculate MRI features of lung volume, shape, pixel intensity, and surface. These features were explored as indicators of BPD and disease-associated structural lung remodeling through correlation with lung injury scores and multinomial models for BPD severity stratification. Results: The lung segmentation model reached a volumetric Dice coefficient of 0.908 in cross-validation and 0.880 on the independent test dataset, matching expert-level performance across disease grades. MRI lung features demonstrated significant correlations with lung injury scores and added structural information for the separation of neonates with BPD (BPD vs no BPD: average area under the receiver operating characteristic curve [AUC], 0.92 ± 0.02 [SD]; no or mild BPD vs moderate or severe BPD: average AUC, 0.84 ± 0.03). Conclusion: This study demonstrated high performance of DL models for neonatal lung segmentation in MRI and showed the potential of automated MRI features for diagnostic assessment of neonatal lung disease while avoiding radiation exposure.
KW - Bronchopulmonary Dysplasia
KW - Chronic Lung Disease
KW - Preterm Infant
KW - Lung Segmentation
KW - Lung MRI
KW - BPD Severity Assessment
KW - Deep Learning
KW - Lung Imaging Biomarkers
KW - Lung Topology
N1 - Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Parraga and Sharma in this issue.
AU - Mairhörmann, B.
AU - Castelblanco, A.
AU - Häfner, F.
AU - Koliogiannis, V.
AU - Haist, L.
AU - Winter, D.
AU - Flemmer, A.W.
AU - Ehrhardt, H.
AU - Stöcklein, S.
AU - Dietrich, O.
AU - Förster, K.
AU - Hilgendorff, A.
AU - Schubert, B.
C1 - 68987
C2 - 53800
CY - 820 Jorie Blvd, Suite 200, Oak Brook, Illinois, United States
TI - Automated MRI lung segmentation and 3D morphologic features for quantification of neonatal lung disease.
JO - Radiol. Artif. Intell.
VL - 5
IS - 6
PB - Radiological Soc North America (RSNA)
PY - 2023
SN - 2638-6100
ER -
TY - JOUR
AB - Fully automated and fast assessment of visceral and subcutaneous adipose tissue compartments using whole-body MRI is feasible with a deep learning network; a robust and generalizable architecture was investigated that enables objective segmentation and quick phenotypic profiling. Purpose: To enable fast and reliable assessment of subcutaneous and visceral adipose tissue compartments derived from whole-body MRI. Materials and Methods: Quantification and localization of different adipose tissue compartments derived from whole-body MR images are of high interest in research concerning metabolic conditions. For correct identification and phenotyping of individuals at increased risk for metabolic diseases, reliable automated segmentation of adipose tissue into subcutaneous and visceral adipose tissue is required. In this work, a three-dimensional (3D) densely connected convolutional neural network (DCNet) is proposed to provide robust and objective segmentation. In this retrospective study, 1000 cases (average age, 66 years ± 13 [standard deviation]; 523 women) from the Tuebingen Family Study database and the German Center for Diabetes Research database and 300 cases (average age, 53 years ± 11; 152 women) from the German National Cohort (NAKO) database were collected for model training, validation, and testing, with transfer learning between the cohorts. These datasets included variable imaging sequences, imaging contrasts, receiver coil arrangements, scanners, and imaging field strengths. The proposed DCNet was compared with a similar 3D U-Net segmentation in terms of sensitivity, specificity, precision, accuracy, and Dice overlap. Results: Fast (range, 5–7 seconds) and reliable adipose tissue segmentation can be performed with high Dice overlap (0.94), sensitivity (96.6%), specificity (95.1%), precision (92.1%), and accuracy (98.4%) from 3D whole-body MRI datasets (field of view coverage, 450 × 450 × 2000 mm). Segmentation masks and adipose tissue profiles are automatically reported back to the referring physician. Conclusion: Automated adipose tissue segmentation is feasible in 3D whole-body MRI datasets and is generalizable to different epidemiologic cohort studies with the proposed DCNet.
AU - Küstner, T.*
AU - Hepp, T.*
AU - Fischer, M.*
AU - Schwartz, M.*
AU - Fritsche, A.
AU - Häring, H.-U.
AU - Nikolaou, K.*
AU - Bamberg, F.*
AU - Yang, B.*
AU - Schick, F.
AU - Gatidis, S.*
AU - Machann, J.
C1 - 61247
C2 - 49783
TI - Fully automated and standardized segmentation of adipose tissue compartments via deep learning in 3D whole-body MRI of epidemiological cohort studies.
JO - Radiol. Artif. Intell.
VL - 2
IS - 6
PY - 2021
SN - 2638-6100
ER -