TY - CONF
AB - Pancreatic Ductal Adenocarcinoma (PDAC) is one of the most lethal cancers, with an increasing incidence. Lymph node metastasis (LNM) is a critical factor that influences both patient prognosis and treatment approaches. Current methods for LNM detection using contrast-enhanced CT scans often suffer from low sensitivity and inaccuracies, highlighting the need for improved predictive models. This paper presents a Deep Learning (DL) approach that integrates imaging features with non-imaging clinical attributes to enhance the accuracy of LNM detection in PDAC. Our method involves a retrospective study of 366 PDAC cases from multi-institutional datasets, leveraging clinical data alongside CT scans to train a model capable of detecting LNM without relying on lymph node (LN) segmentation. Our results demonstrate a significant improvement in balanced accuracy, from 0.447 to 0.6532, with the incorporation of clinical attributes, underscoring the importance of holistic data integration in enhancing LNM detection. This work emphasizes the potential of collaborative, multi-center efforts in advancing predictive modeling for improved patient outcomes in PDAC. Our code is available online: https://github.com/albarqounilab/DELTA.
AU - Gaviria, D.D.*
AU - Kupczyk, P.*
AU - Lee, B.*
AU - Gibbs, P.*
AU - Ko, H.J.*
AU - Semaan, A.*
AU - Albarqouni, S.
C1 - 74228
C2 - 57260
SP - 66-76
TI - Deep learning for lymph node metastasis detection in pancreatic ductal adenocarcinoma.
JO - LNEE
VL - 1372 LNEE
PY - 2025
SN - 1876-1100
ER -
TY - CONF
AB - Giant Cell Arteritis (GCA) is a serious autoimmune disease that affects large and medium-sized arteries, potentially leading to severe complications such as vision loss if not diagnosed promptly. Current diagnostic methods rely heavily on clinical judgment and ultrasound imaging techniques, which require experience and expertise. This work investigates the use of supervised deep learning to improve the detection of hypoechoic wall thickening in ultrasound images, a key indicator of GCA. We developed an affordable and efficient artery-fusion deep learning model that takes an ultrasound image as input and incorporates artery type to enhance detection accuracy. Our Artery Fusion Model, which integrates simple yet crucial artery-type information, demonstrated superior diagnostic accuracy for the larger arteries, with F1-scores of 81% (AAX) and 74% (ATC) and AUROC scores of 0.94 (AAX) and 0.87 (ATC), outperforming the Artery-Specific model by 2.7% (AAX) and 8.3% (ATC) and the Combined model by 2.8% (AAX) and 0.9% (ATC). However, performance was lower for the smaller arteries (ATF and ATP), reflecting the inherent challenges associated with these vessels. We employed Monte Carlo Batch Normalization and Class Activation Maps to improve interpretability and reliability. Our results demonstrate that uncertainty quantification enhances model performance by excluding uncertain predictions, underscoring its potential to revolutionize GCA detection and diagnosis.
AU - Schaab, S.*
AU - Bauer, C.J.*
AU - Schäfer, V.S.*
AU - Albarqouni, S.
C1 - 74230
C2 - 57261
SP - 396-406
TI - Artery-fusion deep learning for enhanced ultrasound diagnosis of giant cell arteritis.
JO - LNEE
VL - 1372 LNEE
PY - 2025
SN - 1876-1100
ER -
TY - CONF
AB - An increasing number of AI products for medical imaging are offered to healthcare organizations, but these are frequently considered a ‘black box’, offering only limited insight into the AI model’s functionality.
Therefore, model-agnostic Explainable AI (XAI) methods are required to improve clinicians’ trust and thus accelerate adoption. However, there is currently a lack of published methods for explaining 3D classification models with systematic evaluation in medical imaging applications. Here, the popular explainability method RISE is modified so that, for the first time to the best of our knowledge, it can be applied to 3D medical image classification. The method was assessed using recently proposed guidelines for clinical explainable AI. When different parameters were tested on a 3D CT dataset with a classifier for detecting brain hemorrhage, we found that combining different algorithms to produce 3D occlusion patterns led to better and more reliable explainability results. This was confirmed using both quantitative metrics and an interpretability assessment of the 3D saliency heatmaps by a clinical expert.
AU - Highton, J.*
AU - Chong, Q.Z.*
AU - Crawley, R.*
AU - Schnabel, J.A.
AU - Bhatia, K.K.*
C1 - 70406
C2 - 55569
CY - 152 Beach Road, #21-01/04 Gateway East, Singapore, 189721, Singapore
SP - 41-51
TI - Evaluation of randomized input sampling for explanation (RISE) for 3D XAI - proof of concept for black-box brain-hemorrhage classification.
JO - LNEE
VL - 1166 LNEE
PB - Springer-Verlag Singapore Pte Ltd
PY - 2024
SN - 1876-1100
ER -