TY - JOUR AB - Synthesizing medical images while preserving their structural information is crucial in medical research. In such scenarios, the preservation of anatomical content becomes especially important. Although recent advances have been made by incorporating instance-level information to guide translation, these methods overlook the spatial coherence of structural-level representation and the anatomical invariance of content during translation. To address these issues, we introduce hierarchical granularity discrimination, which exploits various levels of semantic information present in medical images. Our strategy utilizes three levels of discrimination granularity: pixel-level discrimination using a Brain Memory Bank, structure-level discrimination on each brain structure with a re-weighting strategy to focus on hard samples, and global-level discrimination to ensure anatomical consistency during translation. The image translation performance of our strategy has been evaluated on three independent datasets (UK Biobank, IXI, and BraTS 2018), and it has outperformed state-of-the-art algorithms. In particular, our model excels not only in synthesizing normal structures but also in handling abnormal (pathological) structures, such as brain tumors, despite the variations in contrast observed across different imaging modalities due to their pathological characteristics. The diagnostic value of synthesized MR images containing brain tumors has been evaluated by radiologists, indicating that our model may offer an alternative solution in scenarios where specific MR modalities of patients are unavailable. Extensive experiments further demonstrate the versatility of our method, providing unique insights into medical image translation. AU - Yu, Z.* AU - Zhao, B.* AU - Zhang, S.* AU - Chen, X.* AU - Yan, F.* AU - Feng, J.* AU - Peng, T. 
AU - Zhang, X.Y.* C1 - 72567 C2 - 56655 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - HiFi-Syn: Hierarchical granularity discrimination for high-fidelity synthesis of MR images with structure preservation. JO - Med. Image Anal. VL - 100 PB - Elsevier PY - 2024 SN - 1361-8415 ER - TY - JOUR AB - The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts are dependent on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding, while minimising artefacts. We evaluated our method quantitatively and qualitatively, using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain. AU - Wright, R.* AU - Gomez, A.* AU - Zimmer, V.A.* AU - Toussaint, N.* AU - Khanal, B.* AU - Matthew, J.* AU - Skelton, E.* AU - Kainz, B.* AU - Rueckert, D.* AU - Hajnal, J.V.* AU - Schnabel, J.A. 
C1 - 68077 C2 - 54555 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - Fast fetal head compounding from multi-view 3D ultrasound. JO - Med. Image Anal. VL - 89 PB - Elsevier PY - 2023 SN - 1361-8415 ER - TY - JOUR AB - Deep neural networks (DNNs) have achieved physician-level accuracy on many imaging-based medical diagnostic tasks, for example classification of retinal images in ophthalmology. However, their decision mechanisms are often considered impenetrable, leading to a lack of trust by clinicians and patients. To alleviate this issue, a range of explanation methods have been proposed to expose the inner workings of DNNs leading to their decisions. For imaging-based tasks, this is often achieved via saliency maps. The quality of these maps is typically evaluated via perturbation analysis without experts involved. To facilitate the adoption and success of such automated systems, however, it is crucial to validate saliency maps against clinicians. In this study, we used three different network architectures and developed ensembles of DNNs to detect diabetic retinopathy and neovascular age-related macular degeneration from retinal fundus images and optical coherence tomography scans, respectively. We used a variety of explanation methods and obtained a comprehensive set of saliency maps for explaining the ensemble-based diagnostic decisions. Then, we systematically validated saliency maps against clinicians through two main analyses — a direct comparison of saliency maps with the expert annotations of disease-specific pathologies and perturbation analyses that also use expert annotations as saliency maps. We found the choice of DNN architecture and explanation method to significantly influence the quality of saliency maps. Guided Backprop showed consistently good performance across disease scenarios and DNN architectures, suggesting that it provides a suitable starting point for explaining the decisions of DNNs on retinal images. 
AU - Ayhan, M.S.* AU - Kuemmerle, L. AU - Kühlewein, L.* AU - Inhoffen, W.* AU - Aliyeva, G.* AU - Ziemssen, F.* AU - Berens, P.* C1 - 64202 C2 - 52105 TI - Clinical validation of saliency maps for understanding deep neural networks in ophthalmology. JO - Med. Image Anal. VL - 77 PY - 2022 SN - 1361-8415 ER - TY - JOUR AB - In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. 
In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094. AU - Bilic, P.* AU - Christ, P.* AU - Li, H.B.* AU - Vorontsov, E.A.* AU - Ben-Cohen, A.* AU - Kaissis, G.* AU - Szeskin, A.* AU - Jacobs, C.* AU - Mamani, G.E.H.* AU - Chartrand, G.* AU - Lohöfer, F.* AU - Holch, J.W.* AU - Sommer, W.* AU - Hofmann, F.* AU - Hostettler, A.* AU - Lev-Cohain, N.* AU - Drozdzal, M.* AU - Amitai, M.M.* AU - Vivanti, R.* AU - Sosna, J.* AU - Ezhov, I.* AU - Sekuboyina, A.* AU - Navarro, F.* AU - Kofler, F. AU - Paetzold, J.C. AU - Shit, S.* AU - Hu, X.* AU - Lipkova, J.* AU - Rempfler, M.* AU - Piraud, M. AU - Kirschke, J.* AU - Wiestler, B.* AU - Zhang, Z.* AU - Hülsemeyer, C.* AU - Beetz, M.* AU - Ettlinger, F.* AU - Antonelli, M.* AU - Bae, W.* AU - Bellver, M.* AU - Bi, L.* AU - Chen, H.* AU - Chlebus, G.* AU - Dam, E.B.* AU - Dou, Q.* AU - Fu, C.W.* AU - Georgescu, B.* AU - Giró-I-Nieto, X.* AU - Gruen, F.* AU - Han, X.* AU - Heng, P.A.* AU - Hesser, J.* AU - Moltz, J.H.* AU - Igel, C.* AU - Isensee, F.* AU - Jäger, P.* AU - Jia, F.* AU - Kaluva, K.C.* AU - Khened, M.* AU - Kim, I.* AU - Kim, J.H.* AU - Kim, S.* AU - Kohl, S.* AU - Konopczynski, T.* AU - Kori, A.* AU - Krishnamurthi, G.* AU - Li, F.* AU - Li, H.* AU - Li, J.* AU - Li, X.* AU - Lowengrub, J.* AU - Ma, J.* AU - Maier-Hein, K.* AU - Maninis, K.K.* AU - Meine, H.* AU - Merhof, D.* AU - Pai, A.* AU - Perslev, M.* AU - Petersen, J.* AU - Pont-Tuset, J.* AU - Qi, J.* AU - Qi, X.* AU - Rippel, O.* AU - Roth, K.* AU - Sarasua, I.* AU - Schenk, A.* AU - Shen, Z.* AU - Torres, J.* AU - Wachinger, C.* AU - Wang, C.* AU - Weninger, L.* AU - Wu, J.* AU - Xu, D.* AU - Yang, X.* AU - Yu, S.C.H.* AU - Yuan, Y.* AU - Yue, M.* AU - Zhang, L.* AU - Cardoso, J.* AU - Bakas, S.* AU - Braren, R.* AU - Heinemann, V.* AU - Pal, C.* AU - Tang, A.* AU - Kadoury, S.* AU - Soler, L.* AU - Van Ginneken, B.* AU - Greenspan, H.* AU - Joskowicz, L.* AU - Menze, B.* C1 - 67003 C2 - 
53372 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - The Liver Tumor Segmentation Benchmark (LiTS). JO - Med. Image Anal. VL - 84 PB - Elsevier PY - 2022 SN - 1361-8415 ER - TY - JOUR AB - Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. Automatic segmentation is, however, still challenging due to the poor image quality, the various LA shapes, the thin wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently and ignored the intrinsic spatial relationship between LA and scars. In this work, we develop a new framework, namely AtrialJSQnet, where LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end manner. We propose a mechanism of shape attention (SA) via an implicit surface projection to utilize the inherent correlation between LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. In addition, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information of the target in order to reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of the proposed AtrialJSQnet on 40 pre-ablation LGE MRIs from this challenge and 30 post-ablation multi-center LGE MRIs from another challenge (ISBI2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of the proposed AtrialJSQnet, which achieved performance competitive with the state-of-the-art. The relatedness between LA segmentation and scar quantification was explicitly explored and yielded significant performance improvements for both tasks. 
The code has been released via https://zmiclab.github.io/projects.html. AU - Li, L.* AU - Zimmer, V.A.* AU - Schnabel, J.A. AU - Zhuang, X.* C1 - 63745 C2 - 51618 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - AtrialJSQnet: A new framework for joint segmentation and quantification of left atrium and scars incorporating spatial and shape information. JO - Med. Image Anal. VL - 76 PB - Elsevier PY - 2022 SN - 1361-8415 ER - TY - JOUR AB - Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of LA scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA LGE MRI computing and analysis are essential for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineations can be time-consuming and subject to intra- and inter-expert variability, automating this computation is highly desirable, yet it remains challenging and under-researched. This paper aims to provide a systematic review of computing methods for LA cavity, wall, scar, and ablation gap segmentation and quantification from LGE MRI, and the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task as well as state-of-the-art results on public datasets. Finally, the possible future developments are outlined, with a brief survey on the potential clinical applications of the aforementioned methods. The review indicates that research into this topic is still in the early stages. 
Although several methods have been proposed, especially for LA cavity segmentation, there is still a large scope for further algorithmic developments due to performance issues related to the high variability of enhancement appearance and differences in image acquisition. AU - Li, L.* AU - Zimmer, V.A.* AU - Schnabel, J.A. AU - Zhuang, X.* C1 - 64244 C2 - 52124 TI - Medical image analysis on left atrial LGE MRI for atrial fibrillation studies: A review. JO - Med. Image Anal. VL - 77 PY - 2022 SN - 1361-8415 ER - TY - JOUR AB - Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted image quality of US, resulting in highly variable reference annotations, and (iii) the limited field-of-view of US, prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high-quality segmentation in US of larger structures, such as the placenta, that extend beyond the field-of-view of single probes, with reduced image artifacts. 
AU - Zimmer, V.A.* AU - Gomez, A.* AU - Skelton, E.* AU - Wright, R.* AU - Wheeler, G.* AU - Deng, S.* AU - Ghavami, N.* AU - Lloyd, K.* AU - Matthew, J.* AU - Kainz, B.* AU - Rueckert, D.* AU - Hajnal, J.V.* AU - Schnabel, J.A. C1 - 66480 C2 - 52827 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. JO - Med. Image Anal. VL - 83 PB - Elsevier PY - 2022 SN - 1361-8415 ER - TY - JOUR AB - Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This makes it possible to spot abnormal structures from the erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) removes the need for vast amounts of manually segmented training data — a necessity for, and pitfall of, current supervised Deep Learning — and ii) theoretically allows the detection of arbitrary, even rare, pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because they i) are evaluated on different datasets and pathologies, ii) use different image resolutions, and iii) employ different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions like i) how many healthy training subjects are needed to model normality and ii) whether the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions. 
AU - Baur, C.* AU - Denner, S.* AU - Wiestler, B.* AU - Navab, N.* AU - Albarqouni, S. C1 - 61056 C2 - 49889 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - Autoencoders for unsupervised anomaly segmentation in brain MR images: A comparative study. JO - Med. Image Anal. VL - 69 PB - Elsevier PY - 2021 SN - 1361-8415 ER - TY - JOUR AB - Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VERSE) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. 
The principal takeaway from VERSE: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VERSE content and code can be accessed at: https://github.com/anjany/verse. AU - Sekuboyina, A.* AU - Husseini, M.E.* AU - Bayat, A.* AU - Löffler, M.* AU - Liebl, H.* AU - Li, H.* AU - Tetteh, G.* AU - Kukacka, J. AU - Payer, C.* AU - Štern, D.* AU - Urschler, M.* AU - Chen, M.* AU - Cheng, D.S.* AU - Lessmann, N.* AU - Hu, Y.* AU - Wang, T.* AU - Yang, D.* AU - Xu, D.* AU - Ambellan, F.* AU - Amiranashvili, T.* AU - Ehlke, M.* AU - Lamecker, H.* AU - Lehnert, S.* AU - Lirio, M.* AU - Olaguer, N.P.d.* AU - Ramm, H.* AU - Sahu, M.* AU - Tack, A.* AU - Zachow, S.* AU - Jiang, T.* AU - Ma, X.* AU - Angerman, C.* AU - Wang, X.* AU - Brown, K.* AU - Kirszenberg, A.* AU - Puybareau, É.* AU - Chen, D.* AU - Bai, Y.* AU - Rapazzo, B.H.* AU - Yeah, T.* AU - Zhang, A.* AU - Xu, S.* AU - Hou, F.* AU - He, Z.* AU - Zeng, C.* AU - Xiangshang, Z.* AU - Liming, X.* AU - Netherton, T.J.* AU - Mumme, R.P.* AU - Court, L.E.* AU - Huang, Z.* AU - He, C.* AU - Wang, L.W.* AU - Ling, S.H.* AU - Huỳnh, L.D.* AU - Boutry, N.* AU - Jakubicek, R.* AU - Chmelik, J.* AU - Mulay, S.* AU - Sivaprakasam, M.* AU - Paetzold, J.C.* AU - Shit, S.* AU - Ezhov, I.* AU - Wiestler, B.* AU - Glocker, B.* AU - Valentinitsch, A.* AU - Rempfler, M.* AU - Menze, B.H.* AU - Kirschke, J.S.* C1 - 62736 C2 - 51037 CY - Radarweg 29, 1043 Nx Amsterdam, Netherlands TI - VERSE: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. JO - Med. Image Anal. VL - 73 PB - Elsevier PY - 2021 SN - 1361-8415 ER - TY - JOUR AB - The examination of biopsy samples plays a central role in the diagnosis and staging of numerous diseases, including most cancer types. 
However, because of the large size of the acquired images, the localization and quantification of diseased portions of a tissue are usually time-consuming, as pathologists must scroll through the whole slide to look for objects of interest which are often only sparsely distributed. In this work, we introduce an approach to facilitate the visual inspection of large digital histopathological slides. Our method builds on a random forest classifier trained to segment the structures sought by the pathologist. However, moving beyond the pixelwise segmentation task, our main contribution is an interactive exploration framework including: (i) a region scoring function which is used to rank and sequentially display regions of interest to the user, and (ii) a relevance feedback capability which leverages human annotations collected on each suggested region. Thereby, an online domain adaptation of the learned pixelwise segmentation model is performed, so that the region scores adapt on-the-fly to possible discrepancies between the original training data and the slide at hand. Three real-time update strategies are compared, including a novel approach based on online gradient descent which supports faster user interaction than an accurate delineation of objects. Our method is evaluated on the task of extramedullary hematopoiesis quantification within mouse liver slides. We quantitatively assess the retrieval abilities of our approach and the benefit of the interactive adaptation scheme. Moreover, we demonstrate the possibility of extrapolating, after a partial exploration of the slide, the surface covered by hematopoietic cells within the whole tissue. AU - Peter, L.* AU - Mateus, D. AU - Chatelain, P.* AU - Declara, D.* AU - Schworm, N.* AU - Stangl, S.* AU - Multhoff, G. AU - Navab, N.* C1 - 49753 C2 - 40898 CY - Amsterdam SP - 655-668 TI - Assisting the examination of large histopathological slides with adaptive forests. JO - Med. Image Anal. 
VL - 35 PB - Elsevier Science Bv PY - 2016 SN - 1361-8415 ER - TY - JOUR AB - The development of post-processing reconstruction techniques has opened new possibilities for the study of in-utero fetal brain MRI data. Recent cortical surface analyses have led to the computation of quantitative maps characterizing brain folding of the developing brain. In this paper, we describe a novel feature selection-based approach that is used to extract the most discriminative and sparse set of features of a given dataset. The proposed method is used to sparsely characterize cortical folding patterns of an in-utero fetal MR dataset, labeled with heterogeneous gestational ages ranging from 26 to 34 weeks. The proposed algorithm is validated on a synthetic dataset with both linear and non-linear dynamics, supporting its ability to capture deformation patterns across the dataset within only a few features. Results on the fetal brain dataset show that the temporal process of cortical folding related to brain maturation can be characterized by a very small set of points, located in anatomical regions changing across time. Quantitative measurements of growth against time are extracted from the set of selected features to compare multiple brain regions (e.g. lobes and hemispheres) during the considered period of gestation. AU - Pontabry, J. AU - Rousseau, F.* AU - Studholme, C.* AU - Koob, M.* AU - Dietemann, J.L.* C1 - 49238 C2 - 33756 CY - Amsterdam SP - 313-326 TI - A discriminative feature selection approach for shape analysis: Application to fetal brain cortical folding. JO - Med. Image Anal. VL - 35 PB - Elsevier Science Bv PY - 2016 SN - 1361-8415 ER - TY - JOUR AB - In this paper, a new segmentation framework with prior knowledge is proposed and applied to the left ventricles in cardiac Cine MRI sequences. We introduce a new formulation of the random walks method, coined as guided random walks, in which prior knowledge is integrated seamlessly. 
In comparison with existing approaches that incorporate statistical shape models, our method does not extract any principal model of the shape or appearance of the left ventricle. Instead, segmentation is accompanied by retrieving the closest subject in the database that best guides the segmentation. Using this technique, rare cases can also effectively exploit prior knowledge from the few matching samples in the training set. These cases are usually disregarded in statistical shape models as they are outnumbered by frequent cases (effect of class population). In the worst-case scenario, if there is no matching case in the database to guide the segmentation, the performance of the proposed method falls back to that of conventional random walks, which is shown to be accurate if a sufficient number of seeds is provided. A fast solution to the proposed guided random walks exists using sparse linear matrix operations, and the whole framework can be seamlessly implemented on a parallel architecture. The method has been validated on a comprehensive clinical dataset of 3D+t short axis MR images of 104 subjects from 5 categories (normal, dilated left ventricle, ventricular hypertrophy, recent myocardial infarction, and heart failure). The average segmentation errors were found to be 1.54 mm for the endocardium and 1.48 mm for the epicardium. The method was validated by measuring different algorithmic and physiologic indices and quantified with manual segmentation ground truths, provided by a cardiologist. AU - Eslami, A. AU - Karamalis, A.* AU - Katouzian, A.* AU - Navab, N.* C1 - 24459 C2 - 31551 SP - 236-253 TI - Segmentation by retrieval with guided random walks: Application to left ventricle segmentation in MRI. JO - Med. Image Anal. 
VL - 17 IS - 2 PB - Elsevier Science PY - 2013 SN - 1361-8415 ER - TY - JOUR AB - Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. AU - Kirişli, H.A.* AU - Schaap, M.* AU - Metz, C.T.* AU - Dharampal, A.S.* AU - Meijboom, W.B.* AU - Papadopoulou, S.L.* AU - Dedic, A.* AU - Nieman, K.* AU - de Graaf, M.A.* AU - Meijs, M.F.L.* AU - Cramer, M.J.* AU - Broersen, A.* AU - Cetin, S.* AU - Eslami, A. 
AU - Flórez-Valencia, L.* AU - Lor, K.L.* AU - Matuszewski, B.* AU - Melki, I.* AU - Mohr, B.* AU - Öksüz, I.* AU - Shahzad, R.* AU - Wang, C.* AU - Kitslaar, P.H.* AU - Unal, G.* AU - Katouzian, A.* AU - Orkisz, M.* AU - Chen, C.M.* AU - Precioso, F.* AU - Najman, L.* AU - Masood, S.* AU - Ünay, D.* AU - van Vliet, L.* AU - Moreno, R.* AU - Goldenberg, R.* AU - Vucini, E.* AU - Krestin, G.P.* AU - Niessen, W.J.* AU - van Walsum, T.* C1 - 26019 C2 - 32025 SP - 859-876 TI - Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography. JO - Med. Image Anal. VL - 17 IS - 8 PB - Elsevier Science PY - 2013 SN - 1361-8415 ER - TY - JOUR AB - Diagnostic nuclear imaging modalities like SPECT typically employ gantries to ensure a densely sampled geometry of detectors in order to keep the inverse problem of tomographic reconstruction as well-posed as possible. In an intra-operative setting with mobile freehand detectors the situation changes significantly, and having an optimal detector trajectory during acquisition becomes critical. In this paper we propose an incremental optimization method based on the numerical condition of the system matrix of the underlying iterative reconstruction method to calculate optimal detector positions during acquisition in real-time. The performance of this approach is evaluated using simulations. A first experiment on a phantom using a robot-controlled intra-operative SPECT-like setup demonstrates the feasibility of the approach. AU - Vogel, J.* AU - Lasser, T. AU - Gardiazabal, J.* AU - Navab, N.* C1 - 26296 C2 - 32157 SP - 723-731 TI - Trajectory optimization for intra-operative nuclear tomographic imaging. JO - Med. Image Anal. VL - 17 IS - 7 PB - Elsevier Science PY - 2013 SN - 1361-8415 ER -