Whole-body imaging of mice is a key source of information for research. Organ segmentation is a prerequisite for quantitative analysis but is a tedious and error-prone task if done manually. Here, we present a deep learning solution called AIMOS that automatically segments major organs (brain, lungs, heart, liver, kidneys, spleen, bladder, stomach, intestine) and the skeleton in less than a second, orders of magnitude faster than prior algorithms. AIMOS matches or exceeds the segmentation quality of state-of-the-art approaches and of human experts. We exemplify its direct applicability to biomedical research by using it to localize cancer metastases. Furthermore, we show that expert annotations are subject to human error and bias; as a consequence, at least two independently created annotations are needed to reliably assess model performance. Importantly, AIMOS addresses the issue of human bias by identifying the regions where humans are most likely to disagree, and thereby localizes and quantifies this uncertainty for improved downstream analysis. In summary, AIMOS is a powerful open-source tool to increase scalability, reduce bias, and foster reproducibility in many areas of biomedical research.
Institute(s): Institute for Tissue Engineering and Regenerative Medicine (ITERM)
Grants: NVIDIA; DFG; Fritz Thyssen Stiftung; Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the framework of the Munich Cluster for Systems Neurology; Vascular Dementia Research Foundation; German Federal Ministry of Education and Research via the Software Campus initiative
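The abstract argues that model performance should be assessed against at least two independently created expert annotations, and that voxels where experts disagree localize annotation uncertainty. The following is a minimal sketch of that evaluation idea, not code from the AIMOS repository: it assumes binary organ masks are available as equally shaped 3D NumPy arrays, and all function names are illustrative.

```python
"""Sketch: scoring a predicted organ mask against two independent expert
annotations, and mapping where the experts disagree. Illustrative only;
not part of the AIMOS codebase."""
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def evaluate(prediction: np.ndarray, expert_1: np.ndarray, expert_2: np.ndarray) -> dict:
    """Compare a model prediction to each expert and to the inter-expert baseline."""
    return {
        "model_vs_expert_1": dice(prediction, expert_1),
        "model_vs_expert_2": dice(prediction, expert_2),
        # Human-vs-human agreement serves as the reference level for the model scores.
        "expert_1_vs_expert_2": dice(expert_1, expert_2),
    }


def disagreement_map(expert_1: np.ndarray, expert_2: np.ndarray) -> np.ndarray:
    """Voxels where the two experts disagree, localizing annotation uncertainty."""
    return np.logical_xor(expert_1.astype(bool), expert_2.astype(bool))
```

Comparing the model-vs-expert scores with the expert-vs-expert score indicates whether the model is within the range of human variability, which is the sense in which the abstract states AIMOS matches or exceeds human experts.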