This paper addresses digital staining and classification of unstained white blood cell images obtained with a differential contrast microscope. Our data come from multiple domains and are only partially labeled and only partially matched across domains. Working with unstained images removes the time-consuming staining procedure and could facilitate and automate comprehensive diagnostics. To this end, we propose a method that translates unstained images into realistic-looking stained images while preserving the inter-cellular structures that medical experts need for classification. We achieve better structure preservation by adding the auxiliary tasks of segmentation and direct reconstruction: segmentation enforces that the network learns to generate correct nucleus and cytoplasm shapes, while direct reconstruction enforces a reliable translation between matching images across domains. In addition, we build a robust, domain-agnostic latent space by injecting the target domain label directly into the generator, i.e., bypassing the encoder. This allows the encoder to extract features independently of the target domain and enables automated, domain-invariant classification of the white blood cells. We validated our method on a large dataset of leukocytes from 24 patients, achieving state-of-the-art performance on both the digital staining and the classification task.
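The architecture described in the abstract can be illustrated with a minimal PyTorch-style sketch, assuming a convolutional encoder-generator pair: the target domain label is injected only into the generator (bypassing the encoder), and segmentation, direct reconstruction, and leukocyte classification serve as auxiliary objectives. Module names, channel sizes, the number of segmentation classes, and loss weights below are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an unstained image to a latent feature map. It never sees the
    target domain label, so the latent space stays domain-agnostic."""
    def __init__(self, in_ch=1, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes the latent into a digitally stained image and a segmentation
    map. The target-domain label is injected here, broadcast as a one-hot
    code over the latent feature map."""
    def __init__(self, latent_ch=64, n_domains=2, out_ch=3, n_seg=3):
        super().__init__()
        self.n_domains = n_domains
        self.up = nn.Sequential(
            nn.ConvTranspose2d(latent_ch + n_domains, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_image = nn.Conv2d(32, out_ch, 3, padding=1)  # staining head
        self.to_seg = nn.Conv2d(32, n_seg, 3, padding=1)     # nucleus / cytoplasm / background

    def forward(self, z, domain):
        d = F.one_hot(domain, self.n_domains).float()               # (B, n_domains)
        d = d[:, :, None, None].expand(-1, -1, z.size(2), z.size(3))
        h = self.up(torch.cat([z, d], dim=1))
        return torch.tanh(self.to_image(h)), self.to_seg(h)

# Hypothetical multi-task objective combining translation to the stained
# domain, direct reconstruction of matched pairs, segmentation, and
# classification on the shared latent representation.
def multitask_loss(fake, real, recon, source, seg_logits, seg_gt, cls_logits, cls_gt,
                   w_tr=1.0, w_rec=1.0, w_seg=1.0, w_cls=1.0):
    return (w_tr * F.l1_loss(fake, real)
            + w_rec * F.l1_loss(recon, source)
            + w_seg * F.cross_entropy(seg_logits, seg_gt)
            + w_cls * F.cross_entropy(cls_logits, cls_gt))
```

Keeping the domain code out of the encoder is what makes the latent features reusable for domain-invariant classification; in practice an adversarial (GAN) term would typically be added on top of these reconstruction-style losses to obtain realistic staining.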
Citation: Volume 40, Issue 10, Pages 2897-2910, Supplement: Special Issue on Annotation-efficient Deep Learning for Medical Imaging
Publisher: Institute of Electrical and Electronics Engineers (IEEE)