Automatic segmentation of mouse brain structures in magnetic resonance (MR) images plays a crucial role in understanding brain organization and function in both basic and translational research. Because of fundamental differences in contrast, image size, and anatomical structure between the human and mouse brains, existing neuroimaging analysis tools designed for the human brain are not readily applicable to the mouse brain. To address this problem, we propose a generative adversarial network (GAN)-based framework, named MouseGAN, to synthesize multiple MRI modalities and to segment mouse brain structures using a single MRI modality. MouseGAN contains a modality translation module that projects multi-modality image features into a shared latent content space, which encodes modality-invariant brain structures, and a modality-specific attribute space. In addition, the content encoder learned by the modality translation module is reused in the segmentation module to improve structural segmentation. Our results demonstrate that MouseGAN can segment up to 50 mouse brain structures with an average Dice coefficient of 83%, a 7–10% improvement over baseline U-Net segmentation. To the best of our knowledge, it is the first atlas-free tool for segmenting mouse brain structures from MRI data. Another benefit is that, with the help of the shared encoder, MouseGAN can handle missing MRI modalities without a significant sacrifice in performance. We will release our code and trained model to promote their free use in neuroimaging applications.
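
The following is a minimal, illustrative sketch of the shared-encoder idea described above: a content encoder produces modality-invariant features, an attribute encoder captures modality-specific information, and the pretrained content encoder is reused by a segmentation decoder. All module names, layer sizes, and shapes here are assumptions for demonstration only and do not reproduce the actual MouseGAN architecture or training objectives.

```python
# Illustrative sketch only: hypothetical layer sizes and module names,
# not the published MouseGAN implementation.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an MR image of any modality to a modality-invariant content code."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch * 2, ch * 4, 3, 1, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class AttributeEncoder(nn.Module):
    """Maps an MR image to a low-dimensional modality-specific attribute code."""
    def __init__(self, in_ch=1, attr_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, attr_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class SegDecoder(nn.Module):
    """Decodes the shared content code into per-structure label logits."""
    def __init__(self, ch=32, n_structures=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, n_structures, 1),
        )
    def forward(self, c):
        return self.net(c)

# Stage 1 (modality translation) would train the content/attribute encoders
# with a GAN objective (omitted here); stage 2 reuses the pretrained content
# encoder so segmentation operates on modality-invariant features, which is
# why a single available modality can still be segmented.
content_enc = ContentEncoder()
seg_dec = SegDecoder()

t2_image = torch.randn(2, 1, 128, 128)     # a batch of single-modality slices
logits = seg_dec(content_enc(t2_image))    # shape: (2, 50, 128, 128)
print(logits.shape)
```

In this sketch, robustness to a missing modality follows from the design choice that the segmentation decoder consumes only the content code: as long as one modality is available to the content encoder, the decoder receives features of the same form.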