Generating Synthetic B-Mode Fetal Ultrasound Images Using CycleGAN-Based Deep Learning
B-mode ultrasound (USG) is a key imaging modality for fetal assessment, providing a noninvasive approach to monitor anatomical development and detect congenital anomalies at an early stage. However, portable ultrasound devices commonly used in low-resource healthcare settings often yield low-resolution images with significant speckle noise, reducing diagnostic accuracy. Furthermore, the scarcity of labeled medical data, caused by privacy regulations such as HIPAA and the high cost of expert annotation, poses a significant challenge for developing robust artificial intelligence (AI) diagnostic models. This study proposes a CycleGAN-based deep learning model enhanced with a histogram-guided discriminator (HisDis) to generate realistic synthetic B-mode fetal ultrasound images. A publicly available dataset from the Zenodo repository containing 1,000 grayscale fetal head images was utilized. Preprocessing included normalization, histogram equalization, and image resizing, while the architecture combined two ResNet-based generators with a dual-discriminator configuration integrating PatchGAN and histogram-guided evaluation. The model was trained using standard optimization settings to ensure stable convergence. Experimental results demonstrate that the proposed HisDis module accelerates convergence by 18 epochs and reduces the Fréchet Inception Distance (FID) by 23.6 percent (from 1580.72 to 1208.49) compared with the baseline CycleGAN. Statistical analysis revealed consistent pixel-intensity distributions between the original and synthetic images, with entropy values ranging from 7.16 to 7.40. Visual assessment further confirmed that critical anatomical structures, including the brain midline and lateral ventricles, were well preserved. These results indicate that the CycleGAN-HisDis model produces statistically and visually realistic fetal ultrasound images suitable for medical data augmentation and AI-based diagnostic training.
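The preprocessing steps named above (histogram equalization, resizing, normalization) could be sketched as follows. This is an illustrative numpy-only reconstruction, not the authors' code: the function names, the nearest-neighbour resize, and the [-1, 1] output range (typical for a tanh-output CycleGAN generator) are assumptions.

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Spread the intensity histogram of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each intensity through the normalized CDF (standard equalization).
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def resize_nearest(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Nearest-neighbour resize to size x size (a simple stand-in for the
    bilinear/bicubic interpolation a real pipeline would likely use)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]

def preprocess(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Equalize, resize, then scale intensities to [-1, 1]."""
    eq = histogram_equalize(img)
    small = resize_nearest(eq, size)
    return small.astype(np.float32) / 127.5 - 1.0
```

In practice a library routine such as OpenCV's `cv2.equalizeHist` and `cv2.resize` would replace the hand-rolled helpers; the sketch only makes the order of operations concrete.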
Furthermore, this approach holds potential to enhance diagnostic reliability and clinical education in healthcare settings with limited imaging resources. Future work will focus on clinical validation and generalization across diverse fetal ultrasound datasets.
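The entropy comparison reported in the abstract (values between 7.16 and 7.40 for real and synthetic images) refers to Shannon entropy of the pixel-intensity histogram, which for an 8-bit image is bounded by 8 bits. A minimal sketch of that check, assuming a standard 256-bin histogram over uint8 intensities:

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit image's intensity histogram.
    Values near 7-8 indicate a broad, well-spread intensity distribution;
    a constant image has entropy 0."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

Comparing `shannon_entropy` over a batch of real images against a batch of generator outputs gives a quick distribution-level sanity check alongside FID, which is presumably how the reported 7.16-7.40 range was obtained.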
Copyright (c) 2025 Fajar Astuti Hermawati, Bagus Hardiansyah, Andrianto Andrianto (Author)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0) that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).