V. CONCLUSION
This study introduced S-CycleGAN, an adaptation of the CycleGAN framework enhanced with semantic discriminators, for generating synthetic ultrasound images from CT data. The primary innovation of this approach lies in its ability to preserve anatomical detail during image translation. The model produces high-quality synthetic ultrasound images that closely replicate the characteristics of authentic scans. These results are particularly significant for training deep learning models for semantic segmentation, where diverse and accurately annotated ultrasound images are often scarce. The current study nonetheless has limitations: quantitative metrics that comprehensively evaluate the fidelity of synthesized ultrasound images are still lacking. Future work will include developing such metrics and incorporating feedback from medical experts through structured evaluation protocols. These efforts will both validate the clinical applicability of the synthetic images and refine the model's performance to meet the stringent requirements of medical diagnostics.
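To make the architecture described above concrete, the following is a minimal sketch of a composite generator objective in the style of a semantically constrained CycleGAN: an adversarial term, a cycle-consistency term, and a semantic term in which a discriminator's segmentation of the translated image is penalized against the source-domain label. The function names and the weights `lambda_cyc` and `lambda_sem` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L1 reconstruction penalty: mean |G_BA(G_AB(x)) - x|."""
    return float(np.mean(np.abs(x - x_reconstructed)))

def semantic_consistency_loss(pred_mask_probs, true_mask):
    """Pixel-wise cross-entropy between the semantic discriminator's
    segmentation of the translated image and the source-domain label.

    pred_mask_probs: (H, W, C) softmax probabilities
    true_mask:       (H, W) integer class labels
    """
    eps = 1e-8
    h, w = true_mask.shape
    # Pick the predicted probability of the true class at every pixel.
    picked = pred_mask_probs[np.arange(h)[:, None],
                             np.arange(w)[None, :],
                             true_mask]
    return float(-np.mean(np.log(picked + eps)))

def total_generator_loss(adv, cyc, sem, lambda_cyc=10.0, lambda_sem=1.0):
    """Weighted sum of adversarial, cycle, and semantic terms."""
    return adv + lambda_cyc * cyc + lambda_sem * sem

# Toy example on random data
rng = np.random.default_rng(0)
x = rng.random((4, 4))
loss_cyc = cycle_consistency_loss(x, x)     # perfect reconstruction
probs = np.full((4, 4, 2), 0.5)             # uninformative 2-class segmenter
labels = np.zeros((4, 4), dtype=int)
loss_sem = semantic_consistency_loss(probs, labels)
total = total_generator_loss(adv=0.3, cyc=loss_cyc, sem=loss_sem)
```

In practice each term would be computed on network outputs rather than arrays; the sketch only shows how the semantic discriminator's segmentation loss enters the same objective that the cycle-consistency term already occupies in a standard CycleGAN.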
Fig. 6: CT-to-ultrasound translation example. (a) Real CT, (b) Fake US, (c) CT Label, (d) Predicted US Mask.
Authors:
(1) Yuhan Song, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan ([email protected]);
(2) Nak Young Chong, School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa 923-1292, Japan ([email protected]).