Gülşen İbrahim Tevfik, Kuran Alican, Evli Cengiz, Baydar Oğuzhan, Dinç Başar Kevser, Bilgir Elif, Çelik Özer, Bayrakdar İbrahim Şevki, Orhan Kaan, Acu Berat
Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Alanya Alaaddin Keykubat University, Antalya, 07425, Turkey.
Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli, 41190, Turkey.
Oral Radiol. 2025 Aug 1. doi: 10.1007/s11282-025-00848-9.
The purpose of this study was to develop a deep learning model based on nnU-Net v2 for the automated segmentation of the sphenoid sinus and middle skull base anatomic structures in cone-beam computed tomography (CBCT) volumes, and to evaluate the model's performance.
In this retrospective study, the sphenoid sinus and surrounding anatomical structures in 99 CBCT scans were annotated using web-based labeling software. Model training was conducted with the nnU-Net v2 deep learning framework using a learning rate of 0.01 for 1000 epochs. The model's performance in automatically segmenting these anatomical structures in CBCT scans was evaluated using a series of metrics: accuracy, precision, recall, Dice coefficient (DC), 95% Hausdorff distance (95% HD), intersection over union (IoU), and AUC.
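As a rough illustration of the reported overlap and surface metrics (not the authors' evaluation code), the minimal sketch below computes the Dice coefficient, IoU, precision, recall, and 95% HD from a pair of binary 3D masks. The function names and the voxel-spacing argument are assumptions introduced for this example.

```python
import numpy as np
from scipy import ndimage


def dice(pred, gt):
    """Dice coefficient (DC) between two boolean masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def iou(pred, gt):
    """Intersection over union (Jaccard index)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()


def precision_recall(pred, gt):
    """Voxel-wise precision and recall of the predicted mask."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp), tp / (tp + fn)


def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask surfaces (mm)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    # surface voxels = mask minus its erosion
    pred_surf = pred & ~ndimage.binary_erosion(pred)
    gt_surf = gt & ~ndimage.binary_erosion(gt)
    # distance from every voxel to the nearest surface voxel of the other mask
    dt_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    d = np.concatenate([dt_to_gt[pred_surf], dt_to_pred[gt_surf]])
    return np.percentile(d, 95)
```

For context, an initial learning rate of 0.01 over 1000 epochs matches the default training schedule of the nnU-Net framework, so the described setup appears to follow the out-of-the-box configuration rather than a custom schedule.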
The developed deep learning model segmented the sphenoid sinus, foramen rotundum, and Vidian canal with a high level of success. Among the DC values, the model performed best on the sphenoid sinus, with a DC of 0.96.
The nnU-Net v2-based deep learning model achieved high segmentation performance for the sphenoid sinus, foramen rotundum, and Vidian canal within the middle skull base, with the highest DC observed for the sphenoid sinus (DC: 0.96). However, the model demonstrated limited performance in segmenting other foramina of the middle skull base, indicating the need for further optimization for these structures.