Chang Li-Jen, Chou Chu-Kuang, Mukundan Arvind, Karmakar Riya, Chen Tsung-Hsien, Syna Syna, Ko Chou-Yuan, Wang Hsiang-Chen
Division of Gastroenterology and Hepatology, Department of Internal Medicine, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 60002, Taiwan.
Obesity Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 60002, Taiwan.
Cancers (Basel). 2025 Jun 19;17(12):2049. doi: 10.3390/cancers17122049.
Esophageal carcinoma (EC) is the eighth most prevalent cancer and the sixth leading cause of cancer-related mortality worldwide. Early detection is vital for improving prognosis, particularly for dysplasia and squamous cell carcinoma (SCC). This study evaluates a hyperspectral imaging conversion method, the Spectrum-Aided Vision Enhancer (SAVE), for its efficacy in enhancing esophageal cancer detection compared to conventional white-light imaging (WLI). Five deep learning models (YOLOv9, YOLOv10, YOLO-NAS, RT-DETR, and Roboflow 3.0) were trained and evaluated on a dataset of labeled endoscopic images spanning normal, dysplasia, and SCC classes. Across all five models, the SAVE consistently outperformed conventional WLI in detecting esophageal cancer lesions. For SCC, the F1 score improved from 84.3% to 90.4% with YOLOv9 and from 87.3% to 90.3% with Roboflow 3.0 when using the SAVE. Dysplasia detection also improved, with YOLOv9 precision increasing from 72.4% (WLI) to 76.5% (SAVE). Roboflow 3.0 achieved the highest dysplasia F1 score (64.7%). YOLO-NAS exhibited balanced performance across all lesion types, with dysplasia precision rising from 75.1% to 79.8%. Roboflow 3.0 also recorded the highest SCC sensitivity (85.7%). For SCC detection with YOLOv9, the F1 score was 84.3% (95% CI: 71.7-96.9%) with WLI versus 90.4% (95% CI: 80.2-100%) with the SAVE (p = 0.03). For dysplasia detection, the F1 score increased from 60.3% (95% CI: 51.5-69.1%) with WLI to 65.5% (95% CI: 57.0-73.8%) with the SAVE (p = 0.04). These findings demonstrate that the SAVE enhances lesion detectability and diagnostic performance across different deep learning models. Combining the SAVE with deep learning algorithms markedly improves the detection of esophageal cancer lesions, especially squamous cell carcinoma and dysplasia, compared to traditional white-light imaging, underscoring the SAVE's potential as a clinical tool for early cancer detection and diagnosis.
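The abstract reports per-class precision, sensitivity, and F1 together with 95% confidence intervals. The sketch below illustrates one common way such metrics and intervals could be derived from per-image detection counts (true positives, false positives, false negatives) using a percentile bootstrap; it is an assumption for illustration, not the authors' evaluation code, and the counts shown are hypothetical.

```python
import numpy as np

def f1_from_counts(tp, fp, fn):
    """Precision, recall (sensitivity), and F1 from pooled detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def bootstrap_f1_ci(per_image_counts, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for F1, resampling test images with replacement.

    per_image_counts: list of (tp, fp, fn) tuples, one per test image (hypothetical here).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(per_image_counts)
    f1_samples = []
    for _ in range(n_boot):
        resampled = counts[rng.integers(0, len(counts), len(counts))]
        tp, fp, fn = resampled.sum(axis=0)
        f1_samples.append(f1_from_counts(tp, fp, fn)[2])
    lo, hi = np.percentile(f1_samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical per-image counts for one lesion class (e.g., SCC) on one imaging modality
scc_counts = [(3, 0, 1), (2, 1, 0), (4, 0, 0), (1, 1, 1), (3, 0, 1)]
tp, fp, fn = np.sum(scc_counts, axis=0)
precision, recall, f1 = f1_from_counts(tp, fp, fn)
ci_lo, ci_hi = bootstrap_f1_ci(scc_counts)
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f} 95% CI=({ci_lo:.3f}, {ci_hi:.3f})")
```

Resampling at the image level (rather than the lesion level) keeps detections from the same image together, which is the usual choice when lesions within an image are not independent; whether the study used this exact scheme is not stated in the abstract.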