Ding Jialin, Zhao Rubin, Qiu Qingtao, Chen Jinhu, Duan Jinghao, Cao Xiujuan, Yin Yong
School of Physics and Electronics, Shandong Normal University, Jinan, China.
Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China.
Quant Imaging Med Surg. 2022 Feb;12(2):1517-1528. doi: 10.21037/qims-21-722.
Although surgical pathology or biopsy is considered the gold standard for glioma grading, these procedures have limitations. This study set out to evaluate and validate the predictive performance of a deep learning radiomics model based on contrast-enhanced T1-weighted (CE-T1W) multiplanar reconstruction images for grading gliomas.
Patients from three institutions who were diagnosed with gliomas on surgical specimens and who had multiplanar reconstruction (MPR) images were enrolled in this study. The training cohort comprised 101 patients from institution 1, including 43 high-grade glioma (HGG) patients and 58 low-grade glioma (LGG) patients, while the test cohort consisted of 50 patients from institutions 2 and 3 (25 HGG patients, 25 LGG patients). We extracted radiomics features and deep learning features from the MPR images using six pretrained models. The Spearman correlation test and recursive feature elimination were used to reduce redundancy and select the most predictive features. Three classifiers were then used to construct classification models. The performance of the grading models was evaluated using the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, precision, and negative predictive value. Finally, the prediction performances on the test cohort were compared to determine the optimal classification model.
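A minimal sketch of the feature-selection and classification steps described above, assuming the radiomics and deep learning features have already been extracted into arrays and using scikit-learn; the correlation cutoff, number of selected features, and classifier settings are illustrative assumptions, not the study's reported parameters.

```python
# Sketch: fuse features, remove redundant ones via Spearman correlation,
# then apply recursive feature elimination (RFE) with a random forest.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X_radiomics = rng.normal(size=(101, 100))   # placeholder radiomics features
X_deep = rng.normal(size=(101, 512))        # placeholder pretrained-CNN features
y = np.array([0] * 58 + [1] * 43)           # 0 = LGG, 1 = HGG (training cohort counts)

# 1) Combine radiomics and deep learning features per patient.
X = np.hstack([X_radiomics, X_deep])

# 2) Redundancy reduction: keep a feature only if its absolute Spearman
#    correlation with every already-kept feature is below 0.9 (assumed cutoff).
rho, _ = spearmanr(X)                        # (n_features, n_features) matrix
keep = []
for j in range(X.shape[1]):
    if all(abs(rho[j, k]) < 0.9 for k in keep):
        keep.append(j)
X_reduced = X[:, keep]

# 3) RFE wrapped around a random forest selects the most predictive features
#    and fits the final classifier on them.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=20, step=0.1)
selector.fit(X_reduced, y)                   # selector.predict applies the mask internally
```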
For the training cohort, 62% (13 out of 21) of the classification models constructed with MPR images from multiple planes outperformed those constructed with single-plane MPR images, and 61% (11 out of 18) of the classification models constructed with both radiomics features and deep learning features had higher area under the curve (AUC) values than those constructed with only radiomics or only deep learning features. The optimal model was a random forest model combining radiomics features and VGG16 deep learning features derived from MPR images, which achieved an AUC of 0.847 in the training cohort and 0.898 in the test cohort. In the test cohort, the sensitivity, specificity, and accuracy of the optimal model were 0.840, 0.760, and 0.800, respectively.
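The reported metrics can be computed from a fitted model's test-cohort predictions as sketched below; this assumes a binary HGG/LGG labeling (1 = HGG), a scikit-learn-style classifier such as the one above, and a 0.5 probability cutoff, which is an assumption rather than a stated detail of the study.

```python
# Sketch: grading-performance metrics (AUC, sensitivity, specificity,
# accuracy, precision, negative predictive value) on a held-out test set.
from sklearn.metrics import confusion_matrix, roc_auc_score

def grading_metrics(model, X_test, y_test):
    y_prob = model.predict_proba(X_test)[:, 1]
    y_pred = (y_prob >= 0.5).astype(int)               # assumed decision threshold
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "auc": roc_auc_score(y_test, y_prob),
        "sensitivity": tp / (tp + fn),                  # true positive rate
        "specificity": tn / (tn + fp),                  # true negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                          # negative predictive value
    }
```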
CE-T1W MPR imaging features from multiple planes are more effective than features from a single plane for differentiating HGG from LGG. The combination of deep learning features and radiomics features can effectively grade gliomas and assist clinical decision-making.