

Efficient Training on Alzheimer's Disease Diagnosis with Learnable Weighted Pooling for 3D PET Brain Image Classification.

Author Information

Xing Xin, Rafique Muhammad Usman, Liang Gongbo, Blanton Hunter, Zhang Yu, Wang Chris, Jacobs Nathan, Lin Ai-Ling

Author Affiliations

Department of Computer Science, University of Kentucky, Lexington, KY 40506, USA.

Department of Radiology, University of Missouri, Columbia, MO 65212, USA.

Publication Information

Electronics (Basel). 2023 Jan 2;12(2). doi: 10.3390/electronics12020467. Epub 2023 Jan 16.

Abstract

Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to analyze Alzheimer's disease (AD) brain images, both to better understand disease progression and to predict conversion from cognitively unimpaired (CU) or mild cognitive impairment (MCI) status. It is well known that training a 3D CNN is computationally expensive and prone to overfitting because of the small sample sizes available in medical imaging. Here we propose a novel 3D-2D approach that converts a 3D brain image into a 2D fused image using a Learnable Weighted Pooling (LWP) method, improving training efficiency while maintaining comparable model performance. Through this 3D-to-2D conversion, the proposed model can forward the fused 2D image through a pre-trained 2D model and outperforms various 3D and 2D baselines. In the implementation, we chose ResNet34 for feature extraction because it outperformed other 2D CNN backbones. We further showed that the slice weights are location-dependent and that model performance depends on the 3D-to-2D fusion view, with the best results obtained from the coronal view. With the new approach, we reduced training time by 75% and increased accuracy to 0.88, compared with conventional 3D CNNs, for classifying amyloid-beta PET images of AD patients versus CU participants using the publicly available Alzheimer's Disease Neuroimaging Initiative dataset. The novel 3D-2D model may have profound implications for timely AD diagnosis in clinical settings in the future.
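To illustrate the idea described in the abstract, the following is a minimal PyTorch sketch of a learnable weighted pooling layer that fuses a 3D volume into a 2D image via one learnable weight per slice, followed by an ImageNet-pretrained ResNet34. The softmax normalization of slice weights, the choice of fusion axis, the slice count, and the three-channel replication are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of Learnable Weighted Pooling (LWP) 3D-to-2D fusion
# followed by a pre-trained 2D ResNet34; details are assumed, not the
# paper's exact code.
import torch
import torch.nn as nn
from torchvision.models import resnet34


class LearnableWeightedPooling(nn.Module):
    """Fuse a 3D volume (B, 1, D, H, W) into a 2D image (B, 1, H, W)
    using one learnable weight per slice along the chosen axis."""

    def __init__(self, num_slices: int, dim: int = 2):
        super().__init__()
        self.dim = dim  # 2 -> fuse along the slice (depth) axis
        self.slice_weights = nn.Parameter(torch.zeros(num_slices))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax so the fused image is a convex combination of slices (assumption).
        w = torch.softmax(self.slice_weights, dim=0)
        # Broadcast weights over the volume and sum out the slice axis.
        shape = [1] * x.dim()
        shape[self.dim] = -1
        return (x * w.view(shape)).sum(dim=self.dim)


class LWPClassifier(nn.Module):
    """LWP fusion + pre-trained ResNet34 backbone for AD vs. CU classification."""

    def __init__(self, num_slices: int, num_classes: int = 2):
        super().__init__()
        self.lwp = LearnableWeightedPooling(num_slices)
        self.backbone = resnet34(weights="IMAGENET1K_V1")
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        fused = self.lwp(volume)          # (B, 1, H, W)
        fused = fused.repeat(1, 3, 1, 1)  # replicate to 3 channels for ResNet34
        return self.backbone(fused)


# Example: a batch of 4 PET volumes with 96 slices along the fusion axis.
model = LWPClassifier(num_slices=96)
logits = model(torch.randn(4, 1, 96, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```

Because the fused image is 2D, the backbone can reuse ImageNet-pretrained weights and processes one image per volume instead of a full 3D tensor, which is consistent with the reported reduction in training time.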


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0548/9910214/c40816202835/nihms-1867576-f0001.jpg
