Rhazes Lab, Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA.
Kimia Lab, University of Waterloo, Waterloo, ON, Canada.
Commun Biol. 2023 Mar 22;6(1):304. doi: 10.1038/s42003-023-04583-x.
Deep learning methods are widely applied in digital pathology to address clinical challenges such as prognosis and diagnosis. In one of the most recent applications, deep models have also been used to extract molecular features from whole slide images. Although molecular tests carry rich information, they are often expensive, time-consuming, and require additional tissue sampling. In this paper, we propose tRNAsformer, an attention-based topology that learns both to predict bulk RNA-seq from an image and to represent the whole slide image of a glass slide simultaneously. The tRNAsformer uses multiple instance learning to solve the weakly supervised problem that arises when pixel-level annotations are not available for an image. We conducted several experiments and achieved better performance and faster convergence compared with state-of-the-art algorithms. The proposed tRNAsformer can serve as a computational pathology tool to facilitate a new generation of search and classification methods by combining the tissue morphology and the molecular fingerprint of biopsy samples.
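To make the architectural idea in the abstract concrete, the following is a minimal PyTorch sketch of an attention-based multiple instance learning model that takes a bag of patch embeddings from a whole slide image and jointly produces a slide-level representation and a bulk RNA-seq prediction. The class name SlideToRNASketch, the feature dimension, the Transformer hyperparameters, and the number of predicted genes are illustrative assumptions; this is not the published tRNAsformer implementation.

```python
# Sketch: Transformer encoder over a bag of patch embeddings that yields
# (a) a slide-level embedding and (b) a predicted bulk RNA-seq profile.
# All dimensions and names are assumptions for illustration only.
import torch
import torch.nn as nn


class SlideToRNASketch(nn.Module):
    def __init__(self, feat_dim=1024, embed_dim=384, n_heads=6, n_layers=4, n_genes=5000):
        super().__init__()
        self.project = nn.Linear(feat_dim, embed_dim)                 # map patch features to model width
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))   # learnable slide-level token
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.rna_head = nn.Linear(embed_dim, n_genes)                 # regress bulk gene-expression values

    def forward(self, patch_feats):
        # patch_feats: (batch, n_patches, feat_dim) -- a "bag" of patch embeddings per slide
        x = self.project(patch_feats)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)                                # prepend the slide token
        x = self.encoder(x)                                           # self-attention over all patches
        slide_embedding = x[:, 0]                                     # slide-level representation
        rna_pred = self.rna_head(slide_embedding)                     # predicted expression profile
        return slide_embedding, rna_pred


if __name__ == "__main__":
    model = SlideToRNASketch()
    bag = torch.randn(2, 100, 1024)   # 2 slides, 100 patches each, 1024-dim patch features
    emb, rna = model(bag)
    print(emb.shape, rna.shape)       # torch.Size([2, 384]) torch.Size([2, 5000])
```

Under these assumptions, the weak supervision comes entirely from the slide-level RNA-seq target: no patch-level labels are required, and the attention layers decide how much each patch contributes to the slide embedding.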