Laboratory of Computational and Quantitative Biology, Institut de Biologie Paris Seine, CNRS, Sorbonne Université, Paris 75005, France.
Department of Computing Sciences, Bocconi University, Milan 20100, Italy.
Proc Natl Acad Sci U S A. 2024 Jun 11;121(24):e2316401121. doi: 10.1073/pnas.2316401121. Epub 2024 Jun 5.
The accurate prediction of binding between T cell receptors (TCRs) and their cognate epitopes is key to understanding the adaptive immune response and developing immunotherapies. Current methods face two significant limitations: the shortage of comprehensive high-quality data and the bias introduced by the selection of negative training data commonly used in supervised learning approaches. We propose a method, Transformer-based Unsupervised Language model for Interacting Peptides and T cell receptors (TULIP), that addresses both limitations by leveraging incomplete data and unsupervised learning, using the transformer architecture of language models. Our model is flexible and integrates all possible data sources, regardless of their quality or completeness. We demonstrate the existence of a bias introduced by the sampling procedure used to generate negative examples in previous supervised approaches, emphasizing the need for an unsupervised approach. TULIP recognizes the specific TCRs binding an epitope and performs well on unseen epitopes. Our model outperforms state-of-the-art models and offers a promising direction for the development of more accurate TCR-epitope recognition models.
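The sketch below illustrates the general idea described in the abstract: instead of training a supervised classifier against sampled negative pairs, a sequence-to-sequence language model scores a TCR-epitope pair by the conditional likelihood of the epitope given the TCR. This is not the authors' implementation; the PairScorer class, the encode helper, the token conventions, and all model dimensions are illustrative assumptions written in PyTorch.

```python
# Minimal sketch (not the authors' code) of unsupervised, language-model-style
# scoring of TCR-epitope pairs: binding is scored as the conditional
# log-likelihood of the epitope given the CDR3beta sequence, so no sampled
# negative pairs are needed. Vocabulary handling, special tokens, and all
# module sizes are illustrative assumptions.

import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"      # 20 standard amino acids
BOS, EOS = 20, 21                # assumed special-token convention
VOCAB = len(AA) + 2

def encode(seq: str) -> torch.Tensor:
    """Map an amino-acid string to token ids framed by BOS/EOS."""
    return torch.tensor([BOS] + [AA.index(a) for a in seq] + [EOS])

class PairScorer(nn.Module):
    """Encoder-decoder transformer: encode the TCR, decode the epitope."""
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=128, batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, VOCAB)

    def forward(self, tcr: torch.Tensor, epi_in: torch.Tensor) -> torch.Tensor:
        # Causal mask so each epitope position only sees earlier positions.
        causal = self.transformer.generate_square_subsequent_mask(epi_in.size(1))
        h = self.transformer(self.embed(tcr), self.embed(epi_in), tgt_mask=causal)
        return self.lm_head(h)   # next-token logits over the epitope sequence

def binding_score(model: PairScorer, tcr: str, epitope: str) -> float:
    """Mean log-likelihood of the epitope conditioned on the TCR."""
    t = encode(tcr).unsqueeze(0)
    e = encode(epitope).unsqueeze(0)
    logits = model(t, e[:, :-1])          # predict each next epitope token
    logp = torch.log_softmax(logits, dim=-1)
    tgt = e[:, 1:]
    return logp.gather(-1, tgt.unsqueeze(-1)).mean().item()

if __name__ == "__main__":
    model = PairScorer()
    # With a trained model, a higher score indicates a more likely pairing;
    # the sequences here are arbitrary examples.
    print(binding_score(model, "CASSLGQAYEQYF", "GILGFVFTL"))
```

Because the score is a likelihood rather than a classifier output, ranking candidate TCRs for a given epitope requires no choice of negative examples, which is the sampling bias the abstract argues against.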