Kim JaeYong, Vajravelu Bathri Narayan
School of Pharmacy, Massachusetts College of Pharmacy and Health Sciences, Boston, MA, United States.
Department of Physician Assistant Studies, Massachusetts College of Pharmacy and Health Sciences, 179 Longwood Avenue, Boston, MA, 02115, United States. Phone: 1 617 732 2961.
JMIR Form Res. 2025 Jan 16;9:e51319. doi: 10.2196/51319.
The integration of large language models (LLMs), such as those in the generative pretrained transformer (GPT) series, into health care education and clinical management holds transformative potential. The practical use of current LLMs in health care has sparked great anticipation for new avenues, yet their adoption also raises considerable concerns that warrant careful deliberation. This study aims to evaluate the application of state-of-the-art LLMs in health care education, highlighting the following shortcomings as areas requiring significant and urgent improvement: (1) threats to academic integrity, (2) dissemination of misinformation and risks of automation bias, (3) challenges with information completeness and consistency, (4) inequity of access, (5) risks of algorithmic bias, (6) exhibition of moral instability, (7) technological limitations in plugin tools, and (8) lack of regulatory oversight in addressing legal and ethical challenges. Future research should focus on strategically addressing these persistent challenges, opening the door to effective measures that can improve the application of LLMs in health care education.