Dong Chaowu, You Xuqun, Li Ying
School of Psychology, Shaanxi Normal University, Xi'an 710062, China.
Behav Sci (Basel). 2025 Jul 30;15(8):1038. doi: 10.3390/bs15081038.
Automated vehicles controlled by artificial intelligence are becoming capable of making moral decisions independently. This study investigated differences in participants' ratings of the moral decision-maker's permissibility when first viewing a scenario (pre-test) and after witnessing the outcome of the moral decision (post-test). It also examined how permissibility, ten typical moral emotions, and perceived moral agency fluctuated when AI or a human driver made a deontological or utilitarian decision in a pedestrian-sacrificing dilemma (Experiment 1, n = 254) and a driver-sacrificing dilemma (Experiment 2, n = 269), both presented from a third-person perspective. Binary logistic regression was then used to examine whether these factors could predict a non-decrease in permissibility ratings. In both experiments, participants preferred to delegate decisions to human drivers rather than to AI, and they generally preferred utilitarianism over deontology; ratings of moral emotions and perceived moral agency provided converging evidence for these preferences. Experiment 2 also elicited greater changes in permissibility, moral emotions, and perceived moral agency than Experiment 1. In Experiment 1, deontology and gratitude positively predicted a non-decrease in permissibility ratings, whereas contempt had a negative influence; in Experiment 2, the human driver and disgust were significant negative predictors, whereas perceived moral agency had a positive influence. These findings deepen our understanding of the dynamic processes underlying moral decision-making in autonomous driving, clarify people's attitudes toward moral machines and the reasons behind them, and provide a reference for developing more sophisticated moral machines.
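For readers who want a concrete picture of the analysis mentioned in the abstract, the following is a minimal sketch of a binary logistic regression of this kind in Python (statsmodels), run on simulated data. The variable names, sample size, and effect sizes are illustrative assumptions for demonstration only; they are not the authors' data, variables, or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # placeholder sample size, not the paper's N

# Simulated predictors (names are illustrative, not the authors' variables)
deontological = rng.integers(0, 2, n)   # 1 = deontological decision, 0 = utilitarian
gratitude = rng.integers(1, 8, n)       # emotion ratings on a 1-7 scale
contempt = rng.integers(1, 8, n)

# Simulated binary outcome, loosely following the reported direction of effects
logit_p = -0.5 + 0.8 * deontological + 0.4 * gratitude - 0.5 * contempt
p = 1 / (1 + np.exp(-logit_p))
non_decrease = rng.binomial(1, p)       # 1 = permissibility rating did not drop post-test

df = pd.DataFrame(dict(non_decrease=non_decrease, deontological=deontological,
                       gratitude=gratitude, contempt=contempt))

# Binary logistic regression predicting a non-decrease in permissibility ratings
# from decision type and moral emotions, as described in the abstract
model = smf.logit("non_decrease ~ deontological + gratitude + contempt", data=df).fit(disp=False)
print(model.summary())
```

The coefficients' signs in such a model correspond to the kind of findings reported above (e.g., a positive coefficient for gratitude and a negative one for contempt in Experiment 1).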