Shared Control of Supernumerary Robotic Limbs Using Mixed Reality and Mouth-and-Tongue Interfaces

Author Information

Jing Hongwei, Zhao Sikai, Zheng Tianjiao, Li Lele, Zhang Qinghua, Sun Kerui, Zhao Jie, Zhu Yanhe

Affiliation

State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150006, China.

Publication Information

Biosensors (Basel). 2025 Jan 23;15(2):70. doi: 10.3390/bios15020070.

Abstract

Supernumerary Robotic Limbs (SRLs) are designed to collaborate with the wearer, enhancing operational capabilities. When human limbs are occupied with primary tasks, controlling SRLs flexibly and naturally becomes a challenge. Existing methods such as electromyography (EMG) control and redundant limb control partially address SRL control issues. However, they still face limitations like restricted degrees of freedom and complex data requirements, which hinder their applicability in real-world scenarios. Additionally, fully autonomous control methods, while efficient, often lack the flexibility needed for complex tasks, as they do not allow for real-time user adjustments. In contrast, shared control combines machine autonomy with human input, enabling finer control and more intuitive task completion. Building on our previous work with the mouth-and-tongue interface, this paper integrates a mixed reality (MR) device to form an interactive system that enables shared control of the SRL. The system allows users to dynamically switch between voluntary and autonomous control, providing both flexibility and efficiency. A random forest model classifies 14 distinct tongue and mouth operations, mapping them to six-degree-of-freedom SRL control. In comparative experiments involving ten healthy subjects performing assembly tasks under three control modes (shared control, autonomous control, and voluntary control), shared control demonstrates a balance between machine autonomy and human input. While autonomous control offers higher task efficiency, shared control achieves greater task success rates and improves user experience by combining the advantages of both autonomous operation and voluntary control. This study validates the feasibility of shared control and highlights its advantages in providing flexible switching between autonomy and user intervention, offering new insights into SRL control.
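The classification-to-command pipeline described above — a random forest recognizing 14 distinct mouth-and-tongue operations and mapping them onto six-degree-of-freedom SRL control — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the feature dimensionality, the synthetic training data, and the `to_command` label-to-motion mapping (including which classes handle mode switching) are all assumptions, since the abstract does not specify them.

```python
# Illustrative sketch (assumptions noted above): a random-forest classifier
# over mouth/tongue sensor features, with predicted labels mapped to 6-DOF
# SRL commands plus a hypothetical mode-toggle and idle class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical data: 8-dimensional feature vectors, 100 samples per class,
# with class-dependent offsets so the classes are separable.
N_CLASSES, N_FEATURES, PER_CLASS = 14, 8, 100
y = np.repeat(np.arange(N_CLASSES), PER_CLASS)
X = rng.normal(size=(N_CLASSES * PER_CLASS, N_FEATURES)) + y[:, None]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Hypothetical mapping: classes 0-11 are +/- translation/rotation along each
# of the six axes, 12 toggles shared/autonomous mode, 13 means "no command".
AXES = ["x", "y", "z", "roll", "pitch", "yaw"]

def to_command(label: int) -> str:
    """Translate a predicted operation class into an SRL command token."""
    if label < 12:
        sign = "+" if label % 2 == 0 else "-"
        return f"{sign}{AXES[label // 2]}"
    return "toggle_mode" if label == 12 else "idle"

# Classify one incoming feature vector and emit its command.
predicted = int(clf.predict(X[:1])[0])
print(to_command(predicted))
```

In a shared-control loop, the `toggle_mode` command would be the point where the user hands authority to (or reclaims it from) the autonomous controller, while the axis commands drive voluntary motion directly.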


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc09/11853150/c6d890d8f973/biosensors-15-00070-g001.jpg
