
A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses.

Affiliations

Moonshine Inc., London W12 0LN, UK.

Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT), Imperial College London, London SW7 2AZ, UK.

Publication

Sensors (Basel). 2020 Oct 27;20(21):6097. doi: 10.3390/s20216097.

Abstract

Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals with impaired motor function. A major unresolved challenge, however, is the excessive cognitive load imposed by the human-machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees of freedom (DoFs) while following a specific timing pattern in the joint and human-robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, restricting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred on a low-cost multimodal sensor suite fusing: (a) mechanomyography (MMG) to estimate intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user grasp intent from dynamical features measured during natural motion. A total of 84 motion features were extracted from the sensor suite, and tests were conducted with 10 able-bodied participants and 1 amputee grasping common household objects with a robotic hand. Real-time grasp classification using visual and motion features reached accuracies of 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, a lid, and a box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task execution with a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems owing to its intuitive control design.
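The abstract describes the fusion pipeline (MMG + vision + IMU features feeding a grasp classifier) but not its implementation. The sketch below illustrates the general idea in Python: time-domain features from windowed MMG and IMU signals are concatenated with a one-hot object label from the vision module and passed to a generic classifier. The feature functions, window handling, and the RandomForest choice are illustrative assumptions, not the paper's reported 84-feature set or classifier.

```python
# Minimal illustrative sketch of multimodal grasp-intent classification
# (assumptions, not the authors' implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel time-domain features from a (samples x channels)
    window: RMS, mean absolute value, variance, waveform length."""
    rms = np.sqrt(np.mean(window**2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    var = np.var(window, axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([rms, mav, var, wl])

def fuse(mmg_window: np.ndarray, imu_window: np.ndarray,
         object_id: int, n_objects: int = 3) -> np.ndarray:
    """Concatenate MMG and IMU features with a one-hot object code
    from the vision module (bottle/lid/box in the study)."""
    onehot = np.eye(n_objects)[object_id]
    return np.concatenate([motion_features(mmg_window),
                           motion_features(imu_window),
                           onehot])

# Hypothetical training on pre-segmented reach-to-grasp trials:
# X = np.stack([fuse(m, i, o) for m, i, o in trials]); y = grasp_labels
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(X, y)
# grasp = clf.predict(fuse(mmg, imu, obj_id)[None, :])
```

In a real-time setting the same feature extraction would run on a sliding window over the reach, with the vision module updating the object label as recognition confidence improves.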


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/80c4/7662487/f729c79aa211/sensors-20-06097-g001.jpg
