IEEE International Conference on Biomedical Robotics and Biomechatronics

Context-Aware Learning from Demonstration: Using Camera Data to Support the Synergistic Control of a Multi-Joint Prosthetic Arm

Abstract

Muscle synergies in humans are context-dependent: they are based on the integration of vision, sensorimotor information, and proprioception. In particular, visual information plays a significant role in the execution of goal-directed grasping movements. Based on a desired motor task, a limb is directed to the correct spatial location, and the posture of the hand reflects the size, shape, and orientation of the grasped object. Such contextual synergies are largely absent from modern prosthetic robots. In this work, we therefore introduce a new algorithmic contribution to support the context-aware, synergistic control of multiple degrees of freedom of an upper-limb prosthesis. In our previous work, we showcased an actor-critic reinforcement learning method that allowed someone with an amputation to use their non-amputated arm to teach their prosthetic arm how to move through a range of coordinated motions and grasp patterns. Here we extend this approach to include visual information that could potentially help achieve context-dependent movement. To study the integration of visual context into coordinated grasping, we recorded computer vision information, myoelectric signals, inertial measurements, and positional information while a subject trained a robotic arm. We evaluated our approach via prediction learning: the algorithm was tasked with using visual context from a robot-mounted camera to accurately distinguish between three different muscle synergies that involved similar myoelectric signals. These preliminary results suggest that even simple visual data can help a learning system disentangle synergies that would be indistinguishable based solely on motor and myoelectric signals recorded from the human user and their robotic arm. We therefore suggest that integrating learned, vision-contingent predictions about movement synergies into a prosthetic control system could allow such systems to better adapt to the diverse situations of daily-life prosthesis use.
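
The prediction learning described above is commonly realized with temporal-difference methods. The sketch below is a minimal illustration of that idea, not the authors' implementation: a TD(lambda) predictor per candidate synergy, each fed a feature vector that concatenates myoelectric features with a coarse visual-context encoding, so that predictors can diverge even when the myoelectric signals alone are similar. All names, feature sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

# Minimal TD(lambda) prediction learner (a general value function).
# Hypothetical sketch; parameters and feature encoding are assumptions.
class TDLambdaPredictor:
    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(n_features)   # learned weights
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha = alpha              # step size
        self.gamma = gamma              # prediction horizon (discount)
        self.lam = lam                  # trace-decay rate

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, cumulant, x_next):
        # TD error: cumulant plus discounted next prediction, minus current.
        delta = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        self.z = self.gamma * self.lam * self.z + x
        self.w += self.alpha * delta * self.z
        return delta

def make_features(myo, camera_context):
    # Concatenate myoelectric features with a coarse visual-context code,
    # e.g. a one-hot over detected object categories (hypothetical encoding).
    return np.concatenate([myo, camera_context])

# One predictor per candidate synergy: each learns to anticipate the
# activation of its own synergy given the shared feature vector.
predictors = [TDLambdaPredictor(n_features=16) for _ in range(3)]

# Illustrative update step with stand-in data: the cumulant is the measured
# activation of the synergy that each predictor tracks.
rng = np.random.default_rng(0)
myo = rng.random(8)                     # stand-in myoelectric features
context = np.eye(8)[2]                  # stand-in one-hot visual context
x = make_features(myo, context)
x_next = make_features(rng.random(8), context)
for p, activation in zip(predictors, (0.9, 0.1, 0.0)):
    p.update(x, activation, x_next)
```

Under this scheme, the visual-context portion of the feature vector is what lets two synergies with near-identical myoelectric signatures acquire different predictions, which is the effect the evaluation above measures.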