Frontiers in Bioengineering and Biotechnology

On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping

Abstract

Visual attention is often predictive of future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp movement is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first-person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
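The per-frame distance measure described in the abstract can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' published pipeline: it presumes the segmentation network yields one binary mask per object per frame, and the function name, array shapes, and example coordinates are hypothetical.

```python
import numpy as np

def gaze_to_mask_distance(gaze_xy, mask):
    """Minimum pixel distance from a gaze point to a binary
    segmentation mask (e.g. the target object or the limb).

    gaze_xy: (x, y) gaze coordinates in the video frame.
    mask:    2-D boolean array, True where the object was segmented.
    """
    ys, xs = np.nonzero(mask)        # pixel coordinates covered by the mask
    if xs.size == 0:                 # object not detected in this frame
        return np.inf
    dx = xs - gaze_xy[0]
    dy = ys - gaze_xy[1]
    return float(np.sqrt(dx * dx + dy * dy).min())

# Hypothetical example: gaze resting 40 px to the left of a 20x20 px object.
frame_mask = np.zeros((480, 640), dtype=bool)
frame_mask[200:220, 300:320] = True
print(gaze_to_mask_distance((260, 210), frame_mask))  # -> 40.0
```

Evaluating such a distance in every frame, against both the target-object mask and the limb mask, would yield the time series from which eye-limb latencies such as the reported >500 ms gaze lead can be read off.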